Image Sensors at VLSI Symposia 2019

Image Sensors World

The VLSI Symposia, to be held this June in Kyoto, Japan, has published its agenda, which includes many image sensor papers:

A 640x640 Fully Dynamic CMOS Image Sensor for Always-On Object Recognition,
I. Park*, W. Jo*, C. Park*, B. Park*, J. Cheon** and Y. Chae*, *Yonsei Univ. and **Kumoh National Institute of Technology, Korea
This paper presents a 640x640 fully dynamic CMOS image sensor for always-on object recognition. A pixel output is sampled with a dynamic source follower (SF) onto a parasitic column capacitor, which is read out by a dynamic single-slope (SS) ADC based on a dynamic bias comparator and an energy-efficient two-step counter. The sensor, implemented in a 0.11μm CMOS, achieves 0.3% peak non-linearity, 6.8e-rms RN and 67dB DR. Its power consumption is only 2.1mW at 44fps and is further reduced to 260μW at 15fps in a sub-sampled 320x320 mode. This work achieves a state-of-the-art energy-efficiency FoM of 0.7e-·nJ.
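As a quick sanity check, the reported FoM is roughly consistent with the other quoted numbers, assuming the common definition of read noise multiplied by energy per pixel per frame (the paper may normalize differently):

```python
# Sanity check of the energy-efficiency FoM, assuming the definition
# FoM = read noise (e- rms) x energy per pixel per frame (nJ).
pixels = 640 * 640
power_w = 2.1e-3       # 2.1 mW at full resolution
fps = 44
read_noise_e = 6.8     # e- rms

energy_per_pixel_nj = power_w / (fps * pixels) * 1e9
fom = read_noise_e * energy_per_pixel_nj
print(f"{energy_per_pixel_nj:.3f} nJ/pixel, FoM ~ {fom:.2f} e-*nJ")
# -> 0.117 nJ/pixel, FoM ~ 0.79 e-*nJ
```

This lands near the reported 0.7e-·nJ, so the headline numbers hang together.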

A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression,
C. Li*, L. Longinotti*, F. Corradi** and T. Delbruck***, *iniVation AG, **iniLabs GmbH and ***Univ. of Zurich, Switzerland
This paper reports a 132 by 104 dynamic vision sensor (DVS) with a 10μm pixel in a 65nm logic process and a synchronous address-event representation (SAER) readout capable of 180Meps throughput. The SAER architecture allows adjustable event frame rate control and supports pre-readout pixel-parallel noise and spatial redundancy suppression. The chip consumes 250μW at 100keps running at 1k event frames per second (efps), 3-5 times more power efficient than the prior art using normalized power metrics. The chip is aimed at low-power IoT and real-time high-speed smart vision applications.
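For reference, the quoted power and event rate imply a per-event energy on the order of a few nanojoules, assuming the 250μW figure covers the whole chip while processing 100keps:

```python
# Back-of-envelope energy per event, assuming the quoted 250 uW covers
# the whole chip while it processes 100 keps.
power_w = 250e-6
event_rate_eps = 100e3
energy_per_event_nj = power_w / event_rate_eps * 1e9
print(f"~{energy_per_event_nj:.1f} nJ/event")  # ~2.5 nJ/event
```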

Automotive LIDAR Technology,
M. E. Warren, TriLumina Corporation, USA
LIDAR is an optical analog of radar providing high spatial-resolution range information. It is an essential part of the sensor suite for ADAS (Advanced Driver Assistance Systems), and ultimately, autonomous vehicles. Many competing LIDAR designs are being developed by established companies and startup ventures. Although there are no standards, performance and cost expectations for automotive LIDAR are consistent across the automotive industry. Why are there so many different competing designs? We can look at the system requirements and organize the design options around a few key technologies.

A 64x64 APD-Based ToF Image Sensor with Background Light Suppression Up to 200 klx Using In-Pixel Auto-Zeroing and Chopping,
B. Park, I. Park, W. Choi and Y. C. Chae, Yonsei Univ., Korea
This paper presents a time-of-flight (ToF) image sensor for outdoor applications. The sensor employs a gain-modulated avalanche photodiode (APD) that achieves a high modulation frequency. The background light suppression capability is greatly improved, up to 200klx, by a combination of in-pixel auto-zeroing and chopping. A 64x64 APD-based ToF sensor is fabricated in a 0.11μm CMOS. It achieves depth ranges from 0.5 to 2 m with 25MHz modulation and from 2 to 20 m with 1.56MHz modulation. For both ranges, it achieves a non-linearity below 0.8% and a precision below 3.4% at a 3D frame rate of 96fps.
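The two modulation frequencies fit the standard unambiguous-range limit of an indirect ToF sensor, d_max = c / (2·f_mod), which the quoted operating ranges sit comfortably inside:

```python
# Unambiguous range of an indirect ToF sensor: d_max = c / (2 * f_mod).
# The quoted operating ranges (0.5-2 m and 2-20 m) fit inside these limits.
C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_m(f_mod_hz: float) -> float:
    return C / (2.0 * f_mod_hz)

print(unambiguous_range_m(25e6))    # ~6.0 m
print(unambiguous_range_m(1.56e6))  # ~96.1 m
```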

A 640x480 Indirect Time-of-Flight CMOS Image Sensor with 4-tap 7-μm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self-Compensation Scheme,
M.-S. Keel, Y.-G. Jin, Y. Kim, D. Kim, Y. Kim, M. Bae, B. Chung, S. Son, H. Kim, T. An, S.-H. Choi, T. Jung, C.-R. Moon, H. Ryu, Y. Kwon, S. Seo, S.-Y. Kim, K. Bae, S.-C. Shin and M. Ki, Samsung Electronics Co., Ltd., Korea
A 640x480 indirect Time-of-Flight (ToF) CMOS image sensor has been designed with a 4-tap 7-μm global-shutter pixel in a 65-nm back-side illumination (BSI) process. With the novel 4-tap pixel structure, we achieved a motion-artifact-free depth map. Column fixed-pattern phase noise (FPPN) is reduced by introducing alternative control of the clock delay propagation path in the photo-gate driver. As a result, motion artifacts and column FPPN are not noticeable in the depth map. The proposed ToF sensor shows depth noise of less than 0.62% with a 940-nm illuminator over a working distance of up to 400 cm, and consumes 197 mW for VGA, which is 0.64 μW/pixel.
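A quick arithmetic check of the per-pixel power (assuming the 197 mW figure covers the full VGA array) and of the worst-case depth noise at the far end of the working distance:

```python
# 197 mW spread over the full VGA array, assuming the power figure covers
# all pixels, plus the worst-case depth noise at the 400 cm limit.
power_w = 197e-3
pixels = 640 * 480
per_pixel_uw = power_w / pixels * 1e6
depth_noise_cm = 0.0062 * 400   # 0.62% of 400 cm
print(f"{per_pixel_uw:.2f} uW/pixel, ~{depth_noise_cm:.1f} cm depth noise")
```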

A 128x120 5-Wire 1.96mm2 40nm/90nm 3D Stacked SPAD Time Resolved Image Sensor SoC for Microendoscopy,
T. Al Abbas*, O. Almer*, S. W. Hutchings*, A. T. Erdogan*, I. Gyongy*, N. A. W. Dutton** and R. K. Henderson*, *Univ. of Edinburgh and **STMicroelectronics, UK
An ultra-compact 1.4mmx1.4mm, 128x120 SPAD image sensor with a 5-wire interface is designed for time-resolved fluorescence microendoscopy. Dynamic range is extended by noiseless frame summation in SRAM, attaining 126dB time-resolved imaging at 15fps with 390ps gating resolution. The sensor SoC is implemented in STMicroelectronics' 40nm/90nm 3D-stacked BSI CMOS process with 8μm pixels and 45% fill factor.

Fully Integrated Coherent LiDAR in 3D-Integrated Silicon Photonics/65nm CMOS,
P. Bhargava*, T. Kim*, C. V. Poulton**, J. Notaros**, A. Yaacobi**, E. Timurdogan**, C. Baiocco***, N. Fahrenkopf***, S. Kruger***, T. Ngai***, Y. Timalsina***, M. R. Watts** and V. Stojanovic*, *Univ. of California, Berkeley, **Massachusetts Institute of Technology and ***College of Nanoscale Science and Engineering, USA
We present the first integrated coherent LiDAR system with experimental ranging demonstrations operating within the eyesafe 1550nm band. Leveraging a unique wafer-scale 3D integration platform which includes customizable silicon photonics and nanoscale CMOS, our system seamlessly combines a high-sensitivity optical coherent detection front-end, a large-scale optical phased array for beamforming, and CMOS electronics in a single chip. Our prototype, fabricated entirely in a 300mm wafer facility, shows that low-cost manufacturing of high-performing solid-state LiDAR is indeed possible, which in turn may enable extensive adoption of LiDARs in consumer products, such as self-driving cars, drones, and robots.

Automotive Image Sensor for Autonomous Vehicle and Adaptive Driver Assistance System,
H. Matsumoto, Sony Corp.
Human vision is the most essential sense for driving a vehicle. In place of human eyes, the CMOS image sensor is the best sensing device for recognizing objects and the environment around the vehicle. Image sensors are also used in other cases, such as driver and passenger monitoring inside the cabin. These use cases call for special functionality and specifications. This session will discuss the requirements for automotive image sensors, such as high dynamic range, flicker mitigation and low noise. The last part will discuss the key technologies for utilizing image sensors, such as image recognition and computer vision.

426-GHz Imaging Pixel Integrating a Transmitter and a Coherent Receiver with an Area of 380x470 μm2 in 65-nm CMOS,
Y. Zhu*, P. R. Byreddy*, K. K. O* and W. Choi*,**, *The Univ. of Texas at Dallas and **Oklahoma State Univ., USA
A 426-GHz imaging pixel integrating a transmitter and a coherent receiver, using three oscillators in a 3-push configuration within an area of 380x470 μm2, is demonstrated. The TX power is -11.3 dBm (EIRP) and the sensitivity is -89.6 dBm for a 1-kHz noise bandwidth. This sensitivity is the lowest among imaging pixels operating above 0.3 THz. The pixel consumes 52 mW from a 1.3 V VDD. The pixel can be used with a 47 dB gain reflector to form a camera-like reflection-mode image of an object 5 m away.
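To get a feel for why the reflector gain matters at this frequency, here is the one-way free-space path loss at 426 GHz over the quoted 5 m distance; this is an illustration only, since the actual reflection-mode link budget also depends on target reflectivity and the round-trip path:

```python
import math

# One-way free-space path loss: FSPL = 20*log10(4*pi*d / lambda).
# Illustrative only; the paper's reflection-mode budget also involves
# target reflectivity, the round trip, and the 47 dB reflector gain.
C = 299_792_458.0
f_hz = 426e9
wavelength_m = C / f_hz

def fspl_db(d_m: float) -> float:
    return 20 * math.log10(4 * math.pi * d_m / wavelength_m)

print(round(fspl_db(5.0), 1))  # ~99 dB one-way loss at 5 m
```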

Monolithic Three-Dimensional Imaging System: Carbon Nanotube Computing Circuitry Integrated Directly Over Silicon Imager,
T. Srimani, G. Hills, C. Lau and M. Shulaker, Massachusetts Institute of Technology, USA
Here we show a hardware prototype of a monolithic three-dimensional (3D) imaging system that integrates computing layers directly in the back-end-of-line (BEOL) of a conventional silicon imager. Such systems can transform imager output from raw pixel data to highly processed information. To realize our imager, we fabricate 3 vertical circuit layers directly on top of each other: a bottom layer of silicon pixels followed by two layers of CMOS carbon nanotube FETs (CNFETs) (comprising 2,784 CNFETs) that perform in-situ edge detection in real-time, before storing data in memory. This approach promises to enable image classification systems with improved processing latencies.

Record-High Performance Trantenna Based on Asymmetric Nano-Ring FET for Polarization-Independent Large-Scale/Real-Time THz Imaging,
E.-S. Jang*, M. W. Ryu*, R. Patel*, S. H. Ahn*, H. J. Jeon*, K. Han** and K. R. Kim*, *Ulsan National Institute of Science and Technology and **Dongguk Univ., Korea
We demonstrate a record-high performance monolithic trantenna (transistor-antenna), fabricated in a 65-nm CMOS foundry process, in the field of plasmonic terahertz (THz) detectors. By applying extreme structural asymmetry between source and drain on a ring FET, with the source diameter (dS) scaled from 30 to 0.38 μm, we obtained a 180-fold enhancement of the photoresponse (∆u) in on-chip THz measurements. In free-space THz imaging experiments, the conductive drain region of the ring FET itself showed frequency sensitivity, with a resonance at 0.12 THz in the 0.09-0.2 THz range, and polarization-independent imaging results as an isotropic circular antenna. The highly scalable, feed-line-free monolithic trantenna enables a high-performance THz detector with a responsivity of 8.8 kV/W and an NEP of 3.36 pW/√Hz at the target frequency.
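The two quoted detector figures are linked: assuming the usual relation for a direct detector, NEP = output noise voltage density divided by responsivity, the implied output noise density is:

```python
# Implied output noise density, assuming NEP = v_noise / R_v
# for a direct (plasmonic) detector.
R_v = 8.8e3        # responsivity, V/W
nep = 3.36e-12     # noise-equivalent power, W/sqrt(Hz)
v_noise = nep * R_v
print(f"~{v_noise * 1e9:.1f} nV/sqrt(Hz)")  # ~29.6 nV/sqrt(Hz)
```

That is in the range of FET thermal noise, which is consistent with a passive (unbiased-channel) plasmonic detector.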

Custom Silicon and Sensors Developed for a 2nd Generation Augmented Reality User Interface,
P. O'Connor, Microsoft, USA.
