IEDM 2019: Omnivision 2.2um GS BSI Pixel

Image Sensors World        Go to the original article...

Omnivision's IEDM 2019 paper "A 2.2µm stacked back side illuminated voltage domain global shutter CMOS image sensor" by Geunsook Park, Alan Chih-Wei Hsuing, Keiji Mabuchi, Jingming Yao, Zhiqiang Lin, Vincent C. Venezia, Tongtong Yu, Yu-Shen Yang, Tiejun Dai, and Lindsay A. Grant presents the world's smallest GS pixel that appears to be optimized for 940nm structured light 3D vision applications:

"This paper presents a 2.2µm pixel pitch back side illuminated (BSI) Voltage Domain Global Shutter (VDGS) image sensor with Stacked Pixel Level Connection (SPLC) and full backside Deep Trench Isolation (DTI). With these cutting edge technologies, Full Well Capacity (FWC) more than 12,000 electrons and parasitic light sensitivity (PLS) larger than 100 dB are reached. A 38% Quantum Efficiency (QE) and 60% of Modulation Transfer Function (MTF) at 940nm, half Nyquist frequency (Ny/2) is demonstrated."

IEDM 2019: Samsung 64MP Sensor with 0.8um Dual CG Pixel

Samsung IEDM 2019 paper "A 0.8 µm Smart Dual Conversion Gain Pixel for 64 Megapixels CMOS Image Sensor with 12k e- Full-Well Capacitance and Low Dark Noise" by Donghyuk Park, Seung-Wook Lee, Jinhwa Han, Dongyoung Jang, Heesang Kwon, Seungwon Cha, Mihye Kim, Haewon Lee, Sungho Suh, Woong Joo, Yunki Lee, Seungjoo Nah, Heegeun Jeong, Bumsuk Kim, Sangil Jung, Jesuk Lee, Yitae Kim, Chang-Rok Moon, and Yongin Park presents the company's latest generation sensor:

"A 0.8 μm-pitch 64 megapixels ultra high resolution CMOS image sensor has been demonstrated for mobile applications for the first time. Full-well capacity (FWC) of 6k e- was achieved in 0.8 μm pixels as the best in the world, and the advanced color filter (CF) isolation technology was introduced to overcome sensitivity degradation. Dual conversion gain (CG) technology was also first applied to mobile applications to improve the FWC performance of Tetracell up to 12k e-. In addition, highly refined deep trench isolation (DTI) and photodiode design significantly improved dark noise characteristics."

LiDAR News: Livox, Velodyne

Livox announces its long-range LiDARs: the 260m Horizon and the 500m Tele-15.


Velodyne publishes a white paper "LiDAR-based Security Solutions" saying that one of the key LiDAR advantages over camera is privacy:

"With increased concerns that facial-recognition technology will be used for general surveillance, a system that utilizes lidar as the initial source of object detection data enables a security solution that preserves trust and anonymity. This is especially important in applications involving the general public, such as retail monitoring and queue management."


MicrocontrollerTips publishes a 4-part review "LIDAR and Time of Flight" by Bill Schweber.

IMASENIC Presentation

Renato Turchetta, CEO of IMASENIC (Barcelona, Spain), presents "CMOS image sensors for scientific, bio-medical and space applications: more than pretty pictures!" at a CERN seminar.

Assorted News: Goodix, Sony

Digitimes: Goodix is set to become the largest customer of TSMC's 8-inch fabs, thanks to its burgeoning business of under-display optical fingerprint sensors.

Sony opens Osaka-area design center for CMOS image sensors on April 1, 2020. "With the opening of the Osaka office, Sony has strengthened its CMOS image sensor design and development capabilities for mobile devices and the IoT market, which are expected to expand in the future, by acquiring talented analog and logic design engineers in Kansai. We aim to expand our product lineup."


Bloomberg reports that Sony is having a hard time manufacturing enough image sensors to keep up with high demand.

"For the second straight year, the Japanese company will run its chip factories constantly through the holidays to try and keep up with demand for sensors used in mobile phone cameras, according to Terushi Shimizu, the head of Sony’s semiconductor unit.

“Judging by the way things are going, even after all that investment in expanding capacity, it might still not be enough,” Shimizu said in an interview at the Tokyo headquarters. “We are having to apologize to customers because we just can’t make enough.”

Sony in May said it controls 51% of the image sensor market as measured by revenue and is targeting a 60% share by fiscal 2025. Shimizu estimates Sony’s portion of the pie grew by a few percentage points this year alone.

Sony is now looking to a new generation of sensors that can see the world in three dimensions. “This was the year zero for time of flight,” Shimizu said. “Once you start seeing interesting applications of this technology, it will motivate people to buy new phones.”


Sony’s ToF camera module. Photographer: Kiyoshi Ota/Bloomberg

4-Tap ToF Pixel for LiDAR Applications

MDPI paper "A Time-of-Flight Range Sensor Using Four-Tap Lock-In Pixels with High near Infrared Sensitivity for LiDAR Applications" by Sanggwon Lee, Keita Yasutomi, Masato Morita, Hodaka Kawanishi, and Shoji Kawahito from Shizuoka University, Japan promises enhanced range and ambient light tolerance:

"In this paper, a back-illuminated (BSI) time-of-flight (TOF) sensor using 0.2 µm silicon-on-insulator (SOI) complementary metal oxide semiconductor (CMOS) technology is developed for long-range laser imaging detection and ranging (LiDAR) application. A 200 µm-thick bulk silicon in the SOI substrate is fully depleted by applying high negative voltage at the backside for higher quantum efficiency (QE) in a near-infrared (NIR) region. The proposed SOI-based four-tap charge modulator achieves a high-speed charge modulation and high modulation contrast of 71% in a NIR region. In addition, in-pixel drain function is used for short-pulse TOF measurements. A distance measurement up to 27 m is carried out with +1.8~−3.0% linearity error and range resolution of 4.5 cm in outdoor conditions. The measured QE of 55% is attained at 940 nm which is suitable for outdoor use due to the reduced spectral components of solar radiation."
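
As a rough illustration of the pulse-ToF principle behind such sensors, here is a generic two-window estimate with ambient subtraction; this is a sketch of the general technique, not the paper's exact four-tap scheme, and the function name is mine:

```python
C = 299_792_458.0  # speed of light, m/s

def pulse_tof_distance(q1, q2, q_ambient, pulse_width_s):
    """Estimate distance from a short-pulse ToF measurement.

    q1, q2: charges collected in two consecutive time windows, each of
    width pulse_width_s; q_ambient: charge from a laser-off window,
    subtracted from both taps. The fraction of the echo falling into
    the second window encodes the round-trip delay.
    """
    a = q1 - q_ambient
    b = q2 - q_ambient
    delay = pulse_width_s * b / (a + b)  # round-trip time of flight
    return 0.5 * C * delay
```

With a 10ns pulse, an echo split evenly between the two windows corresponds to roughly 0.75m of range.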

History of Innovations

International Journal of Engineering Research & Technology (IJERT) publishes a paper "CMOS Image Sensors: Recent Innovations in Imaging Technology" by Gagan Khanduri, Dev Bhoomi Institute of Technology, Dehradun, India. Most of the "recent innovations" in the paper are a deep history by now:

GPixel GSENSE2020BSI sCMOS Spec

GPixel publishes a fairly detailed datasheet explaining 4MP, 6.5um pixel GSENSE2020BSI sCMOS sensor operation and performance (link to datasheet file has been removed on GPixel request):

"Same as other sCMOS sensors in the GSENSE series, GSENSE2020BSI outputs the image signal from both the top readout chain and the bottom readout chain simultaneously. These two images have exactly the same exposure time, but different analog gains, where the low gain (LG) image is optimized for high full well capacity, and the high gain (HG) image is optimized for low readout noise. Users may combine both high gain (HG) and low gain (LG) images from the sensor to generate one HDR image off-chip. Figure 1 shows the sensor operation for high dynamic range (HDR) image combination."
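
The off-chip HG/LG merge described in the quote can be sketched in a few lines; the gain ratio and saturation threshold below are hypothetical parameters, not values from the GSENSE2020BSI datasheet:

```python
import numpy as np

def combine_hdr(hg, lg, gain_ratio, hg_sat_level):
    """Merge a high-gain (HG) and a low-gain (LG) frame into one linear HDR frame.

    HG pixels are used where they are below saturation (low read noise);
    elsewhere the LG pixel, scaled by the HG/LG gain ratio, takes over
    (high full well capacity).
    """
    hg = np.asarray(hg, dtype=np.float64)
    lg = np.asarray(lg, dtype=np.float64)
    return np.where(hg < hg_sat_level, hg, lg * gain_ratio)
```

The gain ratio is normally calibrated per sensor so that the two segments of the response line up linearly at the switch-over point.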

IEDM 2019: Sony on Nanophotonics in BSI Pixel Design

Sony presentation at IEDM 2019 "Nanophotonics contributions to state-of-the-art CMOS Image Sensors" by Sozo Yokogawa discusses the recent features that Sony has added to its BSI pixel lineup:

"Recent progress of Back-illuminated CMOS image sensor (BI-CIS), focusing on their pixel improvements with design of optical properties using subwavelength sizescale structures and photonics technologies, are reviewed. These technologies contribute not only improving BI-CIS basic performances but also adding new functions for versatile sensing applications."

Ams Aims to Expand Market for its Miniature Endoscopic Camera

BusinessWire: ams releases the NanoVision and NanoBerry evaluation kits, which provide a ready-made platform for the development of solutions based on the NanEyeC miniature image sensor. The NanEyeC camera is a full-featured image sensor supplied as a tiny 1mm x 1mm surface-mount module; despite the small form factor, it delivers 100kpixel resolution at up to 58fps.

The new NanoVision demo kit for the NanEyeC is based on an Arduino development platform, while the NanoBerry evaluation kit uses a NanEyeC image sensor add-on board for the Raspberry Pi. ams is targeting a multitude of new markets, spanning from low frame-rate applications such as presence detection to more demanding operations such as eye tracking and stereo vision systems.

LiDAR News: Pioneer, ON Semi, Yandex, More…

Pioneer announces the mass-production readiness of its MEMS LiDAR: Pioneer has "developed a mass-production model of its 3D LiDAR sensor, with a much more compact size, an extended measurement distance and improved performance. The sensor is expected to be equipped in advanced autonomous driving vehicles (supporting level-three and above) and will be released in the first half of FY2020, with full-scale production starting from autumn 2020.

The 3D LiDAR sensor to be mass-produced in autumn 2020 ("2020 model", hereafter) has adopted a Micro-Electro-Mechanical Systems (MEMS) mirror-based scanning method. In addition to offering high resolution, it has been downsized to less than 20% of the previous model ("2018 model", hereafter) while achieving 1.5 to 2 times the measurement distance. There are three types of sensors with different angles of view and measurement distances, plus an angle type, making it possible to accommodate customer needs by combining the different types."


BusinessWire: ON Semi to demo its SPAD-based silicon photomultiplier (SiPM) LiDAR sensor inside Robosense RS-LiDAR-M1 scanning LiDAR:

“SiPMs are quickly displacing APDs in solid-state LiDAR systems at the near-infrared (NIR), 905nm wavelength,” said Wade Appelman, VP of the SensL Division, Intelligent Sensing Group at ON Semiconductor. “We are excited to be highlighting our LiDAR partners who have designed in our latest AEC-Q101 qualified R-Series detectors because of their market leading 15% photon detection efficiency (PDE), a critical performance parameter to achieve long distance ranging.

Low density versions of the technology have been used in consumer applications, but these devices do not work beyond 2 meters and are not reliable enough in bright lighting conditions. The new devices are said to be far more flexible and can be used with a variety of scene illumination architectures for ToF including scanning and flash.


VentureBeat: Russia's Yandex announces its own LiDAR plans in a Medium post:

"Two lidar sensors will hit the streets in the coming months — one with a 120-degree view that’s solid-state (meaning the entire system is built on a silicon chip) and a second that provides a 360-degree view of its surroundings.

“Third-party lidars analyze and filter data as soon as it’s collected. Using our lidars, we receive more information about the vehicle’s surroundings since we can access the sensors’ raw data,” says Dmitry Polishchuk, Head of Self-Driving Cars at Yandex. “With our lidars, we can analyze the raw data and synchronize it with information from other sensors, so that the car can better identify objects. In addition, our current prototypes are already half the cost of existing devices. With the transition to mass production, the cost of our lidars will be even lower, and we will ultimately save up to 75% on the cost of sensors.


IDTechEx forecasts that LiDAR will become quite a sizable market, but only after 2028:

"By 2030, the autonomous driving system (including lidars, radars, cameras, computers, software and maps) market will reach $57 billion; the market value will more than triple by 2040, reaching $173 billion."


Yu Huang publishes a large 223-page presentation on LiDAR technology. Among other things, the presentation discusses the LiDAR calibration complexities:

Differential VIS-UV Sensor

MDPI paper "An Optical Filter-Less CMOS Image Sensor with Differential Spectral Response Pixels for Simultaneous UV-Selective and Visible Imaging" by Yhang Ricardo Sipauba Carvalho da Silva, Rihito Kuroda, and Shigetoshi Sugawa from Tohoku University, Japan, belongs to the Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"This paper presents a complementary metal-oxide-semiconductor (CMOS) image sensor (CIS) capable of capturing UV-selective and visible light images simultaneously by a single exposure and without employing optical filters, suitable for applications that require simultaneous UV and visible light imaging, or UV imaging in variable light environment. The developed CIS is composed by high and low UV sensitivity pixel types, arranged alternately in a checker pattern. Both pixel types were designed to have matching sensitivities for non-UV light. The UV-selective image is captured by extracting the differential spectral response between adjacent pixels, while the visible light image is captured simultaneously by the low UV sensitivity pixels. Also, to achieve high conversion gain and wide dynamic range simultaneously, the lateral overflow integration capacitor (LOFIC) technology was introduced in both pixel types. The developed CIS has a pixel pitch of 5.6 µm and exhibits 172 µV/e− conversion gain, 131 ke− full well capacity (FWC), and 92.3 dB dynamic range. The spectral sensitivity ranges of the high and low UV sensitivity pixels are of 200–750 nm and 390–750 nm, respectively. The resulting sensitivity range after the differential spectral response extraction is of 200–480 nm. This paper presents details regarding the CIS pixels structures, doping profiles, device simulations, and the measurement results for photoelectric response and spectral sensitivity for both pixel types. Also, sample images of UV-selective and visible spectral imaging using the developed CIS are presented."
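
The differential readout is easy to picture in code. Below is a minimal sketch assuming a hypothetical layout where high-UV-sensitivity pixels sit at even (row+col) positions and each pixel borrows its horizontal neighbor of the other type; the actual sensor's interpolation is surely more careful:

```python
import numpy as np

def extract_uv_and_visible(raw):
    """Split a checker-pattern frame into UV-selective and visible images.

    Assumes high-UV pixels where (row + col) is even, low-UV pixels where
    it is odd (hypothetical layout). Since both pixel types have matched
    non-UV sensitivity, subtracting a low-UV pixel from its high-UV
    neighbor leaves only the UV component.
    """
    raw = np.asarray(raw, dtype=np.float64)
    rows, cols = np.indices(raw.shape)
    high_mask = (rows + cols) % 2 == 0
    left = np.roll(raw, 1, axis=1)        # horizontal neighbor (wraps at edge)
    high = np.where(high_mask, raw, left)  # high-UV sample at every site
    low = np.where(high_mask, left, raw)   # low-UV sample at every site
    uv = high - low        # UV-selective image
    visible = low          # low-UV pixels carry the visible image
    return uv, visible
```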

Challenges and Solutions to Next-Generation Single-Photon Imagers

EPFL publishes PhD Thesis "Challenges and Solutions to Next-Generation Single-Photon Imagers" by Samuel Burri.

"In this thesis, we look at the challenges of massively parallel photon-counting cameras from all performance angles. The thesis deals with a number of performance issues that emerge when the number of pixels exceeds about a quarter of a megapixel, proposing characterization techniques and solutions to mitigate performance degradation and non-uniformity. Two cameras were created to validate the proposed techniques. The first camera, SwissSPAD, comprises an array of 512 x 128 SPAD pixels, each with a one-bit memory and a gating mechanism to achieve 5ns high precision time windows with high uniformity across the array. With a massively parallel readout of over 10 Gigabit/s and positioning of the integration time window accurate to the pico-second range, fluorescence lifetime imaging and fluorescence correlation spectroscopy imaging achieve a speedup of several orders of magnitude while ensuring high precision in the measurements. Other possible applications include wide-field time-of-flight imaging and the generation of quantum random numbers at the highest bit rates. Lately, super-resolution microscopy techniques have also used SwissSPAD. The second camera, LinoSPAD, takes the concepts of SwissSPAD one step further by moving even more 'intelligence' to the FPGA and reducing the sensor complexity to the bare minimum. This allows focusing the optimization of the sensor on the most important metrics of photon efficiency and fill factor. As such, the sensor consists of one line of SPADs, each with a direct connection to the FPGA, where complex photon processing algorithms can be implemented. As a demonstration of the capabilities of current low-cost FPGAs, we implemented an array of time-to-digital converters that can handle up to 8.5 billion photons per second, measuring each one of them and accounting them in high precision histograms. Using simple laser diodes and a circuit to generate light pulses in the picosecond range, we demonstrate a ubiquitous 3D time-of-flight sensor.
The thesis intends to be a first step towards achieving the world's first megapixel SPAD camera, which, we believe, is within grasp thanks to the architectural and circuit techniques proposed in this thesis. In addition, we believe that the applications proposed here offer a wide variety of uses for the sensors presented in this thesis and for future ones to come."
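
To make the TDC-histogram idea concrete, here is a generic range estimator over a photon-arrival histogram; this is a simple floor-subtracted centroid of my own devising, not the thesis's actual processing:

```python
import numpy as np

def tof_from_histogram(counts, bin_width_s):
    """Estimate distance from a TDC photon-arrival histogram.

    counts: photon counts per TDC bin; the laser return appears as a peak
    over a roughly flat ambient floor. We subtract the median floor and
    take the centroid of a small window around the strongest bin.
    """
    c = 299_792_458.0
    counts = np.asarray(counts, dtype=np.float64)
    floor = np.median(counts)
    sig = np.clip(counts - floor, 0.0, None)
    peak = int(np.argmax(sig))
    lo, hi = max(0, peak - 2), min(len(sig), peak + 3)
    t = (np.arange(lo, hi) + 0.5) * bin_width_s   # bin-center arrival times
    tof = float(np.sum(t * sig[lo:hi]) / np.sum(sig[lo:hi]))
    return 0.5 * c * tof   # round-trip time to one-way distance
```

With ~100ps TDC bins, each bin corresponds to about 1.5cm of range, which is why centroiding (rather than just taking the peak bin) helps resolution.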

TechInsights Reviews 2019 Trends and Achievements

TechInsights Senior Technology Analyst Ray Fontaine publishes an interesting summary of 2019 achievements "Imaging + Sensing End-of-Year Highlights." The most important points are:

  • Smartphone imaging: Push to higher resolutions, sub-micron pixels, larger sensor areas
  • More experiments with PDAF, new CFA patterns
  • ToF pixel pitch reduced down to 5um
  • Event-driven sensors show up in mass market products (Samsung S5K231YX DVS inside home monitoring system)

Looking into 2020, "We are looking forward to more back-illuminated global shutter products to analyze, continued high resolution and sub-micron pixel development, enhanced near-infrared (NIR) sensors, and the push towards non-Si detectors."

Image Sensors at EI 2020

The Electronic Imaging Conference, to be held Jan. 27-30 in Burlingame, CA, unveils its agenda with quite a few image sensor papers:

3D-IC smart image sensors
Laurent Millet, Stephane Chevobbe
CEA/LETI, CEA/LIST, France
This presentation will introduce 3D-IC technologies applied to imaging, and give some examples of 3D-IC or stacked sensors and their 3D partitioning topologies. A focus will be given on our stacked vision chip that embeds flexible pre-processing at high-speed and low latency, like fast event detection, edge detection or convolution computation. The perspectives will show how this technology can pave the way for new sensor architectures and applications.

Indirect time-of-flight CMOS image sensor using 4-tap charge-modulation pixels and range-shifting multi-zone technique
Kamel Mars, Keita Kondo, Michihiro Inoue, Shohei Daikoku, Masashi Hakamata, Keita Yasutomi, Keiichiro Kagawa, Sung-Wook Jun, Yoshiyuki Mineyama, Satoshi Aoyama, Shoji Kawahito
Shizuoka University, Tokyo Institute of Technology, Brookman Technology, Japan

This paper presents an indirect TOF image sensor using short-pulse-modulation-based 4-tap one-drain pixels and fast sub-frame readout for range-shifted multiple-pulse capturing time windows. The measurement uses a short pulse modulation technique combined with multiple short sub-frames, where the number of accumulations for each sub-frame is carefully selected for the near and far zones in order to avoid sensor saturation due to strong laser power or strong ambient light. The current setup uses two sub-frames, where the gate opening sequence is set as G1G2G3G4 and the gate pulse width is set to 10ns. The proposed timing sequence allows 3 time windows in each sub-frame. By combining the last gate of the first sub-frame and the first gate of the second sub-frame, an extra time window is also obtained, making seven measurable time windows in total. The process of combining the two sub-frames is achieved offline by an automated calculation algorithm, allowing automated and smooth measurement of two zones simultaneously. A TOF image and a range of 10.5m have been successfully measured using 2 sub-frames and 7 time windows, where the light pulse width is also set to 10ns, allowing a 1.5m measurement range for each window. A depth resolution of 1 percent was achieved at 10m range.
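
The window-to-range bookkeeping in such a scheme is simple arithmetic: with the stated 10ns pulse width, each window covers about 1.5m and seven windows reach about 10.5m. A sketch (function name is mine):

```python
def window_range(window_index, pulse_width_s=10e-9):
    """Return the (near, far) distance interval of the k-th time window.

    Each window of width pulse_width_s spans c * Tp / 2 in range, about
    1.5m for a 10ns pulse; seven consecutive windows thus reach ~10.5m.
    """
    c = 299_792_458.0
    span = 0.5 * c * pulse_width_s  # ~1.5 m per 10 ns window
    return window_index * span, (window_index + 1) * span
```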

A short-pulse based time-of-flight image sensor using 4-tap charge-modulation pixels with accelerated carrier response
Michihiro Inoue, Shohei Daikoku, Keita Kondo, Akihito Komazawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

Most reported CMOS indirect TOF range imagers are designed for CW (continuous wave) modulation, and their range resolutions have been greatly improved by using high modulation frequencies of over 100MHz. On the other hand, for extending the applications of indirect TOF image sensors to outdoor and high ambient light environments, a short-pulse-based TOF image sensor with multi-tap charge-modulation pixels is a good candidate. The presented TOF sensor shows that a pixel with three n-type doping layers and substrate biasing has a sufficient gating response to a light pulse width of 4ns with a linearity of 3%.

A high-linearity time-of-flight image sensor using a time-domain feedback technique
Juyeong Kim, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

In this paper, we propose a time-domain feedback technique for a Time-of-Flight (ToF) image sensor. Time-domain feedback has the advantages of easy time-to-digital conversion and effective suppression of linearity error. The technique has been implemented with 2-tap lock-in pixels and 5b digitally-controlled delay lines (DCDLs). The prototype ToF sensor is fabricated in a 0.11μm (1P4M) CIS process. The lock-in pixels, having a size of 16.8×16.8μm2, are driven by a 7ns pulse signal from the 5b DCDLs. The light pulse delay is controlled to measure the performance. The full range is set to 0 to 105cm with 11b full scale in 22ms. Our sensor attains a linearity error of less than 0.3%, and a range resolution of 2.67mm (peak) and 0.27mm (mean) has been achieved without any calibration techniques.

A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors
Taesub Jung, Yonghun Kwon, Sungyoung Seo, Min-Sun Keel, Changkeun Lee, Sung-Ho Choi, Sae-Young Kim, Sunghyuck Cho, Youngchan Kim, Young-Gu Jin, Moosup Lim, Hyunsurk Ryu, Yitae Kim, Joonseok Kim, Chang-Rok Moon
Samsung Electronics, Korea

An indirect time-of-flight (ToF) CMOS image sensor has been designed with a 4-tap 7 µm global shutter pixel in a back-side illumination process. A high full-well capacity (FWC) of 15000 e- per tap of 3.5 µm pitch and a read noise of 3.6 e- have been realized by employing a true correlated double sampling (CDS) structure with storage gates (SGs). Notable characteristics such as 86% demodulation contrast (DC) at 100MHz operation, higher quantum efficiency (QE) of 37%, and lower parasitic light sensitivity (PLS) at 940 nm have been achieved. As a result, the proposed ToF sensor shows depth noise of less than 0.3% with a 940 nm illuminator, even at long distance.

An over 120dB dynamic range linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors
Yasuyuki Fujihara, Maasa Murata, Shota Nakayama, Rihito Kuroda, Shigetoshi Sugawa
Tohoku University, Japan

This paper presents a prototype linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors (LOFITreCs) exhibiting over 120dB dynamic range with 11.4Me- full well capacity and maximum signal-to-noise ratio (SNR) of 70dB. The measured SNR at all switching points were over 35dB thanks to the proposed two-stage LOFITreCs.
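
The >120dB claim follows directly from the ratio of full well to noise floor; a quick sanity check (the read-noise figure below is illustrative, not taken from the paper):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Linear-response dynamic range in dB: 20*log10(FWC / read noise)."""
    return 20.0 * math.log10(full_well_e / read_noise_e)
```

An 11.4Me- full well with an assumed noise floor around 11 e- lands right at 120dB, consistent with the paper's headline figure.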

Deep image demosaicing for submicron image sensors (JIST-first)
Irina Kim, Seongwook Song, SoonKeun Chang, SukHwan Lim, Kai Guo
Samsung Electronics, Korea

The latest trend in image sensor technology allowing submicron pixel size for high-end mobile devices comes at very high image resolutions and with irregularly sampled Quad Bayer Color Filter Array (CFA). Sustaining image quality becomes a challenge for the Image Signal Processor (ISP), namely for demosaicing. Inspired by the success of the deep learning approach to standard Bayer demosaicing, we aim to investigate how the artifacts-prone Quad Bayer Array can benefit from it. We found that deeper networks are capable of improving image quality and reducing artifacts; however, deeper networks can hardly be deployed on mobile devices given very high image resolutions: 24MP, 36MP, 48MP. In this paper, we propose an efficient end-to-end solution to bridge this gap - a Duplex Pyramid Network (DPN). Its deep hierarchical structure, residual learning, and linear feature-map depth growth allow a very large receptive field, yielding better detail restoration and artifact reduction while staying computationally efficient. Experiments show that the proposed network outperforms the state-of-the-art for both Bayer and Quad Bayer demosaicing. For the challenging Quad Bayer CFA it reduces visual artifacts better than other deep networks, including artifacts existing in a conventional commercial solution. While superior in image quality, it is 2 to 25 times faster than state-of-the-art deep neural networks and therefore feasible for deployment on mobile devices, paving the way for a new era of on-device deep ISPs.
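
For background, the simplest non-deep handling of Quad Bayer is 2x2 binning into a conventional half-resolution Bayer mosaic, the kind of baseline that deep remosaicing/demosaicing aims to beat. A sketch, assuming each color occupies a clean 2x2 same-color block:

```python
import numpy as np

def quad_bayer_bin(raw):
    """Bin a Quad Bayer mosaic 2x2 into a half-resolution Bayer mosaic.

    Quad Bayer places each color in a 2x2 same-color block, so averaging
    every block collapses the pattern back to a standard Bayer CFA, at
    the cost of a 4x drop in pixel count.
    """
    raw = np.asarray(raw, dtype=np.float64)
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
```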

Imaging in the autonomous vehicle revolution
Gary Hicok
NVIDIA, USA

Innovation of imaging capabilities for AVs has been rapidly improving to the point that the cornerstone AV sensors are cameras. Much like the human brain processes visual data taken in by the eyes, AVs must be able to make sense of this constant flow of information, which requires high-performance computing to respond to the flow of sensor data. This presentation will delve into how these developments in imaging are being used to train, test and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sensor fusion, surround vision and accelerated computing are enabling this deployment.

Single-shot multi-frequency pulse-TOF depth imaging with sub-clock shifting for multi-path interference separation
Tomoya Kokado, Yu Feng, Masaya Horio, Keita Yasutomi, Shoji Kawahito, Takashi Komuro, Hajime Ngahara, Keiichiro Kagawa
Shizuoka University, Saitama University, Osaka University, Japan

Short-pulse-based time-of-flight (TOF) depth imaging using a multi-tap macro-pixel computational ultra-fast CMOS image sensor with temporally coded shutters is demonstrated. To separate multi-path components, shorten the minimal separation between adjacent pulses in a single shot, and overcome the range-resolution tradeoff, the application of multi-frequency coded shutters and sub-clock shifting is proposed. The computational CMOS image sensor incorporates an array of macro-pixels, each composed of four sub-pixels. The sub-pixels are implemented with four-tap lateral electric field charge modulators (LEFMs) with dedicated charge draining gates. For the macro-pixel, 16 different temporal binary shutters are applied to acquire a mosaic image of cross-correlations between an incident temporal optical signal and the temporal shutters. The effectiveness of the proposed method was verified experimentally with the computational CMOS image sensor. The clock frequency for the shutter generator was 73MHz, and a 520nm sub-ns pulse laser was used. A two-component multi-path optical signal, created by a transparent acrylic plate and a mirror placed 8.2m apart, and a change in time of flight half as long as the minimal time window were successfully distinguished.

Improving the disparity for depth extraction by decreasing the pixel height in monochrome CMOS image sensor with offset pixel apertures
Jimin Lee, Sang-Hwan Kim, Hyeunwoo Kwen, Seunghyuk Chang, JongHo Park, Sang-Jin Lee, Jang-Kyoo Shin
Kyungpook National University, Korea Advanced Institute of Science and Technology, Korea

This paper introduces the disparity improvement due to pixel height decrease in monochrome CMOS image sensor (CIS) with offset pixel apertures (OPAs) for depth extraction. A 3D image is a stereoscopic image created by adding depth information to a planar two-dimensional image. In the monochrome CIS with the OPAs described in this paper, the disparity is an important factor for obtaining depth information. As the pixel height decreases, the incident angle of light transferred from the microlens to the metal pattern opening increases. Therefore, the light response angle of left-OPA (LOPA) pixel and right-OPA (ROPA) pixel increases and thus the disparity improves. In this work, silicon-region-etching (SRE) process is applied to the proposed monochrome CIS with OPAs and the overall height of the pixel is lowered. Monochrome CIS with OPAs is used for the experiment, and a chief-ray-angle (CRA) experiment is implemented to measure the change of the disparity according to the pixel height. The proposed monochrome CIS with OPAs was designed and manufactured using the 0.11-μm CIS process. Improved disparity due to decreased pixel height has been experimentally verified.

Planar microlenses for near infrared CMOS image sensors
Lucie Dilhan, Jérôme Vaillant, Alain Ostrovsky, Lilian Masarotto, Céline Pichard, Romain Paquet
University Grenoble Alpes, CEA, STMicroelectronics, France

In this paper we present planar microlenses designed to improve the sensitivity of SPAD pixels. We designed diffractive and metasurface planar microlens structures based on rigorous optical simulations, then we implemented the diffractive microlens on a SPAD design available on STMicroelectronics 40nm CMOS testchips (32 x 32 SPAD array), and compared with the process of reference melted microlens. We characterized circuits and demonstrated optical gain from our designed microlenses.

Event threshold modulation in dynamic vision spiking imagers for data throughput reduction
Luis Cubero, Arnaud Peizerat, Dominique Morche, Gilles Sicard
LETI, CEA, University Grenoble Alpes, France

Dynamic vision sensors are growing in popularity for Computer Vision and moving scenes: its output is a stream of events reflecting temporal lighting changes, instead of absolute values. One of its advantages is fast detection of events, as they are read asynchronously as spikes. However, high event data throughput implies an increasing workload for the read-out. That can lead to data loss or to prohibitively large power consumption for constrained devices. This work presents a technique to reduce that event data throughput at the cost of a very compact additional circuitry at the pixel level: less events are generated while preserving most of the information. Our simulated example depicts a data throughput reduced to 14 %, in the case of the most aggressive version of our approach.
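
The per-pixel event logic, and the way the threshold trades throughput for detail, can be sketched behaviorally (names are my own; real DVS pixels do this with analog comparators on log intensity):

```python
def dvs_events(log_intensity_samples, threshold):
    """Emit DVS-style events from one pixel's log-intensity samples.

    A +1 (ON) event fires each time the signal rises by `threshold` above
    the reference level left by the last event, and a -1 (OFF) event when
    it falls by the same amount. Raising the threshold cuts the event
    count, reducing data throughput at the cost of temporal detail.
    """
    events = []
    ref = log_intensity_samples[0]
    for t, x in enumerate(log_intensity_samples[1:], start=1):
        while x - ref >= threshold:   # ON events
            ref += threshold
            events.append((t, +1))
        while ref - x >= threshold:   # OFF events
            ref -= threshold
            events.append((t, -1))
    return events
```

On the same brightness trace, doubling the threshold can cut the event stream by a large factor, which is the throughput-reduction lever the paper modulates.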

Samsung Promotes its 108MP Sensor

Samsung publishes a promotional article about its 108MP ISOCELL Bright HMX mobile sensor.


ams Announces X-Ray Sensor

BusinessWire: ams announces the AS5950 integrated sensor chip for X-ray detection will enable an improved CT detector for more detailed images at lower system costs.

The AS5950 is a CMOS device that combines a high-sensitivity photodiode array and a 64-channel ADC on the same die. As a single chip, the AS5950 is easier to mount in a CT detector module. Currently, CT scanner manufacturers need to assemble a discrete photodiode array on a complex PCB, connected via long traces to a discrete read-out chip. In 8- and 16-slice CT scanners, replacing this complex PCB assembly with a single AS5950 chip dramatically improves image-noise performance and – importantly – reduces manufacturers’ materials and production costs.

Jose Vinau, Marketing Director for the Medical & Specialty Sensors business line at ams, says: “ams wants to help make CT scanners more affordable and available throughout the world. The introduction of the AS5950 and its module will reduce the hurdles in assembly and manufacturing of an X-ray detector.”

Go to the original article...

IEDM 2019: Samsung Presents its Event-Based Sensor

Image Sensors World        Go to the original article...

Samsung presented a paper "Low-Latency Interactive Sensing for Machine Vision" by Paul K. J. Park, Jun-Seok Kim, Chang-Woo Shin, Hyunku Lee, Weiheng Liu, Qiang Wang, Yohan Roh, Jeonghan Kim, Yotam Ater, Evgeny Soloveichik, and Hyunsurk Eric Ryu at IEDM last week.

"In this paper, we introduce the low-latency interactive sensing and processing solution for machine vision applications. The event-based vision sensor can compress the information of moving objects in a cost-effective way, which in turn enables energy-efficient and real-time processing in various applications such as person detection, motion recognition, and Simultaneous Localization and Mapping (SLAM). Our results show that the proposed technique can achieve superior performance to conventional methods in terms of accuracy and latency.

For this, we had previously proposed 640x480 VGA-resolution DVS with a 9-um pixel pitch supporting a data rate of 300Meps by employing a fully synthesized word-serial group address-event representation (G-AER) which handles massive events in parallel by binding neighboring 8 pixels into a group [3]. The chip only consumes a total of 27mW at a data rate of 100Keps and 50mW at 300Meps.
"
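The grouping idea behind G-AER can be sketched in a few lines (a hypothetical software model of the encoding, not Samsung's circuit): instead of transmitting one address per firing pixel, 8 neighboring pixels are bound into a group and read out as a group address plus an 8-bit event mask, so bursts of nearby events share a single word.

```python
def group_aer_encode(pixel_addrs, group_size=8):
    """Encode firing pixel addresses as (group address, bitmask) words,
    binding group_size neighboring pixels into one word-serial packet."""
    groups = {}
    for addr in pixel_addrs:
        g, bit = divmod(addr, group_size)       # group index, bit position
        groups[g] = groups.get(g, 0) | (1 << bit)
    return sorted(groups.items())

# Pixels 0..7 share group 0, so eight events collapse into a single word:
print(group_aer_encode([0, 1, 2, 3, 4, 5, 6, 7, 9, 17]))
# → [(0, 255), (1, 2), (2, 2)]  (10 pixel addresses in 3 words)
```

When activity is spatially clustered, as it typically is for moving objects, this word-serial grouping is what lets the read-out sustain hundreds of Meps without per-pixel arbitration.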

Go to the original article...

ON Semi Marketing on Vision IoT

Image Sensors World        Go to the original article...

ON Semi publishes a marketing webinar about its Vision IoT solutions:

Go to the original article...

Micro-power ToF Camera

Image Sensors World        Go to the original article...

IEEE Sensors Journal publishes EPFL open-access paper "An Ultra-Low Power PPG and mm-Resolution ToF PPD-Based CMOS Chip Towards All-in-One Photonic Sensors" by Assim Boukhayma, Antonino Caizzone, and Christian Enz describing an extremely low power ToF camera:

"This paper presents a CMOS photonic sensor covering multiple applications from ambient light sensing to time resolved photonic sensing. The sensor is made of an array of gated pinned photodiodes (PPDs) averaged using binning and passive switched-capacitor (SC) charge sharing combined with ultra-low-power amplification and analog-to-digital conversion. The chip is implemented in a 180 nm CMOS image sensor (CIS) process and features high sensitivity, low-noise and low-power performance. Measurement results demonstrate uW health monitoring through Photoplethysmography (PPG), 10 ps resolution for time resolved light sensing and mm precision for time-of-flight (ToF) distance ranging obtained with a frame rate of 50 Hz and 20 dB ambient light rejection."
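The quoted 10 ps timing resolution and mm-level ranging precision are mutually consistent via the basic ToF relation d = c·t/2, as a quick back-of-the-envelope check shows (a generic calculation, not code from the paper):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance(round_trip_time_s):
    """Distance from a round-trip time-of-flight: d = c * t / 2."""
    return C * round_trip_time_s / 2

# A 10 ps timing step corresponds to ~1.5 mm of range, matching the
# mm precision quoted in the abstract.
print(tof_distance(10e-12) * 1e3, "mm")
```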


Go to the original article...

Brillnics 2.8um, 120 Ke− Full Well Pixel with 160 µV/e− Conversion Gain

Image Sensors World        Go to the original article...

MDPI paper "A 120-ke− Full-Well Capacity 160-µV/e− Conversion Gain 2.8-µm Backside-Illuminated Pixel with a Lateral Overflow Integration Capacitor" by Isao Takayanagi, Ken Miyauchi, Shunsuke Okura, Kazuya Mori, Junichi Nakamura, and Shigetoshi Sugawa from Brillnics, Ritsumeikan University, and Tohoku University is part of the Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"In this paper, a prototype complementary metal-oxide-semiconductor (CMOS) image sensor with a 2.8-μm backside-illuminated (BSI) pixel with a lateral overflow integration capacitor (LOFIC) architecture is presented. The pixel was capable of a high conversion gain readout with 160 μV/e− for low light signals while a large full-well capacity of 120 ke− was obtained for high light signals. The combination of LOFIC and the BSI technology allowed for high optical performance without degradation caused by extra devices for the LOFIC structure. The sensor realized a 70% peak quantum efficiency with a normal (no anti-reflection coating) cover glass and a 91% angular response at ±20° incident light. This 2.8-μm pixel is potentially capable of higher than 100 dB dynamic range imaging in a pure single exposure operation."
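The claim of more than 100 dB single-exposure dynamic range follows from the abstract's own numbers. Assuming the high-conversion-gain (160 µV/e−) readout reaches a noise floor on the order of 1 e− (an assumption here, not stated in the quote), DR = 20·log10(FWC / noise floor):

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Single-exposure dynamic range in dB: 20*log10(FWC / noise floor)."""
    return 20 * math.log10(full_well_e / read_noise_e)

# 120 ke- LOFIC full well with an assumed ~1 e- read-noise floor:
print(dynamic_range_db(120_000, 1.0))  # ~101.6 dB, i.e. >100 dB
```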

Go to the original article...

VGA to Stay in Smartphones

Image Sensors World        Go to the original article...

IFNews quotes an Industrial Securities report forecasting that VGA and 1.3MP sensors are here to stay in smartphones:


IFNews also quotes Credit Suisse report that "Samsung Electronics is winding down production of low-pixel-count CISs (16MP and below) and preferentially allocating logic manufacturing capacity to 24MP/48MP and above CISs. Omnivision is benefiting from this in particular and has raised prices by around 15% in 4Q19. This is causing CIS market conditions to improve rapidly."

Go to the original article...

Leakage Non-Uniformity and RTN

Image Sensors World        Go to the original article...

MDPI paper "Leakage Current Non-Uniformity and Random Telegraph Signals in CMOS Image Sensor Floating Diffusions Used for In-Pixel Charge Storage" by Alexandre Le Roch, Vincent Goiffon, Olivier Marcelot, Philippe Paillet, Federico Pace, Jean-Marc Belloir, Pierre Magnan, and Cédric Virmontois from Université de Toulouse, CEA, and Centre National d’Etudes Spatiales (CNES), France, belongs to the Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"The leakage current non-uniformity, as well as the leakage current random and discrete fluctuations sources, are investigated in pinned photodiode CMOS image sensor floating diffusions. Different bias configurations are studied to evaluate the electric field impacts on the FD leakage current. This study points out that high magnitude electric field regions could explain the high floating diffusion leakage current non-uniformity and its fluctuation with time called random telegraph signal. Experimental results are completed with TCAD simulations allowing us to further understand the role of the electric field in the FD leakage current and to locate a high magnitude electric field region in the overlap region between the floating diffusion implantation and the transfer gate spacer."

Go to the original article...

Fab Equipment Spending Upswing Led by Image Sensors

Image Sensors World        Go to the original article...

PRNewswire: The rebound in fab equipment spending is led by the image sensor market, according to SEMI:

"Led by Sony, image sensor spending is expected to jump 20 percent in the first half of 2020 and soar by over 90 percent in the second half, peaking at US$1.6 billion."

Go to the original article...

IEDM 2019: Samsung to use 14nm FinFET Process for 144MP Sensor

Image Sensors World        Go to the original article...

Samsung presented a 14nm FinFET process optimized for imaging applications at IEDM last week: "14nm FinFET process technology platform for over 100M pixel density and ultra low power 3D Stack CMOS Image Sensor" by Donghee Yu, Choong jae Lee, Myounkyu Park, Junghwan Park, Seungju Hwang, Joonhyung Lee, Sunghun Yu, Hyunjung Shin, ByoungHo Kim, Jong-Won Choi, Sangil Jung, Minho Kwon, Il-Seon Ha, Chaesung Kim, Sanghyun Cho, Seunghyun Lim, Won-Woong Kim, Moo-Young Kim, Seonghye Park, Ki-Don Lee, Rakesh Ranjan, Shigenobu Maeda, and Gitae Jeong.

"CMOS Image Sensor (CIS) products need higher-voltage devices and better analog characteristics than conventional SOC & Logic products. This work presents a newly developed 14nm FinFET process with 2.xV high-voltage FinFET device characteristics showing excellent analog and low-power digital characteristics compared to a 28nm planar process. Gm is improved by 30% and 67% in the FinFET process for NMOS and PMOS, respectively. Rout characteristics increased by 40 times and 6 times over the 28nm planar process. Interface state density (Nit) improved by more than 40%, and flicker noise characteristics also improved by 64% and 42% for NMOS and PMOS, respectively. Digital logic transistor ion-ioff performance improved by 32% and by 211% for NMOS and PMOS, respectively, compared to the 28nm planar device, and the chip power consumption of the digital logic functional block reduced by 34% in real Si of a 12M pixel product. The 14nm FinFET process is expected to improve power consumption by 42% at 144M pixel density."

Go to the original article...

Dark Current and Plasma Damage

Image Sensors World        Go to the original article...

MDPI paper "CMOS Image Sensors and Plasma Processes: How PMD Nitride Charging Acts on the Dark Current" by Yolène Sacchettini, Jean-Pierre Carrère, Romain Duru, Jean-Pierre Oddou, Vincent Goiffon, and Pierre Magnan from STMicroelectronics and ISAE-SUPAERO, Université de Toulouse, is part of the Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Plasma processes are known to be prone to inducing damage by charging effects. For CMOS image sensors, this can lead to dark current degradation both in value and uniformity. An in-depth analysis, motivated by the different degrading behavior of two different plasma processes, has been performed in order to determine the degradation mechanisms associated with one plasma process. It is based on in situ plasma-induced charge characterization techniques for various dielectric stack structures (dielectric nature and stack configuration). A degradation mechanism is proposed, highlighting the role of ultraviolet (UV) light from the plasma in creating electron–hole pairs, which induce positive charges in the nitride layer at the wafer center, and negative ones at the edge. The trapped charges de-passivate the SiO2/Si interface by inducing a depleted interface above the photodiode, thus emphasizing the generation of dark current. A good correlation between the spatial distribution of the total charges and the value of dark current has been observed."

Go to the original article...

Light Co. Changes its Focus to Automotive 3D

Image Sensors World        Go to the original article...

Light Co. appears to have shifted its main technology focus to automotive 3D perception. The L16 camera and Nokia 9 smartphone information has been moved to the "Case Studies" tab on Light's website.

"A missing piece in long-range depth perception

For automobiles to safely navigate the real world, they need to be able to perceive as humans do: a full picture with accurate depth throughout ranges. Lidar provides accurate information, but only up to a point, and with limited resolution. Radar detects when an object is in the far distance, but it isn't sophisticated enough to discern whether it's a truck or a barn. The range that radar is truly capable of is also often far less than claimed.

The Opportunity

The hole that exists in long-range, accurate sensing for ADAS/ADS is where Light comes in. We are developing an incredibly resilient perception technology that provides precise object detection, definition, and tracking through extended ranges. All in real-time."

Go to the original article...

Samsung to Adopt RISC-V for its Image Sensors

Image Sensors World        Go to the original article...

The Register writer Chris Williams reports from the RISC-V Summit held this week in Silicon Valley that Samsung is going to use RISC-V in its image sensors, as well as in AI edge devices. Earlier this year, Sony also presented at a RISC-V conference in Japan.

Go to the original article...

MagikEye to Demo its Invertible Light Image Sensor Technology

Image Sensors World        Go to the original article...

BusinessWire: Magik Eye Inc. will be holding demonstrations of its Invertible Light Technology (ILT) at the 2020 CES. ILT is said to be an alternative to ToF and Structured Light 3D imaging solutions, and is claimed to be the smallest, fastest, and most power-efficient 3D sensing method. “We are pleased to demonstrate our new 3D sensing solutions that will enable exciting use cases for applications in robotics and smart phones,” said Takeo Miyazawa, Founder & CEO of MagikEye. The company's presentation is available on Slideshare.

Go to the original article...

Free ToF Book

Image Sensors World        Go to the original article...

INRIA, Grenoble, France, posts a ToF book based on its cooperative research project with the 3D Mixed Reality Group at the Samsung Advanced Institute of Technology. The book "Time-of-Flight Cameras: Principles, Methods and Applications" by Miles Hansard, Seungkyu Lee, Ouk Choi, and Radu Horaud is dated November 2012:

"This book describes a variety of recent research into time-of-flight imaging. Time-of-flight cameras are used to estimate 3D scene-structure directly, in a way that complements traditional multiple-view reconstruction methods. The first two chapters of the book explain the underlying measurement principle, and examine the associated sources of error and ambiguity. Chapters three and four are concerned with the geometric calibration of time-of-flight cameras, particularly when used in combination with ordinary colour cameras. The final chapter shows how to use time-of-flight data in conjunction with traditional stereo matching techniques. The five chapters, together, describe a complete depth and colour 3D reconstruction pipeline. This book will be useful to new researchers in the field of depth imaging, as well as to those who are working on systems that combine colour and time-of-flight cameras."
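The measurement principle covered in the book's opening chapters can be summarized, for continuous-wave ToF cameras, by the phase-to-depth relation d = c·φ/(4πf) and the associated ambiguity range c/(2f), beyond which depths wrap around. A small sketch (generic textbook formulas, not code from the book):

```python
import math

C = 299_792_458.0  # speed of light, m/s

def cw_tof_depth(phase_rad, mod_freq_hz):
    """Depth from the measured phase shift of a continuous-wave ToF camera:
    d = c * phi / (4 * pi * f_mod)."""
    return C * phase_rad / (4 * math.pi * mod_freq_hz)

def ambiguity_range(mod_freq_hz):
    """Depths beyond c / (2 * f_mod) alias back (phase wrap-around)."""
    return C / (2 * mod_freq_hz)

# At 20 MHz modulation, a pi/2 phase shift maps to ~1.87 m,
# and depth measurements wrap every ~7.5 m.
print(cw_tof_depth(math.pi / 2, 20e6), ambiguity_range(20e6))
```

The wrap-around is exactly the ambiguity the book's early chapters analyze, and resolving it (e.g. with multiple modulation frequencies) is part of the calibration and reconstruction pipeline described later.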

Go to the original article...
