Assorted Videos: GPixel, National Tsing Hua University, Hort-Eye

GPixel CMO Wim Wuyts talks about the company's solutions for 3D imaging:


IEEE Sensors publishes a 2012 presentation "Linear CMOS Image Sensor with Time-Delay Integration and Interlaced Super-Resolution Pixel" by Jui-Hsin Chang, Kuo-Wei Cheng, Chih-Cheng Hsieh, Wen-Hsu Chang, Hann-Huei Tsai, and Chin-Fong Chiu from National Tsing Hua University, Taiwan:


University of Melbourne and Hort-Eye Pte. publish a video "Multispectral Image Sensors using Metasurfaces" by Ranjith Unnithan:

ON Semi on 1.1um Pixel Spatial Resolution Measurements

ON Semi presents a paper at the Electronic Imaging 2018 conference in Burlingame, CA: "Characterization of Image Sensor Resolution by Single Pixel Illumination" by Victor Lenchenkov, Orit Skorka, Robert Gravelle, Ulrich Boettiger, and Radu Ispasoiu. A few slides:

"Illumination single 1.1 um pixel provide information for image sensor PSF and MTF and can be used for image processing and simulation calibration. Measurements PSF across image plane could result in image correction as function of image height."


One can see that the microlens layer gives only a marginal MTF improvement in the 1.1um pixel measurements:
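To illustrate the relationship the measurements rely on, here is a minimal sketch (not code from the paper) of how an MTF curve follows from a pixel response measured with single-pixel illumination: the MTF is the normalized magnitude of the Fourier transform of the sampled PSF. The 1.1 um pitch and the 8-sample response below are assumed values for illustration only.

    import numpy as np

    def mtf_from_psf(psf, pitch_um=1.1):
        """1-D MTF as the normalized magnitude of the Fourier transform of a
        measured pixel response (PSF/LSF) sampled at the pixel pitch."""
        psf = np.asarray(psf, dtype=float)
        psf /= psf.sum()                                      # normalize area to 1
        mtf = np.abs(np.fft.rfft(psf))
        freqs = np.fft.rfftfreq(psf.size, d=pitch_um * 1e-3)  # cycles/mm
        return freqs, mtf / mtf[0]

    # Hypothetical response of a 1.1 um pixel broadened by optical crosstalk
    freqs, mtf = mtf_from_psf([0.01, 0.03, 0.13, 0.60, 0.13, 0.06, 0.03, 0.01])
    for f, m in zip(freqs, mtf):
        print(f"{f:6.0f} cy/mm  MTF = {m:.2f}")  # the last row is Nyquist (~455 cy/mm)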

Microsoft iToF Camera Model

Microsoft publishes an Optics Express paper "Time-of-flight camera characterization with functional modeling for synthetic scene generation" by Sergio Ortiz, Mukhil Azhagan Mallaiyan Sathiaseelan, and Augustine Cha.

"In this manuscript, we design, describe, and present a functional model of Time-of-Flight (ToF) cameras. The model can be used to generate randomized scenes that incorporate depth scenarios with various objects at various depths with varied orientations and illumination intensity. In addition to the potential to generate any random depth scenario, the camera, pixels, and binning are modelled incorporating radial distortion based on camera intrinsic and extrinsic. The model also includes ToF artifacts such as Signal Noise, Crosstalk and Multipath. We measured experimentally the Noise in Time-of-Flight. We experimentally fitted, and simulated with state-of-the art Simulator the Crosstalk effect, and characterized multipath according with the existing literature. Our work can be used to generate as many images as needed for neural network (NN) training and testing. The proposed approach can also be used to benchmark and evaluate both End-to-End ToF algorithms as well as specialized algorithms for denoising, unwrapping, crosstalk, and multipath correction."

Brigates Renews its IPO Efforts

Aiji Micro APP: Chinese company Brigates (Ruixin Micro) renews its efforts to go public on the Shanghai Stock Exchange:

"After years of intensive cultivation, Ruixin Micro has possessed a number of domestic leading and internationally advanced core technologies in the field of image sensor circuit design, pixel design and image processing. In particular, the MCCD and ECCD technologies independently developed by Ruixin Micro combine the advantages of traditional CCD and CMOS, significantly improve the imaging quality of image sensors, and promote the development of domestic image sensor technology. At present, the company has become one of the few companies in the world that master ECCD technology."

Thesis on Pixel Noise Reduction

Delft University of Technology publishes Xiaoliang Ge's PhD thesis "Temporal Noise Reduction in CMOS Image Sensors."

"In pursuit of achieving the noise condition of single-photon imaging, system-level and circuit-level innovations and optimizations for CMOS image sensor (CIS) noise reduction are called for. Stimulated by this motivation, this thesis focuses on reducing the temporal noise generated in the pixels and the readout electronics."

Sony SWIR Sensor Designers Interview

Sony publishes an interview with the designers of its stacked SWIR InGaAs sensor. A few quotes:

"...the miniaturization of SWIR image sensors had been hindered by the use of bump connection. We already knew that this problem could possibly be solved using the Cu-Cu connection*2, which is the stacking technology Sony had developed for years for image sensors. This technology would make it possible to align pixels at a micro-pitch. Meanwhile, conventional image sensors use silicon as a photoelectric conversion layer, but this material does not absorb the SWIR range of light. So, we needed to use indium gallium arsenide (InGaAs) as the photodiode material that can absorb the SWIR spectrum and convert the light energy into electric signals. This material was never used in Sony’s image sensors before, but however, Another division of Sony had the compound semiconductor technology to produce InGaAs.

Another point was that conventional SWIR image sensors had many defects. White patches would appear in the dark image due to the quality issues particular to InGaAs. We had the technology to make compound semiconductors based on our years of expertise in developing laser technology, so we aimed to leverage it in creating defect-free, high-quality products.

As it was the Group’s first-ever SWIR image sensor to be developed using InGaAs, we thoroughly reviewed challenges to be addressed, which resulted in more than 300 in the early stages of the development. As we proceeded further into the development, we found more issues to deal with. There turned out to be so many challenges awaiting."

Yole on Mobile CIS Market

Yole Developpement publishes "Smartphone flagship battle from sensors to modules to image quality - Webcast"


China seems to be becoming an image sensor land: 3 of its 10 largest semiconductor companies design image sensors, while a fourth, SMIC, manufactures them:

Sony Reports 9% YoY CIS Sales Drop

Sony reports lower mobile image sensor sales in the last quarter, leaves full-year forecast unchanged:

New Theory of RTS Noise

LAAS (Laboratoire d'analyse et d'architecture des systèmes), ST, ISAE-SUPAERO, CEA, and CNR-IOM publish a paper "Clusters of Defects as a Possible Origin of Random Telegraph Signal in Imager Devices: a DFT based Study" by Antoine Jay, Anne Hémeryck, Fuccio Cristiano, Denis Rideau, Pierre-Louis Julliard, Vincent Goiffon, Alexandre Le Roch, Nicolas Richard, Layla Martin-Samos, and Stefano de Gironcoli, presented at International Conference on Simulation of Semiconductor Processes and Devices (SISPAD) in Sept. 2021.

"The origin of the random telegraph signal (RTS) observed in semiconductors-based electronic devices is still subject to debates. In this work, by means of atomistic simulations, typical clusters of defects as could be obtained after irradiation or implantation are studied as a possible cause for RTS. It is shown that:
(i) a cluster of defects is highly metastable,
(ii) it introduces several electronic states in the band gap,
(iii) it has an electronic cross section much higher than the one of point defects.
These three points can simultaneously explain why an electron- hole generation rate can switch with time, while respecting the experimental measurement."


The new theory is said to be able to explain all of the following RTS observations simultaneously:
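Phenomenologically, the behavior the paper seeks to explain is a two-level random telegraph signal: the generation rate toggles between discrete levels with exponentially distributed dwell times. The sketch below is a generic RTS generator for illustration only; the dwell times, amplitude, and sampling rate are assumptions, not values from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    def rts_trace(n_samples, dt, tau_high, tau_low, amplitude):
        """Two-level random telegraph signal with exponentially distributed
        dwell times (means tau_high / tau_low, in seconds) sampled every dt."""
        out = np.empty(n_samples)
        state, i = 0, 0
        while i < n_samples:
            tau = tau_high if state else tau_low
            dwell = max(1, int(rng.exponential(tau) / dt))
            out[i:i + dwell] = state * amplitude
            i += dwell
            state ^= 1                       # toggle between the two levels
        return out

    # e.g. a dark-signal RTS with 10 ms / 30 ms mean dwell times sampled at 1 kHz
    trace = rts_trace(5000, dt=1e-3, tau_high=10e-3, tau_low=30e-3, amplitude=50.0)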

IC Insights: CMOS Sensor Market to Grow by Only 7% in 2021

IC Insights is disappointed by the slow CMOS sensor market growth this year:

"The total optoelectronics market is expected to be held back by sales growth of just 7% in CMOS image sensors this year.  CMOS image sensor leader Sony blames softer growth conditions in 2021 on trade frictions between the U.S. and China and a “deterioration of product mix.”  CMOS image sensor sales have also been impacted by market fluctuations in some end-use applications, and shortages of ICs and other components used in digital-imaging systems."

Last 4 Days of Free Download of Single-Slope Column-Parallel ADC Book!

Only 4 days remain for free download of Now Publishers book "Welcome to the World of Single-Slope Column-Level Analog-to-Digital Converters for CMOS Image Sensors" by Albert Theuwissen and Guy Meynants.

Complimentary downloads of this book will be available till 1st of November 2021. After that date, you can receive the alert member discount price of $40 (includes non-trackable shipping) by quoting the Promotion Code: 137791. Please note that the discount price applies only to purchases of print copies by individuals paying in advance by credit card. The discounted price will not be honored for institutions or booksellers.

Chinese Startup RuisiZhixin Develops 8MP Combined Event-Driven and Regular Sensor

36kr.com, EET-China (Google automatic translation): Chinese startup Beijing Ruisi Zhixin Technology (English name AlpsenTek) presents a hybrid vision sensor that combines an event-driven sensor and a regular frame-based one in a single pixel array:

"The ALPIX-Pilatus (ALPIX-P for short) released by Ruisizhixin this time is developed based on "Hybrid Vision" and is the world's first chip that integrates traditional image sensor technology and bionic event camera technology in the same pixel . ALPIX-P can quickly switch between the traditional image sensor mode and the bionic event camera mode. In image mode, ALPIX-P is a global exposure sensor with a maximum frame rate of 120 frames, which is completely consistent with existing traditional image sensors and is fully compatible with existing mature vision systems. In the bionic event camera mode, ALPIX-P has the characteristics of ultra-high frame rate, high effective information ratio, and large dynamic range.

At present, the chip has begun sampling. This also verified that the "Hybrid Vision" technology is feasible in terms of mass production, cost, and performance.

Based on the design of ALPIX-P, Ruisi Zhixin has developed two ALPIX-series sensor chip products: ALPIX-Titlis (ALPIX-T for short), a fusion low-power bionic vision sensor, and ALPIX-Eiger (ALPIX-E for short), a fusion high-end bionic vision sensor.

The team has thoroughly studied the technical route of the bionic event vision sensor. The core founders graduated from Cambridge University, ETH Zurich, Zhejiang University and other prestigious universities, and have many years of working experience at NXP, ARM, Freescale, Intel, Magic Leap and other companies. At its founding the core team had only about 6 members; it has since grown to 50 people, of whom 85% are R&D personnel.

As for its financing history, Ruisi Zhixin has completed two rounds so far. At the end of 2019, the company closed an angel round of tens of millions of RMB from Zhongke Chuangxing and Lenovo Venture Capital. At the end of 2020, it completed a Pre-A round of nearly 100 million yuan, jointly led by Hikvision and Yaotu Capital."


The company explains the advantages of its hybrid sensors:

"Compared with traditional CIS, the bionic vision sensor has the characteristics of fast speed (>5000 frames/s), low power consumption (tens of mW), small data volume and large dynamic range (>120dB), which can successfully resolve the problem areas currently faced by computer vision as described earlier.

However, during actual operation, many scenarios require both event flow signals for rapid prediction, and traditional images for feature extraction.

AlpsenTek realized that the integration of bionic vision sensor chip and high-end image sensor chip technology may be a breakthrough.

This two-in-one function retains the low power consumption, high speed, high data efficiency and high dynamic range of bionic vision sensors. Compared with similar competing products, ALPIX effectively improves the chip's signal-to-noise ratio, reduces noise, improves low-light performance, and meets customers' high-performance needs.

Early on, many industrial giants in China recognized the potential of sensor chip technology. When AlpsenTek emerged with its unique technical strength, it also attracted the attention of giants in the industrial chain.

Recently, AlpsenTek secured USD15.5 million in a Pre-A round of financing. Investors include: Hikvision, Glory Ventures, Fargo Capital, iFlytek Venture Capital, Sunny Optical, Allwinner Technology, and Cowin Capital, in addition to previous shareholders Lenovo Capital and Casstar, who continue to invest."

"A product’s appearance is not designed, rather it is driven by the demand of the users,” the company's CEO Deng Jian says, “New technology can't change life, but new products have the power to do this.


The company's patent applications EP3731516 and WO2020216867 show its pixel structure:
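As a rough illustration of the event-camera mode described above (not AlpsenTek's actual pixel logic), an event-based readout emits an (x, y, t, polarity) event whenever the log intensity at a pixel changes by more than a contrast threshold; static regions generate no data, which is where the low data volume and high dynamic range come from. The threshold value below is an assumption.

    import numpy as np

    def events_from_frames(frames, threshold=0.2):
        """Emit DVS-style (x, y, t, polarity) events from a (T, H, W) stack of
        linear-intensity frames whenever log intensity changes by > threshold."""
        log_ref = np.log(frames[0].astype(float) + 1e-6)   # per-pixel reference
        events = []
        for t in range(1, frames.shape[0]):
            log_i = np.log(frames[t].astype(float) + 1e-6)
            diff = log_i - log_ref
            ys, xs = np.where(np.abs(diff) >= threshold)
            for y, x in zip(ys, xs):
                events.append((x, y, t, 1 if diff[y, x] > 0 else -1))
                log_ref[y, x] = log_i[y, x]                # reset fired pixels only
        return events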

Old Presentations on Event-Driven Sensors

IARIA publishes two presentations on event-driven sensors by Laurent Fesquet (Université Grenoble Alpes): "Low-Power Event-driven Image Sensor Architectures" (2016, with Amani Darwish and Gilles Sicard) and "Sensing and Sampling for Low-Power Applications" (2018, by Fesquet alone). A few slides are given below:

CIS Production BIST

ST and University of Montpellier publish a paper "A Fast and Low Cost Embedded Test Solution for CMOS Image Sensors" by Julia Lefèvre, Philippe Debaud, Patrick Girard, and Arnaud Virazel presented at 2021 IEEE International Test Conference (ITC).

"This paper presents a novel test solution directly embedded inside CMOS Image Sensors (CIS) to sort out PASS and FAIL dies during production test. The solution aims at reducing test time, which can represent up to 30% of the final product cost. By simplifying the way optical tests are usually applied with an ATE, the proposed Built-In Self-Test (BIST) solution overcomes the drawbacks of long test time and huge amount of test data storage. We experimented our solution by considering that roughly half of the tests usually performed with an ATE can be embedded and applied using the proposed fast and low cost BIST engine. Results obtained on more than 2,400 sensors have shown that our solution reduces test time by about 30% without impacting the defect coverage. The area cost of our solution is about 1% of the digital part of the sensor, i.e., approximately 0.25% of the total sensor area. The proposed embedded CIS test solution outperforms existing solutions in terms of area overhead and test time saving, thus encouraging its future implementation in an industrial production flow."

Smartsens Progresses with IPO at Valuation of $4.4B

EE-Ofweek (Google automatic translation): Smartsens plans to go public; its application was accepted by the SSE Science and Technology Innovation Board on June 28, with China Securities as the sponsor. The company proposes to publicly issue no more than 49.1 million shares (not less than 10% of the total) and plans to raise 2.82 billion yuan for an R&D center equipment and system construction project, an image sensor chip test project, a CMOS sensor chip upgrade and industrialization project, and supplementary working capital (790 million yuan).

These numbers put the Smartsens valuation at 28.2 billion yuan, or $4.4 billion.

Smartsens CEO and founder Xu Chen (Richard Xu) is the controlling shareholder and actual controller of the company, and CTO Mo Yaowu is the second largest shareholder. Xu Chen directly holds 15.23% of the company's shares with 47.32% of the voting rights, and Mo Yaowu directly holds 6.66% of the shares with 4.14% of the voting rights; together they hold 21.89% of the shares and 51.46% of the voting rights, which are owned or controlled by Xu Chen.

Revenue has grown by leaps and bounds, but compared with peers, the scale is still small, and the gross profit margin is lower than the peer average.

Smartsens products are used in many high-tech applications such as security monitoring, machine vision, and intelligent vehicle electronics. From 2018 to 2020 and in January-March 2021, the company's operating income was 325 million yuan, 678 million yuan, 1.527 billion yuan and 541 million yuan, respectively, a year-on-year increase of 92% in 2019 and 124.89% in 2020. Net profit (loss) was -166 million yuan, -242 million yuan, 121 million yuan and 69 million yuan, respectively: a cumulative loss of 408 million yuan over 2018 and 2019, before the company turned profitable in 2020.

From 2018 to 2020 and in January-March 2021, the company's FSI series sales were 56 million, 86 million, 139 million, and 49 million units, with revenues of 284 million yuan, 480 million yuan, 880 million yuan and 288 million yuan, accounting for 87.45%, 70.61%, 57.66% and 53.3% of revenue, respectively, a share that continued to decline. Sales of BSI-RS products were 2.0483 million, 9.2439 million, 27.6244 million, and 11.11 million units, respectively, with revenues of 36 million yuan, 149 million yuan, 473 million yuan and 157 million yuan; their share of revenue increased from 10.99% in 2018 to 30.97% in 2020.

EPIC Meeting on Low-Light Imaging

The EPIC Technology Meeting on Low-Light Camera Technology and Applications features Valeo, Ibeo, Leica, and Thales presentations. A few slides are below:

Sigmaintell: CIS Prices Go Down

Science and Technology Materials quotes Sigmaintell's China CIS market price tracker, which predicts a 5-7% price decline in Q4 2021:

"At the beginning of 21Q4, the CIS inventory level continues to rise, but there is no significant increase in demand. The specific analysis is as follows: on the supply side, affected by the decline in demand, the supply of low- and medium- pixel capacity is reduced, and the inventory is prioritized. The new capacity of high pixel products continues to increase in 21Q4. On the demand side, the demand for smartphones has been reduced, and the demand for CIS stocking has been continuously reduced. In summary , the overall pixel supply-demand relationship in October shows an oversupply situation. According to the latest survey data from Sigmaintell in October, the prices of low pixel products have begun to loosen up, and the prices of 48M+ pixel products have continued to fall."

Quantum Dot Sensor Company SWIR Vision Raises $5M in A-Round

Optics.org, EINPressWire: SWIR Vision Systems, a Durham, North Carolina, company developing colloidal quantum dot (CQD) image sensors, has raised $5M in a Series A round of funding. SWIR Vision Systems was founded in 2018 as a spin-off from the Research Triangle Institute. The company's “Acuros” family of thermoelectrically cooled CMOS-based CQD sensors reaches 2.1MP resolution.

"While InGaAs SWIR cameras are generally constrained to 640×512 or 1280×1024 pixel formats, our pioneering CQD sensor technology enables 2.1 megapixel, full-HD resolution, the first commercially available SWIR product of its kind," the company says.

The new eSWIR version of the product family operates across an extended wavelength range, between 350 nm and 2000 nm, with a pixel pitch of 15 µm. 

The company's Vimeo video compares regular RGB imaging with SWIR in foggy environment.

China Domestic Market Dynamics

Sohu: Sunrise Big Data publishes its analysis of the unit-volume CIS market in Q1 2021. Galaxycore is quickly capturing market share from Sony, Samsung, and Omnivision compared with Q4 2020:

"According to Sunrise Big Data statistics, global smartphone CMOS image sensor shipments in Q1 2021 will be 1.413 billion, of which Galaxycore will ship about 447 million, accounting for 31.6% , ranking first.

According to the observation, Galaxycore has also become a leading sensor supplier in the security, automotive and other sub-sectors. In the first half of 2021, Galaxycore's non-mobile-phone CIS revenue exceeded 100 million US dollars, more than in the full year of 2020."

ToF Basics by Terabee

Terabee publishes a few webinars on ToF technology basics and applications:

TI Proprietary V3Link Competes with MIPI A-PHY

TI presentation "V3Link Industrial SerDes" unveils the company's approach to compete with MIPI A-PHY and Auto-Serdes standards:

Smartsens Improves its AI Sensor Series

CoreIntelligence (Google translation): SmartSens keeps up a high pace of incremental improvements and launches three new 4MP image sensors in its AI series (AI stands for Advanced Imaging) for security applications: SC400AI, SC401AI, and SC433:

"The performance has been significantly improved compared with the previous generation products. It is equipped with Smartsens' innovative SFCPixel patented technology, which can achieve excellent night vision full-color imaging effects.

In addition, thanks to the PixGain dual-pixel conversion gain technology, SC401AI improves the image quality while further expanding the applicability of terminal products. As a product of the same specification as SC401AI, SC400AI has a frame rate of up to 60fps and can support 30fps dynamic line overlap HDR (Staggered HDR) image output, which provides customers with more choices while imaging performance is upgraded.

Compared with the previous generation, the full-well capacity of SC400AI and SC401AI has increased by 48.6%, the dynamic range by 3 dB, and the QE (quantum efficiency) at 520 nm by 42%."
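As a quick sanity check (my arithmetic, not the company's): dynamic range scales as 20·log10(full well / noise floor), so a 48.6% full-well increase at an unchanged noise floor (an assumption the announcement does not state) corresponds roughly to the quoted 3 dB:

    import math

    delta_dr_db = 20 * math.log10(1.486)   # 48.6% larger full well, same noise floor
    print(f"{delta_dr_db:.2f} dB")          # ~3.4 dB, consistent with the ~3 dB claim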

Applied Materials’ DTI Optimizations for CIS

An Applied Materials master class presentation for investors shows the company's work on image sensor DTI (deep trench isolation) process refinements:

AAA Tests: Camera-Based ADAS Fails in Rain

AAA: New research from AAA finds that moderate to heavy rain affects a vehicle safety system’s ability to “see”, which may result in performance issues. During closed course testing, AAA simulated rainfall and found that test vehicles equipped with automatic emergency braking traveling at 35 mph collided with a stopped vehicle one third (33%) of the time. Lane keeping assistance didn’t fare any better with test vehicles departing their lane 69% of the time. Vehicle safety systems, also known as advanced driver assistance systems or ADAS, are typically evaluated in ideal operating conditions. However, AAA believes testing standards must incorporate real-world conditions that drivers normally encounter.

“Vehicle safety systems rely on sensors and cameras to see road markings, other cars, pedestrians and roadway obstacles. So naturally, they are more vulnerable to environmental factors like rain,” said Greg Brannon, AAA’s director of automotive engineering and industry relations. “The reality is people aren’t always driving around in perfect, sunny weather, so we must expand testing and take into consideration things people actually contend with in their day-to-day driving.”

An AAA video shows that even a moderate rain interferes with the camera-based emergency braking:

IEDM 2021: Canon Presents 3.2MP SPAD Sensor for Low-Light Imaging

IEDM publishes a few pictures from Canon's IEDM paper #20.2, “3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing,” by K. Morimoto, J. Iwata, et al.

3D Backside-Illuminated SPAD Image Sensors: Unlike the CMOS image sensors found in smartphones, which measure the amount of light reaching a sensor’s pixels in a given timeframe, single-photon avalanche diode (SPAD) image sensors detect each photon that reaches the pixel. Each photon is converted into an electric charge, and the resulting electrons are multiplied in avalanche fashion until they form an output signal. SPAD image sensors hold great promise for high-performance low-light imaging, for depth sensing, and for fully digital system architectures.

However, until now their performance has been limited by tradeoffs in pixel detection efficiency vs. pixel size, and by poor signal-to-noise ratios. Recently a charge-focusing approach was proposed to overcome these issues, but until now it remained to be implemented. In a late-news paper, Canon researchers will discuss how they did so, with the industry’s first 3D-stacked backside-illuminated (BSI) charge-focusing SPADs. The devices featured the largest array size ever reported for a SPAD image sensor (3.2 megapixels) and demonstrated a photon detection efficiency of 24.4%, and timing jitter below 100ps at 940 nm.
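To see why photon counting pays off at the light levels shown below, here is a simple SNR comparison (a generic illustration, not from the paper): a photon-counting pixel is limited only by shot noise, while a conventional pixel also carries read noise. The 2 e- read-noise figure is an assumed typical value, not a number from either sensor.

    import numpy as np

    def snr_db(signal_e, read_noise_e):
        """SNR of a pixel collecting `signal_e` photoelectrons on average, with
        Poisson shot noise plus Gaussian read noise (both in electrons)."""
        return 20 * np.log10(signal_e / np.sqrt(signal_e + read_noise_e ** 2))

    for s in (2, 5, 20):   # a few photons per pixel per frame
        print(f"{s:>3} e-: photon counting {snr_db(s, 0):5.1f} dB "
              f"vs conventional (2 e- read noise) {snr_db(s, 2):5.1f} dB")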

From the images below:
(a) is a full-resolution color intensity image captured by the 3.2megapixel SPAD image sensor at a high light level.

(b) is a monochrome intensity image captured by the device under a scene illumination of 2mlux (without post-processing)

(c) is a monochrome intensity image captured by the device under a scene illumination of 0.3mlux (without post-processing)


2021 Walter Kosonocky Award

ST reports on its Facebook page:

"ST’s Francois Roy succeeds in solid-state image sensors

The International Image Sensor Society recently gave François Roy the Walter Kosonocky Award for his paper entitled Fully Depleted Trench-Pinned Photo Gate for CMOS Image Sensor Applications. The Photo Gate pixel concept improves image quality and the manufacturing process.

‘’It is a great honor for ST, the ST Imaging teams, my PhD students and myself to receive this prestigious award and I thank them all,’’ said François.

At ST we are proud of our inventors. We nurture strong competences and encourage our distinguished senior experts to coach young engineers."

HDR in iToF Imaging

Lucid Vision Labs shows two nice demos of the importance of HDR in ToF imaging. The demos are based on its Helios2+ camera with the Sony IMX556 iToF sensor:

2014 Imaging Papers

IEEE Sensors keeps publishing video presentations of 2014 papers:

Author: Refael Whyte, Lee Streeter, Michael Cree, Adrian Dorrington
Affiliation: University of Waikato, New Zealand

Abstract: Time-of-Flight (ToF) range cameras measure distance for each pixel by illuminating the entire scene with amplitude modulated light and measuring the change in phase between the emitted light and reflected light. One of the most significant causes of distance accuracy errors is multi-path interference, where multiple propagation paths exist from the light source to the same pixel. These multiple propagation paths can be caused by inter-reflections, subsurface scattering, edge effects and volumetric scattering.  Several techniques have been proposed to mitigate multi-path interference. In this paper a review of current techniques for resolving measurement errors due to multi-path interference is presented, as currently there is no quantitative comparison between techniques and evaluation of technique parameters. The results will help with the selection of a multi-path interference restoration method for specific time-of-flight camera applications.


Author: Mohammad Habib, Farhan Quaiyum, Syed Islam, Nicole McFarlane
Affiliation: University of Tennessee, Knoxville, United States

Abstract: Perimeter-gated single photon avalanche diodes (PGSPAD) in standard CMOS processes have increased breakdown voltages and improved dark count rates. These devices use a polysilicon gate to reduce the premature breakdown of the device. When coupled with a scintillation material, these devices could be instrumental in radiation detection. This work characterizes the variation in PGSPAD noise (dark count rate) and breakdown voltage as a function of applied gate voltages for varying device shape, size, and junction type.


Author: Min-Woong Seo, Taishi Takasawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Affiliation: Shizuoka University, Japan

Abstract: A low-noise high-sensitivity CMOS image sensor for scientific use is developed and evaluated. The prototype sensor contains 1024(H) × 1024(V) pixels with high performance column-parallel ADCs. The measured maximum quantum efficiency (QE) is 69 % at 660 nm and long-wavelength sensitivity is also enhanced with a large sensing area and the optimized process. In addition, dark current is 0.96 pA/cm2 at 292 K, temporal random noise in a readout circuitry is 1.17 electrons RMS, and the conversion gain is 124 uV/e-. The implemented CMOS imager using 0.11-um CIS technology has a very high sensitivity of 87 V/lx*sec that is suitable for scientific and industrial applications such as medical imaging, bioimaging, surveillance cameras and so on.

IEDM 2021: Samsung Presents 0.8um Color Routing Pixel, Sony 6um SPAD Achieves 20.2% PDE at 940nm

IEDM 2021 presents many image sensor papers in its program:

  • 20-1 A Back Illuminated 6 μm SPAD Pixel Array with High PDE and Timing Jitter Performance,
    S. Shimada, Y. Otake, S. Yoshida, S. Endo, R. Nakamura, H. Tsugawa, T. Ogita, T. Ogasahara, K. Yokochi, Y. Inoue, K. Takabayashi, H. Maeda, K. Yamamoto, M. Ono, S. Matsumoto, H. Hiyama, and T. Wakano.
    Sony
    This paper presents a 6μm pitch silicon SPAD pixel array using 3D-stacked technology. A PDE of 20.2% and timing jitter FWHM of 137ps at λ=940nm with 3V excess bias were achieved. These state-of-the-art performances were allowed via the implementation of a pyramid surface structure and pixel potential profile optimization.
  • 30-1 Highly Efficient Color Separation and Focusing in the Sub-micron CMOS Image Sensor,
    S. Yun, S. Roh, S. Lee, H. Park, M. Lim, S. Ahn, and H. Choo.
    Samsung Advanced Institute of Technology
    We report a nanoscale metaphotonic color-routing (MPCR) structure that can significantly improve the low-light performance of a sub-micron CMOS image sensor. Fabricated on Samsung's commercial 0.8μm pixel sensor, the MPCR structure confirms increased quantum efficiency (+20%), a luminance SNR improvement (+1.22 dB @ 5 lux), a comparably low color error, and good angular tolerance.
  • 20-2 3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing (Late News),
    K. Morimoto, J. Iwata, M. Shinohara, H. Sekine, A. Abdelghafar, H. Tsuchiya, Y. Kuroda, K. Tojima, W. Endo, Y. Maehashi, Y. Ota, T. Sasago, S. Maekawa, S. Hikosaka, T. Kanou, A. Kato, T. Tezuka, S. Yoshizaki, T. Ogawa, K. Uehira, A. Ehara, F. Inui, Y. Matsuno, K. Sakurai, T. Ichikawa.
    Canon Inc.
    We present a new generation of scalable photon-counting image sensors for low-light imaging and depth sensing, featuring read-noise-free operation. The newly proposed charge-focusing SPAD is employed in a prototype 3.2 megapixel 3D backside-illuminated image sensor, demonstrating best-in-class pixel performance with the largest array size among APD-based image sensors.
  • 23-4 1.62µm Global Shutter Quantum Dot Image Sensor Optimized for Near and Shortwave Infrared,
    J. S. Steckel, E. Josse, A. G. Pattantyus-Abraham, M. Bidaud, B. Mortini, H. Bilgen, O. Arnaud, S. Allegret-Maret, F. Saguin, L. Mazet, S. Lhostis, T. Berger, K. Haxaire, L. L. Chapelon, L. Parmigiani, P. Gouraud, M. Brihoum, P. Bar, M. Guillermet, S. Favreau, R. Duru, J. Fantuz, S. Ricq, D. Ney, I. Hammad, D. Roy, A. Arnaud, B. Vianne, G. Nayak, N. Virollet, V. Farys, P. Malinge, A. Tournier, F. Lalanne, A. Crocherie, J. Galvier, S. Rabary, O. Noblanc, H. Wehbe-Alause, S. Acharya, A. Singh, J. Meitzner, D. Aher, H. Yang, J. Romero, B. Chen, C. Hsu, K. C. Cheng, Y. Chang, M. Sarmiento, C. Grange, E. Mazaleyrat, K. Rochereau,
    STMicroelectronics
    We have developed a 1.62µm pixel pitch global shutter sensor optimized for imaging in the NIR and SWIR. This breakthrough was made possible through the use of our colloidal quantum Dot thin film technology. We have scaled up this new platform technology to our 300mm manufacturing toolset.
  • 30-2 Automotive 8.3 MP CMOS Image Sensor with 150 dB Dynamic Range and Light Flicker Mitigation (Invited),
    M. Innocent, S. Velichko, D. Lloyd, J. Beck, A. Hernandez, B. Vanhoff, C. Silsby, A. Oberoi, G. Singh, S. Gurindagunta, R. Mahadevappa, M. Suryadevara, M. Rahman, and V. Korobov,
    ON Semiconductor
    New 8.3 MP image sensor for automotive applications has 2.1 µm pixel with overflow and triple gain readout. In comparison to earlier 3 µm pixel, flicker free range increased to 110 dB and total range to 150dB. SNR in transitions stays above 25 dB up to 125°C.
  • 30-3 A 2.9μm Pixel CMOS Image Sensor for Security Cameras with high FWC and 97 dB Single Exposure Dynamic Range,
    T. Uchida, K. Yamashita, A. Masagaki, T. Kawamura, C. Tokumitsu, S. Iwabuchi,. Onizawa, M. Ohura, H. Ansai, K. Izukashi, S. Yoshida, T. Tanikuni, S. Hiyama, H. Hirano, S. Miyazawa, Y. Tateshita,
    Sony
    We developed a new photodiode structure for CMOS image sensors with a pixel size of 2.9μm. It adds the following two structures: one forms a strong electric field P/N junction on the full-depth deep-trench isolation side wall, and the other is a dual-vertical-gate structure.
  • 30-4 3D Sequential Process Integration for CMOS Image Sensor,
    K. Nakazawa, J. Yamamoto, S. Mori, S. Okamoto, A. Shimizu, K. Baba, N. Fujii, M. Uehara, K. Hiramatsu, H. Kumano, A. Matsumoto, K. Zaitsu, H. Ohnuma, K. Tatani, T. Hirano, and H. Iwamoto,
    Sony
    We developed a new structure with pixel transistors stacked over the photodiode, fabricated by 3D sequential process integration. With this technology, we successfully increased the AMP transistor size and demonstrated a backside-illuminated CMOS image sensor of 6752 x 4928 pixels at 0.7um pitch to prove its functionality and integrity.
  • 35-3 Computational Imaging with Vision Sensors embedding In-pixel Processing (Invited),
    J.N.P. Martel, G. Wetzstein,
    Stanford University
    Emerging vision sensors embedding in-pixel processing capabilities enable new ways to capture visual information. We review some of our work in designing new systems and algorithms using such vision sensors with applications in video-compressive imaging, high-dynamic range imaging, high-speed tracking, hyperspectral or light-field imaging.

Prophesee CEO on Future Event-Driven Sensor Improvements

IEEE Spectrum publishes an interview with Prophesee CEO Luca Verre. There is an interesting part about the company's next generation event-driven sensor:

"For the next generation, we are working along three axes. One axis is around the reduction of the pixel pitch. Together with Sony, we made great progress by shrinking the pixel pitch from the 15 micrometers of Generation 3 down to 4.86 micrometers with generation 4. But, of course, there is still some large room for improvement by using a more advanced technology node or by using the now-maturing stacking technology of double and triple stacks. [The sensor is a photodiode chip stacked onto a CMOS chip.] You have the photodiode process, which is 90 nanometers, and then the intelligent part, the CMOS part, was developed on 40 nanometers, which is not necessarily a very aggressive node. Going for more aggressive nodes like 28 or 22 nm, the pixel pitch will shrink very much.

The benefits are clear: It's a benefit in terms of cost; it's a benefit in terms of reducing the optical format for the camera module, which means also reduction of cost at the system level; plus it allows integration in devices that require tighter space constraints. And then of course, the other related benefit is the fact that with the equivalent silicon surface, you can put more pixels in, so the resolution increases. The event-based technology is not following necessarily the same race that we are still seeing in the conventional [color camera chips]; we are not shooting for tens of millions of pixels. It's not necessary for machine vision, unless you consider some very niche exotic applications.

The second axis is around the further integration of processing capability. There is an opportunity to embed more processing capabilities inside the sensor to make the sensor even smarter than it is today. Today it's a smart sensor in the sense that it's processing the changes [in a scene]. It's also formatting these changes to make them more compatible with the conventional [system-on-chip] platform. But you can even push this reasoning further and think of doing some of the local processing inside the sensor [that's now done in the SoC processor].

The third one is related to power consumption. The sensor, by design, is actually low-power, but if we want to reach an extreme level of low power, there's still a way of optimizing it. If you look at the IMX636 gen 4, power is not necessarily optimized. In fact, what is being optimized more is the throughput. It's the capability to actually react to many changes in the scene and be able to correctly timestamp them at extremely high time precision. So in extreme situations where the scene changes a lot, the sensor has a power consumption that is equivalent to a conventional image sensor, although the time precision is much higher. You can argue that in those situations you are running at the equivalent of 1000 frames per second or even beyond. So it's normal that you consume as much as a 10 or 100 frame-per-second sensor. [A lower-power] sensor could be very appealing, especially for consumer devices or wearable devices where we know that there are functionalities related to eye tracking, attention monitoring, eye lock, that are becoming very relevant."
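To illustrate the optical-format point Verre makes about pixel pitch, here is a rough calculation (my illustration: the 1280 x 720 array size of the current-generation sensor and the smaller pitches are assumptions, and the "16 mm per inch" format rule of thumb is only approximate):

    import math

    def diagonal_mm(width_px, height_px, pitch_um):
        """Active-array diagonal in mm for a given resolution and pixel pitch."""
        return math.hypot(width_px, height_px) * pitch_um / 1000

    for pitch in (4.86, 3.0, 2.0):           # current pitch vs hypothetical shrinks
        d = diagonal_mm(1280, 720, pitch)
        print(f"{pitch:4.2f} um pitch -> {d:4.2f} mm diagonal (~1/{16 / d:.1f}\" optical format)")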
