MDPI IISW2023 special issue – 316MP, 120FPS, HDR CIS


A. Agarwal et al. have published a full-length article based on their IISW 2023 conference presentation in a special issue of MDPI Sensors. The paper is titled "A 316MP, 120FPS, High Dynamic Range CMOS Image Sensor for Next Generation Immersive Displays" and is joint work between Forza Silicon (AMETEK Inc.) and Sphere Entertainment Co.

Full article (open access): https://doi.org/10.3390/s23208383

Abstract
We present a 2D-stitched, 316MP, 120FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm and the chip is fabricated in a 65 nm, 4 metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has a high and low conversion gain full well of 6600e- and 41,000e-, respectively, with a total high gain temporal noise of 1.8e-, achieving a composite dynamic range of 87 dB.
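The headline numbers in the abstract are mutually consistent and can be sanity-checked with a short back-of-the-envelope calculation. In the sketch below, the per-pixel word size is our assumption (the abstract does not state the ADC output bit depth); everything else is taken directly from the abstract.

```python
import math

# Numbers quoted in the abstract
pixels       = 316e6      # 316 MP
fps          = 120        # frames per second
ports        = 92         # CML output ports
total_rate   = 515e9      # cumulative data rate, bit/s
fw_low_gain  = 41_000     # low-conversion-gain full well, e-
noise_high_g = 1.8        # high-conversion-gain read noise, e- rms

# Composite dynamic range: largest signal (low-gain full well) over the
# smallest resolvable signal (high-gain read noise)
dr_db = 20 * math.log10(fw_low_gain / noise_high_g)
print(f"composite DR  ≈ {dr_db:.1f} dB")                           # ≈ 87 dB, as reported

# Per-port serial rate and implied bits per pixel (word size is an assumption)
print(f"rate per port ≈ {total_rate / ports / 1e9:.1f} Gbit/s")     # ≈ 5.6 Gbit/s
print(f"bits/pixel    ≈ {total_rate / (pixels * fps):.1f}")          # ≈ 13.6, consistent with ~12-bit data plus overhead
```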

Figure 1. Sensor on a 12-inch wafer (4 dies per wafer), die photo, and stitch plan.



Figure 2. Detailed block diagram showing sensor partitioning.


Figure 3. Distribution of active and dark rows in block B/H, block E, and final reticle plan.


Figure 5. Sensor timing for single-exposure dual-gain (HDR) operation.



Figure 6. Data aggregation and readout order for single-gain mode.


Figure 7. Data aggregation and readout order for dual-gain mode.

Figure 8. ADC output multiplexing network for electrical crosstalk mitigation.


Figure 9. Conventional single-ended ADC counter distribution.


Figure 10. Proposed pseudo-differential ADC counter distribution.


Figure 11. Generated thermal map from static IR drop simulation.

Figure 12. Measured dark current distribution.

Figure 13. SNR and transfer function in HDR mode.


Figure 14. Full-resolution color image captured in single-gain mode at 120 FPS.

Paper on "Charge-sweep" CIS Pixel


In a recent paper titled "Design and Characterization of a Burst Mode 20 Mfps Low Noise CMOS Image Sensor" (https://www.mdpi.com/1424-8220/23/14/6356), Xin Yue and Eric Fossum write:

This paper presents a novel ultra-high speed, high conversion-gain, low noise CMOS image sensor (CIS) based on charge-sweep transfer gates implemented in a standard 180 nm CIS process. Through the optimization of the photodiode geometry and the utilization of charge-sweep transfer gates, the proposed pixels achieve a charge transfer time of less than 10 ns without requiring any process modifications. Moreover, the gate structure significantly reduces the floating diffusion capacitance, resulting in an increased conversion gain of 183 µV/e−. This advancement enables the image sensor to achieve the lowest reported noise of 5.1 e− rms. To demonstrate the effectiveness of both optimizations, a proof-of-concept CMOS image sensor is designed, taped out, and characterized.
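To put the quoted conversion gain in perspective, the standard relations let us estimate the implied floating-diffusion capacitance and the output-referred noise voltage. This is our own rough sketch, not numbers from the paper beyond those in the abstract, and it assumes the quoted gain is referred directly to the floating diffusion (i.e., it ignores source-follower gain):

```python
# Back-of-the-envelope relations implied by the abstract's numbers.
# The capacitance estimate ignores source-follower gain, so treat it as a sketch.
q   = 1.602e-19          # electron charge, C
cg  = 183e-6             # conversion gain, V/e-
n_e = 5.1                # input-referred read noise, e- rms

c_fd    = q / cg         # effective floating-diffusion capacitance
v_noise = n_e * cg       # output-referred noise voltage
frame_t = 1 / 20e6       # frame period at 20 Mfps

print(f"C_FD       ≈ {c_fd * 1e15:.2f} fF")        # ≈ 0.88 fF
print(f"v_noise    ≈ {v_noise * 1e3:.2f} mV rms")  # ≈ 0.93 mV rms
print(f"frame time = {frame_t * 1e9:.0f} ns")       # 50 ns, so a <10 ns charge transfer fits comfortably
```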


PCH-EM Algorithm for DSERN characterization


Hendrickson et al. have posted two new pre-prints on deep sub-electron read noise (DSERN) characterization. Their new algorithm, called PCH-EM (Photon Counting Histogram Expectation Maximization), extracts key performance parameters of sensors with sub-electron read noise through a custom implementation of the Expectation Maximization (EM) algorithm, and it shows a dramatic improvement over the traditional Photon Transfer (PT) method in the sub-electron noise regime. The authors have extensions and improvements of the method coming soon as well.

The first pre-print titled "Photon Counting Histogram Expectation Maximization Algorithm for Characterization of Deep Sub-Electron Read Noise Sensors" presents the theory behind their approach.

Abstract: We develop a novel algorithm for characterizing Deep Sub-Electron Read Noise (DSERN) image sensors. This algorithm is able to simultaneously compute maximum likelihood estimates of quanta exposure, conversion gain, bias, and read noise of DSERN pixels from a single sample of data with less uncertainty than the traditional photon transfer method. Methods for estimating the starting point of the algorithm are also provided to allow for automated analysis. Demonstrations through Monte Carlo numerical experiments are carried out to show the effectiveness of the proposed technique. In support of the reproducible research effort, all of the simulation and analysis tools developed are available on the MathWorks file exchange.
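To make the approach concrete: the photon counting distribution (PCD) underlying PCH-EM models each pixel sample (in DN) as bias + gain × k + Gaussian read noise, with the electron count k Poisson distributed at quanta exposure H. Below is a minimal Python sketch of one EM iteration for this mixture model. It is our own illustration, not the authors' released MATLAB implementation, and the variable names and the k_max cutoff are assumptions:

```python
import numpy as np
from scipy.stats import norm, poisson

def pch_em_step(x, H, g, mu, sigma, k_max=50):
    """One EM update for quanta exposure H, conversion gain g (DN/e-),
    bias mu (DN) and read noise sigma (DN) from a 1-D sample x (DN)."""
    k = np.arange(k_max + 1)

    # E-step: posterior probability that each sample came from k electrons.
    log_w = poisson.logpmf(k, H)[None, :] + \
            norm.logpdf(x[:, None], loc=mu + g * k[None, :], scale=sigma)
    w = np.exp(log_w - log_w.max(axis=1, keepdims=True))
    gamma = w / w.sum(axis=1, keepdims=True)            # shape (N, k_max+1)

    # M-step: weighted Poisson mean for H, weighted linear regression of x on k
    # for gain and bias, then the weighted residual variance for the read noise.
    N = len(x)
    H_new = (gamma * k).sum() / N
    k_mean = (gamma * k).sum() / N
    x_mean = x.mean()
    cov_xk = (gamma * (x[:, None] - x_mean) * (k[None, :] - k_mean)).sum() / N
    var_k = (gamma * (k[None, :] - k_mean) ** 2).sum() / N
    g_new = cov_xk / var_k
    mu_new = x_mean - g_new * k_mean
    resid2 = (gamma * (x[:, None] - mu_new - g_new * k[None, :]) ** 2).sum() / N
    return H_new, g_new, mu_new, np.sqrt(resid2)
```

In practice the step is iterated until the log-likelihood converges, with starting values estimated from the peaks of the photon counting histogram itself, as the pre-print describes.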

Authors have released their code here: https://www.mathworks.com/matlabcentral/fileexchange/121343-one-sample-pch-em-algorithm

The second pre-print titled "Experimental Verification of PCH-EM Algorithm for Characterizing DSERN Image Sensors" presents an application of the PCH-EM algorithm to quanta image sensors.

Abstract: The Photon Counting Histogram Expectation Maximization (PCH-EM) algorithm has recently been reported as a candidate method for the characterization of Deep Sub-Electron Read Noise (DSERN) image sensors. This work describes a comprehensive demonstration of the PCH-EM algorithm applied to a DSERN capable quanta image sensor. The results show that PCH-EM is able to characterize DSERN pixels for a large span of quanta exposure and read noise values. The per-pixel characterization results of the sensor are combined with the proposed Photon Counting Distribution (PCD) model to demonstrate the ability of PCH-EM to predict the ensemble distribution of the device. The agreement between experimental observations and model predictions demonstrates both the applicability of the PCD model in the DSERN regime as well as the ability of the PCH-EM algorithm to accurately estimate the underlying model parameters.


Gigajot article in Nature Scientific Reports


Jiaju Ma et al. of Gigajot Technology, Inc. have published a new article titled "Ultra‑high‑resolution quanta image sensor with reliable photon‑number‑resolving and high dynamic range capabilities" in Nature Scientific Reports.

Abstract:

Superior low‑light and high dynamic range (HDR) imaging performance with ultra‑high pixel resolution are widely sought after in the imaging world. The quanta image sensor (QIS) concept was proposed in 2005 as the next paradigm in solid‑state image sensors after charge coupled devices (CCD) and complementary metal oxide semiconductor (CMOS) active pixel sensors. This next‑generation image sensor would contain hundreds of millions to billions of small pixels with photon‑number‑resolving and HDR capabilities, providing superior imaging performance over CCD and conventional CMOS sensors. In this article, we present a 163 megapixel QIS that enables both reliable photon‑number‑resolving and high dynamic range imaging in a single device. This is the highest pixel resolution ever reported among low‑noise image sensors with photon‑number‑resolving capability. This QIS was fabricated with a standard, state‑of‑the‑art CMOS process with 2‑layer wafer stacking and backside illumination. Reliable photon‑number‑resolving is demonstrated with an average read noise of 0.35 e‑ rms at room temperature operation, enabling industry leading low‑light imaging performance. Additionally, a dynamic range of 95 dB is realized due to the extremely low noise floor and an extended full‑well capacity of 20k e‑. The design, operating principles, experimental results, and imaging performance of this QIS device are discussed.
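As with the 316 MP sensor in the first post above, the quoted dynamic range follows directly from the full-well capacity and the noise floor; a quick illustrative check (our own, not from the paper):

```python
import math

full_well_e  = 20_000   # extended full-well capacity, e-
read_noise_e = 0.35     # average read noise, e- rms
print(f"{20 * math.log10(full_well_e / read_noise_e):.1f} dB")   # ~95.1 dB, matching the reported 95 dB
```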

Ma, J., Zhang, D., Robledo, D. et al. Ultra-high-resolution quanta image sensor with reliable photon-number-resolving and high dynamic range capabilities. Sci Rep 12, 13869 (2022).

This is an open access article: https://www.nature.com/articles/s41598-022-17952-z.epdf


New understanding of color perception theory


From phys.org, a news article about a recent paper that casts doubt on the traditional understanding of how human color perception works: "Math error: A new study overturns 100-year-old understanding of color perception":

A new study corrects an important error in the 3D mathematical space developed by the Nobel Prize-winning physicist Erwin Schrödinger and others, and used by scientists and industry for more than 100 years to describe how your eye distinguishes one color from another. The research has the potential to boost scientific data visualizations, improve TVs and recalibrate the textile and paint industries.

The full paper appears in the Proceedings of the National Academy of Sciences, vol. 119, no. 18 (2022). It is titled "The non-Riemannian nature of perceptual color space" and is authored by Dr. Roxana Bujack and colleagues at Los Alamos National Laboratory.

The scientific community generally agrees on the theory, introduced by Riemann and furthered by Helmholtz and Schrödinger, that perceived color space is not Euclidean but rather, a three-dimensional Riemannian space. We show that the principle of diminishing returns applies to human color perception. This means that large color differences cannot be derived by adding a series of small steps, and therefore, perceptual color space cannot be described by a Riemannian geometry. This finding is inconsistent with the current approaches to modeling perceptual color space. Therefore, the assumed shape of color space requires a paradigm shift. Consequences of this apply to color metrics that are currently used in image and video processing, color mapping, and the paint and textile industries. These metrics are valid only for small differences. Rethinking them outside of a Riemannian setting could provide a path to extending them to large differences. This finding further hints at the existence of a second-order Weber–Fechner law describing perceived differences.

The key observation this paper rests on is the concept of "diminishing returns." Statistical analysis of the experimental data collected for the paper suggests that the perceived differences between colors A, B, and C lying along a single shortest path (geodesic) do not satisfy the additive equality a Riemannian metric requires: the perceived difference between A and C is smaller than the sum of the perceived differences between A and B and between B and C.
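A toy numerical illustration of the diminishing-returns argument (our own example, not data from the paper): along a geodesic, a Riemannian metric forces segment lengths to add exactly, whereas a perceived difference that grows as a concave function of path length is strictly sub-additive.

```python
import math

# Toy "diminishing returns" model: perceived difference grows as a concave
# function of the underlying (Riemannian) path length.
perceived = math.sqrt              # any strictly concave function with f(0)=0 works

d_ab, d_bc = 1.0, 1.0              # metric lengths of the two geodesic segments
d_ac = d_ab + d_bc                 # metric lengths add exactly along a geodesic

print(perceived(d_ac))                      # 1.41...
print(perceived(d_ab) + perceived(d_bc))    # 2.0 -> strictly larger
# A Riemannian model would require these two values to be equal; the paper's data
# show the first (sub-additive) behaviour, hence "non-Riemannian".
```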

A commentary by Dr. David Brainard (U. Penn.) about this paper was published in PNAS and is available here: https://color2.psych.upenn.edu/brainard/papers/2022-BrainardPNASCommentary.pdf

Some of the caveats noted in this commentary piece:

First, the authors make a first-principles assumption that the achromatic locus is a geodesic and use this in their choice of stimuli. This assumption is intuitively appealing in that it would be surprising if the shortest path in color space between two achromatic stimuli involved a detour through a chromatic stimulus and back. However, the achromatic locus as a geodesic was not empirically established, and more work could be considered to shore up this aspect of the argument.

Second, the data were collected using online methods and combined across subjects prior to the analysis. This raises the question of whether the aggregate performance analyzed could be non-Riemannian even when the performance of each individual subject was itself Riemannian. Although it is not immediately obvious whether this could occur, it might be further considered as a possibility.

Phys.org press release: https://phys.org/news/2022-08-math-error-overturns-year-old-perception.html

LANL press release: https://discover.lanl.gov/news/0810-color-perception

PNAS paper: https://www.pnas.org/doi/10.1073/pnas.2119753119


Direct ToF Single-Photon Imaging (IEEE TED June 2022)


The June 2022 issue of IEEE Trans. Electron Devices has an invited paper titled "Direct Time-of-Flight Single-Photon Imaging" by Istvan Gyongy et al. from the University of Edinburgh and STMicroelectronics.

This is a comprehensive tutorial-style article on single-photon 3D imaging which includes a description of the image formation model starting from first principles and practical system design considerations such as photon budget and power requirements.

Abstract: This article provides a tutorial introduction to the direct Time-of-Flight (dToF) signal chain and typical artifacts introduced due to detector and processing electronic limitations. We outline the memory requirements of embedded histograms related to desired precision and detectability, which are often the limiting factor in the array resolution. A survey of integrated CMOS dToF arrays is provided, highlighting future prospects for further scaling through process optimization or smart embedded processing.
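To see why histogram memory so often limits array resolution, here is a rough sizing sketch. All of the numbers (range, TDC bin width, counter depth, array size) are our example assumptions, not values from the paper:

```python
# Per-pixel histogram size for a direct ToF sensor: one bin per range-resolution
# step over the full range, each bin wide enough to hold the photon count.
C = 3e8                       # speed of light, m/s

max_range    = 30.0           # m                 (assumption)
bin_time     = 500e-12        # s, TDC bin width  (assumption, ~7.5 cm of range per bin)
bits_per_bin = 10             # counter depth     (assumption)
pixels       = 256 * 128      # array size        (assumption)

bins_per_pixel = max_range / (C * bin_time / 2)
bits_per_pixel = bins_per_pixel * bits_per_bin
total_MB = pixels * bits_per_pixel / 8 / 1e6

print(f"{bins_per_pixel:.0f} bins/pixel")          # 400
print(f"{total_MB:.1f} MB of histogram memory")    # ~16 MB, which is why memory limits resolution
```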


3D Wafer Stacking: Review paper in IEEE TED June 2022 Issue


In the June 2022 issue of IEEE Trans. Electron Devices, in a paper titled "A Review of 3-Dimensional Wafer Level Stacked Backside Illuminated CMOS Image Sensor Process Technologies," Wuu et al. write:

Over the past 10 years, 3-dimensional (3-D) wafer-level stacked backside illuminated (BSI) CMOS image sensors (CISs) have undergone rapid progress in development and performance and are now in mass production. This review paper covers the key processes and technology components of 3-D integrated BSI devices, as well as results from early devices fabricated and tested in 2007 and 2008. This article is divided into three main sections. Section II covers wafer-level bonding technology. Section III covers the key wafer fabrication process modules for BSI 3-D wafer-level stacking. Section IV presents the device results.




This paper has quite a long list of acronyms. Here is a quick reference:
BDTI = backside deep trench isolation
BSI = backside illumination
BEOL = back end of line
HB = hybrid bonding
TSV = through silicon via
HAST = highly accelerated (temperature and humidity) stress test
SOI = silicon on insulator
BOX = buried oxide

Section II goes over wafer-level direct bonding methods.



Section III discusses important aspects of stacked design development for BSI (wafer thinning, hybrid bonding, backside deep trench isolation, pyramid structure to improve quantum efficiency, use of high-k dielectric film to deal with crystal defects, and pixel performance analyses).

Section IV shows some results of early stacked designs.

Full article: https://doi.org/10.1109/TED.2022.3152977


"End-to-end" design of computational cameras


A team from MIT Media Lab has posted a new arXiv preprint titled "Physics vs. Learned Priors: Rethinking Camera and Algorithm Design for Task-Specific Imaging".

Abstract: Cameras were originally designed using physics-based heuristics to capture aesthetic images. In recent years, there has been a transformation in camera design from being purely physics-driven to increasingly data-driven and task-specific. In this paper, we present a framework to understand the building blocks of this nascent field of end-to-end design of camera hardware and algorithms. As part of this framework, we show how methods that exploit both physics and data have become prevalent in imaging and computer vision, underscoring a key trend that will continue to dominate the future of task-specific camera design. Finally, we share current barriers to progress in end-to-end design, and hypothesize how these barriers can be overcome.
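The phrase "end-to-end design" is easiest to see in a toy example: make a camera parameter a learnable variable inside a differentiable image-formation model and optimize it jointly with the downstream algorithm against the task loss. The sketch below is our own illustration in PyTorch (not code from the paper); the Gaussian-blur "optic", the one-layer decoder, and the reconstruction task are all stand-in assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ToyCamera(nn.Module):
    """Differentiable image formation: a Gaussian PSF whose width is learnable."""
    def __init__(self):
        super().__init__()
        self.log_sigma = nn.Parameter(torch.tensor(0.0))   # the "hardware" parameter

    def forward(self, scene):
        sigma = torch.exp(self.log_sigma)
        ax = torch.arange(-2.0, 3.0)
        xx, yy = torch.meshgrid(ax, ax, indexing="ij")
        psf = torch.exp(-(xx**2 + yy**2) / (2 * sigma**2))
        psf = (psf / psf.sum()).view(1, 1, 5, 5)
        return F.conv2d(scene, psf, padding=2)              # blurred measurement

camera  = ToyCamera()
decoder = nn.Conv2d(1, 1, 3, padding=1)                     # stand-in for the task algorithm
opt = torch.optim.Adam(list(camera.parameters()) + list(decoder.parameters()), lr=1e-2)

scene = torch.rand(8, 1, 32, 32)                            # random "ground truth" scenes
for _ in range(100):
    measurement = camera(scene)                             # physics block
    recon = decoder(measurement)                            # algorithm block
    loss = F.mse_loss(recon, scene)                         # one task loss drives both blocks
    opt.zero_grad()
    loss.backward()
    opt.step()
```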


New 3D Imaging Method for Microscopes


A new method for high-resolution, three-dimensional microscopic imaging is being explored.

"This method, named bijective illumination collection imaging (BICI), can extend the range of high-resolution imaging by over 12-fold compared to the state-of-the-art imaging techniques," says Pahlevani.

Fig. 1 | BICI concept. 
a, The illumination beam is generated by collimated light positioned off the imaging optical axis. 
b, The metasurface bends a ray family (sheet) originating from an arc of radius r by a constant angle β to form a focal point on the z axis. A family of rays originating from the same arc is shown as a ray sheet. 
c, Ray sheets subject to the same bending model constitute a focal line along the z axis. The focal line is continuous even though a finite number of focal points is illustrated for clarity. 
d, The collection metasurface establishes trajectories of collected light in ray sheets, as mirror images of illumination paths with respect to the x–z plane. This configuration enables a one-to-one correspondence, that is, a bijective relation between the focal points of the illumination and collection paths, to eliminate out-of-focus signals. The magnified inset demonstrates the bijective relation. 
e, Top view of the illumination and collection beams. 
f, Schematic of the illumination and collection beams and a snapshot captured using a camera from one of the lateral planes intersecting the focal line, illustrating the actual arrangement of illumination and collection paths. This arrangement allows only the collection of photons originating from the corresponding illumination focal point.


"Metasurface-based bijective illumination collection imaging provides high-resolution tomography in three dimensions" (Masoud Pahlevaninezhad, Yao-Wei Huang, Majid Pahlevani, Brett Bouma, Melissa J. Suter, Federico Capasso, and Hamid Pahlevaninezhad).
