Early announcement: Single Photon Workshop 2024


Single Photon Workshop 2024
EICC Edinburgh 18-22 Nov 2024
www.spw2024.org
 

The 11th Single Photon Workshop (SPW) 2024 will be held 18-22 November 2024, hosted at the Edinburgh International Conference Centre.

SPW is the largest conference in the world dedicated to single-photon generation and detection technology and applications. The biennial international conference brings together a broad range of experts across academia, industry and government bodies with interests in single-photon sources, single-photon detectors, photon entanglement, photonic quantum technologies and their use in scientific and industrial applications. It is an exciting opportunity for those interested in these technologies to learn about the state of the art and to build lasting partnerships with others working to advance them.

In tandem with the scientific programme, SPW 2024 will include a major industry exhibition and networking events.
 
Please register your interest at www.spw2024.org
 
Official registration will open in January 2024.
 
The 2024 workshop is being jointly organized by Heriot-Watt University and University of Glasgow.


IISW2023 special issue paper: Small-pitch InGaAs photodiodes


In a new paper titled "Design and Characterization of 5 μm Pitch InGaAs Photodiodes Using In Situ Doping and Shallow Mesa Architecture for SWIR Sensing," Jules Tillement et al. from STMicroelectronics, U. Grenoble and CNRS Grenoble write:

Abstract: This paper presents the complete design, fabrication, and characterization of a shallow-mesa photodiode for short-wave infra-red (SWIR) sensing. We characterized and demonstrated photodiodes collecting 1.55 μm photons with a pixel pitch as small as 3 μm. For a 5 μm pixel pitch photodiode, we measured the external quantum efficiency reaching as high as 54%. With substrate removal and an ideal anti-reflective coating, we estimated the internal quantum efficiency as achieving 77% at 1.55 μm. The best measured dark current density reached 5 nA/cm² at −0.1 V and at 23 °C. The main contributors responsible for this dark current were investigated through the study of its evolution with temperature. We also highlight the importance of passivation with a perimetric contribution analysis and the correlation between MIS capacitance characterization and dark current performance.

Full paper (open access): https://www.mdpi.com/1424-8220/23/22/9219
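To put the abstract's headline numbers in perspective, here is a quick back-of-the-envelope conversion (our arithmetic, not from the paper) of the best reported dark current density into dark electrons per pixel per second, assuming the full 5 μm pixel area contributes:

```python
# Illustrative arithmetic only: convert the paper's best dark current density
# (5 nA/cm^2 at -0.1 V, 23 degC) into dark electrons per pixel per second,
# assuming the whole 5 um pixel area contributes (a simplification).

Q_E = 1.602e-19              # elementary charge [C]

j_dark = 5e-9                # dark current density [A/cm^2]
pitch_cm = 5e-4              # 5 um pixel pitch [cm]

i_dark = j_dark * pitch_cm ** 2      # dark current per pixel [A]
e_per_s = i_dark / Q_E               # dark electrons per pixel per second

print(f"{i_dark:.2e} A/pixel ~ {e_per_s:.0f} e-/s")
# -> ~1.25e-15 A/pixel, i.e. roughly 7.8e3 e-/s per 5 um pixel
```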

Figure 1. Schematic cross section of the photodiode after different processes. (a) Photodiode fabricated by Zn diffusion or Be implantation; (b) photodiode fabrication using shallow mesa technique.

Figure 2. Band diagram of simulated structure at equilibrium with the photogenerated pair schematically represented with their path of collection.


Figure 3. Top zoom of the structure—Impact of the N-InP (a) thickness and (b) doping on the band diagram at equilibrium.

Figure 4. Simulated dark current with TCAD Synopsys tools [28]. (a) Shows evolution of the dark current when the InP SRH lifetime is modulated; (b) evolution of the dark current when the InGaAs SRH lifetime is modulated.

Figure 5. Impact of the doping concentration of the InP barrier on the carrier collection.

Figure 6. Simplified and schematic process flow of the shallow mesa-type process. (a) The full stack; (b) the definition of the pixel by etching the P layer and (c) the encapsulation and fabrication of contacts.

Figure 7. SEM views after the whole process. (a) A cross-section of the top stack where the P layer is etched and (b) a top view of the different configurations of the test structures (single in-array diode is not shown on this SEM view).

Figure 8. Schematic cross section of the structure with its potential sources of the dark current.


Figure 9. Dark current measurement on 15 μm pitch in a matrix-like environment. The curve is the median of more than 100 single in-array diodes measured.

Figure 10. Dark current measurement of the ten-by-ten diode bundle. This measurement is from process B.

Figure 11. Evolution of the dark current with temperature at −0.1 V. The solid lines show the theoretical evolution of the current limited by diffusion (light blue line) and by generation recombination (purple line). The temperature measurement is performed on a bundle of ten-by-ten 5 μm pixel pitch diodes.

Figure 12. Perimetric and bulk contribution to the global dark current from measurements performed on diodes with diameter ranging from 10 to 120 μm.
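The diameter study behind Figure 12 follows the standard area/perimeter separation: total current density is modeled as a bulk term plus a perimeter term scaling as perimeter/area (4/d for a circular diode of diameter d), so a linear fit of density against 1/d splits the two contributions. A minimal sketch with synthetic numbers (ours, not the authors' data or code):

```python
# Synthetic illustration of the bulk/perimeter dark current separation.
# Model: J(d) = J_bulk + k_perim * (4/d) for circular diodes of diameter d.
import numpy as np

d = np.array([10.0, 20.0, 40.0, 80.0, 120.0])   # diode diameters [um]
x = 4.0 / d                                     # perimeter/area ratio [1/um]

# Fake "measurements" for the demo: bulk 5 nA/cm^2 plus a perimeter term.
rng = np.random.default_rng(0)
j = 5.0 + 20.0 * x + rng.normal(0.0, 0.1, d.size)   # [nA/cm^2]

# Linear fit: the intercept estimates the bulk density, the slope the
# perimetric contribution.
slope, intercept = np.polyfit(x, j, 1)
print(f"bulk ~ {intercept:.2f} nA/cm^2, perimeter slope ~ {slope:.1f} nA/cm^2*um")
```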

Figure 13. (a) Capacitance measurement on metal–insulator–semiconductor structure. The measurement starts at 0 V then ramps to +40 V then goes to −40 V and ends at +40 V. (b) A cross section of the MIS structure. The MIS is a 300 μm diameter circle.

Figure 14. Dark current performances compared to the hysteresis measured on several different wafers.

Figure 15. Dark current measurement of a ten-by-ten bundle of 5 μm pixel pitch photodiodes. The measurements are conducted at 23 °C.

Figure 16. (a) Schematic test structure for QE measurement; (b) the results of the 3D FDTD simulations conducted with Lumerical to estimate the internal QE of the photodiode.


Figure 18. Current noise for a ten-by-ten 5 μm pixel pitch photodiode bundle measured at −0.1 V.

Figure 19. Median current measurement for bundles of one hundred 3 μm pixel pitch photodiodes under dark and SWIR illumination conditions. The dark blue line represents the dark current and the pink line is the photocurrent under 1.55 μm illumination.

Figure 20. Comparison of our work in blue versus the state of the art for the fabrication of InGaAs photodiodes.


Sony announces new 5MP SWIR sensor IMX992


Product page: https://www.sony-semicon.com/en/products/is/industry/swir/imx992-993.html

Press release: https://www.sony-semicon.com/en/news/2023/2023112901.html

Sony Semiconductor Solutions to Release SWIR Image Sensor for Industrial Applications with Industry-Leading 5.32 Effective Megapixels
Expanding the lineup for delivering high-resolution and low-light performance


Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX992 short-wavelength infrared (SWIR) image sensor for industrial equipment, with the industry’s highest pixel count, at 5.32 effective megapixels.

The new sensor uses SSS’s proprietary Cu-Cu connection to achieve the industry’s smallest pixel size of 3.45 μm among SWIR image sensors. It also features an optimized pixel structure for efficiently capturing light, enabling high-definition imaging across a broad spectrum ranging from the visible to invisible short-wavelength infrared regions (wavelength: 0.4 to 1.7 μm). Furthermore, new shooting modes deliver high-quality images with significantly reduced noise in dark environments compared to conventional products.

In addition to this product, SSS will also release the IMX993 with a pixel size of 3.45 μm and an effective pixel count of 3.21 megapixels to further expand its SWIR image sensor lineup. These new SWIR image sensors with high pixel counts and high sensitivity will help contribute to the evolution of various industrial equipment.

In the industrial equipment domain in recent years, there has been increasing demand for improving productivity and preventing defective products from leaving the plant. In this context, the capacity to sense not only visible light but also light in the invisible band is in demand. SSS’s SWIR image sensors, which are capable of seamless wide spectrum imaging in the visible to invisible short-wavelength infrared range using a single camera, are already being used in various processes such as semiconductor wafer bonding and defect inspection, as well as ingredient and contaminant inspections in food production.

The new sensors enable imaging with higher resolution using pixel miniaturization, while enhancing imaging performance in low-light environments to provide higher quality imaging in inspection and monitoring applications conducted in darker conditions. By making the most of the characteristics of short-wavelength infrared light, whose light reflection and absorption properties are different from those of visible light, these products help to further expand applications in such areas as inspection, recognition and measurement, thereby contributing to improved industrial productivity.

Main Features
* High pixel count made possible by the industry’s smallest pixels at 3.45 μm, delivering high-resolution imaging

A Cu-Cu connection is used between the indium-gallium arsenide (InGaAs) layer that forms the photodiode of the light receiving unit and the silicon (Si) layer that forms the readout circuit. This design allows for a smaller pixel pitch, resulting in the industry’s smallest pixel size of 3.45 μm. This, in turn, helps achieve a compact form factor that still delivers the industry’s highest pixel count of approximately 5.32 effective megapixels on the IMX992, and approximately 3.21 effective megapixels on the IMX993. The higher pixel count enables detection of tiny objects or imaging across a wide range, contributing to significantly improved recognition and measurement precision in various inspections using short-wavelength infrared light.
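For a sense of scale, here is some quick geometry (our arithmetic, using only the pixel count and pitch from the release; the 4:3 aspect ratio is an assumption, not a Sony specification):

```python
# Rough IMX992 geometry from the two published numbers (pixel count, pitch).
# The 4:3 aspect ratio below is an assumption, not a Sony specification.
pixels = 5.32e6              # effective pixels
pitch_mm = 3.45e-3           # pixel pitch [mm]

area_mm2 = pixels * pitch_mm ** 2            # active area [mm^2]
h = (area_mm2 * 3.0 / 4.0) ** 0.5            # height for 4:3 [mm]
w = area_mm2 / h                             # width [mm]
diag = (w ** 2 + h ** 2) ** 0.5

print(f"~{w:.1f} x {h:.1f} mm, diagonal ~{diag:.1f} mm")
# -> roughly 9.2 x 6.9 mm with an ~11.5 mm diagonal
```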


 Comparison of SWIR images with different resolutions: Lighting wavelength 1550 nm
(Left: Other SSS product, 1.34 effective megapixels; Right: IMX992)

* Low-noise imaging even in dark locations possible by switching the shooting mode

Inclusion of new shooting modes enables low-noise imaging regardless of environmental brightness. In dark environments with limited light, High Conversion Gain (HCG) mode amplifies the signal immediately after light is converted into an electrical signal, adding minimal noise and thereby reducing the relative contribution of downstream noise. This minimizes the impact of noise in dark locations, leading to greater recognition precision. In bright environments with plenty of light, on the other hand, Low Conversion Gain (LCG) mode enables imaging that prioritizes dynamic range.
Furthermore, enabling Dual Read Rolling Shutter (DRRS) makes the sensor output two distinct types of images, which are then composited on the camera to acquire an image with significantly reduced noise.
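A minimal model of why HCG helps in the dark (our sketch, generic to dual-conversion-gain pixels; the gain and noise values below are hypothetical, not IMX992 specifications): downstream voltage noise, referred back to the input, shrinks as conversion gain grows.

```python
# Hypothetical numbers for a dual-conversion-gain pixel (not IMX992 specs):
# the same downstream voltage noise costs fewer electrons at high gain.
v_noise_uV = 200.0                          # downstream readout noise [uV rms]

for mode, cg_uV_per_e in [("LCG", 50.0), ("HCG", 200.0)]:
    read_noise_e = v_noise_uV / cg_uV_per_e      # input-referred noise [e- rms]
    print(f"{mode}: {cg_uV_per_e:.0f} uV/e- -> {read_noise_e:.1f} e- rms")
# LCG keeps the larger effective full well (dynamic range priority in bright
# light); HCG minimizes read noise when photons are scarce.
```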

Image quality and noise comparison in dark location: Lighting wavelength 1450 nm
(Left: Other SSS product, 1.34 effective megapixels; Center: IMX992, HCG mode selected; Right: IMX992, HCG mode selected, DRRS enabled)

 

* Optimized pixel structure for high-sensitivity imaging across a wide range

SSS’s SWIR image sensors employ a thinner indium phosphide (InP) layer on top, which would otherwise inevitably absorb visible light, thereby allowing visible light to reach the indium gallium arsenide (InGaAs) layer underneath and delivering high quantum efficiency even at visible wavelengths. The new products deliver even higher quantum efficiency by optimizing the pixel structure, enabling more uniform sensitivity characteristics across a wide wavelength band from 0.4 to 1.7 μm. Minimizing the image quality differences between wavelengths makes it possible to use the image sensor in a variety of industrial applications and contributes to improved reliability in inspection, recognition, and measurement applications.

 

Product Overview



 


Prof. Edoardo Charbon’s Talk on IR SPADs for LiDAR & Quantum Imaging


 


SWIR/NIR SPAD Image Sensors for LIDAR and Quantum Imaging Applications, by Prof. Charbon

In this talk, Prof. Charbon will review the evolution of solid-state photon-counting sensors from avalanche photodiodes (APDs) to silicon photomultipliers (SiPMs) to single-photon avalanche diodes (SPADs). The impact of these sensors on LiDAR has been remarkable; however, more innovations are to come with the continuous advance of integrated SPADs and the introduction of powerful computational imaging techniques directly coupled to SPADs/SiPMs. New technologies, such as 3D-stacking in combination with Ge and InP/InGaAs SPAD sensors, are accelerating the adoption of SWIR/NIR image sensors, while enabling new sensing functionalities. Prof. Charbon will conclude the talk with a technological perspective on how all these technologies could come together in low-cost, computation-intensive image sensors for affordable, yet powerful quantum imaging.

Edoardo Charbon (SM’00 F’17) received the Diploma from ETH Zurich, the M.S. from the University of California at San Diego, and the Ph.D. from the University of California at Berkeley in 1988, 1991, and 1995, respectively, all in electrical engineering and EECS. He has consulted with numerous organizations, including Bosch, X-Fab, Texas Instruments, Maxim, Sony, Agilent, and the Carlyle Group. He was with Cadence Design Systems from 1995 to 2000, where he was the Architect of the company's initiative on information hiding for intellectual property protection. In 2000, he joined Canesta Inc., as the Chief Architect, where he led the development of wireless 3-D CMOS image sensors.
Since 2002 he has been a member of the faculty of EPFL, where he is a full professor. From 2008 to 2016 he was with Delft University of Technology as Chair of VLSI Design. Dr. Charbon has been the driving force behind the creation of deep-submicron CMOS SPAD technology, which has been mass-produced since 2015 and is present in telemeters, proximity sensors, and medical diagnostics tools. His interests span from 3-D vision, LiDAR, FLIM, FCS, NIROT to super-resolution microscopy, time-resolved Raman spectroscopy, and cryo-CMOS circuits and systems for quantum computing. He has authored or co-authored over 400 papers and two books, and he holds 24 patents. Dr. Charbon is the recipient of the 2023 IISS Pioneering Achievement Award, a distinguished visiting scholar of the W. M. Keck Institute for Space at Caltech, a fellow of the Kavli Institute of Nanoscience Delft, a distinguished lecturer of the IEEE Photonics Society, and a fellow of the IEEE.


Prophesee event sensor in 2023 VLSI symposium


Schon et al. from Prophesee published a paper titled "A 320 x 320 1/5" BSI-CMOS stacked event sensor for low-power vision applications" at the 2023 VLSI Symposium. This paper presents technical details about their recently announced GenX320 sensor.

Abstract
Event vision sensors acquire sparse data, making them suited for edge vision applications. However, unconventional data format, nonconstant data rates and non-standard interfaces restrain wide adoption. A 320x320 6.3μm pixel BSI stacked event sensor, specifically designed for embedded vision, features multiple data pre-processing, filtering and formatting functions, variable MIPI and CPI interfaces and a hierarchy of power modes, facilitating operability in power-sensitive vision applications.
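As a rough illustration of the kind of on-sensor event pre-processing and filtering the abstract refers to, here is a generic sketch (ours, not Prophesee's data format or API) of a per-pixel refractory noise filter on an event stream:

```python
# Generic event-stream sketch: events as (timestamp_us, x, y, polarity).
# This illustrates event filtering in general, not Prophesee's GenX320
# data format or API.
from typing import Iterable, List, Tuple

Event = Tuple[int, int, int, int]       # (t_us, x, y, polarity)

def refractory_filter(events: Iterable[Event], dead_time_us: int = 1000) -> List[Event]:
    """Drop events that re-fire the same pixel within dead_time_us."""
    last = {}                           # (x, y) -> last accepted timestamp
    kept = []
    for t, x, y, p in events:
        if t - last.get((x, y), -2 * dead_time_us) > dead_time_us:
            kept.append((t, x, y, p))
            last[(x, y)] = t
    return kept

stream = [(0, 5, 5, 1), (400, 5, 5, 1), (2000, 5, 5, -1), (2100, 6, 5, 1)]
print(refractory_filter(stream))        # the 400 us re-fire at (5, 5) is dropped
```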








ISSCC 2024 Advanced Program Now Available


ISSCC will be held Feb 18-22, 2024 in San Francisco, CA.

Link to advanced program: https://submissions.mirasmart.com/ISSCC2024/PDF/ISSCC2024AdvanceProgram.pdf

There are several papers of interest in Session 6 on Imagers and Ultrasound. 

6.1 12Mb/s 4×4 Ultrasound MIMO Relay with Wireless Power and Communication for Neural Interfaces
E. So, A. Arbabian (Stanford University, Stanford, CA)

6.2 An Ultrasound-Powering TX with a Global Charge-Redistribution Adiabatic Drive Achieving 69% Power Reduction and 53° Maximum Beam Steering Angle for Implantable Applications
M. Gourdouparis1,2, C. Shi1, Y. He1, S. Stanzione1, R. Ukropec3, P. Gijsenbergh3, V. Rochus3, N. Van Helleputte3, W. Serdijn2, Y-H. Liu1,2
 1 imec, Eindhoven, The Netherlands
 2 Delft University of Technology, Delft, The Netherlands
 3 imec, Leuven, Belgium

6.3 Imager with In-Sensor Event Detection and Morphological Transformations with 2.9pJ/pixel×frame Object Segmentation FOM for Always-On Surveillance in 40nm
 J. Vohra, A. Gupta, M. Alioto, National University of Singapore, Singapore, Singapore

6.4 A Resonant High-Voltage Pulser for Battery-Powered Ultrasound Devices
 I. Bellouki1, N. Rozsa1, Z-Y. Chang1, Z. Chen1, M. Tan1,2, M. Pertijs1
 1 Delft University of Technology, Delft, The Netherlands
 2 SonoSilicon, Hangzhou, China

6.5 A 0.5°-Resolution Hybrid Dual-Band Ultrasound Imaging SoC for UAV Applications
 J. Guo1, J. Feng1, S. Chen1, L. Wu1, C-W. Tsai1,2, Y. Huang1, B. Lin1, J. Yoo1,2
 1 National University of Singapore, Singapore, Singapore
 2 The N.1 Institute for Health, Singapore, Singapore

6.6 A 10,000 Inference/s Vision Chip with SPAD Imaging and Reconfigurable Intelligent Spike-Based Vision Processor
 X. Yang*1, F. Lei*1, N. Tian*1, C. Shi2, Z. Wang1, S. Yu1, R. Dou1, P. Feng1, N. Qi1, J. Liu1, N. Wu1, L. Liu1
 1 Chinese Academy of Sciences, Beijing, China
 2 Chongqing University, Chongqing, China
 *Equally Credited Authors (ECAs)

6.7 A 160×120 Flash LiDAR Sensor with Fully Analog-Assisted In-Pixel Histogramming TDC Based on Self-Referenced SAR ADC
 S-H. Han1, S. Park1, J-H. Chun2,3, J. Choi2,3, S-J. Kim1
 1 Ulsan National Institute of Science and Technology, Ulsan, Korea
 2 Sungkyunkwan University, Suwon, Korea
 3 SolidVue, Seongnam, Korea

6.8 A 256×192-Pixel 30fps Automotive Direct Time-of-Flight LiDAR Using 8× Current-Integrating-Based TIA, Hybrid Pulse Position/Width Converter, and Intensity/CNN-Guided 3D Inpainting
 C. Zou1, Y. Ou1, Y. Zhu1, R. P. Martins1,2, C-H. Chan1, M. Zhang1
 1 University of Macau, Macau, China
 2 Instituto Superior Tecnico/University of Lisboa, Lisbon, Portugal

6.9 A 0.35V 0.367TOPS/W Image Sensor with 3-Layer Optical-Electronic Hybrid Convolutional Neural Network
 X. Wang*, Z. Huang*, T. Liu, W. Shi, H. Chen, M. Zhang
 Tsinghua University, Beijing, China
 *Equally Credited Authors (ECAs)

6.10 A 1/1.56-inch 50Mpixel CMOS Image Sensor with 0.5μm pitch Quad Photodiode Separated by Front Deep Trench Isolation
 D. Kim, K. Cho, H-C. Ji, M. Kim, J. Kim, T. Kim, S. Seo, D. Im, Y-N. Lee, J. Choi, S. Yoon, I. Noh, J. Kim, K. J. Lee, H. Jung, J. Shin, H. Hur, K. E. Chang, I. Cho, K. Woo, B. S. Moon, J. Kim, Y. Ahn, D. Sim, S. Park, W. Lee, K. Kim, C. K. Chang, H. Yoon, J. Kim, S-I. Kim, H. Kim, C-R. Moon, J. Song
 Samsung Semiconductor, Hwaseong, Korea

6.11 A 320x240 CMOS LiDAR Sensor with 6-Transistor nMOS-Only SPAD Analog Front-End and Area-Efficient Priority Histogram Memory
 M. Kim*1, H. Seo*1,2, S. Kim1, J-H. Chun1,2, S-J. Kim3, J. Choi*1,2
 1 Sungkyunkwan University, Suwon, Korea
 2 SolidVue, Seongnam, Korea
 3 Ulsan National Institute of Science and Technology, Ulsan, Korea
 *Equally Credited Authors (ECAs)
 

Imaging papers in other sessions: 

17.3 A Fully Wireless, Miniaturized, Multicolor Fluorescence Image Sensor Implant for Real-Time Monitoring in Cancer Therapy
 R. Rabbani*1, M. Roschelle*1, S. Gweon1, R. Kumar1, A. Vercruysse1, N. W. Cho2, M. H. Spitzer2, A. M. Niknejad1, V. M. Stojanovic1, M. Anwar1,2
 1 University of California, Berkeley, CA
 2 University of California, San Francisco, CA
 *Equally Credited Authors (ECAs)

33.10 A 2.7ps-ToF-Resolution and 12.5mW Frequency-Domain NIRS Readout IC with Dynamic Light Sensing Frontend and Cross-Coupling-Free Inter-Stabilized Data Converter
 Z. Ma1, Y. Lin1, C. Chen1, X. Qi1, Y. Li1, K-T. Tang2, F. Wang3, T. Zhang4, G. Wang1, J. Zhao1
 1 Shanghai Jiao Tong University, Shanghai, China
 2 National Tsing Hua University, Hsinchu, Taiwan
 3 Shanghai United Imaging Microelectronics Technology, Shanghai, China
 4 Shanghai Mental Health Center, Shanghai, China


IISW2023 special issue paper on well capacity of pinned photodiodes


Miyauchi et al. from Brillnics and Tohoku University published a paper titled "Analysis of Light Intensity and Charge Holding Time Dependence of Pinned Photodiode Full Well Capacity" in the IISW 2023 special issue of the journal Sensors.

Abstract
In this paper, the light intensity and charge holding time dependence of pinned photodiode (PD) full well capacity (FWC) are studied for our pixel structure with a buried overflow path under the transfer gate. The formulae for PDFWC derived from a simple analytical model show that the relation between light intensity and PDFWC is logarithmic because PDFWC is determined by the balance between the photo-generated current and the overflow current under bright conditions. Furthermore, using pulsed light before a charge holding operation in the PD, the accumulated charges in the PD decrease with holding time due to the overflow current, finally reaching the equilibrium PDFWC. The analytical model has been successfully validated by technology computer-aided design (TCAD) device simulation and actual device measurement.

Open access: https://doi.org/10.3390/s23218847
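The logarithmic relation follows from the balance argument in the abstract. Writing the overflow current in a generic thermionic-emission form (our notation, not necessarily the paper's), equating it to the photocurrent gives a barrier shift that grows with the log of the light intensity, and hence a PDFWC that does too:

```latex
I_{\mathrm{of}} = I_0 \exp\!\left(\frac{q\,\Delta V_b}{n k T}\right),\qquad
I_{\mathrm{ph}} = I_{\mathrm{of}}
\;\Rightarrow\;
\Delta V_b = \frac{n k T}{q}\,\ln\frac{I_{\mathrm{ph}}}{I_0},\qquad
\mathrm{PDFWC} \approx \mathrm{PDFWC}_{\mathrm{eq}} + \frac{C_{\mathrm{PD}}\,\Delta V_b}{q}
```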

Figure 1. Measured dynamic behaviors of PPD.

Figure 2. Pixel schematic and pulse timing for characterization.

Figure 3. PD cross-section and potential of the buried overflow path.

Figure 4. Potential and charge distribution changes from PD reset to PD saturation.

Figure 5. Simple PD model for theoretical analysis.
Figure 6. A simple model of dynamic behavior from PD reset to PD saturation under static light condition.

Figure 7. Potential and charge distribution changes from PD saturation to equilibrium PDFWC.

Figure 8. A simple model of PD charge reduction during charge holding operation with pulse light.
Figure 9. Chip micrograph and specifications of our developed stacked 3Q-DPS [7,8,9].


Figure 10. Relation between ∆Vb and Iof with static TCAD simulation.
Figure 12. PDFWC under various light intensity conditions.
Figure 13. PDFWC with long charge holding times.
Figure 14. TCAD simulation results of equilibrium PDFWC potential.



Sony announces full-frame global shutter camera


Link: https://www.sony.com/lr/electronics/interchangeable-lens-cameras/ilce-9m3

Sony recently announced a full-frame global shutter camera which was featured in several press articles below:


PetaPixel https://petapixel.com/2023/11/07/sony-announces-a9-iii-worlds-first-global-sensor-full-frame-camera/

DPReview https://www.dpreview.com/news/7271416294/sony-announces-a9-iii-world-s-first-full-frame-global-shutter-camera

The Verge https://www.theverge.com/2023/11/7/23950504/sony-a9-iii-mirrorless-camera-global-shutter-price-release


From Sony's official webpage:

[This camera uses the] Newly developed full-frame stacked 24.6 MP Exmor RS™ image sensor with global shutter [...] a stacked CMOS architecture and integral memory [...] advanced A/D conversion enable high-speed processing to proceed with minimal delay. [AI features are implemented using the] BIONZ XR™ processing engine. With up to eight times more processing power than previous versions, the BIONZ XR image processing engine minimises processing latency [...] It's able to process the high volume of data generated by the newly developed Exmor RS image sensor in real-time, even while shooting continuous bursts at up to 120 fps, and it can capture high-quality 14-bit RAW images in all still shooting modes. [...] [The] α9 III can use subject form data to accurately recognise movement. Human pose estimation technology recognises not just eyes but also body and head position with high precision. 
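The quoted specifications imply a substantial raw data rate; a quick illustrative calculation (ours, ignoring readout overhead and any compression):

```python
# Illustrative throughput arithmetic: uncompressed 14-bit RAW at 120 fps
# from a 24.6 MP sensor, ignoring blanking, overhead and compression.
pixels, bits, fps = 24.6e6, 14, 120

gbit_s = pixels * bits * fps / 1e9
print(f"~{gbit_s:.1f} Gbit/s (~{gbit_s / 8:.1f} GB/s)")
# -> ~41.3 Gbit/s, about 5.2 GB/s of raw pixel data
```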

 


 


2024 International SPAD Sensor Workshop Submission Deadline Approaching!


The December 8, 2023 deadline for the 2024 ISSW is fast approaching! The paper submission portal is now open!

The 2024 International SPAD Sensor Workshop will be held from 4-6 June 2024 in Trento, Italy.

Paper submission

Workshop papers must be submitted online via Microsoft CMT. You may need to register first; then search for the "2024 International SPAD Sensor Workshop" within the list of conferences using the dedicated search bar.

Paper format

Note that the ISSW employs a single-stage submission process, so camera-ready papers are required. Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of figures, and must include the authors' name(s) and affiliation, mailing address, telephone, and email address. The formatting can adhere either to a style that integrates text and figures, akin to the standard IEEE format, or to a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples illustrating these formats can be accessed in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee, and accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed after submission. Authors will be notified of the acceptance of their abstracts and posters by Wednesday, Jan 31st, 2024 at the latest.
 
Poster submission 

In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics. If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.


Detecting hidden defects using a single-pixel THz camera


 

Li et al. present a new THz imaging technique for defect detection in a recent paper in the journal Nature Communications. The paper is titled "Rapid sensing of hidden objects and defects using a single-pixel diffractive terahertz sensor".

Abstract: Terahertz waves offer advantages for nondestructive detection of hidden objects/defects in materials, as they can penetrate most optically-opaque materials. However, existing terahertz inspection systems face throughput and accuracy restrictions due to their limited imaging speed and resolution. Furthermore, machine-vision-based systems using large-pixel-count imaging encounter bottlenecks due to their data storage, transmission and processing requirements. Here, we report a diffractive sensor that rapidly detects hidden defects/objects within a 3D sample using a single-pixel terahertz detector, eliminating sample scanning or image formation/processing. Leveraging deep-learning-optimized diffractive layers, this diffractive sensor can all-optically probe the 3D structural information of samples by outputting a spectrum, directly indicating the presence/absence of hidden structures or defects. We experimentally validated this framework using a single-pixel terahertz time-domain spectroscopy set-up and 3D-printed diffractive layers, successfully detecting unknown hidden defects inside silicon samples. This technique is valuable for applications including security screening, biomedical sensing and industrial quality control. 

Paper (open access): https://www.nature.com/articles/s41467-023-42554-2

News coverage: https://phys.org/news/2023-11-hidden-defects-materials-single-pixel-terahertz.html


 
In the realm of engineering and material science, detecting hidden structures or defects within materials is crucial. Traditional terahertz imaging systems, which rely on the unique property of terahertz waves to penetrate visibly opaque materials, have been developed to reveal the internal structures of various materials of interest.


This capability provides unprecedented advantages in numerous applications for industrial quality control, security screening, biomedicine, and defense. However, most existing terahertz imaging systems have limited throughput and bulky setups, and they need raster scanning to acquire images of the hidden features.


To change this paradigm, researchers at UCLA Samueli School of Engineering and the California NanoSystems Institute developed a unique terahertz sensor that can rapidly detect hidden defects or objects within a target sample volume using a single-pixel spectroscopic terahertz detector.
Instead of the traditional point-by-point scanning and digital image formation-based methods, this sensor inspects the volume of the test sample illuminated with terahertz radiation in a single snapshot, without forming or digitally processing an image of the sample.


Developed by a team led by Dr. Aydogan Ozcan, the Chancellor's Professor of Electrical & Computer Engineering, and Dr. Mona Jarrahi, the Northrop Grumman Endowed Chair at UCLA, the sensor serves as an all-optical processor, adept at searching for and classifying unexpected sources of waves caused by diffraction through hidden defects. The paper is published in the journal Nature Communications.


"It is a shift in how we view and harness terahertz imaging and sensing as we move away from traditional methods toward more efficient, AI-driven, all-optical sensing systems," said Dr. Ozcan, who is also the Associate Director of the California NanoSystems Institute at UCLA.


This new sensor comprises a series of diffractive layers, automatically optimized using deep learning algorithms. Once trained, these layers are transformed into a physical prototype using additive manufacturing approaches such as 3D printing. This allows the system to perform all-optical processing without the burdensome need for raster scanning or digital image capture/processing.
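For readers unfamiliar with diffractive-network simulation, the basic numerical building block is free-space propagation between layers, typically computed with the angular spectrum method. A minimal generic sketch follows (ours, not the authors' training code; the grid and wavelength values are arbitrary demo choices):

```python
# Generic angular-spectrum propagation, the standard numerical step between
# layers when simulating diffractive optics. Grid and wavelength are arbitrary
# demo values (lambda = 0.75 mm, i.e. ~0.4 THz).
import numpy as np

def propagate(field, wavelength, dx, z):
    """Propagate a square 2D complex field a distance z (all units in metres)."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies [1/m]
    FX, FY = np.meshgrid(fx, fx)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)  # clamp evanescent
    H = np.exp(1j * 2 * np.pi * z * np.sqrt(arg))     # transfer function
    return np.fft.ifft2(np.fft.fft2(field) * H)

n, dx, lam = 256, 0.2e-3, 0.75e-3
x = (np.arange(n) - n // 2) * dx
X, Y = np.meshgrid(x, x)
aperture = ((np.abs(X) < 0.5e-3) & (np.abs(Y) < 0.5e-3)).astype(complex)
out = propagate(aperture, lam, dx, z=20e-3)
print(f"on-axis intensity after 20 mm: {np.abs(out[n // 2, n // 2])**2:.3f}")
```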


"It is like the sensor has its own built-in intelligence," said Dr. Ozcan, drawing parallels with their previous AI-designed optical neural networks. "Our design comprises several diffractive layers that modify the input terahertz spectrum depending on the presence or absence of hidden structures or defects within materials under test. Think of it as giving our sensor the capability to 'sense and respond' based on what it 'sees' at the speed of light."


To demonstrate their novel concept, the UCLA team fabricated a diffractive terahertz sensor using 3D printing and successfully detected hidden defects in silicon samples. These samples consisted of stacked wafers, with one layer containing defects and the other concealing them. The smart system accurately revealed the presence of unknown hidden defects with various shapes and positions.
The team believes their diffractive defect sensor framework can also work across other wavelengths, such as infrared and X-rays. This versatility heralds a plethora of applications, from manufacturing quality control to security screening and even cultural heritage preservation.


The simplicity, high throughput, and cost-effectiveness of this non-imaging approach promise transformative advances in applications where speed, efficiency, and precision are paramount.


A 400 kilopixel resolution superconducting camera


Oripov et al. from NIST and JPL recently published a paper titled "A superconducting nanowire single-photon camera with 400,000 pixels" in Nature.

Abstract: For the past 50 years, superconducting detectors have offered exceptional sensitivity and speed for detecting faint electromagnetic signals in a wide range of applications. These detectors operate at very low temperatures and generate a minimum of excess noise, making them ideal for testing the non-local nature of reality, investigating dark matter, mapping the early universe and performing quantum computation and communication. Despite their appealing properties, however, there are at present no large-scale superconducting cameras—even the largest demonstrations have never exceeded 20,000 pixels. This is especially true for superconducting nanowire single-photon detectors (SNSPDs). These detectors have been demonstrated with system detection efficiencies of 98.0%, sub-3-ps timing jitter, sensitivity from the ultraviolet to the mid-infrared and microhertz dark-count rates, but have never achieved an array size larger than a kilopixel. Here we report on the development of a 400,000-pixel SNSPD camera, a factor of 400 improvement over the state of the art. The array spanned an area of 4 × 2.5 mm with 5 × 5-μm resolution, reached unity quantum efficiency at wavelengths of 370 nm and 635 nm, counted at a rate of 1.1 × 105 counts per second (cps) and had a dark-count rate of 1.0 × 10^−4 cps per detector (corresponding to 0.13 cps over the whole array). The imaging area contains no ancillary circuitry and the architecture is scalable well beyond the present demonstration, paving the way for large-format superconducting cameras with near-unity detection efficiencies across a wide range of the electromagnetic spectrum.

Link: https://www.nature.com/articles/s41586-023-06550-2

a, Imaging at 370 nm, with raw time-delay data from the buses shown as individual dots in red and binned 2D histogram data shown in black and white. b, Count rate as a function of bias current for various wavelengths of light as well as dark counts. c, False-colour scanning electron micrograph of the lower-right corner of the array, highlighting the interleaved row and column detectors. Lower-left inset, schematic diagram showing detector-to-bus connectivity. Lower-right inset, close-up showing 1.1-μm detector width and effective 5 × 5-μm pixel size. Scale bar, 5 μm.


 

a, Circuit diagram of a bus and one section of 50 detectors with ancillary readout components. SNSPDs are shown in the grey boxes and all other components are placed outside the imaging area. A photon that arrives at time t0 has its location determined by a time-of-flight readout process based on the time-of-arrival difference t2 − t1. b, Oscilloscope traces from a photon detection showing the arrival of positive (green) and negative (red) pulses at times t1 and t2, respectively.

a, Histogram of the pulse differential time delays Δt = t1 − t2 from the north bus during flood illumination with a Gaussian spot. All 400 detectors resolved clearly, with gaps indicating detectors that were pruned. Inset, zoomed-in region showing that counts from adjacent detectors are easily resolvable and no counts were generated by a pruned detector. b, Plot of raw trow and tcol time delays when flood illuminated at 370 nm. c, Zoomed-in subsection of the array with 25 × 25 detectors. d, Histogram of time delays for a 2 × 2 detector subset with 10-ps bin size showing clear distinguishability between adjacent detectors.
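The position decoding sketched in these captions is a delay-line readout: each detector taps the bus at a different point, so the differential pulse arrival time at the two ends maps linearly to a detector index. A toy reconstruction (ours, with a hypothetical 50 ps delay step, not NIST's code):

```python
# Toy delay-line decoding: detector index from pulse arrival times at the two
# ends of a bus. The 50 ps step and timings are hypothetical demo values.

DELAY_STEP_PS = 50.0        # assumed per-detector delay increment [ps]
N_DETECTORS = 400           # detectors per bus section, as in the histogram

def detector_index(t1_ps: float, t2_ps: float) -> int:
    """Map the differential delay t1 - t2 to a detector index on the bus."""
    dt = t1_ps - t2_ps
    idx = round((dt / DELAY_STEP_PS + N_DETECTORS) / 2)
    return max(0, min(N_DETECTORS - 1, idx))

# Demo: a photon absorbed at detector 123; pulses travel to opposite ends.
i_true, t0 = 123, 1000.0
t1 = t0 + i_true * DELAY_STEP_PS
t2 = t0 + (N_DETECTORS - i_true) * DELAY_STEP_PS
assert detector_index(t1, t2) == i_true
```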

a, Count rate versus optical attenuation for a section of detectors biased at 45 μA per detector. The dashed purple line shows a slope of 1, with deviations from that line at higher rates indicating blocking loss. b, System jitter of a 50-detector section. Detection delay was calculated as the time elapsed between the optical pulse being generated and the detection event being read out.



News coverage: https://www.universetoday.com/163959/a-new-superconducting-camera-can-resolve-single-photons/


A New Superconducting Camera can Resolve Single Photons

Researchers have built a superconducting camera with 400,000 pixels, which is so sensitive it can detect single photons. It comprises a grid of superconducting wires with no resistance until a photon strikes one or more wires. This shuts down the superconductivity in the grid, sending a signal. By combining the locations and intensities of the signals, the camera generates an image.


The researchers who built the camera, from the US National Institute of Standards and Technology (NIST) say the architecture is scalable, and so this current iteration paves the way for even larger-format superconducting cameras that could make detections across a wide range of the electromagnetic spectrum. This would be ideal for astronomical ventures such as imaging faint galaxies or extrasolar planets, as well as biomedical research using near-infrared light to peer into human tissue.


These devices have been possible for decades but with a fraction of the pixel count. This new version has 400 times more pixels than any other device of its type. Previous versions have not been very practical because of the low-quality output.

In the past, it was found to be difficult-to-impossible to chill the camera’s superconducting components – which would be hundreds of thousands of wires – by connecting them each to a cooling system.
According to NIST, researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by constructing the wires to form multiple rows and columns, like those in a tic-tac-toe game, where each intersection point is a pixel. Then they combined the signals from many pixels onto just a few room-temperature readout nanowires.


The detectors can discern differences in the arrival time of signals as short as 50 trillionths of a second. They can also detect up to 100,000 photons a second striking the grid.
McCaughan said the readout technology can easily be scaled up for even larger cameras, and predicted that a superconducting single-photon camera with tens or hundreds of millions of pixels could soon be available.


In the meantime, the team plans to improve the sensitivity of their prototype camera so that it can capture virtually every incoming photon. That will enable the camera to tackle quantum imaging techniques that could be a game changer for many fields, including astronomy and medical imaging.


RADOPT 2023 Nov 29-30 in Toulouse, France


The 2023 workshop on Radiation Effects on Optoelectronic Detectors and Photonics Technologies (RADOPT) will be co-organised by CNES, UJM, SODERN, ISAE-SUPAERO, AIRBUS DEFENCE & SPACE, and THALES ALENIA SPACE in Toulouse, France on November 29 and 30, 2023.

After the success of RADOPT 2021, this second edition of the workshop will continue to combine and replace two well-known events of the photonic devices and ICs community: the “Optical Fibers in Radiation Environments Days – FMR” and the Radiation Effects on Optoelectronic Detectors Workshop, traditionally organized every two years by the COMET OOE of CNES.

The objective of the workshop is to provide a forum for the presentation and discussion of recent developments regarding the use of optoelectronics and photonics technologies in radiation-rich environments. The workshop also offers the opportunity to highlight future prospects in the fast-moving space, high energy physics, fusion and fission research fields and to enhance exchanges and collaborations between scientists. Participation of young researchers (PhD) is especially encouraged.






SWIR Vision Systems announces 6 MP SWIR sensor to be released in 2024


The sensor is based on quantum dot crystals deposited on silicon.

Link: https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/

Acuros® CQD® sensors are fabricated via the deposition of quantum dot semiconductor crystals upon the surface of silicon wafers. The resulting CQD photodiode arrays offer high resolution, small pixel pitch, broad bandwidth, low noise, and low inter-pixel crosstalk, eliminating the prohibitively expensive hybridization process inherent to InGaAs sensors. CQD sensor technology is silicon wafer-scale compatible, opening its potential to very low-cost, high-volume applications.

Features:

  •  3072 x 2048 Pixel Array
  •  7µm Pixel Pitch
  •  Global Snapshot Shutter
  •  Enhanced QE
  •  100 Hz Framerate
  •  Integrated 12bit ADC
  •  Full Visible-to-SWIR bandwidth
  •  Compatible with a range of SWIR lenses
Applications:
  •  Industrial Inspection: Inspection and quality control in various industries, including semiconductor, electronics, and pharmaceuticals.
  •  Agriculture: Crop health monitoring, food quality control, and moisture content analysis.
  •  Medical Imaging: Blood vessel imaging, tissue differentiation, and endoscopy.
  •  Degraded Visual Environment: Penetrating haze, smoke, rain & snow for improved situational awareness.
  •  Security and Defense: Target recognition, camouflage detection, and covert surveillance.
  •  Scientific Research: Astronomy, biology, chemistry, and material science.
  •  Remote Sensing: Environmental monitoring, geology, and mineral exploration.
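Some quick arithmetic from the feature list above (our calculation, using only the listed resolution, pitch, bit depth and frame rate):

```python
# Derived figures from the listed specs: 3072 x 2048 pixels, 7 um pitch,
# 12-bit ADC, 100 Hz frame rate. Output rate ignores any interface overhead.
cols, rows, pitch_um, bits, fps = 3072, 2048, 7.0, 12, 100

mpix = cols * rows / 1e6
w_mm, h_mm = cols * pitch_um / 1000, rows * pitch_um / 1000
gbit_s = cols * rows * bits * fps / 1e9

print(f"{mpix:.2f} MP, ~{w_mm:.1f} x {h_mm:.1f} mm active area, "
      f"~{gbit_s:.2f} Gbit/s raw")
# -> 6.29 MP, ~21.5 x 14.3 mm, ~7.55 Gbit/s
```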

 

Full press release:

SWIR Vision Systems to release industry-leading 6 MP SWIR sensors for defense, scientific, automotive, and industrial vision markets
 
The company’s latest innovation, the Acuros® 6, leverages its pioneering CQD® Quantum Dot image sensor technology, further contributing to the availability of very high resolution and broad-band sensors for a diversity of applications.

Durham, N.C., October 31, 2023 – SWIR Vision Systems today announces the upcoming release of two new models of short-wavelength infrared (SWIR) image sensors for Defense, Scientific, Automotive, and Industrial Users. The new sensors are capable of capturing images in the visible, the SWIR, and the extended SWIR spectral ranges. These very high resolution SWIR sensors are made possible by the company’s patented CQD Quantum Dot sensor technology.

SWIR Vision’s new products include both the Acuros 6 and the Acuros 4 CQD SWIR image sensors, featuring 6.3 megapixel and 4.2 megapixel global shutter arrays. Each sensor has a 7-micron pixel-pitch, 12-bit digital output, low read noise, and enhanced quantum efficiency, resulting in excellent sensitivity and SNR performance for a broad array of applications.

The new products employ SWIR Vision’s CQD photodiode technology, in which photodiodes are created via the deposition of low-cost films directly on top of silicon readout ICs. This approach enables small pixel sizes, affordable prices, broad spectral response, and industry-leading high-resolution SWIR focal plane arrays.

SWIR Vision is now engaging global camera makers, automotive, industrial, and defense system integrators, who will leverage these breakthrough sensors to tackle challenges in laser inspection and manufacturing, semiconductor inspection, automotive safety, long-range imaging, and defense.
“Our customers challenged us again to deliver more capability to their toughest imaging problems. The Acuros 4 and the Acuros 6 sensors deliver the highest resolution and widest spectral response available today,” said Allan Hilton, SWIR Vision’s Chief Product Officer. “The industry can expect to see new camera and system solutions based on these latest innovations from our best-in-class CQD sensor engineering group”.

About SWIR Vision Systems – SWIR Vision Systems (www.swirvisionsystems.com), a North Carolina-based startup company, has pioneered the development and introduction of high-definition, Colloidal Quantum Dot (CQD® ) infrared image sensor technology for infrared cameras, delivering breakthrough sensor capability. Imaging in the short wavelength IR has become critical for key applications within industrial, defense systems, mobile phones, and autonomous vehicle markets.
To learn more about our 6MP Sensors, go to https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/.


imec paper on thin film pinned photodiode


Kim et al. from imec and coauthors from universities in Belgium and Korea recently published a paper titled "A Thin-Film Pinned-Photodiode Imager Pixel with Fully Monolithic Fabrication and beyond 1Me- Full Well Capacity" in MDPI Sensors. This paper describes imec's recent thin film pinned photodiode technology.

Open access paper link: https://www.mdpi.com/1424-8220/23/21/8803

Abstract
Thin-film photodiodes (TFPD) monolithically integrated on the Si Read-Out Integrated Circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, the pixel development has been limited in terms of noise characteristics compared to the Si-based image sensors. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current, accompanied with a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme with the introduction of photogate (PG) to achieve PPD operation. This PG brings not only a low noise performance, but also a high full well capacity (FWC) coming from the large capacitance of its metal-oxide-semiconductor (MOS). Hence, the FWC of the pixel is boosted up to 1.37 Me- with a 5 μm pixel pitch, which is 8.3 times larger than the FWC that the TFPD junction capacitor can store. This large FWC, along with the inherent low noise characteristics of the TF-PPD, leads to the three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4 T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it has worked for CMOS Image sensors (CIS).
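As a sanity check of the reported figures (our arithmetic, not from the paper), the usual dynamic range definition DR = 20·log10(FWC / noise floor), together with the quoted 1.37 Me- FWC and 100.2 dB, implies a noise floor of roughly 13 e- rms:

```python
# Sanity check of the reported figures (our arithmetic, not from the paper):
# DR[dB] = 20*log10(FWC / noise_floor)  =>  noise_floor = FWC / 10^(DR/20).
fwc = 1.37e6                 # reported full well capacity [e-]
dr_db = 100.2                # reported dynamic range [dB]

n_floor = fwc / 10 ** (dr_db / 20)
print(f"implied noise floor ~{n_floor:.1f} e- rms")   # -> ~13.4 e-
```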


Figure 1. Pixel cross-section for the monolithic TFPD image sensor (a) 3 T and (b) 4 T (TF-PPD) structure (TCO: transparent conductive oxide, HTL: hole transport layer, PG: photogate, TG: transfer gate, FD: floating diffusion). Electric potential and signal readout configuration for 3 T pixel (c) and for 4 T pixel (d). Pixel circuit diagram for 3 T pixel (e) and for the 4 T pixel (f).

 


Figure 2. I-V characteristic of QDPD test structure (a) and of IGZO TFT (b), a micrograph of the TF-PPD passive pixel array (c), and its measurement schematic (d). Band diagrams for the PD (e) and PG (f).


Figure 3. Silvaco TCAD simulation results; (a) simulated structure, (b) lateral potential profile along the IGZO layer, and (c) potential profile when TG is turned off and (d) on.


Figure 4. Signal output vs. integration time with different VPG and VTG values with the illumination. Signal curves with the fixed VTG (−1 V), varying VPG (−4~−1 V) (a), the same graphs for the fixed VPG (−2 V), and different VTGs (−6.5~−1 V) (b).


Figure 5. (a) Pixel output vs. integration time for different pixel pitches. (b) FWC comparison between estimation and measurement.

Figure 6. FWC comparison by different pixel fill factors. Pixel schematics for different shapes (a), and FWC by different pixel shapes and pitches (b).



Figure 7. Potential diagram describing FWC increase by the larger VPG (a), and FWC vs. VPG (b).

Figure 8. Passive pixel dark current (a) and Arrhenius plots (b) for the QDPD test structure and the passive pixel.

Figure 9. FWC vs. pixel area. A guideline showing the FWC density per unit area for this work (blue) and a trend line for the most of CISs (red).

 




EETimes article about imec’s new thin film pinned photodiode


Full article: https://www.eetimes.eu/imec-taps-pinned-photodiode-to-build-a-better-swir-sensor/

Imec Taps Pinned Photodiode to Build a Better SWIR Sensor

‘Monolithic hybrid’ prototype integrates PPD into the TFT structure to lower the cost of light detection in the nonvisible range, with improved noise performance. 

Silicon-based image sensors can detect light within a limited range of wavelengths and thus have limitations in applications like automotive and medical imaging. Sensors that can capture light beyond the visible range, such as short-wave infrared (SWIR), can be built using III-V materials, which combine such elements as gallium, indium, aluminum and phosphorous. But while those sensors perform well, their manufacture requires a high degree of precision and control, increasing their cost.

Research into less expensive alternatives has yielded thin-film absorbers such as quantum-dot (QD) and other organic photodiode (OPD) materials that are compatible with the CMOS readout circuits found in electronic devices, an advantage that has boosted their adoption for IR detection. But thin-film absorbers exhibit higher levels of noise when capturing IR light, resulting in lower image quality. They are also known to have lower sensitivity to IR.

The challenge, then, is to design a cost-effective image sensor that uses thin-film absorbers but offers better noise performance. Imec has taken aim at the problem by revisiting a technology introduced in the 1980s to suppress noise in solid-state image sensors: the pinned photodiode (PPD).
The PPD structure’s ability to completely remove electrical charges before starting a new capture cycle makes it an efficient approach, as the sensor can reset without unwanted background noise (kTC noise) or any lingering influence from the previous image frame. PPDs quickly became the go-to choice for consumer-grade silicon-based image sensors. Their low noise and high power efficiency made them a favorite among camera manufacturers.

Researchers at imec integrated a PPD structure into thin-film–transistor (TFT) image sensors to yield a hybrid prototype. The sensor structure also uses imec’s proprietary indium gallium zinc oxide (IGZO) technology for electron transport.

“You can call such systems ‘monolithic hybrid’ sensors, where the photodiode is not a part of the CMOS circuit [as in CMOS image sensors, in which silicon is used for light absorption], but is formed with another material as the photoactive layer,” Pawel Malinowski, Pixel Innovations program manager at imec, told EE Times Europe. “The spectrum this photodiode captures is something separate … By introducing an additional thin-film transistor in between, it enables separation of the storage and readout nodes, making it possible to fully deplete the photodiode and transfer all charges to the readout, [thereby] preventing the generation of kTC noise and reducing image lag.”

Unlike the conventional thin-film-based pixel architecture, imec’s TFT hybrid PPD structure introduces a separate thin-film transistor (TFT) to the design, which acts as a transfer gate and a photogate—in other words, it functions as a middleman. Here, imec’s IGZO technology serves as an effective electron transport layer, as it has higher electron mobility. Also acting as the gate dielectric, it contributes to the performance of the sensor by controlling the flow of charges and enhancing absorption characteristics.
With the new elements strategically placed within the traditional PPD structure, the prototype 4T image sensor showed a low readout noise of 6.1 e-, compared with >100 e- for the conventional 3T sensor, demonstrating its superior noise performance, imec stated. Because of IGZO's large bandgap, the TFT hybrid PPD structure also exhibits lower dark current than traditional CMOS image sensors. This means the image sensor can capture infrared images with less noise, distortion and interference, and more accuracy and detail, according to imec.
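For context on the kTC figure (a worked example of ours; the 1 fF sense-node capacitance is a hypothetical value, not an imec number): the rms reset noise charge on a capacitor is sqrt(kTC), and correlated double sampling can only cancel it when the photodiode transfers its charge completely, which is exactly what the PPD enables.

```python
# Worked example: rms kTC reset noise in electrons for a hypothetical 1 fF
# sense node at 300 K. sqrt(kTC)/q is what a reset leaves behind unless the
# pixel, like a PPD, transfers charge completely so CDS can cancel it.
import math

K_B = 1.380649e-23           # Boltzmann constant [J/K]
Q_E = 1.602e-19              # elementary charge [C]

def ktc_noise_e(c_farads: float, temp_k: float = 300.0) -> float:
    return math.sqrt(K_B * temp_k * c_farads) / Q_E

print(f"{ktc_noise_e(1e-15):.1f} e- rms at 1 fF, 300 K")   # -> ~12.7 e-
```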


Figure 1: Top (a) and cross-sectional (b) view of structure of TF-PPD pixels


By using thin-film absorbers, imec’s prototype image sensor can detect at SWIR wavelengths and beyond, imec said. Image sensors operating in the near-infrared range are already used in automotive applications and consumer apps like iPhone Face ID. Going to longer wavelengths, such as SWIR, enables better transmission through OLED displays, which leads to better “hiding” of the components behind the screen and reduction of the “notch.”


Malinowski said, “In automotive, going to longer wavelengths can enable better visibility in adverse weather conditions, such as visibility through fog, smoke or clouds, [and achieve] increased contrast of some materials that are hard to distinguish against a dark background—for example, high contrast of textiles against poorly illuminated, shaded places.” Using the thin-film image sensor could make intruder detection and monitoring in dark conditions more effective and cost-efficient. It could also aid in medical imaging, which uses SWIR to study veins, blood flow and tissue properties.


Looking ahead, imec plans to diversify the thin-film photodiodes that can be used in the proposed architecture. The current research tested two types of photodiodes: one sensitive to near-infrared and a QD photodiode sensitive to SWIR.


“Current developments were focused on realizing a proof-of-concept device, with many design and process variations to arrive at a generic module,” Malinowski said. “Further steps include testing the PPD structure with different photodiodes—for example, other OPD and QDPD versions. Furthermore, next-generation devices are planned to focus on a more specific use case, with a custom readout suitable for a particular application.


“SWIR imaging with quantum dots is one of the avenues for further developments and is also a topic with high interest from the imaging community,” Malinowski added. “We are open to collaborations with industrial players to explore and mature this exciting sensor technology.”

Go to the original article...

onsemi announces Hyperlux low power CIS for smart home

Image Sensors World        Go to the original article...

Press release: https://www.onsemi.com/company/news-media/press-announcements/en/onsemi-introduces-lowest-power-image-sensor-family-for-smart-home-and-office

onsemi Introduces Lowest Power Image Sensor Family for Smart Home and Office 

Hyperlux LP Image Sensors can extend battery life by up to 40%¹



What's New: Today onsemi introduced the Hyperlux LP image sensor family ideally suited for industrial and commercial cameras such as smart doorbells, security cameras, AR/VR/XR headsets, machine vision and video conferencing. These 1.4 µm pixel sensors deliver industry-leading image quality and low power consumption while maximizing performance to capture crisp, vibrant images even in difficult lighting conditions.

The product family also features a stacked architecture design that minimizes its footprint and at its smallest approaches the size of a grain of rice, making it ideal for devices where size is critical. Depending on the use case, customers can choose between the 5-megapixel AR0544, the 8-megapixel AR0830 or the 20-megapixel AR2020.

Why It Matters: Home and business owners continue to choose cameras to protect themselves more than any other security measure, with the market expected to triple by the end of the decade.² As a result, consumers are demanding devices that offer better image quality, reliability and longer battery life to improve the overall user experience.

With the image sensors, cameras can deliver clearer images and more accurate object detection even in harsh weather and lighting conditions. Additionally, these cameras are often placed in locations that can be difficult to access to replace or recharge batteries, making low power consumption a critical feature.

How It Works: The Hyperlux LP family is packed with features and proprietary technologies that optimize performance and resolution including:

  •  Wake on Motion – Enables the sensors to operate in a low-power mode that draws a fraction of the power needed in the full-performance mode. Once the sensor detects movement, it moves to a higher performance state in less time than it takes to snap a photo (a rough duty-cycle estimate follows this list).
  •  Smart ROI – Delivers more than one region of interest (ROI) to give a context view of the scene at reduced bandwidth and a separate ROI in original detail.
  •  Near-Infrared (NIR) Performance – Delivers superior image quality due to the innovative silicon design and pixel architecture, with minimal supplemental lighting.
  •  Low Power – Reduces thermal noise, which negatively impacts image quality, and eliminates the need for heat sinks, reducing the overall cost of the vision system.
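As a rough illustration of how a wake-on-motion mode stretches battery life, the sketch below averages power over a duty cycle; every number in it is invented for the example and is not an onsemi specification.

def avg_power_mw(p_low_mw, p_high_mw, duty_high):
    """Average draw when the camera idles in a low-power detect mode and runs
    at full performance only for a fraction `duty_high` of the time."""
    return p_low_mw * (1.0 - duty_high) + p_high_mw * duty_high

# Purely illustrative numbers (not onsemi specifications):
always_on = avg_power_mw(150.0, 150.0, 1.0)      # streaming continuously
wake_on_motion = avg_power_mw(5.0, 150.0, 0.10)  # active 10% of the time
print(f"average sensor power drops {always_on / wake_on_motion:.1f}x")
# Whole-device battery gains are smaller, since the sensor is only part of
# the system power budget.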

Supporting Quotes:
“By leveraging our superior analog design and pixel architecture, our sensors elevate the two most important elements people consider when buying a device, picture quality and battery life. Our new image sensor family delivers performance that matters with a significantly increased battery life and exquisite, highly detailed images,” said Ross Jatou, senior vice president and general manager, Intelligent Sensing Group, onsemi.

In addition to smart home devices, one of the other applications the Hyperlux LP family can improve is the office meeting experience with more intuitive, seamless videoconferencing solutions.
“Our video collaboration solutions require high-quality image sensors that bring together multiple factors for the best user experience. The superior optical performance, innovative features and extremely low power consumption of the Hyperlux LP image sensors enable us to deliver a completely immersive virtual meeting experience in highly intelligent and optimized videoconferencing systems,” said Ashish Thanawala, Sr. Director of Systems Engineering, Owl Labs.

What's Next: The Hyperlux LP Image Sensor Family will be available in the fourth quarter of 2023.

More Information:
 Learn more about the AR2020, the AR0830 and the AR0544.
 Read the blog: A Closer Look - Hyperlux LP Image Sensors

¹ Based on internal tests conducted under specific conditions. Actual results may vary based on device, usage patterns, and other external factors.
² Status of the CMOS Image Sensor Industry, Yole Intelligence Report, 2023.

Go to the original article...

ESSCIRC 2023 Lecture on "circuit insights" by Dr. Sara Pellegrini

Image Sensors World        Go to the original article...


In this invited talk at ESSCIRC 2023, Dr. Pellegrini shares her insights on circuits and sensor design through her research career at Politecnico di Milano, Heriot-Watt and now at STMicroelectronics. The lecture covers the basics of LiDAR and SPAD sensors, and various design challenges such as low signal strength and background illumination.

Go to the original article...

Dr. Robert Henderson’s lecture on time-of-flight SPAD cameras

Image Sensors World        Go to the original article...


 

Imaging Time: Cameras for the Fourth Dimension

Abstract
Time is often considered as the fourth dimension, along with the length, width and depth that form the fabric of space-time. Conventional cameras observe only two of those dimensions, inferring depth from spatial cues, and record time only coarsely relative to many fast phenomena in the natural world. In this talk, I will introduce the concept of time cameras, devices based on single photon avalanche diodes (SPADs) that can record the time dimension of a scene at the picosecond scales commensurate with the speed of light. This talk will chart 2 decades of my research into these devices, which have seen their transformation from a research curiosity to a mainstream semiconductor technology with billions of SPAD devices in consumer use in mobile phones for depth-sensing autofocus-assist. We will illustrate the talk with videos and demonstrations of ultrafast SPAD cameras developed at the University of Edinburgh. I am proud that my group’s research maintains the University’s position at the forefront of imaging technology, which has transformed our lives, seeing the transition from chemical film to digital cameras, the omnipresence of camera phones and video meetings. In the near future, SPAD-based time cameras can also be expected to play a major societal role, within optical radars (LIDARs) for robotic vision and driverless cars, surgical guidance for cancer and perhaps even to add two further dimensions to the phone camera in your pocket!

Biography
Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK where he worked on the world’s first single chip video camera. From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division he developed image sensors for mobile phone applications. He joined University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of STMicroelectronics FlightSense series, which perform an autofocus-assist now present in over 1 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.

Go to the original article...

Image Sensing Topics at Upcoming IEDM 2023 Dec 9-13 in San Francisco

Image Sensors World        Go to the original article...

The 69th annual IEEE International Electron Devices Meeting (IEDM) will be held in San Francisco Dec. 9-13. This year there are three sessions dealing with advanced image sensing topics. You can find summaries of all of these papers by going here (https://submissions.mirasmart.com/IEDM2023/Itinerary/EventsAAG.aspx) and then clicking on the relevant sessions and papers within each one:
 
Session #8 on Monday, Dec. 11 is “Advanced Photonics for Image Sensors and High-Speed Communications.” It features six papers: the first three deal with device and integration concepts for sub-diffraction color filters targeting key imaging performance indicators, while the remaining three deal with devices and technologies for high-speed communication systems.

  1.  IMEC will describe a novel sub-micron integration approach to color-splitting, to match human eye color sensitivity.
  2.  VisEra Technologies will describe the use of nano-light pillars to improve the quantum efficiency and signal-to-noise ratio (SNR) of color filters on CMOS imaging arrays under low-light conditions.
  3.  Samsung will detail a metasurface nano-prism structure for wide field-of-view lenses, demonstrating 25% higher sensitivity and 1.2 dB increased SNR vs. conventional micro-lenses.
  4.  National University of Singapore will describe the integration of ferroelectric material into a LiNbO3-on-insulator photonic platform, demonstrating non-volatile memory and high-efficiency modulators with an efficiency of 66 pm/V.
  5.  IHP will discuss the first germanium electro-optical modulator operating at 100 GHz in a SiGe BiCMOS photonics technology.
  6.  An invited paper from Intel will discuss the first 256 Gbps WDM transceiver with eight 200 GHz-spaced wavelengths simultaneously modulated at 32 Gbps, and with a bit-error-rate less than 1e-12.

 
Session #20 on Tuesday, Dec. 12 is Emerging Photodetectors. It features five papers describing recent developments in emerging photodetectors spanning the MIR to the DUV spectral range, and from group IV and III-V sensors to organic detectors.

  1.  The first paper by KAIST presents a fully CMOS-compatible Ge-on-Insulator platform for detection of wavelengths beyond 4 µm.
  2.  The second paper by KIST (not a typo) presents a new record-low-jitter SPAD device integrated into a CIS process technology, covering a spectral range from the visible up to NIR.
  3.  The third paper by KAIST describes a wavelength-tunable detection device combining optical gratings and phase-change materials, reaching wavelengths up to 1700 nm.
  4.  The University of Science and Technology of China will report on a dual-function tunable emitter and NIR photodetector combination based on III-V GaN/AlGaN nanowires on silicon.
  5.  An invited paper from France’s CNRS gives an overview on next-generation sustainable organic photodetectors and emitters.

 
Session #40 on Wednesday, Dec. 13 features six papers describing the most recent advances in image sensors.

  1.  Samsung will describe a 0.5 µm pixel, 3 layers-stacked, CMOS image sensor (CIS) with in-pixel Cu-Cu bonding technology featuring improved conversion gain and noise.
  2.  Omnivision will present a 2.2 µm, 2-layer stacked high dynamic range VDGS CIS with a 1x2 shared structure offering dual conversion gain and achieving low FPN.
  3.  STMicroelectronics will describe a 2.16 µm 6T BSI VDGS CIS using deep trench capacitors and achieving 90 dB dynamic range using spatially-split exposure.
  4.  Meta will describe a 2-megapixel CIS with a 4.23 µm pixel pitch, offering a block-parallel A/D architecture and featuring programmable sparse capture with a fine-grained gating scheme for power saving.
  5.  Canon will introduce a new twisted-photodiode CIS structure with a 6 µm pixel pitch, enabling all-directional autofocus with high speed and accuracy, and 95 dB DR.
  6.  Shanghai Jiao Tong University will present a 64x64-pixel organic imager prototype, based on a novel hole transporting layer (HTL)-free structure achieving the highest recorded low-light performance.

 
Full press release about the conference is below.

2023 IEEE International Electron Devices Meeting to Highlight Advances in Critical Semiconductor Technologies with the Theme, “Devices for a Smart World Built Upon 60 Years of CMOS”

Four Focus Sessions on topics of intense research interest:

  •  3D Stacking for Next-Generation Logic & Memory by Wafer Bonding and Related Technologies
  •  Logic, Package and System Technologies for Future Generative AI
  •  Neuromorphic Computing for Smart Sensors
  •  Sustainability in Semiconductor Device Technology and Manufacturing

 
SAN FRANCISCO, CA – Since it began in 1955, the IEEE International Electron Devices Meeting (IEDM) has been where the world’s best and brightest electronics technologists go to learn about the latest breakthroughs in semiconductor and related technologies. That tradition continues this year, when the 69th annual IEEE IEDM conference takes place in-person December 9-13, 2023 at the Hilton San Francisco Union Square hotel, with online access to recorded content available afterward.
 
The 2023 IEDM technical program, supporting the theme, “Devices for a Smart World Built Upon 60 Years of CMOS,” will consist of more than 225 presentations plus a full slate of panels, Focus Sessions, Tutorials, Short Courses, a career luncheon, supplier exhibit and IEEE/EDS award presentations.
 
“The IEDM offers valuable insights into where the industry is headed, because the leading-edge work presented at the conference showcases major trends and paradigm shifts in key semiconductor technologies,” said Jungwoo Joh, IEDM 2023 Publicity Chair and Process Development Manager at Texas Instruments. “For example, this year many papers discuss ways to stack devices in 3D configurations. This is of course not new, but two things are especially noteworthy about this work. One is that it isn’t just happening with conventional logic and memory devices, but with sensors, power, neuromorphic and other devices as well. Also, many papers don’t describe futuristic laboratory studies, but rather specific hardware demonstrations that have generated solid results, opening pathways to commercial feasibility.”
 
“Finding the right materials and device configurations to develop transistors that will perform well with acceptable levels of reliability remains a key challenge,” said Kang-ill Seo, IEDM 2023 Publicity Vice Chair and Vice President, Semiconductor R&D, Samsung Semiconductor. “This year’s program shows that electrothermal considerations remain a key focus, particularly with attempts to add functionality to a chip’s interconnect, or wiring, which is fabricated using low-temperature processes.”
 
Here are details of the 2023 IEEE International Electron Devices Meeting:
 
Tutorial Sessions – Saturday, Dec. 9
The Saturday tutorial sessions on emerging technologies are presented by experts in the field to bridge the gap between textbook-level knowledge and leading-edge current research, and to introduce attendees to new fields of interest. There are three time slots, each with two tutorials running in parallel:
1:30 p.m. - 2:50 p.m.
• Innovative Technology for Beyond 2 nm, Matthew Metz, Intel
• CMOS+X: Functional Augmentation of CMOS for Next-Generation Electronics, Sayeef Salahuddin, UC-Berkeley
3:05 p.m. - 4:25 p.m.
• Reliability Challenges of Emerging FET Devices, Jacopo Franco, Imec
• Advanced Packaging and Heterogeneous Integration - Past, Present & Future, Madhavan Swaminathan, Penn State
4:40 p.m. - 6:00 p.m.
• Synapses, Circuits, and Architectures for Analog In-Memory Computing-Based Deep Neural Network Inference Hardware Acceleration, Irem Boybat, IBM
• Tools for Device Modeling: From SPICE to Scientific Machine Learning, Keno Fischer, JuliaHub
 
Short Courses – Sunday, Dec. 10
In contrast to the Tutorials, the full-day Short Courses are focused on a single technical topic. They offer the opportunity to learn about important areas and developments, and to network with global experts.

• Transistor, Interconnect, and Chiplets for Next-Generation Low-Power & High-Performance Computing, organized by Yuri Y. Masuoka, Samsung

  •  Advanced Technology Requirement for Edge Computing, Jie Deng, Qualcomm
  •  Process Technology toward 1nm and Beyond, Tomonari Yamamoto, Tokyo Electron
  •  Empowering Platform Technology with Future Semiconductor Device Innovation, Jaehun Jeong, Samsung
  •  Future Power Delivery Process Architectures and Their Capability and Impact on Interconnect Scaling, Kevin Fischer, Intel
  •  DTCO/STCO in the Era of Vertical Integration, YK Chong, ARM
  •  Low Power SOC Design Trends/3D Integration/Packaging for Mobile Applications, Milind Shah, Google

 
• The Future of Memory Technologies for High-Performance Memory and Computing, organized by Ki Il Moon, SK Hynix

  •  High-Density and High-Performance Technologies for Future Memory, Koji Sakui, Unisantis Electronics Singapore/Tokyo Institute of Technology
  •  Advanced Packaging Solutions for High Performance Memory and Compute, Jaesik Lee, SK Hynix
  •  Analog In-Memory Computing for Deep Learning Inference, Abu Sebastian, IBM
  •  The Next Generation of AI Architectures: The Role of Advanced Packaging Technologies in Enabling Heterogeneous Chiplets, Raja Swaminathan, AMD
  •  Key Challenges and Directional Path of Memory Technology for AI and High-Performance Computing, Keith Kim, NVIDIA
  •  Charge-Trapping Memories: From the Fundamental Device Physics to 3D Memory Architectures (3D NAND, 3D NOR, 3D DRAM) and Computing in Memory (CIM), Hang-Ting (Oliver) Lue, Macronix

 
Plenary Presentations – Monday, Dec. 11

  •  Redefining Innovation: A Journey forward in the New Dimension Era, Siyoung Choi, President & GM, Samsung Foundry Business, Device Solutions Division
  •  The Next Big Thing: Making Memory Magic and the Economics Beyond Moore's Law, Thy Tran, Vice President of Global Frontend Procurement, Micron
  •  Semiconductor Challenges in the 5G and 6G Technology Platforms, Björn Ekelund, Corporate Research Director, Ericsson

 
Evening Panel Session – Tuesday evening, Dec. 12
The IEDM evening panel session is an interactive forum where experts give their views on important industry topics, and audience participation is encouraged to foster an open exchange of ideas. This year’s panel will be moderated by Dan Hutcheson, Vice Chair at TechInsights.

  •  AI: Semiconductor Catalyst? Or Disrupter? Artificial Intelligence (AI) has long been a hot topic. In 2023 it became super-heated when large language models became readily available to the public. This year’s IEDM will not rehash what has already been dragged through the media. Instead, it will bring together industry experts to have a conversation about how AI is changing the semiconductor industry and to ask them how they are using AI to transform their efforts. The topics will be wide-ranging, from how AI will drive demand for semiconductors, to how it’s changing design and manufacturing, and even to how it will change the jobs and careers of those working in it.

 
Luncheon – Tuesday, Dec. 12
There will be a career-focused luncheon featuring industry and scientific leaders talking about their personal experiences in the context of career growth. The discussion will be moderated by Jennifer Zhao, President/CEO, ams OSRAM USA Inc. The speakers will be:

  •  Ilesanmi Adesida, University Provost and Acting President, Nazarbayev University, Kazakhstan -- Professor Ilesanmi Adesida is a scientist/engineer and an experienced administrator in both scientific and educational circles, with more than 350 peer-reviewed articles and 250 presentations at international conferences.
  •  Isabelle Ferain, Vice-President of Technology Development, GlobalFoundries -- Dr. Ferain oversees GF’s technology development mission in its 300mm fabs in the US and Europe.

 
Vendor Exhibition/MRAM Poster Session/MRAM Global Innovation Forum

  •  A vendor exhibition will be held once again.
  •  A special poster session dedicated to MRAM (magnetoresistive RAM memory) will take place during the IEDM on Tuesday, Dec. 12 from 2:20 p.m. to 5:30 p.m., sponsored by the IEEE Magnetics Society.
  •  Also sponsored by the IEEE Magnetics Society, the 15th MRAM Global Innovation Forum will be held in the same venue after the IEDM conference concludes, on Thursday, Dec. 14.

 
For registration and other information, visit www.ieee-iedm.org.
 
 
About IEEE & EDS
IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. The IEEE Electron Devices Society is dedicated to promoting excellence in the field of electron devices, and sponsors the IEEE IEDM.

Go to the original article...

Metalenz announces polarization sensor for face ID

Image Sensors World        Go to the original article...

Press release: https://metalenz.com/metalenz-launches-polar-id-enabling-simple-secure-face-unlock-for-smartphones/

Metalenz Launches Polar ID, Enabling Simple, Secure Face Unlock for Smartphones 

  • The world’s first polarization sensor for smartphones, Polar ID provides ultra-secure facial authentication in a condensed footprint, lowering implementation cost and complexity.
  •  Now demonstrated on Qualcomm Technologies’ latest Snapdragon mobile platform, Polar ID is poised to drive large-scale adoption of secure face unlock across the Android ecosystem.

Boston, MA – October 26, 2023 – Meta-optics industry leader Metalenz unveiled Polar ID, a revolutionary new face unlock solution, at Qualcomm Technologies’ annual Snapdragon Summit this week. As the world’s only consumer-grade imaging system that can sense the full polarization state of light, Polar ID enables the next level of biometric security. Using breakthrough advances in meta-optic capability, Polar ID accurately captures the unique “polarization signature” of a human face. With this additional layer of information, even the most sophisticated 3D masks and spoof instruments are immediately detected as non-human.


Facial authentication provides a seamless method for unlocking phones and allowing digital payment. However, making the solution sufficiently secure has required expensive, bulky, and often power-hungry optical modules; historically, this has limited the implementation of face unlock to only a few high-end phone models. Polar ID harnesses meta-optic technology to extract additional information such as facial contour details and to detect human tissue liveness from a single image. It is significantly more compact and cost-effective than incumbent “structured light” face authentication solutions, which require an expensive dot-pattern projector and multiple images.
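Metalenz has not published its processing pipeline, but a generic division-of-focal-plane polarimeter illustrates what a polarization image adds. With four analyzer orientations (0°, 45°, 90°, 135°) per super-pixel, the linear Stokes parameters, and from them the degree and angle of linear polarization, follow directly. A minimal, purely illustrative numpy sketch:

import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyzer orientations (a generic
    polarimetry textbook recipe, not Metalenz's disclosed method)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # 0 deg vs 90 deg component
    s2 = i45 - i135                      # +45 deg vs -45 deg component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization
    return s0, dolp, aolp

Skin, silicone and printed photographs return different (DoLP, AoLP) signatures, which is the kind of cue a polarization-based liveness check can exploit.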


Now demonstrated on a smartphone reference design powered by the new Snapdragon® 8 Gen 3 Mobile Platform, Polar ID has the efficiency, footprint, and price point to enable any Android smartphone OEM to bring the convenience and security of face unlock to the hundreds of millions of mobile devices that currently use fingerprint sensors.

“Size, cost, and performance, those are the key metrics in the consumer industry,” said Rob Devlin, Metalenz CEO & Co-founder. “Polar ID offers an advantage in all three. It’s small enough to fit in the most challenging form factors, eliminating the need for a large notch in the display. It’s secure enough that it doesn’t get fooled by the most sophisticated 3D masks. It’s substantially higher resolution than existing facial authentication solutions, so even if you’re wearing sunglasses and a surgical mask, the system still works. As a result, Polar ID delivers secure facial recognition at less than half the size and cost of incumbent solutions.”


“With each new generation of our flagship Snapdragon 8 series, our goal is to deliver the next generation of cutting-edge smartphone imaging capabilities to consumers. Our advanced Qualcomm® Spectra™ ISP and Qualcomm® Hexagon™ NPU were specifically designed to enable complex new imaging solutions, and we are excited to work with Metalenz to support their new Polar ID biometric imaging solution on our Snapdragon mobile platform for the first time,” said Judd Heape, VP of Product Management, Qualcomm Technologies, Inc.


“Polar ID is a uniquely powerful biometric imaging solution that combines our polarization image sensor with post-processing algorithms and sophisticated machine learning models to reliably and securely recognize and authenticate the phone’s registered user. Working closely with Qualcomm Technologies to implement our solution on their reference smartphone powered by Snapdragon 8 Gen 3, we were able to leverage the advanced image signal processing capabilities of the Qualcomm Spectra ISP while also implementing mission-critical aspects of our algorithms in the secure framework of the Qualcomm Hexagon NPU, to ensure that the solution is not only spoof-proof but also essentially unhackable,” said Pawel Latawiec, CTO of Metalenz. “The result is an extremely fast and compute-efficient face unlock solution ready for OEMs to use in their next generation of Snapdragon 8 Gen 3-powered flagship Android smartphones.”


Polar ID is under early evaluation with several top smartphone OEMs, and additional evaluation kits will be made available in early 2024. Metalenz will exhibit its revolutionary Polar ID solution at MWC Barcelona and is now booking meetings to showcase a live demo of the technology to mobile OEMs.
Contact sales@metalenz.com to reserve your demo.
 


 

Go to the original article...

Fraunhofer IMS 10th CMOS Imaging Workshop Nov 21-22 in Duisburg, Germany

Image Sensors World        Go to the original article...

https://www.ims.fraunhofer.de/en/Newsroom/Fairs-and-events/10th-cmos-imaging-workshop.html

10th CMOS Imaging Workshop 

What to expect
You are kindly invited to an exciting event that will promote exchange among users, developers and researchers of optical sensing, to enhance synergy and pave the way to great applications and ideas.

Main topics

  •  Single photon imaging
  •  Spectroscopy, scientific and medical imaging
  •  Quantum imaging
  •  Image sensor technologies

The workshop will not be limited to CMOS as a sensor technology, but will be fundamentally open to applications, technologies and methods based on advanced optical sensing.




Go to the original article...

Prophesee announces GenX320 low power event sensor for IoT applications

Image Sensors World        Go to the original article...

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-launches-the-world-s-smallest-and-most-power-efficient-event-based-vision-sensor-bringing-more-intelligence-privacy-and-safety-than-ever-to-consumer-edge-ai-devices

Prophesee launches the world’s smallest and most power-efficient event-based vision sensor, bringing more intelligence, privacy and safety than ever to consumer Edge-AI devices

Prophesee’s latest event-based Metavision® sensor - GenX320 - delivers new levels of performance including ultra-low power, low latency, high flexibility for efficient integration in AR/VR, wearables, security and monitoring systems, touch-free interfaces, always-on IoT and many more

October 16, 2023 2pm CET PARIS –– Prophesee SA, inventor of the world’s most advanced neuromorphic vision systems, today announced the availability of the GenX320 Event-based Metavision sensor, the industry’s first event-based vision sensor developed specifically for integration into ultra-low-power Edge AI vision devices. The fifth generation Metavision sensor, available in a tiny 3x4mm die size, expands the reach of the company’s pioneering technology platform into a vast range of fast-growing intelligent Edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, always-on smart IoT devices and many more.

The GenX320 event-based vision sensor builds on Prophesee’s track record of proven success and expertise in delivering the speed, low latency, dynamic range and power efficiency and privacy benefits of event-based vision to a diverse array of applications.

The 320x320 6.3μm pixel BSI stacked event-based vision sensor offers a tiny 1/5” optical format. It has been developed with a specific focus on the unique requirements of efficient integration of innovative event sensing in energy-, compute- and size-constrained embedded at-the-edge vision systems. The GenX320 enables robust, high-speed vision at ultra-low power and in challenging operating and lighting conditions.
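For readers new to event-based sensing, the toy model below (not Prophesee's EVT data format or pipeline; the contrast threshold is an assumed parameter) captures the essential behavior: each pixel independently emits an (x, y, timestamp, polarity) event when its log intensity changes, so a static scene produces no data to read out.

import numpy as np
from dataclasses import dataclass

@dataclass
class Event:
    x: int         # pixel column (0..319 on a 320x320 array)
    y: int         # pixel row
    t_us: int      # timestamp, microseconds
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def events_from_frames(prev, curr, t_us, threshold=0.15):
    """Toy event generation: a pixel fires only when its log intensity
    changes by more than `threshold` (an assumed contrast figure)."""
    delta = np.log1p(curr.astype(np.float64)) - np.log1p(prev.astype(np.float64))
    ys, xs = np.nonzero(np.abs(delta) > threshold)
    return [Event(int(x), int(y), t_us, 1 if delta[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]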

GenX320 benefits include:

  •  Low latency µsec resolution timestamping of events with flexible data formatting.
  •  On-chip intelligent power management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep sleep and standby modes are also featured.
  •  Easy integrability/interfacing with standard SoCs with multiple integrated event data pre-processing, filtering, and formatting functions to minimize external processing overhead.
  •  MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.
  •  AI-ready: on-chip histogram output compatible with multiple AI accelerators.
  •  Sensor-level privacy enabled by the event sensor’s inherently sparse, frameless event data and built-in static scene removal.
  •  Native compatibility with Prophesee Metavision Intelligence, the most comprehensive, free, event-based vision software suite, used by a fast-growing community of 10,000+ users.

“The low-power Edge-AI market offers a diverse range of applications where the power efficiency and performance characteristics of event sensors are ideally suited. We have built on our foundation of commercial success in other application areas and developed this new event-based Metavision sensor to address the needs of Edge system developers with a sensor that is easy to integrate, configure and optimize for multiple compelling use cases in motion and object detection, presence awareness, gesture recognition, eye tracking, and other high growth areas,” said Luca Verre, CEO and co-founder of Prophesee.


Specific use case potential

  •  High speed eye-tracking for foveated rendering for seamless interaction in AR/VR/XR headsets
  •  Low latency touch-free human machine interface in consumer devices (TVs, laptops, game consoles, smart home appliances and devices, smart displays and more)
  •  Smart presence detection and people counting in IoT cameras and other devices
  •  Ultra-low power always-on area monitoring systems
  •  Fall detection cameras in homes and health facilities

Availability
The GenX320 is available for purchase from Prophesee and its sales partners. It is supported by a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip-on-board (COB) GenX320 module, or a compact optical flex module. In addition, Prophesee is offering a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as an STM32 MCU, enabling faster time-to-market.


Early adopters
Zinn Labs
“Zinn Labs is developing the next generation of gaze tracking systems built on the unique capabilities of Prophesee’s Metavision event sensors. The new GenX320 sensor meets the demands of eye and gaze movements that change on millisecond timescales. Unlike traditional video-based gaze tracking pipelines, Zinn Labs is able to leverage the GenX320 sensor to track features of the eye with a fraction of the power and compute required for full-blown computer vision algorithms, bringing the footprint of the gaze tracking system below 20 mW. The small package size of the new sensor makes this the first time an event-based vision sensor can be applied to space-constrained head-mounted applications in AR/VR products. Zinn Labs is happy to be working with Prophesee and the GenX320 sensor as we move towards integrating this new sensor into upcoming customer projects.”
Kevin Boyle, CEO & Founder
 

XPERI
“Privacy continues to be one of the biggest consumer concerns when vision-based technology is used in our products such as DMS and TV services. Prophesee’s event-based Metavision technology enables us to take our ‘privacy by design’ principle to an even more secure level by allowing scene understanding without the need to have explicit visual representation of the scene. By capturing only changes in every pixel, rather than the entire scene as with traditional frame-based imaging sensors, our algorithms can derive knowledge to sense what is in the scene, without a detailed representation of it. We have developed a proof-of-concept demo that demonstrates DMS is fully possible using neuromorphic sensors. Using a 1MP neuromorphic sensor we can infer similar performance as an active NIR illumination 2MP vision sensor-based solution. Going forward, we focus on the GenX320 neuromorphic sensor that can be used in privacy sensitive smart devices to improve user experience.”
Petronel Bigioi, Chief Technology Officer
 

ULTRALEAP
“We have seen the benefits of Prophesee’s event-based sensors in enabling hands-free interaction via highly accurate gesture recognition and hand tracking capabilities in Ultraleap’s TouchFree application. Their ability to operate in challenging environmental conditions, at very efficient power levels, and with low system latency enhances the overall user experience and intuitiveness of our touch-free UIs. With the new GenX320 sensor, these benefits of robustness, low power consumption, latency and high dynamic range can be extended to more types of applications and devices, including battery-operated and small form factor systems, proliferating hands-free use cases for increased convenience and ease of use in interacting with all sorts of digital content.”
Tom Carter, CEO & Co-founder

Additional coverage on EETimes:

https://www.eetimes.com/prophesee-reinvents-dvs-camera-for-aiot-applications/

Prophesee’s GenX320 chip, sensor die at the top, processor at the bottom. ESP refers to the digital event signal processing pipeline. (Source: Prophesee)

 

Go to the original article...

Omnivision’s new sensor for security cameras

Image Sensors World        Go to the original article...

OMNIVISION Announces New 4K2K Resolution Image Sensor for Home and Professional Security Cameras
 
The OS08C10 is a high-performance 8MP resolution, small-form-factor image sensor with on-chip staggered and DAG HDR technology, designed to produce superb video/image quality in challenging lighting environments
 
SANTA CLARA, Calif. – October 24, 2023 – OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the new OS08C10, an 8-megapixel (MP) backside illumination (BSI) image sensor that features both staggered high dynamic range (HDR) and single exposure dual analog gain (DAG) for high-performance imaging in challenging lighting conditions. The 1.45-micron (µm) BSI pixel supports 4K2K resolution and high frame rates. It comes in a small 1/2.8-inch optical format, a popular size for home and professional security, IoT and action cameras.
 
“Our new 1.45 µm pixel OS08C10 image sensor provides improved sensitivity and optimized readout noise, closing the gap with big-pixel image sensors that have traditionally been required for high-performance imaging in the security market,” said Cheney Zhang, senior marketing manager, OMNIVISION. “The OS08C10 supports both staggered HDR and DAG HDR. Staggered HDR extends dynamic range in both bright and low lighting conditions; the addition of built-in DAG provides single-exposure HDR support and reduces motion artifacts. Our new feature-packed sensor supports 4K2K resolution for superior image quality with finer details and enhanced clarity.”
 
OMNIVISION’s OS08C10 captures real-time 4K video at 60 frames per second (fps) with minimal artifacts. Its selective conversion gain (SCG) pixel design allows the sensor to flexibly select low or high conversion gain, depending on the lighting conditions. The sensor adopts new correlated multi-sampling (CMS) to further reduce readout noise and improve SNR1 and low-light performance. The OS08C10’s on-chip defective pixel correction (DPC) improves quality and reliability above and beyond standard devices by providing real-time correction of defective pixels that can develop throughout the sensor’s life cycle, especially in harsh operating conditions.
 
The OS08C10 is built on OMNIVISION’s PureCel®Plus-S stacked-die technology, enabling high-resolution 8MP in a small 1.45 µm BSI pixel. At 300 mW (60 fps), the OS08C10 achieves the lowest power consumption on the market. OMNIVISION’s OS08C10 is a cost-effective 4K2K solution for security, IoT and action cameras applications.
 
The OS08C10 is sampling now and will be in mass production in Q1 2024. For more information, contact your OMNIVISION sales representative: www.ovt.com/contact-sales.


 

Go to the original article...

Sony introduces IMX900 stacked CIS

Image Sensors World        Go to the original article...

Sony Semiconductor Solutions to Launch 1/3-Type-Lens-Compatible, 3.2-Effective-Megapixel Stacked CMOS Image Sensor with Global Shutter for Industrial Use Featuring Highest Resolution in This Class in the Industry

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX900, a 1/3-type-lens-compatible, 3.2-effective-megapixel stacked CMOS image sensor with a global shutter for industrial use that boasts the highest resolution in its class.
The new sensor product employs an original pixel structure to dramatically improve light condensing efficiency and near infrared sensitivity compared to conventional products, enabling miniaturization of pixels while maintaining the key characteristics required of industrial image sensors. This design achieves the industry’s highest resolution of 3.2 effective megapixels for a 1/3.1-type, global shutter system which fits in the S-mount (M12), the mount widely used in compact industrial cameras and built-in vision cameras.

The new product will contribute to the streamlining of industrial tasks in numerous ways, by serving in applications such as code reading in the logistics market and assisting in automating manufacturing processes using picking robot applications on production lines, thereby helping to resolve issues in industrial applications.

With demand for automation and manpower savings on the rise in every industry, SSS’s original Pregius S™ global shutter technology contributes to improved image recognition by enabling high-speed, high-precision, motion distortion-free imaging in a compact design. The new sensor utilizes a unique pixel structure developed based on Pregius S, moving the memory unit that was previously located on the same substrate as the photodiode to a separate signal processing circuit area. This new design makes it possible to enlarge the photodiode area, enabling pixel miniaturization (2.25 μm) while maintaining a high saturation signal volume, successfully delivering a higher pixel count of approximately 3.2 effective megapixels for a 1/3.1-type sensor.

Moving the memory unit to the signal processing circuit area has also increased the aperture ratio, bringing significant improvements to both incident light angle dependency and quantum efficiency. These features enable a much greater level of flexibility in the lens design for the cameras which employ this sensor. Additionally, a thicker photodiode area enhances the near infrared wavelength (850 nm) sensitivity, and nearly doubles the quantum efficiency compared to conventional products.

This compact, 1/3.1-type product is available in a package size that fits in the S-mount (M12), the versatile mount type used in industrial applications. It can be used in a wide range of applications where more compact, higher performance product designs are desired, such as in compact cameras for barcode readers in the logistics market, picking robot cameras on production lines, and the automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) that handle transportation tasks for workers.

Main Features

  •  Industry’s highest resolution for an image sensor with a global shutter compatible with a 1/3-type lens, at approximately 3.2 effective megapixels
  •  Vastly improved incident light angle dependency lends greater flexibility to lens design
  • Delivers approximately double the quantum efficiency of conventional products at near-infrared wavelengths
  • Includes on-chip features for greater convenience in reducing post-production image processing load
  • High-speed, 113 fps imaging


Cross-section of pixel structure
Product using conventional Pregius S technology (left) and the IMX900 using the new pixel structure (right)

Example of effects due to improved incident light angle dependency

Imaging comparison using near-infrared lighting (850 nm)
(Comparison in 2.25 μm pixel equivalent using conventional Pregius structure)


Usage example of Fast Auto Exposure function




Go to the original article...

Gpixel introduces 5MP and 12MP MIPI-enabled CIS

Image Sensors World        Go to the original article...

Gpixel adds MIPI-enabled 5 MP and 12 MP NIR Global Shutter image sensors to popular GMAX family


October 18, 2023, Changchun, China: Gpixel announces the pin-compatible GMAX3405 and GMAX3412 CMOS image sensors - both based on a high-performance 3.4 μm charge-domain global shutter pixel - to complete its C-mount range of GMAX products. With options for readout via either LVDS or MIPI channels, these new sensors are optimized for easy integration into cost-sensitive applications in machine vision, industrial bar code reading, logistics, and traffic.


GMAX3405 provides a 2448(H) x 2048(V), 5 MP resolution in a 2/3” optical format. In 10-bit mode, reading out through all 12 pairs of LVDS channels, the frame rate is over 164 fps; in 12-bit mode, 100 fps can be achieved. Using the 4 MIPI D-PHY channels, the maximum frame rate is 73 fps at 12-bit depth. GMAX3412 provides a 4096(H) x 3072(V), 12 MP resolution in a 1.1” optical format. In 10-bit mode, reading out through all 16 pairs of LVDS channels, the frame rate is over 128 fps; in 12-bit mode, 60 fps can be achieved. Using the 4 MIPI D-PHY channels, the maximum frame rate is 30 fps at 12-bit depth. In both sensors, various multiplexing options are available for both LVDS and MIPI readout to reduce the number of lanes.
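A quick back-of-envelope check (raw pixel payload only, ignoring interface framing and overhead, so real link rates will be somewhat higher) shows how these resolutions, bit depths and frame rates map onto lane bandwidth:

def payload_gbps(width, height, fps, bits):
    """Raw pixel data rate in Gbit/s, ignoring protocol overhead."""
    return width * height * fps * bits / 1e9

# GMAX3405: 10-bit, 164 fps over 12 LVDS pairs
total = payload_gbps(2448, 2048, 164, 10)   # ~8.2 Gbit/s
print(f"{total:.1f} Gbit/s total, {total / 12:.2f} Gbit/s per LVDS pair")

# GMAX3412: 12-bit, 30 fps over 4 MIPI D-PHY lanes
total = payload_gbps(4096, 3072, 30, 12)    # ~4.5 Gbit/s
print(f"{total:.1f} Gbit/s total, {total / 4:.2f} Gbit/s per MIPI lane")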

The 3.4 μm charge-domain global shutter pixel achieves a full well capacity of 10 ke- and noise of 3.6 e- at the default x1 PGA gain, down to 1.5 e- at the maximum gain setting (x16), delivering up to 68.8 dB linear dynamic range. The advanced pixel design and Red Fox technology combined bring a peak QE of 75% @ 540 nm, a NIR QE of 33% @ 850 nm, a parasitic light sensitivity of -88 dB and an excellent angular response of >15° @ 80% response. All of this is combined with a multislope HDR mode and ultra-short exposure time modes down to 1 µs.
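The quoted linear dynamic range follows directly from the full well and read noise, as this one-line check confirms:

import math

def linear_dr_db(full_well_e, read_noise_e):
    """Linear dynamic range: full well over read noise, in dB."""
    return 20 * math.log10(full_well_e / read_noise_e)

print(f"{linear_dr_db(10_000, 3.6):.1f} dB")  # ~68.9 dB, matching the quoted 68.8 dB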


“The GMAX family was originally known for the world’s first 2.5 μm global shutter pixel. As the product family grows, we are leveraging the advanced technology that makes the 2.5 μm pixel possible to bring more generous light sensitivity with larger pixel sizes fitting mainstream optical formats. With the addition of the MIPI interface, pin-compatibility and excellent NIR response, these 2 new models bring more flexibility and cost-effectiveness to the GMAX product family,” says Wim Wuyts, Gpixel’s Chief Commercial Officer.


Both GMAX3405 and GMAX3412 are housed in 176-pin ceramic LGA packages and are pin-compatible with each other. The outer dimensions of the 5 MP and 12 MP sensors are 17.60 mm x 15.80 mm and 22.93 mm x 19.39 mm, respectively. The LGA pad pattern is optimized for reliable solder connections, and the sensor assembly includes a double-sided AR-coated cover glass lid.

Engineering samples of both products, in both color and monochrome variants, can be ordered today for delivery in November 2023. For more information about Gpixel’s roadmap of products for industrial imaging, please contact info@gpixel.com to arrange for an overview.

Go to the original article...

Galaxycore announces dual analog gain HDR CIS

Image Sensors World        Go to the original article...

Press release: https://en.gcoreinc.com/news/detail-66

GalaxyCore Unveils Industry's First DAG Single-Frame HDR 13-Megapixel CIS

2023.08.11

GalaxyCore has officially launched the industry's first 13-megapixel image sensor with single-frame high dynamic range (HDR) capability – the GC13A2. This groundbreaking 1/3.1", 1.12 μm pixel back-illuminated CIS features GalaxyCore's unique Dual Analog Gain (DAG) circuit architecture, enabling low-power 12-bit HDR output during previewing, photography, and video recording. This technology enhances imaging dynamic range for smartphones, tablets, and more, resulting in vividly clear images for users.

The GC13A2 also supports on-chip global tone mapping, which compresses real-time 12-bit data into 10-bit output, preserving HDR effects and expanding compatibility with a wider range of smartphone platforms.



High Dynamic Range Technology

Dynamic range refers to the range between the darkest and brightest images an image sensor can capture. Traditional image sensors have limitations in dynamic range, often failing to capture scenes as perceived by the human eye. High Dynamic Range (HDR) technology emerged as a solution to this issue.


Left image: blown-out highlights resulting from narrow dynamic range. Right image: shot with DAG HDR.

Currently, image sensors use multi-frame synthesis techniques to enhance dynamic range:
Photography: Capturing 2-3 frames of the same scene with varying exposure times – shorter exposure to capture highlight details and longer exposure to supplement shadow details – then combining them to create an image with a wider dynamic range.

Video Recording: Utilizing multi-frame synthesis, the image sensor alternates between outputting 60fps long-exposure and short-exposure images, which the platform combines to produce a 30fps frame with preserved highlight color and shadow details. While multi-frame synthesis yields noticeable improvements in dynamic range, it significantly increases power consumption, making it unsuitable for prolonged use on devices like smartphones and tablets. Moreover, it tends to produce motion artifacts when capturing moving objects.
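A minimal sketch of the two-frame merge described above, purely illustrative (the saturation level and exposure ratio are assumed, and this is not GalaxyCore's pipeline):

import numpy as np

def merge_two_exposures(long_img, short_img, exposure_ratio, sat_level=4095):
    """Keep the long exposure where it is valid; where it clipped, substitute
    the short exposure scaled up by the exposure ratio."""
    long_f = long_img.astype(np.float64)
    short_f = short_img.astype(np.float64) * exposure_ratio
    return np.where(long_img >= sat_level, short_f, long_f)
# Anything that moved between the two captures lands on different pixels in
# each frame, which is the source of the motion artifacts shown below.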



Left image: shot with multi-frame HDR (motion artifacts). Right image: shot with DAG HDR.

GalaxyCore's Patented DAG HDR Technology

GalaxyCore's DAG HDR technology, based on single-frame imaging, applies high analog gain to shadow regions for improved clarity and texture, while low analog gain is used in highlight areas to prevent overexposure and preserve details. Compared with traditional multi-frame HDR, DAG HDR not only increases dynamic range and mitigates artifact issues but also addresses the power consumption problem associated with multi-frame synthesis: in photography, for instance, the number of frames to be captured and synthesized drops by 50% in scenes that previously required 3-frame synthesis.
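By contrast, a DAG-style combination works on two readouts of the same exposure, so there is nothing to misalign. A hedged sketch along the same lines as above, with an assumed gain ratio and saturation level:

import numpy as np

def dag_combine(high_gain, low_gain, gain_ratio, sat_level=4095):
    """Single-exposure dual-gain combination: high gain resolves the shadows,
    low gain (referred to the high-gain scale) preserves the highlights."""
    hg = high_gain.astype(np.float64)
    lg = low_gain.astype(np.float64) * gain_ratio
    return np.where(high_gain >= sat_level, lg, hg)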

Left image: traditional HDR photography. Right image: DAG HDR photography.

GC13A2 Empowers Imaging Excellence with HDR


Empowered by DAG HDR, the GC13A2 is capable of low-power 12bit HDR image output and 4K 30fps video capture. It reduces the need for frame synthesis during photography and lowers HDR video recording power consumption by approximately 30%, while avoiding the distortion caused by motion artifacts.

Compared to other image sensors of the same specifications in the industry, GC13A2 supports real-time HDR previewing, allowing users to directly observe every frame's details while shooting. This provides consumers with an enhanced shooting experience.

GC13A2 has already passed initial verification by brand customers and is set to enter mass production. In the future, GalaxyCore will introduce a series of high-resolution DAG single-frame HDR products, including 32-megapixel and 50-megapixel variants, further enhancing GalaxyCore's high-performance product lineup and promoting superior imaging quality and an enhanced user experience for smartphones.

Go to the original article...

ISSW 2024 call for papers announced

Image Sensors World        Go to the original article...

Link: https://issw2024.fbk.eu/cfp

International SPAD Sensor Workshop (ISSW 2024) will be organized by Fondazione Bruno Kessler - FBK.
When: June 4-6, 2024
Location: Trento, Italy

Call for Papers & Posters

The 2024 International SPAD Sensor Workshop (ISSW) is a biennial event focusing on single-photon avalanche diodes (SPADs), SPAD-based sensors and related applications. The workshop welcomes all researchers (including PhD students, postdocs, and early-career researchers), practitioners, and educators interested in these topics.
 
After two online editions, the fourth edition of the workshop will return to an in-person-only format.
The event will take place in the city of Trento, in northern Italy, hosted at Fondazione Bruno Kessler, in a venue suited to encourage interaction and a shared experience among the attendees.

The workshop will be preceded by a one-day introductory school on SPAD sensor technology, held in the same venue on June 3rd, 2024.
 
The workshop will include a mix of invited talks and, for the first time, peer-reviewed contributions.
Accepted works will be published on the International Image Sensor Society website (https://imagesensors.org/).

Submitted works may cover any of the aspects of SPAD technology, including device modelling, engineering and fabrication, SPAD characterization and measurements, pixel and sensor architectures and designs, and SPAD applications.
 
Topics
Papers on the following SPAD-related topics are solicited:
● CMOS/CMOS-compatible technologies
● SiPMs
● III-V, Ge-on-Si
● Modelling
● Quenching and front-end circuits
● Architectures
● Time-to-Digital Converters
● Smart histogramming techniques
● Applications of SPAD arrays, such as:
o Depth sensing / ToF / LiDAR
o Time-resolved imaging
o Low-light imaging
o High dynamic range imaging
o Biophotonics
o Computational imaging
o Quantum imaging
o Quantum RNG
o High energy physics
o Free space communication
● Emerging technologies & applications
 
Paper submission
Papers must be submitted online; a link will be made available soon.
 
Each submission should consist of a 100-word abstract and a camera-ready manuscript of 2 to 3 pages (including figures), and should include the authors' name(s) and affiliation, a short bio and picture of the presenter, and the presenter's mailing address, telephone number, and e-mail address. A template will be provided soon.
The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.
 
Papers will be considered on the basis of originality and quality. High quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee.

Accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed.

Authors will be notified of the acceptance of their abstract & posters at the latest by Wednesday Jan 31st, 2024.
 
Poster submission
In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics.

If you wish to take up this opportunity, please submit a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.

Go to the original article...

MDPI IISW2023 special issue – 316MP, 120FPS, HDR CIS

Image Sensors World        Go to the original article...

A. Agarwal et al. have published a full-length article on their IISW 2023 conference presentation in a special issue of MDPI Sensors. The paper is titled "A 316MP, 120FPS, High Dynamic Range CMOS Image Sensor for Next Generation Immersive Displays" and is joint work between Forza Silicon (AMETEK Inc.) and Sphere Entertainment Co.

Full article (open access): https://doi.org/10.3390/s23208383

Abstract
We present a 2D-stitched, 316MP, 120FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm and the chip is fabricated in a 65 nm, 4-metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has high and low conversion gain full wells of 6600e- and 41,000e-, respectively, with a total high-gain temporal noise of 1.8e-, achieving a composite dynamic range of 87 dB.
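The composite dynamic range quoted in the abstract can be sanity-checked from the dual-gain numbers: the low-conversion-gain full well sets the largest measurable signal, while the high-gain temporal noise sets the floor.

import math

# Composite DR of a dual-gain pixel: low-gain full well over high-gain noise.
dr_db = 20 * math.log10(41_000 / 1.8)
print(f"{dr_db:.1f} dB")  # ~87.2 dB, consistent with the reported 87 dB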

Figure 1. Sensor on a 12 inch wafer (4 dies per wafer), die photo, and stitch plan.



Figure 2. Detailed block diagram showing sensor partitioning.


Figure 3. Distribution of active and dark rows in block B/H, block E, and final reticle plan.


Figure 5. Sensor timing for single-exposure dual-gain (HDR) operation.



Figure 6. Data aggregation and readout order for single-gain mode.


Figure 7. Data aggregation and readout order for dual-gain mode.

Figure 8. ADC output multiplexing network for electrical crosstalk mitigation.


Figure 9. Conventional single-ended ADC counter distribution.


Figure 10. Proposed pseudo-differential ADC counter distribution.


Figure 11. Generated thermal map from static IR drop simulation.

Figure 12. Measured dark current distribution.

Figure 13. SNR and transfer function in HDR mode.


Figure 14. Full-resolution color image captured in single-gain mode at 120 FPS.







Go to the original article...

Review paper on IR photodiodes

Image Sensors World        Go to the original article...

A team from the Military University of Technology (Poland) and the Shanghai Institute of Technical Physics (China) has published a review article titled "Infrared avalanche photodiodes from bulk to 2D materials" in the journal Light: Science & Applications.

Open access paper: https://www.nature.com/articles/s41377-023-01259-3

Abstract: Avalanche photodiodes (APDs) have drawn huge interest in recent years and have been extensively used in a range of fields, most importantly optical communication systems, owing to their fast time response and high sensitivity. This article presents the evolution and recent development of III-V (AIIIBV) and II-VI (AIIBVI) infrared (IR) APDs, as well as potential alternatives to them: "third wave" superlattice (SL) and two-dimensional (2D) material APDs. First, the APDs' fundamental operating principle is presented together with the progress in device architecture. It is shown that the evolution of APDs has moved device performance towards higher bandwidths, lower noise, and higher gain-bandwidth products. The material properties needed to reach both high gain and low excess noise in devices operating in different wavelength ranges are also considered, indicating future progress and research directions. Particular attention is paid to advances in III-V APDs, such as AlInAsSb, which may be used in future optical communications, type-II superlattices (T2SLs, "Ga-based" and "Ga-free"), and 2D-material-based IR APDs. The latter, atomically thin 2D materials, exhibit huge potential in APDs and could be considered an alternative to the well-known, sophisticated, and mature III-V APD technologies, including in single-photon detection mode. This is because the performance of conventional bulk-material APDs is limited by relatively high dark currents. One approach to resolving that problem is to implement low-dimensional materials and structures as the APDs' active regions: the Schottky barrier and atomic-scale thickness lead to a significant suppression of the 2D APD dark current. Moreover, such APDs can operate in the visible (VIS) and near-infrared (NIR) to mid-wavelength infrared (MWIR) ranges, with a responsivity of ~80 A/W, an external quantum efficiency of ~24.8%, and a gain of ~10^5 in the MWIR [wavelength λ = 4 μm, temperature T = 10-180 K, black phosphorus (BP)/InSe APD]. It is believed that 2D APDs could prove to be a viable alternative, enabling device fabrication with simultaneously high sensitivity and low excess noise.
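
For context (a textbook relation, not part of the abstract): an APD's responsivity ties its avalanche gain M and external quantum efficiency η together, which allows figures such as those quoted above to be cross-checked.

```latex
R = M\,\eta\,\frac{q\lambda}{hc} \approx M\,\eta\,\frac{\lambda\,[\mu\mathrm{m}]}{1.24}\ \mathrm{A/W}
% Example at unity gain (M = 1) with \eta = 24.8\% and \lambda = 4\,\mu m:
% R \approx 0.248 \times 4 / 1.24 \approx 0.8 A/W; avalanche gain scales this linearly.
```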


Fig. 1: Bulk to low-dimensional material, tactics to fabricate APDs and possible applications: FOC, FSO, LIDAR and QKDs.



Fig. 2: The APD’s operating principle. a Electron and hole multiplication mechanisms; schematic of the multiplication mechanism for b k = 0 (αh = 0) and c k = 1 (αe = αh), where k = αh/αe and αe, αh denote the electron and hole ionization coefficients. d αe, αh ionization coefficients versus electric field for selected semiconductors used for APD fabrication


Fig. 3: APDs. a p–n device, b SAM device, and c SAGCM device with electric field distribution. F(M) dependence on M for selected k = αh/αe in APDs when d electrons and e holes dominate the avalanche mechanism. The multiplication path-length probability distribution functions in the f local and g non-local field “dead space” models
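
The F(M) curves referenced in panels d and e are, in the local-field model, given by McIntyre's classic expression for electron-initiated multiplication (a textbook result, quoted here for reference):

```latex
F(M) = kM + (1 - k)\left(2 - \frac{1}{M}\right), \qquad k = \frac{\alpha_h}{\alpha_e}
% Limiting cases: k = 0 gives F \to 2 at large M; k = 1 gives F = M.
```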

Fig. 4: InGaAs/InP SAM-APD. a device structure, b energy band profile and electric field under normal reverse bias. AlxIn1–xAsySb1–y based SACM APD: c detector design with the E-field distribution within the detector, d measured and simulated gain, dark current, and photocurrent versus reverse voltage for a 90 μm diameter device at room temperature [39]. InAs planar avalanche photodiode: e schematic design diagram, f comparison of the gain reached with a 1550 nm wavelength laser [132,133]. The M-normalized dark current for a 100 μm radius planar APD is presented for 200 K

Fig. 5: F(M) versus M. a Si, AlInAs, GaAs, Ge, InP (the solid lines present F(M) for k in the range 0–1, increment 0.1, calculated by the local-field model [24]; typical F(M) are shown by shaded regions [37]) and b selected materials: 3.5 μm thick intrinsic InAs APDs (50 μm and 100 μm radius), 4.2 μm cut-off wavelength HgCdTe, and 2.2 μm InAlAs APDs



Fig. 6: Gain and k versus Hg1–xCdxTe bandgap energy. a the crossover between e-APD and h-APD; the crossover at Eg ≈ 0.65 eV corresponds to λc = 1.9 μm at 300 K [46]. Hole-initiated avalanche HgCdTe photodiode: b detector profile, c energy band structure, d hole-initiated multiplication process energy band structure. The multiplication layer bandgap energy is adjusted to the resonance condition, where the bandgap equals the energy difference between the split-off valence band and the top of the heavy-hole valence band. Electron-initiated avalanche HgCdTe photodiode: e diagram of the electron-initiated avalanche process for a HgCdTe-based high-density vertically integrated photodiode (HDVIP) structure (n-type central region surrounded by p-type material), f electron avalanche mechanism, and g relative spectral response for a 5.1 μm cut-off wavelength HgCdTe HDVIP at T = 80 K

 


Fig. 7: HgCdTe APD performance. a experimental gain versus bias for selected cut-off wavelengths for DRS electron-initiated APDs at 77 K, together with extra measured data points taken at ∼77 K [51] and LETI e-APDs at 80 K [59]; b constant F(M) ~ 1 versus M at 80 K for a 4.3 μm cut-off wavelength APD [135]

Fig. 8: Device structure comparison between a low-noise PMT and multi-quantum-well APDs. a schematic of a photomultiplier tube, b multi-quantum-well p-i-n APD energy band sketch with the intrinsic region (i) marked, c energy band profiles of a staircase APD under zero (top) and reverse (bottom) bias. Multistep AlInAsSb staircase avalanche photodiode: d 3-step staircase APD device profile, e gain of 1-, 2-, and 3-step APDs at 300 K, both measured and calculated by the Monte Carlo method [70]. MWIR SAM-APD structure with AlAsSb/GaSb superlattice: f device design profile, g energy band structure under reverse bias, and h carrier impact-multiplication coefficients versus reciprocal electric field at 200 K


Fig. 9: Low-dimensional solid avalanche photodetectors. a graphite/InSe Schottky avalanche detector: injection, ionization, and collection electron transport mechanisms; b e-ph scattering dimensionality reduction affects the electron acceleration process; gain versus electric field in 2D (red line) and 3D (blue line); c breakdown voltage (Vbd) and gain as a function of temperature, exhibiting a negative temperature coefficient [81]. Nanoscale vertical InSe/BP heterostructure ballistic avalanche photodetector: d schematic of the graphene/BP/metal avalanche device [83], e ballistic avalanche photodetector operating principle, f quasi-periodic current oscillations, g schematic of the graphene/InSe/BP device [83], h Ids–Vds characteristics for selected temperatures (40–180 K), i avalanche breakdown threshold voltage (Vth) and gain versus temperature, showing a negative temperature coefficient. Pristine PN-junction avalanche photodetector: j device structure, k SCM signal as the number of layers increases; a positive/negative SCM signal denotes hole/electron carriers, l the APD’s low-temperature (~100 K) dark- and photocurrent I–V curves


Fig. 10: An idea of a laser-gated system combined with passive thermal imaging for enhanced long-range identification. a operation principle [at t0 the camera is closed and the light pulse is emitted; at t1 the target reflects the light pulse; at t2 the camera is opened for a short period (∆t) matching the needed depth of view]; b typical images of wide-FOV thermal and laser-gating systems
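
The gate timing in panel a follows directly from time of flight (a standard relation, not specific to this paper): the camera opens after the round-trip delay of the emitted pulse, and the gate width sets the imaged depth slice.

```latex
t_2 = \frac{2R}{c}, \qquad \Delta R = \frac{c\,\Delta t}{2}
% Example: R = 3\ \mathrm{km} gives t_2 = 20\ \mu\mathrm{s};
% a gate of \Delta t = 100\ \mathrm{ns} images a 15\ \mathrm{m} depth slice.
```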


Go to the original article...
