OPPO, AlpsenTek and Qualcomm collaboration on "Hybrid Vision Sensing"


OPPO, AlpsenTek and Qualcomm Boost AI Motion, Image Quality For Mobile Applications

Jan. 11, 2024, Las Vegas, USA — OPPO, AlpsenTek and Qualcomm Technologies, Inc. have teamed up to enhance innovative Hybrid Vision Sensing (HVS) technology, better extracting valuable motion and image data to improve picture quality for mobile phone applications.

OPPO and AlpsenTek will collaborate to pioneer the use of Hybrid Vision Sensing technologies, developing a data processing chain to collect relevant camera information to help enhance picture quality and allow for deblurring, augmented resolution, and slow-motion reconstruction, as well as other features required for machine sensing. This will be accomplished by leveraging Snapdragon® Mobile Platforms from Qualcomm Technologies.

“The HVS solution, with the support of hardware and algorithms, significantly enhances the capacities of smartphone cameras”, said Judd Heape, VP of Product Management at Qualcomm Technologies, Inc. “We are pleased to contribute to the optimization of this new technology on our Snapdragon platforms – which will help consumers to get the best performance from their smartphone cameras, and capture what’s most precious to them.”

Mr. Xuan Zhang, Image Product Director at OPPO, commented: “Over the years, we have conducted extensive research in new sensor technologies, with a particular focus on HVS (Hybrid Vision System) technology. We have engaged in substantial collaborative developments with AlpsenTek and Qualcomm, involving numerous iterations in both chip design and algorithms. Our confidence in the potential of this technology has driven us to invest time and effort into refining it collaboratively, with the ultimate goal of pushing it towards the application on OPPO’s HyperTone Camera System.”

Motion information is crucial in photography and machine vision. Traditional image sensors collapse the motion information within a period (i.e., the exposure) into a single image. This leads to motion blur and the loss of valuable motion data essential for image/video processing and machine vision algorithms.

Effectively obtaining high-fidelity motion information with a vision sensor is a top demand across various fields today. Current solutions based on conventional image sensors often rely on increasing the frame rate, which is expensive and impractical for many applications. High frame rates lead to a significant amount of data (much of it redundant) and short shutter durations, causing high system resource usage, low efficiency, and poor adaptation to lighting conditions for high-frame-rate cameras.
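For a sense of scale of the bandwidth problem described above, here is a quick back-of-the-envelope sketch in Python; the resolution, bit depth, and frame rate are illustrative assumptions, not figures from the announcement:

    # Illustrative raw data rate for a conventional sensor pushed to a high frame rate.
    width, height = 1920, 1080        # assumed resolution
    bits_per_pixel = 10               # assumed raw bit depth
    fps = 1000                        # assumed high frame rate
    raw_rate_gbps = width * height * bits_per_pixel * fps / 1e9
    print(f"~{raw_rate_gbps:.1f} Gbit/s of mostly redundant raw data")   # ~20.7 Gbit/s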

Event-based Vision Sensing (EVS) is an imaging technology that continuously records change/motion information through its shutter-free mechanism, providing motion information with high temporal resolution at a lower cost for machine vision. With an in-pixel processing chain featuring logarithmic amplification, EVS achieves a balance between high frame rate, high dynamic range, and low data redundancy for recording motion information.
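As a rough illustration of the event-generation principle described above, the sketch below emits an event whenever a pixel's log-intensity changes by more than a fixed contrast threshold. The threshold value and function name are hypothetical; this is a simplified model, not AlpsenTek's actual pixel pipeline:

    import numpy as np

    def events_from_frames(frames, contrast_threshold=0.15):
        """Emit (t, y, x, polarity) events when log-intensity changes exceed a threshold.

        Simplified event-pixel model: each pixel remembers the log-intensity at which
        it last fired and triggers a +1/-1 event on a sufficiently large change.
        `frames` is an iterable of (timestamp, 2D intensity array) pairs.
        """
        events, ref = [], None
        for t, frame in frames:
            log_i = np.log(frame.astype(np.float64) + 1e-6)
            if ref is None:
                ref = log_i.copy()
                continue
            delta = log_i - ref
            for polarity, mask in ((+1, delta >= contrast_threshold),
                                   (-1, delta <= -contrast_threshold)):
                ys, xs = np.nonzero(mask)
                events.extend((t, int(y), int(x), polarity) for y, x in zip(ys, xs))
                ref[mask] = log_i[mask]   # reset the reference where events fired
        return events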

However, EVS sensors often lack the critical static pictorial information needed for many machine vision applications. An EVS sensor therefore typically works alongside a separate traditional image sensor (RGB) to compensate for this drawback, introducing challenges in cost, system complexity, and image registration between the two types of images (EVS and RGB) that offset many of EVS's advantages.

AlpsenTek's Hybrid Vision Sensing (HVS) technology, introduced in 2019, combines EVS and conventional imaging technology into a single sensor. The ALPIX® sensor from AlpsenTek simultaneously outputs high-quality RGB images and an EVS data stream, providing a cost-effective and algorithm-friendly solution for capturing images with embedded motion information.

Jian Deng, Founder and CEO of AlpsenTek, stated, "In the current landscape of vision sensors, there is a growing expectation for more than just 2D RGB information; sensors are now anticipated to provide additional data, such as distance, spectrum, and motion. Collaborating with OPPO and Qualcomm, we collectively designed the ALPIX-Eiger® to seamlessly integrate into mobile phone applications. Considered an enhanced RGB image sensor, it boasts image quality comparable to leading mobile sensors on the market, while introducing the added functionality of EVS. Witnessing the process of bringing our technology from conception to product brings us immense excitement."

Deng further emphasized, "It's important to recognize that what truly changes the world is not the technology itself but the products that it enables. Our passion lies in bringing Hybrid Vision Sensing (HVS) into the hands of everyone. This commitment has been our driving force from the very beginning. We look forward to fruitful outcomes from this collaboration."


This news was also featured on EETimes: https://www.eetimes.com/oppo-alpsentek-and-qualcomm-boost-ai-motion-image-quality-for-mobile-applications/


Histogram-less SPAD LiDAR


Tontini et al. from FBK and the University of Trento recently published an article titled "Histogram-less LiDAR through SPAD response linearization" in the IEEE Sensors Journal.

Open access link: https://ieeexplore.ieee.org/document/10375298

Abstract:  We present a new method to acquire the 3D information from a SPAD-based direct-Time-of-Flight (d-ToF) imaging system which does not require the construction of a histogram of timestamps and can withstand high flux operation regime. The proposed acquisition scheme emulates the behavior of a SPAD detector with no distortion due to dead time, and extracts the TOF information by a simple average operation on the photon timestamps ensuring ease of integration in a dedicated sensor and scalability to large arrays. The method is validated through a comprehensive mathematical analysis, whose predictions are in agreement with a numerical Monte Carlo model of the problem. Finally, we show the validity of the predictions in a real d-ToF measurement setup under challenging background conditions well beyond the typical pile-up limit of 5% detection rate up to a distance of 3.8m.
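The core idea, estimating the time of flight from the mean of the raw photon timestamps rather than from a histogram peak, can be illustrated with a toy Monte Carlo sketch. This is a simplification for intuition only (uniform background, known signal fraction), not the authors' SPAD-response linearization scheme:

    import numpy as np

    rng = np.random.default_rng(0)
    T = 100e-9                     # assumed measurement window (s)
    t0 = 25e-9                     # true photon time of flight (s)
    n_sig, n_bg = 2000, 8000       # detected signal / uniform-background photons

    # Raw timestamps: laser-return photons clustered at t0, background spread over T.
    stamps = np.concatenate([rng.normal(t0, 0.3e-9, n_sig),   # signal with timing jitter
                             rng.uniform(0.0, T, n_bg)])      # ambient background

    # Average-based estimate: the mean timestamp is a known mixture of t0 and T/2,
    # so t0 can be recovered without ever building a histogram.
    f_sig = n_sig / (n_sig + n_bg)   # signal fraction (estimated separately in practice)
    t0_hat = (stamps.mean() - (1 - f_sig) * T / 2) / f_sig
    print(f"estimated ToF = {t0_hat * 1e9:.2f} ns -> distance = {3e8 * t0_hat / 2:.2f} m")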


Talk on meta-optics-based color imagers (Prof. Arka Majumdar – UWash Seattle)



URochester - Institute of Optics Colloquium Sep 2023

Abstract: The miniaturization of image sensors in recent decades has made today’s cameras ubiquitous across many application domains, including medical imaging, smartphones, security, robotics, and autonomous transportation. However, only imagers that are an order of magnitude smaller could enable novel applications in nano-robotics, in vivo imaging, mixed reality, and health monitoring. While sensors with sub-micron pixels exist now, further miniaturization has been primarily prohibited by fundamental limitations of conventional optics. Traditional imaging systems consist of a cascade of refractive elements that correct for aberrations, and these bulky lenses impose a lower limit on camera footprint. In recent years, sub-wavelength diffractive optics, also known as meta-optics, have been touted as a promising replacement for bulky refractive optics. However, the images taken with meta-optics to date remain significantly inferior to those taken with refractive optics. In particular, full-color imaging with a large-aperture meta-lens remains an important unsolved problem. We employ computationally designed meta-optics to solve this problem and enable ultra-compact cameras. Our solution is to design the meta-optics such that the modulation transfer functions (MTFs) of all the wavelengths across the desired optical bandwidth are the same at the sensor plane. Additionally, the volume under the MTF curve is maximized to ensure enough information is captured, enabling computational reconstruction of the image. The same intuition can be employed for different angles to mitigate geometric aberrations as well. In this talk, I will describe our efforts on achieving full-color imaging using a single meta-optic and a computational backend. Starting from a traditional extended-depth-of-focus lens [1,2], I will describe inverse-designed meta-optics [3], end-to-end designed meta-optics [4] and hybrid refractive/meta-optics [5] for visible full-color imaging. I will also talk about how these techniques can be extended for thermal imaging [6,7].
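A minimal sketch of the design objective described in the abstract: compute a per-wavelength MTF from a point-spread function and score a candidate design on how similar the MTFs are across color channels and how much "volume" they retain. The Gaussian PSFs below are placeholders; a real end-to-end design would differentiate this kind of score through a wave-optics model of the meta-optic:

    import numpy as np

    def mtf(psf):
        # Modulation transfer function: magnitude of the Fourier transform of the normalized PSF.
        return np.abs(np.fft.fft2(psf / psf.sum()))

    def design_score(psfs_per_wavelength):
        # Higher is better: large volume under the MTFs, small spread across wavelengths.
        mtfs = np.stack([mtf(p) for p in psfs_per_wavelength])
        volume = mtfs.mean()                 # proxy for captured information
        spread = mtfs.std(axis=0).mean()     # mismatch between color channels
        return volume - spread

    # Placeholder PSFs standing in for the red/green/blue focal spots of a candidate design.
    xx, yy = np.meshgrid(np.arange(64) - 32, np.arange(64) - 32)
    psfs = [np.exp(-(xx**2 + yy**2) / (2 * s**2)) for s in (1.5, 2.0, 2.5)]
    print(f"score = {design_score(psfs):.4f}")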

[1] S. Colburn et al., Sci. Adv. 4, eaar2114 (2018).
[2] L. Huang et al., Photon. Res. 8, 1613 (2020).
[3] E. Bayati et al., Nanophotonics 11, 2531 (2022).
[4] E. Tseng et al., Nat. Commun. 12, 6493 (2021).
[5] S. Pinilla et al., Sci. Adv. 9, eadg7297 (2023).
[6] L. Huang et al., Opt. Mater. Express 11, 2907 (2021).
[7] V. Saragadam et al., arXiv:2212.06345 (2023).

Biography
Professor Arka Majumdar is an associate professor in the departments of electrical and computer engineering and physics at the University of Washington (UW). He received his B.Tech. from IIT-Kharagpur (2007), where he was honored with the President’s Gold Medal. He completed his MS (2009) and PhD (2012) in Electrical Engineering at Stanford University. He spent one year at the University of California, Berkeley (2012-13), and then at Intel Labs (2013-14) as a postdoc before joining UW. His research interests include developing a hybrid nanophotonic platform using emerging material systems for optical information science, imaging, and microscopy. Professor Majumdar is the recipient of multiple Young Investigator Awards from the AFOSR (2015), NSF (2019), ONR (2020) and DARPA (2021), the Intel Early Career Faculty Award (2015), the Amazon Catalyst Award (2016), an Alfred P. Sloan Fellowship (2018), the UW College of Engineering Outstanding Junior Faculty Award (2020), the iCANX Young Scientist Award (2021), the IIT-Kharagpur Young Alumni Achiever Award (2022) and the DARPA Director’s Award (2023). He is co-founder and technical advisor of Tunoptix, a startup commercializing software-defined meta-optics.


Image Sensors Europe 2024 Speakers announced


The annual Image Sensors Europe 2024 will be held in London on March 20-21, 2024.

See below for the speakers confirmed to present at the 2024 edition in London.

Link: https://www.image-sensors.com/image-sensors-europe/2020-speakers?EventId=4047&page=2

Lindsay Grant - OmniVision Technology
Federico Canini - Datalogic
Nasim Sahraei - Edgehog Advanced Technologies Inc.
Pawel Latawiec - Metalenz
Emilie Huss - STMicroelectronics
Nicolas Roux - STMicroelectronics
Abhinav Agarwal - Forza Silicon (Ametek Inc.)
Dr Claudio Jakobson - SCD
Jan Bogaerts - Gpixel
Christian Mourad - VoxelSensors
Carl Philipp Koppen - pmdtechnologies AG
Dr Artem Shulga - QDI systems
Albert Theuwissen - Harvest Imaging
Anthony Huggett - onsemi
Matthias Schaffland - Sensor to Image GmbH
Dr. Kazuhiro Morimoto - Canon Inc.
Svorad Štolc - Photoneo
Florian Domengie - Yole Intelligence
Adi Xhakoni - ams-osram


CIS Masterclass

Dr. Albert Theuwissen will give a Masterclass on "Recent Developments in the CIS World over the last 12 months", which will cover the following topics: Numbers and Market Trends, High Dynamic Range, Global Shutter, Low Noise, Colour Filter News, Phase Detection Auto-Focus Pixels, New Materials, Beyond Silicon in the Near-IR, and Event-Based Imagers.

About Image Sensors Europe

Image Sensors Europe held its first conference in 2007 and has since grown into the go-to annual technical and business conference for image sensors. Each year, this ever-evolving market prompts new and exciting opportunities for the entire supply chain.

This esteemed event provides a platform for over 250 representatives from across the digital imaging supply chain to engage in high calibre discussions and face-to-face networking opportunities with key industry experts and colleagues.

2024 Key Themes:

  • Image sensor market challenges and opportunities: CMOS image sensor industry analysis and 2018-2028 forecasts
  • Technology focus: SPAD, SWIR, triple-stack sensors, metaoptics, event-based imaging and 3D perception
  • Image sensor application updates: industrial, automotive, biomedical, surveillance & security, consumer
  • Global foundry updates: Manufacturing capacity, chip stacking timelines, realistic improvements and opportunities
  • Data processing & compression - high speed data handling and transfer


Flare reduction technology from Edgehog


Eliminating Stray Light Image Artifacts via Invisible Image Sensor Coverglass

High-quality images are critical for machine vision applications like autonomous vehicles, surveillance systems, and industrial automation. However, lens flare caused by internal light reflections can significantly degrade image quality. This “ghosting” effect manifests as spots, starbursts, and other artifacts that obscure objects and details.

Traditional anti-reflective coatings help reduce flare by creating destructive interference that cancels out light reflections, but they fall short at wider angles, where reflections still occur. Stray light that reaches the image sensor causes flare; these artifacts interfere with image clarity and create glare, decreasing the signal-to-noise ratio, especially in environments with high dynamic range.
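For intuition about why wide angles are the hard case, the small sketch below evaluates the Fresnel reflectance of a bare air-glass interface versus angle of incidence (generic textbook physics with an assumed glass index of 1.5, not Edgehog data). A few percent of reflection at normal incidence grows several-fold by 70 degrees, which is the regime both coatings and graded-index nanotexturing must address:

    import numpy as np

    def fresnel_reflectance(theta_deg, n1=1.0, n2=1.5):
        # Unpolarized power reflectance of a single n1 -> n2 interface (Fresnel equations).
        ti = np.radians(theta_deg)
        tt = np.arcsin(np.clip(n1 * np.sin(ti) / n2, -1.0, 1.0))   # Snell's law
        rs = (n1 * np.cos(ti) - n2 * np.cos(tt)) / (n1 * np.cos(ti) + n2 * np.cos(tt))
        rp = (n1 * np.cos(tt) - n2 * np.cos(ti)) / (n1 * np.cos(tt) + n2 * np.cos(ti))
        return 0.5 * (rs**2 + rp**2)

    for angle in (0, 30, 50, 70):
        print(f"{angle:2d} deg: R = {fresnel_reflectance(angle) * 100:.1f} %")   # ~4% -> ~17%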

Omnidirectional Anti-Reflection CMOS Coverglass
Edgehog’s Omnidirectional Anti-Reflection (OAR) nanotexturing technology takes a fundamentally different approach to eliminating reflections. Instead of coatings, OAR uses nano-scale surface textures that create a gradual transition in refractive index from air to glass. Edgehog’s texturing allows light to transmit through the surface without internal reflections, regardless of angle.

OAR nanotexturing provides exceptional advantages:
  • Omnidirectional performance - anti-reflection at angles up to 70 degrees
  • Broad spectrum - works across all wavelengths from UV to IR
  • Thermal stability - no risk of delamination like traditional coatings
  • Anti-fogging - spreads water droplets on the surface, reducing fogging

By treating the image sensor cover glass with OAR nanotexturing, Edgehog enables flare-free imaging under any lighting condition. Edgehog delivers crisper images and videos with enhanced contrast, sharpness, and color accuracy.

Case Study
Edgehog recently showcased the impact of its technology by retrofitting a camera’s stock CMOS cover glass with an OAR-treated replacement. Simulations showed OAR’s superiority in mitigating flare irradiance compared to the original glass. Real-world testing also exhibited significant flare reduction in challenging high-glare environments.  

 

Images taken from two identical camera models showing a significant reduction in lens flare in the bottom left of the images. The image on the left (A) is taken using an off-the-shelf FLIR Blackfly S camera where the sensor coverglass utilizes conventional anti-reflection coatings. The right image (B) is taken using an identical camera with the sensor coverglass replaced with Edgehog coverglass, as shown in the schematic above.

 

Photos were captured simultaneously in an indoor garage. (A) off-the-shelf FLIR Blackfly S
camera and (B) identical camera setup with Edgehog-enhanced sensor coverglass.

 

Photos captured simultaneously outdoors on a sunny day. (A) off-the-shelf FLIR Blackfly S
camera and (B) identical camera setup with Edgehog-enhanced sensor coverglass.


Edgehog Is Seeking Manufacturing Partners
Overall, Edgehog’s nanotextured anti-reflection technology represents a revolutionary leap forward for imaging components. OAR enables reliable, high-performance vision capabilities for autonomous systems by stopping flare at the source. We are looking for manufacturing partners to help scale up production.

To learn more about eliminating lens flare with omnidirectional anti-reflection, download Edgehog’s full white paper today or email us to discover how nanotexturing can enhance image quality and enable the next generation of machine vision.

Download Edgehog’s whitepaper - https://www.edgehogtech.com/machine-vision-whitepaper

Visit Edgehog’s Website - www.edgehogtech.com


Samsung announces new Isocell Vizion sensors


Samsung press release: https://semiconductor.samsung.com/emea/news-events/news/samsung-unveils-two-new-isocell-vizion-sensors-tailored-for-robotics-and-xr-applications/

Samsung Unveils Two New ISOCELL Vizion Sensors Tailored for Robotics and XR Applications
The ISOCELL Vizion 63D, a time-of-flight sensor, captures high-resolution 3D images with exceptional detail
The ISOCELL Vizion 931, a global shutter sensor, captures dynamic moments with clarity and precision 



Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today introduced two new ISOCELL Vizion sensors — a time-of-flight (ToF) sensor, the ISOCELL Vizion 63D, and a global shutter sensor, the ISOCELL Vizion 931. First introduced in 2020, Samsung’s ISOCELL Vizion lineup includes ToF and global shutter sensors specifically designed to offer visual capabilities across an extensive range of next-generation mobile, commercial and industrial use cases.  

“Engineered with state-of-the-art sensor technologies, Samsung’s ISOCELL Vizion 63D and ISOCELL Vizion 931 will be essential in facilitating machine vision for future high-tech applications like robotics and extended reality (XR),” said Haechang Lee, Executive Vice President of the Next Generation Sensor Development Team at Samsung Electronics. “Leveraging our rich history in technological innovation, we are committed to driving the rapidly expanding image sensor market forward.”

ISOCELL Vizion 63D: Tailored for capturing high-resolution 3D images with exceptional detail

Similar to how bats use echolocation to navigate in the dark, ToF sensors measure distance and depth by calculating the time it takes the emitted light to travel to and from an object. 

Particularly, Samsung’s ISOCELL Vizion 63D is an indirect ToF (iToF) sensor that measures the phase shift between emitted and reflected light to sense its surroundings in three dimensions. With exceptional accuracy and clarity, the Vizion 63D is ideal for service and industrial robots as well as XR devices and facial authentication where high-resolution and precise depth measuring are crucial.
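The phase-shift principle behind an iToF pixel can be written out in a few lines. This is the generic four-phase demodulation math with an assumed modulation frequency, not Samsung's implementation:

    import numpy as np

    C = 299_792_458.0        # speed of light (m/s)
    F_MOD = 100e6            # assumed modulation frequency (Hz)

    def itof_depth(q0, q90, q180, q270):
        """Depth from four demodulation samples of the returned modulated light.

        The phase shift between emitted and received light is recovered with an
        arctangent and converted to distance; range is ambiguous beyond c / (2 * f_mod).
        """
        phase = np.arctan2(q90 - q270, q0 - q180) % (2 * np.pi)
        return C * phase / (4 * np.pi * F_MOD)

    # Example: a target at 0.30 m gives a phase shift of 4*pi*f_mod*d/c ~ 1.26 rad.
    print(f"{itof_depth(1.309, 1.951, 0.691, 0.049):.3f} m")   # ~0.300 m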

The ISOCELL Vizion 63D sensor is the industry’s first iToF sensor with an integrated depth-sensing hardware image signal processor (ISP). With this innovative one-chip design, it can precisely capture 3D depth information without the help of another chip, enabling up to a 40% reduction in system power consumption compared to the previous ISOCELL Vizion 33D product. The sensor can also process images at up to 60 frames per second in QVGA resolution (320x240), which is a high-demand display resolution used in commercial and industrial markets. 

Based on the industry’s smallest 3.5 µm pixel size in iToF sensors, the ISOCELL Vizion 63D achieves high Video Graphics Array (VGA) resolution (640x480) within a 1/6.4” optical format, making it an ideal fit for compact, on-the-go devices.

Thanks to backside scattering technology (BST) that enhances light absorption, the Vizion 63D sensor boasts the highest level of quantum efficiency in the industry, reaching 38% at an infrared light wavelength of 940 nanometers (nm). This enables enhanced light sensitivity and reduced noise, resulting in sharper image quality with minimal motion blur.

Moreover, the ISOCELL Vizion 63D supports both flood (high-resolution at short-range) and spot (long-range) lighting modes, significantly extending its measurable distance range from its predecessor’s five meters to 10.

ISOCELL Vizion 931: Optimized for capturing dynamic movements without distortion 

The ISOCELL Vizion 931 is a global shutter image sensor tailored for capturing rapid movements without the “jello effect”. Unlike rolling shutter sensors that scan the scene line by line from top to bottom in a “rolling” manner, global shutter sensors capture the entire scene at once or “globally,” similar to how human eyes see. This allows the ISOCELL Vizion 931 to capture sharp, undistorted images of moving objects, making it well-suited for motion-tracking in XR devices, gaming systems, service and logistics robots as well as drones.
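To put a number on the "jello effect", here is a quick sketch of how far a horizontally moving object shifts between the first and last rows of a rolling-shutter readout; the row count, row time, and object speed are illustrative assumptions, not specifications of either sensor:

    # Rolling-shutter skew: each row is exposed slightly later than the previous one,
    # so a moving object lands at a different horizontal position in every row.
    rows = 480                        # assumed number of sensor rows
    row_readout_time = 20e-6          # assumed per-row readout time (s)
    object_speed_px_per_s = 2000.0    # assumed image-plane speed of the object

    frame_readout_time = rows * row_readout_time            # 9.6 ms top to bottom
    skew_px = object_speed_px_per_s * frame_readout_time    # ~19 px of slant
    print(f"rolling shutter: ~{skew_px:.0f} px skew; global shutter: 0 px")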

Designed in a one-to-one ratio VGA resolution (640 x 640) that packs more pixels in a smaller form factor, the ISOCELL Vizion 931 is optimal for iris recognition, eye tracking as well as facial and gesture detection in head-mounted display devices like XR headsets. 

The ISOCELL Vizion 931 also achieves the industry’s highest level of quantum efficiency, delivering an impressive 60% at 850nm infrared light wavelength. This feat was made possible by incorporating Front Deep Trench Isolation (FDTI) which places an insulation layer between pixels to maximize light absorption, in addition to the BST method used in the ISOCELL Vizion 63D.

The Vizion 931 supports multi-drop, which can seamlessly connect up to four cameras to the application processor using a single wire. With minimal wiring required, the sensor provides greater design flexibility for device manufacturers.

Samsung ISOCELL Vizion 63D and ISOCELL Vizion 931 sensors are currently sampling to OEMs worldwide.

Coverage on PetaPixel: https://petapixel.com/2023/12/20/samsungs-new-vizion-sensors-boast-wild-tech-for-industrial-use/

The Isocell Vizion 63D is a time-of-flight (ToF) image sensor, while the Isocell Vizion 931 sports a global shutter image sensor. These sensors are the latest in Samsung’s Vizion line of ToF and global shutter sensors, first announced in 2020. The Vizion lineup is designed with next-generation mobile, commercial, and industrial use cases in mind.
 
“Engineered with state-of-the-art sensor technologies, Samsung’s Isocell Vizion 63D and Isocell Vizion 931 will be essential in facilitating machine vision for future high-tech applications like robotics and extended reality (XR),” says Haechang Lee, Executive Vice President of the Next Generation Sensor Development Team at Samsung Electronics. “Leveraging our rich history in technological innovation, we are committed to driving the rapidly expanding image sensor market forward.”
The Vizion 63D works similarly to how bats use echolocation to navigate in dim conditions. A ToF sensor measures distance and depth by calculating the amount of time it takes for emitted photons to travel to and back from an object in a scene. The Isocell Vizion 63D is an “indirect time-of-flight (iToF) sensor,” meaning it measures the “phase shift between emitted and reflected light to sense its surroundings in three dimensions.”
 
The sophisticated sensor is the first of its kind to integrate depth-sensing hardware into the image signal processor. The one-chip design enables 3D data capture without a separate chip, thereby reducing the overall power demand of the system. The technology is still relatively new, so its resolution is not exceptionally high. The 63D processes QVGA (320×240) images at up to 60 frames per second. The sensor has a 3.5-micron pixel size and can measure distances up to 10 meters from the sensor, double what its predecessor could achieve.

As for the Isocell Vizion 931, the global shutter sensor is designed to capture motion without rolling shutter artifacts, much like has been seen with Sony’s new a9 III full-frame interchangeable lens camera.

“Unlike rolling shutter sensors that scan the scene line by line from top to bottom in a ‘rolling’ manner, global shutter sensors capture the entire scene at once or ‘globally,’ similar to how human eyes see,” Samsung explains. “This allows the Isocell Vizion 931 to capture sharp, undistorted images of moving objects, making it well-suited for motion-tracking in XR devices, gaming systems, service and logistics robots as well as drones.”

The Vizion 931 boasts the industry’s highest quantum efficiency, joining the 63D, which makes the same claim in its specialized class of sensors.

While these sensors are unlikely to find their way into photography-oriented consumer products, it is always fascinating to learn about the newest advancements in image sensor technology. Samsung is sending samples of its new sensors to OEMs worldwide, so it won’t be long before they make their way into products of some kind.


Samsung/SK hynix bet on intelligent image sensors


From Business Korea: https://www.businesskorea.co.kr/news/articleView.html?idxno=208769

Samsung, SK hynix Advance in AI-embedded Image Sensors

Samsung Electronics and SK hynix are making strides in commercializing “On-sensor AI” technology for image sensors, aiming to elevate their image sensor technologies centered around AI and challenge the market leader, Japan’s Sony, in dominating the next-generation market.

At the “SK Tech Summit 2023” held last month, SK hynix revealed its progress in developing On-sensor AI technology. This technology combines the image sensor and an AI chip so that data is processed directly at the sensor level, unlike traditional sensors that relay image information to the central processing unit (CPU) for computation and inference. This advance is expected to be a key technology in enabling evolved Internet of Things (IoT) and smart home services, reducing power consumption and processing time.

SK hynix’s approach involves integrating an AI accelerator into the image sensor. The company is currently conducting proof-of-concept research focused on facial and object recognition features, using a Computing In Memory (CIM) accelerator, a next-generation technology capable of performing multiplication and addition operations required for AI model computations.
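Functionally, what such a CIM macro computes is the multiply-accumulate at the core of neural-network layers; a plain software stand-in for that operation (not SK hynix's design, with arbitrary weight/activation formats chosen purely for illustration) looks like this:

    import numpy as np

    def cim_mac(weights, activations, adc_levels=16):
        """Software stand-in for a compute-in-memory multiply-accumulate.

        In a CIM macro the weights stay resident in the memory array, the dot
        products are accumulated in place (e.g. as summed bit-line currents),
        and a coarse ADC digitizes the result; here we just quantize the exact sum.
        """
        acc = weights @ activations                         # per-column accumulation
        scale = max(np.abs(acc).max(), 1e-12) / (adc_levels - 1)
        return np.round(acc / scale) * scale                # coarse ADC readout

    w = np.random.default_rng(1).integers(-1, 2, size=(8, 64))   # ternary weights
    x = np.random.default_rng(2).integers(0, 2, size=64)         # binary activations
    print(cim_mac(w, x))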

Additionally, SK hynix has presented its technologies for implementing On-sensor AI, including AI software and AI lightweighting, at major academic conferences like the International Conference on Computer Vision and the IEEE EDTM seminar on semiconductor manufacturing and next-generation devices.

Samsung Electronics is also rapidly incorporating AI into its image sensor business. This year, the company unveiled a 200-megapixel image sensor with an advanced zoom feature called Zoom Anyplace, which uses AI technology for automatic object tracking during close-ups. Samsung has set a long-term business goal to commercialize “Humanoid Sensors” capable of sensing and replicating human senses, with a road map to develop image sensors that can capture even the invisible by 2027.

In October, Park Yong-in, president of Samsung Electronics’ System LSI Business, emphasized at the Samsung System LSI Tech Day in Silicon Valley the goal of pioneering the era of “Proactive AI,” advancing from generative AI through high-performance IP, short- and long-range communication solutions, and System LSI Humanoids based on sensors mimicking human senses.

The push by both companies into On-sensor AI technology development is seen as a strategy to capture new AI-specific demands and increase their market share. The image sensor market, which temporarily contracted post-COVID-19 due to a downturn in the smartphone market, is now entering a new growth phase, expanding its applications from mobile to autonomous vehicles, extended reality devices, and robotics.

According to Counterpoint Research, Sony dominated the global image sensor market with a 54% share in the last year, while Samsung Electronics held second place with 29%, and SK hynix, struggling to close the gap, barely made it into the top five with 5%.


Review paper on stacking and interconnects for CMOS image sensors


In an ASME J. Electron. Packag. paper titled "Advancement of Chip Stacking Architectures and Interconnect Technologies for Image Sensors" Mei-Chien Lu writes:

Numerous technology breakthroughs have been made in image sensor development in the past two decades. Image sensors have evolved into a technology platform to support many applications. Their successful implementation in mobile devices has accelerated market demand and established a business platform to propel continuous innovation and performance improvement extending to surveillance, medical, and automotive industries. This overview briefs the general camera module and the crucial technology elements of chip stacking architectures and advanced interconnect technologies. This study will also examine the role of pixel electronics in determining the chip stacking architecture and interconnect technology of choice. It is conducted by examining a few examples of CMOS image sensors (CIS) for different functions such as visible light detection, single photon avalanche photodiode (SPAD) for low light detection, rolling shutter, and global shutter, and depth sensing and light detection and ranging (LiDAR). Performance attributes of different architectures of chip stacking are overviewed. Direct bonding followed by Via-last through silicon via (Via-last TSV) and hybrid bonding (HB) technologies are identified as newer and favorable chip-to-chip interconnect technologies for image sensor chip stacking. The state-of-the-art ultrahigh-density interconnect manufacturability is also highlighted.



Selected figure captions from the paper (figures not reproduced here):

  • Schematics of an imaging pixel array, circuit blocks, and a typical 4T-APS pixel electronics
  • Exemplary schematics of front-side illuminated sensors (FSI-CIS) and back-side illuminated sensors (BSI-CIS)
  • Schematics of a ceramic leadless chip carrier image sensor package (top) and an imaging ball grid array image sensor package (bottom)
  • Schematics of two camera modules with image sensor packages at the bottom, under the lens modules
  • A micrograph of the partitioned top and bottom circuit blocks of the first stacked image sensor from Sony
  • Schematics of a stacked BSI pixel chip bonded to a circuit chip at dielectric surfaces with peripheral via-last TSVs
  • Dual-photodiode stacked-chip BSI-CIS processed in 65 nm/14 nm technologies
  • Chip-to-chip bonding and interconnect methods with (a) direct dielectric bonding followed by via-last TSVs for chip-to-chip interconnect, (b) hybrid bonding at the peripheral area, and (c) hybrid bonding under the pixel arrays
  • Pixel array, DRAM, and logic three-chip stacked image sensor by Sony using dielectric-to-dielectric bonding followed by via-last TSV interconnects at peripheral areas
  • A Sony stacked-chip global shutter sensor using pixel-level integration with (a) the pixel array chip, (b) the processor chip, and (c) a cross section of the stacked chips using hybrid bonding interconnects
  • Schematics of the pixel electronics partition and a cross-section view of OmniVision's stacked-chip pixel-level connections
  • A schematic of pixel electronics for a ToF SPAD image sensor
  • A cross section of a NIR SPAD ToF image sensor using hybrid bonding at the chip-to-chip interface
  • Schematic of a 3D stacked SPAD image sensor: (a) cross-section view of the pixel array chip stacked on the CMOS circuitry chip by hybrid bonding and (b) diagram of the pixel electronics
  • A chip floor plan and pixel array for a two-tier IR SPAD sensor for LiDAR applications [14]
  • Short-wavelength infrared sensor using InP/InGaAs/InP photodiodes bonded to readout circuitry in a silicon chip by pixel-level integration: (a) flip-chip bumps and (b) Cu-Cu hybrid bonds
  • Process flow of via-last TSV for chip-to-chip interconnect
  • Process flow of Direct Bond Interconnect
Link to full paper (open access): https://asmedigitalcollection.asme.org/electronicpackaging/article/144/2/020801/1115637/Advancement-of-Chip-Stacking-Architectures-and


Videos du jour: TinyML, Hamamatsu, ADI


tinyML Asia 2022
In-memory computing and Dynamic Vision Sensors: Recipes for tinyML in Internet of Video Things
Arindam BASU , Professor, Department of Electrical Engineering, City University of Hong Kong

Vision sensors are unique for IoT in that they provide rich information but also require excessive bandwidth and energy, which limits the scalability of this architecture. In this talk, we will describe our recent work in using event-driven dynamic vision sensors for IoVT applications like unattended ground sensors and intelligent transportation systems. To further reduce the energy of the sensor node, we utilize in-memory computing (IMC): the SRAM used to store the video frames is also used to perform basic image processing operations and to trigger the downstream deep neural networks. Lastly, we introduce a new concept of hybrid IMC combining multiple types of memory.
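The triggering idea in the talk (run a cheap comparison on frames already held in SRAM and only wake the deep network when something changed) can be sketched as generic frame differencing; the thresholds and function name below are hypothetical, not the speaker's exact in-memory operations:

    import numpy as np

    def wake_dnn(stored_frame, new_frame, pixel_threshold=15, activity_fraction=0.02):
        """Return True if enough pixels changed to justify running the full detector.

        Mimics in-memory preprocessing: a per-pixel absolute difference against the
        frame held in SRAM, followed by a simple activity count.
        """
        diff = np.abs(new_frame.astype(np.int16) - stored_frame.astype(np.int16))
        return (diff > pixel_threshold).mean() > activity_fraction

    prev = np.zeros((240, 320), dtype=np.uint8)
    curr = prev.copy()
    curr[100:140, 150:200] = 200          # a bright object enters the scene
    print(wake_dnn(prev, curr))           # True -> trigger the deep neural network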


 
Photon counting imaging using Hamamatsu's scientific imaging cameras - TechBites Series

With our new photon number resolving mode, the ORCA-Quest enables photon counting resolution across a full 9.4 megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.
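Photon-number resolving readout is possible when a camera's read noise is well below one electron, so the analog output of each pixel can be rounded to an integer photon count. A toy version of that conversion, with made-up gain and offset calibration values rather than ORCA-Quest parameters:

    import numpy as np

    def photon_numbers(adu, gain_e_per_adu=0.11, offset_adu=200.0):
        # Convert raw camera counts to integer photon numbers (toy calibration values).
        electrons = (np.asarray(adu, dtype=float) - offset_adu) * gain_e_per_adu
        return np.rint(electrons).astype(int)

    print(photon_numbers([200, 209, 218, 245]))   # -> [0 1 2 5]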


Accurate, Mobile Object Dimensioning using Time of Flight Technology

ADI's High Resolution 3D Depth Sensing Technology coupled with advanced image stitching algorithms enables the dimensioning of non-conveyable large objects for Logistics applications. Rather than move the object to a fixed dimensioning gantry, ADI's 3D technology enables operators to take the camera to the object to perform the dimensioning function. With the same level of accuracy as fixed dimensioners, making the system mobile reduces time and cost of measurement while enhancing energy efficiency.




CEA-Leti IEDM 2023 papers on emerging devices


From Semiconductor Digest: https://www.semiconductor-digest.com/cea-leti-will-present-gains-in-ultimate-3d-rf-power-and-quantum-neuromorphic-computing-with-emerging-devices/

CEA-Leti Will Present Gains in Ultimate 3D, RF & Power, and Quantum & Neuromorphic Computing with Emerging Devices

CEA-Leti papers at IEDM 2023, Dec. 9-13, in San Francisco, will present results in multiple fields, including ultimate 3D and advances in radio frequency, such as performance improvement at cryogenic temperature.

The institute will present nine papers during the conference this year. Two presentations will highlight a breakthrough in 3D sequential integration and results pushing GaN/Si HEMT closer to GaN/SiC performance at 28 GHz:

 “3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel” reports the world’s first 3D sequential integration of CMOS over CMOS with advanced metal line levels, which brings 3DSI with intermediate BEOL closer to commercialization.

 “6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts” reports development of CMOS compatible 200mm SiN/AlN/GaN MIS-HEMT on silicon substrate that brings GaN/Si high electron mobility transistors (HEMT) closer to GaN/SiC performance at 28 GHz in power density. It also highlights that SiN/AlN/GaN on silicon metal-insulated semiconductor (MIS-HEMT) is a potential candidate for high power Ka-band power amplifiers.

Leti Devices Workshop
“Semiconductor Devices: Moving Towards Efficiency & Sustainability”
Dec. 10 @ 5:30 pm, Nikko Hotel, 222 Mason Street, Third Floor
The workshop will present CEA-Leti experts’ visions for and key results in efficient computing and radiofrequency devices for More than Moore applications.

CEA-Leti Presentations

Radio Frequency

 RF: “A Cost Effective RF-SOI Drain Extended MOS Transistor Featuring PSAT=19dBm @28GHz & VDD=3V for 5G Power Amplifier Application”, by Xavier Garros
 Session 34.2: Wednesday, Dec. 13 @ 9:30 am (Continental 7-9)
 RF crypto: “RF Performance Enhancement of 28nm FD-SOI Transistors Down to Cryogenic Temperature Using Back Biasing”, by Quentin Berlingard
 Session 34.3: Wednesday, Dec. 13 @ 9:55 am (Continental 7-9)
 GaN RF: “6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts”, by Erwan Morvan
 Session 38.3: Wednesday, Dec. 13 @ 2:25 pm (Continental 4)

3D Sequential Stacking

 “Ultimate Layer Stacking Technology for High Density Sequential 3D Integration”, a collaborative paper with Ionut Rad of Soitec
 Session 19.5: Tuesday, Dec. 12 @ 4:00 pm (Grand Ballroom A)
 “3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel”, by Perrine Batude
 Session 29.3: Wednesday, Dec. 13 @ 9:55 am (Grand Ballroom B)

Emerging Device and Compute Technology (EDT)

 “Designing Networks of Resistively-Coupled Stochastic Magnetic Tunnel Junctions for Energy-Based Optimum Search”, by Kamal Danouchi
 Session 22.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 5)

Neuromorphic Computing

 “Hybrid FeRAM/RRAM Synaptic Circuit Enabling On-Chip Inference and Learning at the Edge”, by Michele Martemucci (LIST)
 Session 23.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 6)
 “Bayesian In-Memory Computing with Resistive Memories”, a collaborative paper with Damien Querlioz of CNRS-C2N
 Session 12.3: Tuesday, Dec. 12 @ 9:55 am (Continental 1-3)

Quantum Technology

 “Tunnel and Capacitive Coupling Optimization in FDSOI Spin-Qubit Devices”, by H. Niebojewski and B. Bertrand
 Session 22.6: Tuesday, Dec. 12 @ 4:25 pm (Continental 5)


STMicroelectronics releases new multizone time-of-flight sensor


Original article: https://www.eejournal.com/industry_news/next-generation-multizone-time-of-flight-sensor-from-stmicroelectronics-boosts-ranging-performance-and-power-saving/

Next-generation multizone time-of-flight sensor from STMicroelectronics boosts ranging performance and power saving

Target applications include human-presence sensing, gesture recognition, robotics, and other industrial uses

Geneva, Switzerland, December 14, 2023 – STMicroelectronics’ VL53L8CX, the latest-generation 8×8 multizone time-of-flight (ToF) ranging sensor, delivers a range of improvements including greater ambient-light immunity, lower power consumption, and enhanced optics.

ST’s direct-ToF sensors combine a 940nm vertical cavity surface emitting laser (VCSEL), a multizone SPAD (single-photon avalanche diode) detector array, and an optical system comprising filters and diffractive optical elements (DOE) in an all-in-one module that outperforms conventional micro lenses typically used with similar alternative sensors. The sensor projects a wide square field of view of 45° x 45° (65° diagonal) and receives reflected light to calculate the distance of objects up to 400cm away, across 64 independent zones, and up to 30 captures per second.
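The direct-ToF relationship behind this class of sensor is simply distance = (speed of light × round-trip time) / 2; a short sketch of the arithmetic (illustrative only, not ST firmware):

    C = 299_792_458.0                    # speed of light (m/s)

    def dtof_distance_m(round_trip_s):
        # Direct time of flight: the photon travels to the target and back.
        return C * round_trip_s / 2

    # A 400 cm target returns light after ~26.7 ns; millimetre precision needs ~7 ps timing.
    print(f"{dtof_distance_m(26.7e-9) * 100:.0f} cm")   # -> 400 cm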

The new VL53L8CX boosts ranging performance with a new-generation VCSEL and advanced silicon-based meta-optics. Compared with the current VL53L5CX, the enhancements increase immunity to interference from ambient light, extending the sensor’s maximum range in daylight from 170cm to 285cm and reducing power consumption from 4.5mW to 1.6mW in low-power mode.

ST released the first multizone time-of-flight sensor with the VL53L5CX in 2021. By increasing performance, the new VL53L8CX now further extends the advantages of these sensors over alternatives with conventional optics, which have fewer native zones and lose sensitivity in the outer areas. Thanks to its true 8×8 multizone sensing, the VL53L8CX ensures uniform sensitivity and accurate ranging throughout the field of view, with superior range in ambient light.

When used for system activation and human presence detection, the VL53L8CX’s greater ambient-light immunity enables equipment to respond more consistently and quickly. As part of ST’s STGesture™ platform that also includes the STSW-IMG035 turnkey gesture-recognition software and Gesture EVK development tool, the new sensor delivers the precision needed for repeatable gesture-based interaction. In addition to motion gesture recognition, hand posture recognition is also possible leveraging the latest AI models available in the STM32ai-modelzoo on GitHub.

Moreover, the VL53L8CX provides increased accuracy for monitoring the contents of bins, containers, silos, and tanks, including liquid-level monitoring, in industrial bulk storage and warehousing. The superior accuracy can also enhance the performance of drinks machines such as coffee makers and beverage dispensers.

Mobile robots including autonomous vacuum cleaners can leverage the VL53L8CX to improve guidance capabilities like floor sensing, small object detection, collision avoidance, and cliff detection. Also, the synchronization pin enables projectors and cameras to benefit from coordinated autofocus. There is also a motion indicator, an auto-stop feature that allows real-time actions, and the sensor is immune to cover-glass crosstalk beyond 60cm. Now supporting SPI connectivity, in addition to the 1MHz I2C interface, the new sensor handles host data transfers at up to 3MHz.

Designers can quickly evaluate the VL53L8CX and jump-start their projects taking advantage of the supporting ecosystem that includes the X-NUCLEO-53L8A1 expansion board and SATEL-VL53L8 breakout boards. The P-NUCLEO-53L8A1 pack is also available, which contains a STM32F401 Nucleo microcontroller board and X-NUCLEO-53L8A1 expansion board ready to power up and start exploring.
The VL53L8CX is available now, housed in a 6.4mm x 3.0mm x 1.75mm leadless package, from $3.60 for orders of 1000 pieces.

Please visit www.st.com/VL53L8CX for more information.

Go to the original article...

3D cameras at CES 2024: Orbbec and MagikEye

Image Sensors World        Go to the original article...

Announcements below from (1) Orbbec and (2) MagikEye about their upcoming CES demos.


Orbbec releases Persee N1 camera-computer kit for 3D vision enthusiasts, powered by the NVIDIA Jetson platform


Orbbec’s feature-rich RGB-D camera-computer is a ready-to-use out-of-the box solution for 3D vision application developers and experimenters

Troy, Mich, 13 December 2023 — Orbbec, an industry leader dedicated to 3D vision systems, has developed the Persee N1, an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform, and equipped with industry-standard interfaces for the most useful accessories and data connections. Developers using the newly launched camera-computer will also enjoy the benefits of the Ubuntu OS and OpenCV libraries. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner.

Persee N1 delivers highly accurate and reliable data for indoor/semi-outdoor operation, ideally suited for healthtech, dimensioning, interactive gaming, retail and robotics applications, and features:

  • An easy setup process using the Orbbec SDK and Ubuntu-based software environment.
  • Industry-proven Gemini 2 camera, based on active stereo IR technology, which includes Orbbec’s custom ASIC for high-quality, in-camera depth processing.
  • The powerful NVIDIA Jetson platform for edge AI and robotics.
  • HDMI and USB ports for easy connections to a monitor and keyboard.
  • Multiple USB ports for data and a POE (Power over Ethernet) port for combined data and power connections.
  •  Expandable storage with MicroSD and M.2 slots.

“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” said Amit Banerjee, Head of Platform and Partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale cloud-based commercial deployments.”

The new camera module also features official support for the widely used Open Computer Vision (OpenCV) library. OpenCV is used in an estimated 89% of all embedded vision projects according to industry reports. This integration marks the beginning of a deeper collaboration between Orbbec and OpenCV, which is operated by the non-profit Open Source Vision Foundation.

“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mullick, CEO of OpenCV. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”
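For readers who want to try the OpenCV route mentioned above, recent OpenCV releases ship an Orbbec (OBSENSOR) capture backend; the sketch below assumes an OpenCV build that includes this backend and a supported camera (support for specific models such as the Gemini 2 depends on the OpenCV version), so treat it as a starting point rather than Orbbec's official SDK usage.

import cv2

# Open the first Orbbec device via OpenCV's OBSENSOR backend (assumed available
# in this OpenCV build; otherwise use the Orbbec SDK directly).
cap = cv2.VideoCapture(0, cv2.CAP_OBSENSOR)
if not cap.isOpened():
    raise RuntimeError("No Orbbec camera found via the OBSENSOR backend")

while True:
    if not cap.grab():
        continue
    ok_d, depth = cap.retrieve(None, cv2.CAP_OBSENSOR_DEPTH_MAP)  # 16-bit depth, mm
    ok_c, color = cap.retrieve(None, cv2.CAP_OBSENSOR_BGR_IMAGE)  # 8-bit BGR frame
    if ok_d:
        cv2.imshow("depth", cv2.convertScaleAbs(depth, alpha=255.0 / 4000.0))
    if ok_c:
        cv2.imshow("color", color)
    if cv2.waitKey(1) == 27:  # Esc quits
        break

cap.release()
cv2.destroyAllWindows()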


MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES

From Businesswire.

December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. Embarking on a mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.

The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” ARM M0 processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.

Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”

Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.

Go to the original article...

imec paper at IEDM 2023 on a waveguide design for color imaging

Image Sensors World        Go to the original article...

News article: https://optics.org/news/14/12/11

imec presents new way to render colors with sub-micron pixel sizes

This week at the International Electron Devices Meeting, in San Francisco, CA, (IEEE IEDM 2023), imec, a Belgium-based research and innovation hub in nanoelectronics and digital technologies, has demonstrated a new method for “faithfully splitting colors with sub-micron resolution using standard back-end-of-line processing on 300mm wafers”.

imec says that the technology is poised to elevate high-end camera performance, delivering a higher signal-to-noise ratio and enhanced color quality with unprecedented spatial resolution.
Designing next-generation CMOS imagers requires striking a balance between collecting all incoming photons, achieving resolution down to the diffraction limit, and accurately recording the color of the light.

Traditional image sensors with color filters on the pixels are still limited in combining all three requirements. While higher pixel densities would increase the overall image resolution, smaller pixels capture even less light and are prone to artifacts that result from interpolating color values from neighboring pixels.

Even though diffraction-based color splitters represent a leap forward in increasing color sensitivity and capturing light, they are still unable to improve image resolution.

'Fundamentally new' approach
imec is now proposing a fundamentally new way for splitting colors at sub-micron pixel sizes (i.e., beyond the fundamental Abbe diffraction limit) using standard back-end processing. The approach is said to “tick all the boxes” for next-generation imagers by collecting nearly all photons, increasing resolution by utilizing very small pixels, and rendering colors faithfully.
To achieve this, imec researchers built an array of vertical Si3N4 multimode waveguides in an SiO2 matrix. The waveguides have a tapered, diffraction-limited sized input (e.g., 800 x 800 nm2) to collect all the incident light.

“In each waveguide, incident photons are exciting both symmetric and asymmetric modes, which propagate through the waveguide differently, leading to a unique “beating” pattern between the two modes for a given frequency. This beating pattern enables a spatial separation at the end of the waveguides corresponding to a specific color,” said Prof. Jan Genoe, scientific director at imec.
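The beating Prof. Genoe describes can be pictured with a textbook two-mode interference model (a simplification, not imec's actual design equations): the symmetric and asymmetric modes accumulate phase at different rates, so the lateral position of the intensity maximum oscillates along the waveguide with a wavelength-dependent beat length, and a fixed waveguide length therefore maps different colors to different exit positions.

% Two-mode beating picture (illustrative only):
\[
  L_\pi(\lambda) \;=\; \frac{\pi}{\beta_s(\lambda) - \beta_a(\lambda)},
  \qquad
  P_{\mathrm{side}}(\lambda) \;\approx\; \cos^2\!\left(\frac{[\beta_s(\lambda)-\beta_a(\lambda)]\,L}{2}\right),
\]
% where \beta_s and \beta_a are the propagation constants of the symmetric and
% asymmetric modes and L is the waveguide length. Choosing L so that blue and
% yellow light differ by roughly half a beat length routes the two colors to
% opposite sides of the output, as in the measurement shown below.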

Cost-efficient structures
The total output light from each waveguide is estimated to reach over 90% within the range of human color perception (wavelength range 400-700nm), making it superior to color filters, says imec.
Robert Gehlhaar, principal member of technical staff at imec, said, “Because this technique is compatible with standard 300-mm processing, the splitters can be produced cost-efficiently. This enables further scaling of high-resolution imagers, with the ultimate goal to detect every incident photon and its properties.

“Our ambition is to become the future standard for color imaging with diffraction-limited resolution. We are welcoming industry partners to join us on this path towards full camera demonstration.”

 

RGB camera measurement (100x magnification) of an array of waveguides with alternating 5 left-side-open-aperture and 5 right-side-open-aperture (the others being occluded by TiN) waveguides at a 1-micron pitch. Yellow light exits at the right part of the waveguide, whereas the blue exits at the left. The wafer is illuminated using plane wave white light. Credit: imec.



3D visualization (left) and TEM cross-section (right) of the vertical waveguide array for color splitting in BY-CR imaging. Credit: imec.


Go to the original article...

OmniVision 15MP/1MP hybrid RGB/event vision sensor (ISSCC 2023)

Image Sensors World        Go to the original article...

Guo et al. from OmniVision presented a hybrid RGB/event vision sensor in a paper titled "A 3-Wafer-Stacked Hybrid 15MPixel CIS + 1 MPixel EVS with 4.6GEvent/s Readout, In-Pixel TDC and On-Chip ISP and ESP Function" at ISSCC 2023.

Abstract: Event Vision Sensors (EVS) determine, at pixel level, whether a temporal contrast change beyond a predefined threshold is detected [1–6]. Compared to CMOS image sensors (CIS), this new modality inherently provides data-compression functionality and hence, enables high-speed, low-latency data capture while operating at low power. Numerous applications such as object tracking, 3D detection, or slow-motion are being researched based on EVS [1]. Temporal contrast detection is a relative measurement and is encoded by so-called “events” being further characterized through x/y pixel location, event time-stamp (t) and the polarity (p), indicating whether an increase or decrease in illuminance has been detected.
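To make the event encoding concrete, here is a minimal, idealized model of how an EVS pixel emits (x, y, t, p) tuples: an event fires whenever the log-intensity at a pixel has changed by more than a contrast threshold since the last event there. Real pixels add latency, noise and refractory effects, and this sketch is not OmniVision's processing chain.

import numpy as np

def generate_events(frames, timestamps, theta=0.2):
    """Idealized EVS model: emit (x, y, t, p) whenever the log-intensity at a
    pixel changes by more than the contrast threshold theta since the last
    event at that pixel. 'frames' is a sequence of intensity images."""
    log_ref = np.log(np.asarray(frames[0], dtype=float) + 1e-6)
    events = []
    for img, t in zip(frames[1:], timestamps[1:]):
        log_i = np.log(np.asarray(img, dtype=float) + 1e-6)
        diff = log_i - log_ref
        ys, xs = np.nonzero(np.abs(diff) >= theta)
        for x, y in zip(xs, ys):
            p = 1 if diff[y, x] > 0 else 0   # polarity: illuminance up / down
            events.append((int(x), int(y), t, p))
            log_ref[y, x] = log_i[y, x]      # reset the per-pixel reference
    return events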

 

Schematic of dual wafer 4x4 macro-pixel and peripheral readout circuitry on third wafer.

EVS readout block diagram and asynchronous scanner with hierarchical skip-logic.
 
 
Event-signal processor (ESP) block diagram and MIPI interface.
 


Sensor output illustrating hybrid CIS and EVS data capture. 10kfps slow-motion images of an exploding water balloon from 1080p, 120fps + event data.
 
 
Characterization results: Contrast response, nominal contrast, latency and noise vs. illuminance.
 


 
Technology trend and chip micrograph.

Go to the original article...

X-FAB introduces NIR SPADs on their 180nm process

Image Sensors World        Go to the original article...

X-FAB Introduces New Generation of Enhanced Performance SPAD Devices focused on Near-Infrared Applications

Link: https://www.xfab.com/news/details/article/x-fab-introduces-new-generation-of-enhanced-performance-spad-devices-focused-on-near-infrared-applications?trk=feed_main-feed-card_feed-article-content

NEWS – Tessenderlo, Belgium – Nov 16, 2023

X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has introduced a specific near-infrared version to its single-photon avalanche diode (SPAD) device portfolio. Like the previous SPAD generation, which launched in 2021, this version is based on the company’s 180nm XH018 process. The inclusion of an additional step to the fabrication workflow has resulted in significant increases in signal while still retaining the same low noise floor, without negatively affecting parameters such as dark count rate, afterpulsing and breakdown voltage.

Through this latest variant, X-FAB is successfully expanding the scope of its SPAD offering, improving its ability to address numerous emerging applications where NIR operation proves critically important. Among these are time-of-flight sensing in industrial applications, vehicle LiDAR imaging, biophotonics and FLIM research work, plus a variety of different medical-related activities. Sensitivity is boosted over the whole near-infrared (NIR) band, with respective improvements of 40% and 35% at the key wavelengths of 850nm and 905nm.

Using the new SPAD devices will reduce the complexity of visible-light filtering, since UV and visible light are already suppressed. Filter designs will consequently be simpler, with fewer component parts involved. Furthermore, having exactly the same footprint dimensions as the previous SPAD generation provides a straightforward upgrade route. Customers’ existing designs can gain major performance benefits by simply swapping in the new devices.

X-FAB has compiled a comprehensive PDK for the near-infrared SPAD variant, with extensive documentation and application notes included. Models for optical and electrical simulation provide engineers with the additional design support they need, enabling them to integrate these devices into their circuitry within a short time.

As Heming Wei, Product Marketing Manager Sensors at X-FAB, explains: “Our SPAD technology has already gained a very positive market response, seeing uptake with a multitude of customers. Thanks to continuing innovation at the process level, we have now been able to develop a solution that will secure business for us within various NIR applications, across automotive, healthcare and life sciences.”
The new NIR enhanced SPAD is available now. Engineers can start their design with the new device immediately.

Go to the original article...

Lecture by Dr. Tobi Delbruck on the history of silicon retina and event cameras

Image Sensors World        Go to the original article...

Silicon Retina: History, Live Demo, and Whiteboard Pixel Design


 

Rockwood Memorial Lecture 2023: Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich

Event Camera Silicon Retina; History, Live Demo, and Whiteboard Circuit Design
Rockwood Memorial Lecture 2023 (11/20/23)
https://inc.ucsd.edu/events/rockwood/
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation, https://inc.ucsd.edu

Abstract: Event cameras electronically model spike-based sparse output from biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture will have three parts: 1) a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; 2) a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; and 3) (targeted at neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion about event camera pixel design at the transistor level, highlighting design aspects of event camera pixels which endow them with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.

Go to the original article...

3D stacked BSI SPAD sensor with on-chip lens

Image Sensors World        Go to the original article...

Fujisaki et al. from Sony Semiconductor (Japan) presented a paper titled "A back-illuminated 6 μm SPAD depth sensor with PDE 36.5% at 940 nm via combination of dual diffraction structure and 2×2 on-chip lens" at the 2023 IEEE Symposium on VLSI Technology and Circuits.

Abstract: We present a back-illuminated 3D-stacked 6 μm single-photon avalanche diode (SPAD) sensor with very high photon detection efficiency (PDE) performance. To enhance PDE, a dual diffraction structure was combined with 2×2 on-chip lens (OCL) for the first time. A dual diffraction structure comprises a pyramid surface for diffraction (PSD) and periodic uneven structures by shallow trench for diffraction formed on the Si surface of light-facing and opposite sides, respectively. Additionally, PSD pitch and SiO2 film thickness buried in full trench isolation were optimized. Consequently, a PDE of 36.5% was achieved at λ = 940 nm, the world’s highest value. Owing to shield ring contact, crosstalk was reduced by about half compared to a conventionally plugged one.




Schematics of Gapless and 2x2 on-chip lens.




Cross sectional SPAD image of (a) our previous work and (b) this work.



Go to the original article...

Early announcement: Single Photon Workshop 2024

Image Sensors World        Go to the original article...

Single Photon Workshop 2024
EICC Edinburgh 18-22 Nov 2024
www.spw2024.org
 

The 11th Single Photon Workshop (SPW) 2024 will be held 18-22 November 2024, hosted at the Edinburgh International Conference Centre.

SPW is the largest conference in the world dedicated to single-photon generation and detection technology and applications. The biennial international conference brings together a broad range of experts across academia, industry and government bodies with interests in single-photon sources, single-photon detectors, photon entanglement, photonic quantum technologies and their use in scientific and industrial applications. It is an exciting opportunity for those interested in these technologies to learn about the state of the art and to foster continuing partnerships with others seeking to advance the capabilities of such technologies.

In tandem with the scientific programme, SPW 2024 will include a major industry exhibition and networking events.
 
Please register your interest at www.spw2024.org
 
Official registration will open in January 2024.
 
The 2024 workshop is being jointly organized by Heriot-Watt University and University of Glasgow.

Go to the original article...

IISW2023 special issue paper: Small-pitch InGaAs photodiodes

Image Sensors World        Go to the original article...

In a new paper titled "Design and Characterization of 5 μm Pitch InGaAs Photodiodes Using In Situ Doping and Shallow Mesa Architecture for SWIR Sensing" Jules Tillement et al. from STMicroelectronics, U. Grenoble and CNRS Grenoble write:

Abstract: This paper presents the complete design, fabrication, and characterization of a shallow-mesa photodiode for short-wave infra-red (SWIR) sensing. We characterized and demonstrated photodiodes collecting 1.55 μm photons with a pixel pitch as small as 3 μm. For a 5 μm pixel pitch photodiode, we measured the external quantum efficiency reaching as high as 54%. With substrate removal and an ideal anti-reflective coating, we estimated the internal quantum efficiency as achieving 77% at 1.55 μm. The best measured dark current density reached 5 nA/cm2 at −0.1 V and at 23 °C. The main contributors responsible for this dark current were investigated through the study of its evolution with temperature. We also highlight the importance of passivation with a perimetric contribution analysis and the correlation between MIS capacitance characterization and dark current performance.

Full paper (open access): https://www.mdpi.com/1424-8220/23/22/9219
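The temperature study mentioned in the abstract (Figure 11) relies on the standard signatures of the two dark-current mechanisms; for reference, under the usual textbook assumptions they scale with the intrinsic carrier concentration as follows (generic relations, not the paper's fitted values).

\[
  J_{\mathrm{diff}} \;\propto\; n_i^{2} \;\propto\; \exp\!\left(-\frac{E_g}{kT}\right),
  \qquad
  J_{\mathrm{GR}} \;\propto\; n_i \;\propto\; \exp\!\left(-\frac{E_g}{2kT}\right),
\]
% so an Arrhenius plot of the dark current shows an activation energy close to
% the bandgap E_g when diffusion dominates and close to E_g/2 when
% Shockley-Read-Hall generation-recombination dominates.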

Figure 1. Schematic cross section of the photodiode after different processes. (a) Photodiode fabricated by Zn diffusion or Be implantation; (b) photodiode fabrication using shallow mesa technique.

Figure 2. Band diagram of simulated structure at equilibrium with the photogenerated pair schematically represented with their path of collection.


Figure 3. Top zoom of the structure—Impact of the N-InP (a) thickness and (b) doping on the band diagram at equilibrium.

Figure 4. Simulated dark current with TCAD Synopsys tools [28]. (a) Shows evolution of the dark current when the InP SRH lifetime is modulated; (b) evolution of the dark current when the InGaAs SRH lifetime is modulated.

Figure 5. Impact of the doping concentration of the InP barrier on the carrier collection.

Figure 6. Simplified and schematic process flow of the shallow mesa-type process. (a) The full stack; (b) the definition of the pixel by etching the P layer and (c) the encapsulation and fabrication of contacts.

Figure 7. SEM views after the whole process. (a) A cross-section of the top stack where the P layer is etched and (b) a top view of the different configuration of the test structures (single in-array diode is not shown on this SEM view).

Figure 8. Schematic cross section of the structure with its potential sources of the dark current.


Figure 9. Dark current measurement on 15 μm pitch in a matrix-like environment. The curve is the median of more than 100 single in-array diodes measured.

Figure 10. Dark current measurement of the ten-by-ten diode bundle. This measurement is from process B.

Figure 11. Evolution of the dark current with temperature at −0.1 V. The solid lines show the theoretical evolution of the current limited by diffusion (light blue line) and by generation recombination (purple line). The temperature measurement is performed on a bundle of ten-by-ten 5 μm pixel pitch diodes.

Figure 12. Perimetric and bulk contribution to the global dark current from measurements performed on diodes with diameter ranging from 10 to 120 μm.

Figure 13. (a) Capacitance measurement on metal–insulator–semiconductor structure. The measurement starts at 0 V then ramps to +40 V then goes to −40 V and ends at +40 V. (b) A cross section of the MIS structure. The MIS is a 300 μm diameter circle.

Figure 14. Dark current performances compared to the hysteresis measured on several different wafers.

Figure 15. Dark current measurement of a ten-by-ten bundle of 5 μm pixel pitch photodiode. The measurements are conducted at 23 °C.

Figure 16. (a) Schematic test structure for QE measurement; (b) the results of the 3D FDTD simulations conducted with Lumerical to estimate the internal QE of the photodiode.


Figure 18. Current noise for a ten-by-ten 5 μm pixel pitch photodiode bundle measured at −0.1 V.

Figure 19. Median current measurement for bundles of one hundred 3 μm pixel pitch photodiodes under dark and SWIR illumination conditions. The dark blue line represents the dark current and the pink line is the photocurrent under 1.55 μm illumination.

Figure 20. Comparison of our work in blue versus the state of the art for the fabrication of InGaAs photodiodes.

Go to the original article...

Sony announces new 5MP SWIR sensor IMX992

Image Sensors World        Go to the original article...

Product page: https://www.sony-semicon.com/en/products/is/industry/swir/imx992-993.html

Press release: https://www.sony-semicon.com/en/news/2023/2023112901.html

Sony Semiconductor Solutions to Release SWIR Image Sensor for Industrial Applications with Industry-Leading 5.32 Effective Megapixels
Expanding the lineup for delivering high-resolution and low-light performance


Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX992 short-wavelength infrared (SWIR) image sensor for industrial equipment, with the industry’s highest pixel count, at 5.32 effective megapixels.

The new sensor uses SSS’s proprietary Cu-Cu connection to achieve the industry’s smallest pixel size of 3.45 μm among SWIR image sensors. It also features an optimized pixel structure for efficiently capturing light, enabling high-definition imaging across a broad spectrum ranging from the visible to invisible short-wavelength infrared regions (wavelength: 0.4 to 1.7 μm). Furthermore, new shooting modes deliver high-quality images with significantly reduced noise in dark environments compared to conventional products.

In addition to this product, SSS will also release the IMX993 with a pixel size of 3.45 μm and an effective pixel count of 3.21 megapixels to further expand its SWIR image sensor lineup. These new SWIR image sensors with high pixel counts and high sensitivity will help contribute to the evolution of various industrial equipment.

In the industrial equipment domain in recent years, there has been increasing demand for improving productivity and preventing defective products from leaving the plant. In this context, the capacity to sense not only visible light but also light in the invisible band is in demand. SSS’s SWIR image sensors, which are capable of seamless wide spectrum imaging in the visible to invisible short-wavelength infrared range using a single camera, are already being used in various processes such as semiconductor wafer bonding and defect inspection, as well as ingredient and contaminant inspections in food production.

The new sensors enable imaging with higher resolution using pixel miniaturization, while enhancing imaging performance in low-light environments to provide higher quality imaging in inspection and monitoring applications conducted in darker conditions. By making the most of the characteristics of short-wavelength infrared light, whose light reflection and absorption properties are different from those of visible light, these products help to further expand applications in such areas as inspection, recognition and measurement, thereby contributing to improved industrial productivity.

Main Features
* High pixel count made possible by the industry’s smallest pixels at 3.45 μm, delivering high-resolution imaging

A Cu-Cu connection is used between the indium-gallium arsenide (InGaAs) layer that forms the photodiode of the light receiving unit and the silicon (Si) layer that forms the readout circuit. This design allows for a smaller pixel pitch, resulting in the industry’s smallest pixel size of 3.45 μm. This, in turn, helps achieve a compact form factor that still delivers the industry’s highest pixel count of approximately 5.32 effective megapixels on the IMX992, and approximately 3.21 effective megapixels on the IMX993. The higher pixel count enables detection of tiny objects or imaging across a wide range, contributing to significantly improved recognition and measurement precision in various inspections using short-wavelength infrared light.


 Comparison of SWIR images with different resolutions: Lighting wavelength 1550 nm
(Left: Other SSS product, 1.34 effective megapixels; Right: IMX992)

* Low-noise imaging even in dark locations possible by switching the shooting mode

Inclusion of new shooting modes enables low-noise imaging regardless of environmental brightness. In dark environments with limited light, High Conversion Gain (HCG) mode amplifies the signal immediately after the light has been converted to an electrical signal, before significant noise is added, thereby reducing the relative contribution of downstream noise. Minimizing the impact of noise in dark locations leads to greater recognition precision. In bright environments with plenty of light, on the other hand, Low Conversion Gain (LCG) mode enables imaging that prioritizes dynamic range.
Furthermore, enabling Dual Read Rolling Shutter (DRRS) causes the sensor to output two distinct types of images, which are then composited on the camera side to obtain an image with significantly reduced noise.
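The trade-off behind the two modes can be summarized with generic CIS relations (illustrative expressions, not Sony's published figures): a higher conversion gain lowers the input-referred read noise, which matters in the dark, while a lower conversion gain preserves a large full well and hence dynamic range in bright scenes.

\[
  \sigma_{e^-} \;=\; \frac{v_{n,\mathrm{read}}}{\mathrm{CG}},
  \qquad
  \mathrm{DR} \;=\; 20\log_{10}\!\frac{N_{\mathrm{FW}}}{\sigma_{e^-}}\ \mathrm{dB},
\]
% where CG is the conversion gain (volts per electron), v_{n,read} the
% voltage-domain read noise, \sigma_{e^-} the input-referred noise in electrons
% and N_FW the full-well capacity in electrons: HCG shrinks \sigma_{e^-} for
% dark scenes, LCG keeps N_FW (and thus DR) high for bright ones.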

Image quality and noise comparison in dark location: Lighting wavelength 1450 nm
(Left: Other SSS product, 1.34 effective megapixels; Center: IMX992, HCG mode selected; Right: IMX992, HCG mode selected, DRRS enabled)

 

* Optimized pixel structure for high-sensitivity imaging across a wide range

SSS’s SWIR image sensors employ a thinner indium phosphide (InP) top layer, which would otherwise inevitably absorb visible light, thereby allowing visible light to reach the indium gallium arsenide (InGaAs) layer underneath and delivering high quantum efficiency even at visible wavelengths. The new products deliver even higher quantum efficiency by optimizing the pixel structure, enabling more uniform sensitivity characteristics across the wide wavelength band from 0.4 to 1.7 μm. Minimizing the image quality differences between wavelengths makes it possible to use the image sensor in a variety of industrial applications and contributes to improved reliability in inspection, recognition, and measurement applications.

 

Product Overview



 

Go to the original article...

Prof. Edoardo Charbon’s Talk on IR SPADs for LiDAR & Quantum Imaging

Image Sensors World        Go to the original article...

 


SWIR/NIR SPAD Image Sensors for LIDAR and Quantum Imaging Applications, by Prof. Charbon

In this talk, Prof. Charbon will review the evolution of solid-state photon counting sensors from avalanche photodiodes (APDs) to silicon photomultipliers (SiPMs) to single-photon avalanche diodes (SPADs). The impact of these sensors on LiDAR has been remarkable; however, more innovations are to come with the continuous advance of integrated SPADs and the introduction of powerful computational imaging techniques directly coupled to SPADs/SiPMs. New technologies, such as 3D-stacking in combination with Ge and InP/InGaAs SPAD sensors, are accelerating the adoption of SWIR/NIR image sensors while enabling new sensing functionalities. Prof. Charbon will conclude the talk with a technological perspective on how all these technologies could come together in low-cost, computationally intensive image sensors for affordable, yet powerful quantum imaging.

Edoardo Charbon (SM’00 F’17) received the Diploma from ETH Zurich, the M.S. from the University of California at San Diego, and the Ph.D. from the University of California at Berkeley in 1988, 1991, and 1995, respectively, all in electrical engineering and EECS. He has consulted with numerous organizations, including Bosch, X-Fab, Texas Instruments, Maxim, Sony, Agilent, and the Carlyle Group. He was with Cadence Design Systems from 1995 to 2000, where he was the Architect of the company's initiative on information hiding for intellectual property protection. In 2000, he joined Canesta Inc., as the Chief Architect, where he led the development of wireless 3-D CMOS image sensors.
Since 2002 he has been a member of the faculty of EPFL, where he is a full professor. From 2008 to 2016 he was with Delft University of Technology as Chair of VLSI Design. Dr. Charbon has been the driving force behind the creation of deep-submicron CMOS SPAD technology, which has been mass-produced since 2015 and is present in telemeters, proximity sensors, and medical diagnostics tools. His interests span from 3-D vision, LiDAR, FLIM, FCS, NIROT to super-resolution microscopy, time-resolved Raman spectroscopy, and cryo-CMOS circuits and systems for quantum computing. He has authored or co-authored over 400 papers and two books, and he holds 24 patents. Dr. Charbon is the recipient of the 2023 IISS Pioneering Achievement Award, he is a distinguished visiting scholar of the W. M. Keck Institute for Space at Caltech, a fellow of the Kavli Institute of Nanoscience Delft, a distinguished lecturer of the IEEE Photonics Society, and a fellow of the IEEE.

Go to the original article...

Prophesee event sensor in 2023 VLSI symposium

Image Sensors World        Go to the original article...

Schon et al. from Prophesee published a paper titled "A 320 x 320 1/5" BSI-CMOS stacked event sensor for low-power vision applications" at the 2023 VLSI Symposium. This paper presents some technical details about their recently announced GenX320 sensor.

Abstract
Event vision sensors acquire sparse data, making them suited for edge vision applications. However, unconventional data format, non-constant data rates and non-standard interfaces restrain wide adoption. A 320x320 6.3μm pixel BSI stacked event sensor, specifically designed for embedded vision, features multiple data pre-processing, filtering and formatting functions, variable MIPI and CPI interfaces and a hierarchy of power modes, facilitating operability in power-sensitive vision applications.







Go to the original article...

ISSCC 2024 Advanced Program Now Available

Image Sensors World        Go to the original article...

ISSCC will be held Feb 18-22, 2024 in San Francisco, CA.

Link to advanced program: https://submissions.mirasmart.com/ISSCC2024/PDF/ISSCC2024AdvanceProgram.pdf

There are several papers of interest in Session 6 on Imagers and Ultrasound. 

6.1 12Mb/s 4×4 Ultrasound MIMO Relay with Wireless Power and Communication for Neural Interfaces
E. So, A. Arbabian (Stanford University, Stanford, CA)

6.2 An Ultrasound-Powering TX with a Global Charge-Redistribution Adiabatic Drive Achieving 69% Power Reduction and 53° Maximum Beam Steering Angle for Implantable Applications
 M. Gourdouparis1,2, C. Shi1, Y. He1, S. Stanzione1, R. Ukropec3, P. Gijsenbergh3, V. Rochus3, N. Van Helleputte3, W. Serdijn2, Y-H. Liu1,2
 1 imec, Eindhoven, The Netherlands
 2 Delft University of Technology, Delft, The Netherlands
 3 imec, Leuven, Belgium

6.3 Imager with In-Sensor Event Detection and Morphological Transformations with 2.9pJ/pixel×frame Object Segmentation FOM for Always-On Surveillance in 40nm
 J. Vohra, A. Gupta, M. Alioto, National University of Singapore, Singapore, Singapore

6.4 A Resonant High-Voltage Pulser for Battery-Powered Ultrasound Devices
 I. Bellouki1, N. Rozsa1, Z-Y. Chang1, Z. Chen1, M. Tan1,2, M. Pertijs1
 1 Delft University of Technology, Delft, The Netherlands
 2 SonoSilicon, Hangzhou, China

6.5 A 0.5°-Resolution Hybrid Dual-Band Ultrasound Imaging SoC for UAV Applications
 J. Guo1, J. Feng1, S. Chen1, L. Wu1, C-W. Tsai1,2, Y. Huang1, B. Lin1, J. Yoo1,2
 1 National University of Singapore, Singapore, Singapore
 2 The N.1 Institute for Health, Singapore, Singapore

6.6 A 10,000 Inference/s Vision Chip with SPAD Imaging and Reconfigurable Intelligent Spike-Based Vision Processor
 X. Yang*1, F. Lei*1, N. Tian*1, C. Shi2, Z. Wang1, S. Yu1, R. Dou1, P. Feng1, N. Qi1, J. Liu1, N. Wu1, L. Liu1
 1 Chinese Academy of Sciences, Beijing, China 2 Chongqing University, Chongqing, China
 *Equally Credited Authors (ECAs)

6.7 A 160×120 Flash LiDAR Sensor with Fully Analog-Assisted In-Pixel Histogramming TDC Based on Self-Referenced SAR ADC
 S-H. Han1, S. Park1, J-H. Chun2,3, J. Choi2,3, S-J. Kim1
 1 Ulsan National Institute of Science and Technology, Ulsan, Korea
 2 Sungkyunkwan University, Suwon, Korea
 3 SolidVue, Seongnam, Korea

6.8 A 256×192-Pixel 30fps Automotive Direct Time-of-Flight LiDAR Using 8× Current-Integrating-Based TIA, Hybrid Pulse Position/Width Converter, and Intensity/CNN-Guided 3D Inpainting
 C. Zou1, Y. Ou1, Y. Zhu1, R. P. Martins1,2, C-H. Chan1, M. Zhang1
 1 University of Macau, Macau, China
 2 Instituto Superior Tecnico/University of Lisboa, Lisbon, Portugal

6.9 A 0.35V 0.367TOPS/W Image Sensor with 3-Layer Optical-Electronic Hybrid Convolutional Neural Network
 X. Wang*, Z. Huang*, T. Liu, W. Shi, H. Chen, M. Zhang
 Tsinghua University, Beijing, China
 *Equally Credited Authors (ECAs)

6.10 A 1/1.56-inch 50Mpixel CMOS Image Sensor with 0.5μm pitch Quad Photodiode Separated by Front Deep Trench Isolation
 D. Kim, K. Cho, H-C. Ji, M. Kim, J. Kim, T. Kim, S. Seo, D. Im, Y-N. Lee, J. Choi, S. Yoon, I. Noh, J. Kim, K. J. Lee, H. Jung, J. Shin, H. Hur, K. E. Chang, I. Cho, K. Woo, B. S. Moon, J. Kim, Y. Ahn, D. Sim, S. Park, W. Lee, K. Kim, C. K. Chang, H. Yoon, J. Kim, S-I. Kim, H. Kim, C-R. Moon, J. Song
 Samsung Semiconductor, Hwaseong, Korea

6.11 A 320x240 CMOS LiDAR Sensor with 6-Transistor nMOS-Only SPAD Analog Front-End and Area-Efficient Priority Histogram Memory
 M. Kim*1, H. Seo*1,2, S. Kim1, J-H. Chun1,2, S-J. Kim3, J. Choi*1,2
 1 Sungkyunkwan University, Suwon, Korea
 2 SolidVue, Seongnam, Korea
 3 Ulsan National Institute of Science and Technology, Ulsan, Korea
 *Equally Credited Authors (ECAs)
 

Imaging papers in other sessions: 

17.3 A Fully Wireless, Miniaturized, Multicolor Fluorescence Image Sensor Implant for Real-Time Monitoring in Cancer Therapy
 R. Rabbani*1, M. Roschelle*1, S. Gweon1, R. Kumar1, A. Vercruysse1, N. W. Cho2, M. H. Spitzer2, A. M. Niknejad1, V. M. Stojanovic1, M. Anwar1,2
 1 University of California, Berkeley, CA
 2 University of California, San Francisco, CA
 *Equally Credited Authors (ECAs)

33.10 A 2.7ps-ToF-Resolution and 12.5mW Frequency-Domain NIRS Readout IC with Dynamic Light Sensing Frontend and Cross-Coupling-Free Inter-Stabilized Data Converter
 Z. Ma1, Y. Lin1, C. Chen1, X. Qi1, Y. Li1, K-T. Tang2, F. Wang3, T. Zhang4, G. Wang1, J. Zhao1
 1 Shanghai Jiao Tong University, Shanghai, China
 2 National Tsing Hua University, Hsinchu, Taiwan
 3 Shanghai United Imaging Microelectronics Technology, Shanghai, China
 4 Shanghai Mental Health Center, Shanghai, China

Go to the original article...

IISW2023 special issue paper on well capacity of pinned photodiodes

Image Sensors World        Go to the original article...

Miyauchi et al. from Brillnics and Tohoku University published a paper titled "Analysis of Light Intensity and Charge Holding Time Dependence of Pinned Photodiode Full Well Capacity" in the IISW 2023 special issue of the journal Sensors.

Abstract
In this paper, the light intensity and charge holding time dependence of pinned photodiode (PD) full well capacity (FWC) are studied for our pixel structure with a buried overflow path under the transfer gate. The formulae for PDFWC derived from a simple analytical model show that the relation between light intensity and PDFWC is logarithmic because PDFWC is determined by the balance between the photo-generated current and the overflow current under the bright condition. Furthermore, using pulsed light before a charge holding operation in the PD, the accumulated charges in the PD decrease with the holding time due to the overflow current, finally reaching the equilibrium PDFWC. The analytical model has been successfully validated by technology computer-aided design (TCAD) device simulation and actual device measurement.

Open access: https://doi.org/10.3390/s23218847
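The logarithmic dependence described in the abstract follows from a simple balance argument, sketched below as a schematic version of the authors' analytical model with an assumed exponential overflow characteristic: equating the photocurrent to the overflow current over the buried barrier gives a full-well that grows with the logarithm of the light intensity.

% Schematic balance model (exponential overflow characteristic assumed):
\[
  I_{\mathrm{of}}(Q) \;=\; I_0 \exp\!\left(\frac{Q - Q_{\mathrm{eq}}}{Q_s}\right),
  \qquad
  I_{\mathrm{ph}} \;=\; I_{\mathrm{of}}(Q_{\mathrm{FWC}})
  \;\Rightarrow\;
  Q_{\mathrm{FWC}} \;=\; Q_{\mathrm{eq}} + Q_s \ln\!\frac{I_{\mathrm{ph}}}{I_0},
\]
% i.e. under bright, steady illumination the PD full-well capacity increases
% logarithmically with the photocurrent, while during a dark holding period the
% stored charge decays back toward the equilibrium value Q_eq through the same
% overflow path.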

Figure 1. Measured dynamic behaviors of PPD.

Figure 2. Pixel schematic and pulse timing for characterization.

Figure 3. PD cross-section and potential of the buried overflow path.

Figure 4. Potential and charge distribution changes from PD reset to PD saturation.

Figure 5. Simple PD model for theoretical analysis.
Figure 6. A simple model of dynamic behavior from PD reset to PD saturation under static light condition.

Figure 7. Potential and charge distribution changes from PD saturation to equilibrium PDFWC.

Figure 8. A simple model of PD charge reduction during charge holding operation with pulse light.
Figure 9. Chip micrograph and specifications of our developed stacked 3Q-DPS [7,8,9].


Figure 10. Relation between ∆Vb and Iof with static TCAD simulation.
Figure 12. PDFWC under various light intensity conditions.
Figure 13. PDFWC with long charge holding times.
Figure 14. TCAD simulation results of equilibrium PDFWC potential.


Go to the original article...

Sony announces full-frame global shutter camera

Image Sensors World        Go to the original article...

Link: https://www.sony.com/lr/electronics/interchangeable-lens-cameras/ilce-9m3

Sony recently announced a full-frame global shutter camera which was featured in several press articles below:


PetaPixel https://petapixel.com/2023/11/07/sony-announces-a9-iii-worlds-first-global-sensor-full-frame-camera/

DPReview https://www.dpreview.com/news/7271416294/sony-announces-a9-iii-world-s-first-full-frame-global-shutter-camera

The Verge https://www.theverge.com/2023/11/7/23950504/sony-a9-iii-mirrorless-camera-global-shutter-price-release


From Sony's official webpage:

[This camera uses the] Newly developed full-frame stacked 24.6 MP Exmor RS™ image sensor with global shutter [...] a stacked CMOS architecture and integral memory [...] advanced A/D conversion enable high-speed processing to proceed with minimal delay. [AI features are implemented using the] BIONZ XR™ processing engine. With up to eight times more processing power than previous versions, the BIONZ XR image processing engine minimises processing latency [...] It's able to process the high volume of data generated by the newly developed Exmor RS image sensor in real-time, even while shooting continuous bursts at up to 120 fps, and it can capture high-quality 14-bit RAW images in all still shooting modes. [...] [The] α9 III can use subject form data to accurately recognise movement. Human pose estimation technology recognises not just eyes but also body and head position with high precision. 

 


 

Go to the original article...

2024 International SPAD Sensor Workshop Submission Deadline Approaching!

Image Sensors World        Go to the original article...

The deadline for the 2024 ISSW, December 8, 2023, is fast approaching! The paper submission portal is now open!

The 2024 International SPAD Sensor Workshop will be held from 4-6 June 2024 in Trento, Italy.

Paper submission

Workshop papers must be submitted online on Microsoft CMT. Click here to be redirected to the submission website. You may need to register first, then search for the "2024 International SPAD Sensor Workshop" within the list of conferences using the dedicated search bar.

Paper format

Kindly take note that the ISSW employs a single-stage submission process, necessitating the submission of camera-ready papers. Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of images. The submission must include the authors' name(s) and affiliation, mailing address, telephone, and email address. The formatting can adhere to either a style that integrates text and figures, akin to the standard IEEE format, or a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples illustrating these formats can be accessed in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.

Papers will be considered on the basis of originality and quality. High quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee. Accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed. Authors will be notified of the acceptance of their abstract & posters at the latest by Wednesday Jan 31st, 2024.
 
Poster submission 

In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics. If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.

Go to the original article...

Detecting hidden defects using a single-pixel THz camera

Image Sensors World        Go to the original article...

 

Li et al. present a new THz imaging technique for defect detection in a recent paper in the journal Nature Communications. The paper is titled "Rapid sensing of hidden objects and defects using a single-pixel diffractive terahertz sensor".

Abstract: Terahertz waves offer advantages for nondestructive detection of hidden objects/defects in materials, as they can penetrate most optically-opaque materials. However, existing terahertz inspection systems face throughput and accuracy restrictions due to their limited imaging speed and resolution. Furthermore, machine-vision-based systems using large-pixel-count imaging encounter bottlenecks due to their data storage, transmission and processing requirements. Here, we report a diffractive sensor that rapidly detects hidden defects/objects within a 3D sample using a single-pixel terahertz detector, eliminating sample scanning or image formation/processing. Leveraging deep-learning-optimized diffractive layers, this diffractive sensor can all-optically probe the 3D structural information of samples by outputting a spectrum, directly indicating the presence/absence of hidden structures or defects. We experimentally validated this framework using a single-pixel terahertz time-domain spectroscopy set-up and 3D-printed diffractive layers, successfully detecting unknown hidden defects inside silicon samples. This technique is valuable for applications including security screening, biomedical sensing and industrial quality control. 

Paper (open access): https://www.nature.com/articles/s41467-023-42554-2

News coverage: https://phys.org/news/2023-11-hidden-defects-materials-single-pixel-terahertz.html
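Because the diffractive sensor reduces each inspection to a single output spectrum, the downstream decision can be as simple as comparing that spectrum with reference spectra recorded for defect-free and defective samples. The sketch below illustrates such a nearest-reference (correlation) rule with made-up variable names; it is not the authors' actual decision criterion.

import numpy as np

def classify_spectrum(measured, ref_ok, ref_defect):
    """Toy decision rule: normalize the single-pixel output spectrum and pick
    the reference (defect-free vs. defective) it correlates with best.
    All inputs are 1D arrays sampled on the same frequency grid."""
    def norm(s):
        s = np.asarray(s, dtype=float)
        return (s - s.mean()) / (s.std() + 1e-12)
    m = norm(measured)
    score_ok = float(np.dot(m, norm(ref_ok))) / len(m)
    score_defect = float(np.dot(m, norm(ref_defect))) / len(m)
    return ("defect" if score_defect > score_ok else "ok"), score_ok, score_defect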

In the realm of engineering and material science, detecting hidden structures or defects within materials is crucial. Traditional terahertz imaging systems, which rely on the unique property of terahertz waves to penetrate visibly opaque materials, have been developed to reveal the internal structures of various materials of interest.


This capability provides unprecedented advantages in numerous applications for industrial quality control, security screening, biomedicine, and defense. However, most existing terahertz imaging systems have limited throughput and bulky setups, and they need raster scanning to acquire images of the hidden features.


To change this paradigm, researchers at UCLA Samueli School of Engineering and the California NanoSystems Institute developed a unique terahertz sensor that can rapidly detect hidden defects or objects within a target sample volume using a single-pixel spectroscopic terahertz detector.
Instead of the traditional point-by-point scanning and digital image formation-based methods, this sensor inspects the volume of the test sample illuminated with terahertz radiation in a single snapshot, without forming or digitally processing an image of the sample.


Developed by a team led by Dr. Aydogan Ozcan, the Chancellor's Professor of Electrical & Computer Engineering, and Dr. Mona Jarrahi, the Northrop Grumman Endowed Chair at UCLA, the sensor serves as an all-optical processor, adept at searching for and classifying unexpected sources of waves caused by diffraction from hidden defects. The paper is published in the journal Nature Communications.


"It is a shift in how we view and harness terahertz imaging and sensing as we move away from traditional methods toward more efficient, AI-driven, all-optical sensing systems," said Dr. Ozcan, who is also the Associate Director of the California NanoSystems Institute at UCLA.


This new sensor comprises a series of diffractive layers, automatically optimized using deep learning algorithms. Once trained, these layers are transformed into a physical prototype using additive manufacturing approaches such as 3D printing. This allows the system to perform all-optical processing without the burdensome need for raster scanning or digital image capture/processing.


"It is like the sensor has its own built-in intelligence," said Dr. Ozcan, drawing parallels with their previous AI-designed optical neural networks. "Our design comprises several diffractive layers that modify the input terahertz spectrum depending on the presence or absence of hidden structures or defects within materials under test. Think of it as giving our sensor the capability to 'sense and respond' based on what it 'sees' at the speed of light."


To demonstrate their novel concept, the UCLA team fabricated a diffractive terahertz sensor using 3D printing and successfully detected hidden defects in silicon samples. These samples consisted of stacked wafers, with one layer containing defects and the other concealing them. The smart system accurately revealed the presence of unknown hidden defects with various shapes and positions.
The team believes their diffractive defect sensor framework can also work across other wavelengths, such as infrared and X-rays. This versatility heralds a plethora of applications, from manufacturing quality control to security screening and even cultural heritage preservation.


The simplicity, high throughput, and cost-effectiveness of this non-imaging approach promise transformative advances in applications where speed, efficiency, and precision are paramount.

Go to the original article...

A 400 kilopixel resolution superconducting camera

Image Sensors World        Go to the original article...

Oripov et al. from NIST and JPL recently published a paper titled "A superconducting nanowire single-photon camera with 400,000 pixels" in Nature.

Abstract: For the past 50 years, superconducting detectors have offered exceptional sensitivity and speed for detecting faint electromagnetic signals in a wide range of applications. These detectors operate at very low temperatures and generate a minimum of excess noise, making them ideal for testing the non-local nature of reality, investigating dark matter, mapping the early universe and performing quantum computation and communication. Despite their appealing properties, however, there are at present no large-scale superconducting cameras—even the largest demonstrations have never exceeded 20,000 pixels. This is especially true for superconducting nanowire single-photon detectors (SNSPDs). These detectors have been demonstrated with system detection efficiencies of 98.0%, sub-3-ps timing jitter, sensitivity from the ultraviolet to the mid-infrared and microhertz dark-count rates, but have never achieved an array size larger than a kilopixel. Here we report on the development of a 400,000-pixel SNSPD camera, a factor of 400 improvement over the state of the art. The array spanned an area of 4 × 2.5 mm with 5 × 5-μm resolution, reached unity quantum efficiency at wavelengths of 370 nm and 635 nm, counted at a rate of 1.1 × 10^5 counts per second (cps) and had a dark-count rate of 1.0 × 10^−4 cps per detector (corresponding to 0.13 cps over the whole array). The imaging area contains no ancillary circuitry and the architecture is scalable well beyond the present demonstration, paving the way for large-format superconducting cameras with near-unity detection efficiencies across a wide range of the electromagnetic spectrum.

Link: https://www.nature.com/articles/s41586-023-06550-2

a, Imaging at 370 nm, with raw time-delay data from the buses shown as individual dots in red and binned 2D histogram data shown in black and white. b, Count rate as a function of bias current for various wavelengths of light as well as dark counts. c, False-colour scanning electron micrograph of the lower-right corner of the array, highlighting the interleaved row and column detectors. Lower-left inset, schematic diagram showing detector-to-bus connectivity. Lower-right inset, close-up showing 1.1-μm detector width and effective 5 × 5-μm pixel size. Scale bar, 5 μm.


 

a, Circuit diagram of a bus and one section of 50 detectors with ancillary readout components. SNSPDs are shown in the grey boxes and all other components are placed outside the imaging area. A photon that arrives at time t0 has its location determined by a time-of-flight readout process based on the time-of-arrival difference t2 − t1. b, Oscilloscope traces from a photon detection showing the arrival of positive (green) and negative (red) pulses at times t1 and t2, respectively.
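The time-of-flight readout in panel (a) can be mimicked with a few lines of arithmetic (illustrative values only; the real calibration is more involved): each detector sits at a known position along a bus, so the difference between the arrival times of the two counter-propagating pulses maps linearly to a detector index, and combining the indices recovered on a row bus and a column bus gives the pixel coordinate.

def detector_index(t1_ns, t2_ns, delay_per_detector_ns=0.05, n_detectors=400):
    """Map the arrival-time difference of the two counter-propagating bus pulses
    to a detector index along the bus (illustrative calibration). If detector k
    adds delay d, then t1 - t2 = (2k - (n - 1)) * d, so k follows linearly."""
    dt = t1_ns - t2_ns
    k = round(dt / (2 * delay_per_detector_ns) + (n_detectors - 1) / 2)
    return min(max(int(k), 0), n_detectors - 1)

# Pixel coordinate = (column-bus index, row-bus index):
# x = detector_index(t1_col, t2_col); y = detector_index(t1_row, t2_row)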

a, Histogram of the pulse differential time delays Δt = t1 − t2 from the north bus during flood illumination with a Gaussian spot. All 400 detectors resolved clearly, with gaps indicating detectors that were pruned. Inset, zoomed-in region showing that counts from adjacent detectors are easily resolvable and no counts were generated by a pruned detector. b, Plot of raw trow and tcol time delays when flood illuminated at 370 nm. c, Zoomed-in subsection of the array with 25 × 25 detectors. d, Histogram of time delays for a 2 × 2 detector subset with 10-ps bin size showing clear distinguishability between adjacent detectors.

a, Count rate versus optical attenuation for a section of detectors biased at 45 μA per detector. The dashed purple line shows a slope of 1, with deviations from that line at higher rates indicating blocking loss. b, System jitter of a 50-detector section. Detection delay was calculated as the time elapsed between the optical pulse being generated and the detection event being read out.



News coverage: https://www.universetoday.com/163959/a-new-superconducting-camera-can-resolve-single-photons/


A New Superconducting Camera can Resolve Single Photons

Researchers have built a superconducting camera with 400,000 pixels, which is so sensitive it can detect single photons. It comprises a grid of superconducting wires with no resistance until a photon strikes one or more wires. This shuts down the superconductivity in the grid, sending a signal. By combining the locations and intensities of the signals, the camera generates an image.


The researchers who built the camera, from the US National Institute of Standards and Technology (NIST) say the architecture is scalable, and so this current iteration paves the way for even larger-format superconducting cameras that could make detections across a wide range of the electromagnetic spectrum. This would be ideal for astronomical ventures such as imaging faint galaxies or extrasolar planets, as well as biomedical research using near-infrared light to peer into human tissue.


These devices have been possible for decades but with a fraction of the pixel count. This new version has 400 times more pixels than any other device of its type. Previous versions have not been very practical because of the low-quality output.

In the past, it was found to be difficult-to-impossible to chill the camera’s superconducting components – which would be hundreds of thousands of wires – by connecting them each to a cooling system.
According to NIST, researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by constructing the wires to form multiple rows and columns, like those in a tic-tac-toe game, where each intersection point is a pixel. Then they combined the signals from many pixels onto just a few room-temperature readout nanowires.


The detectors can discern differences in the arrival time of signals as short as 50 trillionths of a second. They can also detect up to 100,000 photons a second striking the grid.
McCaughan said the readout technology can easily be scaled up for even larger cameras, and predicted that a superconducting single-photon camera with tens or hundreds of millions of pixels could soon be available.


In the meantime, the team plans to improve the sensitivity of their prototype camera so that it can capture virtually every incoming photon. That will enable the camera to tackle quantum imaging techniques that could be a game changer for many fields, including astronomy and medical imaging.

Go to the original article...
