3D cameras at CES 2024: Orbbec and MagikEye

Image Sensors World        Go to the original article...

Announcements below from (1) Orbbec and (2) MagikEye about their upcoming CES demos.

Orbbec releases Persee N1 camera-computer kit for 3D vision enthusiasts, powered by the NVIDIA Jetson platform

Orbbec’s feature-rich RGB-D camera-computer is a ready-to-use, out-of-the-box solution for 3D vision application developers and experimenters

Troy, Mich., 13 December 2023 — Orbbec, an industry leader dedicated to 3D vision systems, has developed the Persee N1, an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform, equipped with industry-standard interfaces for the most useful accessories and data connections. Developers using the newly launched camera-computer will also enjoy the benefits of the Ubuntu OS and OpenCV libraries. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner.

Persee N1 delivers highly accurate and reliable data for indoor/semi-outdoor operation, ideally suited for healthtech, dimensioning, interactive gaming, retail and robotics applications, and features:

  • An easy setup process using the Orbbec SDK and Ubuntu-based software environment.
  • Industry-proven Gemini 2 camera, based on active stereo IR technology, which includes Orbbec’s custom ASIC for high-quality, in-camera depth processing.
  • The powerful NVIDIA Jetson platform for edge AI and robotics.
  • HDMI and USB ports for easy connections to a monitor and keyboard.
  • Multiple USB ports for data and a POE (Power over Ethernet) port for combined data and power connections.
  • Expandable storage with MicroSD and M.2 slots.
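To give a flavor of what the kit is used for, here is a minimal sketch that back-projects a depth frame into a 3D point cloud with the pinhole camera model. The intrinsics and the synthetic frame below are hypothetical stand-ins; on the Persee N1 the frame would come from the Orbbec SDK, and the sketch assumes depth reported in millimetres.

```python
import numpy as np

def depth_to_pointcloud(depth_mm, fx, fy, cx, cy):
    """Back-project a depth map (millimetres) into an Nx3 point cloud in
    camera coordinates using the pinhole model."""
    h, w = depth_mm.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_mm.astype(np.float64) / 1000.0        # mm -> metres
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                       # drop zero (invalid) depth

# Synthetic 4x4 frame at a constant 1 m, with hypothetical intrinsics
depth = np.full((4, 4), 1000, dtype=np.uint16)
cloud = depth_to_pointcloud(depth, fx=500.0, fy=500.0, cx=2.0, cy=2.0)
print(cloud.shape)   # (16, 3)
```

A real capture loop would simply feed SDK frames into the same back-projection.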

“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” said Amit Banerjee, Head of Platform and Partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale cloud-based commercial deployments.”

The new camera-computer also features official support for the widely used Open Source Computer Vision (OpenCV) library. OpenCV is used in an estimated 89% of all embedded vision projects according to industry reports. This integration marks the beginning of a deeper collaboration between Orbbec and OpenCV, which is operated by the non-profit Open Source Vision Foundation.

“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mallick, CEO of OpenCV. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”

MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES

From Businesswire.

December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. In line with the company's mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.

The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” ARM M0 processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.
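MagikEye does not disclose the math behind ILT, but projector-camera depth sensors in general recover depth by triangulation; the sketch below shows the generic relationship (hypothetical focal length and baseline, explicitly not MagikEye's algorithm):

```python
# Generic projector-camera triangulation (NOT MagikEye's proprietary ILT math):
# a projected dot observed at baseline B shifts across the sensor in
# proportion to inverse depth, so depth follows from the measured disparity.
F_PX = 600.0     # hypothetical focal length, pixels
B_M = 0.0625     # hypothetical projector-camera baseline, metres

def depth_from_disparity(disparity_px):
    """z = f * B / d: a larger pixel shift means a closer surface."""
    return F_PX * B_M / disparity_px

print(depth_from_disparity(37.5))   # 1.0 m
```

The appeal of running such math "bare-metal" is that it is a handful of multiplies and divides per detected dot, well within reach of a small microcontroller core.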

Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”

Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.


imec paper at IEDM 2023 on a waveguide design for color imaging


News article: https://optics.org/news/14/12/11

imec presents new way to render colors with sub-micron pixel sizes

This week at the International Electron Devices Meeting, in San Francisco, CA, (IEEE IEDM 2023), imec, a Belgium-based research and innovation hub in nanoelectronics and digital technologies, has demonstrated a new method for “faithfully splitting colors with sub-micron resolution using standard back-end-of-line processing on 300mm wafers”.

imec says that the technology is poised to elevate high-end camera performance, delivering a higher signal-to-noise ratio and enhanced color quality with unprecedented spatial resolution.

Designing next-generation CMOS imagers requires striking a balance between collecting all incoming photons, achieving resolution down to the diffraction limit, and accurately recording the color of the light.

Traditional image sensors with color filters on the pixels are still limited in combining all three requirements. While higher pixel densities would increase the overall image resolution, smaller pixels capture even less light and are prone to artifacts that result from interpolating color values from neighboring pixels.

Even though diffraction-based color splitters represent a leap forward in increasing color sensitivity and capturing light, they are still unable to improve image resolution.

'Fundamentally new' approach
imec is now proposing a fundamentally new way of splitting colors at sub-micron pixel sizes (i.e., beyond the fundamental Abbe diffraction limit) using standard back-end processing. The approach is said to “tick all the boxes” for next-generation imagers by collecting nearly all photons, increasing resolution through very small pixels, and rendering colors faithfully.

To achieve this, imec researchers built an array of vertical Si3N4 multimode waveguides in an SiO2 matrix. The waveguides have a tapered, diffraction-limited-sized input (e.g., 800 × 800 nm²) to collect all the incident light.

“In each waveguide, incident photons are exciting both symmetric and asymmetric modes, which propagate through the waveguide differently, leading to a unique “beating” pattern between the two modes for a given frequency. This beating pattern enables a spatial separation at the end of the waveguides corresponding to a specific color,” said Prof. Jan Genoe, scientific director at imec.
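The mode beating Prof. Genoe describes can be summarized in one formula (generic notation, not taken from the paper): two guided modes with propagation constants β_s and β_a accumulate a relative phase along the waveguide, so the lateral intensity pattern repeats with a beat length

```latex
% Beat length of the symmetric/asymmetric mode pair;
% n_s, n_a are the modes' effective indices
L_b(\lambda) \;=\; \frac{2\pi}{\beta_s(\lambda)-\beta_a(\lambda)}
             \;=\; \frac{\lambda}{n_s(\lambda)-n_a(\lambda)}
```

Because L_b varies with wavelength, a fixed waveguide length leaves each color at a different phase of the beat, so different colors exit at spatially separated positions at the waveguide output.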

Cost-efficient structures
The total output light from each waveguide is estimated to reach over 90% within the range of human color perception (wavelength range 400-700nm), making it superior to color filters, says imec.

Robert Gehlhaar, principal member of technical staff at imec, said, “Because this technique is compatible with standard 300-mm processing, the splitters can be produced cost-efficiently. This enables further scaling of high-resolution imagers, with the ultimate goal to detect every incident photon and its properties.

“Our ambition is to become the future standard for color imaging with diffraction-limited resolution. We are welcoming industry partners to join us on this path towards full camera demonstration.”


RGB camera measurement (100x magnification) of an array of waveguides with alternating 5 left-side-open-aperture and 5 right-side-open-aperture (the others being occluded by TiN) waveguides at a 1-micron pitch. Yellow light exits at the right part of the waveguide, whereas the blue exits at the left. The wafer is illuminated using plane wave white light. Credit: imec.

3D visualization (left) and TEM cross-section (right) of the vertical waveguide array for color splitting in BY-CR imaging. Credit: imec.


OmniVision 15MP/1MP hybrid RGB/event vision sensor (ISSCC 2023)


Guo et al. from OmniVision presented a hybrid RGB/event vision sensor in a paper titled "A 3-Wafer-Stacked Hybrid 15MPixel CIS + 1 MPixel EVS with 4.6GEvent/s Readout, In-Pixel TDC and On-Chip ISP and ESP Function" at ISSCC 2023.

Abstract: Event Vision Sensors (EVS) determine, at pixel level, whether a temporal contrast change beyond a predefined threshold is detected [1–6]. Compared to CMOS image sensors (CIS), this new modality inherently provides data-compression functionality and hence, enables high-speed, low-latency data capture while operating at low power. Numerous applications such as object tracking, 3D detection, or slow-motion are being researched based on EVS [1]. Temporal contrast detection is a relative measurement and is encoded by so-called “events” being further characterized through x/y pixel location, event time-stamp (t) and the polarity (p), indicating whether an increase or decrease in illuminance has been detected.
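The event definition in the abstract, a temporal contrast change beyond a predefined threshold with ON/OFF polarity, can be sketched as a per-pixel simulation; the threshold value here is illustrative, not taken from the paper:

```python
import numpy as np

def generate_events(intensity, times, theta=0.22):
    """Emit (timestamp, polarity) events whenever log intensity moves past
    +/- theta relative to the level stored at the last event, mimicking the
    per-pixel temporal-contrast detection of an EVS."""
    events = []
    ref = np.log(intensity[0])
    for t, i in zip(times[1:], intensity[1:]):
        logi = np.log(i)
        while logi - ref >= theta:   # brightness increase -> ON event
            ref += theta
            events.append((t, +1))
        while ref - logi >= theta:   # brightness decrease -> OFF event
            ref -= theta
            events.append((t, -1))
    return events

# A steadily brightening pixel produces only ON events
t = np.linspace(0.0, 1.0, 11)
ev = generate_events(np.exp(np.linspace(0.0, 1.0, 11)), t)
print(len(ev), all(p == 1 for _, p in ev))   # 4 True
```

A static pixel emits nothing at all, which is the data-compression property the abstract highlights.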


Schematic of dual wafer 4x4 macro-pixel and peripheral readout circuitry on third wafer.

EVS readout block diagram and asynchronous scanner with hierarchical skip-logic.
Event-signal processor (ESP) block diagram and MIPI interface.

Sensor output illustrating hybrid CIS and EVS data capture. 10kfps slow-motion images of an exploding water balloon from 1080p, 120fps + event data.
Characterization results: Contrast response, nominal contrast, latency and noise vs. illuminance.

Technology trend and chip micrograph.


X-FAB introduces NIR SPADs on their 180nm process


X-FAB Introduces New Generation of Enhanced Performance SPAD Devices focused on Near-Infrared Applications

Link: https://www.xfab.com/news/details/article/x-fab-introduces-new-generation-of-enhanced-performance-spad-devices-focused-on-near-infrared-applications?trk=feed_main-feed-card_feed-article-content

NEWS – Tessenderlo, Belgium – Nov 16, 2023
X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has introduced a specific near-infrared version to its single-photon avalanche diode (SPAD) device portfolio.

Like the previous SPAD generation, which launched in 2021, this version is based on the company’s 180nm XH018 process. The inclusion of an additional step to the fabrication workflow has resulted in significant increases in signal while still retaining the same low noise floor, without negatively affecting parameters such as dark count rate, afterpulsing and breakdown voltage.

Through this latest variant, X-FAB is successfully expanding the scope of its SPAD offering, improving its ability to address numerous emerging applications where NIR operation proves critically important. Among these are time-of-flight sensing in industrial applications, vehicle LiDAR imaging, biophotonics and FLIM research work, plus a variety of different medical-related activities. Sensitivity is boosted over the whole near-infrared (NIR) band, with respective improvements of 40% and 35% at the key wavelengths of 850nm and 905nm.

Using the new SPAD devices will reduce the complexity of visible light filtering, since UV and visible light is already suppressed. Filter designs will consequently be simpler, with fewer component parts involved. Furthermore, having exactly the same footprint dimensions as the previous SPAD generation provides a straightforward upgrade route. Customers’ existing designs can gain major performance benefits by just swapping in the new devices.

X-FAB has compiled a comprehensive PDK for the near-infrared SPAD variant, with extensive documentation and application notes featured. Models for optical and electrical simulation will provide engineers the additional design support they need, enabling them to integrate these devices into their circuitry within a short time period.

As Heming Wei, Product Marketing Manager, Sensors at X-FAB, explains: “Our SPAD technology has already gained a very positive market response, seeing uptake with a multitude of customers. Thanks to continuing innovation at the process level, we have now been able to develop a solution that will secure business for us within various NIR applications, across automotive, healthcare and life sciences.”

The new NIR-enhanced SPAD is available now. Engineers can start their designs with the new device immediately.


Lecture by Dr. Tobi Delbruck on the history of silicon retina and event cameras


Silicon Retina: History, Live Demo, and Whiteboard Pixel Design


Rockwood Memorial Lecture 2023: Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich

Event Camera Silicon Retina; History, Live Demo, and Whiteboard Circuit Design
Rockwood Memorial Lecture 2023 (11/20/23)
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation, https://inc.ucsd.edu

Abstract: Event cameras electronically model the spike-based sparse output of biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture will have 3 parts: 1. a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; 2. a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; 3. (targeted at neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion about event camera pixel design at the transistor level, highlighting the design aspects of event camera pixels which endow them with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.


3D stacked BSI SPAD sensor with on-chip lens


Fujisaki et al. from Sony Semiconductor (Japan) presented a paper titled "A back-illuminated 6 μm SPAD depth sensor with PDE 36.5% at 940 nm via combination of dual diffraction structure and 2×2 on-chip lens" at the 2023 IEEE Symposium on VLSI Technology and Circuits.

Abstract: We present a back-illuminated 3D-stacked 6 μm single-photon avalanche diode (SPAD) sensor with very high photon detection efficiency (PDE) performance. To enhance PDE, a dual diffraction structure was combined with 2×2 on-chip lens (OCL) for the first time. A dual diffraction structure comprises a pyramid surface for diffraction (PSD) and periodic uneven structures by shallow trench for diffraction formed on the Si surface of light-facing and opposite sides, respectively. Additionally, PSD pitch and SiO2 film thickness buried in full trench isolation were optimized. Consequently, a PDE of 36.5% was achieved at λ = 940 nm, the world’s highest value. Owing to shield ring contact, crosstalk was reduced by about half compared to a conventionally plugged one.

Schematics of Gapless and 2x2 on-chip lens.

Cross sectional SPAD image of (a) our previous work and (b) this work.


Early announcement: Single Photon Workshop 2024


Single Photon Workshop 2024
EICC Edinburgh 18-22 Nov 2024

The 11th Single Photon Workshop (SPW) 2024 will be held 18-22 November 2024, hosted at the Edinburgh International Conference Centre.

SPW is the largest conference in the world dedicated to single-photon generation and detection technology and applications. The biennial international conference brings together a broad range of experts across academia, industry and government bodies with interests in single-photon sources, single-photon detectors, photon entanglement, photonic quantum technologies and their use in scientific and industrial applications. It is an exciting opportunity for those interested in these technologies to learn about the state of the art and to foster continuing partnerships with others seeking to advance the capabilities of such technologies.

In tandem with the scientific programme, SPW 2024 will include a major industry exhibition and networking events.
Please register your interest at www.spw2024.org
Official registration will open in January 2024.
The 2024 workshop is being jointly organized by Heriot-Watt University and University of Glasgow.


IISW2023 special issue paper: Small-pitch InGaAs photodiodes


In a new paper titled "Design and Characterization of 5 μm Pitch InGaAs Photodiodes Using In Situ Doping and Shallow Mesa Architecture for SWIR Sensing" Jules Tillement et al. from STMicroelectronics, U. Grenoble and CNRS Grenoble write:

Abstract: This paper presents the complete design, fabrication, and characterization of a shallow-mesa photodiode for short-wave infra-red (SWIR) sensing. We characterized and demonstrated photodiodes collecting 1.55 μm photons with a pixel pitch as small as 3 μm. For a 5 μm pixel pitch photodiode, we measured the external quantum efficiency reaching as high as 54%. With substrate removal and an ideal anti-reflective coating, we estimated the internal quantum efficiency as achieving 77% at 1.55 μm. The best measured dark current density reached 5 nA/cm2 at −0.1 V and at 23 °C. The main contributors responsible for this dark current were investigated through the study of its evolution with temperature. We also highlight the importance of passivation with a perimetric contribution analysis and the correlation between MIS capacitance characterization and dark current performance.

Full paper (open access): https://www.mdpi.com/1424-8220/23/22/9219
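The temperature study mentioned in the abstract relies on a standard diagnostic: diffusion-limited dark current scales roughly as exp(−Eg/kT) while generation-recombination current scales as exp(−Eg/2kT), so the Arrhenius slope reveals which mechanism dominates. A sketch on synthetic data (the bandgap value is an approximation, not taken from the paper):

```python
import numpy as np

K_B = 8.617e-5   # Boltzmann constant, eV/K
E_G = 0.75       # approximate In0.53Ga0.47As bandgap, eV (assumed, not from the paper)

def arrhenius_slope(temps_k, currents):
    """Activation energy (eV) from a linear fit of ln(I) against 1/(kT)."""
    x = 1.0 / (K_B * np.asarray(temps_k))
    slope, _ = np.polyfit(x, np.log(np.asarray(currents)), 1)
    return -slope

# Synthetic dark currents for the two limiting mechanisms
T = np.linspace(280, 340, 7)                 # K
i_diff = np.exp(-E_G / (K_B * T))            # diffusion-limited: Ea ~ Eg
i_gr = np.exp(-E_G / (2.0 * K_B * T))        # generation-recombination: Ea ~ Eg/2

print(arrhenius_slope(T, i_diff))   # ~0.75 eV -> diffusion dominates
print(arrhenius_slope(T, i_gr))     # ~0.375 eV -> G-R dominates
```

Applied to measured I(T) data, an extracted activation energy near Eg or Eg/2 points to the dominant dark-current source, which is what Figure 11 of the paper illustrates.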

Figure 1. Schematic cross section of the photodiode after different processes. (a) Photodiode fabricated by Zn diffusion or Be implantation; (b) photodiode fabrication using shallow mesa technique.

Figure 2. Band diagram of simulated structure at equilibrium with the photogenerated pair schematically represented with their path of collection.

Figure 3. Top zoom of the structure—Impact of the N-InP (a) thickness and (b) doping on the band diagram at equilibrium.

Figure 4. Simulated dark current with TCAD Synopsys tools [28]. (a) Shows evolution of the dark current when the InP SRH lifetime is modulated; (b) evolution of the dark current when the InGaAs SRH lifetime is modulated.

Figure 5. Impact of the doping concentration of the InP barrier on the carrier collection.

Figure 6. Simplified and schematic process flow of the shallow mesa-type process. (a) The full stack; (b) the definition of the pixel by etching the P layer and (c) the encapsulation and fabrication of contacts.

Figure 7. SEM views after the whole process. (a) A cross-section of the top stack where the P layer is etched and (b) a top view of the different configuration of the test structures (single in-array diode is not shown on this SEM view).

Figure 8. Schematic cross section of the structure with its potential sources of the dark current.

Figure 9. Dark current measurement on 15 μm pitch in a matrix like environment. The curve is the median of more than 100 single in-array diodes measured.

Figure 10. Dark current measurement of the ten-by-ten diode bundle. This measurement is from process B.

Figure 11. Evolution of the dark current with temperature at −0.1 V. The solid lines show the theoretical evolution of the current limited by diffusion (light blue line) and by generation recombination (purple line). The temperature measurement is performed on a bundle of ten-by-ten 5 μm pixel pitch diodes.

Figure 12. Perimetric and bulk contribution to the global dark current from measurements performed on diodes with diameter ranging from 10 to 120 μm.

Figure 13. (a) Capacitance measurement on metal–insulator–semiconductor structure. The measurement starts at 0 V then ramps to +40 V then goes to −40 V and ends at +40 V. (b) A cross section of the MIS structure. The MIS is a 300 μm diameter circle.

Figure 14. Dark current performances compared to the hysteresis measured on several different wafers.

Figure 15. Dark current measurement of a ten-by-ten bundle of 5 μm pixel pitch photodiode. The measurements are conducted at 23 °C.

Figure 16. (a) Schematic test structure for QE measurement; (b) the results of the 3D FDTD simulations conducted with Lumerical to estimate the internal QE of the photodiode.

Figure 18. Current noise for a ten-by-ten 5 μm pixel pitch photodiode bundle measured at −0.1 V.

Figure 19. Median current measurement for bundles of one hundred 3 μm pixel pitch photodiodes under dark and SWIR illumination conditions. The dark blue line represents the dark current and the pink line is the photocurrent under 1.55 μm illumination.

Figure 20. Comparison of our work in blue versus the state of the art for the fabrication of InGaAs photodiodes.


Sony announces new 5MP SWIR sensor IMX992


Product page: https://www.sony-semicon.com/en/products/is/industry/swir/imx992-993.html

Press release: https://www.sony-semicon.com/en/news/2023/2023112901.html

Sony Semiconductor Solutions to Release SWIR Image Sensor for Industrial Applications with Industry-Leading 5.32 Effective Megapixels Expanding the lineup for delivering high-resolution and low-light performance 

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX992 short-wavelength infrared (SWIR) image sensor for industrial equipment, with the industry’s highest pixel count, at 5.32 effective megapixels.

The new sensor uses SSS’s proprietary Cu-Cu connection to achieve the industry’s smallest pixel size of 3.45 μm among SWIR image sensors. It also features an optimized pixel structure for efficiently capturing light, enabling high-definition imaging across a broad spectrum ranging from the visible to invisible short-wavelength infrared regions (wavelength: 0.4 to 1.7 μm). Furthermore, new shooting modes deliver high-quality images with significantly reduced noise in dark environments compared to conventional products.

In addition to this product, SSS will also release the IMX993 with a pixel size of 3.45 μm and an effective pixel count of 3.21 megapixels to further expand its SWIR image sensor lineup. These new SWIR image sensors with high pixel counts and high sensitivity will help contribute to the evolution of various industrial equipment.

In the industrial equipment domain in recent years, there has been increasing demand for improving productivity and preventing defective products from leaving the plant. In this context, the capacity to sense not only visible light but also light in the invisible band is in demand. SSS’s SWIR image sensors, which are capable of seamless wide spectrum imaging in the visible to invisible short-wavelength infrared range using a single camera, are already being used in various processes such as semiconductor wafer bonding and defect inspection, as well as ingredient and contaminant inspections in food production.

The new sensors enable imaging with higher resolution using pixel miniaturization, while enhancing imaging performance in low-light environments to provide higher quality imaging in inspection and monitoring applications conducted in darker conditions. By making the most of the characteristics of short-wavelength infrared light, whose light reflection and absorption properties are different from those of visible light, these products help to further expand applications in such areas as inspection, recognition and measurement, thereby contributing to improved industrial productivity.

Main Features
* High pixel count made possible by the industry’s smallest pixels at 3.45 μm, delivering high-resolution imaging

A Cu-Cu connection is used between the indium-gallium arsenide (InGaAs) layer that forms the photodiode of the light receiving unit and the silicon (Si) layer that forms the readout circuit. This design allows for a smaller pixel pitch, resulting in the industry’s smallest pixel size of 3.45 μm. This, in turn, helps achieve a compact form factor that still delivers the industry’s highest pixel count of approximately 5.32 effective megapixels on the IMX992, and approximately 3.21 effective megapixels on the IMX993. The higher pixel count enables detection of tiny objects or imaging across a wide range, contributing to significantly improved recognition and measurement precision in various inspections using short-wavelength infrared light.

 Comparison of SWIR images with different resolutions: Lighting wavelength 1550 nm
(Left: Other SSS product, 1.34 effective megapixels; Right: IMX992)

* Low-noise imaging even in dark locations possible by switching the shooting mode

Inclusion of new shooting modes enables low-noise imaging regardless of environmental brightness. In dark environments with limited light, High Conversion Gain (HCG) mode amplifies the signal immediately after light is converted into an electrical signal, before much noise is added, thereby reducing the relative contribution of downstream noise. This minimizes the impact of noise in dark locations, leading to greater recognition precision. In bright environments with plenty of light, on the other hand, Low Conversion Gain (LCG) mode enables imaging that prioritizes dynamic range.
Furthermore, enabling Dual Read Rolling Shutter (DRRS) makes the sensor output two distinct types of images, which are then composited on the camera to acquire an image with significantly reduced noise.
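The HCG/LCG trade-off described above can be seen in a toy SNR model: amplifying the signal before the downstream readout chain is equivalent to shrinking the input-referred read noise, which matters only when photon shot noise is small. The noise figures below are hypothetical, not IMX992 specifications.

```python
import numpy as np

def snr_db(signal_e, read_noise_e):
    """Pixel SNR in dB: photon shot noise sqrt(N) combined in quadrature with
    the input-referred noise of the downstream readout chain."""
    noise_e = np.sqrt(signal_e + read_noise_e ** 2)
    return 20.0 * np.log10(signal_e / noise_e)

# Hypothetical values: HCG modelled as halving the input-referred read noise
LCG_READ_E, HCG_READ_E = 5.0, 2.5
dark, bright = 30.0, 1e5   # electrons collected per pixel

print(snr_db(dark, HCG_READ_E) - snr_db(dark, LCG_READ_E))      # clear gain in the dark
print(snr_db(bright, HCG_READ_E) - snr_db(bright, LCG_READ_E))  # negligible when bright
```

In bright scenes shot noise dominates either way, which is why LCG can instead be spent on dynamic range.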

Image quality and noise comparison in dark location: Lighting wavelength 1450 nm
(Left: Other SSS product, 1.34 effective megapixels; Center: IMX992, HCG mode selected; Right: IMX992, HCG mode selected, DRRS enabled)


* Optimized pixel structure for high-sensitivity imaging across a wide range

SSS’s SWIR image sensors employ a thinner top layer of indium phosphide (InP), which would otherwise absorb visible light, thereby allowing visible light to reach the indium gallium arsenide (InGaAs) layer underneath and delivering high quantum efficiency even at visible wavelengths. The new products deliver even higher quantum efficiency by optimizing the pixel structure, enabling more uniform sensitivity characteristics across a wide wavelength band from 0.4 to 1.7 μm. Minimizing the image quality differences between wavelengths makes it possible to use the image sensor in a variety of industrial applications and contributes to improved reliability in inspection, recognition, and measurement applications.


Product Overview



Prof. Edoardo Charbon’s Talk on IR SPADs for LiDAR & Quantum Imaging



SWIR/NIR SPAD Image Sensors for LIDAR and Quantum Imaging Applications, by Prof. Charbon

In this talk, Prof. Charbon will review the evolution of solid-state photon-counting sensors from avalanche photodiodes (APDs) to silicon photomultipliers (SiPMs) to single-photon avalanche diodes (SPADs). The impact of these sensors on LiDAR has been remarkable; however, more innovations are to come with the continuous advance of integrated SPADs and the introduction of powerful computational imaging techniques directly coupled to SPADs/SiPMs. New technologies, such as 3D-stacking in combination with Ge and InP/InGaAs SPAD sensors, are accelerating the adoption of SWIR/NIR image sensors while enabling new sensing functionalities. Prof. Charbon will conclude the talk with a technological perspective on how all these technologies could come together in low-cost, computation-intensive image sensors for affordable, yet powerful quantum imaging.
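As background for the LiDAR use case in the talk: a direct time-of-flight SPAD system accumulates photon arrival times into a TCSPC histogram and converts the peak bin to distance via d = c·t/2. A toy sketch with a hypothetical 100 ps TDC bin width:

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s
BIN_S = 100e-12     # hypothetical TDC bin width: 100 ps

def distance_from_histogram(hist, bin_s=BIN_S):
    """Direct ToF: round-trip time of the histogram peak bin, then d = c*t/2."""
    t = (np.argmax(hist) + 0.5) * bin_s   # bin centre
    return C * t / 2.0

# Synthetic TCSPC histogram: Poisson background plus a signal peak in bin 333
rng = np.random.default_rng(0)
hist = rng.poisson(2, size=1000)
hist[333] += 500
d = distance_from_histogram(hist)
print(d)   # ~5.0 m
```

Picking the histogram peak is the simplest estimator; the computational imaging techniques mentioned in the talk replace it with far more sophisticated processing, especially at low signal-to-background ratios.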

Edoardo Charbon (SM’00 F’17) received the Diploma from ETH Zurich, the M.S. from the University of California at San Diego, and the Ph.D. from the University of California at Berkeley in 1988, 1991, and 1995, respectively, all in electrical engineering and EECS. He has consulted with numerous organizations, including Bosch, X-Fab, Texas Instruments, Maxim, Sony, Agilent, and the Carlyle Group. He was with Cadence Design Systems from 1995 to 2000, where he was the Architect of the company's initiative on information hiding for intellectual property protection. In 2000, he joined Canesta Inc., as the Chief Architect, where he led the development of wireless 3-D CMOS image sensors.
Since 2002 he has been a member of the faculty of EPFL, where he is a full professor. From 2008 to 2016 he was with Delft University of Technology as Chair of VLSI Design. Dr. Charbon has been the driving force behind the creation of deep-submicron CMOS SPAD technology, which has been mass-produced since 2015 and is present in telemeters, proximity sensors, and medical diagnostics tools. His interests span from 3-D vision, LiDAR, FLIM, FCS, and NIROT to super-resolution microscopy, time-resolved Raman spectroscopy, and cryo-CMOS circuits and systems for quantum computing. He has authored or co-authored over 400 papers and two books, and he holds 24 patents. Dr. Charbon is the recipient of the 2023 IISS Pioneering Achievement Award, a distinguished visiting scholar of the W. M. Keck Institute for Space at Caltech, a fellow of the Kavli Institute of Nanoscience Delft, a distinguished lecturer of the IEEE Photonics Society, and a fellow of the IEEE.


Prophesee event sensor in 2023 VLSI symposium


Schon et al. from Prophesee published a paper titled "A 320 x 320 1/5" BSI-CMOS stacked event sensor for low-power vision applications" at the 2023 Symposium on VLSI Technology and Circuits. The paper presents technical details of their recently announced GenX320 sensor.

Abstract: Event vision sensors acquire sparse data, making them suited for edge vision applications. However, an unconventional data format, non-constant data rates and non-standard interfaces restrain wide adoption. A 320x320, 6.3 μm pixel, BSI stacked event sensor, specifically designed for embedded vision, features multiple data pre-processing, filtering and formatting functions, variable MIPI and CPI interfaces, and a hierarchy of power modes, facilitating operability in power-sensitive vision systems.


ISSCC 2024 Advanced Program Now Available


ISSCC will be held Feb 18-22, 2024 in San Francisco, CA.

Link to advanced program: https://submissions.mirasmart.com/ISSCC2024/PDF/ISSCC2024AdvanceProgram.pdf

There are several papers of interest in Session 6 on Imagers and Ultrasound. 

6.1 12Mb/s 4×4 Ultrasound MIMO Relay with Wireless Power and Communication for Neural Interfaces
E. So, A. Arbabian (Stanford University, Stanford, CA)

6.2 An Ultrasound-Powering TX with a Global Charge-Redistribution Adiabatic Drive Achieving 69% Power Reduction and 53° Maximum Beam Steering Angle for Implantable Applications
M. Gourdouparis1,2, C. Shi1, Y. He1, S. Stanzione1, R. Ukropec3, P. Gijsenbergh3, V. Rochus3, N. Van Helleputte3, W. Serdijn2, Y-H. Liu1,2
 1 imec, Eindhoven, The Netherlands
 2 Delft University of Technology, Delft, The Netherlands
 3 imec, Leuven, Belgium

6.3 Imager with In-Sensor Event Detection and Morphological Transformations with 2.9pJ/pixel×frame Object Segmentation FOM for Always-On Surveillance in 40nm
 J. Vohra, A. Gupta, M. Alioto, National University of Singapore, Singapore, Singapore

6.4 A Resonant High-Voltage Pulser for Battery-Powered Ultrasound Devices
 I. Bellouki1, N. Rozsa1, Z-Y. Chang1, Z. Chen1, M. Tan1,2, M. Pertijs1
 1 Delft University of Technology, Delft, The Netherlands
 2 SonoSilicon, Hangzhou, China

6.5 A 0.5°-Resolution Hybrid Dual-Band Ultrasound Imaging SoC for UAV Applications
 J. Guo1, J. Feng1, S. Chen1, L. Wu1, C-W. Tsai1,2, Y. Huang1, B. Lin1, J. Yoo1,2
 1 National University of Singapore, Singapore, Singapore
 2 The N.1 Institute for Health, Singapore, Singapore

6.6 A 10,000 Inference/s Vision Chip with SPAD Imaging and Reconfigurable Intelligent Spike-Based Vision Processor
 X. Yang*1, F. Lei*1, N. Tian*1, C. Shi2, Z. Wang1, S. Yu1, R. Dou1, P. Feng1, N. Qi1, J. Liu1, N. Wu1, L. Liu1
 1 Chinese Academy of Sciences, Beijing, China
 2 Chongqing University, Chongqing, China
 *Equally Credited Authors (ECAs)

6.7 A 160×120 Flash LiDAR Sensor with Fully Analog-Assisted In-Pixel Histogramming TDC Based on Self-Referenced SAR ADC
 S-H. Han1, S. Park1, J-H. Chun2,3, J. Choi2,3, S-J. Kim1
 1 Ulsan National Institute of Science and Technology, Ulsan, Korea
 2 Sungkyunkwan University, Suwon, Korea
 3 SolidVue, Seongnam, Korea

6.8 A 256×192-Pixel 30fps Automotive Direct Time-of-Flight LiDAR Using 8× Current-Integrating-Based TIA, Hybrid Pulse Position/Width Converter, and Intensity/CNN-Guided 3D Inpainting
 C. Zou1, Y. Ou1, Y. Zhu1, R. P. Martins1,2, C-H. Chan1, M. Zhang1
 1 University of Macau, Macau, China
 2 Instituto Superior Tecnico/University of Lisboa, Lisbon, Portugal

6.9 A 0.35V 0.367TOPS/W Image Sensor with 3-Layer Optical-Electronic Hybrid Convolutional Neural Network
 X. Wang*, Z. Huang*, T. Liu, W. Shi, H. Chen, M. Zhang
 Tsinghua University, Beijing, China
 *Equally Credited Authors (ECAs)

6.10 A 1/1.56-inch 50Mpixel CMOS Image Sensor with 0.5μm pitch Quad Photodiode Separated by Front Deep Trench Isolation
 D. Kim, K. Cho, H-C. Ji, M. Kim, J. Kim, T. Kim, S. Seo, D. Im, Y-N. Lee, J. Choi, S. Yoon, I. Noh, J. Kim, K. J. Lee, H. Jung, J. Shin, H. Hur, K. E. Chang, I. Cho, K. Woo, B. S. Moon, J. Kim, Y. Ahn, D. Sim, S. Park, W. Lee, K. Kim, C. K. Chang, H. Yoon, J. Kim, S-I. Kim, H. Kim, C-R. Moon, J. Song
 Samsung Semiconductor, Hwaseong, Korea

6.11 A 320x240 CMOS LiDAR Sensor with 6-Transistor nMOS-Only SPAD Analog Front-End and Area-Efficient Priority Histogram Memory
 M. Kim*1, H. Seo*1,2, S. Kim1, J-H. Chun1,2, S-J. Kim3, J. Choi*1,2
 1 Sungkyunkwan University, Suwon, Korea
 2 SolidVue, Seongnam, Korea
 3 Ulsan National Institute of Science and Technology, Ulsan, Korea
 *Equally Credited Authors (ECAs)

Imaging papers in other sessions: 

17.3 A Fully Wireless, Miniaturized, Multicolor Fluorescence Image Sensor Implant for Real-Time Monitoring in Cancer Therapy
 R. Rabbani*1, M. Roschelle*1, S. Gweon1, R. Kumar1, A. Vercruysse1, N. W. Cho2, M. H. Spitzer2, A. M. Niknejad1, V. M. Stojanovic1, M. Anwar1,2
 1 University of California, Berkeley, CA
 2 University of California, San Francisco, CA
 *Equally Credited Authors (ECAs)

33.10 A 2.7ps-ToF-Resolution and 12.5mW Frequency-Domain NIRS Readout IC with Dynamic Light Sensing Frontend and Cross-Coupling-Free Inter-Stabilized Data Converter
 Z. Ma1, Y. Lin1, C. Chen1, X. Qi1, Y. Li1, K-T. Tang2, F. Wang3, T. Zhang4, G. Wang1, J. Zhao1
 1 Shanghai Jiao Tong University, Shanghai, China
 2 National Tsing Hua University, Hsinchu, Taiwan
 3 Shanghai United Imaging Microelectronics Technology, Shanghai, China
 4 Shanghai Mental Health Center, Shanghai, China

Go to the original article...

IISW2023 special issue paper on well capacity of pinned photodiodes

Image Sensors World        Go to the original article...

Miyauchi et al. from Brillnics and Tohoku University published a paper titled "Analysis of Light Intensity and Charge Holding Time Dependence of Pinned Photodiode Full Well Capacity" in the IISW 2023 special issue of the journal Sensors.

In this paper, the light intensity and charge holding time dependence of pinned photodiode (PD) full well capacity (FWC) are studied for our pixel structure with a buried overflow path under the transfer gate. The formulae for PDFWC derived from a simple analytical model show that the relation between light intensity and PDFWC is logarithmic, because PDFWC is determined by the balance between the photo-generated current and the overflow current under bright conditions. Furthermore, using pulsed light before a charge holding operation in the PD, the accumulated charge in the PD decreases with holding time due to the overflow current, finally reaching an equilibrium PDFWC. The analytical model has been successfully validated by technology computer-aided design (TCAD) device simulation and actual device measurement.

Open access: https://doi.org/10.3390/s23218847
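The logarithmic light-intensity dependence described in the abstract can be sketched with a toy version of the balance model; all constants below are assumptions for illustration, not parameters extracted in the paper:

```python
import math

# Toy version of the balance model behind the logarithmic PDFWC law:
# the overflow current over the buried barrier grows exponentially with
# the potential rise of the stored charge, I_of = I0*exp(dV/VT_EFF), and
# saturation (PDFWC) is reached when the photocurrent equals the overflow
# current. All constants are assumed for illustration.

I0 = 1e-15       # overflow-current prefactor [A] (assumed)
VT_EFF = 0.06    # effective exponential slope [V] (assumed)
C_PD = 2e-15     # photodiode capacitance [F] (assumed)
Q_E = 1.602e-19  # elementary charge [C]

def pd_fwc(i_photo):
    """Equilibrium full-well charge (electrons) at photocurrent i_photo.

    Setting i_photo = I0*exp(dV/VT_EFF) gives dV = VT_EFF*ln(i_photo/I0),
    so the stored charge (above an omitted baseline) is C_PD*dV: it grows
    with the *logarithm* of the light intensity, as the paper derives.
    """
    dv = VT_EFF * math.log(i_photo / I0)
    return C_PD * dv / Q_E

for i in (1e-12, 1e-11, 1e-10):  # 10x steps in light intensity
    print(f"{i:.0e} A -> {pd_fwc(i):.0f} e-")
```

Each decade of light intensity adds the same charge increment (C_PD·VT_EFF·ln10/q, about 1.7 ke- with these assumed constants), which is the logarithmic dependence the paper reports.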

Figure 1. Measured dynamic behaviors of PPD.

Figure 2. Pixel schematic and pulse timing for characterization.

Figure 3. PD cross-section and potential of the buried overflow path.

Figure 4. Potential and charge distribution changes from PD reset to PD saturation.

Figure 5. Simple PD model for theoretical analysis.

Figure 6. A simple model of dynamic behavior from PD reset to PD saturation under static light condition.

Figure 7. Potential and charge distribution changes from PD saturation to equilibrium PDFWC.

Figure 8. A simple model of PD charge reduction during charge holding operation with pulse light.

Figure 9. Chip micrograph and specifications of our developed stacked 3Q-DPS [7,8,9].

Figure 10. Relation between ∆Vb and Iof with static TCAD simulation.

Figure 12. PDFWC under various light intensity conditions.

Figure 13. PDFWC with long charge holding times.

Figure 14. TCAD simulation results of equilibrium PDFWC potential.

Go to the original article...

Sony announces full-frame global shutter camera

Image Sensors World        Go to the original article...

Link: https://www.sony.com/lr/electronics/interchangeable-lens-cameras/ilce-9m3

Sony recently announced a full-frame global shutter camera which was featured in several press articles below:

PetaPixel https://petapixel.com/2023/11/07/sony-announces-a9-iii-worlds-first-global-sensor-full-frame-camera/

DPReview https://www.dpreview.com/news/7271416294/sony-announces-a9-iii-world-s-first-full-frame-global-shutter-camera

The Verge https://www.theverge.com/2023/11/7/23950504/sony-a9-iii-mirrorless-camera-global-shutter-price-release

From Sony's official webpage:

[This camera uses the] Newly developed full-frame stacked 24.6 MP Exmor RS™ image sensor with global shutter [...] a stacked CMOS architecture and integral memory [...] advanced A/D conversion enable high-speed processing to proceed with minimal delay. [AI features are implemented using the] BIONZ XR™ processing engine. With up to eight times more processing power than previous versions, the BIONZ XR image processing engine minimises processing latency [...] It's able to process the high volume of data generated by the newly developed Exmor RS image sensor in real-time, even while shooting continuous bursts at up to 120 fps, and it can capture high-quality 14-bit RAW images in all still shooting modes. [...] [The] α9 III can use subject form data to accurately recognise movement. Human pose estimation technology recognises not just eyes but also body and head position with high precision. 



Go to the original article...

2024 International SPAD Sensor Workshop Submission Deadline Approaching!

Image Sensors World        Go to the original article...

The deadline for the 2024 ISSW, December 8, 2023, is fast approaching! The paper submission portal is now open!

The 2024 International SPAD Sensor Workshop will be held from 4-6 June 2024 in Trento, Italy.

Paper submission

Workshop papers must be submitted online via Microsoft CMT. On the submission website, you may need to register first, then search for the "2024 International SPAD Sensor Workshop" in the list of conferences using the dedicated search bar.

Paper format

Note that the ISSW employs a single-stage submission process, so camera-ready papers must be submitted. Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of figures. The submission must include the authors' name(s) and affiliation, mailing address, telephone, and email address. Formatting can follow either a style that integrates text and figures, akin to the standard IEEE format, or a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples of these formats are available in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee. Accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed. Authors will be notified of the acceptance of their papers and posters by Wednesday, January 31st, 2024, at the latest.

Poster submission

In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics. If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.

Go to the original article...

Detecting hidden defects using a single-pixel THz camera

Image Sensors World        Go to the original article...


Li et al. present a new THz imaging technique for defect detection in a recent paper in the journal Nature Communications. The paper is titled "Rapid sensing of hidden objects and defects using a single-pixel diffractive terahertz sensor".

Abstract: Terahertz waves offer advantages for nondestructive detection of hidden objects/defects in materials, as they can penetrate most optically-opaque materials. However, existing terahertz inspection systems face throughput and accuracy restrictions due to their limited imaging speed and resolution. Furthermore, machine-vision-based systems using large-pixel-count imaging encounter bottlenecks due to their data storage, transmission and processing requirements. Here, we report a diffractive sensor that rapidly detects hidden defects/objects within a 3D sample using a single-pixel terahertz detector, eliminating sample scanning or image formation/processing. Leveraging deep-learning-optimized diffractive layers, this diffractive sensor can all-optically probe the 3D structural information of samples by outputting a spectrum, directly indicating the presence/absence of hidden structures or defects. We experimentally validated this framework using a single-pixel terahertz time-domain spectroscopy set-up and 3D-printed diffractive layers, successfully detecting unknown hidden defects inside silicon samples. This technique is valuable for applications including security screening, biomedical sensing and industrial quality control. 

Paper (open access): https://www.nature.com/articles/s41467-023-42554-2

News coverage: https://phys.org/news/2023-11-hidden-defects-materials-single-pixel-terahertz.html


In the realm of engineering and material science, detecting hidden structures or defects within materials is crucial. Traditional terahertz imaging systems, which rely on the unique property of terahertz waves to penetrate visibly opaque materials, have been developed to reveal the internal structures of various materials of interest.

This capability provides unprecedented advantages in numerous applications for industrial quality control, security screening, biomedicine, and defense. However, most existing terahertz imaging systems have limited throughput and bulky setups, and they need raster scanning to acquire images of the hidden features.

To change this paradigm, researchers at UCLA Samueli School of Engineering and the California NanoSystems Institute developed a unique terahertz sensor that can rapidly detect hidden defects or objects within a target sample volume using a single-pixel spectroscopic terahertz detector.

Instead of the traditional point-by-point scanning and digital image formation-based methods, this sensor inspects the volume of the test sample illuminated with terahertz radiation in a single snapshot, without forming or digitally processing an image of the sample.

Led by Dr. Aydogan Ozcan, the Chancellor's Professor of Electrical & Computer Engineering and Dr. Mona Jarrahi, the Northrop Grumman Endowed Chair at UCLA, this sensor serves as an all-optical processor, adept at searching for and classifying unexpected sources of waves caused by diffraction through hidden defects. The paper is published in the journal Nature Communications.

"It is a shift in how we view and harness terahertz imaging and sensing as we move away from traditional methods toward more efficient, AI-driven, all-optical sensing systems," said Dr. Ozcan, who is also the Associate Director of the California NanoSystems Institute at UCLA.

This new sensor comprises a series of diffractive layers, automatically optimized using deep learning algorithms. Once trained, these layers are transformed into a physical prototype using additive manufacturing approaches such as 3D printing. This allows the system to perform all-optical processing without the burdensome need for raster scanning or digital image capture/processing.

"It is like the sensor has its own built-in intelligence," said Dr. Ozcan, drawing parallels with their previous AI-designed optical neural networks. "Our design comprises several diffractive layers that modify the input terahertz spectrum depending on the presence or absence of hidden structures or defects within materials under test. Think of it as giving our sensor the capability to 'sense and respond' based on what it 'sees' at the speed of light."

To demonstrate their novel concept, the UCLA team fabricated a diffractive terahertz sensor using 3D printing and successfully detected hidden defects in silicon samples. These samples consisted of stacked wafers, with one layer containing defects and the other concealing them. The smart system accurately revealed the presence of unknown hidden defects with various shapes and positions.

The team believes their diffractive defect sensor framework can also work across other wavelengths, such as infrared and X-rays. This versatility heralds a plethora of applications, from manufacturing quality control to security screening and even cultural heritage preservation.

The simplicity, high throughput, and cost-effectiveness of this non-imaging approach promise transformative advances in applications where speed, efficiency, and precision are paramount.
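The decision principle above — a single-pixel spectrum whose shape flags a hidden defect, with no image ever formed — can be caricatured in a few lines. The diffractive layers themselves are not modeled here, and the frequency band, band split, and threshold are assumptions for illustration:

```python
import math

# Caricature of the single-pixel spectral readout described above: the
# 3D-printed diffractive layers (not modeled here) reshape the transmitted
# terahertz spectrum depending on hidden structure, so the decision reduces
# to a simple statistic on one measured spectrum. Frequencies, band split,
# and threshold are assumed for illustration, not taken from the paper.

FREQS = [0.1 + 0.9 * i / 63 for i in range(64)]  # THz, assumed detector band

def defect_score(spectrum):
    """Ratio of high-band to low-band power in the single-pixel spectrum."""
    lo = sum(s for f, s in zip(FREQS, spectrum) if f < 0.5)
    hi = sum(s for f, s in zip(FREQS, spectrum) if f >= 0.5)
    return hi / lo

def has_defect(spectrum, threshold=1.0):
    return defect_score(spectrum) > threshold

# Synthetic spectra: an intact sample transmits mostly low frequencies,
# while a hidden defect diffracts power into the upper band.
intact = [math.exp(-((f - 0.3) / 0.1) ** 2) for f in FREQS]
defective = [math.exp(-((f - 0.7) / 0.1) ** 2) for f in FREQS]
```

No raster scan and no image reconstruction are involved: one spectral measurement and one scalar statistic yield the presence/absence decision, which is what makes the approach fast.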

Go to the original article...

A 400 kilopixel resolution superconducting camera

Image Sensors World        Go to the original article...

Oripov et al. from NIST and JPL recently published a paper titled "A superconducting nanowire single-photon camera with 400,000 pixels" in Nature.

Abstract: For the past 50 years, superconducting detectors have offered exceptional sensitivity and speed for detecting faint electromagnetic signals in a wide range of applications. These detectors operate at very low temperatures and generate a minimum of excess noise, making them ideal for testing the non-local nature of reality, investigating dark matter, mapping the early universe and performing quantum computation and communication. Despite their appealing properties, however, there are at present no large-scale superconducting cameras—even the largest demonstrations have never exceeded 20,000 pixels. This is especially true for superconducting nanowire single-photon detectors (SNSPDs). These detectors have been demonstrated with system detection efficiencies of 98.0%, sub-3-ps timing jitter, sensitivity from the ultraviolet to the mid-infrared and microhertz dark-count rates, but have never achieved an array size larger than a kilopixel. Here we report on the development of a 400,000-pixel SNSPD camera, a factor of 400 improvement over the state of the art. The array spanned an area of 4 × 2.5 mm with 5 × 5-μm resolution, reached unity quantum efficiency at wavelengths of 370 nm and 635 nm, counted at a rate of 1.1 × 10^5 counts per second (cps) and had a dark-count rate of 1.0 × 10^−4 cps per detector (corresponding to 0.13 cps over the whole array). The imaging area contains no ancillary circuitry and the architecture is scalable well beyond the present demonstration, paving the way for large-format superconducting cameras with near-unity detection efficiencies across a wide range of the electromagnetic spectrum.

Link: https://www.nature.com/articles/s41586-023-06550-2

a, Imaging at 370 nm, with raw time-delay data from the buses shown as individual dots in red and binned 2D histogram data shown in black and white. b, Count rate as a function of bias current for various wavelengths of light as well as dark counts. c, False-colour scanning electron micrograph of the lower-right corner of the array, highlighting the interleaved row and column detectors. Lower-left inset, schematic diagram showing detector-to-bus connectivity. Lower-right inset, close-up showing 1.1-μm detector width and effective 5 × 5-μm pixel size. Scale bar, 5 μm.


a, Circuit diagram of a bus and one section of 50 detectors with ancillary readout components. SNSPDs are shown in the grey boxes and all other components are placed outside the imaging area. A photon that arrives at time t0 has its location determined by a time-of-flight readout process based on the time-of-arrival difference t2 − t1. b, Oscilloscope traces from a photon detection showing the arrival of positive (green) and negative (red) pulses at times t1 and t2, respectively.

a, Histogram of the pulse differential time delays Δt = t1 − t2 from the north bus during flood illumination with a Gaussian spot. All 400 detectors resolved clearly, with gaps indicating detectors that were pruned. Inset, zoomed-in region showing that counts from adjacent detectors are easily resolvable and no counts were generated by a pruned detector. b, Plot of raw trow and tcol time delays when flood illuminated at 370 nm. c, Zoomed-in subsection of the array with 25 × 25 detectors. d, Histogram of time delays for a 2 × 2 detector subset with 10-ps bin size showing clear distinguishability between adjacent detectors.

a, Count rate versus optical attenuation for a section of detectors biased at 45 μA per detector. The dashed purple line shows a slope of 1, with deviations from that line at higher rates indicating blocking loss. b, System jitter of a 50-detector section. Detection delay was calculated as the time elapsed between the optical pulse being generated and the detection event being read out.

News coverage: https://www.universetoday.com/163959/a-new-superconducting-camera-can-resolve-single-photons/

A New Superconducting Camera can Resolve Single Photons

Researchers have built a superconducting camera with 400,000 pixels, so sensitive it can detect single photons. It comprises a grid of superconducting nanowires that carry current with no resistance until a photon strikes one or more wires; there, superconductivity is locally suppressed, producing a signal. By combining the locations and intensities of the signals, the camera generates an image.

The researchers who built the camera, from the US National Institute of Standards and Technology (NIST) say the architecture is scalable, and so this current iteration paves the way for even larger-format superconducting cameras that could make detections across a wide range of the electromagnetic spectrum. This would be ideal for astronomical ventures such as imaging faint galaxies or extrasolar planets, as well as biomedical research using near-infrared light to peer into human tissue.

These devices have been possible for decades but with a fraction of the pixel count. This new version has 400 times more pixels than any other device of its type. Previous versions have not been very practical because of the low-quality output.

In the past, it was found to be difficult-to-impossible to chill the camera’s superconducting components – which would be hundreds of thousands of wires – by connecting them each to a cooling system.

According to NIST, researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by constructing the wires to form multiple rows and columns, like those in a tic-tac-toe game, where each intersection point is a pixel. Then they combined the signals from many pixels onto just a few room-temperature readout nanowires.
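The row/column readout sketched above works by time-of-flight on a shared bus: a firing detector launches pulses toward both ends, and the arrival-time difference encodes which tap fired. Here is a minimal sketch of that decoding; the per-tap delay, bus length, and linear mapping are assumptions for illustration, not the paper's values:

```python
# Toy sketch of the time-of-flight bus readout used by row/column SNSPD
# arrays like the one described above. The per-tap delay and bus size
# are assumed, not taken from the NIST/JPL paper.

TAP_DELAY = 50e-12  # assumed propagation delay between adjacent taps [s]
N_TAPS = 400        # taps (detectors) per bus

def tap_index(t1, t2):
    """Recover a tap index from pulse arrival times at the two bus ends.

    For tap k, t1 ~ k*TAP_DELAY and t2 ~ (N_TAPS-1-k)*TAP_DELAY (any
    common offset cancels), so t1 - t2 = (2k - (N_TAPS-1))*TAP_DELAY.
    """
    dt = (t1 - t2) / TAP_DELAY
    k = round((dt + (N_TAPS - 1)) / 2)
    return max(0, min(N_TAPS - 1, k))

def pixel(row_times, col_times):
    """Combine a row-bus and a column-bus detection into a pixel address."""
    return tap_index(*row_times), tap_index(*col_times)
```

This is why two buses suffice for 400 × 400 intersections: each detection needs only two timestamped pulses per bus, rather than one wire per pixel.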

The detectors can discern differences in the arrival time of signals as short as 50 trillionths of a second. They can also detect up to 100,000 photons a second striking the grid.

McCaughan said the readout technology can easily be scaled up for even larger cameras, and predicted that a superconducting single-photon camera with tens or hundreds of millions of pixels could soon be available.

In the meantime, the team plans to improve the sensitivity of their prototype camera so that it can capture virtually every incoming photon. That will enable the camera to tackle quantum imaging techniques that could be a game changer for many fields, including astronomy and medical imaging.

Go to the original article...

RADOPT 2023 Nov 29-30 in Toulouse, France

Image Sensors World        Go to the original article...

The 2023 workshop on Radiation Effects on Optoelectronic Detectors and Photonics Technologies (RADOPT) will be co-organised by CNES, UJM, SODERN, ISAE-SUPAERO, AIRBUS DEFENCE & SPACE, and THALES ALENIA SPACE in Toulouse, France on November 29 and 30, 2023.

After the success of RADOPT 2021, this second edition of the workshop will continue to combine and replace two well-known events of the photonic devices and ICs community: the “Optical Fibers in Radiation Environments Days (FMR)” and the Radiation Effects on Optoelectronic Detectors Workshop, traditionally organized every two years by the COMET OOE of CNES.

The objective of the workshop is to provide a forum for the presentation and discussion of recent developments regarding the use of optoelectronics and photonics technologies in radiation-rich environments. The workshop also offers the opportunity to highlight future prospects in the fast-moving space, high energy physics, fusion and fission research fields and to enhance exchanges and collaborations between scientists. Participation of young researchers (PhD) is especially encouraged.

Go to the original article...

SWIR Vision Systems announces 6 MP SWIR sensor to be released in 2024

Image Sensors World        Go to the original article...

The sensor is based on quantum dot crystals deposited on silicon.

Link: https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/

Acuros® CQD® sensors are fabricated via the deposition of quantum dot semiconductor crystals upon the surface of silicon wafers. The resulting CQD photodiode array enables high resolution, small pixel pitch, broad bandwidth, low noise, and low inter-pixel crosstalk arrays, eliminating the prohibitively expensive hybridization process inherent to InGaAs sensors. CQD sensor technology is silicon wafer-scale compatible, opening its potential to very low-cost high-volume applications.


  •  3072 x 2048 Pixel Array
  •  7 µm Pixel Pitch
  •  Global Snapshot Shutter
  •  Enhanced QE
  •  100 Hz Framerate
  •  Integrated 12-bit ADC
  •  Full Visible-to-SWIR bandwidth
  •  Compatible with a range of SWIR lenses
  •  Industrial Inspection: Inspection and quality control in various industries, including semiconductor, electronics, and pharmaceuticals.
  •  Agriculture: Crop health monitoring, food quality control, and moisture content analysis.
  •  Medical Imaging: Blood vessel imaging, tissue differentiation, and endoscopy.
  •  Degraded Visual Environment: Penetrating haze, smoke, rain & snow for improved situational awareness.
  •  Security and Defense: Target recognition, camouflage detection, and covert surveillance.
  •  Scientific Research: Astronomy, biology, chemistry, and material science.
  •  Remote Sensing: Environmental monitoring, geology, and mineral exploration.


Full press release:

SWIR Vision Systems to release industry-leading 6 MP SWIR sensors for defense, scientific, automotive, and industrial vision markets
The company’s latest innovation, the Acuros® 6, leverages its pioneering CQD® Quantum Dot image sensor technology, further contributing to the availability of very high resolution and broad-band sensors for a diversity of applications.

Durham, N.C., October 31, 2023 – SWIR Vision Systems today announces the upcoming release of two new models of short-wavelength infrared (SWIR) image sensors for Defense, Scientific, Automotive, and Industrial Users. The new sensors are capable of capturing images in the visible, the SWIR, and the extended SWIR spectral ranges. These very high resolution SWIR sensors are made possible by the company’s patented CQD Quantum Dot sensor technology.

SWIR Vision’s new products include both the Acuros 6 and the Acuros 4 CQD SWIR image sensors, featuring 6.3 megapixel and 4.2 megapixel global shutter arrays. Each sensor has a 7-micron pixel-pitch, 12-bit digital output, low read noise, and enhanced quantum efficiency, resulting in excellent sensitivity and SNR performance for a broad array of applications.

The new products employ SWIR Vision’s CQD photodiode technology, in which photodiodes are created via the deposition of low-cost films directly on top of silicon readout ICs. This approach enables small pixel sizes, affordable prices, broad spectral response, and industry-leading high-resolution SWIR focal plane arrays.

SWIR Vision is now engaging global camera makers, automotive, industrial, and defense system integrators, who will leverage these breakthrough sensors to tackle challenges in laser inspection and manufacturing, semiconductor inspection, automotive safety, long-range imaging, and defense.
“Our customers challenged us again to deliver more capability to their toughest imaging problems. The Acuros 4 and the Acuros 6 sensors deliver the highest resolution and widest spectral response available today,” said Allan Hilton, SWIR Vision’s Chief Product Officer. “The industry can expect to see new camera and system solutions based on these latest innovations from our best-in-class CQD sensor engineering group”.

About SWIR Vision Systems – SWIR Vision Systems (www.swirvisionsystems.com), a North Carolina-based startup company, has pioneered the development and introduction of high-definition, Colloidal Quantum Dot (CQD® ) infrared image sensor technology for infrared cameras, delivering breakthrough sensor capability. Imaging in the short wavelength IR has become critical for key applications within industrial, defense systems, mobile phones, and autonomous vehicle markets.
To learn more about our 6MP Sensors, go to https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/.

Go to the original article...

Acuros announces 6 MP SWIR Sensor to be released in 2024

Image Sensors World        Go to the original article...

The sensor is based on quantum dot crystals deposited on silicon.

Link: https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/

Acuros® CQD® sensors are fabricated via the deposition of quantum dot semiconductor crystals upon the surface of silicon wafers. The resulting CQD photodiode array enables high resolution, small pixel pitch, broad bandwidth, low noise, and low inter-pixel crosstalk arrays, eliminating the prohibitively expensive hybridization process inherent to InGaAs sensors. CQD sensor technology is silicon wafer-scale compatible, opening its potential to very low-cost high-volume applications.


  •  3072 x 2048 Pixel Array
  •  7µm Pixel Pitch
  •  Global Snapshot Shutter
  •  Enhanced QE
  •  100 Hz Framerate
  •  Integrated 12bit ADC
  •  Full Visible-to-SWIR bandwidth
  •  Compatible with a range of SWIR lenses
  • Industrial Inspection: Suitable for inspection and quality control in various industries, including semiconductor, electronics, and pharmaceuticals.
  •  Agriculture: Crop health monitoring, food quality control, and moisture content analysis.
  •  Medical Imaging: Blood vessel imaging, tissue differentiation, and endoscopy.
  •  Degraded Visual Environment: Penetrating haze, smoke, rain & snow for improved situational awareness.
  •  Security and Defense:Target recognition, camouflage detection, and covert surveillance.
  •  Scientific Research: Astronomy, biology, chemistry, and material science.
  •  Remote Sensing: Environmental monitoring, geology, and mineral exploration


Full press release:

SWIR Vision Systems to release industry-leading 6 MP SWIR sensors for defense, scientific, automotive, and industrial vision markets
The company’s latest innovation, the Acuros® 6, leverages its pioneering CQD® Quantum Dot image sensor technology, further contributing to the availability of very high resolution and broad-band sensors for a diversity of applications.

Durham, N.C., October 31, 2023 – SWIR Vision Systems today announces the upcoming release of two new models of short-wavelength infrared (SWIR) image sensors for Defense, Scientific, Automotive, and Industrial Users. The new sensors are capable of capturing images in the visible, the SWIR, and the extended SWIR spectral ranges. These very high resolution SWIR sensors are made possible by the company’s patented CQD Quantum Dot sensor technology.

SWIR Vision’s new products include both the Acuros 6 and the Acuros 4 CQD SWIR image sensors, featuring 6.3 megapixel and 4.2 megapixel global shutter arrays. Each sensor has a 7-micron pixel-pitch, 12-bit digital output, low read noise, and enhanced quantum efficiency, resulting in excellent sensitivity and SNR performance for a broad array of applications.

The new products employ SWIR Vision’s CQD photodiode technology, in which photodiodes are created via the deposition of low-cost films directly on top of silicon readout ICs. This approach enables small pixel sizes, affordable prices, broad spectral response, and industry-leading high-resolution SWIR focal plane arrays.

SWIR Vision is now engaging global camera makers, automotive, industrial, and defense system integrators, who will leverage these breakthrough sensors to tackle challenges in laser inspection and manufacturing, semiconductor inspection, automotive safety, long-range imaging, and defense.

“Our customers challenged us again to deliver more capability to their toughest imaging problems. The Acuros 4 and the Acuros 6 sensors deliver the highest resolution and widest spectral response available today,” said Allan Hilton, SWIR Vision’s Chief Product Officer. “The industry can expect to see new camera and system solutions based on these latest innovations from our best-in-class CQD sensor engineering group.”

About SWIR Vision Systems – SWIR Vision Systems (www.swirvisionsystems.com), a North Carolina-based startup company, has pioneered the development and introduction of high-definition, Colloidal Quantum Dot (CQD®) infrared image sensor technology for infrared cameras, delivering breakthrough sensor capability. Imaging in the short wavelength IR has become critical for key applications within industrial, defense systems, mobile phones, and autonomous vehicle markets.
To learn more about our 6MP Sensors, go to https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/.

imec paper on thin film pinned photodiode

Kim et al. from imec and coauthors from universities in Belgium and Korea recently published a paper titled "A Thin-Film Pinned-Photodiode Imager Pixel with Fully Monolithic Fabrication and beyond 1Me- Full Well Capacity" in MDPI Sensors. This paper describes imec's recent thin film pinned photodiode technology.

Open access paper link: https://www.mdpi.com/1424-8220/23/21/8803

Thin-film photodiodes (TFPD) monolithically integrated on the Si Read-Out Integrated Circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, the pixel development has been limited in terms of noise characteristics compared to the Si-based image sensors. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current, accompanied with a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme with the introduction of photogate (PG) to achieve PPD operation. This PG brings not only a low noise performance, but also a high full well capacity (FWC) coming from the large capacitance of its metal-oxide-semiconductor (MOS). Hence, the FWC of the pixel is boosted up to 1.37 Me- with a 5 μm pixel pitch, which is 8.3 times larger than the FWC that the TFPD junction capacitor can store. This large FWC, along with the inherent low noise characteristics of the TF-PPD, leads to the three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4 T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it has worked for CMOS Image sensors (CIS).
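
The abstract's 100.2 dB figure follows from the usual single-exposure definition of dynamic range, DR = 20·log10(FWC / noise floor). A minimal sketch of that relation; note the ~13.4 e- noise value used below is back-calculated from the paper's stated FWC and DR, not quoted from it:

```python
import math

def dynamic_range_db(full_well_e: float, noise_e: float) -> float:
    """Single-exposure dynamic range in dB: 20 * log10(FWC / noise floor)."""
    return 20.0 * math.log10(full_well_e / noise_e)

# The paper's 1.37 Me- FWC and 100.2 dB DR together imply an effective
# noise floor of roughly 13.4 e- (back-calculated, not quoted).
print(dynamic_range_db(1.37e6, 13.4))  # ~100.2 dB
```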

Figure 1. Pixel cross-section for the monolithic TFPD image sensor (a) 3 T and (b) 4 T (TF-PPD) structure (TCO: transparent conductive oxide, HTL: hole transport layer, PG: photogate, TG: transfer gate, FD: floating diffusion). Electric potential and signal readout configuration for 3 T pixel (c) and for 4 T pixel (d). Pixel circuit diagram for 3 T pixel (e) and for the 4 T pixel (f).


Figure 2. I-V characteristic of QDPD test structure (a) and of IGZO TFT (b), a micrograph of the TF-PPD passive pixel array (c), and its measurement schematic (d). Band diagrams for the PD (e) and PG (f).

Figure 3. Silvaco TCAD simulation results; (a) simulated structure, (b) lateral potential profile along the IGZO layer, and (c) potential profile when TG is turned off and (d) on.

Figure 4. Signal output vs. integration time with different VPG and VTG values with the illumination. Signal curves with the fixed VTG (−1 V), varying VPG (−4~−1 V) (a), the same graphs for the fixed VPG (−2 V), and different VTGs (−6.5~−1 V) (b).


Figure 5. (a) Pixel output vs. integration time for different pixel pitches. (b) FWC comparison between estimation and measurement.

Figure 6. FWC comparison by different pixel fill factors. Pixel schematics for different shapes (a), and FWC by different pixel shapes and pitches (b).

Figure 7. Potential diagram describing FWC increase by the larger VPG (a), and FWC vs. VPG (b).

Figure 8. Passive pixel dark current (a) and Arrhenius plots (b) for the QDPD test structure and the passive pixel.

Figure 9. FWC vs. pixel area. A guideline showing the FWC density per unit area for this work (blue) and a trend line for most CISs (red).


EETimes article about imec’s new thin film pinned photodiode

Full article: https://www.eetimes.eu/imec-taps-pinned-photodiode-to-build-a-better-swir-sensor/

Imec Taps Pinned Photodiode to Build a Better SWIR Sensor

‘Monolithic hybrid’ prototype integrates PPD into the TFT structure to lower the cost of light detection in the nonvisible range, with improved noise performance. 

Silicon-based image sensors can detect light within a limited range of wavelengths and thus have limitations in applications like automotive and medical imaging. Sensors that can capture light beyond the visible range, such as short-wave infrared (SWIR), can be built using III-V materials, which combine such elements as gallium, indium, aluminum and phosphorous. But while those sensors perform well, their manufacture requires a high degree of precision and control, increasing their cost.

Research into less expensive alternatives has yielded thin-film absorbers such as quantum-dot (QD) and other organic photodiode (OPD) materials that are compatible with the CMOS readout circuits found in electronic devices, an advantage that has boosted their adoption for IR detection. But thin-film absorbers exhibit higher levels of noise when capturing IR light, resulting in lower image quality. They are also known to have lower sensitivity to IR.

The challenge, then, is to design a cost-effective image sensor that uses thin-film absorbers but offers better noise performance. Imec has taken aim at the problem by revisiting a technology that was first used in the 1980s to improve noise in early CMOS image sensors: the pinned photodiode (PPD).

The PPD structure’s ability to completely remove electrical charges before starting a new capture cycle makes it an efficient approach, as the sensor can reset without unwanted background noise (kTC noise) or any lingering influence from the previous image frame. PPDs quickly became the go-to choice for consumer-grade silicon-based image sensors. Their low noise and high power efficiency made them a favorite among camera manufacturers.
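
The kTC noise being avoided here is the thermal reset noise of the sense-node capacitance, with rms charge uncertainty sqrt(kTC). A small illustrative calculation (generic values, not tied to any specific device):

```python
import math

K_B = 1.380649e-23     # Boltzmann constant, J/K
Q_E = 1.602176634e-19  # elementary charge, C

def ktc_noise_electrons(cap_farads: float, temp_k: float = 300.0) -> float:
    """RMS reset (kTC) noise of a capacitance, expressed in electrons."""
    return math.sqrt(K_B * temp_k * cap_farads) / Q_E

# A generic 1 fF sense node at 300 K carries ~12.7 e- rms of reset noise,
# which complete charge transfer in a PPD avoids.
print(ktc_noise_electrons(1e-15))
```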

Researchers at imec integrated a PPD structure into thin-film–transistor (TFT) image sensors to yield a hybrid prototype. The sensor structure also uses imec’s proprietary indium gallium zinc oxide (IGZO) technology for electron transport.

“You can call such systems ‘monolithic hybrid’ sensors, where the photodiode is not a part of the CMOS circuit [as in CMOS image sensors, in which silicon is used for light absorption], but is formed with another material as the photoactive layer,” Pawel Malinowski, Pixel Innovations program manager at imec, told EE Times Europe. “The spectrum this photodiode captures is something separate … By introducing an additional thin-film transistor in between, it enables separation of the storage and readout nodes, making it possible to fully deplete the photodiode and transfer all charges to the readout, [thereby] preventing the generation of kTC noise and reducing image lag.”

Unlike the conventional thin-film-based pixel architecture, imec’s TFT hybrid PPD structure introduces a separate thin-film transistor (TFT) to the design, which acts as a transfer gate and a photogate—in other words, it functions as a middleman. Here, imec’s IGZO technology serves as an effective electron transport layer, as it has higher electron mobility. Also acting as the gate dielectric, it contributes to the performance of the sensor by controlling the flow of charges and enhancing absorption characteristics.

With the new elements strategically placed within the traditional PPD structure, the prototype 4T image sensor showed a low readout noise of 6.1e-, compared to >100e- for the conventional 3T sensor, demonstrating its superior noise performance, imec stated. Because of IGZO’s large bandgap, the TFT hybrid PPD structure also entails lower dark current than traditional CMOS image sensors. This means the image sensor can capture infrared images with less noise, less distortion or interference, and more accuracy and detail, according to imec.

Figure 1: Top (a) and cross-sectional (b) view of structure of TF-PPD pixels

By using thin-film absorbers, imec’s prototype image sensor can detect at SWIR wavelengths and beyond, imec said. Image sensors operating in the near-infrared range are already used in automotive applications and consumer apps like iPhone Face ID. Going to longer wavelengths, such as SWIR, enables better transmission through OLED displays, which leads to better “hiding” of the components behind the screen and reduction of the “notch.”

Malinowski said, “In automotive, going to longer wavelengths can enable better visibility in adverse weather conditions, such as visibility through fog, smoke or clouds, [and achieve] increased contrast of some materials that are hard to distinguish against a dark background—for example, high contrast of textiles against poorly illuminated, shaded places.” Using the thin-film image sensor could make intruder detection and monitoring in dark conditions more effective and cost-efficient. It could also aid in medical imaging, which uses SWIR to study veins, blood flow and tissue properties.

Looking ahead, imec plans to diversify the thin-film photodiodes that can be used in the proposed architecture. The current research has tested for two types of photodiodes: a photodiode sensitive to near-infrared and a QD photodiode sensitive to SWIR.

“Current developments were focused on realizing a proof-of-concept device, with many design and process variations to arrive at a generic module,” Malinowski said. “Further steps include testing the PPD structure with different photodiodes—for example, other OPD and QDPD versions. Furthermore, next-generation devices are planned to focus on a more specific use case, with a custom readout suitable for a particular application.

“SWIR imaging with quantum dots is one of the avenues for further developments and is also a topic with high interest from the imaging community,” Malinowski added. “We are open to collaborations with industrial players to explore and mature this exciting sensor technology.”

onsemi announces Hyperlux low power CIS for smart home

Press release: https://www.onsemi.com/company/news-media/press-announcements/en/onsemi-introduces-lowest-power-image-sensor-family-for-smart-home-and-office

onsemi Introduces Lowest Power Image Sensor Family for Smart Home and Office 

Hyperlux LP Image Sensors can extend battery life by up to 40%¹

What's New: Today onsemi introduced the Hyperlux LP image sensor family ideally suited for industrial and commercial cameras such as smart doorbells, security cameras, AR/VR/XR headsets, machine vision and video conferencing. These 1.4 µm pixel sensors deliver industry-leading image quality and low power consumption while maximizing performance to capture crisp, vibrant images even in difficult lighting conditions.

The product family also features a stacked architecture design that minimizes its footprint and at its smallest approaches the size of a grain of rice, making it ideal for devices where size is critical. Depending on the use case, customers can choose between the 5-megapixel AR0544, the 8-megapixel AR0830 or the 20-megapixel AR2020.

Why It Matters: Home and business owners continue to choose cameras to protect themselves more than any other security measure, with the market expected to triple by the end of the decade.² As a result, consumers are demanding devices that offer better image quality, reliability and longer battery life to improve the overall user experience.

With the image sensors, cameras can deliver clearer images and more accurate object detection even in harsh weather and lighting conditions. Additionally, these cameras are often placed in locations that can be difficult to access to replace or recharge batteries, making low power consumption a critical feature.

How It Works: The Hyperlux LP family is packed with features and proprietary technologies that optimize performance and resolution including:

  •  Wake on Motion – Enables the sensors to operate in a low-power mode that draws a fraction of the power needed in the full-performance mode. Once the sensor detects movement, it moves to a higher performance state in less time than it takes to snap a photo.
  •  Smart ROI – Delivers more than one region of interest (ROI) to give a context view of the scene at reduced bandwidth and a separate ROI in original detail.
  •  Near-Infrared (NIR) Performance – Delivers superior image quality due to the innovative silicon design and pixel architecture, with minimal supplemental lighting.
  •  Low Power – Reduces thermal noise which negatively impacts image quality and eliminates the need for heat sinks, reducing the overall cost of the vision system.
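
The power benefit of Wake on Motion comes from duty cycling: the sensor spends most of its time in a low-power monitoring state and only briefly runs at full performance. A toy model of that trade-off (all numbers below are hypothetical, not onsemi specifications):

```python
def avg_power_mw(p_active_mw: float, p_idle_mw: float,
                 active_fraction: float) -> float:
    """Duty-cycle-weighted average power of a wake-on-motion sensor."""
    return active_fraction * p_active_mw + (1.0 - active_fraction) * p_idle_mw

# Hypothetical figures: 100 mW at full performance, 2 mW while monitoring,
# active 5% of the time.
print(avg_power_mw(100.0, 2.0, 0.05))  # 6.9 mW average
```

With those hypothetical figures the average draw is 6.9 mW, far closer to the idle floor than to the active power, which is why battery-powered cameras benefit so much from a fast, reliable wake trigger.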

Supporting Quotes:
“By leveraging our superior analog design and pixel architecture, our sensors elevate the two most important elements people consider when buying a device, picture quality and battery life. Our new image sensor family delivers performance that matters with a significantly increased battery life and exquisite, highly detailed images,” said Ross Jatou, senior vice president and general manager, Intelligent Sensing Group, onsemi.

In addition to smart home devices, one of the other applications the Hyperlux LP family can improve is the office meeting experience with more intuitive, seamless videoconferencing solutions.

“Our video collaboration solutions require high-quality image sensors that bring together multiple factors for the best user experience. The superior optical performance, innovative features and extremely low power consumption of the Hyperlux LP image sensors enable us to deliver a completely immersive virtual meeting experience in highly intelligent and optimized videoconferencing systems,” said Ashish Thanawala, Sr. Director of Systems Engineering, Owl Labs.

What's Next: The Hyperlux LP Image Sensor Family will be available in the fourth quarter of 2023.

More Information:
 Learn more about the AR2020, the AR0830 and the AR0544.
 Read the blog: A Closer Look - Hyperlux LP Image Sensors

¹ Based on internal tests conducted under specific conditions. Actual results may vary based on device, usage patterns, and other external factors.
² Status of the CMOS Image Sensor Industry, Yole Intelligence Report, 2023.

ESSCIRC 2023 Lecture on "circuit insights" by Dr. Sara Pellegrini

In this invited talk at ESSCIRC 2023, Dr. Pellegrini shares her insights on circuits and sensor design through her research career at Politecnico Milano, Heriot Watt and now at STMicro. The lecture covers basics of LiDAR and SPAD sensors, and various design challenges such as low signal strength and background illumination.

Dr. Robert Henderson’s lecture on time-of-flight SPAD cameras


Imaging Time: Cameras for the Fourth Dimension

Time is often considered as the fourth dimension, along with the length, width and depth that form the fabric of space-time. Conventional cameras observe only two of those dimensions, inferring depth from spatial cues, and record time only coarsely relative to many fast phenomena in the natural world. In this talk, I will introduce the concept of time cameras, devices based on single photon avalanche diodes (SPADs) that can record the time dimension of a scene at the picosecond scales commensurate with the speed of light. This talk will chart two decades of my research into these devices, which have seen their transformation from a research curiosity to a mainstream semiconductor technology with billions of SPAD devices in consumer use in mobile phones for depth-sensing autofocus-assist. We will illustrate the talk with videos and demonstrations of ultrafast SPAD cameras developed at the University of Edinburgh. I am proud that my group’s research maintains the University’s position at the forefront of imaging technology, which has transformed our lives, seeing the transition from chemical film to digital cameras, the omnipresence of camera phones and video meetings. In the near future, SPAD-based time cameras can also be expected to play a major societal role, within optical radars (LIDARs) for robotic vision and driverless cars, surgical guidance for cancer and perhaps even to add two further dimensions to the phone camera in your pocket!
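
The picosecond-to-depth connection the abstract draws can be made concrete with the direct time-of-flight relation d = c·t/2, where the factor of 2 accounts for the photon's round trip. A minimal sketch:

```python
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a photon round-trip time: d = c * t / 2."""
    return C_M_PER_S * round_trip_s / 2.0

# 100 ps of timing resolution corresponds to ~15 mm of depth resolution.
print(tof_distance_m(100e-12))
```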

Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK where he worked on the world’s first single chip video camera. From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division he developed image sensors for mobile phone applications. He joined University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of STMicroelectronics FlightSense series, which perform an autofocus-assist now present in over 1 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.

Image Sensing Topics at Upcoming IEDM 2023 Dec 9-13 in San Francisco

The 69th annual IEEE International Electron Devices Meeting (IEDM) will be held in San Francisco Dec. 9-13. This year there are three sessions dealing with advanced image sensing topics. You can find summaries of all of these papers by going here (https://submissions.mirasmart.com/IEDM2023/Itinerary/EventsAAG.aspx) and then clicking on the relevant sessions and papers within each one:
Session #8 on Monday, Dec. 11 is “Advanced Photonics for Image Sensors and High-Speed Communications.” The first three of its six papers deal with device and integration concepts for sub-diffraction color filters targeting imaging key performance indicators, while the other three deal with devices and technologies for high-speed communication systems.

  1.  IMEC will describe a novel sub-micron integration approach to color-splitting, to match human eye color sensitivity.
  2.  VisEra Technologies will describe the use of nano-light pillars to improve the quantum efficiency and signal-to-noise ratio (SNR) of color filters on CMOS imaging arrays under low-light conditions.
  3.  Samsung will detail a metasurface nano-prism structure for wide field-of-view lenses, demonstrating 25% higher sensitivity and 1.2 dB increased SNR vs. conventional micro-lenses.
  4.  National University of Singapore will describe the integration of ferroelectric material into a LiNbO3-on-insulator photonic platform, demonstrating non-volatile memory and high-efficiency modulators with an efficiency of 66 pm/V.
  5.  IHP will discuss the first germanium electro-optical modulator operating at 100 GHz in a SiGe BiCMOS photonics technology.
  6.  An invited paper from Intel will discuss the first 256 Gbps WDM transceiver with eight 200 GHz-spaced wavelengths simultaneously modulated at 32 Gbps, and with a bit-error-rate less than 1e-12.

Session #20 on Tuesday, Dec. 12 is “Emerging Photodetectors.” It features five papers describing recent developments in emerging photodetectors spanning the MIR to the DUV spectral range, and from group IV and III-V sensors to organic detectors.

  1.  The first paper by KAIST presents a fully CMOS-compatible Ge-on-Insulator platform for detection of wavelengths beyond 4 µm.
  2.  The second paper by KIST (not a typo) presents a new record-low-jitter SPAD device integrated into a CIS process technology, covering a spectral range of visible up to NIR.
  3.  The third paper by KAIST describes a wavelength-tunable detection device combining optical gratings and phase-change materials, reaching wavelengths up to 1700 nm.
  4.  The University of Science and Technology of China will report on a dual-function tunable emitter and NIR photodetector combination based on III-V GaN/AlGaN nanowires on silicon.
  5.  An invited paper from France’s CNRS gives an overview on next-generation sustainable organic photodetectors and emitters.

Session #40 on Wednesday, Dec. 13 features six papers describing the most recent advances in image sensors.

  1.  Samsung will describe a 0.5 µm pixel, 3 layers-stacked, CMOS image sensor (CIS) with in-pixel Cu-Cu bonding technology featuring improved conversion gain and noise.
  2.  Omnivision will present a 2.2 µm-2 layer stacked high dynamic range VDGS CIS with 1x2 shared structure offering dual conversion gain and achieving low FPN.
  3.  STMicroelectronics will describe a 2.16 µm 6T BSI VDGS CIS using deep trench capacitors and achieving 90 dB dynamic range using spatially-split exposure.
  4.  Meta will describe a 2 megapixel - 4.23 µm pixel pitch - offering block-parallel A/D architecture and featuring programmable sparse-capture with a fine grain gating scheme for power saving.
  5.  Canon will introduce a new twisted photodiode CIS structure - 6 µm pixel pitch - enabling all-directional autofocus for high speed and accuracy and 95 dB DR.
  6.  Shanghai Jiao Tong University will present a 64x64-pixel organic imager prototype, based on a novel hole transporting layer (HTL)-free structure achieving the highest recorded low-light performance.

Full press release about the conference is below.

2023 IEEE International Electron Devices Meeting to Highlight Advances in Critical Semiconductor Technologies with the Theme, “Devices for a Smart World Built Upon 60 Years of CMOS”

Four Focus Sessions on topics of intense research interest:

  •  3D Stacking for Next-Generation Logic & Memory by Wafer Bonding and Related Technologies
  •  Logic, Package and System Technologies for Future Generative AI
  •  Neuromorphic Computing for Smart Sensors
  •  Sustainability in Semiconductor Device Technology and Manufacturing

SAN FRANCISCO, CA – Since it began in 1955, the IEEE International Electron Devices Meeting (IEDM) has been where the world’s best and brightest electronics technologists go to learn about the latest breakthroughs in semiconductor and related technologies. That tradition continues this year, when the 69th annual IEEE IEDM conference takes place in-person December 9-13, 2023 at the Hilton San Francisco Union Square hotel, with online access to recorded content available afterward.
The 2023 IEDM technical program, supporting the theme, “Devices for a Smart World Built Upon 60 Years of CMOS,” will consist of more than 225 presentations plus a full slate of panels, Focus Sessions, Tutorials, Short Courses, a career luncheon, supplier exhibit and IEEE/EDS award presentations.
“The IEDM offers valuable insights into where the industry is headed, because the leading-edge work presented at the conference showcases major trends and paradigm shifts in key semiconductor technologies,” said Jungwoo Joh, IEDM 2023 Publicity Chair and Process Development Manager at Texas Instruments. “For example, this year many papers discuss ways to stack devices in 3D configurations. This is of course not new, but two things are especially noteworthy about this work. One is that it isn’t just happening with conventional logic and memory devices, but with sensors, power, neuromorphic and other devices as well. Also, many papers don’t describe futuristic laboratory studies, but rather specific hardware demonstrations that have generated solid results, opening pathways to commercial feasibility.”
“Finding the right materials and device configurations to develop transistors that will perform well with acceptable levels of reliability remains a key challenge,” said Kang-ill Seo, IEDM 2023 Publicity Vice Chair and Vice President, Semiconductor R&D, Samsung Semiconductor. “This year’s program shows that electrothermal considerations remain a key focus, particularly with attempts to add functionality to a chip’s interconnect, or wiring, which is fabricated using low-temperature processes.”
Here are details of the 2023 IEEE International Electron Devices Meeting:
Tutorial Sessions – Saturday, Dec. 9
The Saturday tutorial sessions on emerging technologies are presented by experts in the field to bridge the gap between textbook-level knowledge and leading-edge current research, and to introduce attendees to new fields of interest. There are three time slots, each with two tutorials running in parallel:
1:30 p.m. - 2:50 p.m.
• Innovative Technology for Beyond 2 nm, Matthew Metz, Intel
• CMOS+X: Functional Augmentation of CMOS for Next-Generation Electronics, Sayeef Salahuddin, UC-Berkeley
3:05 p.m. - 4:25 p.m.
• Reliability Challenges of Emerging FET Devices, Jacopo Franco, Imec
• Advanced Packaging and Heterogeneous Integration - Past, Present & Future, Madhavan Swaminathan, Penn State
4:40 p.m. - 6:00 p.m.
• Synapses, Circuits, and Architectures for Analog In-Memory Computing-Based Deep Neural Network Inference Hardware Acceleration, Irem Boybat, IBM
• Tools for Device Modeling: From SPICE to Scientific Machine Learning, Keno Fischer, JuliaHub
Short Courses – Sunday, Dec. 10
In contrast to the Tutorials, the full-day Short Courses are focused on a single technical topic. They offer the opportunity to learn about important areas and developments, and to network with global experts.

• Transistor, Interconnect, and Chiplets for Next-Generation Low-Power & High-Performance Computing, organized by Yuri Y. Masuoka, Samsung

  •  Advanced Technology Requirement for Edge Computing, Jie Deng, Qualcomm
  •  Process Technology toward 1nm and Beyond, Tomonari Yamamoto, Tokyo Electron
  •  Empowering Platform Technology with Future Semiconductor Device Innovation, Jaehun Jeong, Samsung
  •  Future Power Delivery Process Architectures and Their Capability and Impact on Interconnect Scaling, Kevin Fischer, Intel
  •  DTCO/STCO in the Era of Vertical Integration, YK Chong, ARM
  •  Low Power SOC Design Trends/3D Integration/Packaging for Mobile Applications, Milind Shah, Google

• The Future of Memory Technologies for High-Performance Memory and Computing, organized by Ki Il Moon, SK Hynix

  •  High-Density and High-Performance Technologies for Future Memory, Koji Sakui, Unisantis Electronics Singapore/Tokyo Institute of Technology
  •  Advanced Packaging Solutions for High Performance Memory and Compute, Jaesik Lee, SK Hynix
  •  Analog In-Memory Computing for Deep Learning Inference, Abu Sebastian, IBM
  •  The Next Generation of AI Architectures: The Role of Advanced Packaging Technologies in Enabling Heterogeneous Chiplets, Raja Swaminathan, AMD
  •  Key Challenges and Directional Path of Memory Technology for AI and High-Performance Computing, Keith Kim, NVIDIA
  •  Charge-Trapping Memories: From the Fundamental Device Physics to 3D Memory Architectures (3D NAND, 3D NOR, 3D DRAM) and Computing in Memory (CIM), Hang-Ting (Oliver) Lue, Macronix

Plenary Presentations – Monday, Dec. 11

  •  Redefining Innovation: A Journey forward in the New Dimension Era, Siyoung Choi, President & GM, Samsung Foundry Business, Device Solutions Division
  •  The Next Big Thing: Making Memory Magic and the Economics Beyond Moore's Law, Thy Tran, Vice President of Global Frontend Procurement, Micron
  •  Semiconductor Challenges in the 5G and 6G Technology Platforms, Björn Ekelund, Corporate Research Director, Ericsson

Evening Panel Session – Tuesday evening, Dec. 12
The IEDM evening panel session is an interactive forum where experts give their views on important industry topics, and audience participation is encouraged to foster an open exchange of ideas. This year’s panel will be moderated by Dan Hutcheson, Vice Chair at Tech Insights.

  •  AI: Semiconductor Catalyst? Or Disrupter? Artificial Intelligence (AI) has long been a hot topic. In 2023 it became super-heated when large language models became readily available to the public. This year’s IEDM will not rehash what’s been dragged through media. Instead, it will bring together industry experts to have a conversation about how AI is changing the semiconductor industry and to ask them how they are using AI to transform their efforts. The topics will be wide-ranging, from how AI will drive demand for semiconductors, to how it’s changing design and manufacturing, and even to how it will change the jobs and careers of those working in it.

Luncheon – Tuesday, Dec. 12
There will be a career-focused luncheon featuring industry and scientific leaders talking about their personal experiences in the context of career growth. The discussion will be moderated by Jennifer Zhao, President/CEO, ams OSRAM USA Inc. The speakers will be:

  •  Ilesanmi Adesida, University Provost and Acting President, Nazarbayev University, Kazakhstan -- Professor Ilesanmi Adesida is a scientist/engineer and an experienced administrator in both scientific and educational circles, with more than 350 peer-reviewed articles/250 presentations at international conferences.
  •  Isabelle Ferain, Vice-President of Technology Development, GlobalFoundries -- Dr. Ferain oversees GF’s technology development mission in its 300mm fabs in the US and Europe.

Vendor Exhibition/MRAM Poster Session/MRAM Global Innovation Forum

  •  A vendor exhibition will be held once again.
  •  A special poster session dedicated to MRAM (magnetoresistive RAM memory) will take place during the IEDM on Tuesday, Dec. 12 from 2:20 p.m. to 5:30 p.m., sponsored by the IEEE Magnetics Society.
  •  Also sponsored by the IEEE Magnetics Society, the 15th MRAM Global Innovation Forum will be held in the same venue after the IEDM conference concludes, on Thursday, Dec. 14.

For registration and other information, visit www.ieee-iedm.org.
Follow IEDM via social media

About IEEE & EDS
IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. The IEEE Electron Devices Society is dedicated to promoting excellence in the field of electron devices, and sponsors the IEEE IEDM.

Go to the original article...

Metalenz announces polarization sensor for face ID

Image Sensors World        Go to the original article...

Press release: https://metalenz.com/metalenz-launches-polar-id-enabling-simple-secure-face-unlock-for-smartphones/

Metalenz Launches Polar ID, Enabling Simple, Secure Face Unlock for Smartphones 

  •  The world’s first polarization sensor for smartphones, Polar ID provides ultra-secure facial authentication in a condensed footprint, lowering implementation cost and complexity.
  •  Now demonstrated on Qualcomm Technologies’ latest Snapdragon mobile platform, Polar ID is poised to drive large-scale adoption of secure face unlock across the Android ecosystem.

Boston, MA – October 26, 2023 – Meta-optics industry leader Metalenz unveiled Polar ID, a revolutionary new face unlock solution, at Qualcomm Technologies’ annual Snapdragon Summit this week. As the world’s only consumer-grade imaging system that can sense the full polarization state of light, Polar ID enables the next level of biometric security. Using breakthrough advances in meta-optic capability, Polar ID accurately captures the unique “polarization signature” of a human face. With this additional layer of information, even the most sophisticated 3D masks and spoof instruments are immediately detected as non-human.

Facial authentication provides a seamless method for unlocking phones and authorizing digital payments. However, making the solution sufficiently secure has required expensive, bulky, and often power-hungry optical modules, which historically limited face unlock to only a few high-end phone models. Polar ID harnesses meta-optic technology to extract additional information, such as facial contour details, and to detect human tissue liveness from a single image. It is significantly more compact and cost-effective than incumbent structured-light face authentication solutions, which require an expensive dot-pattern projector and multiple images.
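Metalenz does not disclose how the polarization signature is computed. As a rough, hypothetical sketch of the underlying physics: a polarization sensor samples each scene point behind linear polarizers at four angles, from which the linear Stokes parameters, and hence the degree and angle of linear polarization that can distinguish skin from a mask surface, are estimated. Function and variable names below are illustrative, not Metalenz's:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Estimate linear Stokes parameters from four polarizer-angle images.

    i0..i135: intensity images captured behind linear polarizers oriented
    at 0, 45, 90, and 135 degrees. Returns (S0, DoLP, AoLP):
    total intensity, degree of linear polarization, angle of linear
    polarization (radians).
    """
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical component
    s2 = i45 - i135                      # +45 vs. -45 diagonal component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-9)
    aolp = 0.5 * np.arctan2(s2, s1)      # quadrant-aware half-angle
    return s0, dolp, aolp
```

For fully polarized light at 0° (i0 = 1, i90 = 0, i45 = i135 = 0.5) this yields DoLP = 1 and AoLP = 0, while equal intensities at all four angles (unpolarized light) give DoLP = 0.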

Now demonstrated on a smartphone reference design powered by the new Snapdragon® 8 Gen 3 Mobile Platform, Polar ID has the efficiency, footprint, and price point to enable any Android smartphone OEM to bring the convenience and security of face unlock to the hundreds of millions of mobile devices that currently use fingerprint sensors.

“Size, cost, and performance: those are the key metrics in the consumer industry,” said Rob Devlin, Metalenz CEO & Co-founder. “Polar ID offers an advantage in all three. It’s small enough to fit in the most challenging form factors, eliminating the need for a large notch in the display. It’s secure enough that it doesn’t get fooled by the most sophisticated 3D masks. It’s substantially higher resolution than existing facial authentication solutions, so even if you’re wearing sunglasses and a surgical mask, the system still works. As a result, Polar ID delivers secure facial recognition at less than half the size and cost of incumbent solutions.”

“With each new generation of our flagship Snapdragon 8 series, our goal is to deliver the next generation of cutting-edge smartphone imaging capabilities to consumers. Our advanced Qualcomm® Spectra™ ISP and Qualcomm® Hexagon™ NPU were specifically designed to enable complex new imaging solutions, and we are excited to work with Metalenz to support their new Polar ID biometric imaging solution on our Snapdragon mobile platform for the first time,” said Judd Heape, VP of Product Management, Qualcomm Technologies, Inc.

“Polar ID is a uniquely powerful biometric imaging solution that combines our polarization image sensor with post-processing algorithms and sophisticated machine learning models to reliably and securely recognize and authenticate the phone’s registered user. Working closely with Qualcomm Technologies to implement our solution on their reference smartphone powered by Snapdragon 8 Gen 3, we were able to leverage the advanced image signal processing capabilities of the Qualcomm Spectra ISP while also implementing mission-critical aspects of our algorithms in the secure framework of the Qualcomm Hexagon NPU, to ensure that the solution is not only spoof-proof but also essentially unhackable,” said Pawel Latawiec, CTO of Metalenz. “The result is an extremely fast and compute-efficient face unlock solution ready for OEMs to use in their next generation of Snapdragon 8 Gen 3-powered flagship Android smartphones.”

Polar ID is under early evaluation with several top smartphone OEMs, and additional evaluation kits will be made available in early 2024. Metalenz will exhibit its revolutionary Polar ID solution at MWC Barcelona and is now booking meetings to showcase a live demo of the technology to mobile OEMs.
Contact sales@metalenz.com to reserve your demo.


Go to the original article...

Fraunhofer IMS 10th CMOS Imaging Workshop Nov 21-22 in Duisburg, Germany

Image Sensors World        Go to the original article...


10th CMOS Imaging Workshop 

What to expect
You are kindly invited to an exciting event that will promote exchange among users, developers, and researchers of optical sensing, enhancing synergy and paving the way to great applications and ideas.

Main topics

  •  Single photon imaging
  •  Spectroscopy, scientific and medical imaging
  •  Quantum imaging
  •  Image sensor technologies

The workshop will not be limited to CMOS as a sensor technology, but will be fundamentally open to applications, technologies and methods based on advanced optical sensing.

Go to the original article...

Prophesee announces GenX320 low power event sensor for IoT applications

Image Sensors World        Go to the original article...

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-launches-the-world-s-smallest-and-most-power-efficient-event-based-vision-sensor-bringing-more-intelligence-privacy-and-safety-than-ever-to-consumer-edge-ai-devices

Prophesee launches the world’s smallest and most power-efficient event-based vision sensor, bringing more intelligence, privacy and safety than ever to consumer Edge-AI devices

Prophesee’s latest event-based Metavision® sensor - GenX320 - delivers new levels of performance, including ultra-low power, low latency, and high flexibility, for efficient integration in AR/VR, wearables, security and monitoring systems, touch-free interfaces, always-on IoT devices, and many more

October 16, 2023, 2 p.m. CET, PARIS –– Prophesee SA, inventor of the world’s most advanced neuromorphic vision systems, today announced the availability of the GenX320 Event-based Metavision sensor, the industry’s first event-based vision sensor developed specifically for integration into ultra-low-power Edge AI vision devices. The fifth-generation Metavision sensor, available in a tiny 3x4 mm die size, expands the reach of the company’s pioneering technology platform into a vast range of fast-growing intelligent Edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, always-on smart IoT devices, and many more.

The GenX320 event-based vision sensor builds on Prophesee’s track record of proven success and expertise in delivering the speed, low latency, dynamic range, power efficiency, and privacy benefits of event-based vision to a diverse array of applications.

The 320x320, 6.3 µm-pixel BSI stacked event-based vision sensor offers a tiny 1/5-inch optical format. It has been developed with a specific focus on the unique requirements of efficiently integrating event sensing into energy-, compute-, and size-constrained embedded vision systems at the edge. The GenX320 enables robust, high-speed vision at ultra-low power and in challenging operating and lighting conditions.

GenX320 benefits include:

  •  Low-latency, microsecond-resolution timestamping of events with flexible data formatting.
  •  On-chip intelligent power-management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep-sleep and standby modes are also featured.
  •  Easy integration and interfacing with standard SoCs, with multiple integrated event-data pre-processing, filtering, and formatting functions to minimize external processing overhead.
  •  MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.
  •  AI-ready: on-chip histogram output compatible with multiple AI accelerators.
  •  Sensor-level privacy enabled by the event sensor’s sparse, frameless event data with inherent static-scene removal.
  •  Native compatibility with Prophesee Metavision Intelligence, the most comprehensive free event-based vision software suite, used by a fast-growing community of 10,000+ users.
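For context on the data model behind these features: an event sensor emits a sparse stream of (timestamp, x, y, polarity) tuples only where brightness changes, and the "AI-ready histogram output" amounts to binning that stream into a dense, frame-like tensor a conventional accelerator can consume. A minimal host-side sketch of that binning step (not Prophesee's on-chip implementation; names are ours):

```python
import numpy as np

def event_histogram(events, width=320, height=320):
    """Accumulate a sparse event stream into a dense per-polarity histogram.

    events: iterable of (t_us, x, y, polarity) tuples, polarity in {0, 1}
    (brightness decrease / increase). Returns an array of shape
    (2, height, width) counting events per pixel and polarity, i.e. a
    frame-like tensor suitable as input to a downstream CNN accelerator.
    """
    hist = np.zeros((2, height, width), dtype=np.uint16)
    for t_us, x, y, p in events:
        hist[p, y, x] += 1  # timestamp unused in this simple count histogram
    return hist
```

A static scene produces no events at all, so the histogram stays empty; that is the sparse, frameless property the privacy bullet above refers to.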

“The low-power Edge-AI market offers a diverse range of applications where the power efficiency and performance characteristics of event sensors are ideally suited. We have built on our foundation of commercial success in other application areas and developed this new event-based Metavision sensor to address the needs of Edge system developers with a sensor that is easy to integrate, configure and optimize for multiple compelling use cases in motion and object detection, presence awareness, gesture recognition, eye tracking, and other high growth areas,” said Luca Verre, CEO and co-founder of Prophesee.

Specific use case potential

  •  High speed eye-tracking for foveated rendering for seamless interaction in AR/VR/XR headsets
  •  Low latency touch-free human machine interface in consumer devices (TVs, laptops, game consoles, smart home appliances and devices, smart displays and more)
  •  Smart presence detection and people counting in IoT cameras and other devices
  •  Ultra-low power always-on area monitoring systems
  •  Fall detection cameras in homes and health facilities

The GenX320 is available for purchase from Prophesee and its sales partners. It is supported by a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip-on-board (COB) GenX320 module or a compact optical flex module. In addition, Prophesee is offering a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as an STM32 MCU, enabling faster time-to-market.

Early adopters
Zinn Labs
“Zinn Labs is developing the next generation of gaze tracking systems built on the unique capabilities of Prophesee’s Metavision event sensors. The new GenX320 sensor meets the demands of eye and gaze movements that change on millisecond timescales. Unlike traditional video-based gaze tracking pipelines, Zinn Labs is able to leverage the GenX320 sensor to track features of the eye with a fraction of the power and compute required for full-blown computer vision algorithms, bringing the footprint of the gaze tracking system below 20 mW. The small package size of the new sensor makes this the first time an event-based vision sensor can be applied to space-constrained head-mounted applications in AR/VR products. Zinn Labs is happy to be working with Prophesee and the GenX320 sensor as we move towards integrating this new sensor into upcoming customer projects.”
Kevin Boyle, CEO & Founder

“Privacy continues to be one of the biggest consumer concerns when vision-based technology is used in our products such as DMS and TV services. Prophesee’s event-based Metavision technology enables us to take our ‘privacy by design’ principle to an even more secure level by allowing scene understanding without the need to have explicit visual representation of the scene. By capturing only changes in every pixel, rather than the entire scene as with traditional frame-based imaging sensors, our algorithms can derive knowledge to sense what is in the scene, without a detailed representation of it. We have developed a proof-of-concept demo that demonstrates DMS is fully possible using neuromorphic sensors. Using a 1MP neuromorphic sensor we can infer similar performance as an active NIR illumination 2MP vision sensor-based solution. Going forward, we focus on the GenX320 neuromorphic sensor that can be used in privacy sensitive smart devices to improve user experience.”
Petronel Bigioi, Chief Technology Officer

“We have seen the benefits of Prophesee’s event-based sensors in enabling hands-free interaction via highly accurate gesture recognition and hand tracking capabilities in Ultraleap’s TouchFree application. Their ability to operate in challenging environmental conditions, at very efficient power levels, and with low system latency enhances the overall user experience and intuitiveness of our touch-free UIs. With the new GenX320 sensor, these benefits of robustness, low power consumption, latency and high dynamic range can be extended to more types of applications and devices, including battery-operated and small form-factor systems, proliferating hands-free use cases for increased convenience and ease of use in interacting with all sorts of digital content.”
Tom Carter, CEO & Co-founder

Additional coverage on EETimes:


Prophesee’s GenX320 chip, sensor die at the top, processor at the bottom. ESP refers to the digital event signal processing pipeline. (Source: Prophesee)


Go to the original article...

Omnivision’s new sensor for security cameras

Image Sensors World        Go to the original article...

OMNIVISION Announces New 4K2K Resolution Image Sensor for Home and Professional Security Cameras
The OS08C10 is a high-performance 8MP resolution, small-form-factor image sensor with on-chip staggered and DAG HDR technology, designed to produce superb video/image quality in challenging lighting environments
SANTA CLARA, Calif. – October 24, 2023 – OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the new OS08C10, an 8-megapixel (MP) backside illumination (BSI) image sensor that features both staggered high dynamic range (HDR) and single exposure dual analog gain (DAG) for high-performance imaging in challenging lighting conditions. The 1.45-micron (µm) BSI pixel supports 4K2K resolution and high frame rates. It comes in a small 1/2.8-inch optical format, a popular size for home and professional security, IoT and action cameras.
“Our new 1.45 µm pixel OS08C10 image sensor provides improved sensitivity and optimized readout noise, closing the gap with big-pixel image sensors that have traditionally been required for high-performance imaging in the security market,” said Cheney Zhang, senior marketing manager, OMNIVISION. “The OS08C10 supports both staggered HDR and DAG HDR. Staggered HDR extends dynamic range in both bright and low lighting conditions; the addition of built-in DAG provides single-exposure HDR support and reduces motion artifacts. Our new feature-packed sensor supports 4K2K resolution for superior image quality with finer details and enhanced clarity.”
OMNIVISION’s OS08C10 captures real-time 4K video at 60 frames per second (fps) with minimal artifacts. Its selective conversion gain (SCG) pixel design allows the sensor to flexibly select low or high conversion gain, depending on the lighting conditions. The sensor adopts new correlated multi-sampling (CMS) to further reduce readout noise and improve SNR1 and low-light performance. The OS08C10’s on-chip defective pixel correction (DPC) improves quality and reliability above and beyond standard devices by providing real-time correction of defective pixels that can develop over the sensor’s life cycle, especially in harsh operating conditions.
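OMNIVISION does not publish the details of its HDR pipeline. Conceptually, though, both staggered HDR and DAG combine a high-sensitivity capture (good shadows, clipped highlights) with a lower-sensitivity capture (clean highlights). A deliberately naive two-capture merge, with illustrative names and a hard saturation threshold standing in for the real per-pixel weighting, might look like:

```python
import numpy as np

def merge_staggered_hdr(long_exp, short_exp, ratio=16.0, sat=0.9):
    """Naive two-exposure HDR merge.

    long_exp, short_exp: linear-domain images normalized to [0, 1];
    ratio is the exposure (or analog-gain) ratio between them.
    Uses the long exposure where it is below the saturation threshold,
    otherwise falls back to the gain-matched short exposure, extending
    dynamic range by roughly log2(ratio) stops.
    """
    short_scaled = short_exp * ratio   # bring short exposure to the long-exposure scale
    use_short = long_exp >= sat        # pixels clipped in the long exposure
    return np.where(use_short, short_scaled, long_exp)
```

DAG's single-exposure approach reads the same integration period at two analog gains, so the two inputs are temporally aligned and the motion artifacts of a staggered (time-offset) pair are avoided, which is the trade-off the paragraph above describes.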
The OS08C10 is built on OMNIVISION’s PureCel®Plus-S stacked-die technology, enabling high 8 MP resolution with a small 1.45 µm BSI pixel. At 300 mW (60 fps), the OS08C10 achieves the lowest power consumption on the market. OMNIVISION’s OS08C10 is a cost-effective 4K2K solution for security, IoT, and action camera applications.
The OS08C10 is sampling now and will be in mass production in Q1 2024. For more information, contact your OMNIVISION sales representative: www.ovt.com/contact-sales.


Go to the original article...