GalaxyCore announces dual analog gain HDR CIS


Press release: https://en.gcoreinc.com/news/detail-66

GalaxyCore Unveils Industry's First DAG Single-Frame HDR 13-Megapixel CIS

2023.08.11

GalaxyCore has officially launched the industry's first 13-megapixel image sensor with single-frame high dynamic range (HDR) capability – the GC13A2. This groundbreaking 1/3.1", 1.12μm-pixel back-illuminated CIS features GalaxyCore's unique Dual Analog Gain (DAG) circuit architecture, enabling low-power 12-bit HDR output during preview, photography, and video recording. This technology enhances imaging dynamic range for smartphones, tablets, and more, resulting in vividly clear images for users.

The GC13A2 also supports on-chip Global Tone Mapping, which compresses the real-time 12-bit data into a 10-bit output, preserving the HDR effect and expanding compatibility with a wider range of smartphone platforms.
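The release does not say what curve the Global Tone Mapping applies; purely as an illustration of the idea, here is a minimal sketch (assuming a simple global gamma-style curve, my assumption rather than GalaxyCore's actual implementation) of compressing 12-bit HDR data into a 10-bit output:

```python
import numpy as np

def global_tone_map(raw12, gamma=0.6):
    """Compress 12-bit HDR data into 10 bits with one global curve.

    A global tone map applies the same monotonic curve to every pixel,
    so the highlight/shadow separation captured by the HDR readout is
    preserved while the output fits the narrower 10-bit interface.
    (Illustrative curve only; the actual on-chip curve is not public.)
    """
    x = raw12.astype(np.float64) / 4095.0           # normalize 12-bit input
    y = x ** gamma                                  # lift shadows, compress highlights
    return np.round(y * 1023.0).astype(np.uint16)   # quantize to 10 bits

frame = np.random.randint(0, 4096, size=(4, 4))     # stand-in 12-bit data
print(global_tone_map(frame))
```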



High Dynamic Range Technology

Dynamic range refers to the span between the darkest and brightest light levels an image sensor can capture. Traditional image sensors have limited dynamic range, often failing to capture scenes as perceived by the human eye. High dynamic range (HDR) technology emerged as a solution to this issue.


Left image: highlight blowout resulting from narrow dynamic range. Right image: shot with DAG HDR.

Currently, image sensors use multi-frame synthesis techniques to enhance dynamic range:
Photography: capturing 2-3 frames of the same scene with varying exposure times – a shorter exposure to capture highlight details and a longer exposure to supplement shadow details – then combining them to create an image with a wider dynamic range.

Video recording: utilizing multi-frame synthesis, the image sensor alternates between outputting 60 fps long-exposure and short-exposure images, which the platform combines into a 30 fps stream with preserved highlight color and shadow detail (a toy version of this merge is sketched below). While multi-frame synthesis yields noticeable improvements in dynamic range, it significantly increases power consumption, making it unsuitable for prolonged use on devices like smartphones and tablets. Moreover, it tends to produce motion artifacts when capturing moving objects.
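To make the trade-off concrete, here is a toy two-frame merge in the spirit described above (a minimal sketch, not any vendor's actual pipeline): each frame is scaled by its exposure time, and clipped highlights in the long frame are filled from the short one. Anything that moves between the two captures breaks this alignment, which is exactly the motion-artifact problem.

```python
import numpy as np

def merge_two_exposures(short, long_, t_short, t_long, sat=4095):
    """Toy multi-frame HDR merge (illustrative only).

    Dividing each frame by its exposure time estimates scene radiance;
    pixels saturated in the long exposure fall back on the short one,
    which still holds highlight detail.
    """
    radiance = long_.astype(np.float64) / t_long    # low-noise shadow estimate
    blown = long_ >= sat                            # highlights clip in long frame
    radiance[blown] = short[blown].astype(np.float64) / t_short
    return radiance

rng = np.random.default_rng(0)
scene = rng.uniform(0, 60000, (4, 4))               # "true" radiance
short = np.clip(scene * 0.001, 0, 4095)             # 1 ms exposure
long_ = np.clip(scene * 0.016, 0, 4095)             # 16 ms exposure
print(merge_two_exposures(short, long_, 0.001, 0.016))
```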



Left image: shot with multi-frame HDR (motion artifact). Right image: shot with DAG HDR.

GalaxyCore's Patented DAG HDR Technology

GalaxyCore's DAG HDR technology, based on single-frame imaging, employs high analog gain in shadow regions for improved clarity and texture, while low analog gain is used in highlight regions to prevent overexposure and preserve detail. Compared to traditional multi-frame HDR, DAG HDR not only increases dynamic range and mitigates artifact issues but also addresses the power consumption problem associated with multi-frame synthesis: in photography, for instance, scenes that previously required 3-frame synthesis need roughly half as many frames with DAG HDR.
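The press release gives no circuit detail beyond the dual-gain readout, but the signal-level idea can be sketched as follows (the gain ratio and saturation level are my assumptions). Both gain paths read the same exposure, so there is nothing to misalign:

```python
import numpy as np

def merge_dual_gain(low_gain, high_gain, gain_ratio=8.0, sat=4095):
    """Toy single-frame dual-analog-gain merge (illustrative only).

    The high-gain path gives cleaner shadows (readout noise is divided
    by the gain when referred to the input); where it saturates, the
    low-gain path still holds the highlights. One exposure, two gains:
    no motion artifact by construction.
    """
    out = high_gain.astype(np.float64) / gain_ratio   # back to a common scale
    clipped = high_gain >= sat                        # highlights clip at high gain
    out[clipped] = low_gain[clipped].astype(np.float64)
    return out                                        # extended-DR linear signal
```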

Left image: traditional HDR photography. Right image: DAG HDR photography.

GC13A2 Empowers Imaging Excellence with HDR


Empowered by DAG HDR, the GC13A2 is capable of low-power 12-bit HDR image output and 4K 30 fps video capture. It reduces the need for frame synthesis during photography and lowers HDR video recording power consumption by approximately 30%, while avoiding the distortion caused by motion artifacts.

Compared to other image sensors of the same specifications in the industry, the GC13A2 supports real-time HDR previewing, allowing users to directly observe every frame's details while shooting. This provides consumers with an enhanced shooting experience.

The GC13A2 has already passed initial verification by brand customers and is set to enter mass production. In the future, GalaxyCore will introduce a series of high-resolution DAG single-frame HDR products, including 32-megapixel and 50-megapixel variants, further enhancing GalaxyCore's high-performance product lineup and promoting superior imaging quality and an enhanced user experience for smartphones.


OmniVision three-layer stacked sensor


From Businesswire --- "OMNIVISION Announces World’s Smallest Global Shutter Image Sensor for AR/VR/MR and Metaverse".

OmniVision has announced the industry's first and only three-layer stacked BSI global shutter (GS) image sensor. The OG0TB is the world's smallest image sensor for eye and face tracking in AR/VR/MR and Metaverse consumer devices: with a package size of just 1.64mm x 1.64mm, it has a 2.2µm pixel in a 1/14.46-inch optical format (OF). The CMOS image sensor features 400x400 resolution and ultra-low power consumption, ideal for some of the smallest and lightest battery-powered wearables, such as eye goggles and glasses. Ultra-low power consumption is critical for these battery-powered devices, which can have 10 or more cameras per system; the OG0TB BSI GS image sensor consumes less than 7.2mW at 30 frames per second (fps).



SANTA CLARA, Calif.--(BUSINESS WIRE)--OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the industry's first and only three-layer stacked BSI global shutter (GS) image sensor. The OG0TB is the world's smallest image sensor for eye and face tracking in AR/VR/MR and Metaverse consumer devices: with a package size of just 1.64mm x 1.64mm, it has a 2.2µm pixel in a 1/14.46-inch optical format (OF). The CMOS image sensor features 400x400 resolution and ultra-low power consumption, ideal for some of the smallest and lightest battery-powered wearables, such as eye goggles and glasses.

“OMNIVISION is leading the industry by developing the world’s first three-layer stacked global shutter pixel technology and implementing it in the smallest GS image sensor with uncompromising performance,” said David Shin, staff product marketing manager – IoT/Emerging at OMNIVISION. “We pack all of these features and functions into the world’s smallest ‘ready-to-go’ image sensor, which provides design flexibility to put the camera in the most ideal placement on some of the smallest and slimmest wearable devices.” Shin adds, “Ultra-low power consumption is critical for these battery-powered devices, which can have 10 or more cameras per system. Our OG0TB BSI GS image sensor consumes less than 7.2mW at 30 frames per second (fps).”

The worldwide market for AR/VR headsets grew 92.1% year over year in 2021, with shipments reaching 11.2 million units, according to new data from the International Data Corporation (IDC) Worldwide Quarterly AR/VR Headset Tracker. New entrants as well as broader adoption from the commercial sector will propel the market further as headset shipments are forecast to grow 46.9% year over year in 2022. In fact, IDC expects this market to experience double-digit growth through 2026 as global shipments of AR/VR headsets surpass 50 million units by the end of the forecast, with a 35.1% compound annual growth rate (CAGR).

OMNIVISION is supporting the growing market for AR/VR headsets by introducing new products such as the OG0TB GS image sensor, which features the company’s most advanced technology:

 It is built on OMNIVISION’s PureCel®Plus-S stacked-die technology.

 It features a three-layer stacked sensor with a 2.2µm pixel in a 1/14.46-inch OF to achieve 400x400 resolution.

 Nyxel® technology enables the best quantum efficiency (QE) at the 940nm NIR wavelength for sharp, accurate images of moving objects.

 The sensor’s high modulation transfer function (MTF) enables sharper images with greater contrast and more detail, which is especially important for enhancing decision-making processes in machine vision applications.

 The sensor supports a flexible interface, including MIPI with multi-drop, C-PHY, SPI, etc.

The OG0TB GS image sensor will be available for sampling in Q3 2022 and in mass production in 2H 2023.


PS: It is worth noting that Sony claimed the "world's first 3-layer stacked CIS" back in 2017 with their ISSCC paper titled "A 1/2.3inch 20Mpixel 3-layer stacked CMOS Image Sensor with DRAM" (DOI: 10.1109/ISSCC.2017.7870268). The three layers consisted of photodiodes, DRAM memory, and a mixed-signal ISP. But that was a rolling shutter sensor, whereas this one from OmniVision is a global shutter sensor.

PPS: Readers of the blog who know of any journal or conference publications about OmniVision's new design, please share them in the comments below!



Review of indirect time-of-flight 3D cameras (IEEE TED June 2022)


C. Bamji et al. from Microsoft published a paper titled "A Review of Indirect Time-of-Flight Technologies" in IEEE Trans. Electron Devices (June 2022).

Abstract: Indirect time-of-flight (iToF) cameras operate by illuminating a scene with modulated light and inferring depth at each pixel by combining the back-reflected light with different gating signals. This article focuses on amplitude-modulated continuous-wave (AMCW) time-of-flight (ToF), which, because of its robustness and stability properties, is the most common form of iToF. The figures of merit that drive iToF performance are explained and plotted, and system parameters that drive a camera’s final performance are summarized. Different iToF pixel and chip architectures are compared and the basic phasor methods for extracting depth from the pixel output values are explained. The evolution of pixel size is discussed, showing performance improvement over time. Depth pipelines, which play a key role in filtering and enhancing data, have also greatly improved over time with sophisticated denoising methods now available. Key remaining challenges, such as ambient light resilience and multipath invariance, are explained, and state-of-the-art mitigation techniques are referenced. Finally, applications, use cases, and benefits of iToF are listed.
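As a concrete anchor for the "basic phasor methods" the abstract mentions, here is the textbook four-tap AMCW depth estimate (conventions vary between sensors; this is the standard form, not necessarily the exact one used in the paper):

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def amcw_depth(c0, c90, c180, c270, f_mod):
    """Four-tap AMCW phasor method: each tap correlates the returned
    light with a gating signal shifted by 0/90/180/270 degrees, so
    c_k ~ offset + amp * cos(phi - theta_k)."""
    phase = np.arctan2(c90 - c270, c0 - c180) % (2 * np.pi)
    depth = C * phase / (4 * np.pi * f_mod)        # factor 2 for the round trip
    amplitude = 0.5 * np.hypot(c90 - c270, c0 - c180)
    return depth, amplitude

# A target at 1.2 m with 100 MHz modulation (unambiguous range c/2f ~ 1.5 m):
f_mod, d_true = 100e6, 1.2
phi = 4 * np.pi * f_mod * d_true / C
taps = [1000 + 500 * np.cos(phi - k * np.pi / 2) for k in range(4)]
print(amcw_depth(*taps, f_mod))   # ~ (1.2, 500.0)
```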



Use of time gates to integrate returning light


iToF camera measurement


Modulation contrast vs. modulation frequency used in iToF cameras


Trend of pixel sizes since 2012

Trend of pixel array sizes since 2012

Trend of near infrared pixel quantum efficiencies since 2010


Multigain column readout


Multipath mitigation

DOI link: 10.1109/TED.2022.3145762


High resolution ToF module from Analog Devices


Analog Devices has released the ADTF3175, an industrial-grade megapixel ToF module, and the ADSD3030, a VGA-resolution sensor that aims to bring high-accuracy ToF technology to the most compact VGA footprint.


The ADTF3175 is a complete Time-of-Flight (ToF) module for high resolution 3D depth sensing and vision systems. Based on the ADSD3100, a 1 Megapixel CMOS indirect Time-of-Flight (iToF) imager, the ADTF3175 also integrates the lens and optical bandpass filter for the imager, an infrared illumination source containing optics, laser diode, laser diode driver and photodetector, a flash memory, and power regulators to generate local supply voltages. The module is fully calibrated at multiple range and resolution modes. To complete the depth sensing system, the raw image data from the ADTF3175 is processed externally by the host system processor or depth ISP.

The ADTF3175 image data output interfaces electrically to the host system over a 4-lane mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) Tx interface. The module programming and operation are controlled through 4-wire SPI and I2C serial interfaces.

The ADTF3175 has module dimensions of 42mm × 31mm × 15.1mm, and is specified over an operating temperature range of -20°C to 65°C.

Applications:
Machine vision systems
Robotics
Building automation
Augmented reality (AR) systems

Price:
$197 in 1,000 Unit Quantities




The ADSD3030 is a CMOS time-of-flight (ToF)-based 3D depth and 2D visible-light imager available for integration into 3D sensor systems. The functional blocks required for readout, which include analog-to-digital converters (ADCs), amplifiers, pixel biasing circuitry, and sensor control logic, are built into the chip to enable a cost-effective and simple implementation in systems.

The ADSD3030 interfaces electrically to a host system over a mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) interface. A lens plus optical band-pass filter for the imager and an infrared light source plus an associated driver are required to complete the working subsystem.

Applications:
Smartphones
Augmented reality (AR) and virtual reality (VR)
Machine vision systems (logistics and inventory)
Robotics (consumer and industrial)



Samsung’s ISOCELL HP3 sensor


Samsung has published details about its new 200MP sensor, the ISOCELL HP3.

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-hp3/

Press release: https://news.samsung.com/global/samsung-unveils-isocell-image-sensor-with-industrys-smallest-0-56%CE%BCm-pixel


Samsung Electronics, a world leader in advanced semiconductor technology, today introduced the 200MP ISOCELL HP3, the image sensor with the industry's smallest 0.56-micrometer (μm) pixels.

“Samsung has continuously led the image sensor market trend through its technology leadership in high resolution sensors with the smallest pixels,” said JoonSeo Yim, Executive Vice President of Sensor Business Team at Samsung Electronics. “With our latest and upgraded 0.56μm 200MP ISOCELL HP3, Samsung will push on to deliver epic resolutions beyond professional levels for smartphone camera users.”

Epic Resolution Beyond Pro Levels

Since its first 108MP image sensor roll-out in 2019, Samsung has been leading the trend of next-generation, ultra-high-resolution camera development. Through the steady launch of new image sensors and advancements in performance, the company is once again forging ahead with the 0.56μm 200MP ISOCELL HP3.

The ISOCELL HP3, with a 12 percent smaller pixel size than the predecessor’s 0.64μm, packs 200 million pixels in a 1/1.4” optical format, which is the diameter of the area that is captured through the camera lens. This means that the ISOCELL HP3 can enable an approximately 20 percent reduction in camera module surface area, allowing smartphone manufacturers to keep their premium devices slim.

The ISOCELL HP3 comes with a Super QPD auto-focusing solution, meaning that all of the sensor's pixels are equipped with auto-focusing capabilities. In addition, Super QPD uses a single lens over four adjacent pixels to detect the phase differences in both horizontal and vertical directions. This paves the way for more accurate and quicker auto-focusing for smartphone camera users.

The sensor also allows users to take videos in 8K at 30 frames-per-second (fps) or 4K at 120fps, with minimal loss in the field of view when taking 8K videos. Combined with the Super QPD solution, users can take movie-like cinematic footage with their mobile devices.

Ultimate Low Light Experience Through ‘Tetra2pixel’

The ISOCELL HP3 also provides an ultimate low-light experience with its Tetra2pixel technology, which combines four pixels into one to transform the 0.56μm 200MP sensor into a 1.12μm 50MP sensor, or into a 12.5MP sensor with 2.24μm pixels by combining 16 pixels into one. The technology enables the sensor to simulate a large-pixel sensor to take brighter and more vibrant shots even in dim environments, such as indoors or at night.
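At the signal level, Tetra2pixel is pixel binning; a rough numerical sketch of the trade (assumed simple summation, toy numbers, not Samsung's on-chip implementation):

```python
import numpy as np

def bin_pixels(raw, k):
    """Combine k x k same-color neighbors into one larger effective pixel.

    Illustrative stand-in for Tetra2pixel: 2x2 binning turns the
    0.56um/200MP array into a 1.12um/50MP one, 4x4 into 2.24um/12.5MP.
    Summing k*k shot-noise-limited pixels improves SNR by a factor of k.
    """
    h, w = raw.shape
    return raw.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

dim = np.random.poisson(5, (8, 8))     # photon-starved scene
print(bin_pixels(dim, 2).shape)        # (4, 4): four pixels -> one
print(bin_pixels(dim, 4).shape)        # (2, 2): sixteen pixels -> one
```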

To maximize the dynamic range of the mobile image sensor, the ISOCELL HP3 adopts an improved Smart-ISO Pro feature. The technology merges image information made from the two conversion gains of Low and High ISO mode to create HDR images. The upgraded version of the technology comes with a triple ISO mode — Low, Mid and High — that further widens the sensor’s dynamic range. In addition, the improved Smart-ISO Pro enables the sensor to express images in over 4 trillion colors (14-bit color depth), 64 times more colors than the predecessor’s 68 billion (12-bit). Furthermore, by supporting staggered HDR along with Smart-ISO Pro, the ISOCELL HP3 can switch between the two solutions depending on the filming environment to produce high-quality HDR images.
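The color counts quoted here are straight bit-depth arithmetic, worth a quick check:

```python
def colors(bits_per_channel):
    """Distinct colors with the given bit depth per RGB channel."""
    return (2 ** bits_per_channel) ** 3

print(f"{colors(12):.3g}")       # ~6.87e+10 -> the "68 billion" (12-bit)
print(f"{colors(14):.3g}")       # ~4.4e+12  -> the "4 trillion" (14-bit)
print(colors(14) // colors(12))  # 64 -> "64 times more colors"
```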

Samples of the Samsung ISOCELL HP3 are currently available, and mass production is set to begin this year.


Effective Resolution: 16,320 x 12,288 (200 MP)

Pixel Size: 0.56μm

Optical Format: 1/1.4"

Color Filter: Super QPD Tetra2pixel, RGB Bayer pattern

Normal Frame Rate: 7.5 fps @ full 200 MP, 27 fps @ 50 MP, 120 fps @ 12.5 MP

Video Frame Rate: 30 fps @ 8K, 120 fps @ 4K, 480 fps @ FHD

Shutter Type: electronic rolling shutter

ADC Accuracy: 10-bit

Supply Voltage: 2.2 V analog, 1.8 V I/O, 0.9 V digital core

Operating Temperature: -20℃ to +85℃

Interface: 4-lane D-PHY (2.5 Gbps per lane) / 3-lane C-PHY (4.0 Gsps per lane)

Chroma: Tetra2pixel

Auto Focus: Super QPD

HDR: Smart-ISO Pro (iDCG), staggered HDR

Output Formats: RAW10/12/14

Analog Gain: x128 with high conversion gain

Product Status: samples available


Preprint on unconventional cameras for automotive applications


From arXiv.org --- You Li et al. write:

Autonomous vehicles rely on perception systems to understand their surroundings for further navigation missions. Cameras are essential for perception systems due to the advantages in object detection and recognition provided by modern computer vision algorithms, compared to other sensors such as LiDARs and radars. However, limited by its inherent imaging principle, a standard RGB camera may perform poorly in a variety of adverse scenarios, including but not limited to low illumination, high contrast, and bad weather such as fog, rain, or snow. Meanwhile, estimating 3D information from 2D image detections is generally more difficult than with LiDARs or radars. Several new sensing technologies have emerged in recent years to address the limitations of conventional RGB cameras. In this paper, we review the principles of four novel image sensors: infrared cameras, range-gated cameras, polarization cameras, and event cameras. Their comparative advantages, existing or potential applications, and corresponding data processing algorithms are all presented in a systematic manner. We expect that this study will assist practitioners in the autonomous driving community with new perspectives and insights.









Photonics magazine article on Pi Imaging SPAD array


Photonics magazine has a new article about Pi Imaging Technology's high resolution SPAD sensor array; some excerpts below.


As the performance capabilities and sophistication of these detectors have expanded, so too have their value and impact in applications ranging from astronomy to the life sciences.

As their name implies, single-photon avalanche diodes (SPADs) detect single particles of light, and they do so with picosecond precision. Single-pixel SPADs have found wide use in astronomy, flow cytometry, fluorescence lifetime imaging microscopy (FLIM), particle sizing, quantum computing, quantum key distribution, and single-molecule detection. Over the last 10 years, however, SPAD technology has evolved through the use of standard complementary metal-oxide-semiconductor (CMOS) technology. This paved the way for arrays and image sensor architectures that could increase the number of SPAD pixels in a compact and scalable way.

Compared to single-pixel SPADs, arrays offer improved spatial resolution and signal-to-noise ratio (SNR). In confocal microscopy applications, for example, each pixel in an array acts as a virtual small pinhole with good lateral and axial resolution, while multiple pixels collect the signal of a virtual large pinhole.


Challenges: 

Early SPADs produced as single-point detectors in custom processes offered poor scalability. In 2003, researchers started using standard CMOS technology to build SPAD arrays. This change in design and production platform opened up the possibility of reliably producing high-pixel-count SPAD detectors, as well as inventing and integrating new pixel circuitry for quenching and recharging, time tagging, and photon-counting functions. Data handling in these devices ranged from simple SPAD pulse outputting to full digital signal processing.
Close collaboration between SPAD developers and CMOS fabs, however, has helped SPAD technology overcome many of its sensitivity and noise challenges through the addition of SPAD-specific layers to the semiconductor process flow, design innovations in SPAD guard rings, and enhanced fill factors made possible by microlenses.


Applications:

Research on SPADs also focused on the technology’s potential in biomedical applications, such as Raman spectroscopy, FLIM, and positron emission tomography (PET).

FLIM [fluorescence lifetime imaging microscopy] benefits from the use of SPAD arrays, which allow faster imaging speeds by increasing the sustainable count rate via pixel parallelization. SPAD image sensors enhanced with time-gating functions can further expand the implementation of FLIM to nonconfocal microscopic modalities and thus establish FLIM in a broader range of potential applications, such as spatial multiplexed applications in a variety of biological disciplines including genomics, proteomics, and other “-omics” fields.

One additional application where SPAD technology is forging performance enhancements is high-speed imaging, in which image sensors typically suffer from low SNR. The shorter integration times in these operations lead to lower photon collection and pixel blur, while the faster readout speeds increase noise in the collected image. SPAD image sensors fully eliminate this noise to offer Poisson-maximized SNR. 
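The figures below compare exactly this regime; the underlying arithmetic is simple enough to sketch, using the figures' stated parameters (80% sensitivity with 10 e− equivalent readout noise for the photodiode, 50% sensitivity for the SPAD):

```python
import numpy as np

def snr_photodiode(n_photons, qe=0.8, read_noise=10.0):
    """Shot noise plus readout noise, as in the comparison figures below."""
    signal = qe * n_photons
    return signal / np.sqrt(signal + read_noise ** 2)

def snr_spad(n_photons, pdp=0.5):
    """Noise-free readout: Poisson-limited counting at 50% sensitivity."""
    return np.sqrt(pdp * n_photons)

for n in (10, 100, 1000):
    print(n, round(float(snr_photodiode(n)), 2), round(float(snr_spad(n)), 2))
# 10 photons:   0.77 vs 2.24 -- the SPAD wins decisively in starved, fast readout
# 1000 photons: 26.67 vs 22.36 -- the higher-QE photodiode catches up at high flux
```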






A signal-to-noise ratio (SNR) comparison between a SPAD with 50% sensitivity and a typical photodiode with 80% sensitivity and an equivalent readout noise of 10 e− (representative only for high-speed readout mode). Courtesy of Pi Imaging Technology.





A demonstration of SNR differences between a typical photodiode with 80% sensitivity and 10 e− equivalent readout noise (representative only for high-speed readout mode) (top) and a SPAD with 50% sensitivity (bottom), both at an average of 10 impinging photons. Courtesy of Pi Imaging Technology.


A SPAD array system implementation for image-scanning microscopy applications.



A fluorescence lifetime imaging microscopy (FLIM) image of mouse embryo tissue recorded with a single-photon-counting confocal microscope. Courtesy of PicoQuant.




About Pi Imaging:

Pi Imaging Technology is fundamentally changing the way we detect light. We do that by creating photon-counting arrays with the highest sensitivity and lowest noise.

We enable our partners to introduce innovative products. The end-users of these products perform cutting-edge science, develop better products and services in life science and quantum information.

Pi Imaging Technology bases its technology on 7 years of dedicated work at TU Delft and EPFL and 6 patent applications. At its core is a single-photon avalanche diode (SPAD) designed in standard semiconductor technology. This enables our photon-counting arrays to have an unlimited number of pixels and adaptable architectures.


Full article here: https://www.photonics.com/Articles/Single-Photon_Avalanche_Diodes_Sharpen_Spatial/p5/vo211/i1358/a67902


PhD Thesis on Analog Signal Processing for CMOS Image Sensors


The very first PhD thesis that came out of Albert Theuwissen's group at TU Delft is now freely available as a pdf. This seems like a great educational resource for people interested in image sensors.

Direct download link: https://repository.tudelft.nl/islandora/object/uuid:2fbc1f51-7784-4bcd-85ab-70fc193c5ce9/datastream/OBJ/download

Title: Analog Signal Processing for CMOS Image Sensors
Author: Martijn Snoeij
Year: 2007

Abstract: 
This thesis describes the development of low-noise, power-efficient analog interface circuitry for CMOS image sensors. It focuses on improving two aspects of the interface circuitry: firstly, lowering the noise in the front-end readout circuit, and secondly, realizing more power-efficient analog-to-digital converters (ADCs) capable of reading out high-resolution imaging arrays.

Chapter 2 provides an overview of the analog signal processing chain in conventional, commercially available CMOS imagers. First of all, the different photo-sensitive elements that form the input to the analog signal chain are briefly discussed. This is followed by a discussion of the analog signal processing chain itself, which is divided into two parts. Firstly, the analog front-end, consisting of in-pixel circuitry and column-level circuitry, is discussed. Secondly, the analog back-end, consisting of variable-gain amplification and A/D conversion, is discussed. Finally, a brief overview of advanced readout circuit techniques is provided.

In chapter 3, the performance of the analog front-end is analyzed in detail. It is shown that its noise performance is the most important parameter of the front-end. An overview of front-end noise sources is given and their relative importance is discussed. It will be shown that 1/f noise is the limiting noise source in current CMOS imagers. A relatively unknown 1/f noise reduction technique, called switched-biasing or large signal excitation (LSE), is introduced and its applicability to CMOS imagers is explored. Measurement results on this 1/f noise reduction technique are presented. Finally, at the end of the chapter, a preliminary conclusion on CMOS imager noise performance is presented. 

The main function of the back-end analog signal chain is analog-to-digital conversion, which is described in chapter 4. First of all, the conventional approach of a single chip-level ADC is compared to a massively parallel, column-level ADC, and the advantages of the latter are shown. Next, the existing column-level ADC architectures are briefly discussed, in particular the column-parallel single-slope ADC. Furthermore, a new architecture, the multiple-ramp single-slope ADC, is proposed. Finally, two circuit techniques are introduced that can improve ADC performance. Firstly, it is shown that the presence of photon shot noise in an imager can be used to significantly decrease ADC power consumption (a small numerical illustration follows below). Secondly, a column FPN reduction technique, called Dynamic Column Switching (DCS), is introduced.
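A small numerical illustration of that shot-noise idea (my numbers, not the thesis's): if the single-slope ramp's step size is allowed to grow as the square root of the signal, so that quantization noise always stays below the shot noise, the ramp needs far fewer steps (and therefore less time and power) than a uniform ramp covering the same full well.

```python
import numpy as np

FULL_WELL = 4096      # electrons (illustrative)
LSB_DARK = 2.0        # allowed quantization step at zero signal, electrons

# Uniform single-slope ramp: one comparison per LSB over the whole range.
uniform_steps = int(FULL_WELL / LSB_DARK)

# Shot-noise-aware ramp: step grows ~ sqrt(signal), staying well under the
# shot noise so quantization never dominates the total noise.
levels, s = 0, 0.0
while s < FULL_WELL:
    s += max(LSB_DARK, 0.5 * np.sqrt(s))
    levels += 1

print(uniform_steps, levels)   # 2048 vs roughly 250 ramp steps
```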

Chapters 5 and 6 present two realizations of imagers with column-level ADCs. In chapter 5, a CMOS imager with a single-slope ADC is presented that consumes only 3.2µW per column. The circuit details of the comparator achieving this low power consumption are described, as well as the digital column circuitry. The ADC uses the dynamic column switching technique introduced in chapter 4 to reduce the perceptual effects of column FPN. Chapter 6 presents an imager with the multiple-ramp single-slope architecture proposed in chapter 4. The column comparator used in this design is taken from a commercially available CMOS imager. The multiple ramps are generated on chip with a low-power ladder DAC structure. The ADC uses an auto-calibration scheme to compensate for offset and delay of the ramp drivers.


Harvest Imaging Forum 2022 is open for registration!


After seven years of Harvest Imaging forums, an eighth one will be organized on June 23 & 24, 2022 in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to digital imaging. For well-known reasons, the 2022 edition of the forum will be organized in hybrid form:

You can attend in person and benefit fully from live interaction with the speakers and audience,

There will also be a live broadcast of the forum; interaction with the speakers through a chat box will still be possible,

Finally, the forum can also be watched online at a later date.

The 2022 Harvest Imaging forum will deal with two subjects in the field of solid-state imaging, presented by two speakers. Both speakers are world-level experts in their own fields.


"Dark current, dim points and bright spots : coming to the dark side of image sensors"

Dr. Daniel McGrath (GOODiX, USA)

Abstract:

Charge-generating defects sit at the intersection of physics, material properties, manufacturing processes, and image science. At a time when pixels have shrunk to dimensions comparable to the wavelength of light and noise performance is approaching photon counting, processes that produce erroneous signals in the dark have come to limit image sensor performance. The reduction of dark current over the last decades has been a success story, but it has brought the industry to a point where the path for further improvement is not clear.

The aim of this forum is to provide a feet-on-the-ground exploration of the nature of dark current and of bright defects in image sensors. It will start with a discussion of the nature of both, with their individual challenges, and a timeline of the developments that have brought the technology to its present state. It will discuss the challenge and opportunity provided by the extreme sensitivity of the pixel, a curse and a blessing for understanding. It will traverse the physics and material issues related to spontaneous charge generation in semiconductors. It will take time to ponder gettering, passivation and radiation effects. It will try to provide a path through the tangle of manufacturing's mysteries and challenges. The goal is to climb to the present precipice, there to consider options that can take the technology to the next advance.

Bio:

Dan McGrath has worked for 40 years specializing in the device physics of silicon-based pixels, CCD and CIS, and in the integration of image-sensor process enhancements in the manufacturing flow. He chose his first job because it offered that "studying defects in image sensors means doing physics" and has kept this passion front-and-center in his work. After obtaining his doctorate from The Johns Hopkins University, he pursued this work at Texas Instruments, Polaroid, Atmel, Eastman Kodak, Aptina and BAE Systems. He has worked with manufacturing facilities in France, Italy, Taiwan, and the USA. In 2019 he joined GOODiX Technology, a supplier to the cell phone and IoT market. He has held organizational positions in the Semiconductor Interface Specialists Conference, the International Solid-State Circuits Conference, the International Electron Devices Meeting and the International Image Sensor Workshop. He has made presentations on dark current at ESSDERC, Electronic Imaging and the International Image Sensor Workshop. His publications include the first megapixel CCD and the basis for dark current spectroscopy (DCS).


"Random Telegraph Signal and Radiation Induced Defects in CMOS Image Sensors"

Dr. Vincent Goiffon (ISAE-SUPAERO, Fr)

Abstract:

CMOS Image Sensors (CIS) are by far the main solid-state image sensor technology in 2021. Each and every year, this technology comes closer to the ideal visible imaging device, with near-100% peak quantum efficiency, sub-electron readout noise and ultra-low dark current (< 1 e-/s) at room temperature. In such near-perfect pixel arrays, the appearance of a single defect can seriously jeopardize the pixel's function. Oxide/silicon interface and silicon bulk defects can remain after manufacturing or can be introduced by aging or by exposure to particle radiation. This latter source of performance degradation limits the use of commercial "unhardened" solid-state sensors in a wide range of key applications such as medical imaging, space exploration, nuclear power plant safety, electron microscopy, particle physics and nuclear fusion instrumentation.

The aim of this forum is to explore the influence of semiconductor defects on CIS performance through the magnifying glass of radiation damage. In the first part, a review of radiation effects on CIS will be provided alongside the main mitigation techniques (so-called radiation hardening by design, or RHBD, techniques). The trade-off between radiation hardening and performance will be discussed for selected applications. This first part has a double objective: 1) to provide image sensor professionals with the background to anticipate and improve the radiation hardness of their sensors in radiation environments, and 2) to give a different perspective on parasitic physical mechanisms that can be observed in as-fabricated sensors, such as hot pixels and charge transfer inefficiency.

The second part will focus on Random Telegraph Signals (RTS) in image sensors, a defect-related phenomenon of growing importance in advanced technologies. The fundamental differences between the two main RTS types in imagers – MOSFET channel RTS, also called RTN, and Dark Current RTS (DC-RTS) – will be presented. As in the first part, radiation damage will be used to clarify the mysterious origin of DC-RTS. The discussion will conclude with an opening towards the similarities in RTS mechanisms between CIS and other image sensor technologies (e.g., SPADs and infrared detectors) and integrated circuits (DRAM).


Bio:

Vincent Goiffon received his Ph.D. in EE from the University of Toulouse in 2008. The same year he joined the ISAE-SUPAERO Image Sensor Research group as Associate Professor and he has been a Full Professor of Electronics at the Institute since 2018.

He has contributed to advance the understanding of radiation effects on solid-state image sensors, notably by identifying original degradation mechanisms in pinned photodiode pixels and by clarifying the role of interface and bulk defects in the mysterious dark current random telegraph signal phenomenon.

Besides his contributions to various space R&D projects, Vincent has been leading the development of radiation hardened CMOS image sensors (CIS) and cameras for nuclear fusion experiments (e.g. ITER and CEA Laser MegaJoule) and nuclear power plant safety. Vincent recently became the head of the Image Sensor Group of ISAE-SUPAERO.

Vincent Goiffon is the author of one book chapter and more than 90 scientific publications, which have earned more than 10 conference awards at IEEE NSREC, RADECS and IISW.

He has been an associate editor of the IEEE Transactions on Nuclear Science since 2017 and has served the community as reviewer and session chair.


Register here: https://www.harvestimaging.com/forum_introduction_2021_new.php


Better Piezoelectric Light Modulators for AMCW Time-of-Flight Cameras


A team from Stanford University's Laboratory for Integrated Nano-Quantum Systems (LINQS) and the Arbabian Lab presents a new method that can potentially convert any conventional CMOS image sensor into an amplitude-modulated continuous-wave time-of-flight camera. The paper, titled "Longitudinal piezoelectric resonant photoelastic modulator for efficient intensity modulation at megahertz frequencies," appeared in Nature Communications.

Intensity modulators are an essential component in optics for controlling free-space beams. Many applications require the intensity of a free-space beam to be modulated at a single frequency, including wide-field lock-in detection for sensitive measurements, mode-locking in lasers, and phase-shift time-of-flight imaging (LiDAR). Here, we report a new type of single frequency intensity modulator that we refer to as a longitudinal piezoelectric resonant photoelastic modulator. The modulator consists of a thin lithium niobate wafer coated with transparent surface electrodes. One of the fundamental acoustic modes of the modulator is excited through the surface electrodes, confining an acoustic standing wave to the electrode region. The modulator is placed between optical polarizers; light propagating through the modulator and polarizers is intensity modulated with a wide acceptance angle and record breaking modulation efficiency in the megahertz frequency regime. As an illustration of the potential of our approach, we show that the proposed modulator can be integrated with a standard image sensor to effectively convert it into a time-of-flight imaging system.



a) A Y-cut lithium niobate wafer of diameter 50.8 mm and of thickness 0.5 mm is coated on top and bottom surfaces with electrodes having a diameter of 12.7 mm. The wafer is excited with an RF source through the top and bottom electrodes. b) Simulated ∣s11∣ of the wafer with respect to 50 Ω, showing the resonances corresponding to different acoustic modes of the wafer (loss was added to lithium niobate to make it consistent with experimental results). The desired acoustic mode appears around 3.77 MHz and is highlighted in blue. c) The desired acoustic mode ∣s11∣ with respect to 50 Ω is shown in more detail. d) The dominant strain distribution (Syz) when the wafer is excited at 3.7696 MHz with 2 Vpp is shown for the center of the wafer. This strain distribution corresponds to the ∣s11∣ resonance shown in (c). e) The variation in Syz parallel to the wafer normal and centered along the wafer is shown when the wafer is excited at 3.7696 MHz with 2 Vpp.



a) Schematic of the characterization setup is shown. The setup includes a laser (L) with a wavelength of 532 nm that is intensity-modulated at 3.733704 MHz, an aperture (A) with a diameter of 1 cm, a neutral density filter (N), two polarizers (P) with transmission axis t^ = (a^x + a^z)/sqrt(2), the wafer (W), and a standard CMOS camera (C). The wafer is excited with 90 mW of RF power at fr = 3.7337 MHz, and the laser beam passes through the center of the wafer, which is coated with ITO. The camera detects the intensity-modulated laser beam. b) The desired acoustic mode is found for the modulator by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power and a bandwidth of 100 Hz. The desired acoustic mode is highlighted in blue. c) The desired acoustic mode is shown in more detail by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power with a bandwidth of 20 Hz. d) The fabricated modulator is shown. e) The depth of intensity modulation is plotted for different angles of incidence for the laser beam (averaged across all the pixels), where ϕ is the angle between the surface normal of the wafer and the beam direction k^ (see "Methods" for more details). Error bars represent the standard deviation of the depth of intensity modulation across the pixels. f) Time-averaged intensity profile of the laser beam detected by the camera is shown for ϕ = 0. g) The DoM at 4 Hz of the laser beam is shown per pixel for ϕ = 0. h) The phase of intensity modulation at 4 Hz of the laser beam is shown per pixel for ϕ = 0.


a) Schematic of the imaging setup is shown. The setup includes a standard CMOS camera (C), camera lens (CL), two polarizers (P) with transmission axis t^=(a^x+a^z)/sqrt(2), wafer (W), aperture (A) with a diameter of 4 mm, laser (L) with a wavelength of 635 nm that is intensity-modulated at 3.733702 MHz, and two metallic targets (T1 and T2) placed 1.09 m and 1.95 m away from the imaging system, respectively. For the experiment, 140 mW of RF power at fr = 3.7337 MHz is used to excite the wafer electrodes. The laser is used for illuminating the targets. The camera detects the reflected laser beam from the two targets, and uses the 2 Hz beat tone to extract the distance of each pixel corresponding to a distinct point in the scene (see "Methods" for more details). b) Bird's eye view of the schematic in (a). c) Reconstructed depth map seen by the camera. Reconstruction is performed by mapping the phase of the beat tone at 2 Hz to distance using Eq. (3). The distance of each pixel is color-coded from 0 to 3 m (pixels that receive very few photons are displayed in black). The distance of targets T1 and T2 are estimated by averaging across their corresponding pixels, respectively. The estimated distances for T1 and T2 are 1.07 m and 1.96 m, respectively (averaged across all pixels corresponding to T1 and T2). d) Ambient image capture of the field-of-view of the camera, showing the two targets T1 and T2. e) The dimensions of the targets used for ToF imaging are shown.


The paper points out limitations of other approaches such as spatial light modulators and meta-optics, but doesn't mention any potential challenges or limitations of their proposed method. Interestingly, the authors cite some recent papers on high-resolution SPAD sensors to make the claim that their method is more promising than "highly specialized costly image sensors that are difficult to implement with a large number of pixels." Although the authors do not explicitly mention this in the paper, their piezoelectric material of choice (lithium niobate) is CMOS compatible. Thin-film deposition of lithium niobate on silicon using a CMOS process seems to be an active area of research (for example, see Mercante et al., Optics Express 24(14), 2016 and Wang et al., Nature 562, 2018.)
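To see how an ordinary CMOS camera can read out ToF phase through the slow beat, here is a minimal lock-in sketch using the setup's numbers (3.7337 MHz modulation, 2 Hz beat); calibration offsets and sign conventions are glossed over, so treat this as an assumption-laden illustration rather than the paper's actual pipeline:

```python
import numpy as np

C = 299_792_458.0
F_MOD = 3.7337e6      # modulator drive frequency from the paper's setup
F_BEAT = 2.0          # beat tone between laser and modulator, Hz

def depth_from_beat(frames, fps):
    """Per-pixel phase of the 2 Hz beat -> depth at the MHz modulation.

    `frames` is a (T, H, W) stack from a standard CMOS camera; the
    modulator/polarizer sandwich makes each pixel's intensity oscillate
    at F_BEAT with a phase set by the optical time of flight.
    """
    t = np.arange(frames.shape[0]) / fps
    ref = np.exp(-2j * np.pi * F_BEAT * t)            # lock-in reference
    phasor = np.tensordot(ref, frames, axes=(0, 0))   # single DFT bin per pixel
    phase = np.angle(phasor) % (2 * np.pi)
    return C * phase / (4 * np.pi * F_MOD)            # phase -> metres

print(C / (2 * F_MOD))   # ~40 m unambiguous range at 3.73 MHz
```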


Product Videos: STMicro and Airy3D


Low power, low noise 3D iToF 0.5 Mpix sensor

The VD55H1 is a low-noise, low-power, 672 x 804 pixel (0.54 Mpix), indirect Time-of-Flight (iToF) sensor die manufactured on advanced backside-illuminated, stacked wafer technology. Combined with a 940 nm illumination system, it enables building a small form-factor 3D camera producing a high-definition depth map with typical ranging distance up to 5 meters in full resolution, and beyond 5 meters with patterned illumination. With a unique ability to operate at 200 MHz modulation frequency and more than 85% demodulation contrast, the sensor can produce depth precision twice as good as typical 100 MHz modulated sensors, while multifrequency operation provides long distance ranging. The low-power 4.6 µm pixel enables state-of-the-art power consumption, with average sensor power down to 80 mW in some modes. The VD55H1 outputs 12-bit RAW digital video data over a MIPI CSI-2 quad lane or dual lane interface clocked at 1.5 GHz. The sensor frame rate can reach 60 fps in full resolution and 120 fps in analog binning 2x2. ST has developed a proprietary software image signal processor (ISP) to convert RAW data into depth map, amplitude map, confidence map and offset map. Android formats like DEPTH16 and depth point cloud are also supported. The device is fully configurable through the I2C serial interface. It features a 200 MHz low-voltage differential signaling (LVDS) and a 10 MHz, 3-wire SPI interface to control the laser driver with high flexibility. The sensor is optimized for low EMI/EMC, multidevice immunity, and easy calibration procedure. The sensor die size is 4.5 x 4.9 mm and the product is delivered in the form of reconstructed wafers.
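The "twice as good" precision claim follows from first-order iToF noise scaling: depth noise goes as 1/(modulation frequency × demodulation contrast). A hedged back-of-envelope (the noise model and electron counts below are generic assumptions, not ST's figures):

```python
import numpy as np

C = 299_792_458.0

def depth_sigma(f_mod, contrast, signal_e, noise_e):
    """First-order iToF precision: sigma_d = c/(4*pi*f_mod) * sigma_phi,
    with phase noise roughly noise/(contrast * signal). Toy model only."""
    sigma_phi = np.sqrt(2.0) * noise_e / (contrast * signal_e)
    return C / (4 * np.pi * f_mod) * sigma_phi

print(depth_sigma(100e6, 0.70, 1e4, 1e2))   # generic 100 MHz sensor, ~4.8 mm
print(depth_sigma(200e6, 0.85, 1e4, 1e2))   # 200 MHz / 85% contrast, ~2.0 mm
```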


 



VD55G0 Consumer Global Shutter 0.4Mpix for Windows Hello Login

The VD55G0 is a global shutter image sensor with high BSI performance which captures up to 210 frames per second in a 644 x 604 resolution format. The pixel construction of this device minimizes crosstalk while enabling a high quantum efficiency (QE) in the near infrared spectrum.
 

 

 

DepthIQ from AIRY3D

The DEPTHIQ™ 3D computer vision platform converts any camera sensor into a single 3D sensor generating both 2D images and co-aligned depth maps. DEPTHIQ uses diffraction to measure depth directly through an optical encoder called a transmissive diffraction mask, which can be applied over any CMOS image sensor.



Lensless Imaging with Fresnel Zone Plates


Although the idea of Fresnel zone plates is not new and can be traced back several decades to X-ray imaging and perhaps to Fresnel's original paper from 1818*, there is renewed interest in this idea for visible light imaging due to the need for compact form-factor cameras.

This 2020 article in the journal Light: Science and Applications by a team from Tsinghua University and MIT describes a lensless image sensor with a compressed-sensing style inverse reconstruction algorithm for high resolution color imaging.

Lensless imaging eliminates the need for geometric isomorphism between a scene and an image while allowing the construction of compact, lightweight imaging systems. However, a challenging inverse problem remains due to the low reconstructed signal-to-noise ratio. Current implementations require multiple masks or multiple shots to denoise the reconstruction. We propose single-shot lensless imaging with a Fresnel zone aperture and incoherent illumination. By using the Fresnel zone aperture to encode the incoherent rays in wavefront-like form, the captured pattern has the same form as the inline hologram. Since conventional backpropagation reconstruction is troubled by the twin-image problem, we show that the compressive sensing algorithm is effective in removing this twin-image artifact due to the sparsity in natural scenes. The reconstruction with a significantly improved signal-to-noise ratio from a single-shot image promotes a camera architecture that is flat and reliable in its structure and free of the need for strict calibration.
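For intuition on how compressive sensing removes the twin image where plain backpropagation cannot, here is a minimal ISTA-style sketch. It assumes a shift-invariant forward model (measurement = scene convolved with the FZA shadow pattern) and pixel-domain sparsity; the paper's reconstruction is more sophisticated, so this is an illustrative stand-in only:

```python
import numpy as np

def fza_psf(n=128, r1=6.0):
    """Binarized Fresnel zone aperture shadow pattern used as the PSF."""
    yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
    psf = (np.cos(np.pi * (xx ** 2 + yy ** 2) / r1 ** 2) > 0).astype(float)
    return psf / psf.sum()

def ista_reconstruct(meas, psf, n_iter=200, lam=1e-3):
    """Sparsity-regularized inversion of meas = psf (*) scene.

    The soft-threshold step is what suppresses the twin-image artifact
    that a single backpropagation (correlation with the PSF) leaves in.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf))
    step = 1.0 / np.max(np.abs(H)) ** 2            # convergence-safe step size
    x = np.zeros_like(meas)
    for _ in range(n_iter):
        resid = np.real(np.fft.ifft2(H * np.fft.fft2(x))) - meas
        grad = np.real(np.fft.ifft2(np.conj(H) * np.fft.fft2(resid)))
        x = x - step * grad
        x = np.sign(x) * np.maximum(np.abs(x) - lam * step, 0.0)  # soft threshold
    return x

scene = np.zeros((128, 128)); scene[40, 50] = scene[80, 90] = 1.0
psf = fza_psf()
meas = np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(np.fft.ifftshift(psf))))
print(ista_reconstruct(meas, psf).max())   # energy re-concentrates at the two points
```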








Full article is available here: https://www.nature.com/articles/s41377-020-0289-9

* "Calcul de l'intensité de la lumière au centre de l'ombre d'un ecran et d'une ouverture circulaires eclairés par un point radieux," in Œuvres Complètes d'Augustin Fresnel 1866-1870. https://gallica.bnf.fr/ark:/12148/bpt6k1512245j/f917.item


State of the Image Sensor Market


Sigmaintell report on smartphone image sensors

According to Sigmaintell, global mobile phone image sensor shipments in 2021 were approximately 5.37B units, a YoY decrease of about 11.8%; of these, 4Q21 shipments were about 1.37B units, a YoY decrease of about 25.3%. Sigmaintell estimates that global mobile phone image sensor shipments will reach about 5.50B units in 2022, a year-on-year increase of about 2.5%. In 1H21, due to the long ramp-up cycle of ultra-high-resolution production capacity and the squeeze on low-end pixel capacity from other applications, there was a short-term structural imbalance and market prices fluctuated upward. In 2H21, the production capacity of Samsung's and Sony's external foundries was released steadily and significantly, but terminal market sales fell short of expectations and stocking plans were lowered again, resulting in an oversupply in the overall image sensor market.





Business Korea report about Samsung CIS foundry capacity expansion


Samsung Electronics will expand its foundry capacity in legacy nodes starting in 2022. The move is aimed at securing new customers and boosting profitability by increasing the production capacity of mature processes for such items as CMOS image sensors (CISs), which are in growing demand due to a prolonged shortage. At the same time, Samsung Electronics is planning to start volume production of advanced chips on its sub-3nm fabrication process in 1H22. Samsung Electronics plans to secure up to 300 foundry customers by 2026 and triple production from the 2017 level. (Laoyaoba, Business Korea)



Yole announces a new edition of its "Imaging for Security" Market report

https://www.i-micronews.com/products/imaging-for-security-2022














Yole announces a new edition of its "Imaging for Automotive" market report

Flyer: https://s3.i-micronews.com/uploads/2022/03/YINTR22245-Imaging-for-Automotive-2022-Product-Brochure.pdf













Strategy Analytics estimates USD15.1B global smartphone image sensor market in 2021

According to Strategy Analytics, the global smartphone image sensor market secured total revenue of USD 15.1B in 2021, a revenue growth of more than 3% YoY. Sony Semiconductor Solutions led with a 45% revenue share, followed by Samsung System LSI and OmniVision. The top three vendors captured nearly 83% of global smartphone image sensor revenue in 2021. In terms of smartphone multi-camera applications, image sensors for depth and macro applications reached a 30% share, while those for ultrawide applications exceeded 15%.




ijiwei Insights predicts drop in mobile phone camera prices

In 2022, some manufacturers will reportedly reduce the price of mobile phone camera CIS several times. Price cuts have so far reached the 2MP, 5MP and 8MP camera chip products; the unit prices of 2MP and 5MP mobile phone camera CIS fell by about 20% and more than 30% year-on-year, respectively. [source]


Photonics Spectra article about Gigajot’s QIS Tech


The March 2022 edition of Photonics Spectra magazine has an interesting article titled "Photon-Counting CMOS Sensors: Extend Frontiers in Scientific Imaging" by Dakota Robledo, Ph.D., senior image sensor scientist at Gigajot Technology.

While CMOS imagers have evolved significantly since the 1960s, photon-counting sensitivity has still required the use of specialized sensors that often come with detrimental drawbacks. This changed recently with the emergence of new quanta image sensor (QIS) technology, which pushes CMOS imaging capabilities to their fundamental limit while also delivering high-resolution, high-speed, and low-power linear photon counting at room temperature. First proposed in 2005 by Eric Fossum, who pioneered the CMOS image sensor, the QIS paradigm envisioned a large array of specialized pixels, called jots, that are able to accurately detect single photons at a very fast frame rate. The technology's unique combination of high resolution, high sensitivity, and high frame rate enables imaging capabilities that were previously impossible to achieve. The concept was also expanded further to include multibit QIS, wherein the jots can reliably enumerate more than a single photon. As a result, quanta image sensors can be used in higher-light scenarios, versus other single-photon detectors, without saturating the pixels. The multibit QIS concept has already resulted in new sensor architectures using photon number resolution, with sufficient photon capacity for high-dynamic-range imaging, and the ability to achieve competitive frame rates.





The article uses the "bit error rate" metric for assessing image sensor quality.


The photon-counting error rate of a detector is often quantified by the bit error rate. The broadening of the signals associated with different photocharge numbers causes the peaks and valleys in the overall distribution to become less distinct, and eventually indistinguishable. The bit error rate measures the fraction of false-positive and false-negative photon counts relative to the total photon count in each signal bin. Figure 4 shows the predicted bit error rate of a detector as a function of read noise, demonstrating the rapid reduction that occurs for very low-noise sensors.
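The shape of that curve comes from simple Gaussian overlap: read noise blurs each integer electron peak, and a count error occurs when the noise crosses the nearest half-electron threshold. A minimal model (this simplified form is my sketch of the standard QIS analysis, not the article's exact computation):

```python
from math import erf, sqrt

def bit_error_rate(read_noise_e):
    """P(|N(0, sigma)| > 0.5 e-): Gaussian read noise pushing a count
    past the nearest half-electron decision threshold."""
    return 1.0 - erf(0.5 / (sqrt(2.0) * read_noise_e))

for sigma in (0.5, 0.35, 0.25, 0.15):
    print(f"{sigma:.2f} e- rms -> BER {bit_error_rate(sigma):.2e}")
# 0.50 e- -> ~3.2e-01 : photon counting is hopeless
# 0.15 e- -> ~8.6e-04 : deep sub-electron noise makes counting practical
```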

 


The article ends with a qualitative comparison between three popular single-photon image sensor technologies.



Interestingly, SPADs are listed as "No Photon Number Resolution" and "Low Manufacturability". It may be worth referring to previous blog posts for different perspectives on this issue. [1] [2] [3]

Full article available here: https://www.photonicsspectra-digital.com/photonicsspectra/march_2022/MobilePagedReplica.action?pm=1&folio=50#pg50




Axcelis to ship its processing tool to multiple CMOS image sensor manufacturers


BEVERLY, Mass., March 17, 2022 /PRNewswire/ -- Axcelis Technologies, Inc. (Nasdaq: ACLS), a leading supplier of innovative, high-productivity solutions for the semiconductor industry, announced today that it has shipped multiple Purion VXE™ high energy systems to multiple leading CMOS image sensor manufacturers located in Asia. The Purion VXE is an extended energy range solution for the industry leading Purion XE™ high energy implanter.

President and CEO Mary Puma commented, "We continue to maintain a leading position in the image sensor market. Our growth in this segment is clear and sustainable, and is tied to long-term trends in demand for products in the growing IoT, mobile and automotive markets. The Purion VXE was designed to address the specific needs of customers developing and manufacturing the most advanced CMOS image sensors, and has quickly become the process tool of record for image sensor manufacturers."

Source: https://www.prnewswire.com/news-releases/axcelis-announces-multiple-shipments-of-purion-high-energy-system-to-multiple-cmos-image-sensor-manufacturers-301504815.html


CMOS SPAD SoC for Fluorescence Imaging


Hot off the press! An article titled "A High Dynamic Range 128 x 120 3-D Stacked CMOS SPAD Image Sensor SoC for Fluorescence Microendoscopy" from the research group at The University of Edinburgh and STMicroelectronics is now available for early access in the IEEE Journal of Solid-State Circuits.

A miniaturized 1.4 mm x 1.4 mm, 128 x 120 single-photon avalanche diode (SPAD) image sensor with a five-wire interface is designed for time-resolved fluorescence microendoscopy. This is the first endoscopic chip-on-tip sensor capable of fluorescence lifetime imaging microscopy (FLIM). The sensor provides a novel, compact means to extend the photon counting dynamic range (DR) by partitioning the required bit depth between in-pixel counters and off-pixel noiseless frame summation. The sensor is implemented in STMicroelectronics 40-/90-nm 3-D-stacked backside-illuminated (BSI) CMOS process with 8-μm pixels and 45% fill factor. The sensor capabilities are demonstrated through FLIM examples, including ex vivo human lung tissue, obtained at video rate.
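The dynamic-range partitioning in the abstract is easy to picture with a toy model (the counter width and photon rates below are my assumptions, not the paper's exact parameters): a small in-pixel counter clips each frame, but because its output is already digital, frames can be summed off-pixel without adding read noise.

```python
import numpy as np

def partitioned_counts(rate, n_frames, counter_bits=8, seed=0):
    """In-pixel k-bit counting plus noiseless off-pixel frame summation.

    Per frame the counter saturates at 2^k - 1, but the summed total can
    reach n_frames * (2^k - 1), extending photon-counting dynamic range
    without widening the in-pixel counter.
    """
    rng = np.random.default_rng(seed)
    cap = 2 ** counter_bits - 1
    per_frame = np.minimum(rng.poisson(rate, n_frames), cap)
    return int(per_frame.sum())

print(partitioned_counts(3.0, 100))     # dim pixel: ~300 counts
print(partitioned_counts(200.0, 100))   # bright pixel: ~20000, still unclipped
```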














Full article is available here: https://ieeexplore.ieee.org/document/9723499 

Open access version: https://www.pure.ed.ac.uk/ws/portalfiles/portal/252858429/JSSC_acceptedFeb2022.pdf


SmartSens 50MP Ultra-High-Resolution Image Sensor


SmartSens has launched an ultra-high-resolution image sensor based on a 22nm process. The SC550XS is its first 50MP ultra-high-resolution image sensor, with a 1.0μm pixel size. The new product adopts the advanced 22nm HKMG stack process as well as SmartSens' multiple proprietary technologies, including SmartClarity®-2 technology, SFCPixel® technology and PixGain HDR® technology, to enable excellent imaging performance. In addition, it achieves 100% all-pixel, all-direction autofocus coverage via AllPix ADAF® technology and is equipped with a MIPI C-PHY 3.0Gsps high-speed data transmission interface. The product is designed to address the requirements of flagship smartphone main cameras in terms of night-vision full-color imaging, high dynamic range, and low power consumption.










Full press release: https://www.smartsenstech.com/en/page?id=179


High Resolution MEMS LiDAR Paper in Nature Magazine


Researchers from the Integrated Photonics Lab at UC Berkeley recently published a paper titled "A large-scale microelectromechanical-systems-based silicon photonics LiDAR" proposing a CMOS-compatible high-resolution scanning MEMS LiDAR system.

Three-dimensional (3D) imaging sensors allow machines to perceive, map and interact with the surrounding world. The size of light detection and ranging (LiDAR) devices is often limited by mechanical scanners. Focal plane array-based 3D sensors are promising candidates for solid-state LiDARs because they allow electronic scanning without mechanical moving parts. However, their resolutions have been limited to 512 pixels or smaller. In this paper, we report on a 16,384-pixel LiDAR with a wide field of view (FoV, 70° × 70°), a fine addressing resolution (0.6° × 0.6°), a narrow beam divergence (0.050° × 0.049°) and a random-access beam addressing with sub-MHz operation speed. The 128 × 128-element focal plane switch array (FPSA) of grating antennas and microelectromechanical systems (MEMS)-actuated optical switches are monolithically integrated on a 10 × 11-mm2 silicon photonic chip, where a 128 × 96 subarray is wire bonded and tested in experiments. 3D imaging with a distance resolution of 1.7 cm is achieved with frequency-modulated continuous-wave (FMCW) ranging in monostatic configuration. The FPSA can be mass-produced in complementary metal–oxide–semiconductor (CMOS) foundries, which will allow ubiquitous 3D sensors for use in autonomous cars, drones, robots and smartphones.
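The quoted 1.7 cm distance resolution pins down the optical chirp bandwidth through the standard FMCW relation ΔR = c/2B (the paper's exact chirp parameters are not quoted here, so treat this as a consistency check):

```python
C = 299_792_458.0  # speed of light, m/s

# Standard FMCW ranging relation: range resolution dR = c / (2 * B).
def chirp_bandwidth(resolution_m):
    return C / (2.0 * resolution_m)

print(f"{chirp_bandwidth(0.017) / 1e9:.1f} GHz")   # ~8.8 GHz for 1.7 cm bins
```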




Artilux Announces CMOS IR Sensor for Mobile Digital Health Applications


Hsinchu, Taiwan, March 8th, 2022 – Artilux, the leader in CMOS-based SWIR optical sensing technology, demonstrated a multi-spectral optical sensing platform compatible with NIR/SWIR vertical-cavity surface-emitting laser (VCSEL) arrays, light-emitting diodes (LEDs), and CMOS-based GeSi (germanium-silicon) sensors. This compact optical sensing platform is an industry-leading solution targeted at the rapidly growing TWS and wearables markets, in addition to unlocking diversified scenarios in digital health.

In light of increasingly popular wide-spectrum (NIR/SWIR) optical sensing applications, from vital-sign monitoring in smartwatches to skin detection in TWS earbuds, cost-effective and energy-efficient optical components, including LEDs, VCSELs, edge-emitting lasers, and SWIR sensors, have become crucial to meeting rising user demand. The widely discussed skin-detection function in TWS earbuds requires SWIR sensors to perform precise in-ear detection and deliver a seamless listening experience while sustaining long battery life. Such a product requires SWIR wavelengths, lower power consumption, lower cost, and smaller size with higher sensitivity. The announcement aims to deliver a compact and cost-effective multi-spectral optical sensing solution by incorporating Artilux's CMOS-based ultra-sensitive SWIR GeSi sensors, with the capability to integrate the AFE (analog front end) and digital functions into a single chip, together with high-performance VCSEL arrays at 940nm and 1380nm supplied by Lumentum.

 

Although the press release does not mention any technical specifications, it may be worth referring to a 2020 ISSCC paper by a team from Artilux that described their Ge-on-Si technology: "An Up-to-1400nm 500MHz Demodulated Time-of-Flight Image Sensor on a Ge-on-Si Platform" (https://doi.org/10.1109/ISSCC19947.2020.9063107).

 




Press Release: https://www.artiluxtech.com/resources/news/1014


