dToF Sensor with In-pixel Processing

Image Sensors World        Go to the original article...

In a recent preprint (https://arxiv.org/pdf/2209.11772.pdf) Gyongy et al. describe a new 64x32 SPAD-based direct time-of-flight sensor with in-pixel histogramming and processing capability.

Abstract
3D flash LIDAR is an alternative to the traditional scanning LIDAR systems, promising precise depth imaging in a compact form factor, and free of moving parts, for applications such as self-driving cars, robotics and augmented reality (AR). Typically implemented using single-photon, direct time-of-flight (dToF) receivers in image sensor format, the operation of the devices can be hindered by the large number of photon events needing to be processed and compressed in outdoor scenarios, limiting frame rates and scalability to larger arrays. We here present a 64 × 32 pixel (256 × 128 SPAD) dToF imager that overcomes these limitations by using pixels with embedded histogramming, which lock onto and track the return signal. This reduces the size of output data frames considerably, enabling maximum frame rates in the 10 kFPS range or 100 kFPS for direct depth readings. The sensor offers selective readout of pixels detecting surfaces, or those sensing motion, leading to reduced power consumption and off-chip processing requirements. We demonstrate the application of the sensor in mid-range LIDAR.
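As a rough illustration of the in-pixel histogramming idea, here is a simulation sketch; the bin width, bin count, and photon rates are assumed for illustration and are not the sensor's actual parameters.

```python
# Sketch: how a dToF pixel histograms photon timestamps and converts the
# peak bin to depth. All parameters below are assumed for illustration.
import numpy as np

C = 3e8               # speed of light, m/s
BIN_WIDTH_S = 1e-9    # assumed 1 ns timing bin
N_BINS = 64           # assumed in-pixel histogram length (~9.6 m range)

rng = np.random.default_rng(0)

# Simulate photon arrivals: uniform ambient background plus a narrow
# laser return from a target at 5 m (round-trip time ~33 ns).
target_depth_m = 5.0
t_return = 2 * target_depth_m / C
ambient = rng.uniform(0, N_BINS * BIN_WIDTH_S, 5000)
signal = rng.normal(t_return, 0.5e-9, 800)

hist, _ = np.histogram(np.concatenate([ambient, signal]),
                       bins=N_BINS, range=(0, N_BINS * BIN_WIDTH_S))

peak_bin = int(np.argmax(hist))
depth_m = C * (peak_bin + 0.5) * BIN_WIDTH_S / 2  # time-of-flight -> distance
print(f"peak bin {peak_bin}, estimated depth {depth_m:.2f} m")
```

A histogram-locking pixel like those described in the paper would then narrow its timing window around this peak in subsequent frames, which is what keeps the output data small enough for kFPS-range readout.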

Direct ToF Single-Photon Imaging (IEEE TED June 2022)


The June 2022 issue of IEEE Trans. Electron Devices has an invited paper titled "Direct Time-of-Flight Single-Photon Imaging" by Istvan Gyongy et al. from the University of Edinburgh and STMicroelectronics.

This is a comprehensive tutorial-style article on single-photon 3D imaging which includes a description of the image formation model starting from first principles and practical system design considerations such as photon budget and power requirements.

Abstract: This article provides a tutorial introduction to the direct Time-of-Flight (dToF) signal chain and typical artifacts introduced due to detector and processing electronic limitations. We outline the memory requirements of embedded histograms related to desired precision and detectability, which are often the limiting factor in the array resolution. A survey of integrated CMOS dToF arrays is provided highlighting future prospects to further scaling through process optimization or smart embedded processing.
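The memory-scaling point in the abstract can be made concrete with a back-of-envelope calculation; the range, bin size, counter width, and array size below are assumed for illustration, not figures from the paper.

```python
# Sketch: histogram memory needed per pixel for a given range and bin size.
# All parameters are illustrative assumptions.
C = 3e8  # speed of light, m/s

max_range_m = 50.0   # assumed unambiguous range
bin_depth_m = 0.05   # assumed 5 cm depth bin (sets the raw precision)
bits_per_bin = 16    # assumed photon-counter width per bin

n_bins = int(max_range_m / bin_depth_m)   # number of histogram bins
bin_width_s = 2 * bin_depth_m / C         # ~333 ps timing bin
bits_per_pixel = n_bins * bits_per_bin    # SRAM per pixel

array_pixels = 64 * 32                    # example array size
total_Mbit = array_pixels * bits_per_pixel / 1e6
print(f"{n_bins} bins x {bits_per_bin} bit = {bits_per_pixel} bits/pixel, "
      f"{total_Mbit:.1f} Mbit for a {array_pixels}-pixel array")
```

Numbers on this order are why full embedded histograms become the limiting factor in array resolution, and why partial or zoomed histogramming schemes are attractive.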




Review of indirect time-of-flight 3D cameras (IEEE TED June 2022)


C. Bamji et al. from Microsoft published a paper titled "A Review of Indirect Time-of-Flight Technologies" in IEEE Trans. Electron Devices (June 2022).

Abstract: Indirect time-of-flight (iToF) cameras operate by illuminating a scene with modulated light and inferring depth at each pixel by combining the back-reflected light with different gating signals. This article focuses on amplitude-modulated continuous-wave (AMCW) time-of-flight (ToF), which, because of its robustness and stability properties, is the most common form of iToF. The figures of merit that drive iToF performance are explained and plotted, and system parameters that drive a camera’s final performance are summarized. Different iToF pixel and chip architectures are compared and the basic phasor methods for extracting depth from the pixel output values are explained. The evolution of pixel size is discussed, showing performance improvement over time. Depth pipelines, which play a key role in filtering and enhancing data, have also greatly improved over time with sophisticated denoising methods now available. Key remaining challenges, such as ambient light resilience and multipath invariance, are explained, and state-of-the-art mitigation techniques are referenced. Finally, applications, use cases, and benefits of iToF are listed.
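The basic phasor method mentioned in the abstract can be sketched in a few lines. The 100 MHz modulation frequency and the cosine sample model below are a standard textbook formulation, not values from the paper.

```python
# Minimal sketch of the standard AMCW 4-phase ("four-bucket") depth
# calculation; modulation frequency and sample model are illustrative
# assumptions, not taken from the reviewed paper.
import math

C = 3e8        # speed of light, m/s
F_MOD = 100e6  # assumed modulation frequency -> 1.5 m unambiguous range

def amcw_depth(a0, a90, a180, a270):
    """Recover distance from four correlation samples taken with gating
    signals shifted by 0, 90, 180, and 270 degrees."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
    return C * phase / (4 * math.pi * F_MOD)

# Synthesize the four samples for a target at 1.2 m and invert them.
d_true = 1.2
phi = 4 * math.pi * F_MOD * d_true / C
samples = [math.cos(phi - s) for s in (0, math.pi/2, math.pi, 3*math.pi/2)]
print(f"recovered depth: {amcw_depth(*samples):.3f} m")
```

The differencing of opposite buckets (a0 − a180, a90 − a270) is what cancels the common-mode ambient offset before the arctangent is taken.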



The paper's figures illustrate: the use of time gates to integrate returning light; an iToF camera measurement; modulation contrast vs. modulation frequency in iToF cameras; trends in pixel size and pixel array size since 2012 and in near-infrared pixel quantum efficiency since 2010; multigain column readout; and multipath mitigation.

DOI link: 10.1109/TED.2022.3145762


High resolution ToF module from Analog Devices


Analog Devices has released the ADTF3175, an industrial-grade megapixel ToF module, and the ADSD3030, a VGA-resolution sensor that aims to bring high-accuracy ToF technology to a compact VGA footprint.


The ADTF3175 is a complete Time-of-Flight (ToF) module for high resolution 3D depth sensing and vision systems. Based on the ADSD3100, a 1 Megapixel CMOS indirect Time-of-Flight (iToF) imager, the ADTF3175 also integrates the lens and optical bandpass filter for the imager, an infrared illumination source containing optics, laser diode, laser diode driver and photodetector, a flash memory, and power regulators to generate local supply voltages. The module is fully calibrated at multiple range and resolution modes. To complete the depth sensing system, the raw image data from the ADTF3175 is processed externally by the host system processor or depth ISP.

The ADTF3175 image data output interfaces electrically to the host system over a 4-lane mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) Tx interface. The module programming and operation are controlled through 4-wire SPI and I2C serial interfaces.

The ADTF3175 has module dimensions of 42mm × 31mm × 15.1mm, and is specified over an operating temperature range of -20°C to 65°C.

Applications:
Machine vision systems
Robotics
Building automation
Augmented reality (AR) systems

Price:
$197 in 1,000 Unit Quantities

The ADSD3030 is a CMOS Time of Flight (ToF)-based 3D depth and 2D visible light imager that is available for integration into 3D sensor systems. The functional blocks required for readout, which include analog-to-digital converters (ADCs), amplifiers, pixel biasing circuitry, and sensor control logic, are built into the chip to enable a cost-effective and simple implementation into systems.

The ADSD3030 interfaces electrically to a host system over a mobile industry processor interface (MIPI), Camera Serial Interface 2 (CSI-2) interface. A lens plus optical band-pass filter for the imager and an infrared light source plus an associated driver are required to complete the working subsystem.

Applications:
Smartphones
Augmented reality (AR) and virtual reality (VR)
Machine vision systems (logistics and inventory)
Robotics (consumer and industrial)



Chronoptics compares depth sensing methods


In a blog post titled "Comparing Depth Cameras: iToF Versus Active Stereo," Refael Whyte of Chronoptics compares depth reconstructions from their indirect time-of-flight (iToF) "KEA" camera with those from an Intel RealSense D435 active stereo camera.


The post includes a spec comparison of the two cameras, the setup used for the comparisons, and depth captures of bin-picking and pallet-picking scenes.


Depth data can also be overlaid on RGB to get colored point cloud visualizations; the KEA's point cloud looks much cleaner than the D435's.


They show some limitations too. In one scene the floor has very low reflectivity in IR, so the KEA camera struggles to collect enough photons there.

[PS: I wish all companies showed "failure cases" as part of their promotional materials!]

Full article here: https://medium.com/chronoptics-time-of-flight/comparing-depth-cameras-itof-versus-active-stereo-e163811f3ac8


ams OSRAM VCSELs in Melexis’ in-cabin monitoring solution


ams OSRAM VCSEL illuminator brings benefits of integrated eye safety to Melexis automotive in-cabin monitoring solution

Premstaetten, Austria (11 May, 2022) – ams OSRAM (SIX: AMS), a global leader in optical solutions, announces that it is supplying a high-performance infrared laser flood illuminator for the latest automotive indirect Time-of-Flight (iToF) demonstrator from Melexis.

The ams OSRAM vertical-cavity surface-emitting laser (VCSEL) flood illuminator from the TARA2000-AUT family has been chosen for the new, improved version of the EVK75027 iToF sensing kit because it features an integrated eye safety interlock. This provides for a more compact, more reliable and faster system implementation than other VCSEL flood illuminators that require an external photodiode and processing circuitry.

The Melexis evaluation kit demonstrates the capabilities of the new ams OSRAM 940 nm VCSEL flood illuminator in combination with an interface board, a processor board, and the MLX75027 iToF sensor. The evaluation kit provides a complete hardware implementation of iToF depth sensing on which automotive OEMs can run software for cabin monitoring functions such as occupant detection and gesture sensing.


More reliable operation, faster detection of eye safety risks

The new ams OSRAM VCSEL's integrated eye safety interlock is implemented directly on the micro-lens array of the VCSEL module and detects any cracks or openings that could pose an eye safety risk. Earlier automotive implementations of iToF sensing have used VCSEL illuminators that require an external photodiode, a fault-prone, indirect method of providing the eye safety interlock function.

The read-out circuit requires no additional components other than an AND gate or a MOSFET. This produces almost instant (<1µs) reactions to fault conditions. A lower component count also reduces the bill-of-materials cost compared to photodiode-based systems. By eliminating the use of an external photodiode, the eye safety interlock eliminates the false signals created by objects such as a passenger’s hand obscuring the camera module.

“Automotive OEMs are continually looking for ways to simplify system designs and reduce component count. By integrating an eye safety interlock into the VCSEL illuminator module, ams OSRAM has found a new way to bring value to automotive customers. Not only will it reduce component count, but also increase reliability while offering the very highest levels of optical performance,” said Firat Sarialtun, Global Segment Manager for In-Cabin Sensing at ams OSRAM.

“With the EVK75027, Melexis has gone beyond the provision of a stand-alone iToF sensor to offer automotive customers a high-performance platform for 3D in-cabin sensing. We are pleased to be able to improve the value of the EVK75027 by now offering the option of a more integrated VCSEL flood illuminator on the kit’s illuminator board,” said Gualtiero Bagnuoli, Marketing Manager Optical Sensors at Melexis.

The EVK75027 evaluation kit with ams OSRAM illumination board can be ordered from authorized distributors of Melexis products (https://www.melexis.com/en/product/EVK75027/Evaluation-Kit-VGA-ToF-Sensor).

There is also a white paper on the new illumination board for the EVK75027, describing the benefits of implementing an iToF system with a VCSEL flood illuminator that includes an eye safety interlock. The white paper can be downloaded here: https://www.melexis.com/Eye-safe-IR-illumination-for-3D-TOF

Article: https://ams-osram.com/news/press-releases/melexis-eye-safety-itof


Better Piezoelectric Light Modulators for AMCW Time-of-Flight Cameras


A team from Stanford University's Laboratory for Integrated Nano-Quantum Systems (LINQS) and ArbabianLab presents a new method that can potentially convert any conventional CMOS image sensor into an amplitude-modulated continuous-wave time-of-flight camera. The paper, titled "Longitudinal piezoelectric resonant photoelastic modulator for efficient intensity modulation at megahertz frequencies," appeared in Nature Communications.

Intensity modulators are an essential component in optics for controlling free-space beams. Many applications require the intensity of a free-space beam to be modulated at a single frequency, including wide-field lock-in detection for sensitive measurements, mode-locking in lasers, and phase-shift time-of-flight imaging (LiDAR). Here, we report a new type of single frequency intensity modulator that we refer to as a longitudinal piezoelectric resonant photoelastic modulator. The modulator consists of a thin lithium niobate wafer coated with transparent surface electrodes. One of the fundamental acoustic modes of the modulator is excited through the surface electrodes, confining an acoustic standing wave to the electrode region. The modulator is placed between optical polarizers; light propagating through the modulator and polarizers is intensity modulated with a wide acceptance angle and record breaking modulation efficiency in the megahertz frequency regime. As an illustration of the potential of our approach, we show that the proposed modulator can be integrated with a standard image sensor to effectively convert it into a time-of-flight imaging system.



a) A Y-cut lithium niobate wafer of diameter 50.8 mm and of thickness 0.5 mm is coated on top and bottom surfaces with electrodes having a diameter of 12.7 mm. The wafer is excited with an RF source through the top and bottom electrodes. b) Simulated ∣s11∣ of the wafer with respect to 50 Ω, showing the resonances corresponding to different acoustic modes of the wafer (loss was added to lithium niobate to make it consistent with experimental results). The desired acoustic mode appears around 3.77 MHz and is highlighted in blue. c) The desired acoustic mode ∣s11∣ with respect to 50 Ω is shown in more detail. d) The dominant strain distribution (Syz) when the wafer is excited at 3.7696 MHz with 2 Vpp is shown for the center of the wafer. This strain distribution corresponds to the ∣s11∣ resonance shown in (c). e) The variation in Syz parallel to the wafer normal and centered along the wafer is shown when the wafer is excited at 3.7696 MHz with 2 Vpp.



a) Schematic of the characterization setup is shown. The setup includes a laser (L) with a wavelength of 532 nm that is intensity-modulated at 3.733704 MHz, aperture (A) with a diameter of 1 cm, neutral density filter (N), two polarizers (P) with transmission axis t̂ = (âx + âz)/√2, wafer (W), and a standard CMOS camera (C). The wafer is excited with 90 mW of RF power at fr = 3.7337 MHz, and the laser beam passes through the center of the wafer that is coated with ITO. The camera detects the intensity-modulated laser beam. b) The desired acoustic mode is found for the modulator by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power and with a bandwidth of 100 Hz. The desired acoustic mode is highlighted in blue. c) The desired acoustic mode is shown in more detail by performing an s11 scan with respect to 50 Ω using 0 dBm excitation power with a bandwidth of 20 Hz. d) The fabricated modulator is shown. e) The depth of intensity modulation is plotted for different angles of incidence for the laser beam (averaged across all the pixels), where ϕ is the angle between the surface normal of the wafer and the beam direction k̂ (see “Methods” for more details). Error bars represent the standard deviation of the depth of intensity modulation across the pixels. f) Time-averaged intensity profile of the laser beam detected by the camera is shown for ϕ = 0. g) The DoM at 4 Hz of the laser beam is shown per pixel for ϕ = 0. h) The phase of intensity modulation at 4 Hz of the laser beam is shown per pixel for ϕ = 0.


a) Schematic of the imaging setup is shown. The setup includes a standard CMOS camera (C), camera lens (CL), two polarizers (P) with transmission axis t̂ = (âx + âz)/√2, wafer (W), aperture (A) with a diameter of 4 mm, laser (L) with a wavelength of 635 nm that is intensity-modulated at 3.733702 MHz, and two metallic targets (T1 and T2) placed 1.09 m and 1.95 m away from the imaging system, respectively. For the experiment, 140 mW of RF power at fr = 3.7337 MHz is used to excite the wafer electrodes. The laser is used for illuminating the targets. The camera detects the reflected laser beam from the two targets, and uses the 2 Hz beat tone to extract the distance of each pixel corresponding to a distinct point in the scene (see “Methods” for more details). b) Bird’s eye view of the schematic in (a). c) Reconstructed depth map seen by the camera. Reconstruction is performed by mapping the phase of the beat tone at 2 Hz to distance using Eq. (3). The distance of each pixel is color-coded from 0 to 3 m (pixels that receive very few photons are displayed in black). The distances of targets T1 and T2 are estimated by averaging across their corresponding pixels. The estimated distances for T1 and T2 are 1.07 m and 1.96 m, respectively (averaged across all pixels corresponding to T1 and T2). e) The dimensions of the targets used for ToF imaging are shown. d) Ambient image capture of the field-of-view of the camera, showing the two targets T1 and T2.
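The caption's phase-to-distance step can be sketched with a short simulation. This is a simplified per-pixel model in which the signal after the modulator is a slow beat whose phase offset equals the round-trip modulation phase; the frame rate and modulation depth are assumed, while the frequencies and the 1.95 m target follow the caption.

```python
# Sketch of the heterodyne readout described in the caption: the scene is
# illuminated at F_LASER while the modulator runs at F_MOD, so each camera
# pixel sees a slow 2 Hz beat whose phase offset carries the round-trip
# modulation phase. Frame rate and modulation depth are assumed values.
import numpy as np

C = 3e8                    # speed of light, m/s
F_LASER = 3.733702e6       # laser intensity-modulation frequency (Hz)
F_MOD = 3.7337e6           # modulator drive frequency (Hz)
F_BEAT = F_LASER - F_MOD   # 2 Hz beat seen through the modulator

fps = 30.0                       # assumed camera frame rate
t = np.arange(0, 2.0, 1 / fps)   # two seconds of frames (4 beat cycles)

d_true = 1.95                              # metres (target T2 in the caption)
phi = 4 * np.pi * F_LASER * d_true / C     # round-trip modulation phase
pixel = 1 + 0.5 * np.cos(2 * np.pi * F_BEAT * t + phi)  # simplified signal

# Lock-in estimate of the beat phase from the frame sequence, then map
# phase back to distance (the step the caption attributes to Eq. (3)).
ref = np.exp(-1j * 2 * np.pi * F_BEAT * t)
phase_est = np.angle(np.sum(pixel * ref)) % (2 * np.pi)
depth = C * phase_est / (4 * np.pi * F_LASER)
print(f"recovered depth: {depth:.2f} m")
```

Because the beat is only a few hertz, a standard frame-rate camera can sample it directly, which is the point of the heterodyne scheme.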


The paper points out limitations of other approaches such as spatial light modulators and meta-optics, but doesn't mention any potential challenges or limitations of their proposed method. Interestingly, the authors cite some recent papers on high-resolution SPAD sensors to make the claim that their method is more promising than "highly specialized costly image sensors that are difficult to implement with a large number of pixels." Although the authors do not explicitly mention this in the paper, their piezoelectric material of choice (lithium niobate) is CMOS compatible. Thin-film deposition of lithium niobate on silicon using a CMOS process seems to be an active area of research (for example, see Mercante et al., Optics Express 24(14), 2016 and Wang et al., Nature 562, 2018.)

