Archives for April 2020

Techinsights Publishes Pixel Cross-section of iPad LiDAR SPAD

Image Sensors World

TechInsights tweets about the first results of its Apple iPad Pro 2020 LiDAR reverse engineering:

"Our analysis continues for #Sony d-ToF sensor from #Apple iPad Pro LiDAR system. TechInsights confirms stacked back-illuminated SPAD sensor; pixel-level DBI & metal filled B-DTI."


As TechInsights reported earlier, the pixel pitch is 10um. Rough scaling of the cross-section suggests a Si thickness of 7-7.5um, which should give the Sony SPAD quite a high NIR QE.
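As a rough sanity check of that claim, the Beer-Lambert law with silicon's room-temperature absorption coefficient around 940 nm (absorption depth on the order of 55 um; the exact value varies with temperature and doping, and is not from the TechInsights report) gives the fraction of NIR photons absorbed in a layer of a given thickness:

```python
import math

# Beer-Lambert estimate of NIR absorption in a silicon SPAD layer.
# The absorption depth below (~55 um at 940 nm, room temperature) is an
# assumed textbook value, so treat the output as an order-of-magnitude check.
ALPHA_940NM_PER_UM = 1.0 / 55.0

def absorbed_fraction(thickness_um: float, alpha_per_um: float = ALPHA_940NM_PER_UM) -> float:
    """Fraction of 940 nm photons absorbed in a Si layer of the given thickness."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

for t_um in (3.0, 7.0, 7.5):
    print(f"{t_um:>4.1f} um of Si absorbs ~{absorbed_fraction(t_um):.0%} of 940 nm photons")
```

Even before microlens, fill-factor, and avalanche-probability losses, roughly 7 um of silicon absorbs about twice as much 940 nm light as a ~3 um layer typical of small BSI pixels, which is the point of going thick.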

Thanks to RF for the pointer!


SK Hynix on Smart Eye Sensor

Image Sensors World

SK Hynix publishes the article "“Smart Eye”, A Computer Inside Our Eyes":

"The story we are going to tell you now is about “Smart Eye”, something we might see in the future, thanks to the development of semiconductor technology.

...the current CIS technology has not reached the level of human eyes in terms of major features such as resolution, three-dimensionality, and sensitivity. While the resolution of human eyes is 576 megapixels (MP), the highest resolution CIS can currently realize is only 108MP. When the brightness of the surrounding environment changes suddenly, CIS is also likely to suffer from latency in accepting visual information.

Ho-young Cho, Technical Leader (TL) at CIS Marketing Strategy of SK hynix said, “While human eyes’ main purpose is to recognize rather than display the collected visual information, CIS is designed for securing visual information for output. If CIS can recognize at the same level as human eyes do in the future, it will also function as a displaying device that outputs the collected information.”

Cho continued, “Unlike human eyes, CIS is designed as individual modules for various purposes. As a result, CIS is detachable, and users can equip different CIS depending on various situations. Such flexibility will make our daily lives more convenient with no doubt.”
"


ADAS and AV Sensor Suites

Image Sensors World

ResearchInChina reports that automatic parking solutions are mostly based on ultrasonic sensors, while camera-based solutions are quickly gaining share:


"Rare application of ultrasonic + visual fusion solution in the past lies in lack of algorithms and powerful compute. Tesla, the pioneer going intelligent, has long resorted to ultrasonic solutions, and its automated parking capability has not performed well. Even Smart Summon launched in the second half of 2019 is not so successful, either."

Another ResearchInChina report compares Tier 1 suppliers' approaches to ADAS and autonomous driving sensors. Most of them are using cameras, and about 25% are using LiDARs:


In yet another report, ResearchInChina compares L4 AV sensor suites from different companies:


Tidal Hifi review

Cameralabs

This is my one-year review of Tidal, a music streaming service with Hifi sound quality. If you’ve not come across Tidal before, it’s essentially like Spotify in terms of offering convenient access to a vast online collection of music, but with much better audio quality that in some cases even exceeds CDs. If, like me,…

The post Tidal Hifi review appeared first on Cameralabs.


Sony Employee Awarded Purple Ribbon Medal for Multi-Layer Stacked Image Sensor

Image Sensors World

Sony announces that its employee Taku Umebayashi will be awarded the Purple Ribbon Medal for his achievements in the development of the stacked multifunctional CMOS image sensor structure. The Purple Ribbon Medal is awarded for outstanding inventions and discoveries in science and technology, as well as for achievements in the academic, sports, and arts and culture fields.

"We succeeded in mass-producing CMOS image sensor of the laminated structure which superimposed the pixel portion where the back illuminated pixel was formed on the chip that the signal processing circuit was formed instead of the support substrate of the conventional back-illuminated CMOS image sensor. This laminated structure enables large-scale circuits to be mounted at small chip sizes, and the pixel and circuit parts that are capable of independent formation can be adopted by a specialized manufacturing process, enabling miniaturization, high image quality, and high functionality at the same time. In addition, by adopting a cutting-edge process on the chip in which the circuit is formed, it is possible to achieve faster signal processing and low power consumption. In recent years, further improvements in performance have been applied as a basic technology to promote high functionality of various sensing devices, including image sensors."


Apple iPad and Ouster LiDARs Compared

Image Sensors World

Ouster publishes an article "Why Apple chose digital lidar" by Raffi Mardirosian comparing its own design choices with Apple's. A few quotes:

"The iPad Pro is equipped with a Digital Flash Lidar (a type of solid-state lidar) system. As the name suggests, just like a camera’s flash, a flash lidar detects an object by emitting a light wall, instead of scanning the laser beam point by point in a traditional mechanical rotary lidar.

The system uses vertical cavity surface emitting lasers (VCSELs) paired with single photon avalanche diodes (SPADs) as the light detectors, the same as with FaceID. These two technologies form the foundation of Digital Lidar and are ideally suited for commercialization for a number of reasons:
  1. VCSELs and SPADs offer a superior performance, form factor, and cost profile. VCSELs are smaller, lighter, more durable, and easier to manufacture compared with other emitter technologies. SPADs can be densely packed on a chip, count individual photons, and have excellent time resolution, resulting in a simpler, smaller, more durable, and natively digital architecture in contrast to legacy analog lidar detectors such as APDs or SiPMs (which could not fit in a consumer device).
  2. VCSELs and SPADs support a more rugged and robust system because they can both be integrated onto a chip. Printing all the lasers and all the detectors onto chips greatly reduces the number of components in the system and improves durability and ruggedness.
  3. VCSELs and SPADs have costs that fall faster with scale, are cheaper to produce in high-resolution implementations, and are improving along with Moore’s Law — whereas edge-emitting lasers and legacy analog APD and SiPM detectors are mature and have little room for improvement.
We are happy to see Apple come to the same conclusion that we did when we first designed our high-performance digital lidar sensors.

In fact, since we first designed our system in 2015, we’ve seen the performance of our VCSELs and SPADs improve by ~1000% while there has been little change in the analog technology used in other spinning lidar sensors.
"


Assorted News: Actlight, AImotive, Brookman, Akasha Imaging, Quanergy

Image Sensors World

PRNewswire: ActLight has signed a license agreement with a "leading semiconductor company" that intends to use ActLight's dynamic photodiode in healthcare applications.

"Now that the COVID-19 pandemic is spreading throughout the globe, we are proud to offer our Dynamic PhotoDiode technology to the healthcare segment together with a prestigious semiconductor company. In these times of trouble we all need to join forces to exploit innovative technologies at the service of the impacted communities," said Roberto Magnifico, Chief Commercial Officer at ActLight.


Edge AI and Vision Alliance publishes an interesting presentation by Gergely Debreczeni, Chief Scientist at AImotive, covering many different ways to estimate distance. The presentation slides are available here. Surprisingly, there are more camera-based approaches than one might think:



Brookman publishes a video presentation of the company in Japanese:



The startup Akasha Imaging, led by CEO Kartik Venkataraman (ex-Pelican Imaging founder and CTO), apparently aims to combine polarization with 3D sensing:

"Akasha Imaging is a Khosla Ventures backed MIT Media Lab spinout founded on breakthrough technology using polarized light."


Quanergy adapts its LiDAR software to track social distancing during coronavirus pandemic:

“In order for communities and cities to re-open and for the public to feel safe re-entering society, there must be a way to responsibly enforce social distancing,” said Kevin J. Kennedy, Chairman and CEO of Quanergy. “We believe LiDAR can play a key role in accelerating our return to work and restarting our economy. Quanergy is working closely with our current and new global partners to deploy solutions to instill confidence for businesses and the public in returning to our lives outside our homes.”


Princeton Instruments Super Deep Depletion CCD Achieves 75% QE at 1000nm

Image Sensors World

It came to my attention that Teledyne Princeton Instruments Blaze CCD-based cameras achieve 75% QE at 1000nm wavelength and are quite sensitive even at 1050nm. This CCD is a fairly new product manufactured since 2017:

"Proprietary BLAZE HR-Sensors are “super-deep-depletion” CCDs manufactured from high-resistivity bulk silicon in order to yield the highest near-infrared quantum efficiency of any silicon device. The silicon depletion region of each HR-Sensor is almost 4x thicker than that of a conventional deep-depletion (NIR-sensitive) CCD, affording quantum efficiency up to 7x greater at 1 µm than the best other deep-depletion sensors.

Spatial resolution for HR-Sensors is optimized by applying a bias voltage, resulting in a “fully depleted” silicon region with no diffusion of charge. The bias voltage generates an electric field that pushes the charge toward the correct pixels and does not allow charge to migrate to adjacent pixels.

The new sensors are offered in either 1340x100 or 1340x400 array formats with 20 µm pixels.
"


Teledyne e2v Expands its Emerald Family with 3.2MP GS Sensor

Image Sensors World

GlobeNewswire: Teledyne e2v announces its new Emerald 3.2MP CMOS sensor for emerging applications such as security, drones and embedded vision, as well as traditional machine vision. With its 2.8 µm global shutter pixel, the new 3.2M sensor shares all of the characteristics of the Emerald sensor series: low-noise performance, compact format, easy integration and a wide range of embedded features.

The sensor has been designed in an ultra-compact, light package with low power consumption to address the challenge of optimizing SWaP-C (Size, Weight, Power and Cost). The device also features a MIPI interface and is pin-to-pin and optically compatible with Emerald 2M and Emerald 5M, so that multiple resolutions are supported from one single design, saving cost.

This new sensor completes Teledyne e2v’s Emerald product portfolio which includes sensors in resolutions from 2MP to 67MP. Evaluation Kits and samples of Emerald 3.2M are now available.


OmniVision Presents its First 0.702um Pixel and 64MP Sensor for Smartphones

Image Sensors World

PRNewswire: OmniVision announces the OV64B, the industry’s only 64MP sensor with a 0.702um pixel size, enabling 64 MP resolution in a 1/2” optical format for the first time. Built on OmniVision’s PureCel Plus stacked die technology, this sensor provides 4K video recordings with EIS, as well as 8K video at 30fps.

“This year, TSR estimates there will be 127 million image sensors with 64 MP or higher resolution shipped to smartphone manufacturers,” said Arun Jayaseelan, staff marketing manager at OmniVision. “The OV64B, with the industry’s smallest size for a 64 MP sensor, is further enabling this trend among high end and high mainstream smartphone designers who want the best resolution with the tiniest cameras.”

The OV64B supports 3-exposure, staggered HDR timing for up to 16 MP video modes. It integrates a 4-cell CFA with on-chip hardware re-mosaic, which provides 64 MP Bayer output in real time. In low light conditions, this sensor can use near-pixel binning to output a 16 MP image with 4X the sensitivity, offering 1.4um equivalent performance.
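The "1.4um equivalent" claim is just 2x2 binning arithmetic on the quoted pixel pitch; a quick check of the numbers:

```python
# 4-cell (2x2) binning arithmetic for a quad-Bayer sensor like the OV64B.
pixel_pitch_um = 0.702
full_resolution_mp = 64

binned_pitch_um = pixel_pitch_um * 2              # four cells merge into one output pixel
binned_resolution_mp = full_resolution_mp / 4     # resolution drops by 4x
sensitivity_gain = (binned_pitch_um / pixel_pitch_um) ** 2  # light-collecting area per output pixel

print(f"Binned pitch: {binned_pitch_um:.3f} um (~1.4 um equivalent)")
print(f"Binned resolution: {binned_resolution_mp:.0f} MP, sensitivity gain: {sensitivity_gain:.0f}x")
```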

The sensor features type-2, 2x2 microlens phase detection AF (ML-PDAF) to boost autofocus accuracy, especially in low light. It also provides a C-PHY interface and supports slow motion video for 1080p at 240 fps and 720p at 480 fps. Other output formats include 64 MP at 15 fps, 8K video at 30fps, 16 MP captures with 4-cell binning at 30fps, 4K video at 60fps and 4K video with EIS at 30fps.

Samples of the OV64B are expected to be available in May 2020.


Samsung Tetracell Promotional Video

Image Sensors World

Samsung keeps publishing its image sensor promotional videos. The 3rd video in the series talks about Tetracell pixels:


DJI Mavic Air 2 Review

Cameralabs

The DJI Mavic Air 2 is a mid-range drone aimed at anyone who wants a step-up from entry-level models without breaking the bank. Find out why Adam was impressed in his review!…

The post DJI Mavic Air 2 Review appeared first on Cameralabs.


Omnivision CameraCubeChip Reverse Engineering

Image Sensors World

SystemPlus Consulting publishes a reverse engineering report of the OVM6948 CameraCubeChip:

"The smallest camera in the world, it is a Video Graphics Array (VGA) camera module. It integrates a Wafer-Level Packaged (WLP) OmniVision CMOS Image Sensor (CIS) and a small Wafer-Level Optic (WLO) manufactured by VisEra. The entire camera module is provided in a 0.65mm x 0.65mm x 1.2mm 4-pin package including a 0.58mm x 0.58mm CIS die. The CIS die is packaged by Xintec’s new WLP technology for CIS. The bumps on the backside are connected with Through Silicon Vias (TSVs). A complex stacking of eight optical layers in 1mm is necessary to provide the wide 120 degree field of view and an extended focus range of 3mm to 30mm. Moreover, the OMV6948 is a fully wafer bonded solution.

The endoscopy market was worth $6B in 2019, with reusable flexible endoscopy being the major market, worth more than $4B. However, the new standard for small diameter endoscopes, specifically bronchoscopes and urethroscopes, is now becoming disposable flexible endoscopes. Omnivision is one of the leaders in providing very small camera modules aiming at supplying this new, developing market.
"


Argo.ai Keynote on SWIR SPAD LiDAR

Image Sensors World

IEEE International Conference on Computational Photography (ICCP 2020) publishes a keynote by Argo.ai's Mark Itzler about automotive SWIR SPAD-based LiDAR, "Single-photon LiDAR Imaging: from airborne to automotive platforms." Argo.ai got into the LiDAR business through the acquisition of Harris spin-off Princeton Lightwave in 2017.



SPAD-based Imaging in Sunlight

Image Sensors World

A University of Wisconsin-Madison ICCP 2020 presentation explains the challenges of SPAD-based ToF cameras in strong ambient light:
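To see why strong ambient light is hard for SPAD ToF, picture the per-pixel photon-arrival histogram: sunlight adds a roughly uniform background of counts that the laser-return peak has to rise above. A toy simulation of that effect (all counts are invented illustrative numbers, not from the presentation):

```python
import random

# Toy SPAD ToF histogram: uniform ambient background plus one laser-return peak.
# All rates are made up for illustration; strong sunlight raises the background
# the signal peak has to exceed (and, in real sensors, also causes pile-up).
random.seed(0)
NUM_BINS = 200
ambient_mean = 50      # assumed ambient counts per bin under bright sunlight
signal_counts = 400    # assumed total laser-return counts
true_bin = 120         # bin corresponding to the target distance

hist = [ambient_mean + random.randint(-10, 10) for _ in range(NUM_BINS)]
hist[true_bin] += signal_counts

peak_bin = max(range(NUM_BINS), key=lambda b: hist[b])
print(f"Mean background: {sum(hist) / NUM_BINS:.0f} counts/bin")
print(f"Peak found at bin {peak_bin} (true target bin: {true_bin})")
```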



Another presentation from the same group talks about HDR imaging with a SPAD camera:


Backside Passivation

Image Sensors World

AIP paper "Backside passivation for improving the noise performance in CMOS image sensor" by Peng Sun, Sheng Hu, Wen Zou, Peng-Fei Wang, Lin Chen, Hao Zhu, Qing-Qing Sun, and David Wei Zhang from Fudan University and Wuhan Xinxin Semiconductor Manufacturing Co. analyses the passivation approaches:

"Great efforts have been made in the past few years to reduce the white pixel noise in complementary metal–oxide–semiconductor (CMOS) image sensors. As a promising approach, the surface passivation method focusing on the field-effect passivation has been studied in this work. Based on the metal–oxide–semiconductor capacitor device model, electrical measurement and analysis have been performed for characterizing the charge distribution in the system. The relationship between the flat band voltage and the white pixel performance has been set up, and the proposed passivation method that controls Si or SiO2 interface charge or traps has been proved effective in lowering the white pixel noise, which can be very attractive in improving the performance of CMOS image sensors for high-resolution and high-sensitivity applications."


EPFL Proposes 5T Pixel with 0.32e- Noise and Enhanced DR

Image Sensors World

IEEE Electron Device Letters gives early access to the EPFL paper "A CMOS Image Sensor Pixel Combining Deep Sub-electron Noise with Wide Dynamic Range" by Assim Boukhayma, Antonino Caizzone, and Christian Enz.

"This letter introduces a 5-transistors (5T) implementation of CMOS Image Sensors (CIS) pixels enabling the combination of deep sub-electron noise performance with wide dynamic range (DR). The 5T pixel presents a new technique to reduce the sense node capacitance without any process refinements or voltage level increase and features adjustable conversion gain (CG) to enable wide dynamic imaging. The implementation of the proposed 5T pixel in a standard 180 nm CIS process demonstrates the combination of a measured high CG of 250 μV/e- and low CG of 115 μV/e- with a saturation level of about 6500 e- offering photo-electron counting capability without compromising the DR and readout speed."

"Thanks to the high CG of 250 µV/e− and optimized PMOS SF, the read noise is as low as 0.32 e− RMS. This result is confirmed by Fig. 5 obtained by plotting the histogram of 1500 pixel outputs while the chip is exposed to very low input light. The histogram features peaks and valleys where each peak corresponds to a charge quantum."


"The reset phase consists in three steps. First, the RST switch is closed connecting IN to VRST. While VRST is set to VDD, the potential barrier between IN and SN is lowered by setting TX2 to a voltage VTX2H1 in order to dump the charge from the SN as depicted in Fig. 2(a). TX2 is set back to 0 in order to split the IN and SN and freeze the SN voltage at its maximum level.

VRST is then switched to a lower voltage VRSTL between the pin voltage of the PPD Vpin and VSN,max. After this step, the reset switch is opened again to freeze the IN voltage at a value VIN as depicted in Fig. 2(b). The last step of the reset phase consists in setting TX2 to a voltage VTX2H2 making the barrier between the IN and SN equal or slightly higher than VIN as shown in Fig. 2(c). In this way, any excess charge transferred to IN would diffuse towards the SN.

After lowering back TX2, the SN reset voltage VSN,rst is sensed. Transferring the charge integrated in the PPD to the SN takes place by pulsing both TX1 and TX2 as depicted in Fig. 2(d). TX1 is pulsed to a value VTX1H in order to set the voltage under the TG between the PPD pin voltage Vpin and the intermediate node voltage VIN while TX2 is pulsed again to transfer this charge to the SN. The signal corresponds to the difference between the SN
voltage after reset VSN,rst and the one sensed after the transfer VSN,transfer.
"


Hitachi-LG LiDAR Adapted to Check Social Distancing

Image Sensors World

Hitachi-LG 3D LiDAR has been complemented with software to check social distancing:



Ams dToF Sensor Video

Image Sensors World

Ams publishes a video explaining its TMF8801 1D dToF sensor features:




Omnivision Endoscopic Sensor Promotional Video

Image Sensors World

Omnivision publishes a promotional video about its minuscule endoscopic sensor:



While we are at it: some time ago, Omnivision sent me a season's greetings card with this unbelievably tiny sensor glued onto it:


CIS Packaging Houses Anticipate Higher Demand: Tong Hsing, ATII

Image Sensors World

Digitimes: CIS packaging house Tong Hsing expands its wafer reconstruction capacity to support production of high-megapixel sensors for multi-lens cameras massively adopted by Chinese handset vendors for their new models.

While the handset sales in the China market may drop in 2020, the growing adoption of multi-lens camera modules and the upgrades in CIS resolution and chip sizes will contribute positively to the firm's CIS packaging business.

Tong Hsing has acquired Kingpak, which is dedicated to BGA packaging for automotive CIS devices, seeking to form a heavyweight team to better serve global CIS makers. The world's top 5-6 CIS suppliers in Japan, South Korea, the US and Europe are expected to become clients of the expanded Tong Hsing.

Digitimes: Asia Tech Image (ATII) will expand monthly capacity at its factory in Myanmar from the current 1.1M contact image sensor modules to 1.3M units by the end of 2020. As remote learning and working are on the rise due to the coronavirus pandemic, demand for contact image sensor modules for scanners and printers has grown significantly, according to company president Iris Wu.


Image Sensor-Based Random Number Generator Use Cases: from Swiss Lottery to Samsung Smartphone

Image Sensors World

IDQ shows a number of use cases for its image sensor-based random number generator, the Quantis QRNG chip:


The mentioned use cases include the Swiss lottery and a UK bank:


Sammobile speculates that the newest Samsung-SK Telecom Galaxy A71 5G Quantum smartphone has the Quantis QRNG IDQ250C2 chip inside. IDQ states that it "is the first Quantum Random Number Generator designed and manufactured specifically for mobile handsets, IoT and edge devices." If true, the A71 Quantum has 6 image sensors: 4 inside its rear cameras, one inside the selfie camera, and one more hidden sensor inside the random number generator.
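IDQ's QRNG derives its entropy from photon shot noise measured by an image sensor; the exact chip pipeline is proprietary, but the general idea can be sketched with a toy model (this is purely illustrative and not IDQ's algorithm):

```python
import random

# Toy sketch of turning image-sensor shot noise into random bits.
# A real QRNG uses calibrated photon statistics and vetted randomness
# extractors; here a simulated pixel stands in for the hardware.
def simulated_pixel_read(mean_electrons: float = 100.0) -> int:
    """Stand-in for a pixel under constant illumination: shot-noise-limited count."""
    # Gaussian approximation of Poisson shot noise, for simplicity of the toy model.
    return max(0, int(random.gauss(mean_electrons, mean_electrons ** 0.5)))

def extract_bits(num_bits: int) -> str:
    """Von Neumann extraction on the parity of successive pixel reads to remove bias."""
    bits = []
    while len(bits) < num_bits:
        a, b = simulated_pixel_read() & 1, simulated_pixel_read() & 1
        if a != b:              # keep only 01/10 pairs
            bits.append(str(a))
    return "".join(bits)

print(extract_bits(32))
```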


Rise of Event Cameras

Image Sensors World

EETimes publishes an article by Tobi Delbruck (ETH Zurich / University of Zurich), "The Slow but Steady Rise of the Event Camera." The article refers to an excellent Github repository of event-based camera papers maintained by the same ETH Zurich / University of Zurich group.

Neuromorphic silicon retina “event camera” development languished, only gaining industrial traction when Samsung and Sony recently put their state-of-the-art image sensor process technologies on the market.

In 2017, Samsung published an ISSCC paper on a 9-um pixel, back-illuminated VGA dynamic vision sensor (DVS) using their 90-nm CIS fab. Meanwhile, Insightness announced a clever dual intensity + DVS pixel measuring a mere 7.2-um.

Both Samsung and Sony have built DVS with pixels under 5um based on stacked technologies where the back-illuminated 55-nm photosensor wafer is copper-bumped to a 28-nm readout wafer.

Similar to what occurred with CMOS image sensors, event camera startups like Insightness (recently acquired by Sony), iniVation (who carry on the iniLabs mission), Shanghai-based CelePixel and well-heeled Prophesee are established, with real products to sell. Others will surely follow.

I now think of DVS development as mainly an industrial enterprise, but it was the heavy focus on sparse computing that led us over the last five years to exploit activation sparsity in hardware AI accelerators. Like the spiking network in our brains, these AI accelerators only compute when needed. This approach—promoted for decades by neuromorphic engineers—is finally gaining traction in mainstream electronics.


I came up with the DVS pixel circuit. This pixel architecture is the foundation of all subsequent generations from all the major players (even when they don’t say so on their web sites).
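The behavior of that pixel is simple to state even if the analog circuit is not: each pixel remembers a reference log-intensity and fires an ON or OFF event whenever the current log-intensity drifts from it by more than a contrast threshold. A behavioral sketch (the 15% threshold is an arbitrary illustrative value, not a spec of any particular sensor):

```python
import math

# Behavioral model of a DVS pixel: emit ON/OFF events when log intensity
# moves more than a contrast threshold away from the stored reference.
class DvsPixel:
    def __init__(self, initial_intensity: float, threshold: float = 0.15):
        self.log_ref = math.log(initial_intensity)
        self.threshold = threshold

    def update(self, intensity: float) -> list:
        """Return the ON/OFF events generated by a new intensity sample."""
        events = []
        log_i = math.log(intensity)
        while log_i - self.log_ref > self.threshold:
            events.append("ON")
            self.log_ref += self.threshold
        while self.log_ref - log_i > self.threshold:
            events.append("OFF")
            self.log_ref -= self.threshold
        return events

pixel = DvsPixel(initial_intensity=100.0)
for sample in (100.0, 140.0, 90.0, 90.0):
    print(sample, pixel.update(sample))
# A static scene (the repeated 90.0 sample) produces no events, which is
# where the sparsity and power savings come from.
```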


Prophesee Article in EETimes

Image Sensors World

EETimes publishes an article "Bringing Neuromorphic Vision to the Edge" by Jean-Luc Jaffard, Prophesee’s VP of Sensor Engineering and Operations. A few quotes:

Underlying the advantages of neuromorphic vision systems is an approach called event-based vision. Event-based vision is driving a paradigm shift away from an image capture method that has been used for more than a century: frame-based capture.

Such an approach delivers substantial advantages, including:
  • Speed — Enabled by microsecond timestamping and reduced time to action
  • Data efficiency — Filtering out redundant, static data at the pixel level, before it hits the CPU
  • Light sensitivity — High dynamic range and low light sensitivity
  • Energy efficiency: Less power consumption for always-on, mobile and remote use cases

Many applications require high image quality not possible with event-based cameras alone. Hybrid frame/event-based approaches can utilize the best characteristics of each. With a hybrid approach, event-based vision can be used to acquire fewer frames and use asynchronous events to fill the gaps between them, which in turn reduces bandwidth and ultimately power consumption.
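That hybrid idea can be made concrete: between two keyframes, a pixel's brightness can be rolled forward by integrating its events, each of which represents one contrast-threshold step in log intensity. A minimal sketch (the 0.15 contrast step and the event list are assumptions for illustration; real reconstruction pipelines are far more sophisticated):

```python
import math

# Filling the gap between frames with events: each event is a +/- step of
# CONTRAST in log intensity at one pixel. Values here are illustrative only.
CONTRAST = 0.15

def predict_brightness(keyframe_value: float, event_polarities) -> float:
    """Roll a pixel's keyframe brightness forward through its ON(+1)/OFF(-1) events."""
    log_i = math.log(keyframe_value)
    for polarity in event_polarities:
        log_i += polarity * CONTRAST
    return math.exp(log_i)

# A pixel read 80 in the last frame and produced three ON events since then.
print(f"Predicted brightness before the next frame: {predict_brightness(80.0, [+1, +1, +1]):.1f}")
```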


Sigma 30mm f1.4 DC DN review

Cameralabs

The Sigma 30mm f1.4 DC DN is a standard lens designed for mirrorless cameras with 'cropped' sensors. Sigma offers it in Sony E, Canon EF-M and Micro Four Thirds mounts. Find out why it could be your ideal next lens in my full review!…

The post Sigma 30mm f1.4 DC DN review appeared first on Cameralabs.


Samsung 108MP Nonacell Promotional Video

Image Sensors World

Samsung keeps publishing promotional videos about its image sensors. The latest one presents the 108MP ISOCELL Bright HM1:


EETimes on Event-Driven Sensor Use Cases

Image Sensors World

EETimes publishes an article "Neuromorphic Vision Sensors Eye the Future of Autonomy" by Anne-Françoise Pelé. A few quotes:

“Why do we say that an event-based vision sensor is neuromorphic? Because each pixel is a neuron, and it totally makes sense to have the artificial intelligence next to the pixel,” Pierre Cambou, principal analyst at Yole Développement (Lyon, France) told EE Times.

“It has taken a while for us to come up with a good strategy,” iniVation’s CEO Kynan Eng said in an interview. While other companies perform high-speed counting, Eng said “it is no big deal counting objects at high speed” since conventional cameras can get “a thousand frames per second, even more.” If applications don’t need to respond immediately, then “there is no point using our sensors.”

“I would [categorize] industrial vision as a relatively low risk, but low volume market,” said Eng. Hence, there has been little interest from venture funds. With an eye toward organic growth, iniVation is thinking in terms of economies of scale. Through its 2019 partnership with Samsung, iniVation shifted from manufacturing and silicon sales to selling cameras to the machine vision industry. “You can sell the $100 silicon, or you can package it in a camera and sell a $1,000 camera,” noted the Yole analyst Cambou.

“We recognized that it did not make sense for us to become a chip company,” Eng said. “We could raise a billion, and it would still not be enough to make the chip ourselves. People were asking us why our cameras were expensive and how we could make them cheap.” Partnering with Samsung, “makes that question go away.”

“A window for mobile will open in 2021 or 2022,” said Cambou. “Today, we have five cameras on the back of a Huawei phone.” Moving forward, he continued, “I don’t see anything else than an always-on neuromorphic camera. Some people talk about multispectral, but I am more thinking about always-on awareness.” An event-based camera could enable touchless interactions such as locking and unlocking phones.

Event-based cameras are power-efficient because pixel activity is insignificant; almost no energy is needed for “silent” pixels. That’s a selling point as autonomous vehicles transition from internal combustion to electric engines. For car companies, “power consumption is much more important than what I thought initially,” said Eng. “In their current planning for electric cars, if a car uses a 4kW total power budget at constant speed, half of that is for moving the car and the other half is for the computing. Every watt you can save on the compute, you can add to the range of the car or have a smaller battery.”


iniVation’s DAVIS346 DVS


ON Semi Short Range LiDAR Demo

Image Sensors World

ON Semi shows a demo of its Pandion sensor, a 400 x 100 SPAD array sensor for LiDAR applications:


Intel Capital Imaging Portfolio: Trieye SWIR, Prophesee Event-Based Sensor

Image Sensors World

Avi Bakal, Trieye CEO & Co-Founder, talks about SWIR imaging in automotive applications:



Luca Verre, Prophesee CEO & Co-Founder, talks about event driven vision:


Espros Reviews Ways to Failsafe ToF Imager

Image Sensors World

Espros' April newsletter discusses ways to make a ToF camera failsafe:

Time-of-flight cameras are often used in safety-critical applications, e.g. in anti-collision sensors for robots. It is essential that the sensor works correctly or that, in case of a malfunction, the robot's control system detects the malfunction. A serious fault in a TOF camera is the failure of one or more pixels of the TOF imager. Whereas a «stuck-at» failure is relatively easy to detect, a floating signal which can randomly take any state is not.

A pixel in an imager can be faulty in a way that it reports any grayscale level, from fully dark to fully bright. This can also be the case in a TOF imager. Thus, in a safety-critical application, the distance to an object reported by a pixel cannot be assumed to be correct: the pixel may report the correct distance within a given tolerance band, or any other, incorrect distance. Such behavior is fatal in an anti-collision sensor based on a 3D camera. The question now is how to detect pixels that report incorrect distances.

There are several ways to do so:
  1. Comparison: Comparing the reported distance with a known distance. This can be applied e.g. in a door sensor where the sensor looks from the top of the door down to the floor.
  2. Offset: Adding a delay into the illumination path (or the demodulation path) to impose a virtual distance shift (see the sketch after this list). After subtracting the distance shift imposed by the delay, the same or a similar distance as the one measured without the delay should result.
  3. Scaling: Changing the modulation frequency but not changing the distance calculation parameters accordingly. This is similar to option 2, but the distance shift is not fixed; it depends on the distance value.
  4. Pattern: By changing the modulation or demodulation pattern, good pixels report the same (correct) distance even in a different phase sequence.
  5. Fill & Spill: Inject a defined amount of charge into a pixel and check the response of the pixel.
There are additional ways to detect faulty pixels. However, the five concepts listed above are very simple to implement. ESPROS TOF imagers fully support all these options.
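The Offset check (method 2 above) is straightforward to express in software around the sensor driver: measure, insert a known delay, measure again, and verify that every pixel shifted by the expected virtual distance. A hedged sketch; the measure_frame and set_extra_delay callables are placeholders, not Espros API calls:

```python
# Sketch of the 'Offset' failsafe check: a known extra delay in the illumination
# (or demodulation) path must shift every good pixel's distance predictably.
SPEED_OF_LIGHT = 299_792_458.0  # m/s

def expected_shift_m(delay_s: float) -> float:
    """Virtual distance shift produced by an extra round-trip delay."""
    return SPEED_OF_LIGHT * delay_s / 2.0

def offset_check(measure_frame, set_extra_delay, delay_s=2e-9, tolerance_m=0.03):
    """Return (x, y) coordinates of pixels whose distance does not shift as expected."""
    set_extra_delay(0.0)
    baseline = measure_frame()           # 2D list of distances in meters
    set_extra_delay(delay_s)
    shifted = measure_frame()
    target = expected_shift_m(delay_s)   # e.g. 2 ns -> ~0.30 m
    suspect = []
    for y, (row_a, row_b) in enumerate(zip(baseline, shifted)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            if abs((b - a) - target) > tolerance_m:
                suspect.append((x, y))
    return suspect
```

Pixels flagged by such a check would then be masked, or the sensor declared faulty, which is the point of the failsafe concepts above.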

