Event-Based News: Prophesee, iniVation, Samsung

Image Sensors World        Go to the original article...

EETimes publishes Junko Yoshida's interview with Luca Verre, Prophesee CEO. A few quotes:

"The commercial product we have is a VGA sensor. It’s in mass production. We are currently deploying shipping for industrial applications.

We have a new sensor generation, which is an HD sensor, so one million pixels, 720p. This is the result of joint cooperation we have done with Sony, which will be published at ISEC [ISSCC, probably] in February in San Francisco.

There has been some research work done together with Sony. Yes, Sony is indeed interested in event-based technology, but unfortunately I cannot tell you more than that. One of the main challenges we have been solving, moving from the VGA sensor to the HD sensor is the capability now to stack the sensor, to use a very advanced technology node that enables us to reduce the pixel pitch. So to make actually the sensor much smaller and cost-effective.

Automotive remains one of the key verticals we are targeting, because our technology, event-based technology, shows clear benefit in that space with respect to low latency detection, low data rate and high dynamic range.

...we did some tests in some controlled environments with one of the largest OEMs in Europe, and we compared side by side the frame-based sensor with an event-based sensor, showing that, while the frame-based camera system was failing even in fusion with a radar system, our system was actually capable of detecting pedestrians in both daylight conditions and night light conditions.
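
The low-latency and low-data-rate benefits quoted above follow from how an event pixel works: instead of sampling frames, each pixel emits an event only when its log-intensity changes by more than a contrast threshold. A minimal single-pixel sketch (the threshold and signal values are illustrative, not Prophesee's):

```python
import math

def brightness_to_events(samples, threshold=0.2):
    """Emit (index, polarity) events whenever log-intensity has changed
    by more than `threshold` since the last event, mimicking an event pixel."""
    events = []
    ref = math.log(samples[0])
    for i, s in enumerate(samples[1:], start=1):
        logs = math.log(s)
        while abs(logs - ref) >= threshold:
            polarity = 1 if logs > ref else -1
            ref += polarity * threshold
            events.append((i, polarity))
    return events

# A static scene produces no data; only the step change emits events.
static = [100.0] * 5
step = [100.0, 100.0, 150.0, 150.0, 150.0]
print(brightness_to_events(static))  # []
print(brightness_to_events(step))    # [(2, 1), (2, 1)]
```

A static scene generates no events at all, which is where the data-rate savings over a frame-based sensor come from.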

iniVation wins a Best of Innovation award at CES 2020 in the ‘Embedded Vision’ category.

The award is for the company's newest product, the DVXplorer, which uses an all-new custom-designed sensor from Samsung. DVXplorer is said to be the world’s first neuromorphic camera employing technologies suitable for mass-production applications.

Thanks to TL for the link!

Samsung's Hyunsurk Eric Ryu presented their event driven pixels at the 2nd International Workshop on Event-based Vision and Smart Cameras:

NIT Presents 7.5um Pixel InGaAs SWIR Sensor

New Imaging Technologies (NIT) announces its first commercially available SWIR InGaAs sensor with a pitch of 7.5µm, the result of several years of R&D on an in-house hybridization process. This process does not use the classical indium bumps and allows manufacturing hybrid sensors with very small pitches at high yield and reduced cost.

The first available component at 7.5µm pitch is a line array with the following characteristics:
  • Pixel Number: 2048
  • Pitch: 7.5µm
  • Line speed: 60 kHz @ full line
  • Well Fill: 25 Ke-
  • Readout Noise: less than 70e-
  • Dark Current: 8 fA @ 15°C
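
A quick sanity check on the spec list: the well fill and readout noise figures imply a single-shot dynamic range of roughly 20·log10(25,000/70) ≈ 51 dB:

```python
import math

well_fill_e = 25_000   # 25 Ke- full well, from the spec list
read_noise_e = 70      # worst-case readout noise, electrons

dynamic_range_db = 20 * math.log10(well_fill_e / read_noise_e)
print(f"Implied dynamic range: {dynamic_range_db:.1f} dB")  # ~51 dB
```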

Omron Demos its People Recognition Sensor

Inavate: Omron is to demo its second-generation digital signage body and face detection/face recognition system. The Human Vision Component HVC-P2 with OKAO Vision software features ten image sensing functions including body detection, face recognition, hand detection, age estimation, gender estimation, and expression estimation. The OKAO software can recognize faces up to 3m away and can detect a human body up to 17m away.

Sony Automotive Sensors, LiDAR, ToF

Sony publishes a video presenting its safety cocoon devices:

FLIR Updates its Periodic Table of Image Sensors

FLIR (formerly Point Grey) publishes a 2020 version of its Periodic Table of Image Sensors:

"Updated for 2020 - now with over 130 machine vision sensors, incluing third generation Sony Pregius and fourth generation Sony Pregius S global shutter sensors.

With so many sensors to choose from, we understand that it could be tricky to keep track of them. This handy chart organizes over 130 sensors from classic CCDs to the latest CMOS technology by resolution and speed. We suggest printing off this free poster and laminating it, then pinning it up on your wall for easy reference."

Thanks to TL for the link!

3D LiDAR Imaging with Ge-on-Si SPAD Detectors

Optics Express paper "3D LIDAR imaging using Ge-on-Si single–photon avalanche diode detectors" by Kateryna Kuzmenko, Peter Vines, Abderrahim Halimi, Robert J. Collins, Aurora Maccarone, Aongus McCarthy, Zoë M. Greener, Jarosław Kirdoda, Derek C. S. Dumas, Lourdes Ferre Llin, Muhammad M. Mirza, Ross W. Millar, Douglas J. Paul, and Gerald S. Buller from Heriot-Watt and Edinburgh Universities, UK, presents a concept design of LiDAR with a SPAD detector cooled down to 100K:

"We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single–photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer range depth measurements.

... recently, the use of planar geometry devices [39] yielded a significant step change improvement in performance. Vines et al. [39] reported a normal incidence planar geometry Ge-on-Si SPADs with 38% SPDE at 125 K at a wavelength of 1310 nm and a noise–equivalent power (NEP) of 2 × 10−16 WHz-1/2. In addition, these devices clearly demonstrated lower levels of afterpulsing compared with InGaAs/InP SPAD detectors operated under nominally identical conditions. The high SPDEs of Ge-on-Si SPADs and their reduced afterpulsing compared to InGaAs/InP SPADs provides the potential for significantly higher count rate operation and, consequently, reduced data acquisition times. Planar Ge-on-Si SPADs exhibit compatibility with Si CMOS processing, potentially leading to the development of inexpensive, highly efficient Ge-on-Si SPAD detector arrays. Here we report a successful demonstration of LIDAR 3D imaging using an individual planar Ge-on-Si SPAD operating at a wavelength of 1450 nm.

Omnivision and Artilux to Collaborate on Ge-on-Si Sensors for Smartphones

PRNewswire: OmniVision and Artilux announce they have executed a formal letter of intent to collaborate on GeSi-based 3D sensors, following a series of evaluations and analyses. The main objective of this collaboration is to combine OmniVision's CMOS imaging technology and market position with Artilux's GeSi 3D sensing technology, and to accelerate the delivery of comprehensive RGB and 3D imaging solutions to the mobile phone segment.

The new product offerings will not only cover the mainstream light sensing spectrum from visible light to 850nm/940nm, but will further extend to 1350nm/1550nm, for improved outdoor experience and eye safety for multiple growing digital imaging market segments.

SET Hybrid Bonding Machine Boasts 1um (3-sigma) Accuracy

SET NEO HB hybrid/direct bonding machine claims +/-1um 3-sigma alignment accuracy:

Brillnics 4um Voltage Domain GS Pixel

MDPI paper "A Stacked Back Side-Illuminated Voltage Domain Global Shutter CMOS Image Sensor with a 4.0 μm Multiple Gain Readout Pixel" by Ken Miyauchi, Kazuya Mori, Toshinori Otaka, Toshiyuki Isozaki, Naoto Yasuda, Alex Tsai, Yusuke Sawai, Hideki Owada, Isao Takayanagi, and Junichi Nakamura from Brillnics is a part of the Special issue on the 2019 International Image Sensor Workshop (IISW2019).

"A backside-illuminated complementary metal-oxide-semiconductor (CMOS) image sensor with 4.0 μm voltage domain global shutter (GS) pixels has been fabricated in a 45 nm/65 nm stacked CMOS process as a proof-of-concept vehicle. The pixel components for the photon-to-voltage conversion are formed on the top substrate (the first layer). Each voltage signal from the first layer pixel is stored in the sample-and-hold capacitors on the bottom substrate (the second layer) via micro-bump interconnection to achieve a voltage domain GS function. The two sets of voltage domain storage capacitor per pixel enable a multiple gain readout to realize single exposure high dynamic range (SEHDR) in the GS operation. As a result, an 80dB SEHDR GS operation without rolling shutter distortions and motion artifacts has been achieved. Additionally, less than −140dB parasitic light sensitivity, small noise floor, high sensitivity and good angular response have been achieved."

e2v Announces 37.7MP 86fps GS Sensor in 4/3-inch Format

Teledyne e2v announces its new Emerald 36M, a 37.7MP sensor for industrial and outdoor applications requiring both high resolution and high speed.

Emerald 36M combines 6k square resolution, high frame rate, low noise, high QE, and wide angular response. The sensor is available in ultra-high-speed and high-speed versions, providing 86fps and 43fps at full resolution, respectively.

The sensor fits standard Four Thirds optics, and is said to be the world’s highest resolution global shutter sensor to fit these lenses. Emerald 36M is pin-to-pin and optically compatible with Emerald 67M, so that multiple resolutions and speed grades are possible from a single camera design.

Marie-Charlotte Leclerc, Marketing Manager at Teledyne e2v, said: “We are delighted to release the Emerald 36M. The sensor is already gathering interest from vision system designers looking forward to improved accuracy and throughput with optimized inspection system paths. And beyond the factory floor, Emerald 36M enables surveillance over wider fields of view and provides higher autonomy for aerial mapping and security solutions.”

Evaluation kits and samples of Emerald 36M are available now.

Samsung Surface Plasmon Enhanced Organic Image Sensor

Nature paper "Surface plasmon enhanced Organic color image sensor with Ag nanoparticles coated with silicon oxynitride" by Sung Heo, Jooho Lee, Gae Hwang Lee, Chul-Joon Heo, Seong Heon Kim, Dong-Jin Yun, Jong-Bong Park, Kihong Kim, Yongsung Kim, Dongwook Lee, Gyeong-Su Park, Hoon Young Cho, Taeho Shin, Sung Young Yun, Sunghan Kim, Yong Wan Jin, and Kyung-Bae Park from Samsung Advanced Institute of Technology, Dongguk University, Yonsei University, Seoul National University, and Chonbuk National University shows Samsung's efforts to improve OPD pixels:

"As organic photodetectors with less than 1 μm pixel size are in demand, a new way of enhancing the sensitivity of the photodetectors is required to compensate for its degradation due to the reduction in pixel size. Here, we used Ag nanoparticles coated with SiOxNy as a light-absorbing layer to realize the scale-down of the pixel size without the loss of sensitivity. The surface plasmon resonance appeared at the interface between Ag nanoparticles and SiOxNy. The plasmon resonance endowed the organic photodetector with boosted photon absorption and external quantum efficiency. As the Ag nanoparticles with SiOxNy are easily deposited on ITO/SiO2, it can be adapted into various organic color image sensors. The plasmon-supported organic photodetector is a promising solution for realizing color image sensors with high resolution below 1 μm."

"In summary,... Although the effective area for receiving the incident photon is expected to decrease with the scaling-down of the pixels, the introduction of the SPR in OCIS counters the problem without losing the spatial resolution. With further systematic research conducted on the pattern and size of Ag NPs, the SPR is likely to be the sole solution for realizing OCISs with high resolution below 1 μm."

LIGO Interferometer Reduces Quantum Fluctuations of Light

The LIGO gravitational-wave observatory announces that it uses squeezed light to improve its measurement accuracy:

"The Heisenberg uncertainty principle states that we can't know both the position and the velocity of a quantum particle perfectly--the better we know the position, the worse we know the velocity, and vice versa. For light waves, the Heisenberg principle tells us that there are unavoidable uncertainties in amplitude and phase that are connected in a similar way. One of the stranger consequences of quantum theory is that there must be fluctuating electric and magnetic fields, even in a total vacuum. In a normal vacuum state, these "zero-point" fluctuations are completely random and the total uncertainty is distributed equally between the amplitude and the phase. However, by using a crystal with non-linear optical properties, it is possible to prepare a special state of light where most of the uncertainty is concentrated in only one of the two variables. Such a crystal can convert normal vacuum to "squeezed vacuum", which has phase fluctuations SMALLER than normal vacuum! At the same time, the amplitude fluctuations are larger, but phase noise is what really matters for LIGO."

SPAD PDP Simulation

National Chiao Tung University, Taiwan paper "Photon-Detection-Probability Simulation Method for CMOS Single-Photon Avalanche Diodes" by Chin-An Hsieh, Chia-Ming Tsai, Bing-Yue Tsui, Bo-Jen Hsiao, and Sheng-Di Lin is a part of the MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Single-photon avalanche diodes (SPADs) in complementary metal-oxide-semiconductor (CMOS) technology have excellent timing resolution and are capable to detect single photons. The most important indicator for its sensitivity, photon-detection probability (PDP), defines the probability of a successful detection for a single incident photon. To optimize PDP is a cost- and time-consuming task due to the complicated and expensive CMOS process. In this work, we have developed a simulation procedure to predict the PDP without any fitting parameter. With the given process parameters, our method combines the process, the electrical, and the optical simulations in commercially available software and the calculation of breakdown trigger probability. The simulation results have been compared with the experimental data conducted in an 800-nm CMOS technology and obtained a good consistence at the wavelength longer than 600 nm. The possible reasons for the disagreement at the short wavelength have been discussed. Our work provides an effective way to optimize the PDP of a SPAD prior to its fabrication."

SmartSens Announces Starlight Upgrade Technology, IoT Award

The Starlight H-Series sensor announcement has been removed at SmartSens request.

PRNewswire: SmartSens SC132GS sensor has been selected as the winner of the "IoT Semiconductor Solution of the Year" award in the 4th annual IoT Breakthrough Awards program from IoT Breakthrough, a market intelligence organization.

"We expect to see the SmartSens SC132GS perform outstandingly in fields such as IoT that demand high efficiency and efficacy," said Chris Yiu, CMO, SmartSens. "IoT solutions, ITS, machine vision for manufacturing automation, and intelligent security and surveillance are just a few examples of these fields. SmartSens is confident that as AI and 5G technologies mature, more applications will arise which the SC132GS and subsequent products from SmartSens will be able to handle in stride. We are proud to receive this significant industry recognition from IoT Breakthrough in recognition of our innovation and success."

4th International Workshop on Image Sensor and Systems (IWISS2018)

A full collection of papers from the 4th International Workshop on Image Sensor and Systems (IWISS2018), held in November 2018 at Tokyo Institute of Technology, Japan, is published online at the International Image Sensor Society site.

Espros ToF Performance in Sunlight

Espros publishes a demo video of its ToF camera performance with sunlight in the frame:

SPAD ToF Imager Thesis

University of Oulu, Finland, publishes the PhD thesis "Time-gating technique for a single-photon detection-based solid-state time-of-flight 3D range imager" by Henna Ruokamo.

"This thesis is concerned with the development of a solid-state 3D range imager based on use of the sliding time-gate technique in a SPAD array and short (~200 ps), intensive laser pulses. The area of the in-pixel electronics needed in time-gated imagers is small, which leads to a high fill factor and a possibility for implementing large arrays. The use of short laser pulses increases the precision and frame rate, since depth measurement can be limited to the range of interest, e.g. around the surface of the target. To increase the frame rate further, the array can be divided into subarrays with independently defined ranges. Tolerance of high background light is achieved by using sub-ns time-gate widths.

A SPAD array of 80 x 25 pixels is developed and realized here. The array is divided into 40 subarrays, the narrow (less than 0.8 ns) time-gating positions for which can be set independently. The time-gating for each of the subarrays is selected separately with an on-chip DLL block that has 240 outputs and a delay grid of ~100 ps. The fill factor of the sensor area is 32%. A 3D range image measurement at ~10 frames per second with centimetre-level precision is demonstrated for the case of passive targets within a range of ~4 metres and a field of view of 18 × 28 degrees, requiring an average active illumination power of only 0.1 mW. A frame rate of 70 range images per second was achieved with a higher laser average illumination power (~5 mW) and pulsing rate (700 kHz) when limiting the scanning range for each subarray to 30 cm around the surfaces of the targets.

An FPGA-based algorithm which controls the time-gating of the SPAD array and produces the range images in real time was also developed and realized."
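
The numbers in the thesis are easy to relate: a ~100 ps delay grid corresponds to a range step of c·Δt/2 ≈ 1.5 cm, consistent with the centimetre-level precision quoted, while a 0.8 ns gate spans about 12 cm of depth:

```python
C = 299_792_458.0          # speed of light, m/s

delay_step_s = 100e-12     # ~100 ps DLL delay grid (from the thesis)
gate_width_s = 0.8e-9      # sub-ns time-gate width (upper bound quoted)

range_step_m = C * delay_step_s / 2    # one-way range per delay tap
gate_depth_m = C * gate_width_s / 2    # depth slice covered by one gate

print(f"Range step per delay tap: {range_step_m * 100:.1f} cm")   # ~1.5 cm
print(f"Depth covered by one gate: {gate_depth_m * 100:.1f} cm")  # ~12 cm
```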

Opnous ToF Presentation

China-based startup Opnous publishes a presentation on its ToF products, reportedly licensed from Brookman and Shizuoka University.

LiDAR News: SK Telecom, Pioneer, Canon, Outsight, SOS Lab, ON Semi, Valeo, Kyocera, Livox

SK Telecom announces a ‘Next-Generation Single Photon LiDAR’ developed with Pioneer Smart Sensing Innovations Corporation (PSSI). SK Telecom and PSSI entered into a joint development agreement in September 2019 to develop a next-generation single photon LiDAR and have been actively working together since to commercialize the new LiDAR by 2021. The LiDAR combines SK Telecom's 1550nm SPAD with Pioneer's 2D MEMS scanning mirror.

SK Telecom’s single photon LiDAR transceiver technologies consist of a 1550nm laser, a SPAD, and Time-Correlated Single Photon Counting (TCSPC). The 1550nm laser, which can be operated at much higher power than a 905nm laser while remaining eye-safe, is said to enable detection of objects at distances of up to 500m.

Rather than a linear-mode APD, SK Telecom uses SPAD to ensure higher sensitivity to light. With SPAD, Next-Generation Single Photon LiDAR can accurately detect low-reflectivity objects like tires or pedestrians dressed in black.

Pioneer adds that Canon too is a part of this joint 500m-range LiDAR project with SK Telecom:

"Together with Canon Inc. (“Canon,” hereafter), PSSI is engaged in co-development of 3D-LiDAR sensors, which are regarded as an indispensable key device for the realization of autonomous driving in level-three and above autonomous vehicles (conditional automation).., which utilizes Micro Electric Memory Systems (MEMS) mirror-based scanning method and Canon’s optical technologies... The newly developed next-generation 3D-LiDAR sensor is a 1550nm wavelength sensor model which—although based on the same core technologies developed by PSSI and Canon—offers a greatly extended measurement distance made possible by the addition of transceiver (transmitter / receiver) technologies developed by SK Telecom Co., Ltd. (“SK Telecom,” hereafter) of South Korea. The new sensor is capable of high resolution and measurement at long distance of 500m."

EIN Newsdesk: Hyperspectral LiDAR startup Outsight has won its first customer. After an international competition, the Paris airport group (ADP) has chosen Outsight's "3D Smart Monitoring" system for two areas of Paris-Charles de Gaulle airport's international Terminal 2E, including the baggage claim zone. A key element of the technology is its Edge Privacy feature: the video stream does not leave the sensors. Because the image processing runs on the device itself, the system is fully autonomous in analyzing the captured data, avoiding the transit of sensitive data over networks.

Raul Bravo, President and Co-Founder of Outsight, says: "This first deployment at Paris Charles-de-Gaulle Airport, just a few months after the company's creation, demonstrates the relevance of our 3D perception approach in the context of improved operations and security. This first implementation follows a succession of announcements for Outsight, and can then be extended to other areas such as shopping malls and train stations."

BusinessWire: SOS Lab is to use ON Semi SPADs in its LiDAR. “We expect to develop (solid-state) type lidars more quickly and plan to mass-produce lidars for vehicles with built-in headlamps and bumpers within 2-3 years,” says JiSeong Jeong, CEO of SOS Lab.

Valeo Investor Presentation releases some data on its LiDAR sales, as a response to skeptics saying there is no market for automotive LiDARs now:

Kyocera presents LiDAR and camera in a single box:

“There is a big problem with many of these LIDAR systems,” says Hiroyuki Minagawa, Senior Manager of Research. “They use mechanical motors to rotate their scanning mirrors, and they can’t withstand the shaking and vibration that happens normally in a car. I don’t think they would last more than a couple of years.” To overcome this limitation, Kyocera developed an exclusive solution using a MEMS mirror housed inside Kyocera’s ceramic packaging technology. Ceramics are in Kyocera’s DNA, right down to its name, and breakthroughs in ceramic technology are an integral part of Kyocera’s history. This advanced ceramic technology is said to make Kyocera’s Camera-LIDAR Fusion Sensor far more durable in real-world driving conditions, making it the superior choice for autonomous driving systems.

Livox explains its non-repetitive scanning pattern advantages:

"The environment scanned by a Livox sensor increases with longer integration time as the laser explores new spaces within its Field of View (or FOV). As seen in the image below, a Livox Mid-40 or Mid-100 sensor generates a unique flower-like scanning pattern to create a 3D image of the surrounding environment. Image fidelity increases rapidly over time. In comparison, conventional lidar sensors use horizontal linear scanning methods that run the risk of blind spots, causing some objects in their FOV to remain undetected regardless of how long the scan lasts. The unique non-repetitive scanning method of the Livox lidar sensors enables nearly 100% FOV coverage with longer integration time which does not exist in any market alternatives today at this cost."

Livox Horizon LiDAR is sold for $999; the Tele-15 costs $1,499.

TPSCo 2.5um GS Pixel

TPSCo paper "A High-Performance 2.5 μm Charge Domain Global Shutter Pixel and Near Infrared Enhancement with Light Pipe Technology" by Ikuo Mizuno, Masafumi Tsutsui, Toshifumi Yokoyama, Tatsuya Hirata, Yoshiaki Nishi, Dmitry Veinger, Adi Birman, and Assaf Lahav is a part of MDPI Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"We developed a new 2.5 μm global shutter (GS) pixel using a 65 nm process with an advanced light pipe (LP) structure. This is the world’s smallest charge domain GS pixel reported so far. This new developed pixel platform is a key enabler for ultra-high resolution sensors, industrial cameras with wide aperture lenses, and low form factors optical modules for mobile applications. The 2.5 μm GS pixel showed excellent optical performances: 68% quantum efficiency (QE) at 530 nm, ±12.5 degrees angular response (AR), and quite low parasitic light sensitivity (PLS)—10,400 1/PLS with the F#2.8 lens. In addition, we achieved an extremely low memory node (MN) dark current 13 e−/s at 60 °C by fully pinned MN. Furthermore, we studied how the LP technology contributes to the improvement of the modulation transfer function (MTF) in near infrared (NIR) enhanced GS pixel. The 2.8 μm GS pixel using a p-substrate showed 109 lp/mm MTF@50% at 940 nm, which is 1.6 times better than that without an LP. The MTF can be more enhanced by the combination of the LP and the deep photodiode (PD) electrically isolated from the substrate. We demonstrated the advantage of using LP technology and our advanced stacked deep photodiode (SDP) technology together. This unique combination showed an improvement of more than 100% in NIR QE while maintaining an MTF that is close to the theoretical Nyquist limit (MTF @50% = 156 lp/mm)."

ISSCC Forum ‘Sensors for Health’

ISSCC 2020 Forum "Sensors for Health" organized by Matteo Perenzoni, FBK, Italy, has a number of image sensing presentations:
  • Flexible Electronics for Medical Imaging: from Patches to Large-Area X-Ray Imaging
    Kris Myny, IMEC, Leuven, Belgium
  • SPADs, ISFETs and Photodiodes: Mixed-Mode Sensing for Healthcare
    David Cumming, University of Glasgow, Glasgow, United Kingdom
  • CMOS Sensor Architectures and Circuital Solutions for Nuclear Medicine: from Scintillator-Based dSiPM to Monolithic Detectors
    Nicola Massari, FBK, Trento, Italy
  • CMOS/BiCMOS THz System-on-Chip for Life-Science Applications
    Ullrich Pfeiffer, University of Wuppertal, Wuppertal, Germany

Sony Announces 2.74um GS Products, 5.8um Quad-Bayer Sensor

Sony adds 2.74um GS pixel products to its lineup - shown in pink color below:

For security and surveillance applications, Sony introduces 5.8um Starvis quad-Bayer pixel in its new 1080p90 IMX482LQR sensor:

Ams Presents First Fruit of its Cooperation with SmartSens

BusinessWire: ams introduces the CGSS130 NIR CMOS global shutter sensor, aimed at 3D optical sensing applications such as face recognition and payment authentication, and designed to operate at much lower power than alternative implementations.

The CGSS130 sensor is said to be 4 times more sensitive to NIR wavelengths than most other image sensors on the market today. Since the IR emitter consumes most of the power in face recognition and other 3D sensing applications, using the CGSS130 will enable manufacturers to extend battery run-time in mobile devices. The sensor also creates the opportunity to implement face recognition in wearable devices and in other products powered by a very small battery, or to enable a new range of applications beyond face recognition, as the increased sensitivity extends the measurement range for the same power budget.

Stephane Curral, EVP and GM at ams’ ISS division, says: “Following the announcement of ams’ partnership with SmartSens Technology earlier this year, we are delighted to announce the first 3D Active Stereo Vision (ASV) reference design based on the CGSS130 voltage-based NIR enhanced global shutter image sensor. The 1.3MP stacked BSI sensor offers the highest Quantum Efficiency at 940nm, ideally suited for battery-powered devices. By supplying all main parts of the 3D system (illumination, receiver, SW) ams enables superior system performance with lower costs and a faster time to market for its customers.”

Development of the CGSS130 has been accelerated by ams’ partnership with SmartSens Technology. The new sensor is available for sampling.

The CGSS130 has a QE of up to 40% at 940nm and up to 58% at 850nm. The stacked BSI process allows a small footprint of just 3.8mm x 4.2mm with a GS pixel size of 2.7um.

The sensor produces monochrome images with an effective pixel array of 1080H × 1280V at a maximum frame rate of 120 fps. This high frame rate and global shutter operation produce clean images free of blur or other motion artifacts.
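
A rough throughput estimate from the quoted array size and frame rate; the 10-bit raw depth is an assumption for illustration, not an ams specification:

```python
width, height, fps = 1280, 1080, 120   # effective array and max frame rate
bits_per_pixel = 10                    # assumed raw bit depth, for illustration

pixels_per_second = width * height * fps
print(f"Pixel rate: {pixels_per_second / 1e6:.0f} Mpix/s")                      # ~166 Mpix/s
print(f"Raw data rate: {pixels_per_second * bits_per_pixel / 1e9:.2f} Gbit/s")  # ~1.66 Gbit/s
```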

Are There Too Many LiDAR Companies?

Wired article "There Are Too Many Lidar Companies. They Can't All Survive" quotes Shahin Farshchi, a partner at VC fund Lux who invests in LiDAR company Aeva, saying that for every 10 LiDAR startups "three will fold, four will be acquired for modest sums, and the remainder will produce impressive returns."

Recently, the LiDAR industry has shown signs of consolidation, and some see a shakeout coming. Luminar CEO Austin Russell says he has been approached by a half-dozen competitors asking if Luminar would be interested in acquiring them.

Meanwhile, France’s Valeo has logged $564 million worth of orders for its LiDARs, which work best for shorter distances.

Reuters: Alternative uses and customers are needed to keep revenue flowing at LiDAR startups waiting for the expected boom in self-driving cars, which still looks to be years away. For smaller LiDAR companies backed by venture capital, developing new markets is key.

Henry Patent Law Firm publishes its LiDAR patent landscape analysis:

According to Woodside Capital Partners presentation, investment in LiDAR companies is far greater than in other AV vision solutions:

PhotonicsSpectra publishes an article on the VC market by Greg Smolka, VP of business development at Insight LiDAR:

"According to IDTechEx, as of August, $1.9 billion was invested in the 2019 lidar market. PitchBook’s third quarter 2019 Mobility Tech report shows that the lidar industry was on track for a record year, with approximately $1.2 billion in venture capital (VC) investment in the first three quarters of the year. Since 2009, investors have deployed over $2.5 billion in VC dollars into the industry.

With 100 or so players in the market, many offering similar technology, consolidation is bound to happen. [Dexin Chen, senior analyst at IHS Markit] said we’ve begun to see an uptick in mergers and acquisitions for lidar makers. Ultimately, those that prevail will have the ability to meet all of the specifications necessary for L4/L5 AVs, including cost targets. The vast majority of companies working on automotive lidar are also pursuing other applications such as security, mapping, and industrial automation. While the automotive lidar market may consolidate, technological advancements driven by this market will have far-reaching effects."

More LiDAR News: Velodyne $100 LiDAR, XenomatiX Partners with Marelli

BusinessWire: Velodyne introduces the $100 Velabit, Velodyne’s smallest LiDAR, which "delivers the same technology and performance found on Velodyne’s full suite of state-of-the-art sensors and will be the catalyst for creating endless possibilities for new applications in a variety of industries."

“The Velabit democratizes lidar with its ultra-small form factor and its sensor pricing targeted at $100 in high-volume production, making 3D lidar available for all safety-critical applications,” said Anand Gopalan, CEO, Velodyne Lidar. “Its combination of performance, size and price position the Velabit to drive a quantum leap in the number of lidar-powered applications. The sensor delivers what the industry has been seeking: a breakthrough innovation that can jump-start a new era of autonomous solutions on a global scale.”

Velabit’s features:
  • Integrated processing in a compact size of 2.4” x 2.4” x 1.38” – smaller than a deck of playing cards – to be easily embedded in a wide range of solutions.
  • Range up to 100 meters.
  • 60-deg horizontal FoV x 10-deg vertical FoV.
  • Highly configurable to support a range of applications.
  • Class 1 eye-safe 903nm laser.
  • Bottom connector with cable length options.
  • Multiple manufacturing sources scheduled to be available for qualified production projects.

PRNewswire: Marelli and XenomatiX enter into a technical and commercial development agreement in the autonomous driving field.

"Marelli is a leading automotive supplier with the right competencies to develop modular LiDAR solutions fulfilling different Automotive OEM needs, integrating them into larger systems, based on the True Solid State LiDAR technology we designed for the automotive market," states Filip Geuens, CEO of XenomatiX. "Marelli`s long-standing experience in the automotive field and with the 3D sensors is key to this partnership."

Marelli introduces the Smart Corner, a solution integrating sensors for autonomous driving within vehicle headlamps and tail lamps, while maintaining attractive styling and world-class lighting performance:

PRNewswire: Carnavicom, a South Korean automotive supplier, presents its new LiDAR, which has achieved an average 26% cost decrease thanks to local procurement of key components such as the brushless direct current motor (BLDC), laser diode (LD), and avalanche photodiode (APD). These efforts make LiDAR sensors more affordable to integrate into other products and applications.

Pig Facial Recognition

CounterpointResearch: Chinese Alibaba, JD.com, and Tencent are developing AI-based Smart Agriculture platforms which they believe will lead to improved agricultural efficiency, particularly with respect to pig rearing. Pig facial recognition works in a similar way to human facial recognition, recording details of the pig’s eyes, ears, snout and bristles.

Start-up Yingzi Technology, one of the first Chinese companies to come up with a pig facial recognition system, is trialing its system on a farm with 3,000 pigs. Yingzi claims that identification accuracy is more than 98% and that a pig can be identified even if it is moving in a herd.

Low Cost SWIR Options

IMVE publishes an article, "Saving on SWIR," about cheaper-than-InGaAs options for SWIR imaging. One of them is Imec's quantum dot sensor:

"Quantum dots are nanocrystals that, depending on their size, offer different light absorption properties. For example, particles approximately 3nm in size absorb at 940nm, while particles around 5.5nm in size absorb at 1,450nm. The pixel stacks of the new sensor can be tuned to target a spectrum from visible light all the way up to 2µm wavelength.

"‘Right now there isn’t much of a SWIR imaging market, because there is such a high [price] threshold for acquiring a SWIR camera,’ said Pawel Malinowski, Imec’s thin-film imagers programme manager. ‘In a lot of machine vision applications people are not using SWIR because they cannot get a camera, so what we are hoping for is that because we can offer SWIR imaging at orders of magnitude lower price, then new applications will pop up.’

The first generation of Imec’s quantum dot sensor has a resolution of 758 x 512 pixels and a pixel pitch of 5µm. According to Malinowski, however, the second-generation chips, currently being tested, will have a pixel pitch as low as 1.8μm. He noted that the typical pixel pitch of an InGaAs sensor is between 15μm and 20μm.

Despite the lower fabrication cost and higher resolutions achievable with the new sensor technology, Malinowski said quantum efficiency – the performance achieved for the amount of light – will only be around 30 to 40 per cent; InGaAs sensors are able to offer 80 to 90 per cent quantum efficiency. He added: ‘I think that InGaAs will remain unbeatable in terms of high-end performance for the time being.’
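The size-to-wavelength figures quoted above can be sanity-checked by converting each absorption wavelength to a photon energy via E = hc/λ: smaller dots have larger band gaps and so absorb higher-energy, shorter-wavelength light. A minimal sketch (the size/wavelength pairs are from the article; everything else is standard physics):

```python
# Convert the quoted quantum-dot absorption wavelengths to photon energies.
# Smaller dots -> larger band gap -> shorter absorption wavelength.

H_C_EV_NM = 1239.84  # Planck constant x speed of light, in eV*nm

def photon_energy_ev(wavelength_nm: float) -> float:
    """Photon energy in eV for a wavelength given in nm (E = hc / lambda)."""
    return H_C_EV_NM / wavelength_nm

# (dot diameter in nm) -> (absorption wavelength in nm), from the article
dots = {3.0: 940.0, 5.5: 1450.0}
for size_nm, wavelength_nm in dots.items():
    print(f"{size_nm} nm dot -> {wavelength_nm} nm -> "
          f"{photon_energy_ev(wavelength_nm):.2f} eV")
```

The 3nm dots absorb photons of roughly 1.3eV, the 5.5nm dots roughly 0.86eV, consistent with the band gap shrinking as the dot grows.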

SWIR Vision Systems also pursues quantum dots:

"Quantum dot-based SWIR imaging technologies are also available from US-based SWIR Vision Systems, which has been selling its Acuros colloidal quantum dot (CQD) VIS-SWIR cameras from Q3 2018. The cameras are available in VGA (640 x 512 pixels), one-megapixel (1,280 x 1024 pixels), and full HD (1,920 x 1,080 pixels) formats.

‘Demand for these cameras has been increasing throughout 2019,’ said George Wildeman, CEO of SWIR Vision Systems, who remarked that the 1,920 x 1,080-pixel model, which has a resolution six times higher than the current standard 640 x 512-pixel InGaAs cameras, is the first of its kind to be commercially available. ‘There are a few high-resolution InGaAs cameras with 1,280 x 1,024-pixel sensor arrays, but these are very high cost,’ he said. ‘It is a big challenge to scale InGaAs cameras to larger array sizes without a large increase in their price point.

Yet another option is Emberion graphene imagers:

"The first product samples of the sensor, which offers VGA resolution, 20µm pixel pitch, 100fps frame rate, and a spectral range from 400nm to 2,000nm, will be available in June 2020.

‘This wide spectral range is the key advantage that our sensor provides over standard InGaAs sensors, which tend to go between 900nm to 1,700nm,’ said Jyri Hämäläinen, director of sales and marketing at Emberion. ‘Beyond 1,700nm is usually called “extended InGaAs”, and it is here that InGaAs technology becomes very expensive. In comparison our sensor is much more affordable while being able to detect these wavelengths.

Thanks to TL for the link!

Vision Processing Limitations in Stacked Image Sensors

Arizona State University, Tempe, publishes arxiv.org paper "Stagioni: Temperature management to enable near-sensor processing for energy-efficient high-fidelity imaging" by Venkatesh Kodukula, Saad Katrawala, Britton Jones, Carole-Jean Wu, and Robert LiKamWa.

"Many researchers advocate pushing processing close to the sensor to substantially reduce data movement. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks. We characterize the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. Our characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold, situationally determined by application needs. Fortunately, our characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and rapidly increase capture quality on demand. Based on our characterization, we propose and investigate two thermal management strategies -- stop-capture-go and seasonal migration -- for imaging-aware thermal management. We present parameters that govern the policy decisions and explore the trade-offs between system power and policy overhead. Our evaluation shows that our novel dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing. For our evaluated tasks, our strategies save up to 53% of system power with negligible performance impact and sustained image fidelity."

LiDAR News: Velodyne, Luminar, Innovusion, Insight, Outsight, Baraja, Ouster, Sony, Mobileye

BusinessWire: Velodyne announces Anand Gopalan as its new CEO. Gopalan, who previously was Velodyne’s CTO, assumes the position from Velodyne’s legendary founder David Hall. Hall will continue as full-time Chairman of the Board and remain actively involved in directing the company’s technology, product vision and business strategy.

“David Hall is more than the founder of Velodyne, he is also the founder of our industry. I am grateful for the trust he has placed in me and excited to lead a company with such a deep history in innovation as Velodyne. We are at the forefront of our market, ready to drive the age of autonomy. Velodyne is bringing improved mobility and safety through versatility, responsiveness and agility,” said Gopalan.

BusinessWire: Luminar attempts to switch to a recurring revenue model. The company introduces Hydra Perception Compute Unit (PCU) reference design powered by the NVIDIA Xavier SoC. This solution is said to substantially shorten the industry timelines, enabling autonomy to be commercialized in production in 2022.

Hydra begins shipping this quarter and is available through a new subscription model -- the first of its kind for LiDAR. With the release of Hydra, Luminar has transitioned its core business from selling sensors to a subscription-based service for its autonomous vehicle development partners that enables a deeper integration throughout development cycles, increasing development speed as well as enabling more focused feature development.

“Luminar LiDAR is now the established industry gold standard for performance and safety, and the perfect platform to enable the dramatic software and perception improvements required for automakers to transition from test vehicles to commercial autonomy,” said Austin Russell, Founder and CEO, Luminar. “We’ve been quietly developing Hydra, the most advanced 3D perception system in the industry, for over three years now and it’s time for our 40 partners and the rest of the world to see it.”

Hydra is an integrated product of three key self-driving technologies:
  • Luminar’s LiDAR, built from the chip-level up;
  • Luminar’s new software suite, built and optimized specifically for Luminar LiDAR;
  • Luminar’s new perception computer, a reference design built on the NVIDIA Xavier SoC

BusinessWire: Innovusion announces Falcon long-range LiDAR. With a vertical and horizontal resolution of 0.07 degrees at 10 fps and a FoV of 110 degrees x 30 degrees, Falcon reaches a range of 120 meters on pedestrians for the entire 110-degree FoV.

“In the last year, there has been an industry-wide delay in roadmaps for the deployment of Level 4 autonomous driving while driver assistance technology has become more prevalent. There are still gaps in autonomous system performance and the disengagement rate is still too high,” said Ian Zhu, Managing Partner at NIO Capital. “We are confident that with the release of Falcon, Innovusion is enabling the automotive industry to take the necessary steps towards LiDAR adoption for the greater market, whether that is in self-driving cars or otherwise.”
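The resolution and FoV figures above imply the Falcon's point budget; a back-of-the-envelope check (my arithmetic from the quoted specs, not a published Innovusion figure):

```python
# Estimate points per frame from angular resolution and field of view.

def points_per_frame(h_fov_deg: float, v_fov_deg: float,
                     resolution_deg: float) -> tuple[int, int, int]:
    """Return horizontal points, vertical points, and total points/frame."""
    h = int(h_fov_deg / resolution_deg)
    v = int(v_fov_deg / resolution_deg)
    return h, v, h * v

# Falcon specs quoted above: 110 x 30 deg FoV, 0.07 deg resolution, 10 fps
h, v, total = points_per_frame(110, 30, 0.07)
print(f"{h} x {v} points ~ {total:,} per frame, {total * 10:,} points/s")
```

At 0.07° over a 110° x 30° field, this works out to roughly 1,571 x 428 samples, or on the order of 670k points per frame and 6.7M points per second at 10 fps.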

BusinessWire: Insight LiDAR announced its Digital Coherent LiDAR, an ultra-high resolution, long-range LiDAR sensor targeted at the emerging autonomous vehicle (AV) market. Among the breakthroughs built into Digital Coherent LiDAR are:

  • Long Range – 200 meters to 10 percent reflectivity targets
  • Ultra-High resolution – up to 0.025 x 0.025 degrees
  • Large Field of View – 120 x 340 degrees
  • Direct Doppler velocity in every pixel
  • True solid-state, flexible fast-axis scanning
  • Complete immunity from sunlight and other lidar
  • Low-cost chip scale, all-semiconductor approach

Insight LiDAR’s patent portfolio covers not only the design and control of the laser source, critical for the FMCW detection technique, but also includes key system IP enabling Insight’s high-resolution, foveation, large field of view and long-range performance.
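FMCW detection, which the paragraph above names as the basis of Insight's laser-source IP, measures range and Doppler velocity together: the laser frequency is chirped up and then down, and the sum and difference of the two beat frequencies separate the range term from the Doppler term. A generic textbook sketch, with illustrative parameters that are assumptions rather than Insight's actual design:

```python
# Generic FMCW up/down-chirp range and Doppler recovery.

C = 299_792_458.0        # speed of light, m/s
WAVELENGTH = 1.55e-6     # assumed 1550 nm source, m
BANDWIDTH = 1.0e9        # assumed chirp bandwidth, Hz
T_CHIRP = 10e-6          # assumed chirp duration, s

def beat_freqs(range_m: float, velocity_mps: float) -> tuple[float, float]:
    """Beat frequencies for the up-chirp and down-chirp of one target."""
    f_range = 2 * BANDWIDTH * range_m / (C * T_CHIRP)   # range-induced beat
    f_doppler = 2 * velocity_mps / WAVELENGTH           # Doppler shift
    return f_range - f_doppler, f_range + f_doppler

def solve(f_up: float, f_down: float) -> tuple[float, float]:
    """Recover range and radial velocity from the two beat frequencies."""
    f_range = (f_up + f_down) / 2      # Doppler terms cancel
    f_doppler = (f_down - f_up) / 2    # range terms cancel
    return (f_range * C * T_CHIRP / (2 * BANDWIDTH),
            f_doppler * WAVELENGTH / 2)

r, v = solve(*beat_freqs(200.0, 30.0))
print(f"recovered range {r:.1f} m, velocity {v:.1f} m/s")
```

Because the velocity falls out of the beat frequencies directly, each measured point carries its own Doppler reading, which is what the "Direct Doppler velocity in every pixel" bullet refers to.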

LaTribune: As reported earlier, Outsight raises a $20M seed investment to develop a hyperspectral LiDAR that analyzes object material simultaneously with distance. The production model is expected to be completed by 2021.

BusinessWire: Baraja, developer of Spectrum-Scan LiDAR, unveils its sensing platform with inherent interference immunity. Baraja LiDAR is said to be the only system available today using randomly modulated continuous wave technology, which completely blocks interference from other LiDARs and environmental light sources.

“Sensor interference is one of the leading causes of disengagements for autonomous vehicles today and the issue will only continue to grow as more LiDAR-equipped vehicles hit the road,” said Baraja Co-Founder and CEO, Federico Collarte. “Interference risks leaving the vehicle with blind-spots, and driving blind is obviously unacceptable. Our experience developing technology in the telecom industry uniquely positions Baraja to address the problem of interference by encoding the light transmitted by our laser, using the same mature, volume-produced components that encode information for interference-free communications.”

Interference occurs when a LiDAR transmits laser light and picks up another source of light, from a different laser or environmental source, like bright sunlight, creating errors and uncertainty that manifest as vehicle blind spots. Today, this situation triggers the autonomous technology to disengage and hand over to the safe driver.

Baraja is addressing interference at the sensor level with its Spectrum-Scan technology, which forms the basis of its sensing platform. Spectrum-Scan works by rapidly switching the laser’s wavelength and transmitting light through a prism, which diffracts each color of light in a different direction. When the light returns to the sensor, it is only processed if wavelength, angle, timing and encoding match on all signals, ensuring immunity to interference. Baraja’s LiDAR operates at 1550 nm and exceeds the industry long-range sensing requirement of detecting 10% reflectivity objects at more than 200m.
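The matching step described above — accept a return only when every tagged attribute agrees with what the sensor itself transmitted — can be sketched in a few lines. This is an illustrative model of the idea, not Baraja's implementation; the attribute names and the 1.4 µs timing window (the round trip for a 200 m target) are my assumptions:

```python
# Toy model of interference rejection by matching returns to transmitted pulses.
from dataclasses import dataclass

@dataclass(frozen=True)
class Pulse:
    wavelength_nm: float   # laser wavelength of this pulse
    angle_deg: float       # scan angle at which it was sent/received
    timestamp_us: float    # transmit or arrival time
    code: int              # pseudo-random modulation code

def accept_return(transmitted: Pulse, received: Pulse,
                  max_delay_us: float = 1.4) -> bool:
    """Accept a return only if wavelength, angle and code all match the
    outgoing pulse and the delay fits the maximum round-trip time
    (~1.4 us corresponds to a 200 m range)."""
    delay = received.timestamp_us - transmitted.timestamp_us
    return (received.wavelength_nm == transmitted.wavelength_nm
            and received.angle_deg == transmitted.angle_deg
            and received.code == transmitted.code
            and 0 < delay <= max_delay_us)

tx = Pulse(1550.0, 12.5, 0.0, code=0b1011)
echo = Pulse(1550.0, 12.5, 1.33, code=0b1011)    # our own return: accepted
stray = Pulse(1550.0, 12.5, 1.33, code=0b0110)   # foreign LiDAR: rejected
print(accept_return(tx, echo), accept_return(tx, stray))
```

A pulse from another LiDAR or from sunlight fails at least one of the checks (most robustly the code check), so it never reaches the point cloud — which is the "complete immunity" claim in the press release.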

BusinessWire: Ouster introduces the ultra-wide 90-deg FoV OS0-128 LiDAR. “High-resolution perception has always been reserved for expensive, long-range applications. That’s finally beginning to change,” said Angus Pacala, CEO and co-founder of Ouster. “With Ouster’s full range of 128-channel sensors, we have a complete high-resolution sensor suite for every application, and for short-range applications, the OS0-128 is in a class of its own.”

The OS0 and OS2 series offer a full range of resolution options, with the OS0 available with 32 or 128 channels, while the OS2 is available in 32, 64, and 128 configurations. The OS0-32 is priced at $6,000 and the OS0-128 at $18,000. The OS2-32 is priced at $16,000, the OS2-64 at $20,000 and the OS2-128 at $24,000.

Sony is to demo its "Solid State LiDAR which uses highly accurate distance measurement to gain a precise 3D grasp of real-life spaces" this week at CES.

Meanwhile, Intel Mobileye presents VIDAR - LiDAR functionality with cameras only. The name VIDAR was coined in academic circles:

PoLight Announces First Design Win

poLight announces that its AF TLens is being used in a children's smartwatch launched to market on 7th January 2020. The OEM is undisclosed. The watch has two cameras: a main camera for taking pictures, with an advanced autofocus (AF) function delivered by poLight, and a camera integrated in the screen, used as a face camera without AF.

“This is an important milestone for poLight and we are very proud to be included in this innovative smartwatch flagship,” said Øyvind Isaksen, CEO of poLight.
