Archives for January 2020

Event-based Detection Dataset for Automotive Applications

Image Sensors World        Go to the original article...

Prophesee publishes "A Large Scale Event-based Detection Dataset for Automotive" by Pierre de Tournemire, Davide Nitti, Etienne Perot, Davide Migliore, and Amos Sironi:

"We introduce the first very large detection dataset for event cameras. The dataset is composed of more than 39 hours of automotive recordings acquired with a 304x240 ATIS sensor. It contains open roads and very diverse driving scenarios, ranging from urban, highway, suburbs and countryside scenes, as well as different weather and illumination conditions. Manual bounding box annotations of cars and pedestrians contained in the recordings are also provided at a frequency between 1 and 4Hz, yielding more than 255,000 labels in total. We believe that the availability of a labeled dataset of this size will contribute to major advances in event-based vision tasks such as object detection and classification. We also expect benefits in other tasks such as optical flow, structure from motion and tracking, where for example, the large amount of data can be leveraged by self-supervised learning methods."
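The raw recordings in such a dataset are streams of (x, y, timestamp, polarity) events rather than frames. As a rough illustration of how events can be aligned with the 1-4Hz bounding-box labels, the sketch below accumulates events over a label window into a 304x240 frame; it assumes generic NumPy arrays, not Prophesee's actual file format or loader:

```python
import numpy as np

def events_to_frame(x, y, t, p, t_start, t_end, width=304, height=240):
    """Accumulate events falling in [t_start, t_end) into a 2D frame.

    Positive-polarity events increment a pixel, negative ones decrement it,
    giving a simple image a detector could be run on at each label timestamp.
    """
    mask = (t >= t_start) & (t < t_end)
    frame = np.zeros((height, width), dtype=np.int32)
    # np.add.at accumulates correctly even when pixel coordinates repeat
    np.add.at(frame, (y[mask], x[mask]), np.where(p[mask] > 0, 1, -1))
    return frame

# Synthetic example: three events at one pixel inside the window,
# one event outside the window
x = np.array([10, 10, 10, 50]); y = np.array([20, 20, 20, 5])
t = np.array([0.1, 0.2, 0.3, 9.0]); p = np.array([1, 1, -1, 1])
frame = events_to_frame(x, y, t, p, t_start=0.0, t_end=1.0)
print(frame[20, 10])  # two ON events minus one OFF event -> 1
```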

Go to the original article...

LiDAR News: IHS Markit, Blickfeld, Espros, Xenomatix

Autosens publishes the presentation "The race to a low-cost LIDAR system" by Dexin Chen, Senior Technology Analyst at IHS Markit, given in Brussels in Sept. 2019:




Another Autosens Brussels 2019 presentation, "The New Generation of MEMS LiDAR for Automotive Applications" by Timor Knudsen, Blickfeld's Head of Embedded Software, explains the technology behind the Blickfeld LiDAR:




Yet another presentation, "A novel CCD LiDAR imager" by Beat De Coi, Founder and CEO of ESPROS Photonics Corporation, argues that the ability to detect single photons, as SPADs do, has no value in LiDAR:




XenomatiX CEO Filip Geuens presents "Invisible integration of solid-state LIDAR to make beautiful self-driving cars":


ISP News: ARM, Geo, Qualcomm

Autosens publishes the talk "ISP optimization for ML/CV automotive applications" by Alexis Lluis Gomez, ARM Director of ISP Algorithms & Image Quality, given in Brussels in Sept. 2019:



GEO Semi's Bjorn Grubelich, Product Manager for Automotive Camera Solutions, presents CV challenges in pedestrian detection in a compact automotive camera:



Qualcomm announces the mid-range and low-end mobile SoCs Snapdragon 720G, 662, and 460. Even the low-end 460 chip supports 48MP imaging:


The mid-range 720G supports a 192MP camera:

FLIR Demos Automotive Thermal Cameras for Pedestrian Detection

FLIR publishes a couple of videos about advantages of thermal cameras for pedestrian detection:



Autosens Detroit 2020 Agenda

Autosens Detroit, to be held in mid-May 2020, publishes its agenda with many image sensor-related presentations:

  • The FIR Revolution: How FIR Technology Will Bring Level 3 and Above Autonomous Vehicles to the Mass Market
    Yakov Shaharabani, CEO, Adasky
  • The Future of Driving: Enhancing Safety on the Road with Thermal Sensors
    Tim LeBeau, CCO, Seek Thermal
  • RGB-IR Sensors for In-Cabin Automotive Applications
    Boyd Fowler, CTO, OmniVision
  • Robust Inexpensive Frequency Domain LiDAR using Hamiltonian Coding
    Andreas Velten, Director, Computational Optics Group, University of Wisconsin-Madison
  • The Next Generation of SPAD Arrays for Automotive LiDAR
    Wade Appelman, VP Lidar Technology, ON Semiconductor
  • Progress with P2020 – developing standards for automotive camera systems
    Robin Jenkin, Principal Image Quality Engineer, NVIDIA
  • What’s in Your Stack? Why Lidar Modulation Should Matter to Every Engineer
    Bill Paulus, VP of Manufacturing, Blackmore Sensors and Analytics, Inc
  • Addressing LED flicker
    Brian Deegan, Senior Expert, Valeo Vision Systems
  • The influence of colour filter pattern and its arrangement on resolution and colour reproducibility
    Tsuyoshi Hara, Solution Architect, Sony
  • Highly Efficient Autonomous Driving with MIPI Camera Interfaces
    Hezi Saar, Sr. Staff Product Marketing Manager, Synopsys
  • Tuning image processing pipes (ISP) for automotive use
    Manjunath Somayaji, Director of Imaging R&D, GEOSemiconductor
  • ISP optimization for ML/CV automotive applications
    Alexis Lluis Gomez, Director ISP Algorithms & Image Quality, ARM
  • Computational imaging through occlusions; seeing through fog
    Guy Satat, Researcher, MIT
  • Taming the Data Tsunami
    Barry Behnken, Co-Founder and SVP of Engineering, AEye

Samsung and Oxford Propose Image Sensors Instead of Accelerometers for Activity Recognition in Smartwatches

Arxiv.org paper "Are Accelerometers for Activity Recognition a Dead-end?" by Catherine Tong, Shyam A. Tailor, and Nicholas D. Lane from Oxford University and Samsung says that image sensors might be better suited for tracking calories in smartwatches and smartbands:

"Accelerometer-based (and by extension other inertial sensors) research for Human Activity Recognition (HAR) is a dead-end. This sensor does not offer enough information for us to progress in the core domain of HAR---to recognize everyday activities from sensor data. Despite continued and prolonged efforts in improving feature engineering and machine learning models, the activities that we can recognize reliably have only expanded slightly and many of the same flaws of early models are still present today.

Instead of relying on acceleration data, we should instead consider modalities with much richer information---a logical choice are images. With the rapid advance in image sensing hardware and modelling techniques, we believe that a widespread adoption of image sensors will open many opportunities for accurate and robust inference across a wide spectrum of human activities.

In this paper, we make the case for imagers in place of accelerometers as the default sensor for human activity recognition. Our review of past works has led to the observation that progress in HAR had stalled, caused by our reliance on accelerometers. We further argue for the suitability of images for activity recognition by illustrating their richness of information and the marked progress in computer vision. Through a feasibility analysis, we find that deploying imagers and CNNs on device poses no substantial burden on modern mobile hardware. Overall, our work highlights the need to move away from accelerometers and calls for further exploration of using imagers for activity recognition.
"

CIS Production Short of Demand, Prices Rise

IFNews quotes Chinese-language Soochow Securities report that Sony, Samsung, and OmniVision have recently increased their image sensor prices by 10%-20% due to supply shortages.

Soochow Securities analyst says: "According to our grassroots research, the size of this [CIS] market may even exceed that of memory." (Google translation)


Digitimes too reports that Omnivision "will raise its quotes for various CMOS image sensors (CIS) by 10-40% in 2020 to counter ever-expanding demand for application to handsets, notebooks, smart home devices, automotive and security surveillance systems."

Trieye Automotive SWIR Presentation

Autosens publishes Trieye CTO Uriel Levy's presentation "ShortWave Infrared Breaking the Status Quo" in Brussels in Sept. 2019:

LiDAR News: Ibeo, Velodyne, Outsight

ArsTechnica: "Ibeo Operations Director Mario Brumm told Ars that Ibeo's next-generation lidar, due out later this year, would feature a 128-by-80 array of VCSELs coupled with a 128-by-80 array of SPADs. Ibeo is pursuing a modular design that will allow the company to use different optics to deliver a range of models with different capabilities—from a long-range lidar with a narrow field of view to a wide-angle lidar with shorter range. Ibeo is aiming to make these lidars cheap enough that they can be sold to automakers for mass production starting in late 2022 or early 2023."

IEEE Spectrum quotes Velodyne's new CEO Anand Gopalan talking about the company's new solid-state Velabit LiDAR:

Gopalan wouldn’t say much about how it works, only that the beam-steering did not use tiny mirrors based on micro-electromechanical systems (MEMS). “It differs from MEMS in that there’s no loss of form factor or loss of light,” he said. “In the most general language, it uses a metamaterial activated by low-cost electronics.”

Autosens publishes Outsight President Raul Bravo's presentation at Brussels Sept. 2019 conference:

SmartSens Announces SmartClarity H Series

PRNewswire: SmartSens launches six new sensors in the SmartClarity H Series family. The SC8238H, SC5238H, SC4210H, SC4238H, SC2210H, and SC2310H sensors have resolutions ranging from 2MP to 8MP and are said to significantly improve on previous-generation products by lowering dark current, white pixels, and temperature noise, making them suitable for home surveillance and other video-based applications.

"Overheating and high temperature has been the culprit that creates bottlenecks to excellent CIS image quality, especially in the field of non-stop running security applications. As the leader in Security CIS sensors, we've always set our sight in tackling this issue. As we see the improvements from our high-performing CIS technologies," noted Chris Yiu, CMO of SmartSens, "We're reaching the level comparable to other top-notch industry leaders."

Samples of these six products are now available.

Fujifilm XT200 review – preview

Cameralabs        Go to the original article...

The Fujifilm X-T200 is an entry-level mirrorless camera, featuring a 24 Megapixel APS-C sensor, built-in viewfinder, fully-articulated 3.5in touchscreen, mic input and uncropped 4k video. Find out more in my preview! …


Himax Unveils Low Power VGA Sensor

GlobeNewswire: Himax announces the commercial availability of the HM0360, said to be an industry-first ultra-low-power, low-latency BSI CMOS sensor with autonomous modes of operation for always-on, intelligent visual sensing applications such as human presence detection and tracking, gaze detection, behavioral analysis, and pose estimation, for growing markets like smart home, smart building, healthcare, smartphone, and AR/VR devices. Himax is currently working with leading AI framework providers such as Google and industry partners to develop reference designs that enable low-power hardware and platform options to reduce time to market for intelligent edge vision solutions.

"One of the key challenges to 2-dimensional image sensing for computer vision is the high power consumption and data bandwidth of the sensor and processing," said Amit Mittra, CTO of Himax Imaging. "The HM0360 addresses this opportunity by delivering a very low power image sensor that achieves excellent image quality with high signal-to-noise ratio and dynamic range, which allows algorithms to operate under challenging lighting conditions, from bright sunlight to moonlight. The VGA resolution can double the range of detection over Himax's HM01B0 QVGA sensor, especially to support greater than 90-degree wide field of view lens. Additionally, the HM0360 introduces several new features to reduce camera latency, system overhead and power consumption."

"Smart sensors that can run on batteries or energy harvesting for years will enable a massive number of new applications over the next decade," said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google. "TensorFlow Lite's microcontroller software can supply the brains behind these products but having low-power sensors is essential. Himax's camera can operate at less than one milliwatt, which allows us to create a complete sensing system that's able to run continuously on battery power alone."

Unique features of HM0360 include:

  • Several autonomous modes of operation
  • Pre-metering function to ensure exposure quality for every event frame
  • Short sensor initialization time
  • Extremely low (less than 2ms) wake-up time
  • Fast context switching and frame configuration update
  • Multi-camera synchronization
  • 150 parallel addressable regions of interest
  • Event sensing modes with programmable interrupts to allow the host processor to be placed in low power sleep until notified by the sensor
  • Operation at up to VGA resolution at 60 frames per second consuming 14.5mW, and less than 500µW in binning-mode readout at 3fps
  • Support for multiple power supply configurations with minimal passive components to enable a highly compact camera module design for next-generation smart camera devices

The HM0360 is currently available for sampling and will be ready for mass production in the second quarter of 2020.

ADI Demos ToF Applications

Analog Devices publishes a video presenting its ToF solution inside the PicoZense camera for face recognition:



Another demo talks about in-cabin driver status monitoring:



Yet another demo shows ToF camera-equipped autonomous robot:

ON Semi on Automotive Image Sensor Challenges

AutoSens publishes a video "Overview of the Challenges and Requirements for Automotive Image Sensors" by Geoff Ballew, Senior Director of Marketing, Automotive Sensing Division, ON Semiconductor presented at Autosens Brussels in Sept. 2019:

Reverse Engineered: Sony CIS inside Denso Automotive Camera, Tesla Model 3 Triple Camera, Samsung Galaxy Note 10+ ToF Camera

SystemPlus has published a few interesting reverse-engineering reports. "Denso’s Monocular Forward ADAS Camera in the Toyota Alphard" reveals a 1.27MP Sony automotive sensor with improved night vision:

"The monocular camera, manufactured by Denso in Japan, is 25% smaller and uses 30% less components and parts than the previous model. Also, for the first time in an automotive camera, we have found a Sony CMOS image sensor. This 1.27M-pixel sensor offers higher sensitivity to the near-infrared sensor, and the low-light use-designed lens housing improves the camera’s night-time identification of other road users and road signs. Regarding processing, the chipset is composed of a Sony image signal processor, a Toshiba image recognition processor, and a Renesas MCU."


The "Triple Forward Camera from Tesla Model 3" report says the camera "captures front images over up to 250 meters that are used by the Tesla Model 3 Driver Assist Autopilot Control Module Unit. The system integrates a serializer but no computing.

Tesla chose dated but well known and reliable components with limited bandwidth, speed and dynamic range. This is probably due to the limited computing performance of the downstream Full Self-Driving chipset. This three-cameras setup is therefore very far from the robotic autonomous vehicle (AV) technology used on Waymo and GM Cruise vehicles. The level of autonomy Tesla will achieve with this kind of hardware is the trillion-dollar question you will be able to assess with this report.

We assume this system is manufactured in the USA. It is an acquisition module without treatment using three cameras, narrow, main and wide, with 3 CMOS Image Sensors with 1280×960 1.2Mp resolution.
"


The "Samsung Galaxy Note 10+ 3D Time of Flight Depth Sensing Camera Module" report says the module "is using the latest generation of Sony Depth sensing Solutions’ Time-of-Flight (ToF) camera, which is unveiled in this report.

It includes a backside illumination (BSI) Time of Flight Image Sensor array featuring 5µm size pixels and 323 kilopixel VGA resolution developed by Sony Depth sensing Solutions. It has one VCSEL for the flood illuminator coming from a major supplier.

The complete system features a main red/green/blue camera, a Telephoto, a Wide-angle Camera Module and a 3D Time of Flight Camera.
"

5 CMOS Sensor Companies Order 34 Inspection Machines from Camtek

PRNewswire: Camtek has received orders for 34 systems for 2D inspection of CMOS sensors from five leading manufacturers, of which 25 are from two customers. Most of the orders are expected to be installed during the first half of 2020.

Ramy Langer, COO, comments, "Our longstanding expertise in inspection technologies designed specifically for the CMOS image sensors market, makes the EagleT and EagleT Plus the ultimate inspection tools for this market."

1 Tera-fps Camera Needed to Observe Signal Travel through Neurons

Optics.org: the Science Advances paper "Picosecond-resolution phase-sensitive imaging of transparent objects in a single shot" by Taewoo Kim, Jinyang Liang, Liren Zhu, and Lihong V. Wang from Caltech says that very high frame rates are needed to study some biological processes:

"As signals travel through neurons, there is a minute dilation of nerve fibers that we hope to see. If we have a network of neurons, maybe we can see their communication in real time," Professor of Medical Engineering and Electrical Engineering Lihong Wang says.

"Here, we present phase-sensitive compressed ultrafast photography (pCUP) for single-shot real-time ultrafast imaging of transparent objects by combining the contrast of dark-field imaging with the speed and the sequence depth of CUP. By imaging the optical Kerr effect and shock wave propagation, we demonstrate that pCUP can image light-speed phase signals in a single shot with up to 350 frames captured at up to 1 trillion frames per second. We expect pCUP to be broadly used for a vast range of fundamental and applied sciences."

JPEG vs HEIF: Canon commits to HIF

Cameralabs

Time for JPEG to retire? With the 1Dx III, Canon becomes the first traditional camera company to back the more efficient HEIF format which delivers better quality in smaller files. Here's how Canon makes it work.…


Samsung-Corephotonics Unveils Foveated Automotive Camera

Corephotonics, the smartphone folded-zoom lens and multi-camera solutions developer acquired by Samsung a year ago, announces its first product since the acquisition, the Roadrunner automotive camera:


Thanks to AB for the pointer!

Omnivision Aims to Close the Gap with Sony and Samsung and Lead the Market in 1 Year

IFNews quotes a Laoyaoba interview with Omnivision's SVP of Global Sales Wu Xiaodong, giving a lot of interesting info about the company's plans:
  • Omnivision's 64MP high-end smartphone sensor is expected to enter mass production this year
  • Although Omnivision ranks third in global market share with 12.4%, it is first with a 48% share in security, second with 30% in autonomous vehicles, first with 50% in computing, first with 48% in emerging businesses such as IoT, and first with an 81% share of the medical CIS market
  • From 2018 to 2019, the overall CIS market grew at an AAGR of 20%. After 2020, the AAGR is expected to slow to 10%.
  • At the end of August 2019, Will Semi completed the acquisition of Omnivision and Superpix and officially renamed them the Omnivision Group
  • Omnivision Group currently has more than 2,000 customers, with annual chip shipments exceeding 13 billion.
  • Omnivision has R&D centers in the US, Japan, Europe, China, and Singapore.
  • So far, Omnivision employs a total of 1,300 engineers and has more than 4,000 patents.
  • Omnivision Group cooperates with TSMC, SMIC, Huali (HLMC), Dongfang (DDF), and other foundries.
"In the past, our gap [with Sony and Samsung] has been, maybe, about one year. Last year, we were half a year behind. Our goal is for new products to reach parity this year, and to achieve a lead next year," says Wu Xiaodong.

IRNova on LWIR Polarimetric Imaging

As mentioned in comments, Sweden-based IRNova publishes an application note "Polarimetric QWIP infrared imaging sensor" about its Garm LW Pol camera.

"Quantum well infrared photodetectors (QWIP) are by design inherently suited for polarization sensitive imaging. The detection principle in regular QWIPs relies on etched 2-D gratings to couple the light to the quantum wells for absorption. By replacing the 2D gratings with 1D (lamellar) gratings polarization sensitivity is added to the thermal detection.

Thermal imaging is a great way to detect objects, but it requires the objects to be of different temperature or to have different emissivity than the background. Polarization detection further extends the possibility to differentiate between objects that have the same temperature but consist of different materials, since infrared polarized light can be generated by reflection or emission of radiation from planar surfaces. This allows for detecting objects that are otherwise undetectable by an infrared detector since they may be covered under a canvas or they may have a low thermal signature like a UAV.
"
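The analyzer orientations that 1D gratings provide map directly onto the linear Stokes parameters. The sketch below is the generic textbook computation, not IRNova's actual processing pipeline: it derives the degree and angle of linear polarization from four intensity measurements at 0/45/90/135 degrees, with example values chosen to represent fully horizontally polarized light:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Linear Stokes parameters from four analyzer orientations (degrees)."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)   # total intensity
    s1 = i0 - i90                        # horizontal vs vertical component
    s2 = i45 - i135                      # +45 vs -45 degree component
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)  # degree of linear polarization
    aolp = 0.5 * np.arctan2(s2, s1)      # angle of linear polarization, radians
    return s0, s1, s2, dolp, aolp

# Fully horizontally polarized light of unit intensity: all of it passes
# the 0-degree analyzer, none passes 90 degrees, half passes 45 and 135
s0, s1, s2, dolp, aolp = linear_stokes(1.0, 0.5, 0.0, 0.5)
print(dolp, aolp)  # dolp -> 1.0 (fully polarized), aolp -> 0.0 (horizontal)
```

A per-pixel version of the same formulas, applied to the four grating-orientation channels, is what turns a polarimetric focal plane array into DoLP/AoLP images.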

Event-Based News: Prophesee, Inivation, Samsung

EETimes publishes Junko Yoshida's interview with Luca Verre, Prophesee CEO. A few quotes:

"The commercial product we have is a VGA sensor. It’s in mass production. We are currently deploying shipping for industrial applications.

We have a new sensor generation, which is an HD sensor, so one million pixels, 720p. This is the result of joint cooperation we have done with Sony, which will be published at ISEC [ISSCC, probably] in February in San Francisco.

There has been some research work done together with Sony. Yes, Sony is indeed interested in event-based technology, but unfortunately I cannot tell you more than that. One of the main challenges we have been solving, moving from the VGA sensor to the HD sensor is the capability now to stack the sensor, to use a very advanced technology node that enables us to reduce the pixel pitch. So to make actually the sensor much smaller and cost-effective.

Automotive remains one of the key verticals we are targeting, because our technology, event-based technology, shows clear benefit in that space with respect to low latency detection, low data rate and high dynamic range.

...we did some tests in some controlled environments with one of the largest OEMs in Europe, and we compared side by side the frame-based sensor with an event-based sensor, showing that, while the frame-based camera system was failing even in fusion with a radar system, our system was actually capable to detect pedestrians in both daylight conditions and night light conditions.
"


iniVation wins a Best of Innovation award at CES 2020 in the 'Embedded Vision' category.



The award is for the company's newest product, the DVXplorer, which uses an all-new custom-designed sensor from Samsung. The DVXplorer is said to be the world's first neuromorphic camera employing technologies suitable for mass-production applications.

Thanks to TL for the link!


Samsung's Hyunsurk Eric Ryu presented the company's event-driven pixels at the 2nd International Workshop on Event-based Vision and Smart Cameras:

NIT Presents 7.5um Pixel InGaAs SWIR Sensor

New Imaging Technologies (NIT) announces its first commercially available SWIR InGaAs sensor with a 7.5µm pitch, the result of several years of R&D on an in-house hybridization process. This process does not use classical indium bumps and allows manufacturing hybrid sensors with very small pitches at high yield and reduced cost.

The first available component at 7.5µm pitch is a line array with the following characteristics:
  • Pixel Number: 2048
  • Pitch: 7.5µm
  • Line speed: 60kHz @ full line
  • Well Fill: 25 Ke-
  • Readout Noise: less than 70e-
  • Dark Current: 8 fA @ 15°C

Omron Demos its People Recognition Sensor

Inavate: Omron is to demo its second-generation digital signage body and face detection/face recognition system. The Human Vision Component HVC-P2 with OKAO Vision software features ten image sensing functions including body detection, face recognition, hand detection, age estimation, gender estimation, and expression estimation. The OKAO software can recognize faces up to 3m away and can detect a human body up to 17m away.

Sony Automotive Sensors, LiDAR, ToF

Sony publishes a video presenting its Safety Cocoon concept devices:

FLIR Updates its Periodic Table of Image Sensors

FLIR (formerly Point Grey) publishes a 2020 version of its Periodic Table of Image Sensors:

"Updated for 2020 - now with over 130 machine vision sensors, including third generation Sony Pregius and fourth generation Sony Pregius S global shutter sensors.

With so many sensors to choose from, we understand that it could be tricky to keep track of them. This handy chart organizes over 130 sensors from classic CCDs to the latest CMOS technology by resolution and speed. We suggest printing off this free poster and laminating it, then pinning it up on your wall for easy reference.
"


Thanks to TL for the link!

Ge-on-Si SPAD LiDAR

Optics Express paper "3D LIDAR imaging using Ge-on-Si single–photon avalanche diode detectors" by Kateryna Kuzmenko, Peter Vines, Abderrahim Halimi, Robert J. Collins, Aurora Maccarone, Aongus McCarthy, Zoë M. Greener, Jarosław Kirdoda, Derek C. S. Dumas, Lourdes Ferre Llin, Muhammad M. Mirza, Ross W. Millar, Douglas J. Paul, and Gerald S. Buller from Heriot-Watt University and the University of Glasgow, UK, presents a concept design of a LiDAR with a SPAD detector cooled down to 100K:

"We present a scanning light detection and ranging (LIDAR) system incorporating an individual Ge-on-Si single-photon avalanche diode (SPAD) detector for depth and intensity imaging in the short-wavelength infrared region. The time-correlated single-photon counting technique was used to determine the return photon time-of-flight for target depth information. In laboratory demonstrations, depth and intensity reconstructions were made of targets at short range, using advanced image processing algorithms tailored for the analysis of single–photon time-of-flight data. These laboratory measurements were used to predict the performance of the single-photon LIDAR system at longer ranges, providing estimations that sub-milliwatt average power levels would be required for kilometer range depth measurements.

... recently, the use of planar geometry devices [39] yielded a significant step change improvement in performance. Vines et al. [39] reported a normal incidence planar geometry Ge-on-Si SPADs with 38% SPDE at 125 K at a wavelength of 1310 nm and a noise–equivalent power (NEP) of 2 × 10−16 WHz-1/2. In addition, these devices clearly demonstrated lower levels of afterpulsing compared with InGaAs/InP SPAD detectors operated under nominally identical conditions. The high SPDEs of Ge-on-Si SPADs and their reduced afterpulsing compared to InGaAs/InP SPADs provides the potential for significantly higher count rate operation and, consequently, reduced data acquisition times. Planar Ge-on-Si SPADs exhibit compatibility with Si CMOS processing, potentially leading to the development of inexpensive, highly efficient Ge-on-Si SPAD detector arrays. Here we report a successful demonstration of LIDAR 3D imaging using an individual planar Ge-on-Si SPAD operating at a wavelength of 1450 nm.
"
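In its simplest form, the time-correlated single-photon counting step described above reduces to histogramming photon arrival times, finding the peak bin, and converting the round-trip time to depth via d = c·t/2. The sketch below illustrates that idea on synthetic data; the bin width, background rate, and peak position are illustrative assumptions, not values from the paper:

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def depth_from_histogram(counts, bin_width_s):
    """Estimate target depth from a TCSPC time-of-flight histogram.

    The peak bin gives the photon round-trip time; depth = c * t / 2.
    """
    t_peak = (np.argmax(counts) + 0.5) * bin_width_s  # bin-center time
    return C * t_peak / 2.0

rng = np.random.default_rng(0)
bin_width = 100e-12                   # 100 ps timing bins (assumed)
counts = rng.poisson(2.0, size=1000)  # dark counts / ambient background
counts[667] += 500                    # return photons pile up in one bin
depth = depth_from_histogram(counts, bin_width)
# bin 667 at 100 ps/bin -> ~66.75 ns round trip -> ~10 m
```

In practice the paper applies far more sophisticated single-photon image processing, but this peak-finding picture is the core of how time-of-flight depth is recovered.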

Omnivision and Artilux to Collaborate on Ge-on-Si Sensors for Smartphones

PRNewswire: OmniVision and Artilux announce the execution of a formal letter of intent to collaborate on GeSi-based 3D sensors, after a series of evaluations and analyses. The main objective of this collaboration is to combine OmniVision's CMOS imaging technology and market position with Artilux's GeSi 3D sensing technology, and accelerate the delivery of comprehensive RGB and 3D imaging solutions to the mobile phone segment.

The new product offerings will not only cover the mainstream light sensing spectrum from visible light to 850nm/940nm, but will further extend to 1350nm/1550nm, for improved outdoor experience and eye safety for multiple growing digital imaging market segments.


SET Hybrid Bonding Machine Boasts 1um (3-sigma) Accuracy

SET NEO HB hybrid/direct bonding machine claims +/-1um 3-sigma alignment accuracy:

Brillnics 4um Voltage Domain GS Pixel

MDPI paper "A Stacked Back Side-Illuminated Voltage Domain Global Shutter CMOS Image Sensor with a 4.0 μm Multiple Gain Readout Pixel" by Ken Miyauchi, Kazuya Mori, Toshinori Otaka, Toshiyuki Isozaki, Naoto Yasuda, Alex Tsai, Yusuke Sawai, Hideki Owada, Isao Takayanagi, and Junichi Nakamura from Brillnics is part of the Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"A backside-illuminated complementary metal-oxide-semiconductor (CMOS) image sensor with 4.0 μm voltage domain global shutter (GS) pixels has been fabricated in a 45 nm/65 nm stacked CMOS process as a proof-of-concept vehicle. The pixel components for the photon-to-voltage conversion are formed on the top substrate (the first layer). Each voltage signal from the first layer pixel is stored in the sample-and-hold capacitors on the bottom substrate (the second layer) via micro-bump interconnection to achieve a voltage domain GS function. The two sets of voltage domain storage capacitor per pixel enable a multiple gain readout to realize single exposure high dynamic range (SEHDR) in the GS operation. As a result, an 80dB SEHDR GS operation without rolling shutter distortions and motion artifacts has been achieved. Additionally, less than −140dB parasitic light sensitivity, small noise floor, high sensitivity and good angular response have been achieved."
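The multiple-gain readout enabled by the two storage capacitors per pixel can be merged into one linear HDR value in the standard way: trust the high-gain sample until it saturates, then fall back to the low-gain sample scaled by the gain ratio. The sketch below is a generic illustration of that merge; the gain ratio and saturation level are made-up parameters, not Brillnics' actual numbers:

```python
import numpy as np

def combine_dual_gain(high, low, gain_ratio=8.0, sat_level=4000):
    """Merge high- and low-gain readouts of a single exposure into one
    linear HDR image.

    Below saturation the high-gain sample is kept (lower read noise in the
    shadows); at or above saturation the low-gain sample is rescaled by the
    gain ratio to extend the dynamic range at the bright end.
    """
    high = np.asarray(high, dtype=np.float64)
    low = np.asarray(low, dtype=np.float64)
    return np.where(high < sat_level, high, low * gain_ratio)

high = np.array([100.0, 4095.0])  # second pixel clipped at high gain
low = np.array([12.5, 3000.0])    # same exposure, 8x lower gain
hdr = combine_dual_gain(high, low)
# dark pixel keeps the low-noise high-gain value (100.0);
# bright pixel is recovered from the scaled low-gain sample (24000.0)
```

Because both samples come from a single global-shutter exposure, this merge avoids the motion artifacts of multi-exposure HDR, which is the point of the SEHDR scheme.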
