Archives for December 2019

Canon develops video-analysis technology that utilizes deep learning to count crowds, with the ability to count several thousand people in real time

Newsroom | Canon Global        Go to the original article...

Go to the original article...

TechInsights Reviews 2019 Trends and Achievements

Image Sensors World        Go to the original article...

TechInsights Senior Technology Analyst Ray Fontaine publishes an interesting summary of 2019 achievements, "Imaging + Sensing End-of-Year Highlights." The most important points are:

  • Smartphone imaging: Push to higher resolutions, sub-micron pixels, larger sensor areas
  • More experiments with PDAF, new CFA patterns
  • ToF pixel pitch reduced to 5um
  • Event-driven sensors show up in mass market products (Samsung S5K231YX DVS inside home monitoring system)

Looking into 2020, "We are looking forward to more back-illuminated global shutter products to analyze, continued high resolution and sub-micron pixel development, enhanced near-infrared (NIR) sensors, and the push towards non-Si detectors."

Go to the original article...

Image Sensors at EI 2020

Image Sensors World        Go to the original article...

The Electronic Imaging Conference, to be held on Jan. 27-30 in Burlingame, CA, has unveiled its agenda with quite a few image sensor papers:

3D-IC smart image sensors
Laurent Millet, Stephane Chevobbe
CEA/LETI, CEA/LIST, France
This presentation will introduce 3D-IC technologies applied to imaging, and give some examples of 3D-IC or stacked sensors and their 3D partitioning topologies. A focus will be given on our stacked vision chip that embeds flexible pre-processing at high-speed and low latency, like fast event detection, edge detection or convolution computation. The perspectives will show how this technology can pave the way for new sensor architectures and applications.

Indirect time-of-flight CMOS image sensor using 4-tap charge-modulation pixels and range-shifting multi-zone technique
Kamel Mars, Keita Kondo, Michihiro Inoue, Shohei Daikoku, Masashi Hakamata, Keita Yasutomi, Keiichiro Kagawa, Sung-Wook Jun, Yoshiyuki Mineyama, Satoshi Aoyama, Shoji Kawahito
Shizuoka University, Tokyo Institute of Technology, Brookman Technology, Japan

This paper presents an indirect TOF image sensor using short-pulse-modulation 4-tap one-drain pixels and fast sub-frame readout for range-shifted capture of multiple pulse time windows. The measurement uses a short pulse modulation technique combined with multiple short sub-frames, where the number of accumulations for each sub-frame is carefully selected for the near and far zones in order to avoid sensor saturation due to strong laser power or strong ambient light. The current setup uses two sub-frames, where the gate opening sequence is set as G1G2G3G4 and the gate pulse width is set to 10ns. The proposed timing sequence allows 3 time windows in each sub-frame. By combining the last gate of the first sub-frame and the first gate of the second sub-frame, an extra time window is also obtained, making seven measurable time windows in total. The combining of the two sub-frames is performed offline by an automated calculation algorithm, allowing automated and smooth measurement of the two zones simultaneously. A TOF image with a range of 10.5m has been successfully measured using 2 sub-frames and 7 time windows, where the light pulse width is also set to 10ns, allowing a 1.5m measurement range per window. A depth resolution of 1 percent was achieved at 10m range.
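
As a quick sanity check of these numbers (our arithmetic, not from the paper): each time window spans a range slice of c·T/2, so with a 10ns gate,

\[ \Delta R = \frac{c\,T_{\mathrm{pulse}}}{2} = \frac{(3\times 10^{8}\,\mathrm{m/s})(10\,\mathrm{ns})}{2} = 1.5\,\mathrm{m}, \qquad 7\times\Delta R = 10.5\,\mathrm{m}, \]

which matches the seven windows and the 10.5m total range reported above.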

A short-pulse based time-of-flight image sensor using 4-tap charge-modulation pixels with accelerated carrier response
Michihiro Inoue, Shohei Daikoku, Keita Kondo, Akihito Komazawa, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

Most of the reported CMOS indirect TOF range imagers are designed for CW (continuous wave) modulation, and their range resolutions have been greatly improved by using high modulation frequencies of over 100MHz. On the other hand, for extending the applications of indirect TOF image sensors to outdoor and high-ambient-light environments, a short-pulse-based TOF image sensor with multi-tap charge-modulation pixels is a good candidate. The TOF sensor presented in this work shows that a pixel with three n-type doping layers and substrate biasing has a sufficient gating response for a light pulse width of 4ns, with a linearity of 3%.

A high-linearity time-of-flight image sensor using a time-domain feedback technique
Juyeong Kim, Keita Yasutomi, Keiichiro Kagawa, Shoji Kawahito
Shizuoka University, Japan

In this paper, we propose a time-domain feedback technique for a Time-of-Flight (ToF) image sensor. The time-domain feedback has the advantages of straightforward time-to-digital conversion and effective suppression of the linearity error. The technique has been implemented with 2-tap lock-in pixels and 5b digitally-controlled delay lines (DCDLs). The prototype ToF sensor is fabricated in a 0.11μm (1P4M) CIS process. The lock-in pixels, having a size of 16.8×16.8μm2, are driven by a 7ns pulse signal from the 5b DCDLs. The light pulse delay is controlled to measure the performance. The full range is set to 0 to 105cm with 11b resolution over the full scale in 22ms. Our sensor attains a linearity error of less than 0.3%, and a range resolution of 2.67mm (peak) and 0.27mm (mean) has been achieved without any calibration techniques.
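
The 105cm full range follows directly from the 7ns pulse width (our check, not stated in the abstract):

\[ d_{\max} = \frac{c\,T_{\mathrm{pulse}}}{2} = \frac{(3\times 10^{8}\,\mathrm{m/s})(7\,\mathrm{ns})}{2} = 1.05\,\mathrm{m}. \]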

A 4-tap global shutter pixel with enhanced IR sensitivity for VGA time-of-flight CMOS image sensors
Taesub Jung, Yonghun Kwon, Sungyoung Seo, Min-Sun Keel, Changkeun Lee, Sung-Ho Choi, Sae-Young Kim, Sunghyuck Cho, Youngchan Kim, Young-Gu Jin, Moosup Lim, Hyunsurk Ryu, Yitae Kim, Joonseok Kim, Chang-Rok Moon
Samsung Electronics, Korea

An indirect time-of-flight (ToF) CMOS image sensor has been designed with a 4-tap 7 µm global shutter pixel in a back-side illumination process. A high full-well capacity (FWC) of 15000 e- per tap at 3.5 µm pitch and a read noise of 3.6 e- have been realized by employing a true correlated double sampling (CDS) structure with storage gates (SGs). Notable characteristics such as 86% demodulation contrast (DC) at 100MHz operation, a high quantum efficiency (QE) of 37% at 940 nm, and low parasitic light sensitivity (PLS) have been achieved. As a result, the proposed ToF sensor shows depth noise of less than 0.3% with a 940 nm illuminator, even at long distance.
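
For context, demodulation contrast for a multi-tap ToF pixel is commonly defined from the tap responses (a standard definition from the literature, not quoted from the paper); the 86% figure means the pixel steers nearly all photo-generated charge to the intended tap even at 100MHz modulation:

\[ \mathrm{DC} = \frac{A_{\max}-A_{\min}}{A_{\max}+A_{\min}}, \]

where A_max and A_min are the largest and smallest accumulated tap signals for a given phase.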

An over 120dB dynamic range linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors
Yasuyuki Fujihara, Maasa Murata, Shota Nakayama, Rihito Kuroda, Shigetoshi Sugawa
Tohoku University, Japan

This paper presents a prototype linear response single exposure CMOS image sensor with two-stage lateral overflow integration trench capacitors (LOFITreCs) exhibiting over 120dB dynamic range with 11.4Me- full well capacity and a maximum signal-to-noise ratio (SNR) of 70dB. The measured SNR at all switching points was over 35dB thanks to the proposed two-stage LOFITreCs.
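
The 70dB maximum SNR is consistent with a shot-noise-limited pixel at full well (our check, not from the paper): peak SNR is the square root of the full-well capacity, i.e.

\[ \mathrm{SNR}_{\max} = 20\log_{10}\sqrt{11.4\times 10^{6}} \approx 70.6\,\mathrm{dB}. \]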

Deep image demosaicing for submicron image sensors (JIST-first)
Irina Kim, Seongwook Song, SoonKeun Chang, SukHwan Lim, Kai Guo
Samsung Electronics, Korea

The latest trend in image sensor technology, allowing submicron pixel sizes for high-end mobile devices, comes with very high image resolutions and an irregularly sampled Quad Bayer Color Filter Array (CFA). Sustaining image quality becomes a challenge for the Image Signal Processor (ISP), namely for demosaicing. Inspired by the success of deep learning approaches to standard Bayer demosaicing, we investigate how the artifact-prone Quad Bayer array can benefit from them. We found that deeper networks are capable of improving image quality and reducing artifacts; however, deeper networks can hardly be deployed on mobile devices at very high image resolutions (24MP, 36MP, 48MP). In this paper, we propose an efficient end-to-end solution to bridge this gap - a Duplex Pyramid Network (DPN). Its deep hierarchical structure, residual learning, and linear growth of feature-map depth allow a very large receptive field, yielding better detail restoration and artifact reduction while staying computationally efficient. Experiments show that the proposed network outperforms the state-of-the-art for both Bayer and Quad Bayer demosaicing. For the challenging Quad Bayer CFA, it reduces visual artifacts better than other deep networks, including artifacts present in a conventional commercial solution. While superior in image quality, it is 2x-25x faster than state-of-the-art deep neural networks and therefore feasible for deployment on mobile devices, paving the way for a new era of on-device deep ISPs.
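
For readers unfamiliar with the Quad Bayer layout targeted by the paper, the toy sketch below (ours, purely illustrative) contrasts it with the standard Bayer mosaic: each color site repeats as a 2x2 block, which is what makes interpolation more artifact-prone than for regular Bayer sampling.

    import numpy as np

    # Standard 2x2 Bayer repeating unit
    bayer = np.array([['R', 'G'],
                      ['G', 'B']])

    # Quad Bayer repeats each Bayer site as a 2x2 block -> 4x4 repeating unit
    quad_bayer = np.array([[bayer[i // 2, j // 2] for j in range(4)]
                           for i in range(4)])
    print(quad_bayer)
    # [['R' 'R' 'G' 'G']
    #  ['R' 'R' 'G' 'G']
    #  ['G' 'G' 'B' 'B']
    #  ['G' 'G' 'B' 'B']]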

Imaging in the autonomous vehicle revolution
Gary Hicok
NVIDIA, USA

Imaging capabilities for AVs have been improving rapidly, to the point that cameras are the cornerstone AV sensors. Much like the human brain processes visual data taken in by the eyes, AVs must be able to make sense of this constant flow of information, which requires high-performance computing to respond to the flow of sensor data. This presentation will delve into how these developments in imaging are being used to train, test and operate safe autonomous vehicles. Attendees will walk away with a better understanding of how deep learning, sensor fusion, surround vision and accelerated computing are enabling this deployment.

Single-shot multi-frequency pulse-TOF depth imaging with sub-clock shifting for multi-path interference separation
Tomoya Kokado, Yu Feng, Masaya Horio, Keita Yasutomi, Shoji Kawahito, Takashi Komuro, Hajime Ngahara, Keiichiro Kagawa
Shizuoka University, Saitama University, Osaka University, Japan

Short-pulse-based time-of-flight (TOF) depth imaging using a multi-tap macro-pixel computational ultra-fast CMOS image sensor with temporally coded shutters was demonstrated. To separate multi-path components and shorten the minimal separation between adjacent pulses in a single shot, and to overcome the range-resolution tradeoff, the application of multi-frequency coded shutters and sub-clock shifting is proposed. The computational CMOS image sensor incorporates an array of macro-pixels, each composed of four sub-pixels. The sub-pixels are implemented with four-tap lateral electric field charge modulators (LEFMs) with dedicated charge draining gates. For each macro-pixel, 16 different temporal binary shutters are applied to acquire a mosaic image of cross-correlations between the incident temporal optical signal and the temporal shutters. The effectiveness of the proposed method was verified experimentally with the computational CMOS image sensor. The clock frequency for the shutter generator was 73MHz. A 520nm sub-ns pulse laser was used. A two-component multi-path optical signal, created by a transparent acrylic plate and a mirror placed 8.2m apart, and a change in time of flight as short as half the minimal time window were successfully distinguished.
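
The underlying measurement is a set of cross-correlations between the incident temporal signal and the 16 binary shutters, i.e. a linear model that can later be inverted to separate multi-path components. A minimal numerical sketch of that idea (ours; the shutter codes and the two-path signal are made up, not the paper's):

    import numpy as np

    rng = np.random.default_rng(0)
    T = 64                                   # time bins within one exposure
    shutters = rng.integers(0, 2, (16, T))   # 16 hypothetical binary shutter codes

    # Two-path return: two delayed pulses of different amplitude (hypothetical)
    signal = np.zeros(T)
    signal[10] = 1.0    # direct path
    signal[25] = 0.4    # multi-path component

    # Each sub-pixel accumulates the correlation of the return with its shutter
    measurements = shutters @ signal         # 16 cross-correlation samples
    print(measurements)                      # input for multi-path separation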

Improving the disparity for depth extraction by decreasing the pixel height in monochrome CMOS image sensor with offset pixel apertures
Jimin Lee, Sang-Hwan Kim, Hyeunwoo Kwen, Seunghyuk Chang, JongHo Park, Sang-Jin Lee, Jang-Kyoo Shin
Kyungpook National University, Korea Advanced Institute of Science and Technology, Korea

This paper introduces the disparity improvement obtained by decreasing the pixel height in a monochrome CMOS image sensor (CIS) with offset pixel apertures (OPAs) for depth extraction. A 3D image is a stereoscopic image created by adding depth information to a planar two-dimensional image. In the monochrome CIS with the OPAs described in this paper, the disparity is an important factor for obtaining depth information. As the pixel height decreases, the incident angle of light transferred from the microlens to the metal pattern opening increases. Therefore, the light response angle of the left-OPA (LOPA) and right-OPA (ROPA) pixels increases, and thus the disparity improves. In this work, a silicon-region-etching (SRE) process is applied to the proposed monochrome CIS with OPAs to lower the overall pixel height. A monochrome CIS with OPAs is used for the experiment, and a chief-ray-angle (CRA) measurement is implemented to evaluate the change in disparity as a function of pixel height. The proposed monochrome CIS with OPAs was designed and manufactured using a 0.11-μm CIS process. The improved disparity due to the decreased pixel height has been experimentally verified.

Planar microlenses for near infrared CMOS image sensors
Lucie Dilhan, Jérôme Vaillant, Alain Ostrovsky, Lilian Masarotto, Céline Pichard, Romain Paquet
University Grenoble Alpes, CEA, STMicroelectronics, France

In this paper we present planar microlenses designed to improve the sensitivity of SPAD pixels. We designed diffractive and metasurface planar microlens structures based on rigorous optical simulations, then implemented the diffractive microlens on a SPAD design available on STMicroelectronics 40nm CMOS testchips (32 x 32 SPAD array) and compared it with the reference melted-microlens process. We characterized the circuits and demonstrated optical gain from the designed microlenses.

Event threshold modulation in dynamic vision spiking imagers for data throughput reduction
Luis Cubero, Arnaud Peizerat, Dominique Morche, Gilles Sicard
LETI, CEA, University Grenoble Alpes, France

Dynamic vision sensors are growing in popularity for computer vision of moving scenes: their output is a stream of events reflecting temporal lighting changes, instead of absolute values. One of their advantages is fast detection of events, as they are read out asynchronously as spikes. However, high event data throughput implies an increasing workload for the read-out, which can lead to data loss or to prohibitively large power consumption in constrained devices. This work presents a technique to reduce the event data throughput at the cost of very compact additional circuitry at the pixel level: fewer events are generated while preserving most of the information. Our simulated example shows the data throughput reduced to 14% in the case of the most aggressive version of our approach.
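
The mechanism behind the throughput reduction is the standard DVS comparator: a pixel emits an event when its log intensity moves by more than a contrast threshold since the last event, so raising (modulating) the threshold directly thins the event stream. A toy single-pixel simulation of that trade-off (ours, not the paper's circuit):

    import numpy as np

    def count_events(log_intensity, threshold):
        # An event fires (and the reference resets) whenever the log
        # intensity deviates from the reference by more than the threshold.
        ref, events = log_intensity[0], 0
        for v in log_intensity[1:]:
            if abs(v - ref) > threshold:
                events += 1
                ref = v
        return events

    rng = np.random.default_rng(1)
    trace = np.cumsum(rng.normal(0.0, 0.05, 10_000))  # random-walk log intensity
    print(count_events(trace, 0.10))  # nominal contrast threshold
    print(count_events(trace, 0.30))  # raised threshold -> far fewer events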

Go to the original article...

Samsung Promotes its 108MP Sensor

Image Sensors World        Go to the original article...

Samsung publishes a promotional article about its 108MP ISOCELL Bright HMX mobile sensor.


Go to the original article...

ams Announces X-Ray Sensor

Image Sensors World        Go to the original article...

BusinessWire: ams announces the AS5950 integrated sensor chip for X-ray detection, which will enable improved CT detectors with more detailed images at lower system costs.

The AS5950 is a CMOS device that combines a high-sensitivity photodiode array and a 64-channel ADC on the same die. As a single chip, the AS5950 is easier to mount in a CT detector module. Today, CT scanner manufacturers need to assemble a discrete photodiode array on a complex PCB, connected via long traces to a discrete read-out chip. In 8- and 16-slice CT scanners, replacing this complex PCB assembly with a single AS5950 chip dramatically improves image-noise performance and – importantly – reduces manufacturers’ materials and production costs.

Jose Vinau, Marketing Director for the Medical & Specialty Sensors business line at ams, says: “ams wants to help make CT scanners more affordable and available throughout the world. The introduction of the AS5950 and its module will reduce the hurdles in assembly and manufacturing of an X-ray detector.”

Go to the original article...

IEDM 2019: Samsung Presents its Event-Based Sensor

Image Sensors World        Go to the original article...

Samsung presented a paper "Low-Latency Interactive Sensing for Machine Vision" by Paul K. J. Park, Jun-Seok Kim, Chang-Woo Shin, Hyunku Lee, Weiheng Liu, Qiang Wang, Yohan Roh, Jeonghan Kim, Yotam Ater, Evgeny Soloveichik, and Hyunsurk Eric Ryu at IEDM last week.

"In this paper, we introduce the low-latency interactive sensing and processing solution for machine vision applications. The event-based vision sensor can compress the information of moving objects in a costeffective way, which in turn, enables the energy-efficient and real-time processing in various applications such as person detection, motion recognition, and Simultaneous Localization and Mapping (SLAM). Our results show that the proposed technique can achieve superior performance than conventional methods in terms of accuracy and latency.

For this, we had previously proposed 640x480 VGA-resolution DVS with a 9-um pixel pitch supporting a data rate of 300Meps by employing a fully synthesized word-serial group address-event representation (G-AER) which handles massive events in parallel by binding neighboring 8 pixels into a group [3]. The chip only consumes a total of 27mW at a data rate of 100Keps and 50mW at 300Meps.
"

Go to the original article...

ON Semi Marketing on Vision IoT

Image Sensors World        Go to the original article...

ON Semi publishes a marketing webinar about its Vision IoT solutions.

Go to the original article...

Micro-power ToF Camera

Image Sensors World        Go to the original article...

IEEE Sensors Journal publishes an EPFL open-access paper "An Ultra-Low Power PPG and mm-Resolution ToF PPD-Based CMOS Chip Towards All-in-One Photonic Sensors" by Assim Boukhayma, Antonino Caizzone, and Christian Enz, describing an extremely low-power ToF camera:

"This paper presents a CMOS photonic sensor covering multiple applications from ambient light sensing to time resolved photonic sensing. The sensor is made of an array of gated pinned photodiodes (PPDs) averaged using binning and passive switched-capacitor (SC) charge sharing combined with ultra-low-power amplification and analog-to-digital conversion. The chip is implemented in a 180 nm CMOS image sensor (CIS) process and features high sensitivity, low-noise and low-power performance. Measurement results demonstrate uW health monitoring through Photoplethysmography (PPG), 10 ps resolution for time resolved light sensing and mm precision for time-of-flight (ToF) distance ranging obtained with a frame rate of 50 Hz and 20 dB ambient light rejection."


Go to the original article...

Brillnics 2.8um, 120 ke− Full Well Pixel with 160 µV/e− Conversion Gain

Image Sensors World        Go to the original article...

MDPI paper "A 120-ke− Full-Well Capacity 160-µV/e− Conversion Gain 2.8-µm Backside-Illuminated Pixel with a Lateral Overflow Integration Capacitor" by Isao Takayanagi, Ken Miyauchi, Shunsuke Okura, Kazuya Mori, Junichi Nakamura, and Shigetoshi Sugawa from Brillnics, Ritsumeikan University, and Tohoku University is a part of Special issue on the 2019 International Image Sensor Workshop (IISW2019).

"In this paper, a prototype complementary metal-oxide-semiconductor (CMOS) image sensor with a 2.8-μm backside-illuminated (BSI) pixel with a lateral overflow integration capacitor (LOFIC) architecture is presented. The pixel was capable of a high conversion gain readout with 160 μV/e− for low light signals while a large full-well capacity of 120 ke− was obtained for high light signals. The combination of LOFIC and the BSI technology allowed for high optical performance without degradation caused by extra devices for the LOFIC structure. The sensor realized a 70% peak quantum efficiency with a normal (no anti-reflection coating) cover glass and a 91% angular response at ±20° incident light. This 2.8-μm pixel is potentially capable of higher than 100 dB dynamic range imaging in a pure single exposure operation."

Go to the original article...

VGA to Stay in Smartphones

Image Sensors World        Go to the original article...

IFNews quotes an Industrial Securities report forecasting that VGA and 1.3MP sensors are here to stay in smartphones.


IFNews also quotes a Credit Suisse report that "Samsung Electronics is winding down production of low-pixel-count CISs (16MP and below) and preferentially allocating logic manufacturing capacity to 24MP/48MP and above CISs. Omnivision is benefiting from this in particular and has raised prices by around 15% in 4Q19. This is causing CIS market conditions to improve rapidly."

Go to the original article...

Leakage Non-Uniformity and RTN

Image Sensors World        Go to the original article...

MDPI paper "Leakage Current Non-Uniformity and Random Telegraph Signals in CMOS Image Sensor Floating Diffusions Used for In-Pixel Charge Storage" by by Alexandre Le Roch, Vincent Goiffon, Olivier Marcelot, Philippe Paillet, Federico Pace, Jean-Marc Belloir, Pierre Magnan, and Cédric Virmontois from Université de Toulouse, CEA, and Centre Nationale d’Etudes Spatiales (CNES), France belongs to Special Issue "Special issue on the 2019 International Image Sensor Workshop (IISW2019)"

"The leakage current non-uniformity, as well as the leakage current random and discrete fluctuations sources, are investigated in pinned photodiode CMOS image sensor floating diffusions. Different bias configurations are studied to evaluate the electric field impacts on the FD leakage current. This study points out that high magnitude electric field regions could explain the high floating diffusion leakage current non-uniformity and its fluctuation with time called random telegraph signal. Experimental results are completed with TCAD simulations allowing us to further understand the role of the electric field in the FD leakage current and to locate a high magnitude electric field region in the overlap region between the floating diffusion implantation and the transfer gate spacer."

Go to the original article...

Fab Equipment Spending Upswing Led by Image Sensors

Image Sensors World        Go to the original article...

PRNewswire: The rebound in fab equipment spending is led by the image sensor market, according to SEMI:

"Lead by Sony, image sensors spending is expected to jump 20 percent in the first half of 2020 and soar by over 90 percent in the second half, peaking at US$1.6 billion."

Go to the original article...

Panasonic Lumix TZ95 ZS80 review

Cameralabs        Go to the original article...

The Panasonic Lumix TZ95 / ZS80 is a pocket super-zoom camera packing a 30x optical range, 20 Megapixel sensor, 4k video, flip-up touchscreen and built-in viewfinder. It updates its predecessor with a more detailed viewfinder and Bluetooth for easier wireless connectivity. In his full review, Ken compares it against the Nikon COOLPIX A1000 and other rivals!…

The post Panasonic Lumix TZ95 ZS80 review appeared first on Cameralabs.

Go to the original article...

IEDM 2019: Samsung to use 14nm FinFET Process for 144MP Sensor

Image Sensors World        Go to the original article...

Samsung presented a 14nm FinFET process optimized for imaging applications at IEDM last week: "14nm FinFET process technology platform for over 100M pixel density and ultra low power 3D Stack CMOS Image Sensor" by Donghee Yu, Choong jae Lee, Myounkyu Park, Junghwan Park, Seungju Hwang, Joonhyung Lee, Sunghun Yu, Hyunjung Shin, ByoungHo Kim, Jong-Won Choi, Sangil Jung, Minho Kwon, Il-Seon Ha, Chaesung Kim, Sanghyun Cho, Seunghyun Lim, Won-Woong Kim, Moo-Young Kim, Seonghye Park, Ki-Don Lee, Rakesh Ranjan, Shigenobu Maeda, and Gitae Jeong.

"CMOS Image Sensor(CIS) products need higher voltage device and better analog characteristics than conventional SOC & Logic products. This work presents newly developed 14nm FinFET process with 2.xV high voltage FinFET device characteristics showing excellent analog and low power digital characteristics comparing to 28nm planar process. Gm is improved by 30% and 67% in FinFET process for NMOS and PMOS, respectively. Rout characteristics increased by 40 times and 6 times over 28nm planar process. Interface state density(Nit) improved by more than 40% and flicker noise characteristics also improved by 64% and 42% for NMOS and PMOS, respectively. Digital logic Transistor ion-ioff performance improved by by 32% and by 211% for NMOS and PMOS, respectively compared to 28nm planar device and the chip power consumption of digital logic functional block reduced by 34% in real Si of 12M pixel product. 14nm FinFET process expected to improve power consumption by 42% in 144M pixel density."

Go to the original article...

Canon requests removal of toner cartridge offered by JUNBS from Amazon.com

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Canon requests removal of toner cartridge offered by Mal-ll from Amazon.com

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Canon Requests Removal of Toner Cartridge offered by Sprint Toner from Amazon.ca

Newsroom | Canon Global        Go to the original article...

Go to the original article...

Dark Current and Plasma Damage

Image Sensors World        Go to the original article...

MDPI paper "CMOS Image Sensors and Plasma Processes: How PMD Nitride Charging Acts on the Dark Current" by Yolène Sacchettini, Jean-Pierre Carrère, Romain Duru, Jean-Pierre Oddou, Vincent Goiffon, and Pierre Magnan from STMicroelectronics and ISAE-SUPAERO, Université de Toulouse is apart of Special Issue on the 2019 International Image Sensor Workshop (IISW2019).

"Plasma processes are known to be prone to inducing damage by charging effects. For CMOS image sensors, this can lead to dark current degradation both in value and uniformity. An in-depth analysis, motivated by the different degrading behavior of two different plasma processes, has been performed in order to determine the degradation mechanisms associated with one plasma process. It is based on in situ plasma-induced charge characterization techniques for various dielectric stack structures (dielectric nature and stack configuration). A degradation mechanism is proposed, highlighting the role of ultraviolet (UV) light from the plasma in creating an electron hole which induces positive charges in the nitride layer at the wafer center, and negative ones at the edge. The trapped charges de-passivate the SiO2/Si interface by inducing a depleted interface above the photodiode, thus emphasizing the generation of dark current. A good correlation between the spatial distribution of the total charges and the value of dark current has been observed."

Go to the original article...

Light Co. Changes its Focus to Automotive 3D

Image Sensors World        Go to the original article...

Light Co. appears to have changed its main technology focus to automotive 3D perception. The L16 camera and Nokia 9 smartphone info has been moved to the "Case Studies" tab on Light's website.

"A missing piece in long-range depth perception

For automobiles to safely navigate the real world, they need to be able to perceive as humans do: a full picture with accurate depth throughout ranges. Lidar provides accurate information, but only up to a point, and with limited resolution. Radar detects when an object is in the far distance but it isn't sophisticated enough to discern whether it's a truck or a barn. The range that radar is truly capable of is also often far less than claimed.

The Opportunity

The hole that exists in long-range, accurate sensing for ADAS/ADS is where Light comes in. We are developing an incredibly resilient perception technology that provides precise object detection, definition, and tracking through extended ranges. All in real-time."

Go to the original article...

Samsung to Adopt RISC-V for its Image Sensors

Image Sensors World        Go to the original article...

The Register writer Chris Williams reports from the RISC-V Summit held this week in Silicon Valley that Samsung is going to use RISC-V in its image sensors, as well as in AI edge devices. Earlier this year, Sony too presented at a RISC-V conference in Japan.

Go to the original article...

MagikEye to Demo its Invertible Light Image Sensor Technology

Image Sensors World        Go to the original article...

BusinessWire: Magik Eye Inc. will be holding demonstrations of its Invertible Light Technology (ILT) at the 2020 CES. ILT is said to be an alternative to ToF and structured-light 3D imaging solutions, and the smallest, fastest, and most power-efficient 3D sensing method. “We are pleased to demonstrate our new 3D sensing solutions that will enable exciting use cases for applications in robotics and smart phones,” said Takeo Miyazawa, Founder & CEO of MagikEye. The company's presentation is available at Slideshare.

Go to the original article...

Free ToF Book

Image Sensors World        Go to the original article...

INRIA, Grenoble, France, posts a ToF book based on its cooperative research project with the 3D Mixed Reality Group at the Samsung Advanced Institute of Technology. The book "Time-of-Flight Cameras: Principles, Methods and Applications" by Miles Hansard, Seungkyu Lee, Ouk Choi, and Radu Horaud is dated November 2012:

"This book describes a variety of recent research into time-of-flight imaging. Time-of-flight cameras are used to estimate 3D scene-structure directly, in a way that complements traditional multiple-view reconstruction methods. The first two chapters of the book explain the underlying measurement principle, and examine the associated sources of error and ambiguity. Chapters three and four are concerned with the geometric calibration of time-of-flight cameras, particularly when used in combination with ordinary colour cameras. The final chapter shows how to use time-of-flight data in conjunction with traditional stereo matching techniques. The five chapters, together, describe a complete depth and colour 3D reconstruction pipeline. This book will be useful to new researchers in the field of depth imaging, as well as to those who are working on systems that combine colour and time-of-flight cameras."

Go to the original article...

Yole on Race for Event-Driven Sensor Dominance

Image Sensors World        Go to the original article...

i-Micronews article "The race between Sony and Samsung for neuromorphic image sensors is heating up" forecasts that neuromorphic semiconductors, sensing and computing will become a $7.1B market by 2029.

Go to the original article...

CCD History Told by Business Teachers

Image Sensors World        Go to the original article...

For those interested in history, the Strategic Entrepreneurship Journal publishes a paper "The pre‐commercialization emergence of the combination of product features in the charge‐coupled device image sensor" by Raja Roy, Curba M. Lampert, and MB Sarkar. While the final version of the paper is behind a paywall, there are a number of draft versions openly available on the Internet, for example, here and here. The paper mentions some exotic technologies that the industry tried in the early CCD days, such as germanium CCDs, hybrid FSI-BSI devices, and peristaltic CCDs:


One of the questions that the authors could not answer is: "we cannot explain why early members of the innovation ecosystem—such as TI, RCA, Fairchild, Sony, Matsushita, Kodak, Philips, and others in the context of CCD—exchanged information and recombined knowledge to refine the product design. Do potential buyers strategically make such knowledge flow possible? Are firms in the pre-commercialization phase motivated to recombine knowledge to overcome the initial uncertainties associated with developing the product that meets the needs of large institution buyer? These are some of the critical questions that need to be answered in future research."

Go to the original article...

AIStorm Wins Frost & Sullivan’s 2019 Technology Innovation Award

Image Sensors World        Go to the original article...

PRNewswire: California-based startup AIStorm has won Frost & Sullivan's 2019 Technology Innovation Award for its AI-in-Sensor (AIS) technology that enables real-time processing of sensor data at the edge, without digitization. The AIS technology uses "charge domain processing that controls the electron movement between the storage elements in the chip and uses switch charge circuits for mathematical control over the charge transfer."

AIStorm's chips integrate imager (CIS or Lidar), voice (MEMS microphones), or waveform (vibration or motion) sensors, as well as flow, network, memory, power management, and communication tasks. AIStorm's solutions enable "always-on" imaging and audio event-driven capability without polling, utilizing an intelligent AI-based trigger mechanism, thus eliminating false triggers and using minimal power while waiting for an event.

Go to the original article...

Omnivision Promotes its 8.3MP Automotive Sensor

Image Sensors World        Go to the original article...

EETimes publishes Junko Yoshida's article on Omnivision's new 8.3MP image sensor with LED flicker mitigation:

"Celine Baron, OmniVision’s staff automotive product manager, noted during an interview with EE Times that LEDs are everywhere, ranging from headlamps and traffic lights to road signs, billboards and bus displays. Given their ubiquity, it’s hard to avoid LED flickering. It can be distracting enough to human eyes, but it could be fatal to an AVs’ machine vision. Human vision can compensate for flickering. AV machine vision can’t."

Go to the original article...

LiDAR News: Blickfeld, Aeva, Outsight, Leddartech, First Sensor, Draper, SiLC, Robosense

Image Sensors World        Go to the original article...

Blickfeld publishes an article explaining the challenges of automotive MEMS LiDAR:

"In order to capture as much light as possible, a large aperture, i.e. as large a mirror as possible, is required. However, the mirror size is also limited by certain factors – it is therefore necessary to calculate the optimum size on the basis of these factors.

MEMS mirrors oscillate at a certain resonant frequency. The resonant frequency at which a mirror oscillates depends on the size and mounting of the mirror. For this purpose we have developed a proprietary embedding of the mirrors in order to be able to use particularly large mirrors. Due to the unusually large diameter, a large number of photons can be directed onto the scene and back onto the detector, which allows Blickfeld LiDAR sensors to achieve a long range. In addition, thanks to their size, the mirrors are more robust than conventional products, which are only a few millimeters in diameter. Yet, they have a high resonant frequency due to their lightweight construction which ensures that the photons are returned to the detector. If the mirror oscillates too quickly or too slowly, the photons are deflected past the detector due to the coaxial structure."


IEEE Spectrum, Reuters, Businesswire: Aeva announces its FMCW LiDAR that integrates all the key elements of a LiDAR sensor onto a photonics chip. Aeva’s 4D LiDAR-on-chip reduces the size and power of the device by orders of magnitude while achieving full range performance of over 300m for low-reflectivity objects and the ability to measure instantaneous velocity for every point. Aeva’s LiDAR-on-chip will cost less than $500 at scale, in contrast to the several tens of thousands of dollars for today’s LiDAR sensors.

“Not all FMCW LiDARs are created equally,” said Mina Rezk, Co-Founder of Aeva. “A key differentiator of our approach is breaking the dependency between maximum range and point density, which has been a barrier for time-of-flight and FMCW LiDARs so far. Our 4D LiDAR integrates multiple beams on a chip, each beam uniquely capable of measuring more than 2 million points per second at distances beyond 300m.”

Aeva promises to unveil its next-generation LiDAR, Aeries, at CES 2020, featuring a 120-degree FOV at only half the size of Aeva’s first product. Aeries meets the final production requirements for autonomous driving robo-taxis and large-volume ADAS customers and will be available for use in development vehicles in the first half of 2020.

“We have scanned the market closely and believe Aeva’s 4D LiDAR on a chip technology is the best LiDAR solution on the market, solving a fundamental bottleneck for perception in taking autonomous driving to mass scale,” said Alex Hitzinger, SVP of Autonomous Driving at VW Group and CEO of VW Autonomy GmbH. “Together we are looking into using Aeva’s 4D LiDAR for our VW ID Buzz AV, which is scheduled to launch in 2022/23.”


EIN Presswire: The French startup Outsight announces it has raised $20M in seed funding. Outsight's 3D Semantic Camera includes hyperspectral-based detection of the material composition of objects.

Earlier, Outsight announced a collaboration with Faurecia and Safran. Founded in 2019 by Cedric Hutchings and Raul Bravo, Outsight launched its 3D Semantic Camera in September.

“Our 3D Semantic Camera is not only a new device but a change of paradigm where Situation Awareness becomes plug&play for the first time: we’re creating a new category of solutions that will unleash tremendous business value. We’re proud to have the support of such solid and knowledgeable investors who share our ambition,” said Raul Bravo, President and co-founder of Outsight.


Globenewswire: LeddarTech announces a strategic collaboration with First Sensor AG that is also now joining the Leddar Ecosystem.

LeddarTech, with the support of First Sensor and other industry leaders, is developing the only open and comprehensive LiDAR platform option for OEMs and Tier 1s.

LeddarTech and First Sensor will develop a LiDAR Evaluation Kit, a demonstration tool for Tier 1s and system integrators to develop their own LiDARs based on LeddarEngine technology, First Sensor APDs, and additional ecosystem partners’ technologies, products, and services. The evaluation kit will primarily target automotive front LiDAR applications for high-speed highway driving, such as Highway Pilot and Traffic Jam Assist.


Draper unveils a LiDAR with MEMS beamsteering. Draper’s all-digital switches provide robustness in the harsh automotive environment, an advantage over competing solid-state approaches that rely on analog beamsteering. With Draper’s LiDAR, light is emitted through a matrix of optical switches and collected through the same optical switches, which allows for a favorable signal-to-noise ratio, since little ambient light is collected.

Draper’s LiDAR is being developed to image a range of hundreds of meters while providing a corresponding angular resolution targeted at less than 0.1 degrees, a significant advancement over competing LiDAR systems, many of which offer lower range and resolution.

“At Draper, we have experience with differing beamsteering methods, such as optical phased arrays. However, we feel MEMS optical switches provide an elegant simplicity,” said Sabrina Mansur, Draper’s self-driving vehicle program manager. “If we want to image a target at a specified location, we simply enable the corresponding optical switch, whereas other approaches rely on precise analog steering, which is challenging given automotive’s thermal and vibration environment.”

The new offering, which is available to license, adds to Draper’s all-weather LiDAR technology, named Hemera, a detection capability designed to see through dense fog that is compatible with most LiDAR systems.


PRNewswire: SiLC Technologies, a developer of integrated single-chip FMCW LiDAR, and Varroc Lighting Systems announce a seamless LiDAR integration into a production automotive headlamp. The Varroc Lighting Systems headlamp is based on a sophisticated production LED design and leverages four of SiLC's silicon photonics FMCW vision chips, providing a full 20 x 80-degree FOV per headlamp.

SiLC's 1550nm LiDAR chip can be inconspicuously embedded anywhere on a vehicle for optimal vision and safety. SiLC's 4D+ Vision Chip integrates all required functionality, such as a coherent light source and optical signal processing, to enable additional information to be extracted from the returning photons before their conversion to electrons. SiLC's vision sensor can detect height, width, distance, reflectivity, velocity, and light polarization of objects. The coherent interferometric sensing approach improves achievable accuracy by orders of magnitude over existing technologies. SiLC's 4D+ Vision Chip can detect low reflectance objects beyond 200m, providing enough time for a vehicle to avoid an obstacle at highway speeds.
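
As background on why a coherent FMCW chip measures both range and velocity for every point (standard FMCW relations, not SiLC-specific): the beat frequency between the chirped reference and the return encodes range, while the Doppler shift encodes radial velocity,

\[ R = \frac{c\,T_{\mathrm{chirp}}\,f_{\mathrm{beat}}}{2B}, \qquad v = \frac{\lambda\,f_{\mathrm{Doppler}}}{2}, \]

where B is the chirp bandwidth and T_chirp the chirp duration.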


Businesswire: RoboSense launches a complete LiDAR perception solution for Robo Taxis (RS-Fusion-P5) in markets outside China. The RS-Fusion-P5 was first launched in China last month. Equipped with one RS-Ruby and four RS-BPearls, the RS-Fusion-P5 is considered an alternative to Waymo's LiDAR solution, further accelerating the development of Robo Taxis.

Go to the original article...

1/f Noise in CMOS Sensors

Image Sensors World        Go to the original article...

A paper "1/f Noise Modelling and Characterization for CMOS Quanta Image Sensors" by Wei Deng and Eric R. Fossum, Dartmouth College belongs to MDPI Special issue on the 2019 International Image Sensor Workshop (IISW2019). The paper presents rather surprising results that match Hooge mobility fluctuation model, largely abandoned by the industry and academic worlds:

"This work fits the measured in-pixel source-follower noise in a CMOS Quanta Image Sensor (QIS) prototype chip using physics-based 1/f noise models, rather than the widely-used fitting model for analog designers. This paper discusses the different origins of 1/f noise in QIS devices and includes correlated double sampling (CDS). The modelling results based on the Hooge mobility fluctuation, which uses one adjustable parameter, match the experimental measurements, including the variation in noise from room temperature to –70 °C. This work provides useful information for the implementation of QIS in scientific applications and suggests that even lower read noise is attainable by further cooling and may be applicable to other CMOS analog circuits and CMOS image sensors."

Go to the original article...

D-ToF LiDAR Model

Image Sensors World        Go to the original article...

A paper "Modeling and Analysis of a Direct Time-of-Flight Sensor Architecture for LiDAR Applications" by Preethi Padmanabhan, Chao Zhang, and Edoardo Charbon, EPFL and TU Delft, belongs to MDPI Special issue on the 2019 International Image Sensor Workshop.

"Direct time-of-flight (DTOF) is a prominent depth sensing method in light detection and ranging (LiDAR) applications. Single-photon avalanche diode (SPAD) arrays integrated in DTOF sensors have demonstrated excellent ranging and 3D imaging capabilities, making them promising candidates for LiDARs. However, high background noise due to solar exposure limits their performance and degrades the signal-to-background noise ratio (SBR). Noise-filtering techniques based on coincidence detection and time-gating have been implemented to mitigate this challenge but 3D imaging of a wide dynamic range scene is an ongoing issue. In this paper, we propose a coincidence-based DTOF sensor architecture to address the aforementioned challenges. The architecture is analyzed using a probabilistic model and simulation. A flash LiDAR setup is simulated with typical operating conditions of a wide angle field-of-view (FOV = 40 ∘ ) in a 50 klux ambient light assumption. Single-point ranging simulations are obtained for distances up to 150 m using the DTOF model. An activity-dependent coincidence is proposed as a way to improve imaging of wide dynamic range targets. An example scene with targets ranging between 8–60% reflectivity is used to simulate the proposed method. The model predicts that a single threshold cannot yield an accurate reconstruction and a higher (lower) reflective target requires a higher (lower) coincidence threshold. Further, a pixel-clustering scheme is introduced, capable of providing multiple simultaneous timing information as a means to enhance throughput and reduce timing uncertainty. Example scenes are reconstructed to distinguish up to 4 distinct target peaks simulated with a resolution of 500 ps. Alternatively, a time-gating mode is simulated where in the DTOF sensor performs target-selective ranging. Simulation results show reconstruction of a 10% reflective target at 20 m in the presence of a retro-reflective equivalent with a 60% reflectivity at 5 m within the same FOV."

Go to the original article...

Intel Unveils Indoor MEMS LiDAR

Image Sensors World        Go to the original article...

Intel announces the RealSense LiDAR Camera L515, able to generate 23M depth points per second with mm accuracy (a consistency check follows the feature list below). The L515 focuses on indoor applications that require depth data at high resolution and high accuracy. It uses a proprietary MEMS mirror scanner, enabling better laser power efficiency compared to other ToF technologies. The new camera has an internal vision processor, motion blur artifact reduction, and short photon-to-depth latency.

The Intel RealSense LiDAR Camera L515 is priced at $349 and is available for pre-order now.

The main features of the L515 indoor LiDAR:

  • Laser wavelength: 860nm
  • Technology: Laser scanning
  • Depth Field of View (FOV): 70° × 55° (±2°)
  • Maximum Distance: 9m
  • Minimum Depth Distance: 0.25m
  • Depth Output Resolution & Frame Rate: Up to 1024 × 768 depth pixels, 30 fps
  • Ambient Temperature: 0-30 °C
  • Power consumption: less than 3.5W
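
A quick consistency check on the quoted point rate (our arithmetic): the maximum depth resolution times the frame rate gives

\[ 1024 \times 768 \times 30\,\mathrm{fps} = 23{,}592{,}960 \approx 23.6\,\mathrm{M\ points/s}, \]

in line with the 23M depth points per second headline figure.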


Go to the original article...
