NHK R&D Journal Issue on Image Sensing Devices

Image Sensors World        Go to the original article...

The March 2019 issue of the NHK STRL R&D Journal is devoted to imaging devices being developed by the company:

Dark Current Reduction In Crystalline Selenium-Based Stacked-Type CMOS Image Sensors
Shigeyuki IMURA, Keitada MINEO, Kazunori MIYAKAWA, Masakazu NANBA,
Hiroshi OHTAKE And Misao KUBOTA
Highly sensitive imaging devices may be obtained by using avalanche multiplication in crystalline selenium (c-Se)-based stacked-type CMOS image sensors operating in the visible region. However, the increase in dark current in the low-electric-field (non-avalanche) region has been an issue. In this study, we optimized the growth conditions of the tellurium (Te) nucleation layer, which is used to prevent the Se film from peeling, resulting in a reduction of the dark current in the non-avalanche region. We fabricated a test device on a glass substrate and successfully reduced the dark current to below 100 pA/cm2 (by a factor of 1/100) at a reverse-bias voltage of 15 V.


Improvement in Performance of Photocells Using Organic Photoconductive Films Sandwiched Between Transparent Electrodes
Toshikatsu SAKAI, Tomomi TAKAGI, Yosuke HORI, Takahisa SHIMIZU,
Hiroshi OHTAKE And Satoshi AIHARA
We have developed a stacked-type image sensor with high sensitivity, consisting of three sensor elements, each of which is sensitive to only one of the primary colors. Each R/G/B-sensitive photocell uses an organic photoconductive film sandwiched between transparent ITO electrodes.

3D Integrated Image Sensors With Pixel-Parallel Signal Processing
Masahide GOTO, Yuki HONDA, Toshihisa WATABE, Kei HAGIWARA,
Masakazu NANBA And Yoshinori IGUCHI
We studied a three-dimensional integrated image sensor capable of pixel-parallel signal processing. Photodiodes, pulse generation circuits and 16-bit pulse counters are three-dimensionally integrated within each pixel by direct bonding of silicon-on-insulator (SOI) layers with embedded Au electrodes, which provides in-pixel pulse-frequency-modulation A/D converters. Pixel-parallel video images with Quarter Video Graphics Array (QVGA) resolution were obtained, demonstrating the feasibility of these next-generation image sensors.


The Japanese version of the Journal has many more papers, but it's harder to figure out their technical content.

Image Sensors at VLSI Symposia 2019

The VLSI Symposia, to be held this June in Kyoto, Japan, have published their agenda with many image sensor papers:

A 640x640 Fully Dynamic CMOS Image Sensor for Always-On Object Recognition,
I. Park*, W. Jo*, C. Park*, B. Park*, J. Cheon** and Y. Chae*, *Yonsei Univ. and **Kumoh National Institute of Technology, Korea
This paper presents a 640x640 fully dynamic CMOS image sensor for always-on object recognition. A pixel output is sampled with a dynamic source follower (SF) into a parasitic column capacitor, which is read out by a dynamic single-slope (SS) ADC based on a dynamic bias comparator and an energy-efficient two-step counter. The sensor, implemented in a 0.11μm CMOS, achieves 0.3% peak non-linearity, 6.8e- rms RN and 67dB DR. Its power consumption is only 2.1mW at 44fps and is further reduced to 260μW at 15fps with a sub-sampled 320x320 mode. This work achieves a state-of-the-art energy efficiency FoM of 0.7e-·nJ.
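As a sanity check, the energy-efficiency FoM (read noise times energy per pixel conversion) can be recomputed from the figures quoted in the abstract. The sketch below is illustrative back-of-envelope arithmetic, not code from the paper:

```python
# Recompute the energy-efficiency FoM from the quoted numbers.
power_w = 2.1e-3          # 2.1 mW at full resolution
fps = 44
pixels = 640 * 640
noise_e = 6.8             # read noise, e- rms

energy_per_pixel_nj = power_w / (fps * pixels) * 1e9   # J -> nJ per conversion
fom = noise_e * energy_per_pixel_nj                    # e- * nJ
print(round(energy_per_pixel_nj, 3), round(fom, 2))    # 0.117 0.79
```

This lands close to the reported 0.7 e-·nJ; the small difference likely comes from rounding in the quoted power and noise figures.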

A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression,
C. Li*, L. Longinotti*, F. Corradi** and T. Delbruck***, *iniVation AG, **iniLabs GmbH and ***Univ. of Zurich, Switzerland
This paper reports a 132 by 104 dynamic vision sensor (DVS) with 10μm pixel in a 65nm logic process and a synchronous address-event representation (SAER) readout capable of 180Meps throughput. The SAER architecture allows adjustable event frame rate control and supports pre-readout pixel-parallel noise and spatial redundancy suppression. The chip consumes 250μW with 100keps running at 1k event frames per second (efps), 3-5 times more power efficient than the prior art using normalized power metrics. The chip is aimed for low power IoT and real-time high-speed smart vision applications.

Automotive LIDAR Technology,
M. E. Warren, TriLumina Corporation, USA
LIDAR is an optical analog of radar providing high spatial-resolution range information. It is an essential part of the sensor suite for ADAS (Advanced Driver Assistance Systems), and ultimately, autonomous vehicles. Many competing LIDAR designs are being developed by established companies and startup ventures. Although there are no standards, performance and cost expectations for automotive LIDAR are consistent across the automotive industry. Why are there so many different competing designs? We can look at the system requirements and organize the design options around a few key technologies.
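As a back-of-envelope illustration of the ranging principle (not from the talk), a pulsed LIDAR converts a pulse's round-trip time t into distance as d = c*t/2:

```python
C = 299_792_458.0  # speed of light, m/s

def range_from_round_trip(t_seconds):
    """Distance to a target from the round-trip time of a LIDAR pulse."""
    return C * t_seconds / 2.0

# A ~667 ns round trip corresponds to a target about 100 m away,
# a typical automotive LIDAR range requirement.
print(round(range_from_round_trip(667e-9), 1))  # 100.0
```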

A 64x64 APD-Based ToF Image Sensor with Background Light Suppression Up to 200 klx Using In-Pixel Auto-Zeroing and Chopping,
B. Park, I. Park, W. Choi and Y. C. Chae, Yonsei Univ., Korea
This paper presents a time-of-flight (ToF) image sensor for outdoor applications. The sensor employs a gain-modulated avalanche photodiode (APD) that achieves high modulation frequency. The suppression capability of background light is greatly improved up to 200klx by using a combination of in-pixel auto-zeroing and chopping. A 64x64 APD-based ToF sensor is fabricated in a 0.11μm CMOS. It achieves depth ranges from 0.5 to 2 m with 25MHz modulation and from 2 to 20 m with 1.56MHz modulation. For both ranges, it achieves a non-linearity below 0.8% and a precision below 3.4% at a 3D frame rate of 96fps.
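The two modulation frequencies trade depth precision against unambiguous range. The standard indirect-ToF relations below sketch that principle; they are generic formulas, not code from the paper:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range(f_mod_hz):
    # Phase is only resolvable modulo 2*pi, so the maximum
    # unambiguous distance is c / (2 * f_mod).
    return C / (2.0 * f_mod_hz)

def depth_from_phase(phase_rad, f_mod_hz):
    # d = c * phi / (4 * pi * f_mod)
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

print(round(unambiguous_range(25e6), 2))    # 6.0   -> covers the 0.5-2 m mode
print(round(unambiguous_range(1.56e6), 1))  # 96.1  -> covers the 2-20 m mode
```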

A 640x480 Indirect Time-of-Flight CMOS Image Sensor with 4-tap 7-μm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self- Compensation Scheme,
M.-S. Keel, Y.-G. Jin, Y. Kim, D. Kim, Y. Kim, M. Bae, B. Chung, S. Son, H. Kim, T. An, S.-H. Choi, T. Jung, C.-R. Moon, H. Ryu, Y. Kwon, S. Seo, S.-Y. Kim, K. Bae, S.-C. Shin and M. Ki, Samsung Electronics Co., Ltd., Korea
A 640x480 indirect Time-of-Flight (ToF) CMOS image sensor has been designed with a 4-tap 7-μm global-shutter pixel in a 65-nm back-side illumination (BSI) process. With the novel 4-tap pixel structure, we achieved a motion artifact-free depth map. Column fixed-pattern phase noise (FPPN) is reduced by introducing alternative control of the clock delay propagation path in the photo-gate driver. As a result, motion artifact and column FPPN are not noticeable in the depth map. The proposed ToF sensor shows depth noise less than 0.62% with a 940-nm illuminator over a working distance of up to 400 cm, and consumes 197 mW for VGA, which is 0.64 μW/pixel.

A 128x120 5-Wire 1.96mm2 40nm/90nm 3D Stacked SPAD Time Resolved Image Sensor SoC for Microendoscopy,
T. Al Abbas*, O. Almer*, S. W. Hutchings*, A. T. Erdogan*, I. Gyongy*, N. A. W. Dutton** and R. K. Henderson*, *Univ. of Edinburgh and
**STMicroelectronics, UK
An ultra-compact 1.4mmx1.4mm, 128x120 SPAD image sensor with a 5-wire interface is designed for time-resolved fluorescence microendoscopy. Dynamic range is extended by noiseless frame summation in SRAM attaining 126dB time resolved imaging at 15fps with 390ps gating resolution. The sensor SoC is implemented in STMicroelectronics 40nm/90nm 3D-stacked BSI CMOS process with 8μm pixels and 45% fill factor.

Fully Integrated Coherent LiDAR in 3D-Integrated Silicon Photonics/65nm CMOS,
P. Bhargava*, T. Kim*, C. V. Poulton**, J. Notaros**, A. Yaacobi**, E. Timurdogan**, C. Baiocco***, N. Fahrenkopf***, S. Kruger***, T. Ngai***, Y. Timalsina***, M. R. Watts** and V. Stojanovic*, *Univ. of California, Berkeley, **Massachusetts Institute of Technology and ***College of Nanoscale Science and Engineering, USA
We present the first integrated coherent LiDAR system with experimental ranging demonstrations operating within the eyesafe 1550nm band. Leveraging a unique wafer-scale 3D integration platform which includes customizable silicon photonics and nanoscale CMOS, our system seamlessly combines a high-sensitivity optical coherent detection front-end, a large-scale optical phased array for beamforming, and CMOS electronics in a single chip. Our prototype, fabricated entirely in a 300mm wafer facility, shows that low-cost manufacturing of high-performing solid-state LiDAR is indeed possible, which in turn may enable extensive adoption of LiDARs in consumer products, such as self-driving cars, drones, and robots.

Automotive Image Sensor for Autonomous Vehicle and Adaptive Driver Assistance System,
H. Matsumoto, Sony Corp.
Human vision is the most essential sense for driving a vehicle. In place of human eyes, the CMOS image sensor is the best sensing device for recognizing objects and the environment around the vehicle. Image sensors are also used in various other cases, such as driver and passenger monitoring in the vehicle cabin. These use cases require some special functionalities and specifications. In this session, the requirements for automotive image sensors, such as high dynamic range, flicker mitigation and low noise, will be discussed. The last part will discuss the key technologies needed to utilize image sensors, such as image recognition and computer vision.

426-GHz Imaging Pixel Integrating a Transmitter and a Coherent Receiver with an Area of 380x470 μm2 in 65-nm CMOS,
Y. Zhu*, P. R. Byreddy*, K. K. O* and W. Choi*, **, *The Univ. of Texas at Dallas and **Oklahoma State Univ., USA
A 426-GHz imaging pixel integrating a transmitter and a coherent receiver using the three oscillators for 3-push within an area of 380x470 μm2 is demonstrated. The TX power is -11.3 dBm (EIRP) and sensitivity is -89.6 dBm for 1-kHz noise bandwidth. The sensitivity is the lowest among imaging pixels operating above 0.3 THz. The pixel consumes 52 mW from a 1.3 V VDD. The pixel can be used with a reflector with 47 dB gain to form a camera-like reflection mode image for an object 5 m away.

Monolithic Three-Dimensional Imaging System: Carbon Nanotube Computing Circuitry Integrated Directly Over Silicon Imager,
T. Srimani, G. Hills, C. Lau and M. Shulaker, Massachusetts Institute of Technology, USA
Here we show a hardware prototype of a monolithic three-dimensional (3D) imaging system that integrates computing layers directly in the back-end-of-line (BEOL) of a conventional silicon imager. Such systems can transform imager output from raw pixel data to highly processed information. To realize our imager, we fabricate 3 vertical circuit layers directly on top of each other: a bottom layer of silicon pixels followed by two layers of CMOS carbon nanotube FETs (CNFETs) (comprising 2,784 CNFETs) that perform in-situ edge detection in real-time, before storing data in memory. This approach promises to enable image classification systems with improved processing latencies.

Record-High Performance Trantenna Based on Asymmetric Nano-Ring FET for Polarization-Independent Large-Scale/Real-Time THz Imaging,
E.-S. Jang*, M. W. Ryu*, R. Patel*, S. H. Ahn*, H. J. Jeon*, K. Han** and K. R. Kim*, *Ulsan National Institute of Science and Technology and **Dongguk Univ., Korea
We demonstrate a record-high performance monolithic trantenna (transistor-antenna) fabricated in a 65-nm CMOS foundry process in the field of plasmonic terahertz (THz) detectors. By applying ultimate structural asymmetry between source and drain on a ring FET with source diameter (dS) scaling from 30 to 0.38 micrometer, we obtained a 180-times enhanced photoresponse (∆u) in on-chip THz measurement. Through free-space THz imaging experiments, the conductive drain region of the ring FET itself showed frequency sensitivity with a resonance frequency at 0.12 THz in the 0.09 ~ 0.2 THz range and polarization-independent imaging results as an isotropic circular antenna. The highly-scalable and feeding-line-free monolithic trantenna enables a high-performance THz detector with a responsivity of 8.8kV/W and NEP of 3.36 pW/Hz0.5 at the target frequency.

Custom Silicon and Sensors Developed for a 2nd Generation Augmented Reality User Interface,
P. O'Connor, Microsoft, USA.

Event-Based Cameras Review

The University of Zurich's paper "Event-based Vision: A Survey" by G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, D. Scaramuzza compares different event-based cameras:

"Event cameras are bio-inspired sensors that work radically differently from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (in the order of microseconds), low power consumption, and do not suffer from motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world."
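The per-pixel behavior described in the abstract, emitting a signed event whenever the log brightness moves by more than a contrast threshold, can be sketched with a toy model (the threshold value and function are illustrative, not the actual DVS circuit):

```python
import math

def events_from_samples(samples, threshold=0.2):
    """Simulate one event-camera pixel.

    samples: list of (timestamp, intensity) pairs.
    Emits (t, sign) whenever log intensity moves by at least
    `threshold` from the reference level of the last event.
    """
    events = []
    t0, i0 = samples[0]
    ref = math.log(i0)
    for t, i in samples[1:]:
        level = math.log(i)
        while abs(level - ref) >= threshold:
            sign = 1 if level > ref else -1
            events.append((t, sign))
            ref += sign * threshold  # step the reference toward the new level
        # an unchanged pixel produces no output at all
    return events

# A pixel that brightens, then returns to its original level:
evts = events_from_samples([(0, 1.0), (1, 1.5), (2, 1.0)])
print(evts)  # [(1, 1), (1, 1), (2, -1), (2, -1)]
```

Note the asynchronous, sparse output: static scenes generate nothing, which is the source of the low power consumption the survey highlights.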

From Events to Video

The University of Zurich publishes a video explanation of its paper "Events-to-Video: Bringing Modern Computer Vision to Event Cameras" by Henri Rebecq, René Ranftl, Vladlen Koltun, and Davide Scaramuzza, to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, June 2019.

Chemical Imaging in EUV

Semiconductor Engineering publishes a nice article on photoresist operation in EUV photolithography systems used in advanced processes. It shows how far chemical imaging, the predecessor of image sensors, can go:

"In the early days of EUV development, supporters of the technology argued that it was “still based on photons,” as opposed to alternatives like electron beam lithography. While that’s technically true, even a casual glance at EUV optics shows that these photons interact with matter differently.

An incoming EUV photon has so much energy that it doesn’t interact with the molecular orbitals to any significant degree. John Petersen, principal scientist at Imec, explained that it ejects one of an atom’s core electrons.

...the photoelectron recombines with the material, ejecting another electron. This cascade of absorption/emission events, with energy dissipating at each step, continues until the electron energy drops below about 30 eV.

Once the electron energy is in the 10 to 20 eV range, Petersen said, researchers see the formation of quantized plasma oscillations, known as plasmons. The plasmons in turn create an electric field, with effects on further interactions that are not yet understood.

Only after energy falls below 5 to 10 eV, where electrons have quantum resonance with molecular orbitals, does the familiar resist chemistry of older technologies emerge. At this level, molecular structure and angular momentum drive further interactions.
"


Teledyne e2v Re-Announces 4K 710fps APS-C Sensor with GS

GlobeNewswire: Teledyne e2v announces sample availability of the Lince11M sensor, half a year after the original announcement. Lince11M is designed for applications that require 4K resolution at very high shutter speeds. This standard sensor combines 4K resolution at 710 fps in APS-C format.

SRI to Develop Night Vision Sensor

PRNewswire: SRI International has received an award to deliver digital night vision camera prototypes to support the U.S. Army's IVAS (Integrated Visual Augmentation System) program. SRI will design a low-light-level CMOS sensor and integrate the device into a custom camera module optimized for low size, weight and power (SWAP).

"SRI has been steadily advancing the low-light-level performance of night vision CMOS (NV-CMOS®) image sensors and we are pleased that the IVAS program will incorporate our fourth generation NV-CMOS imagers," said Colin Earle, associate director, Imaging Systems, SRI International.

BAE Announces no-ITAR Restricted 2.3MP 60fps Thermal Sensor

BAE Systems' Sensor Solutions is launching the Athena1920 full HD (1920x1200) thermal camera core. Based on uncooled 12µm pixels, the Athena1920 is available now with no ITAR restrictions at a 60Hz frame rate.

All Huawei P30 Cameras Made by Sony

EETimes publishes SystemPlus teardown results of Huawei P30 Pro flagship smartphone:

"Separating Huawei P30 Pro, more than anything else though, is its use of quad cameras. The new smartphone literally has four cameras. They include a main camera, plus cameras for wide-angle, Time-of-Flight and a periscope view. All four use Sony CMOS image sensors. “It’s a full design win for Sony,” said Stéphane Elisabeth, costing analyst expert at SystemPlus Consulting."

Sony Robotics and Interaction Future is Based on ToF and Stereo Technologies

Sony's exhibition at Milan Design Week, devoted to the future of AI and robotics, is based on the company's ToF and stereo vision technologies:

"Sony's leading image sensor technologies are used in the exhibits of "Affinity in Autonomy". Stereo cameras with a back-illuminated Time-of-Flight image sensor and a CMOS image sensor for sensing applications equipped with a global shutter enable new interactive experiences by sensing the conditions surrounding humans and robots.

Back-illuminated Time-of-Flight image sensor

With ToF technology, the distance to an object is measured by the time it takes for light from a light source to reach the object and reflect back to the sensor. ToF image sensors detect distance information for every pixel, resulting in highly accurate depth maps.
The new sensor which adopts back-illuminated CMOS image sensor architecture allows for more accurate detection of the reflected light because of improved sensor sensitivity.

CMOS image sensor for sensing applications equipped with global shutter function(IMX418)

The new product builds on the advantages of Sony's CMOS image sensor equipped with a global shutter function without focal plane distortion, with lower power consumption.

This product employs an angle of view with a 1:1 aspect ratio, which minimizes image information loss due to device tilt, whether the camera is mounted on the front, back, top, bottom, left or right of an HMD, drone, or autonomous robot.
"

SiC Image Sensor Thesis

KTH Royal Institute of Technology, Stockholm, Sweden publishes a PhD thesis "Silicon Carbide High Temperature Photodetectors and Image Sensor" by Shouben Hou.

"Silicon Carbide (SiC) has the advantages of ultraviolet (UV) sensing and high temperature characteristics because of its wide band gap. Driven by the objective of probing the high temperature surface of Venus (460 °C), this thesis develops SiC photodetectors and an image sensor for extremely high temperature functions. The devices and circuits are demonstrated through the procedure of layout design, in-house processing and characterizations on two batches.

The photodetectors developed in this thesis, including photodiodes with various mesa areas, a phototransistor and a phototransistor Darlington pair, have stable characteristics in a wide temperature range (25 °C ~ 500 °C). The maximum operational temperature of the p-i-n photodiode (550 °C) is the highest temperature ever accomplished by a photodiode. The optical responsivity of the photodetectors covers the spectrum from 220 nm to 380 nm, which is UV-only.

The SiC pixel sensor and image sensor developed in this thesis are pioneer works. The pixel sensor overcomes the challenge of monolithic integration of SiC photodiode and transistors by sharing the same epitaxial layers and topside contacts. The pixel sensor is characterized from 25 °C to 500 °C. The whole image sensor circuit has 256 (16 ×16) pixel sensors and one 8-bit counter together with two 4-to-16 decoders for row/column selection. The digital circuits are built by the standard logic gates selected from the TTL PDK. The image sensor has 1959 transistors in total. The function of the image sensor up to 400 °C is verified by taking basic photos of nonuniform UV illumination on the pixel sensor array.
"
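The row/column addressing described in the abstract, with an 8-bit counter feeding two 4-to-16 decoders to scan the 16x16 pixel array, can be modeled as follows (the function name and the upper/lower bit split are assumptions for illustration, not taken from the thesis):

```python
def select_pixel(counter_value):
    """Split an 8-bit counter into one-hot row/column selects,
    mimicking two 4-to-16 decoders scanning a 16x16 array."""
    assert 0 <= counter_value < 256
    row_addr = (counter_value >> 4) & 0xF   # upper 4 bits -> row decoder
    col_addr = counter_value & 0xF          # lower 4 bits -> column decoder
    row_onehot = 1 << row_addr              # one of 16 row lines asserted
    col_onehot = 1 << col_addr              # one of 16 column lines asserted
    return row_onehot, col_onehot

# Counter value 0x23 selects row 2, column 3.
r, c = select_pixel(0x23)
print(bin(r), bin(c))  # 0b100 0b1000
```

With this scheme, letting the counter run through all 256 values raster-scans every pixel exactly once per frame.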

Hillhouse Renamed to CelePixel and Relocated to Shanghai

Hillhouse Technology Singapore has been renamed to CelePixel Technology and relocated to Shanghai, China. The company develops neuromorphic event-driven sensors and has filed for 7 US patents:

"In 1989, Carver Mead, US computer scientist, a founder of Moore’s law and VLSI, created the concept of Neuromorphic Engineering.

In 1990s, his students Misha Mahowald and Kwabena Boahen developed the first Retinomorphic sensor based on Address Event Representation. Subsequently, a number of scientific institutions started to research on Retinomorphic sensors.

Standing on the shoulders of giants, CelePixel has gone further in technological innovations and explorations, to take the cutting-edge underlying technology to the forefront of commercial applications.
"


The company has won Audi Innovation Lab Award:

Interview with Sony 48MP CIS Designers

Sony publishes an article "Perspectives from the creators of the image sensor “microcosm”" with interviews with the IMX586 CMOS sensor designers. A few quotes:

"With smartphone cameras getting more and more sophisticated in recent years, every company has been striving to make pixels smaller to meet the demand for more advanced cameras that are still small enough to fit in a phone. So, in order to stay ahead of the competition, we needed to develop even smaller pixels. With the IMX586, we were able to achieve a pixel size of 0.8 μm, which in turn made it possible to deliver a high resolution of 48 effective megapixels even on a compact sensor of 1/2 inch (8.0 mm diagonal).

Downsizing even 0.1 μm is, in fact, incredibly difficult... the trend of miniaturization is about to enter a turning point. That is, we will eventually reach the limit for simply making pixels smaller and face tradeoffs due to miniaturization.

...we can differentiate our product by curtailing noise so as to realize high sensitivity performance and pioneering new pixel structures and miniaturization.
In addition, at Sony, we have people nearby thinking about signal processing algorithms, and we have the manufacturing company within our Group. This proximity gives us an advantage in that it makes it easier for us to find ways to achieve overall optimization.

...for the IMX586, our algorithms played a big role in functions such as the high dynamic range (HDR) image composition, the array conversion processing for the Quad Bayer color filter array that achieves both high sensitivity and high resolution, and the phase difference detection entailed in high-speed autofocusing.

...since the pixel size of the IMX586 was a world-first at 0.8 μm, the basic development started at Nagasaki, the core manufacturing site for smartphone image sensor development. However, due to circumstances related to other product development, resources and production, we decided to develop and produce in Oita.

The team at Oita was, frankly, very surprised with that move as we did not believe that we had enough experience in image sensor development compared with other Sony technology centers, and so we never thought that we would be at the forefront of product development for such a challenging technology.

Secondly, it had only been a little while since the Oita Technology Center joined Sony Semiconductor Manufacturing, so there were many differences in development procedure and culture. For that reason, it was my mission to find a way to smoothly integrate the culture of the Oita plant with the culture of Sony Semiconductor Manufacturing. In the development of IMX586, the schedule was very tight, so there were challenges with unifying all the team members while working at the same time to meet the timeline.

The smaller the pixel, the more it becomes necessary to build the photodiodes in the depth direction of the silicon substrate. To do that, you need to use greater energy to inject impurities into the silicon.

Also, in the photolithography process, we use a thing called thick film resist. This time it was particularly difficult to address fluctuations in the imaging characteristics due to the change in shape of this thick film resist. We had to spend a lot of time improving processing reproducibility using the same equipment and uniformity in the wafer surface.
"


Kingpak Reports Higher Sales of Sony and ON Semi Sensors

Digitimes: Kingpak packaging house reports Q1 revenue sequential growth of 15.8% and an annual increase of 8.8%. The company's utilization rate has risen sharply due to large orders from ON Semi and Sony. Kingpak now focuses its production on automotive devices with high gross margins, which contribute over 70% of the company's revenues. The company is expanding its production capacity by 40% to meet the next wave of robust demand for CIS devices driven by the growing penetration of ADAS.

Go to the original article...

Canon EOS 250D Rebel SL3 review – preview

Cameralabs        Go to the original article...

The Canon EOS 250D / Rebel SL3 is a compact DSLR aimed at first-time buyers looking for a step-up from the cheapest models. You get a 24MP APSC sensor, optical viewfinder, fully-articulated touchscreen and mic input, and while the 4k is limited, the 1080 enjoys great autofocus. See my preview for details!…

The post Canon EOS 250D Rebel SL3 review – preview appeared first on Cameralabs.

Go to the original article...

Ambarella Processor for Security Cameras Promises 2-5 Year Battery Life

Image Sensors World        Go to the original article...

BusinessWire: Ambarella introduces the S6LM camera SoC for both professional and home security cameras. The S6LM includes Ambarella’s latest HDR and low-light processing technology, 4K H.264 and H.265 encoding, multi-streaming, on-chip 360-degree dewarping, cyber-security features, and a quad-core Arm CPU. Fabricated in a 10nm process, the SoC offers very low-power operation, making it well-suited for small form factor and battery-powered designs.

An S6LM-based battery-powered camera or PIR video camera can shut down in less than one second when something such as an animal, shadow, or rain causes a false alert, effectively extending the camera’s battery life to between 2 and 5 years.

Go to the original article...

Qualcomm Enhances Camera AI Capabilities

Image Sensors World        Go to the original article...

Qualcomm Snapdragon 665 SoC is said to improve its AI capabilities over the previous generation:

"Snapdragon 665 is loaded with advanced AI capabilities to enhance your daily life. Powered by our third generation Qualcomm AI Engine, Hexagon 686 DSP, and Hexagon Vector eXtensions (HVX) for advanced on-device imaging and computing, you can enjoy features like AR Translate that instantly translates words in multiple languages. This latest platform also performs smart biometrics for enhanced security with features like 3D Face Unlock. Overall, these leading on-device AI features are 2X faster than the previous generation mobile platform, the Snapdragon 660.

Some of the Snapdragon 665’s most exciting AI features are related to the camera, opening up the possibilities for brilliant new capture capabilities. Take better shots thanks to object detection, auto scene detect, and smart cropping. Additionally, portrait mode, low-light night mode, and super resolution are designed to ensure you can capture the detail you want up close, at night, and in a multitude of different settings.
"




Snapdragon 730 and 730G SoCs feature the company's 4th generation AI processor:

"AI: Packing 2x the power of its predecessor, Qualcomm Technologies’ 4th generation multi-core Qualcomm® AI Engine accelerates intuitive on-device interactions for camera, gaming, voice and security. The Qualcomm® Hexagon™ 688 Processor inside Snapdragon 730 supports improved base scalar and Hexagon Vector eXtensions (HVX) performance, as well as the new Hexagon Tensor Accelerator—now adding dedicated AI processing into the Hexagon Processor. The combination of these provides a powerful blend of dedicated and programmable AI acceleration now in the 7 series.

Camera: For the first time in the 7 series, the Snapdragon 730 features the Qualcomm Spectra™ 350, featuring a dedicated Computer Vision (CV) ISP, to provide up to 4x overall power savings in Computer Vision compared to the previous generation. The lower power and faster CV can capture 4K HDR videos in Portrait Mode (Bokeh). The CV-ISP is also capable of high resolution depth sensing and the ability to support triple cameras that feature ultra-wide, portrait and telephoto lenses. It also captures photos and videos in the HEIF format so users can document life from multiple angles and store it all at half the file size compared to the previous generation.
"


Qualcomm's AI Day video shows the broad capabilities that the company expects to bring to the market:

Go to the original article...

Fraunhofer CSPAD-based LiDAR

Image Sensors World        Go to the original article...

Fraunhofer IMS shows its CSPAD-based flash LiDAR camera. CSPAD detectors are CMOS integrated SPADs with on-chip readout circuits. The implementation in a standard CMOS process allows cost efficient manufacturing and the design of compact sensors for applications that require high resolution imagers.
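As a minimal illustration of the direct time-of-flight principle behind SPAD-based flash LiDAR (not taken from Fraunhofer's materials), the measured round-trip time of a laser pulse maps to distance via d = c·t/2:

```python
import math

C = 299_792_458.0  # speed of light in vacuum, m/s

def tof_distance_m(round_trip_s):
    """Distance implied by a direct time-of-flight round-trip measurement."""
    return C * round_trip_s / 2.0

# A 100 ns round trip corresponds to roughly 15 m of range
print(round(tof_distance_m(100e-9), 2))
```

The same relation also shows why timing resolution matters: resolving 1 cm of depth requires detecting round-trip differences of about 67 ps, which is why SPADs with picosecond-class timing are attractive for these sensors.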


Go to the original article...

9th Fraunhofer IMS Workshop on CMOS Imaging

Image Sensors World        Go to the original article...

The 9th Fraunhofer IMS Workshop on CMOS Imaging, to be held in Duisburg, Germany on May 7-8, 2019, has published its agenda:

"After a series of very successful workshops since 2002 we are happy to announce our 9th workshop on CMOS Imaging, a forum for the European industry and academia to meet and exchange the latest developments in CMOS based imaging technology. Fifteen presentations by excellent speakers attest to the high quality of the event.

This year’s key topics are 3D imaging and LiDAR technologies, detectors for space, quantum imaging, and new trends in CMOS imaging, among others.
"

  • Flash LiDAR with CSPAD Arrays, Jennifer Ruskowski, Fraunhofer IMS
  • Components for LiDAR in Industrial and Automotive Applications, Winfried Reeb & Jeff Britton, Laser Components
  • Scanning Solid State LiDAR, Michael Kiehn, IBEO Automotive
  • LiDAR Sensors for ADAS and AD, Alexis Debray, Yole Développement SA
  • LiDAR Receivers for Automotive Applications, Marc Schillgalies, First Sensor AG
  • CMOS SPAD Array for Flash LiDAR, Ralf Kühnold, ELMOS AG
  • Advanced optical inspection with in-line computational Imaging, Ernst Bodenstorfer, AIT Austrian Institute of Technology GmbH
  • Backside Illumination Technology for CMOS Imagers, Stefan Dreiner, Fraunhofer IMS
  • Datasheets and Real Performance of CMOS Image Sensors, Albert Theuwissen, Harvest Imaging
  • CMOS SPAD Arrays for Fundamental Research, Peter Fischer, Universität Heidelberg
  • Optical Imaging based on Quantum Technologies, Nils Trautmann, Carl Zeiss AG
  • Ghost Imaging Using Entangled Photons, Dominik Walter, Fraunhofer IOSB
  • ISS Rendezvous and Beyond – LiDAR Sensors in Space, Jakub Bikowski, Jena-Optronik GmbH
  • Challenges for Optical Detectors in Space, Dirk Viehmann, Airbus D+S
  • CMOS TDI Detector for Earth Observation, Stefan Gläsener, Fraunhofer IMS
  • Optional: Visit of Fraunhofer Wafer Fab

Go to the original article...

OmniVision Announces Industry’s Smallest Cabin-Monitoring Automotive Image Sensor

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the OV2778 automotive image sensor, which is said to provide the best value of any 2MP RGB-IR sensor for cabin- and occupant-monitoring, such as detecting packages and unattended children. The OV2778 comes in the smallest package available for the automotive in-cabin market segment — a 6.5 x 5.7mm automotive CSP. It also offers advanced ASIL functional safety, which is important for in-cabin applications when the OV2778 is being integrated as part of an ADAS system.

“Demand for cabin and occupant monitoring is accelerating growth in the global automotive image sensor market,” said Thilo Rausch, product marketing manager at OmniVision. “Our new OV2778 image sensor enables these applications in mainstream vehicles by providing the best value with high sensitivity across all lighting conditions.”

The OV2778 is built on 2.8um OmniBSI-2 Deep Well pixel technology, which delivers a 16-bit linear output from a single exposure. With the second exposure, the DR increases to 120dB. Additionally, with an integrated RGB-IR, 4x4 pattern color filter and external frame synchronization capability, the OV2778 yields top performance across varying lighting conditions.
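As a rough back-of-the-envelope check (not from the announcement), the dynamic range of an ideal N-bit linear output can be estimated as 20·log10(2^N), which puts a single 16-bit exposure near 96 dB; the quoted 120 dB then reflects the extra headroom gained by combining the second exposure:

```python
import math

def linear_dr_db(bits):
    """Approximate dynamic range of an ideal N-bit linear output, in dB."""
    return 20 * math.log10(2 ** bits)

print(round(linear_dr_db(16), 1))  # ~96.3 dB for a single 16-bit exposure
```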

This image sensor is AEC-Q100 Grade 2 certified for automotive applications. OV2778 samples are available now, along with a plug-and-play automotive reference design system that can be connected to any vehicle for rapid development.

Go to the original article...

More Pictures from Huawei RYYB Sensor Presentation

Image Sensors World        Go to the original article...

A few more photos from the Huawei P30 and P30 Pro presentation on the RYYB CFA have been published on Twitter:


IFNews quotes Cowen Research comparing camera BOM in flagship smartphones. Huawei invests the most in its camera:

Go to the original article...

aiCTX Neuromorphic CNN Processor for Event-Driven Sensors

Image Sensors World        Go to the original article...

Swiss startup aiCTX announces a fully-asynchronous event-driven neuromorphic AI processor for low power, always-on, real-time applications. DynapCNN opens new possibilities for dynamic vision processing, bringing event-based vision applications to power-constrained devices for the first time.

DynapCNN is a 12mm^2 chip, fabricated in 22nm technology, housing over 1 million spiking neurons and 4 million programmable parameters, with a scalable architecture optimally suited for implementing Convolutional Neural Networks. It is a first of its kind ASIC that brings the power of machine learning and the efficiency of event-driven neuromorphic computation together in one device. DynapCNN is the most direct and power-efficient way of processing data generated by Event-Based and Dynamic Vision Sensors.

As a next-generation vision processing solution, DynapCNN is said to be 100–1000 times more power efficient than the state of the art, and delivers 10 times shorter latencies in real-time vision processing. Based on fully-asynchronous digital logic, the event-driven design of DynapCNN, together with custom IPs from aiCTX, allow it to perform ultra-low-power AI processing.

For real-time vision processing, almost all applications are for movement driven tasks (for example, gesture recognition; face detection/recognition; presence detection; movement tracking/recognition). Conventional image processing systems analyse video data on a frame by frame basis. “Even if nothing is changing in front of the camera, computation is performed on every frame,” explains Ning Qiao, CEO of aiCTX. “Unlike conventional frame-based approaches, our system delivers always-on vision processing with close to zero power consumption if there is no change in the picture. Any movement in the scene is processed using the sparse computing capabilities of the chip, which further reduces the dynamic power requirements.”
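The contrast between the two paradigms can be sketched in a toy model (this is only an illustration of the principle, not aiCTX's implementation): a frame-based pipeline touches every pixel of every frame, while an event-driven one only does work on pixels that changed.

```python
import numpy as np

def frame_based_ops(frames):
    """Conventional pipeline: every pixel of every frame is processed."""
    return sum(f.size for f in frames)

def event_driven_ops(frames, threshold=10):
    """Event-driven pipeline: only pixels whose brightness changed beyond
    a threshold since the previous frame trigger any computation."""
    ops = 0
    prev = frames[0]
    for f in frames[1:]:
        diff = np.abs(f.astype(int) - prev.astype(int))
        ops += int(np.count_nonzero(diff > threshold))
        prev = f
    return ops

# A static scene: the event-driven path does essentially no work.
static = [np.zeros((240, 320), dtype=np.uint8)] * 10
print(frame_based_ops(static), event_driven_ops(static))
```

With a static scene the frame-based count scales with frames × pixels, while the event-driven count stays at zero, which mirrors the "close to zero power consumption if there is no change in the picture" claim above.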

Those savings in energy mean that applications based on DynapCNN can be always-on, and crunch data locally on battery powered, portable devices. “This is something that is just not possible using standard approaches like traditional deep learning ASICs,” adds Qiao.

Computation in DynapCNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DynapCNN’s continuous computation enables ultra-low-latency of below 5ms. This represents at least a 10x improvement from the current deep learning solutions available in the market for real-time vision processing.

Sadique Sheik, a senior R&D engineer at aiCTX, explains why having their processors do the computation locally would be a cost and energy efficient solution, and would bring additional privacy benefits. “Providing IoT devices with local AI allows us to eliminate the energy used to send heavy sensory data to the cloud for processing. Since our chips do all that processing locally, there’s no need to send the video off the device. This is a strong move towards providing privacy and data protection for the end user.

DynapCNN Development Kits will be available in Q3 2019.

Go to the original article...

ULIS Releases World’s Smallest 60 Hz VGA 12um Thermal Image Sensor

Image Sensors World        Go to the original article...

ALA News: ULIS launches Atto640, a 60fps VGA 12um pixel thermal image sensor for reduced overall size and cost of the camera. The target market is commercial and defense applications, such as Thermal Weapon Sights (TWS), surveillance and handheld thermography cameras, as well as Personal Vision Systems (PVS), including portable monoculars and binoculars for consumer outdoor leisure, law enforcement and border control.

ULIS is adding a VGA format to its existing QVGA Atto320 to give camera manufacturers more choice in its 12 µm product range. The interest for camera makers is that, compared to 17 µm pixel pitch technology, the 12 µm pitch enables them to use smaller and lower cost optics.
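The optics-cost argument follows directly from geometry (a rough sketch, not ULIS figures): for a fixed VGA pixel count, the sensor diagonal, and hence the image circle the lens must cover, scales linearly with pixel pitch.

```python
import math

def sensor_diagonal_mm(h_px, v_px, pitch_um):
    """Diagonal of an h x v pixel array at a given pixel pitch, in mm."""
    return math.hypot(h_px * pitch_um, v_px * pitch_um) / 1000.0

d12 = sensor_diagonal_mm(640, 480, 12)  # 12 um pitch VGA
d17 = sensor_diagonal_mm(640, 480, 17)  # 17 um pitch VGA
print(round(d12, 1), round(d17, 1))  # 9.6 mm vs 13.6 mm
```

Moving from 17 µm to 12 µm shrinks the required image circle by about 30%, which is what lets camera makers fit smaller, cheaper lenses.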

Atto640 achieves its size advantage over competing models through its Wafer Level Packaging (WLP) technology, in which the detector window is directly bonded to the wafer, a technique enabling a significant reduction in the overall dimension of the sensor. Atto640’s footprint is half the size of ULIS’ Pico640-046 (17µm) model. Since Atto640 is designed with WLP, a batch-processing technique, it is suited to high-volume production.

Samples of Atto640 are currently available, with production ramp-up slated for the end of 2019. ULIS intends to further extend its 12µm product line up with larger resolution sensors.



Go to the original article...

FBK Talks about Entangled Light Super-Resolution Microscopy

Image Sensors World        Go to the original article...

FBK publishes a video on SUPERTWIN project - the European entangled light super-resolution microscopy program:

Go to the original article...

SmartSens Unveils DSI Pixel Technology

Image Sensors World        Go to the original article...

PRNewswire: DSI pixel is the next-generation sensor technology provided by SmartSens that has better performance, faster time to market and higher cost effectiveness, compared to previous technologies. The DSI pixel technology integrates SmartSens' design and pixel & process knowledge into the foundry service by DB HiTek (Dongbu).

The DSI pixel is said to surpass both FSI and BSI in terms of performance. Compared to the current SmartPixel FSI performance, the DSI pixel excels in sensitivity improvement and dark current reduction by 2x and 5x respectively. When compared to a different vendor's BSI sensor performance, SmartSens' DSI technology offers enhanced SNR1 and read noise performance.

"With the rapid rise of IoT and AIoT, the market is demanding high-performance image sensors at a low cost of production and fast time to market," said William Ma, COO of SmartSens. "The SmartSens DSI technology goes beyond FSI and BSI technologies paving the way for unique technological advances in image recognition."

Go to the original article...

Sony Polarsens Videos

Image Sensors World        Go to the original article...

Sony publishes new videos explaining its polarization image sensors and showing some example pictures, such as Paris in polarized light:



Go to the original article...

Vision System Design 2019 Innovators Awards

Image Sensors World        Go to the original article...

VisionSystemDesign: The Omnivision OS02C10 1080p HDR CMOS sensor won Vision Systems Design's Silver Innovator's Award. The OS02C10 has a 2.9 µm pixel with QE of 60% at 850 nm and 40% at 940 nm. The sensor combines OmniVision’s ultra-low light (ULL) and Nyxel near-infrared (NIR) technologies to enable nighttime camera performance.


The Sony Image Sensing Solutions XCG-CP510 polarized camera and SDK won a Gold Award. In 2018, Sony Europe’s Image Sensing Solutions division launched the XCG-CP510, which uses Sony’s IMX250MZR sensor with on-chip polarization filters. In addition, Sony launched an SDK that provides a dedicated image processing library to speed solution development, as well as numerous functions, such as stress measurement, glare reduction, and support functions such as demosaic and raw extraction.

The LUCID Vision Labs Helios ToF 3D camera also won a Gold Award. The camera is based on Sony’s DepthSense IMX556PLR BSI ToF image sensor with high NIR sensitivity, 10 µm pixel size and high modulation contrast ratio. The camera can produce depth data at 60 fps with 640×480 resolution over a PoE Gigabit Ethernet interface. The camera has a precision of 2.5mm at 1m and 4.5mm at 2m.


The Photoneo MotionCam-3D won the Platinum Award for its "Parallel Structured Light Technology." The technology lets users capture high resolution images of objects moving at up to 40 m/s. The camera also features a custom CMOS image sensor and can acquire 1068 x 800-point clouds at up to 60 fps. Additionally, the 3D camera features an NVIDIA Maxwell GPU and a recommended scanning distance of 350 to 2000 mm.

Go to the original article...

Sony vs Canon 135mm – can a 23 year-old lens really compete

Cameralabs        Go to the original article...

Owners of the Canon EF 135mm f2L USM consider it a legend, but can a 23 year old lens really compete with a modern design? Ben Harvey pitches the blisteringly-sharp Sony FE 135mm f1.8 GM against his beloved Canon to find out.…

The post Sony vs Canon 135mm – can a 23 year-old lens really compete appeared first on Cameralabs.

Go to the original article...

css.php