Archives for April 2019

Image Sensors in Jewelry

Image Sensors World        Go to the original article...

There happens to be one emerging application that is not sensitive to image quality, pixel size, power consumption, or any other parameter - jewelry. It is not clear how large this market is:

Go to the original article...

Graphene Photodetectors Overview

Image Sensors World        Go to the original article...

Arxiv.org paper "Recent Progress and Future Prospects of 2D-based Photodetectors" by Nengjie Huo and Gerasimos Konstantatos from the Barcelona Institute of Science and Technology and ICREA reviews graphene imagers developed in recent years.

"Conventional semiconductors such as silicon and InGaAs based photodetectors have encountered a bottleneck in modern electronics and photonics in terms of spectral coverage, low resolution, non-transparency, non-flexibility and CMOS-incompatibility. New emerging 2D materials such as graphene, TMDs and their hybrid systems thereof, however, can circumvent all these issues benefiting from mechanical flexibility, extraordinary electronic and optical properties, as well as wafer-scale production and integration. Heterojunction-based photodiodes based on 2D materials offer ultrafast and broadband response from visible to far infrared range. Phototransistors based on 2D hybrid systems combined with other material platforms such as quantum dots, perovskites, organic materials, or plasmonic nanostructures yield ultrasensitive and broadband light detection capabilities. Notably the facile integration of 2D-photodetectors on silicon photonics or CMOS platforms paves the way towards high performance, low-cost, broadband sensing and imaging modalities."

Go to the original article...

ON Semi Analyst Day and Q1 2019 Results

Image Sensors World        Go to the original article...

ON Semi held its Analyst Day on March 9, 2019 and also announced Q1 earnings a few days ago. A few quotes:


From SeekingAlpha earnings call transcript:

"In ADAS applications, our momentum continues to accelerate. We are seeing strong interest from customers in our broad portfolio of automotive image sensor products. Recall that we are the only provider of automotive image sensors with complete portfolio of 1 megapixel, 2 megapixel and 8 megapixel image sensors. The breadth of our portfolio has enabled us to secure many design wins from leading global OEMs and tier-1s."

Go to the original article...

Rode Wireless Go review

Cameralabs        Go to the original article...

The Rode Wireless Go is the World’s smallest wireless microphone system and one of the most affordable options for anyone wanting to explore the benefits of cable-free recording. A built-in mic means you can also clip the transmitter to clothing without mucking about with a separate mic. Find out why it could transform your videos in my full review!…

The post Rode Wireless Go review appeared first on Cameralabs.

Go to the original article...

Velodyne Abandons its San Jose Mega-Factory Project?

Image Sensors World        Go to the original article...

Velodyne closes an agreement with Nikon, under which Sendai Nikon Corporation will manufacture LiDARs for Velodyne with plans to start mass production in the second half of 2019. “Mass production of our high-performance lidar sensors is key to advancing Velodyne’s immediate plans to expand sales in North America, Europe, and Asia,” said Marta Hall, President and CBDO, Velodyne Lidar. “For years, Velodyne has been perfecting lidar technology to produce thousands of lidar units for autonomous vehicles (AVs) and advanced driver assistance systems (ADAS). It is our goal to produce lidar in the millions of units with manufacturing partners such as Nikon."

Compare this last statement with a previous Velodyne PR on its Megafactory: "Located in San Jose, CA, the enormous facility not only has enough space for high-volume manufacturing, but also for the precise distance and ranging alignment process for LiDAR sensors as they come off the assembly line. ...more than one million LiDAR sensors [are] expected to be built in the facility in 2018. That high-volume manufacturing will feed the global demand for Velodyne’s solid-state hybrid LiDAR."

Instead of shipping 1M LiDARs in 2018 alone, "Velodyne has shipped a cumulative total of 30,000 lidar sensors" from the start of the company to the end of March 2019.

One of the stated reasons for expanding to Japan is cost: "Working with Nikon, an expert in precision manufacturing, is a major step toward lowering the cost of our lidar products. Nikon is notable for expertly mass-producing cameras while retaining high standards of performance and uncompromising quality. Together, Velodyne and Nikon will apply the same attention to detail and quality to the mass production of lidar. Lidar sensors will retain the highest standards while at the same time achieving a price that will be more affordable for customers around the world,” Marta Hall says. However, Japan is not a cheap manufacturing location these days. It's not clear how production in Japan makes Velodyne LiDAR cheaper.

The companies are said to continue to investigate further areas of a wide-ranging and multifaceted business alliance.

One major part missing from this PR is the fate of the Velodyne Mega-Factory in San Jose. Half a year ago, the company appointed a new COO tasked with introducing even more automation at the site. It looks like those efforts were not successful enough.

Go to the original article...

Caeleste MAF HDR GS BSI Rad-Hard Sensor

Image Sensors World        Go to the original article...

Caeleste ELFIS imager is said to be the first image sensor ever combining the following features:
  • True HDR (“MAF HDR”, motion artifact free HDR)
  • Global shutter using GS technology
  • Low-noise CDS readout
  • Global shutter (IWR) operation without dark current penalty
  • Backside illumination
  • TDI radiation-hard design

It has been developed for ESA in collaboration with LFoundry and Airbus. A whitepaper explains the sensor's operation:

Go to the original article...

Sony Reports FY2018 Results

Image Sensors World        Go to the original article...

Sony's results for FY2018, which ended on March 31, 2019, show 9.5% YoY growth in image sensor sales. The company forecasts an 18% sales growth next year:

Go to the original article...

Sony FE 35mm f2.8 ZA review

Cameralabs        Go to the original article...

The Sony FE 35mm f2.8 ZA is a wide-angle prime lens for Alpha mirrorless cameras. Launched with the original A7 full-frame bodies, it remains a popular choice for anyone wanting the most compact general-purpose lens for the system. Find out if it’s still a contender against multiple rivals and alternatives in our full review!…

The post Sony FE 35mm f2.8 ZA review appeared first on Cameralabs.

Go to the original article...

ON Semi Gen3 SiPM LiDAR Demo

Image Sensors World        Go to the original article...

ON Semi demos its 3rd Gen SiPM LiDAR design:



Update: The demo was prepared in collaboration with the startup Phantom Intelligence.

Go to the original article...

Tesla Self-Driving Chip Supports 2.5Gpix/s Camera Input

Image Sensors World        Go to the original article...

Tesla revealed its Gen. 3 HW chip in its Autonomy Day presentation:

Go to the original article...

Sony to Delay Automotive 7.42MP Sensor Production to 2020

Image Sensors World        Go to the original article...

Nikkei: Sony has decided to delay mass production of its IMX424 and IMX324 7.42MP automotive sensors to 2020. The company shipped the first samples of the sensor in 2017, aiming for volume production in June 2018. However, the production start was delayed due to specification changes, additional functions and market trends. Now, the company has finally decided when to start volume production.

ON Semi also plans to start volume production of its 8MP sensor in the early 2020s. Sony presented a demo of its new cameras on a model car - 4 cameras are 7.42MP and another 4 are 2MP:

Go to the original article...

ResearchInChina: Automotive Thermal Vision is of Little Value

Image Sensors World        Go to the original article...

ResearchInChina publishes a report "Global and China Automotive Night Vision System Industry Report, 2019-2025." A few quotes:

"For the automotive sector, night vision system is of little value and seems like “chicken ribs” – tasteless when eaten but a pity to throw away.

In function, night vision system is a special solution for automobiles now that it enables a vehicle to see an object more than 300m ahead at night (compared with a mere 80m offered by headlamps) and gives driver more time to react, ensuring safer driving. ADAS and other technologies (like LiDAR and ordinary optical camera), however, play a part in night driving safety as well. And the stubbornly high price justifies the sluggish demand for night vision systems such as infrared night vision system.

According to the statistics, night vision system was a standard configuration for 58 vehicle models available on the Chinese market in March 2019, slightly fewer than in 2015, of which 18 were Savana (caravan) models. Audi, Mercedes-Benz and BMW are less enthusiastic about the technology, and just equip it to their luxury models each priced above RMB1 million (a combined 67% of models carrying the system).

In the meantime, the insiders hold such different views on night vision system as follows:

Negative:

“It’s not something that’s really necessary because optical cameras actually do pretty well at night and you have a radar system as backup that is not affected by light,” said Dan Galves, a senior vice president at Intel Corporation’s Mobileye.

Bosch argues that technical advances bring about the decreasing demand for night vision system. One reason is that ordinary camera alone can work outstandingly at night with the maturity of image sensing technology. Also, the progress in technologies for automotive lighting, like LED headlamp, offers a horizon as long as 100-200m. So Bosch has shifted its attention away from night vision solution.

Positive:

Tim LeBeau, the vice president of Seek Thermal, thinks that the current optical radar for autonomous cars cannot detect the heat of an object to ensure whether it is a creature or not, and that the cost of thermal sensors is slashed by about 20 percent a year as they get widely used.

People who detest high beam agree that headlamps delivering 200m beam will interfere with other drivers’ sight, and the solution combining low beam and passive night vision (infrared thermal image) system is the best choice.

Still, some vendors are sparing no efforts in making the technology more feasible for automotive application. Examples include Veoneer whose third-generation night vision system capable of detecting both pedestrians and animals is integrated with rotary LED headlamps which will automatically turn to the front object detected by the system; and Adasky’s Viper system that can classify the obstacles through convolutional neural network-based unique algorithms and display them on the cockpit screen to remind the driver.

Vendors will also work on laser-based night vision, low-light-level night vision, bionic night vision and head-up display (HUD) as well as headlamp fusion.

In brief, as long as price comes down to an affordable level, “the chicken ribs” will become “a delicious homely dish.”
"

Go to the original article...

LiDAR News: Livox, Apple, Canon-Pioneer, ON Semi

Image Sensors World        Go to the original article...

Livox goes through a learning cycle that incumbent LiDAR companies like Velodyne went through a long time ago:

"After hearing from end-users about their specific needs, we’re releasing three new Livox Mid Series firmware for special application testing.

Multi-Return Firmware:

This firmware is designed for situations where the LiDAR laser may hit multiple objects simultaneously and produce multiple returns. It allows users to receive these multiple returns instead of the standard single return.


Short-Blind-Zone Firmware:

This firmware reduces the blind zone from 1 meter down to 0.3 meters which is helpful for shorter range detection applications like interior 3D modeling and mapping.


Threadlike-Noise Filtering Firmware:

This firmware supports processing for threadlike-noise points produced by consecutive return signals and allows you to set the depth of these points to zero."


Livox demos its scanning pattern at Tech.AD Berlin 2019 event:


Reuters reports that Apple has held talks with at least four LiDAR companies as possible suppliers for its self-driving cars, evaluating the companies’ technology while also still working on its own LiDAR design.

Apple is seeking LiDARs that would be smaller, cheaper and more easily mass produced than the current technology. The designs could potentially be made with conventional semiconductor manufacturing. Apple also wants sensors that can see at several hundred meters distance.

Pioneer and Canon announce that the companies have entered into an agreement to co-develop a 3D-LiDAR sensor.

Pioneer has been pursuing the development of compact, high-performance MEMS mirrors that can be produced at a low cost with the aim of mass production from 2020 onwards. In addition to developing object recognition algorithms and vehicle ego-localization algorithms, the company provided its 2018 3D-LiDAR sensor models to companies for testing in September 2018. Additionally, in January 2019, Pioneer established a new organizational structure that integrates autonomous-vehicle-related R&D, technology development and business development to further accelerate the growth of its autonomous vehicle business.

The companies will engage in the joint development of a 3D-LiDAR sensor towards the goal of mass production by Pioneer.

ON Semi fuses SiPM depth map with a regular AR0231 image in this demo:

Go to the original article...

NHK R&D Journal Issue on Image Sensing Devices

Image Sensors World        Go to the original article...

March 2019 issue of NHK STRL R&D Journal devoted to imaging devices being developed by the company:

Dark Current Reduction In Crystalline Selenium-Based Stacked-Type CMOS Image Sensors
Shigeyuki IMURA, Keitada MINEO, Kazunori MIYAKAWA, Masakazu NANBA,
Hiroshi OHTAKE and Misao KUBOTA
Highly sensitive imaging devices may be realized using avalanche multiplication in crystalline selenium (c-Se)-based stacked-type CMOS image sensors in the visible region. The increase in dark current in the low-electric-field (non-avalanche) region has been an issue. In this study, we optimized the growth conditions of the tellurium (Te) nucleation layer, which is used to prevent the Se film from peeling. We fabricated a test device on glass substrates and successfully reduced the dark current in the non-avalanche region to below 100 pA/cm2 (a factor-of-100 reduction) at a reverse-bias voltage of 15 V.


Improvement in Performance of Photocells Using Organic Photoconductive Films Sandwiched Between Transparent Electrodes
Toshikatsu SAKAI, Tomomi TAKAGI, Yosuke HORI, Takahisa SHIMIZU,
Hiroshi OHTAKE and Satoshi AIHARA
We have developed a highly sensitive stacked-type image sensor consisting of three sensor elements, each of which is sensitive to only one of the primary colors. Each R/G/B-sensitive photocell uses an organic photoconductive film sandwiched between transparent ITO electrodes.

3D Integrated Image Sensors With Pixel-Parallel Signal Processing
Masahide GOTO, Yuki HONDA, Toshihisa WATABE, Kei HAGIWARA,
Masakazu NANBA and Yoshinori IGUCHI
We studied a three-dimensional integrated image sensor capable of pixel-parallel signal processing. Photodiodes, pulse generation circuits and 16-bit pulse counters are three-dimensionally integrated within each pixel by direct bonding of silicon-on-insulator (SOI) layers with embedded Au electrodes, providing in-pixel pulse frequency modulation A/D converters. Pixel-parallel video images with Quarter Video Graphics Array (QVGA) resolution were obtained, demonstrating the feasibility of these next-generation image sensors.


The Japanese version of the Journal has many more papers, but it's harder to figure out their technical content.

Go to the original article...

Image Sensors at VLSI Symposia 2019

Image Sensors World        Go to the original article...

The VLSI Symposia, to be held this June in Kyoto, Japan, have published their agenda with many image sensor papers:

A 640x640 Fully Dynamic CMOS Image Sensor for Always-On Object Recognition,
I. Park*, W. Jo*, C. Park*, B. Park*, J. Cheon** and Y. Chae*, *Yonsei Univ. and **Kumoh National Institute of Technology, Korea
This paper presents a 640x640 fully dynamic CMOS image sensor for always-on object recognition. A pixel output is sampled with a dynamic source follower (SF) into a parasitic column capacitor, which is readout by a dynamic single-slope (SS) ADC based on a dynamic bias comparator and an energy efficient two-step counter. The sensor, implemented in a 0.11μm CMOS, achieves 0.3% peak non-linearity, 6.8erms RN and 67dB DR. Its power consumption is only 2.1mW at 44fps and is further reduced to 260μW at 15fps with sub-sampled 320x320 mode. This work achieves the state-of-the-art energy efficiency FoM of 0.7e-·nJ.
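As a cross-check, the reported figure of merit follows from the numbers in the abstract, assuming the common definition of read noise multiplied by readout energy per pixel (the authors' exact definition may differ):

```python
# Energy-efficiency FoM for an image sensor: read noise [e-] multiplied by the
# readout energy per pixel [nJ]. Figures below are from the paper summary above.

def image_sensor_fom(read_noise_e, power_w, fps, n_pixels):
    """Read noise times per-pixel conversion energy, in e-.nJ."""
    energy_per_pixel_nj = power_w / (fps * n_pixels) * 1e9
    return read_noise_e * energy_per_pixel_nj

fom = image_sensor_fom(read_noise_e=6.8, power_w=2.1e-3, fps=44, n_pixels=640 * 640)
print(f"FoM = {fom:.2f} e-.nJ")  # ~0.79 e-.nJ, close to the reported 0.7 e-.nJ
```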

A 132 by 104 10μm-Pixel 250μW 1kefps Dynamic Vision Sensor with Pixel-Parallel Noise and Spatial Redundancy Suppression,
C. Li*, L. Longinotti*, F. Corradi** and T. Delbruck***, *iniVation AG, **iniLabs GmbH and ***Univ. of Zurich, Switzerland
This paper reports a 132 by 104 dynamic vision sensor (DVS) with 10μm pixel in a 65nm logic process and a synchronous address-event representation (SAER) readout capable of 180Meps throughput. The SAER architecture allows adjustable event frame rate control and supports pre-readout pixel-parallel noise and spatial redundancy suppression. The chip consumes 250μW with 100keps running at 1k event frames per second (efps), 3-5 times more power efficient than the prior art using normalized power metrics. The chip is aimed for low power IoT and real-time high-speed smart vision applications.

Automotive LIDAR Technology,
M. E. Warren, TriLumina Corporation, USA
LIDAR is an optical analog of radar providing high spatial-resolution range information. It is an essential part of the sensor suite for ADAS (Advanced Driver Assistance Systems), and ultimately, autonomous vehicles. Many competing LIDAR designs are being developed by established companies and startup ventures. Although there are no standards, performance and cost expectations for automotive LIDAR are consistent across the automotive industry. Why are there so many different competing designs? We can look at the system requirements and organize the design options around a few key technologies.

A 64x64 APD-Based ToF Image Sensor with Background Light Suppression Up to 200 klx Using In-Pixel Auto-Zeroing and Chopping,
B. Park, I. Park, W. Choi and Y. C. Chae, Yonsei Univ., Korea
This paper presents a time-of-flight (ToF) image sensor for outdoor applications. The sensor employs a gain-modulated avalanche photodiode (APD) that achieves high modulation frequency. The suppression capability of background light is greatly improved up to 200klx by using a combination of in-pixel auto-zeroing and chopping. A 64x64 APD-based ToF sensor is fabricated in a 0.11μm CMOS. It achieves depth ranges from 0.5 to 2 m with 25MHz modulation and from 2 to 20 m with 1.56MHz modulation. For both ranges, it achieves a non-linearity below 0.8% and a precision below 3.4% at a 3D frame rate of 96fps.
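The two modulation frequencies map onto the two depth ranges through the standard unambiguous-range relation for continuous-wave iToF, d_max = c / (2 f_mod); a quick sanity check:

```python
# Maximum unambiguous range of a CW indirect-ToF sensor: beyond d_max the
# measured phase wraps around and the depth reading becomes ambiguous.

C = 299_792_458.0  # speed of light, m/s

def unambiguous_range_m(f_mod_hz):
    return C / (2.0 * f_mod_hz)

print(f"25 MHz   -> {unambiguous_range_m(25e6):.1f} m")    # ~6 m, covers the 0.5-2 m mode
print(f"1.56 MHz -> {unambiguous_range_m(1.56e6):.1f} m")  # ~96 m, covers the 2-20 m mode
```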

A 640x480 Indirect Time-of-Flight CMOS Image Sensor with 4-tap 7-μm Global-Shutter Pixel and Fixed-Pattern Phase Noise Self- Compensation Scheme,
M.-S. Keel, Y.-G. Jin, Y. Kim, D. Kim, Y. Kim, M. Bae, B. Chung, S. Son, H. Kim, T. An, S.-H. Choi, T. Jung, C.-R. Moon, H. Ryu, Y. Kwon, S. Seo, S.-Y. Kim, K. Bae, S.-C. Shin and M. Ki, Samsung Electronics Co., Ltd., Korea
A 640x480 indirect Time-of-Flight (ToF) CMOS image sensor has been designed with 4-tap 7-μm global-shutter pixel in 65-nm back-side illumination (BSI) process. With novel 4-tap pixel structure, we achieved motion artifact-free depth map. Column fixed-pattern phase noise (FPPN) is reduced by introducing alternative control of the clock delay propagation path in the photo-gate driver. As a result, motion artifact and column FPPN are not noticeable in the depth map. The proposed ToF sensor shows depth noise less than 0.62% with 940-nm illuminator over the working distance up to 400 cm, and consumes 197 mW for VGA, which is 0.64 pW/pixel.
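For context, a 4-tap iToF pixel recovers depth from four phase-shifted correlation samples. Below is a minimal sketch of the textbook 4-phase demodulation; the tap naming and ordering are illustrative, not taken from the Samsung design:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth_m(q0, q90, q180, q270, f_mod_hz):
    """Depth from four correlation samples taken at 0/90/180/270 degrees."""
    phase = math.atan2(q90 - q270, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Round-trip check: simulate the four taps for a known distance, then recover it.
f_mod = 20e6
true_d = 2.5  # metres
true_phase = 4 * math.pi * f_mod * true_d / C
taps = [math.cos(true_phase - k * math.pi / 2) + 1.0 for k in range(4)]  # +1.0 models ambient offset
print(f"recovered depth = {itof_depth_m(*taps, f_mod):.3f} m")  # -> 2.500 m
```

The differences q0-q180 and q90-q270 cancel the constant ambient offset, which is why the +1.0 term drops out of the recovered phase.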

A 128x120 5-Wire 1.96mm2 40nm/90nm 3D Stacked SPAD Time Resolved Image Sensor SoC for Microendoscopy,
T. Al Abbas*, O. Almer*, S. W. Hutchings*, A. T. Erdogan*, I. Gyongy*, N. A. W.Dutton** and R. K. Henderson*, *Univ. of Edinburgh and
**STMicroelectronics, UK
An ultra-compact 1.4mmx1.4mm, 128x120 SPAD image sensor with a 5-wire interface is designed for time-resolved fluorescence microendoscopy. Dynamic range is extended by noiseless frame summation in SRAM attaining 126dB time resolved imaging at 15fps with 390ps gating resolution. The sensor SoC is implemented in STMicroelectronics 40nm/90nm 3D-stacked BSI CMOS process with 8μm pixels and 45% fill factor.

Fully Integrated Coherent LiDAR in 3D-Integrated Silicon Photonics/65nm CMOS,
P. Bhargava*, T. Kim*, C. V. Poulton**, J. Notaros**, A. Yaacobi**, E. Timurdogan**, C. Baiocco***, N. Fahrenkopf***, S. Kruger***, T. Ngai***, Y. Timalsina***, M. R. Watts** and V. Stojanovic*, *Univ. of California, Berkeley, **Massachusetts Institute of Technology and ***College of Nanoscale Science and Engineering, USA
We present the first integrated coherent LiDAR system with experimental ranging demonstrations operating within the eyesafe 1550nm band. Leveraging a unique wafer-scale 3D integration platform which includes customizable silicon photonics and nanoscale CMOS, our system seamlessly combines a high-sensitivity optical coherent detection front-end, a large-scale optical phased array for beamforming, and CMOS electronics in a single chip. Our prototype, fabricated entirely in a 300mm wafer facility, shows that low-cost manufacturing of high-performing solid-state LiDAR is indeed possible, which in turn may enable extensive adoption of LiDARs in consumer products, such as self-driving cars, drones, and robots.
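Coherent (FMCW-style) LiDAR encodes range in the beat frequency between the transmitted chirp and its echo, R = c·f_beat / (2·B/T). The chirp parameters below are illustrative, not taken from the paper:

```python
# FMCW ranging: a linear chirp of bandwidth B over time T has slope S = B/T;
# a target at range R returns a beat frequency f_beat = S * (2R / c).

C = 299_792_458.0  # speed of light, m/s

def fmcw_range_m(f_beat_hz, bandwidth_hz, chirp_time_s):
    slope = bandwidth_hz / chirp_time_s  # chirp slope, Hz per second
    return C * f_beat_hz / (2.0 * slope)

# Example: 1 GHz chirp over 10 us; a 33.3 MHz beat corresponds to ~50 m.
print(f"{fmcw_range_m(33.3e6, 1e9, 10e-6):.1f} m")
```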

Automotive Image Sensor for Autonomous Vehicle and Adaptive Driver Assistance System,
H. Matsumoto, Sony Corp.
Human vision is the most essential sensor to drive vehicle. Instead of human eyes, CMOS image sensor is the best sensing device to recognize objects and environment around the vehicle. Image sensors are also used in various use cases such as driver and passenger monitor in cabin of vehicle. For these use cases, some special functionalities and specification are needed. In this session the requirements for automotive image sensor will be discussed such as high dynamic range, flicker mitigation and low noise. In the last part the key technology to utilize image sensor, such as image recognition and computer vision will be discussed.

426-GHz Imaging Pixel Integrating a Transmitter and a Coherent Receiver with an Area of 380x470 μm2 in 65-nm CMOS,
Y. Zhu*, P. R. Byreddy*, K. K. O* and W. Choi*, **, *The Univ. of Texas at Dallas and **Oklahoma state Univ., USA
A 426-GHz imaging pixel integrating a transmitter and a coherent receiver using the three oscillators for 3-push within an area of 380x470 μm2 is demonstrated. The TX power is -11.3 dBm (EIRP) and sensitivity is -89.6 dBm for 1-kHz noise bandwidth. The sensitivity is the lowest among imaging pixels operating above 0.3 THz. The pixel consumes 52 mW from a 1.3 V VDD. The pixel can be used with a reflector with 47 dB gain to form a camera-like reflection mode image for an object 5 m away.

Monolithic Three-Dimensional Imaging System: Carbon Nanotube Computing Circuitry Integrated Directly Over Silicon Imager,
T. Srimani, G. Hills, C. Lau and M. Shulaker, Massachusetts Institute of Technology, USA
Here we show a hardware prototype of a monolithic three-dimensional (3D) imaging system that integrates computing layers directly in the back-end-of-line (BEOL) of a conventional silicon imager. Such systems can transform imager output from raw pixel data to highly processed information. To realize our imager, we fabricate 3 vertical circuit layers directly on top of each other: a bottom layer of silicon pixels followed by two layers of CMOS carbon nanotube FETs (CNFETs) (comprising 2,784 CNFETs) that perform in-situ edge detection in real-time, before storing data in memory. This approach promises to enable image classification systems with improved processing latencies.

Record-High Performance Trantenna Based on Asymmetric Nano-Ring FET for Polarization-Independent Large-Scale/Real-Time THz Imaging, E.-S. Jang*, M. W. Ryu*, R. Patel*, S. H. Ahn*, H. J. Jeon*, K. Han** and K. R. Kim*, *Ulsan National Institute of Science and Technology and **Dongguk Univ., Korea
We demonstrate a record-high performance monolithic trantenna (transistor-antenna) using 65-nm CMOS foundry in the field of a plasmonic terahertz (THz) detector. By applying ultimate structural asymmetry between source and drain on a ring FET with source diameter (dS) scaling from 30 to 0.38 micrometer, we obtained 180 times more enhanced photoresponse (∆u) in on-chip THz measurement. Through free-space THz imaging experiments, the conductive drain region of ring FET itself showed a frequency sensitivity with resonance frequency at 0.12 THz in 0.09 ~ 0.2 THz range and polarization-independent imaging results as an isotropic circular antenna. Highly-scalable and feeding line-free monolithic trantenna enables a highperformance THz detector with responsivity of 8.8kV/W and NEP of 3.36 pW/Hz0.5 at the target frequency.

Custom Silicon and Sensors Developed for a 2nd Generation Augmented Reality User Interface,
P. O'Connor, Microsoft, USA.

Go to the original article...

Event-Based Cameras Review

Image Sensors World        Go to the original article...

Arxiv.org: Zurich University paper "Event-based Vision: A Survey" by G. Gallego, T. Delbruck, G. Orchard, C. Bartolozzi, B. Taba, A. Censi, S. Leutenegger, A. Davison, J. Conradt, K. Daniilidis, D. Scaramuzza compares different event-based cameras:

"Event cameras are bio-inspired sensors that work radically different from traditional cameras. Instead of capturing images at a fixed rate, they measure per-pixel brightness changes asynchronously. This results in a stream of events, which encode the time, location and sign of the brightness changes. Event cameras possess outstanding properties compared to traditional cameras: very high dynamic range (140 dB vs. 60 dB), high temporal resolution (in the order of microseconds), low power consumption, and do not suffer from motion blur. Hence, event cameras have a large potential for robotics and computer vision in challenging scenarios for traditional cameras, such as high speed and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world."
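The working principle in the quote above can be sketched as a per-pixel log-intensity comparator: an event fires whenever log brightness moves by more than a contrast threshold since the last event. A toy single-pixel model (the threshold value is illustrative):

```python
import math

def dvs_events(samples, threshold=0.2):
    """samples: list of (t, intensity) for one pixel; returns (t, polarity) events."""
    events = []
    ref = math.log(samples[0][1])  # reference log-brightness at the last event
    for t, intensity in samples[1:]:
        logi = math.log(intensity)
        while logi - ref >= threshold:   # brightness rose: ON (+1) events
            ref += threshold
            events.append((t, +1))
        while ref - logi >= threshold:   # brightness fell: OFF (-1) events
            ref -= threshold
            events.append((t, -1))
    return events

# A pixel that brightens and then darkens back emits ON events, then OFF events:
trace = [(0, 1.0), (1, 1.5), (2, 2.5), (3, 1.0)]
print(dvs_events(trace))  # four (t, +1) events followed by four (t, -1) events
```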

Go to the original article...

From Events to Video

Image Sensors World        Go to the original article...

Zurich University publishes a video explanation of its paper "Events-to-Video: Bringing Modern Computer Vision to Event Cameras" by Henri Rebecq, René Ranftl, Vladlen Koltun, and Davide Scaramuzza to be presented at the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Long Beach, June 2019.

Go to the original article...

Chemical Imaging in EUV

Image Sensors World        Go to the original article...

Semiconductor Engineering publishes a nice article on photoresist operation in EUV photolithography systems used in advanced processes. It shows how far chemical imaging, the predecessor of image sensors, can go:

"In the early days of EUV development, supporters of the technology argued that it was “still based on photons,” as opposed to alternatives like electron beam lithography. While that’s technically true, even a casual glance at EUV optics shows that these photons interact with matter differently.

An incoming EUV photon has so much energy that it doesn’t interact with the molecular orbitals to any significant degree. John Petersen, principal scientist at Imec, explained that it ejects one of an atom’s core electrons.

...the photoelectron recombines with the material, ejecting another electron. This cascade of absorption/emission events, with energy dissipating at each step, continues until the electron energy drops below about 30 eV.

Once the electron energy is in the 10 to 20 eV range, Petersen said, researchers see the formation of quantized plasma oscillations, known as plasmons. The plasmons in turn create an electric field, with effects on further interactions that are not yet understood.

Only after energy falls below 5 to 10 eV, where electrons have quantum resonance with molecular orbitals, does the familiar resist chemistry of older technologies emerge. At this level, molecular structure and angular momentum drive further interactions.
"

Go to the original article...

Teledyne e2v Re-Announces 4K 710fps APS-C Sensor with GS

Image Sensors World        Go to the original article...

GlobeNewswire: Teledyne e2v announces sample availability of its Lince11M sensor, half a year after the original announcement. Lince11M, a standard off-the-shelf sensor, is designed for applications that require 4K resolution at very high shutter speeds, delivering 4K at 710 fps in APS-C format.

Go to the original article...

SRI to Develop Night Vision Sensor

Image Sensors World        Go to the original article...

PRNewswire: SRI International has received an award to deliver digital night vision camera prototypes to support the U.S. Army's IVAS (Integrated Visual Augmentation System) program. SRI will design a low-light-level CMOS sensor and integrate the device into a custom camera module optimized for low size, weight and power (SWAP).

"SRI has been steadily advancing the low-light-level performance of night vision CMOS (NV-CMOS®) image sensors and we are pleased that the IVAS program will incorporate our fourth generation NV-CMOS imagers," said Colin Earle, associate director, Imaging Systems, SRI International.

Go to the original article...

BAE Announces Non-ITAR-Restricted 2.3MP 60fps Thermal Sensor

Image Sensors World        Go to the original article...

BAE Systems' Sensor Solutions is launching the Athena1920 full-HD (1920x1200) thermal camera core. Based on uncooled 12µm pixels, the Athena1920 is available now at a 60Hz frame rate with no ITAR restrictions:

Go to the original article...

All Huawei P30 Cameras Made by Sony

Image Sensors World        Go to the original article...

EETimes publishes SystemPlus teardown results of Huawei P30 Pro flagship smartphone:

"Separating Huawei P30 Pro, more than anything else though, is its use of quad cameras. The new smartphone literally has four cameras. They include a main camera, plus cameras for wide-angle, Time-of-Flight and a periscope view. All four use Sony CMOS image sensors. “It’s a full design win for Sony,” said Stéphane Elisabeth, costing analyst expert at SystemPlus Consulting.

Go to the original article...

Sony Robotics and Interaction Future is Based on ToF and Stereo Technologies

Image Sensors World        Go to the original article...

Sony's exhibition at Milan Design Week, devoted to the future of AI and robotics, is built on the company's ToF and stereo vision technologies:

"Sony's leading image sensor technologies are used in the exhibits of "Affinity in Autonomy". Stereo cameras with back-illuminated Time-of-Flight image sensor and CMOS image sensor for sensing applications equipped with global shutter enable new interactive experiences by sensing conditions surrounding human and robotics.

Back-illuminated Time-of-Flight image sensor

With ToF technology, the distance to an object is measured by the time it takes for light from a light source to reach the object and reflect back to the sensor. ToF image sensors detect distance information for every pixel, resulting in highly accurate depth maps.
The new sensor which adopts back-illuminated CMOS image sensor architecture allows for more accurate detection of the reflected light because of improved sensor sensitivity.

CMOS image sensor for sensing applications equipped with global shutter function(IMX418)

The new product builds on the advantages of Sony's CMOS image sensor equipped with a global shutter function without focal plane distortion, with lower power consumption.

This product employs an angle of view with a 1:1 aspect ratio, which minimizes image information loss due to device tilt, whether the camera is mounted on the front, back, top, bottom, left or right of an HMD, drone, or autonomous robot.
"
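The ToF principle quoted above reduces to simple geometry: distance is half the round-trip travel time of light. A minimal sketch of both the direct (pulse timing) and indirect (phase shift) variants follows; the 20 MHz modulation frequency is a made-up example value, not a Sony specification.

```python
import math

# Minimal sketch of direct and indirect ToF distance measurement.
C = 299_792_458.0  # speed of light, m/s

def dtof_distance_m(round_trip_s):
    """Direct ToF: distance is half the round-trip travel time of light."""
    return C * round_trip_s / 2.0

def itof_distance_m(phase_rad, f_mod_hz):
    """Indirect ToF: distance from the phase shift of modulated light.
    Unambiguous only within c / (2 * f_mod)."""
    return C * phase_rad / (4.0 * math.pi * f_mod_hz)

# A ~6.67 ns round trip corresponds to roughly 1 m.
print(dtof_distance_m(6.67e-9))
# At 20 MHz modulation, a pi/2 phase shift is about 1.87 m.
print(itof_distance_m(math.pi / 2, 20e6))
```

A real ToF sensor evaluates one of these expressions per pixel, which is why the article can speak of "distance information for every pixel" forming a depth map.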

Go to the original article...

SiC Image Sensor Thesis

Image Sensors World        Go to the original article...

KTH Royal Institute of Technology, Stockholm, Sweden publishes a PhD thesis "Silicon Carbide High Temperature Photodetectors and Image Sensor" by Shouben Hou.

"Silicon Carbide (SiC) has the advantages of ultraviolet (UV) sensing and high temperature characteristics because of its wide band gap. Driven by the objective of probing the high temperature surface of Venus (460 °C), this thesis develops SiC photodetectors and an image sensor for extremely high temperature functions. The devices and circuits are demonstrated through the procedure of layout design, in-house processing and characterizations on two batches.

The photodetectors developed in this thesis, including photodiodes with various mesa areas, a phototransistor and a phototransistor Darlington pair have stable characteristics in a wide temperature range (25 °C ~ 500 °C). The maximum operational temperature of the p-i-n photodiode (550 °C) is the highest recorded temperature accomplished ever by a photodiode. The optical responsivity of the photodetectors covers the spectrum from 220 nm to 380 nm, which is UV-only.

The SiC pixel sensor and image sensor developed in this thesis are pioneer works. The pixel sensor overcomes the challenge of monolithic integration of SiC photodiode and transistors by sharing the same epitaxial layers and topside contacts. The pixel sensor is characterized from 25 °C to 500 °C. The whole image sensor circuit has 256 (16 ×16) pixel sensors and one 8-bit counter together with two 4-to-16 decoders for row/column selection. The digital circuits are built by the standard logic gates selected from the TTL PDK. The image sensor has 1959 transistors in total. The function of the image sensor up to 400 °C is verified by taking basic photos of nonuniform UV illumination on the pixel sensor array.
"
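The addressing scheme in the abstract (an 8-bit counter feeding two 4-to-16 decoders to scan the 16×16 array) can be sketched in a few lines. Which counter nibble drives rows versus columns is an assumption for illustration; the thesis excerpt only names the blocks.

```python
# Sketch of the scan logic described in the thesis: an 8-bit counter drives
# two 4-to-16 decoders that select one of 16 rows and one of 16 columns,
# stepping through all 256 pixels. The row/column nibble split is assumed.

def decode_4_to_16(nibble):
    """One-hot output of a 4-to-16 decoder."""
    out = [0] * 16
    out[nibble & 0xF] = 1
    return out

def scan_address(counter_8bit):
    """Row and column select lines for one counter value."""
    row_sel = decode_4_to_16(counter_8bit >> 4)   # upper nibble -> row
    col_sel = decode_4_to_16(counter_8bit & 0xF)  # lower nibble -> column
    return row_sel, col_sel

row, col = scan_address(0x5A)
print(row.index(1), col.index(1))  # row 5, column 10
```

Incrementing the counter through 0..255 visits every pixel exactly once, which is all the digital logic the 1959-transistor design needs for readout sequencing.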

Go to the original article...

Hillhouse Renamed to CelePixel and Relocated to Shanghai

Image Sensors World        Go to the original article...

Hillhouse Technology Singapore has been renamed to CelePixel Technology and relocated to Shanghai, China. The company develops a neuromorphic event-driven sensor and has filed for 7 US patents:

"In 1989, Carver Mead, US computer scientist, a founder of Moore’s law and VLSI, created the concept of Neuromorphic Engineering.

In 1990s, his students Misha Mahowald and Kwabena Boahen developed the first Retinomorphic sensor based on Address Event Representation. Subsequently, a number of scientific institutions started to research on Retinomorphic sensors.

Standing on the shoulders of giants, CelePixel has gone further in technological innovations and explorations, to take the cutting-edge underlying technology to forefront of commercial applications.
"


The company has won Audi Innovation Lab Award:

Go to the original article...

Interview with Sony 48MP CIS Designers

Image Sensors World        Go to the original article...

Sony publishes an article "Perspectives from the creators of the image sensor “microcosm”" featuring interviews with the IMX586 CMOS sensor designers. A few quotes:

"With smartphone cameras getting more and more sophisticated in recent years, every company has been striving to make pixels smaller to meet the demand for more advanced cameras that are still small enough to fit in a phone. So, in order to stay ahead of the competition, we needed to develop even smaller pixels. With the IMX586, we were able to achieve a pixel size of 0.8 μm, which in turn made it possible to deliver a high resolution of 48 effective megapixels even on a compact sensor of 1/2 inch (8.0 mm diagonal).

Downsizing even 0.1 μm is, in fact, incredibly difficult... the trend of miniaturization is about to enter a turning point. That is, we will eventually reach the limit for simply making pixels smaller and face tradeoffs due to miniaturization.

...we can differentiate our product by curtailing noise so as to realize high sensitivity performance and pioneering new pixel structures and miniaturization.
In addition, at Sony, we have people nearby thinking about signal processing algorithms, and we have the manufacturing company within our Group. This proximity gives us an advantage in that it makes it easier for us to find ways to achieve overall optimization.

...for the IMX586, our algorithms played a big role in functions such as the high dynamic range (HDR) image composition, the array conversion processing for the Quad Bayer color filter array that achieves both high sensitivity and high resolution, and the phase difference detection entailed in high-speed autofocusing.

...since the pixel size of the IMX586 was a world-first at 0.8 μm, the basic development started at Nagasaki, the core manufacturing site for smartphone image sensor development. However, due to circumstances related to other product development, resources and production, we decided to develop and produce in Oita.

The team at Oita was, frankly, very surprised with that move as we did not believe that we had enough experience in image sensor development compared with other Sony technology centers, and so we never thought that we would be at the forefront of product development for such a challenging technology.

Secondly, it had only been a little while since the Oita Technology Center joined Sony Semiconductor Manufacturing, so there were many differences in development procedure and culture. For that reason, it was my mission to find a way to smoothly integrate the culture of the Oita plant with the culture of Sony Semiconductor Manufacturing. In the development of IMX586, the schedule was very tight, so there were challenges with unifying all the team members while working at the same time to meet the timeline.

The smaller the pixel, the more it becomes necessary to build the photodiodes in the depth direction of the silicon substrate. To do that, you need to use greater energy to inject impurities into the silicon.

Also, in the photolithography process, we use a thing called thick film resist. This time it was particularly difficult to address fluctuations in the imaging characteristics due to the change in shape of this thick film resist. We had to spend a lot of time improving processing reproducibility using the same equipment and uniformity in the wafer surface.
"
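The Quad Bayer array conversion mentioned in the interview has two modes: full-resolution remosaicing, and low-light binning where the four same-color pixels in each 2×2 cell are combined. The remosaic algorithm is proprietary, but the binning half is straightforward to illustrate; this pure-Python sketch just averages each 2×2 cell (the real sensor does this on-chip).

```python
# Illustrative 2x2 binning for a Quad Bayer sensor like the IMX586:
# averaging the four same-color pixels of each cell turns the 48MP
# Quad Bayer raw frame into a 12MP conventional Bayer frame.

def bin_quad_bayer(raw):
    """Average each 2x2 same-color cell of a Quad Bayer raw frame.
    raw is a list of rows; height and width must be even."""
    h, w = len(raw), len(raw[0])
    return [
        [
            (raw[2 * r][2 * c] + raw[2 * r][2 * c + 1]
             + raw[2 * r + 1][2 * c] + raw[2 * r + 1][2 * c + 1]) / 4.0
            for c in range(w // 2)
        ]
        for r in range(h // 2)
    ]

raw = [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9, 10, 11], [12, 13, 14, 15]]
print(bin_quad_bayer(raw))  # [[2.5, 4.5], [10.5, 12.5]]
```

Averaging four pixels roughly quadruples the collected signal per output pixel, which is why the interview can claim both high sensitivity (binned) and high resolution (remosaiced) from the same 0.8 μm array.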

Go to the original article...

Kingpak Reports Higher Sales of Sony and ON Semi Sensors

Image Sensors World        Go to the original article...

Digitimes: Kingpak packaging house reports Q1 revenue growth of 15.8% sequentially and 8.8% year-over-year. The company's utilization rate has risen sharply due to large orders from ON Semi and Sony. Kingpak now focuses its production on automotive devices with high gross margins, which contribute over 70% of the company's revenues. The company is expanding its production capacity by 40% to meet the next wave of robust demand for CIS devices driven by the growing penetration of ADAS.

Go to the original article...

Canon EOS 250D Rebel SL3 review

Cameralabs        Go to the original article...

The Canon EOS 250D / Rebel SL3 is a compact DSLR aimed at first-time buyers looking for a step-up from the cheapest models. You get a 24MP APSC sensor, optical viewfinder, fully-articulated touchscreen and mic input, and while the 4k is limited, the 1080 enjoys great autofocus. Check out my in-depth review!…

The post Canon EOS 250D Rebel SL3 review appeared first on Cameralabs.

Go to the original article...
