MIPI Cleans Out Offensive Terminology

Image Sensors World        Go to the original article...

The MIPI Board of Directors has directed its working groups to replace offensive terms in MIPI documents. While the primary examples are “master” and “slave,” the groups were also asked to identify any other problematic words, such as “blacklist” and “whitelist,” and to replace them by the end of 2022.

Although working groups can propose additional terms, the following replacements have been made available as options:
  • For master: active, central, controller, default, host, initiator, leader, main, manager, parent, primary, principal, requester, supervisor
  • For slave: auxiliary, child, client, completer, device, follower, peripheral, proxy, replica, responder, secondary, standby, subordinate, supporter, target, worker

Imec Spin-Off Spectricity Raises €14M Series B Funding

BusinessWire: Imec spin-off Spectricity announces a €14M ($16M US) Series B funding round to accelerate the development and mass production of hyperspectral sensors and imagers for high-volume, low-cost applications, from wearables to smartphones and IoT devices.

“I am pleased to see Spectricity growing and closing their second financing round, three years after its creation as imec spin-off,” said Luc Van den hove, President and CEO of imec. “Spectricity’s products are based on unique imec technology, and we will continue to maintain a strong link between our R&D and Spectricity’s development to enable a lasting competitive edge for Spectricity’s products.”

Following more than 10 years of research at imec, Spectricity was founded in 2018 by a team including imec research engineers. CEO Vincent Mouret, who joined the company last April, and Chairman of the Board Pieter Vorenkamp, ex-SVP of Broadcom, helped productize the technology and scale the company.

57 Slides about Sony Stacked Sensors

Sony's ISSCC 2021 Forum 5 presentation "Evolving Image Sensor Architecture through Stacking Devices" by Yusuke Oike has been published online. A small subset of the slides is below:

Sony Officially Warns about CIS Laser Damage

Sony publishes an official warning about the possibility of CIS damage from laser pointers and laser displays:

Assorted Videos: Omnivision, Aeye, HK University

Omnivision continues publishing a series of short interviews with its CTO Boyd Fowler:


It turns out that Aeye's automotive LiDAR has bullet-tracking capability with no HW modifications needed. It is not clear what the use case for that is:


Yang Chai from Hong Kong University presents "Near-/in-sensor computing for neuromorphic machine vision."

A Positive Effect of Image Sensor Noise

There is one positive consequence of image sensor noise: it helps prevent forgeries. Université Paris-Saclay, France, and Universidad de la República, Uruguay, publish the paper "Noisesniffer: a Fully Automatic Image Forgery Detector Based on Noise Analysis" by Marina Gardella, Pablo Musé, Jean-Michel Morel, and Miguel Colom.

"Images undergo a complex processing chain from the moment light reaches the camera’s sensor until the final digital image is delivered. Each of these operations leave traces on the noise model which enable forgery detection through noise analysis. In this article we define a background stochastic model which makes it possible to detect local noise anomalies characterized by their number of false alarms. The proposed method is both automatic and blind, allowing quantitative and subjectivity-free detections. Results show that the proposed method outperforms the state of the art."
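The core idea, that each processing step reshapes the noise model so locally inconsistent noise betrays a splice, can be illustrated with a simple block-wise estimator. This is a hedged sketch, not the Noisesniffer algorithm itself; the block size, the high-pass residual, and the MAD-based statistics are my own assumptions:

```python
import numpy as np

def local_noise_map(img, block=32):
    """Estimate per-block noise std from a high-frequency residual.

    Subtracting the 4-neighbor average suppresses image content, so the
    median absolute deviation (MAD) of the residual tracks the noise std.
    """
    pad = np.pad(img.astype(float), 1, mode="edge")
    resid = pad[1:-1, 1:-1] - 0.25 * (
        pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2] + pad[1:-1, 2:]
    )
    h, w = img.shape
    stds = np.zeros((h // block, w // block))
    for i in range(h // block):
        for j in range(w // block):
            tile = resid[i*block:(i+1)*block, j*block:(j+1)*block]
            stds[i, j] = 1.4826 * np.median(np.abs(tile - np.median(tile)))
    return stds

def suspicious_blocks(stds, k=3.0):
    """Flag blocks whose noise level deviates strongly from the global model."""
    med = np.median(stds)
    mad = 1.4826 * np.median(np.abs(stds - med)) + 1e-12
    return np.abs(stds - med) / mad > k

# Synthetic demo: uniform-noise image with a low-noise "pasted" patch
rng = np.random.default_rng(0)
img = 128 + 5.0 * rng.standard_normal((256, 256))
img[64:128, 64:128] = 128 + 0.5 * rng.standard_normal((64, 64))
mask = suspicious_blocks(local_noise_map(img))
print(mask.sum(), "suspicious blocks")  # flags the pasted 64x64 region
```

Real forgery detectors additionally model the signal-dependence of sensor noise and compute rigorous numbers of false alarms rather than a fixed threshold.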

Quantum Dots Thesis

Universidad de Sevilla, Spain, publishes the BSc thesis "Quantum dots: concept and application for image sensors" by Adrián Romero Campelo.

"In the first part of this work (Chapter 2), a general outline of what quantum dots are and how they are manufactured (materials, techniques employed) will be provided. Besides, a complete description of the band structure of quantum dots, with an emphasis on their optoelectronic features, will be given too. In the second part of the thesis (Chapter 3), photodetection technologies are covered. After an introduction to the state of art of image sensors, the latest advances in quantum dot photodetection will be presented, considering their figures of merit and possible adaptation to current available production methods."


Another interesting recent thesis, from CEA-Leti, is devoted to modern microlens fabrication techniques (in French): "Étude d’une méthode de microfabrication 3D pour des applications de microlentilles d’imageurs" ("Study of a 3D microfabrication method for imager microlens applications") by Pierre Chevalier.

Samsung Paper on Under-Display Camera

EI publishes the Samsung paper "Under Display Camera Image Recovery through Diffraction Compensation" by Jeongguk Lee, Yunseok Choi, Han-Sol Lee, Eundoo Heo, Dongpan Lim, Geunyoung Lee, and Seongwook Song, presented at the EI conference in January 2021.

"Under Display Camera(UDC) technology is being developed to eliminate camera holes and place cameras behind display panels according to full display trend in mobile phone. However, these camera systems cause attenuation and diffraction as light passes through the panel, which is inevitable to deteriorate the camera image. In particular, the deterioration of image quality due to diffraction and flares is serious, in this regard, this paper discusses techniques for restoring it. The diffraction compensation algorithm in this paper is aimed at real-time processing through HW implementation in the sensor for preview and video mode, and we've been able to use effective techniques to reduce computation by about 40 percent."
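Diffraction by the periodic display wiring acts approximately as a convolution with a known point spread function (PSF), so one common compensation route is frequency-domain Wiener deconvolution. The following is a generic sketch, not Samsung's in-sensor HW algorithm, and the lobe-plus-sidelobe PSF is invented purely for illustration:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=100.0):
    """Restore an image blurred by a known PSF with a Wiener filter.

    W = H* / (|H|^2 + 1/SNR) regularizes the inverse filter so that
    frequencies the diffraction pattern suppressed are not over-amplified.
    """
    H = np.fft.fft2(np.fft.ifftshift(psf), s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(np.fft.fft2(blurred) * W))

# Toy PSF: central lobe plus weak diffraction sidelobes (illustrative only)
psf = np.zeros((64, 64))
psf[32, 32] = 1.0
psf[32, 24] = psf[32, 40] = 0.15   # horizontal sidelobes
psf[24, 32] = psf[40, 32] = 0.15   # vertical sidelobes
psf /= psf.sum()

rng = np.random.default_rng(1)
scene = rng.uniform(0, 1, (64, 64))
H = np.fft.fft2(np.fft.ifftshift(psf))
blurred = np.real(np.fft.ifft2(np.fft.fft2(scene) * H))
restored = wiener_deconvolve(blurred, psf)
print(np.abs(restored - scene).mean() < np.abs(blurred - scene).mean())
```

A real-time in-sensor implementation would replace the global FFT with small separable kernels, which is presumably where the quoted ~40% computation reduction comes from.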

IDTechEx Forecasts Event-Based Sensor Sales of $20M in 10 Years from Now

PRNewswire: IDTechEx analyst Matt Dyson says: "IDTechEx forecast the market for the event-based vision sensor chips alone rising from its primarily pre-revenue status today to $20 million per year over the next 10 years. Furthermore, much of the value is likely to be captured by the software facilitated by event-based vision hardware, leading to a much greater total market."

Regarding the potential markets and applications, IDTechEx thinks: "Event-based vision is highly relevant to recording rapidly changing situations that require immediate data processing (since the volume of data produced is much less). Applications that require high temporal resolution or high dynamic range are especially relevant.

IDTechEx, therefore, perceives the most promising applications as collision avoidance and navigation for autonomous vehicles/ADAS and unmanned aerial vehicles (drones). These markets have huge potential but will require substantial software development and data collection to fully interpret the event-based vision data. As such, IDTechEx believes that smaller markets with much more predictable input data, such as iris-tracking for AR/VR goggles and laser beam profiling, will see the earliest adoption of event-based vision."

Ams Releases NanEyeM Module

ams OSRAM first announced the NanEyeM camera module for single-use medical endoscopy almost 3 years ago. Since then, the company "made a few packaging changes to the module for improved robustness," and now fully releases the NanEyeM to production. The small dimensions of 1.0 mm x 1.0 mm x 2.7 mm allow the module to be used in the smallest of spaces.

“Thanks to its space-saving size, the NanEyeM is made for use in areas of severe size restrictions, which includes single-use applications in bronchoscopy, urological endoscopy or endoscopic procedures in the kidney,” says Dina Aguiar, Marketing Manager at ams OSRAM. “The combination with the requisite high image quality makes the camera module a unique and attractive solution for the fast growing disposable endoscope market.”

The module uses a so-called "chip-on-tip" approach: the image sensor and the optics are placed at the tip of the device (the distal end). This results in significantly better image quality than when the camera module is located at the other, proximal end. The NanEyeM is a fully integrated imaging module with wafer-level multi-element optics, specifically designed for optimal performance at close range. The lens combines a wide FoV with an extended depth of field (EDOF), reducing distortions and delivering a sharp and accurate image. The camera has an LVDS interface to transmit over long cable lengths without loss of signal integrity. The NanEyeM boasts a frame rate of up to 49 fps while maintaining low power consumption.

NanEyeM is the second generation of the NanEye2D from ams OSRAM. The camera module has been released to production and is available for ordering.

3D-Stacked SPAD Image Sensor

University of Edinburgh, University of Glasgow, and Heriot-Watt University publish a SPIE paper "High-speed vision with a 3D-stacked SPAD image sensor" by Istvan Gyongy, Germán Mora Martín, Alex Turpin, Alice Ruget, Abderrahim Halimi, Robert Henderson, and Jonathan Leach.

"We here consider an advanced direct ToF SPAD imager with a 3D-stacked structure, integrating significant photon processing. The device generates photon timing histograms in-pixel, resulting in a maximum throughput of 100's of giga photons per second. This advance enables 3D frames to be captured at rates in excess of 1000 frames per second, even under high ambient light levels. By exploiting the re-configurable nature of the sensor, higher resolution intensity (photon counting) data may be obtained in alternate frames, and depth upscaled accordingly. We present a compact SPAD camera based on the sensor, enabling high-speed object detection and classification in both indoor and outdoor environments. The results suggest a significant potential in applications requiring fast situational awareness."
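The in-pixel histogramming the paper describes reduces raw photon timestamps to a compact per-pixel timing histogram, from which depth is simply the peak bin converted by d = c·t/2. A minimal dToF sketch (the bin width, pulse jitter, and ambient model are illustrative assumptions, not the sensor's actual parameters):

```python
import numpy as np

C = 299_792_458.0   # speed of light (m/s)
BIN_S = 100e-12     # assumed 100 ps TDC bin width
N_BINS = 200        # 20 ns window -> up to ~3 m of depth

def simulate_histogram(depth_m, signal=300, ambient=2.0,
                       jitter_s=150e-12, rng=None):
    """Accumulate a photon-timing histogram for one pixel.

    Laser-return photons cluster around the round-trip time with Gaussian
    jitter; ambient photons arrive uniformly across the window.
    """
    rng = rng or np.random.default_rng(0)
    t_return = 2.0 * depth_m / C
    t_sig = rng.normal(t_return, jitter_s, size=signal)
    t_amb = rng.uniform(0, N_BINS * BIN_S, size=int(ambient * N_BINS))
    hist, _ = np.histogram(np.concatenate([t_sig, t_amb]),
                           bins=N_BINS, range=(0, N_BINS * BIN_S))
    return hist

def depth_from_histogram(hist):
    """Peak bin center converted back to metres: d = c * t / 2."""
    t_peak = (np.argmax(hist) + 0.5) * BIN_S
    return C * t_peak / 2.0

hist = simulate_histogram(depth_m=2.5)
print(round(depth_from_histogram(hist), 2))  # close to 2.5 m
```

Because the histogram, not the individual timestamps, leaves the pixel, the throughput of hundreds of giga photons per second quoted above becomes feasible.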

ON Semi Announces 16MP Global Shutter Sensor for Machine Vision Applications

BusinessWire: ON Semiconductor expands its XGS series of CMOS sensors. The XGS 16000 is a 16MP global shutter sensor for factory automation applications including robotics and inspection systems. Consuming 1W at 65fps, the XGS 16000 is said to be one of the best in class for power consumption, among 29 x 29 mm sensors.

The XGS 16000 shares a common architecture and footprint with other XGS CMOS image sensors. This enables manufacturers to use a single camera design to develop products in different resolutions. 

The XGS 16000 is designed in a unique 1:1 square aspect ratio, which helps maximize the image capture area within the optical circle of the camera lens and ensure optimal light sensitivity. 

ON Semi offers color and mono versions of the XGS 16000 X-Cube and X-Celerator developer kits.

Sharp Image Sensor Lineup

The 2021 Sharp catalog reveals that its image sensor lineup is still dominated by CCDs, although the CMOS sensor section is expanding. Sharp CCDs are quite fast by CCD standards, with an 8MP device reaching a 25fps frame rate:

e2v iToF Sensor Demos

Teledyne e2v publishes 3 video demos of its iToF Hydra3D sensor announced a year ago (1, 2, 3):

Photomultiplication in NIR Organic Diodes

Nature publishes a paper "Enhancing sub-bandgap external quantum efficiency by photomultiplication for narrowband organic near-infrared photodetectors" by Jonas Kublitski, Axel Fischer, Shen Xing, Lukasz Baisinger, Eva Bittrich, Donato Spoltore, Johannes Benduhn, Koen Vandewal, and Karl Leo from Technische Universität Dresden (Germany), Leibniz-Institut für Polymerforschung Dresden (Germany) and Hasselt University (Belgium).

"Photomultiplication-type organic photodetectors have been shown to achieve high quantum efficiencies mainly in the visible range. Much less research has been focused on realizing near-infrared narrowband devices. Here, we demonstrate fully vacuum-processed narrow- and broadband photomultiplication-type organic photodetectors. Devices are based on enhanced hole injection leading to a maximum external quantum efficiency of almost 2000% at −10 V for the broadband device. The photomultiplicative effect is also observed in the charge-transfer state absorption region. By making use of an optical cavity device architecture, we enhance the charge-transfer response and demonstrate a wavelength tunable narrowband photomultiplication-type organic photodetector with external quantum efficiencies superior to those of pin-devices. The presented concept can further improve the performance of photodetectors based on the absorption of charge-transfer states, which were so far limited by the low external quantum efficiency provided by these devices."

FBK on SPAD IR Sensitivity Enhancement

FBK presents "NIR-sensitive single-photon devices (SiPM and SPADs in custom technologies), for industrial and automotive LIDAR applications" by Fabio Acerbi, G. Paternoster, A. Mazzi, A. Gola, and L. Ferrario:

Elmos iToF Presentation

Elmos publishes a slide deck "3D ToF sensor design and it‘s application in gesture and object recognition" by Sarah Blumenthal:

ST Presentation on Pixel-Level Stacking

ST presentation "Challenges and capabilities of 3D integration in CMOS imaging sensors" by Dominique Thomas, Jean Michailos, Krysten Rochereau, Joris Jourdon, and Sandrine Lhostis presents the company's achievements up to September 2019:

A (Wrong) Attempt to Improve Imaging

University of Glasgow and University of Edinburgh publish a paper "Noise characteristics with CMOS sensor array scaling" by Claudio Accarino, Valerio F. Annese, Boon Chong Cheah, Mohammed A. Al-Rawhani, Yash D. Shah, James Beeley, Christos Giagkoulovitis, Srinjoy Mitra, and David R. S. Cumming. The paper compares the SNR of a large single sensor with that of an array of smaller sensors having the same combined area. The conclusion looks fairly strange:

"In this paper we have compared the noise performance of a sensor system made using a single large sensor, versus the noise achieved when averaging the signal from an array of small independent sensors. Whilst the SNR of a smaller physical sensor is typically less than that of a single larger sensor, the properties of uncorrelated Gaussian noise are such that the overall performance of an array of small sensors is significantly better when the signal is averaged.

This elegant result suggests that there is merit in using sensor arrays, such as those that can be implemented in CMOS, even if the application only calls for a single measurement. Given the relatively low cost of CMOS and the wide availability of CMOS sensors, it is therefore beneficial to use arrays in any application where low noise or multiple parallel sensing are a priority."
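The quoted result is the textbook behavior of uncorrelated Gaussian noise: averaging N independent readings reduces the noise standard deviation by sqrt(N), which a few lines of simulation confirm (the sensor count and noise level are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(42)
true_signal = 100.0
noise_sigma = 5.0
n_sensors = 64
n_trials = 20_000

# One large sensor: a single reading with noise std noise_sigma
large = true_signal + noise_sigma * rng.standard_normal(n_trials)

# Array of small sensors: average n_sensors independent readings per trial
small = true_signal + noise_sigma * rng.standard_normal((n_trials, n_sensors))
averaged = small.mean(axis=1)

print(large.std())     # ~5.0
print(averaged.std())  # ~5.0 / sqrt(64) = ~0.625
```

Note the comparison implicitly assumes each small sensor sees the full signal with the same per-reading noise as the large one; in a real imager each small sensor collects only a fraction of the light, which is the crux of why the paper's conclusion looks strange.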

More about Sony-TSMC Fab in Japan

NikkeiAsia, TaiwanNews: The planned TSMC fab in Kumamoto, on the island of Kyushu in western Japan, would go forward in two phases, according to Nikkei Asia. The board of TSMC is expected to decide on the investment in the current quarter. 

The plant is expected to start operation in 2023. Once both phases are complete, the new fab will produce about 40,000 wafers per month in 28nm process. The fab is expected to be mainly used to make image sensors for Sony, TSMC's largest Japanese customer. Nikkei has been told that TSMC is open to a collaboration that would give Sony more say in operating the plant and negotiating with the Japanese government.

ElectronicsWeekly presents another view on the Sony-TSMC fab project: "Sony has a $7 billion+ revenue business in image sensors which makes the $2.5 billion cost of such a fab a reasonable proposition."

Bloomberg reports that Japan intends to revive its domestic chip design and production industry and reverse the current downward R&D trend:

Image Sensors at In-Person Autosens Brussels

Autosens Brussels is to be held in person (!!!) on September 15-16. The agenda has been published and includes a lot of image sensor related content:
  • Sensor technology and safety features to address the challenging needs for reliable and robust sensing/viewing systems
    Yuichi Motohasi, Automotive Image Sensor Applications Engineer, Sony
    In this presentation, the key characteristics of the image sensors will be presented. Also, the state-of-the-art of functional safety and cybersecurity requirement to achieve reliable and robust sensing/viewing system will be discussed.
  • Beyond the Visible: SWIR Enhanced Gated Depth Imaging
    Ziv Livne, CBO, TriEye
    We will introduce a new and exciting SWIR-based sensor modality which provides HD imaging and ranging information in all conditions (“SEDAR”). How it works, its main benefits, and why it is the future. We will then show experimental evidence of SEDAR superiority over sensors of other wavelengths. These include recordings in difficult conditions such as nighttime, fog, glare, dust, and more. Also, show depth map field results.
  • Automotive 2.1 µm High Dynamic Range Image Sensors
    Sergey Velichko, Sr. Manager, ASD Technology and Product Strategy, ON Semiconductor
    This work describes a first generation 8.3 Mega-Pixel (MP) 2.1 µm dual conversion gain (DCG) pixel image sensor developed and released to the market. The sensor has high dynamic range (HDR) up to 140 dB and cinematographic image quality. Non-bayer color filter arrays improve low light performance significantly for front and surround Advanced Driver Assistance System (ADAS) cameras. This enables transitioning from level 2 to level 3 autonomous driving (AD) and fulfilling challenging Euro NCAP requirements.
  • High Dynamic Range Backside Illuminated Voltage Mode Global Shutter CIS for in Cabin Monitoring
    Boyd Fowler, CTO, OmniVision Technologies
    Although global shutter operation is required to minimize motion artifacts in in-cabin monitoring, it forces large changes in the CIS architecture. Most global shutter CMOS image sensors available in the market today have larger pixels and lower dynamic range than rolling shutter image sensors. This adversely impacts their size/cost and performance under different lighting conditions. In this paper we describe the architecture and operation of backside illuminated voltage mode global shutter pixels. We also describe how the dynamic range of these pixels can be extended using either multiple integration times or LOFIC techniques. In addition, how backside illuminated voltage mode global shutter pixels can be scaled, enabling smaller more cost effective camera solutions and results from recent backside illuminated voltage mode global shutter CIS will be presented.
  • Chip-scale LiDAR for affordability and manufacturability
    Dongjae Shin, Principal researcher, Samsung Advanced Institute of Technology
    In this presentation, we introduce a chip-scale solid-state LiDAR technology promising the cost and manufacturability advantages inherited from the silicon technology. The challenge of the light source integration has been overcome by the III/V-on-silicon technology that has just emerged in the silicon industry. With the III/V-on-silicon chip in the core, initial LiDAR module performance, performance scalability, and application status are presented for the first time. Cost-volume analysis and eco-system implications are also discussed.
  • A novel scoring methodology and tool for assessing LiDAR performance
    Dima Sosnovsky, Principal System Architect, Huawei
    This presentation presents a tool, which summarizes the most crucial characteristics and provides a common ground to compare each solution's pros and cons, by drawing a scoring envelope based on 8 major parameters of the LiDAR system, representing its performance, suitability to an automotive application, and business advantages.

Assorted Videos: Omnivision, Aeye, Qualcomm, MIPI

Omnivision continues its series of video interviews with its CTO Boyd Fowler. This part is about LED flicker mitigation:


Aeye tells a (marketing) story behind its inception:


Qualcomm publishes a panel with its customers adopting its AI camera technology:


MIPI Alliance publishes a couple of presentations about future imaging needs and the A-PHY standard (link1 and link2):

Luminar Acquires InGaAs Sensor Manufacturer

BusinessWire: LiDAR maker Luminar is acquiring its exclusive InGaAs chip design partner and manufacturer, OptoGration Inc., securing its supply chain as Luminar scales its Iris LiDAR into series production. The acquisition enables deeper integration with Luminar's ROIC design subsidiary Black Forest Engineering (BFE), acquired in 2017. Luminar is combining the latest technology from OptoGration and BFE to power the new fifth-generation lidar chip in Iris as the company prepares for series production of its product and technology.

For the past five years, Luminar has been closely collaborating with OptoGration, developing, iterating, and perfecting the specialized InGaAs photodetector technology required for 1550nm lidar. OptoGration has the capacity to produce approximately one million InGaAs chips with Luminar’s design each year at its specialized fabrication facility in Wilmington, Mass., with the opportunity to expand capacity to up to ten million units per year.

“Acquiring OptoGration is the culmination of a deep, half-decade long technology partnership that has dramatically advanced the proprietary lidar chips that power the industry-leading performance of our newest Iris sensor,” said Jason Eichenholz, Co-founder and CTO at Luminar. “The OptoGration team is unique in their ability to deliver photodetectors with the performance and quality that achieve our increasingly demanding requirements. Chip-level innovation and integration has been key to unlocking our performance and driving the substantial cost reductions we’ve achieved.”

Luminar combines its InGaAs photodetector chips from OptoGration with silicon ASICs produced by BFE to create its lidar receiver and processing chip, which is said to be the most sensitive, highest-DR InGaAs receiver of its kind in the world.

OptoGration’s founders are joining Luminar as part of this transaction and will continue to lead the business with support from Luminar.

“Luminar is a great home for OptoGration because we share a vision for transforming automotive safety and autonomy with lidar,” said William Waters, President of OptoGration. “We also share a commitment to continuous innovation and have an incredible track record of combining our technologies to increase performance and lower cost. Together we can go even faster to scale and realize Luminar’s vision.”

The OptoGration acquisition is expected to close in the third quarter. The transaction price was not disclosed but does not represent a material impact to Luminar’s cash position or share count.

Optical Neural Processor Integrated onto Image Sensor

Metasurface-based optical CNNs are becoming a hot topic for papers and presentations, for example, here and here. Another metasurface CNN example, by Aydogan Ozcan from UCLA, is shown in the video below:



A recent Arxiv.org paper "Metasurface-Enabled On-Chip Multiplexed Diffractive Neural Networks in the Visible" by Xuhao Luo, Yueqiang Hu, Xin Li, Xiangnian Ou, Jiajie Lai, Na Liu, and Huigao Duan from Hunan University (China), University of Stuttgart (Germany), and Max Planck Institute for Solid State Research (Germany) presents a fairly complete system integrated on an image sensor:

"Replacing electrons with photons is a compelling route towards light-speed, highly parallel, and low-power artificial intelligence computing. Recently, all-optical diffractive neural deep neural networks have been demonstrated. However, the existing architectures often comprise bulky components and, most critically, they cannot mimic the human brain for multitasking. Here, we demonstrate a multi-skilled diffractive neural network based on a metasurface device, which can perform on-chip multi-channel sensing and multitasking at the speed of light in the visible. The metasurface is integrated with a complementary metal oxide semiconductor imaging sensor. Polarization multiplexing scheme of the subwavelength nanostructures are applied to construct a multi-channel classifier framework for simultaneous recognition of digital and fashionable items. The areal density of the artificial neurons can reach up to 6.25×10^6/mm^2 multiplied by the number of channels. Our platform provides an integrated solution with all-optical on-chip sensing and computing for applications in machine vision, autonomous driving, and precision medicine."

Event-Based Camera Tutorial

Tobi Delbruck delivers an excellent tutorial on event-based cameras prepared for the 2020 Telluride Neuromorphic workshop and ESSCIRC. The pdf file with slides is available here.

Graphene and Other 2D Materials Sensors Review

Nature publishes a review paper "Silicon/2D-material photodetectors: from near-infrared to mid-infrared" by Chaoyue Liu, Jingshu Guo, Laiwen Yu, Jiang Li, Ming Zhang, Huan Li, Yaocheng Shi & Daoxin Dai from Zhejiang University, China.

"Two-dimensional materials (2DMs) have been used widely in constructing photodetectors (PDs) because of their advantages in flexible integration and ultrabroad operation wavelength range. Specifically, 2DM PDs on silicon have attracted much attention because silicon microelectronics and silicon photonics have been developed successfully for many applications. 2DM PDs meet the imperious demand of silicon photonics on low-cost, high-performance, and broadband photodetection. In this work, a review is given for the recent progresses of Si/2DM PDs working in the wavelength band from near-infrared to mid-infrared, which are attractive for many applications. The operation mechanisms and the device configurations are summarized in the first part. The waveguide-integrated PDs and the surface-illuminated PDs are then reviewed in details, respectively. The discussion and outlook for 2DM PDs on silicon are finally given."

Assorted Videos: ST, Leti, Omnivision, Innoviz, P2020, University of Wisconsin-Madison

ST presents one more use case for its ToF proximity sensors:


CEA-Leti publishes a video about its perovskite-based X-Ray imagers:


Omnivision publishes its CTO Boyd Fowler's interview on automotive in-cabin monitoring: "Automotive in-cabin monitoring is on the rise – not just for drivers, but for passengers as well. Why? Hear from our FutureInSight chief technology officer, Boyd Fowler, in his one-on-one with AutoSens researcher Francis Nedvidek."


Innoviz CEO Omer Keilaf presents his company's LiDAR technology:


American Traffic Safety Services Association (ATSSA) publishes an IEEE P2020 presentation on LED flicker by Brian Deegan from Valeo, team leader of the IEEE P2020 Automotive Image Quality Working Group, LED Flicker Subgroup, and Robin Jenkin, Principal Image Quality Engineer at NVIDIA.


University of Wisconsin-Madison publishes a 1-hour-long presentation by Mohit Gupta on single-photon imaging:

Smartsens News: Star Light Sensor Series, Mobile Market Debut, AI Family Improvements

Smartsens unveils SC850SL, a 4K-resolution image sensor in a new Star Light (SL) Series product lineup for high-end night vision cameras.

The 8MP SC850SL features a stacked BSI, rolling shutter process and offers a 15% sensitivity improvement over "its type on the market." Also, compared with similar products in the industry, the SC850SL leverages SmartSens’ ultra-low-noise reading circuit design to reduce the RN and FPN by 69% and 59%, respectively. Additionally, with SmartSens’ 2nd Gen NIR+ technology, the QE of the SC850SL is boosted by 55% at 850 nm and by 80% at 940 nm in comparison with the previous-generation NIR+ technology. Furthermore, the sensor supports both Staggered HDR with up to 100 dB dynamic range and PixGain HDR without HDR combination artifacts. Additional features include enhanced SNR and higher image quality under high-temperature conditions.
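Staggered HDR reaches its quoted dynamic range by merging several rolling-shutter exposures of different lengths. The sketch below is a generic linear merge, not SmartSens' PixGain scheme; the exposure ratios and 12-bit saturation level are illustrative assumptions:

```python
import numpy as np

FULL_WELL = 4095.0  # assumed 12-bit per-exposure saturation

def merge_staggered_hdr(frames, exposure_ratios):
    """Merge exposures into one linear HDR image.

    Frames are ordered longest exposure first; each shorter exposure is
    rescaled to the longest exposure's scale and fills only the pixels
    that saturated in all longer exposures.
    """
    out = np.zeros_like(frames[0], dtype=float)
    filled = np.zeros(frames[0].shape, dtype=bool)
    for frame, ratio in zip(frames, exposure_ratios):
        valid = (~filled) & (frame < 0.95 * FULL_WELL)
        out[valid] = frame[valid] * ratio
        filled |= valid
    # pixels saturated even in the shortest exposure: clip at its scaled max
    out[~filled] = FULL_WELL * exposure_ratios[-1]
    return out

# Scene spanning far more range than one 12-bit exposure can hold
scene = np.array([10.0, 3_000.0, 60_000.0])   # linear radiance units
ratios = [1, 16, 256]                         # long : medium : short
frames = [np.clip(scene / r, 0, FULL_WELL) for r in ratios]
print(merge_staggered_hdr(frames, ratios))    # recovers [10, 3000, 60000]
```

The "HDR combination artifacts" mentioned for PixGain arise because the staggered exposures sample moving scenes at different times; a single-exposure dual-gain readout avoids that temporal mismatch.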

Currently, SC850SL is sampling and is expected to begin mass production in September 2021.


Smartsens also enters the low-end smartphone image sensor market with 2MP products. The SC201CS comes in a 1/5.1-inch optical format with a pixel size of 1.75 μm, in two versions, SC201CS-mono and SC201-color. The SC201CS FPN and read noise are as low as 0.43e- rms and 1.5e- rms, respectively. Also, the SC201CS delivers 35e-/s dark current and 87 ppm white pixels at 60°C, and thus adapts well to high temperatures.

The company intends to expand its mobile phone image sensor lineup in future.


Smartsens announces upgrades to its Advanced Imaging (AI) image sensor family: the SC230AI, the SC430AI, and its third 3K-resolution CIS, the SC530AI, now include the performance upgrade technology SmartClarity-2. These three products target an assortment of 2MP to 5MP surveillance applications.

SmartClarity-2 leverages SFCPixel technology, PixGain technology, NIR+ technology and low noise reading circuit design to offer increased sensitivity and reduce the readout noise level by 21%. SmartSens NIR+ technology boosts QE of SC230AI to 36% at 850nm and 21% at 940nm.

SmartClarity-2 provides up to 50% higher FWC compared with the mainstream sensors of its type. Each sensor is developed in a package that is Pin2Pin compatible with the previous generation and makes it an easy upgrade for customers.
