RADOPT 2023 Nov 29-30 in Toulouse, France


The 2023 workshop on Radiation Effects on Optoelectronic Detectors and Photonics Technologies (RADOPT) will be co-organised by CNES, UJM, SODERN, ISAE-SUPAERO, AIRBUS DEFENCE & SPACE, and THALES ALENIA SPACE in Toulouse, France, on November 29 and 30, 2023.

After the success of RADOPT 2021, this second edition of the workshop will continue to combine and replace two well-known events from the photonic devices and ICs community: the “Optical Fibers in Radiation Environments Days” (FMR) and the Radiation Effects on Optoelectronic Detectors Workshop, traditionally organized every two years by the COMET OOE of CNES.

The objective of the workshop is to provide a forum for the presentation and discussion of recent developments regarding the use of optoelectronics and photonics technologies in radiation-rich environments. The workshop also offers the opportunity to highlight future prospects in the fast-moving space, high-energy physics, fusion and fission research fields and to enhance exchanges and collaborations between scientists. Participation of young researchers (PhD students) is especially encouraged.





SWIR Vision Systems announces 6 MP SWIR sensor to be released in 2024


The sensor is based on quantum dot crystals deposited on silicon.

Link: https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/

Acuros® CQD® sensors are fabricated by depositing quantum dot semiconductor crystals directly on the surface of silicon wafers. The resulting CQD photodiode arrays offer high resolution, small pixel pitch, broad spectral bandwidth, low noise, and low inter-pixel crosstalk, while eliminating the prohibitively expensive hybridization process inherent to InGaAs sensors. CQD sensor technology is silicon wafer-scale compatible, opening its potential to very low-cost, high-volume applications.

Features:

  •  3072 x 2048 Pixel Array
  •  7 µm Pixel Pitch
  •  Global Snapshot Shutter
  •  Enhanced QE
  •  100 Hz Frame Rate
  •  Integrated 12-bit ADC
  •  Full Visible-to-SWIR Bandwidth
  •  Compatible with a range of SWIR lenses
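As a quick sanity check, the array size and pixel pitch listed above fully determine the sensor's active-area geometry. A minimal sketch, derived only from the specs in this list:

```python
import math

# Derive active-area geometry from the listed specs:
# 3072 x 2048 pixels at a 7 um pitch.
cols, rows = 3072, 2048
pitch_um = 7.0

width_mm = cols * pitch_um / 1000.0        # ~21.5 mm
height_mm = rows * pitch_um / 1000.0       # ~14.3 mm
diag_mm = math.hypot(width_mm, height_mm)  # ~25.8 mm
megapixels = cols * rows / 1e6             # ~6.3 MP

print(f"{megapixels:.1f} MP, {width_mm:.1f} x {height_mm:.1f} mm, "
      f"{diag_mm:.1f} mm diagonal")
```

The 6.3 MP figure matches the Acuros 6 resolution quoted in the press release below.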
Applications:
  • Industrial Inspection: Suitable for inspection and quality control in various industries, including semiconductor, electronics, and pharmaceuticals.
  •  Agriculture: Crop health monitoring, food quality control, and moisture content analysis.
  •  Medical Imaging: Blood vessel imaging, tissue differentiation, and endoscopy.
  •  Degraded Visual Environment: Penetrating haze, smoke, rain & snow for improved situational awareness.
  •  Security and Defense: Target recognition, camouflage detection, and covert surveillance.
  •  Scientific Research: Astronomy, biology, chemistry, and material science.
  •  Remote Sensing: Environmental monitoring, geology, and mineral exploration.

 

Full press release:

SWIR Vision Systems to release industry-leading 6 MP SWIR sensors for defense, scientific, automotive, and industrial vision markets
 
The company’s latest innovation, the Acuros® 6, leverages its pioneering CQD® Quantum Dot image sensor technology, further contributing to the availability of very high resolution and broad-band sensors for a diversity of applications.

Durham, N.C., October 31, 2023 – SWIR Vision Systems today announces the upcoming release of two new models of short-wavelength infrared (SWIR) image sensors for Defense, Scientific, Automotive, and Industrial Users. The new sensors are capable of capturing images in the visible, the SWIR, and the extended SWIR spectral ranges. These very high resolution SWIR sensors are made possible by the company’s patented CQD Quantum Dot sensor technology.

SWIR Vision’s new products include both the Acuros 6 and the Acuros 4 CQD SWIR image sensors, featuring 6.3 megapixel and 4.2 megapixel global shutter arrays. Each sensor has a 7-micron pixel-pitch, 12-bit digital output, low read noise, and enhanced quantum efficiency, resulting in excellent sensitivity and SNR performance for a broad array of applications.

The new products employ SWIR Vision’s CQD photodiode technology, in which photodiodes are created via the deposition of low-cost films directly on top of silicon readout ICs. This approach enables small pixel sizes, affordable prices, broad spectral response, and industry-leading high-resolution SWIR focal plane arrays.

SWIR Vision is now engaging global camera makers, automotive, industrial, and defense system integrators, who will leverage these breakthrough sensors to tackle challenges in laser inspection and manufacturing, semiconductor inspection, automotive safety, long-range imaging, and defense.
“Our customers challenged us again to deliver more capability to their toughest imaging problems. The Acuros 4 and the Acuros 6 sensors deliver the highest resolution and widest spectral response available today,” said Allan Hilton, SWIR Vision’s Chief Product Officer. “The industry can expect to see new camera and system solutions based on these latest innovations from our best-in-class CQD sensor engineering group”.

About SWIR Vision Systems – SWIR Vision Systems (www.swirvisionsystems.com), a North Carolina-based startup company, has pioneered the development and introduction of high-definition, Colloidal Quantum Dot (CQD®) infrared image sensor technology for infrared cameras, delivering breakthrough sensor capability. Imaging in the short wavelength IR has become critical for key applications within industrial, defense systems, mobile phones, and autonomous vehicle markets.
To learn more about our 6MP Sensors, go to https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/.


imec paper on thin film pinned photodiode


Kim et al. from imec and coauthors from universities in Belgium and Korea recently published a paper titled "A Thin-Film Pinned-Photodiode Imager Pixel with Fully Monolithic Fabrication and beyond 1Me- Full Well Capacity" in MDPI Sensors. This paper describes imec's recent thin film pinned photodiode technology.

Open access paper link: https://www.mdpi.com/1424-8220/23/21/8803

Abstract
Thin-film photodiodes (TFPD) monolithically integrated on the Si Read-Out Integrated Circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, the pixel development has been limited in terms of noise characteristics compared to the Si-based image sensors. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current, accompanied with a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme with the introduction of photogate (PG) to achieve PPD operation. This PG brings not only a low noise performance, but also a high full well capacity (FWC) coming from the large capacitance of its metal-oxide-semiconductor (MOS). Hence, the FWC of the pixel is boosted up to 1.37 Me- with a 5 μm pixel pitch, which is 8.3 times larger than the FWC that the TFPD junction capacitor can store. This large FWC, along with the inherent low noise characteristics of the TF-PPD, leads to the three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4 T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it has worked for CMOS Image sensors (CIS).
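The reported dynamic range follows directly from the ratio of full-well capacity to the temporal noise floor, DR = 20*log10(FWC/noise). A minimal sketch of that relationship; the ~13 e- noise floor below is inferred by inverting the quoted 100.2 dB and 1.37 Me- figures, and is not stated in the abstract:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """DR = 20*log10(FWC / noise floor), both in electrons (rms)."""
    return 20 * math.log10(full_well_e / noise_floor_e)

fwc = 1.37e6  # reported full well capacity, e-
# Inverting the reported 100.2 dB gives the implied noise floor:
noise = fwc / 10 ** (100.2 / 20)  # ~13.4 e- rms (inferred, not stated)
print(f"implied noise floor: {noise:.1f} e-")
print(f"DR check: {dynamic_range_db(fwc, noise):.1f} dB")
```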


Figure 1. Pixel cross-section for the monolithic TFPD image sensor (a) 3 T and (b) 4 T (TF-PPD) structure (TCO: transparent conductive oxide, HTL: hole transport layer, PG: photogate, TG: transfer gate, FD: floating diffusion). Electric potential and signal readout configuration for 3 T pixel (c) and for 4 T pixel (d). Pixel circuit diagram for 3 T pixel (e) and for the 4 T pixel (f).

 


Figure 2. I-V characteristic of QDPD test structure (a) and of IGZO TFT (b), a micrograph of the TF-PPD passive pixel array (c), and its measurement schematic (d). Band diagrams for the PD (e) and PG (f).


Figure 3. Silvaco TCAD simulation results; (a) simulated structure, (b) lateral potential profile along the IGZO layer, and (c) potential profile when TG is turned off and (d) on.


Figure 4. Signal output vs. integration time with different VPG and VTG values with the illumination. Signal curves with the fixed VTG (−1 V), varying VPG (−4~−1 V) (a), the same graphs for the fixed VPG (−2 V), and different VTGs (−6.5~−1 V) (b).


Figure 5. (a) Pixel output vs. integration time for different pixel pitches. (b) FWC comparison between estimation and measurement.

Figure 6. FWC comparison by different pixel fill factors. Pixel schematics for different shapes (a), and FWC by different pixel shapes and pitches (b).



Figure 7. Potential diagram describing FWC increase by the larger VPG (a), and FWC vs. VPG (b).

Figure 8. Passive pixel dark current (a) and Arrhenius plots (b) for the QDPD test structure and the passive pixel.

Figure 9. FWC vs. pixel area. A guideline showing the FWC density per unit area for this work (blue) and a trend line for most CISs (red).

 




EETimes article about imec’s new thin film pinned photodiode


Full article: https://www.eetimes.eu/imec-taps-pinned-photodiode-to-build-a-better-swir-sensor/

Imec Taps Pinned Photodiode to Build a Better SWIR Sensor

‘Monolithic hybrid’ prototype integrates PPD into the TFT structure to lower the cost of light detection in the nonvisible range, with improved noise performance. 

Silicon-based image sensors can detect light within a limited range of wavelengths and thus have limitations in applications like automotive and medical imaging. Sensors that can capture light beyond the visible range, such as short-wave infrared (SWIR), can be built using III-V materials, which combine such elements as gallium, indium, aluminum and phosphorus. But while those sensors perform well, their manufacture requires a high degree of precision and control, increasing their cost.

Research into less expensive alternatives has yielded thin-film absorbers such as quantum-dot (QD) and other organic photodiode (OPD) materials that are compatible with the CMOS readout circuits found in electronic devices, an advantage that has boosted their adoption for IR detection. But thin-film absorbers exhibit higher levels of noise when capturing IR light, resulting in lower image quality. They are also known to have lower sensitivity to IR.

The challenge, then, is to design a cost-effective image sensor that uses thin-film absorbers but offers better noise performance. Imec has taken aim at the problem by revisiting a technology first introduced in the 1980s to reduce noise in image sensors: the pinned photodiode (PPD).
The PPD structure’s ability to completely remove electrical charges before starting a new capture cycle makes it an efficient approach, as the sensor can reset without unwanted background noise (kTC noise) or any lingering influence from the previous image frame. PPDs quickly became the go-to choice for consumer-grade silicon-based image sensors. Their low noise and high power efficiency made them a favorite among camera manufacturers.
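For context, kTC noise is the thermal reset noise sampled onto the sense-node capacitance, with an rms charge of sqrt(kTC)/q electrons. A minimal sketch of that standard formula; the 2 fF capacitance is an illustrative assumption, not a value from the article:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K
Q_ELECTRON = 1.602177e-19   # C

def ktc_noise_electrons(cap_farads, temp_kelvin=300.0):
    """rms reset (kTC) noise in electrons: sqrt(k*T*C)/q."""
    return math.sqrt(K_BOLTZMANN * temp_kelvin * cap_farads) / Q_ELECTRON

# Illustrative 2 fF sense node (assumed value, not from the article):
print(f"{ktc_noise_electrons(2e-15):.1f} e- rms")  # ~18 e- at room temperature
```

This is the noise component that a PPD with correlated double sampling effectively cancels, which is why the architecture became ubiquitous.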

Researchers at imec integrated a PPD structure into thin-film–transistor (TFT) image sensors to yield a hybrid prototype. The sensor structure also uses imec’s proprietary indium gallium zinc oxide (IGZO) technology for electron transport.

“You can call such systems ‘monolithic hybrid’ sensors, where the photodiode is not a part of the CMOS circuit [as in CMOS image sensors, in which silicon is used for light absorption], but is formed with another material as the photoactive layer,” Pawel Malinowski, Pixel Innovations program manager at imec, told EE Times Europe. “The spectrum this photodiode captures is something separate … By introducing an additional thin-film transistor in between, it enables separation of the storage and readout nodes, making it possible to fully deplete the photodiode and transfer all charges to the readout, [thereby] preventing the generation of kTC noise and reducing image lag.”

Unlike the conventional thin-film-based pixel architecture, imec’s TFT hybrid PPD structure introduces a separate thin-film transistor (TFT) to the design, which acts as a transfer gate and a photogate—in other words, it functions as a middleman. Here, imec’s IGZO technology serves as an effective electron transport layer, as it has higher electron mobility. Also acting as the gate dielectric, it contributes to the performance of the sensor by controlling the flow of charges and enhancing absorption characteristics.
With the new elements strategically placed within the traditional PPD structure, the prototype 4T image sensor showed a low readout noise of 6.1e-, compared to >100e- for the conventional 3T sensor, demonstrating its superior noise performance, imec stated. Because of IGZO’s large bandgap, the TFT hybrid PPD structure also entails lower dark current than traditional CMOS image sensors. This means the image sensor can capture infrared images with less noise, less distortion or interference, and more accuracy and detail, according to imec.


Figure 1: Top (a) and cross-sectional (b) view of structure of TF-PPD pixels


By using thin-film absorbers, imec’s prototype image sensor can detect at SWIR wavelengths and beyond, imec said. Image sensors operating in the near-infrared range are already used in automotive applications and consumer apps like iPhone Face ID. Going to longer wavelengths, such as SWIR, enables better transmission through OLED displays, which leads to better “hiding” of the components behind the screen and reduction of the “notch.”


Malinowski said, “In automotive, going to longer wavelengths can enable better visibility in adverse weather conditions, such as visibility through fog, smoke or clouds, [and achieve] increased contrast of some materials that are hard to distinguish against a dark background—for example, high contrast of textiles against poorly illuminated, shaded places.” Using the thin-film image sensor could make intruder detection and monitoring in dark conditions more effective and cost-efficient. It could also aid in medical imaging, which uses SWIR to study veins, blood flow and tissue properties.


Looking ahead, imec plans to diversify the thin-film photodiodes that can be used in the proposed architecture. The current research has tested for two types of photodiodes: a photodiode sensitive to near-infrared and a QD photodiode sensitive to SWIR.


“Current developments were focused on realizing a proof-of-concept device, with many design and process variations to arrive at a generic module,” Malinowski said. “Further steps include testing the PPD structure with different photodiodes—for example, other OPD and QDPD versions. Furthermore, next-generation devices are planned to focus on a more specific use case, with a custom readout suitable for a particular application.


“SWIR imaging with quantum dots is one of the avenues for further developments and is also a topic with high interest from the imaging community,” Malinowski added. “We are open to collaborations with industrial players to explore and mature this exciting sensor technology.”


onsemi announces Hyperlux low power CIS for smart home


Press release: https://www.onsemi.com/company/news-media/press-announcements/en/onsemi-introduces-lowest-power-image-sensor-family-for-smart-home-and-office

onsemi Introduces Lowest Power Image Sensor Family for Smart Home and Office 

Hyperlux LP Image Sensors can extend battery life by up to 40%¹



What's New: Today onsemi introduced the Hyperlux LP image sensor family, ideally suited for industrial and commercial cameras such as smart doorbells, security cameras, AR/VR/XR headsets, machine vision and video conferencing. These 1.4 µm pixel sensors deliver industry-leading image quality and low power consumption while maximizing performance to capture crisp, vibrant images even in difficult lighting conditions.

The product family also features a stacked architecture design that minimizes its footprint and at its smallest approaches the size of a grain of rice, making it ideal for devices where size is critical. Depending on the use case, customers can choose between the 5-megapixel AR0544, the 8-megapixel AR0830 or the 20-megapixel AR2020.

Why It Matters: Home and business owners continue to choose cameras to protect themselves more than any other security measure, with the market expected to triple by the end of the decade.² As a result, consumers are demanding devices that offer better image quality, reliability and longer battery life to improve the overall user experience.

With the image sensors, cameras can deliver clearer images and more accurate object detection even in harsh weather and lighting conditions. Additionally, these cameras are often placed in locations that can be difficult to access to replace or recharge batteries, making low power consumption a critical feature.

How It Works: The Hyperlux LP family is packed with features and proprietary technologies that optimize performance and resolution including:

  •  Wake on Motion – Enables the sensors to operate in a low-power mode that draws a fraction of the power needed in the full-performance mode. Once the sensor detects movement, it moves to a higher performance state in less time than it takes to snap a photo (a simple power model follows this list).
  •  Smart ROI – Delivers more than one region of interest (ROI) to give a context view of the scene at reduced bandwidth and a separate ROI in original detail.
  •  Near-Infrared (NIR) Performance – Delivers superior image quality due to the innovative silicon design and pixel architecture, with minimal supplemental lighting.
  •  Low Power – Reduces thermal noise, which negatively impacts image quality, and eliminates the need for heat sinks, reducing the overall cost of the vision system.
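Why wake-on-motion saves so much energy: when wake events are rare, the average power is dominated by the low-power state. A minimal duty-cycle sketch; the mode powers below are purely hypothetical, since the press release does not quote them:

```python
def average_power_mw(p_low_mw, p_high_mw, duty_high):
    """Time-weighted average power for a two-state duty-cycled sensor."""
    return p_low_mw * (1.0 - duty_high) + p_high_mw * duty_high

# Hypothetical mode powers for illustration only:
p_low, p_high = 2.0, 150.0  # mW in wake-on-motion vs. full-performance mode
for duty in (0.01, 0.05, 0.20):
    avg = average_power_mw(p_low, p_high, duty)
    print(f"{duty:.0%} active -> {avg:.1f} mW average")
```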

Supporting Quotes:
“By leveraging our superior analog design and pixel architecture, our sensors elevate the two most important elements people consider when buying a device, picture quality and battery life. Our new image sensor family delivers performance that matters with a significantly increased battery life and exquisite, highly detailed images,” said Ross Jatou, senior vice president and general manager, Intelligent Sensing Group, onsemi.

In addition to smart home devices, one of the other applications the Hyperlux LP family can improve is the office meeting experience with more intuitive, seamless videoconferencing solutions.
“Our video collaboration solutions require high-quality image sensors that bring together multiple factors for the best user experience. The superior optical performance, innovative features and extremely low power consumption of the Hyperlux LP image sensors enable us to deliver a completely immersive virtual meeting experience in highly intelligent and optimized videoconferencing systems,” said Ashish Thanawala, Sr. Director of Systems Engineering, Owl Labs.

What's Next: The Hyperlux LP Image Sensor Family will be available in the fourth quarter of 2023.

More Information:
 Learn more about the AR2020, the AR0830 and the AR0544.
 Read the blog: A Closer Look - Hyperlux LP Image Sensors

¹ Based on internal tests conducted under specific conditions. Actual results may vary based on device, usage patterns, and other external factors.
² Status of the CMOS Image Sensor Industry, Yole Intelligence Report, 2023.


ESSCIRC 2023 Lecture on "circuit insights" by Dr. Sara Pellegrini



In this invited talk at ESSCIRC 2023, Dr. Pellegrini shares her insights on circuit and sensor design gained over a research career at Politecnico di Milano, Heriot-Watt University, and now STMicroelectronics. The lecture covers the basics of LiDAR and SPAD sensors, and various design challenges such as low signal strength and background illumination.


Dr. Robert Henderson’s lecture on time-of-flight SPAD cameras



 

Imaging Time: Cameras for the Fourth Dimension

Abstract
Time is often considered the fourth dimension, along with the length, width and depth that form the fabric of space-time. Conventional cameras observe only two of those dimensions, inferring depth from spatial cues, and record time only coarsely relative to many fast phenomena in the natural world. In this talk, I will introduce the concept of time cameras, devices based on single photon avalanche diodes (SPADs) that can record the time dimension of a scene at the picosecond scales commensurate with the speed of light. This talk will chart two decades of my research into these devices, which have seen their transformation from a research curiosity to a mainstream semiconductor technology with billions of SPAD devices in consumer use in mobile phones for depth-sensing autofocus-assist. We will illustrate the talk with videos and demonstrations of ultrafast SPAD cameras developed at the University of Edinburgh. I am proud that my group’s research maintains the University’s position at the forefront of imaging technology, which has transformed our lives through the transition from chemical film to digital cameras, the omnipresence of camera phones and video meetings. In the near future, SPAD-based time cameras can also be expected to play a major societal role, within optical radars (LIDARs) for robotic vision and driverless cars, surgical guidance for cancer and perhaps even to add two further dimensions to the phone camera in your pocket!
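The picosecond scales in the abstract map directly onto distance in a time-of-flight system: d = c*t/2 for a round trip, so 1 ps of timing resolution corresponds to about 0.15 mm of depth. A minimal sketch:

```python
C_M_PER_S = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s):
    """Distance implied by a round-trip photon time of flight: c*t/2."""
    return C_M_PER_S * round_trip_s / 2.0

# 1 ps of round-trip timing resolution is ~0.15 mm of depth:
print(f"{tof_distance_m(1e-12) * 1e3:.3f} mm")  # ~0.150 mm
# A 10 ns return corresponds to a ~1.5 m target distance:
print(f"{tof_distance_m(10e-9):.2f} m")
```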

Biography
Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK, where he worked on the world’s first single-chip video camera. From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division, he developed image sensors for mobile phone applications. He joined the University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of the STMicroelectronics FlightSense series, which performs autofocus-assist and is now present in over 1 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high-speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.


Image Sensing Topics at Upcoming IEDM 2023 Dec 9-13 in San Francisco


The 69th annual IEEE International Electron Devices Meeting (IEDM) will be held in San Francisco Dec. 9-13. This year there are three sessions dealing with advanced image sensing topics. You can find summaries of all of these papers by going here (https://submissions.mirasmart.com/IEDM2023/Itinerary/EventsAAG.aspx) and then clicking on the relevant sessions and papers within each one:
 
Session #8 on Monday, Dec. 11 is “Advanced Photonics for Image Sensors and High-Speed Communications.” It features six papers describing advanced photonics for image sensors and high speed communications. The first three deal with device and integration concepts for sub-diffraction color filters targeting imaging key performance indicators, while the second three deal with devices and technologies for high speed communication systems.

  1.  IMEC will describe a novel sub-micron integration approach to color-splitting, to match human eye color sensitivity.
  2.  VisEra Technologies will describe the use of nano-light pillars to improve the quantum efficiency and signal-to-noise ratio (SNR) of color filters on CMOS imaging arrays under low-light conditions.
  3.  Samsung will detail a metasurface nano-prism structure for wide field-of-view lenses, demonstrating 25% higher sensitivity and 1.2 dB increased SNR vs. conventional micro-lenses.
  4.  National University of Singapore will describe the integration of ferroelectric material into a LiNbO3-on-insulator photonic platform, demonstrating non-volatile memory and high-efficiency modulators with an efficiency of 66 pm/V.
  5.  IHP will discuss the first germanium electro-optical modulator operating at 100 GHz in a SiGe BiCMOS photonics technology.
  6.  An invited paper from Intel will discuss the first 256 Gbps WDM transceiver with eight 200 GHz-spaced wavelengths simultaneously modulated at 32 Gbps, and with a bit-error-rate less than 1e-12.

 
Session #20 on Tuesday, Dec. 12 is “Emerging Photodetectors.” It features five papers describing recent developments in emerging photodetectors spanning the MIR to the DUV spectral range, and from group IV and III-V sensors to organic detectors.

  1.  The first paper by KAIST presents a fully CMOS-compatible Ge-on-Insulator platform for detection of wavelengths beyond 4 µm.
  2.  The second paper by KIST (not a typo) presents a new record-low-jitter SPAD device integrated into a CIS process technology, covering a spectral range of visible up to NIR.
  3.  The third paper by KAIST describes a wavelength-tunable detection device combining optical gratings and phase-change materials, reaching wavelengths up to 1700 nm.
  4.  The University of Science and Technology of China will report on a dual-function tunable emitter and NIR photodetector combination based on III-V GaN/AlGaN nanowires on silicon.
  5.  An invited paper from France’s CNRS gives an overview on next-generation sustainable organic photodetectors and emitters.

 
Session #40 on Wednesday, Dec. 13 features six papers describing the most recent advances in image sensors.

  1.  Samsung will describe a 0.5 µm-pixel, 3-layer-stacked CMOS image sensor (CIS) with in-pixel Cu-Cu bonding technology, featuring improved conversion gain and noise.
  2.  Omnivision will present a 2.2 µm, 2-layer-stacked high-dynamic-range VDGS CIS with a 1x2-shared structure offering dual conversion gain and achieving low FPN.
  3.  STMicroelectronics will describe a 2.16 µm 6T BSI VDGS CIS using deep trench capacitors and achieving 90 dB dynamic range using spatially-split exposure.
  4.  Meta will describe a 2-megapixel CIS with a 4.23 µm pixel pitch, offering a block-parallel A/D architecture and featuring programmable sparse capture with a fine-grain gating scheme for power saving.
  5.  Canon will introduce a new twisted-photodiode CIS structure with a 6 µm pixel pitch, enabling all-directional autofocus with high speed and accuracy and 95 dB DR.
  6.  Shanghai Jiao Tong University will present a 64x64-pixel organic imager prototype, based on a novel hole transporting layer (HTL)-free structure, achieving the highest recorded low-light performance.

 
Full press release about the conference is below.

2023 IEEE International Electron Devices Meeting to Highlight Advances in Critical Semiconductor Technologies with the Theme, “Devices for a Smart World Built Upon 60 Years of CMOS”

Four Focus Sessions on topics of intense research interest:

  •  3D Stacking for Next-Generation Logic & Memory by Wafer Bonding and Related Technologies
  •  Logic, Package and System Technologies for Future Generative AI
  •  Neuromorphic Computing for Smart Sensors
  •  Sustainability in Semiconductor Device Technology and Manufacturing

 
SAN FRANCISCO, CA – Since it began in 1955, the IEEE International Electron Devices Meeting (IEDM) has been where the world’s best and brightest electronics technologists go to learn about the latest breakthroughs in semiconductor and related technologies. That tradition continues this year, when the 69th annual IEEE IEDM conference takes place in-person December 9-13, 2023 at the Hilton San Francisco Union Square hotel, with online access to recorded content available afterward.
 
The 2023 IEDM technical program, supporting the theme, “Devices for a Smart World Built Upon 60 Years of CMOS,” will consist of more than 225 presentations plus a full slate of panels, Focus Sessions, Tutorials, Short Courses, a career luncheon, supplier exhibit and IEEE/EDS award presentations.
 
“The IEDM offers valuable insights into where the industry is headed, because the leading-edge work presented at the conference showcases major trends and paradigm shifts in key semiconductor technologies,” said Jungwoo Joh, IEDM 2023 Publicity Chair and Process Development Manager at Texas Instruments. “For example, this year many papers discuss ways to stack devices in 3D configurations. This is of course not new, but two things are especially noteworthy about this work. One is that it isn’t just happening with conventional logic and memory devices, but with sensors, power, neuromorphic and other devices as well. Also, many papers don’t describe futuristic laboratory studies, but rather specific hardware demonstrations that have generated solid results, opening pathways to commercial feasibility.”
 
“Finding the right materials and device configurations to develop transistors that will perform well with acceptable levels of reliability remains a key challenge,” said Kang-ill Seo, IEDM 2023 Publicity Vice Chair and Vice President, Semiconductor R&D, Samsung Semiconductor. “This year’s program shows that electrothermal considerations remain a key focus, particularly with attempts to add functionality to a chip’s interconnect, or wiring, which is fabricated using low-temperature processes.”
 
Here are details of the 2023 IEEE International Electron Devices Meeting:
 
Tutorial Sessions – Saturday, Dec. 9
The Saturday tutorial sessions on emerging technologies are presented by experts in the field to bridge the gap between textbook-level knowledge and leading-edge current research, and to introduce attendees to new fields of interest. There are three time slots, each with two tutorials running in parallel:
1:30 p.m. - 2:50 p.m.
• Innovative Technology for Beyond 2 nm, Matthew Metz, Intel
• CMOS+X: Functional Augmentation of CMOS for Next-Generation Electronics, Sayeef Salahuddin, UC-Berkeley
3:05 p.m. - 4:25 p.m.
• Reliability Challenges of Emerging FET Devices, Jacopo Franco, Imec
• Advanced Packaging and Heterogeneous Integration - Past, Present & Future, Madhavan Swaminathan, Penn State
4:40 p.m. - 6:00 p.m.
• Synapses, Circuits, and Architectures for Analog In-Memory Computing-Based Deep Neural Network Inference Hardware Acceleration, Irem Boybat, IBM
• Tools for Device Modeling: From SPICE to Scientific Machine Learning, Keno Fischer, JuliaHub
 
Short Courses – Sunday, Dec. 10
In contrast to the Tutorials, the full-day Short Courses are focused on a single technical topic. They offer the opportunity to learn about important areas and developments, and to network with global experts.

• Transistor, Interconnect, and Chiplets for Next-Generation Low-Power & High-Performance Computing, organized by Yuri Y. Masuoka, Samsung

  •  Advanced Technology Requirement for Edge Computing, Jie Deng, Qualcomm
  •  Process Technology toward 1nm and Beyond, Tomonari Yamamoto, Tokyo Electron
  •  Empowering Platform Technology with Future Semiconductor Device Innovation, Jaehun Jeong, Samsung
  •  Future Power Delivery Process Architectures and Their Capability and Impact on Interconnect Scaling, Kevin Fischer, Intel
  •  DTCO/STCO in the Era of Vertical Integration, YK Chong, ARM
  •  Low Power SOC Design Trends/3D Integration/Packaging for Mobile Applications, Milind Shah, Google

 
• The Future of Memory Technologies for High-Performance Memory and Computing, organized by Ki Il Moon, SK Hynix

  •  High-Density and High-Performance Technologies for Future Memory, Koji Sakui, Unisantis Electronics Singapore/Tokyo Institute of Technology
  •  Advanced Packaging Solutions for High Performance Memory and Compute, Jaesik Lee, SK Hynix
  •  Analog In-Memory Computing for Deep Learning Inference, Abu Sebastian, IBM
  •  The Next Generation of AI Architectures: The Role of Advanced Packaging Technologies in Enabling Heterogeneous Chiplets, Raja Swaminathan, AMD
  •  Key Challenges and Directional Path of Memory Technology for AI and High-Performance Computing, Keith Kim, NVIDIA
  •  Charge-Trapping Memories: From the Fundamental Device Physics to 3D Memory Architectures (3D NAND, 3D NOR, 3D DRAM) and Computing in Memory (CIM), Hang-Ting (Oliver) Lue, Macronix

 
Plenary Presentations – Monday, Dec. 11

  •  Redefining Innovation: A Journey forward in the New Dimension Era, Siyoung Choi, President & GM, Samsung Foundry Business, Device Solutions Division
  •  The Next Big Thing: Making Memory Magic and the Economics Beyond Moore's Law, Thy Tran, Vice President of Global Frontend Procurement, Micron
  •  Semiconductor Challenges in the 5G and 6G Technology Platforms, Björn Ekelund, Corporate Research Director, Ericsson

 
Evening Panel Session – Tuesday evening, Dec. 12
The IEDM evening panel session is an interactive forum where experts give their views on important industry topics, and audience participation is encouraged to foster an open exchange of ideas. This year’s panel will be moderated by Dan Hutcheson, Vice Chair at Tech Insights.

  •  AI: Semiconductor Catalyst? Or Disrupter? Artificial Intelligence (AI) has long been a hot topic. In 2023 it became super-heated when large language models became readily available to the public. This year’s IEDM will not rehash what’s been dragged through media. Instead, it will bring together industry experts to have a conversation about how AI is changing the semiconductor industry and to ask them how they are using AI to transform their efforts. The topics will be wide-ranging, from how AI will drive demand for semiconductors, to how it’s changing design and manufacturing, and even to how it will change the jobs and careers of those working in it.

 
Luncheon – Tuesday, Dec. 12
There will be a career-focused luncheon featuring industry and scientific leaders talking about their personal experiences in the context of career growth. The discussion will be moderated by Jennifer Zhao, President/CEO, ams OSRAM USA Inc. The speakers will be:

  •  Ilesanmi Adesida, University Provost and Acting President, Nazarbayev University, Kazakhstan -- Professor Ilesanmi Adesida is a scientist/engineer and an experienced administrator in both scientific and educational circles, with more than 350 peer-reviewed articles and 250 presentations at international conferences.
  •  Isabelle Ferain, Vice-President of Technology Development, GlobalFoundries -- Dr. Ferain oversees GF’s technology development mission in its 300mm fabs in the US and Europe.

 
Vendor Exhibition/MRAM Poster Session/MRAM Global Innovation Forum

  •  A vendor exhibition will be held once again.
  •  A special poster session dedicated to MRAM (magnetoresistive RAM memory) will take place during the IEDM on Tuesday, Dec. 12 from 2:20 p.m. to 5:30 p.m., sponsored by the IEEE Magnetics Society.
  •  Also sponsored by the IEEE Magnetics Society, the 15th MRAM Global Innovation Forum will be held in the same venue after the IEDM conference concludes, on Thursday, Dec. 14.

 
For registration and other information, visit www.ieee-iedm.org.
 

 
About IEEE & EDS
IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. The IEEE Electron Devices Society is dedicated to promoting excellence in the field of electron devices, and sponsors the IEEE IEDM.


Metalenz announces polarization sensor for face ID


Press release: https://metalenz.com/metalenz-launches-polar-id-enabling-simple-secure-face-unlock-for-smartphones/

Metalenz Launches Polar ID, Enabling Simple, Secure Face Unlock for Smartphones 

  • The world’s first polarization sensor for smartphones, Polar ID provides ultra-secure facial authentication in a condensed footprint, lowering implementation cost and complexity.
  •  Now demonstrated on Qualcomm Technologies’ latest Snapdragon mobile platform, Polar ID is poised to drive large-scale adoption of secure face unlock across the Android ecosystem.

Boston, MA – October 26, 2023 – Meta-optics industry leader Metalenz unveiled Polar ID, a revolutionary new face unlock solution, at Qualcomm Technologies’ annual Snapdragon Summit this week. As the world’s only consumer-grade imaging system that can sense the full polarization state of light, Polar ID enables the next level of biometric security. Using breakthrough advances in meta-optic capability, Polar ID accurately captures the unique “polarization signature” of a human face. With this additional layer of information, even the most sophisticated 3D masks and spoof instruments are immediately detected as non-human.
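For readers new to polarization imaging: the linear polarization state is conventionally summarized by Stokes parameters computed from intensity measurements behind analyzers at four angles. A minimal sketch of that standard math; the four-channel layout is a generic assumption, as Metalenz has not published its sensor's mosaic:

```python
import numpy as np

def linear_stokes(i0, i45, i90, i135):
    """Stokes S0, S1, S2 plus degree/angle of linear polarization from
    intensity images behind 0/45/90/135-degree analyzers."""
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                       # horizontal vs. vertical
    s2 = i45 - i135                     # +45 vs. -45 degrees
    dolp = np.sqrt(s1**2 + s2**2) / np.maximum(s0, 1e-12)
    aolp = 0.5 * np.arctan2(s2, s1)     # radians
    return s0, s1, s2, dolp, aolp

# Toy 2x2 scene: fully horizontally polarized light.
i0 = np.full((2, 2), 1.0)
i90 = np.zeros((2, 2))
i45 = i135 = np.full((2, 2), 0.5)
_, _, _, dolp, _ = linear_stokes(i0, i45, i90, i135)
print(dolp)  # ~1.0 everywhere
```

Per-pixel maps such as the degree and angle of linear polarization are the kind of “polarization signature” a face-authentication classifier can consume alongside intensity.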


Facial authentication provides a seamless method for unlocking phones and enabling digital payment. However, making the solution sufficiently secure has required expensive, bulky, and often power-hungry optical modules. Historically, this has limited the implementation of face unlock to only a few high-end phone models. Polar ID harnesses meta-optic technology to extract additional information, such as facial contour details, and to detect human tissue liveness from a single image. It is significantly more compact and cost effective than incumbent “structured light” face authentication solutions, which require an expensive dot-pattern projector and multiple images.


Now demonstrated on a smartphone reference design powered by the new Snapdragon® 8 Gen 3 Mobile Platform, Polar ID has the efficiency, footprint, and price point to enable any Android smartphone OEM to bring the convenience and security of face unlock to the hundreds of millions of mobile devices that currently use fingerprint sensors.

“Size, cost, and performance: those are the key metrics in the consumer industry,” said Rob Devlin, Metalenz CEO & Co-founder. “Polar ID offers an advantage in all three. It’s small enough to fit in the most challenging form factors, eliminating the need for a large notch in the display. It’s secure enough that it doesn’t get fooled by the most sophisticated 3D masks. It’s substantially higher resolution than existing facial authentication solutions, so even if you’re wearing sunglasses and a surgical mask, the system still works. As a result, Polar ID delivers secure facial recognition at less than half the size and cost of incumbent solutions.”


“With each new generation of our flagship Snapdragon 8 series, our goal is to deliver the next generation of cutting-edge smartphone imaging capabilities to consumers. Our advanced Qualcomm® Spectra™ ISP and Qualcomm® Hexagon™ NPU were specifically designed to enable complex new imaging solutions, and we are excited to work with Metalenz to support their new Polar ID biometric imaging solution on our Snapdragon mobile platform for the first time,” said Judd Heape, VP of Product Management, Qualcomm Technologies, Inc.


“Polar ID is a uniquely powerful biometric imaging solution that combines our polarization image sensor with post-processing algorithms and sophisticated machine learning models to reliably and securely recognize and authenticate the phone’s registered user. Working closely with Qualcomm Technologies to implement our solution on their reference smartphone powered by Snapdragon 8 Gen 3, we were able to leverage the advanced image signal processing capabilities of the Qualcomm Spectra ISP while also implementing mission-critical aspects of our algorithms in the secure framework of the Qualcomm Hexagon NPU, to ensure that the solution is not only spoof-proof but also essentially unhackable,” said Pawel Latawiec, CTO of Metalenz. “The result is an extremely fast and compute-efficient face unlock solution ready for OEMs to use in their next generation of Snapdragon 8 Gen 3-powered flagship Android smartphones.”


Polar ID is under early evaluation with several top smartphone OEMs, and additional evaluation kits will be made available in early 2024. Metalenz will exhibit its revolutionary Polar ID solution at MWC Barcelona and is now booking meetings to showcase a live demo of the technology to mobile OEMs.
Contact sales@metalenz.com to reserve your demo.
 


 


Fraunhofer IMS 10th CMOS Imaging Workshop Nov 21-22 in Duisburg, Germany


https://www.ims.fraunhofer.de/en/Newsroom/Fairs-and-events/10th-cmos-imaging-workshop.html

10th CMOS Imaging Workshop 

What to expect
You are kindly invited to an exciting event that will promote the exchange between users, developers and researchers of optical sensing, to enhance synergy and pave the way to great applications and ideas.

Main topics

  •  Single photon imaging
  •  Spectroscopy, scientific and medical imaging
  •  Quantum imaging
  •  Image sensor technologies

The workshop will not be limited to CMOS as a sensor technology, but will be fundamentally open to applications, technologies and methods based on advanced optical sensing.





Prophesee announces GenX320 low power event sensor for IoT applications


Press release: https://prophesee-1.reportablenews.com/pr/prophesee-launches-the-world-s-smallest-and-most-power-efficient-event-based-vision-sensor-bringing-more-intelligence-privacy-and-safety-than-ever-to-consumer-edge-ai-devices

Prophesee launches the world’s smallest and most power-efficient event-based vision sensor, bringing more intelligence, privacy and safety than ever to consumer Edge-AI devices

Prophesee’s latest event-based Metavision® sensor - GenX320 - delivers new levels of performance including ultra-low power, low latency, high flexibility for efficient integration in AR/VR, wearables, security and monitoring systems, touch-free interfaces, always-on IoT and many more

October 16, 2023, 2 p.m. CET, PARIS –– Prophesee SA, inventor of the world’s most advanced neuromorphic vision systems, today announced the availability of the GenX320 Event-based Metavision sensor, the industry’s first event-based vision sensor developed specifically for integration into ultra-low-power Edge AI vision devices. The fifth-generation Metavision sensor, available in a tiny 3 x 4 mm die size, expands the reach of the company’s pioneering technology platform into a vast range of fast-growing intelligent Edge market segments, including AR/VR headsets, security and monitoring/detection systems, touchless displays, eye tracking features, always-on smart IoT devices and many more.

The GenX320 event-based vision sensor builds on Prophesee’s track record of proven success and expertise in delivering the speed, low latency, dynamic range and power efficiency and privacy benefits of event-based vision to a diverse array of applications.

The 320x320, 6.3 µm pixel, BSI stacked event-based vision sensor offers a tiny 1/5-inch optical format. It has been developed with a specific focus on the unique requirements of efficient integration of innovative event sensing in energy-, compute- and size-constrained embedded at-the-edge vision systems. The GenX320 enables robust, high-speed vision at ultra-low power and in challenging operating and lighting conditions.
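To make “event-based” concrete: instead of frames, the sensor streams sparse (x, y, timestamp, polarity) change events, which downstream processing often bins into per-pixel histograms (the on-chip histogram output in the feature list below serves exactly this kind of representation). A minimal sketch of that accumulation, using a hypothetical event format:

```python
import numpy as np

WIDTH = HEIGHT = 320  # GenX320 array size

def events_to_histogram(events, t_start_us, t_end_us):
    """Bin (x, y, t_us, polarity) events from one time window into
    per-pixel OFF/ON count maps (a simple event-frame representation)."""
    hist = np.zeros((2, HEIGHT, WIDTH), dtype=np.uint16)  # [OFF, ON] channels
    for x, y, t_us, pol in events:
        if t_start_us <= t_us < t_end_us:
            hist[1 if pol > 0 else 0, y, x] += 1
    return hist

# Hypothetical events: (x, y, timestamp in microseconds, polarity +1/-1)
events = [(10, 12, 100, +1), (10, 12, 140, -1), (200, 50, 900, +1)]
h = events_to_histogram(events, t_start_us=0, t_end_us=1000)
print(h.sum())  # 3 events binned
```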

GenX320 benefits include:

  •  Low latency µsec resolution timestamping of events with flexible data formatting.
  •  On-chip intelligent power management modes reduce power consumption to as low as 36 µW and enable smart wake-on-events. Deep sleep and standby modes are also featured.
  •  Easy integrability/interfacing with standard SoCs with multiple integrated event data pre-processing, filtering, and formatting functions to minimize external processing overhead.
  •  MIPI or CPI data output interfaces offer low-latency connectivity to embedded processing platforms, including low-power microcontrollers and modern neuromorphic processor architectures.
  •  AI-ready: on-chip histogram output compatible with multiple AI accelerators.
  •  Sensor-level privacy enabled by the event sensor’s sparse, frameless event data with inherent static-scene removal.
  •  Native compatibility with Prophesee Metavision Intelligence, the most comprehensive, free, event-based vision software suite, used by a fast-growing community of 10,000+ users.

“The low-power Edge-AI market offers a diverse range of applications where the power efficiency and performance characteristics of event sensors are ideally suited. We have built on our foundation of commercial success in other application areas and developed this new event-based Metavision sensor to address the needs of Edge system developers with a sensor that is easy to integrate, configure and optimize for multiple compelling use cases in motion and object detection, presence awareness, gesture recognition, eye tracking, and other high growth areas,” said Luca Verre, CEO and co-founder of Prophesee.


Specific use case potential

  •  High speed eye-tracking for foveated rendering for seamless interaction in AR/VR/XR headsets
  •  Low latency touch-free human machine interface in consumer devices (TVs, laptops, game consoles, smart home appliances and devices, smart displays and more)
  •  Smart presence detection and people counting in IoT cameras and other devices
  •  Ultra-low power always-on area monitoring systems
  •  Fall detection cameras in homes and health facilities

Availability
The GenX320 is available for purchase from Prophesee and its sales partners. It is supported by a complete range of development tools for easy exploration and optimization, including a comprehensive Evaluation Kit housing a chip on board (COB) GenX320 module, or a compact optical flex module. In addition, Prophesee is offering a range of adapter kits that enable seamless connectivity to a large range of embedded platforms, such as a STM32 MCU, enabling faster time-to-market.


Early adopters
Zinn Labs
“Zinn Labs is developing the next generation of gaze tracking systems built on the unique capabilities of Prophesee’s Metavision event sensors. The new GenX320 sensor meets the demands of eye and gaze movements that change on millisecond timescales. Unlike traditional video-based gaze tracking pipelines, Zinn Labs is able to leverage the GenX320 sensor to track features of the eye with a fraction of the power and compute required for full-blown computer vision algorithms, bringing the footprint of the gaze tracking system below 20 mW. The small package size of the new sensor makes this the first time an event-based vision sensor can be applied to space-constrained head-mounted applications in AR/VR products. Zinn Labs is happy to be working with Prophesee and the GenX320 sensor as we move towards integrating this new sensor into upcoming customer projects.”
Kevin Boyle, CEO & Founder
 

XPERI
“Privacy continues to be one of the biggest consumer concerns when vision-based technology is used in our products such as DMS and TV services. Prophesee’s event-based Metavision technology enables us to take our ‘privacy by design’ principle to an even more secure level by allowing scene understanding without the need to have explicit visual representation of the scene. By capturing only changes in every pixel, rather than the entire scene as with traditional frame-based imaging sensors, our algorithms can derive knowledge to sense what is in the scene, without a detailed representation of it. We have developed a proof-of-concept demo that demonstrates DMS is fully possible using neuromorphic sensors. Using a 1MP neuromorphic sensor we can infer similar performance as an active NIR illumination 2MP vision sensor-based solution. Going forward, we focus on the GenX320 neuromorphic sensor that can be used in privacy sensitive smart devices to improve user experience.”
Petronel Bigioi, Chief Technology Officer
 

ULTRALEAP
“We have seen the benefits of Prophesee’s event-based sensors in enabling hands-free interaction via highly accurate gesture recognition and hand tracking capabilities in Ultraleap’s TouchFree application. Their ability to operate in challenging environmental conditions, at very efficient power levels, and with low system latency enhances the overall user experience and intuitiveness of our touch free UIs. With the new GenX320 sensor, these benefits of robustness, low power consumption, latency and high dynamic range can be extended to more types of applications and devices, including battery-operated and small form factors systems, proliferating hands-free use cases for increased convenience and ease of use in interacting with all sorts of digital content.”
Tom Carter, CEO & Co-founder

Additional coverage on EETimes:

https://www.eetimes.com/prophesee-reinvents-dvs-camera-for-aiot-applications/

Prophesee’s GenX320 chip, sensor die at the top, processor at the bottom. ESP refers to the digital event signal processing pipeline. (Source: Prophesee)

 


Omnivision’s new sensor for security cameras


OMNIVISION Announces New 4K2K Resolution Image Sensor for Home and Professional Security Cameras
 
The OS08C10 is a high-performance 8MP resolution, small-form-factor image sensor with on-chip staggered and DAG HDR technology, designed to produce superb video/image quality in challenging lighting environments
 
SANTA CLARA, Calif. – October 24, 2023 – OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the new OS08C10, an 8-megapixel (MP) backside illumination (BSI) image sensor that features both staggered high dynamic range (HDR) and single exposure dual analog gain (DAG) for high-performance imaging in challenging lighting conditions. The 1.45-micron (µm) BSI pixel supports 4K2K resolution and high frame rates. It comes in a small 1/2.8-inch optical format, a popular size for home and professional security, IoT and action cameras.
 
“Our new 1.45 µm pixel OS08C10 image sensor provides improved sensitivity and optimized readout noise, closing the gap with big-pixel image sensors that have traditionally been required for high-performance imaging in the security market,” said Cheney Zhang, senior marketing manager, OMNIVISION. “The OS08C10 supports both staggered HDR and DAG HDR. Staggered HDR extends dynamic range in both bright and low lighting conditions; the addition of built-in DAG provides single-exposure HDR support and reduces motion artifacts. Our new feature-packed sensor supports 4K2K resolution for superior image quality with finer details and enhanced clarity.”
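For intuition on the staggered HDR mode described above: a long and a short exposure are merged by keeping the long exposure where it is unclipped and substituting the short exposure, rescaled by the exposure ratio, where the long one saturates. A minimal generic sketch; OMNIVISION's actual on-chip pipeline is not public:

```python
import numpy as np

def merge_staggered_hdr(long_exp, short_exp, exposure_ratio, sat_level):
    """Combine long/short exposures into one linear HDR image: use the
    long exposure where valid, the rescaled short one where it clips."""
    long_exp = long_exp.astype(np.float32)
    short_lin = short_exp.astype(np.float32) * exposure_ratio
    return np.where(long_exp >= sat_level, short_lin, long_exp)

# Toy example: 16x exposure ratio, 10-bit saturation at 1023.
long_e = np.array([[100.0, 1023.0]])
short_e = np.array([[6.0, 200.0]])
print(merge_staggered_hdr(long_e, short_e, 16.0, 1023.0))  # [[100. 3200.]]
```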
 
OMNIVISION’s OS08C10 captures real-time 4K video at 60 frames per second (fps) with minimal artifacts. Its selective conversion gain (SCG) pixel design allows the sensor to flexibly select low and high conversion gain, depending on the lighting conditions. The sensor adopts the new correlated multi-sampling (CMS) to further reduce readout noise and improve SNR1 and low-light performance. The OS08C10’s on-chip defective pixel correction (DPC) improves quality and reliability above and beyond standard devices by providing real-time correction of defective pixels that can result throughout the sensor’s life cycle, especially in harsh operating conditions.
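A common form of the defective pixel correction mentioned above replaces any pixel that deviates strongly from its local median; a minimal generic sketch (the OS08C10's real-time DPC algorithm is not disclosed):

```python
import numpy as np
from scipy.ndimage import median_filter

def correct_defective_pixels(img, threshold):
    """Replace pixels that deviate from the 3x3 neighborhood median by
    more than `threshold` with that median (generic hot/dead pixel fix)."""
    med = median_filter(img.astype(np.float32), size=3)
    defective = np.abs(img - med) > threshold
    return np.where(defective, med, img), defective

img = np.full((5, 5), 100.0)
img[2, 2] = 4000.0  # simulated hot pixel
fixed, mask = correct_defective_pixels(img, threshold=200.0)
print(fixed[2, 2], int(mask.sum()))  # 100.0 1
```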
 
The OS08C10 is built on OMNIVISION’s PureCel®Plus-S stacked-die technology, enabling 8MP resolution with a small 1.45 µm BSI pixel. At 300 mW (60 fps), the OS08C10 achieves the lowest power consumption on the market. OMNIVISION’s OS08C10 is a cost-effective 4K2K solution for security, IoT and action camera applications.
 
The OS08C10 is sampling now and will be in mass production in Q1 2024. For more information, contact your OMNIVISION sales representative: www.ovt.com/contact-sales.


 

Go to the original article...

Sony introduces IMX900 stacked CIS

Image Sensors World        Go to the original article...

Sony Semiconductor Solutions to Launch 1/3-Type-Lens-Compatible, 3.2-Effective-Megapixel Stacked CMOS Image Sensor with Global Shutter for Industrial Use Featuring Highest Resolution in This Class in the Industry

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX900, a 1/3-type-lens-compatible, 3.2-effective-megapixel stacked CMOS image sensor with a global shutter for industrial use that boasts the highest resolution in its class.
The new sensor product employs an original pixel structure to dramatically improve light condensing efficiency and near infrared sensitivity compared to conventional products, enabling miniaturization of pixels while maintaining the key characteristics required of industrial image sensors. This design achieves the industry’s highest resolution of 3.2 effective megapixels for a 1/3.1-type, global shutter system which fits in the S-mount (M12), the mount widely used in compact industrial cameras and built-in vision cameras.

The new product will contribute to the streamlining of industrial tasks in numerous ways, by serving in applications such as code reading in the logistics market and assisting in automating manufacturing processes using picking robot applications on production lines, thereby helping to resolve issues in industrial applications.

With demand for automation and manpower savings on the rise in every industry, SSS’s original Pregius S™ global shutter technology contributes to improved image recognition by enabling high-speed, high-precision, motion distortion-free imaging in a compact design. The new sensor utilizes a unique pixel structure developed based on Pregius S, moving the memory unit that was previously located on the same substrate as the photodiode to a separate signal processing circuit area. This new design makes it possible to enlarge the photodiode area, enabling pixel miniaturization (2.25 μm) while maintaining a high saturation signal volume, successfully delivering a higher pixel count of approximately 3.2 effective megapixels for a 1/3.1-type sensor.

Moving the memory unit to the signal processing circuit area has also increased the aperture ratio, bringing significant improvements to both incident light angle dependency and quantum efficiency. These features enable a much greater level of flexibility in the lens design for the cameras which employ this sensor. Additionally, a thicker photodiode area enhances the near infrared wavelength (850 nm) sensitivity, and nearly doubles the quantum efficiency compared to conventional products.

This compact, 1/3.1-type product is available in a package size that fits in the S-mount (M12), the versatile mount type used in industrial applications. It can be used in a wide range of applications where more compact, higher performance product designs are desired, such as in compact cameras for barcode readers in the logistics market, picking robot cameras on production lines, and the automated guided vehicles (AGVs) and autonomous mobile robots (AMRs) that handle transportation tasks for workers.

Main Features

  •  Industry’s highest resolution for an image sensor with a global shutter compatible with a 1/3-type lens, at approximately 3.2 effective megapixels
  •  Vastly improved incident light angle dependency lends greater flexibility to lens design
  •  Delivers approximately double the quantum efficiency of conventional products in the near infrared wavelength
  •  Includes on-chip features for greater convenience in reducing post-production image processing load
  •  High-speed, 113 fps imaging


Cross-section of pixel structure
Product using conventional Pregius S technology (left) and the IMX900 using the new pixel structure (right)

Example of effects due to improved incident light angle dependency

Imaging comparison using near-infrared lighting (850 nm)
(Comparison in 2.25 μm pixel equivalent using conventional Pregius structure)


Usage example of Fast Auto Exposure function




Go to the original article...

Gpixel introduces 5MP and 12MP MIPI-enabled CIS

Image Sensors World        Go to the original article...

Gpixel adds MIPI-enabled 5 MP and 12 MP NIR Global Shutter image sensors to popular GMAX family


October 18, 2023, Changchun, China: Gpixel announces the pin-compatible GMAX3405 and GMAX3412 CMOS image sensors - both based on a high-performance 3.4 μm charge domain global shutter pixel - to complete its C-mount range of GMAX products. With options for readout via either LVDS or MIPI channels, these new sensors are optimized for easy integration into cost-sensitive applications in machine vision, industrial bar code reading, logistics, and traffic.


GMAX3405 provides a 2448(H) x 2048(V), 5 MP resolution in a 2/3” optical format. In 10-bit mode, reading out through all 12 pairs of LVDS channels, the frame rate is over 164 fps. In 12-bit mode, 100 fps can be achieved. Using the 4 MIPI D-PHY channels, the maximum frame rate is 73 fps with a 12-bit depth. GMAX3412 provides a 4096(H) x 3072(V), 12 MP resolution in a 1.1” optical format. In 10-bit mode, reading out through all 16 pairs of LVDS channels, the frame rate is over 128 fps. In 12-bit mode, 60 fps can be achieved. Using the 4 MIPI D-PHY channels, the maximum frame rate is 30 fps with a 12-bit depth. In both sensors, various multiplexing options are available for both LVDS and MIPI readout to reduce the number of lanes.
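As a quick sanity check, the quoted frame rates are consistent with the lane counts; the short Python sketch below works the arithmetic for the GMAX3405 (raw pixel payload only; row blanking, sync codes, and protocol overhead are ignored, so real serializer rates will be somewhat higher):

```python
# Rough sanity check of the quoted GMAX3405 figures (assumption: raw pixel
# payload only, ignoring blanking and protocol overhead).
H, V = 2448, 2048      # GMAX3405 resolution
FPS, BITS = 164, 10    # 10-bit LVDS mode
LVDS_PAIRS = 12

payload = H * V * FPS * BITS                                      # bits/s off the array
print(f"total payload: {payload / 1e9:.2f} Gbit/s")               # ~8.22 Gbit/s
print(f"per LVDS pair: {payload / LVDS_PAIRS / 1e9:.2f} Gbit/s")  # ~0.69 Gbit/s
```

At roughly 0.69 Gbit/s per pair, the figures sit comfortably within typical LVDS lane speeds.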

The 3.4 μm charge-domain global shutter pixel achieves a full well capacity of 10 ke- and noise of 3.6 e- at the default x1 PGA gain, down to 1.5 e- at the maximum gain setting (x16), delivering up to 68.8 dB linear dynamic range. The advanced pixel design combined with Red Fox technology brings a peak QE of 75% @ 540 nm, a NIR QE of 33% @ 850 nm, a parasitic light sensitivity of -88 dB, and an excellent angular response of > 15° @ 80% response. All of this is combined with a multislope HDR mode and ultra-short exposure time modes down to 1 µs.
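The quoted 68.8 dB linear dynamic range follows directly from the x1-gain full well and read noise, using the standard definition (the small difference from 68.8 dB is rounding):

$$\mathrm{DR} = 20\log_{10}\frac{\mathrm{FWC}}{\sigma_{\mathrm{read}}} = 20\log_{10}\frac{10\,000\,\mathrm{e^-}}{3.6\,\mathrm{e^-}} \approx 68.9\ \mathrm{dB}$$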


“The GMAX family was originally known for the world’s first 2.5 μm global shutter pixel. As the product family grows, we are leveraging the advanced technology that makes the 2.5 μm pixel possible to bring more generous light sensitivity with larger pixel sizes fitting mainstream optical formats. With the addition of the MIPI interface, pin-compatibility, and excellent NIR response, these two new models bring more flexibility and cost-effectiveness to the GMAX product family,” says Wim Wuyts, Gpixel’s Chief Commercial Officer.


Both GMAX3405 and GMAX3412 are housed in 176-pin ceramic LGA packages and are pin-compatible with each other. The outer dimensions of the 5MP and 12MP sensors are 17.60 mm x 15.80 mm and 22.93 mm x 19.39 mm, respectively. The LGA pad pattern is optimized for reliable solder connections, and the sensor assembly includes a double-sided AR-coated cover glass lid.

Engineering samples of both products, in both color and monochrome variants, can be ordered today for delivery in November 2023. For more information about Gpixel’s roadmap of products for industrial imaging, please contact info@gpixel.com to arrange for an overview.

Go to the original article...

Galaxycore announces dual analog gain HDR CIS

Image Sensors World        Go to the original article...

Press release: https://en.gcoreinc.com/news/detail-66

GalaxyCore Unveils Industry's First DAG Single-Frame HDR 13-Megapixel CIS

2023.08.11

GalaxyCore has officially launched the industry's first 13-megapixel image sensor with Single-Frame High Dynamic Range (HDR) capability – the GC13A2. This groundbreaking 1/3.1", 1.12μm-pixel back-illuminated CIS features GalaxyCore's unique Dual Analog Gain (DAG) circuit architecture, enabling low-power 12bit HDR output during previewing, photography, and video recording. This technology enhances imaging dynamic range for smartphones, tablets, and more, resulting in vividly clear images for users.

The GC13A2 also supports on-chip Global Tone Mapping, which compresses real-time 12bit data into 10bit output, preserving HDR effects and expanding compatibility with a wider range of smartphone platforms.



High Dynamic Range Technology

Dynamic range refers to the range between the darkest and brightest images an image sensor can capture. Traditional image sensors have limitations in dynamic range, often failing to capture scenes as perceived by the human eye. High Dynamic Range (HDR) technology emerged as a solution to this issue.


Left image: blowout in the bright part resulting from narrow dynamic range / Right image: shot with DAG HDR

Currently, image sensors use multi-frame synthesis techniques to enhance dynamic range:
Photography: Capturing 2-3 frames of the same scene with varying exposure times – shorter exposure to capture highlight details and longer exposure to supplement shadow details – then combining them to create an image with a wider dynamic range.

Video Recording: Utilizing multi-frame synthesis, the image sensor alternates between outputting 60fps long-exposure and short-exposure images, which the platform combines to produce a 30fps frame with preserved highlight color and shadow details.

While multi-frame synthesis yields noticeable improvements in dynamic range, it significantly increases power consumption, making it unsuitable for prolonged use on devices like smartphones and tablets. Moreover, it tends to produce motion artifacts when capturing moving objects.



Left Image: shot with Multi-Frame HDR (Motion Artifact) Right Image: shot with DAG HDR

GalaxyCore's Patented DAG HDR Technology

GalaxyCore's DAG HDR technology, based on single-frame imaging, employs high analog gain in shadow regions for improved clarity and texture, while low analog gain is used in highlight areas to prevent overexposure and preserve details. Compared to traditional multi-frame HDR, DAG HDR not only increases dynamic range and mitigates artifact issues but also addresses the power consumption problem associated with multi-frame synthesis. For instance, in photography, the number of frames required for scenes that used to need 3-frame synthesis is reduced by 50% when utilizing DAG HDR.
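To make the idea concrete, here is a minimal numpy sketch of how a dual-analog-gain merge followed by global tone mapping could work. All parameters (the 4x gain ratio, 10-bit ADC, square-root tone curve) are illustrative assumptions, not GalaxyCore's actual design:

```python
import numpy as np

def dag_hdr_merge(raw_low, raw_high, gain_ratio=4.0, adc_max=1023):
    """Toy single-frame dual-analog-gain (DAG) HDR merge. Illustrative
    only: the 4x gain ratio, 10-bit ADC, and square-root tone curve are
    assumptions. raw_low / raw_high are the same exposure read at low /
    high analog gain, so no motion artifacts can arise between them."""
    high = raw_high.astype(np.float32)
    low = raw_low.astype(np.float32) * gain_ratio  # rescale to high-gain units
    # Prefer the high-gain read (better shadow SNR) unless it clipped;
    # where it clipped, fall back to the rescaled low-gain read.
    merged = np.where(raw_high >= adc_max, low, high)  # ~12-bit linear range
    # Global tone mapping: compress the merged 12-bit range into 10 bits.
    return (np.sqrt(merged / (adc_max * gain_ratio)) * adc_max).astype(np.uint16)
```

Because both reads come from one exposure, the merge avoids the frame alignment that makes multi-frame HDR prone to motion artifacts.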

Left Image: Traditional HDR Photography Right Image: DAG HDR Photography

GC13A2 Empowers Imaging Excellence with HDR


Empowered by DAG HDR, the GC13A2 is capable of low-power 12bit HDR image output and 4K 30fps video capture. It reduces the need for frame synthesis during photography and lowers HDR video recording power consumption by approximately 30%, while avoiding the distortion caused by motion artifacts.

Compared to other image sensors of the same specifications in the industry, GC13A2 supports real-time HDR previewing, allowing users to directly observe every frame's details while shooting. This provides consumers with an enhanced shooting experience.

The GC13A2 has already passed initial verification by brand customers and is set to enter mass production. In the future, GalaxyCore will introduce a series of high-resolution DAG single-frame HDR products, including 32-megapixel and 50-megapixel variants. This will further enhance GalaxyCore’s high-performance product lineup, promoting superior imaging quality and an enhanced user experience for smartphones.

Go to the original article...

ISSW 2024 call for papers announced

Image Sensors World        Go to the original article...

Link: https://issw2024.fbk.eu/cfp

International SPAD Sensor Workshop (ISSW 2024) will be organized by Fondazione Bruno Kessler - FBK.
When: June 4-6, 2024
Location: Trento, Italy

Call for Papers & Posters

The 2024 International SPAD Sensor Workshop (ISSW) is a biennial event focusing on Single-Photon Avalanche Diodes (SPAD), SPAD-based sensors and related applications. The workshop welcomes all researchers (including PhDs, postdocs, and early-career researchers), practitioners, and educators interested in these topics.
 
After two online editions, the fourth edition of the workshop will return to an in-person-only format.
The event will take place in the city of Trento, in northern Italy, hosted at Fondazione Bruno Kessler, in a venue suited to encourage interaction and a shared experience among the attendees.

The workshop will follow a 1-day long introductory school on SPAD sensor technology, which will be held in the same venue as the workshop on June 3rd, 2024.
 
The workshop will include a mix of invited talks and, for the first time, peer-reviewed contributions.
Accepted works will be published on the International Image Sensor Society website (https://imagesensors.org/).

Submitted works may cover any of the aspects of SPAD technology, including device modelling, engineering and fabrication, SPAD characterization and measurements, pixel and sensor architectures and designs, and SPAD applications.
 
Topics
Papers on the following SPAD-related topics are solicited:
● CMOS/CMOS-compatible technologies
● SiPMs
● III-V, Ge-on-Si
● Modelling
● Quenching and front-end circuits
● Architectures
● Time-to-Digital Converters
● Smart histogramming techniques
● Applications of SPAD arrays, such as:
o Depth sensing / ToF / LiDAR
o Time-resolved imaging
o Low-light imaging
o High dynamic range imaging
o Biophotonics
o Computational imaging
o Quantum imaging
o Quantum RNG
o High energy physics
o Free space communication
● Emerging technologies & applications
 
Paper submission
Workshop proposals must be submitted online. A link will soon be made available.
 
Each submission should consist of a 100-word abstract and a camera-ready manuscript of 2-to-3 pages (including figures), and should include the authors’ name(s) and affiliation, a short bio & picture, and the presenter’s mailing address, telephone, and e-mail address. A template will be provided soon.
The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.
 
Papers will be considered on the basis of originality and quality. High quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee.

Accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed.

Authors will be notified of the acceptance of their abstract & posters at the latest by Wednesday Jan 31st, 2024.
 
Poster submission
In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics.

If you wish to take up this opportunity, please submit a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.

Go to the original article...

MDPI IISW2023 special issue – 316MP, 120FPS, HDR CIS

Image Sensors World        Go to the original article...

A. Agarwal et al. have published a full-length article on their IISW 2023 conference presentation in a special issue of MDPI Sensors. The paper is titled "A 316MP, 120FPS, High Dynamic Range CMOS Image Sensor for Next Generation Immersive Displays" and is joint work between Forza Silicon (AMETEK Inc.) and Sphere Entertainment Co.

Full article (open access): https://doi.org/10.3390/s23208383

Abstract
We present a 2D-stitched, 316MP, 120FPS, high dynamic range CMOS image sensor with 92 CML output ports operating at a cumulative data rate of 515 Gbit/s. The total die size is 9.92 cm × 8.31 cm and the chip is fabricated in a 65 nm, 4 metal BSI process with an overall power consumption of 23 W. A 4.3 µm dual-gain pixel has a high and low conversion gain full well of 6600e- and 41,000e-, respectively, with a total high gain temporal noise of 1.8e- achieving a composite dynamic range of 87 dB.
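The headline numbers are internally consistent; a small Python cross-check (assuming the 515 Gbit/s figure is raw payload shared evenly across the 92 ports, which the abstract does not state explicitly):

```python
import math

# Cross-checking the abstract's headline numbers (assumption: the quoted
# 515 Gbit/s is the raw cumulative payload split evenly across the ports).
PIXELS, FPS, PORTS = 316e6, 120, 92
TOTAL_RATE = 515e9                                                    # bit/s

print(f"{TOTAL_RATE / PORTS / 1e9:.2f} Gbit/s per CML port")          # ~5.60
print(f"{TOTAL_RATE / (PIXELS * FPS):.1f} bits per pixel per frame")  # ~13.6

# Composite dynamic range: low-gain full well over high-gain read noise.
print(f"{20 * math.log10(41000 / 1.8):.1f} dB")                       # ~87.2, vs 87 dB quoted
```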

Figure 1. Sensor on a 12 inch wafer (4 dies per wafer), die photo, and stitch plan.



Figure 2. Detailed block diagram showing sensor partitioning.


Figure 3. Distribution of active and dark rows in block B/H, block E, and final reticle plan.


Figure 5. Sensor timing for single-exposure dual-gain (HDR) operation.



Figure 6. Data aggregation and readout order for single-gain mode.


Figure 7. Data aggregation and readout order for dual-gain mode.

Figure 8. ADC output multiplexing network for electrical crosstalk mitigation.


Figure 9. Conventional single-ended ADC counter distribution.


Figure 10. Proposed pseudo-differential ADC counter distribution.


Figure 11. Generated thermal map from static IR drop simulation.

Figure 12. Measured dark current distribution.

Figure 13. SNR and transfer function in HDR mode.


Figure 14. Full-resolution color image captured in single-gain mode at 120 FPS.







Go to the original article...

Review paper on IR photodiodes

Image Sensors World        Go to the original article...

A team from Military University of Technology (Poland) and Shanghai Institute of Technical Physics (China) have published a review article titled "Infrared avalanche photodiodes from bulk to 2D materials" in Light: Science & Applications journal.

Open access paper: https://www.nature.com/articles/s41377-023-01259-3

Abstract: Avalanche photodiodes (APDs) have drawn huge interest in recent years and have been extensively used in a range of fields including the most important one—optical communication systems due to their time responses and high sensitivities. This article shows the evolution and the recent development of A(III)B(V), A(II)B(VI), and potential alternatives to formerly mentioned—“third wave” superlattices (SL) and two-dimensional (2D) materials infrared (IR) APDs. In the beginning, the APDs fundamental operating principle is demonstrated together with progress in architecture. It is shown that the APDs evolution has moved the device’s performance towards higher bandwidths, lower noise, and higher gain-bandwidth products. The material properties to reach both high gain and low excess noise for devices operating in different wavelength ranges were also considered showing the future progress and the research direction. More attention was paid to advances in A(III)B(V) APDs, such as AlInAsSb, which may be used in future optical communications, type-II superlattices (T2SLs, “Ga-based” and “Ga-free”), and 2D materials-based IR APDs. The latter—atomically thin 2D materials exhibit huge potential in APDs and could be considered as an alternative material to the well-known, sophisticated, and developed A(III)B(V) APD technologies to include single-photon detection mode. That is related to the fact that conventional bulk materials APDs’ performance is restricted by reasonably high dark currents. One approach to resolve that problem seems to be implementing low-dimensional materials and structures as the APDs’ active regions. The Schottky barrier and atomic level thicknesses lead to the 2D APD dark current significant suppression. What is more, APDs can operate within visible (VIS), near-infrared (NIR)/mid-wavelength infrared range (MWIR), with a responsivity ~80 A/W, external quantum efficiency ~24.8%, gain ~10^5 for MWIR [wavelength, λ = 4 μm, temperature, T = 10–180 K, Black Phosphorous (BP)/InSe APD]. It is believed that the 2D APD could prove themselves to be an alternative providing a viable method for device fabrication with simultaneous high-performance—sensitivity and low excess noise.


Fig. 1: Bulk to low-dimensional material, tactics to fabricate APDs and possible applications: FOC, FSO, LIDAR and QKDs.



Fig. 2: The APD’s operating principle. a Electron and hole multiplication mechanisms, schematic of multiplication mechanism for b k = 0 (αh = 0) and c k = 1 (αe = αh), where k = αh/αe; αe and αh represent the electron and hole ionization coefficients. d αe, αh ionization coefficients versus electric field for selected semiconductors used for APDs’ fabrication


Fig. 3: APDs. a p–n device, b SAM device, and c SAGCM device with electric field distribution. F(M) dependence on M for the selected k = αh/αe in APDs when: d electrons and e holes dominate in the avalanche mechanism. The multiplication path length probability distribution functions in the: f local and g non-local field “dead space” models

Fig. 4: InGaAs/InP SAM-APD. a device structure, b energy band profile, and electric field under normal reverse bias condition. AlxIn1–xAsySb1–y based SACM APD: c detector’s design with the E distribution within the detector, d measured and theoretically simulated gain, dark current, photocurrent versus reverse voltage for 90 μm diameter device at room temperature [39]. InAs planar avalanche photodiode: e a schematic design diagram, f comparison of the gain reached by 1550 nm wavelength laser [132,133]. The M normalized dark current for 100 μm radius planar APD was presented for 200 K

Fig. 5: F(M) versus M for: a Si, AlInAs, GaAs, Ge, InP [the solid lines present the F(M) for k within the range 0–1 (increment 0.1) calculated by the local field model [24]; typical F(M) are shown by shaded regions [37]] and b selected materials: 3.5 μm thick intrinsic InAs APDs (50 μm and 100 μm radius), 4.2 μm cut-off wavelength HgCdTe and 2.2 μm InAlAs APDs
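For reference, the local-field (McIntyre) model behind the k-parameterized curves in Fig. 5a gives the excess noise factor as a function of the gain M and the ionization coefficient ratio k:

$$F(M) = kM + (1 - k)\left(2 - \frac{1}{M}\right)$$

For k = 0 (electron-only multiplication), F(M) approaches 2 at high gain, which is why materials and designs with small k dominate the low-noise APDs discussed in these figures.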



Fig. 6: Gain and k versus Hg1–xCdxTe bandgap energy. a the crossover between e-APD and h-APD. The crossover at Eg ≈ 0.65 eV corresponds to λc = 1.9 μm at 300 K [46]. Hole-initiated avalanche HgCdTe photodiode: b detector profile, c energy band structure, d hole-initiated multiplication process energy band structure. The multiplication layer bandgap energy is adjusted to the resonance condition where the bandgap and the split-off valence band energy and the top of the heavy-hole valence band energy difference are equal. Electron-initiated avalanche HgCdTe photodiode: e diagram of electron-initiated avalanche process for HgCdTe-based high-density vertically integrated photodiode (HDVIP) structure (n-type central region and p-type material around), f electron avalanche mechanism, and g relative spectral response for 5.1 μm cut-off wavelength HgCdTe HDVIP at T = 80 K

 


Fig. 7: HgCdTe APDs performance. a the experimental gain versus bias for selected cut-off wavelengths for DRS electron-initiated APDs at 77 K together with extra measured data points taken at ∼77 K [51] and LETI e-APDs at 80 K [59], b constant F(M) ~ 1 versus M at 80 K for 4.3 μm cut-off wavelength APD [135]

Fig. 8: The device structure comparison between low-noise PMT and multi-quantum well APDs. a schematic presentation of a photomultiplier tube, b multi-quantum well p-i-n APD energy band sketch with marked intrinsic region (i), c energy band profiles of staircase APD under zero (top) and reverse (bottom) voltage. Multistep AlInAsSb staircase avalanche photodiode: d 3-step staircase APD device profile, e theoretically calculated by Monte Carlo method and measured gain of 1-, 2-, and 3-stair APDs at 300 K [70]. MWIR SAM-APD structure with AlAsSb/GaSb superlattice: f device design profile, g energy band structure under reverse voltage, and h carriers impact multiplication coefficients versus reciprocal electric field at 200 K


Fig. 9: Low-dimensional solid avalanche photodetectors. a graphite/InSe Schottky avalanche detector - injection, ionization, collection electron transport mechanisms, b e-ph scattering dimensionality reduction affects electron acceleration process and gain versus electric field in 2D (red line) and 3D (blue line), c breakdown voltage (Vbd) and gain as a function of temperature—exhibits a negative temperature coefficient [81]. Nanoscale vertical InSe/BP heterostructures ballistic avalanche photodetector: d schematic of the graphene/BP/metal avalanche device [83], e ballistic avalanche photodetector operating principle, f quasi-periodic current oscillations, g schematic of the graphene InSe/BP [83], h Ids–Vds characteristics for selected temperatures (40 − 180 K), i avalanche breakdown threshold voltage (Vth) and gain versus temperature—showing a negative temperature coefficient. Pristine PN junction avalanche photodetector: j device structure, k as the number of layers increases, a positive/negative signal of SCM denotes hole/electron carriers, l APD’s low temperature (~100 K) dark and photocurrent I–V curves


Fig. 10: An idea of laser-gated system connected with passive thermal imaging for enhanced distant identification. a operation principle [at t0—camera is closed—light pulse is emitted, at t1—target reflects light pulse, at t2—the camera is opened for a short period (∆t) matching the needed depth of view]; b typical images of wide FOV thermal and laser-gating systems


Go to the original article...

Review paper on long range single-photon LiDAR

Image Sensors World        Go to the original article...

Hadfield et al. recently published a review paper titled "Single-photon detection for long-range imaging and sensing" in Optica:

Abstract: Single-photon detectors with picosecond timing resolution have advanced rapidly in the past decade. This has spurred progress in time-correlated single-photon counting applications, from quantum optics to life sciences and remote sensing. A variety of advanced optoelectronic device architectures offer not only high-performance single-pixel devices but also the ability to scale up to detector arrays and extend single-photon sensitivity into the short-wave infrared and beyond. The advent of single-photon focal plane arrays is poised to revolutionize infrared imaging and sensing. In this mini-review, we set out performance metrics for single-photon detection, assess the requirements of single-photon light detection and ranging, and survey the state of the art and prospects for new developments across semiconductor and superconducting single-photon detection technologies. Our goal is to capture a snapshot of a rapidly developing landscape of photonic technology and forecast future trends and opportunities.
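The core ranging idea behind the time-correlated single-photon counting the review surveys can be sketched in a few lines: photon arrival times are histogrammed over many laser pulses, the peak bin gives the round-trip delay, and range = c·t/2. A minimal Python sketch follows; every parameter (bin width, jitter, background level) is invented for illustration and not taken from the paper:

```python
import numpy as np

# Toy TCSPC ranging example: histogram photon arrival times over many
# pulses; the peak bin gives the round-trip delay, so range = c * t / 2.
C = 299_792_458.0                 # speed of light, m/s
BIN = 100e-12                     # 100 ps timing bins (hypothetical)
WINDOW = 5e-6                     # 5 us gate, i.e. up to 750 m of range

rng = np.random.default_rng(0)
t_signal = 2 * 600.0 / C                                        # 600 m target
arrivals = rng.normal(t_signal, 50e-12, 2_000)                  # jittered returns
arrivals = np.append(arrivals, rng.uniform(0, WINDOW, 10_000))  # solar background

hist, edges = np.histogram(arrivals, bins=int(WINDOW / BIN), range=(0, WINDOW))
t_peak = edges[np.argmax(hist)] + BIN / 2
print(f"estimated range: {C * t_peak / 2:.2f} m")               # ~600 m
```

Even with five times more background photons than signal photons, the histogram peak stands out because the background is spread over tens of thousands of bins; this is why picosecond timing resolution translates directly into daylight-tolerant, long-range operation.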

 

Fig. 1. Examples of imaging LIDAR configurations. (a) Flash LIDAR configuration using an array sensor and full-field illumination (a bistatic system is shown, with source and sensor separated). (b) Scanning LIDAR approach where the source is scanned and an individual sensor is used. (In this illustration, a bistatic configuration is shown; however, a monostatic scanning configuration is often used with a common transmit and receive axis).


 

Fig. 2. Single-photon LIDAR depth profiles taken at a range of greater than 600 m using a 100-channel Si SPAD detector system in scanning configuration. The operational wavelength is 532 nm. (a) Visible-band photograph of scene. (b) Reconstructed depth image of the city scene. (c) Detailed depth profile of the subsection of the scene within the red rectangle in (a). Further details in Z. Li et al. [60]. Figure reproduced with permission of Optica Publishing Group.



Fig. 3. Example of data fusion of a 3D image from a CMOS SPAD detector array and passive imagery of a scene at 150 m range. (a) Retrieved depth information from a SPAD detector array. (b) Intensity information from the SPAD overlaid on top of the retrieved depth information. (c) Intensity information from a color camera overlaid on top of the retrieved depth information [65]. Figure reproduced with permission of Springer Nature publishing.


Fig. 4. Solar irradiance versus wavelength at sea level (red) and in the upper atmosphere (blue). MODTRAN simulation [86]. The following spectral bands beyond the visible wavelength range are denoted by the shaded regions: near infrared (NIR), yellow; short-wave infrared (SWIR), cyan; mid-wave infrared (MWIR), red.



Fig. 5. Example of scanning SWIR single-photon LIDAR imaging. (a) Visible-band image of a residential building taken with an f=200mm camera lens. (b) Depth intensity plot of the building imaged with 32×32 scan points over a range of 8.8 km. (c) Depth plot of the building imaged with 32×32 scan points over a range of 8.8 km; side view of the target [89]. Figure reproduced with permission of Optica Publishing Group.

Fig. 6. Reconstruction results of a mountain scene over a range of 201.5 km using SWIR single-photon LIDAR [91]. (a) Visible-band imaged photograph. (b) Reconstructed depth result using algorithm by Lindell et al. [92] for data with signal-to-background ratio ∼0.04 and mean signal photon per pixel ∼3.58. (c) 3D profile of the reconstructed result. Figure reproduced with permission of Optica Publishing Group.


Fig. 7. Analysis of a scene with an actor holding a wooden plank across his chest and standing 1 m behind camouflage netting at a range of 230 m in daylight conditions. (a) Photograph of the scene, showing the actor holding a wooden plank behind the camouflage. (b), (c) Intensity and depth profiles of the target scene using all the collected single-photon LIDAR data. (d), (e) Intensity and depth profiles after time gating to exclude all data except those with a 0.6 m range around the target location. The pixel format used in the depth and intensity profiles is 80×160 [95]. Figure reproduced with permission of SPIE publishing.



Fig. 8. Schematic diagram of a SWIR single-photon 3D flash imaging experiment. The scene consists of two people walking behind a camouflage net at a stand-off distance of 320 m from the LIDAR system. An RGB camera was positioned a few meters from the 3D scene and used to acquire a reference video. The proposed algorithm is able to provide real-time 3D reconstructions using a graphics processing unit (GPU). As the LIDAR presents only 32×32 pixels, the point cloud was estimated in a higher resolution of 96×96 pixels. The acquired movie is shown in [101]. Figure reproduced with permission of Springer Nature publishing.

Fig. 9. Single-photon detector technologies for infrared single-photon LIDAR, with spectral coverage for each detector type indicated. (a) Schematic diagram cross section of a Si-based SPAD detector. The design is a homojunction. (b) Schematic diagram cross section of a Ge-on-Si structure, illustrating optical absorption in the Ge layer, and multiplication in the intrinsic Si layer. (c) Schematic diagram cross section of an InGaAs/InP SPAD detector; the absorption is in the narrow-gap InGaAs and the multiplication in the wider gap InP layer. In both (b) and (c), the charge sheet is used to alter the relative electric fields in the absorption and multiplication layers. (d) Schematic illustration of SNSPD architecture for near-unity efficiency at 1550 nm wavelength and optical micrograph of chip with single-pixel detector [109]; (d) reproduced with permission of Optica Publishing Group.

 

 


Link to paper (open access): https://opg.optica.org/optica/abstract.cfm?URI=optica-10-9-1124

Go to the original article...

MDPI IISW2023 Special Issue – paper on random telegraph noise

Image Sensors World        Go to the original article...

The first article in the Sensors special issue for IISW2023 is now available:

https://www.mdpi.com/1424-8220/23/18/7959

Chao et al. from TSMC in a paper titled "Random Telegraph Noise Degradation Caused by Hot Carrier Injection in a 0.8 μm-Pitch 8.3Mpixel Stacked CMOS Image Sensor" write:

In this work, the degradation of the random telegraph noise (RTN) and the threshold voltage (Vt) shift of an 8.3Mpixel stacked CMOS image sensor (CIS) under hot carrier injection (HCI) stress are investigated. We report for the first time the significant statistical differences between these two device aging phenomena. The Vt shift is relatively uniform among all the devices and gradually evolves over time. By contrast, the RTN degradation is evidently abrupt and random in nature and only happens to a small percentage of devices. The generation of new RTN traps by HCI during times of stress is demonstrated both statistically and on the individual device level. An improved method is developed to identify RTN devices with degenerate amplitude histograms.
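The "improved method" for identifying RTN devices with degenerate amplitude histograms (detailed in Figures 12 through 15 below) screens each single-peak histogram by how far it deviates from a fitted Gaussian. A minimal Python sketch of that idea follows; the binning, fit, and normalization details here are assumptions for illustration, not the paper's actual procedure:

```python
import numpy as np

def rtn_deviation_ratio(samples, bins=64):
    """Deviation-from-Gaussian ratio R for a single-peak amplitude
    histogram (sketch of the Fig. 12 screen; fitting details assumed).
    Devices with R above a chosen threshold (e.g. 15%) are RTN-like."""
    hist, edges = np.histogram(samples, bins=bins)
    centers = 0.5 * (edges[:-1] + edges[1:])
    mu, sigma = samples.mean(), samples.std()
    gauss = np.exp(-0.5 * ((centers - mu) / sigma) ** 2)
    gauss *= hist.sum() / gauss.sum()        # normalize fit to the same area
    # One-sided excess area of the histogram over the fit, vs total area.
    return np.abs(hist - gauss).sum() / (2.0 * hist.sum())
```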

 

Figure 1. Simplified test chip architecture. The device under stress is the source follower (SF) NMOS in the 4 × 2-shared pixels on the top layer. The PD0–7 are the photodiodes, and the TG0–7 are the transfer gates in each 4 × 2-shared pixel. The total number of SF is 628 × 1648 = 1.03 M.


Figure 2. (a) The measured IB of a SF device vs. VD with VG stepping from 1.3 V to 2.8 V; (b) The same data as in (a) but plotted against VDS−VDsat≈VD−VG+Vt with Vt as a fitting parameter; (c) The same data as in (b) plotted against 1/(VDS−VDsat) with P=(P1,P2) as two fitting parameters according to Equation (1).


Figure 3. The bias configuration of the SF under test. The red and blue solid circles symbolize electrons and holes, respectively.


Figure 4. The histograms of the measured VGS of the SF for stress time (t) from 0 to 100 min.



Figure 5. (a) The histograms of the threshold voltage shift (ΔVt) after 10-, 20-, 50-, and 100-min stress; (b) The inverse cumulative distribution function (ICDF) curves of ΔVt; (c) the constant ICDF contours against stress time (t).



Figure 6. (a) The histograms of the random noise changes (ΔRN) after 10, 20, 50, 100 min stress; (b) The inverse cumulative distribution function (ICDF) curves; (c) the constant ICDF contours as functions of stress time (t).


Figure 7. The correlation of the SF threshold voltage shift (ΔVt) after 10 min of HCI stress vs. after (a) 20 min, (b) 50 min, and (c) 100 min of stress, respectively. The linear least-square fit of the x/y ratio (red dash line) shows the continuous increase of the ΔVt as the stress time increases. The ΔVt increases are relatively uniform among all 1M devices, which is quite different from the random noise increases in Figure 8 below. Random colors are assigned to the data points to separate the dots from each other.


Figure 8. The correlation of the random noises (RN) before HCI stress (t = 0) vs. after (a) 10 min, (b) 20 min, and (c) 100 min stress, respectively. The RN increases are noticeably nonuniform. The RN along the x = y red dash line remains relatively unchanged. The devices on the lower-right branches show a significant increase in RN. The population of the lower branch increases as stress time increases. Random colors are assigned to the data points to separate the dots from each other.



Figure 9. The 2D histograms of the correlation of the Vt shift and RN degradation show dramatically different statistical behaviors. (a) The Vt change after 100-min stress versus that after 10-min stress. (b) The RN after 100 min stress versus that before the stress.


Figure 10. Generation of RTN traps during HCI stress. The 5000-frame waveforms before (t = 0) and after the HCI stress (t = 20, 100 min) with the corresponding histograms are shown for three selected examples. (a) Device (296, 137) shows one trap before stress and remains unchanged after stress. (b) Device (202, 1338) shows no trap before stress and one trap generated after 20 min of stress. (c) Device (400, 816) shows no trap before stress; however, one trap is generated after 100 min of stress. The RN unit is mV-rms.


Figure 11. Degeneration of the RTN discrete levels. During HCI stress, the non-RTN noises may increase significantly such that the discrete RTN levels become indistinguishable. (a) Device (141, 1393) shows such degeneration after 100 min of stress. (b) Device (481, 405) shows degeneration after 20 min of stress. (c) Device (519, 1638) shows unsymmetric side peaks and unsymmetric degeneration after 20 min and 100 min of stress. The RN unit is mV-rms.


Figure 12. For devices showing a single histogram peak, if the histogram is significantly different from the Gaussian distribution, they are counted as RTN-like devices. The ratio R expressed in Equation (2) is defined as the red area versus the total area under the black histogram. The R values in examples (a) and (b) are 36% and 28%, respectively. The RN unit is mV-rms.



Figure 13. Devices with amplitude distributions close to Gaussian are considered as non-RTN devices. The deviation ratio R is 7% for device (587, 492) in (a) and 9% for device (124, 1349) in (b). The RN unit is mV-rms.


Figure 14. The RN distribution of the RTN and non-RTN devices, sorted by the improved algorithm: (a) before HCI stress, (b) after 20 min stress, and (c) after 100 min stress. The RTN devices clearly contribute to and dominate the long tails of the RN histograms. The number of RTN devices (Nx) (with the R-threshold set to 15%) increases systematically as the stress time increases.


Figure 15. The count of RTN devices increases consistently as stress time increases. N2 is the number of devices showing two or more peaks in amplitude histograms. Nx is N2 plus the number of RTN-like devices determined by setting the R-threshold to 10%, 15%, and 20%, respectively.



Figure 16. (a) The Vt shift and (b) the RN degradation trends against the effective stress defined in Equation (3), where the effectiveness factors are treated as empirical fitting parameters such that all the constant-ICDF points for different voltages fall onto a family of continuous and smooth curves. The fitting results are listed in Table 1.







Go to the original article...


Omnivision announces new sensor for security and surveillance applications

Image Sensors World        Go to the original article...

OMNIVISION Announces New Low-power, Enhanced-performance 2MP Image Sensor for Security Surveillance Cameras
 
The OS02N features a 2.5-micron enhanced-performance FSI pixel with on-sensor DPC for higher sensitivity, performance and reliability while remaining cost-effective
 
SANTA CLARA, Calif. – September 27, 2023 – OMNIVISION, a leading global developer of semiconductor solutions, including advanced digital imaging, analog, and touch & display technology, today announced the new OS02N, a 2-megapixel (MP) frontside illumination (FSI) image sensor with an optimized defective pixel correction (DPC) algorithm for higher sensitivity, improved performance and increased reliability for IP and HD analog security cameras, including professional surveillance and outdoor home security cameras. The OS02N’s low-power capability supports always-on operation.
 
“Customers need high-performing security cameras that produce sharp, high-resolution images with low power consumption for extended battery life. The OS02N meets these requirements and is also a cost-effective solution,” said Cheney Zhang, senior marketing manager, OMNIVISION. “The OS02N uses FSI technology, which has a large pixel size for better quantum efficiency and excellent signal-to-noise ratio, resulting in high sensitivity in low-light conditions and dramatically improved image quality and performance. It has a 1/3.27-inch optical format and is designed to be pin-to-pin compatible with our OS04L and OS04D image sensors.”
 
The OS02N features a 2.5-micron pixel based on OMNIVISION’s OmniPixel®3-HS technology. This enhanced-performance, cost-effective solution uses FSI technology for true-to-life color reproduction in both bright and dark conditions. The optimized DPC algorithm improves sensor quality and reliability above and beyond standard devices by providing real-time correction of defective pixels that can develop throughout the sensor’s life cycle, especially in harsh operating conditions. The OS02N features 1920x1080 resolution at 30 frames per second (FPS).
 
The OS02N supports MIPI and DVP interfaces. It is sampling now and will be in mass production in Q1 2024. For more information, contact your OMNIVISION sales representative: www.ovt.com/contact-sales.


 

Go to the original article...

Sheba Microsystems MEMS-based lens athermalization solution

Image Sensors World        Go to the original article...

Sheba Microsystems Launches Revolutionary MEMS Autofocus Actuator for Active Athermalization in Embedded Vision Cameras


Breakthrough µPistons™ technology uniquely solves the embedded vision camera industry’s decades-long problem of lens thermal expansion. The novel product unlocks unparalleled resolution and consistent high-quality imaging performance for automotive, action, drone, mobile robotics, security and surveillance, and machine vision cameras.

TORONTO--(BUSINESS WIRE)--Sheba Microsystems Inc., a global leader in MEMS technologies, today announced the launch of its revolutionary new product, the MEMS Autofocus Actuator for Active Athermalization in Embedded Vision Cameras used in automotive, action, drones, machine vision, security and surveillance, and mobile robotics.

The first-of-its-kind solution tackles the long-standing industry problem of embedded vision cameras’ inability to maintain image quality and focus stability during temperature fluctuations as optics undergo thermal expansion.

While smartphones use autofocus actuators and electromagnetic actuators including voice coil motors (VCMs), these actuators are unreliable for achieving active athermalization in embedded vision cameras due to extreme environmental conditions. Embedded vision camera optics are also 30 times larger than smartphone optics. Other autofocus systems on the market, such as tunable lenses, lack thermal stability and compromise optical quality.

“MEMS actuators are fast, precise, and small in size, and are actually uniquely suited to solve thermal expansion issues, because they are thermally stable and maintain consistent performance regardless of temperature changes,” said CEO and co-founder Dr. Faez Ba-Tis, PhD. “Because of these known advantages, there have been previous industry attempts at incorporating MEMS actuators into cameras, but because they failed drop tests they were quickly abandoned. Sheba’s new design solves for all of these previous blockers, which opens up limitless possibilities for embedded vision camera innovation.”

Sheba’s proprietary technology compensates for thermal expansion by uniquely moving the lightweight sensor, instead of moving the lenses. The silicon-based MEMS actuator platform actuates the image sensor along the optical axis to compensate for thermal expansion in the optics. The weight of the image sensor represents only 2-3 % of the optical lens weight, which makes it easier to handle, enabling ultra-fast and precise autofocus performance even when temperatures fluctuate.

Sheba’s novel piston-tube electrode configuration takes advantage of a larger capacitive area, allowing for substantial stroke and increased force. In contrast to traditional MEMS comb-drive electrode configuration, Sheba’s µPistons™ design makes the MEMS actuators uniquely resilient against severe shocks, since the electrodes are well-supported and interconnected with each other.

Sheba’s new MEMS actuator has successfully passed drop tests as well as other reliability tests, including thermal shock, thermal cycling, vibration, mechanical shock, drop, tumble, and microdrop tests. It is also highly rugged, which helps maintain image focus during high shocks in action cameras or machine vision environments.

“Digital camera technologies are increasingly used in almost every aspect of our lives,” said Ba-Tis. “From sharing photos of our travels in social media, to experiencing new artificial intelligence innovations powered by machine vision, and accelerating the deployment of autonomous vehicles in our communities, high quality images are imperative to not only capture our most memorable events, but to also keep us safe. In situations where split-second decisions are critical, image quality becomes paramount.”

Sheba’s MEMS actuator offers lens design flexibility and is suitable for near and far-field imaging. It is easily integrated into existing systems and scaled up on mass production tools for automotive, action, drone, mobile robotics, security and surveillance, and machine vision cameras.
Sheba is offering evaluation kits to interested customers, so they can test and evaluate the new product in their own labs to ensure the reliability of the technology. The kit includes camera samples, a daughter board with the MEMS driver, interposer, and camera test jig to perform mechanical reliability tests, software, and user manual.

To learn more about Sheba Microsystems or to order an evaluation kit for your organization, visit www.shebamicrosystems.ca.

 



Go to the original article...

EETimes article on PixArt Imaging’s "smart pixel" sensor

Image Sensors World        Go to the original article...

Link: https://www.eetimes.com/smart-pixel-optical-sensing-exerting-ai-in-pixels-level/

Smart Pixel Optical Sensing – Exerting AI in Pixels Level

The PAC9001LU Smart Pixel Optical Sensing Chip is a computer vision ASIC that serves as an always-on motion sensor by building a novel AI-driven pixel architecture into the sensor array design. Based on a CMOS image sensor rolling shutter design with an array of 36 x 16 pixels, it supports frame rates of up to 1000 Hz to facilitate image capture of fast-moving objects. The in-pixel AI design integrates a frame-comparing circuit with AI-powered algorithms to compute differences in pixel luminosity within a configurable image area. It directly provides analog frame differences and event info in Pixel Differences Mode, and its Smart Motion Detection Mode eliminates complex image signal processing in the processor. Partial-array sensing, such as a configurable ROI region, provides flexible custom scene capture for AIoT edge applications.



The PAC9001LU directly handles the digital conversion of raw image signals. It computes the image difference between two frames internally on the chip, providing the difference of each pixel in an 8-bit data format. This 8-bit data size is small compared to the raw image data of the whole pixel array, which effectively reduces data transmission bandwidth and latency.
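A toy model of that pixel-difference output is sketched below; the threshold, signed format, and clamping behavior are assumptions for illustration, not PixArt's documented arithmetic:

```python
import numpy as np

def pixel_differences(prev, curr, threshold=16):
    """Toy model of the Pixel Differences Mode described above
    (illustrative only; the on-chip arithmetic is not specified).
    Returns the signed 8-bit per-pixel difference and an event mask."""
    diff = curr.astype(np.int16) - prev.astype(np.int16)
    diff8 = np.clip(diff, -128, 127).astype(np.int8)   # 8-bit output format
    return diff8, np.abs(diff8) > threshold            # "motion" events

# 36 x 16 array as in the PAC9001LU; a bright patch moves into the scene.
prev = np.zeros((16, 36), dtype=np.uint8)
curr = prev.copy()
curr[4:8, 10:14] = 200
diff8, events = pixel_differences(prev, curr)
print(events.sum(), "changed pixels")                  # 16
```

At 8 bits per pixel difference for a 36 x 16 array, a full frame of output is only 576 bytes, which is why the chip can sustain a 1000 Hz report rate over a modest interface.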
 

The PAC9001LU chip comes in a W2.5 x L2.6 x H0.43 mm3 CSP package (excluding solder balls). A recommended matching lens set, the LST0-2621, is also available to form a complete module when assembled with the PAC9001LU chip; the assembled module measures W3.79 x L3.63 x H1.67 mm3 (height including guide pin).



  •  The low power consumption of Smart Motion Detection Mode, combined with the intelligent information it provides, is the most remarkable building block for enabling AI applications. By comparison, PIR or CMOS image sensor (CIS) solutions require higher power consumption for further data processing at the system level.
  •  The high report rate, up to 1000 Hz, enables motion detection of fast-moving objects, outperforming PIR and conventional CIS.
  •  The PAC9001LU is more robust, with reliable performance: it raises fewer false motion alarms and has higher immunity to temperature interference. External environmental factors, such as bright, hot sunlight outdoors or indoor thermal noise from heated devices, do not affect its sensing performance, and the built-in algorithms can also eliminate interference such as background noise.
  •  The small form factor of the complete PAC9001LU sensor module, including the lens set, fits nicely into slim-bezel industrial designs.
  •  Traditional PIR sensors usually must not be shielded by a plastic or glass front-facing cover, which would impair detection of thermal IR radiation. The PAC9001LU has no such restriction: a front cover of any material can be used without degrading motion sensing quality, and the device can even be placed indoors looking out through a glass window. With cover protection, the PAC9001LU is also less prone to external damage.



The PAC9001LU sensor supports sensing in low-light or no-light conditions, making it well suited to dark environments such as a basement.


The PAC9001LU caters to the need for in-chip high-speed motion detection, eliminating the need for external controller processing.

 

In addition to motion sensing, the PAC9001LU sensor can provide coordinate information for a targeted moving object, synchronized with each frame of pixel-difference image data.


The PAC9001KE Evaluation Kit is available for evaluation and design research purposes.

Go to the original article...

Sony announces IMX735 17.42MP Automotive CIS

Image Sensors World        Go to the original article...

Press release: https://www.sony-semicon.com/en/news/2023/2023091201.html

Sony Semiconductor Solutions to Release CMOS Image Sensor for Automotive Cameras with Industry-Leading 17.42-Effective Megapixels

Delivering sophisticated sensing and recognition performance and contributing to safe, secure automated driving

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX735, a new CMOS image sensor for automotive cameras with the industry’s highest pixel count, at 17.42 effective megapixels. The new sensor product will support the development of automotive camera systems capable of sophisticated sensing and recognition performance, thereby contributing to safe, secure automated driving.

For automated systems to deliver automated driving, they must offer sophisticated, high-precision sensing and recognition performance, encompassing all 360 degrees of the environment around the vehicle. Accordingly, there is considerable demand for image sensors that can help achieve this level of performance and support the development of more advanced automotive camera systems.

The new sensor product achieves the industry’s highest pixel count of 17.42 effective megapixels, enabling high definition capture of far-off objects. Moreover, automated driving systems often use automotive cameras in combination with LiDAR and other sensing systems. While typical CMOS image sensors read out signals from pixels one vertical line at a time, this product outputs signals horizontally, one row at a time. This means that automotive cameras employing this sensor can more easily synchronize with mechanical scanning LiDAR, since their laser beams also scan horizontally. This better synchronization will improve the sensing and recognition capabilities of the automated driving system as a whole.

Furthermore, the new sensor's proprietary pixel structure and unique exposure method improve saturation illuminance, yielding a wide dynamic range of 106 dB even when high dynamic range (HDR) imaging and LED flicker mitigation are used simultaneously. The dynamic range is even wider, at 130 dB, in dynamic range priority mode. This design helps suppress blown-out highlights even in backlit conditions, enabling more precise object capture in road environments with large differences in brightness, such as tunnel entrances and exits.

Main Features
■Long-distance recognition delivered by industry-leading 17.42 megapixels
Thanks to the industry’s highest pixel count of 17.42 effective megapixels, the new sensor is capable of high definition capture, extending the object recognition range to greater distances and thereby allowing better detection of road conditions, vehicles, pedestrians and other objects. Early detection of far-away objects while driving helps make automated driving systems safer.
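As a back-of-the-envelope check on what the pixel count buys, the simple pinhole-camera arithmetic below estimates how many pixels a distant object subtends. The focal length, pixel pitch, target size, and distance are illustrative assumptions, not published IMX735 parameters.

# Pinhole-camera estimate of pixels on a distant target.
# All values below are assumptions for illustration, not IMX735 specs.
focal_length_mm = 25.0     # assumed lens focal length
pixel_pitch_um = 2.0       # assumed pixel pitch
target_height_m = 1.7      # e.g. a pedestrian
distance_m = 200.0         # detection distance of interest

# Image height on the sensor = focal_length * object_height / distance
image_height_mm = focal_length_mm * target_height_m / distance_m
pixels_on_target = image_height_mm * 1000.0 / pixel_pitch_um
print(f"~{pixels_on_target:.0f} pixels tall at {distance_m:.0f} m")

Under these assumptions the pedestrian spans roughly 106 pixels at 200 m; a higher pixel count across the same field of view puts proportionally more pixels on the target, which is what extends the recognition range.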

■Horizontal pixel signal output for easier synchronization with mechanical-scanning LiDAR
CMOS image sensors generally read signals from pixels vertically, one line at a time. This product, by contrast, employs a readout method that outputs signals horizontally, one row at a time, making it easier to synchronize with mechanical-scanning LiDAR, which also scans horizontally. The information output from automotive cameras equipped with this product can therefore be integrated with LiDAR information downstream in the system, improving the sensing and recognition capabilities of the automated driving system as a whole. A simplified timing sketch follows below.
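The toy model below shows why this pairing works: each readout line is sampled at a predictable offset from the frame start, so it can be matched to the LiDAR azimuth being swept at that same instant. The line time, sweep rate, and field of view are simplified assumptions, not parameters of the IMX735 or any specific LiDAR.

# Toy model pairing sensor readout lines with a LiDAR's horizontal sweep.
def line_readout_time(line_index, frame_start_s, line_time_s):
    # Each successive readout line is sampled one line-time later.
    return frame_start_s + line_index * line_time_s

def lidar_azimuth_deg(t_s, scan_start_s, sweep_period_s, fov_deg=120.0):
    # Azimuth of a LiDAR sweeping linearly across its FoV (sawtooth scan).
    phase = ((t_s - scan_start_s) % sweep_period_s) / sweep_period_s
    return phase * fov_deg - fov_deg / 2.0

line_time = 10e-6      # assumed 10 us per readout line
sweep_period = 0.05    # assumed 20 Hz LiDAR sweep
for line in (0, 1024, 2048):
    t = line_readout_time(line, 0.0, line_time)
    az = lidar_azimuth_deg(t, 0.0, sweep_period)
    print(f"line {line}: t = {t*1e3:.2f} ms, LiDAR azimuth = {az:+.1f} deg")

Because the camera readout and the LiDAR sweep both advance monotonically in the same direction, each image region and the LiDAR returns for the same angular sector are captured close together in time, which simplifies downstream fusion.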

■Wide dynamic range even during simultaneous use of HDR and LED flicker mitigation
In automotive driving, objects must be precisely detected and recognized even in road environments with large differences in brightness, such as tunnel entrances and exits. Automotive cameras must also suppress LED flicker, even in HDR mode, to deal with the increasing prevalence of LED traffic signals and other LED devices. The proprietary pixel structure and unique exposure method of this product improve saturation illuminance, yielding a wide dynamic range of 106 dB even when HDR and LED flicker mitigation are used simultaneously (in dynamic range priority mode, the range is even wider, at 130 dB). This design also helps reduce motion artifacts when capturing moving subjects.
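For a sense of scale, dynamic range in decibels maps to a linear brightness ratio via DR(dB) = 20 * log10(brightest / darkest resolvable signal); the snippet below converts the quoted figures.

# Convert dynamic-range figures from dB to linear contrast ratios.
for dr_db in (106, 130):
    ratio = 10 ** (dr_db / 20)
    print(f"{dr_db} dB ~= {ratio:,.0f} : 1")
# Roughly 200,000:1 and 3,160,000:1, i.e. the span between the darkest
# and brightest scene content captured in a single frame.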


■Compliant with standards required for automotive applications
The product will be qualified under AEC-Q100 Grade 2 automotive electronic component reliability tests by the time of mass production. SSS has also introduced a development process compliant with the ISO 26262 road-vehicle functional safety standard, at automotive safety integrity level ASIL-B(D). This contributes to improved automotive camera system reliability.

■Cybersecurity required for automotive applications (optional)
The product can support cybersecurity features such as camera authentication via a public-key algorithm to confirm CMOS image sensor authenticity, image authentication to detect any tampering with acquired images, and communication authentication to detect any tampering with control communications.
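As a generic illustration of the image-authentication idea (signing image data so any downstream tampering is detectable), the Python sketch below hashes a frame and signs the digest with an Ed25519 key, which a downstream unit then verifies. It uses the third-party 'cryptography' package; the key handling and frame bytes are placeholders, and Sony's actual scheme and key provisioning are not described in the release.

import hashlib
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# Stand-in for a key pair provisioned in the sensor at manufacture.
sensor_key = Ed25519PrivateKey.generate()
ecu_verify_key = sensor_key.public_key()   # held by the downstream ECU

frame = b"example raw image data"          # placeholder frame payload
signature = sensor_key.sign(hashlib.sha256(frame).digest())

# Downstream: recompute the hash and check the signature.
try:
    ecu_verify_key.verify(signature, hashlib.sha256(frame).digest())
    print("frame authentic")
except InvalidSignature:
    print("frame tampered")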




Go to the original article...

Photonis is now ExoSens

Image Sensors World        Go to the original article...

PHOTONIS GROUP BECOMES EXOSENS
PRESS RELEASE
MÉRIGNAC – SEPTEMBER 20th 2023
 
PHOTONIS GROUP, a global leader in highly differentiated technologies for detection, imaging, and light, held by Groupe HLD since 2021, is undergoing a deep transformation, developing adjacent technologies and expanding into particle-detection markets. Following that strategy, the group has acquired four companies (Xenics, Proxivision, Telops, and Elmul) since December 2022. The worldwide leader in image intensifier tubes, the company has diversified its technology and product portfolios with the ambition of becoming the worldwide leader in detection and imaging technologies. To mark that strategy, PHOTONIS GROUP becomes EXOSENS.

Offering electro-optic devices covering the full optical spectrum from UV to LWIR, in addition to electron, ion, neutron, and gamma detectors, EXOSENS addresses four markets: life science, industrial control, nuclear energy, and defense. The company benefits from positive dynamics in each of these verticals, such as growing demand for enhanced diagnostics, factory automation, the deployment of small modular reactors, and increasing defense budgets.

Jean-Hubert Vial, partner at Groupe HLD, said: “It’s an important step for Photonis Group. By becoming EXOSENS, the company clearly anchors its position as a high-end technology provider serving fast-growing commercial and defense markets, for greater sustainability and safety.”
Jérôme Cerisier, CEO of the new group, said: “EXOSENS means ‘to detect, to see, and to give meaning to what is beyond’. It perfectly reflects what we do: we reveal the invisible, we sense the world to make it safer. With EXOSENS, we aim to share our common values throughout the whole organization, to integrate new companies and colleagues, and to always offer high-performance products that satisfy our customers.”

Operationally, the legal entities will keep their existing names. The four product brands, Photonis (intensified products, nuclear and mass-spectrometry detectors), Xenics (infrared sensors and cameras), Elmul (electron detectors), and Telops (hyperspectral and cooled infrared cameras), will continue to be deployed and promoted in their markets.
 
ABOUT EXOSENS:
 
Accompanied by Groupe HLD since 2021, EXOSENS is a high-tech company with more than 85 years of experience in the innovation, development, manufacture, and sale of technologies in the field of particle and photo detection and imaging. Today, it offers its customers detectors and detection solutions: its travelling wave tubes, advanced cameras, neutron and gamma detectors, instrument detectors, and light intensifier tubes allow EXOSENS to respond to complex problems in extremely demanding environments with tailor-made solutions. Thanks to sustained, ongoing investment, EXOSENS is internationally recognized as a major innovator in optoelectronics, with production and R&D carried out at 9 sites in Europe and North America and over 1,500 employees.

Go to the original article...


International Image Sensors Workshop 2023 Papers are Available Online

Image Sensors World        Go to the original article...

Papers from the recent International Image Sensor Workshop (IISW) 2023, held in Crieff, Scotland, are now available on the International Image Sensor Society website:

Link: https://imagesensors.org/2023-papers/

An exciting lineup of invited talks, posters, and papers across nine sessions; go check it out!

Session 1: 3D Stacking and Small Pixels
Session 2: Noise
Session 3: Pixel Design & Process Technology
Session 4: HDR and Automotive
Session 5: Smart and Event-based Imagers
Session 6: Beyond Visible & Scientific Imaging
Session 7: Speciality and New Applications
Session 8: SPAD Devices
Session 9: Time of Flight

Go to the original article...
