IR Detection Workshop June 7-9, 2023 in Toulouse – Final Program and Registration Available

CNES, ESA, LABEX FOCUS, ONERA, CEA-LETI, AIRBUS DEFENCE & SPACE, and THALES ALENIA SPACE are pleased to invite you to the “Infrared detection for space application” workshop, to be held in Toulouse from June 7th to 9th, 2023.
 
Registration deadline is June 1st, 2023.
 
Workshop registration link: https://site.evenium.net/2yp0cj0h

PCH-EM Algorithm for DSERN characterization

Hendrickson et al. have posted two new pre-prints on deep sub-electron read noise (DSERN) characterization. Their new algorithm, PCH-EM, extracts key performance parameters of sensors with sub-electron read noise through a custom implementation of the Expectation-Maximization (EM) algorithm, and shows a dramatic improvement over the traditional Photon Transfer (PT) method in the sub-electron noise regime. The authors also have extensions and improvements of the method coming soon.

The first pre-print titled "Photon Counting Histogram Expectation Maximization Algorithm for Characterization of Deep Sub-Electron Read Noise Sensors" presents the theory behind their approach.

Abstract: We develop a novel algorithm for characterizing Deep Sub-Electron Read Noise (DSERN) image sensors. This algorithm is able to simultaneously compute maximum likelihood estimates of quanta exposure, conversion gain, bias, and read noise of DSERN pixels from a single sample of data with less uncertainty than the traditional photon transfer method. Methods for estimating the starting point of the algorithm are also provided to allow for automated analysis. Demonstration through Monte Carlo numerical experiments are carried out to show the effectiveness of the proposed technique. In support of the reproducible research effort, all of the simulation and analysis tools developed are available on the MathWorks file exchange.

The authors have released their code here: https://www.mathworks.com/matlabcentral/fileexchange/121343-one-sample-pch-em-algorithm
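
To make the model concrete, here is a minimal EM iteration for the underlying Poisson-Gaussian mixture (the photon counting distribution) in Python. It is a from-scratch sketch of the general technique with our own function and variable names (pch_em, H, g, mu, sigma), not the authors' MATLAB implementation, and it omits their starting-point estimation and convergence logic.

```python
import numpy as np
from scipy.special import gammaln
from scipy.stats import norm

def pch_em(x, H, g, mu, sigma, K=60, iters=100):
    """EM updates for the PCD model x = mu + g*k + N(0, sigma^2),
    k ~ Poisson(H): H = quanta exposure [e-], g = conversion gain [DN/e-],
    mu = bias [DN], sigma = read noise [DN]. Sketch only."""
    x = np.asarray(x, float)[:, None]            # (N, 1) pixel samples
    k = np.arange(K + 1)[None, :]                # (1, K+1) electron counts
    for _ in range(iters):
        # E-step: responsibilities P(k | x_n, theta) over the truncated mixture
        logp = (k * np.log(H) - H - gammaln(k + 1)
                + norm.logpdf(x, loc=mu + g * k, scale=sigma))
        logp -= logp.max(axis=1, keepdims=True)
        gam = np.exp(logp)
        gam /= gam.sum(axis=1, keepdims=True)
        # M-step: closed-form maximum-likelihood updates
        H = (gam * k).sum(axis=1).mean()         # posterior mean electron count
        w = gam.ravel()                          # weights for (x_n, k) pairs
        kk = np.broadcast_to(k, gam.shape).ravel()
        xx = np.broadcast_to(x, gam.shape).ravel()
        kbar, xbar = np.average(kk, weights=w), np.average(xx, weights=w)
        g = (np.average((kk - kbar) * (xx - xbar), weights=w)
             / np.average((kk - kbar) ** 2, weights=w))   # weighted regression
        mu = xbar - g * kbar
        sigma = np.sqrt(np.average((xx - mu - g * kk) ** 2, weights=w))
    return H, g, mu, sigma
```

The E-step scores every sample against every candidate electron count; the M-step then refits the four parameters in closed form, which is what keeps the per-pixel iteration cheap.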


 

 

The second pre-print titled "Experimental Verification of PCH-EM Algorithm for Characterizing DSERN Image Sensors" presents an application of the PCH-EM algorithm to quanta image sensors.

Abstract: The Photon Counting Histogram Expectation Maximization (PCH-EM) algorithm has recently been reported as a candidate method for the characterization of Deep Sub-Electron Read Noise (DSERN) image sensors. This work describes a comprehensive demonstration of the PCH-EM algorithm applied to a DSERN capable quanta image sensor. The results show that PCH-EM is able to characterize DSERN pixels for a large span of quanta exposure and read noise values. The per-pixel characterization results of the sensor are combined with the proposed Photon Counting Distribution (PCD) model to demonstrate the ability of PCH-EM to predict the ensemble distribution of the device. The agreement between experimental observations and model predictions demonstrates both the applicability of the PCD model in the DSERN regime as well as the ability of the PCH-EM algorithm to accurately estimate the underlying model parameters.


SWIR event cameras from SCD.USA

SCD.USA has released an event-based SWIR sensor/camera. Official press release: https://scdusa-ir.com/articles/advanced-multi-function-ingaas-detectors-for-swir/

IMV Europe: "Defence imaging goes next-gen with event-based SWIR camera"
https://www.imveurope.com/content/defence-imaging-goes-next-gen-event-based-swir-camera

Semi Conductor Devices (SCD), a manufacturer of uncooled infrared detectors and high-power laser diodes, has launched a new SWIR detector, the Swift-El.

The Swift-El is designed as a very low Size, Weight and Power (SWaP), low-cost, VGA-format detector with a 10-micron pixel pitch.

According to SCD, it is the world's first SWIR detector integrating event-based imaging capabilities, making it a 'revolutionary' addition to the defence and industrial sectors.

Its advanced FPA-level detection capabilities enable tactical forces to detect multiple laser sources, laser spots, Hostile Fire Indication (HFI), and much more.

Its ROIC imager technology offers two parallel video channels in one sensor: a standard SWIR imaging video channel and a very high frame rate event imaging channel.

The Swift-El offers SWIR imaging that supports day and low-light scenarios, enabling 24/7 situational awareness, better atmospheric penetration, and a low-cost SWIR image for tactical applications. Furthermore, its event-based imaging channel provides advanced capabilities, such as laser event spot detections, multi-laser spot LST capabilities, and SWIR event-based imaging, broadening the scope of target detection and classification.

The Swift-El also opens up new capabilities for machine vision applications in fields such as production line sorting machines, smart agriculture, and more, where analysis of high-level SWIR images is required for automatic machine decision-making. The Swift-El enables a full frame rate of more than 1,200 Hz, which is essential for machine vision and machine AI algorithms.

Kobi Zaushnizer, CEO of SCD, elaborates on the company's latest innovation: "SCD is proud to launch the Swift-El - the world's first SWIR imager to enable event-based imaging. This new product is part of our value to be ‘always a step ahead’ and our promise to our customers to ‘be the first to see’. The Swift-El event-based imaging enables the next generation of AI-based systems, offering the multi-domain battlespace multi-spectral infrared imaging for better situational awareness, advanced automatic target detection and classification, and target handoff across platforms and forces, while increasing warrior lethality. It also enables HFI detection, and all of this at a price point that makes it possible for SWIR cameras to be integrated into high-distribution applications, such as weapon sights and clip-ons, drones, man-portable target designators, and more. The advanced detector is already being delivered to initial customers around the world, and we expect to see a significant production ramp-up in the coming months."
 
 
 
The MIRA 02Y-E shortwave-infrared (SWIR) camera delivers a fast-imaging frame rate up to 1600 fps. Its readout integrated circuit (ROIC) enables an independent second stream of neuromorphic imaging for event detection, reducing the amount of data communication while tracking what changed in the scene. Ideal for advanced, low SWaP-C applications, the SWIR camera can be integrated into various air platforms, missiles, vehicles, and handheld devices. 


Lynred IR’s new industrial site

News from: https://ala.associates/funding/lynred-breaks-ground-on-new-e85m-industrial-site-for-infrared-technologies/

Also from Yole: https://www.yolegroup.com/industry-news/lynred-breaks-ground-on-new-e85m-industrial-site-for-infrared-technologies/

Lynred breaks ground on new €85M industrial site for infrared technologies

Named Campus, Lynred’s new state-of-the-art industrial facility will meet growing market demand for advanced infrared technologies, notably for the automotive sector, whilst bolstering French industrial sovereignty in the field.
 
Company’s production capacity set to undergo 50% increase by 2025; 100% by 2030
 
Grenoble, France, May 10, 2023 – Lynred, a leading global provider of high-quality infrared detectors for the aerospace, defense and commercial markets, today announces breaking ground on its new €85 million ($93.7M) industrial site to produce state-of-the-art infrared technologies. This is the biggest construction investment that the company has undertaken since it began manufacturing in 1986.
 
The project is financed by loans from the CIC bank and Bpifrance.
 
Lynred will double its current cleanroom footprint, totaling 8,200 m2 (88,264 ft2), primarily to meet two strategic objectives:
• Obtain an optimal cleanroom cleanliness classification for its new high-performance products (hybrid detectors)
• Increase the production capacity for its more compact industrial products (bolometers) used in multiple fields, including the automotive industry
This substantial investment will consolidate Lynred’s positioning as the European market leader in infrared detection. It enables the company to play a key role within the European defense industrial and technological base, which is closely tied to strengthening French and European forces, for whom infrared detection is hugely important. With this, Lynred takes a step up in responding to the French government’s call to reorient European industry towards a ‘rearmament economy’.
 
To mark the ground breaking on May 10, Jean-François Delepau, chairman of Lynred, planted a holm oak tree.
 
“I am delighted to see our state-of-the-art industrial site come to life, consolidating our position as the second largest infrared detector manufacturer in the world. This will enable us to respond to growing market demand for next-generation infrared technologies, including in the automotive sector. It will allow us to contribute to bolstering France’s industrial sovereignty and, more generally, to improve our overall industrial performance. Above all, I wish to thank the Lynred teams involved in this major undertaking, as well as all our partners who have supported us, in particular our shareholders, Thales and Safran. Lynred is embarking on a new strategic pathway, both in terms of technology and dynamic growth,” said Mr Delepau.
 
The buildings are due for completion in the first quarter of 2025 and the site will be fully operational by the following October. This state-of-the-art industrial facility will comprise 8,200 m2 (88,264 ft2) of interconnected cleanrooms (twice the current surface area), 3,400 m2 (36,600 ft2) of laboratories, a 2,300 m2 (24,756 ft2) logistics area, and a tertiary and technical area measuring 10,800 m2 (116,250 ft2).
 
Lynred is looking to increase its production capacity by 50% by 2025, in particular for its bolometer products, with a view to doubling capacity by 2030.
 
With these new cleanrooms the company will house all of its French production lines in a single location. This will enable synergies amongst core competencies and optimize production flows.
 
The new buildings will be located on the current Lynred site in Veurey-Voroize, situated within the Grenoble area. They have been designed to ensure optimized energy management and environmental performance: even with 13,600 m2 (146,400 ft2) under construction, the volume of permeable surface will increase. The company will decrease its carbon footprint by 33% and will install 1,800 m2 (19,375 ft2) of solar panels. Moreover, the site will accommodate an additional 320 trees and more than 100 charging stations for electric vehicles (cars and bicycles) will be put in place, with more cycle parking added.
 
About Lynred
Lynred and its subsidiaries, Lynred USA and Lynred Asia-Pacific, are global leaders in designing and manufacturing high quality infrared technologies for aerospace, defense and commercial markets. It has a vast portfolio of infrared detectors that covers the entire electromagnetic spectrum from near to very far infrared. The Group’s products are at the center of multiple military programs and applications. Its IR detectors are the key component of many top brands in commercial thermal imaging equipment sold across Europe, Asia and North America. Lynred is the leading European manufacturer for IR detectors deployed in space.
www.lynred.com


ICCP 2023 Call for Demos and Posters

The call for poster and demo submissions for the IEEE International Conference on Computational Photography (ICCP 2023) is now open; it is available on the conference website (linked below).

Whereas ICCP papers must describe original research, the posters and demos give an opportunity to showcase previously published or yet-to-be-published work to a broader community.

The poster track is non-exclusive, and papers submitted to the paper or abstract tracks of ICCP are welcome to present a poster as well.

ICCP is at the rich intersection of optics, graphics, imaging, vision and design. The posters and demos provide an excellent and exciting opportunity for interaction and cross-talk between research communities.

The deadline for posters/demos is June 15, 2023.

Please submit your posters/demos here: https://forms.gle/VdMMEheX1X3ucQG47.

Please refer to the ICCP 2023 website for more information: https://iccp2023.iccp-conference.org/call-for-posters-demos/


Review article on figures of merit of 2D photodetectors

A review article in Nature Communications by Wang et al. (Shanghai Institute of Technical Physics) discusses techniques for characterizing 2D photodetectors.

Full paper: https://www.nature.com/articles/s41467-023-37635-1

Abstract: Photodetectors based on two-dimensional (2D) materials have been the focus of intensive research and development over the past decade. However, a gap has long persisted between fundamental research and mature applications. One of the main reasons behind this gap has been the lack of a practical and unified approach for the characterization of their figures of merit, which should be compatible with the traditional performance evaluation system of photodetectors. This is essential to determine the degree of compatibility of laboratory prototypes with industrial technologies. Here we propose general guidelines for the characterization of the figures of merit of 2D photodetectors and analyze common situations when the specific detectivity, responsivity, dark current, and speed can be misestimated. Our guidelines should help improve the standardization and industrial compatibility of 2D photodetectors. 
Device effective area

a Photoconductive photodetector. b Planar junction photodetector. c, d Vertical junction photodetectors with zero and reverse bias, respectively. e Focal plane photodetector. The dashed blue lines in a–e are suggested accurate effective areas. The dashed orange lines in b, d, and e are potential inaccurate effective areas for respective types. f Field intensity of the Gaussian beam with the beam waist w0 = 2.66 μm, here BP represents black phosphorus. g Wave optics simulation result of the electric field distribution at the upper surface of the device with plane wave injected. h Calculated absorption with the Gaussian beam with the beam waist w0 = 2.66 μm multiplying the wave optics simulation profile shown in (g).

 

Responsivity

a Monochromatic laser source measurement system, where the laser spot intensity follows the Gaussian distribution. b Relative intensity of the edge of the spot under the researcher’s estimation. The inset shows three spots with the same beam waist and color limit, differing only in beam intensity; the estimated radius of the spot size shows vast differences. c Laser spot size and power calibration measurement system. d Photon composition of blackbody radiation source, and the radiation distribution in accordance with Planck’s law. e Typical response spectrum of photon detector and thermal detector. The inset shows a diagram of the blackbody measurement system. f Schematic diagram of FTIR measurement system.
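
The spot-size pitfall in panel (b) has a simple quantitative fix: the power assigned to the device should be the actual overlap of the Gaussian beam with the device area, not an eyeballed spot radius. A small illustrative helper, assuming a centered beam and a rectangular device (naming is ours):

```python
import numpy as np
from scipy.special import erf

def power_on_device(P_total, w0, half_w, half_h):
    """Power of a centered Gaussian beam (1/e^2 radius w0) falling on a
    rectangular device of size (2*half_w) x (2*half_h), all in the same
    length units. Divide the photocurrent by this, not by a guessed spot
    fraction, when computing responsivity R = I_ph / P."""
    fx = erf(np.sqrt(2.0) * half_w / w0)   # 1-D Gaussian fraction along x
    fy = erf(np.sqrt(2.0) * half_h / w0)   # 1-D Gaussian fraction along y
    return P_total * fx * fy

# Example: a 5 x 5 um device centered in a w0 = 2.66 um beam (the waist used
# in the figure) collects only ~88% of the beam power:
# power_on_device(1.0, 2.66, 2.5, 2.5) ≈ 0.88
```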


Dark current

a Typical dark current mechanisms; the dashed lines, filled and empty circles, and arrows represent the quasi-Fermi levels, electrons, holes, and carrier transport directions. b Characterization and analysis of dark current for UV-VIS photodetectors. The solid red line is the Id–V characteristic curve measured with a typical VIS photodetector. The green, dark blue, orange, and light blue dashed lines represent the fitted current components of generation-recombination, band-to-band tunneling, diffusion, and trap-assisted tunneling from the analytic model. c Dominant dark current for typical photovoltaic photodetectors at different temperatures. d Characterization and analysis of dynamic resistance for infrared photodetectors. The solid red line is the Rd–V characteristic curve measured with a typical infrared photodetector. The orange, green, light blue, and dark blue dashed lines represent the fitted current components of diffusion, generation-recombination, trap-assisted tunneling, and band-to-band tunneling from the analytic model. e Dynamic resistance of typical photovoltaic photodetectors at different temperatures.
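
As a sketch of the analytic decomposition the caption describes, the two thermally activated components can be written in their textbook diode forms; the tunneling terms shown in the figure are omitted, and the saturation currents are fit parameters (naming is ours):

```python
import numpy as np

Q = 1.602e-19    # electron charge [C]
KB = 1.381e-23   # Boltzmann constant [J/K]

def dark_current(V, T, I_diff, I_gr):
    """Diffusion (ideality 1) plus generation-recombination (ideality 2)
    dark current at bias V [V] and temperature T [K]. Fitting a measured
    Id-V curve against such a sum identifies the dominant mechanism."""
    vt = KB * T / Q                                   # thermal voltage [V]
    return (I_diff * (np.exp(V / vt) - 1.0)
            + I_gr * (np.exp(V / (2.0 * vt)) - 1.0))
```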


Other noise sources


a Noise and responsivity characteristics for photodetectors with different response bandwidths for single detection (the blue line represents the typical responsivity curve of photodetectors of high response bandwidth, the green line represents the typical responsivity curve of photodetectors of low response bandwidth, and the red line represents the typical noise characteristics; the vertical dashed lines represent the −3 dB bandwidth for photodetectors with high and low response bandwidth). b Overestimation of specific detectivity based on noise characteristics for single detection. The solid and dashed lines present the specific detectivity calculated with $D^* = R\sqrt{A_d \Delta f}/i_n$ from the measured noise and from the estimated thermal and shot noise (ignoring the 1/f noise and g-r noise). c Noise and responsivity characteristics for photodetectors of imaging detection. d Overestimation of specific detectivity based on noise characteristics for imaging detection. The solid and dashed lines present the specific detectivity calculated with $D^* = R\sqrt{A_d f_B}/\sqrt{\int_0^{f_B} i_n^2\,df}$ from the measured noise and from the estimated thermal and shot noise (ignoring the 1/f noise and g-r noise).
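
Restating the caption's single-detection definition as code, together with the shot-plus-thermal noise estimate whose use the caption warns against (a direct transcription of the formulas above; naming is ours):

```python
import numpy as np

Q, KB = 1.602e-19, 1.381e-23   # electron charge [C], Boltzmann constant [J/K]

def d_star(R, A_d, delta_f, i_n):
    """Specific detectivity D* = R*sqrt(A_d*delta_f)/i_n in Jones
    (cm Hz^0.5 / W): R [A/W], A_d [cm^2], delta_f [Hz], i_n [A]."""
    return R * np.sqrt(A_d * delta_f) / i_n

def i_n_shot_thermal(I_dark, R_dyn, T, delta_f):
    """Noise current from shot and thermal (Johnson) terms only. Feeding
    this into d_star() instead of the *measured* noise reproduces the
    overestimation in panels (b) and (d) whenever 1/f or g-r noise
    actually dominates."""
    return np.sqrt((2.0 * Q * I_dark + 4.0 * KB * T / R_dyn) * delta_f)
```

For imaging detection, the caption's second formula replaces i_n with the noise integrated over the imaging bandwidth f_B, which penalizes detectors whose 1/f noise is strong at low frequencies.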

 

Time parameters


a Calculated fall time when the response does not reach a stable value, which is inaccurate; τf′ is the inaccurate calculated fall time, τf the accurate calculated fall time. (The blue line represents the square signal curve; the yellow line represents the typical response curve of 2D photodetectors.) b Response time measurement of a photodetector may not reach a stable value under a pulse signal, which will lead to an inaccurate result. The inset shows the pulse signal. τr is the inaccurate calculated rise time. c Variation of photocurrent and responsivity of photoconductive photodetectors with the incident optical power density [14]. d Rise and fall response times of a photodetector should be calculated from a complete periodic signal. e Typical −3 dB bandwidth response curve of a photodetector, where R0 represents the stable responsivity value and fc represents the −3 dB cutoff frequency. f Gain-bandwidth product of various photodetectors, where photo-FET is photo-field-effect transistor and PVFET is photovoltage field-effect transistor [14].
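
For the pitfalls in panels (a), (b), and (d), here is how the 10-90% rise time is typically extracted, with the settling caveat made explicit; the single-pole relations in the trailing comment connect panels (d) and (e) (naming is ours):

```python
import numpy as np

def rise_time_10_90(t, y):
    """10-90% rise time from a sampled step response. The trace must settle
    to a stable value -- exactly the caption's warning -- otherwise the
    normalization below, and hence the result, is wrong."""
    y = (y - y[0]) / (y[-1] - y[0])     # normalize settled response to [0, 1]
    t10 = np.interp(0.10, y, t)         # assumes a monotonic rising edge
    t90 = np.interp(0.90, y, t)
    return t90 - t10

# For a single-pole detector, t_r = 2.2 * tau and f_c = 1 / (2 * pi * tau),
# so the -3 dB bandwidth in panel (e) follows from f_c ≈ 0.35 / t_r.
```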


Sony announces 2022 earnings and 2023 forecast

Link: https://www.sony.com/en/SonyInfo/IR/library/presen/er/


Videos du Jour [onsemi, Sony, Melexis]

CMOS Image Sensor Layers at a Glance

The onsemi CMOS Image Sensor Wafer consists of the following layers:
• Microlens Array—Small lenses that collect and focus light onto light-sensitive areas of the sensor.
• Color Filter Array (CFA)—Mosaic of tiny color filters placed over the pixel sensors of an image sensor to capture color information.
• Photodiode—Semiconductor that converts light into an electrical current.
• Pixel Transistors—Transistors that provide gain or buffering of electrical charge from the photodiode.
• Bond Layer—Connects the Active Pixel Array to the ASIC layer.
• ASIC—Logic layer for features such as error correction, memory for multi-exposures, cores for cybersecurity, hardware blocks for functional safety, and high-speed I/O.



tinyML Summit 2023: Deploying Visual AI Solutions in the Retail Industry

Mark HANSON, VP of Technology and Business Innovation, Sony Semiconductor Solutions of America
An image sensor with AI-processing capability is a novel architecture that is pushing vision AI closer to the edge to enable applications at scale. Today many AI applications stall in the PoC stage and never reach commercial deployment to solve real-world problems because existing systems lack simplicity, flexibility, affordability, and commercial-grade reliability. We’ll investigate why the retail industry struggles to keep track of stock on its retail shelves while relying on retail employees to manually monitor stock and how our (AITRIOS) vision AI application for on-shelf-availability can eliminate complexity and inefficiency at scale.

 


Melexis: Automotive in-cabin face recognition and anti-spoofing AI using 3D time-of-flight camera

In this demo, we demonstrate in-cabin face recognition and anti-spoofing AI using a 3D time-of-flight camera. Please contact us for more information.


Paper on 8-tap ToF Sensor

Miyazawa et al. from Shizuoka University in Japan recently published an article titled "A Time-of-Flight Image Sensor Using 8-Tap P-N Junction Demodulator Pixels" in the MDPI Sensors journal.

[Open access: https://www.mdpi.com/1424-8220/23/8/3987]

Abstract:
This paper presents a time-of-flight image sensor based on 8-Tap P-N junction demodulator (PND) pixels, which is designed for hybrid-type short-pulse (SP)-based ToF measurements under strong ambient light. The 8-tap demodulator implemented with multiple p-n junctions used for modulating the electric potential to transfer photoelectrons to eight charge-sensing nodes and charge drains has an advantage of high-speed demodulation in large photosensitive areas. The ToF image sensor implemented using 0.11 µm CIS technology, consisting of an 120 (H) × 60 (V) image array of the 8-tap PND pixels, successfully works with eight consecutive time-gating windows with the gating width of 10 ns and demonstrates for the first time that long-range (>10 m) ToF measurements under high ambient light are realized using single-frame signals only, which is essential for motion-artifact-free ToF measurements. This paper also presents an improved depth-adaptive time-gating-number assignment (DATA) technique for extending the depth range while having ambient-light canceling capability and a nonlinearity error correction technique. By applying these techniques to the implemented image sensor chip, hybrid-type single-frame ToF measurements with depth precision of maximally 16.4 cm (1.4% of the maximum range) and the maximum non-linearity error of 0.6% for the full-scale depth range of 1.0–11.5 m and operations under direct-sunlight-level ambient light (80 klux) have been realized. The depth linearity achieved in this work is 2.5 times better than that of the state-of-the-art 4-tap hybrid-type ToF image sensor.
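
As background for the gate-timing figures below, here is a loose single-pixel sketch of generic short-pulse gated ToF depth estimation with ambient subtraction, following the Figure 4b arrangement in which gates G1–G3 sample only ambient light. It is not the paper's DATA algorithm, and the function and variable names are ours:

```python
import numpy as np

C = 299_792_458.0  # speed of light [m/s]

def depth_from_gates(Q, T0=10e-9, t_offset=0.0):
    """Q: the 8 gate charges of one pixel; T0: gate width [s] (10 ns in the
    paper); t_offset: delay of the first signal gate relative to the laser
    pulse. Returns an estimated distance [m]."""
    Q = np.asarray(Q, float)
    ambient = Q[:3].mean()                # G1-G3: ambient-only gates (Fig. 4b)
    S = np.clip(Q - ambient, 0.0, None)   # ambient-cancelled signal
    i = int(np.argmax(S[:-1] + S[1:]))    # adjacent gate pair with most signal
    frac = S[i + 1] / max(S[i] + S[i + 1], 1e-12)  # sub-gate delay of the echo
    t_flight = t_offset + (i + frac) * T0
    return 0.5 * C * t_flight             # halve the round-trip time

# With T0 = 10 ns each gate spans c*T0/2 = 1.5 m, so eight gates cover ~12 m,
# in line with the paper's 1.0-11.5 m full-scale depth range.
```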


Figure 1. Structure and principle of the two-tap p-n junction demodulator (PND): (a) Top view; (b) Cross-sectional view (X1–X1’); (c) Cross-sectional view (X2–X2’); (d) Potential diagram at the channel (X1–X1’); (e) Potential diagram at Si surface (X2–X2’).


Figure 2. 8-tap demodulation pixel and the operations: (a) Top view of the 8-tap PND; (b) equivalent pixel readout circuits.


Figure 3. 3D device simulation results of the 8-tap PND: (a) X-Y 2D potential plot and carrier traces to transfer to G6; (b) X-Y 2D potential plot and carrier traces to transfer to GD; (c) demodulator top view; (d) 1D potential plot (A–A’) for carrier transfer to floating diffusions, FD6 and FD2; (e) 1D potential plot (B–B’) for carrier transferring to a drain through GD only (red line) and that for carrier transferring to a drain through GD and GDO (black line).


Figure 4. Gate timing and its correspondence to the depth range to be measured: (a) Gate timing when all the gates are activated in every cycle and its correspondence to the distance profile of the back-reflected light intensity; (b) Gate timing when G4–G8 are activated for signal light sampling and G1–G3 are activated for ambient light sampling.


Figure 5. Example of the modified DATA timing diagram for cancelling ambient light.



Figure 6. Chip micrograph.



Figure 7. Response of the 8-tap outputs to the light pulse delay. (a) Response to Short Pulse (940 nm, T0 = 10 ns). (b) Response to Short Pulse (T0 = 10 ns, Normalized). (c) Response to Very Short Pulse (FWHM = 69 ps, 851 nm, Normalized). (d) Time Derivative of (c) by The Delay Time (Normalized). (e) FWHM of The Pixel Response to Very Short Pulse (FWHM = 69 ps) Measured with (d).



Figure 11. Depth image (1.0 m to 11.5 m) while moving a reflector board.



Another article on Panasonic’s organic image sensor

PetaPixel: https://petapixel.com/2023/04/11/panasonics-decade-old-organic-cmos-sensor-is-still-years-away/

Panasonic’s Decade-Old Organic CMOS Sensor is Still Years Away

As a quick reminder, Panasonic's patented technology relies on an organic thin-film photo-conversion material in lieu of the conventional technique where a silicon photodiode converts light into electrical charge.

Some excerpts from the article are below.

 

... it has been nearly 10 years since the company first announced it was working on this new sensor and in that time, a lot has changed. The previously exciting low light capabilities have since been realized by other sensors...



[In an updated announcement last year, Panasonic suggested ...] 8K resolution while retaining those dynamic range promises, and would do so at high framerates. More recently, Panasonic explained that the sensor would also feature what is known as “reduced crosstalk,” which basically means that the red, green, and blue pixels of the sensor collect only their intended color, and that light, regardless of type and color cast, won’t spill across each pixel. This results in better color reproduction.
...

Basically, it’s very difficult to get excited about Panasonic’s organic CMOS, and that would be the case even if it was coming to market this year.
...

There are those who have been saying Sigma’s Foveon sensor is stuck in “development hell,” but Panasonic easily has it beat with its organic CMOS. 


NEC develops carbon nanotube-based IR sensor

From StatNano: https://statnano.com/news/72257/NEC-Develops-the-World's-First-Highly-Sensitive-Uncooled-Infrared-Image-Sensor-Utilizing-Carbon-Nanotubes

 

NEC Develops the World's First Highly Sensitive Uncooled Infrared Image Sensor Utilizing Carbon Nanotubes

NEC Corporation has succeeded in developing the world's first high-sensitivity uncooled infrared image sensor that uses high-purity semiconducting carbon nanotubes (CNTs) in the infrared detection area. This was accomplished using NEC’s proprietary extraction technology. NEC will work toward the practical application of this image sensor in 2025.


 

Infrared image sensors convert infrared rays into electrical signals to acquire necessary information, and can detect infrared rays emitted from people and objects even in the dark. Therefore, infrared image sensors are utilized in various fields to provide a safe and secure social infrastructure, such as night vision to support automobiles driving in the darkness, aircraft navigation support systems and security cameras.

There are two types of infrared image sensors, the "cooled type," which operates at extremely low temperatures, and the "uncooled type," which operates near room temperature. The cooled type is highly sensitive and responsive, but requires a cooler, which is large, expensive, consumes a great deal of electricity, and requires regular maintenance. On the other hand, the uncooled type does not require a cooler, enabling it to be compact, inexpensive, and to consume low power, but it has the issues of inferior sensitivity and resolution compared to the cooled type.

(Left) Electron micrograph and image of single-walled CNTs, (Right) Atomic microscope image of a high-purity semiconducting CNT film.


(Left) Device structure, (Right) Photograph of CNT infrared array device.

In 1991, NEC discovered CNTs for the first time in the world and is now a leader in research and development related to nanotechnology. In 2018, NEC developed a proprietary technology to extract only semiconducting-type CNTs at high purity from single-walled CNTs that have a mixture of metallic and semiconducting types. NEC then discovered that thin films of semiconducting-type CNTs extracted with this technology have a large temperature coefficient of resistance (TCR) near room temperature.
The newly developed infrared image sensor is the result of these achievements and know-how. NEC applied semiconducting-type CNTs, based on its proprietary technology, that feature a high TCR, which is an important index for high sensitivity. As a result, the new sensor achieves more than three times higher sensitivity than mainstream uncooled infrared image sensors using vanadium oxide or amorphous silicon.
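
The link between TCR and sensitivity follows from the first-order microbolometer model; a minimal sketch (standard textbook physics, with our own naming):

```python
def bolometer_signal(V_bias, R0, tcr, P_abs, G_th):
    """Steady-state microbolometer response: absorbed power P_abs [W] raises
    the membrane temperature by dT = P_abs / G_th (G_th: thermal conductance
    [W/K]); the resistance changes by dR = tcr * R0 * dT (tcr in 1/K); and
    the voltage signal at constant bias is ~I_bias * dR. To first order the
    signal scales linearly with TCR, which is how a higher-TCR material
    translates into higher sensitivity at fixed bias and thermal design."""
    dT = P_abs / G_th           # temperature rise [K]
    dR = tcr * R0 * dT          # resistance change [ohm]
    return (V_bias / R0) * dR   # signal voltage [V]
```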

The new device structure was achieved by combining the thermal separation structure used in uncooled infrared image sensors, the Micro Electro Mechanical Systems (MEMS) device technology used to realize this structure, and the CNT printing and manufacturing technology cultivated over many years for printed transistors, etc. As a result, NEC has succeeded in operating a high-definition uncooled infrared image sensor of 640 x 480 pixels by arraying the components of the structure.

Part of this work was done in collaboration with Japan’s National Institute of Advanced Industrial Science and Technology (AIST). In addition, a part of this achievement was supported by JPJ004596, a security technology research promotion program conducted by Japan’s Acquisition, Technology & Logistics Agency (ATLA).

Going forward, NEC will continue its research and development to further advance infrared image sensor technologies and to realize products and services that can contribute to various fields and areas of society.


Sony AITRIOS wins award at tinyML 2023

Link: https://www.aitrios.sony-semicon.com/en/news/aitrios-to-win-tinyml-awards-2023/ 

At the tinyML Summit 2023, held from March 27 to 29, 2023, Sony Semiconductor Solutions' edge AI sensing platform service, AITRIOS™, won the tinyML Awards 2023 in the "Best Innovative Software Enablement and Tools" category.

The tinyML Summit is a global conference on tiny machine learning (TinyML), held since 2019, where business leaders, engineers, and researchers gather to share information on the latest TinyML technologies and applications. This year the conference was held in San Francisco, United States. This award is presented to an individual, team, or organization that has created innovative software tools or development support tools related to TinyML and has contributed to the evolution of this technology.


Deploying Visual AI Solutions in the Retail Industry
Mark HANSON, VP of Technology and Business Innovation, Sony Semiconductor Solutions of America
An image sensor with AI-processing capability is a novel architecture that is pushing vision AI closer to the edge to enable applications at scale. Today many AI applications stall in the PoC stage and never reach commercial deployment to solve real-world problems because existing systems lack simplicity, flexibility, affordability, and commercial-grade reliability. We’ll investigate why the retail industry struggles to keep track of stock on its retail shelves while relying on retail employees to manually monitor stock and how our (AITRIOS) vision AI application for on-shelf-availability can eliminate complexity and inefficiency at scale.

About AITRIOS:

The name “AITRIOS” consists of the platform keyword “AI” and “Trio S,” meaning, “three S’s.” Through AITRIOS, SSS aims to deliver the three S’s of “Solution,” “Social Value,” and “Sustainability” to the world.

Through this platform, SSS seeks to facilitate development of optimal systems, in which the edge and the cloud function in synergy, to support its partners in popularizing and expanding environmentally conscious sensing solutions using edge AI, and to deliver new value and help solve challenges faced by various industries.



AITRIOS integrates an AI model and application development environment, a marketplace, cloud-based services, and other items required for solution development into a powerful and flexible platform.

SSS, a leading company in image sensors, offers sensor configurations optimized for edge AI, enabling partners to build high-performance and reliable solutions.

AITRIOS is a one-stop B2B* (business to business) platform providing tools and environments that facilitate software and application development and system implementation.

*This service is not currently available to individual customers.


Canon’s 3.2 MP SPAD Camera: Specifications

Canon's 3.2 MP SPAD camera has received some press coverage:

PetaPixel: https://petapixel.com/2023/04/03/canons-new-sensor-enables-long-range-night-vision-capabilities/

YMCinema: https://ymcinema.com/2023/04/03/canon-develops-interchangeable-lens-camera-that-sees-in-the-dark/ 

Unfortunately I have not been able to find a spec sheet. The next best thing for now is to see the 2021 IEDM proceedings paper titled "3.2 Megapixel 3D-Stacked Charge Focusing SPAD for Low-Light Imaging and Depth Sensing" (Morimoto et al., Canon Inc., Japan).  Thanks to Prof. Eric Fossum for pointing this out in a comment on an earlier post!

Abstract:
We present a new generation of scalable photon counting image sensors, featuring zero read noise and 100ps temporal resolution. Newly proposed charge focusing single-photon avalanche diode (SPAD) is employed to resolve critical trade-offs in conventional SPAD pixels. A prototype 3.2 megapixel 3D-stacked backside-illuminated (BSI) image sensor with 1-inch format demonstrates the best-in-class photon detection efficiency (PDE), dark count rate (DCR) and timing jitter performance with the largest array size ever reported in avalanche photodiode (APD)-based image sensors. The proposed technology paves the way to compact and high-definition photon counting image sensors for low-light imaging and 3D time-of-flight sensing.

"ai-CMOS" 9-channel color camera

From Transformative Optics Corporation: https://www.ai-cmos.com/

ai-CMOS sensors solve many of today’s challenges with antiquated CMOS technology, offering unprecedented accuracy, an expanded spectrum, plus 9-channel AI-optimized color. Extending beyond the visible spectrum into near-ultraviolet (NUV) and near-infrared (NIR) greatly expands capabilities for mobile photography, autonomous transport, and machine vision.

With higher sensitivity than Bayer sensors, near-complete color gamut, and expansion beyond visible light to near-infrared and ultraviolet frequencies, ai-CMOS brings a unique multispectral ability to standard cameras.

 


Mobile Photography.
Close the gap between performance and portability, while unlocking new potential for AI-powered apps.
More Contrast: Improved Black and White Modulation Transfer Function (MTF)
Broader Spectrum: Extension to Near Infrared (NIR) and Near Ultraviolet channel
Near-Complete Color Gamut: Improving color accuracy, automated white balance
Enhanced Sensitivity: Twice the Light. Lower light levels, less motion blur, plus twice the signal levels for a myriad of Integrated Signal Processing functions.

Machine Vision.
ai-CMOS offers AI applications richer and more complete data sets for training, object detection, and object classification.
Richer Data: 3x the information over Bayer
AI Optimizations: increased raw data content for feature vectors and 2x the signal strength for Integrated Signal Processing aiding apps like Super-Resolution
More Contrast: Improved Black and White Modulation Transfer Function (MTF)
Broader Spectrum: Extension to Near Infrared (NIR) and Near Ultraviolet channels

Automotive.
ai-CMOS captures more detailed data in low-light conditions, at night, and in poorer weather conditions, like fog and rain.
Spectral Sensitivity: ai-CMOS captures twice the light of current ADAS CMOS technology on the market.
Object Detection: 25% color gamut increase and 3x Feature Vectors from traditional sensors, greatly enhancing object detection and classification.
Autonomous Driving: Better enable autonomous vehicles to navigate more complex environments, and interact with other vehicles and pedestrians.


Sensor Specs.

Resolution: 3000 x 3864
Pixel Size: 8 µm
Sensor Format: 35 mm (dia. 39.3 mm)
Spectral Response: 350 nm to 850 nm
Quantum Efficiency: >90%
Illumination Type: BSI
Frame Rate: 30 fps in HDR
Full Well: >65,000 e-
Gain Mode: HDR and Dual Gain
 

Available in limited quantities in 2023.


SWIR imaging market ‘worth $2.9BN by 2028’

From optics.org news: https://optics.org/news/14/4/15

12 Apr 2023
Yole Intelligence says that the war in Ukraine and tensions over Taiwan will push defense applications beyond prior expectations.

Analysts at France-based Yole Intelligence say the current niche market for short-wave infrared (SWIR) imaging technology will grow rapidly over the next five years, and will be worth $2.9 billion by 2028.

In a new report on the segment, which is currently dominated by applications in defense, research, and industry, Yole’s Alex Clouet suggests that SWIR technology could begin replacing near-infrared (NIR) imagers in high-end smart phones, where the technology is used for secure identification.
Together with higher growth than previously expected in the military arena, plus innovation in key component materials expected to reduce costs, the upshot is expected to be a compound annual growth rate in excess of 40 per cent over the next few years.



Although definitions of SWIR and NIR spectral ranges differ, the term SWIR is often used to refer to wavelengths between 1400 nm and 3000 nm, whereas NIR relates to the 780-1400 nm band.
According to the report, the SWIR imaging market was worth just over $300 million last year, with defense, aerospace, and research applications accounting for more than two-thirds of that total.
“The defense segment will experience higher growth than previously expected, reaching $405 million in 2028 from $228 million in 2022, pulled by geopolitical tensions such as the Ukraine war and tensions around Taiwan and an increasing number of countries becoming interested in SWIR technologies,” Yole says.

The current focus means that defense-oriented players such as Israel’s SCD, Sensors Unlimited, and Teledyne FLIR dominate the scene. But as the technology begins to find use in a larger number of industrial and consumer applications, that is likely to change.

“Many smaller players have significant growth potential, like Sony, or companies making quantum-dot-based cameras, such as SWIR Vision Systems and Emberion, which have a price advantage on high-resolution and extended spectral range products,” Yole stated.

“Newcomers bring new disruptive technologies, like STMicroelectronics, TriEye, or Artilux, to address consumer or automotive markets.”

Emberion, which is a spin-out from Nokia with facilities in Cambridge, UK, uses both colloidal quantum dots and graphene in its devices - claiming improvements in signal-to-noise, breadth of spectral response, and operating temperature.

“Traditional CMOS image sensor suppliers can be game-changers due to their high-volume production capacity and unique design and integration know-how,” observes Yole.
“However, among them, only Sony and STMicroelectronics have already developed SWIR imaging technology - even though others may show signs of interest, such as Samsung and OmniVision.
“The SWIR ecosystem waits for greater interest from these players to accelerate technological and market disruption.”



Material innovation
Nevertheless, the technology is expected to make an impact in consumer goods, with Yole’s figures suggesting the emergence of a significant consumer market over the next five years.
“In 2026, SWIR can start replacing NIR imagers in flagship smart phones for under-display integration of facial recognition modules,” reckons Clouet, adding that the resulting market for complete 3D-sensing modules will just surpass $2 billion by 2028.
Beyond that - and depending on the level of innovation and cost reductions in key components - the technology might end up being integrated into lower-end smart phones and augmented and virtual reality (AR/VR) headsets to improve the performance of tracking cameras, 3D sensing, and outdoor multispectral imaging.

Clouet also sees applications emerging in the automotive sector, where SWIR could provide enhanced vision in low light and adverse weather conditions, as well as 3D sensing capability - although this market would still be in its infancy by 2028.

Among the technological innovations that may lead to more efficient and lower-cost imaging systems, Yole highlights the potential of quantum dots, organic photodiodes, and the germanium-on-silicon material system as some potentially key developments in sensors.
At the optical component level, polymer and metasurface lenses, diffractive optics and optical diffusers, and spectral filters could also contribute to lower costs.

Yole's report, SWIR Imaging 2023, is available now via the company’s website.



SWIR linear array sensor from NIT

Press release from NIT:

 

The NSC1801 line scan sensor was initially designed for imaging linearly moving objects with a high frame rate, high sensitivity, and low noise. Its 7.5 µm pixel is the world's smallest, which helps lower manufacturing costs without increasing the cost of lenses.

Now NIT is pleased to release an updated version of the NSC1801, in which all key parameters have been reworked and overall performance improved. The NSC1801 is currently installed in NIT's Lisa SWIR cameras.

The NSC1801 fully benefits from NIT's new manufacturing facility, installed in our brand-new clean room, which includes our high-yield hybridization process. Our new facility allows us to cover the full design and manufacturing cycle of these sensors in volume, with a level of quality never achieved before.

Moreover, the NSC1801 was designed with the objective of addressing new markets that could not previously invest in expensive and difficult-to-use SWIR cameras. The result is that our Lisa SWIR camera based on the NSC1801 offers the lowest price point on the market, even in unit quantities.

Typical applications for the NSC1801 are waste sorting, semiconductor and photovoltaic cell inspection, food and vegetable inspection, and pharmaceutical inspection.


Features and benefits:

• Pixel size 7.5 × 7.5 µm: smallest pixel size in the industry, to capture sharp details
• Resolution 2048 pixels: large field of view, compatible with most lenses on the market
• Three gain modes available: allows selecting the best dynamic range for the scene
• QE > 85%: boosts sensitivity to the maximum available
• Line rate up to 60 kHz: for imaging fast-moving objects
• Exposure time 10 µs to 220 ms: fully configurable for capturing the best signal-to-noise ratio



Canon to start selling 3.2MP SPAD sensor in 2023

Canon developing world-first ultra-high-sensitivity ILC equipped with SPAD sensor, supporting precise monitoring through clear color image capture of subjects several km away, even in darkness

TOKYO, April 3, 2023—Canon Inc. announced today that the company is developing the MS-500, the world's first [1] ultra-high-sensitivity interchangeable-lens camera (ILC) equipped with a 1.0 inch Single Photon Avalanche Diode (SPAD) sensor [2] featuring the world's highest pixel count of 3.2 megapixels [3]. The camera leverages the special characteristics of SPAD sensors to achieve superb low-light performance while also utilizing broadcast lenses that feature high performance at telephoto-range focal lengths. Thanks to such advantages, the MS-500 is expected to be ideal for such applications as high-precision monitoring.

There is a growing need for high-precision monitoring systems for use in such environments as national borders, seaports, airports, train stations, power plants and other key infrastructure facilities, in order to quickly identify targets even under adverse conditions including darkness in which human eyes cannot see, and from long distances.

The currently in-development MS-500 is equipped with a 1.0 inch SPAD sensor that reduces noise, thus making possible clear, full-color HD imaging even in extreme low-light environments. When paired with Canon's extensive range of broadcast lenses, which excel at super-telephoto image capture, the camera is capable of accurately capturing subjects with precision in extreme low-light environments, even from great distances. For example, the camera may be used for nighttime monitoring of seaports, thanks to its ability to spot vessels that are several km away, thus enabling identification and high-precision monitoring of vessels in or around the seaport.

With CMOS sensors, which are commonly used in conventional modern digital cameras, each pixel measures the amount of light that reaches the pixel within a given time. However, the readout of the accumulated electronic charge contains electronic noise, which diminishes image quality, due to the process by which accumulated light is measured. This leads to degradation of the resulting image, particularly when used in low-light environments. SPAD sensors, meanwhile, employ a technology known as "photon counting", in which light particles (photons) that enter each individual pixel are counted. When even a single photon enters a pixel, it is instantly amplified approximately 1 million times and output as an electrical signal. Every single one of these photons can be digitally counted, thus making possible zero noise during signal readout—a key advantage of SPAD sensors [4]. Because of this technological advantage, the MS-500 is able to operate even in nighttime environments with no ambient starlight [5], and is also capable of accurately detecting subjects with minimal illumination and capturing clear color images.
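
A back-of-the-envelope comparison shows why zero read noise matters at low light; this toy model ignores photon detection efficiency, dark counts, and dead time:

```python
import numpy as np

def snr_cmos(n_photons, read_noise_e):
    """Conventional pixel: shot noise and read noise add in quadrature."""
    return n_photons / np.sqrt(n_photons + read_noise_e**2)

def snr_photon_counting(n_photons):
    """Photon-counting (SPAD) pixel with zero read noise: shot-noise-limited."""
    return np.sqrt(n_photons)

# At 4 detected photons with 2 e- of read noise: ~1.4 vs 2.0.
# The photon-counting advantage grows as the signal shrinks.
```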


The MS-500 employs the bayonet lens mount (based on BTA S-1005B standards) which is widely used in the broadcast lens industry. This enables the camera to be used with Canon's extensive range of broadcast lenses which feature superb optical performance. As a result, the camera is able to recognize and capture subjects that are several km away.

Going forward, Canon will continue to pursue R&D and create products capable of surpassing the limits of the human eye while contributing to the safety and security of society by leveraging its long history of comprehensive imaging technologies that include optics, sensors, image processing and image analysis.

Canon plans to commence sales of the MS-500 in 2023.

Reference

The MS-500 will be displayed as a reference exhibit at the Canon booth during the 2023 NAB Show for broadcast and filmmaking equipment, to be held in Las Vegas from Saturday, April 15 to Wednesday, April 19.

[1] Among color cameras. As of April 2, 2023. Based on Canon research.

[2] Among SPAD sensors for imaging use. As of April 2, 2023. Based on Canon research.

[3] Total pixel count: 3.2 million pixels. Effective pixel count: 2.1 million pixels.

[4] For more information on how SPAD sensors operate and how they differ from CMOS sensors, please visit the following website: https://global.canon/en/technology/spad-sensor-2021.html

[5] Ambient starlight is equivalent to approximately 0.02 lux. A nighttime environment with no ambient starlight is equivalent to approximately 0.007 lux.


Metalenz polarization sensor wins SPIE award

https://metalenz.com/metalenz-wins-2023-prism-award/

San Francisco, CA – SPIE, the international society for optics and photonics, recognized the most innovative new optics and photonics products with the annual industry-focused Prism Awards. Metalenz was named winner of the Camera and Imaging category for PolarEyes, the company's breakthrough polarization imaging platform designed around the unique capabilities of Metalenz meta-optics.

PolarEyes is the world’s first and only optical module that can instantly provide information about the material make-up and depth details of the imaged scene, thereby providing highly valuable, previously unavailable information to machine vision systems.

Traditional approaches to polarization imaging require a complex array of optics, waveplates and filters, resulting in modules that are too large, expensive, and inefficient for mass markets or small form-factor devices. Dr. Noah Rubin and Professor Federico Capasso demonstrated in foundational research that a single meta-optic can completely image all of the polarization information in a scene without filtering or loss of efficiency. Now, the team at Metalenz has productized this breakthrough with PolarEyes. The result is a full-Stokes polarization camera that is over 5000x more compact than traditional cameras. This brings powerful lab camera capabilities into tiny camera modules that fit into any smart or mobile device. More than a polarized meta-optic, this full-stack, system-level solution combines physics and optics, software and hardware to power machine vision systems for next-generation smartphones and consumer electronics, as well as new automotive, robotic and healthcare applications.

“We are honored to have this recognition from SPIE and the photonics community. With PolarEyes, we are using our metasurface technology to look beyond just solving size and performance in existing sensor modules. We are empowering billions of devices with new information that will change the way that people and machines interact with and understand the world,” said Rob Devlin, Metalenz co-founder and CEO.


More information from: https://metalenz.com/polareyes-polarization-imaging-system/

Metalenz's "PolarEyes" polarization-based imaging system is a microscopic sensing solution that harnesses the power of polarized light. PolarEyes characterizes depth, material properties and detects transparent objects–bringing new information to a mobile form factor for the first time.

Traditional approaches to polarization imaging require a complex array of light splitters and filters, resulting in modules that are too large, expensive and inefficient for mass markets or small form-factor devices. PolarEyes shrinks these powerful lab cameras into tiny camera modules that fit into any smart or mobile device.


PolarEyes captures polarized light without filtering or loss of signal strength, and the full-stack, system-level solution combines physics and optics, software and hardware to power machine vision systems for next-generation smartphones and consumer electronics, to new automotive, robotic and healthcare applications.


Polarization provides an additional scene cue beyond intensity and depth which can be used for material classification, improved 3D sensing (surface normal reconstruction) and removing glare. Use cases include consumer electronics, robotics and automotive.
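
For readers new to the modality, the linear Stokes components and the DoLP/AoLP cues mentioned above are computed from four analyzer measurements as follows. This is the standard textbook recipe, not Metalenz's processing; a full-Stokes camera additionally recovers the circular component S3:

```python
import numpy as np

def linear_stokes(I0, I45, I90, I135):
    """Linear Stokes parameters from intensities behind analyzers at
    0/45/90/135 degrees, plus degree and angle of linear polarization."""
    S0 = 0.5 * (I0 + I45 + I90 + I135)    # total intensity
    S1 = I0 - I90                         # horizontal vs. vertical
    S2 = I45 - I135                       # +45 vs. -45 diagonal
    dolp = np.sqrt(S1**2 + S2**2) / np.maximum(S0, 1e-12)
    aolp = 0.5 * np.arctan2(S2, S1)       # radians
    return S0, S1, S2, dolp, aolp
```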


EETimes article on LiDAR for ADAS

An EETimes article argues that LiDARs will be an important component in future ADAS systems.

Link: https://www.eetimes.com/the-future-of-lidar-lies-in-adas/ 

Cars are becoming more and more autonomous, to the point that self-driving is getting close to becoming real. High-performance sensors have enabled an ever-increasing number of advanced driver-assistance system (ADAS) features, such as lane-keeping, adaptive cruise control and structures for detecting blind spots during overtaking.

ADAS serves as a useful tool for drivers as well as a response to the demand for improved safety requirements. LiDAR is one of the most important components of ADAS, as it can be used in adaptive cruise control, blind-spot detection, pedestrian detection and all use cases that require the detection and mapping of objects around the vehicle.

ADAS, which corresponds to Level 2 of the driving automation scale, is now standard in most cars. Sensors that can deliver a high level of safety are required for autonomous or semi-autonomous vehicles. For automotive applications, this means that the sensor must be reliable in all weather conditions and unaffected by sun, rain or fog. LiDAR sensors are also appropriate for high-vibration transport systems, such as driverless vehicles and machinery for mining, construction and agriculture.

The article goes on to discuss two recent trends: solid-state LiDARs and spectrum-scan LiDARs.

Recently, we have witnessed growing interest in solid-state LiDAR technology, i.e., systems that use a laser source and a detector with neither scanning nor moving parts. Rather than sequentially sending laser light in one direction, gathering measurements and then switching to another position, as conventional scanning LiDAR does, solid-state technology measures and acquires the surrounding environment as a whole.

...

The proprietary Spectrum-Scan platform created by Baraja takes a distinct approach from traditional mechanical LiDAR systems. Instead of employing fragile moving parts and oscillating mirrors to scan the surrounding area, it steers light by refraction through prism-like optics. Mechanically scanned sensors, by contrast, rely on fast-axis moving parts that are expensive, large and prone to failure.

Go to the original article...

XenomatiX solid-state LiDAR

Image Sensors World        Go to the original article...

Link: https://xenomatix.com/lidar/xenolidar/


XenomatiX, a pioneer of true-solid-state LiDARs for ADAS, AVs and road applications, has launched the new-generation true-solid-state XenoLidar-X for automotive and industrial applications. XenoLidar-X is small, fast and light, and delivers high resolution at low power consumption. These characteristics make it suitable for integration and series production.

The webpage discloses the following specs (a rough point-rate estimate follows the list):

  • Range: up to 50 m
  • Field of view: 60° x 20°
  • Angular resolution: 0.3° x 0.3°
  • Data output rate: 20 Hz
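Those numbers imply a modest point rate. Here is a quick back-of-envelope check, assuming the 0.3° resolution applies uniformly across the full field of view (an assumption; the page does not state the scan pattern):

```python
# Back-of-envelope point rate from the published specs, assuming a
# uniform 0.3 deg grid over the full 60 x 20 deg field of view
# (an assumption; the XenomatiX page does not state the pattern).
h_points = 60 / 0.3                        # ~200 columns
v_points = 20 / 0.3                        # ~67 rows
points_per_frame = h_points * v_points     # ~13,300 points
points_per_second = points_per_frame * 20  # at the 20 Hz output rate
print(f"{points_per_frame:,.0f} points/frame, "
      f"{points_per_second/1e3:.0f}k points/s")
```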

Go to the original article...

Quantum Dot-based image sensors (IEEE TED June 2022 issue)

Image Sensors World        Go to the original article...

Two papers in the IEEE Trans. Electron Devices journal from June 2022 on the topic of infrared quantum dot-based image sensors.

Infrared Colloidal Quantum Dot Image Sensors
Pejović et al. (IMEC Belgium)


Quantum dots (QDs) have been explored for many photonic applications, both as emitters and absorbers. Thanks to their bandgap tunability and ease of processing, they are prominent candidates to disrupt the field of imaging. This review article illustrates the state of technology for infrared image sensors based on colloidal QD absorbers. Up to now, this wavelength range has been dominated by III–V and II–VI imagers realized using flip-chip bonding. Monolithic integration of QDs with the readout chip promises to make short-wave infrared (SWIR) imaging accessible to applications that could previously not even consider this modality. Furthermore, QD sensors already show state-of-the-art figures of merit, such as sub-2-μm pixel pitch and multimegapixel resolution. External quantum efficiencies already exceed 60% at 1400 nm. With the potential to extend the spectral range into extended SWIR and even mid-wave infrared, QD imagers are a very interesting and dynamic technology segment.
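For context, an external quantum efficiency quoted at a given wavelength converts directly to responsivity via the textbook relation R = EQE · qλ/(hc). A quick check for the 60% at 1400 nm figure quoted in the abstract:

```python
# Responsivity from external quantum efficiency: R = EQE * q * lambda / (h * c)
q = 1.602e-19        # electron charge [C]
h = 6.626e-34        # Planck constant [J s]
c = 3.0e8            # speed of light [m/s]

eqe = 0.60           # 60% EQE quoted at 1400 nm
wavelength = 1400e-9 # [m]

responsivity = eqe * q * wavelength / (h * c)
print(f"R ~ {responsivity:.2f} A/W")  # ~0.68 A/W
```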

Different layers in a QD image sensor


Representative images made by different QD image sensors. (a) PbS QD, SWIR image (cutoff at 1.6 μm), 5-μm pixel pitch, data courtesy of IMEC. (b) PbS QD, SWIR image (cutoff at 2 μm), 15-μm pixel pitch, data courtesy of SWIR Vision Systems. (c) PbS QD, SWIR image (cutoff at 2 μm), 20-μm pixel pitch, data courtesy of Emberion. (d) PbS QD, SWIR image (cutoff at 1.6 μm), 2.2-μm pixel pitch, data courtesy of STMicroelectronics. (e) PbS QD, SWIR image (cutoff at 1.85 μm), data courtesy of ICFO. (f) HgTe QD, MWIR image (cutoff at 5 μm), 30-μm pixel pitch, reprinted with permission from [51]. (g) HgTe QD, SWIR image (cutoff at 2 μm), half VGA, 15-μm pixel pitch, data courtesy of Sorbonne University.


Figures of Merit of Five Different PbS QD Image Sensors




Detailed Characterization of Short-Wave Infrared Colloidal Quantum Dot Image Sensors
Kim et al. (IMEC, Belgium)


Thin-film-based image sensors feature a thin-film photodiode (PD) monolithically integrated on CMOS readout circuitry. They are getting significant attention as an imaging platform for wavelengths beyond the reach of Si PDs, i.e., for photon energies lower than 1.12 eV. Among the promising candidates for converting low-energy photons to electric charge carriers, lead sulfide (PbS) colloidal quantum dot (CQD) photodetectors are particularly well suited. However, despite the dynamic research activities in the development of these thin-film-based image sensors, no in-depth study has been published on their imaging characteristics. In this work, we present an elaborate analysis of the performance of our short-wave infrared (SWIR) sensitive PbS CQD imagers, which achieve external quantum efficiency (EQE) up to 40% at the wavelength of 1450 nm. Image lag is characterized and compared with the temporal photoresponsivity of the PD. We show that blooming is suppressed because of the restricted pixel-to-pixel movement of the photo-generated charge carriers within the bottom transport layer (BTL) of the PD stack. Finally, we perform a statistical analysis of the CQD activation energy by dark current spectroscopy (DCS), applying a methodology well known from defect engineering in Si-based imagers to a new class of imagers.

(a) QDPD stack integration on the Si-ROIC. (b) Structure schematic of its passive PD-only device without the ROIC (ECL: edge cover layer, figures not scaled).


(a) Typical PTC for our PbS QDPD imager, displaying shot-noise-limited behavior under relatively intense illumination. (b) EQE measured with a PD-only passive device (red dashed) and with an imager (black), showing no significant change in spectral shape or values.


Image of the Imec campus taken with our QDPD imager on a visibly sunny day, collecting photons in the visible spectral range [(a) <750 nm] and in the SWIR wavelengths [(b) >1350 nm], showing a bright and a dark sky, respectively, since lower-energy (SWIR) photons are less scattered by air molecules and do not reach the imager. Set of images capturing objects under visible [(c) <750 nm] and SWIR [(d) >1350 nm] illumination. While the two plastic bars are hard to distinguish under visible light, they show clear contrast in the SWIR range, since Bar #2 is more optically reflective than Bar #1 in that spectral region.


(a) Arrhenius plot from the PbS CQD imager pixel data (Gen 1 stack). An activation energy of approximately 0.41 eV is fitted from this plot, built from the median dark outputs of individual pixels. (b) Activation energy histogram computed from individual pixels, showing multiple peaks ranging from 0.23 to 0.43 eV. (c) Dark current maps as temperature is raised to 0°C (left), 40°C (middle), and 80°C (right), respectively.
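Dark current spectroscopy rests on the Arrhenius relation I_dark ∝ exp(−Ea/kT): plotting the log of the dark signal against 1/kT gives a line whose slope is −Ea. A minimal per-pixel fit might look like the sketch below (synthetic data, not the authors' code):

```python
import numpy as np

K_B = 8.617e-5  # Boltzmann constant [eV/K]

def activation_energy(temps_c, dark_signal):
    """Fit Ea from dark signal vs. temperature (Arrhenius relation).

    Model: dark ~ A * exp(-Ea / (kB * T)), so ln(dark) is linear in
    1/(kB*T) with slope -Ea. temps_c in Celsius, dark_signal in any
    consistent unit (e.g. e-/s)."""
    inv_kt = 1.0 / (K_B * (np.asarray(temps_c) + 273.15))
    slope, _ = np.polyfit(inv_kt, np.log(dark_signal), 1)
    return -slope  # Ea in eV

# Synthetic pixel with Ea = 0.41 eV (the median value reported above)
T = np.array([0.0, 20.0, 40.0, 60.0, 80.0])
dark = 1e9 * np.exp(-0.41 / (K_B * (T + 273.15)))
print(f"fitted Ea = {activation_energy(T, dark):.3f} eV")
```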

Go to the original article...

Videos du jour: Hitachi FIB-SEM and OnSemi iToF sensor

Image Sensors World        Go to the original article...


FIB-SEM tomography of a CMOS image sensor: 533 serial cross-sectional BSE images of a CMOS image sensor were automatically acquired on a FIB-SEM in 14 hours. The reconstructed 3D data visualizes the arrangement of electrodes and wiring. Slice images resolve the color filter layers, plugs with a diameter of about 200 nm, and metal wiring with a minimum width of approximately 150 nm.




onsemi’s indirect time-of-flight (iToF) demo at Embedded World shows a 1.2 MP sensor that can capture depth maps at ranges up to six meters.

Go to the original article...

FRAMOS Tutorial on Time of Flight Technology

Image Sensors World        Go to the original article...

 

Explore the benefits of time-of-flight technology in the iToF webinar hosted by Chris Baldwin, an image sensor expert from FRAMOS.

Learn what new opportunities this fast, reliable and cost-effective technology can bring to anyone developing 3D imaging systems.

In this webinar, you will learn about time-of-flight sensor applications, how ToF sensors enable the transformation of raw data into 3D information, what tools are required for tuning and calibration, which environments are optimal for product development, how to enhance the performance of your systems, and real-life application examples. (The phase-to-distance step is sketched after the topic list below.)

Topics covered:

  • Introduction to indirect time of flight
  • From phase shift to distance
  • ToF pixel design
  • Phase ambiguity and noise sources
  • Applications
  • FRAMOS ToF modules and ecosystem
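The "from phase shift to distance" step is worth making concrete. In a generic 4-phase iToF scheme (textbook form, not FRAMOS-specific), the pixel correlates the returning light with four phase-shifted references and recovers distance from the measured phase; distances beyond c/(2·f_mod) wrap around, which is the phase ambiguity listed above:

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def itof_distance(c0, c90, c180, c270, f_mod):
    """4-phase iToF demodulation (generic textbook form).

    c0..c270 are correlation samples at 0/90/180/270 degree offsets;
    f_mod is the modulation frequency in Hz."""
    phase = np.arctan2(c90 - c270, c0 - c180)  # [-pi, pi]
    phase = np.mod(phase, 2 * np.pi)           # wrap to [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)

def unambiguous_range(f_mod):
    """Distances beyond c/(2*f_mod) alias back (phase ambiguity)."""
    return C / (2 * f_mod)

print(unambiguous_range(25e6))  # 25 MHz modulation -> 6.0 m
```

For example, a 25 MHz modulation gives a 6 m unambiguous range, which is why iToF systems often combine two modulation frequencies to extend range without losing precision.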






Go to the original article...

More ways to follow ISW blog: LinkedIn and Email

Image Sensors World        Go to the original article...

LinkedIn

You can now read about new updates on this blog by following us on LinkedIn. Thanks to Hari Tagat for maintaining the new LinkedIn page!

Email

We have received several questions on how to subscribe to Image Sensors World blog updates by email. Unfortunately, Blogger stopped that service a while back. But you can still use a third-party RSS feed aggregator to get email alerts. Some examples: Blogtrottr, Feedly, Feedrabbit. Note that some of these free services may insert ads and/or require you to sign up for an account.

Get Involved

We are always looking for interesting articles to share on this blog. If you come across something that could be of interest to our blog readers please send an email to image.sensors.world@gmail.com.

You can also send us a message using the LinkedIn page.

If you prefer to stay anonymous, please post a link to the article or news as a comment here on the blog.

Go to the original article...

Panasonic Organic-Photoconductive-Film CMOS Image Sensor

Image Sensors World        Go to the original article...

A recent blog post by Panasonic discusses the color quality of their organic CMOS image sensor.

Link: https://news.panasonic.com/global/topics/13982

Osaka, Japan – Panasonic Holdings Corporation announced that it has developed excellent color reproduction technology that suppresses color crosstalk by thinning the photoelectric conversion layer, exploiting the high light absorption rate of the Organic Photoconductive Film (OPF), and by using electrical pixel separation technology. In this technology, the OPF part that performs photoelectric conversion and the circuit part that stores and reads out the electric charge are completely independent. This unique layered structure dramatically reduces the sensitivity of each green, red, and blue pixel in wavelength regions outside the target range. As a result, color crosstalk is reduced, excellent spectral characteristics are obtained, and accurate color reproduction is made possible regardless of the type of light source.

Conventional Bayer-array silicon image sensors do not have sufficient color separation performance for green, red, and blue. Therefore, under light sources that have peaks at specific wavelengths, such as cyan or magenta light, it has been difficult to accurately reproduce, recognize, and judge colors.


Our OPF CMOS image sensor has a unique structure in which the photoelectric conversion part that converts light into an electric signal is an organic thin film, while the function of storing and reading out the signal charge is performed in the circuit part; the two are completely independent of each other (Figure 1). As a result, unlike conventional silicon image sensors, it can provide photoelectric conversion characteristics that do not depend on the physical properties of silicon. Three technologies work together here:

(1) Photoelectric conversion film thinning: the OPF's high light absorption rate enables the thinning of the photoelectric conversion part.

(2) Electrical pixel isolation: a discharge electrode at the pixel boundaries drains the signal charge generated by light incident at the boundaries, suppressing signal charge from adjacent pixels.

(3) Light transmission suppression: since the underside of the OPF is covered by the pixel electrode that collects the signal charge and the discharge electrode, incident light that is not absorbed by the OPF cannot reach the circuit side.

Together, these three technologies suppress light and signal charges entering from adjacent pixels. As a result, color crosstalk can be reduced to a nearly ideal level, as shown in the spectral characteristics in Figure 2, and accurate color reproduction is achieved regardless of the color of the light source (Figure 3).
This technology enables accurate color reproduction and inspection even in environments where it is difficult for conventional image sensors to reproduce the original colors, such as plant factories that use magenta light. It is also possible to accurately reproduce the colors of substances with subtle color changes, such as living organisms. It can also be applied to managing skin conditions, monitoring health conditions, and inspecting fruits and vegetables. Furthermore, in combination with the high saturation characteristics and global shutter function of our OPF CMOS image sensor*, it can contribute to highly robust imaging systems that are highly tolerant of changes in light source type, illuminance, and speed.
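One way to see why low crosstalk matters for color accuracy: crosstalk mixes the channels, and recovering true color means applying the inverse of the mixing matrix, which amplifies noise as the mixing gets stronger. A toy illustration (numbers invented for illustration, not Panasonic data):

```python
import numpy as np

def correction_noise_gain(crosstalk):
    """Toy model: each channel leaks a fraction `crosstalk` of its
    signal into the other two. Recovering true color applies the
    inverse mixing matrix, whose norm grows with crosstalk and thus
    amplifies noise. Illustrative only, not Panasonic data."""
    k = crosstalk
    M = np.array([[1 - 2*k, k,       k      ],
                  [k,       1 - 2*k, k      ],
                  [k,       k,       1 - 2*k]])
    return np.linalg.norm(np.linalg.inv(M), ord=2)  # spectral norm

for k in (0.02, 0.10, 0.20):
    print(f"crosstalk {k:.0%}: correction gain {correction_noise_gain(k):.2f}x")
```

With near-ideal spectral separation the correction matrix stays close to identity, so colors stay accurate without a noise penalty, even under narrow-band light sources like the magenta lighting in plant factories.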


 

This sensor has been under development for many years. Panasonic's press release from 2018 demonstrated the dynamic range and speed: https://news.panasonic.com/global/press/en180214-2

Panasonic Develops Industry's-First 8K High-Resolution, High-Performance Global Shutter Technology using Organic-Photoconductive-Film CMOS Image Sensor

The new technology enables 8K high resolution and high picture quality imaging without motion distortion, even in extremely bright scenes.

Osaka, Japan – Panasonic Corporation today announced that it has developed a new technology which simultaneously realizes 8K high-resolution (36M pixels), 60fps framerate, 450k high-saturation electrons and global shutter [1] imaging with a sensitivity modulation function, using a CMOS image sensor with an organic photoconductive film (OPF). In this OPF CMOS image sensor, the photoelectric-conversion part and the circuit part are independent. By utilizing this OPF CMOS image sensor's unique structure, we have been able to newly develop and incorporate high-speed noise cancellation technology and high saturation technology in the circuit part. And, by using this OPF CMOS image sensor's unique sensitivity control function to vary the voltage applied to the OPF, we realize a global shutter function. The technology that simultaneously achieves these performances is the industry's first*1.

With the technology, it is possible to capture images at 8K resolution even in high-contrast scenes, such as a field under strong sunlight and shaded spectator seats under a stadium roof. Moreover, by utilizing the global shutter function that enables simultaneous image capture by all pixels, it is expected to capture moving objects instantaneously without distortion, to be utilized for multi-viewpoint cameras (performing multi-view synchronized imaging using several cameras), and to be used in fields requiring high speed and high resolution, such as machine vision and ITS monitoring. In addition, in scenes where it was conventionally necessary to switch between different ND filters [2] according to capturing conditions, the technology realizes a new electronically controlled variable ND filter function which enables stepless adjustment of the OPF sensitivity [3] merely by controlling the voltage applied to the OPF.

The new technology has the following advantages.

  1.  8K resolution, 60fps framerate, 450k saturation electrons and global shutter function are realized simultaneously.
  2.  Switching between high sensitivity mode and high saturation mode is possible using a gain-switching function.
  3.  The ND filter function can be realized steplessly by controlling the voltage applied to the OPF.

This Development is based on the following technologies.

  1.  "OPF CMOS image sensor design technology", in that, the photoelectric-conversion part and the circuit part can be designed independently.
  2.  "In-pixel capacitive coupled noise cancellation technique" which can suppress pixel reset noise at high speed even at high resolution
  3.  "In-pixel gain switching technology" that can achieve high saturation characteristics
  4.  "Voltage controlled sensitivity modulation technology" that can adjust the sensitivity by changing the voltage applied to the OPF.

Panasonic holds 135 Japanese patents and 83 overseas patents (including pending) related to this technology.

Panasonic will present some of these technologies at the international academic conference: ISSCC (International Solid-State Circuit Conference) 2018 which will be held in San Francisco on February 11 - 15, 2018.

Go to the original article...

EETimes article about Prophesee-Qualcomm deal

Image Sensors World        Go to the original article...

Full article here: https://www.eetimes.com/experts-weigh-impact-of-prophesee-qualcomm-deal/

Experts Weigh Impact of Prophesee-Qualcomm Deal

Some excerpts:

Frédéric Guichard, CEO and CTO of DXOMARK, a French company that specializes in testing cameras and other consumer electronics, and that is unconnected with Paris-based Prophesee, told EE Times that the ability to deblur in these circumstances could provide definite advantages.

“Reducing motion blur [without increasing noise] would be equivalent to virtually increasing camera sensitivity,” Guichard said, noting two potential benefits: “For the same sensitivity [you could] reduce the sensor size and therefore camera thickness,” or you could maintain the sensor size and use longer exposures without motion blur.

Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, told EE Times that this image enhancement can be achieved with roughly a 20-30% increase in power consumption to run the extra sensor and execute the processing.

“The processing can be done slowly and offline because you don’t really care about how long it takes to complete,” Heape added.

...

“We have many, many low-power use cases,” he said. Lifting a phone to your ear to wake it up is one example. Gesture-recognition to control the car when you’re driving is another.

“These event-based sensors are much more efficient for that because they can be programmed to easily detect motion at very low power,” he said. “So, when the sensor is not operating, when there’s no movement or no changes in the scene, the sensor basically consumes almost no power. So that’s really interesting to us.”

Eye-tracking could also be very useful, Heape added, because Qualcomm builds devices for augmented and virtual reality. “Eye-tracking, motion-tracking of your arms, hands, legs… are very efficient with image sensors,” he said. “In those cases, it is about power, but it’s also about frame rate. We need to track the eyes at like 90 [or 120] frames per second. It’s harder to do that with a standard image sensor.”
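The low-power wake-up use cases Heape describes map naturally onto a simple event-rate trigger: the host stays asleep until the sensor's event stream shows enough activity. A minimal sketch, with a hypothetical event format and thresholds (not Prophesee's actual API):

```python
from collections import deque

def wake_on_motion(events, window_us=10_000, min_events=200):
    """Scan a stream of (timestamp_us, x, y, polarity) events and
    yield wake-up timestamps when the event rate in a sliding window
    exceeds a threshold. Event format and thresholds are hypothetical,
    not Prophesee's actual API."""
    window = deque()
    for t, x, y, pol in events:
        window.append(t)
        # Drop events older than the sliding window.
        while window and t - window[0] > window_us:
            window.popleft()
        if len(window) >= min_events:
            yield t           # enough activity: wake the host
            window.clear()    # re-arm after waking
```

Because a static scene produces essentially no events, the loop above does nothing most of the time, which is the power argument in a nutshell.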

Prophesee CEO Luca Verre told EE Times the company is close to launching its first mobile product with one OEM. “The target is to enter into mass production next year,” he said. 

Go to the original article...

TechCrunch article on future of computer vision

Image Sensors World        Go to the original article...

Everything you know about computer vision may soon be wrong

Ubicept wants half of the world's cameras to see things differently


Some excerpts from the article:

Most computer vision applications work the same way: A camera takes an image (or a rapid series of images, in the case of video). These still frames are passed to a computer, which then does the analysis to figure out what is in the image. 

Computers, however, don’t need frames the way human viewers do, and Ubicept believes it can make computer vision far better and more reliable by ignoring the idea of frames.

The company’s solution is to bypass the “still frame” as the source of truth for computer vision and instead measure the individual photons that hit an imaging sensor directly. That can be done with a single-photon avalanche diode array (or SPAD array, among friends). This raw stream of data can then be fed into a field-programmable gate array (FPGA, a type of super-specialized processor) and further analyzed by computer vision algorithms.
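At the photon level, "intensity" becomes a rate estimate. A common single-photon imaging approach (a generic sketch, not necessarily Ubicept's proprietary pipeline) is to accumulate 1-bit SPAD frames and invert the Poisson saturation: each pixel fires with probability p = 1 − exp(−λ), so the flux is λ = −ln(1 − p):

```python
import numpy as np

def flux_from_binary_frames(frames):
    """Estimate per-pixel photon flux from a stack of 1-bit SPAD
    frames of shape (T, H, W). Each pixel detects >=1 photon per frame
    with probability p = 1 - exp(-lam), so lam = -ln(1 - p). Standard
    single-photon imaging estimator, not Ubicept's actual pipeline."""
    p = frames.mean(axis=0)             # per-pixel detection probability
    p = np.clip(p, 0.0, 1.0 - 1e-6)     # avoid log(0) at saturation
    return -np.log1p(-p)                # photons per exposure

# Usage on synthetic data: true flux 0.5 photons/frame
rng = np.random.default_rng(0)
frames = (rng.poisson(0.5, size=(1000, 4, 4)) > 0).astype(float)
print(flux_from_binary_frames(frames).mean())  # ~0.5
```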

The newly founded company demonstrated its tech at CES in Las Vegas in January, and it has some pretty bold plans for the future of computer vision.


Visit www.ubicept.com for more information.

Check out their recent demo of low-light license plate recognition here: https://www.ubicept.com/blog/license-plate-recognition-in-low-light

Go to the original article...

Hailo-15 AI-centric Vision Processor

Image Sensors World        Go to the original article...

From Yole: https://www.yolegroup.com/industry-news/leading-edge-ai-chipmaker-hailo-introduces-hailo-15-the-first-ai-centric-vision-processors-for-next-generation-intelligent-cameras/

Leading edge AI chipmaker Hailo introduces Hailo-15: the first AI-centric vision processors for next-generation intelligent cameras


The powerful new Hailo-15 Vision Processor Units (VPUs) bring unprecedented AI performance directly to cameras deployed in smart cities, factories, buildings, retail locations, and more.

Hailo, the pioneering chipmaker of edge artificial intelligence (AI) processors, today announced its groundbreaking new Hailo-15™ family of high-performance vision processors, designed for integration directly into intelligent cameras to deliver unprecedented video processing and analytics at the edge.

With the launch of Hailo-15, the company is redefining the smart camera category by setting a new standard in computer vision and deep learning video processing, capable of delivering unprecedented AI performance in a wide range of applications for different industries.

With Hailo-15, smart city operators can more quickly detect and respond to incidents; manufacturers can increase productivity and machine uptime; retailers can protect supply chains and improve customer satisfaction; and transportation authorities can recognize everything from lost children, to accidents, to misplaced luggage.

“Hailo-15 represents a significant step forward in making AI at the edge more scalable and affordable,” stated Orr Danon, CEO of Hailo. “With this launch, we are leveraging our leadership in edge solutions, which are already deployed by hundreds of customers worldwide; the maturity of our AI technology; and our comprehensive software suite, to enable high performance AI in a camera form-factor.”

The Hailo-15 VPU family includes three variants (the Hailo-15H, Hailo-15M, and Hailo-15L) to meet the varying processing needs and price points of smart camera makers and AI application providers. Ranging from 7 TOPS (tera operations per second) up to an astounding 20 TOPS, this VPU family enables over 5x higher performance than currently available solutions in the market, at a comparable price point. All Hailo-15 VPUs support multiple input streams at 4K resolution and combine powerful CPU and DSP subsystems with Hailo’s field-proven AI core.

By introducing superior AI capabilities into the camera, Hailo is addressing the growing market demand for enhanced video processing and analytics at the edge. With this unparalleled AI capacity, Hailo-15-empowered cameras can carry out significantly more video analytics, running several AI tasks in parallel, including faster detection at high resolution to enable identification of smaller and more distant objects with higher accuracy and fewer false alarms.

As an example, the Hailo-15H is capable of running the state-of-the-art object detection model YoloV5M6 at high input resolution (1280×1280) at real-time sensor rate, or the industry-standard classification benchmark, ResNet-50, at an extraordinary 700 FPS.

With this family of high-performance AI vision processors, Hailo is also pioneering the use of vision-based transformers in cameras for real-time object detection. The added AI capacity can also be utilized for video enhancement and much better video quality in low-light environments, for video stabilization, and high dynamic range performance.

Hailo-15-empowered cameras lower the total cost of ownership in massive camera deployments by offloading cloud analytics to save video bandwidth and processing, while improving overall privacy through data anonymization at the edge. The result is an ultra-high-quality AI-based video analytics solution that keeps people safer while ensuring their privacy, and allows organizations to operate more efficiently, at a lower cost and complexity of network infrastructure.

The Hailo-15 vision processor family, like the already widely deployed Hailo-8™ AI accelerator, is engineered to consume very little power, making it suitable for every type of IP camera and enabling the design of fanless edge devices. The small power envelope means camera designers can develop lower-cost products by leaving out an active cooling component. Fanless cameras are also better suited for industrial and outdoor applications where dirt or dust can otherwise impact reliability.

“By creating vision processors that offer high performance and low power consumption directly in cameras, Hailo has pushed the limits of AI processing at the edge,” said KS Park, Head of R&D for Truen, specialists in edge AI and video platforms. “Truen welcomes the Hailo-15 family of vision processors, embraces their potential, and plans to incorporate the Hailo-15 in the future generation of Truen smart cameras.”

“With Hailo-15, we’re offering a unique, complete and scalable suite of edge AI solutions,” Danon concluded. “With a single software stack for all our product families, camera designers, application developers, and integrators can now benefit from an easy and cost-effective deployment supporting more AI, more video analytics, higher accuracy, and faster inference time, exactly where they’re needed.”

Hailo will be showcasing its Hailo-15 AI vision processor at ISC-West in Las Vegas, Nevada, from March 28-31, at booth #16099.

Go to the original article...

Sony’s new SPAD-based dToF Sensor IMX611

Image Sensors World        Go to the original article...

https://www.sony-semicon.com/en/news/2023/2023030601.html

Sony Semiconductor Solutions to Release SPAD Depth Sensor for Smartphones with High-Accuracy, Low-Power Distance Measurement Performance, Powered by the Industry’s Highest*1 Photon Detection Efficiency

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX611, a direct time-of-flight (dToF) SPAD depth sensor for smartphones that delivers the industry’s highest*1 photon detection efficiency.

The IMX611 has a photon detection efficiency of 28%, the highest in the industry,*1 thanks to its proprietary single-photon avalanche diode (SPAD) pixel structure.*2 This reduces the power consumption of the entire system while enabling high-accuracy measurement of the distance of an object.

This new sensor will generate opportunities to create new value in smartphones, including functions and applications that utilize distance information.



In general, SPAD pixels are used as the detector in a dToF sensor, which acquires distance information by measuring the time of flight of light emitted from a source until it returns to the sensor after being reflected off an object.




The IMX611 uses a proprietary SPAD pixel structure that gives the sensor the industry’s highest*1 photon detection efficiency, at 28%, which makes it possible to detect even very weak photons that have been emitted from the light source and reflected off the object. This allows for highly accurate measurement of object distance. It also means the sensor can offer high distance-measurement performance even with lower light source laser output, thereby helping to reduce the power consumption of the whole smartphone system.

This sensor can accurately measure the distance to an object, making it possible to improve autofocus performance in low-light environments with poor visibility, to apply a bokeh effect to the subject’s background, and to seamlessly switch between wide-angle and telephoto cameras. All of these capabilities will improve the user experience of smartphone cameras. This sensor also enables 3D spatial recognition, AR occlusion,*4 motion capture/gesture recognition, and other such functions. With the spread of the metaverse in the future, this sensor will contribute to the functional evolution of VR head mounted displays and AR glasses, which are expected to see increasing demand.

By incorporating a proprietary signal processing function into the logic chip inside the sensor, the RAW information acquired from the SPAD pixels is converted into distance output entirely within the sensor. This approach reduces the post-processing load, thereby simplifying overall system development.
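The conversion the sensor performs internally can be pictured as histogramming photon arrival times and turning the peak into a range via d = c·t/2. A minimal sketch of that idea (bin width and peak picking are illustrative choices, not the IMX611's internal processing):

```python
import numpy as np

C = 3.0e8  # speed of light [m/s]

def distance_from_timestamps(timestamps_s, bin_width_s=250e-12):
    """Histogram SPAD photon arrival times (seconds after the laser
    pulse) and convert the peak bin to distance via d = c * t / 2.
    Bin width and peak picking are illustrative choices, not the
    IMX611's internal processing."""
    edges = np.arange(0.0, timestamps_s.max() + bin_width_s, bin_width_s)
    hist, _ = np.histogram(timestamps_s, bins=edges)
    t_peak = (np.argmax(hist) + 0.5) * bin_width_s  # center of peak bin
    return C * t_peak / 2.0

# Usage: a target at 3 m gives a ~20 ns round trip plus timing jitter
rng = np.random.default_rng(1)
returns = rng.normal(2 * 3.0 / C, 100e-12, size=5000)
print(f"{distance_from_timestamps(returns):.3f} m")  # ~3.0 m
```

A higher photon detection efficiency, such as the 28% quoted above, puts more counts into the signal peak for the same laser power, which is why it permits lower source output for the same ranging accuracy.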





Go to the original article...
