A 100kfps X-ray imager

Image Sensors World        Go to the original article...

Marras et al. presented a paper titled "Development of the Continuous Readout Digitising Imager Array Detector" at the Topical Workshop on Electronics for Particle Physics 2023.

Abstract: The CoRDIA project aims to develop an X-ray imager capable of continuous operation in excess of 100 kframe/s. The goal is to provide a suitable instrument for Photon Science experiments at diffraction-limited Synchrotron Rings and Free Electron Lasers considering Continuous Wave operation. Several chip prototypes were designed in a 65 nm process: in this paper we will present an overview of the challenges and solutions adopted in the ASIC design.


Go to the original article...

Pixel-level programmable regions-of-interest for high-speed microscopy

Image Sensors World        Go to the original article...

Zhang et al. from MIT recently published a paper titled "Pixel-wise programmability enables dynamic high-SNR cameras for high-speed microscopy" in Nature Communications.

Abstract: High-speed wide-field fluorescence microscopy has the potential to capture biological processes with exceptional spatiotemporal resolution. However, conventional cameras suffer from low signal-to-noise ratio at high frame rates, limiting their ability to detect faint fluorescent events. Here, we introduce an image sensor where each pixel has individually programmable sampling speed and phase, so that pixels can be arranged to simultaneously sample at high speed with a high signal-to-noise ratio. In high-speed voltage imaging experiments, our image sensor significantly increases the output signal-to-noise ratio compared to a low-noise scientific CMOS camera (~2–3 folds). This signal-to-noise ratio gain enables the detection of weak neuronal action potentials and subthreshold activities missed by the standard scientific CMOS cameras. Our camera with flexible pixel exposure configurations offers versatile sampling strategies to improve signal quality in various experimental conditions.

 

a Pixels within an ROI capture spatiotemporally-correlated physiological activity, such as signals from somatic genetically encoded voltage indicators (GEVI). b Simulated CMOS pixel outputs with uniform exposure (TE) face a trade-off between SNR and temporal resolution. Short TE (1.25 ms) provides high temporal resolution but low SNR. Long TE (5 ms) enhances SNR but suffers from aliasing due to the low sample rate, causing spikes (10 ms interspike interval) to be indiscernible. Pixel outputs are normalized row-wise. Gray brackets: the zoomed-in view of the pixel outputs. c Simulated pixel outputs of the PE-CMOS. Pixel-wise exposure allows pixels to sample at different speeds and phases. Two examples: in the staggered configuration, the pixels sample the spiking activity with prolonged TE (5 ms) at multiple phases with offsets of Δ = 0, 1.25, 2.5, and 3.75 ms. This configuration maintains SNR and prevents aliasing, as spiking activity that exceeds the temporal resolution of a single phase is captured by the phase-shifted pixels. In the multiple-exposure configuration, the ROI is sampled with pixels at different speeds, resolving high-frequency spiking activity and slowly varying subthreshold potentials that are challenging to acquire simultaneously at a fixed sampling rate. d The PE-CMOS pixel schematic with 6 transistors (T1-T6), a photodiode (PD), and an output (OUT). RST, TX, and SEL are row control signals. EX is a column signal that controls pixel exposure. e The pixel layout. The design achieves programmable pixel-wise exposure while maximizing the PD fill factor for high optical sensitivity.
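
The staggered configuration described above lends itself to a simple simulation. Below is a minimal sketch (our own illustration, not the authors' code) of four pixel groups integrating the same waveform with TE = 5 ms at phase offsets of 0, 1.25, 2.5 and 3.75 ms; interleaving the four 200 Hz streams yields an 800 Hz-equivalent time series. All function and variable names are hypothetical.

```python
import numpy as np

# Illustrative only: one ROI signal sampled by four pixel groups whose 5 ms
# exposures are staggered by 1.25 ms, as in the staggered PE-CMOS configuration.
fs_sim = 100_000                            # dense "ground truth" rate, Hz
t = np.arange(0, 0.1, 1 / fs_sim)           # 100 ms of signal
signal = np.exp(-((t - 0.03) % 0.01) ** 2 / 0.0005 ** 2)   # toy spike train, 10 ms interspike interval

TE = 5e-3                                   # exposure per pixel group (5 ms)
offsets = [0.0, 1.25e-3, 2.5e-3, 3.75e-3]   # phase offsets between the four groups

def boxcar_sample(sig, t, TE, offset, frame_period):
    """Average the signal over each exposure window (ideal photon integration)."""
    starts = np.arange(offset, t[-1] - TE, frame_period)
    samples = [sig[(t >= s) & (t < s + TE)].mean() for s in starts]
    return starts + TE / 2, np.array(samples)

# Each group runs at 200 Hz (frame period = TE); four phase-shifted groups together
# give an 800 Hz-equivalent sampling of the ROI.
outputs = [boxcar_sample(signal, t, TE, off, TE) for off in offsets]
t_all = np.concatenate([o[0] for o in outputs])
y_all = np.concatenate([o[1] for o in outputs])
order = np.argsort(t_all)
t_800hz, y_800hz = t_all[order], y_all[order]   # interleaved 800 Hz-equivalent series
```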

 

a Maximum intensity projection of the sCMOS (Hamamatsu Orca Flash 4.0 v3) and the PE-CMOS videos of a cultured neuron expressing the ASAP3 GEVI protein. b ROI time series from the sCMOS sampled at 800 Hz with pixel exposure (TE) of 1.25 ms. Black trace: ROI time series. Gray traces: the time series, each computed from 1/4 of the ROI pixels. Plotted signals are inverted from raw samples for visualization. c Simultaneously imaged ROI time series of the PE-CMOS. Colored traces: the time series of phase-shifted pixels at offsets (Δ) of 0, 1.25, 2.5, and 3.75 ms, each containing 1/4 of the ROI pixels. All pixels are sampled at 200 Hz with TE = 5 ms. Black trace: the interpolated ROI time series with 800 Hz equivalent sample rate. Black arrows: an example showing a spike exceeding the temporal resolution of a single phase being captured by phase-shifted pixels. Black circles: an example subthreshold event barely discernible in the sCMOS but visible in the PE-CMOS output. d, e, f: same as panels (a, b, c), with an example showing a spike captured by the PE-CMOS but not resolvable in the sCMOS output due to low SNR (marked by the magenta arrow). g, h Comparison of signal quality from smaller ROIs covering parts of the cell membrane. Gray boxes: zoomed-in view of a few examples of putative spiking events. i SNR of putative spike events from ROIs in panel (g). A putative spiking event is recorded when the signal from either output exceeds an SNR of 5. Data are presented as mean values +/- SD, two-sided Wilcoxon rank-sum test for equal medians, n = 93 events, p = 2.99 × 10⁻²⁴. The gain is calculated as the spike SNR in the PE-CMOS divided by the SNR in the sCMOS. The vertical SNR scale is 5 in all subfigures.

a The intracellular potential of the cell and the ROI GEVI time series of the PE-CMOS and sCMOS. GEVI pulse amplitude is the change in GEVI signal corresponding to each current injection pulse. It is measured as the difference between the average GEVI intensity during each current pulse and the average GEVI intensity 100 ms before and after the current injection pulse. GEVI pulse amplitude is converted into SNR by dividing by the noise standard deviation. b Maximum projection of the cell in the PE-CMOS and sCMOS. c Zoomed-in view of the intracellular voltage and GEVI pulses in (a). The red arrows indicate spike locations identified from the intracellular voltage. The black arrows indicate a time when the intracellular potential shows a flat response while the GEVI signals in both PE-CMOS and sCMOS exhibit significant amplitude variations. These can be mistaken for spiking events. d Zoomed-in view of (c) showing that the PE-CMOS trace can resolve two spikes with a small inter-spike interval, while the sCMOS at 800 Hz and 200 Hz both fail to do so. The blue arrows point to the first spike invoked by the current pulse. While the sharp rising edges make such spikes especially challenging for image sensors to sample, the PE-CMOS preserves their amplitudes better than the sCMOS.
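
The amplitude-to-SNR conversion described in the caption is easy to express in code. The sketch below is our paraphrase with hypothetical function and variable names, not the authors' analysis script.

```python
import numpy as np

def gevi_pulse_snr(trace, fs, pulse_start, pulse_end, baseline_win=0.100):
    """GEVI pulse amplitude and SNR as described in the figure caption (our naming).

    trace       : 1D ROI time series (arbitrary units)
    fs          : sample rate in Hz
    pulse_start : current-injection onset, seconds
    pulse_end   : current-injection offset, seconds
    baseline_win: 100 ms windows before and after the pulse
    """
    i0, i1 = int(pulse_start * fs), int(pulse_end * fs)
    b = int(baseline_win * fs)
    baseline = np.concatenate([trace[max(0, i0 - b):i0], trace[i1:i1 + b]])
    amplitude = trace[i0:i1].mean() - baseline.mean()   # GEVI pulse amplitude
    snr = amplitude / baseline.std()                     # amplitude / noise std
    return amplitude, snr
```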
 

a Maximum intensity projection of the PE-CMOS videos, raw and filtered (2 × 2 spatial box filter) output at full spatial resolution. Intensity is measured in digital bits (range: 0–1023). b Maximum intensity projection divided into four sub-frames according to pixel sampling speed, each with 1/4 spatial resolution. c The ROI time series from pixels of different speeds (colored traces). Black trace: a 1040 Hz equivalent signal interpolated across all ROI pixels. d Fast-sampling pixels (520 Hz) resolve high-SNR spike bursts. e–f Pixels with more prolonged exposure (TE = 2.8–5.7 ms) improve SNR to detect weak subthreshold activity (black arrow) and (f) a low-SNR spike. The vertical scale of SNR is 10 unless otherwise noted.


Open access article link: https://www.nature.com/articles/s41467-024-48765-5

Go to the original article...

PhD thesis on SciDVS event camera

Image Sensors World        Go to the original article...

Link: https://www.research-collection.ethz.ch/handle/20.500.11850/683623

Thesis title: A Scientific Event Camera: Theory, Design, and Measurements
Author: Rui Garcia
Advisor: Tobi Delbrück

Go to the original article...

Yole analysis of onsemi acquisition of SWIR Vision Systems

Image Sensors World        Go to the original article...

Article by Axel Clouet, Ph.D. (Yole Group)

Link: https://www.yolegroup.com/strategy-insights/onsemi-enters-the-realm-of-ir-by-acquiring-swir-vision-systems/

Onsemi, a leading CMOS image sensor supplier, has acquired SWIR Vision Systems, a pioneer in quantum-dots-based short-wave infrared (SWIR) imaging technology. Yole Group tracks and reports on these technologies through reports like Status of the CMOS Image Sensor 2024 and SWIR Imaging 2023. Yole Group’s Imaging Team discusses how this acquisition mirrors current industry trends.

SWIR Vision Systems pioneered the quantum dots platform

The SWIR imaging modality has long been used in defense and industrial applications, generating $97 million in revenue for SWIR imager suppliers in 2022. However, its adoption has been limited by the high cost of InGaAs technology, the historical platform necessary to capture these wavelengths, compared to standard CMOS technology. In recent years, SWIR has attracted interest with the emergence of lower-cost technologies like quantum dots and germanium-on-silicon, both compatible with CMOS fabs and anticipated to serve the mass markets in the long term.

SWIR Vision Systems, a U.S.-based start-up, pioneered the quantum dots platform, introducing the first-ever commercial product in 2018. This company is fully vertically integrated, making its own image sensors for integration into its own cameras. 

An acquisition aligned with Onsemi’s positioning

The CMOS image sensor industry was worth $21.8 billion in 2023 and is expected to reach $28.6 billion by 2029. With a market share of 6%, onsemi is the fourth largest CMOS image sensor supplier globally. The company is the leader in the fast-growing $2.3 billion automotive segment and has a significant presence in the industrial, defense and aerospace, and medical segments.

In the short term, SWIR products will help onsemi catch up with Sony’s InGaAs products in the industrial segment by leveraging the cost advantage of quantum dots. Its existing sales channels will facilitate the adoption of quantum dots technology by camera manufacturers.

Additionally, onsemi is set to establish long-term relationships with defense customers, a segment poised for growth due to global geopolitical instability. By acquiring SWIR Vision Systems and the East Fishkill CMOS fab completed in 2022, onsemi secured its supply chain, owns the SWIR strategic technology, and has a large-volume U.S.-based factory. It is, therefore, aligned with the dual-use approach promoted by the U.S. government for its local industry.

This acquisition will contribute to faster development and adoption of the quantum dots platform without disrupting the SWIR landscape. For onsemi, it is an attractive feature to quickly attract new customers in the industrial and defense sectors and a differentiating technology for the automotive segment in the long term.

 



Go to the original article...

Nuvoton introduces new 3D time-of-flight sensor

Image Sensors World        Go to the original article...

Link: https://www.prnewswire.com/news-releases/tof-sensor-for-enhanced-object-detection-and-safety-application-nuvoton-launches-new-3d-tof-sensor-with-integrated-distance-calculation-circuit-302197512.html

TOF Sensor for Enhanced Object Detection and Safety Application: Nuvoton Launches New 3D TOF Sensor with Integrated Distance Calculation Circuit

KYOTO, Japan, July 16, 2024 /PRNewswire/ -- Nuvoton Technology Corporation Japan is set to begin mass production of a 1/4-inch VGA (640x480 pixel) resolution 3D Time-of-Flight (TOF) sensor in July 2024. This sensor is poised to revolutionize the recognition of people and objects in various indoor and outdoor environments. This capability has been achieved through Nuvoton's unique pixel design technology and distance calculation/Image Signal Processor (ISP) technology.
 

1. High Accuracy in Bright Ambient Lighting Conditions
Leveraging Nuvoton's proprietary CMOS image sensor pixel technology and CCD memory technology, the new TOF sensor has four memories in each 5.6 µm square pixel, compared to the three memories of its conventional TOF sensor, and achieves accurate distance sensing by simultaneously controlling pulsed light sources and acquiring background light signals. It can provide precise recognition of the position and shape of people and objects under various ambient lighting conditions.


 

2. Accurate Distance Measurement for Moving Objects
With four embedded memories within each pixel, Nuvoton's new TOF sensor outputs distance images in a single frame. This innovative design significantly reduces motion blur and measurement errors in moving objects by capturing and calculating distance from four types of imaging signals within one frame. This feature is particularly suited for applications requiring dynamic object detection and recognition, such as obstacle detection for autonomous mobile robots (AMRs) and airbag intensity control in vehicles.
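
The press release does not spell out the distance math, but indirect ToF sensors with four per-pixel taps commonly use the standard four-phase (0°/90°/180°/270°) calculation. The sketch below shows that generic computation only; Nuvoton's on-chip calculation circuit and ISP are proprietary and may differ, and sign conventions vary between implementations.

```python
import numpy as np

C = 299_792_458.0   # speed of light, m/s

def itof_depth_4phase(q0, q90, q180, q270, f_mod):
    """Generic 4-phase indirect ToF depth estimate (not Nuvoton's actual pipeline).

    q0..q270 : per-pixel charge accumulated in the four phase-shifted taps
    f_mod    : modulation frequency of the light source, Hz
    """
    phase = np.arctan2(q270 - q90, q0 - q180)          # phase delay of the returned light
    phase = np.mod(phase, 2 * np.pi)                   # wrap into [0, 2*pi)
    depth = C * phase / (4 * np.pi * f_mod)            # one-way distance
    amplitude = 0.5 * np.hypot(q270 - q90, q0 - q180)  # active-light signal strength
    return depth, amplitude
```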

 

3. Integrated Distance Calculation Circuit for Fast and Accurate Sensing
Nuvoton's new TOF sensor is equipped with an integrated distance calculation circuit and a signal correction ISP, enabling it to output high-speed, high-precision distance (3D) images at up to 120 fps (QVGA) without delay. This eliminates the need for distance calculation by the system processor, reducing the processing overhead and enabling faster sensing systems. Additionally, the sensor can simultaneously output distance (3D) and IR (2D) images, useful for applications requiring both high precision and recognition/authentication functions.
 


For more information, please visit: https://www.nuvoton.com/products/image-sensors/3d-tof-sensors/kw330-series/
 

About Nuvoton Technology Corporation Japan: https://www.nuvoton.co.jp/en/

Go to the original article...

International Image Sensor Society Calls for Award Nominations

Image Sensors World        Go to the original article...

The International Image Sensor Society (IISS) calls for nominations for IISS Exceptional Lifetime Achievement Award, IISS Pioneering Achievement Award, and IISS Exceptional Service Award. The Awards are to be presented at the 2025 International Image Sensor Workshop (IISW) (to be held in Japan).
 
Description of Awards:

  • IISS Exceptional Lifetime Achievement Award. This Award is made to a member of the image sensor community who has made substantial sustained and exceptional contributions to the field of solid-state image sensors over the course of their career. (Established 2013)
  • IISS Pioneering Achievement Award. This award recognizes a person who made a pioneering achievement in image sensor technology that, judged with at least 10 years of hindsight, constitutes a foundational contribution. (Established 2015)
  • IISS Exceptional Service Award. This award is presented for exceptional service to the image sensor specialist community. This category recognizes activities such as editorial roles, conference leadership roles, and similar service beyond the nominee's contributions to the IISS itself. (Established 2011)

 
Submission deadline: all nominations must be received by October 1st, 2024 using the specified entry format.

Email for submissions: 2025nominations@imagesensors.org

Note: Self-nomination is discouraged.

Go to the original article...

SAE article on L3 autonomy

Image Sensors World        Go to the original article...

Link: https://www.sae.org/news/2024/07/adas-sensor-update

Are today’s sensors ready for next-level automated driving?

SAE Level 3 automated driving marks a clear break from the lower levels of driving assistance since it is the dividing line where the driver can be freed to focus on things other than driving. While the driver may sometimes be required to take control again, responsibility in an accident can be shifted from the driver to the automaker and suppliers. Only a few cars have received regulatory approval for Level 3 operation. Thus far, only Honda (in Japan), the Mercedes-Benz S-Class and EQS sedans with Drive Pilot, and BMW's recently introduced 7 Series offer Level 3 autonomy.

With more vehicles getting L3 technology and further automated driving skills being developed, we wanted to check in with some of the key players in this tech space and hear the latest industry thinking about best practices for ADAS and AV Sensors.

Towards More Accurate 3D Object Detection

Researchers from Japan's Ritsumeikan University have developed DPPFA-Net, an innovative network that combines 3D LiDAR and 2D image data to improve 3D object detection for robots and self-driving cars. Led by Professor Hiroyuki Tomiyama, the team addressed challenges in accurately detecting small objects and aligning 2D and 3D data, especially in adverse weather conditions.

DPPFA-Net incorporates three key modules:

  •  Memory-based Point-Pixel Fusion (MPPF): Enhances robustness against 3D point cloud noise by using 2D images as a memory bank.
  •  Deformable Point-Pixel Fusion (DPPF): Focuses on key pixel positions for efficient high-resolution feature fusion.
  •  Semantic Alignment Evaluator (SAE): Ensures semantic alignment between data representations during fusion.

The network outperformed existing models in the KITTI Vision Benchmark, achieving up to 7.18% improvement in average precision under various noise conditions. It also performed well in a new dataset with simulated rainfall.

Ritsumeikan University researchers said this advancement has significant implications for self-driving cars and robotics. It could lead to reduced accidents, improved traffic flow and safety, and enhanced robot capabilities in various applications. The improvements in 3D object detection are expected to contribute to safer transportation and accelerated development of autonomous systems.

AEVA

Aeva has introduced Atlas, the first 4D lidar sensor designed for mass-production automotive applications. Atlas aims to enhance advanced driver assistance systems (ADAS) and autonomous driving, meeting automotive-grade requirements.

  •  The company’s sensor is powered by two key innovations: the fourth-generation lidar-on-chip module, Aeva CoreVision, which incorporates all key lidar elements in a smaller package using silicon photonics technology.
  •  The new Aeva X1 system-on-chip (SoC) lidar processor, which integrates data acquisition, point cloud processing, the scanning system, and application software.

These innovations make Atlas 70% smaller and four times more power-efficient than Aeva's previous generation, enabling various integration options without active cooling. Atlas uses Frequency Modulated Continuous Wave (FMCW) 4D lidar technology, which offers improved object detection range and immunity to interference. It also provides a 25% greater detection range for low-reflectivity targets and a maximum range of 500 meters.
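
As background to the FMCW claims above, a target's range and radial velocity follow from the beat frequencies measured on the up- and down-chirp of a triangular frequency sweep. The snippet below shows the generic textbook relations; it is not Aeva-specific, and the Doppler sign convention depends on the implementation.

```python
C = 299_792_458.0   # speed of light, m/s

def fmcw_range_velocity(f_beat_up, f_beat_down, bandwidth, chirp_time, wavelength):
    """Generic triangular-chirp FMCW relations (illustrative, not Aeva's design).

    f_beat_up / f_beat_down : beat frequencies on the up- and down-chirp, Hz
    bandwidth               : chirp bandwidth, Hz
    chirp_time              : duration of one chirp ramp, s
    wavelength              : optical wavelength, m
    """
    f_range = 0.5 * (f_beat_up + f_beat_down)     # range-induced beat frequency
    f_doppler = 0.5 * (f_beat_down - f_beat_up)   # Doppler shift (sign convention varies)
    rng = C * chirp_time * f_range / (2 * bandwidth)
    velocity = wavelength * f_doppler / 2          # radial velocity of the target
    return rng, velocity
```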

Atlas is accompanied by Aeva’s perception software, which harnesses advanced machine learning-based classification, detection and tracking algorithms. Incorporating the additional dimension of velocity data, Aeva’s perception software provides unique advantages over conventional time-of-flight 3D lidar sensors.

Atlas is expected to be available for production vehicles starting in 2025, with earlier sample availability for select customers. Aeva's co-founder and CTO Mina Rezk said that Atlas will enable OEMs to equip vehicles with advanced safety and automated driving features at highway speeds, addressing previously unsolvable challenges. Rezk believes Atlas will accelerate the industry's transition to Frequency-Modulated Continuous-Wave 4D lidar technology, which is increasingly considered the end state for lidar due to its enhanced perception capabilities and unique instant velocity data.

Luminar

Following several rocky financial months and five years of development, global automotive technology company Luminar is launching Sentinel, its full-stack software suite. Sentinel enables automakers to accelerate advanced safety and autonomous functionality, including 3D mapping, simulation, and dynamic lidar features. A study by the Swiss Re Institute showed cars equipped with Luminar lidar and Sentinel software demonstrated up to 40% reduction in accident severity.

Developed primarily in-house with support from partners, including Scale AI, Applied Intuition, and Civil Maps (which Luminar acquired in 2022), Sentinel leverages Luminar's lidar hardware and AI-based software technologies.

CEO and founder Austin Russell said Luminar has been building next-generation AI-based safety and autonomy software since 2017. “The majority of major automakers don't currently have a software solution for next-generation assisted and autonomous driving systems,” he said. “Our launch couldn't be more timely with the new NHTSA mandate for next-generation safety in all U.S.-production vehicles by 2029, and as of today, we're the only solution we know of that meets all of these requirements.”

Mobileye

Mobileye has secured design wins with a major Western automaker for 17 vehicle models launching in 2026 and beyond. The deal covers Mobileye's SuperVision, Chauffeur, and Drive platforms, offering varying levels of autonomous capabilities from hands-off, eyes-on driving to fully autonomous robotaxis.

All systems will use Mobileye's EyeQ 6H chip, integrating sensing, mapping, and driving policy. The agreement includes customizable software to maintain brand-specific experiences.
CEO Amnon Shashua called this an "historic milestone" in automated driving, emphasizing the scalability of Mobileye's technology. He highlighted SuperVision's role as a bridge to eyes-off systems for both consumer vehicles and mobility services.

Initial driverless deployments are targeted for 2026.

BMW 

BMW's new 7 Series received the world's first approval for a combination of Level 2 and Level 3 driving assistance systems in the same vehicle. This milestone offers drivers unique benefits from both systems.
The Level 2 BMW Highway Assistant enhances comfort on long journeys, operating at speeds up to 81 mph (130 km/h) on motorways with separated carriageways. It allows drivers to take their hands off the steering wheel for extended periods while remaining attentive. The system can also perform lane changes autonomously or at the driver's confirmation.

The Level 3 BMW Personal Pilot L3 enables highly automated driving at speeds up to 37 mph (60 km/h) in specific conditions, such as motorway traffic jams. Drivers can temporarily divert their attention from the road, but they have to retake control when prompted.

This combination of systems offers a comprehensive set of functionalities for a more comfortable and relaxing driving experience on both long and short journeys. The BMW Personal Pilot L3, which includes both systems, is available exclusively in Germany for €6,000 (around $6,500). Current BMW owners can add the L2 Highway Assistant to their vehicle, if applicable, free of charge starting August 24.

Mercedes-Benz 

Mercedes-Benz’s groundbreaking Drive Pilot Level 3 autonomous driving system is available for the S-Class and EQS Sedan. It allows drivers to disengage from driving in specific conditions, such as heavy traffic under 40 mph (64 km/h) on approved freeways under certain circumstances. The system uses advanced sensors – including radar, lidar, ultrasound, and cameras – to navigate and make decisions.
While active, Drive Pilot enables drivers to use in-car entertainment features on the central display. However, drivers must remain alert and take control when requested. Drive Pilot functions under the following conditions:

  •  Clear lane markings on approved freeways
  •  Moderate to heavy traffic with speeds under 40 mph
  •  Daytime lighting and clear weather
  •  Driver visible by camera located above driver's display
  •  The car is not in a construction zone.

Drive Pilot relies on a high-definition 3D map of the road and surroundings. It's currently certified for use on major freeways in California and parts of Nevada.

NPS

At CES 2024, Neural Propulsion Systems (NPS) demonstrated its ultra-resolution imaging radar software for automotive vision sensing. The technology significantly improves radar precision without expensive lidar sensors or weather-related limitations.

NPS CEO Behrooz Rezvani likens the improvement to enhancing automotive imaging from 20/20 to better than 20/10 vision. The software enables existing sensors to resolve to one-third of the radar beam-width, creating a 10 times denser point cloud and reducing false positives by over ten times, the company said.

The demonstration compared performance using Texas Instruments 77 GHz chipsets with and without NPS technology. Former GM R&D vice president and Waymo advisor Lawrence Burns noted that automakers can use NPS to enhance safety, performance, and cost-effectiveness of driver-assistance features using existing hardware.

NPS' algorithms are based on the Atomic Norm framework, rooted in magnetic resonance imaging technology. The software can be deployed on various sensing platforms and implemented on processors with neural network capability. Advanced applications of NPS software with wide aperture multi-band radar enable seeing through physical barriers like shrubs, trees, and buildings — and even around corners. The technology is poised to help automakers meet NHTSA's proposed stricter standards for automatic emergency braking, aiming to reduce pedestrian and bicycle fatalities on U.S. roads.

Go to the original article...

Perovskite sensor with 3x more light throughput

Image Sensors World        Go to the original article...

Link: https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-101189.html


Dübendorf, St. Gallen and Thun, 28.05.2024 - Capturing three times more light: Empa and ETH researchers are developing an image sensor made of perovskite that could deliver true-color photos even in poor lighting conditions. Unlike conventional image sensors, where the pixels for red, green and blue lie next to each other in a grid, perovskite pixels can be stacked, thus greatly increasing the amount of light each individual pixel can capture.

Family, friends, vacations, pets: Today, we take photos of everything that comes in front of our lens. Digital photography, whether with a cell phone or camera, is simple and hence widespread. Every year, the latest devices promise an even better image sensor with even more megapixels. The most common type of sensor is based on silicon, which is divided into individual pixels for red, green and blue (RGB) light using special filters. However, this is not the only way to make a digital image sensor – and possibly not even the best.

A consortium comprising Maksym Kovalenko from Empa's Thin Films and Photovoltaics laboratory, Ivan Shorubalko from Empa's Transport at Nanoscale Interfaces laboratory, as well as ETH Zurich researchers Taekwang Jang and Sergii Yakunin, is working on an image sensor made of perovskite capable of capturing considerably more light than its silicon counterpart. In a silicon image sensor, the RGB pixels are arranged next to each other in a grid. Each pixel only captures around one-third of the light that reaches it. The remaining two-thirds are blocked by the color filter.
Pixels made of lead halide perovskites do not need an additional filter: it is already "built into" the material, so to speak. Empa and ETH researchers have succeeded in producing lead halide perovskites in such a way that they only absorb the light of a certain wavelength – and therefore color – but are transparent to the other wavelengths. This means that the pixels for red, green and blue can be stacked on top of each other instead of being arranged next to each other. The resulting pixel can absorb the entire wavelength spectrum of visible light. "A perovskite sensor could therefore capture three times as much light per area as a conventional silicon sensor," explains Empa researcher Shorubalko. Moreover, perovskite converts a larger proportion of the absorbed light into an electrical signal, which makes the image sensor even more efficient.
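
A back-of-the-envelope check of the "three times as much light" figure, under the idealized assumption of perfect color filters and complete absorption in each stacked layer (real sensors lose additional light to quantum efficiency, fill factor and reflections):

```python
# Idealized comparison (ignores QE, fill factor and reflection losses).
# A Bayer-filtered silicon pixel passes roughly one of three color bands,
# so it collects about 1/3 of the visible light reaching it.
bayer_fraction = 1 / 3

# A stacked perovskite pixel absorbs red, green and blue in successive layers,
# so in principle the full visible spectrum reaching the pixel is used.
stacked_fraction = 1.0

print(f"throughput ratio ~ {stacked_fraction / bayer_fraction:.1f}x")  # ~3.0x
```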

Kovalenko's team was first able to fabricate individual functioning stacked perovskite pixels in 2017. To make the next step towards real image sensors, the ETH-Empa consortium led by Kovalenko has partnered with the electronics industry. "The challenges to address include finding new materials fabrication and patterning processes, as well as design and implementation of the perovskite-compatible read-out electronic architectures", emphasizes Kovalenko. The researchers are now working on miniaturizing the pixels, which were originally up to five millimeters in size, and assembling them into a functioning image sensor. "In the laboratory, we don't produce the large sensors with several megapixels that are used in cameras," explains Shorubalko, "but with a sensor size of around 100'000 pixels, we can already show that the technology works."

Good performance with less energy
Another advantage of perovskite-based image sensors is their manufacture. Unlike other semiconductors, perovskites are less sensitive to material defects and can therefore be fabricated relatively easily, for example by depositing them from a solution onto the carrier material. Conventional image sensors, on the other hand, require high-purity monocrystalline silicon, which is produced in a slow process at almost 1500 degrees Celsius.

The advantages of perovskite-based image sensors are apparent. It is therefore not surprising that the research project also includes a partnership with industry. The challenge lies in the stability of perovskite, which is more sensitive to environmental influences than silicon. "Standard processes would destroy the material," says Shorubalko. "So we are developing new processes in which the perovskite remains stable. And our partner groups at ETH Zurich are working on ensuring the stability of the image sensor during operation."

If the project, which will run until the end of 2025, is successful, the technology will be ready for transfer to industry. Shorubalko is confident that the promise of a better image sensor will attract cell phone manufacturers. "Many people today choose their smartphone based on the camera quality because they no longer have a stand-alone camera," says the researcher. A sensor delivering excellent images in much poorer lighting conditions could be a major advantage.

Go to the original article...

Sony Imaging Business Strategy Meeting

Image Sensors World        Go to the original article...

Sony held a strategy meeting recently. Slides from the Imaging and Sensing business are available here: https://www.sony.com/en/SonyInfo/IR/library/presen/business_segment_meeting/pdf/2024/ISS_E.pdf

Go to the original article...

Videos du jour – Sony, onsemi

Image Sensors World        Go to the original article...

Sony UV and SWIR sensors demo:



Webinar by ON Semi on image sensor selection:



ON Semi Hyperlux image sensor demo:



Go to the original article...

Senseeker Expands Low-Noise Neon Digital Readout IC Family for SWIR Applications

Image Sensors World        Go to the original article...

The 10 µm, 1280 x 1024 Neon® RD0131 DROIC is available now for commercial use.


Santa Barbara, California (July 16th, 2024) — Senseeker Corp, a leading innovator of digital infrared image sensing technology, has announced the availability of the Neon® RD0131, an advanced digital readout integrated circuit (DROIC) that expands the Neon product family with the addition of a high-definition 1280 x 1024 format.

“The new larger format size of the Neon RD0131 is a welcome addition to the Neon DROIC family,” said Dr. Martin H. Ettenberg, President and CEO at Princeton Infrared Technologies. “Senseeker’s approach to offering families of compatible products allows reuse of test equipment, electronics and software, greatly simplifying the development of new high-performance SWIR cameras and imagers that we provide for the Industrial, Scientific and Defense markets.”

The Neon RD0131, with a 1280 x 1024 format and 10 µm pitch, has triple-gain modes with programmable well capacities of 22 ke-, 160 ke- and 1.1 Me-. The DROIC achieves a read noise of 15 electrons at room temperature in high-gain mode.
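
For context, the quoted well capacities and read noise imply the following per-gain dynamic range figures. This is our own illustrative arithmetic, not a Senseeker specification; the medium- and low-gain modes will have their own (higher) read noise, so the real numbers will differ.

```python
import math

read_noise_e = 15.0   # quoted room-temperature read noise in high gain
for label, well_e in [("high gain", 22e3), ("medium gain", 160e3), ("low gain", 1.1e6)]:
    dr_db = 20 * math.log10(well_e / read_noise_e)   # dynamic range in dB
    print(f"{label:>11}: {dr_db:.0f} dB")
# high gain ~63 dB; low-gain well referred to the high-gain read noise ~97 dB
```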

“The Neon RD0131 CTIA DROIC is the second chip in our Neon product family that has proven to be a hit with customers that are developing solutions for low-light applications such as short-wave infrared (SWIR) and low-current technologies such as quantum dot-based detectors,” said Kenton Veeder, President of Senseeker. “We have included the popular features and operating modes that Senseeker is known for, including on-chip temperature monitoring and programmable multiple high-speed windows to observe and track targets at thousands of frames per second.”

The Neon RD0131 is available in full or quarter wafers now and is supported by Senseeker’s CoaxSTACK™ electronics kit, CamIRa® imaging software and sensor test units (STUs) that, together, enable testing and evaluation of Neon-based focal plane arrays quickly and efficiently.

The Neon® RD0131-L10x is a low-noise, triple-gain digital readout integrated circuit (DROIC) that has a 10 µm pitch pixel with a capacitive transimpedance amplifier (CTIA) front-end circuit. This DROIC was developed for low-light applications such as short wave Infrared (SWIR) and low-current detector technologies such as quantum dot-based detectors. It has been designed for use in high operating temperature (HOT) conditions.

  • 10 μm pitch, P-on-N polarity, CTIA input
  • Global snapshot, integrate-while-read (IWR) operation
  • Three selectable gains with well capacities of 22 ke- (high gain), 160 ke- (medium gain) and 1.1 Me- (low gain)
  • Correlated double sampling (CDS) on and off chip
  • Advanced zero-signal noise floor of 15 e- rms (high gain using CDS, room temperature)
  • Synchronous or asynchronous integration control
  • High-speed windowing with multiple windows
  • Serialized output of 16 bits per pixel (15 data bits, 1 valid flag bit)
  • SPI control interface (SenSPI®) and optional frame clock

 Neon RD0131 dies on wafer

 
Image of a bruised apple captured using the Neon ROIC with a short-wave infrared (SWIR) detector.


 

Go to the original article...

AMS low-power global shutter sensor

Image Sensors World        Go to the original article...

Slides available here: https://www.project-mantis.eu/presentations_2023/ePicture_2023_AMSOSRAM_20230608121413.pdf

A subset of slides below:

Go to the original article...

International Image Sensor Workshop 2025 First Call for Papers

Image Sensors World        Go to the original article...


FIRST CALL FOR PAPERS
ABSTRACTS DUE DEC 19, 2024
2025 International Image Sensor Workshop Awaji Yumebutai Int. Conf. Center, Hyōgo, Japan
(June 2 - 5, 2025)

The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on Awaji Island in Hyōgo, Japan.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Papers on the following topics are solicited:

Image Sensor Design and Performance
CMOS imagers, CCD imagers, SPAD sensors
New and disruptive architectures
Global shutter image sensors
Low noise readout circuitry, ADC designs
Single photon sensitivity sensors
High frame rate image sensors
High dynamic range sensors
Low voltage and low power imagers
High image quality; Low noise; High sensitivity
Improved color reproduction
Non-standard color patterns with special digital processing
Imaging system-on-a-chip, on-chip image processing
Event-based image sensors

Pixels and Image Sensor Device Physics
New devices and pixel structures
Advanced materials
Ultra miniaturized pixels development, testing, and characterization
New device physics and phenomena
Electron multiplication pixels and imagers
Techniques for increasing QE, well capacity, reducing crosstalk, and improving angular response
Frontside illuminated, backside illuminated, and stacked pixels and pixel arrays
Pixel simulation: optical and electrical simulation, 2D and 3D, CAD for design and simulation, improved models

Application Specific Imagers
Image sensors and pixels for range sensing: LIDAR, TOF, RGBZ, structured light, stereo imaging, etc.
Image sensors with enhanced spectral sensitivity (NIR, UV, IR)
Sensors for DSC, DSLR, mobile, digital video cameras and mirror-less cameras
Array imagers and sensors for multi-aperture imaging, computational imaging, and machine learning
Sensors for medical applications, microbiology, genome sequencing
High energy photon and particle sensors (X-ray, radiation)
Line arrays, TDI, very large format imagers
Multi and hyperspectral imagers
Polarization sensitive imagers

Image Sensor Manufacturing and Testing
New manufacturing techniques
Wafer-on-wafer and chip-on-wafer stacking technologies
Backside thinning
New characterization methods
Packaging and testing: reliability, yield, cost
Defects, noises, and leakage currents
Radiation damage and radiation hard imagers

On-chip Optics and Color Filters
Advanced optical path, color filters, microlens, light guides
Nanotechnologies for Imaging
Wafer level cameras

Submission of abstracts:

An abstract should consist of a single page of text (500 words maximum) with up to two pages of illustrations (3 pages maximum), and should include the authors' name(s), affiliation, mailing address, telephone number, and e-mail address.

The deadline for abstract submission is 11:59pm, Thursday Dec 19, 2024 (GMT).
To submit an abstract, please go to: https://cmt3.research.microsoft.com/IISW2025
The submission website should be open by Aug 1, 2024.

The first time you visit the paper submission site, you'll need to click on "Create Account". Once you create and verify your account with your email address, you will be able to submit abstracts by logging in and clicking “Create New Submission”.

Please visit https://imagesensors.org/CFP2025 for complete instructions and any updates to the abstract and paper submission procedures.

Abstracts will be considered on the basis of originality and quality. High quality papers on work in progress are also welcome. Abstracts will be reviewed confidentially by the Technical Program Committee.

Key Dates:
Authors will be notified of the acceptance of their abstract latest by Feb 10, 2025.
Final-form 4-page paper submission date is Mar 22, 2025.
Presentation material submission date is May 1, 2025.

Location:
The IISW 2025 will be held at the International Conference Center on Awaji Island in Hyōgo Prefecture, Japan. This beautiful hotel is about 1 hour from Kansai International Airport. Limousine Buses chartered by IISW will pick up attendees at JR Shin-Kobe Station and JR Sannomiya Station.

Registration, Workshop fee, and Hotel Reservation:
Registration details and hotel reservation information will be provided in the Final Announcement of the Workshop.

Forthcoming announcements and additional information will be posted on the 2025 Workshop page of the International Image Sensor Society website at: https://www.imagesensors.org/



Go to the original article...

Last chance to buy Sony CCD sensors

Image Sensors World        Go to the original article...

We shared back in 2015 news of Sony discontinuing their CCD sensors.

The "last time buy" period for these sensors is nearing the end.

Framos: https://www.framos.com/en/news/framos-announces-last-time-buy-deadline-for-sony-ccd-sensors

Taking into consideration current market demand and customer feedback, Sony has decided to revise the “Last Time Buy PO submission” deadline to the End of September 2024. Final shipments to FRAMOS remain unchanged at the end of March 2026. With these changes, FRAMOS invites all customers to submit their final Last Time Buy Purchase Orders to them no later than September 24th, 2024, to ensure timely processing and submission to Sony by the new Last Time Buy deadline date.
Important dates:
 Deadline for Last Time Buy Purchase Orders received by FRAMOS: September 24th, 2024
 Final delivery of accepted Last Time Buy Purchase Orders from FRAMOS: March 31st, 2026 

SVS-Vistek: https://www.svs-vistek.com/en/news/svs-news-article.php?p=svs-vistek-offers-last-time-buy-options-or-replacement-products-for-ccd-cameras

For customers who wish to continue using CCD-based designs, SVS-Vistek has initiated a Last-Time-Buy (LTB) period, effective immediately, followed by a subsequent Last-Time-Delivery (LTD) period. This allows our customers to continue to produce and sell their CCD-based products, ensuring reliable delivery. Orders can be placed until August 31, 2024 (Last-Time-Buy). SVS-Vistek will then offer delivery of LTB cameras until August 31, 2026 (Last-Time-Delivery). We advise our customers individually and try to find the best solution together. 

Go to the original article...

Forbes blog on Obsidian thermal imagers

Image Sensors World        Go to the original article...

Link: https://www.forbes.com/sites/davidhambling/2024/05/22/new-us-technology-makes-more--powerful-thermal-imagers-at-lower-cost/

[some excerpts below]

New U.S. Technology Makes More Powerful Thermal Imagers At Lower Cost 

Thermal imaging has been a critical technology in the war in Ukraine, spotting warm targets like vehicles and soldiers in the darkest nights. Military-grade thermal imagers used on big Baba Yaga night bombers are far too expensive for drone makers assembling $400 FPV kamikaze drones, who have to rely on lower-cost devices. But a new technology developed by U.S. company Obsidian Sensors Inc could transform the thermal imaging market with affordable high-resolution sensors.

...

Older digital cameras were based on CCDs (charge-coupled devices); the current generation uses more affordable CMOS imaging sensors, which produce an electrical charge in response to light. The vast majority of thermal imagers use a different technology: an array of microbolometers, miniature devices whose pixels absorb infrared energy and measure the resulting change in resistance. The conventional design neatly integrates the microbolometers and the circuits which read them on the same silicon chip.

...

John Hong, CEO of San Diego-based Obsidian Sensors, believes he has a better approach, one that can scale up to high resolution at low cost and, crucially, high volume at established foundries. The new design does not integrate everything in one unit but separates the bolometer array from the readout circuits. This is more complex but allows a different manufacturing technique to be used.

The readout circuits are still on silicon, but the sensor array is produced on a sheet of glass, leveraging technology perfected for flat-screen TVs and mobile phone displays. Large sheets of glass are far cheaper to process than small wafers of silicon and bolometers made on glass cost about a hundred times less than on silicon.

Hong says the process can easily produce multi-megapixel arrays. Obsidian is already producing test batches of VGA sensors, and plans to move to 1280x1024 this year and 1920x1080 in 2025.
Obsidian has been quietly developing its technology for six years and is now able to produce units for evaluation at a price three to four times lower than comparable models. Further evolution of the manufacturing process will bring prices even lower.

That could bring a 640x480 VGA sensor imager down to well below $200.

...

Hong says they plan to sell a thousand VGA cameras this year on a pilot production run, and are currently raising a series B to hit much larger volumes in 2025 and beyond. That should be just about right to surf the wave of demand in the next few years.

 

The thermal image from Obsidian's sensor (left) shows pedestrians who are invisible in the glare in a digital camera image (right) [Obsidian Sensors]


Go to the original article...

Albert Theuwissen lecture on CIS stacking technology

Image Sensors World        Go to the original article...

Go to the original article...

Videos du jour : under display cameras, SPADs

Image Sensors World        Go to the original article...

 


Designing Phase Masks for Under-Display Cameras

Diffractive blur and low light levels are two fundamental challenges in producing high-quality photographs in under-display cameras (UDCs). In this paper, we incorporate phase masks on display panels to tackle both challenges. Our design inserts two phase masks, specifically two microlens arrays, in front of and behind a display panel. The first phase mask concentrates light on the locations where the display is transparent so that more light passes through the display, and the second phase mask reverts the effect of the first phase mask. We further optimize the folding height of each microlens to improve the quality of PSFs and suppress chromatic aberration. We evaluate our design using a physically-accurate simulator based on Fourier optics. The proposed design is able to double the light throughput while improving the invertibility of the PSFs. Lastly, we discuss the effect of our design on the display quality and show that implementation with polarization-dependent phase masks can leave the display quality uncompromised.
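
As a point of reference for the Fourier-optics evaluation mentioned in the abstract, a far-field PSF can be computed from an aperture and a phase mask with a single FFT of the complex pupil function. The sketch below is a textbook Fraunhofer-propagation toy model with made-up parameters, not the paper's physically-accurate simulator.

```python
import numpy as np

def psf_from_phase_mask(aperture, phase, oversample=4):
    """Toy Fraunhofer-propagation PSF from an aperture and a phase mask.

    aperture : 2D transmission of the display/aperture (0..1)
    phase    : 2D phase delay added by the mask, radians
    """
    pupil = aperture * np.exp(1j * phase)
    n = pupil.shape[0] * oversample
    field = np.fft.fftshift(np.fft.fft2(pupil, s=(n, n)))   # far-field amplitude
    psf = np.abs(field) ** 2
    return psf / psf.sum()                                   # normalize total energy

# Example: a pinhole-array "display" with and without a lens-like quadratic phase
x = np.linspace(-1, 1, 256)
X, Y = np.meshgrid(x, x)
display_aperture = ((np.mod(X, 0.1) < 0.05) & (np.mod(Y, 0.1) < 0.05)).astype(float)
microlens_phase = -40.0 * (X ** 2 + Y ** 2)          # hypothetical phase profile
psf_plain = psf_from_phase_mask(display_aperture, np.zeros_like(display_aperture))
psf_masked = psf_from_phase_mask(display_aperture, microlens_phase)
```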

 

 


Passive Ultra-Wideband Single-Photon Imaging

We consider the problem of imaging a dynamic scene over an extreme range of timescales simultaneously—seconds to picoseconds—and doing so passively, without much light, and without any timing signals from the light source(s) emitting it. Because existing flux estimation techniques for single-photon cameras break down in this regime, we develop a flux probing theory that draws insights from stochastic calculus to enable reconstruction of a pixel’s time-varying flux from a stream of monotonically-increasing photon detection timestamps. We use this theory to (1) show that passive free-running SPAD cameras have an attainable frequency bandwidth that spans the entire DC-to-31 GHz range in low-flux conditions, (2) derive a novel Fourier-domain flux reconstruction algorithm that scans this range for frequencies with statistically-significant support in the timestamp data, and (3) ensure the algorithm’s noise model remains valid even for very low photon counts or non-negligible dead times. We show the potential of this asynchronous imaging regime by experimentally demonstrating several never-seen-before abilities: (1) imaging a scene illuminated simultaneously by sources operating at vastly different speeds without synchronization (bulbs, projectors, multiple pulsed lasers), (2) passive non-line-of-sight video acquisition, and (3) recording ultra-wideband video, which can be played back later at 30 Hz to show everyday motions—but can also be played a billion times slower to show the propagation of light itself.
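
The paper develops a full stochastic-calculus treatment with dead-time and noise-model corrections; the basic idea of scanning frequencies for statistically significant support in a photon timestamp stream can nevertheless be illustrated with a crude periodogram. The sketch below is our own simplification, not the authors' algorithm.

```python
import numpy as np

def timestamp_periodogram(timestamps, freqs):
    """Scan candidate frequencies for periodic structure in photon arrival times.

    For a Poisson stream whose rate is modulated at frequency f,
    |sum_k exp(-2*pi*i*f*t_k)|^2 rises above the flat shot-noise background.
    This omits the paper's dead-time and noise-model corrections.
    """
    t = np.asarray(timestamps)
    spectrum = np.array([np.abs(np.exp(-2j * np.pi * f * t).sum()) ** 2 for f in freqs])
    return spectrum / len(t)           # normalize by photon count

# Toy usage: photons from a source flickering at 1 kHz, detected by thinning
rng = np.random.default_rng(0)
t_dense = np.arange(0, 1.0, 1e-6)
rate = 5_000 * (1 + 0.8 * np.sin(2 * np.pi * 1_000 * t_dense))   # photons/s
detections = t_dense[rng.random(t_dense.size) < rate * 1e-6]
freqs = np.linspace(500, 1_500, 201)
spec = timestamp_periodogram(detections, freqs)    # peaks near 1 kHz
```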


 
SoDaCam: Software-defined Cameras via Single-Photon Imaging

Reinterpretable cameras are defined by their post-processing capabilities that exceed traditional imaging. We present "SoDaCam" that provides reinterpretable cameras at the granularity of photons, from photon-cubes acquired by single-photon devices. Photon-cubes represent the spatio-temporal detections of photons as a sequence of binary frames, at frame-rates as high as 100 kHz. We show that simple transformations of the photon-cube, or photon-cube projections, provide the functionality of numerous imaging systems including: exposure bracketing, flutter shutter cameras, video compressive systems, event cameras, and even cameras that move during exposure. Our photon-cube projections offer the flexibility of being software-defined constructs that are only limited by what is computable, and shot-noise. We exploit this flexibility to provide new capabilities for the emulated cameras. As an added benefit, our projections provide camera-dependent compression of photon-cubes, which we demonstrate using an implementation of our projections on a novel compute architecture that is designed for single-photon imaging.
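
To make the "photon-cube projection" idea concrete, here is a rough sketch of three such projections over a stack of binary frames: summing a window of frames emulates an exposure of that length, weighting frames with a 0/1 code emulates a flutter-shutter exposure, and thresholding differences between successive windows crudely emulates event-camera output. This is our own simplified illustration, not the SoDaCam implementation.

```python
import numpy as np

def exposure_bracket(photon_cube, window_frames):
    """Sum binary frames over a window -> an emulated exposure of that length."""
    return photon_cube[:window_frames].sum(axis=0)

def flutter_shutter(photon_cube, code):
    """Weight frames by a 0/1 flutter code before summing (coded exposure)."""
    code = np.asarray(code).reshape(-1, 1, 1)
    return (photon_cube[:len(code)] * code).sum(axis=0)

def emulate_events(photon_cube, window_frames, threshold):
    """Crude event emulation: sign of brightness change between successive windows."""
    n = photon_cube.shape[0] // window_frames
    frames = photon_cube[:n * window_frames].reshape(
        n, window_frames, *photon_cube.shape[1:]).sum(axis=1)
    diff = np.diff(frames.astype(np.int32), axis=0)
    return np.sign(diff) * (np.abs(diff) >= threshold)

# Toy photon-cube: 1000 binary frames of 64x64 Bernoulli detections
rng = np.random.default_rng(1)
cube = (rng.random((1000, 64, 64)) < 0.02).astype(np.uint8)
short_exp = exposure_bracket(cube, 50)
long_exp = exposure_bracket(cube, 800)
coded_exp = flutter_shutter(cube, rng.integers(0, 2, 512))
events = emulate_events(cube, 100, threshold=3)
```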

Go to the original article...

PetaPixel article on Samsung’s 200MP sensor

Image Sensors World        Go to the original article...

Full article here: https://petapixel.com/2024/06/27/samsung-announces-worlds-first-200mp-sensor-for-telephoto-cameras/


Samsung Unveils World’s First 200MP Sensor for Smartphone Telephoto Cameras

 


Samsung has announced three new image sensors for main and sub cameras in upcoming smartphones. Among the trio of new chips, Samsung unveiled the world’s first 200-megapixel telephoto camera sensor for mobile devices.

The ISOCELL HP9, the industry’s first 200MP telephoto sensor for smartphones, features a Type 1/1.4 format and 0.56μm pixel size. Samsung explains that the sensor has a proprietary high-refractive microlens that uses a novel material and significantly improves the sensor’s light-gathering capabilities. This works by more precisely directing light to the corresponding RGB color filter. Samsung claims this results in 12% better light sensitivity (based on signal-to-noise ratio 10) and 10% improved autofocus contrast performance compared to Samsung’s prior telephoto sensor. 

“Notably, the HP9 excels in low-light conditions, addressing a common challenge for traditional telephoto cameras. Its Tetra²pixel technology merges 16 pixels (4×4) into a large, 12MP 2.24μm-sized sensor, enabling sharper portrait shots — even in dark settings — and creating dramatic out-of-focus bokeh effects,” the Korean tech giant explains.

When used alongside a new remosaic algorithm, Samsung says its new HP9 sensor offers 2x or 4x in-sensor zoom modes, achieving up to 12x total zoom when paired with a 3x optical zoom telephoto module, “all while maintaining crisp image quality.”
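
A quick arithmetic check of the binning and zoom figures quoted above (our own check of the stated numbers):

```python
# Arithmetic behind the quoted HP9 figures (our check, not Samsung's wording).
full_res_mp = 200
pixel_pitch_um = 0.56

binned_mp = full_res_mp / 16            # 4x4 Tetra^2pixel binning -> 12.5 MP (quoted as "12MP")
binned_pitch_um = pixel_pitch_um * 4    # 0.56 um * 4 = 2.24 um effective pixel

optical_zoom = 3
in_sensor_zoom = 4                      # 4x crop/remosaic mode
total_zoom = optical_zoom * in_sensor_zoom   # 3x optical * 4x in-sensor = 12x
print(binned_mp, binned_pitch_um, total_zoom)
```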

Next is the ISOCELL GNJ, a dual-pixel 50-megapixel image sensor in Type 1/1.57 format. This sensor sports 1.0μm pixels, and each pixel includes a pair of photodiodes, enabling “fast and accurate autofocus, similar to the way human eyes focus.” The sensor also captures complete color information, which Samsung says helps with focusing and image quality.

The sensor utilizes an in-sensor zoom function, which promises good video quality. It also offers benefits for still photography, as Samsung says the in-sensor zoom function can reduce artifacts and moiré.

Thanks to an improved high-transmittance anti-reflective layer (ARL), plus Samsung’s high-refractive microlenses, the GNJ boasts better light transmission and promises consistent image quality. It also has an upgraded pixel isolation material to minimize the crosstalk between adjacent pixels, resulting in more detailed, accurate photos.

As Samsung notes, these improvements also result in a more power-efficient design. The sensor offers a 29% improvement in live view power efficiency and a 34% reduction in power use when shooting 4K/60p video.

Rounding out the three new sensors is the ISOCELL JN5, a 50-megapixel Type 1/2.76 sensor with 0.64μm pixels. Because of its slim optical format, the new JN5 sensor can be used across primary and sub-cameras, including ultra-wide, wide, telephoto, and front-facing camera units.

The sensor includes dual vertical transfer gate (Dual VTG) technology to increase charge transfer within pixels, which reduces noise in extremely low-light conditions. It also leverages Super Quad Phase Detection (Super QPD) to rapidly adjust focus when capturing moving subjects.

Yet another fancifully named feature is dual slope gain (DSG), which Samsung says enhances the JN5’s high-dynamic range (HDR) performance. This works by amplifying analog signals (photons) into two signals, converting them into digital data, and combining them. This sounds similar to dual ISO technology, which expands dynamic range by combining low-gain and high-gain data into a single file.
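
The article likens DSG to dual-ISO capture. As a generic illustration of how two gain readouts of the same exposure can be merged into one extended-range value (not Samsung's actual DSG pipeline), consider:

```python
import numpy as np

def merge_dual_gain(low_gain_dn, high_gain_dn, gain_ratio, high_gain_sat_dn):
    """Toy dual-gain merge (illustrative of dual-ISO-style HDR, not Samsung's DSG).

    low_gain_dn      : low-conversion-gain readout (keeps highlights)
    high_gain_dn     : high-conversion-gain readout (cleaner shadows)
    gain_ratio       : high gain / low gain
    high_gain_sat_dn : level above which the high-gain channel clips
    """
    # Refer the high-gain readout to the low-gain scale, then prefer it wherever
    # it is not clipped; fall back to the low-gain sample in the highlights.
    high_as_low = high_gain_dn / gain_ratio
    use_high = high_gain_dn < high_gain_sat_dn
    return np.where(use_high, high_as_low, low_gain_dn)
```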

Go to the original article...

onsemi acquires SWIR Vision Systems

Image Sensors World        Go to the original article...

From Businesswire: https://www.businesswire.com/news/home/20240702703913/en/onsemi-Enhances-Intelligent-Sensing-Portfolio-with-Acquisition-of-SWIR-Vision-Systems

onsemi Enhances Intelligent Sensing Portfolio with Acquisition of SWIR Vision Systems

SCOTTSDALE, Ariz.--(BUSINESS WIRE)--As part of onsemi’s continuous drive to provide the most robust, cutting-edge technologies for intelligent image sensing, the company announced today it has completed the acquisition of SWIR Vision Systems®. SWIR Vision Systems is a leading provider of CQD® (colloidal quantum-dot-based) short wavelength infrared (SWIR) technology – a technology that extends the detectable light spectrum to see through objects and capture images that were not previously possible. The integration of this patented technology within onsemi’s industry-leading CMOS sensors will significantly enhance the company’s intelligent sensing product portfolio and pave the way for further growth in key markets including industrial, automotive and defense.

CQD uses nanoparticles or crystals with unique optical and electronic properties that can be precisely tuned to absorb an extended wavelength of light. This technology extends the visibility and detection of systems beyond the range of standard CMOS sensors to SWIR wavelengths. To date, SWIR technology has been limited in adoption due to the high cost and manufacturing complexity of the traditional indium gallium arsenide (InGaAs) process. With this acquisition, onsemi will combine its silicon-based CMOS sensors and manufacturing expertise with the CQD technology to deliver highly integrated SWIR sensors at lower cost and higher volume. The result is more compact, cost-effective imaging systems that offer an extended spectrum and can be used in a wide array of commercial, industrial and defense applications.

These advanced SWIR sensors are able to see through dense materials, gases, fabrics and plastics, which is essential across many industries, particularly for industrial applications such as surveillance systems, silicon inspection, machine vision imaging and food inspection. In autonomous vehicle imaging, the extended spectral range will provide better visibility through difficult conditions such as extreme darkness, thick fog or winter glare.

SWIR Vision Systems is now a wholly owned subsidiary of onsemi, with its highly skilled team being integrated into the company’s Intelligent Sensing Group. The team will continue to operate in North Carolina. The acquisition is not expected to have any meaningful impact on onsemi’s near to midterm financial outlook.

Go to the original article...

Cambridge Mechatronics CEO interview: Capturing the smartphone camera market and more

Image Sensors World        Go to the original article...

 

In this episode of the Be Inspired series, Andy Osmant, CEO of Cambridge Mechatronics explains the countless use cases for the company’s shape memory alloy (SMA) actuators, from smart phone cameras to insulin pumps, and how they decided which markets to target. Andy also delves into their experience changing business models to also sell semiconductors, and how being part of the Cambridge ecosystem has supported the growth of the business.
 

0:00-3:54 About Cambridge Mechatronics
3:54-5:14 Controlling SMA
5:14-9:15 Supply chains and relationships
9:15-11:56 Other use cases
11:56-15:51 The Cambridge ecosystem
15:51-19:36 Looking ahead

Go to the original article...

Sony announces IMX901/902 wide aspect ratio global shutter CIS

Image Sensors World        Go to the original article...

Press release: https://www.sony-semicon.com/en/info/2024/2024062701.html

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx901-902.html

Sony Semiconductor Solutions to Release 8K Horizontal, Wide-Aspect Ratio Global Shutter Image Sensor for C-mount Lenses That Delivers High Image Quality and High-Speed Performance

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) announced today the upcoming release of the IMX901, a wide-aspect ratio global shutter CMOS image sensor with 8K horizontal resolution and approximately 16.41 effective megapixels. The IMX901 supports C-mount lenses, which are widely used in industrial applications, and offers high image quality and high-speed performance, helping to solve a variety of industrial challenges.

The new sensor provides high resolution and a wide field of view with 8K horizontal and 2K vertical pixels. In addition, it features Pregius S™, a global shutter technology with a unique pixel structure, to deliver low-noise, high-quality, high-speed, and distortion-free imaging in a compact size.

In addition to this product, SSS will also release the IMX902, which has 6K horizontal and 2K vertical pixels and approximately 12.38 effective megapixels, to expand its product lineup of global shutter image sensors.

In today's logistics systems, where belt conveyors are seeing wider belt widths and faster speeds, there is a growing demand for image sensors that can expand the imaging area for barcode reading and improve imaging performance and efficiency. Typically, multiple cameras are required to capture the entire belt conveyor in the field of view, which can lead to concerns about increased camera system size and costs.

A single camera equipped with the new sensor announced today can capture a wide area horizontally, helping to reduce the number of cameras and the associated cost compared to conventional methods. In addition, leveraging SSS's original back-illuminated structure, Pregius S, the new product delivers both distortion-free high-speed imaging and high image quality. The product also features a wide dynamic range exceeding 70 dB and clearly captures fast-moving objects at a high frame rate of 134 fps.
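For a rough sense of the data volume such a sensor produces, the stated figures imply a pixel throughput of about 2.2 Gpixel/s. The minimal back-of-envelope sketch below assumes the quoted ~16.41 effective megapixels and 134 fps; the output bit depths are assumptions, since the press release does not specify them.

```python
# Back-of-envelope pixel throughput for the new sensor.
# Hedged: the output bit depths below are assumptions, not from the press release.
effective_pixels = 16.41e6      # approx. effective megapixels (from the announcement)
frame_rate_fps = 134            # maximum frame rate (from the announcement)

pixel_rate = effective_pixels * frame_rate_fps   # pixels per second

for bit_depth in (8, 10, 12):   # assumed possible output bit depths
    raw_gbit_s = pixel_rate * bit_depth / 1e9
    print(f"{bit_depth}-bit output: {pixel_rate / 1e9:.2f} Gpixel/s, ~{raw_gbit_s:.0f} Gbit/s raw")
```

At roughly 2.2 Gpixel/s, even an 8-bit raw stream approaches 18 Gbit/s, which is why high-speed sensor interfaces matter for wide-aspect, high-frame-rate machine vision.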

This product, which can capture images in wide aspect ratio with high image quality and high speed, can be used for barcode reading on belt conveyors at logistics facilities, machine vision inspections and appearance inspections to detect fine defects and scratches, and other applications. 

 





Go to the original article...

Omnivision presents event camera deblurring paper at CVPR 2024

Image Sensors World        Go to the original article...

EVS-assisted Joint Deblurring Rolling-Shutter Correction and Video Frame Interpolation through Sensor Inverse Modeling

Event-based Vision Sensors (EVS) are gaining popularity for enhancing CMOS Image Sensor (CIS) video capture. Nonidealities of EVS such as pixel or readout latency can significantly influence the quality of the enhanced images and warrant dedicated consideration in the design of fusion algorithms. A novel approach is presented for jointly computing deblurred, rolling-shutter-artifact-corrected high-speed videos with frame rates up to 10000 FPS, using inherently blurry rolling shutter CIS frames at 120 FPS to 150 FPS in conjunction with EVS data from a hybrid CIS-EVS sensor. EVS pixel latency, readout latency and the sensor's refractory period are explicitly incorporated into the measurement model. The resulting inverse problem is solved in a per-pixel manner using an optimization-based framework. The interpolated images are subsequently processed by a novel refinement network. The proposed method is evaluated using simulated and measured datasets, under natural and controlled environments. Extensive experiments show a reduced shadowing effect, a 4 dB increase in PSNR, and a 12% improvement in LPIPS score compared to state-of-the-art methods.
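The measurement-model idea can be sketched per pixel: a blurry CIS sample is the time-average of a latent intensity over its exposure, while events mark quantized log-intensity changes between readouts. The snippet below is a minimal, hedged illustration of that forward model, not the paper's actual algorithm: it ignores the EVS pixel latency, readout latency and refractory period that the paper explicitly models, and the function names and contrast threshold are assumptions.

```python
import numpy as np

CONTRAST = 0.2  # assumed event contrast threshold (log-intensity units)

def relative_intensity(event_times, event_polarities, t_ref, t_query):
    """Latent intensity at t_query relative to its value at t_ref, using the
    standard event-generation model: each event marks a +/- CONTRAST step in
    log intensity. Pixel latency, readout latency and the refractory period
    modeled in the paper are ignored in this sketch."""
    lo, hi = min(t_ref, t_query), max(t_ref, t_query)
    mask = (event_times > lo) & (event_times <= hi)
    step_sum = np.sum(event_polarities[mask]) * np.sign(t_query - t_ref)
    return np.exp(CONTRAST * step_sum)

def blur_residual(L_ref, event_times, event_polarities, t0, t1, blurry_value, n=50):
    """Forward model for one pixel: the blurry CIS sample is the average of the
    latent intensity over the exposure [t0, t1]. The returned residual would be
    minimized per pixel (e.g. least squares) to recover the unknown L_ref."""
    ts = np.linspace(t0, t1, n)
    predicted = np.mean([L_ref * relative_intensity(event_times, event_polarities, t0, t)
                         for t in ts])
    return predicted - blurry_value
```

Solving for L_ref pixel by pixel, then sampling relative_intensity at arbitrary times, is the general mechanism by which event data can yield sharp, high-rate frames from a blurry exposure.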

 



Go to the original article...

CEA-Leti announces three-layer CIS

Image Sensors World        Go to the original article...

CEA-Leti Reports Three-Layer Integration Breakthrough On the Path for Offering AI-Embedded CMOS Image Sensors
 
This Work Demonstrates Feasibility of Combining Hybrid Bonding and High-Density Through-Silicon Vias
 
DENVER – May 31, 2024 – CEA-Leti scientists reported a series of successes in three related projects at ECTC 2024 that are key steps to enabling a new generation of CMOS image sensors (CIS) that can exploit all the image data to perceive a scene, understand the situation and intervene in it – capabilities that require embedding AI in the sensor.
 
Demand for smart sensors is growing rapidly because of their high-performance imaging capabilities in smartphones, digital cameras, automobiles and medical devices. This demand for improved image quality and functionality enhanced by embedded AI has presented manufacturers with the challenge of improving sensor performance without increasing the device size.
 
“Stacking multiple dies to create 3D architectures, such as three-layer imagers, has led to a paradigm shift in sensor design,” said Renan Bouis, lead author of the paper, “Backside Thinning Process Development for High-Density TSV in a 3-Layer Integration”.
 
“The communication between the different tiers requires advanced interconnection technologies, a requirement that hybrid bonding meets because of its very fine pitch in the micrometer and even sub-micrometer range,” he said. “High-density through-silicon vias (HD TSVs) offer a similar density, enabling signal transmission through the middle tiers. Both technologies contribute to the reduction of wire length, a critical factor in enhancing the performance of 3D-stacked architectures.”
 
‘Unparalleled Precision and Compactness’
 
The three projects applied the institute’s previous work on stacking three 300 mm silicon wafers using those technology bricks. “The papers present the key technological bricks that are mandatory for manufacturing 3D, multilayer smart imagers capable of addressing new applications that require embedded AI,” said Eric Ollier, project manager at CEA-Leti and director of IRT Nanoelec’s Smart Imager program. The CEA-Leti institute is a major partner of IRT Nanoelec.
 
“Combining hybrid bonding with HD TSVs in CMOS image sensors could facilitate the integration of various components, such as image sensor arrays, signal processing circuits and memory elements, with unparalleled precision and compactness,” said Stéphane Nicolas, lead author of the paper, “3-Layer Fine Pitch Cu-Cu Hybrid Bonding Demonstrator With High Density TSV For Advanced CMOS Image Sensor Applications,” which was chosen as one of the conference’s highlighted papers.
 
The project developed a three-layer test vehicle that featured two embedded Cu-Cu hybrid-bonding interfaces, face-to-face (F2F) and face-to-back (F2B), and with one wafer containing high-density TSVs.
 
Ollier said the test vehicle is a key milestone because it demonstrates both the feasibility of each technological brick and the feasibility of the overall integration process flow. “This project sets the stage to work on demonstrating a fully functional three-layer, smart CMOS image sensor, with edge AI capable of addressing high-performance semantic segmentation and object-detection applications,” he said.
 
At ECTC 2023, CEA-Leti scientists reported a two-layer test vehicle combining a 10-micron-high, 1-micron-diameter HD TSV and highly controlled hybrid bonding technology, both assembled in F2B configuration. The recent work shortened the HD TSV to six microns, leading to a two-layer test vehicle that exhibits low-dispersion electrical performance and enables simpler manufacturing.
 
’40 Percent Decrease in Electrical Resistance’
 
“Our 1-by-6-micron copper HD TSV offers improved electrical resistance and isolation performance compared to our 1-by-10-micron HD TSV, thanks to an optimized thinning process that enabled us to reduce the substrate thickness with good uniformity,” said Stéphan Borel, lead author of the paper, “Low Resistance and High Isolation HD TSV for 3-Layer CMOS Image Sensors”.
 
“This reduced height led to a 40 percent decrease in electrical resistance, in proportion with the length reduction. Simultaneous lowering of the aspect ratio increased the step coverage of the isolation liner, leading to a better voltage withstand,” he added.
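The quoted 40 percent figure is consistent with treating the via, to first order, as a uniform copper cylinder whose resistance scales with length (R = ρL/A). The quick sanity-check sketch below uses bulk copper resistivity as an assumption; thin-film and barrier effects in a real TSV would raise the absolute values.

```python
import math

# Sanity check of R proportional to L for a 1 um diameter copper TSV
# (uniform-cylinder simplification; bulk copper resistivity is an assumption).
RHO_CU = 1.7e-8                     # ohm*m
AREA = math.pi * (0.5e-6) ** 2      # cross-section of a 1 um diameter via

for length_um in (10, 6):
    resistance = RHO_CU * length_um * 1e-6 / AREA
    print(f"{length_um} um TSV: ~{resistance * 1e3:.0f} mOhm")

# 6 um / 10 um = 0.6, i.e. a 40% shorter via and, to first order,
# a 40% lower resistance, matching the quoted figure.
```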
 
“With these results, CEA-Leti is now clearly identified as a global leader in this new field dedicated to preparing the next generation of smart imagers,” Ollier explained. “These new 3D multi-layer smart imagers with edge AI implemented in the sensor itself will really be a breakthrough in the imaging field, because edge AI will increase imager performance and enable many new applications.”


Go to the original article...

IISS updates its papers database

Image Sensors World        Go to the original article...

The International Image Sensor Society has a new and updated papers repository thanks to a multi-month overhaul effort.

  • 853 IISW workshop papers from the period 2007-2023 have been updated with DOIs (Digital Object Identifiers). Check out any of these papers in the IISS Online Library.
  • Each paper has a landing page containing metadata such as title, authors, year, keywords, references, and of course link to the PDF.
  • As an extra service, DOIs (where they exist) have also been identified for the papers referenced within workshop papers. This makes it convenient to access a referenced paper by clicking its DOI directly from the landing page (a minimal resolver example follows this list).
  • DOIs for pre-2007 workshop papers will be added later.
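For readers who script against the library metadata, a DOI becomes a clickable link through the standard https://doi.org/ resolver. The sketch below uses a purely hypothetical placeholder DOI, not a real IISW entry.

```python
import urllib.parse

def doi_url(doi: str) -> str:
    """Build the canonical resolver URL for a DOI string."""
    return "https://doi.org/" + urllib.parse.quote(doi, safe="/.")

# Hypothetical placeholder DOI, not a real IISW entry:
print(doi_url("10.1000/example-iisw-paper"))
```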

IISS website: https://imagesensors.org/

IISS Online Library: https://imagesensors.org/past-workshops-library/ 

Go to the original article...

Paper on event cameras for automotive vision in Nature

Image Sensors World        Go to the original article...

In a recent open access Nature article titled "Low-latency automotive vision with event cameras", Daniel Gehrig and Davide Scaramuzza write:

The computer vision algorithms used currently in advanced driver assistance systems rely on image-based RGB cameras, leading to a critical bandwidth–latency trade-off for delivering safe driving experiences. To address this, event cameras have emerged as alternative vision sensors. Event cameras measure the changes in intensity asynchronously, offering high temporal resolution and sparsity, markedly reducing bandwidth and latency requirements. Despite these advantages, event-camera-based algorithms are either highly efficient but lag behind image-based ones in terms of accuracy or sacrifice the sparsity and efficiency of events to achieve comparable results. To overcome this, here we propose a hybrid event- and frame-based object detector that preserves the advantages of each modality and thus does not suffer from this trade-off. Our method exploits the high temporal resolution and sparsity of events and the rich but low temporal resolution information in standard images to generate efficient, high-rate object detections, reducing perceptual and computational latency. We show that the use of a 20 frames per second (fps) RGB camera plus an event camera can achieve the same latency as a 5,000-fps camera with the bandwidth of a 45-fps camera without compromising accuracy. Our approach paves the way for efficient and robust perception in edge-case scenarios by uncovering the potential of event cameras.
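To make the quoted numbers concrete: the worst-case latency of a frame camera is bounded by its inter-frame interval (50 ms at 20 fps versus 0.2 ms at 5,000 fps), while raw frame bandwidth scales linearly with frame rate. The rough sketch below is illustrative only; the sensor resolution and bit depth are assumptions, and the 45-fps-equivalent figure is the paper's claim rather than something derived here.

```python
# Illustrative latency/bandwidth comparison for frame cameras.
# Hedged: resolution and bit depth are assumptions; the "45 fps equivalent"
# bandwidth of the hybrid frame + event setup is the paper's claim, not derived here.
WIDTH, HEIGHT, CHANNELS, BIT_DEPTH = 640, 480, 3, 8   # assumed RGB format

def worst_case_latency_ms(fps):
    return 1e3 / fps            # wait until the next frame arrives

def raw_bandwidth_mbit_s(fps):
    return WIDTH * HEIGHT * CHANNELS * BIT_DEPTH * fps / 1e6

for fps in (20, 45, 5000):
    print(f"{fps:>4} fps: {worst_case_latency_ms(fps):6.2f} ms latency, "
          f"{raw_bandwidth_mbit_s(fps):8.0f} Mbit/s")

# The hybrid setup pairs the 20 fps frame stream with an event stream whose
# added data keeps the total near the 45 fps row, while the asynchronous events
# push the effective latency toward the 5000 fps row.
```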

Also covered in an ArsTechnica article: New camera design can ID threats faster, using less memory https://arstechnica.com/science/2024/06/new-camera-design-can-id-threats-faster-using-less-memory/

 


 a, Unlike frame-based sensors, event cameras do not suffer from the bandwidth–latency trade-off: high-speed cameras (top left) capture low-latency but high-bandwidth data, whereas low-speed cameras (bottom right) capture low-bandwidth but high-latency data. Instead, our 20 fps camera plus event camera hybrid setup (bottom left, red and blue dots in the yellow rectangle indicate event camera measurements) can capture low-latency and low-bandwidth data. This is equivalent in latency to a 5,000-fps camera and in bandwidth to a 45-fps camera. b, Application scenario. We leverage this setup for low-latency, low-bandwidth traffic participant detection (bottom row, green rectangles are detections) that enhances the safety of downstream systems compared with standard cameras (top and middle rows). c, 3D visualization of detections. To do so, our method uses events (red and blue dots) in the blind time between images to detect objects (green rectangle), before they become visible in the next image (red rectangle).

Our method processes dense images and asynchronous events (blue and red dots, top timeline) to produce high-rate object detections (green rectangles, bottom timeline). It shares features from a dense CNN running on low-rate images (blue arrows) to boost the performance of an asynchronous GNN running on events. The GNN processes each new event efficiently, reusing CNN features and sparsely updating GNN activations from previous steps.


 

a,b, Comparison of asynchronous, dense feedforward and dense recurrent methods, in terms of task performance (mAP) and computational complexity (MFLOPS per inserted event) on the purely event-based Gen1 detection dataset41 (a) and N-Caltech101 (ref. 42) (b). c, Results of DSEC-Detection. All methods on this benchmark use images and events and are tasked to predict labels 50 ms after the first image, using events. Methods with dagger symbol use directed voxel grid pooling. For a full table of results, see Extended Data Table 1.

a, Detection performance in terms of mAP for our method (cyan), baseline method Events + YOLOX (ref. 34) (blue) and image-based method YOLOX (ref. 34) with constant and linear extrapolation (yellow and brown). Grey lines correspond to inter-frame intervals of automotive cameras. b, Bandwidth requirements of these cameras, and our hybrid event + image camera setup. The red lines correspond to the median, and the box contains data between the first and third quartiles. The distance from the box edges to the whiskers measures 1.5 times the interquartile range. c, Bandwidth and performance comparison. For each frame rate (and resulting bandwidth), the worst-case (blue) and average (red) mAP is plotted. For frame-based methods, these lie on the grey line. The performance using the hybrid event + image camera setup is plotted as a red star (mean) and blue star (worst case). The black star points in the direction of the ideal performance–bandwidth trade-off.

The first column shows detections for the first image I0. The second column shows detections between images I0 and I1 using events. The third column shows detections for the second image I1. Detections of cars are shown by green rectangles, and of pedestrians by blue rectangles.


Go to the original article...

PIXEL2024 workshop

Image Sensors World        Go to the original article...

The Eleventh International Workshop on Semiconductor Pixel Detectors for Particles and Imaging (Pixel2024) will take place 18-22 November 2024 at the Collège Doctoral Européen, University of Strasbourg, France.


The workshop will cover various topics related to pixel detector technology. Development and applications will be discussed for charged particle tracking in high energy physics, nuclear physics, astrophysics, astronomy, biology, medical imaging and photon science. The conference program will also include reports on radiation effects, timing with pixel sensors, monolithic sensors, sensing materials, front and back end electronics, as well as interconnection and integration technologies toward detector systems.
All sessions will be plenary, and the program also includes a poster session. Contributions will be chosen from submitted abstracts.


Key deadlines:

  •  abstract submission: July 5,
  •  early bird registration: September 1,
  •  late registration: September 30.

Abstract submission link: https://indico.in2p3.fr/event/32425/abstracts/ 



Go to the original article...

Himax invests in Obsidian thermal imagers

Image Sensors World        Go to the original article...

From GlobeNewswire: https://www.globenewswire.com/news-release/2024/05/29/2889639/8267/en/Himax-Announces-Strategic-Investment-in-Obsidian-Sensors-to-Revolutionize-Next-Gen-Thermal-Imagers.html

Himax Announces Strategic Investment in Obsidian Sensors to Revolutionize Next-Gen Thermal Imagers

TAINAN, Taiwan and SAN DIEGO, May 29, 2024 (GLOBE NEWSWIRE) -- Himax Technologies, Inc. (Nasdaq: HIMX) (“Himax” or “Company”), a leading supplier and fabless manufacturer of display drivers and other semiconductor products, today announced its strategic investment in Obsidian Sensors, Inc. ("Obsidian"), a San Diego-based thermal imaging sensor solution manufacturer. Himax's strategic investment in Obsidian Sensors, as the lead investor in Obsidian’s convertible note financing, was motivated by the potential of their proprietary and revolutionary high-resolution thermal sensors to dominate the market through low-cost, high-volume production capabilities. The investment amount was not disclosed. In addition to an ongoing engineering collaboration where Obsidian leverages Himax's IC design resources and know-how, the two companies also aim to combine the advantages of Himax’s WiseEye ultralow power AI processors with Obsidian’s high-resolution thermal imaging to create an advanced thermal vision solution. This would complement Himax's existing AI capabilities and ecosystem support, improving detection in challenging environments and boosting accuracy and reliability, thereby opening doors to a wide array of applications, including industrial, automotive safety and autonomy, and security systems. Obsidian’s proprietary thermal imaging camera solutions have already garnered attention in the industry, with notable existing investors including Qualcomm Ventures, Hyundai, Hyundai Mobis, SK Walden and Innolux.

Thermal imaging sensors offer unparalleled versatility, capable of detecting heat differences in total darkness, measuring temperature, and identifying distant objects. They are particularly well suited for a wide range of surveillance applications, especially in challenging and life-saving scenarios. Compared to prevailing thermal sensor solutions, which typically suffer from low resolution, high cost, and limited production volumes, Obsidian is revolutionizing the thermal imaging industry by producing high resolution thermal sensors with its proprietary Large Area MEMS Platform (“LAMP”), offering low-cost production at high volumes. With large glass substrates capable of producing sensors with superior resolution, VGA or higher, at volumes exceeding 100 million units per year, Obsidian is poised to drive the mass market adoption of this unrivaled technology across industries, including automotive, security, surveillance, drones, and more.

With accelerating interest in both the consumer and defense sectors, Obsidian’s groundbreaking thermal imaging sensor solutions are gaining traction in automotive applications and are poised to play a pivotal role. ADAS (Advanced Driver Assistance Systems) and AEB (Automatic Emergency Braking) systems integrated with Obsidian’s thermal sensors enable higher-resolution, clearer vision in low-light and adverse weather conditions such as fog, smoke, rain, and snow, ensuring much better driving safety and security. This aligns perfectly with measures announced by the NHTSA (National Highway Traffic Safety Administration) on April 29, 2024, which issued its final rule mandating the implementation of AEB, including PAEB (Pedestrian AEB) that is effective at night, as a standard feature on all new cars beginning in 2029, recognizing pedestrian safety features as essential components rather than just luxury add-ons. This safety standard is expected to significantly reduce rear-end and pedestrian crashes. Traffic safety authorities in other countries are also following suit with similar regulations, underscoring the trend and the significant potential demand for thermal imaging sensors from Obsidian Sensors in the years to come.

 

A dangerous nighttime driving situation can be averted with a thermal camera
 

“We are pleased to begin our strategic partnership with Himax through this funding round and look forward to a fruitful collaboration to potentially merge our market leading thermal imaging sensor and camera technologies with Himax’s advanced ultralow power WiseEyeTM endpoint AI, leveraging each other's domain expertise. Furthermore, progress has been made in the engineering projects for mixed signal integrated circuits, leveraging Himax’s decades of experience in image processing. Given our disruptive cost and scale advantage, this partnership will enable us to better cater to the needs of the rapid-growing thermal imaging market,” said John Hong, CEO of Obsidian Sensors.

“We see great potential in Obsidian Sensors' revolutionary high-resolution thermal imaging sensor. Himax’s strategic investment in Obsidian further enhances our portfolio and expands our technology reach to cover thermal sensing, which represents a great complement to our WiseEye technology, a world-leading ultralow power image sensing AI total solution. Further, we see tremendous potential for Obsidian’s technology in the automotive sector, where Himax already holds a dominant position in display semiconductors. We also anticipate additional synergies through expansion of our partnership, with our combined strength and respective expertise driving future success,” said Mr. Jordan Wu, President and Chief Executive Officer of Himax.

Go to the original article...

ID Quantique webinar: single photon detectors for quantum tech

Image Sensors World        Go to the original article...



In this webinar replay, we first explore the role of single-photon detectors in advancing quantum technologies, with a focus on superconducting nanowire single-photon detectors (SNSPDs) and the benefits they offer for quantum computing and high-speed quantum communication.

We then discuss the evolving needs of the field and describe IDQ’s user-focused detector solutions, including our innovative photon-number-resolving (PNR) SNSPDs and our new rack-mountable SNSPD system. We show real-world experiments that have already benefited from the outstanding performance of our detectors, including an enhanced heralded single-photon source and a high key-rate QKD implementation.

Finally, we conclude with our vision on the future of single-photon detection for quantum information and networking, and the exciting possibilities this can unlock.

Go to the original article...


ISSW 2024 this week in Trento, Italy

Image Sensors World        Go to the original article...

The 2024 International SPAD Sensor Workshop is happening this week in Trento, Italy. Full program is available here: https://issw2024.fbk.eu/program

Talks:



Posters:

Go to the original article...
