Image sensor papers and talks at ISSCC 2025

ISSCC 2025 will be held February 16-20, 2025 in San Francisco. The program includes papers and talks of interest to the image sensors community. There will be six imager papers in the technical session, as well as a special forum in which invited industry experts present their views on technology trends.


ISSCC Imager session:
6.1 H. Shim et al., Samsung, "A 3-Stacked Hybrid-Shutter CMOS Image Sensor with Switchable 1.2μm-Pitch 50Mpixel Rolling Shutter and 2.4μm-Pitch 12.5Mpixel Global Shutter Modes for Mobile Applications"
6.2 S. Park et al., Ulsan National Institute of Science and Technology, SolidVue, Sungkyunkwan Univ., Sogang Univ., "An Asynchronous 160×90 Flash LiDAR Sensor with Dynamic Frame Rates of 5 to 250fps Based on Pixelwise ToF Validation via a Background-Light-Adaptive Threshold"
6.3 H-S. Choi et al., Yonsei Univ., KAIST, XO Semiconductor, Myongji Univ., Samsung, "SPAD Flash LiDAR with Chopped Analog Counter for 76m Range and 120klx Background Light"
6.4 T-H. Tsai et al., Meta, Brillnics, Sesame AI, "A 400×400 3.24μm 117dB-Dynamic-Range 3-Layer Stacked Digital Pixel Sensor"
6.5 T. Kainuma et al., Sony, "A 25.2Mpixel 120frames/s Full-Frame Global-Shutter CMOS Image Sensor with Pixel-Parallel ADC"
6.6 Y. Zhuo et al., Peking Univ., Univ. of Chinese Academy of Sciences, Shanghai Inst. of Technical Physics, Chinese Academy of Sciences, "A 320×256 6.9mW 2.2mK-NETD 120.4dB-DR LW-IRFPA with Pixel-Paralleled Light-Driven 20b Current-to-Phase ADC"

Forum "Seeing the Future: Advances in Image and Vision Sensing"
Image sensors are the eyes of modern technology, enabling both humans and machines to perceive and interpret the world. While they are well known in smartphones and cameras, their role in transformative applications such as autonomous vehicles, IoT devices, and AR/VR is rapidly growing. Advances like deep-trench isolation, 3D integration, and pixel-level innovations have driven the development of 2-layer pixels, miniaturized global shutters, time-of-flight sensing, and event-based detection. Stacked architectures, in particular, enable intelligent on-chip processing, making edge computing possible while reducing device footprints for AR/VR, medical technology, and more. Metamaterials and computational cameras are further pushing boundaries by merging advanced optics with sophisticated algorithms, achieving higher image quality, enhanced depth perception, and entirely new imaging capabilities.

This forum provides engineers with insight into the latest breakthroughs in image sensor technology, edge computing, metaphotonics, and computational imaging, offering an inspiring platform to explore innovations that will shape the future of sensing and drive the next generation of technological advancements.

5.1 F. Domengie, Yole, "Innovative Image Sensors Technologies Expanding Applications and Market Frontiers"
5.2 S. Roh, Samsung, "Dispersion-Engineered Metasurface Integration for Overcoming Pixel Shrink Limitations in CMOS Image Sensors"
5.3 B. Fowler, OMNIVISION, "Advances in Automotive CMOS Image Sensors"
5.4 H.E. Ryu, Seoul National Univ., "Neuromorphic Imaging Sensor: How It Works and Its Applications"
5.5 D. Stoppa, Sony, "Innovation Trends in Depth Sensing and Imaging: Enabling Technologies and Core Building Blocks"
5.6 P. Van Dorpe, imec/KUL, "Photonics Enhanced Imaging for Omics and Medical Imaging"
5.7 C. Liu, Meta, "AI Sensors for Wearable Devices"
5.8 D. Golanski, STMicroelectronics, "From NIR to SWIR CMOS Image Sensors: Technology Challenges and State-of-the-Art"
5.9 F. Heide, Princeton Univ, "Cameras As Nanophotonic Optical Computers"

Sony stacked CIS+iToF sensor (IEDM 2024)

Article (in German): https://www.pcgameshardware.de/Wissenschaft-Thema-237118/News/Fuer-Kameras-Sony-stapelt-Farb-Tiefensensor-keine-Verzerrungen-mehr-1462040/

English translation from Google Translate (with some light editing) below:

 

Depth sensors, which provide an image with spatial information, have become increasingly widespread in recent years. They can be used, for example, to create 3D scans or to apply targeted blur effects after capture, as in smartphone cameras. In most cases, so-called ToF (Time of Flight) sensors are used, in which each pixel measures the time until previously emitted infrared light is reflected back to the sensor.
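
A minimal sketch of the underlying relation (our illustration, not Sony's implementation): depth follows from the round-trip travel time of the emitted infrared light.

```python
# Illustration only (not Sony's implementation): direct ToF depth from the
# round-trip travel time of an emitted light pulse.
C = 299_792_458.0  # speed of light, m/s

def depth_from_round_trip(t_seconds):
    """Depth in meters; halved because the light travels out and back."""
    return C * t_seconds / 2.0

print(depth_from_round_trip(10e-9))  # a 10 ns round trip is ~1.5 m
```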

Not next to each other, but on top of each other
So far, however, there has been a problem in combining them with normal camera sensors. Either the ToF sensor sits next to the camera sensor, in which case the differing viewing angles produce occluded areas, above all at edges, and not every color value can be assigned a depth value. Or the ToF and color pixels share the same sensor and take space away from each other; in other words, the resolution is reduced.

Sony's camera division, however, now claims to have found a way out. At the IEDM 2024 conference, it presented a combination sensor in which the camera sensor sits directly above the depth sensor. This is made possible by a new material: normally the color pixels would be built on silicon, which would absorb the broadband light and thus shadow the depth pixels underneath. Sony has apparently solved this problem with a stack built on a broadband-transparent, organic photoconductive film. Visible light is captured by the color pixels, while infrared light passes through to the IR pixels of the ToF sensor below.



Each ToF pixel occupies 4 µm and sits beneath a 4×4 group of RGB pixels of 1 µm each. In total, the stated resolution is 1004 × 756 pixels for the depth map and 4016 × 3024 pixels for the color image. At least in this respect, the prototype has apparently already reached a usable level.
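
The stated numbers are internally consistent; a quick check (our arithmetic) shows each 4 µm depth pixel maps to a 4 × 4 block of 1 µm color pixels:

```python
# Quick consistency check of the reported figures: a 4 um ToF pitch over a
# 1 um RGB pitch implies a 4x4 block of color pixels per depth pixel.
tof_res = (1004, 756)
rgb_res = (4016, 3024)
print(rgb_res[0] / tof_res[0], rgb_res[1] / tof_res[1])  # 4.0 4.0
print(rgb_res[0] * rgb_res[1] / 1e6)                     # ~12.1 Mpixel color image
```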

 

It is still unclear, however, whether and when such sensors will go into mass production. If Sony can indeed eliminate the existing problems, the wide availability of such a sensor would open up numerous options. It could, for example, simplify the creation of high-resolution 3D scans for games and movies, and make the data collection of robots significantly more reliable.

Global shutter quantum dot image sensor for NIR imaging

L. Baudoin et al. of ISAE SUPAERO, University of Toulouse, Toulouse, France recently published a paper titled "Global Shutter and Charge Binning With Quantum Dots Photodiode Arrays for NIR Imaging" in the IEEE Journal of the Electron Devices Society.

Open access link: https://ieeexplore.ieee.org/document/10742005

Abstract: New applications like depth measurements or multispectral imaging require the development of image sensors able to sense efficiently in the Near Infrared and Short-Wave Infrared, where silicon is weakly sensitive. Colloidal Quantum Dots (CQD) technology is an interesting candidate to address these new applications, as it allows image sensors with high quantum efficiency at the excitonic peak and high-resolution images. In this paper, we present an electrical model describing the electrical behavior of a designed and manufactured CQD photodiode. We use this model to explore a different architecture collecting holes instead of electrons. This architecture allows control of the charge collection inside the CQD thin film through the electric field. This property makes it possible to implement global shutter functionality, to bin charges from several photodiodes, or to operate two physically interleaved photodiode arrays alternately with different types of pixel circuitry. These operating modes extend the capabilities of CQD image sensors in terms of applications.

Overview of the CQD thin film properties.


(a) Electron microscopy cross section of the characterized photodiode [16] (b) Scheme of the simulated device for electrons collection (c) CQD photodiode process flow [16].


(a) CQD photodiode absorption spectrum (b) Current vs Voltage CQD photodiode characteristic – experiment vs simulation.


(a) Scheme of the simulated device for holes collection (b) Band diagram of the photodiode varying the voltage of the bottom electrode (c) Physical phenomena explaining the photodiode current vs voltage characteristic.


Current vs Voltage characteristics vs (a) CQD thin film holes mobility (b) carriers lifetime (c) CQD thin film electron affinity (d) ETL electron affinity (e) HTL electron affinity. Turn-on bias vs (f) CQD thin film holes mobility (g) carriers lifetime (h) CQD thin film electron affinity (i) HTL electron affinity.


(a) Scheme of the multi-electrodes device working principle (b) Multi-electrodes photodiode architecture for holes collection control alternating collection on pixels A (top image) and collection on pixel B (bottom image).

Electrostatic potential and band diagrams explaining the carriers’ collection control for: (a) electrons collecting photodiodes (b) holes collecting photodiodes.


Current-Voltage characteristics explaining the carriers’ collection control for: (a) electrons collecting photodiodes (b) holes collecting photodiodes.


Electric field for photodiodes with central bottom electrode biased and various bottom electrodes’ widths.

Turn-on bias vs work functions for various electrodes’ size.

 

Current-Voltage characteristics of the collecting and non-collecting electrodes at various illuminations.

imec SWIR quantum dot sensor

From optics.org news: https://optics.org/news/15/12/28

imec group launches SWIR sensor with lead-free quantum dot photodiodes

Technology is a step toward “greener” IR imagers for autonomous driving, medical diagnostics.

Last week, at the 2024 IEEE International Electron Devices Meeting, in San Francisco, imec, a research and innovation hub in nanoelectronics and digital technologies, and its partners in the Belgian project Q-COMIRSE, presented the first prototype shortwave infrared (SWIR) image sensor based on indium arsenide quantum dot photodiodes.

The sensor demonstrated successful 1390 nm imaging results, offering an environmentally-friendly alternative to first-generation quantum dots that contain lead, which limited their widespread manufacturing. The proof-of-concept is a critical step toward mass-market infrared imaging with low-cost and non-toxic photodiodes.

By detecting wavelengths beyond the visible spectrum, SWIR sensors can provide enhanced contrast and detail, as materials reflect differently in this range.

Face recognition and eye-tracking

These sensors can distinguish objects that appear identical to the human eye and penetrate through fog or mist, suiting them to applications such as face recognition or eye-tracking in consumer electronics, and autonomous vehicle navigation. While current versions are costly and limited to high-end applications, wafer-level integration promises broader accessibility.

Tuned for SWIR, quantum dots offer compact, low-cost absorbers, since integration into CMOS circuits and existing manufacturing processes is possible. However, first-generation QDs often contain toxic heavy metals such as lead and mercury, and the search for alternatives continues.

At 2024 IEDM, imec and its partners within the Q-COMIRSE project (Ghent University, QustomDot BV, ChemStream BV and ams OSRAM) presented a SWIR image sensor featuring a lead-free quantum dot alternative as absorber: indium arsenide (InAs). The proof-of-concept sensor, tested on both glass and silicon substrates, was the first of its kind to produce successful 1390 nm imaging results, imec announced.

Pawel Malinowski, imec technology manager and domain lead for imaging, emphasized the significance of the achievement: “The first generation of QD sensors was crucial for showcasing the possibilities of this flexible platform. We are now working towards a second generation that will serve as a crucial enabler for the masses, aiming at cost-efficient manufacturing in an environmentally friendly way,” he said.

“With major industry players looking into quantum dots, we are committed to further refine this semiconductor technology towards accessible, compact, multifunctional image sensors with new functionalities.”

Stefano Guerrieri, Engineering Fellow at ams Osram, added, “Replacing lead in colloidal quantum dots with a more environmentally friendly material was our key goal in Q-COMIRSE. Our remarkable development work with imec and the others paves the way toward a low-cost and lead-free SWIR technology that, once mature for industrial products, could enable unprecedented applications in robotics, automotive, AR/VR and consumer electronics among others.”

Ubicept superpowers computer vision for a world in motion

Computer Vision Pioneer Ubicept to Showcase Breakthrough in Machine Perception at CES 2025


Game-Changing Photonic Computer Vision Technology Now Available for Rapid Prototyping Across Autonomous Vehicles, Robotics, AR/VR and More 


Las Vegas, January 7, 2025 – Ubicept, founded by computer vision experts from MIT, University of Wisconsin-Madison, and veterans of Google, Facebook, Skydio and Optimus Ride, today unveiled breakthrough technology that processes photon-level image data to enable unprecedented machine perception clarity and precision. The company will debut its innovation at CES 2025; demonstrations will show how the Ubicept approach handles challenging scenarios that stymie current computer vision systems, from autonomous vehicles navigating dark corners to robots operating in variable lighting conditions.

In their current state, cameras and image sensors cannot handle multiple challenging lighting conditions at the same time. Image capture in complex circumstances such as fast movement at night yields results that are too noisy or too blurry, severely limiting the potential of AI and other technologies that depend on computer vision clarity. Such systems also require different solutions to address different lighting conditions, resulting in disparate imaging systems with unreliable outputs. 

Now, Ubicept is bringing maximum visual perception to the computer vision ecosystem to make image sensors and cameras more powerful than ever before. The technology combines proprietary software with Single-Photon Avalanche Diode (SPAD) sensors, the same technology used in iPhone LiDAR systems, to create a unified imaging solution that eliminates the need for multiple specialized cameras. This enables:

  • Crystal-clear imaging in extreme low light without motion blur

  • High-speed motion capture without light streaking

  • Simultaneous handling of bright and dark areas in the same environment

  • Precise synchronization with lights (LEDs, lasers) for 3D applications


“Ubicept has developed the optimal imaging system,” said Sebastian Bauer, cofounder and CEO, Ubicept. “By processing individual photons, we're enabling machines to see with astounding clarity across all lighting conditions simultaneously, including pitch darkness, bright sunlight, fast motion, and 3D sensing.” 

Ubicept is making its technology available via its new FLARE (Flexible Light Acquisition and Representation Engine) Development Kit, combining a 1-megapixel, full-color SPAD sensor from a key hardware partner with Ubicept’s sensor-agnostic processing technologies. This development kit will enable camera companies, sensor makers, and computer vision engineers to seamlessly integrate Ubicept technology into autonomous vehicles, robotics, AR/VR, industrial automation, and surveillance applications.

In addition to SPAD sensors, Ubicept also seamlessly integrates with existing cameras and CMOS sensors, easing the transition to next generation technologies and enabling any camera to be transformed into an advanced imaging system. 

“The next big AI wave will be enabled by computer vision powered applications in the real world; however, today’s cameras were designed for humans, and using standard image data for computer vision systems won’t get us there,” said Tristan Swedish, cofounder and CTO, Ubicept. “Ubicept’s technology bridges that gap, enabling computer vision systems to achieve ideal perception. Our mission is to create a scalable, software-defined camera system that powers the future of computer vision.”

Ubicept is backed by Ubiquity Ventures, E14 Fund, Wisconsin Alumni Research Foundation, Convergent Ventures, and other investors, with a growing customer base that includes leading brands in the automotive and AR/VR industries. 

The new FLARE Development Kit is now available for pre-order; visit www.ubicept.com/preorder to sign up and learn more, or see Ubicept’s technology in action at CES, Las Vegas Convention Center, North Hall, booth 9467.

About Ubicept

Ubicept has pushed computer vision to the limits of physics. Developed out of MIT and the University of Wisconsin-Madison, Ubicept technology enables super perception for a world in motion by transforming photon image data into actionable information through advanced processing algorithms. By developing groundbreaking technology that optimizes imaging in low light, fast motion and high dynamic range environments, Ubicept enables industries to overcome the limitations of conventional vision systems, unlocking new possibilities for computer vision and beyond. Learn more at ubicept.com or follow Ubicept on LinkedIn.

Media Contact:

Dana Zemack

Scratch Marketing + Media for Ubicept

ubicept@scratchmm.com 

Video of the day: Oculi Smart Sensing

Visual Intelligence at the Edge, by Fred Brady

Fred is currently the Chief Technical Product Officer for Oculi, a Rochester-based start-up in the smart sensing field. He presented this talk in the Society for Imaging Science and Technology (IS&T)'s Rochester NY Chapter seminar series on 11 Dec. 2024.
Today's image sensors are inefficient for vision AI - they were developed for human presence detection. These solutions are slow, power-hungry, and expensive. We will discuss Oculi's Intellipixel solution, which puts smarts at the ‘edge of the edge’ to output just the data needed for AI.
00:00 - Introduction
00:38 - Visual Intelligence at the Edge
13:00 - Oculi Output Examples
18:32 - Face and Pupil Detection
20:42 - Wrap-up
22:00 - Discussion

Another 2025 CES innovation award: Lidwave’s 4D LiDAR sensor

From: https://www.einpresswire.com/article/768427169/lidwave-s-odem-4d-lidar-sensor-receives-the-prestigious-ces-innovation-award-2025

Lidwave's technology receives acknowledgment once more, this time in the form of a CES Innovation Award for its Odem 4D LiDAR sensor

JERUSALEM, ISRAEL, December 12, 2024 /EINPresswire.com/ -- Lidwave, a pioneer in the field of coherent LiDAR, is proud to share that its revolutionary Odem 4D Sensor has been recognized as an Honoree in the CES Innovation Awards 2025 in the Imaging category. “This recognition underscores Odem’s potential to redefine machine perception across industries, enabling smarter, more efficient systems, powered by Lidwave's innovative Finite Coherent Ranging (FCR™) technology,” said Yehuda Vidal, Lidwave’s CEO.

At its core, Odem is a 4D coherent LiDAR that delivers both high-resolution 3D spatial data and instantaneous velocity information at the pixel level. This ability to capture an object’s location and motion in real time transforms how machines perceive and respond to their surroundings. From autonomous vehicles and robotics to industrial automation and smart infrastructure, Odem empowers systems with the precision and speed required for decision-making in dynamic environments.
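
For readers unfamiliar with how a coherent LiDAR reads velocity per pixel: Lidwave's FCR is proprietary, but coherent detection in general recovers radial velocity from the Doppler shift of the returned light. A minimal sketch of that general relation (the wavelength below is an assumed typical value, not a Lidwave specification):

```python
# General coherent-LiDAR relation (not Lidwave's proprietary FCR): radial
# velocity from the Doppler shift of the return, v = wavelength * f_d / 2.
WAVELENGTH = 1550e-9  # m; assumed typical coherent-LiDAR wavelength

def radial_velocity(doppler_shift_hz):
    """Radial velocity in m/s from a measured Doppler shift in Hz."""
    return WAVELENGTH * doppler_shift_hz / 2.0

print(radial_velocity(1.29e6))  # ~1.0 m/s for a 1.29 MHz shift
```
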
One of Odem’s standout features is its software-defined architecture, which allows users to adapt key parameters, such as field of view, resolution, detection range, and frame rate, to their needs, with no change to the hardware. This flexibility enables industries to test and optimize Odem for their unique applications, making it a powerful tool for innovation across diverse sectors. Whether streamlining factory operations, enhancing transportation systems, or advancing next-generation robotics, Odem is designed to meet the evolving needs of its users.

Beyond its exceptional performance in both short- and long-range applications, Odem represents a breakthrough in scalability and affordability. By integrating a complete LiDAR system - including lasers, amplifiers, receivers, and optical routing - onto a single chip, Lidwave has made high-performance sensing technology accessible at scale. This achievement addresses one of the industry’s most critical challenges, ensuring that advanced LiDAR solutions can be deployed widely and cost-effectively.
Reliability is at the heart of Odem’s design. Built to perform under all conditions—including total darkness, glaring sunlight, fog, and dust—Odem ensures consistent and accurate detection in even the most challenging scenarios. Its robustness makes it an indispensable solution for demanding applications where precision and dependability are essential.

“We are thrilled to receive this recognition for Odem,” said Yehuda Vidal, CEO of Lidwave. “This sensor combines advanced capabilities with unmatched scalability and reliability. Its ability to provide detailed spatial and motion data in real time, while being scalable and cost-effective, is a game-changer for industries worldwide.”

“This award highlights Odem’s transformative impact,” added Dr. Yossi Kabessa, Lidwave’s CTO. “With its 4D data capabilities and flexibility, Odem empowers industries to adopt cutting-edge sensing solutions that drive innovation and progress.”

“This acknowledgment joins the feedback we get from our partners in various fields,” said Nitsan Avivi, Head of Business Development at Lidwave, “and makes it clear that Odem will have an enormous impact on machine vision. Its unique capabilities and scalability are paving the way for new use cases, expanding the horizons of LiDAR applications.”

SOLiDVUE wins CES 2025 innovation award for solid-state LiDAR

From PR Newswire: https://www.prnewswire.com/news-releases/solidvue-sets-new-standards-with-ces-innovation-award-winning-high-resolution-lidar-sensor-ic-sl-2-2-302329805.html

SOLiDVUE Sets New Standards with CES Innovation Award-Winning High-Resolution LiDAR Sensor IC, 'SL-2.2'

SEOUL, South Korea, Dec. 16, 2024 /PRNewswire/ -- SOLiDVUE, Korea's exclusive enterprise specialized in CMOS LiDAR (Light Detection and Ranging) sensor IC development, announced that its groundbreaking single-chip LiDAR sensor IC, the SL-2.2, boasting a world-first 400x128 resolution, has been honored with the CES Innovation Award® at CES 2025.

LiDAR is a next-generation core component for autonomous vehicles and robotics, capable of precisely measuring the shape and distance of objects to output 3D images. This technology enables accurate object recognition for applications such as autonomous vehicles, drones, robots, security cameras, and traffic management systems.

Established in 2020, SOLiDVUE focuses on designing SoCs (System-on-Chip) for LiDAR sensors, which form the core of a LiDAR system. "While mechanical LiDAR has been the standard, the latest trend is to replace it with semiconductor chips," said co-CEO, Jung-Hoon Chun. SOLiDVUE is the only company in South Korea to have developed LiDAR sensors that completely replace mechanical components with semiconductor technology.

SOLiDVUE's LiDAR sensors are compatible with solid-state LiDAR systems, which are 10 times smaller and 100 times cheaper than traditional mechanical LiDAR systems. "Our sensors offer an ultra-compact chip solution compared to competitors, but their performance is not just on par—it's superior," co-CEO Jaehyuk Choi stated confidently.

The company's proprietary technologies, such as CMOS SPAD (Single Photon Avalanche Diode) technology, single-chip sensor architecture, and an image signal processor, underpin its competitive edge. CMOS SPAD technology enhances measurement accuracy by detecting sparse photons down to the single-photon level. Globally, only a few companies, including SOLiDVUE, possess such single-chip sensor technology.

SOLiDVUE's technological prowess has been repeatedly acknowledged at the IEEE ISSCC (International Solid-State Circuits Conference), marking a remarkable achievement for a Korean fabless company. Furthermore, the recent CES Innovation Award has once again affirmed its prominence in the LiDAR sensor industry.

SOLiDVUE's award-winning SL-2.2 pushes the boundaries of resolution with its ability to output high-resolution 3D images up to 400x128 pixels, surpassing the 200x116 resolution of existing products. The SL-2.2 can detect objects up to 200 meters away with an exceptional 99.9% accuracy.

As a single-chip sensor, the SL-2.2 is fabricated using standard CMOS semiconductor processes and benefits from SOLiDVUE's proprietary ultra-miniaturization technology. The sensor core measures just 0.9cm x 0.9cm and is packaged in a compact 1.4cm x 1.4cm BGA-type package, enabling seamless integration into various LiDAR systems. Its single-chip design reduces power consumption, enhancing energy efficiency and ensuring high reliability.

The SL-2.2 is a successor to the company's first product, the SV-110, which features a 200x116 resolution and a 128-meter detection range. The SL-2.2 is scheduled for an official release in 2025 and is expected to play a pivotal role in advancing LiDAR technology across applications such as autonomous vehicles, robotics, drones, and smart cities.

Co-CEO Jaehyuk Choi emphasized, "At SOLiDVUE, we are actively collaborating with numerous domestic and international companies and research institutions to push the boundaries of LiDAR technology. With the rapidly growing demand for LiDAR, we are committed to continuously expanding our product lineup to meet diverse market needs. Our mission is to lead the LiDAR industry by delivering innovative solutions that address the evolving challenges of tomorrow."

MagikEye to present 5cm to 5m depth sensing solution at CES

From Businesswire: https://www.businesswire.com/news/home/20241218853081/en/MagikEye-Brings-%E2%80%9CSeeing-in-3D-from-Near-to-Far%E2%80%9D-to-CES-2025-Now-Enabling-Depth-Sensing-from-5cm-to-5m

MagikEye Brings “Seeing in 3D from Near to Far” to CES 2025: Now Enabling Depth Sensing from 5cm to 5m

STAMFORD, Conn.--(BUSINESS WIRE)--MagikEye Inc. (www.magik-eye.com), a leader in advanced 3D depth sensing technology, is pleased to offer private demonstrations of its latest Invertible Light™ Technology (ILT) advancements at the 2025 Consumer Electronics Show (CES) in Las Vegas, NV. Building on a mission to provide the “Eyes of AI,” the newest iteration of ILT can measure depth from as close as 5cm out to 5m. This expanded range can transform how developers take advantage of 3D vision in their products, allowing devices to see in 3D from near to far.

By leveraging a simple, low-cost projector and a standard CMOS image sensor, MagikEye’s ILT solution delivers 3D with unparalleled cost and power savings. A small amount of software running on any low-power microcontroller enables a broad spectrum of applications—ranging from consumer electronics and robotics to AR/VR, industrial automation, and transportation—without the cost of specialized silicon or sensors. With the newest version of ILT, manufacturers can inexpensively add depth capabilities to more devices, increasing product versatility and improving product performance.

“This new generation of ILT redefines what’s possible in 3D sensing,” said Takeo Miyazawa, Founder & CEO of MagikEye. “By bringing the near-field range down to 5cm, we enable a richer, more immersive interaction between devices and their environment, while providing more complete data for AI applications. From tiny consumer gadgets to large-scale robotic systems, our technology scales effortlessly, helping our customers drive innovation, enhance user experiences, and unlock new market opportunities.”

During CES 2025, MagikEye invites interested partners, product designers, and customers to arrange a private demonstration of the enhanced ILT technology. These one-on-one sessions will provide an in-depth look at how to seamlessly integrate ILT into existing hardware and software platforms and explore its potential across a multitude of applications.

Yole Webinar on Status of CIS Industry in 2024

Yole recently held a webinar on the latest trends and emerging applications in the CMOS image sensor market.

It is still available to view with a free registration at this link: https://attendee.gotowebinar.com/register/3603702579220268374?source=Yole+webinar+page

More information:

https://www.yolegroup.com/event/trade-shows-conferences/webinar-the-cmos-image-sensor-industry/


The CMOS image sensor (CIS) market, which is projected to grow at a 4.7% compound annual growth rate from 2023 to 2029 to reach $28.6 billion, is undergoing a transformation. Declining smartphone sales, along with weakening demand in devices such as laptops and tablet computers, are key challenges to growth.
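
As a quick sanity check of the forecast arithmetic (ours, not Yole's), the 2029 target and growth rate imply a 2023 baseline of roughly $21.7 billion:

```python
# Our arithmetic check: $28.6B in 2029 at a 4.7% CAGR implies the 2023 base.
target_2029 = 28.6  # $B
cagr = 0.047
base_2023 = target_2029 / (1 + cagr) ** (2029 - 2023)
print(round(base_2023, 1))  # ~21.7 ($B)
```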

We forecast that automotive cameras and other emerging applications will instead be the key drivers of future CIS market growth. Technology innovations such as triple-stacked architectures and single-photon avalanche diode-based sensors are improving performance, enabling new applications in low light and 3D imaging, for example, while high dynamic range and LED flicker mitigation are key requirements for automotive image sensors.

This webinar, co-organized with the Edge AI + Vision alliance, will discuss how CIS suppliers are focusing on enhancing sensor capabilities, along with shifting their product mixes towards higher potential value markets. Our experts will also explore how emerging sensing modalities such as neuromorphic, optical metasurfaces, short-wave infrared and multispectral imaging will supplement, and in some cases supplant, CMOS image sensors in the future.

CEA-Leti presents Integrated Phase Modulator And Sensor at IEDM 2024

CEA-Leti Device Integrates Light Sensing & Modulation, Bringing Key Scalability, Compactness and Optical-Alignment Advantages
 
First-Reported Device ‘Improves Resolution and Penetration Depth Of Optical Imaging Techniques for Biomedical Applications’
 
SAN FRANCISCO – Dec. 10, 2024 – CEA-Leti researchers have developed the first-reported device able to sense light and modulate it accordingly, using a liquid crystal cell and a CMOS image sensor.
 
The compact system provides intrinsic optical alignment and is easy to scale up, facilitating the use of digital optical phase conjugation (DOPC) techniques in applications such as microscopy and medical imaging.
 
“The main benefits of this device, which provides significant advantages compared to competing systems that require separate components, should boost its deployment in more complex and larger optical systems,” said Arnaud Verdant, CEA-Leti research engineer in mixed-signal IC design and lead-author of the paper presented at IEDM 2024.
 
In the paper, “A 58×60 π/2-Resolved Integrated Phase Modulator And Sensor With Intra-Pixel Processing”, CEA-Leti explained that this is the first solid-state device integrating a liquid crystal-based spatial light modulator hybridized with a custom lock-in CMOS image sensor. The integrated phase modulator and sensor embeds a 58×60 pixel array, where each pixel both senses and modulates light phases.
 
The device leverages the key advantage of DOPC to dynamically compensate for optical wavefront distortions, which improves performance in a variety of photonic applications and corrects optical aberrations in imaging systems. By precisely controlling laser beams, it improves the resolution and penetration depth of optical imaging techniques for biomedical applications.
 
Standard DOPC systems rely on separated cameras and light-wavefront modulators, but their bandwidth is limited by the data processing and transfer between these devices. If the system senses and controls the light-phase modulation locally in each pixel, the bandwidth no longer depends on the number of pixels, and is only limited by the liquid crystal response time. This feature is a key advantage in fast-decorrelating, scattering media, such as living tissues.
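
A rough illustration of that scaling argument, with assumed numbers (not CEA-Leti's) for the readout rate and liquid-crystal response time:

```python
# Assumed numbers, for illustration only: a conventional DOPC loop must move
# every pixel off-chip per update, so its rate falls with array size; the
# in-pixel approach is capped only by the liquid-crystal response time.
PIXEL_RATE = 50e6  # assumed off-chip readout/transfer rate, pixels/s
T_LC = 1e-3        # assumed liquid-crystal response time, s

for n_pixels in (58 * 60, 1_000_000):
    conventional_hz = PIXEL_RATE / n_pixels  # transfer-limited update rate
    in_pixel_hz = 1.0 / T_LC                 # the same for any array size
    print(f"{n_pixels:>9} px: conventional ~{conventional_hz:,.0f} Hz, in-pixel ~{in_pixel_hz:,.0f} Hz")
```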
 
“Scattering in biological tissues and other complex media severely limits the ability to focus light, which is a critical requirement for many photonic applications,” Verdant explained. “Wavefront shaping techniques can overcome these scattering effects and achieve focused light delivery. In the future, this will make it possible to envision applications such as photodynamic therapy, where light focusing selectively activates photosensitive drugs within tumors.
 
“When this technology is more mature, it also may have diverse benefits across various sectors, in addition to improving biomedical imaging resolution and depth,” he said. “It could enable earlier disease detection and non-invasive therapies. In industry, it could enhance laser beam quality and efficiency.”

Sub-micron thickness InGaAs pixels

[Jan 23, 2025: post title updated for clarity.]

In a paper titled "Highly-efficient (>70%) and Wide-spectral (400–1700 nm) sub-micron-thick InGaAs photodiodes for future high-resolution image sensors" in Light: Science & Applications, Dae-Myeong Geum et al. from KAIST write:

Abstract: This paper demonstrates the novel approach of sub-micron-thick InGaAs broadband photodetectors (PDs) designed for high-resolution imaging from the visible to short-wavelength infrared (SWIR) spectrum. Conventional approaches encounter challenges such as low resolution and crosstalk issues caused by a thick absorption layer (AL). Therefore, we propose a guided-mode resonance (GMR) structure to enhance the quantum efficiency (QE) of the InGaAs PDs in the SWIR region with only sub-micron-thick AL. The TiOx/Au-based GMR structure compensates for the reduced AL thickness, achieving a remarkably high QE (>70%) from 400 to 1700 nm with only a 0.98 μm AL InGaAs PD (defined as 1 μm AL PD). This represents a reduction in thickness by at least 2.5 times compared to previous results while maintaining a high QE. Furthermore, the rapid transit time is highly expected to result in decreased electrical crosstalk. The effectiveness of the GMR structure is evident in its ability to sustain QE even with a reduced AL thickness, simultaneously enhancing the transit time. This breakthrough offers a viable solution for high-resolution and low-noise broadband image sensors.
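
To see why a sub-micron absorber needs resonant help, a single-pass Beer-Lambert estimate is instructive; the absorption coefficient below is an assumed order-of-magnitude figure for InGaAs near 1550 nm, not a value from the paper:

```python
# Single-pass Beer-Lambert estimate, A = 1 - exp(-alpha * d). The absorption
# coefficient is an assumed order-of-magnitude value for InGaAs near 1550 nm,
# not a number taken from the paper.
from math import exp

ALPHA = 0.75  # 1/um, assumed

for d_um in (0.5, 1.0, 2.5):
    print(f"{d_um} um absorber: ~{1 - exp(-ALPHA * d_um):.0%} single-pass absorption")
# A ~1 um film absorbs only about half the light in one pass, which is why a
# resonant back structure (here, GMR) must recirculate it to reach >70% QE.
```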

a Schematics of conventional Fabry-Perot resonance cavity and proposed GMR structure. b Design of GMR integrated InGaAs PD structures and design parameters of GMR structure. c 2D mapping results for the relative amount of absorption in AL grating period as a function of wavelength with fixed TAL = 1 μm. d RCWA simulation results with 1 μm AL PD on rear side engineering about InP substrate, flat metal structure, and GMR structure. e Electric field intensity distribution for 1.0 μm AL InGaAs PIN PDs on different bottom structures at 0.6 μm and 1.5 μm. f TAL dependent absorption spectra in terms of wavelength. g Top InGaAs layer thickness-dependent absorption spectra for visible light absorption

a Schematics of the GMR integrated InGaAs PDs by utilizing wafer bonding based thin film transfer method b Photograph of wafer-level patterned GMR structure with 1.5 μm period. c SEM image for periodic patterns consisting of Au width of 0.75 μm and TiOx 0.75 μm. d Schematic image of the fabricated PD on GMR structure and optical image of fabricated devices. e EDX images of Ti, O, and Au atoms at the top view. f Fabricated 0.5 μm and 1.0 μm AL InGaAs PD on GMR Si structure

a Schematics of the fabricated device structures with different bottom structures. b I–V characteristics for 1.0 μm AL InGaAs PD on InP, flat metal, GMR, and ideality factors as an inset figure (c) Surface leakage currents for 1 μm AL InGaAs PD with/without SU-8 passivation using size dependency. d Iph–Pin characteristics for 0.5 and 1 μm AL PDs on GMR Si. e Calculated f3dB for 15 × 15 μm2 devices in terms of TAL. f Calculated f3dB as a function of device width to confirm the transit time limited bandwidth


a EQE spectra for fabricated PDs on InP substrate with different TAL. b EQE spectra for fabricated 1 μm AL PDs on different bottom structures. c Resulting EQE spectra for different TAL on GMR structure and reference 2.1 μm AL PDs on InP substrate. d Calculated current density using EQE spectrum as a function of TAL and structures. e Comparison of normalized performances of EQE per TAL for proposed devices and conventional PDs. f Fabricated devices with/without 20 nm surface InGaAs layer for 1 μm AL PDs on GMR Si. g Benchmark for state-of-the-art InGaAs-based SWIR pixels with simulated EQE lines as a function of TAL variation (Dashed line: InGaAs PD on InP substrate, dotted line: InGaAs PD on flat metal structure, with the same ARC of this experiment)


Full text:  https://www.nature.com/articles/s41377-024-01652-6

SPAD camera for diffuse correlation spectroscopy

In a paper titled "ATLAS: a large array, on-chip compute SPAD camera for multispeckle diffuse correlation spectroscopy" in Biomedical Optics Express, Alistair Gorman et al. of University of Edinburgh write:

Abstract: We present ATLAS, a 512 × 512 single-photon avalanche diode (SPAD) array with embedded autocorrelation computation, implemented in 3D-stacked CMOS technology, suitable for single-photon correlation spectroscopy applications, including diffuse correlation spectroscopy (DCS). The shared per-macropixel SRAM architecture provides a 128 × 128 macropixel resolution, with parallel autocorrelation computation, with a minimum autocorrelation lag-time of 1 µs. We demonstrate the direct, on-chip computation of the autocorrelation function of the sensor, and its capability to resolve changes in decorrelation times typical of body tissue in real time, at long source-detector separations similar to those achieved by the current leading optical modalities for cerebral blood flow monitoring. Finally, we demonstrate the suitability for in-vivo measurements through cuff-occlusion and forehead cardiac signal measurements.
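
A minimal software sketch of what each macropixel computes on-chip (using the paper's stated normalization, A_tau scaled by the number of BinClk cycles N and divided by the squared total photon count; the Poisson input is our stand-in for uncorrelated light):

```python
# Software sketch of the on-chip computation: autocorrelation samples A_tau,
# normalized per the paper by the number of BinClk cycles N over the squared
# total photon count. Poisson counts stand in for uncorrelated light.
import numpy as np

def g2(counts, max_lag):
    """Normalized autocorrelation g2 for lags 1..max_lag (in bins)."""
    n = len(counts)
    total = counts.sum()
    out = np.empty(max_lag)
    for lag in range(1, max_lag + 1):
        a_tau = np.dot(counts[:-lag], counts[lag:])  # accumulated products
        out[lag - 1] = a_tau * n / total**2          # A_tau * N / C^2
    return out

rng = np.random.default_rng(0)
counts = rng.poisson(3.0, 8192)  # uncorrelated light -> g2 ~ 1 at all lags
print(g2(counts, 5))
```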

Fig. 1. (a) Sensor chip micrograph. (b) Dark count rate per SPAD cumulative distribution. (c) Photon detection efficiency.

Fig. 2. (a) Macropixel layout and (b) sensor architecture showing the column normalization processor, which multiplies each macropixel autocorrelation sample Aτ by the number of BinClk cycles (N) and divides by the square of the total photon count (Cτ0)² before summing each entire row in a pipelined adder.

Fig. 3. Macropixel signal flow diagram. 5-bit photon counts Cτ0 from the SPAD are delayed progressively 31 times, multiplied by the current value of Cτi and accumulated as Aτ. The autocorrelation calculation of g2(τ) needs to be normalized by (Cτ0)²/Tint.

Fig. 4. Macropixel circuit block diagram, the hardware implementation of Fig. 3. 16 SPADs are OR-ed and counted in a 5-bit accumulator. A 31-stage 5-bit shift register creates delayed photon counts at the BinClk rate. A shared multiplexed multiplier operating at 32 times higher frequency (PixClk) generates the Aτ samples in a 32 × 22b SRAM.

Fig. 5. Macropixel timing diagram. In each BinClk period photon counts are delayed and shifted one place in the 5-bit shift register. In that time PixClk initiates 32 10-bit multiply-and-add (precharge, modify, write) operations around each word location in the 32 × 22b SRAM.

Fig. 6. (a) Target for verification of autocorrelation imaging mode. Example autocorrelations from the highlighted pixels in (a), corresponding to frequencies of 390.6 kHz (b) and 97.7 kHz (c).

Fig. 7. (a) Measured and (b) theoretical normalized autocorrelations for frequencies between 12.2 and 195.3 kHz.

Fig. 8. Autocorrelation image sequences with 10% duty cycle pulsed-wave LED illumination of A and B targets. (a) Low-frequency 8 kHz/4 kHz and 26 kHz/1 kHz pulsed-wave images over a 1.28-12.8 µs lag range. (b) High-frequency 96 kHz/112 kHz and 96 kHz/195 kHz pulsed-wave images over an 8.96-20.48 µs lag range.

Fig. 9. (a) Ensemble average correlation calculated on-chip (blue) and off-chip (red). (b) Ensemble average SNR gain with respect to the single-macropixel mean SNR at increasing number of pixels.

Fig. 10. Time constants from exponential fitting of autocorrelation of LED sequences.

Fig. 11. Illustration of experimental setup to assess the sensitivity for DCS measurements.

Fig. 12. (a) Measured optical power from the end of the detector fiber bundle at different source-detector separations on a human forehead. (b) Typical range of time constants measured from an adult forehead from exponential fitting of autocorrelations acquired with ATLAS in ensemble mode, with a 10 mm source-detector separation and an integration time of 13.1 ms (8192 iterations) per sample. (c) Time series of time constant from exponential fitting of autocorrelations acquired with ATLAS in ensemble mode, from a rotating PLA disc (Fig. 11), driven with a square-wave voltage to produce a similar range of time constants as measured from the forehead.

Fig. 13. Normalized time series of relative time constant from exponential fit, and best-fit square wave.

Fig. 14. MAE against distal fiber powers between 3 and 30 nW.

Fig. 15. (a) Source and detection fiber at palm. (b) Time constant during and after a linear increase of wrist occlusion pressure from 0 mm Hg to a peak of 165 mm Hg at 40 s. (c) Six pulse periods of the time constant post occlusion.

Fig. 16. Time constants from exponential fitting of autocorrelations measured from the forehead, for separations between the source and fiber of 35, 40, 45 and 50 mm.

Full text: https://opg.optica.org/boe/fulltext.cfm?uri=boe-15-11-6499&id=561837

SPAD direct-time-of-flight pixel with correlation-assisted processing

In a paper titled "Correlation-Assisted Pixel Array for Direct Time of Flight", A. Morsy and M. Kuijk or Vrije Universiteit write:

Abstract
Time of flight is a promising technology in machine vision and sensing, with an emerging need for low power consumption, a high image resolution, and reliable operation in high ambient light conditions. Therefore, we propose a novel direct time-of-flight pixel using the single-photon avalanche diode (SPAD) sensor, with an in-pixel averaging method to suppress ambient light and detect the laser pulse arrival time. The system utilizes two orthogonal sinusoidal signals applied to the pixel as inputs, which are synchronized with a pulsed laser source. The detected signal phase indicates the arrival time. To evaluate the proposed system’s potential, we developed analytical and statistical models for assessing the phase error and precision of the arrival time under varying ambient light levels. The pixel simulation showed that the phase precision is less than 1% of the detection range when the ambient-to-signal ratio is 120. A proof-of-concept pixel array prototype was fabricated and characterized to validate the system’s performance. The pixel consumed, on average, 40 μW of power in operation with ambient light. The results demonstrate that the system can operate effectively under varying ambient light conditions and its potential for customization based on specific application requirements. This paper concludes by discussing the system’s performance relative to the existing direct time-of-flight technologies, identifying their strengths and limitations.
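
A minimal numerical sketch of the idea (simplified from the paper's analog implementation; the period and event counts are assumed): each detected photon samples two orthogonal sinusoids locked to the laser period, uniform ambient light averages toward zero, and the phase of the averaged samples encodes the pulse arrival time.

```python
# Simplified numerical model of the correlation-assisted pixel; period, pulse
# width, and event counts are assumed for illustration.
import numpy as np

T = 100e-9  # assumed laser repetition period, s
rng = np.random.default_rng(1)

true_arrival = 37e-9
laser = rng.normal(true_arrival, 1e-9, 2_000) % T  # photons from the pulse
ambient = rng.uniform(0.0, T, 240_000)             # uniform ambient, ASR = 120
events = np.concatenate([laser, ambient])

s = np.sin(2 * np.pi * events / T).mean()  # analog channel SC1
c = np.cos(2 * np.pi * events / T).mean()  # analog channel SC2
phase = np.arctan2(s, c) % (2 * np.pi)     # ambient averages out of the phase
print(phase / (2 * np.pi) * T)             # ~37 ns recovered arrival time
```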

Figure 1. CA-dToF pixel schematic and simulation, where (a) is the pixel schematic and (b) is the histogram of the detected events. On the right side are the sinusoidal signals applied to the CA-dToF pixel, while (c) is the voltage evolution of the analog channels SC1 and SC2 and (d) is the calculated arrival time, with ASR = 2.


 

Figure 2. (a) Histogram of accumulated ambient light over a period {T} for a certain integration time. (b) Histogram of ambient light and laser pulses with an FWHM {a} detected with an arrival time {l} over a period {T}, along with ambient light that is uniformly distributed over the integration time.

Figure 3. Reduction in the detected sine’s amplitude for different ASR values when a=4.25%·T and C=274.6 mV.

Figure 4. (a) When ASR = 0, the analytical model predicted that the detected voltage precision was oscillating due to active light shot noise. (b) When ASR = 1, the analytical model predicted that the detected voltage precision was oscillating due to the influence of laser and ambient light shot noise. (c) When ASR = 120, the analytical model predicted that the detected voltage precision oscillation was not significant due to the dominant ambient light shot noise.

Figure 12. (a) CA-dToF pixel array micrograph with three different quenching resistors. (b) The experimental set-up.

Figure 13. CA-dToF pixel experimental results for two different ASR values: (a) detected signal, (b) detected phase error, (c) detected amplitude precision, and (d) detected phase precision.

Figure 15. A snapshot of a scene with the 32×32 pixel array at the room’s ambient light. (a) Colored image of the scene. (b) The 3D image.


Full text: https://www.mdpi.com/1424-8220/24/16/5380

SK Hynix CIS business reorg

SK Hynix restructures CIS organization seemingly to replicate HBM success model

Link: https://www.digitimes.com/news/a20241210PD215/sk-hynix-cis-hbm-business-market.html

Despite the low profitability of SK Hynix's CMOS image sensor (CIS) business, the company has decided to retain this segment and reorganize its CIS development team under the Future Technology Research Institute, possibly hoping to replicate the successful narrative seen in high bandwidth memory (HBM).
According to industry sources cited by ZDNet Korea, SK Hynix CTO Seon-Yong Cha is expected to also lead CIS development.

Some analysts believe that SK Hynix endured a period of poor profitability in its HBM business but ultimately achieved success in the artificial intelligence (AI) chip market. In the future, demand for SK Hynix's CIS products may extend beyond the smartphone sector to include automotive, machine vision, and industrial markets.

Compared to other sectors within SK Hynix, the CIS business has lower profitability. Coupled with a shrinking smartphone market in recent years, there has been a decline in CIS demand, making it challenging for SK Hynix to secure a leading position in the CIS market.
According to market research firm Yole Développement, the top three players in the CIS market in 2023 are Sony with 45% market share, Samsung Electronics (Samsung) with 19%, and OmniVision with 11%. Meanwhile, SK Hynix ranks sixth with only 4% market share.

SK Hynix plans to transfer most of its CIS developers to other business units in 2024, resulting in a reduction of CIS production capacity by more than half compared to 2023. There was speculation within the South Korean industry that SK Hynix might "abandon the CIS business," but the company ultimately decided to continue its operations in this area.

SK Hynix president Noh-Jung Kwak reportedly has a strong desire to develop the CIS business. During a regular shareholders' meeting in March 2024, Kwak stated that he does not intend to abandon the CIS business, acknowledging both strengths and weaknesses compared to competitors while stressing that SK Hynix is analyzing these factors.

SK Hynix acquired the CIS development company SiliconFile in 2008, marking its entry into the CIS market. The absorption of SiliconFile in 2014 marked the beginning of its expansion in the CIS field. By 2019, SK Hynix established a CIS R&D center in Japan and launched the CIS brand Black Pearl.
SK Hynix previously supplied CIS components to mid-range Chinese smartphones and successfully provided sensors for Samsung's foldable Galaxy Z Fold3/Flip3 series and Galaxy A series in 2021.

Event cameras for GPS-free drone navigation

Link: https://spectrum.ieee.org/drone-gps-alternatives

A recent article in IEEE Spectrum titled "Neuromorphic Camera Helps Drones Navigate Without GPS" (subheading: "High-end positioning tech comes to low-cost UAVs") discusses efforts to use neuromorphic cameras to achieve GPS-free navigation for drones.

Some excerpts:

[GPS] signals are vulnerable to interference from large buildings, dense foliage, or extreme weather and can even be deliberately jammed. [GPS-free navigation systems that rely only on] accelerometers and gyroscopes [suffer from] errors [that] accumulate over time and can ultimately cause a gradual drift. ... Visual navigation systems [consume] considerable computing and data resources.
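
The drift described here follows directly from double integration: a small constant accelerometer bias grows into a position error proportional to time squared. A toy illustration with an assumed bias:

```python
# Toy illustration (assumed bias): double-integrating a constant accelerometer
# bias b gives a position error x(t) = b * t**2 / 2, hence gradual drift.
BIAS = 0.001  # m/s^2, roughly a 0.1 milli-g accelerometer bias (assumed)

for t in (60, 600, 3600):  # one minute, ten minutes, one hour
    print(f"after {t:>4} s: {0.5 * BIAS * t**2:>8.1f} m of drift")
# ~1.8 m after a minute, but ~6.5 km after an hour without external fixes.
```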

A pair of navigation technology companies has now teamed up to merge the approaches and get the best of both worlds. NILEQ, a subsidiary of British missile-maker MBDA based in Bristol, UK, makes a low-power visual navigation system that relies on neuromorphic cameras. This will now be integrated with a fiber optic-based INS developed by Advanced Navigation in Sydney, Australia, to create a positioning system that lets low-cost drones navigate reliably without GPS.

[...]

[Their proprietary algorithms] process the camera output in real-time to create a terrain fingerprint for the particular patch of land the vehicle is passing over. This is then compared against a database of terrain fingerprints generated from satellite imagery, which is stored on the vehicle. [...]

The companies are planning to start flight trials of the combined navigation system later this year, adds Shaw, with the goal of getting the product into customers' hands by the middle of 2025.

Quantum Solutions announces SWIR camera based on quantum dot technology

Link: https://quantum-solutions.com/product/q-cam-swir-camera/#description

Oxford, UK – November 26, 2024 – Quantum Solutions proudly announces the release of the Q.Cam™ , an advanced Short-Wave Infrared (SWIR) camera designed for outdoor applications.

Redefining SWaP for Outdoor Applications:
The Q.Cam™ sets a new standard for low Size, Weight, and Power (SWaP) in SWIR cameras, making it ideal for outdoor applications where space is limited and visibility in challenging conditions like smoke, fog, and haze is crucial.

Developed in collaboration with a leading partner, the Q.Cam™ is the first USB 3.0 camera featuring Quantum Solutions’ state-of-the-art Quantum Dot SWIR sensor, offering VGA resolution (640 x 512 pixels) with a wide spectral range of 400 nm to 1700 nm.

The Q.Cam™ is incredibly compact, weighing only 35 grams with dimensions of 35 x 25 x 25 mm³, making it perfect for integration in space-constrained environments. Its TEC-less design minimizes power consumption to an impressive <1.3 Watts, ideal for battery-powered operation.

Overcoming Outdoor Challenges:
Using SWIR cameras outdoors has traditionally been challenging due to varying lighting conditions and temperature-related image quality fluctuations, which require re-calibration of the camera to adjust to changing conditions. The Q.Cam™ addresses these issues with its advanced image correction technology, which automatically adjusts for factors like gain, temperature offset, and illumination. The camera can perform more than 150 automatic calibrations on the fly, ensuring consistent, high-quality images even in challenging and constantly changing outdoor environments. This advanced correction capability enables a TEC-less design, significantly reducing power consumption without compromising image quality.
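
For context, the classic form of such a correction is a per-pixel two-point (gain/offset) calibration; the sketch below is a generic version of that idea under assumed maps, not Q.Cam's proprietary on-the-fly method:

```python
# Generic two-point non-uniformity correction (not Quantum Solutions'
# proprietary pipeline): per-pixel gain and offset maps flatten the response.
import numpy as np

rng = np.random.default_rng(2)
gain = rng.normal(1.0, 0.05, (512, 640))    # per-pixel gain map (assumed)
offset = rng.normal(10.0, 2.0, (512, 640))  # per-pixel dark offset (assumed)

def correct(raw):
    """Apply offset subtraction then gain normalization, pixel by pixel."""
    return (raw - offset) / gain

scene = 100.0                            # uniform target intensity
raw = gain * scene + offset              # what the uncorrected sensor sees
print(np.allclose(correct(raw), scene))  # True: flat field restored
```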


The integration of proprietary Quantum Dot technology allows Quantum Solutions to offer the Q.Cam™ as a cost-effective and accessible solution for bringing SWIR imaging to a wider range of outdoor applications.

Seamless Integration and Flexibility:
The Q.Cam™ comes equipped with a user-friendly USB 3.0 interface, a Graphical User Interface (GUI), and Python scripts for easy integration and control.

ITAR-Free and Ready for Global Deployment:
The Q.Cam™ is an ITAR-free product with a short lead time of 3 weeks, making it readily available for global deployment in a variety of sectors, including:
• Security and Surveillance
• Defence
• Search and Rescue
• Environmental Monitoring
• Robotics and Machine Vision
• Automotive

Key Features of Q.Cam™:
• Quantum Dot SWIR Sensor: 640 x 512 pixels, 400 nm - 1700 nm spectral range
• Best-in-class SWaP: 35 g, 35 x 25 x 25 mm³, <1.3 W power consumption
• Built-in Automatic Image Correction: 150+ automatic image corrections (Gain, Offset, Temperature, and Illumination)
• Cost-Effective and Accessible: Among the most affordable SWIR cameras available in the market
• Frame Rate up to 60 Hz; Global Shutter
• Operating Temperature: -20°C to 50°C

Go to the original article...

Video of the Day: tutorial on iToF imagers

Image Sensors World        Go to the original article...


Abstract:
"Indirect Time of Flight 3D imaging is an emerging technology used in 3D cameras. The technology is based on measuring the time of flight of modulated light. It allows to generate fine grain depth images with several hundreds of thousand image points. I-TOF has become a standard solution for face recognition and authentication. Recently I-TOF is also used in various new applications, such as computational photography, gesture recognition and robotics. This talk will introduce the basic operation principle of an I-TOF 3D imager IC. The integrated building blocks will be discussed and the analog operation of an I-TOF pixel will be addressed in detail. System level topics of the camera module will also be covered to provide a complete overview of the technology."
This presentation was recorded as part of the lecture "Selected Topics of Advanced Analog Chip Design" from the Institute of Electronics at TU Graz.
Special thanks to Dr. Timuçin Karaca for the insightful presentation.
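
The 4-phase demodulation at the heart of most I-TOF pixels reduces to a short calculation. The sketch below is the generic textbook estimator rather than anything specific to this talk; sign conventions vary between sensors, and the sample values are invented:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod_hz):
    """Standard 4-phase depth estimator: recover the phase of the modulated
    return from four phase-stepped correlation samples, then scale by the
    unambiguous range c / (2 * f_mod)."""
    phase = math.atan2(q270 - q90, q0 - q180) % (2 * math.pi)
    return C * phase / (4 * math.pi * f_mod_hz)

# Invented correlation samples; 100 MHz modulation -> ~1.5 m unambiguous range
print(f"{itof_depth(0.65, 0.80, 0.35, 0.20, 100e6):.2f} m")  # ~1.24 m
```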

Go to the original article...

Exosens (prev. Photonis) acquires Noxant

Image Sensors World        Go to the original article...

News link: https://optics.org/news/15/11/29

Exosens eyes further expansion with Noxant deal
20 Nov 2024

French imaging and analytical technology group aiming to add MWIR camera specialist to growing portfolio.

Exosens, the France-based technology group previously known as Photonis, is set to further grow its burgeoning camera portfolio with the acquisition of Noxant.

Located in the Paris suburbs, Noxant specializes in high-performance cooled imagers operating at mid-infrared wavelengths.

The agreement between the two firms allows Exosens to enter into exclusive negotiations to pursue the acquisition, and if consummated it would complement existing camera expertise in the form of Xenics, Telops, and another pending acquisition, Night Vision Laser Spain (NVLS).

Gas imaging
Noxant sells its range of cameras for applications including surveillance, scientific research, industrial testing, and gas detection - the latter said to represent a “strong synergistic addition” to Exosens’ existing camera offering.

Exosens CEO Jérôme Cerisier said: “Through this acquisition, we would broaden Exosens' technological spectrum by offering cutting-edge cooled infrared solutions to meet the growing demands of our OEM customers.

“Noxant's expertise in cooled infrared technology aligns perfectly with our mission to deliver high-performance, reliable imaging solutions for critical applications.

“Furthermore, the synergies between Noxant and Telops would strengthen our research and development capabilities and accelerate our innovation in infrared technologies.”

At the moment Noxant serves OEMs primarily, whereas Telops tends to target end users, meaning opportunities for cross-selling under the Exosens umbrella organization.

Its products include the “NoxCore” range of camera cores, “NoxCam” cameras, and the “GasCore” series of high-performance optical gas imaging cameras. Offering a spectral range of 3-5 µm in the MWIR or 7-10 µm in the long-wave infrared (LWIR), these are able to image a large number of process and pollutant gases including methane, carbon dioxide, and nitrous oxide.

Commenting on the likely business combination, Noxant chairman Laurent Dague suggested that joining forces with Exosens would represent a “perfect match”, and a deal that would enable Noxant to continue delivering advanced cooled infrared technology while benefiting from Exosens' much larger scale and customer reach.

Growing business
While Noxant’s 22 employees generated annual revenues of approximately €12 million in the 12 months ending June 2024, Exosens’ most recent financial results showed sales of €274 million for the nine months up to September 30 this year.

That figure represented a 33 per cent jump on the same period in 2023, largely due to much higher sales of the firm’s microwave amplification products, which contributed €200 million to the total.

Meanwhile Exosens’ detection and imaging businesses contributed close to €77 million, up from €47 million for the same nine-monthly period last year - partly through the addition of Telops and Photonis Germany (formerly ProxiVision).

Not all of those sales relate to optical technology, with the company also selling neutron and gamma-ray detectors used in the nuclear industry.

Last month Exosens announced that it had signed a definitive agreement to acquire NVLS, which produces man-portable night vision and thermal devices from its base in Madrid.

That deal should see NVLS further develop its business in Spain, Latin America and Asia, while also broadening Exosens’ know-how in optical and mechanical technologies.

Go to the original article...

Gpixel announces GSPRINT5514 global shutter CIS

Image Sensors World        Go to the original article...

Press release: https://www.einpresswire.com/article/761834209/gsprint5514-a-new-high-sensitivity-14mp-bsi-global-shutter-cis-targeting-high-speed-machine-vision-and-4k-video

GSPRINT5514, a new High Sensitivity 14MP BSI Global Shutter CIS targeting high-speed machine vision and >4K video.

CHANGCHUN, CHINA, November 19, 2024 /EINPresswire.com/ -- Gpixel announces GSPRINT5514BSI, the fifth sensor in the popular GSPRINT series of high-speed global shutter CMOS image sensors. The sensor is pin compatible with GSPRINT4510 and GSPRINT4521 for easy design into existing camera platforms.

GSPRINT5514BSI features 4608 x 3072 pixels, each 5.5 µm square – a 4/3 aspect ratio 4K sensor compatible with APS-C optics. With 10-bit output, GSPRINT5514BSI achieves 670 frames per second. In 12-bit mode the sensor outputs 350 fps.

Using backside illumination technology, the sensor achieves 86% quantum efficiency at 510 nm and 17% at 200 nm for UV applications. The sensor offers dual-gain HDR readout, combining a 15 ke- full well capacity with a read noise below 2.0 e- to achieve an outstanding 78.3 dB of dynamic range. Analog 1x2 binning increases the full well capacity to 30 ke-.
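
As a quick check, the quoted dynamic range follows from the standard definition DR = 20·log10(full well / noise floor); the snippet below also back-solves what noise floor the 78.3 dB figure implies:

```python
import math

full_well_e = 15_000                      # full well capacity, e-
read_noise_e = 2.0                        # quoted upper bound on noise, e-

dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"DR at 2.0 e-: {dr_db:.1f} dB")    # ~77.5 dB

# Back-solving from the quoted 78.3 dB gives the implied noise floor:
print(f"implied noise: {full_well_e / 10 ** (78.3 / 20):.2f} e-")  # ~1.82 e-
```

So the 78.3 dB figure is consistent with a noise floor of roughly 1.8 e-, comfortably under the quoted 2.0 e- bound.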

Up to 8 vertically oriented regions of interest can be defined to operate the sensor at increased frame rates. The image data is output via 84 sub-LVDS channels at 1.2 Gbps. For applications in which the maximum frame rate is not required, multiplexing modes are available to reduce the number of output channels by any multiple of two.
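
A similar sanity check works for the interface: the raw 10-bit pixel payload at full speed should fit within the aggregate sub-LVDS bandwidth (blanking and protocol overhead, which the release does not specify, are ignored here):

```python
# Plausibility check of the quoted interface numbers, using only figures from
# the press release.
pixels = 4608 * 3072                       # ~14.2 Mpix
payload_gbps = pixels * 670 * 10 / 1e9     # 10-bit output at 670 fps
link_gbps = 84 * 1.2                       # 84 sub-LVDS channels at 1.2 Gbps

print(f"pixel payload:  {payload_gbps:.1f} Gb/s")   # ~94.8 Gb/s
print(f"aggregate link: {link_gbps:.1f} Gb/s")      # 100.8 Gb/s
```

The raw payload fits within the aggregate link rate with a few percent of headroom, consistent with the quoted maximum frame rate.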

The sensor features an on-chip sequencer, SPI control, a PLL, and both analog and digital temperature sensors.
“The GSPRINT family of image sensors have enabled new use cases in high-speed machine vision and offer unprecedented value to the 4k video market,” says Wim Wuyts, Gpixel’s Chief Commercial Officer. “We will continue to expand this product line to meet the needs of customers across the growing diversity of applications demanding high speed, excellent image quality, and a high dynamic range. From a technology perspective we are proud to extend our GSPRINT series with the second BSI Global Shutter product, opening a wavelength extension into DUV.”

The GSPRINT5514BSI is available in monochrome or color variants with either sealed or removable cover glass and is assembled in a 454-pin µPGA package.
Samples and evaluation systems are available now.

Go to the original article...

Sony releases IMX925 stacked global-shutter CIS

Image Sensors World        Go to the original article...

Press release: https://www.sony-semicon.com/en/news/2024/2024111901.html

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx925-926.html

Sony Semiconductor Solutions to Release an Industrial CMOS Image Sensor with Global Shutter for High-Speed Processing and High Pixel Count Offering an Expanded, High-Precision Product Lineup Supporting Faster Recognition and Inspection

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX925 stacked CMOS image sensor with back-illuminated pixel structure and global shutter. The new product offers 394 fps high-speed processing and a high effective resolution of 24.55 megapixels, and is optimized for industrial equipment imaging.

The new sensor product is equipped with the Pregius S™ global shutter technology made possible by SSS’s original pixel structure, delivering a compact design with minimal noise and high-quality imaging performance. It also employs a new circuit structure that optimizes pixel reading and sensor drive in the A/D converter, making processing approximately four times faster and twice as energy efficient as conventional products.

Along with the IMX925, SSS will also release three models with different sensor sizes and frame rates. The expanded product lineup will help make recognition and inspection tasks faster and more precise, improving productivity in the industrial equipment domain, where this kind of superior performance is increasingly in demand.

With factory automation progressing, demand continues to grow for machine vision cameras capable of fast, high-quality imaging for a variety of objects in the industrial equipment domain. By employing a global shutter capable of capturing moving subjects free of distortion together with a proprietary back-illuminated pixel structure, SSS’s global-shutter CMOS image sensors deliver superb pixel characteristics, including high sensitivity and saturation capacity. They are mainly being used to recognize and inspect precision components such as electronic devices.

The IMX925 sensor is compact enough to be C mount compatible, the most common mounting standard for machine vision cameras. It has a total of 24.55 effective megapixels and offers a higher frame rate than previous models thanks to the enhanced high-speed signal processing. These features enable increased image capture per unit of time, thereby reducing measurement and inspection process times and helping to save energy. The product is also expected to be useful in advanced inspection processes such as 3D inspections which employ multiple image data.

Main Features
■New circuit structure with optimized sensor drive for high-speed imaging and power saving
The new sensor models employ a new circuit structure that optimizes pixel reading and sensor drive in the A/D converter. Reducing the data output time enables high-speed imaging, so the IMX925 delivers a frame rate of 394 fps, about four times faster than conventional products. Power efficiency is also more than double that of conventional products. The reduced power consumption and shorter measurement and inspection times will contribute to improved productivity in various applications.
■Global shutter with original pixel structure for high-definition imaging in a compact package
The new products are equipped with SSS’s proprietary Pregius S global shutter technology. The back-illuminated pixels and stacked structure enable high sensitivity and saturation capacity on very small, 2.74 µm pixels. This structure delivers 24.55 effective megapixels on the IMX925 in a C-mount-compatible 1.2-type size, delivering a high pixel count in a compact package. This design also ensures that the sensors can capture fast-moving objects free of distortion, which in turn makes the products highly useful in compact, high-definition machine vision cameras that can be easily installed on equipment and manufacturing lines.
■Higher data transmission per lane for higher camera precision and speed
The new products employ SSS’s own embedded clock high-speed interface SLVS-EC™, which supports up to 12.5 Gbps/lane. With high-resolution image data transmitted on fewer data lanes than in the past, FPGA options are expanded, supporting the development of high-precision, high-speed cameras.
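
For a rough sense of what those lane rates buy, the raw pixel payload of the IMX925's headline mode can be compared against the 12.5 Gbps/lane figure. The output bit depth is not stated above, so both common values are tried, and protocol overhead is ignored:

```python
# Back-of-the-envelope SLVS-EC lane count for 24.55 Mpix at 394 fps.
mpix, fps, lane_gbps = 24.55e6, 394, 12.5

for bits in (10, 12):                       # assumed bit depths, not quoted
    payload = mpix * fps * bits / 1e9       # Gb/s of raw pixel data
    lanes = -(-payload // lane_gbps)        # ceiling division
    print(f"{bits}-bit: {payload:.0f} Gb/s -> >= {lanes:.0f} lanes")
```

At 12.5 Gbps/lane the headline mode fits in roughly 8 to 10 lanes depending on bit depth, which is the sense in which fewer data lanes "expand FPGA options".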

Go to the original article...

2029 forecast: Image sensors market will be worth $29.62B

Image Sensors World        Go to the original article...

Link: https://www.novuslight.com/image-sensors-market-worth-29-62-billion-by-2029_N13346.html

The global image sensor market is expected to be valued at USD 20.66 billion in 2024 and is projected to reach USD 29.62 billion by 2029, growing at a CAGR of 7.5% from 2024 to 2029, according to a new report by MarketsandMarkets. Additions to existing applications in various industries and technological advancements in image sensor product offerings are key factors driving the expansion of the image sensor market. Restraints such as high manufacturing costs hinder market growth, while factors such as integration with other technologies provide lucrative opportunities for market players in the coming years.

Area Scan image sensors by array type to hold the highest CAGR during the forecast period.
Area scan image sensors are expected to post the highest CAGR in the image sensor market thanks to their versatile applications across numerous industries. Because they capture images in a two-dimensional format, area scan sensors find broad application in machine vision for manufacturing, quality assurance, and automated inspection systems. Growing demand for process automation in industry is a key factor driving this growth. Area scan sensors enable high-speed image acquisition with good measurement accuracy, both essential for maintaining product quality and operational efficiency. Improvements in resolution and sensitivity, along with AI integration, are further enhancing their performance. Their capacity for high-speed, real-time image processing supports applications in the automotive, healthcare, and logistics sectors, where swift decision-making is important. The rise of smart factories and Industry 4.0 projects has increased demand for area scan sensors, which are essential for automation and data analytics. As more industries embrace high-performance imaging solutions, area scan image sensors are positioned to lead the market in growth rate and innovation.


More than 16 MP by resolution to exhibit highest market share during the forecast period
Over the next few years, image sensors with more than 16 MP resolution are likely to lead the market because they fulfill fast-growing demands for high-quality imaging across applications. Manufacturers are increasingly incorporating higher-resolution sensors into smartphones, digital cameras, and professional equipment in response to consumer demand for superior image quality, and the proliferation of social media and digital content creation further fuels the appetite for visually striking images and videos. Other drivers come from the automotive, healthcare, and security fields. For example, advanced driver-assistance systems (ADAS) require detailed imaging of lanes and pedestrians, which calls for higher pixel counts, while medical imaging devices need high-resolution sensors for accurate diagnostics. Improved low-light sensitivity and higher readout speeds will make high-resolution sensors even more attractive. As a result, the market segment for image sensors above 16 MP is set to grow significantly, in line with the broader industry trend.


Industrial sector to hold the highest CAGR during the forecast period
Industrial, the largest application segment for image sensors, is anticipated to have the highest CAGR in the image sensor market over the next several years due to the growing adoption of automation, robotics, and machine vision systems. Rising demands for efficiency and precision in manufacturing create enormous demand for advanced imaging technologies. Image sensors play a central role in quality control, enabling real-time inspection and monitoring to ensure compliance with stringent quality standards. The rise of Industry 4.0, which brings IoT devices and smart technologies into the manufacturing process, further drives demand for high-performance image sensors capable of collecting richer data for predictive maintenance and reduced downtime.
Furthermore, growing applications in autonomous vehicles, logistics, and warehousing contribute significantly to the requirement for advanced imaging solutions. Industrial applications will be transformed by sensor technologies like 3D imaging and AI-enhanced vision systems, which promise clear gains in operational efficiency.

Asia Pacific in the image sensor industry to exhibit the highest CAGR during the forecast period
Asia Pacific is where the highest CAGR in image sensors is expected to emerge over the coming years, spurred by several strong regional drivers. The region hosts leading electronics manufacturing bases and some of the world's strongest economies, including China, Japan, South Korea, and Taiwan. Substantial investment in research and development is accelerating image sensor innovation in these markets. Growth is driven by rising demand for consumer electronics, including smartphones and tablets with high-quality cameras, as consumers seek better imaging. Rapid industrialization and urbanization are also expanding the adoption of image sensors across industries including automotive, healthcare, and surveillance. The automotive segment is booming in particular for ADAS, which relies on high-quality image sensors for safety features. Furthermore, government initiatives such as smart city projects encourage surveillance and monitoring solutions, further lifting demand. Collectively, these factors position the Asia Pacific region for robust growth, making it a key player in the global image sensor landscape.


Key Players
The image sensor companies include major Tier I and II players like Sony Corporation (Japan), Samsung (South Korea), Omnivision (US), Semiconductor Components Industries, LLC (US), STMicroelectronics (Switzerland), Panasonic Holdings Corporation (Japan), Canon Inc. (Japan), HAMAMATSU PHOTONICS K.K. (Japan), Teledyne Technologies Incorporated (US), SK HYNIX INC. (South Korea), Himax Technologies Inc. (Taiwan), and others. These players have a strong market presence across various countries in North America, Europe, Asia Pacific, and the Rest of the World (RoW).

Go to the original article...

Single Photon Avalanche Diodes – Buyer’s Guide

Image Sensors World        Go to the original article...

Photoniques magazine published an article titled "Single photon avalanche diodes" by Angelo Gulinatti (Politecnico di Milano).

Abstract: Twenty years ago the detection of single photons was little more than a scientific curiosity reserved to a few specialists. Today it is a flourishing field with an ecosystem that extends from university laboratories to large semiconductor manufacturers. This change of paradigm has been stimulated by the emergence of critical applications that rely on single photon detection, and by technical progress in the detector field. The single photon avalanche diode has unquestionably played a major role in this process.

Full article [free access]: https://www.photoniques.com/articles/photon/pdf/2024/02/photon2024125p63.pdf

Figure 1: Fluorescence lifetime measured by time-correlated single-photon counting (TCSPC). The sample is excited by a pulsed laser and the delay between the excitation pulse and the emitted photon is measured by a precision clock. By repeating the measurement many times, it is possible to build a histogram of the delays that reproduces the shape of the optical signal.
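
To make the TCSPC procedure of Figure 1 concrete, here is a minimal simulation; the lifetime, timing jitter, and photon count below are invented for illustration:

```python
import numpy as np

tau_ns = 3.0            # assumed fluorescence lifetime
irf_sigma_ns = 0.1      # assumed timing jitter (Gaussian instrument response)
n_photons = 100_000     # photons detected over many excitation cycles
bin_width_ns = 0.05     # histogram (TDC) bin width

rng = np.random.default_rng(0)
# Each detected photon: exponential emission delay plus timing jitter
delays = rng.exponential(tau_ns, n_photons) + rng.normal(0, irf_sigma_ns, n_photons)

# Build the TCSPC histogram, as in Figure 1
hist, edges = np.histogram(delays, bins=np.arange(0, 20, bin_width_ns))

# Crude lifetime estimate: the mean of an exponential equals tau
print(f"estimated lifetime ~ {delays.mean():.2f} ns")   # ~3.0 ns
```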



Figure 3: By changing the operating conditions or the design parameters, it is possible to improve some performance metrics at the expense of others.

Go to the original article...

IEDM 2024 Program is Live

Image Sensors World        Go to the original article...

70th Annual IEEE International Electron Devices Meeting (IEDM) will be held December 7-11, 2024 in San Francisco, California. Session #41 is on the topic of "Advanced Image Sensors":

https://iedm24.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=58

Title: 41 | ODI | Advanced Image Sensors
Description:
This session includes 6 papers on the latest image sensor technology developments. Notable this year are the multiple ways of stacking layers with new features. The first stack involves a dedicated AI image processing layer based on neural networks for a 50 Mpix sensor. The second shows progress on small-pixel noise with a 2-layer pixel and an additional intermediate interconnection. The third stack is very innovative, with organic pixels on top of a conventional Si-based iToF pixel for a true single-device RGB-Z sensor. All three papers are authored by Sony Semiconductors. InAs QD image sensors are also reported for the first time as a lead-free option for SWIR imaging, by both IMEC and Sony Semiconductors. Finally, progress in conventional IR global shutter with a new MIM capacitor and optimized DTI filling for crosstalk and QE improvement is presented by Samsung Semiconductor.

Wednesday, December 11, 2024 - 01:35 PM
41-1 | A Novel 1/1.3-inch 50 Megapixel Three-wafer-stacked CMOS Image Sensor with DNN Circuit for Edge Processing
This study reports the first ever 3-wafer-stacked CMOS image sensor with DNN circuit. The sensor was fabricated using a wafer-on-wafer-on-wafer process, and the DNN circuit was placed on the bottom wafer to ensure heat dissipation. This device can incorporate the HDR function and enlarge the pixel array area to remarkably improve image recognition.


Wednesday, December 11, 2024 - 02:00 PM
41-2 | Low Dark Noise and 8.5k e− Full Well Capacity in a 2-Layer Transistor Stacked 0.8μm Dual Pixel CIS with Intermediate Poly-Si Wiring
This paper demonstrates a 2-layer transistor pixel stacked CMOS image sensor with the world’s smallest 0.8μm dual pixel. We improved layout flexibility with an intermediate poly-Si wiring technique. Our advanced 2-layer pixel device achieved low input-referred random noise of 1.3 e−rms and high full well capacity of 8.5k e−.


Wednesday, December 11, 2024 - 02:25 PM
41-3 | A High-Performance 2.2μm 1-Layer Pixel Global Shutter CMOS Image Sensor for Near-Infrared Applications
A high-performance and low-cost 2.2μm 1-layer pixel near infrared (NIR) global shutter (G/S) CMOS image sensor (CIS) was demonstrated. In order to improve quantum efficiency (QE), thick silicon with high-aspect-ratio full-depth deep trench isolation (FDTI) and backside scattering technology are implemented. Furthermore, thicker sidewall oxide for deep trench isolation and oxide-filled FDTI were applied to enhance the modulation transfer function (MTF). In addition, 3-dimensional metal-insulator-metal capacitors were introduced to suppress temporal noise (TN). As a result, we have demonstrated an industry-leading NIR G/S CIS with 2.71e- TN, dark current of 8.8e-/s, 42% QE and 58% MTF.


Wednesday, December 11, 2024 - 03:15 PM
41-4 | First Demonstration of 2.5D Out-of-Plane-Based Hybrid Stacked Super-Bionic Compound Eye CMOS Chip with Broadband (300-1600 nm) and Wide-Angle (170°) Photodetection
We propose a hybrid stacked CMOS bionic chip. The surface employs a fabrication process involving binary-pore anodic aluminum oxide (AAO) templates and integrates monolayer graphene (Gr) to mimic the compound eyes, thereby enhancing detection capabilities in the ultraviolet and visible ranges. Utilizing a 2.5D out-of-plane architecture, it achieves a wide-angle detection effect (170°) equivalent to curved surfaces while enhancing absorption in the 1550 nm communication band to nearly 100%. Additionally, through-silicon via (TSV) technology is integrated for wafer-level fabrication, and a CMOS 0.18-µm integrated readout circuit is developed, achieving the super-bionic compound eye chip based on hybrid stacked integration.


Wednesday, December 11, 2024 - 03:40 PM
41-5 | Pseudo-direct LiDAR by deep-learning-assisted high-speed multi-tap charge modulators
A virtually direct LiDAR system based on an indirect ToF image sensor and charge-domain temporal compressive sensing combined with deep learning is demonstrated. This scheme has high spatio-temporal sampling efficiency and offers advantages such as high pixel count, high photon-rate tolerance, immunity to multipath interference, constant power consumption regardless of incident photon rates, and freedom from motion artifacts. The importance of increasing the number of taps of the charge modulator is suggested by simulation.


Wednesday, December 11, 2024 - 04:05 PM
41-6 | A Color Image Sensor Using 1.0-μm Organic Photoconductive Film Pixels Stacked on 4.0-μm Si Pixels for Near-Infrared Time-of-Flight Depth Sensing
We have developed an image sensor capable of simultaneously acquiring high-resolution RGB images with good color reproduction and parallax-free ranging information, using 1.0-μm organic photoconductive film RGB pixels stacked on 4.0-μm NIR silicon pixels for iToF depth sensing.


Wednesday, December 11, 2024 - 04:30 PM
41-7 | Pb-free Colloidal InAs Quantum Dot Image Sensor for Infrared
We developed an image sensor using colloidal InAs quantum dots (QDs) for photoconversion. After spin-coating the QDs on a wafer and standard semiconductor processing, the sensor exhibited infrared sensitivity and imaging capability. This approach facilitates easier production of lead-free infrared sensors for consumer use.


Wednesday, December 11, 2024 - 04:55 PM
41-8 | Lead-Free Quantum Dot Photodiodes for Next Generation Short Wave Infrared Optical Sensors
Colloidal quantum dot sensors are disrupting imaging beyond the spectral limits of silicon. In this paper, we present imagers based on InAs QDs as an alternative for 1st generation Pb-based stacks. A new synthesis method yields 9 nm QDs optimized for 1400 nm, and solution-phase ligand exchange results in uniform 1-step coating. Initial EQE is 17.4% at 1390 nm on glass and 5.8% EQE on silicon (detectivity of 7.4 × 10⁹ Jones). Metal-oxide transport layers and >300 hour air-stability enable compatibility with fab manufacturing. These results are a starting point towards the 2nd generation quantum dot SWIR imagers.


Also of interest is the following talk in Tuesday's session "Major Consumer Image Sensor Innovations Presented at IEDM":

Authors: Albert Theuwissen, Harvest Imaging
Title: Image Sensors past, and progress made over the years

Go to the original article...

AMS Osram Q3 2024 Results

Image Sensors World        Go to the original article...

Go to the original article...

Videos from EPIC Neuromorphic Cameras Online Meeting

Image Sensors World        Go to the original article...

Presentations from the recent EPIC Online Technology Meeting on Neuromorphic Cameras are available on YouTube:

IKERLAN – DVS Pre-processor & Light-field DVS – Xabier Iturbe, NimbleAI EU Project Coordinator
IKERLAN has been a leading technology centre providing competitive value to industry since 1974. They offer integral solutions in three main areas: digital technologies and artificial intelligence, embedded electronic systems and cybersecurity, and mechatronic and energy technologies. They currently have a team of more than 400 people and offices in Arrasate-Mondragón, Donostialdea and Bilbao. As a cooperative member of the MONDRAGON Corporation and the Basque Research and Technology Alliance (BRTA), IKERLAN represents a sustainable, competitive business model in permanent transformation.


FlySight – Neuromorphic Sensor for Security and Surveillance – Niccolò Camarlinghi, Head of Research
FlySight S.r.l. (a single-member company) is the Defense and Security subsidiary of Flyby Group, a satellite remote sensing solutions company.
The FlySight team offers integrated solutions for data exploitation, image processing, and avionic data/sensor fusion. Our products are mainly dedicated to the exploitation of data captured by many sensor types, and our solutions are intended both for the on-ground as well as the on-board segments.
Through its experience in C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), FlySight offers innovative software development and geospatial application technology programs (GIS) customized for the best results.
Our staff can apply the right COTS for your specific mission.
The instruments and products developed for this sector can also find application as dual-use tools in many civil fields like Environmental Monitoring, Oil & Gas, Precision Farming and Maritime/Coastal Planning.


VoxelSensors – Active Event Sensors: an Event-based Approach to Single-photon Sensing of Sparse Optical Signals – Ward van der Tempel, CTO
VoxelSensors is at the forefront of 3D perception, providing cutting-edge sensors and solutions for seamless integration of the physical and digital worlds. Our patented Switching Pixels® Active Event Sensor (SPAES) technology represents a novel category of efficient 3D perception systems, delivering exceptionally low latency with ultra-low power consumption by capturing a new Voxel with fewer than 10 photons. SPAES is a game-changing innovation that unlocks the true potential of fully immersive experiences for both consumer electronics and enterprise AR/VR/MR wearables.


PROPHESEE – Christoph Posch, Co-Founder and CTO
Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Prophesee’s patented sensors and AI algorithms introduce a new computer vision paradigm based on how the human eye and brain work. Like human vision, it sees events: essential, actionable motion information in the scene, not a succession of conventional images.


SynSense – Neuromorphic Processing and Applications – Dylan Muir, VP, Global Research Operations
SynSense is a leading-edge neuromorphic computing company. It provides dedicated mixed-signal/fully digital neuromorphic processors which overcome the limitations of legacy von Neumann computers to provide an unprecedented combination of ultra-low power consumption and low-latency performance. SynSense was founded in March 2017 based on advances in neuromorphic computing hardware developed at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich. SynSense is developing “full-stack” custom neuromorphic processors for a variety of artificial-intelligence (AI) edge-computing applications that require ultra-low-power and ultra-low-latency features, including autonomous robots, always-on co-processors for mobile and embedded devices, wearable health-care systems, security, IoT applications, and computing at the network edge.


Thales – Eric Belhaire, Senior Expert in the Technical Directorate
Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity. It develops products and solutions that help make the world safer, greener and more inclusive.

Go to the original article...

"Photon inhibition" to reduce SPAD camera power consumption

Image Sensors World        Go to the original article...

In a paper titled "Photon Inhibition for Energy-Efficient Single-Photon Imaging" presented at the European Conference on Computer Vision (ECCV) 2024, Lucas Koerner et al. write:

Single-photon cameras (SPCs) are emerging as sensors of choice for various challenging imaging applications. One class of SPCs based on the single-photon avalanche diode (SPAD) detects individual photons using an avalanche process; the raw photon data can then be processed to extract scene information under extremely low light, high dynamic range, and rapid motion. Yet, single-photon sensitivity in SPADs comes at a cost — each photon detection consumes more energy than that of a CMOS camera. This avalanche power significantly limits sensor resolution and could restrict widespread adoption of SPAD-based SPCs. We propose a computational-imaging approach called photon inhibition to address this challenge. Photon inhibition strategically allocates detections in space and time based on downstream inference task goals and resource constraints. We develop lightweight, on-sensor computational inhibition policies that use past photon data to disable SPAD pixels in real-time, to select the most informative future photons. As case studies, we design policies tailored for image reconstruction and edge detection, and demonstrate, both via simulations and real SPC captured data, considerable reduction in photon detections (over 90% of photons) while maintaining task performance metrics. Our work raises the question of “which photons should be detected?”, and paves the way for future energy-efficient single-photon imaging.
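
The core idea — spend avalanche energy only on informative photons — can be illustrated with a toy policy. The fixed hold-off rule below is not one of the paper's task-tuned policies; the flux map, window length, and all numbers are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)
n_frames, n_pix = 2000, 1000
flux = rng.uniform(0.01, 0.5, n_pix)       # true per-frame detection probability
arrivals = rng.random((n_frames, n_pix)) < flux

HOLD_OFF = 8                               # frames a pixel stays gated off
holdoff = np.zeros(n_pix, dtype=int)
counts = np.zeros(n_pix)                   # avalanches we actually pay for
exposure = np.zeros(n_pix)                 # frames each pixel was enabled

for frame in arrivals:
    enabled = holdoff == 0
    detected = frame & enabled
    counts += detected
    exposure += enabled
    holdoff = np.maximum(holdoff - 1, 0)
    holdoff[detected] = HOLD_OFF           # inhibit right after each detection

flux_est = counts / exposure               # unbiased: arrivals are i.i.d. per frame
saved = 1 - counts.sum() / arrivals.sum()
print(f"avalanches avoided: {saved:.0%}")
print(f"mean |flux error|: {np.abs(flux_est - flux).mean():.4f}")
```

Bright pixels, whose additional photons add the least new information to a brightness estimate, are gated off most often — which is where most of the avalanche energy goes in an uninhibited sensor.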

Lucas Koerner, Shantanu Gupta, Atul Ingle, and Mohit Gupta. "Photon Inhibition for Energy-Efficient Single-Photon Imaging." In European Conference on Computer Vision, pp. 90-107 (2024)
[preprint link]

Go to the original article...

Hamamatsu acquires BAE Systems Imaging [Update: Statement from Fairchild Imaging]

Image Sensors World        Go to the original article...

Press release: https://www.hamamatsu.com/us/en/news/announcements/2024/20241105000000.html

Acquisition of BAE Systems Imaging Solutions, Inc. Strengthening the Opto-semiconductor segment and accelerating value-added growth

2024/11/05
Hamamatsu Photonics K.K.

Photonics Management Corp. (Bridgewater, New Jersey, USA), a subsidiary of Hamamatsu Photonics K.K. (Hamamatsu City, Japan), has purchased the stock of BAE Systems Imaging Solutions, Inc., a subsidiary of BAE Systems, Inc. (Falls Church, Virginia, USA). In recognition of the company’s deep roots starting in 1920 as the Fairchild Aerial Camera Corporation, the company will return to the name first used in 2001, Fairchild Imaging.

Fairchild Imaging is a semiconductor manufacturer specializing in high-performance CMOS image sensors in the visible to near-infrared and X-ray regions, and it has the world’s best low-noise CMOS image sensor design technology. Fairchild Imaging’s core products include scientific CMOS image sensors for scientific measurement applications that simultaneously realize high sensitivity, high-speed readout, and low noise, as well as X-ray CMOS image sensors for dental and medical diagnostic applications.

Fairchild Imaging’s core products are two-dimensional CMOS image sensors that take pictures in dark conditions where low noise is essential. These products complement Hamamatsu Photonics’ one-dimensional CMOS image sensors, which are used for analytical instruments and factory automation applications such as displacement meters and encoders. Therefore, Fairchild Imaging’s technologies will enhance Hamamatsu’s CMOS image sensor product line.

Through the acquisition of shares, we expect the following:

1. Promote sales activities of Fairchild Imaging’s products by utilizing the global sales network currently established by Hamamatsu Photonics Group.
2. While Hamamatsu Photonics’ dental business serves the European and the Asian regions including Japan, Fairchild Imaging serves North America. This will lead to the expansion and strengthening of our worldwide dental market share.
3. Fairchild Imaging will become Hamamatsu’s North American design center for 2D, low-noise image sensors. This will strengthen CMOS image sensor design resources and utilize our North American and Japanese locations to provide worldwide marketing and technical support.
4. Create new opportunities and products by combining Fairchild Imaging’s CMOS image sensor design technology with Hamamatsu Photonics’ MEMS technology to support a wider range of custom CMOS image sensors and provide higher value-added products.

BAE Systems is retaining the aerospace and defense segment of the BAE Systems Imaging Solutions portfolio, which was transferred to the BAE Systems, Inc. Electronic Systems sector prior to the closing of this stock purchase transaction.

Fairchild Imaging will continue their operating structure and focus on developing and providing superior products and solutions to their customers.
 
 
[Update Nov 6, 2024: statement from Fairchild Imaging]
 
We are very happy to announce a new chapter in the storied history of Fairchild Imaging! BAE Systems, Inc., which had owned the stock of Fairchild Imaging, Inc. for the past 13 years, has processed a stock sale to Photonics Management Corporation, a subsidiary of Hamamatsu Photonics K.K. Resuming the identity as Fairchild Imaging, Inc., we will operate as an independent, yet wholly owned, US entity.

Fairchild Imaging is a CMOS imaging sensor design and manufacturing company, specializing in high-performance image sensors. Our x-ray and visible spectrum sensors provide class-leading performance in x-ray, and from ultraviolet through visible and into near-infrared wavelengths. Fairchild Imaging’s core products include medical x-ray sensors for superior diagnostics, as well as scientific CMOS (sCMOS) sensors for measurement applications that simultaneously realize high sensitivity, fast readout, high dynamic range, and ultra-low noise in 4K resolution.
 
Marc Thacher, CEO of Fairchild Imaging, said:
“Joining the Hamamatsu family represents a great opportunity for Fairchild Imaging. Building upon decades of imaging excellence, we look forward to bringing new innovations and technologies to challenging imaging applications like scientific, space, low-light, machine vision, inspection, and medical diagnostics. The acquisition by Hamamatsu will help drive growth and agility as we continue as a design leader for our customers, partners, and employees.”
 
As part of this new chapter, Fairchild Imaging is unveiling its latest evolution of sCMOS sensors: sCMOS 3.1. These patented, groundbreaking imagers redefine the limits of what is possible in CMOS sensors for the most demanding of imaging applications.

Go to the original article...

Lynred announces 8.5um pitch thermal sensor

Image Sensors World        Go to the original article...

Link: https://ala.associates/wp-content/uploads/2024/09/241001-Lynred-8.5-micron-EN-.pdf

Lynred demonstrates smallest thermal imaging sensor for future Automatic Emergency Braking Systems (AEB) at AutoSens Europe 

Prototype 8.5 µm pixel pitch technology that shrinks the volume of thermal cameras by 50% is designed to help automotive OEMs meet tougher future AEB system requirements, particularly at night.

Grenoble, France, October 1, 2024 – Lynred, a leading global provider of high-quality infrared sensors for the aerospace, defense and commercial markets, today announces it will demonstrate a prototype 8.5 µm pixel pitch sensor during AutoSens Europe, a major international event for automotive engineers, in Barcelona, Spain, October 8 – 10, 2024. The 8.5 µm pixel pitch technology is the smallest infrared sensor candidate for future Automatic Emergency Braking (AEB) and Advanced Driver Assistance Systems (ADAS).

The prototype, featuring half the surface of current 12 µm thermal imaging sensors for automotive applications, will enable system developers to build much smaller cameras for integration in AEB systems.
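
The "half the surface" figure follows from pixel area scaling with the square of the pitch; a quick check:

```python
old_pitch_um, new_pitch_um = 12.0, 8.5
print(f"area ratio: {(new_pitch_um / old_pitch_um) ** 2:.2f}")  # ~0.50
```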

Following a recent ruling by the US National Highway Traffic Safety Administration (NHTSA), AEB systems will be mandatory in all light vehicles by 2029. It sets tougher rules for road safety at night.

The NHTSA sees driver assistance technologies and the deployment of sensors and subsystems as holding the potential to reduce traffic crashes and save thousands of lives per year. The European Traffic Safety Council (ETSC) also recognizes that AEB systems need to work better in wet, foggy and low-light conditions.

Thermal imaging sensors can detect and identify objects in total darkness. As automotive OEMs need to upgrade the performance of AEB systems within all light vehicles, Lynred is preparing a full roadmap of solutions to help achieve this compliance. Currently gearing up for high-volume production of its automotive-qualified 12 µm product offering, Lynred is ready to deliver the key component enabling Pedestrian Automatic Emergency Braking (PAEB) systems to work in adverse conditions, particularly at night, when more than 75% of pedestrian fatalities occur.

Lynred is among the first companies to demonstrate a longwave infrared (LWIR) pixel pitch technology for ADAS and PAEB systems that will optimize the size-to-performance ratio of future generation cameras. The 8.5 µm pixel pitch technology will halve the volume of a thermal imaging camera, resulting in easier integration for OEMs, while maintaining the same performance standards as larger-sized LWIR models.

Go to the original article...
