Single-Photon Challenge image reconstruction competition is now open!

The Single-Photon Challenge, announced yesterday (Oct 19) at ICCV 2025, is a first-of-its-kind benchmark and open competition for photon-level imaging and reconstruction.

The competition is now open! The submission deadline is April 1, 2026 (AOE) and winners will be announced in summer 2026. 

The challenge provides access to single-photon datasets and a public leaderboard to benchmark algorithms for photon-efficient vision: https://SinglePhotonChallenge.com


For this image reconstruction challenge you will need to come up with novel and creative ways to transform many single-photon camera frames into a single high-quality image. This setting is very similar to traditional burst imaging, but taken to its extreme limit: instead of a few burst images you have access to a thousand, and the catch is that each input frame is extremely noisy.
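
As a rough illustration of this regime, here is a naive baseline sketch (my own illustration, not the challenge's reference method; the Bernoulli photon model and all names are assumptions): average the binary frames per pixel and invert the single-photon detection model to estimate flux.

```python
import numpy as np

def naive_reconstruction(frames: np.ndarray) -> np.ndarray:
    """Estimate per-pixel flux from a stack of binary single-photon frames.

    frames: (N, H, W) array of 0/1 photon detections.
    Uses the maximum-likelihood inversion for an ideal binary sensor:
    flux = -ln(1 - detection_rate), the inverse of the Bernoulli/Poisson model.
    """
    rate = frames.mean(axis=0)                 # fraction of frames with a detection
    rate = np.clip(rate, 0.0, 1.0 - 1e-6)      # avoid log(0) at saturated pixels
    flux = -np.log1p(-rate)                    # -ln(1 - rate)
    return flux / flux.max()                   # normalize for display

# Example: 1000 extremely noisy binary frames of a synthetic gradient scene
rng = np.random.default_rng(0)
scene = np.linspace(0.05, 2.0, 64 * 64).reshape(64, 64)       # ground-truth flux
frames = rng.random((1000, 64, 64)) < (1 - np.exp(-scene))    # Bernoulli photon model
recon = naive_reconstruction(frames.astype(np.float64))
```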

There are thousands of dollars in prizes to win, thanks to the sponsors Ubicept and Singular Photonics.

Samsung announces 0.5um pixel

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-hp5/

Specifications:

Effective Resolution: 16,384 x 12,288 (200 MP)
Pixel Size: 0.5 μm
Optical Format: 1/1.56"
Color Filter: Tetra²pixel RGB Bayer Pattern
Normal Frame Rate: 7.5 fps @ full, 30 fps @ 50 MP, 90 fps @ 12.5 MP
Video Frame Rate: 30 fps @ 8K, 120 fps @ 4K, 480 fps @ FHD (w/o AF)
Shutter Type: Electronic rolling shutter
ADC Accuracy: 10-bit
Supply Voltage: 2.2 V analog, 1.8 V I/O, 1.0 V digital core
Operating Temperature: -20℃ to +85℃
Interface: 4 lanes (4.5 Gbps per lane) D-PHY / 3 trios (4 Gsps per trio) C-PHY
Chroma: Tetra²pixel
Autofocus: Super QPD (PDAF)
HDR: Smart-ISO Pro (iDCG), Staggered HDR
Output Formats: RAW8, RAW10, RAW12, RAW14
Analog Gain: 16x @ full, 256x @ 12.5 MP

Excerpt from Baidu news (translated with Google Translate):

Samsung releases ISOCELL HP5, the world's first 200-megapixel image sensor with 0.5 µm ultra-fine pixels

... Samsung officially released the new 200-megapixel image sensor ISOCELL HP5, which is expected to debut first in the telephoto camera of the OPPO Find X9 Pro.

... The ISOCELL HP5 sensor has a 1/1.56-inch optical format, an ultra-high resolution of 16384 x 12288, and a unit pixel size compressed to 0.5 microns, making it the world's first 200-megapixel Samsung sensor equipped with 0.5 µm ultra-fine pixels.

To overcome the challenges posed by small pixels, ISOCELL HP5 integrates multiple cutting-edge technologies. Among them, dual vertical transfer gate (D-VTG) and front deep trench isolation (FDTI) technologies work together to effectively increase the full well capacity of each pixel, or its ability to accommodate light signals. 

Tower Semiconductor preprint on 2.2um global shutter pixel

Yokoyama et al. from Tower Semiconductor have posted a preprint titled "Charge Domain Type 2.2um BSI Global Shutter Pixel with Dual Depth DTI Produced by Thick-Film Epitaxial Process":

Abstract: We developed a 2.2um Backside Illuminated (BSI) Global Shutter (GS) pixel with true charge-domain Correlated Double Sampling (CDS). A thick-film epitaxial deep DTI (Deep Trench Isolation) process was implemented to enhance 1/PLS (Parasitic Light Sensitivity) using a dual depth DTI structure.
The thickness of the epitaxial substrate was 8.5 um. This structure was designed using optical simulation. By using a thick epitaxial substrate, it is possible to reduce the amount of light that reaches the memory node. Dual-depth DTI, which makes the DTI shallower on the readout side, allows signals to be read smoothly from the PD to the memory node. To achieve this structure, we developed a process for thick epitaxial substrates, and the dual-depth DTI can be fabricated with a single mask. This newly developed pixel is the smallest charge-domain GS pixel to date. Despite its compact size, this pixel achieved high QE (83%) and 1/PLS of over 10,000. The pixel maintains 80% of its peak QE at ±15 degrees. 1/PLS is stable even when the F# is small.

Full paper: https://sciprofiles.com/publication/view/7ae02d55ce8f3721ebfc8c35fb871d97 

Billion-pixel-resolution microscopy of curved surfaces

A recent Optica news article covers a publication by Yang et al. that presents a new technique for capturing high-resolution microscopy images of curved surfaces.

X. Yang, H. Chen, L. Kreiss, C.B. Cook, G. Kuczewski, M. Harfouche, M.O. Bohlen, R. Horstmeyer, “Curvature-adaptive gigapixel microscopy at submicron resolution and centimeter scale,” Opt. Lett., 50, 5977-5980 (2025).
DOI: 10.1364/OL.572466

New microscope captures large, high-resolution images of curved samples in single snapshot
Innovation promises faster insights for biology, medicine and industrial applications

Researchers have developed a new type of microscope that can acquire extremely large, high-resolution pictures of non-flat objects in a single snapshot. This innovation could speed up research and medical diagnostics or be useful in quality inspection applications.

“Although traditional microscopes assume the sample is perfectly flat, real-life samples such as tissue sections, plant samples or flexible materials may be curved, tilted or uneven,” said research team leader Roarke Horstmeyer from Duke University. “With our approach, it’s possible to adjust the focus across the sample, so that everything remains in focus even if the sample surface isn’t flat, while avoiding slow scanning or expensive special lenses.”

In the Optica Publishing Group journal Optics Letters, the researchers show that the microscope, which they call PANORAMA, can capture submicron details — 1/60 to 1/120 the diameter of a human hair — across an area roughly the size of a U.S. dime without moving the sample. It produces a detailed gigapixel-scale image, which has 10 to 50 times more pixels than the average smartphone camera image.

“This tool can be used wherever large-area, detailed imaging is needed. For instance, in medical pathology, it could scan entire tissue slides, such as those from a biopsy, at cellular resolution almost instantly,” said Haitao Chen, a doctoral student in Horstmeyer’s lab. “In materials science or industrial inspection, it could quickly inspect large surfaces such as a chip wafer at high detail.”

Webinar on metasurface optics design

Metasurface Optics for Information Processing and Computing
Presented by Shane Colburn
Thu, Oct 9, 2025, 1:00 PM EDT


Optics has long played a central role in information processing, from early analog computing systems to modern optical imaging and communication platforms. Recent advancements in nanofabrication and wavefront control have enabled a new class of ultrathin optical elements known as metasurfaces, which significantly expand the design space for manipulating light. By tailoring local phase, amplitude, and polarization responses at subwavelength scales, metasurfaces offer a compact and highly controllable platform for performing complex transformations on optical wavefronts.

Metaoptics for optical information processing leverages co-design of optical elements and computational algorithms to perform operations typically handled in the digital domain. Metasurfaces can be engineered to modify the point spread function of imaging systems, enabling custom optical transformations that enhance task-specific performance. Convolutional metaoptics, in particular, allow spatial convolutions to be executed directly in the optical domain as part of a hybrid analog-digital pipeline. These approaches present opportunities for reducing latency and energy consumption in computational imaging and embedded vision systems. Key challenges remain in achieving robustness, scalability, and seamless integration with electronic hardware, motivating continued research at the intersection of optics, machine learning, and photonic device design.
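
To make the convolutional-metaoptics idea concrete, here is a minimal simulation sketch (an illustration under standard Fourier-optics assumptions, not material from the webinar): an incoherent imaging system acts as a linear shift-invariant filter, so a metasurface engineered to realize a particular point spread function (PSF) performs the corresponding spatial convolution in the optics, before any digitization.

```python
import numpy as np

def optical_convolution(scene: np.ndarray, psf: np.ndarray) -> np.ndarray:
    """Model incoherent image formation as convolution of the scene with the PSF."""
    otf = np.fft.rfft2(np.fft.ifftshift(psf))            # optical transfer function
    return np.fft.irfft2(np.fft.rfft2(scene) * otf, s=scene.shape)

# Example: a PSF engineered as a small Laplacian-like kernel at the center of the
# field. (Real incoherent PSFs are nonnegative; signed kernels are realized in
# practice as the difference of two nonnegative channels.)
scene = np.random.rand(128, 128)
psf = np.zeros((128, 128))
psf[63:66, 63:66] = np.array([[0, 1, 0], [1, -4, 1], [0, 1, 0]])
edges = optical_convolution(scene, psf)                  # convolution done "in optics"
```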

Who should attend:
This session is ideal for professionals involved in research and development, optical engineering, photonic device development, computational imaging, machine learning for optics, and advanced nanofabrication. It is particularly relevant to those working with technologies such as metasurfaces, wavefront shaping, hybrid analog-digital imaging systems, convolutional metaoptics, embedded vision hardware, and optical information processing platforms.

About the presenter:
Shane Colburn received his Ph.D. in electrical engineering and completed his postdoctoral studies at the University of Washington. His research primarily focused on dielectric metasurfaces for computational imaging and information processing, emphasizing hybrid optical-digital systems that leverage the compact form factor offered by metasurfaces and the aberration mitigation capabilities of computational imaging. He developed design methods using metaoptics for object detection and performing convolutions in the optical domain. Additionally, he investigated methods for reconfiguring metasurfaces, including novel architectures, electromechanical tuning, and phase-change material metasurfaces.

Colburn was previously the director of optical design at Tunoptix, where he led the development of its proprietary designs and nanofabrication efforts for building robust, high-performance imaging systems using metaoptics. Colburn is now the founder and managing director of Edgedyne, a company that develops information processing technologies based on metaoptics and provides photonic design and consulting services to clients in a range of sectors, including telecommunications, semiconductor, remote sensing, medical imaging, and consumer electronics.

Article about Japan’s TDK and Apple iPhone cameras

Original article here: https://gori.me/iphone/iphone-news/161745

(Translated using google translate)

TDK's TMR Sensor is the Secret Behind iPhone Cameras; Tim Cook Praises Japanese Technology

At the first public opening of the Apple Yokohama Technology Center, TDK reveals thirty years of accumulated technology and a manufacturing process that competitors cannot imitate

Apple CEO Tim Cook visited the Apple Yokohama Technology Center (YTC) in Tsunashima, Yokohama, during his visit to Japan. This is the first time the facility has been opened to the public, revealing a state-of-the-art research and development center with about 6,000 square meters of lab space and a clean room.

On the same day, YTC presented four of the Japanese companies that support Apple’s innovation: TDK, AGC, Kyocera, and Sony Semiconductor Solutions. Tim Cook told reporters, “Apple is never satisfied with the status quo and keeps asking for something better. The same goes for our Japanese partners. We will never be satisfied and will keep developing, always aiming for further advancement,” emphasizing the importance of collaborative relationships with Japanese companies.

The partnership between TDK and Apple began before the first iPod and has continued for more than three decades. Today, almost all Apple products use TDK technology, which contributes to a wide range of fields, including batteries, filters, inductors, microphones, and various sensors.

It’s worth noting that TDK manufactures all of its products for Apple using 100% renewable energy. Behind the beautiful photos that iPhone users casually take sits an ultra-compact component developed by TDK, the TMR sensor, a technology that every iPhone user benefits from.

TMR stands for “tunnel magnetoresistance”; the TMR sensor is an ultra-small device that detects changes in magnetic fields with extremely high sensitivity. It is so small that some fifty thousand of them would fit in a wine glass, a size almost invisible to the naked eye.

The sensor’s operating principle relies on a quantum-mechanical phenomenon. Put simply, an ultra-thin insulator is sandwiched between two magnetic layers, and the structure’s electrical resistance changes dramatically with the external magnetic field. The TMR sensor reaches roughly a hundred times the sensitivity of conventional Hall elements and is characterized by extremely clear, “zero or one” responses.
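
As a rough numerical illustration of that principle (a textbook-style toy model, not TDK’s actual device physics), the junction resistance can be modeled as swinging between its parallel and antiparallel states with the angle between the two magnetic layers:

```python
import numpy as np

# Toy tunnel-magnetoresistance model (illustrative only, not TDK's device):
# resistance interpolates between the parallel state R_P and the antiparallel
# state R_AP = R_P * (1 + TMR_ratio) as the magnetic layers rotate apart.
def tmr_resistance(theta_rad, r_parallel=1.0, tmr_ratio=2.0):
    r_antiparallel = r_parallel * (1.0 + tmr_ratio)
    return r_parallel + (r_antiparallel - r_parallel) * (1.0 - np.cos(theta_rad)) / 2.0

angles = np.linspace(0.0, np.pi, 5)          # parallel -> antiparallel
print(tmr_resistance(angles))                # resistance triples across the swing
```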

The camera automatically focusing the moment you launch the camera app and point the lens at a subject feels “natural” to many users. Behind this, however, the TMR sensor pinpoints the position of the lens in a thousandth of a second.

The specific mechanism is as follows. When the lens moves back and forth, a small magnet moves with it. The TMR sensor detects the changing distance to this magnet as a change in the magnetic field and instantly determines where the lens is. The camera system then makes the appropriate focus adjustment by detecting position rather than measuring distance.

The TMR sensor, first used for autofocus in the iPhone X, has also been applied to sensor-shift image stabilization (OIS) since the iPhone 12 series. Even minute movements caused by camera shake are instantly detected, and the sensor itself is shifted to compensate.

The latest iPhone 17 series also uses TMR sensors for the front camera’s Center Stage feature, detecting fine lens movement in real time with roughly 100 times the sensitivity of typical Hall elements.

An easy-to-understand example of TMR sensors is the joystick of a game controller. Conventional joysticks use a mechanical part called a potentiometer, detecting the angle through physical contact.

Joysticks using TMR sensors, by contrast, operate contactlessly, which greatly improves response speed and accuracy. With no mechanical wear, they also maintain their accuracy even after long use.

Competitors can understand the structure of a TMR sensor by disassembling one. Actually producing an equivalent product, however, is very difficult. The reason is TDK’s proprietary manufacturing process technology.

Semiconductor-style equipment is used in manufacturing, but the equipment itself is not what matters. The core is the process of combining multiple specialized technologies, such as TMR film deposition, magnetic material plating, and dry etching, to create a unique layered structure that determines from which directions the sensor detects the magnetic field and from which it does not.

Modern smartphones are becoming thinner, and many magnets are used inside them. One might worry whether the delicate TMR sensor can work properly in such an environment.

TDK works closely with customers from the design stage to propose optimal sensor placement and design. Because the influence of a magnetic field weakens rapidly with just 1 cm of physical separation, interference problems can be solved with proper design. TDK staff visit Cupertino fourteen times a year, and this close work with Apple’s camera team is proof of the partnership.

TDK has leveraged ninety years of expertise in magnetic materials to establish this process technology. The TMR sensors manufactured at the Asama Techno Plant in Japan are also produced sustainably, using 100% renewable energy.

Behind each photo that iPhone users casually take is this long accumulation of Japanese precision technology. TMR sensors are by no means a prominent component, but they will continue to evolve as a key technology supporting the modern smartphone experience.

Sony announces IMX927 a 105MP global-shutter CIS

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx927-937.html

Release page: https://www.sony-semicon.com/en/info/2025/2025092901.html 

PetaPixel article: https://petapixel.com/2025/09/29/sonys-new-global-shutter-sensor-captures-105-megapixels-at-100fps/ 

Sony Semiconductor Solutions to Release the Industry-Leading Global Shutter CMOS Image Sensor for Industrial Use That Achieves Both Approximately 105-Effective-Megapixels and High-Speed 100 FPS Output

Delivering high-resolution and high-frame-rate imaging to contribute to diversified, advanced inspections 

Atsugi, Japan — Sony Semiconductor Solutions Corporation (Sony) today announced the upcoming release of the IMX927, a stacked CMOS image sensor with a back-illuminated pixel structure and a global shutter. It is an industry-leading sensor that achieves both a high resolution of approximately 105 effective megapixels and high-speed output at a maximum frame rate of 100 fps.

The new sensor product is equipped with Pregius S™ global shutter technology, made possible by Sony’s original pixel structure, ensuring high-quality imaging performance with distortion-free imaging and minimal noise. By optimizing the sensor drive in pixel readout and the A/D converter, it supports high-speed image data output. Introducing this high-resolution, high-frame-rate model into the product lineup will help improve productivity in the industrial equipment domain, where recognition targets and inspection methods continue to diversify.

With the automation of factories progressing, the need for machine vision cameras that can capture a variety of objects at high speed and high resolution is growing in the industrial equipment domain. With its proprietary back-illuminated pixel structure, Sony’s global shutter CMOS image sensors deliver high sensitivity and saturation capacity. Because they can capture moving subjects at high resolution without distortion, they are increasingly being used in a wide range of applications such as precision component recognition and foreign matter inspection. The new IMX927 features a high resolution of approximately 105 effective megapixels while delivering a high frame rate of up to 100 fps, helping shorten measurement and inspection times. It also shows promise in advanced measurement and inspection applications, for instance imaging larger objects in high resolution and three-dimensional inspection using multiple sets of image data.

Along with the IMX927, Sony will also release seven products with different image sizes and frame rates. It has also developed a new ceramic package with a connector that is compatible with all of these products, which allows cameras to be designed with sensors removable from camera modules. This can contribute to streamlining camera assembly and sensor replacement. By expanding its global shutter product lineup, Sony is contributing to the advancement of industrial equipment, where recognition and inspection tasks continue to become ever more precise and diversified.

Main Features
■ Global shutter technology with Sony’s proprietary pixel structure for high-resolution and high-sensitivity imaging
The new sensor is equipped with Pregius S global shutter technology. The very small 2.74 μm pixels, which use Sony’s proprietary back-illuminated and stacked structure, enable the approximately 105-effective-megapixel resolution in a compact size with a high level of sensitivity and saturation capacity. In addition to inspections of precision components such as semiconductors and flat-panel displays, which require a high degree of accuracy, this also enables the capture of larger objects in distortion-free, high-resolution, low-noise images. Machine vision cameras can thereby achieve higher-precision measurement and inspection in a wide range of applications.
■ Circuit structure enabling a highly efficient sensor drive that saves power and makes high-speed imaging possible
The new sensor employs a circuit structure that optimizes pixel reading and sensor drive in the A/D converter, which saves power and enables faster data processing. This design makes a high-speed frame rate of up to 100 fps possible, reducing the time to output image data for more efficiency in measurement and inspection tasks. It also shows promise for application in advanced inspections such as three-dimensional inspections, which use multiple image data sets.
■ New ceramic package with connector to streamline camera assembly and contribute to stable operation
Sony has also developed a new ceramic package with connector, which is compatible with a series of eight products including the IMX927, making it possible to combine or detach sensors from camera modules flexibly to design cameras. Using this package makes camera assembly easier and streamlines the process of replacing sensors to suit camera specifications. It also has a superior heat dissipation structure, which suppresses the impact of heat on camera performance, contributing to stable, long-term operation.

ISSW 2026 call for papers

The International SPAD Sensor Workshop

1st-4th June 2026 / Yonsei University, Seoul, South Korea

The 2026 International SPAD Sensor Workshop (ISSW) is a biennial event focusing on Single-Photon Avalanche Diodes (SPAD), SPAD-based sensors, and related applications. The workshop welcomes all researchers (including PhD students, postdocs, and early-career researchers), practitioners, and educators interested in these topics.

This fifth edition of the workshop will take place in Seoul, South Korea, hosted at Yonsei University, in a venue suited to encourage interaction and a shared experience among the attendees. The workshop will be preceded by a 1-day introductory school on SPAD sensor technology, held in the same venue on June 1st, 2026.

The workshop will include a mix of invited talks and peer-reviewed contributions. Accepted works will be published on the International Image Sensor Society website (https://imagesensors.org/). Submitted works may cover any of the aspects of SPAD technology, including device modeling, engineering and fabrication, SPAD characterization and measurements, pixel and sensor architectures and designs, and SPAD applications.

Topics
Papers on the following SPAD-related topics are solicited:

●  CMOS/CMOS-compatible technologies
●  SiPMs
●  III-V, Ge-on-Si
●  Modeling
●  Quenching and front-end circuits
●  Architectures
●  Time-to-digital converters
●  Smart data-processing techniques
●  Applications of SPAD single pixels and arrays, such as:
   o  Depth sensing / ToF / LiDAR
   o  Time-resolved imaging
   o  Low-light imaging
   o  Quantum imaging
   o  High-dynamic-range imaging
   o  Biophotonics
   o  Computational imaging
   o  Quantum RNG
   o  High-energy physics
   o  Quantum communications
●  Emerging technologies & applications

Draft paper submission
Submission portal TBD.

Paper format - Each submission should comprise a 1000-character abstract and a 3-page paper, equivalent to 1 page of text and 2 pages of images. The submission must include the authors' name(s) and affiliation, mailing address, and email address. The formatting can adhere to either a style that integrates text and figures, akin to the standard IEEE format, or a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples illustrating these formats can be accessed in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, January 11th, 2026.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee.

Accepted papers will be made freely available for download from the International Image Sensor Society website.

Poster submission
In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics.

If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with the authors’ name(s) and affiliation, mailing address, and e-mail address.

The deadline for poster submission is 23:59 CET, January 11th, 2026.

Key dates
The deadline for paper submission is 23:59 CET, January 11th, 2026.

Authors will be notified of the acceptance of their papers & posters latest by February 22nd, 2026.

The final paper submission date is March 29th, 2026.

The presentation material submission date is May 22nd, 2026.

Location
ISSW 2026 will be held fully in-person in Seoul, S. Korea, at the Baekyang Nuri Grand Ballroom at Yonsei University. 

Happening Today: Swiss Photonics Lunch Chat

Link: https://www.swissphotonics.net/home?event_id=4480

Lunch Chat: SPAD arrays and cameras: a comparison with conventional image sensors and detectors

Tue, 16.09.2025, online

This talk will introduce single-photon avalanche diode (SPAD) arrays and cameras, highlighting how they differ from conventional imaging and photon-counting technologies. We will review the state-of-the-art in SPAD devices and compare their performance with established detectors such as photomultiplier tubes (PMTs), silicon photomultipliers (SiPMs), EMCCD cameras, as well as modern sCMOS and qCMOS image sensors. The discussion will focus on their working principles and on when SPAD-based systems provide unique advantages versus when conventional solutions may be more appropriate, depending on the application.

Speaker:
Milo Wu, PhD, Business Development Manager, PI Imaging

Date
Tuesday, 16 September 2025

Time
12:00 - 12:45 (CEST)

Software
Zoom

Costs
free of charge

Registration only necessary once
This event series requires registration (see link above). We will send you the access information (Zoom-link and ID) by email after the registration. As the Zoom link remains the same every week, you do not need to register again for the following meetings.

Image sensors workshop at IEEE Sensors 2025

A workshop titled "Future CMOS Image Sensors for AI Era – AI or not" will be held alongside IEEE Sensors 2025 in Vancouver, Canada on Sunday Oct 19, 2025.

Future CMOS Image Sensors for AI Era – AI or not
Artificial intelligence (AI) is becoming increasingly integrated into our daily lives. In particular, Deep Neural Networks (DNNs) are expected to merge with CMOS Image Sensors (CISs), which will soon open up a new era of smart, adaptive, and autonomous systems in various consumer electronics, such as smartphones, automotive technology, and augmented/virtual reality glasses.

This workshop will focus on future CISs that incorporate state-of-the-art computational image signal processors (ISPs). These capabilities have evolved from traditional computation to DNNs in the AI era. Industry leaders and academic researchers will present invited talks covering two major topics:
1. Trends in CIS technology focused on computation, including neural networks for future applications and associated software.
2. Sensor technologies and ISPs designed for the AI era, and sensor simulations tailored for future CISs.

We believe this workshop will pave the way for advancements in future imaging and sensing technology. We encourage all attendees of the IEEE SENSORS 2025 conference to engage in discussions about "Future CMOS Image Sensors for AI Era – AI or not".


International Image Sensor Workshop (IISW) 2025 proceedings available

IISW 2025 papers are now available in our public archive at https://imagesensors.org/2025-papers/.

Each article has also been assigned a DOI for easy future reference, just like all other papers published by the IISS since 2007.

Thank you to all the organizers and volunteers who made this workshop possible!

VoxelSensors Qualcomm collab

https://www.globenewswire.com/news-release/2025/08/28/3140996/0/en/VoxelSensors-to-Advance-Next-Generation-Depth-Sensing-Technology-with-10x-Power-Savings-for-XR-Applications.html 

VoxelSensors to Advance Next-Generation Depth Sensing Technology with 10x Power Savings for XR Applications

 Brussels, Aug. 28, 2025 (GLOBE NEWSWIRE) -- VoxelSensors, a company developing novel intelligent sensing and data insights technology for Physical AI, today announced a collaboration with Qualcomm Technologies, Inc. to jointly optimize VoxelSensors’ sensing technology with Snapdragon® XR Platforms.

Technology & Industry Challenges

VoxelSensors has developed Single Photon Active Event Sensor (SPAES™) 3D sensing, a breakthrough technology that addresses critical performance limitations of current depth sensing for robotics and XR, delivering 10x power savings and lower latency while maintaining robust performance across varied lighting conditions. This innovation is set to enable machines to understand both the physical world and human behavior from the user’s point of view, advancing Physical AI.

Physical AI processes data from human perspectives to learn about the world around us, predict needs, create personalized agents, and adapt continuously through user-centered learning. This enables new and exciting applications that were previously unattainable. At the same time, Physical AI pushes operation into wider environments that pose challenging conditions such as variable lighting and power constraints.

VoxelSensors’ technology addresses both challenges by offering a technology that expands the operating limits of current-day sensors while collecting human point-of-view data to better train Physical AI models. Overcoming these challenges will define the future of human-machine interaction.

Collaboration

VoxelSensors is working with Qualcomm Technologies to jointly optimize VoxelSensors’ SPAES™ 3D sensing technology with Snapdragon AR2 Gen 1 Platform, allowing a low-latency and flexible 3D active event data stream. The optimized solution will be available to select customers and partners by December 2025.

“We are pleased to collaborate with Qualcomm Technologies,” said Johannes Peeters, CEO of VoxelSensors. “After five years of developing our technology, we see our vision being realized through optimizations with Snapdragon XR Platforms. With our sensors that are ideally suited for next-generation 3D sensing and eye-tracking systems, and our inference engine for capturing users’ egocentric data, we see great potential in enabling truly personal AI agent interactions only available on XR devices.”

“For the XR industry to expand, Qualcomm Technologies is committed to enabling smaller, faster, and more power-efficient devices,” said Ziad Asghar, SVP & GM of XR at Qualcomm Technologies, Inc. “We see great potential for small, lightweight AR smart glasses that consumers can wear all day. VoxelSensors’ technology offers the potential to deliver higher performance rates with significantly lower power consumption, which is needed to achieve this vision.”

Market Impact and Future Outlook

As VoxelSensors continues to miniaturize their technology, the integration into commercial products is expected to significantly enhance the value proposition of next-generation XR offerings. Collaborating with Qualcomm Technologies, a leader in XR chipsets, emphasizes VoxelSensors’ commitment to fostering innovation to advance the entire XR ecosystem, bringing the industry closer to mainstream adoption of all-day wearable AR devices.

SMPTE awards Dr. Peter Centen

https://www.smpte.org/about/awards-programs/camera-winners

2025 - Dr. Peter G. M. Centen

For pioneering innovations in image sensor technology that transformed electronic cinematography and broadcast imaging. Over a career spanning more than four decades, Dr. Centen played a pivotal role in the industry’s transition from CCD to CMOS image sensors, serving as chief architect of the Xensium family that enabled HD, 4K, HDR, and HFR imaging. During the transitions from SD to HD, narrow-screen to widescreen, and film to digital cinematography, his development of Dynamic Pixel Management—a groundbreaking sub-pixel-control technology—allowed a single sensor to support multiple resolutions and aspect ratios, including ultra-wide formats (~2.4:1), without compromise. This innovation, first implemented in the Viper FilmStream camera, eliminated the need for format-specific imaging systems and laid the foundation for today’s flexible, high-performance camera designs.

The Camera Origination and Imaging Medal, established in 2012, recognizes significant technical achievements related to inventions or advances in imaging technology, including sensors, image processing electronics, and the overall embodiment and application of image capture devices.


Prophesee announces GenX320 starter kits for Raspberry Pi

https://www.prophesee.ai/2025/08/26/prophesee-brings-event-based-vision-to-raspberry-pi-5-with-genx320-starter-kit/

Prophesee Brings Event-Based Vision to Raspberry Pi 5 with GenX320 Starter Kit

New starter kit provides developers efficient, cost-effective way to leverage low-power, high-speed neuromorphic vision for IoT, drones, robotics, security and surveillance—with one of the world’s most popular embedded development platforms

PARIS, Aug 26, 2025

Prophesee, the inventor and leader in event-based neuromorphic vision systems, today announces the launch of the GenX320 Starter Kit for Raspberry Pi® 5, making its breakthrough frameless sensing technology available to the Raspberry Pi developer community for the first time. Built around Prophesee’s ultra-compact, ultra-efficient GenX320 event-based vision sensor, the kit connects directly to the Raspberry Pi 5 camera connector to allow development of real-time applications that leverage the advantages of event-based vision for drones, robotics, industrial automation, surveillance, and more. 

The kit enables efficient, cost-effective, and easy-to-use access for developing solutions based on Prophesee’s advanced Metavision® event-based vision platform, through the company’s OpenEB, the open-source core of its award-winning Metavision SDK. The Raspberry Pi ecosystem is one of the largest and most active hardware communities in the world, with more than 60 million units sold and millions of developers engaged across open-source and maker platforms.

Event-based vision is a paradigm shift from traditional frame-based approaches. It doesn’t capture entire images at once but instead detects changes in brightness, known as “events,” at each pixel. This makes the sensors much faster (responding in microseconds), able to operate with far less data and processing power, and more power-efficient than traditional sensors.
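
To illustrate the data format, here is a minimal sketch (my own illustration, not the OpenEB/Metavision API; the event layout and function names are assumptions): a common first processing step accumulates a short time window of (x, y, t, polarity) events into a 2D frame.

```python
import numpy as np

def events_to_frame(events: np.ndarray, width: int, height: int,
                    t_start: float, t_window: float) -> np.ndarray:
    """Accumulate signed event polarities in [t_start, t_start + t_window) into a frame."""
    frame = np.zeros((height, width), dtype=np.int32)
    mask = (events["t"] >= t_start) & (events["t"] < t_start + t_window)
    sel = events[mask]
    # +1 for a brightness increase, -1 for a decrease
    np.add.at(frame, (sel["y"], sel["x"]), np.where(sel["p"] > 0, 1, -1))
    return frame

# Example with a synthetic event stream on a 320x320 sensor (GenX320 resolution)
dtype = np.dtype([("x", np.uint16), ("y", np.uint16), ("t", np.int64), ("p", np.int8)])
rng = np.random.default_rng(0)
events = np.zeros(10_000, dtype=dtype)
events["x"] = rng.integers(0, 320, 10_000)
events["y"] = rng.integers(0, 320, 10_000)
events["t"] = np.sort(rng.integers(0, 10_000, 10_000))   # microsecond timestamps
events["p"] = rng.choice([-1, 1], 10_000)
frame = events_to_frame(events, 320, 320, t_start=0, t_window=1_000)  # 1 ms window
```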

The kit is purpose-built to enable real-world, real-time applications where traditional frame-based vision struggles:

  • Drones & Robotics: Obstacle avoidance, drone-to-drone tracking, real-time SLAM
  • Industrial IoT: 3D scanning, defect detection, and predictive maintenance
  • Surveillance & Safety: Intrusion detection, fall detection, and motion analytics 

ABOUT THE KIT

The GenX320 Starter Kit is built around the Prophesee GenX320 sensor, the smallest and most power-efficient event-based vision sensor available. With a 320×320 resolution, >140 dB dynamic range, event rate equivalent to ~10,000 fps, and sub-millisecond latency, the sensor provides the performance needed for demanding real-time applications on an embedded platform.

Key Features:

  •  Compact event-based camera module with MIPI CSI-2 interface
  •  Native integration with Raspberry Pi 5 (board sold separately)
  •  Power-efficient operation (<50 mW sensor-only consumption)
  •  OpenEB support with Python and C++ APIs

Software Resources:

  • Developers will be able to access drivers, data recording, replay and visualization tools on GitHub.
  •  Access to the Prophesee Knowledge Center, a centralized location for users to access various resources, including: a download repository, user guides, and FAQs; a community forum to share ideas; a support ticket system; and additional resources such as application notes, product manuals, training videos, and more than 200 academic papers.

AVAILABILITY

The Prophesee GenX320 Starter Kit for Raspberry Pi 5 is available for pre-order starting August 26, 2025, through Prophesee’s website and authorized distributors. For more information or to order, visit: www.prophesee.ai/event-based-starter-kit-genx320-raspberry-pi-5/ 

Galaxycore 50MP 0.61um CIS

Translated from Baidu news: https://baijiahao-baidu-com.translate.goog/s?id=1839605263838551524&wfr=spider&for=pc&_x_tr_sl=zh-CN&_x_tr_tl=de&_x_tr_hl=de&_x_tr_pto=wapp

GLOBAL HUI, August 5th | GalaxyCore (688728.SH) announced that it has recently achieved mass production and shipment of its 0.61-micron 50-megapixel image sensor. This product, the world's first single-chip image sensor with 0.61-micron pixels, is based on the company's unique Galaxy Cell 2.0 process platform and manufactured in the company's own wafer fab, significantly improving small-pixel performance. It uses a 1/2.88-inch optical format, reducing the thickness of the camera module and making it widely applicable to smartphone rear main cameras, ultra-wide-angle cameras, and front-facing cameras. Furthermore, it integrates single-frame high dynamic range (DAGHDR) technology, achieving wider dynamic range coverage in a single exposure and effectively addressing overexposure and underexposure in backlit scenes. It also supports PDAF phase-detection autofocus, ensuring a fast and accurate shooting experience.
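
The single-frame HDR idea can be illustrated with a generic dual-analog-gain merge (a hedged sketch of the general technique; GalaxyCore's actual DAG HDR pipeline is not public, and all names below are illustrative): each pixel is read at two analog gains within one exposure, and the merge takes the high-gain sample in the shadows and the low-gain sample where the high-gain channel saturates.

```python
import numpy as np

def merge_dual_gain(low_gain: np.ndarray, high_gain: np.ndarray,
                    gain_ratio: float = 16.0, sat_level: float = 0.95) -> np.ndarray:
    """Combine two readouts (normalized to [0, 1]) into one linear HDR image."""
    use_high = high_gain < sat_level              # high-gain valid where unsaturated
    hdr = np.where(use_high, high_gain / gain_ratio, low_gain)
    return hdr                                    # linear, up to gain_ratio x wider DR

# Example: dark and bright regions of a toy scene
scene = np.concatenate([np.full(4, 0.01), np.full(4, 0.5)])
low = np.clip(scene, 0, 1)                        # low-gain readout
high = np.clip(scene * 16.0, 0, 1)                # high-gain readout (clips in highlights)
print(merge_dual_gain(low, high))                 # recovers both regions linearly
```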

The company's 0.61-micron 50-megapixel image sensor has entered mass production and shipment, successfully entering the rear-mounted main camera market for branded mobile phones. This marks further market recognition of the company's innovative high-pixel single-chip integration technology and fully demonstrates the efficiency of its Fab-Lite model. To date, the company has achieved mass production of 0.7-micron 50-megapixel, 1.0-micron 50-megapixel, and 0.61-micron 50-megapixel image sensors based on single-chip integration technology. The company will subsequently leverage this technology platform to further enhance the performance of high-pixel products such as 32-megapixel and 50-megapixel, while also launching products with specifications exceeding 100 megapixels. This will continuously strengthen the company's core competitiveness, increase market share, and expand its leading position. 

Canon 410MP CIS

Link: https://www.photografix-magazin.de/canon-zeigt-410-mp-vollformatsensor-technik-rekord-fuer-spezialanwendungen/

Related post on the blog in January: https://image-sensors-world.blogspot.com/2025/01/canon-announces-410mp-full-frame-sensor.html 

Canon has presented its new record image sensor with 410 megapixels to the public for the first time. The presentation took place at the P&I 2025 in China.

Spectacular key engineering data

  •  Resolution: 24,592 x 16,704 pixels, corresponding to almost 200 times Full HD and 12 x 8K.
  •  Sensor architecture: new back-illuminated stacked sensor with integrated signal processing.
  •  Data throughput: an impressive readout speed of 3,280 MP/s, which allows 8 frames/s at full resolution (see the quick check below).
  •  Monochrome version: uses 4-pixel binning for higher light sensitivity and allows 100 MP video at 24 fps.
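
The throughput and frame-rate figures are mutually consistent, as a quick back-of-envelope check shows (my arithmetic, not from the article):

```python
# Quick consistency check of the quoted numbers (illustrative arithmetic):
pixels = 24592 * 16704                        # ~410.8 MP full resolution
throughput_mp_per_s = 3280                    # quoted readout speed in MP/s
print(throughput_mp_per_s / (pixels / 1e6))   # ~7.98 -> the quoted 8 frames/s
```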

Target market: Industry instead of consumers

Canon does not position the LI8030SA for the mass market, but for highly specialized industries such as surveillance, medicine, and machine vision. At P&I 2025 the sensor was presented behind glass, which is usually a clear signal that it is still in the development phase. Canon is already soliciting expressions of interest, but the first models are not intended for classic cameras. Nevertheless, the technology could also influence Canon's commercial sensors in the future.

With the 410 MP sensor, Canon impressively shows where the journey can go. For hobby photographers, however, this sensor remains a distant dream, and most of us don't need such an extreme resolution anyway. In the professional sector, though, it opens up new dimensions.

Harvest Imaging 2025 Forum – Dec 8, 9 – Single-Photon Detection

Registration page: https://harvestimaging.com/forum_introduction_2025_coming.php

The Harvest Imaging forum will continue with a next edition scheduled for December 8 & 9, 2025, in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to the digital imaging community.

The 2025 Harvest Imaging forum will deal with a single topic and will have only one world-level expert as the speaker: 

"SINGLE PHOTON DETECTION"
Prof. dr. Robert HENDERSON (Univ. of Edinburgh, UK)

Abstract:

Access to the ultrafast quantum statistics of light enabled by new solid-state single photon imaging technologies is revolutionizing camera technology. 

The noise-free detection and precise localization of individual photons enables imaging of time itself (which directly enables depth perception) at unprecedented temporal and spatial resolutions. Such solid-state single-photon imaging technologies now approach the sensitivity, timing resolution and dark noise of vacuum photocathode approaches whilst providing robustness, low cost and high spatial resolution. Amongst these, CMOS Single-Photon Avalanche Diode (SPAD) arrays offer the unique capability to extract single-photon statistics in high background conditions using massively parallel on-chip timing and digital computation.

This forum will highlight the modelling, device structures, characterisation methods and circuitry necessary to develop this new generation of SPAD imaging system. Recent advances in SPAD direct time of flight (dToF) and photon counting sensor design techniques optimized for low power, computation, and area will be reviewed. 

The forum will focus primarily on the mainstream commercial applications of SPADs in low light imaging, depth imaging (RGB-Z) and LIDAR. Further examples will be drawn from emerging use cases in fluorescence microscopy, Raman spectroscopy, non-line-of-sight imaging, quantum optics and medical diagnostics (X-ray, PET). Future trends and prospects enabled by 3D-stacking technology will be considered.
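
As a toy illustration of the dToF photon-counting principle mentioned above (my own sketch, not forum material): a SPAD pixel timestamps photon arrivals relative to each laser pulse, and histogramming many returns lets the peak bin reveal the round-trip time, hence depth = c·t/2, even against a strong ambient background.

```python
import numpy as np

C = 3e8                                            # speed of light, m/s
BIN_S = 100e-12                                    # 100 ps timing bins

rng = np.random.default_rng(0)
true_depth = 12.0                                  # meters
t_return = 2 * true_depth / C                      # round-trip time (~80 ns)
signal = rng.normal(t_return, 200e-12, size=500)   # jittered signal photons
background = rng.uniform(0, 200e-9, size=2000)     # ambient photons (high background)
stamps = np.concatenate([signal, background])

# 200 ns range / 100 ps bins = 2000 bins
hist, edges = np.histogram(stamps, bins=2000, range=(0, 200e-9))
peak_t = edges[np.argmax(hist)] + BIN_S / 2
print("estimated depth:", C * peak_t / 2, "m")     # ~12.0
```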

Bio

Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK where he worked on the world’s first single chip video camera.

From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division he developed image sensors for mobile phone applications. He joined University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of STMicroelectronics FlightSense series, which perform an autofocus-assist now present in over 2 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.

Sony 3-layer stacked sensor

Translated from baidu.com: https://baijiahao-baidu-com.translate.goog/s?id=1839758590887948034&wfr=spider&for=pc&_x_tr_sl=zh-CN&_x_tr_tl=de&_x_tr_hl=de&_x_tr_pto=wapp

In-depth: Sony's three-layer CIS changes the global sensor market

Source: AI Core World (Aug 7, 2025)

Sony is developing a three-layer image sensor

Sony Semiconductor Solutions (SSS) showcased a potentially groundbreaking three-layer image sensor design as part of a presentation to investors by the company's Imaging & Sensing Solutions (I&SS) division. The design promises significant performance improvements.

Although Sony has used stacked sensors in several cameras, including its flagship a1 II, these sensors currently have a dual-layer structure. One layer is the photodiode layer responsible for capturing light, which contains all the light-sensitive pixels; the other layer is the transistor layer located below it, which is responsible for image processing tasks. Sony's core long-term goal is to introduce the crucial third layer in the image sensor stack. This essentially means an expansion of processing power and a leap in image quality.

When other conditions are equal, the stronger the processing power at the sensor level, the better the imaging effect will naturally be. Sony explains that increasing processing power at the sensor level will directly translate into improvements in several key performance areas: dynamic range, sensitivity, noise performance, power efficiency, readout speed, and resolution.

While adding sensor layers doesn't directly change the pixel resolution itself, it unlocks entirely new video recording modes by significantly improving the overall speed and performance of the sensor.

Image sensors remain a core pillar of Sony's strategy in diverse areas including mobile devices, automotive, industrial and cameras. Sony expects the camera-related sensor market to continue expanding at a compound annual growth rate of 9% by fiscal 2030, which indicates that Sony will continue to increase its investment in this field.

Next-generation sensor technology will become a driving force for differentiation

Sony is focusing on R&D in multi-layer sensor stack architectures and advanced process nodes to improve sensitivity, dynamic range, power efficiency and readout speed – cutting-edge technologies that will directly power future Alpha and FX camera series. To achieve these goals, Sony plans to invest a total of nearly 930 billion yen in capital expenditures between 2024 and 2026, about half of which will be dedicated to the research and development and production of advanced image sensor processes.

As outlined in its long-term strategy, Sony is going all out and investing in next-generation sensor technologies, including multi-layer stacked image sensors.

Sony's triple-stacked sensor, used in the Xperia 1 V and adopted by other mainstream smartphone models, significantly improves image quality. The architecture also supports multimodal sensing and on-chip artificial intelligence processing, marking a shift in the industry's focus from simply pursuing resolution to intelligent sensing. The breakthrough in 22nm logic stacking technology is committed to achieving ultra-low power consumption and expanded computing power, among which FDSOI technology is expected to be applied in the field of neuromorphic sensing.

---------------------------------------------- 

Also covered by PetaPixel: https://petapixel.com/2025/07/30/sonys-triple-layer-image-sensor-promises-huge-performance-gains/ 

Sony’s Triple-Layer Image Sensor Promises Huge Performance Gains

AIStorm and Tower Semiconductor release AI-in-Imager chip

AIStorm & Tower Semiconductor Introduce Cheetah HS, World’s First Up-to-260K FPS AI-in-Imager Chip for Inspection, Robotics & Sports

Charge-domain imager with on-chip neural network, provides breakthrough slow-motion vision at a fraction of the cost and power consumption of competing high-speed cameras

HOUSTON, Texas, and MIGDAL HAEMEK, Israel - August 12, 2025: AIStorm, the market leader in charge-domain solutions for edge AI, and Tower Semiconductor (NASDAQ/TASE: TSEM), a leading foundry of high-value analog semiconductor solutions, today announced the availability of the Cheetah HS — a high-speed, 120×80-pixel imager with first-layer AI capability that captures up to 260,000 frames per second — 2,000 to 4,000 times faster than conventional CMOS sensors. The Cheetah HS architecture is made possible by Tower’s unique charge-domain imaging platform which is leveraged by AIStorm’s proprietary charge-domain-based analog AI neurons.

By combining ultra-high-speed imaging with charge-domain AI, Cheetah HS slashes system power requirements and bill-of-materials cost for designers of robotics, drones, vibration- and structure-health monitors, high-speed security and surveillance tracking systems, manufacturing and assembly lines, barcode readers, PCB-inspection equipment, biometric unlock systems, vehicle-speed detectors, and even golf-swing analyzers.

“Many consumer and industrial applications require ultra-slow-motion analysis of real-time events to analyze performance or detect anomalies. Such solutions are very costly, and our Cheetah HS solution makes them affordable for a wide range of markets and end applications,” said David Schie, CEO of AIStorm. “Tower is a global leader in charge-domain global-shutter pixels, making them the ideal partner for the development and production of such groundbreaking products.”

“We are very pleased to see the fruits of our long term, close collaboration with AIStorm on this unique breakthrough platform of analog charge-domain embedded AI technology,” said Dr. Avi Strum, SVP and GM of the Sensors and Displays BU at Tower Semiconductor. “Its inherent low-power, low-cost, and high-performance virtues will enable a family of affordable, high-volume products in the near future.”

Key advantages of Cheetah HS:
  •  Adjustable frame rate up to 260,000 frames per second (fps)
  •  Integrated LED driver (programmable up to 40 mA)
  •  Enhanced low-light performance
  •  Integrated charge-domain neuron layer outputting pulse streams for downstream neural-network layers or raw high-speed video
  •  Dramatic cost advantage over competitors
  •  Lowers processing costs by capturing images quickly, leaving more time per frame for processing
  •  Ability to capture extremely high-speed events and analyze them in slow motion

How it works
Traditional high-speed cameras utilize expensive high-speed data converters to capture data, which separates the AI input layer from the pixels, increasing the BOM cost and necessitating high-speed connectors and interface components. Cheetah HS’s charge-domain architecture converts incoming photons to charge, computes the first neural-network layer in analog form, then outputs a pulse train that can be processed by downstream networks. The capture rate is programmable, allowing lower frame rates with faster capture times (reducing the cost of processing) or faster frame rates for accurate measurements or slow-motion analysis.
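
A rough behavioral model of that pipeline (an illustrative sketch under my own assumptions, not AIStorm's actual circuit; all names are made up): pixel charges are combined with fixed analog weights, rectified, and emitted as saturating pulse rates instead of digitized samples.

```python
import numpy as np

def first_layer_pulse_rates(charges: np.ndarray, weights: np.ndarray,
                            max_rate_hz: float = 1e6) -> np.ndarray:
    """Map pixel charges (H, W) through K analog 'neurons' to pulse rates (K,)."""
    activations = weights.reshape(weights.shape[0], -1) @ charges.ravel()
    activations = np.maximum(activations, 0.0)               # rectification in analog
    return max_rate_hz * activations / (1.0 + activations)   # saturating rate coding

# Example: a 120x80 frame (Cheetah HS resolution) through 8 random first-layer neurons
rng = np.random.default_rng(0)
charges = rng.poisson(5.0, size=(80, 120)).astype(float)     # photon shot noise
weights = rng.normal(0, 1e-4, size=(8, 80, 120))
rates = first_layer_pulse_rates(charges, weights)            # pulse train frequencies
```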

Availability
Cheetah HS is available now, both in chip form and as full reference-camera systems [aistorm.ai/cheetah]. 

Go to the original article...

Toshiba announces linear CCD sensor for document scanners

Image Sensors World        Go to the original article...

Toshiba Releases Lens-Reduction Type CCD Linear Image Sensor with Low Random Noise That Helps Improve Image Quality in Devices Such as A3 Multifunction Printers

KAWASAKI, Japan--(BUSINESS WIRE)--Toshiba Electronic Devices & Storage Corporation ("Toshiba") has launched a lens-reduction type CCD linear image sensor, “TCD2728DG”, for A3 multifunction printers; shipments start today. The sensor has 7,500 image sensing elements (pixels) and reduces random noise (NDσ) more effectively than Toshiba’s current TCD2726DG.

Business offices are seeing a growing need for high-speed, high-resolution copying and scanning of large volumes of different kinds of documents. This is particularly true for A3 multifunction printers, where improving image quality has become an important issue; suppressing NDσ in the signal is essential to achieving it.

TCD2728DG has lower output amplifier gain than Toshiba’s current product, TCD2726DG, and reduces NDσ by approximately 40%. This improvement enhances image quality in multifunction printers. The new CCD linear sensor achieves a data rate of 100 MHz (50 MHz × 2 channels), enabling high-speed processing of large volumes of images. This makes it well-suited for line scan cameras used in inspection systems that require real-time decision-making.
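
As a quick sanity check on what that data rate implies, here is a back-of-the-envelope calculation in Python (ours, not Toshiba's; dummy pixels, blanking, and other line overheads are ignored):

PIXELS_PER_LINE = 7500          # sensing elements per line
DATA_RATE_HZ = 100e6            # 50 MHz x 2 output channels

line_time_s = PIXELS_PER_LINE / DATA_RATE_HZ
lines_per_second = 1 / line_time_s

# A3 long edge is 420 mm; at 600 dpi that is 420 / 25.4 * 600 scan lines.
a3_lines = 420 / 25.4 * 600
scan_time_s = a3_lines * line_time_s

print(f"line time: {line_time_s * 1e6:.1f} us ({lines_per_second:,.0f} lines/s)")
print(f"ideal A3 @ 600 dpi scan: {a3_lines:.0f} lines in {scan_time_s:.2f} s")

At the quoted rate each 7,500-pixel line reads out in 75 µs, so an ideal 600 dpi A3 page (about 9,900 lines) takes well under a second of pure readout time.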

Toshiba will continue to expand its product lineup to support scanning by multifunction printers and the sensing applications of inspection devices, and to meet growing demand for high-speed, high-resolution imaging and sensing technologies. 

Applications
 A3 multifunction printers (resolution of 600 dpi)
 7500-pixel line scan camera for various inspection systems (semiconductor inspection equipment, food sorting equipment, etc.)


Features
 Reduces random noise by approximately 40%
 High-speed CCD linear image sensor: data rate = 100 MHz (master clock frequency 50 MHz × 2 ch) (max)
 The built-in timing generator circuit and CCD driver help facilitate system development 

Go to the original article...

Image sensor sampling strategies

Image Sensors World        Go to the original article...

Electronic Sampling for Temporal Imaging: Computational Optical Imaging Episode 66 

This episode considers global and rolling shutter strategies, along with other alternatives, for the sampling of video. The simple simulation presented in the episode is available at https://github.com/arizonaCameraLab/c... and the frame-interpolation research referenced is described at https://jianghz.me/projects/superslomo/. A toy rolling-shutter model is sketched after the chapter list below.


00:00 - Event cameras
00:56 - Visual cortex
01:41 - Image sensors
03:00 - Data plane coding
03:55 - Rolling shutter
05:50 - Rolling shutter simulation
09:09 - Temporal interpolation
09:59 - Random temporal sampling
11:12 - Sample data 
11:52 - Sample packing
12:29 - Rolling shutter compensation
15:15 - Dynamic range
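
As a companion to the episode, here is a toy rolling-shutter model in Python (our own minimal sketch, not the simulation from the episode's repository; the sensor size, readout time, and moving-bar scene are arbitrary). It shows why a rolling shutter skews moving objects: each row samples the scene at a slightly later time.

import numpy as np

H, W = 64, 64                 # toy sensor, dimensions assumed
READOUT_TIME = 1.0            # time to read the full frame (arbitrary units)

def scene(t):
    """A bright vertical bar sweeping horizontally at constant speed."""
    img = np.zeros((H, W))
    x = int((t * 2.0) % 1.0 * W)     # bar position wraps every 0.5 time units
    img[:, x] = 1.0
    return img

def global_shutter(t0):
    # Every row is sampled at the same instant.
    return scene(t0)

def rolling_shutter(t0):
    # Row r is sampled READOUT_TIME * r / H after the frame starts,
    # so a moving object is skewed across the frame.
    frame = np.empty((H, W))
    for r in range(H):
        frame[r] = scene(t0 + READOUT_TIME * r / H)[r]
    return frame

gs, rs = global_shutter(0.0), rolling_shutter(0.0)
print("bar column, top vs bottom row (global): ", np.flatnonzero(gs[0]), np.flatnonzero(gs[-1]))
print("bar column, top vs bottom row (rolling):", np.flatnonzero(rs[0]), np.flatnonzero(rs[-1]))

With a global shutter the bar is in the same column in every row; with a rolling shutter the bottom row sees the bar far to the right of the top row, which is exactly the skew the compensation chapter addresses.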

Go to the original article...

Samsung presentation on deep sub-micron pixel optics and metaoptics trends

Image Sensors World        Go to the original article...

Paper link: https://imagesensors.org/wp-content/uploads/2025/03/Invited-6.pdf

Journey of pixel optics scaling into deep sub-micron and migration to metaoptics era 

Go to the original article...

ESSERC 2025 smart cars workshop

Image Sensors World        Go to the original article...

The Role of Cameras and Photonics for Smart Cars 

Full day workshop on Sep 8th, 2025 @ ESSERC 2025 TU Munich

https://www.esserc2025.org/w8

Organizers
Cedric Tubert (STMicroelectronics, FR)
Daniele Perenzoni (Sony, IT) 

This workshop explores cutting-edge developments in automotive vision systems, highlighting the integration of advanced cameras and photonic technologies. We present significant advancements in Automotive High Dynamic Range Imaging specifically designed for High Temperature conditions, addressing one of the industry's most challenging operational environments. The session showcases innovative wafer-scale micro-optics and meta-surfaces that are revolutionizing both imaging and illumination applications. Attendees will gain insights into the evolution of next-generation CMOS image sensors for smart cars and for Driver and Occupancy Monitoring systems. The workshop also examines hardware accelerators enabling low-latency event-based vision processing, critical for real-time decision-making. Finally, we address the integration challenges in 'Photonics on the Road,' exploring practical hurdles and solutions for implementing these technologies in self-driving vehicles. These innovations collectively demonstrate the essential role of photonics and imaging systems in creating safer, more efficient autonomous transportation.

Program 

09:30 - 10:15
Automotive High Dynamic Range Imaging in High Temperature Conditions
Tomas Geurts (Omnivision, BE)
The talk will cover High Dynamic Range (HDR) requirements in ADAS and In-Cabin automotive imaging applications. The importance and relevance of performance at high temperature will be explained. The talk will highlight fundamental limitations of low-light and HDR performance at elevated temperatures, an important aspect of automotive imaging that is often underexplored in publications.

10:15 - 11:00
Past and Future of CMOS Image Sensors in Automotive Industry
Yorito Sakano (Sony Semiconductor Solutions, JP)
Business motivation is essential for the evolution of semiconductor devices. The larger the market, the faster the technology evolves. The first iPhone was born in 2007, and the back-illuminated image sensor, an epoch-making event for CMOS image sensors, was introduced in 2009. With technical breakthroughs and business motivations coming together almost simultaneously, CMOS image sensors have undergone a dramatic technological evolution over the past decade or so. Similarly, automotive CMOS image sensors have recently undergone a unique evolution along the competitive axis of high dynamic range (HDR), supported by business motivations such as the evolution of Advanced Driver-Assistance Systems (ADAS) and efforts toward the practical application of Autonomous Driving (AD). This talk will overview the recent evolution of automotive CMOS image sensors and discuss the direction of their future evolution.

11:00 - 11:30
Coffee break

11:30 - 12:15
Wafer Scale Micro-optics and Meta-surfaces for Applications in Imaging and Illumination
Falk Eilenberger (Fraunhofer, DE)
Micro- and nanooptical systems are game-changers in our ability to manipulate light. Nanooptical systems, frequently called meta-surfaces, give access to all degrees of freedom of the optical field, such as its spectral properties, polarization, and phase, in addition to the intensity that is classically addressed in imaging systems. Nano- and microoptical systems allow massive parallelization in optical systems, breaking virtually every commonly known design rule for both imaging and illumination systems. Harnessing these degrees of freedom is, however, a grand challenge in terms of design, engineering, and cost scaling. In this talk I shall highlight how wafer-scale fabrication techniques can overcome these issues if the entire process chain, from design to final application, is tailored to the specific requirements of the optical task at hand. I shall do so by highlighting a variety of applications and projects in which wafer-scale nanooptics have played a crucial role, from optics for satellite missions all the way to illumination systems for mobility solutions.

12:15 - 13:00
CMOS Image Sensors for Driver and Occupancy Monitoring Solutions
Jerome Chossat and Pierre Malinge (STMicroelectronics, FR)
Automotive applications require high-performance and cost-effective sensors. Considering these constraints, we present a novel pixel architecture capable of both rolling and global shutter imaging. Utilizing a non-Bayer CFA pattern, it captures both RGB and near-infrared images. A specific ASIL pixel design ensures a comprehensive integrity check of the sensor. The latter is connected to a logic circuit through a 3D Cu-to-Cu hybrid bonding process, providing state-of-the-art on-chip data processing and interfacing. Such a sensor is ideally suited for driver monitoring systems while enabling the integration of advanced multimedia features. Indeed, on top of the pixel and readout quality requirements, CMOS image sensors for Driver and Occupancy Monitoring solutions bring many challenges on the digital side too. They may contain quite complex signal processing to properly deal with various non-Bayer CFAs and manage IR content; they must integrate automotive safety capabilities, be efficiently protected against malicious attackers aiming to tamper with their functionality, and prevent the use of counterfeit components. In addition, all this must be done under aggressive cost and stringent power constraints, while being developed in conformance with road-vehicle functional safety (ISO 26262) and road-vehicle cybersecurity engineering (ISO 21434).
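
As an aside on the abstract above: a non-Bayer RGB-IR CFA means the processing chain must first separate color and near-infrared samples before any demosaicing. Here is a minimal Python sketch of that first step; the 4×4 unit cell below is a generic RGB-IR example, not necessarily ST's pattern, and the interpolation that would follow is omitted.

import numpy as np

# A common 4x4 RGB-IR unit cell (generic example; the actual ST pattern is not disclosed here).
PATTERN = np.array([
    ["B", "G", "R", "G"],
    ["G", "I", "G", "I"],
    ["R", "G", "B", "G"],
    ["G", "I", "G", "I"],
])

def split_planes(raw):
    """Split a raw RGB-IR mosaic into sparse per-channel planes (NaN where unsampled)."""
    h, w = raw.shape
    tiled = np.tile(PATTERN, (h // 4 + 1, w // 4 + 1))[:h, :w]
    planes = {}
    for ch in "RGBI":
        plane = np.full((h, w), np.nan)
        mask = tiled == ch
        plane[mask] = raw[mask]
        planes[ch] = plane
    return planes

raw = np.random.default_rng(1).integers(0, 1024, (8, 8)).astype(float)  # fake 10-bit mosaic
planes = split_planes(raw)
for ch, p in planes.items():
    print(ch, "samples:", np.count_nonzero(~np.isnan(p)))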

13:00 - 14:00
Lunch

14:00 - 14:45
Hardware Accelerators for Low-latency Event-based Vision
Charlotte Frenkel (TU Delft, NL)
From optical flow to high-speed particle counting, event-based cameras emerge as an enabler for low-latency vision applications. They capture temporal contrast changes as a stream of events, which are generated on a per-pixel basis and at a temporal resolution of a few microseconds. However, there is currently a lack of hardware support for event-based processing workloads that generate updated predictions within microseconds. This talk will cover emerging developments in this area, from dynamic graph neural networks to digital in-memory computing for spiking neural networks.
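
For readers new to the modality, the per-pixel event generation that such accelerators consume can be sketched with the standard contrast-threshold model, shown below in Python (a textbook abstraction driven by synthetic frames, not any specific sensor; the threshold value is assumed).

import numpy as np

THETA = 0.2                       # log-intensity contrast threshold (assumed)

def events_from_frames(frames, t_us):
    """Emit (t, x, y, polarity) events when a pixel's log intensity changes by THETA."""
    ref = np.log(frames[0] + 1e-6)          # per-pixel reference level
    events = []
    for frame, t in zip(frames[1:], t_us[1:]):
        logI = np.log(frame + 1e-6)
        while True:
            pos = logI - ref >= THETA        # brightness rose past the threshold
            neg = ref - logI >= THETA        # brightness fell past the threshold
            if not (pos.any() or neg.any()):
                break
            for polarity, mask in ((+1, pos), (-1, neg)):
                ys, xs = np.nonzero(mask)
                events += [(t, x, y, polarity) for x, y in zip(xs, ys)]
                ref[mask] += polarity * THETA   # step the reference by one threshold
    return events

frames = np.stack([np.full((4, 4), v) for v in (1.0, 1.5, 1.5, 0.8)])
print(len(events_from_frames(frames, t_us=[0, 100, 200, 300])), "events")

Note that the static third frame produces no events at all; that data sparsity is what makes microsecond-latency downstream processing plausible in the first place.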
 
14:45 - 15:30
Photonics on the Road: Navigating the Integration Hurdles in Self-Driving Cars
Christoph Parl (Valeo, DE)
Valeo is at the forefront of the autonomous driving revolution, providing a comprehensive suite of sensors - cameras, RADARs, ultrasonics, microphones, and LiDARs - that enable self-driving capabilities. This keynote will explore how Valeo's technology is driving the transition from manual to fully autonomous vehicles. A key focus will be on vehicle integration: the art of seamlessly embedding these sensors into the vehicle's design. This requires balancing function-driven design, ensuring optimal sensor performance, with emotion-driven design, creating desirable and engaging vehicles. The presentation will highlight the diverse sensors required for autonomy, with a focus on LiDARs due to their complexity. Crucially, we will examine the challenges and solutions surrounding sensor mounting positions. Optimal placement is vital, considering each sensor's needs, environmental factors, and cleaning requirements. Finally, we'll explore how solid-state technology can ease vehicle integration, enabling more compact and robust solutions for a large-scale rollout of self-driving functions. 

15:30 - 16:00 
Coffee break

16:00 - 16:45
Final discussion and closing

Go to the original article...

NovoViz announces a SPAD-based event camera

Image Sensors World        Go to the original article...


The NovoViz NV04ASC-HW asynchronous photon-driven camera was developed for applications requiring high sensitivity and/or high frame rates, but with reduced output bandwidth.

The camera combines the benefits of a single-photon avalanche diode (SPAD) camera, namely the single-photon resolution and fast operating speeds, with the benefits of an event camera – low output data rates.

64 x 48 SPAD pixels
100M fps
10ns resolution
Event-driven output
USB 3.0 
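
NovoViz does not disclose its output format in this announcement, so purely as an illustration of what an event-driven SPAD stream with 10 ns timestamps might look like on the host side, here is a Python decoder for a hypothetical 64-bit event word (the field layout is entirely our assumption):

import struct

TICK_NS = 10  # 10 ns timestamp resolution from the spec list above

def decode_event(word: int):
    """Decode one hypothetical 64-bit event word: [49b tick counter | 6b x | 6b y | 3b flags]."""
    flags = word & 0x7
    y = (word >> 3) & 0x3F          # 6 bits cover the 48 rows
    x = (word >> 9) & 0x3F          # 6 bits cover the 64 columns
    ticks = word >> 15              # remaining 49 bits: tick counter
    return ticks * TICK_NS, x, y, flags

# Pack and decode a fake event: a photon at pixel (12, 34) after 2.5 us.
word = (250 << 15) | (12 << 9) | (34 << 3) | 0
buf = struct.pack("<Q", word)                      # as it might arrive over USB
t_ns, x, y, flags = decode_event(struct.unpack("<Q", buf)[0])
print(f"t = {t_ns} ns, pixel = ({x}, {y}), flags = {flags}")

The appeal of such a format is that only pixels that actually detect photons generate traffic, which is how a single-photon sensor running at effective 100M fps can fit through a USB 3.0 link.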

Company profile: https://exhibitors.world-of-photonics.com/exhibitor-portal/2025/list-of-exhibitors/exhibitordetails/novoviz/?elb=178.1100.5785.1.111 

More news coverage:

https://www.tokyoupdates.metro.tokyo.lg.jp/en/post-1551/

https://www.startupticker.ch/en/news/novoviz-wins-chf-150-000-to-advance-computational-imaging

Go to the original article...

RealSense spinoff from Intel

Image Sensors World        Go to the original article...

Link: https://realsenseai.com/news-insights/news/realsense-completes-spin-out-from-intel-raises-50-million-to-accelerate-ai-powered-vision-for-robotics-and-biometrics/

RealSense Completes Spinout from Intel, Raises $50 Million to Accelerate AI-Powered Vision for Robotics and Biometrics

The newly independent company is set to lead in computer vision and machine perception for physical AI and beyond

SAN FRANCISCO — July 11, 2025 — RealSense, a pioneer in AI-powered computer vision, today announced its successful spinout from Intel Corporation and the close of a $50 million Series A funding round. With investment led by a renowned semiconductor private equity firm and participation from strategic investors, including Intel Capital and MediaTek Innovation Fund, RealSense now operates as an independent company focused on advancing innovation in AI, robotics, biometrics and computer vision. 

The new capital will fuel RealSense’s expansion into adjacent and emerging markets and scale its manufacturing, sales and go-to-market (GTM) global presence to meet increased demand for humanoid and autonomous mobile robotics (AMRs), as well as AI-powered access control and security solutions.

“We’re excited to build on our leadership position in 3D perception in robotics and see scalable growth potential in the rise of physical AI,” said Nadav Orbach, CEO of RealSense. “Our independence allows us to move faster and innovate more boldly to adapt to rapidly changing market dynamics as we lead the charge in AI innovation and the coming robotics renaissance.”

RealSense brings to market proven industry traction across robotics, industrial automation, security, healthcare and “tech for good” initiatives — including partnerships with companies like ANYbotics, Eyesynth, Fit:Match and Unitree Robotics. 

RealSense will continue to support its existing customer base and product roadmap, including the acclaimed RealSense depth cameras, embedded in 60% of the world’s AMRs and humanoid robots, an incredibly fast-growing segment. Its recently launched D555 depth camera, powered by the next-gen RealSense Vision SoC V5 and featuring Power over Ethernet (PoE), demonstrates the company’s ongoing leadership in embedded vision technology and edge AI capabilities. 

“Our mission is to enable the world to integrate robotics and AI in everyday life safely,” said Orbach. “This technology is not about replacing human creativity or decision-making — but about removing danger and drudgery from human work. Our systems are built to amplify human potential by offloading these types of tasks to machines equipped with intelligent, secure and reliable vision systems.”

RealSense has developed robust, global manufacturing technology capabilities to ensure consistent quality and product performance, working with a broad network of vision system distributors and value-added resellers. The company has over 3,000 customers worldwide, with over 80 global patents.

Seasoned leadership for a critical market moment

RealSense’s founding team brings together veteran technologists and business leaders with deep expertise in computer vision, AI, robotics and market development. The team includes:

Nadav Orbach – Chief Executive Officer
Mark Yahiro – Vice President, Business Development
Mike Nielsen – Vice President, Marketing
Fred Angelopoulos – Vice President, Sales
Guy Halperin – Vice President, Head of R&D
Eyal Rond – Vice President, AI and Computer Vision
Joel Hagberg – Vice President, Product 
Ilan Ofek – Vice President, New Product Introduction and Manufacturing
Chris Matthieu – Chief Developer Evangelist

The spinout comes at a moment of rapid global growth in robotics and biometrics. The robotics market is projected to quadruple — from $50 billion today to over $200 billion within six years — while demand for humanoid robots is expected to grow at a CAGR above 40%. At the same time, facial biometrics are becoming increasingly accepted in everyday applications, from airport screening to event entry.

To meet global demand, RealSense plans to expand its GTM team and hire additional AI, software and robotics engineers to accelerate product development.

Go to the original article...

Zeiss acquires SPAD startup PiImaging

Image Sensors World        Go to the original article...

Link: https://www.zeiss.com/microscopy/en/about-us/newsroom/press-releases/2025/zeiss-acquires-all-equity-shares-of-pi-imaging-technology-sa.html

Unlocking SPAD technology for advanced imaging applications in microscopy and beyond

ZEISS acquires all equity shares of Pi Imaging Technology SA

Jena, Germany | 21 July 2025 | ZEISS Research Microscopy Solutions

In early July, Carl Zeiss Microscopy GmbH acquired all equity shares of Pi Imaging Technology SA, based in Lausanne, Switzerland. Pi Imaging Technology SA now operates as "Pi Imaging Technology SA – a ZEISS company". The Lausanne location will be retained, together with all its employees.

Pi Imaging Technology SA has been a trusted partner of ZEISS Research Microscopy Solutions for many years. To continue and deepen this successful long-term collaboration, ZEISS has now purchased all equity shares of Pi Imaging Technology.

The Swiss-based sensing provider focuses on the development of single-photon avalanche diode (SPAD) arrays and image sensors, engineered using cutting-edge semiconductor technology. SPAD is a type of photo detector that can detect very weak light signals, even down to the level of individual photons. SPADs are commonly used in a variety of applications in everyday life, industry and various research fields.

"The goal of the acquisition is to combine the innovative SPAD technology with ZEISS microscopy solutions and jointly further develop them, thereby expanding our market-leading position. With the acquisition of Pi Imaging Technology SA, we are investing in a technology that secures our future core business and enables further growth", says Dr. Michael Albiez, Head of ZEISS Research Microscopy Solutions.

SPADs in microscopy and beyond

SPAD detectors from Pi Imaging Technology SA will complement the current and future sensor technologies used in ZEISS high-end microscopes. The combination of Pi Imaging Technology SA's technology and ZEISS microscopy solutions will enable innovative solutions for researchers in the field of high-end fluorescence microscopy. The integration of SPAD technology into ZEISS microscopes improves both the quality and throughput of microscopic imaging in the life sciences, opening up new technological possibilities and applications. Since SPAD detectors offer exceptional sensitivity in low-light conditions, they allow researchers, for example, to study molecular environments and interactions with remarkable clarity.

"We achieved pioneering milestones by being the first company to integrate a SPAD array into a commercial microscope in 2020 and subsequently introducing the first SPAD camera to the market in 2021", says Michel Antolovic, Managing Director and co-founder of Pi Imaging Technology. "I am very pleased that after many years of trusting collaboration with ZEISS, we are now taking the next step and integrating our entire business into the ZEISS Group. We will merge our innovation capabilities and together shape the field of light detection."

Following the acquisition, ZEISS customers can expect advanced imaging applications with the next generation of detectors.

ZEISS and Pi Imaging Technology SA are also active in other fields, including spectroscopy, scientific imaging, low-light imaging, and high-speed imaging. Their objective is to also collaborate on advancing these fields.

Go to the original article...

Princeton Infrared Technologies closing business

Image Sensors World        Go to the original article...

From Princeton Infrared Technologies: https://www.princetonirtech.com/

Today marks a bittersweet milestone as we officially close the doors of Princeton Infrared Technologies.

It’s a moment of mixed emotions. Pride in what we’ve accomplished and gratitude for the people who made it possible. Over the past 13 years, we built cutting-edge products in the shortwave infrared and fueled innovation in unique applications.

To our incredible and inspiring employees: thank you! Your passion, resilience and brilliance made the impossible possible. You brought our vision to life and made PIRT what it was and how it will always be remembered.

To our customers, research collaborators, partners, and investors: your trust fueled our work and allowed us to push the boundaries of what’s possible in SWIR imaging. Together, we achieved breakthroughs, made discoveries, and moved the industry forward in ways that should bring us pride.

While it’s hard to see this chapter end, I’m deeply grateful for the journey we’ve taken together. I only wish we had more time to continue the work we’ve shared. This will be our final message as a company. Thank you for being such an important part of our story.

Here’s to new beginnings.

If there are any questions or you need any help please contact:
Brian W. Hofmeister, Esq.
(P) (609) 890-1500
bwh@hofmeisterfirm.com

Go to the original article...
