Harvest Imaging 2025 Forum – Dec 8, 9 – Single-Photon Detection

Image Sensors World        Go to the original article...

Registration page: https://harvestimaging.com/forum_introduction_2025_coming.php

The Harvest Imaging forum will continue with its next edition, scheduled for December 8 & 9, 2025, in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to the digital imaging community.

The 2025 Harvest Imaging forum will deal with a single topic and will have only one world-level expert as the speaker: 

"SINGLE PHOTON DETECTION"
Prof. dr. Robert HENDERSON (Univ. of Edinburgh, UK)

Abstract:

Access to the ultrafast quantum statistics of light enabled by new solid-state single photon imaging technologies is revolutionizing camera technology. 

The noise-free detection and precise localization of individual photons enables imaging of time itself (which directly enables depth perception) at unprecedented temporal and spatial resolutions. Such solid-state single photon imaging technologies now approach the sensitivity, timing resolution and dark noise of vacuum photocathode approaches whilst providing robustness, low cost and high spatial resolution. Amongst these, CMOS Single-Photon Avalanche Diode (SPAD) arrays offer the unique capability to extract single photon statistics in high background conditions using massively parallel on-chip timing and digital computation.

This forum will highlight the modelling, device structures, characterisation methods and circuitry necessary to develop this new generation of SPAD imaging systems. Recent advances in SPAD direct time of flight (dToF) and photon counting sensor design techniques optimized for low power, computation, and area will be reviewed. 
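
As a minimal illustration of the dToF principle mentioned above (this is my own sketch, not forum material; the bin width, photon counts and target depth are assumed values), photon timestamps accumulated over many laser pulses can be histogrammed and the peak taken as the return time, even in the presence of a strong uniform background:

```python
import numpy as np

# Toy SPAD dToF example: histogram photon arrival times over many pulses,
# then pick the histogram peak to estimate depth despite background counts.
C = 3e8                      # speed of light, m/s
BIN_W = 100e-12              # 100 ps timing bins (assumed)
N_BINS = 1000                # 100 ns range window (~15 m unambiguous range)
true_depth = 7.5             # metres (assumed)
t_return = 2 * true_depth / C

rng = np.random.default_rng(0)
signal_t = rng.normal(t_return, 200e-12, size=2_000)          # laser return with timing jitter
background_t = rng.uniform(0, N_BINS * BIN_W, size=50_000)    # uniform ambient background
timestamps = np.concatenate([signal_t, background_t])

hist, edges = np.histogram(timestamps, bins=N_BINS, range=(0, N_BINS * BIN_W))
peak_bin = np.argmax(hist)               # naive peak pick; real sensors filter on-chip
est_depth = C * (edges[peak_bin] + BIN_W / 2) / 2
print(f"estimated depth = {est_depth:.2f} m")
```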

The forum will focus primarily on the mainstream commercial applications of SPADs in low light imaging, depth imaging (RGB-Z) and LIDAR. Further examples will be drawn from emerging use cases in fluorescence microscopy, Raman spectroscopy, non-line-of-sight imaging, quantum optics and medical diagnostics (X-ray, PET). Future trends and prospects enabled by 3D-stacking technology will be considered.

Bio

Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK where he worked on the world’s first single chip video camera.

From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division he developed image sensors for mobile phone applications. He joined University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of STMicroelectronics FlightSense series, which perform an autofocus-assist now present in over 2 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.

Go to the original article...

Sony 3-layer stacked sensor

Image Sensors World        Go to the original article...

Translated from baidu.com: https://baijiahao-baidu-com.translate.goog/s?id=1839758590887948034&wfr=spider&for=pc&_x_tr_sl=zh-CN&_x_tr_tl=de&_x_tr_hl=de&_x_tr_pto=wapp

In-depth: Sony's three-layer CIS changes the global sensor market

Source: AI Core World (Aug 7, 2025)

Sony is developing a three-layer image sensor

Sony Semiconductor Solutions (SSS) showcased a potentially groundbreaking three-layer image sensor design as part of a presentation to investors by its Imaging & Sensing Solutions (I&SS) division. The design promises significant performance improvements.

Although Sony has used stacked sensors in several cameras, including its flagship a1 II, these sensors currently have a dual-layer structure. One layer is the photodiode layer responsible for capturing light, which contains all the light-sensitive pixels; the other layer is the transistor layer located below it, which is responsible for image processing tasks. Sony's core long-term goal is to introduce the crucial third layer in the image sensor stack. This essentially means an expansion of processing power and a leap in image quality.

All else being equal, more processing power at the sensor level naturally yields better imaging results. Sony explains that increasing processing power at the sensor level will directly translate into improvements in several key performance areas: dynamic range, sensitivity, noise performance, power efficiency, readout speed, and resolution.

While adding sensor layers doesn't directly change the pixel resolution itself, it unlocks entirely new video recording modes by significantly improving the overall speed and performance of the sensor.
Image sensors remain a core pillar of Sony's strategy in diverse areas including mobile devices, automotive, industrial applications and cameras. Sony expects the camera-related sensor market to continue expanding at a compound annual growth rate of 9% through fiscal 2030, which indicates that Sony will continue to increase its investment in this field. 

Next-generation sensor technology will become a driving force for differentiation

Sony is focusing on R&D in multi-layer sensor stack architectures and advanced process nodes to improve sensitivity, dynamic range, power efficiency and readout speed – cutting-edge technologies that will directly power future Alpha and FX camera series. To achieve these goals, Sony plans to invest a total of nearly 930 billion yen in capital expenditures between 2024 and 2026, about half of which will be dedicated to the research and development and production of advanced image sensor processes.
As outlined in its long-term strategy, Sony is investing heavily in next-generation sensor technologies, including multi-layer stacked image sensors.

Sony's triple-stacked sensor, used in the Xperia 1 V and adopted by other mainstream smartphone models, significantly improves image quality. The architecture also supports multimodal sensing and on-chip artificial intelligence processing, marking a shift in the industry's focus from simply pursuing resolution to intelligent sensing. The breakthrough in 22nm logic stacking technology aims to achieve ultra-low power consumption and expanded computing power, with FDSOI technology expected to be applied in the field of neuromorphic sensing.

---------------------------------------------- 

Also covered by PetaPixel: https://petapixel.com/2025/07/30/sonys-triple-layer-image-sensor-promises-huge-performance-gains/ 

 Sony’s Triple-Layer Image Sensor Promises Huge Performance Gains

 




Go to the original article...

Nikon Z 24-70mm f2.8 S II review

Cameralabs        Go to the original article...

Nikon has totally reworked their pro Z 24-70mm f2.8 S with improved features for photographers and videographers. How does it compare to its predecessor and Tamron? Find out in my review.…

Go to the original article...

AIStorm and Tower Semiconductor release AI-in-Imager chip

Image Sensors World        Go to the original article...

AIStorm & Tower Semiconductor Introduce Cheetah HS, World’s First Up-to-260K FPS AI-in-Imager Chip for Inspection, Robotics & Sports

Charge-domain imager with on-chip neural network, provides breakthrough slow-motion vision at a fraction of the cost and power consumption of competing high-speed cameras

HOUSTON, Texas, and MIGDAL HAEMEK, Israel - August 12, 2025: AIStorm, the market leader in charge-domain solutions for edge AI, and Tower Semiconductor (NASDAQ/TASE: TSEM), a leading foundry of high-value analog semiconductor solutions, today announced the availability of the Cheetah HS — a high-speed, 120×80-pixel imager with first-layer AI capability that captures up to 260,000 frames per second — 2,000 to 4,000 times faster than conventional CMOS sensors. The Cheetah HS architecture is made possible by Tower’s unique charge-domain imaging platform which is leveraged by AIStorm’s proprietary charge-domain-based analog AI neurons.

By combining ultra-high-speed imaging with charge-domain AI, Cheetah HS slashes system power requirements and bill-of-materials cost for designers of robotics, drones, vibration- and structure-health monitors, high-speed security and surveillance tracking systems, manufacturing and assembly lines, barcode readers, PCB-inspection equipment, biometric unlock systems, vehicle-speed detectors, and even golf-swing analyzers.

“Many consumer and industrial applications require ultra-slow-motion analysis of real-time events to analyze performance or detect anomalies. Such solutions are very costly, and our Cheetah HS solution makes them affordable for a wide range of markets and end applications,” said David Schie, CEO of AIStorm. “Tower is a global leader in charge-domain global-shutter pixels, making them the ideal partner for the development and production of such groundbreaking products.”
“We are very pleased to see the fruits of our long term, close collaboration with AIStorm on this unique breakthrough platform of analog charge-domain embedded AI technology,” said Dr. Avi Strum, SVP and GM of the Sensors and Displays BU at Tower Semiconductor. “Its inherent low-power, low-cost, and high-performance virtues will enable a family of affordable, high-volume products in the near future.”

Key advantages of Cheetah HS
 Adjustable frame rate up to 260,000 frames per second (fps)
 Integrated LED driver (programmable up to 40 mA)
 Enhanced low light performance
 Integrated charge-domain neuron layer outputting pulse streams for downstream neural-network layers or raw high-speed video
 Dramatic cost advantage over competitors
 Lowers processing costs by capturing images quickly, leaving more time per frame for processing
 Ability to capture extremely high-speed events and analyze them in slow motion

How it works
Traditional high-speed cameras utilize expensive high-speed data converters to capture data, which separates the AI input layer from the pixels, increasing the BOM cost and necessitating high-speed connectors and interface components. Cheetah HS’s charge-domain architecture converts incoming photons to charge, computes the first neural-network layer in analog form, then outputs a pulse train that can be processed by downstream networks. The capture rate is programmable, allowing lower frame rates with faster capture times (reducing the cost of processing) or faster frame rates for accurate measurements or slow-motion analysis.
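
As a rough mental model of the pipeline described above (my own toy sketch, not AIStorm's design; the neuron count, weights and pulse encoding are assumptions), the first layer can be pictured as weighted sums of accumulated pixel charge that are emitted as pulse counts rather than as digitized pixel values:

```python
import numpy as np

# Toy "first layer in the charge domain": weighted sums of pixel charge per neuron,
# reported as pulse counts instead of a full digitized frame.
rng = np.random.default_rng(1)
H, W, N_NEURONS = 80, 120, 16                 # 120x80 pixels per the press release; 16 neurons assumed
frame_charge = rng.random((H, W))             # accumulated photo-charge for one capture (arbitrary units)
weights = rng.standard_normal((N_NEURONS, H * W)) * 0.01

activations = np.maximum(weights @ frame_charge.ravel(), 0.0)       # ReLU-like thresholding
peak = activations.max() if activations.max() > 0 else 1.0
pulses_per_frame = np.round(activations / peak * 255).astype(int)   # toy pulse-train encoding
print(pulses_per_frame)
```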

Availability
Cheetah HS is available now in both chip form as well as full reference-camera systems [aistorm.ai/cheetah]. 

Go to the original article...

A New Image Sensor Company in China is Recruiting

Image Sensors World        Go to the original article...

Shanghai Primevision Technology Co., LTD

Recruiting for 11 positions (Link to list) including:
CIS image sensor test - Link
CIS image sensor pixel design - Link
CIS (Image Sensor) Sales - Link

Most positions in Shanghai, China
曹蕾 Cao Lei, Human Resources Director, can provide additional details - lei.cao@primevision.ai

Go to the original article...

Toshiba announces linear CCD sensor for document scanners

Image Sensors World        Go to the original article...

Toshiba Releases Lens-Reduction Type CCD Linear Image Sensor with Low Random Noise That Helps Improve Image Quality in Devices Such as A3 Multifunction Printers

KAWASAKI, Japan--(BUSINESS WIRE)--Toshiba Electronic Devices & Storage Corporation ("Toshiba") has launched the “TCD2728DG”, a lens-reduction type CCD linear image sensor for A3 multifunction printers. Shipments start today. The sensor has 7,500 image sensing elements (pixels) and is more effective at reducing random noise (NDσ) than Toshiba’s current TCD2726DG.

Business offices are seeing a growing need for high-speed, high-resolution copying and scanning of large volumes of different kinds of documents. This is particularly true for A3 multifunction printers, where improving image quality has become an important issue, and NDσ in the signal has to be suppressed to enhance image quality.

TCD2728DG has lower output amplifier gain than Toshiba’s current product, TCD2726DG, and reduces NDσ by approximately 40%. This improvement enhances image quality in multifunction printers. The new CCD linear sensor achieves a data rate of 100 MHz (50 MHz × 2 channels), enabling high-speed processing of large volumes of images. This makes it well-suited for line scan cameras used in inspection systems that require real-time decision-making.
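
A back-of-envelope check of what the quoted data rate implies (my own arithmetic, assuming all 7,500 pixels are read through the two 50 MHz outputs with no transfer or blanking overhead):

```python
# Line rate implied by the headline figures, ignoring any per-line overhead (optimistic).
pixels_per_line = 7500
data_rate_hz = 100e6                      # 50 MHz x 2 channels
line_time_s = pixels_per_line / data_rate_hz
print(f"line readout time = {line_time_s * 1e6:.0f} us, i.e. about {1 / line_time_s:.0f} lines/s")
# About 75 us per line, or roughly 13,000 lines per second before overhead.
```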

Toshiba will continue to expand its product lineup to support scanning by multifunction printers and the sensing applications of inspection devices, and to meet growing demand for high-speed, high-resolution imaging and sensing technologies. 

Applications
 A3 multifunction printers (resolution of 600 dpi)
 7500-pixel line scan camera for various inspection systems (semiconductor inspection equipment, food sorting equipment, etc.)


Features
 Reduces random noise by approximately 40%
 High-speed CCD linear image sensor: data rate = 100 MHz (max) (master clock frequency 50 MHz × 2 ch)
 The built-in timing generator circuit and CCD driver help facilitate system development 

 

 

Go to the original article...

Omnivision Needs an Applications Engineer

Image Sensors World        Go to the original article...

 Omnivision

Senior Applications Engineer - Santa Clara, California, USA - Link

Go to the original article...

Image sensor sampling strategies

Image Sensors World        Go to the original article...

 

 

Electronic Sampling for Temporal Imaging: Computational Optical Imaging Episode 66 

This episode considers global and rolling shutter strategies and other alternatives for sampling of video. The very simple simulation presented is available at https://github.com/arizonaCameraLab/c... The frame interpolation research referenced is described at https://jianghz.me/projects/superslomo/
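
A minimal, self-contained sketch of the rolling-shutter sampling discussed in the episode (this is not the code from the linked repository; the scene size, readout timing and object speed are assumptions of mine) shows how sampling each row at a progressively later time skews a moving object:

```python
import numpy as np

# Rolling shutter toy: each row is sampled from the scene at a slightly later time,
# so a vertical bar moving horizontally is rendered with a slant.
H, W = 64, 64
row_readout_time = 1.0            # arbitrary time units per row
object_speed = 0.5                # columns per time unit

def scene(t):
    """Scene at time t: a bright vertical bar whose left edge moves right at object_speed."""
    img = np.zeros((H, W))
    x0 = int(10 + object_speed * t)
    img[:, x0:x0 + 8] = 1.0
    return img

global_shutter = scene(0.0)                                   # all rows sampled at the same instant
rolling_shutter = np.stack([scene(r * row_readout_time)[r] for r in range(H)])

print("global shutter edge (any row):", np.argmax(global_shutter[0]))
print("rolling shutter edge, top row:", np.argmax(rolling_shutter[0]),
      " bottom row:", np.argmax(rolling_shutter[-1]))
```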


00:00 - Event cameras
00:56 - Visual cortex
01:41 - Image sensors
03:00 - Data plane coding
03:55 - Rolling shutter
05:50 - Rolling shutter simulation
09:09 - Temporal interpolation
09:59 - Random temporal sampling
11:12 - Sample data 
11:52 - Sample packing
12:29 - Rolling shutter compensation
15:15 - Dynamic range

Go to the original article...

Samsung presentation on pixel deep sub-micron and metaoptics trends

Image Sensors World        Go to the original article...

Paper link: https://imagesensors.org/wp-content/uploads/2025/03/Invited-6.pdf

Journey of pixel optics scaling into deep sub-micron and migration to metaoptics era 


 


Go to the original article...

ESSERC 2025 smart cars workshop

Image Sensors World        Go to the original article...

The Role of Cameras and Photonics for Smart Cars 

Full day workshop on Sep 8th, 2025 @ ESSERC 2025 TU Munich

https://www.esserc2025.org/w8

Organizers
Cedric Tubert (STMicroelectronics, FR)
Daniele Perenzoni (Sony, IT) 

This workshop explores cutting-edge developments in automotive vision systems, highlighting the integration of advanced cameras and photonic technologies. We present significant advancements in Automotive High Dynamic Range Imaging specifically designed for High Temperature conditions, addressing one of the industry's most challenging operational environments. The session showcases innovative wafer-scale micro-optics and meta-surfaces that are revolutionizing both imaging and illumination applications. Attendees will gain insights into the evolution of next generation CMOS image sensors for smart cars and for Driver and Occupancy Monitoring systems. The workshop also examines hardware accelerators enabling low-latency event-based vision processing, critical for real-time decision-making. Finally, we address the integration challenges in 'Photonics on the Road,' exploring practical hurdles and solutions for implementing these technologies in self-driving vehicles. These innovations collectively demonstrate the essential role of photonics and imaging systems in creating safer, more efficient autonomous transportation.

Program 

09:30 - 10:15
Automotive High Dynamic Range Imaging in High Temperature Conditions
Tomas Geurts (Omnivision, BE)
​The talk will cover High Dynamic Range (HDR) requirements in ADAS and In-Cabin automotive imaging applications. The importance and relevance of performance at high temperature will be explained. The talk will highlight fundamental limitations of low-light and HDR performance at elevated temperatures which is an important aspect in automotive imaging but often under-illuminated in publications.​

10:15 - 11:00
Past and Future of CMOS Image Sensors in Automotive Industry
Yorito Sakano (Sony Semiconductor Solutions, JP)
Business motivation is essential for the evolution of semiconductor devices. The larger the market, the faster the technology evolves. The first iPhone was born in 2007, and the back-illuminated image sensor, an epoch-making event for CMOS image sensors, was introduced in 2009. With technical breakthroughs and business motivations coming together almost simultaneously, CMOS image sensors have undergone a dramatic technological evolution over the past decade or so. Similarly, automotive CMOS image sensors have recently undergone a unique evolution in the competitive axis of high dynamic range (HDR), supported by business motivation such as the evolution of Advanced Driver-Assistance Systems (ADAS) and the efforts toward the practical application of Autonomous Driving (AD). This talk will give an overview of the recent evolution of automotive CMOS image sensors and discuss the direction of future evolution.
​​​
11:00 - 11:30
Coffee break

11:30 - 12:15
Wafer Scale Micro-optics and Meta-surfaces for Applications in Imaging and Illumination
Falk Eilenberger (Fraunhofer, DE)
Micro- and nanooptical systems are game-changers in our ability to manipulate light. Nanooptical systems, frequently called meta-surfaces, give access to all degrees of freedom of the optical field, such as its spectral properties, its polarization, and its phase, in addition to its intensity, which is what imaging systems classically address. Nano- and microoptical systems allow massive parallelization to be introduced in optical systems, breaking virtually any commonly known design rule both for imaging and for illumination systems. Harnessing these degrees of freedom is, however, a grand challenge in terms of design, engineering, and cost scaling. In the talk I shall highlight how wafer scale fabrication techniques can be utilized to overcome these issues, if the entire process chain from design to the final application can be tailored to the specific requirements of the optical task at hand. I shall do so by highlighting a variety of applications and projects, in which wafer scale nanooptics have played a crucial role, from optics for satellite missions all the way to illumination systems for mobility solutions.

12:15 - 13:00
CMOS Image Sensors for Driver and Occupancy Monitoring Solutions
Jerome Chossat and Pierre Malinge (STMicroelectronics, FR)
Automotive applications require high-performance and cost-effective sensors. Considering these constraints, we present a novel pixel architecture capable of both rolling and global shutter imaging. Utilizing a non-Bayer CFA pattern, it captures both RGB and near-infrared images. A specific ASIL pixel design ensures a comprehensive integrity check of the sensor. The latter is connected to a logic circuit through a 3D Cu-to-Cu hybrid bonding process, providing state-of-the-art on-chip data processing and interfacing. Such a sensor is ideally suited for driver monitoring systems while enabling the integration of advanced multimedia features. Indeed, on top of the pixel and readout quality requirements, CMOS image sensors for driver and occupancy monitoring solutions bring many challenges on the digital side too. They may contain quite complex signal processing to properly deal with various non-Bayer CFAs and manage IR content, they must integrate automotive safety capabilities, must be efficiently protected against malicious attackers aiming to tamper with their functionality, and must prevent the usage of counterfeit components. In addition, all this must be done under aggressive cost and stringent power constraints, and be developed in conformance with road vehicle functional safety (ISO 26262) and road vehicle cybersecurity engineering (ISO 21434) standards.

13:00 - 14:00
Lunch

14:00 - 14:45
Hardware Accelerators for Low-latency Event-based Vision
Charlotte Frenkel (TU Delft, NL)
From optical flow to high-speed particle counting, event-based cameras emerge as an enabler for low-latency vision applications. They capture temporal contrast changes as a stream of events, which are generated on a per-pixel basis and at a temporal resolution of a few microseconds. However, there is currently a lack of hardware support for event-based processing workloads that generate updated predictions within microseconds. This talk will cover emerging developments in this area, from dynamic graph neural networks to digital in-memory computing for spiking neural networks.
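
As a toy illustration of the per-pixel event generation described above (my own sketch; the contrast threshold and synthetic video are assumptions, and a real event camera does this asynchronously in analog circuitry rather than frame by frame):

```python
import numpy as np

# Emit an ON/OFF event whenever the log-intensity at a pixel changes by more than
# a contrast threshold since the last event at that pixel.
rng = np.random.default_rng(4)
frames = np.cumsum(rng.normal(0, 0.02, size=(50, 4, 4)), axis=0) + 1.0   # synthetic intensity video
threshold = 0.05

ref = np.log(frames[0])
events = []                                   # (t, y, x, polarity)
for t, frame in enumerate(frames[1:], start=1):
    diff = np.log(frame) - ref
    fired = np.abs(diff) >= threshold
    for y, x in zip(*np.nonzero(fired)):
        events.append((t, int(y), int(x), int(np.sign(diff[y, x]))))
    ref[fired] = np.log(frame)[fired]         # reset reference only where events fired

print(f"{len(events)} events generated from {frames.size} pixel samples")
```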
 
14:45 - 15:30
Photonics on the Road: Navigating the Integration Hurdles in Self-Driving Cars
Christoph Parl (Valeo, DE)
Valeo is at the forefront of the autonomous driving revolution, providing a comprehensive suite of sensors - cameras, RADARs, ultrasonics, microphones, and LiDARs - that enable self-driving capabilities. This keynote will explore how Valeo's technology is driving the transition from manual to fully autonomous vehicles. A key focus will be on vehicle integration: the art of seamlessly embedding these sensors into the vehicle's design. This requires balancing function-driven design, ensuring optimal sensor performance, with emotion-driven design, creating desirable and engaging vehicles. The presentation will highlight the diverse sensors required for autonomy, with a focus on LiDARs due to their complexity. Crucially, we will examine the challenges and solutions surrounding sensor mounting positions. Optimal placement is vital, considering each sensor's needs, environmental factors, and cleaning requirements. Finally, we'll explore how solid-state technology can help vehicle integration to enable more compact and robust solutions for a large-scale rollout of self-driving functions.

15:30 - 16:00 
Coffee break

16:00 - 16:45
Final discussion and closing​​​​ 

Go to the original article...

NovoViz announces a SPAD-based event camera

Image Sensors World        Go to the original article...


The NovoViz NV04ASC-HW Asynchronous photon-driven camera was developed for applications requiring high sensitivity and/or frame rate but with reduced output bandwidth.

The camera combines the benefits of a single-photon avalanche diode (SPAD) camera, namely the single-photon resolution and fast operating speeds, with the benefits of an event camera – low output data rates.

64 x 48 SPAD pixels
100M fps
10ns resolution
Event-driven output
USB 3.0 
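
A back-of-envelope estimate of my own (assuming 1-bit binary frames, which is not stated in the product description) shows why an event-driven output is needed at these specifications: reading the full array at the quoted frame rate would far exceed a USB 3.0 link.

```python
# Raw data rate if every pixel were read out as a 1-bit sample at the full frame rate.
pixels = 64 * 48
frame_rate = 100e6                       # frames per second
raw_bits_per_s = pixels * frame_rate     # 1 bit per pixel per frame (assumed)
print(f"raw binary-frame data rate = {raw_bits_per_s / 1e9:.0f} Gbit/s vs ~5 Gbit/s for USB 3.0")
```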

Company profile: https://exhibitors.world-of-photonics.com/exhibitor-portal/2025/list-of-exhibitors/exhibitordetails/novoviz/?elb=178.1100.5785.1.111 

More news coverage:

https://www.tokyoupdates.metro.tokyo.lg.jp/en/post-1551/

https://www.startupticker.ch/en/news/novoviz-wins-chf-150-000-to-advance-computational-imaging

  

Go to the original article...


RealSense spinoff from Intel

Image Sensors World        Go to the original article...

Link: https://realsenseai.com/news-insights/news/realsense-completes-spin-out-from-intel-raises-50-million-to-accelerate-ai-powered-vision-for-robotics-and-biometrics/

RealSense Completes Spinout from Intel, Raises $50 Million to Accelerate AI-Powered Vision for Robotics and Biometrics


The newly independent company is set to lead in computer vision and machine perception for physical AI and beyond

SAN FRANCISCO — July 11, 2025 — RealSense, a pioneer in AI-powered computer vision, today announced its successful spinout from Intel Corporation and the close of a $50 million Series A funding round. With investment led by a renowned semiconductor private equity firm and participation from strategic investors, including Intel Capital and MediaTek Innovation Fund, RealSense now operates as an independent company focused on advancing innovation in AI, robotics, biometrics and computer vision. 

The new capital will fuel RealSense’s expansion into adjacent and emerging markets and scale its manufacturing, sales and go-to-market (GTM) global presence to meet increased demand for humanoid and autonomous mobile robotics (AMRs), as well as AI-powered access control and security solutions.

“We’re excited to build on our leadership position in 3D perception in robotics and see scalable growth potential in the rise of physical AI,” said Nadav Orbach, CEO of RealSense. “Our independence allows us to move faster and innovate more boldly to adapt to rapidly changing market dynamics as we lead the charge in AI innovation and the coming robotics renaissance.”

RealSense brings to market proven industry traction across robotics, industrial automation, security, healthcare and “tech for good” initiatives — including partnerships with companies like ANYbotics, Eyesynth, Fit:Match and Unitree Robotics. 

RealSense will continue to support its existing customer base and product roadmap, including the acclaimed RealSense depth cameras, embedded in 60% of the world’s AMRs and humanoid robots, an incredibly fast-growing segment. Its recently launched D555 depth camera, powered by the next-gen RealSense Vision SoC V5 and featuring Power over Ethernet (PoE), demonstrates the company’s ongoing leadership in embedded vision technology and edge AI capabilities. 

“Our mission is to enable the world to integrate robotics and AI in everyday life safely,” said Orbach. “This technology is not about replacing human creativity or decision-making — but about removing danger and drudgery from human work. Our systems are built to amplify human potential by offloading these types of tasks to machines equipped with intelligent, secure and reliable vision systems.”

RealSense has developed robust, global manufacturing technology capabilities to ensure consistent quality and product performance, working with a broad network of vision system distributors and value-added resellers. The company has over 3,000 customers worldwide, with over 80 global patents.

Seasoned leadership for a critical market moment

RealSense’s founding team brings together veteran technologists and business leaders with deep expertise in computer vision, AI, robotics and market development. The team includes:

Nadav Orbach – Chief Executive Officer
Mark Yahiro – Vice President, Business Development
Mike Nielsen – Vice President, Marketing
Fred Angelopoulos – Vice President, Sales
Guy Halperin – Vice President, Head of R&D
Eyal Rond – Vice President, AI and Computer Vision
Joel Hagberg – Vice President, Product 
Ilan Ofek – Vice President, New Product Introduction and Manufacturing
Chris Matthieu – Chief Developer Evangelist
The spinout comes at a moment of rapid global growth in robotics and biometrics. The robotics market is projected to quadruple — from $50 billion today to over $200 billion within six years — while demand for humanoid robots is expected to grow at a CAGR above 40%. At the same time, facial biometrics are becoming increasingly accepted in everyday applications, from airport screening to event entry.

To meet global demand, RealSense plans to expand its GTM team and hire additional AI, software and robotics engineers to accelerate product development.

Go to the original article...

Job Postings – Week of 3 August 2025

Image Sensors World        Go to the original article...


Leidos

Detector Scientist

Vista, California, USA

Link

Australian Research Council

PhD Scholarships in Quantum Technologies

Adelaide, Melbourne, and Brisbane, Australia

Link

CMOS Sensor, Inc.

Product Marketing and Sales Manager

San Jose, California, USA

Link

Sony Europe

European Graduate Program - Image Sensor Designer

Oslo, Norway

Link

Eyeo

CMOS Image Sensor Characterization Engineer

Eindhoven, The Netherlands

Link

ams-Osram

Device engineer optical detectors

Premstätten, Styria, Austria

Link

Osram

Working student - Mass Marketing in the area of Sensor Solutions

Munich, Bavaria, Germany

Link

CERN

Detector Physicist

Geneva, Switzerland

Link

Lockheed-Martin

Converged Sensors Engineering

King of Prussia, Pennsylvania; Liverpool, New York; Owego, New York

USA

Link

Go to the original article...


Zeiss acquires SPAD startup PiImaging

Image Sensors World        Go to the original article...

Link: https://www.zeiss.com/microscopy/en/about-us/newsroom/press-releases/2025/zeiss-acquires-all-equity-shares-of-pi-imaging-technology-sa.html

Unlocking SPAD technology for advanced imaging applications in microscopy and beyond

ZEISS acquires all equity shares of Pi Imaging Technology SA

Jena, Germany | 21 July 2025 | ZEISS Research Microscopy Solutions

In early July, Carl Zeiss Microscopy GmbH acquired all equity shares of Pi Imaging Technology SA, based in Lausanne, Switzerland. Pi Imaging Technology SA now operates as "Pi Imaging Technology SA – a ZEISS company". The Lausanne location with all employees will be retained.

Pi Imaging Technology SA has been a trusted partner of ZEISS Research Microscopy Solutions for many years. To continue and deepen this successful long-term collaboration, ZEISS has now purchased all equity shares of Pi Imaging Technology.

The Swiss-based sensing provider focuses on the development of single-photon avalanche diode (SPAD) arrays and image sensors, engineered using cutting-edge semiconductor technology. A SPAD is a type of photodetector that can detect very weak light signals, even down to the level of individual photons. SPADs are commonly used in a variety of applications in everyday life, industry and various research fields.

"The goal of the acquisition is to combine the innovative SPAD technology with ZEISS microscopy solutions and jointly further develop them, thereby expanding our market-leading position. With the acquisition of Pi Imaging Technology SA, we are investing in a technology that secures our future core business and enables further growth", says Dr. Michael Albiez, Head of ZEISS Research Microscopy Solutions.

SPADs in microscopy and beyond

SPAD detectors from Pi Imaging Technology SA will complement the current and future sensor technologies used in ZEISS high-end microscopes. The combination of Pi Imaging Technology SA's technology and ZEISS microscopy solutions will enable innovative solutions for researchers in the field of high-end fluorescence microscopy in the future. The integration of SPAD technology into ZEISS microscopes improves both the quality and throughput in microscopic imaging in life sciences and so opens new technological possibilities and applications. Since SPAD detectors offer exceptional sensitivity in low-light conditions, they allow researchers to study molecular environments and interactions with remarkable clarity, for example.

"We achieved pioneering milestones by being the first company to integrate a SPAD array into a commercial microscope in 2020 and subsequently introducing the first SPAD camera to the market in 2021", says Michel Antolovic, Managing Director and co-founder of Pi Imaging Technology. "I am very pleased that after many years of trusting collaboration with ZEISS, we are now taking the next step and integrating our entire business into the ZEISS Group. We will merge our innovation capabilities and together shape the field of light detection."

Following the acquisition, ZEISS customers can expect advanced imaging applications with the next generation of detectors.

ZEISS and Pi Imaging Technology SA are also active in other fields, including spectroscopy, scientific imaging, low-light imaging, and high-speed imaging. Their objective is to also collaborate on advancing these fields.

Go to the original article...

Conference List – February 2026

Image Sensors World        Go to the original article...

TIPP 2026 (International Conference on Technology & Instrumentation in Particle Physics) - 2-6 February 2026 - Mumbai, India - Website

IEEE International Solid-State Circuits Conference (ISSCC) - 15-19 February 2026 - San Francisco, California, USA - Website

SPIE Medical Imaging - 15-19 February 2026 - Vancouver, British Columbia, Canada - Website

innoLAE (Innovations in Large-Area Electronics) - 17-19 February 2026 - Cambridge, UK - Website

Wafer-Level Packaging Symposium - 17-19 February 2026 - Burlingame, California, USA - Website

IEEE Applied Sensing Conference - 23-26 February 2026 - Delhi, India - Website

MSS Parallel (BSD, Materials & Detectors, and Passive Sensors) Conference - 23-27 February 2026 - Orlando, Florida, USA - Website - (Clearances may be required)

22nd Annual IEEE International Conference on Sensing, Communication, and Networking (SECON) - 26-28 February 2026 - Abu Dhabi, United Arab Emirates - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

Princeton Infrared Technologies closing business

Image Sensors World        Go to the original article...

From Princeton Infrared Technologies: https://www.princetonirtech.com/

Today marks a bittersweet milestone as we officially close the doors of Princeton Infrared Technologies.

It’s a moment of mixed emotions. Pride in what we’ve accomplished and gratitude for the people who made it possible. Over the past 13 years, we built cutting-edge products in the shortwave infrared and fueled innovation in unique applications.

To our incredible and inspiring employees: thank you! Your passion, resilience and brilliance made the impossible possible. You brought our vision to life and made PIRT what it was and how it will always be remembered.

To our customers, research collaborators, partners, and investors: your trust fueled our work and allowed us to push the boundaries of what’s possible in SWIR imaging. Together, we achieved breakthroughs, made discoveries, and moved the industry forward in ways that should bring us pride.

While it’s hard to see this chapter end, I’m deeply grateful for the journey we’ve taken together. I only wish we had more time to continue the work we’ve shared. This will be our final message as a company. Thank you for being such an important part of our story.

Here’s to new beginnings.

If there are any questions or you need any help please contact:
Brian W. Hofmeister, Esq.
(P)(609) 890-1500
bwh@hofmeisterfirm.com

Go to the original article...

3D stacked edge-AI chip with CIS + deep neural network

Image Sensors World        Go to the original article...

In a recent preprint titled "J3DAI: A tiny DNN-Based Edge AI Accelerator for 3D-Stacked CMOS Image Sensor," Tain et al. write:

This paper presents J3DAI, a tiny deep neural network-based hardware accelerator for a 3-layer 3D-stacked CMOS image sensor featuring an artificial intelligence (AI) chip integrating a Deep Neural Network (DNN)-based accelerator. The DNN accelerator is designed to efficiently perform neural network tasks such as image classification and segmentation. This paper focuses on the digital system of J3DAI, highlighting its Performance-Power-Area (PPA) characteristics and showcasing advanced edge AI capabilities on a CMOS image sensor. To support hardware, we utilized the Aidge comprehensive software framework, which enables the programming of both the host processor and the DNN accelerator. Aidge supports post-training quantization, significantly reducing memory footprint and computational complexity, making it crucial for deploying models on resource-constrained hardware like J3DAI.
Our experimental results demonstrate the versatility and efficiency of this innovative design in the field of edge AI, showcasing its potential to handle both simple and computationally intensive tasks.
Future work will focus on further optimizing the architecture and exploring new applications to fully leverage the capabilities of J3DAI. As edge AI continues to grow in importance, innovations like J3DAI will play a crucial role in enabling real-time, low-latency, and energy-efficient AI processing at the edge.
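
To make the memory-footprint argument concrete, here is a generic int8 post-training quantization sketch of my own (this is not the Aidge API, and the layer size is an assumption): float32 weights are mapped to int8 with a per-tensor scale, cutting storage by 4x at a small accuracy cost.

```python
import numpy as np

# Generic post-training quantization of one weight tensor to int8.
rng = np.random.default_rng(3)
w_fp32 = rng.standard_normal((64, 128)).astype(np.float32)     # assumed layer weights

scale = np.abs(w_fp32).max() / 127.0                            # per-tensor scale
w_int8 = np.clip(np.round(w_fp32 / scale), -127, 127).astype(np.int8)
w_dequant = w_int8.astype(np.float32) * scale                   # values seen at inference time

print(f"storage: {w_fp32.nbytes} B -> {w_int8.nbytes} B")
print(f"max abs quantization error: {np.abs(w_fp32 - w_dequant).max():.4f}")
```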


 




Go to the original article...

Call for Papers: Image Sensors at ISSCC 2026

Image Sensors World        Go to the original article...

New for IEEE ISSCC 2026, we are pleased to announce the creation of a new sub-committee dedicated to Image Sensors & Displays. The Call for Papers includes, but is not limited to, the following topics:

Image sensors • vision sensors and event-based and computer vision sensors • LIDAR, time-of-flight, depth sensing • machine learning and edge computing for imaging applications • display drivers, touch sensing • haptic displays • interactive display and sensing technologies for AR/VR

ISSCC is the foremost global forum for the presentation of advances in solid-state circuits and systems-on-a-chip. This is a great opportunity to increase the presence of image sensors at the Conference, and a unique chance for engineers working at the cutting edge of IC design and application to maintain technical currency and to network with leading experts.

For more information, contact the sub-committee chair, Bruce Rae (STMicroelectronics) via LinkedIn


Go to the original article...

STMicro and Metalenz sign new licensing deal

Image Sensors World        Go to the original article...

 STMicroelectronics and Metalenz have signed a license agreement to scale the production of metasurface optics for high-volume applications in consumer, automotive, and industrial markets.
 
This collaboration aims to meet the growing demand in sectors like smartphone biometrics, LIDAR, and robotics, as the metasurface optics market is projected to reach $2 billion by 2029.
 
ST will leverage its 300mm semiconductor and optics manufacturing platform to integrate Metalenz’s technology, ensuring greater precision and cost-efficiency at scale. Since 2022, ST has already shipped over 140 million units of metasurface optics and FlightSense modules using Metalenz IP.

Full press release below. https://newsroom.st.com/media-center/press-item.html/t4717.html 

STMicroelectronics and Metalenz Sign a New License Agreement to Accelerate Metasurface Optics Adoption
 
New license agreement enabling the proliferation of metasurface optics across high-volume consumer, automotive and industrial markets: from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection.
 
The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.

 
STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications and Metalenz, the pioneer of metasurface optics, announced a new license agreement. The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.
 
“STMicroelectronics is the unique supplier on the market offering a groundbreaking combination of optics and semiconductor technology. Since 2022, we have shipped well over 140 million metasurface optics and FlightSense™ modules using Metalenz IP. The new license agreement with Metalenz bolsters our technology leadership in consumer, industrial and automotive segments, and will enable new opportunities from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection,” underlined Alexandre Balmefrezol, Executive Vice President and General Manager of STMicroelectronics’s Imaging Sub-Group. “Our unique model, processing optical technology in our 300mm semiconductor fab, ensures high precision, cost-effectiveness, and scalability to meet the requests of our customers for high-volume, complex applications.”
 
“Our agreement with STMicroelectronics has the potential to further fast-track the adoption of metasurfaces from their origins at Harvard to adoption by market leading consumer electronics companies,” said Rob Devlin, co-founder and CEO of Metalenz. “By enabling the shift of optics production into semiconductor manufacturing, this agreement has the possibility to further redefine the sensing ecosystem. As use cases for 3D sensing continue to expand, ST’s technology leadership in the market together with our IP leadership solidifies ST and Metalenz as the dominant forces in the emergent metasurface market we created.”
 
The new license agreement aims to address the growing market opportunity for metasurface optics, which is projected to reach $2B by 2029*, largely driven by the industry’s role in emerging display and imaging applications. (*Yole Group, Optical Metasurfaces, 2024 report)
 
In 2022, metasurface technology from Metalenz, which spun out of Harvard and holds the exclusive license rights to the foundational Harvard metasurface patent portfolio, debuted with ST’s market leading direct Time-of-Flight (dToF) FlightSense modules.
 
Replacing the traditional lens stacks and shifting to metasurface optics instead has improved the optical performance and temperature stability of the FlightSense modules while reducing their size and complexity.
 
The use of 300mm wafers ensures high precision and performance in optical applications, as well as the inherent scalability and robustness advantage of semiconductor manufacturing process.

Go to the original article...

Turn your global shutter CMOS sensor into a LiDAR

Image Sensors World        Go to the original article...

In a paper titled "A LiDAR Camera with an Edge" in IOP Measurement Science and Technology journal, Oguh et al. describe an interesting approach of turning a conventional global shutter CMOS image sensor into a LiDAR. The key idea is neatly explained by these two sentences in the paper: "... we recognize a simple fact: if the shutter opens before the arrival time of the photons, the camera will see them. Otherwise, the camera will not. Thus, if the shutter jitter range remains the same and its distribution is uniform, the average intensity of the object in many camera frames will be uniquely associated with the arrival time of the photons."

Abstract: "A novel light detection and ranging (LiDAR) design was proposed and demonstrated using just a conventional global shutter complementary metal-oxide-semiconductor (CMOS) camera. Utilizing the jittering rising edge of the camera shutter, the distance of an object can be obtained by averaging hundreds of camera frames. The intensity (brightness) of an object in the image is linearly proportional to the distance from the camera. The achieved time precision is about one nanosecond while the range can reach beyond 50 m using a modest setup. The new design offers a simple yet powerful alternative to existing LiDAR techniques."
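
A tiny Monte Carlo sketch (my own, with assumed numbers, not the authors' code) makes the principle concrete: with a uniformly jittering shutter edge, the fraction of frames in which the pulsed return is captured, and hence the average brightness over many frames, grows linearly with the photon arrival time:

```python
import numpy as np

# If the shutter's rising edge jitters uniformly over a window, a return arriving at time t
# is seen only in frames where the edge happened before t, so mean intensity ~ t / window.
rng = np.random.default_rng(2)
jitter_window_ns = 20.0        # assumed jitter range
n_frames = 5000

for arrival_ns in (2.0, 10.0, 18.0):                       # assumed photon arrival times
    shutter_edges = rng.uniform(0.0, jitter_window_ns, n_frames)
    seen = shutter_edges < arrival_ns                       # shutter already open when photons arrive
    print(f"arrival {arrival_ns:5.1f} ns -> mean intensity fraction {seen.mean():.3f}")
```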

 



Full paper (paywalled): https://iopscience.iop.org/article/10.1088/1361-6501/adcb5c

Go to the original article...

Job Postings – Week of 20 July 2025

Image Sensors World        Go to the original article...


Fairchild Imaging

CMOS Image Sensor Characterization Engineer

San Jose, California, USA

Link

CERN

Design Engineer - Monolithic Pixel Sensors

Geneva, Switzerland

Link

Apple

Camera Image Sensor Digital Design Engineer Lead

Cupertino, California, USA

Link

Tsinghua University

Postdoctoral Positions in Experimental High Energy Physics

Beijing, China

Link

Concurrent Technologies

Sensor Scientist

Dayton, Ohio, USA

Link

CNRS-LPHNE

Postdoctoral position on Hyper-Kamiokande

Paris, France

Link

Attollo Engineering

Infrared Sensor and FPA Test Engineer

Camarillo, California, USA

Link

Imasenic

Image Sensor Internships

Barcelona, Spain

Link

Imperx

Director of Camera Development

Boca Raton, Florida, USA

Link

Go to the original article...

Samsung blog article on nanoprism pixels

Image Sensors World        Go to the original article...

News: https://semiconductor.samsung.com/news-events/tech-blog/nanoprism-optical-innovation-in-the-era-of-pixel-miniaturization/

Nanoprism: Optical Innovation in the Era of Pixel Miniaturization 

The evolution of mobile image sensors is ultimately linked to the advancement of pixel technology. The market's demand for high-quality images from ever smaller and thinner devices is becoming increasingly challenging to meet, making 'fine pixel' technology a core task in the mobile image sensor industry.
In this trend, Samsung System LSI continues to advance its technology, drawing on its experience in the field of small-pixel image sensors. The recently released mobile image sensor ISOCELL JNP is the industry's first to apply Nanoprism, pushing the boundaries of the physical limitations of pixels.
Let's explore how Nanoprism, the first technology to apply Meta-Photonics to image sensors, was created and how it was implemented in ISOCELL JNP.
 
Smaller Pixels, More Light
Sensitivity in image sensors is a key factor in realizing clear and vivid images. Pixel technology has evolved over time to capture as much light as possible. Examples include the development from front-side illumination (FSI) to back-side illumination (BSI) and various technologies such as deep trench isolation (DTI).
In particular, technology has evolved in the direction of making pixels smaller and smaller to realize high-resolution images without increasing the size of smartphone camera modules. However, this has gradually reduced the sensitivity of unit pixels and caused image quality degradation due to crosstalk between pixels. As a result, a sharp decline in image quality in low-light environments was hard to avoid.
To solve this problem, Samsung introduced a Front Deep Trench Isolation (FDTI) structure that creates a physical barrier between pixels and also developed ISOCELL 2.0, which isolates even the color filters on top of the pixels. Furthermore, Samsung considered an approach to innovate the optical structure of the pixel itself, so that even peripheral light that the existing structure could not accept can be utilized. Nanoprism was born out of this consideration.
More details on the pixel technology of Samsung can be found at the link below.
Pixel Technology
 
Nanoprism: Refracting Light to Collect More
Nanoprism is a new technology first proposed in 2017, based on Meta-Photonics technology that the Samsung Advanced Institute of Technology (SAIT) has accumulated over many years. Unlike the meta-lens research that dominated Meta-Photonics at the time and sought to minimize light dispersion, it used the reverse idea of maximizing dispersion to separate colors. The Nanoprism is a meta-surface-based prism structure that can perform color separation.
So, what has changed from the existing pixel structure? In the existing microlens-based optics, the microlens and the color filter of the pixel are matched 1:1, so each pixel can only accept light of the color corresponding to its own color filter. In other words, there was a physical limit: the light each pixel could receive was bounded by the size of the defined pixel.

However, Nanoprism sets an optimized optical path so that light can be directed to each color-matched pixel by placing a nanoscale structure in the microlens position. Simply put, the amount of light received by each pixel increases, because light that was previously lost due to color mismatch can be sent to the adjacent matching pixels using refraction and dispersion. Nanoprism thus allows pixels to receive more light than the existing microlens structure, making it possible to mitigate the sensitivity reduction that was a concern with smaller pixels.
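
As a purely illustrative toy model of this argument (my own numbers, not Samsung data; the spectral split and redirection efficiency are arbitrary assumptions), compare a pixel that only keeps the color-matched fraction of the light over its own area with one that also recovers part of the mismatched light that would otherwise be filtered out:

```python
# Toy sensitivity comparison, illustration only.
matched_fraction = 0.5       # assumed share of incident light that matches a pixel's color filter
redirect_efficiency = 0.3    # assumed share of mismatched light steered to the correct neighbor

conventional = matched_fraction                                               # microlens: only matched light is kept
nanoprism = matched_fraction + (1 - matched_fraction) * redirect_efficiency  # plus redirected light (by symmetry)
print(f"relative sensitivity gain = {(nanoprism / conventional - 1):.0%}")   # ~30% with these made-up numbers
```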

 
Applying Nanoprism to Image Sensors
Commercializing Meta-Photonics technology in image sensors was a challenging task. Securing both customer reliability and technical completeness was vital. To operate properly as a product, not only the structure of Nanoprism had to be implemented, but also dozens of indicators had to be satisfied.
Samsung's relevant teams worked closely together, repeating the design-process-measurement loop, and made the best efforts to secure performance by considering and reflecting various scenarios from the initial design stage and establishing a reliable verification procedure.
As can be inferred from its name Nanoprism, it was especially difficult from process development to mass production because precise and complex nanometer (nm) structures had to be implemented in pixels. In order to bring the new technology to life, special techniques and methods were introduced, including CMP (Chemical Mechanical Polishing) and low-temperature processes for Nanoprism implementation as well as TDMS (Thermal Desorption Mass Spectrometry) for image sensor production.
 
ISOCELL JNP Enables Brighter and Clearer Images
ISOCELL JNP with Nanoprism has been in mass production this year, and is incorporated in recent smartphones, contributing to an enhanced user experience. Because more light can be received without loss, it is possible to take bright and clear pictures, especially in challenging light conditions. In fact, the ISOCELL JNP with Nanoprism has 25% improved sensitivity compared to the previous ISOCELL JN5 with the same specifications.


Of course, increasing the size of the image sensor can improve the overall performance of the camera, but in mobile there is a limit to how far the image sensor can grow due to design constraints such as the 'camera bump'. Samsung System LSI tried to break through this limitation head-on with Nanoprism. Even as pixels get smaller, this technology improves the sensitivity and color reproduction of each pixel, and it has been applied to ISOCELL JNP.
More details on the product can be found at the link below.

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-jnp/ 

The need for high-resolution imaging in the mobile market will continue. Accordingly, the trend of pixel miniaturization will continue, and even as pixels become smaller, pixel technology will have to keep delivering high sensitivity, high quantum efficiency, and low noise. Nanoprism is a technology that addresses sensitivity among these, and Samsung aims to push further innovation in a direction that goes beyond the existing physical limitations.
Building on this collaboration, continued cross-functional, cross-team efforts aim to explore new directions for next-generation image sensor technologies. 

Go to the original article...

iToF webinar – onsemi’s Hyperlux ID solution

Image Sensors World        Go to the original article...

Overcoming iToF Challenges: Enabling Precise Depth Sensing for Industrial and Commercial Innovation


 

Go to the original article...

Single-photon computer vision workshop @ ICCV 2025

Image Sensors World        Go to the original article...

📸✨ Join us at ICCV 2025 for our workshop on Computer Vision with Single-Photon Cameras (CVSPC)!

🗓️  Sunday, Oct 19th, 8:15am-12:30pm at the Hawai'i Convention Center

🔗 Full Program: https://cvspc.cs.pdx.edu/

🗣️ Invited Speakers: Mohit Gupta, Matthew O'Toole, Dongyu Du, David Lindell, Akshat Dave

📍 Submit your poster and join the conversation! We welcome early ideas & in-progress work.

📝 Poster submission form: https://forms.gle/qQ7gFDwTDexy6e668

🏆 Stay tuned for a CVSPC competition announcement!

👥Organizers: Atul Ingle, Sotiris Nousias, Mel White, Mian Wei and Sacha Jungerman.


Single-photon cameras (SPCs) are an emerging class of camera technology with the potential to revolutionize the way today’s computer vision systems capture and process scene information, thanks to their extreme sensitivity, high speed capabilities, and increasing commercial availability.

They provide extreme dynamic range and long-range high-resolution 3D imaging, well beyond the capabilities of CMOS image sensors. SPCs thus facilitate various downstream computer vision applications such as low-cost, long-range cameras for self-driving cars and autonomous robots, high-sensitivity cameras for night photography and fluorescence-guided surgeries, and high dynamic range cameras for industrial machine vision and biomedical imaging applications.

The goal of this half-day workshop at ICCV 2025 is to showcase the myriad ways in which SPCs are used today in computer vision and inspire new applications. The workshop features experts on several key topics of interest, as well as a poster session to highlight in-progress work. 

We welcome submissions to CVSPC 2025 for the poster session, which we will host during the workshop. We invite posters presenting research relating to any aspect of single-photon imaging, such as those using or simulating SPADs, APDs, QIS, or other sensing methods that operate at or near the single-photon limit. Posters may be of new or prior work. If the content has been previously presented in another conference or publication, please note this in the abstract. We especially encourage submissions of in-progress work and student projects.

Please submit a 1-page abstract via this Google Form. These abstracts will be used for judging poster acceptance/rejection, and will not appear in any workshop proceedings. Please use any reasonable format that includes a title, list of authors and a short description of the poster. If this poster is associated with a previously accepted conference or journal paper please be sure to note this in the abstract and include a citation and/or a link to the project webpage.

Final poster size will be communicated to the authors upon acceptance.

Questions? Please email us at cvspc25 at gmail.

Poster Timeline:
📅 Submission Deadline: August 15, 2025
📢 Acceptance Notification: August 22, 2025 

Go to the original article...

X-FAB’s new 180nm process for SPAD integration

Image Sensors World        Go to the original article...

News link: https://www.xfab.com/news/details/article/x-fab-expands-180nm-xh018-process-with-new-isolation-class-for-enhanced-spad-integration

X-FAB Expands 180nm XH018 Process with New Isolation Class for Enhanced SPAD Integration

NEWS – Tessenderlo, Belgium – Jun 19, 2025

New module enables more compact designs resulting in reduced chip size

X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has released a new isolation class within its 180nm XH018 semiconductor process. Designed to support more compact and efficient single-photon avalanche diode (SPAD) implementations, this new isolation class enables tighter functional integration, improved pixel density, and higher fill factor – resulting in smaller chip area.

SPADs are critical components in a wide range of emerging applications, including LiDAR for autonomous vehicles, 3D imaging, depth sensing in AR/VR systems, quantum communication and biomedical sensing. X-FAB already offers several SPAD devices built on its 180nm XH018 platform, with active-area sizes ranging from 10 µm to 20 µm. This includes a near-infrared optimized diode for elevated photon detection probability (PDP) performance.

To enable high-resolution SPAD arrays, a compact pitch and elevated fill factor are essential. The newly released ISOMOS1 module, a 25 V isolation class, allows significantly more compact transistor isolation structures, eliminates the need for an additional mask layer, and aligns with X-FAB’s other SPAD variants.

The benefits of this enhancement are evident when comparing SPAD pixel layouts. In a typical 4x3 SPAD array with 10x10µm² optical areas, the adoption of the new isolation class enables a ~25% reduction in total area and boosts fill factor by ~30% compared to the previously available isolation class. With carefully optimized pixel design, even greater gains in area efficiency and detection sensitivity are achievable.
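
For intuition on these percentages, the short calculation below works through the fill-factor arithmetic with invented pixel pitches (illustrative assumptions only, not X-FAB figures):

# Hypothetical fill-factor arithmetic for a 4x3 SPAD array with 10x10 um^2 optical areas.
# The pixel pitches are invented for illustration only (not X-FAB data).
optical_area_um2 = 10 * 10      # active SPAD area per pixel
pixels = 4 * 3

for label, pitch_um in [("previous isolation class", 20.0), ("new isolation class", 17.3)]:
    pixel_area = pitch_um ** 2
    fill_factor = optical_area_um2 / pixel_area
    total_area = pixels * pixel_area
    print(f"{label}: fill factor {fill_factor:.0%}, array area {total_area:.0f} um^2")

# With these assumed pitches the array area shrinks by ~25%; since the optical area is
# unchanged, the fill factor automatically rises by ~33% (1 / 0.75 - 1).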

X-FAB’s SPAD solution has been widely used in applications that require direct time-of-flight measurement, such as smartphones, drones, and projectors. This new technological advancement directly benefits applications in which high-resolution sensing with a compact footprint is essential. It enables accurate depth sensing in scenarios such as industrial distance detection and robotic sensing, for example monitoring the area around a robot to avoid collisions when it operates as a cobot. Beyond increasing performance and integration density, the new isolation class opens up opportunities for a broader range of SPAD-based systems requiring low-noise, high-speed single-photon detection within a compact footprint.
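
As a reminder of the direct time-of-flight principle behind these use cases, the sketch below converts a per-pixel TDC timestamp histogram into a distance via d = c·t/2; the bin width and example histogram are hypothetical values, not tied to any X-FAB product:

# Minimal direct time-of-flight (dToF) sketch: find the return-pulse peak in a
# TDC timestamp histogram and convert round-trip time to distance (d = c * t / 2).
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def histogram_to_distance(hist, bin_width_s):
    """Return distance (m) from the peak bin of a photon-arrival histogram."""
    background = np.median(hist)             # crude ambient-light estimate
    peak_bin = int(np.argmax(hist - background))
    t_round_trip = (peak_bin + 0.5) * bin_width_s
    return C * t_round_trip / 2.0

# Example: a target at ~1.5 m gives a ~10 ns round trip; with 100 ps bins
# the return peak lands near bin 100.
hist = np.zeros(256)
hist += 2                                     # uniform ambient background
hist[100] = 50                                # laser return peak
print(f"{histogram_to_distance(hist, 100e-12):.2f} m")   # ~1.51 m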

Heming Wei, X-FAB’s Technical Marketing Manager for Optoelectronics, explains: “The introduction of a new isolation class in XH018 marks an important step forward for SPAD integration. It enables tighter layouts and better performance, while allowing for more advanced sensing systems to be developed using our proven, reliable 180 nanometer platform.”

Models and PDKs, including the new ISOMOS1 module, are now available, supporting efficient evaluation and development of next-generation SPAD arrays. X-FAB will be exhibiting at Sensors Converge 2025 in Santa Clara, California (June 24–26) at booth #847, showcasing its latest sensor technologies. 

 

 

Example design of a 4x3 SPAD pixel array using the new compact 25 V isolation class with the ISOMOS1 module (right) and the previous module (left)

Go to the original article...

Hamamatsu webinar on SPAD and SPAD arrays

Image Sensors World        Go to the original article...

 

 

The video is a comprehensive webinar on single-photon avalanche diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, who discuss technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPADs' unique ability to timestamp events and enhance photon detection efficiency.

Go to the original article...

Images from the world’s largest camera

Image Sensors World        Go to the original article...

Story in Nature news: https://www.nature.com/articles/d41586-025-01973-5

First images from world’s largest digital camera leave astronomers in awe

The Rubin Observatory in Chile will map the entire southern sky every three to four nights.

The Trifid Nebula (top right) and the Lagoon Nebula, in an image made from 678 separate exposures taken at the Vera C. Rubin Observatory in Chile. Credit: NSF-DOE Vera C. Rubin Observatory

 

The Vera C. Rubin Observatory in Chile has unveiled its first images, leaving astronomers in awe of the unprecedented capabilities of the observatory’s 3,200-megapixel digital camera — the largest in the world. The images were created from shots taken during a trial that started in April, when construction of the observatory’s Simonyi Survey Telescope was completed.

...

One image (pictured) shows the Trifid Nebula and the Lagoon Nebula, in a region of the Milky Way that is dense with ionized hydrogen and with young and still-forming stars. The picture was created from 678 separate exposures taken by the Simonyi Survey Telescope in just over 7 hours. Each exposure was monochromatic and taken with one of four filters; they were combined to give the rich colours of the final product. 
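
For readers curious how monochromatic exposures through different filters become a colour picture, the minimal sketch below builds an asinh-stretched (Lupton) composite from three synthetic frames using astropy; it illustrates the general idea only and is not the Rubin Observatory's actual processing pipeline:

# Minimal sketch: combine three monochromatic filter exposures into a colour composite
# with an asinh (Lupton) stretch. Synthetic data; not the Rubin processing pipeline.
import numpy as np
from astropy.visualization import make_lupton_rgb

rng = np.random.default_rng(1)
shape = (512, 512)
# Stand-ins for co-added, background-subtracted exposures in three filters
# (e.g. i, r, g bands mapped to the R, G, B channels).
img_i = rng.gamma(2.0, 1.0, shape)
img_r = rng.gamma(2.0, 1.0, shape)
img_g = rng.gamma(2.0, 1.0, shape)

rgb = make_lupton_rgb(img_i, img_r, img_g, stretch=5, Q=8)  # uint8 (H, W, 3) image
print(rgb.shape, rgb.dtype)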

Go to the original article...
