LOFIC CIS paper

Kitajima et al. from Ritsumeikan University published a paper titled "An LOFIC Image Sensor Readout Circuit with an On-Chip HDR Merger Achieving 36.5% Area and 14.9% Power Reduction" as an extended version of work presented at the 2025 IEEE Sensors conference.

Abstract: For sensing applications, a complementary metal oxide semiconductor (CMOS) image sensor (CIS) with a lateral overflow integration capacitor (LOFIC) is in high demand. The LOFIC CIS can achieve high-dynamic-range (HDR) imaging by combining a low-conversion-gain (LCG) signal for large maximum signal electrons and a high-conversion-gain (HCG) signal for a low electron-referred noise floor. However, the LOFIC CIS faces challenges regarding the power consumption and circuit area when reading both HCG and LCG signals. To address these issues, this study proposes a readout circuit composed of area-efficient MOS capacitors using a folding DC operating point technique and an in-column signal selector for an on-chip HDR merger of HCG and LCG signals. A 10-bit test chip was fabricated with a 0.18µm CMOS process with MOS capacitors. The fabricated chip maintains high linearity, achieving an integral nonlinearity (INL) of +7.17/−6.93 LSB for the HCG signal and +7.95/−7.41 LSB for the LCG signal. Furthermore, the proposed design achieves a 14.92% reduction in the average power consumption of the total readout circuit and a 36.5% reduction in the readout circuit area.
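
The HCG/LCG merge the abstract describes is, in essence, the standard dual-conversion-gain HDR combination: use the low-noise HCG sample while it is below saturation, otherwise substitute the LCG sample scaled by the conversion-gain ratio. A minimal sketch of that merge logic in Python (illustrative only; the gain ratio and saturation threshold below are assumed values, not numbers from the paper):

import numpy as np

def merge_hdr(hcg, lcg, gain_ratio=8.0, hcg_sat=900):
    """Merge same-scene HCG and LCG samples into one linear HDR signal.

    gain_ratio: HCG/LCG conversion-gain ratio (assumed, not from the paper).
    hcg_sat: DN level where the 10-bit HCG sample clips (assumed).
    """
    hcg = np.asarray(hcg, dtype=float)
    lcg = np.asarray(lcg, dtype=float)
    # Keep the low-noise HCG signal until it saturates, then switch to the
    # LCG signal referred to HCG units via the gain ratio.
    return np.where(hcg < hcg_sat, hcg, lcg * gain_ratio)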

Link: https://www.mdpi.com/2674-0729/5/1/8 

The conference proceedings version is available here: https://ieeexplore.ieee.org/document/11331086

Conference List – August 2026

The 3rd International Conference on AI Sensors and Transducers - 2-7 August 2026 - Jeju, South Korea - Website

BNL Physics and Detector Simulation Meeting - 11 August 2026 - Zoom online - Website

International Conference on Particle Accelerators and Detector Technologies (ICPADT-26) - 21-22 August 2026 - Washington, DC, USA - Website

SPIE Optics & Photonics - 23-27 August 2026 - San Diego, California, USA - Website

IOP Photon 2026 - 31 August-3 September 2026 - Newcastle-upon-Tyne, UK - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

QVGA SPAD dToF sensor paper

Kuijk et al. from Vrije Universiteit Brussel and Sony Depthsensing Belgium published a paper titled "320 × 240 SPAD Direct Time-of-Flight Image Sensor and Camera Based on In-Pixel Correlation and Switched-Capacitor Averaging" in the special IISW issue of the Sensors journal.

Abstract: Correlation-Assisted Direct Time-of-Flight (CA-dToF) is demonstrated for the first time on a large 320 × 240-pixel SPAD array sensor that includes on-chip high-speed timing support circuitry. SPAD events are processed in-pixel, avoiding data communication over the array and/or storage bottlenecks. This is accomplished by sampling two orthogonal triangle waves that are synchronized with short light pulses illuminating the scene. Using small switched-capacitor circuits, exponential moving averaging (EMA) is applied to the sampled voltages, delivering two analog voltages (VQ2, VI2). These contain the phase delay, or the time of flight between the light pulse and photon’s time of arrival (ToA). Uncorrelated ambient photons and dark counts are averaged out, leaving only their associated shot noise impacting the phase precision. The QVGA camera allows for capturing depth-sense images with sub-cm precision over a 6 m range of detection, even with a small PDE of 0.7% at an 850 nm wavelength.
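
Conceptually, the in-pixel processing is an exponential moving average (EMA) of two correlation waveforms sampled at each photon's time of arrival, with the phase, and hence the depth, recovered afterwards from the two averaged voltages. A rough software analogue (my own sketch; the triangle-wave shapes, EMA weight, and arctangent phase recovery are generic assumptions rather than the paper's exact circuit behavior):

import numpy as np

MOD_PERIOD = 40e-9        # 25 MHz modulation period (assumed)
ALPHA = 1.0 / 256.0       # EMA weight set by the switched-capacitor ratio (assumed)

def tri_cos(t):
    """Triangle wave standing in for the TCOS correlation function."""
    ph = (t / MOD_PERIOD) % 1.0
    return 4.0 * np.abs(ph - 0.5) - 1.0

def tri_sin(t):
    return tri_cos(t - MOD_PERIOD / 4.0)

rng = np.random.default_rng(0)
true_tof = 12e-9
# Correlated laser-return photons near the true ToF plus uncorrelated ambient hits.
toas = np.concatenate([true_tof + rng.normal(0.0, 0.3e-9, 4000),
                       rng.uniform(0.0, MOD_PERIOD, 2000)])
rng.shuffle(toas)

v_i2 = v_q2 = 0.0
for toa in toas:
    # Each detected photon samples both waveforms and nudges the running
    # averages, mimicking the switched-capacitor EMA; ambient photons
    # average towards zero and contribute only shot noise.
    v_i2 += ALPHA * (tri_cos(toa) - v_i2)
    v_q2 += ALPHA * (tri_sin(toa) - v_q2)

# Arctangent recovery is exact for sinusoids and only approximate for
# triangle waves, so expect a small systematic error here.
phase = np.arctan2(v_q2, v_i2) % (2 * np.pi)
print(f"ToF estimate: {phase / (2 * np.pi) * MOD_PERIOD * 1e9:.1f} ns (true 12.0 ns)")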

Open access DOI link: https://doi.org/10.3390/s25216772


Figure 3. The practical EMA implementation consists of generating non-overlapping clocks (f1 and f2) in response to an edge transition from Vnext, driving the gates of two NMOS transistors (left); the parasitic capacitance of the substrate diffusion diode between the two transistors forms Cs (center); the non-linear behavior of Cs and Cint from SPICE simulation is shown on the right.


Figure 5. An analog counter based on a switched-capacitor principle, useful for counting events like incident photons. When zooming in, the step-like behavior becomes visible (in green).

Figure 6. The correlation functions TCOS and TSIN (left) and the schematic (right) of the two-stage averaging system for correlating the incident ToAs of photons with these functions.


 

Figure 8. The pixel circuit has a non-overlapping clock generator, a photon counter, and two-stage averaging for the sampled triangular TSIN and TCOS signals.

Figure 11. Demodulation using (0°, 180°) phases at 25 MHz: Gray, Q2, I2, and 3D images. Shown right are cross-sections from the image’s row 124, giving more quantitative results, including measured and modeled depth STDV. Color scale is present in the depth-graph (upper right). Ambient is 1 klux at the whiteboard (b) and 2 klux at the box (e).
 


ForzaFAST581 high-speed global-shutter CIS paper

Agarwal et al. from Forza Silicon (Pasadena, CA) published a paper titled "High-Frame-Rate Low-Noise Global Shutter CMOS Image Sensor for High-Speed Machine Vision" in the IISW special issue of MDPI Sensors journal.

Open access link: https://www.mdpi.com/1424-8220/26/4/1117

Abstract: In this paper we present a low-noise, high-frame-rate global shutter CMOS image sensor with UHD resolution (3840 × 2160), targeting high-speed machine vision applications. The sensor (ForzaFAST581) supports video capture at up to 1141 FPS at 12 bits and 1694 FPS at 8 bits at full resolution, consuming a total power of 5.5 W. Fabricated in a 65 nm, four-metal BSI process, the imager features a 5 µm voltage-domain global shutter pixel with dual-gain capability for improved dynamic range and a read noise of 3.04 e− in global shutter and 2.15 e− in rolling shutter mode for high-gain at maximum frame rate operation. For compact camera integration and low power consumption, the sensor is designed to stream video through 16 CML data ports, each operating at 7.44 Gbps, achieving a total aggregate throughput of 119 Gbps. Additionally, the sensor supports selectable output bit depths—8-bit, 10-bit, and 12-bit—allowing frame rate optimization based on application-specific requirements.
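
The headline numbers are easy to sanity-check: 16 ports at 7.44 Gbps gives the quoted 119 Gbps aggregate, and dividing that by the bits per UHD frame bounds the frame rate. A quick check in Python (line blanking and protocol overhead are ignored here, which is why the bounds come out slightly above the quoted rates):

PORTS, GBPS_PER_PORT = 16, 7.44
H, V = 3840, 2160

aggregate_gbps = PORTS * GBPS_PER_PORT                  # 119.04 Gbps total
for bits in (8, 10, 12):
    bits_per_frame = H * V * bits
    max_fps = aggregate_gbps * 1e9 / bits_per_frame     # interface-limited bound
    print(f"{bits}-bit: <= {max_fps:.0f} FPS")
# Prints ~1794 FPS at 8-bit and ~1196 FPS at 12-bit; the quoted 1694 and
# 1141 FPS sit a few percent below these bounds, consistent with overhead
# that this back-of-the-envelope check does not model.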

Figure 1. Image sensor (ForzaFAST581) die photo with 268-Pin LGA Package.


  

Figure 2. Image sensor block diagram.

Figure 9. Bottom digital block floor plan.

 

Figure 13. (a) High-speed signal path (from clock receiver to serialized data output), (b) high-speed 16-to-1 serializer architecture.

 

Figure 16. Quantum efficiency vs. wavelength (in nm).



Himax full year financial results for 2025 are out

https://www.globenewswire.com/news-release/2026/02/12/3236971/8267/en/Himax-Technologies-Inc-Reports-Fourth-Quarter-and-Full-Year-2025-Financial-Results-Provides-First-Quarter-2026-Guidance.html

Excerpts: 

Himax’s visibility for the whole-year outlook of the automotive sector remains limited amid the backdrop of uncertain government policy and consumer sentiment. However, the Company expects the first quarter to be the trough of the year, with sales rebounding in the second quarter and business momentum continuing to improve into the second half, supported by lean customer inventory levels and new projects for automotive customers scheduled to enter mass production later in the year. Continued growth in non-driver IC businesses, particularly Tcon and WiseEye AI, should provide incremental support.

Himax continues to expand into areas such as ultralow power AI for endpoint devices, front-lit LCoS microdisplays and waveguides for AR glasses, and WLO for co-packaged optics, all of which have exciting upside potential in the next couple of years, driven by the recent breakout of AI. Himax expects these initiatives to become meaningful new growth drivers while also improving the Company’s product mix and overall profitability.

WiseEye is gaining strong traction in smart glasses, with a growing number of design-in engagements underway among global tech names, solution platform providers, and smart glasses specialists. A leading brand’s smart glasses are poised to enter mass production later this year, marking an important milestone for WiseEye in the smart glasses market.

[... WiseEye ... incorporates Himax proprietary ultralow power AI processor, always-on CMOS image sensor, and CNN-based AI algorithm ... consumer electronics and AIoT related applications.]

 


CIS startup MetaSilicon raises over $40m

https://www.startupresearcher.com/news/metasilicon-secures-over-usd43-2-million-in-series-a-funding

MetaSilicon, a designer of high-dynamic-range CMOS image sensors, has successfully closed its A+ financing round, securing over $43.2 million. The funding, led by a consortium of prominent investors, is earmarked for accelerating research and development efforts. This strategic capital infusion will bolster the company's dual-track strategy targeting both the automotive and consumer electronics markets.

Strategic Investment and Market Confidence
The round was jointly led by Ceyuan Capital, Wuxi Industrial Investment, and the FAW Hongqi Private Equity Fund, signaling strong confidence in MetaSilicon's vision. A diverse group of new investors, including Innovation Works and CSC Financial, also participated in the financing. Existing shareholder GRC SinoGreen demonstrated continued support by increasing its investment, underscoring the company's promising trajectory.

Rapid Growth and Commercial Success
Since its inception, MetaSilicon has demonstrated remarkable growth, with its revenue soaring from just a few million yuan in 2023 to nearly $28.8 million in 2025. This financial achievement is complemented by significant operational scale, as the company has shipped over 75 million chips to date. This rapid expansion has established MetaSilicon as one of the fastest-growing image sensor design firms in the industry.

Dual-Track Market Domination
The company's success is built on a dual-track strategy that effectively serves two major technology sectors. In consumer electronics, MetaSilicon has delivered nearly 100 projects for industry giants such as Samsung, Xiaomi, and OPPO. This broad adoption by leading brands highlights the quality and competitiveness of its sensor technology in a highly demanding market.
Simultaneously, MetaSilicon has made significant inroads into the smart automotive industry, a key area for future growth. Its 1.3-megapixel and 3-megapixel automotive-grade sensors have passed rigorous validation with over 20 OEMs and Tier 1 suppliers. The company has established deep collaborations, notably with FAW Hongqi, achieving mass production for critical in-vehicle systems.

Advancing Automotive Sensor Technology
These automotive chips are already being integrated into essential applications like Advanced Driver-Assistance Systems (ADAS), in-cabin monitoring, and electronic rearview mirrors. This widespread implementation in production vehicles confirms the reliability and performance of MetaSilicon's technology. The company's ability to secure pre-installation contracts signifies its trusted position within the automotive supply chain.

Looking ahead, MetaSilicon is developing a next-generation 8-megapixel automotive CIS chip to meet the demands of advanced autonomous driving. This high-performance sensor is specifically designed for high-end ADAS, prioritizing superior night vision, high dynamic range, and anti-interference capabilities. The company plans to begin market promotion for this innovative product in 2026, reinforcing its technological leadership.

This successful A+ financing round marks a significant milestone for MetaSilicon, providing the necessary resources to fuel its next phase of innovation. According to founder and chairman Liu Canyi, the capital will be pivotal in deepening R&D investment and enhancing product value for customers. With a proven track record and a clear vision for the future, MetaSilicon is well-positioned to solidify its leadership in the competitive image sensor market. 


Grass Valley needs an Engineer in The Netherlands

Grass Valley Nederland B.V.

Hardware-Sensor Engineer - Breda, Netherlands - Link


IR sensor tech firm Senseeker acquires Axis Machine

Santa Barbara, California (February 11th, 2026) - Senseeker Corp, a leading innovator of digital infrared image sensing technology, can now respond to customer requirements more quickly and thoroughly through the acquisition of Axis Machine (Santa Barbara, California) by Senseeker Machining Company (SMC).

Senseeker Machining Company will continue to support and grow Axis Machine’s established customer base built up over 20+ years in delivering high-quality machined parts. The acquisition will enable Senseeker to further grow mechanical component lines and to reduce the lead time on machined parts used in Senseeker’s programs and portfolio of industry standard commercial cryogenic test equipment for testing infrared focal plane arrays.

SMC will continue to operate from the existing machine shop facility, located at 81 David Love Place, just a short walk from the Senseeker Corp headquarters in Santa Barbara. The SMC facility is equipped with several 3-Axis and 4-Axis CNC Machining Centers, Lathes and Multi-Axis Milling Equipment to be able to maintain a high throughput of work. A Mitutoyo DCC-CMM, optical comparator and a full range of precision inspection tools are used for quality control. SMC also runs industry standard CAD and CNC programming software.

“Bringing high-quality machining capability to Senseeker is an important step in the evolution of the company’s unique lateral business model. Senseeker’s cryogenic Sensor Test Unit product lines have grown significantly in recent years and this acquisition will help accelerate delivery times,” said Kenton Veeder, CEO of Senseeker. “Additionally, our mechanical engineering has expanded across our program portfolio and our new machining capability will help us build better mechanical systems through tight coupling between machining and engineering. We are excited to build SMC into a high-quality machining organization for existing shop customers and new sensor community customers alike.”

https://senseeker.com/news/PR-20260211.htm 


Paper on 3D-stacked InGaAs/InP SPAD

In a "hot-off-the-press" paper in Optics Express titled "Room-temperature, 96×96 pixel 3D-stacked InGaAs/InP SPAD sensor with complementary gating for flash LiDAR", Yildirim et al. from EPFL/Fraunhofer/FBH write:

A room-temperature 3D-stacked flash LiDAR sensor is presented for the short-wave infrared (SWIR). The 96×96 InGaAs-InP SPAD array in the top tier is biased by a circuit at the bottom tier that implements a complementary cascoded gating at the pixel level to control noise and afterpulsing. The bottom-tier chip is fabricated in a 110-nm CMOS technology. The sensor is tested with a 1550nm laser operating at 100μW to 3.1mW average power. The SPADs are gated with 3ns pulses with 500ps skew. Intensity images and depth maps are shown both indoors and outdoors at 10m in 120 klux background light with telemetry up to 100m, having better than 2% accuracy.
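
For context, the generic dToF pipeline behind a flash LiDAR like this accumulates photon timestamps into a per-pixel histogram and takes the peak bin as the return, with gating used to reject arrivals outside the window of interest. A toy illustration of histogram-peak ranging (all parameter values below are assumptions for illustration, not the paper's):

import numpy as np

rng = np.random.default_rng(3)
C = 3e8
BIN_W = 100e-12                    # 100 ps histogram bin (assumed)
N_BINS = 700                       # ~10.5 m unambiguous range at these settings

# Simulated timestamps: laser return from 10 m (~66.7 ns round trip) plus
# uniform background, standing in for strong ambient light.
t_signal = 2 * 10.0 / C + rng.normal(0, 250e-12, 2000)   # timing jitter (assumed)
t_bg = rng.uniform(0, N_BINS * BIN_W, 20000)
hist, edges = np.histogram(np.concatenate([t_signal, t_bg]),
                           bins=N_BINS, range=(0, N_BINS * BIN_W))
peak_t = edges[np.argmax(hist)] + BIN_W / 2
print(f"depth ~ {C * peak_t / 2:.2f} m")                 # expect ~10 m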


Proposed complementary optical gating pixel for InGaAs SPADs (a) arranged in a 96×96 array (b) and its timing diagram (c).

Micrograph of the bottom tier (a) and 3D-stacked chip micrograph (b). Illustration of the indium bump bonding scheme (c).
 


Outdoors flash LiDAR images with 120 klux background sunlight. The scene, intensity image, and depth image shown for 3 m (a-c) and 10 m (d-f).


Passive SPAD simulator and dataset

Preprint: https://arxiv.org/abs/2601.12850

In a preprint titled "Accurate Simulation Pipeline for Passive Single-Photon Imaging" Suonsivu et al. write:

Single-Photon Avalanche Diodes (SPADs) are new and promising imaging sensors. These sensors are sensitive enough to detect individual photons hitting each pixel, with extreme temporal resolution and without readout noise. Thus, SPADs stand out as an optimal choice for low-light imaging. Due to the high price and limited availability of SPAD sensors, the demand for an accurate data simulation pipeline is substantial. Indeed, the scarcity of SPAD datasets hinders the development of SPAD-specific processing algorithms and impedes the training of learning-based solutions. In this paper, we present a comprehensive SPAD simulation pipeline and validate it with multiple experiments using two recent commercial SPAD sensors. Our simulator is used to generate the SPAD-MNIST, a single-photon version of the seminal MNIST dataset, to investigate the effectiveness of convolutional neural network (CNN) classifiers on reconstructed fluxes, even at extremely low light conditions, e.g., 5 mlux. We also assess the performance of classifiers exclusively trained on simulated data on real images acquired from SPAD sensors at different light conditions. The synthetic dataset encompasses different SPAD imaging modalities and is made available for download. 
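
The statistical core of a passive SPAD simulator is simple: with Poisson photon arrivals, a pixel in a binary frame fires with probability 1 - exp(-(PDE·flux + DCR)·t_exp), and the flux can be recovered by inverting the observed firing rate. A minimal forward/inverse sketch along those lines (a generic illustration; the parameter values are assumed, not taken from the paper's pipeline):

import numpy as np

rng = np.random.default_rng(1)

PDE = 0.25          # photon detection efficiency (assumed)
DCR = 100.0         # dark counts per second (assumed)
T_EXP = 1e-4        # per-frame exposure, 100 us (assumed)

def simulate_binary_frames(flux, n_frames=2000):
    """Forward model: binary SPAD frames from a flux map (photons/s/pixel)."""
    rate = PDE * flux + DCR                      # effective Poisson rate
    p_fire = 1.0 - np.exp(-rate * T_EXP)         # P(>=1 detection per frame)
    return rng.random((n_frames, *flux.shape)) < p_fire

def estimate_flux(frames):
    """Inverse model: maximum-likelihood flux from per-pixel firing rates."""
    p_hat = frames.mean(axis=0).clip(1e-6, 1 - 1e-6)
    rate_hat = -np.log1p(-p_hat) / T_EXP
    return (rate_hat - DCR) / PDE

flux = np.full((8, 8), 5e4)                      # flat 50k photons/s test scene
frames = simulate_binary_frames(flux)
print(estimate_flux(frames).mean())              # should land near 5e4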

The dataset download link is here: https://boracchi.faculty.polimi.it/Projects/SPAD-MNIST.html

This is based on work presented at the European Conference on Computer Vision, Synthetic Data for Computer Vision Workshop, in 2024.

Samsung’s US fab for iPhone CIS

TheElec reported in August 2025 that Samsung plans to use its Austin, Texas fab to make sensors for future iPhones:

Samsung to form smartphone image sensor line in Austin for Apple

3-layer stacked CMOS image sensor to power iPhone 18 in 2026

The plan ... seems to be a response to tariffs on South Korea-made semiconductors that the Trump Administration plans to impose.

If all goes to plan, it will mark the first time that Samsung is manufacturing CIS in the US.

The CIS is made with wafer-to-wafer hybrid bonding ... requires precise process control and only Sony and Samsung have commercialized it.


Canon’s weighted photon counting SPAD array

In June 2025 Canon announced an HDR SPAD sensor that performs weighted counting (as opposed to simply accumulating photon counts): https://global.canon/en/news/2025/20250612.html

Canon develops High Dynamic Range SPAD sensor with potential to detect subjects even in low-light conditions or environments with strong lighting contrasts thanks to unique technology

TOKYO, June 12, 2025—Canon Inc. announced today that it has developed a 2/3" SPAD sensor featuring approximately 2.1 megapixels and a high dynamic range of 156dB. Thanks to a unique circuit technology, it realizes high dynamic range, low power consumption, and the ability to mitigate flickering from LED lights. Canon will continue further technological development and aims to start mass production.

SPAD sensors employ a principle called photon counting, which detects each photon (light particle) entering a pixel and counts the number of incident photons. Because the readout process does not introduce any noise, it is possible to capture a clear image of subjects. The sensor can also measure the distance to the subject at high speed with excellent timing precision.

However, due to limitations in processing speed, when the number of incident photons exceeds a certain threshold under high-illuminance conditions, conventional SPAD sensors have difficulty separating individual photons during readout, which causes the acquired image to white out. In addition, such sensors consume a large amount of power, as each photon counted independently consumes power.

Canon's newly developed SPAD sensor, by contrast, uses a unique technology called “weighted photon counting.” Focusing on the fact that the frequency at which photons reach the sensor correlates with illuminance, this technology measures the time it takes for the first photon to reach the pixel within a certain time frame, then estimates the total number of photons that will arrive at the pixel over a certain time period. As a result, the image does not white out: the large number of photons is estimated precisely rather than counted individually, allowing the subject to be captured clearly.

While a conventional SPAD sensor counts every incident photon one by one, the new method estimates the total number of incident photons within a certain timeframe based on the time it takes for the first incident photon to arrive. As a result, the new sensor achieves a high dynamic range of 156dB, approximately five times higher than the previous sensor. At the same time, this approach reduces the power consumption per pixel by roughly 75% by reducing the frequency of photon detections. In addition, this technology also mitigates the flickering that occurs when capturing light from LEDs such as traffic lights.
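
The estimation principle is standard Poisson statistics: for photon arrivals at rate λ, the first-arrival time within a gate is exponentially distributed, so a short first-arrival time implies a bright pixel and λ can be estimated from it even where counting every photon would saturate. A toy illustration of that idea (my own sketch of the statistics; Canon's actual weighting circuit is not disclosed in this announcement):

import numpy as np

rng = np.random.default_rng(2)
T_GATE = 1e-3        # observation window per estimate, 1 ms (assumed)

def rate_from_first_arrivals(rate, n_trials=10000):
    """Estimate a Poisson arrival rate using only first-arrival times."""
    t_first = rng.exponential(1.0 / rate, n_trials)
    # Censored-exponential MLE: events observed / total time observed.
    events = (t_first < T_GATE).sum()
    observed_time = np.minimum(t_first, T_GATE).sum()
    return events / observed_time

for rate in (1e3, 1e6, 1e9):   # dim pixel to very bright pixel, photons/s
    print(f"true {rate:.0e}  estimated {rate_from_first_arrivals(rate):.2e}")
# The same first-arrival measurement covers ~6 decades of flux, which is the
# dynamic-range benefit of estimating rather than counting every photon.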

Canon anticipates that this new sensor will have a wide variety of applications, such as surveillance, onboard vehicle equipment, and industrial use. For instance, it is expected to be applied to autonomous driving and advanced driving-assistance systems. As autonomous driving technology advances, the demand for onboard sensors is increasing. At the same time, as many countries increasingly tighten related safety standards, there is a need for advanced sensor technology to ensure the safety of autonomous driving. However, the CMOS sensors currently used in vehicles are known to have several issues with visibility in environments with strong contrasts between bright and dark scenes, such as tunnel exits, or in extremely low light conditions. Canon has addressed these issues by combining new features with conventional SPAD sensors, which excel in low-light shooting.

Canon announced this new sensor technology on June 12, 2025 at the 2025 Symposium on VLSI Technology and Circuits held in Kyoto, Japan.

  •  While conventional SPAD sensors count all incident photons one by one, the newly developed SPAD sensor uses a unique technology called weighted photon counting that estimates the total amount of incident photons within a certain period of time based on the detection of the first incident photon. This greatly widens the number of photons that can be measured.
  •  This technology can also mitigate flickering when light from LEDs such as traffic lights is captured.

 

Weighted photon counting enables photon detection in both high and low levels of illuminance
 
With excellent high dynamic range performance of 156dB, a clear image is captured including bright and dark subjects

Simplified illustration of the weighted photon counting technique. The earlier the arrival of the first incident photon, the brighter the incident light.


Sony releases image stabilizer chip

Link: https://www.sony-semicon.com/en/products/lsi-ic/stabilizer.html

The Stabilizer Large-Scale Integration (LSI) CXD5254GG chip combines an image sensor and 6-axis inertial measurement unit (IMU) to perform electronic image stabilization (EIS), removing vibrations and maintaining a level horizon in the video input from the image sensor, and outputting the stabilized image. The advanced algorithm for attitude control reduces blurs caused by camera vibrations and achieves both real-time horizon stabilization and suppression of “jello effect” video distortion. The Stabilizer LSI is also equipped with Sony’s unique contrast improvement feature, the intelligent Picture Controller (iPC). Together with the stabilizing features, it enables the camera to clearly capture objects or information that could not be previously recognized due to vibrations.

The CXD5254GG creates new imaging value that conventional camera technologies cannot achieve, enabling applications across a wide range of fields including broadcasting, sports entertainment, security, and robotics. In addition to the CXD5254GG itself, a choice of compact camera modules combining the IMX577 sensor and lens is also available for broadcasting/video production applications, meeting a wide range of user needs.

The product performs a wide range of signal processing including high-precision blur correction via EIS, horizon maintenance, suppression of the jello effect, and lens distortion correction. We also provide established stabilizer sample parameters, derived from a variety of actual applications including onboard cameras, dashboard cameras, wearable devices, first-person view (FPV) drones, remote-controlled (RC) cars, and fixed-point cameras, backed by Sony’s many years of expertise and know-how. These sample parameter configurations can be optimized for specific applications to maximize the potential of the CXD5254GG’s stabilizing performance.


 


EETimes Prophesee article


A few quotes:

“We have the sensor, defined use cases, and the full-stack demonstration, [including] machine learning models to software integration in platforms such as Raspberry Pi,” Ferré said. “What probably [has been] missing is the scale of the business and demonstration of value.”

“Our technology is fantastic, but the way to make money with it…probably needed a bit of tuning, so this is what we’re doing,” he added.

“I’ve been on the phone with one of our integrators for Electronic Supervision System cameras, and they said, ‘we’ve never sold so many evaluation kits in so many industries—drones, manufacturing’. There’s traction [here]…this is huge.”

When asked about acquisition potential—given the recent SynSense-iniVation merger, and myriad market heavyweights—he replied: “We’re talking to very powerful players. They are not looking to buy us.”


Sony’s global shutter image sensor in JSSC

In a recent paper titled "A 5.94-μm Pixel-Pitch 25.2-Mpixel 120-Frames/s Full-Frame Global Shutter CMOS Image Sensor With Pixel-Parallel 14-bit ADC", Sakakibara et al. from Sony Semiconductor Solutions (Japan) write:

We present a 25.2-Mpixel, 120-frames/s full-frame global shutter CMOS image sensor (CIS) featuring pixel-parallel analog-to-digital converters (ADCs). The sensor addresses the limitations of conventional rolling shutters (RSs)—including motion distortion, flicker artifacts, and flash banding—while maintaining image quality suitable for professional and advanced amateur photography. A stacked architecture with 3-μm-pitch Cu–Cu hybrid bonding enables more than 50 million direct connections between the pixel array and the ADC circuits. The pixel-parallel single-slope ADCs operate with a comparator current of 25 nA and use a positive-feedback (PFB) scheme with noise-bandwidth control using an additional 11.4-fF capacitor, achieving 2.66 e−rms (166.8 μVrms) random noise (RN) at 0-dB gain with an REF slope of 2161 V/s. The 5.94-μm pixel pitch accommodates 30-bit latches designed under SRAM rules in a 40-nm CMOS process. Noise analysis reveals that in subthreshold operation, the dominant noise contributors are the comparator current, REF slope, and second-stage load capacitance. The sensor delivers 14-bit resolution, a 75.5-dB dynamic range (DR), and 120-frames/s operation at a power consumption of 1545 mW. A figure of merit of 0.083 e−rms·pJ/step is comparable to state-of-the-art RS sensors. These results demonstrate that pixel-parallel ADC technology can be scaled to tens of megapixels while preserving high image quality and energy efficiency, enabling motion-artifact-free imaging in battery-powered consumer cameras.
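
The quoted figure of merit follows the usual readout-energy definition, FoM = random noise × power / (frame rate × pixel count × 2^bits), and the abstract's numbers reproduce it. A quick sanity check (assuming this standard definition; the paper may compute it with minor variations):

power_w = 1.545          # 1545 mW
fps = 120
pixels = 25.2e6
adc_bits = 14
noise_e_rms = 2.66

steps_per_second = fps * pixels * 2**adc_bits
energy_per_step_pj = power_w / steps_per_second * 1e12   # ~0.031 pJ/step
fom = noise_e_rms * energy_per_step_pj                    # ~0.083 e−rms·pJ/step
print(f"{energy_per_step_pj:.4f} pJ/step, FoM = {fom:.3f}")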






 Full paper link [behind paywall]: https://ieeexplore.ieee.org/document/11219086


Conference List – July 2026

2nd International Conference on Optical Imaging and Detection Technology (OIDT 2026) - 3-5 July 2026 - Yulin, China - Website

New Developments in Photodetection - 6-10 July 2026 - Troyes, France - Website

11th International Smart Sensor Technology Exhibition - 8-10 July 2026 - Goyang, South Korea - Website

Tenth International Conference on Imaging, Signal Processing and Communications - 11-13 July 2026 - Kobe, Japan - Website

IEEE International Conference on Flexible Printable Sensors and Systems - 12-15 July 2026 - Atlanta, Georgia, USA - Website

Optica Sensing Congress - 12-17 July 2026 - Maastricht, Netherlands - Website

IEEE Sensors Applications Symposium - 15-17 July 2026 - Vitoria, Brazil - Website

American Association of Physicists in Medicine 67th Annual Meeting and Exhibition - 19-22 July 2026 - Vancouver, BC, Canada - Website

IEEE Nuclear & Space Radiation Effects Conference (NSREC) - 20-24 July 2026 - San Juan, Puerto Rico, USA - Website

34th International Workshop on Vertex Detectors - 20-24 July 2026 - Stoos, Switzerland - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Synthetic aperture imager

Link: https://scitechdaily.com/this-breakthrough-image-sensor-lets-scientists-see-tiny-details-from-far-away/

Open-access paper: Multiscale aperture synthesis imager  https://www.nature.com/articles/s41467-025-65661-8

A new lens-free imaging system uses software to see finer details from farther away than optical systems ever could before.

Imaging technology has reshaped how scientists explore the universe – from charting distant galaxies using radio telescope arrays to revealing tiny structures inside living cells. Despite this progress, one major limitation has remained unresolved. Capturing images that are both highly detailed and wide in scope at optical wavelengths has required bulky lenses and extremely precise physical alignment, making many applications difficult or impractical.

Researchers at the University of Connecticut may have found a way around this obstacle. A new study led by Guoan Zheng, a biomedical engineering professor and director of the UConn Center for Biomedical and Bioengineering Innovation (CBBI), along with his team at the University of Connecticut College of Engineering, was published in Nature Communications. The work introduces a new imaging strategy that could significantly expand what optical systems can do in scientific research, medicine, and industrial settings.

Why Synthetic Aperture Imaging Breaks Down at Visible Light

“At the heart of this breakthrough is a longstanding technical problem,” said Zheng. “Synthetic aperture imaging – the method that allowed the Event Horizon Telescope to image a black hole – works by coherently combining measurements from multiple separated sensors to simulate a much larger imaging aperture.”

This approach works well in radio astronomy because radio waves have long wavelengths, which makes precise coordination between sensors achievable. Visible light operates on a much smaller scale. At those wavelengths, the physical accuracy needed to keep multiple sensors synchronized becomes extremely difficult to maintain, placing strict limits on traditional optical synthetic aperture systems.

Letting Software Do the Synchronizing

The Multiscale Aperture Synthesis Imager (MASI) addresses this challenge in a fundamentally different way. Instead of requiring sensors to remain perfectly synchronized during measurement, MASI allows each optical sensor to collect light on its own. Computational algorithms are then used to align and synchronize the data after it has been captured.

Zheng describes the concept as similar to several photographers observing the same scene. Rather than taking standard photographs, each one records raw information about the behavior of light waves. Software later combines these independent measurements into a single image with exceptionally high detail.

This computational approach to phase synchronization removes the need for rigid interferometric setups, which have historically prevented optical synthetic aperture imaging from being widely used in real-world applications.

How MASI Captures and Rebuilds Light

MASI differs from conventional optical systems in two major ways. First, it does not rely on lenses to focus light. Instead, it uses an array of coded sensors placed at different locations within a diffraction plane. Each sensor records diffraction patterns, which describe how light waves spread after interacting with an object. These patterns contain both amplitude and phase information that can later be recovered using computational methods.

After the complex wavefield from each sensor is reconstructed, the system digitally extends the data and mathematically propagates the wavefields back to the object plane. A computational phase synchronization process then adjusts the relative phase differences between sensors. This iterative process increases coherence and concentrates energy in the combined image.

This software-based optimization is the central advance. By aligning data computationally rather than physically, MASI overcomes the diffraction limit and other restrictions that have traditionally governed optical imaging.
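
For readers who want to connect this to code: numerically propagating a recovered complex wavefield between planes is typically done with the angular spectrum method, and a MASI-style reconstruction applies per-sensor phase offsets on top of such propagation. A minimal angular-spectrum propagator (a generic textbook sketch, not the authors' reconstruction code):

import numpy as np

def angular_spectrum(field, wavelength, dx, z):
    """Propagate a sampled complex wavefield a distance z (angular spectrum).

    field: 2D complex array sampled at pitch dx; z may be negative to
    back-propagate a measured diffraction pattern towards the object plane.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0.0))
    H = np.exp(1j * kz * z) * (arg > 0)     # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# E.g., back-propagate a 532 nm wavefield recorded 5 cm away on a 2 um pitch:
#   obj = angular_spectrum(sensor_field, 532e-9, 2e-6, -0.05)
# A MASI-style merger would additionally optimize one phase offset per sensor
# so the independently recovered wavefields add coherently at the object plane.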

A Virtual Aperture With Fine Detail

The final result is a virtual synthetic aperture that is larger than any single sensor. This allows the system to achieve sub-micron resolution while still covering a wide field of view, all without using lenses.

Traditional lenses used in microscopes, cameras, and telescopes force engineers to balance resolution against working distance. To see finer details, lenses usually must be placed very close to the object, sometimes just millimeters away. That requirement can limit access, reduce flexibility, or make certain imaging tasks invasive.

MASI removes this constraint by capturing diffraction patterns from distances measured in centimeters and reconstructing images with sub-micron detail. Zheng compares this to being able to examine the fine ridges of a human hair from across a desk rather than holding it just inches from your eye.

Scalable Applications Across Many Fields

“The potential applications for MASI span multiple fields, from forensic science and medical diagnostics to industrial inspection and remote sensing,” said Zheng. “But what’s most exciting is the scalability – unlike traditional optics that become exponentially more complex as they grow, our system scales linearly, potentially enabling large arrays for applications we haven’t even imagined yet.”

The Multiscale Aperture Synthesis Imager represents a shift in how optical imaging systems can be designed. By separating data collection from synchronization and replacing bulky optical components with software-controlled sensor arrays, MASI shows how computation can overcome long-standing physical limits. The approach opens the door to imaging systems that are highly detailed, adaptable, and capable of scaling to sizes that were previously out of reach.


Eric Fossum receives 2026 Draper Prize for Engineering

Link: https://home.dartmouth.edu/news/2026/01/eric-fossum-awarded-draper-prize-engineering

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies, has been awarded the 2026 Charles Stark Draper Prize for Engineering, which is granted every two years by the National Academy of Engineering and is one of the world’s preeminent honors for engineering achievement.

The NAE recognized Fossum “for innovation, development, and commercialization of the complementary metal-oxide semiconductor active pixel image sensor,” an invention that remains the core technology behind roughly 7 billion cameras produced each year.

“Eric Fossum is a pioneering semiconductor device physicist and engineer whose invention of the CMOS active pixel image sensor, or ‘camera on a chip,’ has transformed imaging across everyday life, industry, and scientific discovery,” the NAE said in announcing the prize, which includes a $500,000 cash award.

The honor is the latest in a string of accolades for Fossum, who in addition to his role as a professor at Thayer School of Engineering also serves as vice provost for entrepreneurship and technology transfer and directs the PhD Innovation Program.

His other honors include the Queen Elizabeth Prize for Engineering, the National Medal for Technology and Innovation awarded at a White House ceremony last year, and a Technical Emmy Award recognizing the transformative impact of Fossum’s invention.

Today, CMOS image sensors, which were intended to make digital cameras for space faster, better, and cheaper, are behind billions of captures in a vast variety of settings—selfies, high-definition videos, dental X-rays, and space images.

“Eric Fossum’s inventions have revolutionized digital imaging across industries,” says President Sian Leah Beilock. “His work is a prime example of how the applied research our faculty foster and undertake can drive innovation and improve our world.” 

Research for NASA

Tasked with creating smaller cameras for NASA spacecraft that would use less energy, Fossum led the team that invented and developed the CMOS image sensor technology at the Jet Propulsion Laboratory at the California Institute of Technology in the 1990s. The CMOS image sensor integrated all the essential camera functions on a single piece of silicon—each chip contained arrays of light-sensitive pixels, each with its own amplifier.

Fossum recalls the moment when their first image sensor worked flawlessly in testing. It was a eureka moment, but only in hindsight. His initial reaction was tempered by caution. “It seemed so straightforward that I figured others must have tried this before, and there must be a fatal flaw somewhere. So, it was exhilarating to see that it was working,” he says.

The CMOS sensor was commercialized through Photobit, the company he co-founded and helped lead until its acquisition by Micron. 

As the CMOS sensor grew in sophistication, so too did its impact, finding applications in both predictable and surprising ways, such as swallowable pill cameras that can take images inside the body and the explosion of smartphone cameras, which forever changed how we capture and share our lives.

“The impact it has had on social justice has been huge, which I did not anticipate at all, and is truly gratifying. It protects people that might otherwise be powerless, and those with power from false accusations,” Fossum says.

Fossum, a Connecticut native, received a bachelor of science degree in physics and engineering from Trinity College, and a PhD in engineering and applied science from Yale in 1984. Prior to his work at the Jet Propulsion Lab, he was a faculty member at Columbia University. After leading several startups, consulting, and co-founding the International Image Sensor Society, he joined Dartmouth in 2010.

Fossum’s many other honors include the NASA Exceptional Achievement Medal, the IEEE Jun-ichi Nishizawa Medal, and induction into the U.S. Space Foundation Technology Hall of Fame in 1999 and the National Inventors Hall of Fame in 2011. He also served as CEO of Siimpel, developing MEMS devices for autofocus in smartphone camera modules, and worked as a consultant for Samsung on time-of-flight sensor development. He is a member of the National Academy of Engineering and a fellow of the National Academy of Inventors, the Institute of Electrical and Electronics Engineers, and Optica.

Counting photons: The future of imaging

Fossum continues to push the boundaries of imaging. His more recent invention, the quanta image sensor, was developed at Dartmouth and enables high-resolution imaging in extremely low-light conditions.

“We’re working on sensors that can count photons, one at a time,” he says. “Imagine being able to take a photo in almost complete darkness or measuring extremely faint signals in biology. It’s like turning the lights on in a place that was previously invisible to us.” 

Fossum and two of his former Dartmouth students co-founded Gigajot to commercialize the technology.

“Eric’s achievements are not the result of a single breakthrough, but of sustained curiosity and a focus on real-world impact,” says Douglas Van Citters ’99, Thayer ’03, ’06, interim Thayer dean. “To this day, he brings exceptional dedication to teaching and research, along with a passion for entrepreneurship that permeates Dartmouth, especially Thayer. And that spirit has inspired generations of engineers at Dartmouth who, like Eric, are committed to improving lives through the technologies they create.”

When asked about where he sees the field of imaging in the next decade, Fossum imagines a world where great images can be captured using a handful of photons and where computational imaging allows humans to see the world in ways eyes themselves never could. 

“The ability to capture images in low light will continue to improve,” he predicts. “And we’re likely to see a proliferation of augmented reality technologies that will change the way we experience the world around us.”

 In his mind, the grand challenge ahead is miniaturization—creating sensors with pixels so tiny that they become smaller than the wavelength of light itself. With this breakthrough, imaging technology could scale to the point where a single chip contains billions of pixels, opening new possibilities for everything from medical diagnostics to space exploration.

Along with his continuing work on sensors, Fossum draws from his extensive experience in innovation and entrepreneurship in his role as vice provost and in overseeing the PhD Innovation Program.

He says that the program trains students not just to think creatively but to apply their research in ways that have a meaningful impact.

“It is just so much more satisfying to make a real impact with the work that you do,” he says.

The awards ceremony is scheduled for Feb. 18 in Washington, D.C. As he did with the Queen Elizabeth prize, Fossum plans to donate the majority of the Draper Prize funds to STEM-related charities.


Mythic image sensor

Link: https://www.eetimes.com/mythic-rises-from-the-ashes-with-125-million-funding-round/

Mythic Rises from the Ashes with $125 Million Funding Round 

Excerpt: 

A separate product family, dubbed “Starlight,” will use a Mythic compute chiplet hybrid-bonded under a vision sensor’s photodiode array. The two dies will use less than 1 W between them.

Ozcelik said he noticed a gap in the market for this type of device while previously working at ON Semiconductor.

“One of the biggest challenges for image sensors is low light performance,” he said. “Dynamic range is another major problem, especially in mission critical applications.”

A Mythic AI accelerator could run a neural network to improve low-light performance and dynamic range directly next to the sensor. Image sensors made for applications like cellphones are very small (one-third of an inch), and performance suffers as they get smaller, Ozcelik said. Mythic has a unique opportunity here as its technology is compact, and crucially, it uses very little power, according to Ozcelik (photodiode arrays are extremely thermally sensitive, meaning even a small DSP couldn’t be placed directly under the photodiode array).

Mythic is going to build this sensor and AI accelerator combination itself, and both the accelerator chiplet and the image sensor product will tape out this year, Ozcelik said.

Overall, Ozcelik is pragmatic about the scale of the challenges ahead, particularly given the company’s move into the data center where it will compete with Nvidia.

“[Our advantage] has to be incredibly material,” he said. “It has to be at least one hundred times, hopefully more.”


Voyant releases solid-state FMCW LiDAR

Press release: https://voyantphotonics.com/news/1075/

New York, NY – December 17, 2025 – Voyant Photonics, the leader in chip-scale frequency-modulated continuous-wave (FMCW) LiDAR, today announced its Helium™ Platform of fully solid-state LiDAR sensors and modules. The solution is built on a silicon photonics chip, enabling a breakthrough architecture designed to deliver unprecedented reliability, integration, and performance for industrial automation, robotics, and mobile autonomy.

Leveraging Voyant’s proprietary Photonic Integrated Circuit (PIC), Helium offers camera-like simplicity and unmatched flexibility. Helium uses a dense two-dimensional photonic focal plane array with fully integrated 2D on-chip beam steering, eliminating unreliable scanning methods such as MEMS and mirrors and resulting in no moving parts. The FMCW LiDAR chip leverages a two-dimensional array of surface emitters to create a fully solid-state LiDAR in an ultra-compact, rugged design. Helium also supports multi-sensor configurations, combining, for instance, wide-FoV short-range and narrow-FoV long-range sensing in one system, delivering the most versatile and cost-effective LiDAR solution for advanced perception applications.
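
For background, FMCW LiDAR infers range from the beat frequency between the transmitted chirp and the delayed return, and radial velocity from the Doppler shift, which is why Helium can report per-pixel velocity. The standard triangular-chirp math in a short sketch (generic formulas with assumed parameters; Voyant's actual chirp design is not public):

C = 3e8               # speed of light, m/s
B = 1.0e9             # chirp bandwidth, Hz (assumed)
T = 10e-6             # chirp duration, s (assumed)
WAVELENGTH = 1.55e-6  # typical FMCW LiDAR wavelength (assumed)

def range_velocity(f_beat_up, f_beat_down):
    """Range and radial velocity from up/down-chirp beat frequencies."""
    f_range = 0.5 * (f_beat_up + f_beat_down)    # range-induced component
    f_doppler = 0.5 * (f_beat_down - f_beat_up)  # Doppler-induced component
    r = C * f_range * T / (2.0 * B)
    v = WAVELENGTH * f_doppler / 2.0             # positive = approaching, by this convention
    return r, v

# A target at 30 m closing at 2 m/s gives f_range = 2RB/(cT) = 20 MHz and
# f_doppler = 2v/wavelength ~ 2.58 MHz:
print(range_velocity(20e6 - 2.58e6, 20e6 + 2.58e6))   # ~(30.0, 2.0)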

The first Helium prototype will be demonstrated at Voyant’s booth (LVCC, West Hall, Booth #4875) at CES 2026 in Las Vegas, January 6-9, marking a major milestone in advancing silicon-photonics LiDAR from R&D into the high-volume systems that are proliferating Physical AI.

“Helium represents the next step in our mission to deliver the most affordable high performance LiDAR sensor ever,” said Voyant CEO Clément Nouvel. “Industrial and consumer markets demand sensors that are small, cost efficient, and highly reliable. Helium provides all of that while delivering performance that unlocks new classes for intelligent machines.”

A Flexible Platform to Move Solid-State LiDAR Forward

Helium extends the technology foundation proven in Voyant’s Carbon™ product line, bringing full two-dimensional beam steering to a silicon-photonics platform for the first time. The result is a compact, high-precision 4D sensor that meets the highest industry standards for safety and reliability.

Key advantages include:

  •  True solid-state — no MEMS, polygon scanners, or rotating assemblies
  •  High-resolution FPA architecture spanning from 12,000 pixels to over 100,000 pixels
  •  Long-range FMCW performance, per-pixel radial velocity
  •  Software-defined LiDAR (SDL) enabling adaptive scan patterns and region of interest
  •  Ultra-compact size - as small as a matchbox (<150 g mass and <50 cm³ volume), ideal for drones, mobile robots, and compact industrial systems

Field of view and range can be tailored with different lenses, and the platform scales from core module options to fully enclosed sensor. Helium is built on a 2D array of surface-emitting photonic antennas combined with a fixed lens and integrated electronics, forming a rugged module ideal for embedded perception.

With no moving parts and monolithic photonic integration, Helium offers an estimated 20× improvement in MTBF over legacy ToF LiDAR architectures — a critical reliability requirement for high-duty-cycle industrial fleets.

Engineered for Scalable Manufacturing 

As with the Carbon family, Helium is built entirely on Voyant’s leading proprietary silicon-photonics platform, enabling new levels of performance and integration. This deep integration eliminates the unreliable optical alignments that limit traditional TOF LiDAR manufacturability. Helium leverages the same mature photonics foundry ecosystem as the optical datacom industry — allowing Voyant to scale production toward semiconductor-class cost structures.

From Carbon to Helium — Voyant Advances a Modular LiDAR Platform for Broader Adoption

Voyant established the company’s leadership in compact, cost-optimized FMCW sensing for compute-constrained platforms with its first-generation Carbon™ family, extended last week with the new Carbon 32 and Carbon 64 variants. Helium builds directly on these advances, expanding the architecture from 1D to 2D on-chip beam steering, with higher resolution and a fully solid-state scan engine. Voyant now enables OEMs to integrate its sensing technology directly into their machines by offering module-only access along with full design-in support. This allows partners to build customized, high-performance sensor solutions tailored to their exact requirements.

Helium sensors and modules will be available with multiple resolution and range configurations, supporting a wide choice of field-of-view options—from ultra-wide coverage approaching 180° down to narrower, long-range targeting optics. These modular variants enable OEMs and developers to select and integrate lenses that best suit their application, allowing LiDAR architectures to be tailored for mobile robots, material-handling systems, smart infrastructure, and emerging edge-compute platforms. 


Leica image sensor development?

There are some recent news reports that Leica is developing its own image sensor.

Petapixel: https://petapixel.com/2026/01/02/leica-is-developing-its-own-image-sensors-again/

Leica rumors: https://leicarumors.com/2026/01/01/leica-is-developing-its-own-camera-sensor-again-most-likely-for-the-leica-m12-camera.aspx/ 

Excerpt:

In a recent podcast, Dr. Andreas Kaufmann (Chairman of the Supervisory Board and majority shareholder of Leica Camera AG) confirmed that Leica is again developing their own sensor, most likely for the next Leica M12 camera (Google translation):

Furthermore, as has already become somewhat known, we are also developing our own sensor again. […] Up until the M10, we had a sensor of European origin. It was manufactured by AMS in Graz, or rather, developed by their Dutch development office. And the foundry itself was in Grenoble, a French company. And then there was the transition with the M11 to Sony sensors. It’s no secret that they’re in there. At the same time, we started developing our own sensor again, in a more advanced version. I think we’ve made significant progress with that. We can’t say more at the moment. 


Eric Fossum receives 2026 IEEE Nishizawa Medal

Link: https://engineering.dartmouth.edu/news/eric-fossum-to-receive-2026-ieee-jun-ichi-nishizawa-medal

Eric Fossum Named 2026 Recipient of IEEE Jun-ichi Nishizawa Medal
Dec 17, 2025

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies and vice provost for entrepreneurship and technology transfer at Dartmouth, has been named the 2026 recipient of the Institute of Electrical and Electronics Engineers' (IEEE) Jun-ichi Nishizawa Medal for the "invention, development, and commercialization of the CMOS image sensor" that revolutionized digital imaging around the world.

Fossum joins a distinguished group of some of the world's most renowned engineers and scientists selected by IEEE to receive the organization's highest honors for their contributions to technology, society, and the engineering profession. 

The prize is awarded annually by IEEE, the largest technical professional organization in the world dedicated to advancing technology for humanity.

Eric Fossum and the team that invented the CMOS image sensor, at NASA's Jet Propulsion Laboratory. (Photo courtesy of NASA/JPL-Caltech)

Fossum led the team at NASA's Jet Propulsion Laboratory that developed the complementary metal-oxide-semiconductor (CMOS) sensor during the early 1990s, an innovation that dramatically miniaturized cameras used in space missions onto a single chip. The "camera on a chip" sensor subsequently made digital photography and imaging widely accessible worldwide. 

Today, the CMOS sensor is integrated in nearly every smartphone, as well as in countless other devices including webcams, medical imaging devices, and automobile cameras.

Fossum will formally receive the medal at a ceremony in New York City in April 2026. Named in honor of the "father of Japanese microelectronics," the Nishizawa Medal also comes with an honorarium, which Fossum plans to donate to STEM-related charities. 

Fossum co-founded Photobit Corporation to commercialize the CMOS sensor, serving as CEO before the company was acquired by Micron. He also served as CEO of Siimpel Corporation, which developed MEMS-based camera modules with autofocus and shutter functions for cell phones. More recently, he served as chairman of Gigajot Technology Inc., which he co-founded with two former Dartmouth PhD students to commercialize the quanta image sensors they developed at Dartmouth.

Fossum joined Dartmouth's engineering faculty in 2010 and helped launch the PhD Innovation Program, the nation's first doctoral level program focused on research translation and entrepreneurship.

Fossum is a member of the National Academy of Engineering. He was inducted into the National Inventors Hall of Fame in 2011 and, to date, holds 185 US patents. He is a fellow of the National Academy of Inventors, an IEEE life fellow, an Optica fellow, and a member of the Society of Motion Picture and Television Engineers and the American Association for the Advancement of Science.

Throughout his career, Fossum has earned numerous accolades for his work, including the Queen Elizabeth Prize for Engineering in 2017, a Technology & Engineering Emmy Award from the National Academy of Television Arts and Sciences in 2021, and most recently the National Medal of Technology and Innovation, awarded by President Biden in 2025.

Go to the original article...

Conference List – June 2026

Image Sensors World        Go to the original article...

The International SPAD Sensor Workshop - 1-4 June 2026 - Seoul, South Korea - Website

SPIE Photonics for Quantum - 8-11 June 2026 - Waterloo, Ontario, Canada - Website

AutoSens USA 2026 - 9-11 June 2026 - Detroit, Michigan, USA - Website

Sensor+Test - 9-11 June 2026 - Nuremberg, Germany - Website

Smart Sensing - 10-12 June 2026 - Tokyo, Japan - Website

IEEE/JSAP Symposium on VLSI Technology and Circuits - 14-18 June 2026 - Honolulu, Hawaii, USA - Website

Quantum Structure Infrared Photodetector - 14-19 June 2026 - Sète, France - Website

International Conference on Sensors and Sensing Technology (ICCST2026) - 15-17 June 2026 - Florence, Italy - Website

International Conference on IC Design and Technology (ICICDT) - 22-24 June 2026 - Dresden, Germany - Website

Automate 2026 - 22-25 June 2026 - Chicago, Illinois, USA - Website

27th International Workshop on Radiation Imaging Detectors - 28 June-2 July 2026 - Ghent, Belgium - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

Prophesee leadership change

Image Sensors World        Go to the original article...

Prophesee Appoints Jean Ferré as Chief Executive Officer to Lead Event-based Vision Sensing Pioneer in Next Stage of Growth

Paris, France – December 23, 2025 – Prophesee, a pioneer and global leader in event-based vision technology, today announced the appointment of Jean Ferré as Chief Executive Officer. He has been designated by the board to succeed Luca Verre, Prophesee’s co-founder and former CEO, who is leaving the company. The leadership transition comes as the company enters a new phase of commercialization and growth, building on a strong technological and organizational foundation and welcoming new investors. Prophesee is sharpening its near-term focus on the sectors whose high-value use cases currently show the strongest demand and adoption momentum, such as security, defense and aerospace, and industrial automation. It will continue to support high-volume vision-enabled application markets where it has achieved initial commercial success, such as IoT, AR/VR, and consumer electronics.

[...]

Full press release is available here: https://www.prophesee.ai/2025/12/23/prophesee-appoints-jean-ferre-as-chief-executive-officer-to-lead-event-based-vision-sensing-pioneer-in-next-stage-of-growth/ 

Go to the original article...

MagikEye’s real-time 3D system at CES

Image Sensors World        Go to the original article...

MagikEye to Showcase New High-Resolution Real-Time 3D Evaluation System at CES

Reference platform delivers a 3D point cloud of more than 8,000 points at 30 FPS for robotics, low-cost LiDAR, and automotive in-cabin deployments

STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc (www.magik-eye.com), a developer of advanced 3D depth sensing based on its ILT™ (Invertible Light Technology), will be showcasing a new high-resolution, real-time ILT evaluation system at the upcoming Consumer Electronics Show. The system is designed to help customers evaluate ILT performance, validate configurations, and begin application development for robotics, low-cost LiDAR-class replacement, and automotive in-cabin applications.

The new evaluation system is a reference implementation, not a commercial sensor product. It delivers a 3D point cloud of more than 8,600 points per frame at 30 frames per second, corresponding to more than 259,000 depth points per second, while maintaining real-time operation and low latency (~33 ms). This represents roughly 2× the spatial point density of MagikEye’s prior evaluation platform without sacrificing frame rate.

“Customers evaluating depth sensing technologies want realistic, real-time data they can actually build on,” said Skanda Visvanathan, VP of Business Development at MagikEye. “This reference system is designed to shorten the path from evaluation to application development by delivering higher-resolution ILT depth at a full 30 FPS, in a form factor and performance envelope aligned with embedded systems.”

Designed for real-world evaluation and development, the evaluation system enables customers to evaluate ILT depth sensing in their own environments, begin application software development using live 3D point cloud output, and validate specific ILT configurations—including field of view, operating range, optical setup, and processing pipeline—prior to custom module design.

Key characteristics of the evaluation platform include a wide 105° × 79° field of view, a wide operating range of 0.3 m to 2 m (with support for near-field proximity use cases), and operation in bright indoor lighting conditions of up to ~50,000 lux, dependent on distance and target reflectance.

Unlike depth solutions that increase point density by reducing frame rate, MagikEye’s ILT evaluation system maintains a full 30 FPS, enabling depth perception suitable for dynamic, real-time environments. ILT™ can scale to even higher frame rates with increased processing performance.

At CES, MagikEye will demonstrate how the evaluation system supports development and prototyping across robotics applications such as real-time perception and navigation, low-cost LiDAR-class embedded sensing, and automotive in-cabin occupancy and interior monitoring.

The evaluation system integrates with MagikEye’s MKE API, allowing customers to stream point clouds and integrate ILT depth data into existing software stacks.
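As a sketch of what building on such a stream might look like, the loop below consumes ~8,600-point frames at 30 FPS. It is hypothetical: the MKE API's real calls are not documented in this post, so a stand-in generator simulates the sensor and would be replaced by the vendor's client in an actual integration.

```python
"""Toy consumer for a streamed 3D point cloud (hypothetical; not the MKE API)."""
import time
import numpy as np

def fake_stream(n_points=8600, fps=30):
    """Stand-in for the sensor stream: yields (N, 3) XYZ arrays in meters."""
    rng = np.random.default_rng(0)
    while True:
        # Random points inside the quoted 0.3 m to 2 m operating range.
        yield rng.uniform(0.3, 2.0, size=(n_points, 3))
        time.sleep(1.0 / fps)  # pace the stream at ~30 FPS

for i, points in enumerate(fake_stream()):
    # 8,600 points x 30 frames/s is roughly 259,000 depth points per second.
    print(f"frame {i}: {points.shape[0]} points, mean Z {points[:, 2].mean():.2f} m")
    if i == 2:  # a few frames are enough for the demo
        break
```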

MagikEye will be showcasing the new evaluation system at CES in Las Vegas. To schedule a meeting or request a demonstration, please contact ces2026@magik-eye.com. 

Go to the original article...

AZO Sensors interview article on Teledyne e2v CCD imagers

Image Sensors World        Go to the original article...

The Enduring Relevance of CCD Sensors in Scientific and Space Imaging

(Interview with Marc Watkins, Teledyne e2v)

While CMOS technology has become the dominant force in many imaging markets, Charge-Coupled Devices (CCDs) continue to hold an essential place in scientific and space imaging. From the Euclid Space Telescope to cutting-edge microscopy and spectroscopy systems, CCDs remain the benchmark for precision, low-noise performance, and reliability in mission-critical environments.

In this interview, Marc Watkins of Teledyne e2v discusses why CCD technology continues to thrive, the company’s long-standing heritage in space missions and scientific discovery, and how ongoing innovation is ensuring CCDs remain a trusted solution for the most demanding imaging applications. 

To begin, could you provide an overview of your role at Teledyne e2v and the types of imaging applications your team typically supports?

I manage the CCD product portfolio and associated sales globally. Our CCDs are mostly used in scientific applications such as astronomy, microscopy, spectroscopy, in vivo imaging, X-ray imaging, and space imaging. Almost every large telescope worldwide uses our CCDs for their visible light instruments.

CCDs are vital for medical research, especially for in vivo preclinical trials in areas such as cancer research. Advanced microscopy techniques such as Super Resolution Microscopy require the extreme sensitivity of EMCCDs. Not all CCDs are hidden in labs, on top of mountains, or in space; you’ll likely have passed a CCD in airport security without realising it.

In a time when CMOS technology has become dominant in most imaging markets, what are the primary reasons CCD sensors still maintain relevance in scientific, astronomical, and space-based applications?

We observe that in many markets, CMOS has made significant advances; however, CCDs remain the best overall solution for many niche applications, such as the ones I just described. The technical advantages vary greatly between applications.

Could you elaborate on some of the technical advantages CCD sensors offer over CMOS in high-performance or mission-critical imaging environments?

CCDs are great for long integrations where larger charge capacities, higher linearity, and low noise provide the best performance. They can be deeply cooled, making dark noise negligible. CCDs can be manufactured on thicker silicon, which gives better Red/near-infrared sensitivity. CCD pixels can be combined or “binned” together noiselessly, a technique widely used in spectroscopy. Specialized “Electron Multiplying” CCDs are sensitive enough to count individual photons.
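To make the noiseless-binning point concrete, here is a minimal simulation sketch with assumed illustrative numbers (4 binned pixels, a faint 20 e- signal, 5 e- read noise; these are not Teledyne figures). Charge-domain binning sums electrons on-chip and pays the read-noise penalty once; digital binning reads each pixel first and pays it once per pixel:

```python
# Compare charge-domain ("noiseless") CCD binning with off-chip digital summing.
import numpy as np

rng = np.random.default_rng(0)
n_trials, k, signal_e, read_noise_e = 100_000, 4, 20.0, 5.0

# Poisson photon shot noise is common to both readout schemes.
photons = rng.poisson(signal_e, size=(n_trials, k))

# Charge-domain binning: sum charge on-chip, then read ONCE.
charge_binned = photons.sum(axis=1) + rng.normal(0, read_noise_e, n_trials)

# Digital binning: read every pixel (adding noise each time), then sum.
digital_binned = (photons + rng.normal(0, read_noise_e, (n_trials, k))).sum(axis=1)

print(f"charge-binned noise: {charge_binned.std():5.2f} e-")  # ~ sqrt(k*S + r^2)
print(f"digital-sum noise:   {digital_binned.std():5.2f} e-")  # ~ sqrt(k*S + k*r^2)
```

With these numbers the charge-binned noise is about sqrt(4·20 + 5²) ≈ 10.2 e- versus sqrt(4·20 + 4·5²) ≈ 13.4 e- for the digital sum, and the advantage grows as signals get fainter or more pixels are binned, which is why spectroscopy leans on the technique.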

What are some of the unique requirements in space or astronomy applications that make CCDs a more suitable choice than CMOS?

Most astronomy applications use very long integration times, require excellent Red/NIR response, and have no problem cooling to -100 °C, making CCDs a much better solution.

For space, the answer can be as simple as our mission heritage, making them a low-risk option. Since 1986, Teledyne’s sensors have unlocked countless scientific discoveries from over 160 flown missions. Our CCDs can be found exploring the deep expanses of space with the Hubble and Euclid Space Telescopes, imaging the sun from solar observatories, navigating Mars with rovers, and monitoring the environment with the Copernicus Earth observation Sentinel satellites.

As CMOS technology continues to advance, are you seeing any significant closing of the performance gap in areas where CCDs have traditionally been stronger, such as low noise, uniformity, or quantum efficiency?

For most of our applications, recent advances in CMOS technology have had little impact on the CCD business. An example of this might be the development of improved high-speed CMOS. If high speed is critical, then CMOS is already the incumbent technology. Where quantum efficiency is concerned, we can offer the same backthinning and AR coatings for both CCD and CMOS technologies, with a peak QE of up to 95 %.

One area of transition for us is in space applications, such as Earth observation, where improvements in areas such as radiation hardness, frame rate, and TDI are steering many of our customers from our CCD to our CMOS solutions.

How has Teledyne e2v continued to innovate or evolve its CCD product lines to meet the demands of modern applications while CMOS continues to gain market share?

Our CCD product lines have a long development heritage. In general, we aim to optimize existing designs by tailoring specifications, such as anti-reflective coatings, to benefit specific applications. With in-house sensor design, manufacture, assembly, and testing, all our CCDs can be supplied partially or fully customized to fit the application and achieve the best possible performance.

Our CCD wafer fab and processing facility in England was established in 1985 and quickly became the world’s major supplier for space imaging missions and large ground-based astronomical telescopes. We continue to develop a vertically integrated, dedicated CCD fab and are committed to the development of high-performance, customized CCD detectors.

The CCD fabrication facility is critical to the success and quality of future space and science projects. At Teledyne, we remain committed to being the long-term supplier of high-specification and high-quality devices for the world’s major space agencies and scientific instrument producers.

Are there particular missions or projects, either current or upcoming, where CCD technology remains critical? What makes CCDs indispensable in those scenarios?

A prototype for a new intraoperative imaging technique incorporates CCDs, which we hope will have a significant impact on cancer treatments in the future.

In astronomy, one example is the Vera C. Rubin Observatory, which utilizes an enormous 3.2 Gigapixel camera composed of an array of HiRho CCDs, offering NIR sensitivity and close butting, features not currently available in CMOS technology.

In space, ESA’s recently completed Gaia mission relied entirely on the functionality (TDI) and performance of our CCDs. The second Aeolus mission, which will continue to measure the Earth’s wind profiles to improve weather forecasting, uses a unique ‘Accumulation CCD’ that allows noiseless summing of many LIDAR signals to achieve measurable signal levels.
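A back-of-envelope comparison, with assumed numbers rather than mission values, shows why that matters. If each laser return deposits $S$ signal electrons and each readout adds noise $\sigma_r$, then summing $N$ returns in the charge domain before a single read gives

$$\mathrm{SNR}_{\text{accum}} = \frac{NS}{\sqrt{NS + \sigma_r^2}}, \qquad \mathrm{SNR}_{\text{digital}} = \frac{NS}{\sqrt{NS + N\sigma_r^2}},$$

where the digital case reads each return separately and sums afterwards. For, say, $S = 1\,e^-$, $N = 1000$, and $\sigma_r = 5\,e^-$, accumulation yields $1000/\sqrt{1025} \approx 31$ versus $1000/\sqrt{26000} \approx 6$, turning returns that are individually far below the read noise into a measurable signal.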

How do you address customer questions or misconceptions around CCDs being considered legacy technology in an industry that often pushes toward the latest advancements?

Consider what is best for your application; it may well be a CCD. You can find our range of available CCDs and their performance on our website, or I would be happy to discuss your application directly. If you would like to speak with me in person, I’ll be attending SPIE Astronomical Telescopes + Instrumentation in July 2026.

Looking ahead, what do you see as the long-term future of CCD sensors within the broader imaging ecosystem? Will they continue to coexist with CMOS, or is the industry moving toward complete CMOS dominance?

The sheer variety of imaging requirements, combined with the continued advantages of CCDs, suggests a long-term demand. We continue to see instruments baselining CCD products into 2030 and beyond.

How does Teledyne e2v position itself within this evolving landscape, and what message would you give to organizations evaluating sensor technologies for specialized imaging applications?

Teledyne e2v is technology agnostic and will recommend what is best for the application, be it CMOS, MCT, or, of course, CCD.

 

Go to the original article...

Singular Photonics and Renishaw collaboration

Image Sensors World        Go to the original article...

Singular Photonics and Renishaw Shed New Light on Spectroscopy
 
Strategic collaboration integrates next-generation SPAD-based image sensor into Renishaw’s new Raman spectroscopy module to allow measurements of highly fluorescent samples
 
Edinburgh, UK – December 17, 2025 – Image-sensor innovator Singular Photonics today announced a major milestone in its strategic collaboration with Renishaw, a global leader in metrology and analytical instrumentation. The companies have been co-developing next-generation spectroscopy capabilities powered by Singular’s new suite of single-photon avalanche diode (SPAD) image sensors.
 
Renishaw today revealed the launch of its latest breakthrough in Raman spectroscopy: the addition of Time-Resolved Raman Spectroscopy (TRRS) to its renowned inVia™ confocal Raman microscope. At the core of this innovation is Singular’s Sirona SPAD sensor, enabling researchers and engineers to overcome one of Raman spectroscopy’s most persistent challenges – capturing Raman signals obscured by intense fluorescence backgrounds. With TRRS and Sirona, inVia users can now acquire high-quality Raman spectra from samples previously considered too difficult or impossible to measure.
 
“We are always on the lookout for new, innovative technology to maintain our lead in this market, and we believe we have achieved this with our partnership with Singular Photonics,” said Dr Tim Batten, Director and General Manager, Spectroscopy Products Division, Renishaw. “Our TRRS solution for the inVia microscope offers customers a multitude of benefits when dealing with highly fluorescent samples, such as those containing pigments. We have had an in-depth collaboration with Singular Photonics dating back to their inception and have been developing this product in tandem with their cutting-edge Sirona SPAD sensor.”
 
Built on advanced CMOS SPAD architecture, Singular’s Sirona is a 512-pixel SPAD-based line sensor integrating on-chip time-resolved processing and histogramming functionality. This allows simultaneous acquisition of both fluorescence and Raman signals with high temporal precision, unlocking new measurement modalities for scientific and industrial applications.
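The fluorescence-rejection idea behind TRRS can be sketched in a few lines: Raman scattering is effectively instantaneous on the detector's timescale, while fluorescence decays over nanoseconds, so keeping only early histogram bins around the laser pulse raises the Raman fraction. The numbers below are illustrative toy values, not Sirona or inVia parameters:

```python
# Toy model of time-gated Raman vs. fluorescence separation on a SPAD histogram.
import numpy as np

rng = np.random.default_rng(0)
n_raman, n_fluor = 2_000, 50_000             # fluorescence dominates total counts
irf_sigma_ps, fluor_tau_ps = 100.0, 3_000.0  # timing jitter, fluorescence lifetime

# Photon arrival times in picoseconds after the laser pulse.
t_raman = rng.normal(0.0, irf_sigma_ps, n_raman)                   # prompt
t_fluor = (rng.exponential(fluor_tau_ps, n_fluor)
           + rng.normal(0.0, irf_sigma_ps, n_fluor))               # delayed

def gated_counts(t, gate_ps=300.0):
    """Counts falling inside a short gate centered on the pulse."""
    return np.count_nonzero(np.abs(t) < gate_ps)

r, f = gated_counts(t_raman), gated_counts(t_fluor)
print(f"ungated Raman fraction: {n_raman / (n_raman + n_fluor):.1%}")
print(f"gated   Raman fraction: {r / (r + f):.1%}")
```

In this toy model the Raman share of detected counts rises from under 4% without gating to roughly 30% inside the gate; the fluorescence that still falls within the gate sets the residual background, which is why a short instrument response and fine histogram bins matter.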
 
“By integrating the Sirona sensor into Renishaw’s new TRRS system, they have created a spectrometer that showcases the clear performance advantages of our SPAD technology,” said Shahida Imani, CEO of Singular Photonics. “We’ve built a strong relationship with the Renishaw team since before our spin-out from the University, fostering trust and deep technical collaboration. This partnership opens a significant opportunity to expand our market reach, especially in high-precision scientific and industrial sectors.”

Go to the original article...
