Archives for August 2024
Tamron 50-400mm f4.5-6.3 Di III VC Nikon Z review so far
2024 SEMI MEMS and Imaging Summit program announced
SEMI MEMS & Imaging Sensors Summit 2024 will take place November 14-15 at the International Conference Center Munich (ICM), Messe München, Germany.
Thursday, 14th November 2024
Session 1: Market Dynamics: Landscape and Growth Strategies
09:00 Welcome Remarks
Laith Altimime, President, SEMI Europe
09:20 Opening Remarks by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd
09:25 Keynote: Smart Sensors for Smart Life – How Advanced Sensor Technologies Enable Life-Changing Use Cases
Stefan Finkbeiner, General Manager, Bosch Sensortec
09:45 Keynote: Sensing the World: Innovating for a More Sustainable Future
Simone Ferri, APMS Group Vice President, MEMS sub-group General Manager, STMicroelectronics
10:05 Reserved for Yole Développement
10:25 Key Takeaways by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd
10:30 Networking Coffee Break
Session 2: Sustainable Supply Chain Capabilities
11:10 Opening Remarks by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec
11:15 A Paradigm Shift From Imaging to Vision: Oculi Enables 600x Reduction in Latency-Energy Factor for Visual Edge Applications
Charbel Rizk, Founder & CEO, Oculi
11:35 Reserved for Comet Yxlon
11:55 Key Takeaways by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec
12:00 Networking Lunch
Session 3: MEMS - Exploring Future Trends for Technologies and Device Manufacturing
13:20 Opening Remarks by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI
13:25 Unlocking Novel Opportunities: How 300mm-capable MEMS Foundries Will Change the Game
Jessica Gomez, CEO, Rogue Valley Microdevices
13:45 Trends in Emerging MEMS
Alissa Fitzgerald, CEO, A.M. Fitzgerald & Associates, LLC
14:05 The Most Common Antistiction Films are PFAS, Now What?
David Springer, Product Manager, MVD and Release Etch Products, KLA Corporation
14:25 Reserved for Infineon
14:45 Latest Innovations in MEMS Wafer Bonding
Thomas Uhrmann, Director of Business Development, EV Group
15:05 Key Takeaways by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI
Session 4: Imaging - Exploring Future Trends for Technologies and Device Manufacturing
15:10 Opening Remarks by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM
15:15 Topic Coming Soon
Avi Bakal, CEO & Co-founder, TriEye
15:35 Active Hyperspectral Imaging Using Extremely Fast Tunable SWIR Light Source
Jussi Soukkamaki, Lead, Hyperspectral & Imaging Technologies, VTT Technical Research Centre of Finland Ltd
15:55 Networking Coffee Break
16:40 Reserved
17:00 Reserved for CEA-Leti
17:20 Reserved for STMicroelectronics
17:40 Key Takeaways by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM
Friday, 15th November 2024
Session 5: MEMS and Imaging Young Talent
09:00 Opening Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group
09:05 Unlocking Infrared Multispectral Imaging with Pixelated Metasurface Technology
Charles Altuzarra, Chief Executive Officer & Co-founder, Metahelios
09:10 Electrically Tunable Dual-Band VIS/SWIR Imaging and Sensing
Andrea Ballabio, CEO, EYE4NIR
09:15 FMCW Chip-Scale LiDARs Scale Up for Large Volume Markets Thanks to Silicon Photonics Technology
François Simoens, CEO, SteerLight
09:20 ShadowChrome: A Novel Approach to an Old Problem
Geoff Rhoads, Chief Technology Officer, Transformative Optics Corporation
09:25 Feasibility Investigation of Spherically Bent Image Sensors
Amit Pandey, PhD Student, Technische Hochschule Ingolstadt
09:30 Intelligence Through Vision
Stijn Goossens, CTO, Qurv
09:35 Next Generation Quantum Dot SWIR Sensors
Artem Shulga, CEO & Founder, QDI Systems
09:40 Closing Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group
09:45 Networking Coffee Break
Session 6: Innovations for Next-Gen Applications: Smart Mobility
10:35 Opening Remarks by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG
10:40 Reserved
11:00 New Topology for MEMS Advances Performance and Speeds Manufacturing
Eric Aguilar, CEO, Omnitron Sensors, Inc.
11:20 Key Takeaways by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG
Session 7: Innovations for Next-Gen Applications: Health
11:25 Opening Remarks by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES
11:30 Reserved
11:50 Sensors for Monitoring Vital Signs in Wearable Devices
Markus Arzberger, Senior Director, ams-OSRAM International GmbH
12:10 Pioneering Non-Invasive Wearable MIR Spectrometry for Key Health Biomarkers Analysis
Jan F. Kischkat, CEO, Quantune Technologies GmbH
12:30 Key Takeaways by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES
12:35 End of Conference Reflections by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd
12:45 Closing Remarks
Laith Altimime, President, SEMI Europe
12:50 Networking Lunch
IEEE SENSORS 2024 — image sensor topics announced
The topics and speakers for the following two image-sensor-related events at the IEEE SENSORS 2024 Conference have been finalized. The conference will be held in Kobe, Japan, from 20-23 October 2024, and will provide the opportunity to hear world-class speakers in the field of image sensors and to sample the broader sensor ecosystem to see where imaging fits in.
Workshop: “From Imaging to Sensing: Latest and Future Trends of CMOS Image Sensors” [Sunday, 20 October]
Organizers: Sozo Yokogawa (Sony Semiconductor Solutions Corp.) • Erez Tadmor (onsemi)
“Trends and Developments in State-of-the-Art CMOS Image Sensors”, Daniel McGrath, TechInsights
“CMOS Image Sensor Technology: what we have solved, what are to be solved”, Eiichi Funatsu, OMNIVISION
“Automotive Imaging: Beyond human Vision”, Vladi Korobov, onsemi
“Recent Evolution of CMOS Image Sensor Pixel Technology”, Bumsuk Kim et al., Samsung Electronics
“High precision ToF image sensor and system for 3D scanning application”, Keita Yasutomi, Shizuoka University
“High-definition SPAD image sensors for computer vision applications”, Kazuhiro Morimoto, Canon Inc.
“Single Photon Avalanche Diode Sensor Technologies for Pixel Size Shrinkage, Photon Detection Efficiency Enhancement and 3.36-μm-pitch Photon-counting Architecture”, Jun Ogi, Sony Semiconductor Solutions Corp.
“SWIR Single-Photon Detection with Ge-on-Si Technology”, Neil Na, Artilux Inc.
“From SPADs to smart sensors: ToF system innovation and AI enable endless application”, Laurent Plaza & Olivier Lemarchand, STMicroelectronics
“Depth Sensing Technologies, Cameras and Sensors for VR and AR”, Harish Venkataraman, Meta Inc.
Focus session: Stacking in Image Sensors [Monday, 21 October]
Organizer: S-G. Wu, Brillnics
Co-chairs: DN Yaung, TSMC; John McCarten, L3Harris
Over the past decade, 3-dimensional (3D) wafer-level stacked backside-illuminated (BSI) CMOS image sensors (CIS) have achieved rapid progress in mass production. This focus session on stacking in image sensors features four invited papers exploring the evolution of sensor stacking technology, from process development and circuit architecture to AI/edge computing in system integration.
“The Productization of Stacking in Image Sensors”, Daniel McGrath, TechInsights
“Evolution of Image Sensing and Computing Architectures with Stacking Device Technologies”, BC Hseih, Qualcomm
“Event-based vision sensor”, Christoph Posch, Prophesee
“Evolution of digital pixel sensor (DPS) and advancement by stacking technologies”, Rimon Ikeno, Brillnics
Galaxycore educational videos
Are you curious about how CMOS image sensors capture such clear and vivid images? Start your journey with the first episode of "CIS Explained". In this episode, we dive deep into the workings of these sophisticated sensors, from the basics of pixel arrays to the intricacies of signal conversion.
This episode serves as your gateway to understanding CMOS image sensors.
In this video, we're breaking down Quantum Efficiency (QE) and its crucial role in CIS. QE is a critical measure of how efficiently our sensors convert incoming light into electrical signals, directly affecting image accuracy and quality. This video will guide you through what QE means for CIS, its impact on your images, and how we're improving QE for better, more reliable imaging.
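As a back-of-envelope companion to the QE video: QE is simply the ratio of electrons generated to photons incident, and the photon count follows from the photon energy hc/λ. A minimal sketch in Python, with made-up numbers purely for illustration:

```python
# Quantum efficiency (QE) = electrons generated / photons incident.
# All values below are hypothetical, chosen only to illustrate the ratio.
PLANCK_H = 6.626e-34   # Planck constant, J*s
LIGHT_C = 2.998e8      # speed of light, m/s

def photon_flux(power_w: float, wavelength_m: float) -> float:
    """Photons per second delivered by a monochromatic beam of given power."""
    energy_per_photon = PLANCK_H * LIGHT_C / wavelength_m   # E = hc / lambda
    return power_w / energy_per_photon

def quantum_efficiency(electrons_per_s: float, power_w: float, wavelength_m: float) -> float:
    return electrons_per_s / photon_flux(power_w, wavelength_m)

# Example: a pixel collecting 1.2e9 e-/s under 1 nW of 550 nm light.
print(f"QE = {quantum_efficiency(1.2e9, 1e-9, 550e-9):.1%}")   # ~43%
```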
GalaxyCore DAG HDR Technology Film
Exploring GalaxyCore's Sensor-Shift Optical Image Stabilization (OIS) in under Two Minutes
This video introduces GalaxyCore's COM packaging technology, a breakthrough in CIS packaging. It explains how placing two suspended gold wires on the image sensor and bonding it to an IR base can enhance the durability and clarity of image sensors, prevent contamination, and ensure optimal optical alignment.
Avoiding information loss in the photon transfer method
In a recent paper titled "PCH-EM: A Solution to Information Loss in the Photon Transfer Method" in IEEE Trans. on Electron Devices, Aaron Hendrickson et al. propose a new statistical technique to estimate CIS parameters such as conversion gain and read noise.
Abstract: Working from a Poisson-Gaussian noise model, a multisample extension of the photon counting histogram expectation-maximization (PCH-EM) algorithm is derived as a general-purpose alternative to the photon transfer (PT) method. This algorithm is derived from the same model, requires the same experimental data, and estimates the same sensor performance parameters as the time-tested PT method, all while obtaining lower uncertainty estimates. It is shown that as read noise becomes large, multiple data samples are necessary to capture enough information about the parameters of a device under test, justifying the need for a multisample extension. An estimation procedure is devised consisting of initial PT characterization followed by repeated iteration of PCH-EM to demonstrate the improvement in estimating uncertainty achievable with PCH-EM, particularly in the regime of deep subelectron read noise (DSERN). A statistical argument based on the information theoretic concept of sufficiency is formulated to explain how PT data reduction procedures discard information contained in raw sensor data, thus explaining why the proposed algorithm is able to obtain lower uncertainty estimates of key sensor performance parameters, such as read noise and conversion gain. Experimental data captured from a CMOS quanta image sensor with DSERN are then used to demonstrate the algorithm’s usage and validate the underlying theory and statistical model. In support of the reproducible research effort, the code associated with this work can be obtained on the MathWorks file exchange (FEX) (Hendrickson et al., 2024).
Figure: RRMSE versus read noise for parameter estimates computed using the constant-flux implementation of PT and PCH-EM. The RRMSE curves for the PT estimates of μ̃ and σ̃ grow large near σ_read = 0 and were clipped from the plot window.
Open access paper link: https://ieeexplore.ieee.org/document/10570238
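For context on the baseline that PCH-EM improves upon: the classical PT method extracts conversion gain and read noise from the linear relationship between the temporal variance and mean signal of flat-field data. A minimal sketch of that mean-variance procedure (this is not the PCH-EM algorithm itself, and the variable names are placeholders):

```python
# Classical photon transfer (PT) characterization sketch. Under the
# Poisson-Gaussian model, temporal variance (DN^2) is linear in mean
# signal (DN):  var = g * mean + (g * sigma_read)^2,  with g in DN/e-.
import numpy as np

def pt_estimate(flat_pairs, dark_mean):
    """Estimate conversion gain (DN/e-) and read noise (e- rms) from
    pairs of flat-field frames taken at increasing illumination."""
    means, variances = [], []
    for a, b in flat_pairs:
        means.append(0.5 * (a.mean() + b.mean()) - dark_mean)
        # Differencing two frames cancels fixed-pattern noise; the
        # temporal variance is half the variance of the difference.
        variances.append(np.var(a.astype(float) - b.astype(float)) / 2.0)
    slope, intercept = np.polyfit(means, variances, 1)
    gain = slope                                       # DN per electron
    read_noise = np.sqrt(max(intercept, 0.0)) / gain   # electrons rms
    return gain, read_noise
```

As the abstract notes, this data reduction keeps only means and variances and discards the rest of the information in the raw histograms, which is precisely what PCH-EM recovers.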
Job Postings – Week of 18 August 2024
Omnivision | Principal Image Sensor Technology Engineer | Santa Clara, California, USA
Teledyne | Product Assurance Engineer | Chelmsford, England, UK
Tokyo Electron Labs | Heterogeneous Integration Process Engineer I | Albany, New York, USA
Fraunhofer IMS | PhD Candidate (Doktorand*in): Optical Detectors with Integrated 2D Materials | Duisburg, Germany
AMETEK Forza Silicon | Principal Mixed Signal Design Engineer | Pasadena, California, USA
University of Birmingham | Professor of Silicon Detector Instrumentation for Particle Physics | Birmingham, England, UK
Ouster | Sensor Package Design Engineer | San Francisco, California, USA
Beijing Institute of High Energy Physics | CEPC Overseas High-Level Young Talents | Beijing, China
Thermo Fisher Scientific | Sr. Staff Product Engineer | Waltham, Massachusetts, USA (Remote)
Harvest Imaging Forum 2024 registration open
The Harvest Imaging forum tradition continues: the tenth edition will be held on November 7 & 8, 2024, in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is a scientific and technical in-depth discussion of one particular topic of great importance and value to digital imaging. The 2024 forum will be an in-person event.
The 2024 Harvest Imaging forum will deal with a single topic from the solid-state imaging field, presented by a single world-level expert:
"AI and VISION : A shallow dive into deep learning"
Prof. dr. Jan van Gemert (Delft Univ. of Technology, Nl)
Abstract: Artificial Intelligence is taking the world by storm! The AI engine is powered by “Deep Learning”. Deep learning differs from normal computer programming in that it allows computers to learn tasks from large, labelled datasets. In this Harvest Imaging Forum we will go through all the fundamentals of Deep Learning: Multi-layer perceptrons, Back-propagation, Optimization, Convolutional neural networks, Recurrent neural networks, un-/self-supervised learning, and transformers and self-attention (GPT).
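To give a flavor of the first topics on the program, here is a minimal numpy sketch of a multi-layer perceptron trained by back-propagation on XOR; it is illustrative only and obviously no substitute for the forum material:

```python
# Two-layer perceptron with sigmoid units, trained by back-propagation
# (plain gradient descent on mean squared error) to learn XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)   # hidden layer, 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)   # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    h = sigmoid(X @ W1 + b1)             # forward pass
    out = sigmoid(h @ W2 + b2)
    d_out = (out - y) * out * (1 - out)  # backward pass: output delta
    d_h = (d_out @ W2.T) * h * (1 - h)   # hidden delta (chain rule)
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(axis=0)

print(np.round(out.ravel(), 2))  # approaches [0, 1, 1, 0]
```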
Bio: Jan van Gemert received a PhD degree from the University of Amsterdam in 2010. He was a post-doctoral fellow there and at École Normale Supérieure in Paris. Currently he leads the Computer Vision lab at Delft University of Technology, where he teaches the Deep Learning and Computer Vision MSc courses. His research focuses on visual inductive priors for deep learning for automatic image and video understanding. He has published over 100 peer-reviewed papers with more than 7,500 citations. See his Google Scholar profile for his publications: https://scholar.google.com/citations?hl=en&user=JUdMRGcAAAAJ
Registration: The registration fee for this two-day, in-person forum is 1295 euros. In addition to attendance, the fee includes:
- Coffee breaks in the mornings and afternoons,
- Lunch on both forum days,
- Dinner on the first forum day,
- Soft and hard copy of the presented material.
If you are interested in attending this forum, please fill out the registration form here: https://harvestimaging.com/forum_registration_2024.php
PhD thesis on a low power "time-to-first-spike" event sensor
Title: Event-based Image Sensor for low-power
Author: Mohamed AKRARAI (Université Grenoble Alpes)
Abstract: In the framework of the OCEAN 12 European project, this PhD covered the design, implementation, and testing of an event-based image sensor, and led to the publication of several scientific papers in international conferences, including renowned ones such as the International Symposium on Asynchronous Circuits and Systems (ASYNC). Event-based image sensors, which are frameless, require a dedicated architecture and asynchronous logic that reacts to events. First, this PhD gives an overview of architectures based on a hybrid pixel matrix combining TFS and DVS pixels; these two kinds of pixels manage spatial redundancy and temporal redundancy, respectively. One of the main achievements of this work is to exploit both pixel types within one imager in order to reduce its output bitstream and power consumption. Then, the design of the pixels and readout in STMicroelectronics' 28 nm FDSOI technology is detailed. Finally, two image sensors were implemented in a testchip and tested.
Link: https://theses.hal.science/tel-04213080v1/file/AKRARAI_2023_archivage.pdf
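For readers new to the topic: a time-to-first-spike (TFS) pixel integrates photocurrent and emits a single spike once the integral crosses a threshold, so brighter pixels spike earlier. A toy model of this encoding, assuming an ideal integrate-and-fire pixel with a constant threshold (all values illustrative):

```python
# Time-to-first-spike encoding: spike time is inversely proportional to
# intensity; pixels too dark to reach threshold saturate at t_max.
import numpy as np

def tfs_encode(intensity, threshold=1e-3, t_max=10e-3):
    intensity = np.asarray(intensity, dtype=float)
    with np.errstate(divide="ignore"):
        t_spike = threshold / intensity      # brighter -> earlier spike
    return np.minimum(t_spike, t_max)

scene = np.array([0.1, 1.0, 10.0])           # relative illuminance
print(tfs_encode(scene))                     # [0.01  0.001 0.0001] s
```

Because each pixel fires at most once per readout cycle, TFS naturally limits spatial redundancy, while the DVS pixels in the hybrid matrix handle temporal redundancy.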
EETimes article on imec
Full article: https://www.eetimes.eu/imec-getting-high-precision-sensors-to-market/
Imec: Getting High-Precision Sensors to Market
At the recent ITF World 2024, EE Times Europe talked with imec researchers to catch up on what they’re doing with high-precision sensors—and more importantly, how they make sure their innovations get into the hands of industrial players.
Imec develops sensors for cameras and displays, and it works with both light and ultrasound—for medical applications, for example. But the Leuven, Belgium–based research institute never takes technology to market itself. It either finds industrial partners—or when conditions are right, imec creates a spinoff. One way to understand how imec takes an idea from lab to fab and finds a way to get it to market is to zoom in on its approach with image sensors for cameras.
“We make image sensors that are at the beating heart of incredible cameras around the world,” said Paul Heremans, vice president of future CMOS devices and senior fellow at imec. “Our research starts with material selection and an overall new concept for sensors and goes all the way to development, engineering and low-volume manufacturing within imec’s pilot line.”
A good example is the Pharsighted E9-100S ultra-high-speed video camera, developed by Pharsighted LLC and marketed by Photron. The camera reaches 326,000 frames per second (full frame: 640 × 480 pixels) and up to 2,720,000 frames per second at a lower frame size (640 × 32 pixels), thanks to a high-speed image sensor developed and manufactured by imec.
Another example is an electron imager used in a cryo-transmission electron microscope (cryo-TEM) marketed by a U.S. company called Thermo Fisher. The instrument produces atomic resolution pictures of DNA strands and other complex molecules. These images help in the drug-discovery process by allowing researchers to understand the structure of the molecules they need to target.
Thermo Fisher uses direct electron detection imagers developed by imec in the company's Falcon detectors, each composed of 4K × 4K pixels. The pixels are very large to achieve the ultimate sensitivity. Consequently, the chip is so large (5.7 × 5.7 cm) that only four fit on a 200-mm wafer.
A third example is hyperspectral imagers, with very special filters that detect many more colors than just red, green and blue (RGB). Hyperspectral imagers pick up tens or hundreds of spectral bands. They can achieve this level of performance because imec implements processing filters on each pixel.
“We can do that on almost any commercial imager and turn it into a hyperspectral camera,” Heremans said. “Our technology is used by plenty of customers with a range of applications—from surveillance to satellite-based Earth observation, from medical to agriculture and more.”
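The per-pixel filter idea can be pictured as a generalization of the Bayer mosaic: each pixel within an n × n tile carries a different spectral filter, and each band image is recovered by subsampling the raw frame. A minimal sketch assuming a hypothetical 4 × 4 (16-band) mosaic; imec's actual filter layouts and reconstruction are more sophisticated:

```python
# Split a filter-mosaic frame into per-band images by subsampling.
import numpy as np

def extract_bands(raw, tile=4):
    """Return tile*tile band images from a mosaic frame."""
    return [raw[r::tile, c::tile] for r in range(tile) for c in range(tile)]

raw = np.random.rand(480, 640)    # stand-in for a 16-band mosaic frame
bands = extract_bands(raw)        # 16 images, each 120 x 160
```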
Spectricity
To bring some of its work on hyperspectral imagers to market, imec created a startup called Spectricity. “The whole idea is to bring this field of multispectral imaging or spectroscopy into cellphones or other high-volume products,” said Glenn Vandevoorde, CEO of Spectricity. “Our imagers can see things that are not visible to the human eye. Instead of just processing RGB data, which a traditional camera does, we take a complete spectral image, where each pixel contains 16 different color points—including near-infrared. And with that, you can detect different materials that look alike but are actually very different. Or you can do color correction on smartphones. Sometimes people look very different, depending on the ambient light. We can detect what kind of light is shining—and based on that, adjust the color.”
The first use case for cellphones is auto white balancing. When a picture is taken with a cellphone, sometimes the colors show up very differently from reality, because the camera doesn’t have an accurate white point, which is the set of values that make up the color white in an image. These values change under different conditions, which means they need to be calibrated often. All other colors are then adjusted based on the white point reference.
Traditional smartphone cameras cannot determine the ambient light accurately, so they cannot find the white point to serve as a viable reference. But the multispectral imager obtains the full spectral information of the ambient light and applies advanced AI algorithms to detect the white point, which leads to accurate auto white balancing and true color correction.
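How an estimated white point feeds into color correction can be sketched with a simple von Kries-style diagonal transform. The actual Spectricity pipeline is AI-driven and proprietary, so treat this as a hypothetical illustration only:

```python
# White-point-based auto white balance: scale each channel so the
# estimated illuminant maps to neutral gray.
import numpy as np

def white_balance(image_rgb, white_point_rgb):
    wp = np.asarray(white_point_rgb, dtype=float)
    gains = wp.max() / wp              # leave the strongest channel as-is
    return np.clip(image_rgb * gains, 0.0, 1.0)

# Example: a warm illuminant estimated at RGB (1.0, 0.85, 0.6).
img = np.random.rand(4, 4, 3)          # stand-in image, values in [0, 1]
balanced = white_balance(img, (1.0, 0.85, 0.6))
```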
Spectricity said its sensor is being evaluated by seven out of the top eight smartphone manufacturers in the world for integration into phones. “By the end of this year, you will see several smartphone vendors launching the first phones with multispectral imagers inside,” Vandevoorde said.
While smartphones are the ultimate target for high volume, they are also very cost-competitive, and it takes a long time to introduce a new feature in a smartphone. Spectricity is therefore targeting other smartphone applications, but also webcams, security cameras and in-cabin video cameras for cars. One category of use cases takes advantage of the ability of multispectral imagers to detect health conditions.
“For example, you can accurately monitor how a person’s skin tone develops every day,” Vandevoorde said. “We can monitor blood flow in the skin, we can monitor moisture in the skin, we can detect melanoma and so on. These and many other things can be detected with these multispectral imagers.”
Spectricity has raised €28 million in funding since it was founded in 2018—and the startup has its own mass-production line at X-Fab, one of the company’s investors. “We have our machinery and our process installed there,” Vandevoorde said. “It’s now going through qualification—and by the end of the year, we’ll be ready for mass production to start shipping large volume to customers.”
How imec finds the right trends to target
Spectricity is a good example of how imec spots a need and develops technology to meet that need. Spectroscopy, of course, is not new. It’s been around for decades, and researchers use it in labs to detect different materials and different gases. What’s new is that imec integrated spectroscopy onto CMOS technology and developed processes to produce it in high volumes for just a couple of dollars. Researchers worked on the idea for about 10 years—and once it was running on imec’s pilot line, the institute set up Spectricity to take it into mass production and develop applications around it.
“We sniff around different trends,” said Xavier Rottenberg, scientific director and group leader of wave-based sensors and actuators at imec. “We’re in contact with a lot of players in the industry to get exposed to plenty of problems. Based on that, we develop a gut feeling. But gut feelings are dangerous, because it might be that you’re just hungry. However, with an educated gut feeling, sometimes your intuition is right.”
Once imec develops an idea in the lab, it takes the technology to its pilot line to develop a demonstrator. “We do proofs of concept to see how a device performs,” Rottenberg said. “Then we set up contacts in the ecosystem to form partnerships to bring the platform to a level where it can be mass-produced in an industrial fab.”
In some cases, an idea is too far out for partners to pick up for near-term profit. That’s when imec ventures out with a spinoff company, as it did with Spectricity.
Sony rebranding IMX sensors to LYTIA (?)
Link to full article: https://www.phonearena.com/news/sonys-image-sensor-makeover-imx-to-lytia-by-2026_id160402
Sony's image sensor makeover: IMX to LYTIA by 2026
... there's a buzz about Sony making a branding shift for its smartphone image sensors. According to a recent report, Sony is considering moving all its mobile image sensors, including the current IMX lineup, under the newer LYTIA brand. The company is gradually phasing out the IMX brand, and some IMX sensors have already been rebranded to LYTIA. Reportedly, the company plans to fully transition to the LYT lineup by 2026.
The report states that the 50MP IMX890 and IMX882 sensors have already been rebranded as LYT-701 and LYT-600. For instance, the LYT-600 is already used in the vivo X100 Ultra, launched in May this year.
A 100kfps X-ray imager
Marras et al. presented a paper titled "Development of the Continuous Readout Digitising Imager Array Detector" at the Topical Workshop on Electronics for Particle Physics 2023.
Abstract: The CoRDIA project aims to develop an X-ray imager capable of continuous operation in excess of 100 kframe/s. The goal is to provide a suitable instrument for Photon Science experiments at diffraction-limited Synchrotron Rings and Free Electron Lasers considering Continuous Wave operation. Several chip prototypes were designed in a 65 nm process: in this paper we will present an overview of the challenges and solutions adopted in the ASIC design.
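Some back-of-envelope arithmetic shows why continuous readout at this speed is challenging; the pixel count and bit depth below are assumptions for illustration, not figures from the paper:

```python
# Raw data rate of a hypothetical continuous-readout imager.
frames_per_s = 100_000      # 100 kframe/s, per the CoRDIA target
pixels = 1_000_000          # assumed 1-Mpixel array
bits_per_pixel = 12         # assumed ADC resolution
rate = frames_per_s * pixels * bits_per_pixel
print(f"{rate / 1e12:.1f} Tbit/s")   # 1.2 Tbit/s of raw data, continuously
```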
Pixel-level programmable regions-of-interest for high-speed microscopy
Zhang et al. from MIT recently published a paper titled "Pixel-wise programmability enables dynamic high-SNR cameras for high-speed microscopy" in Nature Communications.
Abstract: High-speed wide-field fluorescence microscopy has the potential to capture biological processes with exceptional spatiotemporal resolution. However, conventional cameras suffer from low signal-to-noise ratio at high frame rates, limiting their ability to detect faint fluorescent events. Here, we introduce an image sensor where each pixel has individually programmable sampling speed and phase, so that pixels can be arranged to simultaneously sample at high speed with a high signal-to-noise ratio. In high-speed voltage imaging experiments, our image sensor significantly increases the output signal-to-noise ratio compared to a low-noise scientific CMOS camera (~2–3 folds). This signal-to-noise ratio gain enables the detection of weak neuronal action potentials and subthreshold activities missed by the standard scientific CMOS cameras. Our camera with flexible pixel exposure configurations offers versatile sampling strategies to improve signal quality in various experimental conditions.
Figure: a Pixels within an ROI capture spatiotemporally correlated physiological activity, such as signals from somatic genetically encoded voltage indicators (GEVI). b Simulated CMOS pixel outputs with uniform exposure (TE) face the trade-off between SNR and temporal resolution. Short TE (1.25 ms) provides high temporal resolution but low SNR. Long TE (5 ms) enhances SNR but suffers from aliasing due to the low sample rate, causing spikes (10 ms interspike interval) to be indiscernible. Pixel outputs are normalized row-wise. Gray brackets: the zoomed-in view of the pixel outputs. c Simulated pixel outputs of the PE-CMOS. Pixel-wise exposure allows pixels to sample at different speeds and phases. Two examples: in the staggered configuration, the pixels sample the spiking activity with prolonged TE (5 ms) at multiple phases with offsets of Δ = 0, 1.25, 2.5, 3.75 ms. This configuration maintains SNR and prevents aliasing, as interspike intervals exceeding the temporal resolution of a single phase are captured by phase-shifted pixels. In the multiple exposure configuration, the ROI is sampled with pixels at different speeds, resolving high-frequency spiking activity and slowly varying subthreshold potentials that are challenging to acquire simultaneously at a fixed sampling rate. d The PE-CMOS pixel schematic with 6 transistors (T1-T6), a photodiode (PD), and an output (OUT). RST, TX, and SEL are row control signals. EX is a column signal that controls pixel exposure. e The pixel layout. The design achieves programmable pixel-wise exposure while maximizing the PD fill factor for high optical sensitivity.
Open access article link: https://www.nature.com/articles/s41467-024-48765-5
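The staggered configuration from the figure caption is easy to reproduce numerically: four pixels average the same signal over 5 ms windows offset by 1.25 ms, so interleaving their outputs restores a 1.25 ms effective sampling grid while each sample keeps long-exposure SNR. A toy simulation (the signal shape and step size are arbitrary choices, not the paper's model):

```python
# Staggered pixel-wise exposure: long boxcar windows at shifted phases.
import numpy as np

dt = 5e-5                                  # simulation step: 50 us
t = np.arange(0.0, 0.1, dt)                # 100 ms of signal
signal = 1.0 + (np.sin(2 * np.pi * 100 * t) > 0.99)  # sparse spike-like events

def boxcar_samples(sig, dt, t_exp, phase):
    """Average sig over consecutive t_exp windows starting at phase."""
    start, win = int(round(phase / dt)), int(round(t_exp / dt))
    n = (len(sig) - start) // win
    return sig[start:start + n * win].reshape(n, win).mean(axis=1)

t_exp = 5e-3                               # 5 ms exposure, as in the caption
phases = [0.0, 1.25e-3, 2.5e-3, 3.75e-3]   # offsets from the caption
streams = [boxcar_samples(signal, dt, t_exp, p) for p in phases]

# Interleave the staggered streams: one sample every 1.25 ms overall.
n = min(len(s) for s in streams)
interleaved = np.stack([s[:n] for s in streams], axis=1).ravel()
```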
PhD thesis on SciDVS event camera
Link: https://www.research-collection.ethz.ch/handle/20.500.11850/683623
Thesis title: A Scientific Event Camera: Theory, Design, and Measurements
Author: Rui Graça
Advisor: Tobi Delbrück
These applications consist of detecting low-contrast changes in light intensity over time intervals of only a few milliseconds or less, in observations that can last several minutes. Currently, these applications rely on scientific image sensors capable of capturing thousands of frames per second. While the frame paradigm is highly prevalent in computer vision, it has significant downsides: for the applications described, acquiring thousands of frames per second for several minutes produces a highly redundant output, resulting in extremely inefficient data utilization.
Characteristics of the DVS such as its high-speed performance with low latency and high data efficiency, as well as its high dynamic range, have made it an emerging technology with growing popularity in recent years. These characteristics make the DVS a promising candidate for the scientific applications mentioned above. However, DVS implementations proposed before this work did not demonstrate sufficient sensitivity in the light-constrained settings these applications require.
The main purpose of the work presented in this thesis is the development of a novel DVS event camera with improved sensitivity under dim light. To achieve this goal, this thesis investigates the physical limits of the DVS technology, demonstrating that the DVS is limited to a minimum of 2x shot noise, and providing the conditions for the camera to operate near this limit. It also shows that spatial and temporal integration of light are fundamental to improving sensitivity in the dark - a result known from other visual systems, but never fully exploited in the DVS. This new knowledge, derived from extensive measurements of DVS cameras and supported by theoretical analysis, resulted in a more realistic model of the DVS pixel, capable of reproducing measured phenomena and aligning with theory.
The results obtained are useful for DVS users, by providing optimal biasing strategies; for algorithm developers, by providing novel interpretations of and insight into DVS data encoding; and for DVS designers, by defining the limits of the technology and its optimization goals.
Finally, supported by an improved understanding of the DVS pixel and its limits, this thesis proposes SciDVS: a scientific event camera capable of responding to edges of 1.7% contrast under dim light at 0.7 lx on-chip illuminance. SciDVS features an array of 126 × 112 pixels with a pitch of 30 μm, implemented in a 180 nm CMOS image sensor process. The SciDVS pixel introduces novelties such as an auto-centering high dynamic range pre-amplifier, improved bandwidth control achieving cutoff frequencies down to 3.5 Hz, and pixel binning.
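As background for the contrast figure quoted above: a DVS pixel emits an ON or OFF event each time its log intensity moves a fixed contrast threshold away from a stored reference level. A minimal event-generation model (the 1.7% threshold mirrors the SciDVS result; real pixels add the noise, latency, and bandwidth limits the thesis analyzes):

```python
# Idealized DVS pixel: threshold crossings in log intensity yield events.
import numpy as np

def dvs_events(intensity, threshold=0.017):
    """Return (sample_index, polarity) events for an intensity trace."""
    log_i = np.log(np.asarray(intensity, dtype=float))
    ref, events = log_i[0], []
    for k, v in enumerate(log_i):
        while v - ref >= threshold:      # ON events (brightening)
            ref += threshold
            events.append((k, +1))
        while ref - v >= threshold:      # OFF events (darkening)
            ref -= threshold
            events.append((k, -1))
    return events

ramp = np.linspace(1.0, 1.1, 50)         # a 10% brightness ramp
print(len(dvs_events(ramp)))             # 5 ON events (ln(1.1)/0.017 = 5.6)
```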
Yole analysis of onsemi acquisition of SWIR Vision Systems
Article by Axel Clouet, Ph.D. (Yole Group)
Onsemi, a leading CMOS image sensor supplier, has acquired SWIR Vision Systems, a pioneer in quantum-dots-based short-wave infrared (SWIR) imaging technology. Yole Group tracks and reports on these technologies through reports like Status of the CMOS Image Sensor 2024 and SWIR Imaging 2023. Yole Group’s Imaging Team discusses how this acquisition mirrors current industry trends.
SWIR Vision Systems pioneered the quantum dots platform
The SWIR imaging modality has long been used in defense and industrial applications, generating $97 million in revenue for SWIR imager suppliers in 2022. However, its adoption has been limited by the high cost of InGaAs technology, the historical platform needed to capture these wavelengths, compared with standard CMOS technology. In recent years, SWIR has attracted interest with the emergence of lower-cost technologies like quantum dots and germanium-on-silicon, both compatible with CMOS fabs and anticipated to serve mass markets in the long term.
SWIR Vision Systems, a U.S.-based start-up, pioneered the quantum dots platform, introducing the first-ever commercial product in 2018. This company is fully vertically integrated, making its own image sensors for integration into its own cameras.
An acquisition aligned with Onsemi’s positioning
The CMOS image sensor industry was worth $21.8 billion in 2023 and is expected to reach $28.6 billion by 2029. With a market share of 6%, onsemi is the fourth largest CMOS image sensor supplier globally. The company is the leader in the fast-growing $2.3 billion automotive segment and has a significant presence in the industrial, defense and aerospace, and medical segments.
In the short term, SWIR products will help onsemi catch up with Sony’s InGaAs products in the industrial segment by leveraging the cost advantage of quantum dots. Its existing sales channels will facilitate the adoption of quantum dots technology by camera manufacturers.
Additionally, onsemi is set to establish long-term relationships with defense customers, a segment poised for growth due to global geopolitical instability. By acquiring SWIR Vision Systems, on top of the East Fishkill CMOS fab acquisition completed in 2022, onsemi has secured its supply chain, owns the strategic SWIR technology, and has a large-volume U.S.-based factory. It is, therefore, aligned with the dual-use approach promoted by the U.S. government for its local industry.
This acquisition will contribute to faster development and adoption of the quantum dots platform without disrupting the SWIR landscape. For onsemi, it is a way to quickly attract new customers in the industrial and defense sectors and, in the long term, a differentiating technology for the automotive segment.