Quantum Solutions and Topodrone launch quantum dot SWIR camera

Image Sensors World

Press release from Quantum Solutions:

September 19, 2024

QUANTUM SOLUTIONS and TOPODRONE Unveil TOPODRONE x Q.Fly: A Cost-Effective, DJI-Ready Quantum Dot SWIR Camera for UAV Applications

Quantum Solutions and Topodrone are excited to announce the launch of the Q.Fly, a next-generation camera with Quantum Dot Short-Wave Infrared (SWIR) imaging capability designed specifically for UAV (drone) platforms. The Q.Fly is fully DJI-ready, working seamlessly out of the box with the DJI Matrice 300 and DJI Matrice 350 RTK, and offering real-time video streaming, control, and configuration directly from the DJI remote controller.

Developed to make SWIR technology more accessible and affordable for drone service companies and drone users, Q.Fly delivers a ready-to-use solution that eliminates the complexities of integrating advanced sensors into UAV platforms. The camera system also includes an RGB camera and/or a thermal camera for enhanced vision capabilities. With plug-and-play compatibility and unmatched spectral imaging performance, Q.Fly redefines what’s possible for a wide range of airborne applications.

This unique product combines Quantum Solutions’ Quantum Dot SWIR imaging technology with TOPODRONE’s UAV expertise, providing a cost-effective alternative to traditional SWIR cameras. Q.Fly covers a broad VIS-SWIR spectral range (400–1700 nm), making it ideal for a variety of airborne applications that demand precise, high-resolution imaging.

Key Features of Q.Fly:

• Quantum Dot SWIR Sensor: 640 x 512 pixels, covering a spectral range of 400–1700 nm

• Cost-Effective and Accessible: Q.Fly offers an affordable solution, finally making SWIR imaging technology accessible to a broader audience of drone users and service providers

• DJI Integration: Fully compatible with DJI Matrice 300 and Matrice 350 RTK, featuring real-time video streaming, control, and configuration from the remote controller

• Built-In RGB Camera with Optional Thermal Imager: Includes a 16 MP RGB camera for visual positioning and a thermal imager (640 x 512 pixels, 30 Hz) for enhanced versatility

• High-Precision Geo-Referencing: Delivers precise geo-referencing of spectral images

• High-Speed Spectral Imaging: Capable of operating at 220 Hz, delivering superior spectral imaging performance in real time

• Lightweight Design: Weighing only 650 g with its 3-axis gyrostabilized gimbal, Q.Fly allows for flight times of up to 35 minutes per battery cycle

• Built-In Linux Computer: Facilitates easy camera control and supports a variety of protocols, including DJI PSDK and MAVLink

• Filter Flexibility: Supports quick installation of spectral filters to adapt to specific use cases

Q.Fly is designed to serve industries that require precise, reliable, and easy-to-use drone-based imaging solutions, including:

  • Agriculture
  • Fire Safety and Rescue
  • Security and Surveillance
  • Industrial Inspection and Surveying

 

Product Launch at INTERGEO 2024
The TOPODRONE x Q.Fly will be officially unveiled at the INTERGEO 2024 exhibition in Stuttgart from September 24–26. This breakthrough technology will be showcased, highlighting its cost-effectiveness and how it can transform UAV imaging for various industries.
Attendees are invited to visit the TOPODRONE booth (Hall 1, Booth B1.055) to experience the Q.Fly and learn more about its unparalleled ease of use and advanced SWIR capabilities.
 
Unparalleled Ease of Use for Drone Operators
Q.Fly is designed with drone operators in mind, offering a hassle-free solution that simplifies the often-complex process of integrating advanced sensors into UAV platforms. With its plug-and-play compatibility with DJI drones, users can quickly deploy the Q.Fly for a wide range of applications without the need for complex setup procedures.

ITE/IISS 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024)

Image Sensors World

The 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024) will be held at the Tokyo University of Science on Friday November 8, 2024.

This workshop brings together people from various research fields, such as image sensing, imaging systems, optics, photonics, computer vision, and computational photography/imaging, to discuss the future and frontiers of image sensor technologies and to explore the continuing progress and diversity of image sensor engineering and of state-of-the-art and emerging imaging systems.


Date: November 8 (Fri), 2024
Venue: Forum-2, Morito Memorial Hall, Building 13, Tokyo University of Science / Online
Access: https://maps.app.goo.gl/LyecM4XUYazco5D79
Address: 4-2-2, Kagurazaka, Shinjuku-ku, Tokyo 162-0825, JAPAN

 

Online registration fees information is available here.
Registration is necessary because the number of seats in person is limited. Online viewing via Zoom is also offered.
Registration deadline is Nov. 5 (Tue).
Register and pay online from the following website: [Online registration page]

[Plenary Talk]
"CMOS Direct Time-of-Flight Depth Sensor for Solid-State LiDAR Systems"
by Jaehyuk Choi (SolidVue, Inc., Korea & Sungkyunkwan Univ. (SKKU), Korea)

[8 Invited Talks]
Invited-1 “Plasmonic Color Filters for Multi-spectral Imaging” by Atsushi Ono (Shizuoka Univ., Japan)
Invited-2 (online) “Intelligent Imager with Processing-in-Sensor Techniques” by Chih-Cheng Hsieh (National Tsing Hua Univ. (NTHU), Taiwan)
Invited-3 “Designing a Camera for Privacy Preserving” by Hajime Nagahara (Osaka Univ., Japan)
Invited-4 “Deep Compressive Sensing with Coded Image Sensor” by Michitaka Yoshida (JSPS, Japan), et al.
Invited-5 “Event-based Computational Imaging using Modulated Illumination” by Tsuyoshi Takatani (Univ. of Tsukuba, Japan)
Invited-6 “Journey of Pixel Optics Scaling into Deep Sub-micron and Migration to Meta Optics Era” by In-Sung Joe (Samsung Electronics, Korea)
Invited-7 “Trigger-Output Event-Driven SOI pixel Sensor for X-ray Astronomy” by Takeshi Tsuru (Kyoto Univ., Japan)
Invited-8 “New Perspectives for Infrared Imaging Enabled by Colloidal Quantum Dots” by Pawel E. Malinowski (imec, Belgium), et al.

Sponsored by:
Technical Group on Information Sensing Technologies (IST), the Institute of Image Information and Television Engineers (ITE)
Co-sponsored by:
International Image Sensor Society (IISS)

Group of Information Photonics (IPG) +CMOS Working Group, the Optical Society of Japan
General Chair: Keiichiro Kagawa (Shizuoka Univ., Japan)
Technical Program Committee (Alphabetical order): Keiichiro Kagawa (Shizuoka Univ., Japan), Hiroyuki Suzuki (Gunma Univ., Japan), Hisayuki Taruki (Toshiba Electronic Devices & Storage Corporation, Japan), Min-Woong Seo (Samsung Electronics, Korea), Sanshiro Shishido (Panasonic Holdings Corporation, Japan)

Contact for any question about IWISS2024
E-mail: iwiss2024@idl.rie.shizuoka.ac.jp (Keiichiro Kagawa, Shizuoka Univ., Japan)

Tamron 90mm f2.8 Di III Macro review

Cameralabs

Tamron's new 1:1 macro lens for Sony E-mount and Nikon Z-mount looks like an interesting alternative to Nikon's and Sony's own macro lenses. Find out more about the features and qualities of the 90mm f2.8 Di III Macro in my full review!…

Canon Inc. announces Audit & Supervisory Board Member changes

Newsroom | Canon Global

Canon delivers FPA-1200NZ2C nanoimprint lithography system for semiconductor manufacturing to the Texas Institute for Electronics

Newsroom | Canon Global

Canon releases FPA-3030i6 semiconductor lithography system for small wafers, with a newly developed lens and a variety of options to meet the growing demand for power devices

Newsroom | Canon Global

Canon PowerShot G2 retro review

Cameralabs

Back in 2001, Canon launched the second in its legendary PowerShot G series, the G2. It upgraded and fixed several issues, and can be bagged at a bargain price today. Find out its story 23 years later!…

Job Postings – Week of 22 September 2024

Image Sensors World

Anduril Industries

Chief Engineer, Imaging

Lexington, Massachusetts, USA

Link

Purdue University

Assistant Professor of Physics and Astronomy

West Lafayette, Indiana, USA

Link

RTX Raytheon

Mixed Signal IC Design Senior Engineer

Goleta, California, USA

Link

Sandia National Laboratories

Postdoctoral Appointee - Optoelectronic and Microelectronic Device Fabrication, Onsite

Albuquerque, New Mexico, USA

Link

Apple

Electrical Engineer - Camera Hardware

San Diego, California, USA

Link

University of Birmingham

Professor of Silicon Detector Instrumentation for Particle Physics

Birmingham, England, UK

Link

Google

Imaging Systems Engineer, Devices and Services

Mountain View, California, USA

Link

Institute of Physics in Prague

Postdoctoral research associate in ATLAS

Prague, Czech Republic

Link

Marvell

Silicon Photonics Engineer

Ottawa, Ontario, Canada

Link

Nikon Z 50mm f1.4 review

Cameralabs

Nikon's second f1.4 prime lens for Z-mount follows shortly after the introduction of their Z 35mm f1.4. Find out more about the features and qualities of the Z 50mm f1.4 in my review!…

Canon EOS 10D retro review

Cameralabs

Join me on a trip back to 2003 where we’ll visit the EOS 10D, Canon’s most affordable DSLR to date. This was the first in a hugely popular series which set the standard for Canon’s semi-pro bodies, crucially at an attainable price for many enthusiasts.…

Canon’s PowerShot V10 Vlog camera honored with Silver Award at the International Design Excellence Awards

Newsroom | Canon Global

Canon successfully removed toner cartridge listings from e-commerce platforms in first half of 2024

Newsroom | Canon Global

Canon RF 28-70mm f2.8 IS STM review

Cameralabs

The Canon RF 28-70mm f2.8 IS STM is a mid-range general-purpose zoom for their full-frame EOS R mirrorless cameras. Find out how it performs in my review!…

PhD thesis on CMOS SPAD dToF Systems

Image Sensors World

Thesis Title: Advanced techniques for SPAD-based CMOS d-ToF systems
Author: Alessandro Tontini
Affiliation: University of Trento and FBK

Full text available here: [link]

Abstract:

The possibility of enabling spatial perception in electronic devices has given rise to important developments in a wide range of fields, from consumer and entertainment applications to industrial environments, automotive and aerospace. Among the many techniques which can be used to measure the three-dimensional (3D) information of the observed scene, the unique features offered by direct time-of-flight (d-ToF) with single photon avalanche diodes (SPADs) integrated into a standard CMOS process have generated high interest from both researchers and market stakeholders. Despite the net advantages of SPAD-based CMOS d-ToF systems over other techniques, many challenges still have to be addressed. The first performance-limiting factor is the presence of uncorrelated background light, which poses a physical limit to the maximum achievable measurement range. Another problem of concern is mutual system-to-system interference, especially in industrial and automotive scenarios where many similar systems are expected to operate together and the need to guarantee safety of operations is a pillar. Each application, with its own set of requirements, leads to a different set of design challenges. However, given the statistical nature of photons, the common denominator for such systems is the necessity to operate on a statistical basis, i.e., to run a number of repeated acquisitions over which the time-of-flight (ToF) information is extracted. The gold standard for managing a possibly huge amount of data is to compress it into a histogram memory, which represents the statistical distribution of the arrival times of the photons collected during the acquisition. Considering the increased interest in long-range systems capable of both high imaging and ranging resolution, the amount of data to be handled reaches alarming levels. In this thesis, we propose an in-depth investigation of the aforesaid limitations. The problem of background light has been extensively studied over the years, and a wide set of techniques to mitigate it have already been proposed. However, the trend has been to investigate or propose single solutions, with a lack of knowledge regarding how different implementations behave in different scenarios. For this reason, our effort focused on comparing existing techniques against each other, highlighting the pros and cons of each and suggesting the possibility of combining them to increase performance. Regarding the problem of mutual system interference, we propose the first per-pixel implementation of an active interference-rejection technique, with measurement results from a purpose-designed chip. To advance the state of the art in the direction of reducing the amount of data generated by such systems, we provide for the first time a methodology to completely avoid the construction of a resource-consuming histogram of timestamps. Many of our findings are based on preliminary Monte Carlo simulations, while the most important achievements in terms of interference-rejection capability and data reduction are supported by measurements obtained with real sensors.
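
To make the histogramming step concrete, here is a minimal Monte Carlo sketch (Python) of the first-photon d-ToF acquisition the abstract calls the gold standard. All parameters (bin width, background rate, detection probability, timing jitter) are illustrative assumptions, not values from the thesis.

import numpy as np

C = 299_792_458.0    # speed of light, m/s
BIN_W = 100e-12      # histogram bin width: 100 ps (assumption)
N_BINS = 1000        # 100 ns range window, ~15 m of unambiguous range
N_SHOTS = 5000       # repeated laser shots per depth measurement
rng = np.random.default_rng(0)

def simulate_shot(tof_s, p_signal=0.05, bg_rate_hz=5e6):
    """Timestamp (s) of the first detected photon in one shot, or None."""
    window = N_BINS * BIN_W
    # Background photons arrive as a Poisson process over the window.
    hits = list(rng.uniform(0.0, window, rng.poisson(bg_rate_hz * window)))
    if rng.random() < p_signal:                  # laser echo detected?
        hits.append(rng.normal(tof_s, 50e-12))   # 50 ps timing jitter
    return min(hits) if hits else None           # first photon wins (pile-up)

def measure_distance(true_dist_m):
    tof = 2.0 * true_dist_m / C
    hist = np.zeros(N_BINS, dtype=np.int64)      # the histogram memory
    for _ in range(N_SHOTS):
        t = simulate_shot(tof)
        if t is not None and 0 <= int(t / BIN_W) < N_BINS:
            hist[int(t / BIN_W)] += 1
    est_tof = (np.argmax(hist) + 0.5) * BIN_W    # peak bin -> ToF estimate
    return est_tof * C / 2.0

print(f"estimated distance: {measure_distance(5.0):.3f} m")

The first-photon rule in simulate_shot is exactly why background light is range-limiting: the stronger the background, the more often a background photon pre-empts the laser echo and flattens the histogram peak.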

Contents

1 Introduction
1.1 Single Photon Avalanche Diode (SPAD)
1.1.1 Passive quenching
1.1.2 Active quenching
1.1.3 Photon Detection Efficiency (PDE)
1.1.4 Dark Count Rate (DCR) and afterpulsing

2 Related work
2.1 Pioneering results
2.2 Main challenges
2.3 Integration challenges

3 Numerical modelling of SPAD-based CMOS d-ToF sensors
3.1 Simulator architecture overview
3.2 System features modeling
3.2.1 Optical model
3.2.2 Illumination source - modeling of the laser emission profile
3.3 Monte Carlo simulation
3.3.1 Generation of SPAD-related events
3.3.2 Synchronous and asynchronous SPAD model
3.4 Experimental results
3.5 Summary

4 Analysis and comparative evaluation of background rejection techniques
4.1 Background rejection techniques
4.1.1 Photon coincidence technique
4.1.2 Auto-Sensitivity (AS) technique
4.1.3 Last-hit detection
4.2 Results
4.2.1 Auto-Sensitivity vs. photon coincidence
4.2.2 Comparison of photon coincidence circuits
4.2.3 Last-hit detection characterization
4.3 Automatic adaptation of pixel parameters
4.4 Summary


5 A SPAD-based linear sensor with in-pixel temporal pattern detection for interference and background rejection with smart readout scheme
5.1 Architecture
5.1.1 Pixel architecture
5.1.2 Readout architecture
5.2 Characterization
5.2.1 In-pixel laser pattern detection characterization
5.2.2 Readout performance assessment
5.3 Operating conditions and limits
5.4 Summary

6 SPAD response linearization: histogram-less LiDAR and high photon flux measurements
6.1 Preliminary validation
6.1.1 Typical d-ToF operation
6.1.2 Histogram-less approach
6.2 Mathematical analysis
6.3 Acquisition schemes
6.3.1 Acquisition scheme #1: Acquire or discard
6.3.2 Acquisition scheme #2: Time-gated
6.3.3 Discussion on implementation, expected performance and mathematical analysis
6.3.4 Comparison with state-of-the-art
6.4 Measurement results
6.4.1 Preliminary considerations
6.4.2 Measurements with background light only
6.4.3 Measurements with background and laser light and extraction of the ToF
6.5 Summary

7 Conclusion
7.1 Results
7.1.1 Modelling of SPAD-based d-ToF systems
7.1.2 Comparative evaluation of background-rejection techniques
7.1.3 Interference rejection
7.1.4 Histogram-less and high-flux LiDAR
7.2 Future work and research
Bibliography

Canon EOS C80 review

Cameralabs

The Canon EOS C80 is a cinema camera which packs the full-frame 6k sensor of the C400 into the smaller body of the C70. Here's everything you need to know!…

8th Space & Scientific CMOS Image Sensors workshop – abstracts due Sep 13, 2024

Image Sensors World

CNES, ESA, AIRBUS DEFENCE & SPACE, THALES ALENIA SPACE, SODERN, OHB, ISAE SUP’AERO are pleased to invite you to the 8th “Space & Scientific CMOS Image Sensors” workshop to be held in TOULOUSE on November 26th and 27th 2024 within the framework of the Optics and Optoelectronics COMET (Communities of Experts).

The aim of this workshop is to focus on CMOS image sensors for scientific and space applications. Although the workshop is organized by members of the space community, it is widely open to other professional imaging applications, such as machine vision, medical, Advanced Driver Assistance Systems (ADAS), and broadcast (UHDTV), that drive the development of new pixel and sensor architectures for high-end applications. Furthermore, we would like to invite laboratories and research centers that develop custom CMOS image sensors with advanced on-chip smart design to join this workshop.

Topics
- Pixel design (high QE, FWC, MTF optimization, low lag,…)
- Electrical design (low noise amplifiers, shutter, CDS, high speed architectures, TDI, HDR)
- On-chip ADC or TDC (in pixel, column, …)
- On-chip processing (smart sensors, multiple gains, summation, corrections)
- Low-light detection (electron multiplication, avalanche photodiodes, quanta image sensors)
- Photon counting, Time resolving detectors (gated, time-correlated single-photon counting)
- Hyperspectral architectures
- Materials (thin film, optical layers, dopant, high-resistivity, amorphous Si)
- Processes (backside thinning, hybridization, 3D stacking, anti-reflection coating)
- Packaging
- Optical design (micro-lenses, trench isolation, filters)
- Large size devices (stitching, butting)
- High speed interfaces
- Focal plane architectures
- CMOS image sensors with recent space heritage (in-flight performance)

Venue
DIAGORA
Centre de Congrès et d'Exposition. 150, rue Pierre Gilles de Gennes
31670 TOULOUSE – LABEGE

Abstract submission
Please send a short abstract of one A4 page maximum, in Word or PDF format, giving the title, the authors' names and affiliations, and presenting the subject of your talk, to L-WCIS24@cnes.fr

Workshop format & official language
Presentations at the workshop will be oral. The official language of the workshop is English.

Slide submission
After notification of abstract acceptance, the author(s) will be requested to prepare their presentation in PDF or PowerPoint format, to present it at the workshop, and to provide a copy to the organizing committee with authorization to make it available to all attendees and online for the CCT members.

Registration
Registration fee: 100 €.
https://evenium.events/space-and-scientific-cmos-image-sensors-2024/ 

Calendar
13th September 2024 Deadline for abstract submission
11th October 2024 Author notification & preliminary programme
14th October 2024 Registration opening
8th November 2024 Final programme
26th-27th November 2024 Workshop

TriEye launches TES200 SWIR Image Sensor

Image Sensors World

TriEye has launched the TES200, a 1.3MP SWIR image sensor for machine vision and robotics. See press release below.

TEL AVIV, Israel, September 3, 2024 – TriEye, pioneer of the world's first cost-effective, mass-market Short-Wave Infrared (SWIR) sensing technology, announced today the release of the TES200 1.3MP SWIR image sensor. Based on the innovative TriEye CMOS image sensor technology that allows SWIR capabilities using a CMOS manufacturing process, the TES200 is the first commercially available product released in the Raven product family.

The TES200 operates in the 700nm to 1650nm wavelength range, delivering high sensitivity and 1.3MP resolution. With its large format, high frame rate, and low power consumption, the TES200 offers enhanced sensitivity and dynamic range. This makes the new image sensor ideal for imaging and sensing applications across various industries, including automotive, industrial, robotics, and biometrics.

"We are proud to announce the commercial availability of the TES200 image sensor. Our CMOS-based solution has set new standards in the automotive market, and with the rise of new Artificial Intelligence (AI) systems, the demand for more sensors and more information has increased. The TES200 now brings these advanced SWIR capabilities to machine vision and robotic systems in various  industries,” said Avi Bakal, CEO of TriEye. “We are excited to offer a solution that delivers a new domain of capabilities in a cost-effective and scalable way, broadening the reach of advanced sensing technology."

The TriEye Raven image sensor family is designed for emerging machine vision and robotics applications, incorporating the latest SWIR pixel and packaging technologies. The TES200 is immediately available in sample quantities and open for production orders, with delivery in Q2 2025.


 

Experience the TES200 in Action at CIOE and VISION 2024

We invite you to explore the advanced capabilities of the TES200 at the CIOE exhibition, held from September 11 to 13, 2024, at the Shenzhen World Exhibition and Convention Center, China, within the Lasers Technology & Intelligent Manufacturing Expo. View the demo at the Vertilas booth (no. 4D021, 4D022). Then, meet TriEye’s executive team at VISION 2024 in Stuttgart, Germany, from October 8 to 10, at the TriEye booth (no. 8A08), where you can experience a live demo of the TES200 and the brand-new Ovi 2.0 devkit, and learn firsthand about our latest developments in SWIR imaging.

About TriEye 

TriEye is the pioneer of the world's first CMOS-based Short-Wave Infrared (SWIR) image sensing solutions. Based on advanced academic research, TriEye’s breakthrough technology enables HD SWIR imaging and accurate deterministic 3D sensing in all weather and ambient lighting conditions. The company's semiconductor and photonics technology enabled the development of the SEDAR (Spectrum Enhanced Detection And Ranging) platform, which allows perception systems to operate and deliver reliable image data and actionable information while reducing expenditure by up to 100x relative to existing industry rates. For more information, visit www.trieye.tech

Tamron 50-400mm f4.5-6.3 Di III VC Nikon Z review so far

Cameralabs

Tamron's 50-400mm f4.5-6.3 Di III VC for Z-mount fills a gap in Nikon's line-up of zoom-lenses reaching 400mm focal length. How does it compare to Nikon's alternatives? Find out in my review so far.…

2024 SEMI MEMS and Imaging Summit program announced

Image Sensors World

SEMI MEMS & Imaging Sensors Summit 2024 will take place November 14-15 at the International Conference Center Munich (ICM), Messe München, in Germany.

Thursday, 14th November 2024 

Session 1: Market Dynamics: Landscape and Growth Strategies

09:00  Welcome Remarks
Laith Altimime, President, SEMI Europe

09:20  Opening Remarks by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

09:25  Keynote: Smart Sensors for Smart Life – How Advanced Sensor Technologies Enable Life-Changing Use Cases
Stefan Finkbeiner, General Manager, Bosch Sensortec

09:45  Keynote: Sensing the World: Innovating for a More Sustainable Future
Simone Ferri, APMS Group Vice President, MEMS sub-group General Manager, STMicroelectronics

10:05  Reserved for Yole Development

10:25  Key Takeaways by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

10:30  Networking Coffee Break

Session 2: Sustainable Supply Chain Capabilities

11:10  Opening Remarks by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

11:15  A Paradigm Shift From Imaging to Vision: Oculi Enables 600x Reduction in Latency-Energy Factor for Visual Edge Applications
Charbel Rizk, Founder & CEO, Oculi

11:35  Reserved for Comet Yxlon

11:55  Key Takeaways by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

12:00  Networking Lunch

Session 3: MEMS - Exploring Future Trends for Technologies and Device Manufacturing

13:20  Opening Remarks by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

13:25  Unlocking Novel Opportunities: How 300mm-capable MEMS Foundries Will Change the Game
Jessica Gomez, CEO, Rogue Valley Microdevices

13:45  Trends in Emerging MEMS
Alissa Fitzgerald, CEO, A.M. Fitzgerald & Associates, LLC

14:05  The Most Common Antistiction Films are PFAS, Now What?
David Springer, Product Manager, MVD and Release Etch Products, KLA Corporation

14:25  Reserved for Infineon

14:45  Latest Innovations in MEMS Wafer Bonding
Thomas Uhrmann, Director of Business Development, EV Group

15:05  Key Takeaways by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

Session 4: Imaging - Exploring Future Trends for Technologies and Device Manufacturing

15:10  Opening Remarks by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

15:15  Topic Coming Soon
Avi Bakal, CEO & Co-founder, TriEye

15:35  Active Hyperspectral Imaging Using Extremely Fast Tunable SWIR Light Source
Jussi Soukkamaki, Lead, Hyperspectral & Imaging Technologies, VTT Technical Research Centre of Finland Ltd

15:55  Networking Coffee Break

16:40  Reserved

17:00  Reserved for CEA-Leti

17:20  Reserved for STMicroelectronics

17:40  Key Takeaways by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

Friday, 15th November 2024 

Session 5: MEMS and Imaging Young Talent

09:00  Opening Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:05  Unlocking Infrared Multispectral Imaging with Pixelated Metasurface Technology
Charles Altuzarra, Chief Executive Officer & Co-founder, Metahelios

09:10  Electrically Tunable Dual-Band VIS/SWIR Imaging and Sensing
Andrea Ballabio, CEO, EYE4NIR

09:15  FMCW Chip-Scale LiDARs Scale Up for Large Volume Markets Thanks to Silicon Photonics Technology
Simoens François, CEO, SteerLight

09:20  ShadowChrome: A Novel Approach to an Old Problem
Geoff Rhoads, Chief Technology Officer, Transformative Optics Corporation

09:25  Feasibility Investigation of Spherically Bent Image Sensors
Amit Pandey, PhD Student, Technische Hochschule Ingolstadt

09:30  Intelligence Through Vision
Stijn Goossens, CTO, Qurv

09:35  Next Generation Quantum Dot SWIR Sensors
Artem Shulga, CEO & Founder, QDI Systems

09:40  Closing Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:45  Networking Coffee Break

Session 6: Innovations for Next-Gen Applications: Smart Mobility

10:35  Opening Remarks by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

10:40  Reserved

11:00  New Topology for MEMS Advances Performance and Speeds Manufacturing
Eric Aguilar, CEO, Omnitron Sensors, Inc.

11:20  Key Takeaways by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

Session 7: Innovations for Next-Gen Applications: Health

11:25  Opening Remarks by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

11:30  Reserved

11:50  Sensors for Monitoring Vital Signs in Wearable Devices
Markus Arzberger, Senior Director, ams-OSRAM International GmbH

12:10  Pioneering Non-Invasive Wearable MIR Spectrometry for Key Health Biomarkers Analysis
Jan F. Kischkat, CEO, Quantune Technologies GmbH

12:30  Key Takeaways by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

12:35  End of Conference Reflections by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

12:45  Closing Remarks
Laith Altimime, President, SEMI Europe

12:50  Networking Lunch

IEEE SENSORS 2024 — image sensor topics announced

Image Sensors World

The list of topics and authors for the following two events related to image sensor technology has been finalized for the IEEE SENSORS 2024 Conference. The conference will be held in Kobe, Japan, from 20-23 October 2024. It will provide the opportunity to hear world-class speakers in the field of image sensors and to sample the wider sensor ecosystem, to see how imaging fits in.

Workshop: “From Imaging to Sensing: Latest and Future Trends of CMOS Image Sensors” [Sunday, 20 October]

Organizers: Sozo Yokogawa (Sony Semiconductor Solutions Corp.) • Erez Tadmor (onsemi)

“Trends and Developments in State-of-the-Art CMOS Image Sensors”, Daniel McGrath, TechInsights
“CMOS Image Sensor Technology: what we have solved, what are to be solved”, Eiichi Funatsu, OMNIVISION
“Automotive Imaging: Beyond Human Vision”, Vladi Korobov, onsemi
“Recent Evolution of CMOS Image Sensor Pixel Technology”, Bumsuk Kim et al., Samsung Electronics
“High precision ToF image sensor and system for 3D scanning application”, Keita Yasutomi, Shizuoka University
“High-definition SPAD image sensors for computer vision applications”, Kazuhiro Morimoto, Canon Inc.
“Single Photon Avalanche Diode Sensor Technologies for Pixel Size Shrinkage, Photon Detection Efficiency Enhancement and 3.36-µm-pitch Photon-counting Architecture”, Jun Ogi, Sony Semiconductor Solutions Corp.
“SWIR Single-Photon Detection with Ge-on-Si Technology”, Neil Na, Artilux Inc.
“From SPADs to smart sensors: ToF system innovation and AI enable endless application”, Laurent Plaza & Olivier Lemarchand, STMicroelectronics
“Depth Sensing Technologies, Cameras and Sensors for VR and AR”, Harish Venkataraman, Meta Inc.
 
Focus session: Stacking in Image Sensors [Monday, 21 October]

Organizer: S-G. Wu, Brillnics

Co-chairs: DN Yaung, TSMC; John McCarten, L3 Harris

Over the past decade, 3-dimensional (3D) wafer-level stacked backside-illuminated (BSI) CMOS image sensors (CIS) have achieved rapid progress in mass production. This focus session on stacking in image sensors will feature four invited papers exploring the evolution of sensor stacking technology, from process development and circuit architecture to AI/edge computing in system integration.

“The Productization of Stacking in Image Sensors”, Daniel McGrath, TechInsights
“Evolution of Image Sensing and Computing Architectures with Stacking Device Technologies”, BC Hseih, Qualcomm
“Event-based vision sensor”, Christoph Posch, Prophesee
“Evolution of digital pixel sensor (DPS) and advancement by stacking technologies”, Ikeno Rimon, Brillnics

Galaxycore educational videos

Image Sensors World

Are you curious about how CMOS image sensors capture such clear and vivid images? Start your journey with the first episode of "CIS Explained". In this episode, we dive deep into the workings of these sophisticated sensors, from the basics of pixel arrays to the intricacies of signal conversion.
This episode serves as your gateway to understanding CMOS image sensors.


In this video, we're breaking down Quantum Efficiency (QE) and its crucial role in CIS. QE is a critical measure of how efficiently our sensors convert incoming light into electrical signals, directly affecting image accuracy and quality. This video will guide you through what QE means for CIS, its impact on your images, and how we're improving QE for better, more reliable imaging.
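
As a back-of-the-envelope companion to the video: QE at a given wavelength is the ratio of collected photoelectrons to incident photons. A small Python sketch with purely illustrative numbers (not GalaxyCore specifications):

# Quantum efficiency: the fraction of incident photons converted to
# photoelectrons, QE(lambda) = electrons_collected / photons_incident.
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s

def photons_incident(irradiance_w_m2, wavelength_m, pixel_area_m2, t_exp_s):
    """Photons hitting one pixel during the exposure."""
    photon_energy = H * C / wavelength_m            # J per photon
    return irradiance_w_m2 * pixel_area_m2 * t_exp_s / photon_energy

# 550 nm green light at 1 uW/cm^2 on a 1.0 um pixel for 10 ms:
n_ph = photons_incident(1e-2, 550e-9, (1.0e-6) ** 2, 10e-3)
n_e = 180.0   # photoelectrons measured after dark subtraction (hypothetical)
print(f"incident photons: {n_ph:.0f}, QE = {n_e / n_ph:.1%}")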


GalaxyCore DAG HDR Technology Film


Exploring GalaxyCore's Sensor-Shift Optical Image Stabilization (OIS) in under Two Minutes


GalaxyCore's COM packaging technology—a breakthrough in CIS packaging. This video explains how placing two suspended gold wires on the image sensor and bonding it to an IR base can enhance the durability and clarity of image sensors, prevent contamination, and ensure optimal optical alignment.

Avoiding information loss in the photon transfer method

Image Sensors World

In a recent paper titled "PCH-EM: A Solution to Information Loss in the Photon Transfer Method" in IEEE Trans. on Electron Devices, Aaron Hendrickson et al. propose a new statistical technique to estimate CIS parameters such as conversion gain and read noise.

Abstract: Working from a Poisson-Gaussian noise model, a multisample extension of the photon counting histogram expectation-maximization (PCH-EM) algorithm is derived as a general-purpose alternative to the photon transfer (PT) method. This algorithm is derived from the same model, requires the same experimental data, and estimates the same sensor performance parameters as the time-tested PT method, all while obtaining lower uncertainty estimates. It is shown that as read noise becomes large, multiple data samples are necessary to capture enough information about the parameters of a device under test, justifying the need for a multisample extension. An estimation procedure is devised consisting of initial PT characterization followed by repeated iteration of PCH-EM to demonstrate the improvement in estimating uncertainty achievable with PCH-EM, particularly in the regime of deep subelectron read noise (DSERN). A statistical argument based on the information theoretic concept of sufficiency is formulated to explain how PT data reduction procedures discard information contained in raw sensor data, thus explaining why the proposed algorithm is able to obtain lower uncertainty estimates of key sensor performance parameters, such as read noise and conversion gain. Experimental data captured from a CMOS quanta image sensor with DSERN are then used to demonstrate the algorithm’s usage and validate the underlying theory and statistical model. In support of the reproducible research effort, the code associated with this work can be obtained on the MathWorks file exchange (FEX) (Hendrickson et al., 2024).

 

Figure: RRMSE versus read noise for parameter estimates computed using the constant-flux implementation of PT and PCH-EM. RRMSE curves for the PT estimates μ̃ and σ̃ grow large near σ_read = 0 and were clipped from the plot window.


Open access paper link: https://ieeexplore.ieee.org/document/10570238
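
For context on the baseline being extended: the classical PT estimate of conversion gain comes from the linear mean-variance relation implied by the Poisson-Gaussian model. A minimal numpy sketch on simulated data (all parameter values are made up):

import numpy as np
rng = np.random.default_rng(1)

# Poisson-Gaussian model: DN = (P + R) / g, with P ~ Poisson(mu_e)
# photoelectrons, R ~ N(0, sigma_read) in electrons, g in e-/DN.
G_TRUE = 2.0        # conversion gain, e-/DN (made up)
SIGMA_READ = 1.5    # read noise, e- rms (made up)
N_PIX = 100_000

def frame(mu_e):
    e = rng.poisson(mu_e, N_PIX) + rng.normal(0.0, SIGMA_READ, N_PIX)
    return e / G_TRUE   # digital numbers; ADC quantization ignored for clarity

means, variances = [], []
for mu in np.linspace(50, 5000, 20):     # flat fields at increasing flux
    a, b = frame(mu), frame(mu)          # a frame pair, as in standard PT
    means.append(0.5 * (a.mean() + b.mean()))
    variances.append(0.5 * (a - b).var())    # frame difference cancels FPN
# PT relation: var_DN = mean_DN / g + sigma_read^2 / g^2
slope, intercept = np.polyfit(means, variances, 1)
print(f"conversion gain ~ {1.0 / slope:.2f} e-/DN (true {G_TRUE})")
print(f"read noise ~ {G_TRUE * np.sqrt(intercept):.2f} e- (true {SIGMA_READ})")

PCH-EM instead iterates expectation-maximization over the full photon counting histogram rather than reducing each frame pair to a (mean, variance) point; the paper's sufficiency argument explains why that reduction throws information away, especially in the deep sub-electron read noise regime.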

Job Postings – Week of 18 August 2024

Image Sensors World

Omnivision

Principal Image Sensor Technology Engineer

Santa Clara, California, USA

Link

Teledyne

Product Assurance Engineer

Chelmsford, England, UK

Link

Tokyo Electron Labs

Heterogeneous Integration Process Engineer I

Albany, New York, USA

Link

Fraunhofer IMS

Doctoral Researcher, Optical Detectors with Integrated 2D Materials

Duisburg, Germany

Link

AMETEK Forza Silicon

Principal Mixed Signal Design Engineer

Pasadena, CA, USA

Link

University of Birmingham

Professor of Silicon Detector Instrumentation for Particle Physics

Birmingham, England, UK

Link

Ouster

Sensor Package Design Engineer

San Francisco, California, USA

Link

Beijing Institute of High Energy Physics

CEPC Overseas High-Level Young Talents

Beijing, China

Link

Thermo Fisher Scientific

Sr. Staff Product Engineer

Waltham, Massachusetts, USA (Remote)

Link

Harvest Imaging Forum 2024 registration open

Image Sensors World

The Harvest Imaging forum tradition continues: the tenth forum will be organized on November 7 & 8, 2024, in Delft, the Netherlands. The basic intention of the Harvest Imaging forums is to have a scientific and technical in-depth discussion of one particular topic of great importance and value to digital imaging. The 2024 forum will be an in-person event.

The 2024 Harvest Imaging forum will deal with a single topic from the solid-state imaging world and will have a single world-level expert as speaker:

"AI and VISION : A shallow dive into deep learning"

Prof. dr. Jan van Gemert (Delft Univ. of Technology, Nl)

Abstract: Artificial Intelligence is taking the world by storm! The AI engine is powered by “Deep Learning”. Deep learning differs from normal computer programming in that it allows computers to learn tasks from large, labelled datasets. In this Harvest Imaging Forum we will go through the fundamentals of Deep Learning: multi-layer perceptrons, back-propagation, optimization, convolutional neural networks, recurrent neural networks, un-/self-supervised learning, and transformers and self-attention (GPT).
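
As a small taste of the first two topics on that list, here is a self-contained toy example (not forum material): a tiny numpy multi-layer perceptron trained by hand-written back-propagation on XOR.

import numpy as np

# One-hidden-layer MLP trained by backpropagation on XOR.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([[0], [1], [1], [0]], float)

W1, b1 = rng.normal(0, 1, (2, 8)), np.zeros(8)
W2, b2 = rng.normal(0, 1, (8, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    # forward pass
    h = np.tanh(X @ W1 + b1)
    p = sigmoid(h @ W2 + b2)
    # backward pass: gradients of binary cross-entropy w.r.t. each parameter
    dp = (p - y) / len(X)
    dW2, db2 = h.T @ dp, dp.sum(0)
    dh = (dp @ W2.T) * (1 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    # plain gradient-descent update
    for P, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 1.0 * g

print(np.round(p.ravel(), 2))   # approaches [0, 1, 1, 0] after training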

Bio: Jan van Gemert received a PhD degree from the University of Amsterdam in 2010. There he was a post-doctoral fellow as well as at École Normale Supérieure in Paris. Currently he leads the Computer Vision lab at Delft University of Technology. He teaches the Deep learning and Computer Vision MSc courses. His research focuses on visual inductive priors for deep learning for automatic image and video understanding. He has published over 100 peer-reviewed papers with more than 7,500 citations. See his Google scholar profile for his publications: https://scholar.google.com/citations?hl=en&user=JUdMRGcAAAAJ

Registration: The registration fee for this 2-day forum is 1295 Euro for in-person attendance. In addition to attendance at the forum itself, the fee includes:

  •  Coffee breaks in the mornings and afternoons,
  •  Lunch on both forum days,
  •  Dinner on the first forum day,
  •  Soft and hard copy of the presented material.

If you are interested in attending this forum, please fill out the registration form here: https://harvestimaging.com/forum_registration_2024.php

PhD thesis on a low power "time-to-first-spike" event sensor

Image Sensors World

Title: Event-based Image Sensor for low-power

Author: Mohamed AKRARAI (Universite Grenoble Alpes)

Abstract: In the framework of the OCEAN 12 European project, this PhD covered the design, implementation, and testing of an event-based image sensor, and led to the publication of several scientific papers at international conferences, including renowned ones such as the International Symposium on Asynchronous Circuits and Systems (ASYNC). Event-based image sensors, which are frameless, require a dedicated architecture and asynchronous logic that reacts to events. First, this PhD gives an overview of architectures based on a hybrid pixel matrix including TFS and DVS pixels; indeed, these two kinds of pixels manage spatial redundancy and temporal redundancy, respectively. One of the main achievements of this work is to take advantage of having both pixel types inside one imager in order to reduce its output bitstream and its power consumption. Then, the design of the pixels and readout in STMicroelectronics 28 nm FDSOI technology is detailed. Finally, two image sensors were implemented in a test chip and tested.

Link: https://theses.hal.science/tel-04213080v1/file/AKRARAI_2023_archivage.pdf
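
The TFS (time-to-first-spike) pixel named in the abstract encodes intensity in when a pixel fires rather than in an amplitude: the photodiode integrates toward a threshold, so bright pixels spike early and dark ones late. A toy numerical model in Python; the capacitance, threshold, and frame-time values are assumptions, not the thesis circuit:

import numpy as np

def tfs_encode(photocurrent, c_fd=10e-15, v_thresh=0.5, t_max=10e-3):
    """Map photocurrent (A) to first-spike time (s): t = C * V_th / I."""
    t_spike = c_fd * v_thresh / np.maximum(photocurrent, 1e-18)
    return np.where(t_spike <= t_max, t_spike, np.inf)  # inf = no spike

def tfs_decode(t_spike, c_fd=10e-15, v_thresh=0.5):
    """Recover intensity from spike time (inverse mapping)."""
    return np.where(np.isfinite(t_spike), c_fd * v_thresh / t_spike, 0.0)

currents = np.array([1e-12, 5e-13, 1e-13, 1e-15])   # photocurrents, A
times = tfs_encode(currents)
print(times)               # brightest pixel spikes first
print(tfs_decode(times))   # decoding recovers the bright pixels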

 

EETimes article on imec

Image Sensors World

Full article: https://www.eetimes.eu/imec-getting-high-precision-sensors-to-market/

Imec: Getting High-Precision Sensors to Market

At the recent ITF World 2024, EE Times Europe talked with imec researchers to catch up on what they’re doing with high-precision sensors—and more importantly, how they make sure their innovations get into the hands of industrial players.

Imec develops sensors for cameras and displays, and it works with both light and ultrasound—for medical applications, for example. But the Leuven, Belgium–based research institute never takes technology to market itself. It either finds industrial partners—or when conditions are right, imec creates a spinoff. One way to understand how imec takes an idea from lab to fab and finds a way to get it to market is to zoom in on its approach with image sensors for cameras.

“We make image sensors that are at the beating heart of incredible cameras around the world,” said Paul Heremans, vice president of future CMOS devices and senior fellow at imec. “Our research starts with material selection and an overall new concept for sensors and goes all the way to development, engineering and low-volume manufacturing within imec’s pilot line.”

A good example is the Pharsighted E9-100S ultra-high-speed video camera, developed by Pharsighted LLC and marketed by Photron. The camera reaches 326,000 frames per second (full frame: 640 × 480 pixels) and up to 2,720,000 frames per second at a lower frame size (640 × 32 pixels), thanks to a high-speed image sensor developed and manufactured by imec.

Another example is an electron imager used in a cryo-transmission electron microscope (cryo-TEM) marketed by a U.S. company called Thermo Fisher. The instrument produces atomic resolution pictures of DNA strands and other complex molecules. These images help in the drug-discovery process by allowing researchers to understand the structure of the molecules they need to target.
Thermo Fisher uses direct electron detection imagers developed by imec and built into the company’s Falcon detectors, each composed of 4K × 4K pixels. The pixels are very large to reach the ultimate sensitivity. Consequently, the chip is so large (5.7 × 5.7 cm) that only four fit on a 200-mm wafer.

A third example is hyperspectral imagers, with very special filters that detect many more colors than just red, green and blue (RGB). Hyperspectral imagers pick up tens or hundreds of spectral bands. They can achieve this level of performance because imec implements processing filters on each pixel.

“We can do that on almost any commercial imager and turn it into a hyperspectral camera,” Heremans said. “Our technology is used by plenty of customers with a range of applications—from surveillance to satellite-based Earth observation, from medical to agriculture and more.”
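
One way to picture the filter-on-pixel idea is as a generalized Bayer mosaic: a tile of many narrowband filters repeats across the array, and demosaicing splits the raw frame into one low-resolution image per band. A Python sketch assuming a 4x4, 16-band mosaic purely for illustration (not imec's actual filter layout):

import numpy as np

TILE = 4                                   # 4x4 mosaic -> 16 spectral bands
H, W = 480, 640                            # sensor resolution (multiple of TILE)
raw = np.random.default_rng(0).integers(0, 1024, (H, W)).astype(np.float32)

def mosaic_to_cube(raw, tile=TILE):
    """Split a mosaic-filtered raw frame into a (bands, H/tile, W/tile) cube."""
    h, w = raw.shape
    cube = np.empty((tile * tile, h // tile, w // tile), raw.dtype)
    for i in range(tile):
        for j in range(tile):
            cube[i * tile + j] = raw[i::tile, j::tile]  # one band per offset
    return cube

cube = mosaic_to_cube(raw)
print(cube.shape)   # (16, 120, 160): 16 bands at 1/4 spatial resolution each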

Spectricity

To bring some of its work on hyperspectral imagers to market, imec created a startup called Spectricity. “The whole idea is to bring this field of multispectral imaging or spectroscopy into cellphones or other high-volume products,” said Glenn Vandevoorde, CEO of Spectricity. “Our imagers can see things that are not visible to the human eye. Instead of just processing RGB data, which a traditional camera does, we take a complete spectral image, where each pixel contains 16 different color points—including near-infrared. And with that, you can detect different materials that look alike but are actually very different. Or you can do color correction on smartphones. Sometimes people look very different, depending on the ambient light. We can detect what kind of light is shining—and based on that, adjust the color.”
The first use case for cellphones is auto white balancing. When a picture is taken with a cellphone, sometimes the colors show up very differently from reality, because the camera doesn’t have an accurate white point, which is the set of values that make up the color white in an image. These values change under different conditions, which means they need to be calibrated often. All other colors are then adjusted based on the white point reference.

Traditional smartphone cameras cannot determine the ambient light accurately, so they cannot find the white point to serve as a viable reference. But the multispectral imager obtains the full spectral information of the ambient light and applies advanced AI algorithms to detect the white point, which leads to accurate auto white balancing and true color correction.
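
The correction step itself is simple once the white point is known; the hard part, which the multispectral measurement addresses, is estimating that white point. A minimal diagonal (von Kries-style) white-balance sketch with a hypothetical illuminant; this is an illustration of the correction step, not Spectricity's algorithm:

import numpy as np

def white_balance(img, white_point):
    """img: HxWx3 float RGB in [0,1]; white_point: RGB of the illuminant."""
    gains = white_point.max() / np.asarray(white_point, float)
    return np.clip(img * gains, 0.0, 1.0)   # per-channel diagonal scaling

rng = np.random.default_rng(0)
img = rng.random((4, 4, 3))                 # toy image under warm light
warm_white = np.array([1.0, 0.85, 0.6])     # assumed illuminant white point
balanced = white_balance(img, warm_white)
# A patch that was the illuminant color now maps to neutral:
print(white_balance(warm_white.reshape(1, 1, 3), warm_white).ravel())  # ~[1 1 1]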

Spectricity said its sensor is being evaluated by seven out of the top eight smartphone manufacturers in the world for integration into phones. “By the end of this year, you will see several smartphone vendors launching the first phones with multispectral imagers inside,” Vandevoorde said.

While smartphones are the ultimate target for high volume, they are also very cost-competitive—and it takes a long time to introduce a new feature in a smartphone. Spectricity is targeting other smartphone applications but also applications for webcams, security cameras and in-cabin video cameras for cars. One category of use cases takes advantage of the ability of multispectral images to detect health conditions.

 

Spectricity’s spectral image sensor technology extends the paradigm of RGB color image sensors. Instead of red, green and blue filters on the pixels, many different spectral filters are deposited on the pixels, using wafer-scale, high-volume fabrication techniques. (Source: Spectricity)

 
Spectricity’s miniaturized spectral camera module, optimized for mobile devices.

“For example, you can accurately monitor how a person’s skin tone develops every day,” Vandevoorde said. “We can monitor blood flow in the skin, we can monitor moisture in the skin, we can detect melanoma and so on. These and many other things can be detected with these multispectral imagers.”
Spectricity has raised €28 million in funding since it was founded in 2018—and the startup has its own mass-production line at X-Fab, one of the company’s investors. “We have our machinery and our process installed there,” Vandevoorde said. “It’s now going through qualification—and by the end of the year, we’ll be ready for mass production to start shipping large volume to customers.” 

How imec finds the right trends to target
Spectricity is a good example of how imec spots a need and develops technology to meet that need. Spectroscopy, of course, is not new. It’s been around for decades, and researchers use it in labs to detect different materials and different gases. What’s new is that imec integrated spectroscopy onto CMOS technology and developed processes to produce it in high volumes for just a couple of dollars. Researchers worked on the idea for about 10 years—and once it was running on imec’s pilot line, the institute set up Spectricity to take it into mass production and develop applications around it. 

“We sniff around different trends,” said Xavier Rottenberg, scientific director and group leader of wave-based sensors and actuators at imec. “We’re in contact with a lot of players in the industry to get exposed to plenty of problems. Based on that, we develop a gut feeling. But gut feelings are dangerous, because it might be that you’re just hungry. However, with an educated gut feeling, sometimes your intuition is right.”

Once imec develops an idea in the lab, it takes the technology to its pilot line to develop a demonstrator. “We do proofs of concept to see how a device performs,” Rottenberg said. “Then we set up contacts in the ecosystem to form partnerships to bring the platform to a level where it can be mass-produced in an industrial fab.”

In some cases, an idea is too far out for partners to pick up for near-term profit. That’s when imec ventures out with a spinoff company, as it did with Spectricity.


Sony rebranding IMX sensors to LYTIA (?)

Image Sensors World

Link to full article: https://www.phonearena.com/news/sonys-image-sensor-makeover-imx-to-lytia-by-2026_id160402

Sony's image sensor makeover: IMX to LYTIA by 2026

... there's a buzz about Sony making a branding shift for its smartphone image sensors. According to a recent report, Sony is considering moving all its mobile image sensors, including the current IMX lineup, under the newer LYTIA brand. The company is gradually phasing out the IMX brand, and some IMX sensors have already been rebranded to LYTIA. Reportedly, the company plans to fully transition to the LYT lineup by 2026.

The report states that the 50MP IMX890 and IMX882 sensors have already been rebranded as LYT-701 and LYT-600. For instance, the LYT-600 is already used in the vivo X100 Ultra, launched in May this year.

A 100kfps X-ray imager

Image Sensors World

Marras et al. presented a paper titled "Development of the Continuous Readout Digitising Imager Array Detector" at the Topical Workshop on Electronics for Particle Physics 2023.

Abstract: The CoRDIA project aims to develop an X-ray imager capable of continuous operation in excess of 100 kframe/s. The goal is to provide a suitable instrument for Photon Science experiments at diffraction-limited Synchrotron Rings and Free Electron Lasers considering Continuous Wave operation. Several chip prototypes were designed in a 65 nm process: in this paper we will present an overview of the challenges and solutions adopted in the ASIC design.

Pixel-level programmable regions-of-interest for high-speed microscopy

Image Sensors World

Zhang et al. from MIT recently published a paper titled "Pixel-wise programmability enables dynamic high-SNR cameras for high-speed microscopy" in Nature Communications.

Abstract: High-speed wide-field fluorescence microscopy has the potential to capture biological processes with exceptional spatiotemporal resolution. However, conventional cameras suffer from low signal-to-noise ratio at high frame rates, limiting their ability to detect faint fluorescent events. Here, we introduce an image sensor where each pixel has individually programmable sampling speed and phase, so that pixels can be arranged to simultaneously sample at high speed with a high signal-to-noise ratio. In high-speed voltage imaging experiments, our image sensor significantly increases the output signal-to-noise ratio compared to a low-noise scientific CMOS camera (~2–3 folds). This signal-to-noise ratio gain enables the detection of weak neuronal action potentials and subthreshold activities missed by the standard scientific CMOS cameras. Our camera with flexible pixel exposure configurations offers versatile sampling strategies to improve signal quality in various experimental conditions.

 

a Pixels within an ROI capture spatiotemporally-correlated physiological activity, such as signals from somatic genetically encoded voltage indicators (GEVI). b Simulated CMOS pixel outputs with uniform exposure (TE) face the trade-off between SNR and temporal resolution. Short TE (1.25 ms) provides high temporal resolution but low SNR. Long TE (5 ms) enhances SNR but suffers from aliasing due to the low sample rate, causing spikes (10 ms interspike interval) to be indiscernible. Pixel outputs are normalized row-wise. Gray brackets: the zoomed-in view of the pixel outputs. c Simulated pixel outputs of the PE-CMOS. Pixel-wise exposure allows pixels to sample at different speeds and phases. Two examples: in the staggered configuration, the pixels sample the spiking activity with prolonged TE (5 ms) at multiple phases with offsets of Δ = 0, 1.25, 2.5, and 3.75 ms. This configuration maintains SNR and prevents aliasing, as a spike exceeding the temporal resolution of a single phase is captured by phase-shifted pixels. In the multiple exposure configuration, the ROI is sampled with pixels at different speeds, resolving high-frequency spiking activity and slowly varying subthreshold potentials that are challenging to acquire simultaneously at a fixed sampling rate. d The PE-CMOS pixel schematic with 6 transistors (T1-T6), a photodiode (PD), and an output (OUT). RST, TX, and SEL are row control signals. EX is a column signal that controls pixel exposure. e The pixel layout. The design achieves programmable pixel-wise exposure while maximizing the PD fill factor for high optical sensitivity.

 

a Maximum intensity projection of the sCMOS (Hamamatsu Orca Flash 4.0 v3) and the PE-CMOS videos of a cultured neuron expressing the ASAP3 GEVI protein. b ROI time series from the sCMOS sampled at 800 Hz with pixel exposure (TE) of 1.25 ms. Black trace: ROI time series. Gray traces: the time series each with 1/4 of the pixels of the ROI. Plotted signals are inverted from raw samples for visualization. c Simultaneously imaged ROI time series of the PE-CMOS. Colored traces: the time series of phase-shifted pixels at offsets (Δ) of 0, 1.25, 2.5, and 3.75 ms, each containing 1/4 of the pixels of the ROI. All pixels are sampled at 200 Hz with TE = 5 ms. Black trace: the interpolated ROI time series with an 800 Hz equivalent sample rate. Black arrows: an example showing a spike exceeding the temporal resolution of a single phase being captured by phase-shifted pixels. Black circles: an example subthreshold event barely discernible in the sCMOS output that is visible in the PE-CMOS output. d, e, f: same as panels (a, b, c), with an example showing a spike captured by the PE-CMOS but not resolvable in the sCMOS output due to low SNR (marked by the magenta arrow). g, h Comparison of signal quality from smaller ROIs covering parts of the cell membrane. Gray boxes: zoomed-in view of a few examples of putative spiking events. i SNR of putative spike events from ROIs in panel (g). A putative spiking event is recorded when the signals from either output exceed SNR > 5. Data are presented as mean values +/- SD, two-sided Wilcoxon rank-sum test for equal medians, n = 93 events, p = 2.99 × 10⁻²⁴. The gain is calculated as the spike SNR in the PE-CMOS divided by the SNR in the sCMOS. All vertical scales of SNR are 5 in all subfigures.

a The intracellular potential of the cell and the ROI GEVI time series of the PE-CMOS and sCMOS. GEVI pulse amplitude is the change in GEVI signal corresponding to each current injection pulse. It is measured as the difference between the average GEVI intensity during each current pulse and the average GEVI intensity 100 ms before and after the current injection pulse. GEVI pulse amplitude is converted into SNR by dividing by the noise standard deviation. b Maximum projection of the cell in the PE-CMOS and sCMOS. c Zoomed-in view of the intracellular voltage and GEVI pulses in (a). The red arrow indicates spike locations identified from the intracellular voltage. The black arrows indicate a time where the intracellular potential shows a flat response while the GEVI signals in both the PE-CMOS and sCMOS exhibit significant amplitude variations. These can be mistaken for spiking events. d Zoomed-in view of (c) showing that the PE-CMOS trace can resolve two spikes with a small inter-spike interval, while the sCMOS at 800 Hz and 200 Hz both fail to do so. The blue arrows point to the first spike evoked by the current pulse. While the sharp rising edges make such spikes especially challenging for image sensors to sample, the PE-CMOS preserves their amplitudes better than the sCMOS.
 

a Maximum intensity projection of the PE-CMOS videos, raw and filtered (2 × 2 spatial box filter) output at full spatial resolution. Intensity is measured in digital bits (range: 0–1023). b Maximum intensity projection divided into four sub-frames according to pixel sampling speed, each with 1/4 spatial resolution. c The ROI time series from pixels of different speeds (colored traces). Black trace: a 1040 Hz equivalent signal interpolated across all ROI pixels. d Fast-sampling pixels (520 Hz) resolve high-SNR spike bursts. e, f Pixels with more prolonged exposure (TE = 2.8–5.7 ms) improve SNR to detect weak subthreshold activity (e, black arrow) and a low-SNR spike (f). The vertical scale of SNR is 10 unless otherwise noted.


Open access article link: https://www.nature.com/articles/s41467-024-48765-5
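
The staggered configuration can also be mimicked numerically: several pixel groups integrate with the same long exposure but offset start times, and interleaving their outputs recovers a higher effective sample rate without shortening any single exposure. A toy numpy sketch; the signal shape and noise level are invented, while the 800 Hz base rate, 5 ms exposure, and four phases follow the figures above:

import numpy as np

rng = np.random.default_rng(0)
FS = 800                     # interleaved sample rate, Hz
t = np.arange(int(1.0 * FS)) / FS
signal = 1.0 + 0.5 * (np.sin(2 * np.pi * 3 * t) > 0.99)   # sparse "spikes"

def sample(signal, window, phase, noise=0.05):
    """Boxcar-integrate `window` base periods starting at `phase`, plus noise."""
    out = []
    for start in range(phase, len(signal) - window + 1, window):
        out.append(signal[start:start + window].mean() + rng.normal(0, noise))
    return np.array(out)

W = 4                                                # TE = 4 base periods = 5 ms
phases = [sample(signal, W, p) for p in range(W)]    # 4 groups, offset starts
# Interleave the phase-shifted streams back into one high-rate trace.
n = min(map(len, phases))
interleaved = np.stack([p[:n] for p in phases], axis=1).ravel()
print(len(interleaved), "samples at", FS, "Hz equivalent rate")

Each interleaved sample keeps the SNR of the 5 ms exposure while the combined stream is sampled at the 800 Hz base rate, which is the SNR-versus-speed trade the paper's pixel-wise exposure removes.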
