IISW 2025 Final Call for Papers is out


The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on Awaji Island in Hyōgo, Japan.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and the announcement of the International Image Sensor Society (IISS) award winners.

Submission of abstracts:
An abstract should consist of a single page of text (500 words maximum) with up to two pages of illustrations (three pages maximum in total), and include the authors' names, affiliations, mailing address, telephone number, and e-mail address.


The deadline for abstract submission is 11:59pm, Thursday Dec 19, 2024 (GMT).
To submit an abstract, please go to: https://cmt3.research.microsoft.com/IISW2025 


Space & Scientific CMOS Image Sensors Workshop


The preliminary program for the Space & Scientific CMOS Image Sensors Workshop, to be held on 26-27 November in Toulouse-Labège, is available.

Registration: https://evenium.events/space-and-scientific-cmos-image-sensors-2024/

Call for Nominations for the 2025 Walter Kosonocky Award


The International Image Sensor Society calls for nominations for the 2025 Walter Kosonocky Award for Significant Advancement in Solid-State Image Sensors.
 
The Walter Kosonocky Award is presented biennially for the best paper presented in any venue during the prior two years that represents a significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors. Personal tributes to Dr. Kosonocky appeared in the IEEE Transactions on Electron Devices in 1997. Founded in 1997 by his colleagues in industry, government, and academia, the award is also funded by proceeds from the International Image Sensor Workshop.
 
The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS). The winner receives a certificate, complimentary registration to the IISW, and an honorarium.
 
Please send us an email nomination for this year's award with a PDF file of the nominated paper (the paper you judge to be the best published or presented in calendar years 2023 and 2024), as well as a brief description (fewer than 100 words) of your reason for nominating it. Nominating a paper from your own company or institute is also welcome.
 
The deadline for receiving nominations is January 15th, 2025.
 
Your nominations should be sent to Yusuke Oike (2025nominations@imagesensors.org), Secretary of the IISS Award Committee.


Single Photon Workshop 2024 Program Available


The 11th Single Photon Workshop will be held at the Edinburgh International Conference Centre (EICC) over five days, 18-22 November 2024.

The full program is available here: https://fitwise.eventsair.com/2024singlephotonworkshop/programme

Here are some image-sensor specific sessions and talks:

Wednesday Nov 20, 2024 Session Title: Superconducting Photon Detectors 1
Chair: Dmitry Morozov
4:40 PM - 5:10 PM
Demonstration of a 400,000 pixel superconducting single-photon camera
Invited Speaker - Adam McCaughan - National Institute of Standards and Technology (NIST)
5:10 PM - 5:15 PM
Company Symposium: Photon Spot Platinum Sponsor Speaker: Vikas Anant
5:15 PM - 5:30 PM
Development of Superconducting Wide Strip Photon Detector Paper Number: 112 Speaker: Shigehito Miki - National Institute of Information and Communications Technology (NICT)
5:30 PM - 5:45 PM
Superconducting nanowire single photon detectors arrays for quantum optics Paper Number: 34 Speaker: Val Zwiller - KTH Royal Institute of Technology
5:45 PM - 6:00 PM
Single photon detection up to 2 µm in pair of parallel microstrips based on NbRe ultrathin films
Paper Number: 80 Speaker: Loredana Parlato - University of Naples Federico II
6:00 PM - 6:15 PM
Reading out SNSPDs with Opto-Electronic Converters Paper Number: 87 Speaker: Frederik Thiele - Paderborn University
6:15 PM - 6:30 PM
Development of Mid to Far-Infrared Superconducting Nanowire Single Photon Detectors Paper Number: 195 Speaker: Sahil Patel - California Institute of Technology

Thursday Nov 21, 2024 Session Title: Superconducting Photon Detectors 2
Chair: Martin J Stevens
8:30 AM - 8:45 AM
Opportunities and challenges for photon-number resolution with SNSPDs Paper Number: 148 Speaker: Giovanni V Resta - ID Quantique
8:45 AM - 9:00 AM
Detecting molecules at the quantum yield limit for mass spectroscopy with arrays of NbTiN superconducting nanowire detectors Paper Number: 61 Speaker: Ronan Gourgues - Single Quantum
9:00 AM - 9:30 AM
Current state of SNSPD arrays for deep space optical communication Invited Speaker - Emma E Wollman - California Institute of Technology
9:30 AM - 9:35 AM
Company Symposium: Quantum Opus/MPD presentation Platinum Sponsors
9:35 AM - 9:50 AM
Novel kinetic inductance current sensor for transition-edge sensor readout Paper Number: 238 Speaker: Paul Szypryt - National Institute of Standards and Technology (NIST)
9:50 AM - 10:05 AM
Quantum detector tomography for high-Tc SNSPDs Paper Number: 117 Speaker: Mariia Sidorova - Humboldt University of Berlin
10:05 AM - 10:20 AM
Enhanced sensitivity and system integration for infrared waveguide-integrated superconducting nanowire single-photon detectors Paper Number: 197 Speaker: Adan Azem - University of British Columbia

 

Thursday Nov 21, 2024 Session Title: SPADs 1
Chair: Chee Hing Tan
11:00 AM - 11:30 AM
A 3D-stacked SPAD Imager with Pixel-parallel Computation for Diffuse Correlation Spectroscopy
Invited Speaker - Robert Henderson - University of Edinburgh
11:30 AM - 11:45 AM
High temporal resolution 32 x 1 SPAD array module with 8 on-chip 6 ps TDCs
Paper Number: 182 Speaker: Chiara Carnati - Politecnico di Milano
11:45 AM - 12:00 PM
A 472 x 456 SPAD Array with In-Pixel Temporal Correlation Capability and Address-Based Readout for Quantum Ghost Imaging Applications
Paper Number: 186 Speaker: Massimo Gandola - Fondazione Bruno Kessler
12:00 PM - 12:15 PM
High Performance Time-to-Digital Converter for SPAD-based Single-Photon Counting applications
Paper Number: 181 Speaker: Davide Moschella - Politecnico di Milano
12:15 PM - 12:30 PM
A femtosecond-laser-written programmable photonic circuit directly interfaced to a silicon SPAD array
Paper Number: 271 Speaker: Francesco Ceccarelli - Istituto di Fotonica e Nanotecnologie (CNR-IFN)

Thursday Nov 21, 2024 Session Title: SPADs 2
Chair: Alberto Tosi
2:00 PM - 2:30 PM
Ge-on-Si Technology Enabled SWIR Single-Photon Detection
Invited Speaker - Neil Na - Artilux
2:30 PM - 2:45 PM
The development of pseudo-planar Ge-on-Si single-photon avalanche diode detectors for photon detection in the short-wave infrared spectral region
Paper Number: 254 Speaker: Lisa Saalbach - Heriot-Watt University
2:45 PM - 3:00 PM
Hybrid integration of InGaAs/InP single photon avalanche diodes array and silicon photonics chip
Paper Number: 64 Speaker: Xiaosong Ren - Tsinghua University
3:00 PM - 3:15 PM
Dark Current and Dark Count Rate Dependence on Anode Geometry of InGaAs/InP Single-Photon Avalanche Diodes
Paper Number: 248 Speaker: Rosemary Scowen - Toshiba Research Europe
3:15 PM - 3:30 PM
Compact SAG-based InGaAs/InP SPAD for 1550nm photon counting
Paper Number: 111 Speaker: Ekin Kizilkan - École Polytechnique Fédérale de Lausanne (EPFL)

Thursday Nov 21, 2024 Session Title: Single-photon Imaging and Sensing 1
Chair: Aurora Maccarone
4:15 PM - 4:45 PM
Single Photon LIDAR goes long Range
Invited Speaker - Feihu Xu - USTC China
4:45 PM - 5:00 PM
The Deep Space Optical Communication Photon Counting Camera
Paper Number: 11 Speaker: Alex McIntosh - MIT Lincoln Laboratory
5:00 PM - 5:15 PM
Human activity recognition with Single-Photon LiDAR at 300 m range
Paper Number: 232 Speaker: Sandor Plosz - Heriot-Watt University
5:15 PM - 5:30 PM
Detection Times Improve Reflectivity Estimation in Single-Photon Lidar
Paper Number: 273 Speaker: Joshua Rapp - Mitsubishi Electric Research Laboratories
5:30 PM - 5:45 PM
Bayesian Neuromorphic Imaging for Single-Photon LiDAR
Paper Number: 57 Speaker: Dan Yao - Heriot-Watt University
5:45 PM - 6:00 PM
Single Photon FMCW LIDAR for Vibrational Sensing and Imaging
Paper Number: 23 Speaker: Theodor Staffas - KTH Royal Institute of Technology

Friday Nov 22, 2024 Session Title: Single-photon Imaging 2
9:00 AM - 9:15 AM
Quantum-inspired Rangefinding for Daytime Noise Resistance
Paper Number: 208 Speaker: Weijie Nie - University of Bristol
9:15 AM - 9:30 AM
High resolution long range 3D imaging with ultra-low timing jitter superconducting nanowire single-photon detectors
Paper Number: 296 Speaker: Aongus McCarthy - Heriot-Watt University
9:30 AM - 9:45 AM
A high-dimensional imaging system based on an SNSPD spectrometer and computational imaging
Paper Number: 62 Speaker: Mingzhong Hu - Tsinghua University
9:45 AM - 10:00 AM
Single-photon detection techniques for real-time underwater three-dimensional imaging
Paper Number: 289 Speaker: Aurora Maccarone - Heriot-Watt University
10:00 AM - 10:15 AM
Photon-counting measurement of singlet oxygen luminescence generated from PPIX photosensitizer in biological media
Paper Number: 249 Speaker: Vikas - University of Glasgow
10:15 AM - 10:30 AM
A Plug and Play Algorithm for 3D Video Super-Resolution of single-photon data
Paper Number: 297 Speaker: Alice Ruget - Heriot-Watt University

Friday Nov 22, 2024 Session Title: Single-photon Imaging and Sensing 2
11:00 AM - 11:30 AM
Hyperspectral Imaging with Mid-IR Undetected Photons
Invited Speaker - Sven Ramelow - Humboldt University of Berlin
11:30 AM - 11:45 AM
16-band Single-photon imaging based on Fabry-Perot Resonance
Paper Number: 35 Speaker: Chufan Zhou - École Polytechnique Fédérale de Lausanne (EPFL)
11:45 AM - 12:00 PM
High-frame-rate fluorescence lifetime microscopy with megapixel resolution for dynamic cellular imaging
Paper Number: 79 Speaker: Euan Millar - University of Glasgow
12:00 PM - 12:15 PM
Beyond historical speed limitation in time correlated single photon counting without distortion: experimental measurements and future developments
Paper Number: 237 Speaker: Giulia Acconcia - Politecnico di Milano
12:15 PM - 12:30 PM
Hyperspectral mid-infrared imaging with undetected photons
Paper Number: 268 Speaker: Emma Pearce - Humboldt University of Berlin
12:30 PM - 12:45 PM
Determination of scattering coefficients of brain tissues by wide-field time-of-flight measurements with single photon camera.
Paper Number: 199 Speaker: André Stefanov - University of Bern


Image sensor basics


These lecture slides by Prof. Yuhao Zhu at the University of Rochester are a great first introduction to how an image sensor works. For the full slide deck, visit: https://www.cs.rochester.edu/courses/572/fall2022/decks/lect10-sensor-basics.pdf
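To make the basics concrete, here is a minimal numerical sketch of the photon-to-digital-number chain such introductions walk through (photon shot noise, quantum efficiency, full-well saturation, read noise, conversion gain, and quantization). The parameter values below are illustrative assumptions, not numbers taken from the slides.

import numpy as np

rng = np.random.default_rng(0)

# Illustrative pixel parameters (assumed values, not from the slide deck).
QE = 0.8            # quantum efficiency, electrons per photon
FULL_WELL = 10_000  # full-well capacity, electrons
READ_NOISE = 2.0    # read noise, electrons RMS
CONV_GAIN = 0.25    # conversion to digital numbers (DN) per electron
ADC_BITS = 12       # ADC resolution

def expose(photons):
    """Convert a map of incident photon counts to ADC output codes."""
    electrons = rng.binomial(photons, QE)           # shot noise + QE
    electrons = np.minimum(electrons, FULL_WELL)    # full-well saturation
    electrons = electrons + rng.normal(0.0, READ_NOISE, np.shape(photons))
    dn = np.round(electrons * CONV_GAIN)            # conversion gain + ADC
    return np.clip(dn, 0, 2**ADC_BITS - 1).astype(np.uint16)

# Example: a flat field of 1000 photons per pixel on a 4 x 4 array.
print(expose(np.full((4, 4), 1000)))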


SLVS-EC IF Standard v3 released


Link: http://jiia.org/en/slvs-ec-if-standard-version-3-0-has-been-released/

The Embedded Vision I/F Working Group has released the SLVS-EC IF Standard Version 3.0.
Version 3.0 supports up to 10 Gbps/lane, twice the speed of Version 2.0, and improves data transmission efficiency.
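For a rough sense of what that lane rate buys, the sketch below estimates an upper-bound frame rate over an SLVS-EC v3.0 link. The lane count, sensor format, and bit depth are assumptions for illustration, and encoding/protocol overhead (which reduces usable bandwidth) is ignored.

# Back-of-envelope frame-rate ceiling for an SLVS-EC v3.0 link.
# Assumed: 8 lanes and a 4096 x 3072, 12-bit sensor; protocol overhead
# and blanking are ignored, so real-world rates will be lower.
LANE_RATE_BPS = 10e9                      # 10 Gbps/lane per Version 3.0
lanes = 8
width, height, bits = 4096, 3072, 12

bits_per_frame = width * height * bits    # raw payload per frame
link_capacity = lanes * LANE_RATE_BPS     # total link bandwidth, bits/s
print(f"ceiling: ~{link_capacity / bits_per_frame:.0f} fps")   # ~530 fps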

Link: https://www.m-pression.com/solutions/hardware/slvs-ec-rx-30-ip

The SLVS-EC v3.0 Rx IP is an interface IP core that runs on Altera® FPGAs. Using this IP, you can quickly and easily implement products that support the latest SLVS-EC standard, v3.0. An evaluation kit is also available for early adoption.

  •  Altera® FPGAs can receive signals directly from the SLVS-EC interface.
  •  Compatible with the latest SLVS-EC Specification, Version 3.0.
  •  Supports a powerful de-skew function, enabling board design without accounting for lane-to-lane skew.
  •  An evaluation kit (see below) is available for speedy evaluation at the actual device level.

About SLVS-EC:

SLVS-EC (Scalable Low Voltage Signaling with Embedded Clock) is an interface standard for high-speed & high-resolution image sensors developed by Sony Semiconductor Solutions Corporation. The SLVS-EC standard is standardized by JIIA (Japan Industrial Imaging Association).




Emberion 50 euro CQD SWIR imager



From: https://invision-news.de/allgemein/extrem-kostenguenstiger-swir-sensor/

Emberion is introducing an extremely cost-effective SWIR sensor that covers a range from 400 to 2,000 nm and whose manufacturing cost in large quantities is less than €50. The sensors are smaller and lighter, which broadens the application possibilities of this technology across a wide range of uses. They combine Emberion's existing patented quantum dot technology with its patented wafer-level packaging.

Press release from Emberion: https://www.emberion.com/emberion-oy-introduces-groundbreaking-ultra-low-cost-swir-sensor/

The unique SWIR image sensor's manufacturing cost is less than €50 in large-volume production.

Espoo, Finland — October 1, 2024 — The current cost level of SWIR imaging technology seriously limits the use of SWIR imaging in a variety of industrial, defense & surveillance, automotive, and professional/consumer applications. Emberion Oy, a leading innovator in quantum-dot-based shortwave infrared sensing technology, is excited to announce its new ultra-low-cost SWIR (Short-Wave Infrared) sensor, which brings sensor production cost down to the €50 level in large volumes. This revolutionary product is set to deliver high-performance infrared imaging to truly mass-market applications such as automotive and consumer electronics, as well as to enable much wider deployment of SWIR imaging in industrial, defence, and surveillance applications. The revolutionary sensors are also smaller in size and weight, further extending the possibilities to use this technology in a variety of use cases. Emberion is already shipping extended-range high-speed SWIR cameras and will bring the first ultra-low-cost sensor-based products to the market in 2025.

 

Bringing Advanced Imaging to Everyday Devices at a fraction of current cost

The new Emberion sensor family is designed to make advanced shortwave infrared technology accessible to wider markets, including large volume markets such as automotive sensing and consumer electronics. The new ultra-low cost SWIR sensor combines Emberion’s existing patented quantum dot sensor technology with Emberion’s patented wafer-level packaging to drastically reduce the manufacturing costs of packaged sensors. Current InGaAs and quantum dot based image sensors are typically packaged in metal or ceramic casings with a total production cost for packaged imagers in the range of several hundred euros to a few thousand euros depending on sensor technology, imager wavelength range, packaging choices and production volumes. Emberion’s sensors are manufactured and packaged on a full wafer with up to 100 imagers on a single 8” wafer, making the production cost of a single sensor to be a fraction of current alternatives. In addition to low cost, the sensor enables high integration of functionality into the in-house designed read-out IC, reduces size and weight, and provides stability in performance, enabling new functionalities in everyday technology that were once only available in high-end or niche markets.

Examples of applications that require low-cost, compact sensors:

  • Automotive Industry: Advanced driver-assistance systems (ADAS) with improved visibility in demanding weather conditions for increased safety and performance.
  • Consumer Electronics: Integrating SWIR sensors into smartphones and wearable devices, allowing for facial recognition in all lighting conditions, gesture control, and material identification.
  • Augmented and Virtual Reality (AR/VR): Enabling more accurate environmental sensing for immersive, real-world interaction in AR/VR environments.
  • Drones: Precision vision systems for navigation and object detection in both consumer and defence markets.

Some of the key benefits of the Emberion SWIR sensor include:

  • Cost Efficiency: Thanks to wafer-level packaging, the production process is streamlined, making this sensor an order of magnitude more affordable than existing SWIR solutions. In addition, the high sensor integration level, with image processing embedded in the sensor, significantly decreases the need for image post-processing and for camera components at the system level.
  • Size, weight and power (SWaP) optimization: The miniature and power efficient design is ideal for space-constrained applications like consumer electronics and automotive components. The high sensor integration level is also a significant contributor to the system SWaP optimization.
  • Stability: The wafer-level packaging improves the sensor stability and protection and makes it suitable for demanding environments like automotive and outdoor applications. It can also be integrated into external packaging if needed, e.g. LCC or metal packaging.
  • Extended Wavelength Sensitivity: Covering a range from 400 nm to 2000 nm, extending the spectral range beyond traditional SWIR sensors and making the sensor ideal for detecting a wide variety of objects and scenes.


CVSENS raises series A funding


CVSENS is a high-performance CIS design company headquartered in Shenzhen:  http://www.cvsens.com/language/en/

Original news in Chinese: https://laoyaoba.com/n/919232

Translation from Google Translate:

AVC Semiconductor completes a new round of financing of hundreds of millions of yuan to accelerate the localization of high-end CMOS image sensor chips

Recently, CVSENS successfully completed its Series A financing of several hundred million yuan. The round was led by Hanlian Semiconductor Industry Fund, with co-investment from the Zhejiang University Education Foundation and Shanghai Anchuang Chuangxin, indicating the market's strong recognition of, and confidence in, CVSENS.

As a leading CMOS image sensor chip developer in China, Chuangshi Semiconductor focuses on the design and development of high-value-added CMOS image sensor chips and is committed to providing customers with higher-quality, more efficient services and products. With more than 15 years of experience in high-end product development, the core team of Chuangshi Semiconductor has broken through the core technology barriers of high-end CIS in various application fields. To date, more than ten CIS chips have been launched, all successfully taped out on the first attempt, covering application areas such as smart security, low-power IoT, smart cars, and machine vision. Many industry-first innovative products have won unanimous praise from clients. Going forward, Chuangshi Semiconductor will continue to deepen its image sensor technology, promote industrial upgrading, and lead new directions in the industry's development.

Hanlian Semiconductor Industry Fund said: We are optimistic about the huge development space in the field of image sensors and the market opportunities for domestic manufacturers. The Chuangshi Semiconductor team has excellent technical capabilities, business focus and product innovation capabilities, and is a new force in the industry with comprehensive competitiveness. At the same time, working with Chuangshi Semiconductor is an important part of Hanlian Semiconductor Industry Fund's layout in the field of vision. We hope that in the future, Chuangshi Semiconductor will work closely with other projects in our system to jointly develop first-class products and forward-looking innovative technologies in the industry, and provide better product solutions for more application scenarios.

Shanghai Anchuang Chuangxin Enterprise Management Consulting Partnership stated: As a corporate consulting and investment institution focusing on the high-tech field, we are very optimistic about the image sensor chip R&D team of AVC Semiconductor and its outstanding product innovation capabilities. This investment not only provides financial support for AVC Semiconductor, but also uses our ecosystem resources and industry-leading technologies to provide AVC Semiconductor with in-depth industrial links through innovation empowerment, helping to achieve longer-term development goals. We are full of confidence in participating in this investment in AVC Semiconductor, and look forward to helping AVC Semiconductor achieve greater success in technological innovation, market expansion and brand building, and work with AVC Semiconductor to create a new chapter in the image sensor industry.

The founder of AVC Semiconductor said: "I am very honored to receive joint investment from Hanlian Semiconductor Industry Fund, the Education Foundation of my alma mater Zhejiang University, and Shanghai Anchuang Chuangxin Enterprise Management Consulting Partnership. This is not only a recognition of AVC Semiconductor's past achievements, but will also help the company further promote technological innovation, enhance market competitiveness, and inject vitality into the company's long-term development. Since its establishment, AVC Semiconductor has been focusing on the research and development and innovation of CMOS image sensor chips. Its products and services are widely used in many fields such as automotive vision, smart security, low-power IoT, machine vision, and medical vision, constantly promoting technological progress and meeting market demand. We also look forward to working with more partners to jointly promote the innovative development of the image sensor industry."

Transvision Semiconductor will continue to take technological innovation as its core driving force, uphold the core values of gratitude, pragmatism, and the courage to innovate, actively seize market opportunities, continuously expand market share, strengthen industry-chain collaboration, and pursue sustainable development, aiming to become a globally leading CIS solution provider that offers customers higher-quality, more efficient services and products.


Galaxycore chip-on-module packaging for CIS


Link: https://en.gcoreinc.com/news/detail-69

The performance of an image sensor relies not only on its design and manufacturing but also on the packaging technology.

CIS packaging is particularly challenging, as any particle in the environment that drops onto the sensor surface during the process can significantly degrade the final image quality. GalaxyCore's COM (Chip on Module) packaging technology has revolutionized the traditional CSP (Chip Scale Package) and COB (Chip on Board) methods, enhancing the performance, reliability, and applicability of the optical system of camera modules.

Birth of the COM Packaging

Before the advent of COM packaging, CSP and COB were the predominant packaging choices for CIS. CSP places a layer of glass on the sensor to prevent dust. However, the glass also reflects some light, thus degrading image quality. COB requires an exceptionally demanding environment, typically a Class 100 clean room.

Is there an alternative? GalaxyCore's technical team developed an innovative solution: directly suspending gold wires to serve as pins. At the microscopic scale, a short gold wire becomes stiff and elastic enough to be used directly as a pin.

At GalaxyCore's Class 100 clean rooms in its packaging and testing factory in Jiashan City, Zhejiang Province, fully automated high-precision equipment bonds the gold wire to the image sensor with exacting accuracy. The sensor is then mounted on a filter base, and the other end of the gold wire is left suspended as the pin. The pin is subsequently soldered by the camera module manufacturer to the FPCB. When assembled with a lens and actuator, a complete camera module is formed.

We were pleasantly surprised to discover that the performance and reliability of the COM packaging are on par with, or even exceed, those of high-end COB packaging.

Three Advantages for System-level Improvement

1. Enhanced Optical System Performance
The COM packaging notably enhances the optical system performance of camera modules. In the COB packaging, the chip is mounted directly on the FPCB. However, the FPCB is prone to deformation during production, which can tilt the optical axis and degrade image quality.
In GalaxyCore's COM packaging, both the chip and the lens use the filter base as their reference, mitigating the optical-axis tilt caused by FPCB deformation. This significantly improves the edge resolution of images, especially in large-aperture, high-pixel-count camera modules.

2. Improved Module Reliability and Flexibility
In the COM packaging, because there is some distance between the chip and the FPCB, the camera module can withstand greater back pressure, improving the module's reliability and durability.
In the COB packaging, the CIS mounted directly on the FPCB is more sensitive to back pressure, and the SFR (i.e., image resolution) is more likely to be affected. By contrast, in the COM packaging the CIS chip is relatively isolated and suspended, making it hard for back pressure to act directly on the chip, so better image resolution can be achieved. Unlike the COB packaging, the COM packaging connects the chip pins and pads through soldering. This solution reduces the material requirements for the FPCB and further enhances its adaptability and flexibility.

3. Minimized Module
In the COM packaging, the FPCB can be hollowed out to allow the chip to sink into it. Compared with the COB packaging, which mounts the chip directly on the FPCB or reinforces it with steel sheets, the COM solution controls back pressure more effectively and relaxes the requirements on steel-sheet thickness. This reduces the overall height of the packaged module, meeting cell phones' stringent space requirements, an advantage most notable in devices pursuing thin and light designs.

GalaxyCore’s COM packaging ensures both high performance and reliability for the optical system while simplifying the subsequent production processes for module manufacturers. This method reduces the dependence on dust-free environments and enhances quality, yield, and efficiency. With the mass production of COM chips and further application of this technology, it will deliver improved imaging performance across a broader range of end products.


EI2025 late submissions deadline tomorrow Oct 15, 2024


Electronic Imaging 2025 is accepting submissions; the late-submission deadline is tomorrow (Oct 15, 2024). The Electronic Imaging Symposium comprises 17 technical conferences to be held in person at the Hyatt Regency San Francisco Airport in Burlingame, California.


IMPORTANT DATES

Journal-first (JIST/JPI) submissions due: 15 Aug
Final journal-first manuscripts due: 31 Oct
Late submission deadline: 15 Oct
FastTrack proceedings manuscripts due: 8 Jan 2025
All outstanding manuscripts due: 21 Feb 2025
Registration opens: mid-Oct
Demonstration applications due: 21 Dec
Early registration ends: 18 Dec
Hotel reservation deadline: 10 Jan
Symposium begins: 2 Feb
Non-FastTrack proceedings manuscripts due: 21 Feb

There are three submission options to fit your publication needs: journal, conference, and abstract-only.




Another PhD Defense Talk on Event Cameras


Thesis title: A Scientific Event Camera: Theory, Design, and Measurements
Author: Rui Graça
Advisor: Tobi Delbrück


See also the earlier post with the thesis abstract and a link to the full text: https://image-sensors-world.blogspot.com/2024/08/phd-thesis-on-scidvs-event-camera.html

The full thesis text will be available here after the embargo ends in July 2026: https://www.research-collection.ethz.ch/handle/20.500.11850/683623


Artilux paper on room temperature quantum computing using Ge-Si SPADs


Neil Na et al. from Artilux and UMass Boston have published a paper titled "Room-temperature photonic quantum computing in integrated silicon photonics with germanium–silicon single-photon avalanche diodes" in APL Quantum.

Abstract: Most, if not all, photonic quantum computing (PQC) relies upon superconducting nanowire single-photon detectors (SNSPDs) typically based on niobium nitride (NbN) operated at a temperature <4 K. This paper proposes and analyzes 300 K waveguide-integrated germanium–silicon (GeSi) single-photon avalanche diodes (SPADs) based on the recently demonstrated normal-incidence GeSi SPADs operated at room temperature, and shows that their performance is competitive against that of NbN SNSPDs in a series of metrics for PQC with a reasonable time-gating window. These GeSi SPADs become photon-number-resolving avalanche diodes (PNRADs) by deploying a spatially-multiplexed M-fold-waveguide array of M GeSi SPADs. Using on-chip waveguided spontaneous four-wave mixing sources and waveguided field-programmable interferometer mesh circuits, together with the high-metric SPADs and PNRADs, high-performance quantum computing at room temperature is predicted for this PQC architecture.

Link: https://doi.org/10.1063/5.0219035
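The paper's photon-number-resolving scheme builds a PNRAD from M spatially-multiplexed binary SPADs: an N-photon state is counted correctly only when every photon lands on a distinct SPAD. The sketch below evaluates that standard combinatorial probability under idealized assumptions (uniform splitting, unit detection efficiency, no dark counts); it is a simplified stand-in, not the paper's full model.

from math import perm

def p_number_resolved(n_photons, m_detectors):
    """Probability that n photons, split uniformly over m binary SPADs,
    all hit distinct detectors, so the click count equals n. Idealized:
    uniform splitting, unit efficiency, no dark counts."""
    return perm(m_detectors, n_photons) / m_detectors**n_photons

# More SPADs per PNRAD -> better photon-number resolution.
for m in (8, 16, 64):
    print(f"M = {m:2d}: P(resolve 4 photons) = {p_number_resolved(4, m):.3f}")
# Prints roughly 0.410, 0.667, and 0.909.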

Schematic plot of the proposed room-temperature PQC paradigm with integrated SiPh using the path degree of freedom of single photons: single photons are generated through SFWM (green pulses converted to blue and red pulses) in SOI rings (orange circles), followed by active temporal multiplexers (orange boxes that block the blue pulses), and active spatial multiplexers (orange boxes that convert serial pulses to parallel pulses) (quantum sources), manipulated by a FPIM using cascaded MZIs (quantum circuits), and measured by the proposed waveguide GeSi SPADs as SPDs and/or NPDs (quantum detectors). An application-specific integrated circuit (ASIC) layer is assumed to be flipped and bonded on the PIC layer with copper (Cu)–Cu pillars (yellow lines) connected wafer-level hybrid bond, or with metal bumps (yellow lines) connected chip-on-wafer-on-substrate (CoWoS) packaging. The off-chip fiber couplings are either for the pump lasers or the optical delay lines.

 (a) Top view of the proposed waveguide GeSi SPAD, in which the materials assumed are listed. (b) Cross-sectional view of the proposed waveguide GeSi SPAD, in which the variables for optimizing QE are illustrated.

(a) QE of the proposed waveguide GeSi SPAD without the Al back mirror, simulated at 1550 nm as a function of coupler length and Ge length. (b) QE of the proposed waveguide GeSi SPAD with the Al back mirror, simulated at 1550 nm as a function of gap length and Ge length. (c) QE of the proposed waveguide GeSi SPAD with the Al back mirror, simulated as a function of wavelength centered at 1550 ± 50 nm (around the C band) and 1310 ± 50 nm (around the O band), given the optimal conditions, that is, coupler length equal to 1.4 μm, gap length equal to 0.36 μm, and Ge length equal to 14.2 μm. While the above data are obtained by 2D FDTD simulations, we also verify that for Ge width >1 μm and mesa design rule <200 nm, there is little difference between the data obtained by 2D and 3D FDTD simulations.


Dark current of GeSi PD at −1 V reverse bias, normalized by its active region circumference, plotted as a function of active region diameter. The experimental data (blue dots) consist of the average dark current between two device repeats (the ratio of the standard deviation to the average is <2%) for five different active region diameters. The linear fitting (red line) shows the bulk dark current density and the surface dark current density with its slope and intercept, respectively.



For the scheme of photon-based PQC: (a) the probability of successfully detecting the N-photon state and (b) the fidelity of detecting the N-photon state, using M spatially-multiplexed waveguide GeSi SPADs at 300 K as an NPD. (c) The difference in the probabilities of successfully detecting the N-photon state, and (d) the difference in the fidelities of detecting the N-photon state, using M spatially-multiplexed waveguide GeSi SPADs at 300 K and NbN SNSPDs at 4 K as NPDs. Note that no approximation is used in the formulas for plotting these figures.



For the scheme of qubit-based PQC: (a) the probability of successfully detecting the N-qubit state and the fidelity of detecting the N-qubit state, using single waveguide GeSi SPADs at 300 K as SPDs. (b) The difference in the probabilities of successfully detecting the N-qubit state and the difference in the fidelities of detecting the N-qubit state, using single waveguide GeSi SPADs at 300 K and NbN SNSPDs at 4 K as SPDs. Note that no approximation is used in the formulas for plotting these figures.


Image sensors review paper


Eric Fossum, Nobukazu Teranishi, and Albert Theuwissen have published a review paper titled "Digital Image Sensor Evolution and New Frontiers" in the Annual Review of Vision Science.

Link: https://doi.org/10.1146/annurev-vision-101322-105538

Abstract:

This article reviews nearly 60 years of solid-state image sensor evolution and identifies potential new frontiers in the field. From early work in the 1960s, through the development of charge-coupled device image sensors, to the complementary metal oxide semiconductor image sensors now ubiquitous in our lives, we discuss highlights in the evolutionary chain. New frontiers, such as 3D stacked technology, photon-counting technology, and others, are briefly discussed.



Figure 1  Illustration of a four-phase charge-coupled device diagram, a potential well diagram, and clock charts. As four clocks switch sequentially, the potential wells move rightward together with the charge packets.

Figure 2  Illustration of a (three-phase) interline-transfer (ILT) charge-coupled device (CCD) showing (left) a unit cell with a photodiode (PD) and vertical CCD and (right) the entire ILT CCD image sensor. The photosignal moves from the PD into the vertical CCD, and then into the horizontal CCD to the sense node and output amplifier.



Figure 3  A pinned PD in an interline-transfer CCD with one phase of the CCD shift register (VCCD) shown. (a) A physical cross-section and (b) a potential diagram showing the electrons transferring from the PD to the VCCD. Abbreviations: CCD, charge-coupled device; CS, channel stop; PD, photodiode; TG, transfer gate; VCCD, vertical CCD.



Figure 4  Microlenses to concentrate light on the photoactive area of a pixel. (a) Top view. (b) Cross-sections for different thermal-flow times. Images courtesy of NEC Corp.

Figure 5  A 16-Mpixel stitched complementary metal oxide semiconductor image sensor on a 6-inch-diameter wafer. Figure reproduced from Ay & Fossum (2006).


Figure 6  (a) Complementary metal oxide semiconductor (CMOS) image sensor block diagram. (b) Photograph of early Photobit CMOS image sensor chip for webcams. (Left) Digital logic for control and input-output (I/O) functions. (Top right) The pixel array. (Bottom right) The column-parallel analog signal processing and analog-to-digital converter (ADC) circuits. Photo courtesy of E.R.F.


Figure 7  An illustrative PPD 4-T active pixel with intrapixel charge transfer. (a) A circuit schematic (Fossum & Hondongwa 2014). (b) A band diagram looking vertically through the PPD showing the photon, electron–hole pair, and SW. (c) A physical cross-section showing doping levels (Fossum 2023). Abbreviations: COL BUS, column bus line; FD, floating diffusion; PPD, pinned photodiode; RST, reset gate; SEL, select gate; SF, source-follower; SW, storage well; TG, transfer gate.



Figure 8  Illustrative example of (a) a frontside-illuminated pixel and (b) a backside-illuminated (BSI) pixel showing the better light gathering capability of the BSI pixel.



Figure 9  Illustrative cross-sectional comparison of (a) a backside-illuminated device and (b) 3D stacked image sensors where the lower layer is used for additional circuitry.



Figure 10  Quanta image sensor concept showing the spatial distribution of jot outputs (left), an expanded view of jot output bit planes at different time slices (center), and gray-scale image pixels formed from spatiotemporal neighborhoods of jots (right). Figure adapted from Ma et al. (2022a).
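As a toy illustration of the jot-to-pixel formation in Figure 10, the sketch below sums single-bit jot outputs over a spatiotemporal neighborhood to form multi-bit gray-scale pixels. The array sizes and hit probability are arbitrary assumptions, not values from the paper.

import numpy as np

rng = np.random.default_rng(1)

# Toy QIS data: T one-bit planes of J x J jots (assumed sizes).
T, J, BLOCK = 16, 64, 4   # time slices, jot grid side, spatial kernel side
P_HIT = 0.2               # assumed per-jot probability of >= 1 photoelectron
bit_planes = rng.random((T, J, J)) < P_HIT

# Gray-scale pixels from spatiotemporal neighborhoods: sum the one-bit
# jot outputs over all T planes and over BLOCK x BLOCK spatial blocks.
temporal_sum = bit_planes.sum(axis=0)                        # shape (J, J)
pixels = temporal_sum.reshape(J // BLOCK, BLOCK,
                              J // BLOCK, BLOCK).sum(axis=(1, 3))
print(pixels.shape)       # (16, 16) pixels, each at most T * BLOCK**2 = 256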


Hamamatsu completes acquisition of NKT Photonics


Press release: https://www.hamamatsu.com/us/en/news/featured-products_and_technologies/2024/20240531000000.html

Acquisition completion of NKT Photonics. Accelerating growth in the semiconductor, quantum, and medical fields through laser business enhancement.

Hamamatsu Photonics K.K. (hereinafter referred to as “Hamamatsu Photonics”) is pleased to announce the completion of the previously published acquisition of NKT Photonics A/S (hereinafter referred to as “NKT Photonics”).
 
NKT Photonics is the leading supplier of high-performance fiber lasers and photonic crystal fibers. Based on their unique fiber technology, the laser products fall within three major product lines:

  1.  Supercontinuum White Light Lasers (SuperK): The SuperK lasers deliver high brightness in a broad spectral range (400 nm-2500 nm), and are used within bio-imaging, semiconductor metrology, and device-characterization.
  2.  Single-Frequency DFB Fiber Lasers (Koheras): The Koheras lasers have extremely high wavelength stability and low noise, and are ideal for fiber sensing, quantum computing, and quantum sensing.
  3.  Ultra-short pulse Lasers (aeroPULSE and Origami): This range of lasers consists of picosecond and femtosecond pulsed lasers with excellent beam quality and stability. The lasers are mainly used within ophthalmic surgery, bio-imaging, and optical processing applications.

 
The acquisition enables us to combine Hamamatsu Photonics’ detectors and cameras with NKT Photonics' lasers and fibers, thereby offering unique system solutions to the customers.
 
One special market of interest is the rapidly growing quantum computing area. Here, NKT Photonics' Koheras lasers serve customers building trapped-ion systems, which require high-power, narrow-linewidth lasers with extremely high wavelength stability and low noise. The same customers use Hamamatsu Photonics' high-sensitivity cameras and sensors to detect the quantum state of the qubits. Together, we will be able to provide comprehensive solutions including lasers, detectors, and optical devices for the quantum-technology market.
 
Another important area of collaboration is the semiconductor market. With the trend toward more complex three-dimensional semiconductor devices, there is an increasing demand for high precision measurement equipment covering a wide range of wavelengths. By combining NKT Photonics' broadband SuperK lasers with Hamamatsu Photonics’ optical sensors and measuring devices, we can supply expanded solutions for semiconductor customers needing broader wavelength coverage, multiple measurement channels, and higher sensitivity.
 
Finally, in the hyperspectral imaging market, high-brightness light sources with a broad spectral range from visible to near-infrared (400 nm-2500 nm) are essential. Additionally, unlike halogen lamps, NKT Photonics' SuperK generates no heat, so demand for it is increasing. We can provide optimal solutions by integrating it with Hamamatsu Photonics' image sensors and cameras, leveraging our unique compound semiconductor technologies.
 
With this acquisition, Hamamatsu Photonics Group now possesses a very broad range of technologies within light sources, lasers, and detectors. The combination of NKT Photonics and Hamamatsu Photonics will help us to drive our technology to the next level. NKT Photonics will continue their operating structure and focus on providing superior products and solutions to their customers.


SeeDevice Inc files complaint


From GlobeNewswire: https://www.globenewswire.com/news-release/2024/09/13/2945864/0/en/SeeDevice-Inc-Files-Complaint-In-U-S-District-Court-Against-Korean-Broadcasting-System.html

SeeDevice Inc. Files Complaint In U.S. District Court Against Korean Broadcasting System

ORANGE, California, Sept. 13, 2024 (GLOBE NEWSWIRE) -- SeeDevice Inc. (“SeeDevice”), together with its CEO and founder Dr. Hoon Kim, has filed a Complaint in the U.S. District Court for the Central District of California against Korean Broadcasting System (KBS), and its U.S. subsidiary KBS America, Inc. (collectively, “KBS”) for trade libel and defamation. The claims are based on an August 25, 2024, broadcast KBS is alleged to have published on its YouTube channel and KBS-america.com (“The KBS Broadcast”).

The complaint asserts that the KBS Broadcast published false and misleading statements regarding the viability and legitimacy of SeeDevice and Dr. Kim's QMOS™ (quantum effect CMOS) SWIR image sensor by omitting the fact that in 2009, and again in 2012, the Seoul High Court and Seoul Administrative Court found Dr. Kim's sensor to be legitimate.

Dr. Kim’s QMOS™ sensor has garnered industry praise and recognition and is the subject of numerous third-party awards. In the past year alone, SeeDevice has been recognized with four awards for outstanding leadership and innovative technology: "20 Most Innovative Business Leaders to Watch 2023" by Global Business Leaders, "Top 10 Admired Leaders 2023" by Industry Era, "Most Innovative Image Technology Company 2023" by Corporate Vision, and “Company of the Year” of the Top 10 Semiconductor Tech Startups 2023 by Semiconductor Review. 

In their lawsuit, SeeDevice and Dr. Kim seek retraction of KBS’s defamatory broadcast, and a correction of the record, in addition to significant monetary damages and injunctive relief preventing further misconduct by KBS.


Event Cameras for Space Applications


Dissertation defense by B. McReynolds on his thesis titled "Benchmarking and Pushing the Boundaries of Event Camera Performance for Space and Sky Observations," PhD, ETH Zurich, 2024


Courtesy: Prof. Tobi Delbruck


Quantum Solutions and Topodrone launch quantum dot SWIR camera


Press release from Quantum Solutions:

September 19, 2024

QUANTUM SOLUTIONS and TOPODRONE Unveil TOPODRONE x Q.Fly: A Cost-Effective, DJI-Ready Quantum Dot SWIR Camera for UAV Applications

Quantum Solutions and Topodrone are excited to announce the launch of the Q.Fly, a next-generation camera with Quantum Dot Short-Wave Infrared (SWIR) imaging capability designed specifically for UAV (drone) platforms. The Q.Fly is fully DJI-ready, working seamlessly out of the box with the DJI Matrice 300 and DJI Matrice 350 RTK and offering real-time video streaming, control, and configuration directly from the DJI remote controller.

Developed to make SWIR technology more accessible and affordable for drone service companies and drone users, Q.Fly delivers a ready-to-use solution that eliminates the complexities of integrating advanced sensors into UAV platforms. The camera system also includes an RGB camera and/or a thermal camera for enhanced vision capabilities. With plug-and-play compatibility and unmatched spectral imaging performance, Q.Fly redefines what's possible for a wide range of airborne applications.

This unique product combines Quantum Solutions' Quantum Dot SWIR imaging technology with TOPODRONE's UAV expertise, providing a cost-effective alternative to traditional SWIR cameras. Q.Fly covers a broad spectral range from VIS-SWIR (400–1700 nm), making it ideal for a variety of airborne applications that demand precise, high-resolution imaging.

Key Features of Q.Fly:

  • Quantum Dot SWIR Sensor: 640 x 512 pixels, covering a spectral range of 400–1700 nm
  • Cost-Effective and Accessible: Q.Fly offers an affordable solution, finally making SWIR imaging technology accessible to a broader audience of drone users and service providers
  • DJI Integration: Fully compatible with DJI Matrice 300 and Matrice 350 RTK, featuring real-time video streaming, control, and configuration from the remote controller
  • Built-In RGB Camera with Optional Thermal Imager: Includes a 16 MP RGB camera for visual positioning and a thermal imager (640 x 512 pixels, 30 Hz) for enhanced versatility
  • High-Precision Geo-Referencing of spectral images
  • High-Speed Spectral Imaging: Capable of operating at 220 Hz, delivering superior spectral imaging performance in real time
  • Lightweight Design: Weighing only 650 g with its 3-axis gyrostabilized gimbal, Q.Fly allows for flight times of up to 35 minutes per battery cycle
  • Built-In Linux Computer: Facilitates easy camera control and supports a variety of protocols, including DJI PSDK and Mavlink
  • Filter Flexibility: Supports quick installation of spectral filters to adapt to specific use cases

Q.Fly is designed to serve industries that require precise, reliable, and easy-to-use drone-based imaging solutions, including:

  • Agriculture
  • Fire Safety and Rescue
  • Security and Surveillance
  • Industrial Inspection and Surveying

 

Product Launch at INTERGEO 2024
The TOPODRONE x Q.Fly will be officially unveiled at the INTERGEO 2024 exhibition in Stuttgart from September 24–26. This breakthrough technology will be showcased, highlighting its cost-effectiveness and how it can transform UAV imaging for various industries.
Attendees are invited to visit the TOPODRONE booth (Hall 1, Booth B1.055) to experience the Q.Fly and learn more about its unparalleled ease of use and advanced SWIR capabilities.
 
Unparalleled Ease of Use for Drone Operators
Q.Fly is designed with drone operators in mind, offering a hassle-free solution that simplifies the often-complex process of integrating advanced sensors into UAV platforms. With its plug-and-play compatibility with DJI drones, users can quickly deploy the Q.Fly for a wide range of applications without complex setup procedures.


ITE/IISS 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024)


The 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024) will be held at the Tokyo University of Science on Friday November 8, 2024.

In this workshop, people from various research fields, such as image sensing, imaging systems, optics, photonics, computer vision, and computational photography/imaging, come together to discuss the future and frontiers of image sensor technologies, exploring the continuous progress and diversity of image sensor engineering and of state-of-the-art and emerging imaging system technologies.


Date: November 8 (Fri), 2024
Venue: Forum-2, Morito Memorial Hall, Building 13, Tokyo University of Science / Online
Access: https://maps.app.goo.gl/LyecM4XUYazco5D79
Address: 4-2-2, Kagurazaka, Shinjuku-ku, Tokyo 162-0825, JAPAN

 

Information on online registration fees is available here.
Registration is necessary because in-person seating is limited. Online viewing via Zoom is also offered.
The registration deadline is Nov. 5 (Tue).
Register and pay online from the following website: [Online registration page]

[Plenary Talk]
"CMOS Direct Time-of-Flight Depth Sensor for Solid-State LiDAR Systems"
by Jaehyuk Choi (SolidVue, Inc., Korea & Sungkyunkwan Univ. (SKKU), Korea)

[8 Invited Talks]
Invited-1 “Plasmonic Color Filters for Multi-spectral Imaging” by Atsushi Ono (Shizuoka Univ., Japan)
Invited-2 (online) “Intelligent Imager with Processing-in-Sensor Techniques” by Chih-Cheng Hsieh (National Tsing Hua Univ. (NTHU), Taiwan)
Invited-3 “Designing a Camera for Privacy Preserving” by Hajime Nagahara (Osaka Univ., Japan)
Invited-4 “Deep Compressive Sensing with Coded Image Sensor” by Michitaka Yoshida (JSPS, Japan), et al.
Invited-5 “Event-based Computational Imaging using Modulated Illumination” by Tsuyoshi Takatani (Univ. of Tsukuba, Japan)
Invited-6 “Journey of Pixel Optics Scaling into Deep Sub-micron and Migration to Meta Optics Era” by In-Sung Joe (Samsung Electronics, Korea)
Invited-7 “Trigger-Output Event-Driven SOI pixel Sensor for X-ray Astronomy” by Takeshi Tsuru (Kyoto Univ., Japan)
Invited-8 “New Perspectives for Infrared Imaging Enabled by Colloidal Quantum Dots” by Pawel E. Malinowski (imec, Belgium), et al.

Sponsored by:
Technical Group on Information Sensing Technologies (IST), the Institute of Image Information and Television Engineers (ITE)
Co-sponsored by:
International Image Sensor Society (IISS)

Group of Information Photonics (IPG) + CMOS Working Group, the Optical Society of Japan
General Chair: Keiichiro Kagawa (Shizuoka Univ., Japan)
Technical Program Committee (Alphabetical order): Keiichiro Kagawa (Shizuoka Univ., Japan), Hiroyuki Suzuki (Gunma Univ., Japan), Hisayuki Taruki (Toshiba Electronic Devices & Storage Corporation, Japan), Min-Woong Seo (Samsung Electronics, Korea), Sanshiro Shishido (Panasonic Holdings Corporation, Japan)

Contact for any question about IWISS2024
E-mail: iwiss2024@idl.rie.shizuoka.ac.jp (Keiichiro Kagawa, Shizuoka Univ., Japan)


PhD thesis on CMOS SPAD dToF Systems


Thesis Title: Advanced techniques for SPAD-based CMOS d-ToF systems
Author: Alessandro Tontini
Affiliation: University of Trento and FBK

Full text available here: [link]

Abstract:

The possibility to enable spatial perception to electronic devices gave rise to a number of important development results in a wide range of fields, from consumer and entertainment applications to industrial environments, automotive and aerospace. Among the many techniques which can be used to measure the three-dimensional (3D) information of the observed scene, the unique features offered by direct time-of-flight (d-ToF) with single photon avalanche diodes (SPADs) integrated into a standard CMOS process result in a high interest for development from both researchers and market stakeholders. Despite the net advantages of SPAD-based CMOS d-ToF systems over other techniques, still many challenges have to be addressed. The first performance-limiting factor is represented by the presence of uncorrelated background light, which poses a physical limit to the maximum achievable measurement range. Another problem of concern, especially for scenarios where many similar systems are expected to operate together, is represented by the mutual system-to-system interference, especially for industrial and automotive scenarios where the need to guarantee safety of operations is a pillar. Each application, with its own set of requirements, leads to a different set of design challenges. However, given the statistical nature of photons, the common denominator for such systems is the necessity to operate on a statistical basis, i.e., to run a number of repeated acquisitions over which the time-of-flight (ToF) information is extracted. The gold standard to manage a possibly huge amount of data is to compress them into a histogram memory, which represents the statistical distribution of the arrival time of photons collected during the acquisition. Considering the increased interest for long-range systems capable of both high imaging and ranging resolutions, the amount of data to be handled reaches alarming levels. In this thesis, we propose an in-depth investigation of the aforesaid limitations. The problem of background light has been extensively studied over the years, and already a wide set of techniques which can mitigate the problem are proposed. However, the trend was to investigate or propose single solutions, with a lack of knowledge regarding how different implementations behave on different scenarios. For such reason, our effort in this view focused on the comparison of existing techniques against each other, highlighting each pros and cons and suggesting the possibility to combine them to increase the performance. Regarding the problem of mutual system interference, we propose the first per-pixel implementation of an active interference-rejection technique, with measurement results from a chip designed on purpose. To advance the state-of-the-art in the direction of reducing the amount of data generated by such systems, we provide for the first time a methodology to completely avoid the construction of a resource-consuming histogram of timestamps. Many of the results found in our investigations are based on preliminary investigations with Monte Carlo simulations, while the most important achievements in terms of interference rejection capability and data reduction are supported by measurements obtained with real sensors.
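As a minimal sketch of the "gold standard" processing the abstract describes (and that the thesis's histogram-less approach sets out to avoid), the toy example below bins photon timestamps from repeated acquisitions into a histogram and takes the ToF from the peak bin. All values (time window, bin width, photon counts) are illustrative assumptions.

import numpy as np

rng = np.random.default_rng(2)

# Toy d-ToF data: uncorrelated background photons plus a jittered laser
# return at 40 ns, accumulated over many repeated acquisitions.
WINDOW_NS, TOF_NS = 100.0, 40.0
background = rng.uniform(0.0, WINDOW_NS, size=2000)  # ambient photons
signal = rng.normal(TOF_NS, 0.2, size=800)           # echo, 0.2 ns jitter
timestamps = np.concatenate([background, signal])

# Compress the timestamps into a histogram memory, then estimate the
# ToF from the peak bin; range follows from c/2 = 0.15 m per ns.
BIN_NS = 0.5
hist, edges = np.histogram(timestamps, bins=int(WINDOW_NS / BIN_NS),
                           range=(0.0, WINDOW_NS))
tof = edges[hist.argmax()] + BIN_NS / 2
print(f"ToF = {tof:.2f} ns, range = {tof * 0.15:.2f} m")   # ~40 ns, ~6 m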

Contents

1 Introduction
1.1 Single Photon Avalanche Diode (SPAD)
1.1.1 Passive quenching
1.1.2 Active quenching
1.1.3 Photon Detection Efficiency (PDE)
1.1.4 Dark Count Rate (DCR) and afterpulsing

2 Related work
2.1 Pioneering results
2.2 Main challenges
2.3 Integration challenges

3 Numerical modelling of SPAD-based CMOS d-ToF sensors
3.1 Simulator architecture overview
3.2 System features modeling
3.2.1 Optical model
3.2.2 Illumination source - modeling of the laser emission profile
3.3 Monte Carlo simulation
3.3.1 Generation of SPAD-related events
3.3.2 Synchronous and asynchronous SPAD model
3.4 Experimental results
3.5 Summary

4 Analysis and comparative evaluation of background rejection techniques
4.1 Background rejection techniques
4.1.1 Photon coincidence technique
4.1.2 Auto-Sensitivity (AS) technique
4.1.3 Last-hit detection
4.2 Results
4.2.1 Auto-Sensitivity vs. photon coincidence
4.2.2 Comparison of photon coincidence circuits
4.2.3 Last-hit detection characterization
4.3 Automatic adaptation of pixel parameters
4.4 Summary


5 A SPAD-based linear sensor with in-pixel temporal pattern detection for interference and background rejection with smart readout scheme
5.1 Architecture
5.1.1 Pixel architecture
5.1.2 Readout architecture
5.2 Characterization
5.2.1 In-pixel laser pattern detection characterization
5.2.2 Readout performance assessment
5.3 Operating conditions and limits
5.4 Summary

6 SPAD response linearization: histogram-less LiDAR and high photon flux measurements
6.1 Preliminary validation
6.1.1 Typical d-ToF operation
6.1.2 Histogram-less approach
6.2 Mathematical analysis
6.3 Acquisition schemes
6.3.1 Acquisition scheme #1: Acquire or discard
6.3.2 Acquisition scheme #2: Time-gated
6.3.3 Discussion on implementation, expected performance and mathematical analysis
6.3.4 Comparison with state-of-the-art
6.4 Measurement results
6.4.1 Preliminary considerations
6.4.2 Measurements with background light only
6.4.3 Measurements with background and laser light and extraction of the ToF
6.5 Summary

7 Conclusion
7.1 Results
7.1.1 Modelling of SPAD-based d-ToF systems
7.1.2 Comparative evaluation of background-rejection techniques
7.1.3 Interference rejection
7.1.4 Histogram-less and high-flux LiDAR
7.2 Future work and research
Bibliography


8th Space & Scientific CMOS Image Sensors workshop – abstracts due Sep 13, 2024

Image Sensors World        Go to the original article...

CNES, ESA, AIRBUS DEFENCE & SPACE, THALES ALENIA SPACE, SODERN, OHB, ISAE SUP’AERO are pleased to invite you to the 8th “Space & Scientific CMOS Image Sensors” workshop to be held in TOULOUSE on November 26th and 27th 2024 within the framework of the Optics and Optoelectronics COMET (Communities of Experts).

The aim of this workshop is to focus on CMOS image sensors for scientific and space applications. Although the workshop is organized by members of the space community, it is wide open to other professional imaging applications, such as machine vision, medical, Advanced Driver Assistance Systems (ADAS), and broadcast (UHDTV), which drive the development of new pixel and sensor architectures for high-end applications. Furthermore, we would like to invite laboratories and research centers that develop custom CMOS image sensors with advanced on-chip smart design to join this workshop.

Topics
- Pixel design (high QE, FWC, MTF optimization, low lag,…)
- Electrical design (low noise amplifiers, shutter, CDS, high speed architectures, TDI, HDR)
- On-chip ADC or TDC (in pixel, column, …)
- On-chip processing (smart sensors, multiple gains, summation, corrections)
- Low-light detection (electron multiplication, avalanche photodiodes, quanta image sensors)
- Photon counting, Time resolving detectors (gated, time-correlated single-photon counting)
- Hyperspectral architectures
- Materials (thin film, optical layers, dopant, high-resistivity, amorphous Si)
- Processes (backside thinning, hybridization, 3D stacking, anti-reflection coating)
- Packaging
- Optical design (micro-lenses, trench isolation, filters)
- Large size devices (stitching, butting)
- High speed interfaces
- Focal plane architectures
- CMOS image sensors with recent space heritage (in-flight performance)

Venue
DIAGORA
Centre de Congrès et d'Exposition. 150, rue Pierre Gilles de Gennes
31670 TOULOUSE – LABEGE

Abstract submission
Please send a short abstract of one A4 page maximum, in Word or PDF format, giving the title, the authors' names and affiliations, and presenting the subject of your talk, to L-WCIS24@cnes.fr

Workshop format & official language
Presentations at the workshop will be oral. The official language of the workshop is English.

Slide submission
After abstract acceptance notification, the author(s) will be asked to prepare their presentation in PDF or PowerPoint format, to present it at the workshop, and to provide a copy to the organizing committee with authorization to make it available to all attendees, and online for the CCT members.

Registration
Registration fee: €100.
https://evenium.events/space-and-scientific-cmos-image-sensors-2024/ 

Calendar
13th September 2024 Deadline for abstract submission
11th October 2024 Author notification & preliminary programme
14th October 2024 Registration opening
8th November 2024 Final programme
26th-27th November 2024 Workshop

Go to the original article...

TriEye launches TES200 SWIR Image Sensor

Image Sensors World        Go to the original article...

TriEye has launched the TES200, a 1.3MP SWIR image sensor for machine vision and robotics. See press release below.

TEL AVIV, Israel, September 3, 2024 – TriEye, pioneer of the world's first cost-effective, mass-market Short-Wave Infrared (SWIR) sensing technology, announced today the release of the TES200 1.3MP SWIR image sensor. Based on the innovative TriEye CMOS image sensor technology that enables SWIR capabilities using a CMOS manufacturing process, the TES200 is the first commercially available product in the Raven product family.

The TES200 operates in the 700nm to 1650nm wavelength range, delivering high sensitivity and 1.3MP resolution. With its large format, high frame rate, and low power consumption, the TES200 offers enhanced sensitivity and dynamic range. This makes the new image sensor ideal for imaging and sensing applications across various industries, including automotive, industrial, robotics, and biometrics.

"We are proud to announce the commercial availability of the TES200 image sensor. Our CMOS-based solution has set new standards in the automotive market, and with the rise of new Artificial Intelligence (AI) systems, the demand for more sensors and more information has increased. The TES200 now brings these advanced SWIR capabilities to machine vision and robotic systems in various  industries,” said Avi Bakal, CEO of TriEye. “We are excited to offer a solution that delivers a new domain of capabilities in a cost-effective and scalable way, broadening the reach of advanced sensing technology."

The TriEye Raven image sensor family is designed for emerging machine vision and robotics applications, incorporating the latest SWIR pixel and packaging technologies. The TES200 is immediately available in sample quantities, and production orders can be placed for delivery in Q2 2025.

Experience the TES200 in Action at CIOE and VISION 2024

We invite you to explore the advanced capabilities of the TES200 at the CIOE exhibition, held from September 11 to 13, 2024, at the Shenzhen World Exhibition and Convention Center, China, within the Lasers Technology & Intelligent Manufacturing Expo. View the demo at the Vertilas booth (no. 4D021, 4D022). Then, meet TriEye’s executive team at VISION 2024 in Stuttgart, Germany, from October 8 to 10, at the TriEye booth (no. 8A08), where you can experience a live demo of the TES200 and the brand-new Ovi 2.0 devkit, and learn firsthand about our latest developments in SWIR imaging.

About TriEye 

TriEye is the pioneer of the world’s first CMOS-based Short-Wave Infrared (SWIR) image sensing solutions. Based on advanced academic research, TriEye’s breakthrough technology enables HD SWIR imaging and accurate deterministic 3D sensing in all weather and ambient lighting conditions. The company's semiconductor and photonics technology enabled the development of the SEDAR (Spectrum Enhanced Detection And Ranging) platform, which allows perception systems to operate and deliver reliable image data and actionable information, while reducing expenditure by up to 100x compared with existing industry rates. For more information, visit www.trieye.tech

Go to the original article...

2024 SEMI MEMS and Imaging Summit program announced

Image Sensors World        Go to the original article...

SEMI MEMS & Imaging Sensors Summit 2024 will take place November 14-15 at the International Conference Center Munich (ICM), Messe Münich in Germany.

Thursday, 14th November 2024 

Session 1: Market Dynamics: Landscape and Growth Strategies

09:00  Welcome Remarks
Laith Altimime, President, SEMI Europe

09:20  Opening Remarks by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

09:25  Keynote: Smart Sensors for Smart Life – How Advanced Sensor Technologies Enable Life-Changing Use Cases
Stefan Finkbeiner, General Manager, Bosch Sensortec

09:45  Keynote: Sensing the World: Innovating for a More Sustainable Future
Simone Ferri, APMS Group Vice President, MEMS sub-group General Manager, STMicroelectronics

10:05  Reserved for Yole Development

10:25  Key Takeaways by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

10:30  Networking Coffee Break

Session 2: Sustainable Supply Chain Capabilities

11:10  Opening Remarks by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

11:15  A Paradigm Shift From Imaging to Vision: Oculi Enables 600x Reduction in Latency-Energy Factor for Visual Edge Applications
Charbel Rizk, Founder & CEO, Oculi

11:35  Reserved for Comet Yxlon

11:55  Key Takeaways by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

12:00  Networking Lunch

Session 3: MEMS - Exploring Future Trends for Technologies and Device Manufacturing

13:20  Opening Remarks by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

13:25  Unlocking Novel Opportunities: How 300mm-capable MEMS Foundries Will Change the Game
Jessica Gomez, CEO, Rogue Valley Microdevices

13:45  Trends in Emerging MEMS
Alissa Fitzgerald, CEO, A.M. Fitzgerald & Associates, LLC

14:05  The Most Common Antistiction Films are PFAS, Now What?
David Springer, Product Manager, MVD and Release Etch Products, KLA Corporation

14:25  Reserved for Infineon

14:45  Latest Innovations in MEMS Wafer Bonding
Thomas Uhrmann, Director of Business Development, EV Group

15:05  Key Takeaways by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

Session 4: Imaging - Exploring Future Trends for Technologies and Device Manufacturing

15:10  Opening Remarks by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

15:15  Topic Coming Soon
Avi Bakal, CEO & Co-founder, TriEye

15:35  Active Hyperspectral Imaging Using Extremely Fast Tunable SWIR Light Source
Jussi Soukkamaki, Lead, Hyperspectral & Imaging Technologies, VTT Technical Research Centre of Finland Ltd

15:55  Networking Coffee Break

16:40  Reserved

17:00  Reserved for CEA-Leti

17:20  Reserved for STMicroelectronics

17:40  Key Takeaways by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

Friday, 15th November 2024 

Session 5: MEMS and Imaging Young Talent

09:00  Opening Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:05  Unlocking Infrared Multispectral Imaging with Pixelated Metasurface Technology
Charles Altuzarra, Chief Executive Officer & Co-founder, Metahelios

09:10  Electrically Tunable Dual-Band VIS/SWIR Imaging and Sensing
Andrea Ballabio, CEO, EYE4NIR

09:15  FMCW Chip-Scale LiDARs Scale Up for Large Volume Markets Thanks to Silicon Photonics Technology
Simoens François, CEO, SteerLight

09:20  ShadowChrome: A Novel Approach to an Old Problem
Geoff Rhoads, Chief Technology Officer, Transformative Optics Corporation

09:25  Feasibility Investigation of Spherically Bent Image Sensors
Amit Pandey, PhD Student, Technische Hochschule Ingolstadt

09:30  Intelligence Through Vision
Stijn Goossens, CTO, Qurv

09:35  Next Generation Quantum Dot SWIR Sensors
Artem Shulga, CEO & Founder, QDI Systems

09:40  Closing Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:45  Networking Coffee Break

Session 6: Innovations for Next-Gen Applications: Smart Mobility

10:35  Opening Remarks by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

10:40  Reserved

11:00  New Topology for MEMS Advances Performance and Speeds Manufacturing
Eric Aguilar, CEO, Omnitron Sensors, Inc.

11:20  Key Takeaways by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

Session 7: Innovations for Next-Gen Applications: Health

11:25  Opening Remarks by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

11:30  Reserved

11:50  Sensors for Monitoring Vital Signs in Wearable Devices
Markus Arzberger, Senior Director, ams-OSRAM International GmbH

12:10  Pioneering Non-Invasive Wearable MIR Spectrometry for Key Health Biomarkers Analysis
Jan F. Kischkat, CEO, Quantune Technologies GmbH

12:30  Key Takeaways by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

12:35  End of Conference Reflections by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

12:45  Closing Remarks
Laith Altimime, President, SEMI Europe

12:50  Networking Lunch

Go to the original article...

IEEE SENSORS 2024 — image sensor topics announced

Image Sensors World        Go to the original article...

The list of topics and authors for the following two events related to image sensor technology has been finalized for the IEEE SENSORS 2024 Conference. The conference will be held in Kobe, Japan, from 20-23 October 2024. It will provide the opportunity to hear world-class speakers in the field of image sensors and to sample the wider sensor ecosystem, to see how imaging fits in.

Workshop: “From Imaging to Sensing: Latest and Future Trends of CMOS Image Sensors” [Sunday, 20 October]

Organizers: Sozo Yokogawa (Sony Semiconductor Solutions Corp.) • Erez Tadmor (onsemi)

“Trends and Developments in State-of-the-Art CMOS Image Sensors”, Daniel McGrath, TechInsights
“CMOS Image Sensor Technology: what we have solved, what are to be solved”, Eiichi Funatsu, OMNIVISION
“Automotive Imaging: Beyond human Vision”, Vladi Korobov, onsemi
“Recent Evolution of CMOS Image Sensor Pixel Technology”, Bumsuk Kim et al., Samsung Electronics
“High precision ToF image sensor and system for 3D scanning application”, Keita Yasutomi, Shizuoka University
“High-definition SPAD image sensors for computer vision applications”, Kazuhiro Morimoto, Canon Inc.
“Single Photon Avalanche Diode Sensor Technologies for Pixel Size Shrinkage, Photon Detection Efficiency Enhancement and 3.36-µm-pitch Photon-counting Architecture”, Jun Ogi, Sony Semiconductor Solutions Corp.
“SWIR Single-Photon Detection with Ge-on-Si Technology”, Neil Na, Artilux Inc.
“From SPADs to smart sensors: ToF system innovation and AI enable endless application”, Laurent Plaza & Olivier Lemarchand, STMicroelectronics
“Depth Sensing Technologies, Cameras and Sensors for VR and AR”, Harish Venkataraman, Meta Inc.
 
Focus session: Stacking in Image Sensors [Monday, 21 October]

Organizer: S-G. Wu, Brillnics

Co-chairs: DN Yaung, TSMC; John McCarten, L3 Harris

Over the past decade, 3-dimensional (3D) wafer-level stacked backside-illuminated (BSI) CMOS image sensors (CIS) have achieved rapid progress in mass production. This focus session on stacking in image sensors will feature 4 invited papers exploring the evolution of sensor stacking technology, from process development and circuit architecture to AI/edge computing in system integration.

“The Productization of Stacking in Image Sensors”, Daniel McGrath, TechInsights
“Evolution of Image Sensing and Computing Architectures with Stacking Device Technologies”, BC Hseih, Qualcomm
“Event-based vision sensor”, Christoph Posch, Prophesee
“Evolution of digital pixel sensor (DPS) and advancement by stacking technologies”, Ikeno Rimon, Brillnics

Go to the original article...

Galaxycore educational videos

Image Sensors World        Go to the original article...

 

Are you curious about how CMOS image sensors capture such clear and vivid images? Start your journey with the first episode of "CIS Explained". In this episode, we dive deep into the workings of these sophisticated sensors, from the basics of pixel arrays to the intricacies of signal conversion.
This episode serves as your gateway to understanding CMOS image sensors.


In this video, we're breaking down Quantum Efficiency (QE) and its crucial role in CIS. QE is a critical measure of how efficiently our sensors convert incoming light into electrical signals, directly affecting image accuracy and quality. This video will guide you through what QE means for CIS, its impact on your images, and how we're improving QE for better, more reliable imaging.
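In plain numbers (ours, for illustration, not GalaxyCore's), QE at a given wavelength is simply the fraction of incident photons that end up as collected photo-electrons:

# Illustrative quantum-efficiency arithmetic (made-up numbers):
# QE is the fraction of incident photons converted into collected electrons.
photons_incident = 10_000        # photons hitting the pixel
electrons_collected = 6_500      # photo-electrons actually collected
qe = electrons_collected / photons_incident
print(f"QE = {qe:.0%}")          # -> QE = 65%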


GalaxyCore DAG HDR Technology Film


Exploring GalaxyCore's Sensor-Shift Optical Image Stabilization (OIS) in under Two Minutes


GalaxyCore's COM packaging technology—a breakthrough in CIS packaging. This video explains how placing two suspended gold wires on the image sensor and bonding it to an IR base can enhance the durability and clarity of image sensors, prevent contamination, and ensure optimal optical alignment.

Go to the original article...

Avoiding information loss in the photon transfer method

Image Sensors World        Go to the original article...

In a recent paper titled "PCH-EM: A Solution to Information Loss in the Photon Transfer Method" in IEEE Trans. on Electron Devices, Aaron Hendrickson et al. propose a new statistical technique to estimate CIS parameters such as conversion gain and read noise.

Abstract: Working from a Poisson-Gaussian noise model, a multisample extension of the photon counting histogram expectation-maximization (PCH-EM) algorithm is derived as a general-purpose alternative to the photon transfer (PT) method. This algorithm is derived from the same model, requires the same experimental data, and estimates the same sensor performance parameters as the time-tested PT method, all while obtaining lower uncertainty estimates. It is shown that as read noise becomes large, multiple data samples are necessary to capture enough information about the parameters of a device under test, justifying the need for a multisample extension. An estimation procedure is devised consisting of initial PT characterization followed by repeated iteration of PCH-EM to demonstrate the improvement in estimating uncertainty achievable with PCH-EM, particularly in the regime of deep subelectron read noise (DSERN). A statistical argument based on the information theoretic concept of sufficiency is formulated to explain how PT data reduction procedures discard information contained in raw sensor data, thus explaining why the proposed algorithm is able to obtain lower uncertainty estimates of key sensor performance parameters, such as read noise and conversion gain. Experimental data captured from a CMOS quanta image sensor with DSERN are then used to demonstrate the algorithm’s usage and validate the underlying theory and statistical model. In support of the reproducible research effort, the code associated with this work can be obtained on the MathWorks file exchange (FEX) (Hendrickson et al., 2024).
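For context, here is a minimal Python sketch of the classical mean-variance ("photon transfer") estimate that PCH-EM is benchmarked against: under the Poisson-Gaussian model, the output variance in DN grows linearly with the mean, the slope giving the conversion gain and the intercept the read-noise variance. All numbers below are illustrative assumptions; the authors' actual implementation is the MATLAB code on the MathWorks FEX.

import numpy as np

rng = np.random.default_rng(1)
g_true, read_noise_dn = 0.8, 1.5          # DN per electron, DN rms

means, variances = [], []
for mu_e in [5, 10, 20, 50, 100, 200]:    # mean electrons per pixel
    electrons = rng.poisson(mu_e, size=100_000)           # shot noise
    dn = g_true * electrons + rng.normal(0, read_noise_dn,
                                         electrons.size)  # read noise
    means.append(dn.mean())
    variances.append(dn.var())

# var(DN) = g * mean(DN) + sigma_read^2, since var_e = mu_e for Poisson
slope, intercept = np.polyfit(means, variances, 1)
print(f"estimated gain: {slope:.3f} DN/e-  (true {g_true})")
print(f"estimated read noise: {np.sqrt(intercept):.2f} DN rms")

The paper's point is that this fit uses only the mean and variance of each data set, discarding the rest of the information in the raw histograms, which is exactly what PCH-EM recovers.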

 

RRMSE versus read noise for parameter estimates computed using the constant-flux implementation of PT and PCH-EM. The RRMSE curves for the PT estimates of μ and σ grow large near σ_read = 0 and were clipped from the plot window.


Open access paper link: https://ieeexplore.ieee.org/document/10570238

Go to the original article...

Harvest Imaging Forum 2024 registration open

Image Sensors World        Go to the original article...

The Harvest Imaging forum tradition continues: the next edition, the tenth, will be organized on November 7 & 8, 2024, in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic of great importance and value to digital imaging. The 2024 forum will be an in-person event.

The 2024 Harvest Imaging forum will deal with a single topic from the field of solid-state imaging world and will have only one world-level expert as the speaker:

"AI and VISION : A shallow dive into deep learning"

Prof. dr. Jan van Gemert (Delft Univ. of Technology, Nl)

Abstract: Artificial Intelligence is taking the world by storm! The AI engine is powered by “Deep Learning”. Deep learning differs from conventional computer programming in that it allows computers to learn tasks from large, labelled datasets. In this Harvest Imaging Forum we will go through the fundamentals of Deep Learning: multi-layer perceptrons, back-propagation, optimization, convolutional neural networks, recurrent neural networks, unsupervised/self-supervised learning, and transformers with self-attention (GPT).

Bio: Jan van Gemert received a PhD degree from the University of Amsterdam in 2010. He was a post-doctoral fellow there and at École Normale Supérieure in Paris. He currently leads the Computer Vision lab at Delft University of Technology, where he teaches the Deep Learning and Computer Vision MSc courses. His research focuses on visual inductive priors for deep learning for automatic image and video understanding. He has published over 100 peer-reviewed papers with more than 7,500 citations. See his Google Scholar profile for his publications: https://scholar.google.com/citations?hl=en&user=JUdMRGcAAAAJ

Registration: The registration fee for this 2-day forum is 1295 Euro for in-person attendance. In addition to attendance itself, the fee includes:

  •  Coffee breaks in the mornings and afternoons,
  •  Lunch on both forum days,
  •  Dinner on the first forum day,
  •  Soft and hard copy of the presented material.

If you are interested to attend this forum, please fill out the registration form here: https://harvestimaging.com/forum_registration_2024.php

Go to the original article...

PhD thesis on a low power "time-to-first-spike" event sensor

Image Sensors World        Go to the original article...

Title: Event-based Image Sensor for low-power

Author: Mohamed AKRARAI (Université Grenoble Alpes)

Abstract: In the framework of the OCEAN 12 European project, this PhD covered the design, implementation, and testing of an event-based image sensor, and led to several scientific papers in international conferences, including renowned venues such as the International Symposium on Asynchronous Circuits and Systems (ASYNC). Event-based image sensors, which are frameless, require a dedicated architecture and asynchronous logic that reacts to events. First, this PhD gives an overview of architectures based on a hybrid pixel matrix combining TFS and DVS pixels; these two kinds of pixels manage spatial redundancy and temporal redundancy, respectively. One of the main achievements of this work is to exploit both pixel types within a single imager in order to reduce its output bitstream and its power consumption. Then, the design of the pixels and readout in the 28 nm FDSOI technology from STMicroelectronics is detailed. Finally, two image sensors have been implemented in a testchip and tested.
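For readers unfamiliar with the time-to-first-spike (TFS) scheme named in the post title, the sketch below illustrates the basic idea under a simple linear-integration assumption of ours (not the thesis's model): each pixel fires a single spike when its integrated photocurrent crosses a threshold, so intensity is encoded in spike latency and bright pixels report first. All constants are illustrative.

import numpy as np

def tfs_spike_times(intensity, threshold=1.0, t_max=0.5):
    """Map pixel intensity (arbitrary units) to first-spike latency (s)."""
    intensity = np.asarray(intensity, dtype=float)
    with np.errstate(divide="ignore"):
        t_spike = threshold / intensity        # linear integration model
    # Pixels too dim to reach threshold within t_max never spike (inf).
    return np.where(t_spike <= t_max, t_spike, np.inf)

frame = np.array([[10.0, 100.0], [1000.0, 0.05]])   # photocurrents
print(tfs_spike_times(frame))
# Brightest pixel (1000.0) spikes first; the dimmest (0.05) never spikes,
# which is how TFS suppresses spatially redundant dark regions.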

Link: https://theses.hal.science/tel-04213080v1/file/AKRARAI_2023_archivage.pdf

 

Go to the original article...

EETimes article on imec

Image Sensors World        Go to the original article...

Full article: https://www.eetimes.eu/imec-getting-high-precision-sensors-to-market/

Imec: Getting High-Precision Sensors to Market

At the recent ITF World 2024, EE Times Europe talked with imec researchers to catch up on what they’re doing with high-precision sensors—and more importantly, how they make sure their innovations get into the hands of industrial players.

Imec develops sensors for cameras and displays, and it works with both light and ultrasound—for medical applications, for example. But the Leuven, Belgium–based research institute never takes technology to market itself. It either finds industrial partners—or when conditions are right, imec creates a spinoff. One way to understand how imec takes an idea from lab to fab and finds a way to get it to market is to zoom in on its approach with image sensors for cameras.

“We make image sensors that are at the beating heart of incredible cameras around the world,” said Paul Heremans, vice president of future CMOS devices and senior fellow at imec. “Our research starts with material selection and an overall new concept for sensors and goes all the way to development, engineering and low-volume manufacturing within imec’s pilot line.”

A good example is the Pharsighted E9-100S ultra-high-speed video camera, developed by Pharsighted LLC and marketed by Photron. The camera reaches 326,000 frames per second (full frame: 640 × 480 pixels) and up to 2,720,000 frames per second at a lower frame size (640 × 32 pixels), thanks to a high-speed image sensor developed and manufactured by imec.
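As a quick back-of-envelope check (our arithmetic, using only the figures quoted above), both modes imply pixel throughputs of tens to hundreds of gigapixels per second, which is what the imec sensor must sustain:

# Pixel throughput implied by the quoted E9-100S frame rates
# (illustrative arithmetic only; both figures come from the article above).
full_frame = 326_000 * 640 * 480       # ~1.0e11 pixels/s at 640 x 480
reduced    = 2_720_000 * 640 * 32      # ~5.6e10 pixels/s at 640 x 32
print(f"{full_frame:.2e} and {reduced:.2e} pixels per second")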

Another example is an electron imager used in a cryo-transmission electron microscope (cryo-TEM) marketed by a U.S. company called Thermo Fisher. The instrument produces atomic resolution pictures of DNA strands and other complex molecules. These images help in the drug-discovery process by allowing researchers to understand the structure of the molecules they need to target.
Thermo Fisher uses direct electron detection imagers developed by imec and built into the company's Falcon detectors, each composed of 4K × 4K pixels. The pixels are very large to reach the ultimate sensitivity. Consequently, the chip is so large (5.7 × 5.7 cm) that only four fit on a 200-mm wafer.

A third example is hyperspectral imagers, with very special filters that detect many more colors than just red, green and blue (RGB). Hyperspectral imagers pick up tens or hundreds of spectral bands. They can achieve this level of performance because imec implements processing filters on each pixel.

“We can do that on almost any commercial imager and turn it into a hyperspectral camera,” Heremans said. “Our technology is used by plenty of customers with a range of applications—from surveillance to satellite-based Earth observation, from medical to agriculture and more.”

Spectricity

To bring some of its work on hyperspectral imagers to market, imec created a startup called Spectricity. “The whole idea is to bring this field of multispectral imaging or spectroscopy into cellphones or other high-volume products,” said Glenn Vandevoorde, CEO of Spectricity. “Our imagers can see things that are not visible to the human eye. Instead of just processing RGB data, which a traditional camera does, we take a complete spectral image, where each pixel contains 16 different color points—including near-infrared. And with that, you can detect different materials that look alike but are actually very different. Or you can do color correction on smartphones. Sometimes people look very different, depending on the ambient light. We can detect what kind of light is shining—and based on that, adjust the color.”
The first use case for cellphones is auto white balancing. When a picture is taken with a cellphone, sometimes the colors show up very differently from reality, because the camera doesn’t have an accurate white point, which is the set of values that make up the color white in an image. These values change under different conditions, which means they need to be calibrated often. All other colors are then adjusted based on the white point reference.

Traditional smartphone cameras cannot determine the ambient light accurately, so they cannot find the white point to serve as a viable reference. But the multispectral imager obtains the full spectral information of the ambient light and applies advanced AI algorithms to detect the white point, which leads to accurate auto white balancing and true color correction.
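A toy Python sketch (our construction, not Spectricity's algorithm) of the diagonal "von Kries"-style correction this enables: once the ambient illuminant, and hence the white point, has been estimated, each channel is rescaled so the illuminant maps to neutral. A real pipeline would estimate the illuminant from 16 spectral bands with AI; here it is simply given, and all numbers are made up.

import numpy as np

# Estimated ambient illuminant as (R, G, B); a warm, blue-deficient light.
illuminant = np.array([0.9, 1.0, 0.6])
gains = illuminant.max() / illuminant      # per-channel correction gains

image = np.random.default_rng(2).uniform(0, 1, (4, 4, 3))  # toy RGB frame
balanced = np.clip(image * gains, 0, 1)    # map the illuminant to neutral
print(gains)   # blue channel gets the largest boost, as expected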

Spectricity said its sensor is being evaluated by seven out of the top eight smartphone manufacturers in the world for integration into phones. “By the end of this year, you will see several smartphone vendors launching the first phones with multispectral imagers inside,” Vandevoorde said.

While smartphones are the ultimate target for high volume, they are also very cost-competitive—and it takes a long time to introduce a new feature in a smartphone. Spectricity is targeting other smartphone applications but also applications for webcams, security cameras and in-cabin video cameras for cars. One category of use cases takes advantage of the ability of multispectral images to detect health conditions.

 

Spectricity’s spectral image sensor technology extends the paradigm of RGB color image sensors. Instead of red, green and blue filters on the pixels, many different spectral filters are deposited on the pixels, using wafer-scale, high-volume fabrication techniques. (Source: Spectricity)

 
Spectricity’s miniaturized spectral camera module, optimized for mobile devices.

“For example, you can accurately monitor how a person’s skin tone develops every day,” Vandevoorde said. “We can monitor blood flow in the skin, we can monitor moisture in the skin, we can detect melanoma and so on. These and many other things can be detected with these multispectral imagers.”
Spectricity has raised €28 million in funding since it was founded in 2018—and the startup has its own mass-production line at X-Fab, one of the company’s investors. “We have our machinery and our process installed there,” Vandevoorde said. “It’s now going through qualification—and by the end of the year, we’ll be ready for mass production to start shipping large volume to customers.” 

How imec finds the right trends to target
Spectricity is a good example of how imec spots a need and develops technology to meet that need. Spectroscopy, of course, is not new. It’s been around for decades, and researchers use it in labs to detect different materials and different gases. What’s new is that imec integrated spectroscopy onto CMOS technology and developed processes to produce it in high volumes for just a couple of dollars. Researchers worked on the idea for about 10 years—and once it was running on imec’s pilot line, the institute set up Spectricity to take it into mass production and develop applications around it. 

“We sniff around different trends,” said Xavier Rottenberg, scientific director and group leader of wave-based sensors and actuators at imec. “We’re in contact with a lot of players in the industry to get exposed to plenty of problems. Based on that, we develop a gut feeling. But gut feelings are dangerous, because it might be that you’re just hungry. However, with an educated gut feeling, sometimes your intuition is right.”

Once imec develops an idea in the lab, it takes the technology to its pilot line to develop a demonstrator. “We do proofs of concept to see how a device performs,” Rottenberg said. “Then we set up contacts in the ecosystem to form partnerships to bring the platform to a level where it can be mass-produced in an industrial fab.”

In some cases, an idea is too far out for partners to pick up for near-term profit. That’s when imec ventures out with a spinoff company, as it did with Spectricity.


Go to the original article...

Sony rebranding IMX sensors to LYTIA (?)

Image Sensors World        Go to the original article...

Link to full article: https://www.phonearena.com/news/sonys-image-sensor-makeover-imx-to-lytia-by-2026_id160402

Sony's image sensor makeover: IMX to LYTIA by 2026

... there's a buzz about Sony making a branding shift for its smartphone image sensors. According to a recent report, Sony is considering moving all its mobile image sensors, including the current IMX lineup, under the newer LYTIA brand. The company is gradually phasing out the IMX brand, and some IMX sensors have already been rebranded to LYTIA. Reportedly, the company plans to fully transition to the LYT lineup by 2026.

The report states that the 50MP IMX890 and IMX882 sensors have already been rebranded as LYT-701 and LYT-600. For instance, the LYT-600 is already used in the vivo X100 Ultra, launched in May this year.

Go to the original article...

A 100kfps X-ray imager

Image Sensors World        Go to the original article...

Marras et al. presented a paper titled "Development of the Continuous Readout Digitising Imager Array Detector" at the Topical Workshop on Electronics for Particle Physics 2023.

Abstract: The CoRDIA project aims to develop an X-ray imager capable of continuous operation in excess of 100 kframe/s. The goal is to provide a suitable instrument for photon science experiments at diffraction-limited synchrotron rings and Free Electron Lasers, considering continuous-wave operation. Several chip prototypes were designed in a 65 nm process; in this paper we present an overview of the challenges and solutions adopted in the ASIC design.



Go to the original article...
