Hamamatsu completes acquisition of NKT Photonics

Image Sensors World        Go to the original article...

Press release: https://www.hamamatsu.com/us/en/news/featured-products_and_technologies/2024/20240531000000.html

Completion of the acquisition of NKT Photonics: accelerating growth in the semiconductor, quantum, and medical fields by strengthening the laser business.

Hamamatsu Photonics K.K. (hereinafter referred to as “Hamamatsu Photonics”) is pleased to announce the completion of the previously announced acquisition of NKT Photonics A/S (hereinafter referred to as “NKT Photonics”).
 
NKT Photonics is the leading supplier of high-performance fiber lasers and photonic crystal fibers. Based on its unique fiber technology, its laser products fall into three major product lines:

  1.  Supercontinuum White Light Lasers (SuperK): The SuperK lasers deliver high brightness over a broad spectral range (400–2500 nm) and are used in bio-imaging, semiconductor metrology, and device characterization.
  2.  Single-Frequency DFB Fiber Lasers (Koheras): The Koheras lasers have extremely high wavelength stability and low noise, and are ideal for fiber sensing, quantum computing, and quantum sensing.
  3.  Ultra-short Pulse Lasers (aeroPULSE and Origami): This range consists of picosecond and femtosecond pulsed lasers with excellent beam quality and stability. The lasers are mainly used in ophthalmic surgery, bio-imaging, and optical processing applications.

 
The acquisition enables us to combine Hamamatsu Photonics’ detectors and cameras with NKT Photonics' lasers and fibers, thereby offering unique system solutions to customers.
 
One market of special interest is the rapidly growing quantum computing area. Here, NKT Photonics’ Koheras lasers serve customers with trapped-ion systems requiring high-power, narrow-linewidth lasers with extremely high wavelength stability and low noise. The same customers use Hamamatsu Photonics’ high-sensitivity cameras and sensors to detect the quantum state of the qubits. Together, we will be able to provide comprehensive solutions including lasers, detectors, and optical devices for the quantum-technology market.
 
Another important area of collaboration is the semiconductor market. With the trend toward more complex three-dimensional semiconductor devices, there is an increasing demand for high precision measurement equipment covering a wide range of wavelengths. By combining NKT Photonics' broadband SuperK lasers with Hamamatsu Photonics’ optical sensors and measuring devices, we can supply expanded solutions for semiconductor customers needing broader wavelength coverage, multiple measurement channels, and higher sensitivity.
 
Finally, in the hyperspectral imaging market, high-brightness light sources with a broad spectral range from visible to near-infrared (400–2500 nm) are essential. In addition, because the SuperK, unlike halogen lamps, generates no heat, demand for NKT Photonics' SuperK is increasing. We can provide optimal solutions by integrating it with Hamamatsu Photonics’ image sensors and cameras, leveraging its unique compound semiconductor technologies.
 
With this acquisition, the Hamamatsu Photonics Group now possesses a very broad range of technologies in light sources, lasers, and detectors. The combination of NKT Photonics and Hamamatsu Photonics will help us drive our technology to the next level. NKT Photonics will retain its operating structure and continue to focus on providing superior products and solutions to its customers.

Go to the original article...

Conference List – November 2024

Image Sensors World        Go to the original article...

6th International Workshop on Image Sensors and Imaging Systems (IWISS2024) - 8 Nov 2024 - Tokyo, Japan - Website

Photonics Spectra Sensors & Detectors Summit 2024 - 13 Nov 2024 - Online - Website

SEMI MEMS & Imaging Sensors Summit - 14 Nov 2024 - Munich, Germany - Website

Eleventh International Workshop on Semiconductor Pixel Detectors for Particles and Imaging (Pixel2024) - 18-22 Nov 2024 - Strasbourg, France - Website

The 6th International Workshop on new Photon-Detectors (PD24) - 19-21 Nov 2024 - Vancouver, BC, Canada - Website

Coordinating Panel for Advanced Detectors Workshop - 19-22 Nov 2024 - Oak Ridge, Tennessee, USA - Website

Compamed - 11-14 Nov 2024 - Dusseldorf, Germany - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

SeeDevice Inc files complaint

Image Sensors World        Go to the original article...

From GlobeNewswire: https://www.globenewswire.com/news-release/2024/09/13/2945864/0/en/SeeDevice-Inc-Files-Complaint-In-U-S-District-Court-Against-Korean-Broadcasting-System.html

SeeDevice Inc. Files Complaint In U.S. District Court Against Korean Broadcasting System

ORANGE, California, Sept. 13, 2024 (GLOBE NEWSWIRE) -- SeeDevice Inc. (“SeeDevice”), together with its CEO and founder Dr. Hoon Kim, has filed a Complaint in the U.S. District Court for the Central District of California against Korean Broadcasting System (KBS), and its U.S. subsidiary KBS America, Inc. (collectively, “KBS”) for trade libel and defamation. The claims are based on an August 25, 2024, broadcast KBS is alleged to have published on its YouTube channel and KBS-america.com (“The KBS Broadcast”).

The complaint asserts that the KBS Broadcast published false and misleading statements regarding the viability and legitimacy of SeeDevice and Dr. Kim’s QMOS™ (quantum effect CMOS) SWIR image sensor, in part because it omitted the fact that in 2009, and again in 2012, the Seoul High Court and the Seoul Administrative Court found Dr. Kim’s sensor to be legitimate.

Dr. Kim’s QMOS™ sensor has garnered industry praise and recognition and is the subject of numerous third-party awards. In the past year alone, SeeDevice has been recognized with four awards for outstanding leadership and innovative technology: "20 Most Innovative Business Leaders to Watch 2023" by Global Business Leaders, "Top 10 Admired Leaders 2023" by Industry Era, "Most Innovative Image Technology Company 2023" by Corporate Vision, and “Company of the Year” of the Top 10 Semiconductor Tech Startups 2023 by Semiconductor Review. 

In their lawsuit, SeeDevice and Dr. Kim seek retraction of KBS’s defamatory broadcast, and a correction of the record, in addition to significant monetary damages and injunctive relief preventing further misconduct by KBS.

Go to the original article...

Event Cameras for Space Applications

Image Sensors World        Go to the original article...

Dissertation defense by B. McReynolds on his thesis titled "Benchmarking and Pushing the Boundaries of Event Camera Performance for Space and Sky Observations," PhD, ETH Zurich, 2024


Courtesy: Prof. Tobi Delbruck

Go to the original article...

Quantum Solutions and Topodrone launch quantum dot SWIR camera

Image Sensors World        Go to the original article...

Press release from Quantum Solutions:

September 19, 2024

QUANTUM SOLUTIONS and TOPODRONE Unveil TOPODRONE x Q.Fly: A Cost-Effective, DJI-Ready Quantum Dot SWIR Camera for UAV Applications

Quantum Solutions and Topodrone are excited to announce the launch of the Q.Fly, a next-generation camera with Quantum Dot Short-Wave Infrared (SWIR) imaging capability designed specifically for UAV (drone) platforms. The Q.Fly is fully DJI-ready, working seamlessly out of the box with the DJI Matrice 300 and DJI Matrice 350 RTK and offering real-time video streaming, control, and configuration directly from the DJI remote controller.

Developed to make SWIR technology more accessible and affordable for drone service companies and drone users, Q.Fly delivers a ready-to-use solution that eliminates the complexities of integrating advanced sensors into UAV platforms. The camera system also includes an RGB camera and/or a thermal camera for enhanced vision capabilities. With plug-and-play compatibility and unmatched spectral imaging performance, Q.Fly redefines what’s possible for a wide range of airborne applications.

This unique product combines Quantum Solutions’ Quantum Dot SWIR imaging technology with TOPODRONE’s UAV expertise, providing a cost-effective alternative to traditional SWIR cameras. Q.Fly covers a broad VIS-SWIR spectral range (400–1700 nm), making it ideal for a variety of airborne applications that demand precise, high-resolution imaging.

Key Features of Q.Fly:

·       Quantum Dot SWIR Sensor: 640 x 512 pixels, covering a spectral range of 400–1700 nm

·       Cost-Effective and Accessible: Q.Fly offers an affordable solution, finally making SWIR imaging technology accessible to a broader audience of drone users and service providers

·       DJI Integration: Fully compatible with DJI Matrice 300 and Matrice 350 RTK, featuring real-time video streaming, control, and configuration from the remote controller


·       Built-In RGB Cameras with optional Thermal imager: Includes a 16 MP RGB camera for visual positioning and a thermal imager (640 x 512 pixels, 30 Hz) for enhanced versatility

·       High-precision geo-referencing of spectral images

·       High-Speed Spectral Imaging: Capable of operating at 220 Hz, delivering superior spectral imaging performance in real-time

·       Lightweight Design: Weighing only 650g with its 3-axis gyrostabilized gimbal, Q.Fly allows for flight times of up to 35 minutes per battery cycle

·       Built-In Linux Computer: Facilitates easy camera control and supports a variety of protocols, including DJI PSDK and Mavlink

·       Filter Flexibility: Supports quick installation of spectral filters to adapt to specific use cases

Q.Fly is designed to serve industries that require precise, reliable, and easy-to-use drone-based imaging solutions, including:

  • Agriculture
  • Fire Safety and Rescue
  • Security and Surveillance
  • Industrial Inspection and Surveying

 

Product Launch at INTERGEO 2024
The TOPODRONE x Q.Fly will be officially unveiled at the INTERGEO 2024 exhibition in Stuttgart from September 24–26. This breakthrough technology will be showcased, highlighting its cost-effectiveness and how it can transform UAV imaging for various industries.
Attendees are invited to visit the TOPODRONE booth (Hall 1, Booth B1.055) to experience the Q.Fly and learn more about its unparalleled ease of use and advanced SWIR capabilities.
 
Unparalleled Ease of Use for Drone Operators
Q.Fly is designed with drone operators in mind, offering a hassle-free solution that simplifies the often-complex process of integrating advanced sensors into UAV platforms. With its plug-and-play compatibility with DJI drones, users can quickly deploy the Q.Fly for a wide range of applications without the need for complex setup procedures.

Go to the original article...

ITE/IISS 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024)

Image Sensors World        Go to the original article...

The 6th International Workshop on Image Sensors and Imaging Systems (IWISS2024) will be held at the Tokyo University of Science on Friday November 8, 2024.

In this workshop, researchers from fields such as image sensing, imaging systems, optics, photonics, computer vision, and computational photography/imaging come together to discuss the future and frontiers of image sensor technologies, exploring the continuing progress and diversity of image sensor engineering as well as state-of-the-art and emerging imaging-system technologies.


Date: November 8 (Fri), 2024
Venue: Forum-2, Morito Memorial Hall, Building 13, Tokyo University of Science / Online
Access: https://maps.app.goo.gl/LyecM4XUYazco5D79
Address: 4-2-2, Kagurazaka, Shinjuku-ku, Tokyo 162-0825, JAPAN

 

Online registration fee information is available here.
Registration is required because the number of in-person seats is limited. Online viewing via Zoom is also offered.
Registration deadline is Nov. 5 (Tue).
Register and pay online from the following website: [Online registration page]

[Plenary Talk]
"CMOS Direct Time-of-Flight Depth Sensor for Solid-Sate LiDAR Systems"
by Jaehyuk Choi (SolidVue, Inc., Korea & Sungkyunkwan Univ. (SKKU), Korea)

[8 Invited Talks]
Invited-1 “Plasmonic Color Filters for Multi-spectral Imaging” by Atsushi Ono (Shizuoka Univ., Japan)
Invited-2 (online) “Intelligent Imager with Processing-in-Sensor Techniques” by Chih-Cheng Hsieh (National Tsing Hua Univ. (NTHU), Taiwan)
Invited-3 “Designing a Camera for Privacy Preserving” by Hajime Nagahara (Osaka Univ., Japan)
Invited-4 “Deep Compressive Sensing with Coded Image Sensor” by Michitaka Yoshida (JSPS, Japan), et al.
Invited-5 “Event-based Computational Imaging using Modulated Illumination” by Tsuyoshi Takatani (Univ. of Tsukuba, Japan)
Invited-6 “Journey of Pixel Optics Scaling into Deep Sub-micron and Migration to Meta Optics Era” by In-Sung Joe (Samsung Electronics, Korea)
Invited-7 “Trigger-Output Event-Driven SOI pixel Sensor for X-ray Astronomy” by Takeshi Tsuru (Kyoto Univ., Japan)
Invited-8 “New Perspectives for Infrared Imaging Enabled by Colloidal Quantum Dots” by Pawel E. Malinowski (imec, Belgium), et al.

Sponsored by:
Technical Group on Information Sensing Technologies (IST), the Institute of Image Information and Television Engineers (ITE)
Co-sponsored by:
International Image Sensor Society (IISS)

Group of Information Photonics (IPG) +CMOS Working Group, the Optical Society of Japan
General Chair: Keiichiro Kagawa (Shizuoka Univ., Japan)
Technical Program Committee (Alphabetical order): Keiichiro Kagawa (Shizuoka Univ., Japan), Hiroyuki Suzuki (Gunma Univ., Japan), Hisayuki Taruki (Toshiba Electronic Devices & Storage Corporation, Japan), Min-Woong Seo (Samsung Electronics, Korea), Sanshiro Shishido (Panasonic Holdings Corporation, Japan)

Contact for any question about IWISS2024
E-mail: iwiss2024@idl.rie.shizuoka.ac.jp (Keiichiro Kagawa, Shizuoka Univ., Japan)

Go to the original article...

Job Postings – Week of 22 September 2024

Image Sensors World        Go to the original article...

Anduril Industries

Chief Engineer, Imaging

Lexington, Massachusetts, USA

Link

Purdue University

Assistant Professor of Physics and Astronomy

West Lafayette, Indiana, USA

Link

RTX Raytheon

Mixed Signal IC Design Senior Engineer

Goleta, California, USA

Link

Sandia National Laboratories

Postdoctoral Appointee - Optoelectronic and Microelectronic Device Fabrication, Onsite

Albuquerque, New Mexico, USA

Link

Apple

Electrical Engineer - Camera Hardware

San Diego, California, USA

Link

University of Birmingham

Professor of Silicon Detector Instrumentation for Particle Physics

Birmingham, England, UK

Link

Google

Imaging Systems Engineer, Devices and Services

Mountain View, California, USA

Link

Institute of Physics in Prague

Postdoctoral research associate in ATLAS

Prague, Czech Republic

Link

Marvell

Silicon Photonics Engineer

Ottawa, Ontario, Canada

Link

Go to the original article...

PhD thesis on CMOS SPAD dToF Systems

Image Sensors World        Go to the original article...

Thesis Title: Advanced techniques for SPAD-based CMOS d-ToF systems
Author: Alessandro Tontini
Affiliation: University of Trento and FBK

Full text available here: [link]

Abstract:

The ability to give electronic devices spatial perception has led to important developments in a wide range of fields, from consumer and entertainment applications to industrial environments, automotive, and aerospace. Among the many techniques that can measure the three-dimensional (3D) information of an observed scene, the unique features offered by direct time-of-flight (d-ToF) with single-photon avalanche diodes (SPADs) integrated in a standard CMOS process have attracted strong interest from both researchers and market stakeholders. Despite the clear advantages of SPAD-based CMOS d-ToF systems over other techniques, many challenges still have to be addressed. The first performance-limiting factor is uncorrelated background light, which places a physical limit on the maximum achievable measurement range. Another concern, especially in industrial and automotive scenarios where many similar systems are expected to operate together and safety of operation is a pillar, is mutual system-to-system interference. Each application, with its own set of requirements, leads to a different set of design challenges. However, given the statistical nature of photons, the common denominator for such systems is the need to operate on a statistical basis, i.e., to run a number of repeated acquisitions from which the time-of-flight (ToF) information is extracted. The gold standard for managing the potentially huge amount of data is to compress it into a histogram memory representing the statistical distribution of the photon arrival times collected during the acquisition. Given the increasing interest in long-range systems with both high imaging and high ranging resolution, the amount of data to be handled reaches alarming levels. In this thesis, we propose an in-depth investigation of these limitations. The problem of background light has been studied extensively over the years, and a wide set of mitigation techniques has already been proposed. However, the trend has been to investigate or propose single solutions, with little knowledge of how different implementations behave in different scenarios. For this reason, our effort focused on comparing existing techniques against each other, highlighting the pros and cons of each and suggesting how they can be combined to increase performance. Regarding mutual system interference, we propose the first per-pixel implementation of an active interference-rejection technique, with measurement results from a purpose-designed chip. To advance the state of the art in reducing the amount of data generated by such systems, we provide for the first time a methodology that completely avoids constructing a resource-consuming histogram of timestamps. Many of our findings are based on preliminary Monte Carlo simulation studies, while the most important achievements in interference-rejection capability and data reduction are supported by measurements obtained with real sensors.
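To make the statistical, histogram-based acquisition described above concrete, here is a small Monte Carlo sketch of d-ToF: repeated laser cycles, Poisson background photons, a jittered laser return, first-photon timestamping, and ToF extraction from the histogram peak. All numbers (bin width, rates, target distance) are illustrative assumptions, not values from the thesis.

```python
import numpy as np

# Minimal Monte Carlo sketch of histogram-based d-ToF acquisition.
# Assumed, illustrative parameters (not from the thesis): one laser cycle of
# 200 ns, a target at 15 m, Poisson background photons, and Gaussian laser jitter.
rng = np.random.default_rng(0)

C = 3e8                      # speed of light [m/s]
CYCLE = 200e-9               # laser repetition period [s]
BIN = 250e-12                # histogram bin width [s]
N_CYCLES = 5000              # repeated acquisitions
TARGET_TOF = 2 * 15.0 / C    # round-trip time for a 15 m target [s]

n_bins = int(CYCLE / BIN)
hist = np.zeros(n_bins, dtype=int)

for _ in range(N_CYCLES):
    # Uncorrelated background: Poisson-distributed photons, uniform in time.
    n_bg = rng.poisson(2.0)
    t_bg = rng.uniform(0, CYCLE, n_bg)
    # Laser return: detected with some probability, jittered around the ToF.
    t_laser = [rng.normal(TARGET_TOF, 100e-12)] if rng.random() < 0.3 else []
    # A simple SPAD pixel only timestamps the first photon in each cycle (pile-up).
    t_all = np.concatenate([t_bg, t_laser])
    if t_all.size:
        hist[int(t_all.min() / BIN)] += 1

# ToF estimate: peak bin of the accumulated histogram.
tof_est = (np.argmax(hist) + 0.5) * BIN
print(f"estimated distance: {tof_est * C / 2:.2f} m")
```

The laser bin stands out against the exponentially decaying background counts, which is exactly the data volume problem the thesis targets: every pixel must accumulate such a histogram over many cycles before a single depth value can be extracted.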

Contents

1 Introduction
1.1 Single Photon Avalanche Diode (SPAD)
1.1.1 Passive quenching
1.1.2 Active quenching
1.1.3 Photon Detection Efficiency (PDE)
1.1.4 Dark Count Rate (DCR) and afterpulsing

2 Related work
2.1 Pioneering results
2.2 Main challenges
2.3 Integration challenges

3 Numerical modelling of SPAD-based CMOS d-ToF sensors
3.1 Simulator architecture overview
3.2 System features modeling
3.2.1 Optical model
3.2.2 Illumination source - modeling of the laser emission profile
3.3 Monte Carlo simulation
3.3.1 Generation of SPAD-related events
3.3.2 Synchronous and asynchronous SPAD model
3.4 Experimental results
3.5 Summary

4 Analysis and comparative evaluation of background rejection techniques
4.1 Background rejection techniques
4.1.1 Photon coincidence technique
4.1.2 Auto-Sensitivity (AS) technique
4.1.3 Last-hit detection
4.2 Results
4.2.1 Auto-Sensitivity vs. photon coincidence
4.2.2 Comparison of photon coincidence circuits
4.2.3 Last-hit detection characterization
4.3 Automatic adaptation of pixel parameters
4.4 Summary


5 A SPAD-based linear sensor with in-pixel temporal pattern detection for interference and background rejection with smart readout scheme
5.1 Architecture
5.1.1 Pixel architecture
5.1.2 Readout architecture
5.2 Characterization
5.2.1 In-pixel laser pattern detection characterization
5.2.2 Readout performance assessment
5.3 Operating conditions and limits
5.4 Summary

6 SPAD response linearization: histogram-less LiDAR and high photon flux measurements
6.1 Preliminary validation
6.1.1 Typical d-ToF operation
6.1.2 Histogram-less approach
6.2 Mathematical analysis
6.3 Acquisition schemes
6.3.1 Acquisition scheme #1: Acquire or discard
6.3.2 Acquisition scheme #2: Time-gated
6.3.3 Discussion on implementation, expected performance and mathematical analysis
6.3.4 Comparison with state-of-the-art
6.4 Measurement results
6.4.1 Preliminary considerations
6.4.2 Measurements with background light only
6.4.3 Measurements with background and laser light and extraction of the ToF
6.5 Summary

7 Conclusion
7.1 Results
7.1.1 Modelling of SPAD-based d-ToF systems
7.1.2 Comparative evaluation of background-rejection techniques
7.1.3 Interference rejection
7.1.4 Histogram-less and high-flux LiDAR
7.2 Future work and research
Bibliography

Go to the original article...

8th Space & Scientific CMOS Image Sensors workshop – abstracts due Sep 13, 2024

Image Sensors World        Go to the original article...

CNES, ESA, AIRBUS DEFENCE & SPACE, THALES ALENIA SPACE, SODERN, OHB, ISAE SUP’AERO are pleased to invite you to the 8th “Space & Scientific CMOS Image Sensors” workshop to be held in TOULOUSE on November 26th and 27th 2024 within the framework of the Optics and Optoelectronics COMET (Communities of Experts).

The aim of this workshop is to focus on CMOS image sensors for scientific and space applications. Although the workshop is organized by actors from the space community, it is widely open to other professional imaging applications, such as machine vision, medical, Advanced Driver Assistance Systems (ADAS), and broadcast (UHDTV), which drive the development of new pixel and sensor architectures for high-end applications. Furthermore, we invite laboratories and research centers that develop custom CMOS image sensors with advanced on-chip smart design to join this workshop.

Topics
- Pixel design (high QE, FWC, MTF optimization, low lag,…)
- Electrical design (low noise amplifiers, shutter, CDS, high speed architectures, TDI, HDR)
- On-chip ADC or TDC (in pixel, column, …)
- On-chip processing (smart sensors, multiple gains, summation, corrections)
- Low-light detection (electron multiplication, avalanche photodiodes, quanta image sensors)
- Photon counting, Time resolving detectors (gated, time-correlated single-photon counting)
- Hyperspectral architectures
- Materials (thin film, optical layers, dopant, high-resistivity, amorphous Si)
- Processes (backside thinning, hybridization, 3D stacking, anti-reflection coating)
- Packaging
- Optical design (micro-lenses, trench isolation, filters)
- Large size devices (stitching, butting)
- High speed interfaces
- Focal plane architectures
- CMOS image sensors with recent space heritage (in-flight performance)

Venue
DIAGORA
Centre de Congrès et d'Exposition. 150, rue Pierre Gilles de Gennes
31670 TOULOUSE – LABEGE

Abstract submission
Please send a short abstract (one A4 page maximum, in Word or PDF format) giving the title, the authors' names and affiliations, and the subject of your talk to L-WCIS24@cnes.fr

Workshop format & official language
Presentations at the workshop will be oral. The official language of the workshop is English.

Slide submission
After abstract acceptance notification, the author(s) will be asked to prepare their presentation in PDF or PowerPoint format, to present it at the workshop, and to provide a copy to the organizing committee with authorization to make it available to all attendees and online for the CCT members.

Registration
Registration fee: 100 €.
https://evenium.events/space-and-scientific-cmos-image-sensors-2024/ 

Calendar
13th September 2024 Deadline for abstract submission
11th October 2024 Author notification & preliminary programme
14th October 2024 Registration opening
8th November 2024 Final programme
26th-27th November 2024 Workshop

Go to the original article...

TriEye launches TES200 SWIR Image Sensor

Image Sensors World        Go to the original article...

TriEye has launched the TES200, a 1.3MP SWIR image sensor for machine vision and robotics. See press release below.

TEL AVIV, Israel, September 3, 2024 – TriEye, pioneer of the world's first cost-effective, mass-market Short-Wave Infrared (SWIR) sensing technology, announced today the release of the TES200 1.3MP SWIR image sensor. Based on the innovative TriEye CMOS image sensor technology that allows SWIR capabilities using a CMOS manufacturing process, the TES200 is the first commercially available product released in the Raven product family.

The TES200 operates in the 700nm to 1650nm wavelength range, delivering high sensitivity and 1.3MP resolution. With its large format, high frame rate, and low power consumption, the TES200 offers enhanced sensitivity and dynamic range. This makes the new image sensor ideal for imaging and sensing applications across various industries, including automotive, industrial, robotics, and biometrics.

"We are proud to announce the commercial availability of the TES200 image sensor. Our CMOS-based solution has set new standards in the automotive market, and with the rise of new Artificial Intelligence (AI) systems, the demand for more sensors and more information has increased. The TES200 now brings these advanced SWIR capabilities to machine vision and robotic systems in various  industries,” said Avi Bakal, CEO of TriEye. “We are excited to offer a solution that delivers a new domain of capabilities in a cost-effective and scalable way, broadening the reach of advanced sensing technology."

The TriEye Raven image sensor family is designed for emerging machine vision and robotics applications, incorporating the latest SWIR pixel and packaging technologies. The TES200 is immediately available in sample quantities and available for production orders with delivery in Q2 2025.


 

Experience the TES200 in Action at CIOE and VISION 2024

We invite you to explore the advanced capabilities of the TES200 at the CIOE exhibition, held from September 11 to 13, 2024, at the Shenzhen World Exhibition and Convention Center, China, within the Lasers Technology & Intelligent Manufacturing Expo. View the demo at the Vertilas booth (no. 4D021, 4D022). Then, meet TriEye’s executive team at VISION 2024 in Stuttgart, Germany, from October 8 to 10, at the TriEye booth (no. 8A08), where you can experience a live demo of the TES200 and the brand new Ovi 2.0 devkit, and learn firsthand about our latest developments in SWIR imaging.

About TriEye 

TriEye is the pioneer of the world's first CMOS-based Short-Wave Infrared (SWIR) image sensing solutions. Based on advanced academic research, TriEye's breakthrough technology enables HD SWIR imaging and accurate, deterministic 3D sensing in all weather and ambient lighting conditions. The company's semiconductor and photonics technology enabled the development of the SEDAR (Spectrum Enhanced Detection And Ranging) platform, which allows perception systems to operate and deliver reliable image data and actionable information while reducing expenditure by up to 100x compared with existing industry rates. For more information, visit www.trieye.tech

Go to the original article...

2024 SEMI MEMS and Imaging Summit program announced

Image Sensors World        Go to the original article...

SEMI MEMS & Imaging Sensors Summit 2024 will take place November 14-15 at the International Conference Center Munich (ICM), Messe Münich in Germany.

Thursday, 14th November 2024 

Session 1: Market Dynamics: Landscape and Growth Strategies

09:00  Welcome Remarks
Laith Altimime, President, SEMI Europe

09:20  Opening Remarks by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

09:25  Keynote: Smart Sensors for Smart Life – How Advanced Sensor Technologies Enable Life-Changing Use Cases
Stefan Finkbeiner, General Manager, Bosch Sensortec

09:45  Keynote: Sensing the World: Innovating for a More Sustainable Future
Simone Ferri, APMS Group Vice President, MEMS sub-group General Manager, STMicroelectronics

10:05  Reserved for Yole Development

10:25  Key Takeaways by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

10:30  Networking Coffee Break

Session 2: Sustainable Supply Chain Capabilities

11:10  Opening Remarks by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

11:15  A Paradigm Shift From Imaging to Vision: Oculi Enables 600x Reduction in Latency-Energy Factor for Visual Edge Applications
Charbel Rizk, Founder & CEO, Oculi

11:35  Reserved for Comet Yxlon

11:55  Key Takeaways by Session Chair
Pawel Malinowski, Program Manager and Researcher, imec

12:00  Networking Lunch

Session 3: MEMS - Exploring Future Trends for Technologies and Device Manufacturing

13:20  Opening Remarks by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

13:25  Unlocking Novel Opportunities: How 300mm-capable MEMS Foundries Will Change the Game
Jessica Gomez, CEO, Rogue Valley Microdevices

13:45  Trends in Emerging MEMS
Alissa Fitzgerald, CEO, A.M. Fitzgerald & Associates, LLC

14:05  The Most Common Antistiction Films are PFAS, Now What?
David Springer, Product Manager, MVD and Release Etch Products, KLA Corporation

14:25  Reserved for Infineon

14:45  Latest Innovations in MEMS Wafer Bonding
Thomas Uhrmann, Director of Business Development, EV Group

15:05  Key Takeaways by Session Chair
Pierre Damien Berger, MEMS Industrial Partnerships Manager, CEA LETI

Session 4: Imaging - Exploring Future Trends for Technologies and Device Manufacturing

15:10  Opening Remarks by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

15:15  Topic Coming Soon
Avi Bakal, CEO & Co-founder, TriEye

15:35  Active Hyperspectral Imaging Using Extremely Fast Tunable SWIR Light Source
Jussi Soukkamaki, Lead, Hyperspectral & Imaging Technologies, VTT Technical Research Centre of Finland Ltd

15:55  Networking Coffee Break

16:40  Reserved

17:00  Reserved for CEA-Leti

17:20  Reserved for STMicroelectronics

17:40  Key Takeaways by Session Chair
Stefano Guerrieri, Engineering Fellow and Key Expert Imager & Sensor Components, ams OSRAM

Friday, 15th November 2024 

Session 5: MEMS and Imaging Young Talent

09:00  Opening Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:05  Unlocking Infrared Multispectral Imaging with Pixelated Metasurface Technology
Charles Altuzarra, Chief Executive Officer & Co-founder, Metahelios

09:10  Electrically Tunable Dual-Band VIS/SWIR Imaging and Sensing
Andrea Ballabio, CEO, EYE4NIR

09:15  FMCW Chip-Scale LiDARs Scale Up for Large Volume Markets Thanks to Silicon Photonics Technology
Simoens François, CEO, SteerLight

09:20  ShadowChrome: A Novel Approach to an Old Problem
Geoff Rhoads, Chief Technology Officer, Transformative Optics Corporation

09:25  Feasibility Investigation of Spherically Bent Image Sensors
Amit Pandey, PhD Student, Technische Hochschule Ingolstadt

09:30  Intelligence Through Vision
Stijn Goossens, CTO, Qurv

09:35  Next Generation Quantum Dot SWIR Sensors
Artem Shulga, CEO & Founder, QDI Systems

09:40  Closing Remarks by Session Chair
Dimitrios Damianos, Project Manager, Yole Group

09:45  Networking Coffee Break

Session 6: Innovations for Next-Gen Applications: Smart Mobility

10:35  Opening Remarks by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

10:40  Reserved

11:00  New Topology for MEMS Advances Performance and Speeds Manufacturing
Eric Aguilar, CEO, Omnitron Sensors, Inc.

11:20  Key Takeaways by Session Chair
Bernd Dielacher, Business Development Manager MEMS, EVG

Session 7: Innovations for Next-Gen Applications: Health

11:25  Opening Remarks by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

11:30  Reserved

11:50  Sensors for Monitoring Vital Signs in Wearable Devices
Markus Arzberger, Senior Director, ams-OSRAM International GmbH

12:10  Pioneering Non-Invasive Wearable MIR Spectrometry for Key Health Biomarkers Analysis
Jan F. Kischkat, CEO, Quantune Technologies GmbH

12:30  Key Takeaways by Session Chair
Ran Ruby YAN, Director of HMI & HealthTech Business Line, GLOBALFOUNDRIES

12:35  End of Conference Reflections by MEMS and Imaging Committee Chair
Philippe Monnoyer, VTT Technical Research Center of Finland Ltd

12:45  Closing Remarks
Laith Altimime, President, SEMI Europe

12:50  Networking Lunch

Go to the original article...

IEEE SENSORS 2024 — image sensor topics announced

Image Sensors World        Go to the original article...

The topics and speakers for the following two image-sensor events at the IEEE SENSORS 2024 Conference have been finalized. The conference will be held in Kobe, Japan, from 20-23 October 2024. It will provide the opportunity to hear world-class speakers in the field of image sensors and to sample the broader sensor ecosystem to see how imaging fits in.

Workshop: “From Imaging to Sensing: Latest and Future Trends of CMOS Image Sensors” [Sunday, 20 October]

Organizers: Sozo Yokogawa (Sony Semiconductor Solutions corp.) • Erez Tadmor (onsemi)

“Trends and Developments in State-of-the-Art CMOS Image Sensors”, Daniel McGrath, TechInsights
“CMOS Image Sensor Technology: what we have solved, what are to be solved”, Eiichi Funatsu, OMNIVISION
“Automotive Imaging: Beyond human Vision”, Vladi Korobov, onsemi
“Recent Evolution of CMOS Image Sensor Pixel Technology”, Bumsuk Kim et al., Samsung Electronics
“High precision ToF image sensor and system for 3D scanning application”, Keita Yasutomi, Shizuoka University
“High-definition SPAD image sensors for computer vision applications”, Kazuhiro Morimoto, Canon Inc.
“Single Photon Avalanche Diode Sensor Technologies for Pixel Size Shrinkage, Photon Detection Efficiency Enhancement and 3.36-µm-pitch Photon-counting Architecture”, Jun Ogi, Sony Semiconductor Solutions Corp.
“SWIR Single-Photon Detection with Ge-on-Si Technology”, Neil Na, Artilux Inc.
“From SPADs to smart sensors: ToF system innovation and AI enable endless application”, Laurent Plaza & Olivier Lemarchand, STMicroelectronics
“Depth Sensing Technologies, Cameras and Sensors for VR and AR”, Harish Venkataraman, Meta Inc.
 
Focus session: Overview of the Focus Session on Stacking in Image Sensors [Monday, 21 October]

Organizer: S-G. Wu, Brillnics

Co-chairs: DN Yaung, TSMC; John McCarten, L3 Harris

Over the past decade, three-dimensional (3D) wafer-level stacked backside-illuminated (BSI) CMOS image sensors (CIS) have achieved rapid progress in mass production. This focus session on stacking in image sensors features four invited papers exploring the evolution of sensor stacking technology, from process development and circuit architecture to AI/edge computing in system integration.

“The Productization of Stacking in Image Sensors”, Daniel McGrath, TechInsights
“Evolution of Image Sensing and Computing Architectures with Stacking Device Technologies”, BC Hseih, Qualcomm
“Event-based vision sensor”, Christoph Posch, Prophesee
“Evolution of digital pixel sensor (DPS) and advancement by stacking technologies”, Ikeno Rimon, Brillnics

Go to the original article...

Galaxycore educational videos

Image Sensors World        Go to the original article...

 

Are you curious about how CMOS image sensors capture such clear and vivid images? Start your journey with the first episode of "CIS Explained". In this episode, we dive deep into the workings of these sophisticated sensors, from the basics of pixel arrays to the intricacies of signal conversion.
This episode serves as your gateway to understanding CMOS image sensors.


In this video, we're breaking down Quantum Efficiency (QE) and its crucial role in CIS. QE is a critical measure of how efficiently our sensors convert incoming light into electrical signals, directly affecting image accuracy and quality. This video will guide you through what QE means for CIS, its impact on your images, and how we're improving QE for better, more reliable imaging.
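As a back-of-the-envelope companion to the video, quantum efficiency can be estimated from a flat-field measurement as the ratio of collected photoelectrons to incident photons. The sketch below is purely hypothetical; none of the numbers are GalaxyCore data.

```python
# Hypothetical back-of-the-envelope QE estimate from a flat-field measurement.
# QE = (photoelectrons collected per pixel) / (photons incident per pixel).
# Every number below is an illustrative assumption, not GalaxyCore data.

H = 6.626e-34        # Planck constant [J*s]
C = 3.0e8            # speed of light [m/s]

wavelength = 530e-9          # green test light [m]
irradiance = 0.1             # optical power density at the sensor [W/m^2]
pixel_area = (1.0e-6) ** 2   # 1.0 um pixel pitch [m^2]
exposure = 10e-3             # exposure time [s]

mean_signal_dn = 480.0       # measured mean signal above dark level [DN]
conversion_gain = 0.25       # sensor gain [DN per electron], from calibration

photon_energy = H * C / wavelength                            # [J per photon]
photons = irradiance * pixel_area * exposure / photon_energy  # photons per pixel
electrons = mean_signal_dn / conversion_gain                  # electrons per pixel

qe = electrons / photons
print(f"photons/pixel: {photons:.0f}, electrons/pixel: {electrons:.0f}, QE ~ {qe:.2f}")
```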


GalaxyCore DAG HDR Technology Film


Exploring GalaxyCore's Sensor-Shift Optical Image Stabilization (OIS) in under Two Minutes


GalaxyCore's COM packaging technology—a breakthrough in CIS packaging. This video explains how placing two suspended gold wires on the image sensor and bonding it to an IR base can enhance the durability and clarity of image sensors, prevent contamination, and ensure optimal optical alignment.

Go to the original article...

Avoiding information loss in the photon transfer method

Image Sensors World        Go to the original article...

In a recent paper titled "PCH-EM: A Solution to Information Loss in the Photon Transfer Method" in IEEE Trans. on Electron Devices, Aaron Hendrickson et al. propose a new statistical technique to estimate CIS parameters such as conversion gain and read noise.

Abstract: Working from a Poisson-Gaussian noise model, a multisample extension of the photon counting histogram expectation-maximization (PCH-EM) algorithm is derived as a general-purpose alternative to the photon transfer (PT) method. This algorithm is derived from the same model, requires the same experimental data, and estimates the same sensor performance parameters as the time-tested PT method, all while obtaining lower uncertainty estimates. It is shown that as read noise becomes large, multiple data samples are necessary to capture enough information about the parameters of a device under test, justifying the need for a multisample extension. An estimation procedure is devised consisting of initial PT characterization followed by repeated iteration of PCH-EM to demonstrate the improvement in estimating uncertainty achievable with PCH-EM, particularly in the regime of deep subelectron read noise (DSERN). A statistical argument based on the information theoretic concept of sufficiency is formulated to explain how PT data reduction procedures discard information contained in raw sensor data, thus explaining why the proposed algorithm is able to obtain lower uncertainty estimates of key sensor performance parameters, such as read noise and conversion gain. Experimental data captured from a CMOS quanta image sensor with DSERN are then used to demonstrate the algorithm’s usage and validate the underlying theory and statistical model. In support of the reproducible research effort, the code associated with this work can be obtained on the MathWorks file exchange (FEX) (Hendrickson et al., 2024).
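For context, the classical PT data reduction that the paper compares against boils down to a mean-variance fit. Below is a minimal sketch of that baseline under the same Poisson-Gaussian model, using simulated flat-field data; it is not the PCH-EM algorithm itself, and the parameter values are arbitrary.

```python
import numpy as np

# Minimal sketch of the classical photon transfer (PT) estimate under a
# Poisson-Gaussian model (simulated data; not the PCH-EM algorithm).
rng = np.random.default_rng(1)

g_true = 0.8        # conversion gain [DN per electron]
read_noise_e = 1.5  # read noise [electrons rms]
n_pix = 100_000     # samples per illumination level

means, variances = [], []
for exposure_e in [5, 10, 20, 50, 100, 200]:   # mean photoelectrons per pixel
    electrons = rng.poisson(exposure_e, n_pix)
    frame = g_true * (electrons + rng.normal(0, read_noise_e, n_pix))  # [DN]
    means.append(frame.mean())
    variances.append(frame.var())

# PT relation: var(DN) = g * mean(DN) + (g * read_noise_e)^2
slope, intercept = np.polyfit(means, variances, 1)
g_est = slope
read_noise_est = np.sqrt(max(intercept, 0.0)) / g_est
print(f"conversion gain: {g_est:.3f} DN/e-, read noise: {read_noise_est:.2f} e- rms")
```

The paper's point is that this reduction to means and variances discards information contained in the raw per-pixel histograms, which is why PCH-EM, working on the full histogram, achieves lower-uncertainty estimates, particularly in the deep subelectron read noise regime.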

 

RRMSE versus read noise for parameter estimates computed using the constant-flux implementations of PT and PCH-EM. The RRMSE curves for the PT estimates of μ and σ grow large near σ_read = 0 and were clipped from the plot window.


Open access paper link: https://ieeexplore.ieee.org/document/10570238

Go to the original article...

Job Postings – Week of 18 August 2024

Image Sensors World        Go to the original article...

Omnivision

Principal Image Sensor Technology Engineer

Santa Clara, California, USA

Link

Teledyne

Product Assurance Engineer

Chelmsford, England, UK

Link

Tokyo Electron Labs

Heterogenous Integration Process Engineer I

Albany, New York, USA

Link

Fraunhofer IMS

Doktorand*in Optische Detektoren mit integrierten 2D-Materialien

Duisburg, Germany

Link

AMETEK Forza Silicon

Principal Mixed Signal Design Engineer

Pasadena, CA, USA

Link

University of Birmingham

Professor of Silicon Detector Instrumentation for Particle Physics

Birmingham, England, UK

Link

Ouster

Sensor Package Design Engineer

San Francisco, California, USA

Link

Beijing Institute of High Energy Physics

CEPC Overseas High-Level Young Talents

Beijing, China

Link

Thermo Fisher Scientific

Sr. Staff Product Engineer

Waltham, Massachusetts, USA (Remote)

Link

Go to the original article...

Harvest Imaging Forum 2024 registration open

Image Sensors World        Go to the original article...

The Harvest Imaging Forum tradition continues: the next (tenth) edition will be organized on November 7 & 8, 2024, in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to digital imaging. The 2024 forum will be an in-person event.

The 2024 Harvest Imaging forum will deal with a single topic from the solid-state imaging world and will feature one world-class expert as the speaker:

"AI and VISION : A shallow dive into deep learning"

Prof. dr. Jan van Gemert (Delft Univ. of Technology, Nl)

Abstract: Artificial Intelligence is taking the world by storm! The AI engine is powered by “Deep Learning”. Deep learning differs from normal computer programming in that it allows computers to learn tasks from large, labelled datasets. In this Harvest Imaging Forum we will go through all the fundamentals of Deep Learning: multi-layer perceptrons, back-propagation, optimization, convolutional neural networks, recurrent neural networks, un-/self-supervised learning, and transformers with self-attention (GPT).
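For readers who want a feel for the first two items on that list before the forum, here is a tiny, self-contained illustration (my own toy example, not forum material) of a two-layer perceptron trained by back-propagation and gradient descent on the XOR problem:

```python
import numpy as np

# Tiny two-layer perceptron trained with back-propagation on XOR.
# Purely illustrative; not material from the Harvest Imaging forum.
rng = np.random.default_rng(42)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for _ in range(5000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    # Backward pass (gradients of the squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent update
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))  # should approach [0, 1, 1, 0]
```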

Bio: Jan van Gemert received a PhD degree from the University of Amsterdam in 2010. He was a post-doctoral fellow there and at École Normale Supérieure in Paris. Currently he leads the Computer Vision lab at Delft University of Technology, where he teaches the Deep Learning and Computer Vision MSc courses. His research focuses on visual inductive priors for deep learning for automatic image and video understanding. He has published over 100 peer-reviewed papers with more than 7,500 citations. See his Google Scholar profile for his publications: https://scholar.google.com/citations?hl=en&user=JUdMRGcAAAAJ

Registration: The registration fee for this 2-day forum is set at 1295 Euro for in-person attendance. In addition to attendance at the forum itself, the in-person fee includes:

  •  Coffee breaks in the mornings and afternoons,
  •  Lunch on both forum days,
  •  Dinner on the first forum day,
  •  Soft and hard copy of the presented material.

If you are interested in attending this forum, please fill out the registration form here: https://harvestimaging.com/forum_registration_2024.php

Go to the original article...

PhD thesis on a low power "time-to-first-spike" event sensor

Image Sensors World        Go to the original article...

Title: Event-based Image Sensor for low-power

Author: Mohamed AKRARAI (Universite Grenoble Alpes)

Abstract: In the framework of the OCEAN 12 European project, this PhD achieved the design, implementation, and testing of an event-based image sensor, as well as the publication of several scientific papers at international conferences, including renowned ones like the International Symposium on Asynchronous Circuits and Systems (ASYNC). The design of event-based image sensors, which are frameless, requires a dedicated architecture and asynchronous logic reacting to events. First, this PhD gives an overview of architectures based on a hybrid pixel matrix including TFS and DVS pixels. Indeed, these two kinds of pixels are able to manage spatial redundancy and temporal redundancy, respectively. One of the main achievements of this work is to take advantage of having both pixel types inside an imager in order to reduce its output bitstream and its power consumption. Then, the design of the pixels and readout in the FDSOI 28 nm technology from STMicroelectronics is detailed. Finally, two image sensors have been implemented in a test chip and tested.

Link: https://theses.hal.science/tel-04213080v1/file/AKRARAI_2023_archivage.pdf
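To illustrate the distinction the abstract draws between the two pixel types, here is a simplified behavioral sketch (my own toy models, not the circuits designed in the thesis): a DVS pixel emits events only when the log intensity changes, suppressing temporal redundancy, while a TFS pixel encodes static intensity as a time-to-first-spike.

```python
import numpy as np

# Simplified behavioral models of the two pixel types (not the thesis circuits).

def dvs_events(intensity_trace, threshold=0.15):
    """Emit +1/-1 events when log intensity changes by more than `threshold`
    since the last event (temporal-redundancy suppression)."""
    events = []
    ref = np.log(intensity_trace[0])
    for t, i in enumerate(intensity_trace):
        diff = np.log(i) - ref
        while abs(diff) >= threshold:
            polarity = 1 if diff > 0 else -1
            events.append((t, polarity))
            ref += polarity * threshold
            diff = np.log(i) - ref
    return events

def tfs_spike_time(intensity, dark_current=1e-3, c_threshold=1.0):
    """Time-to-first-spike: a brighter pixel integrates to threshold sooner,
    so intensity is encoded in the spike time within each frame."""
    return c_threshold / (intensity + dark_current)

# A pixel watching a brightening edge produces a burst of ON events...
trace = np.concatenate([np.full(50, 1.0), np.linspace(1.0, 3.0, 20), np.full(30, 3.0)])
print(len(dvs_events(trace)), "DVS events")
# ...while TFS readout orders static pixels by brightness (spike times in a.u.).
print([round(tfs_spike_time(i), 2) for i in (0.5, 1.0, 3.0)])
```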

 

Go to the original article...

EETimes article on imec

Image Sensors World        Go to the original article...

Full article: https://www.eetimes.eu/imec-getting-high-precision-sensors-to-market/

Imec: Getting High-Precision Sensors to Market

At the recent ITF World 2024, EE Times Europe talked with imec researchers to catch up on what they’re doing with high-precision sensors—and more importantly, how they make sure their innovations get into the hands of industrial players.

Imec develops sensors for cameras and displays, and it works with both light and ultrasound—for medical applications, for example. But the Leuven, Belgium–based research institute never takes technology to market itself. It either finds industrial partners—or when conditions are right, imec creates a spinoff. One way to understand how imec takes an idea from lab to fab and finds a way to get it to market is to zoom in on its approach with image sensors for cameras.

“We make image sensors that are at the beating heart of incredible cameras around the world,” said Paul Heremans, vice president of future CMOS devices and senior fellow at imec. “Our research starts with material selection and an overall new concept for sensors and goes all the way to development, engineering and low-volume manufacturing within imec’s pilot line.”

A good example is the Pharsighted E9-100S ultra-high-speed video camera, developed by Pharsighted LLC and marketed by Photron. The camera reaches 326,000 frames per second (full frame: 640 × 480 pixels) and up to 2,720,000 frames per second at a lower frame size (640 × 32 pixels), thanks to a high-speed image sensor developed and manufactured by imec.

Another example is an electron imager used in a cryo-transmission electron microscope (cryo-TEM) marketed by a U.S. company called Thermo Fisher. The instrument produces atomic resolution pictures of DNA strands and other complex molecules. These images help in the drug-discovery process by allowing researchers to understand the structure of the molecules they need to target.
Thermo Fisher uses direct electron detection imagers developed by imec and built into the company's Falcon direct electron detectors, each composed of 4K × 4K pixels. The pixels are very large to achieve the ultimate sensitivity. Consequently, the chip is so large (5.7 × 5.7 cm) that only four fit on a 200-mm wafer.

A third example is hyperspectral imagers, with very special filters that detect many more colors than just red, green and blue (RGB). Hyperspectral imagers pick up tens or hundreds of spectral bands. They can achieve this level of performance because imec implements processing filters on each pixel.

“We can do that on almost any commercial imager and turn it into a hyperspectral camera,” Heremans said. “Our technology is used by plenty of customers with a range of applications—from surveillance to satellite-based Earth observation, from medical to agriculture and more.”

Spectricity

To bring some of its work on hyperspectral imagers to market, imec created a startup called Spectricity. “The whole idea is to bring this field of multispectral imaging or spectroscopy into cellphones or other high-volume products,” said Glenn Vandevoorde, CEO of Spectricity. “Our imagers can see things that are not visible to the human eye. Instead of just processing RGB data, which a traditional camera does, we take a complete spectral image, where each pixel contains 16 different color points—including near-infrared. And with that, you can detect different materials that look alike but are actually very different. Or you can do color correction on smartphones. Sometimes people look very different, depending on the ambient light. We can detect what kind of light is shining—and based on that, adjust the color.”
The first use case for cellphones is auto white balancing. When a picture is taken with a cellphone, sometimes the colors show up very differently from reality, because the camera doesn’t have an accurate white point, which is the set of values that make up the color white in an image. These values change under different conditions, which means they need to be calibrated often. All other colors are then adjusted based on the white point reference.

Traditional smartphone cameras cannot determine the ambient light accurately, so they cannot find the white point to serve as a viable reference. But the multispectral imager obtains the full spectral information of the ambient light and applies advanced AI algorithms to detect the white point, which leads to accurate auto white balancing and true color correction.
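For readers unfamiliar with the white-point step, the correction itself is simple once the illuminant is known; the hard part, which Spectricity's sensor addresses, is estimating that illuminant. Below is a textbook diagonal (von Kries) white-balance sketch with made-up numbers; it is not Spectricity's pipeline.

```python
import numpy as np

# Diagonal (von Kries) white balance: scale each channel so the estimated
# illuminant maps to neutral gray. Textbook method, not Spectricity's pipeline.

def white_balance(image_rgb, illuminant_rgb):
    """image_rgb: HxWx3 linear RGB; illuminant_rgb: estimated white point."""
    illuminant = np.asarray(illuminant_rgb, dtype=float)
    gains = illuminant.mean() / illuminant          # per-channel correction gains
    return np.clip(image_rgb * gains, 0.0, 1.0)

# Example: a warm (tungsten-like) illuminant makes a gray patch look orange;
# correcting with the estimated white point restores it to neutral.
warm_white_point = [1.0, 0.8, 0.55]                  # hypothetical estimate
gray_patch = np.full((2, 2, 3), 0.5) * warm_white_point
print(white_balance(gray_patch, warm_white_point)[0, 0])   # ~[0.39, 0.39, 0.39]
```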

Spectricity said its sensor is being evaluated by seven out of the top eight smartphone manufacturers in the world for integration into phones. “By the end of this year, you will see several smartphone vendors launching the first phones with multispectral imagers inside,” Vandevoorde said.

While smartphones are the ultimate target for high volume, they are also very cost-competitive—and it takes a long time to introduce a new feature in a smartphone. Spectricity is targeting other smartphone applications but also applications for webcams, security cameras and in-cabin video cameras for cars. One category of use cases takes advantage of the ability of multispectral images to detect health conditions.

 

Spectricity’s spectral image sensor technology extends the paradigm of RGB color image sensors. Instead of red, green and blue filters on the pixels, many different spectral filters are deposited on the pixels, using wafer-scale, high-volume fabrication techniques. (Source: Spectricity)

 
Spectricity’s miniaturized spectral camera module, optimized for mobile devices.

“For example, you can accurately monitor how a person’s skin tone develops every day,” Vandevoorde said. “We can monitor blood flow in the skin, we can monitor moisture in the skin, we can detect melanoma and so on. These and many other things can be detected with these multispectral imagers.”
Spectricity has raised €28 million in funding since it was founded in 2018—and the startup has its own mass-production line at X-Fab, one of the company’s investors. “We have our machinery and our process installed there,” Vandevoorde said. “It’s now going through qualification—and by the end of the year, we’ll be ready for mass production to start shipping large volume to customers.” 

How imec finds the right trends to target
Spectricity is a good example of how imec spots a need and develops technology to meet that need. Spectroscopy, of course, is not new. It’s been around for decades, and researchers use it in labs to detect different materials and different gases. What’s new is that imec integrated spectroscopy onto CMOS technology and developed processes to produce it in high volumes for just a couple of dollars. Researchers worked on the idea for about 10 years—and once it was running on imec’s pilot line, the institute set up Spectricity to take it into mass production and develop applications around it. 

“We sniff around different trends,” said Xavier Rottenberg, scientific director and group leader of wave-based sensors and actuators at imec. “We’re in contact with a lot of players in the industry to get exposed to plenty of problems. Based on that, we develop a gut feeling. But gut feelings are dangerous, because it might be that you’re just hungry. However, with an educated gut feeling, sometimes your intuition is right.”

Once imec develops an idea in the lab, it takes the technology to its pilot line to develop a demonstrator. “We do proofs of concept to see how a device performs,” Rottenberg said. “Then we set up contacts in the ecosystem to form partnerships to bring the platform to a level where it can be mass-produced in an industrial fab.”

In some cases, an idea is too far out for partners to pick up for near-term profit. That’s when imec ventures out with a spinoff company, as it did with Spectricity.


Go to the original article...

Sony rebranding IMX sensors to LYTIA (?)

Image Sensors World        Go to the original article...

Link to full article: https://www.phonearena.com/news/sonys-image-sensor-makeover-imx-to-lytia-by-2026_id160402

Sony's image sensor makeover: IMX to LYTIA by 2026

... there's a buzz about Sony making a branding shift for its smartphone image sensors. According to a recent report, Sony is considering moving all its mobile image sensors, including the current IMX lineup, under the newer LYTIA brand. The company is gradually phasing out the IMX brand, and some IMX sensors have already been rebranded to LYTIA. Reportedly, the company plans to fully transition to the LYT lineup by 2026.

The report states that the 50MP IMX890 and IMX882 sensors have already been rebranded as LYT-701 and LYT-600. For instance, the LYT-600 is already used in the vivo X100 Ultra, launched in May this year.

Go to the original article...

A 100kfps X-ray imager

Image Sensors World        Go to the original article...

Marras et al. presented a paper titled "Development of the Continuous Readout Digitising Imager Array Detector" at the Topical Workshop on Electronics for Particle Physics 2023.

Abstract: The CoRDIA project aims to develop an X-ray imager capable of continuous operation in excess of 100 kframe/s. The goal is to provide a suitable instrument for photon science experiments at diffraction-limited synchrotron rings and at free-electron lasers operating in continuous-wave mode. Several chip prototypes were designed in a 65 nm process; in this paper we present an overview of the challenges and solutions adopted in the ASIC design.

 
 
 
  


Go to the original article...

Pixel-level programmable regions-of-interest for high-speed microscopy

Image Sensors World        Go to the original article...

Zhang et al. from MIT recently published a paper titled "Pixel-wise programmability enables dynamic high-SNR cameras for high-speed microscopy" in Nature Communications.

Abstract: High-speed wide-field fluorescence microscopy has the potential to capture biological processes with exceptional spatiotemporal resolution. However, conventional cameras suffer from low signal-to-noise ratio at high frame rates, limiting their ability to detect faint fluorescent events. Here, we introduce an image sensor where each pixel has individually programmable sampling speed and phase, so that pixels can be arranged to simultaneously sample at high speed with a high signal-to-noise ratio. In high-speed voltage imaging experiments, our image sensor significantly increases the output signal-to-noise ratio compared to a low-noise scientific CMOS camera (~2–3 folds). This signal-to-noise ratio gain enables the detection of weak neuronal action potentials and subthreshold activities missed by the standard scientific CMOS cameras. Our camera with flexible pixel exposure configurations offers versatile sampling strategies to improve signal quality in various experimental conditions.

 

a Pixels within an ROI capture spatiotemporally correlated physiological activity, such as signals from somatic genetically encoded voltage indicators (GEVI). b Simulated CMOS pixel outputs with uniform exposure (TE) face a trade-off between SNR and temporal resolution. Short TE (1.25 ms) provides high temporal resolution but low SNR. Long TE (5 ms) enhances SNR but suffers from aliasing due to the low sample rate, causing spikes (10 ms interspike interval) to be indiscernible. Pixel outputs are normalized row-wise. Gray brackets: zoomed-in view of the pixel outputs. c Simulated pixel outputs of the PE-CMOS. Pixel-wise exposure allows pixels to sample at different speeds and phases. Two examples: in the staggered configuration, the pixels sample the spiking activity with prolonged TE (5 ms) at multiple phases with offsets of Δ = 0, 1.25, 2.5, and 3.75 ms. This configuration maintains SNR and prevents aliasing, as activity exceeding the temporal resolution of a single phase is captured by phase-shifted pixels. In the multiple-exposure configuration, the ROI is sampled with pixels at different speeds, resolving high-frequency spiking activity and slowly varying subthreshold potentials that are challenging to acquire simultaneously at a fixed sampling rate. d The PE-CMOS pixel schematic with 6 transistors (T1-T6), a photodiode (PD), and an output (OUT). RST, TX, and SEL are row control signals. EX is a column signal that controls pixel exposure. e The pixel layout. The design achieves programmable pixel-wise exposure while maximizing the PD fill factor for high optical sensitivity.
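The staggered configuration in panel (c) can be mimicked numerically: four pixel groups integrate for the same long exposure but at phase offsets, and interleaving their samples yields a higher effective sampling rate without shortening the exposure. The sketch below uses assumed numbers (5 ms exposure, 1.25 ms offsets, Gaussian readout noise) and is not the authors' reconstruction pipeline.

```python
import numpy as np

# Simplified model of staggered pixel-wise exposure (cf. panel c).
# Four pixel groups integrate the same signal for 5 ms each, with phase
# offsets of 0, 1.25, 2.5 and 3.75 ms; interleaving their samples gives an
# 800 Hz effective rate while each sample keeps the 5 ms-exposure SNR.
# Illustrative assumptions only, not the paper's reconstruction pipeline.
rng = np.random.default_rng(7)

DT = 0.25e-3                         # simulation step [s]
T_EXP = 5e-3                         # per-pixel exposure [s]
OFFSETS = [0.0, 1.25e-3, 2.5e-3, 3.75e-3]
READ_NOISE = 2.0                     # additive noise per readout [a.u.]

t = np.arange(0, 0.2, DT)
signal = 100.0 + 40.0 * (np.sin(2 * np.pi * 30 * t) > 0.95)   # sparse "spikes"

win = int(T_EXP / DT)
samples = []
for phase in OFFSETS:
    start = int(phase / DT)
    for k in range(start, len(t) - win, win):
        integrated = signal[k:k + win].mean()                  # 5 ms exposure
        samples.append((t[k] + T_EXP / 2, integrated + rng.normal(0, READ_NOISE)))

samples.sort()                       # interleave the four phases
times, values = np.array(samples).T
print(f"{len(times)} samples, effective period "
      f"{np.median(np.diff(times)) * 1e3:.2f} ms, peak {values.max():.1f}")
```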

 

a Maximum intensity projection of the sCMOS (Hamamatsu Orca Flash 4.0 v3) and the PE-CMOS videos of a cultured neuron expressing the ASAP3 GEVI protein. b ROI time series from the sCMOS sampled at 800 Hz with pixel exposure (TE) of 1.25 ms. Black trace: ROI time series. Gray traces: time series each computed from 1/4 of the ROI pixels. Plotted signals are inverted from raw samples for visualization. c Simultaneously imaged ROI time series of the PE-CMOS. Colored traces: the time series of phase-shifted pixels at offsets (Δ) of 0, 1.25, 2.5, and 3.75 ms, each containing 1/4 of the ROI pixels. All pixels are sampled at 200 Hz with TE = 5 ms. Black trace: the interpolated ROI time series with an 800 Hz equivalent sample rate. Black arrows: an example showing a spike exceeding the temporal resolution of a single phase being captured by phase-shifted pixels. Black circles: an example subthreshold event barely discernible in the sCMOS that is visible in the PE-CMOS output. d, e, f: same as panels (a, b, c), with an example showing a spike captured by the PE-CMOS but not resolvable in the sCMOS output due to low SNR (marked by the magenta arrow). g, h Comparison of signal quality from smaller ROIs covering parts of the cell membrane. Gray boxes: zoomed-in views of a few examples of putative spiking events. i SNR of putative spike events from ROIs in panel (g). A putative spiking event is recorded when the signal from either output exceeds SNR > 5. Data are presented as mean values +/- SD, two-sided Wilcoxon rank-sum test for equal medians, n = 93 events, p = 2.99 × 10⁻²⁴. The gain is calculated as the spike SNR in the PE-CMOS divided by the SNR in the sCMOS. The vertical scale of SNR is 5 in all subfigures.

a The intracellular potential of the cell and the ROI GEVI time series of the PE-CMOS and sCMOS. GEVI pulse amplitude is the change in the GEVI signal corresponding to each current injection pulse. It is measured as the difference between the average GEVI intensity during each current pulse and the average GEVI intensity in the 100 ms windows before and after the pulse. GEVI pulse amplitude is converted into SNR by dividing by the noise standard deviation. b Maximum intensity projection of the cell in the PE-CMOS and sCMOS. c Zoomed-in view of the intracellular voltage and GEVI pulses in (a). The red arrows indicate spike locations identified from the intracellular voltage. The black arrows indicate times at which the intracellular potential shows a flat response while the GEVI signals in both the PE-CMOS and sCMOS exhibit significant amplitude variations; these can be mistaken for spiking events. d Zoomed-in view of (c) showing that the PE-CMOS trace resolves two spikes with a small inter-spike interval, while the sCMOS at 800 Hz and 200 Hz both fail to do so. The blue arrows point to the first spike evoked by each current pulse. While their sharp rising edges make these spikes especially challenging for image sensors to sample, the PE-CMOS preserves their amplitudes better than the sCMOS.
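The pulse-amplitude SNR metric described in panel (a) reduces to a few lines of numpy. The sketch below is our own illustration of that definition (the function and variable names are ours, not from the paper):

```python
# Illustrative implementation of the GEVI pulse-amplitude SNR described above.
import numpy as np

def pulse_snr(gevi, fs, pulse_start, pulse_end, baseline_ms=100.0):
    """gevi: 1-D fluorescence time series; fs: sample rate in Hz;
    pulse_start, pulse_end: current-injection window in seconds.
    Amplitude = mean(GEVI during pulse) - mean(GEVI in the 100 ms windows
    before and after the pulse); SNR = amplitude / baseline noise std."""
    i0, i1 = int(pulse_start * fs), int(pulse_end * fs)
    nb = int(baseline_ms * 1e-3 * fs)
    baseline = np.concatenate([gevi[max(0, i0 - nb):i0], gevi[i1:i1 + nb]])
    amplitude = gevi[i0:i1].mean() - baseline.mean()
    return amplitude / baseline.std()
```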
 

a Maximum intensity projection of the PE-CMOS videos: raw and filtered (2 × 2 spatial box filter) output at full spatial resolution. Intensity is measured in digital bits (range: 0–1023). b Maximum intensity projection divided into four sub-frames according to pixel sampling speed, each with 1/4 spatial resolution. c ROI time series from pixels of different speeds (colored traces). Black trace: a 1040 Hz equivalent signal interpolated across all ROI pixels. d Fast-sampling pixels (520 Hz) resolve high-SNR spike bursts. e, f Pixels with more prolonged exposure (TE = 2.8–5.7 ms) improve SNR, enabling detection of (e) weak subthreshold activity (black arrow) and (f) a low-SNR spike. The vertical SNR scale is 10 unless otherwise noted.


Open access article link: https://www.nature.com/articles/s41467-024-48765-5

Go to the original article...

PhD thesis on SciDVS event camera

Image Sensors World        Go to the original article...

Link: https://www.research-collection.ethz.ch/handle/20.500.11850/683623

Thesis title: A Scientific Event Camera: Theory, Design, and Measurements
Author: Rui Garcia
Advisor: Tobi Delbrück

Go to the original article...

Yole analysis of onsemi acquisition of SWIR Vision Systems

Image Sensors World        Go to the original article...

Article by Axel Clouet, Ph.D. (Yole Group)

Link: https://www.yolegroup.com/strategy-insights/onsemi-enters-the-realm-of-ir-by-acquiring-swir-vision-systems/

Onsemi, a leading CMOS image sensor supplier, has acquired SWIR Vision Systems, a pioneer in quantum-dots-based short-wave infrared (SWIR) imaging technology. Yole Group tracks and reports on these technologies through reports like Status of the CMOS Image Sensor 2024 and SWIR Imaging 2023. Yole Group’s Imaging Team discusses how this acquisition mirrors current industry trends.

SWIR Vision Systems pioneered the quantum dots platform

The SWIR imaging modality has long been used in defense and industrial applications, generating $97 million in revenue for SWIR imager suppliers in 2022. However, its adoption has been limited by the high cost of InGaAs technology, historically the platform required to capture these wavelengths, compared to standard CMOS technology. In recent years, SWIR has attracted interest with the emergence of lower-cost technologies like quantum dots and germanium-on-silicon, both compatible with CMOS fabs and anticipated to serve the mass markets in the long term.

SWIR Vision Systems, a U.S.-based start-up, pioneered the quantum dots platform, introducing the first-ever commercial product in 2018. This company is fully vertically integrated, making its own image sensors for integration into its own cameras. 

An acquisition aligned with Onsemi’s positioning

The CMOS image sensor industry was worth $21.8 billion in 2023 and is expected to reach $28.6 billion by 2029. With a market share of 6%, onsemi is the fourth largest CMOS image sensor supplier globally. The company is the leader in the fast-growing $2.3 billion automotive segment and has a significant presence in the industrial, defense and aerospace, and medical segments.

In the short term, SWIR products will help onsemi catch up with Sony’s InGaAs products in the industrial segment by leveraging the cost advantage of quantum dots. Onsemi’s existing sales channels will facilitate the adoption of quantum dot technology by camera manufacturers.

Additionally, onsemi is set to establish long-term relationships with defense customers, a segment poised for growth due to global geopolitical instability. By acquiring SWIR Vision Systems, on top of the East Fishkill CMOS fab acquisition completed in 2022, onsemi has secured its supply chain, owns the strategic SWIR technology, and operates a large-volume U.S.-based factory. It is therefore aligned with the dual-use approach promoted by the U.S. government for its domestic industry.

This acquisition will contribute to faster development and adoption of the quantum dots platform without disrupting the SWIR landscape. For onsemi, it is an attractive asset for quickly winning new customers in the industrial and defense sectors and a differentiating technology for the automotive segment in the long term.

 



Go to the original article...

Nuvoton introduces new 3D time-of-flight sensor

Image Sensors World        Go to the original article...

Link: https://www.prnewswire.com/news-releases/tof-sensor-for-enhanced-object-detection-and-safety-application-nuvoton-launches-new-3d-tof-sensor-with-integrated-distance-calculation-circuit-302197512.html

TOF Sensor for Enhanced Object Detection and Safety Application: Nuvoton Launches New 3D TOF Sensor with Integrated Distance Calculation Circuit

KYOTO, Japan, July 16, 2024 /PRNewswire/ -- Nuvoton Technology Corporation Japan is set to begin mass production of a 1/4-inch VGA (640x480 pixel) resolution 3D Time-of-Flight (TOF) sensor in July 2024. This sensor is poised to revolutionize the recognition of people and objects in various indoor and outdoor environments. This capability has been achieved through Nuvoton's unique pixel design technology and distance calculation/Image Signal Processor (ISP) technology.
 

1. High Accuracy in Bright Ambient Lighting Conditions
Leveraging Nuvoton's proprietary CMOS image sensor pixel technology and CCD memory technology, the new TOF sensor has four memories in its 5.6-square-micrometer pixel, compared with the three memories of its conventional TOF sensor. It achieves accurate distance sensing by controlling the pulsed light source and acquiring background light signals simultaneously, providing precise recognition of the position and shape of people and objects under various ambient lighting conditions.


 

2. Accurate Distance Measurement for Moving Objects
With four embedded memories within each pixel, Nuvoton's new TOF sensor outputs distance images in a single frame. This innovative design significantly reduces motion blur and measurement errors in moving objects by capturing and calculating distance from four types of imaging signals within one frame. This feature is particularly suited for applications requiring dynamic object detection and recognition, such as obstacle detection for autonomous mobile robots (AMRs) and airbag intensity control in vehicles.
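For context, the sketch below shows a textbook-style pulsed indirect ToF distance calculation using gated signal samples plus a background sample; the tap arrangement and names are assumptions made purely for illustration and do not represent Nuvoton's disclosed algorithm:

```python
# Generic pulsed indirect ToF distance estimate with background subtraction.
# This is a textbook-style illustration, NOT Nuvoton's actual algorithm.
C = 299_792_458.0  # speed of light, m/s

def itof_distance(q1, q2, bg, pulse_width_s):
    """q1: charge in the gate aligned with the emitted pulse; q2: charge in the
    immediately following gate; bg: background-only charge per gate.
    Valid for echoes arriving within one pulse width."""
    s1, s2 = q1 - bg, q2 - bg          # remove the ambient-light contribution
    total = s1 + s2
    if total <= 0:
        return float("nan")            # no usable return signal
    return 0.5 * C * pulse_width_s * (s2 / total)

# Example: 10 ns pulse, echo split ~70/30 between the two gates -> ~0.42 m.
print(itof_distance(q1=700, q2=300, bg=50, pulse_width_s=10e-9))
```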

 

3. Integrated Distance Calculation Circuit for Fast and Accurate Sensing
Nuvoton's new TOF sensor is equipped with an integrated distance calculation circuit and a signal correction ISP, enabling it to output high-speed, high-precision distance (3D) images at up to 120 fps (QVGA) without delay. This eliminates the need for distance calculation by the system processor, reducing the processing overhead and enabling faster sensing systems. Additionally, the sensor can simultaneously output distance (3D) and IR (2D) images, useful for applications requiring both high precision and recognition/authentication functions.
 


For more information, please visit: https://www.nuvoton.com/products/image-sensors/3d-tof-sensors/kw330-series/
 

About Nuvoton Technology Corporation Japan: https://www.nuvoton.co.jp/en/

Go to the original article...

Jobs in Thermal Sensor Design

Image Sensors World        Go to the original article...

 Owl Autonomous Imaging

Website Careers Page Link

Digital Design Engineer - Link to job description

Senior Analog Design Engineer - Link to job description

Go to the original article...

International Image Sensor Society Calls for Award Nominations

Image Sensors World        Go to the original article...

The International Image Sensor Society (IISS) calls for nominations for IISS Exceptional Lifetime Achievement Award, IISS Pioneering Achievement Award, and IISS Exceptional Service Award. The Awards are to be presented at the 2025 International Image Sensor Workshop (IISW) (to be held in Japan).
 
Description of Awards:

  • IISS Exceptional Lifetime Achievement Award. This Award is made to a member of the image sensor community who has made substantial sustained and exceptional contributions to the field of solid-state image sensors over the course of their career. (Established 2013)
  • IISS Pioneering Achievement Award. This award recognizes a person who made a pioneering achievement in image sensor technology, judged with at least 10 years of hindsight to be a foundational contribution. (Established 2015)
  • IISS Exceptional Service Award. This award is presented for exceptional service to the image sensor specialist community. It recognizes activities in editorial roles, conference leadership roles, and so on, outside of the nominee's service to the IISS. (Established 2011)

 
Submission deadline: all nominations must be received by October 1st, 2024 using the specified entry format.

Email for submissions: 2025nominations@imagesensors.org

Note: Self-nomination is discouraged.

Go to the original article...

SAE article on L3 autonomy

Image Sensors World        Go to the original article...

Link: https://www.sae.org/news/2024/07/adas-sensor-update

Are today’s sensors ready for next-level automated driving?

SAE Level 3 automated driving marks a clear break from the lower levels of driving assistance, since it is the dividing line beyond which the driver can be freed to focus on things other than driving. While the driver may sometimes be required to take control again, responsibility in an accident can shift from the driver to the automaker and suppliers. Only a few cars have received regulatory approval for Level 3 operation. Thus far, only Honda (in Japan), the Mercedes-Benz S-Class and EQS sedans with Drive Pilot, and BMW's recently introduced 7 Series offer Level 3 autonomy.

With more vehicles getting L3 technology and further automated driving capabilities in development, we wanted to check in with some of the key players in this tech space and hear the latest industry thinking about best practices for ADAS and AV sensors.

Towards More Accurate 3D Object Detection

Researchers from Japan's Ritsumeikan University have developed DPPFA-Net, an innovative network that combines 3D LiDAR and 2D image data to improve 3D object detection for robots and self-driving cars. Led by Professor Hiroyuki Tomiyama, the team addressed challenges in accurately detecting small objects and aligning 2D and 3D data, especially in adverse weather conditions.

DPPFA-Net incorporates three key modules:

  •  Memory-based Point-Pixel Fusion (MPPF): Enhances robustness against 3D point cloud noise by using 2D images as a memory bank.
  •  Deformable Point-Pixel Fusion (DPPF): Focuses on key pixel positions for efficient high-resolution feature fusion.
  •  Semantic Alignment Evaluator (SAE): Ensures semantic alignment between data representations during fusion.

The network outperformed existing models on the KITTI Vision Benchmark, achieving up to a 7.18% improvement in average precision under various noise conditions. It also performed well on a new dataset with simulated rainfall.

Ritsumeikan University researchers said this advancement has significant implications for self-driving cars and robotics. The improvements in 3D object detection are expected to contribute to fewer accidents, improved traffic flow and safety, enhanced robot capabilities across applications, and accelerated development of autonomous systems.

AEVA

Aeva has introduced Atlas, the first 4D lidar sensor designed for mass-production automotive applications. Atlas aims to enhance advanced driver assistance systems (ADAS) and autonomous driving, meeting automotive-grade requirements.

The company’s sensor is powered by two key innovations:

  •  Aeva CoreVision, the fourth-generation lidar-on-chip module, which incorporates all key lidar elements in a smaller package using silicon photonics technology.
  •  Aeva X1, a new system-on-chip (SoC) lidar processor that integrates data acquisition, point cloud processing, the scanning system, and application software.

These innovations make Atlas 70% smaller and four times more power-efficient than Aeva's previous generation, enabling various integration options without active cooling. Atlas uses Frequency Modulated Continuous Wave (FMCW) 4D lidar technology, which offers improved object detection range and immunity to interference. It also provides a 25% greater detection range for low-reflectivity targets and a maximum range of 500 meters.
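The instant-velocity advantage of FMCW comes from the fact that the beat frequencies of an up-chirp and a down-chirp jointly encode range and radial velocity. The sketch below shows the standard textbook relation, not Aeva's implementation; the sign convention is an assumption that varies with chirp geometry:

```python
# Standard FMCW range/velocity relation (generic textbook math, not Aeva-specific).
C = 299_792_458.0  # speed of light, m/s

def fmcw_range_velocity(f_up, f_dn, bandwidth, chirp_time, wavelength):
    """f_up, f_dn: beat frequencies (Hz) measured during the up- and down-chirp;
    bandwidth: chirp bandwidth (Hz); chirp_time: chirp duration (s);
    wavelength: optical wavelength (m).
    Assumes f_up = 2*B*R/(c*T) - 2*v/lambda and f_dn = 2*B*R/(c*T) + 2*v/lambda,
    with v > 0 for an approaching target (sign conventions vary)."""
    f_range = 0.5 * (f_up + f_dn)      # range-only component of the beat
    f_doppler = 0.5 * (f_dn - f_up)    # Doppler component
    range_m = C * chirp_time * f_range / (2.0 * bandwidth)
    velocity_m_s = 0.5 * wavelength * f_doppler
    return range_m, velocity_m_s
```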

Atlas is accompanied by Aeva’s perception software, which harnesses advanced machine learning-based classification, detection and tracking algorithms. Incorporating the additional dimension of velocity data, Aeva’s perception software provides unique advantages over conventional time-of-flight 3D lidar sensors.

Atlas is expected to be available for production vehicles starting in 2025, with earlier sample availability for select customers. Aeva's co-founder and CTO Mina Rezk said that Atlas will enable OEMs to equip vehicles with advanced safety and automated driving features at highway speeds, addressing previously unsolvable challenges. Rezk believes Atlas will accelerate the industry's transition to Frequency-Modulated Continuous-Wave 4D lidar technology, which is increasingly considered the end state for lidar due to its enhanced perception capabilities and unique instant velocity data.

Luminar

Following several rocky financial months and five years of development, global automotive technology company Luminar is launching Sentinel, its full-stack software suite. Sentinel enables automakers to accelerate advanced safety and autonomous functionality, including 3D mapping, simulation, and dynamic lidar features. A study by the Swiss Re Institute showed that cars equipped with Luminar lidar and Sentinel software demonstrated up to a 40% reduction in accident severity.

Developed primarily in-house with support from partners, including Scale AI, Applied Intuition, and Civil Maps (which Luminar acquired in 2022), Sentinel leverages Luminar's lidar hardware and AI-based software technologies.

CEO and founder Austin Russell said Luminar has been building next-generation AI-based safety and autonomy software since 2017. “The majority of major automakers don't currently have a software solution for next-generation assisted and autonomous driving systems,” he said. “Our launch couldn't be more timely with the new NHTSA mandate for next-generation safety in all U.S.-production vehicles by 2029, and as of today, we're the only solution we know of that meets all of these requirements.”

Mobileye

Mobileye has secured design wins with a major Western automaker for 17 vehicle models launching in 2026 and beyond. The deal covers Mobileye's SuperVision, Chauffeur, and Drive platforms, offering varying levels of autonomous capabilities from hands-off, eyes-on driving to fully autonomous robotaxis.

All systems will use Mobileye's EyeQ 6H chip, integrating sensing, mapping, and driving policy. The agreement includes customizable software to maintain brand-specific experiences.
CEO Amnon Shashua called this an "historic milestone" in automated driving, emphasizing the scalability of Mobileye's technology. He highlighted SuperVision's role as a bridge to eyes-off systems for both consumer vehicles and mobility services.

Initial driverless deployments are targeted for 2026.

BMW 

BMW's new 7 Series received the world's first approval for combining Level 2 and Level 3 driving assistance systems in the same vehicle. This milestone offers drivers unique benefits from both systems.

The Level 2 BMW Highway Assistant enhances comfort on long journeys, operating at speeds up to 81 mph (130 km/h) on motorways with separated carriageways. It allows drivers to take their hands off the steering wheel for extended periods while remaining attentive. The system can also perform lane changes autonomously or with the driver's confirmation.

The Level 3 BMW Personal Pilot L3 enables highly automated driving at speeds up to 37 mph (60 km/h) in specific conditions, such as motorway traffic jams. Drivers can temporarily divert their attention from the road, but they have to retake control when prompted.

This combination of systems offers a comprehensive set of functionalities for a more comfortable and relaxing driving experience on both long and short journeys. The BMW Personal Pilot L3, which includes both systems, is available exclusively in Germany for €6,000 (around $6,500). Current BMW owners can add the L2 Highway Assistant to their vehicle, if applicable, free of charge starting August 24.

Mercedes-Benz 

Mercedes-Benz’s groundbreaking Drive Pilot Level 3 autonomous driving system is available for the S-Class and EQS Sedan. It allows drivers to disengage from driving in specific conditions, such as heavy traffic under 40 mph (64 km/h) on approved freeways. The system uses advanced sensors – including radar, lidar, ultrasound, and cameras – to navigate and make decisions.

While active, Drive Pilot enables drivers to use in-car entertainment features on the central display. However, drivers must remain alert and take control when requested. Drive Pilot functions under the following conditions:

  •  Clear lane markings on approved freeways
  •  Moderate to heavy traffic with speeds under 40 mph
  •  Daytime lighting and clear weather
  •  Driver visible by camera located above driver's display
  •  Not in a construction zone

Drive Pilot relies on a high-definition 3D map of the road and surroundings. It's currently certified for use on major freeways in California and parts of Nevada.

NPS

At CES 2024, Neural Propulsion Systems (NPS) demonstrated its ultra-resolution imaging radar software for automotive vision sensing. The technology significantly improves radar precision without expensive lidar sensors or weather-related limitations.

NPS CEO Behrooz Rezvani likens the improvement to enhancing automotive imaging from 20/20 to better than 20/10 vision. The software enables existing sensors to resolve to one-third of the radar beam-width, creating a 10 times denser point cloud and reducing false positives by over ten times, the company said.

The demonstration compared performance using Texas Instruments 77 GHz chipsets with and without NPS technology. Former GM R&D vice president and Waymo advisor Lawrence Burns noted that automakers can use NPS to enhance safety, performance, and cost-effectiveness of driver-assistance features using existing hardware.

NPS' algorithms are based on the Atomic Norm framework, rooted in magnetic resonance imaging technology. The software can be deployed on various sensing platforms and implemented on processors with neural network capability. Advanced applications of NPS software with wide aperture multi-band radar enable seeing through physical barriers like shrubs, trees, and buildings — and even around corners. The technology is poised to help automakers meet NHTSA's proposed stricter standards for automatic emergency braking, aiming to reduce pedestrian and bicycle fatalities on U.S. roads.

Go to the original article...

Perovskite sensor with 3x more light throughput

Image Sensors World        Go to the original article...

Link: https://www.admin.ch/gov/en/start/documentation/media-releases.msg-id-101189.html


Dübendorf, St. Gallen and Thun, 28.05.2024 - Capturing three times more light: Empa and ETH researchers are developing an image sensor made of perovskite that could deliver true-color photos even in poor lighting conditions. Unlike conventional image sensors, where the pixels for red, green and blue lie next to each other in a grid, perovskite pixels can be stacked, greatly increasing the amount of light each individual pixel can capture.

Family, friends, vacations, pets: Today, we take photos of everything that comes in front of our lens. Digital photography, whether with a cell phone or camera, is simple and hence widespread. Every year, the latest devices promise an even better image sensor with even more megapixels. The most common type of sensor is based on silicon, which is divided into individual pixels for red, green and blue (RGB) light using special filters. However, this is not the only way to make a digital image sensor – and possibly not even the best.

A consortium comprising Maksym Kovalenko from Empa's Thin Films and Photovoltaics laboratory, Ivan Shorubalko from Empa's Transport at Nanoscale Interfaces laboratory, as well as ETH Zurich researchers Taekwang Jang and Sergii Yakunin, is working on an image sensor made of perovskite capable of capturing considerably more light than its silicon counterpart. In a silicon image sensor, the RGB pixels are arranged next to each other in a grid. Each pixel only captures around one-third of the light that reaches it. The remaining two-thirds are blocked by the color filter.

Pixels made of lead halide perovskites do not need an additional filter: it is already "built into" the material, so to speak. Empa and ETH researchers have succeeded in producing lead halide perovskites in such a way that they only absorb the light of a certain wavelength – and therefore color – but are transparent to the other wavelengths. This means that the pixels for red, green and blue can be stacked on top of each other instead of being arranged next to each other. The resulting pixel can absorb the entire wavelength spectrum of visible light. "A perovskite sensor could therefore capture three times as much light per area as a conventional silicon sensor," explains Empa researcher Shorubalko. Moreover, perovskite converts a larger proportion of the absorbed light into an electrical signal, which makes the image sensor even more efficient.
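As a back-of-the-envelope illustration of the throughput argument (toy photon counts chosen by us, not measurements from the project):

```python
# Toy comparison of light throughput per pixel site: Bayer-filtered silicon vs.
# stacked perovskite. Photon numbers are illustrative, not measured values.
photons = {"red": 1000, "green": 1000, "blue": 1000}  # photons reaching one pixel site

bayer_pixel = photons["green"]         # a color filter passes only one band (~1/3)
stacked_pixel = sum(photons.values())  # stacked layers absorb all three bands

print("Bayer (green) pixel:", bayer_pixel, "photons")
print("Stacked pixel      :", stacked_pixel, "photons")
print("Throughput gain    : ~%.1fx" % (stacked_pixel / bayer_pixel))
```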

Kovalenko's team was first able to fabricate individual functioning stacked perovskite pixels in 2017. To take the next step towards real image sensors, the ETH-Empa consortium led by Kovalenko has partnered with the electronics industry. "The challenges to address include finding new materials fabrication and patterning processes, as well as design and implementation of the perovskite-compatible read-out electronic architectures," emphasizes Kovalenko. The researchers are now working on miniaturizing the pixels, which were originally up to five millimeters in size, and assembling them into a functioning image sensor. "In the laboratory, we don't produce the large sensors with several megapixels that are used in cameras," explains Shorubalko, "but with a sensor size of around 100,000 pixels, we can already show that the technology works."

Good performance with less energy
Another advantage of perovskite-based image sensors is their manufacture. Unlike other semiconductors, perovskites are less sensitive to material defects and can therefore be fabricated relatively easily, for example by depositing them from a solution onto the carrier material. Conventional image sensors, on the other hand, require high-purity monocrystalline silicon, which is produced in a slow process at almost 1500 degrees Celsius.

The advantages of perovskite-based image sensors are apparent. It is therefore not surprising that the research project also includes a partnership with industry. The challenge lies in the stability of perovskite, which is more sensitive to environmental influences than silicon. "Standard processes would destroy the material," says Shorubalko. "So we are developing new processes in which the perovskite remains stable. And our partner groups at ETH Zurich are working on ensuring the stability of the image sensor during operation."

If the project, which will run until the end of 2025, is successful, the technology will be ready for transfer to industry. Shorubalko is confident that the promise of a better image sensor will attract cell phone manufacturers. "Many people today choose their smartphone based on the camera quality because they no longer have a stand-alone camera," says the researcher. A sensor delivering excellent images in much poorer lighting conditions could be a major advantage.

Go to the original article...

Sony Imaging Business Strategy Meeting

Image Sensors World        Go to the original article...

Sony held a strategy meeting recently. Slides from the Imaging and Sensing business are available here: https://www.sony.com/en/SonyInfo/IR/library/presen/business_segment_meeting/pdf/2024/ISS_E.pdf

Go to the original article...

Job Postings – Week of 21 July 2024

Image Sensors World        Go to the original article...

Anduril Industries

Digital IC Designer

Santa Barbara, California, USA

Link

CERN

Postdoctoral research position on detector R&D for experimental particle physics (LHCb)

Lucerne, Switzerland

Link

Ametek – Forza Silicon

Mixed Signal Design Engineer

Pasadena, California, USA

Link

Tsung-Dao Lee Institute

Postdoctoral Positions in Muon Imaging

Shanghai

Link

NASA

Far-Infrared Detectors for Space-Based Low-Background Astronomy

Greenbelt, Maryland, USA

Link

ESRF

Detector Engineer

Grenoble, France

Link

Tokyo Electron

Heterogeneous Integration Process Engineer

Albany, New York, USA

Link

University of Oxford

Postdoctoral Research Assistant in Dark Matter Searches

Oxford, England, UK

Link

Lockheed Martin

Electro-Optical Senior Engineer

Denver, Colorado, USA

Link

Go to the original article...
