Archives for December 2023
Fujifilm X100V long term review
Videos du jour: TinyML, Hamamatsu, ADI
Image Sensors World Go to the original article...
tinyML Asia 2022
In-memory computing and Dynamic Vision Sensors: Recipes for tinyML in Internet of Video Things
Arindam Basu, Professor, Department of Electrical Engineering, City University of Hong Kong
Vision sensors are unique in IoT: they provide rich information but also demand excessive bandwidth and energy, which limits the scalability of this architecture. In this talk, we describe our recent work using event-driven dynamic vision sensors for IoVT applications such as unattended ground sensors and intelligent transportation systems. To further reduce the energy of the sensor node, we employ in-memory computing (IMC): the SRAM used to store the video frames is also used to perform basic image-processing operations and to trigger the downstream deep neural networks. Lastly, we introduce a new concept of hybrid IMC combining multiple types of memory.
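The SRAM-based triggering scheme can be illustrated with a minimal software sketch. The actual work performs this computation inside the SRAM arrays themselves; the threshold values below are illustrative assumptions, not figures from the talk:

```python
import numpy as np

CHANGE_THRESH = 25      # per-pixel intensity change threshold (assumed value)
ACTIVITY_FRAC = 0.02    # fraction of changed pixels that wakes the DNN (assumed)

def should_wake_dnn(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
    """Cheap pre-screening: only trigger the expensive DNN when enough pixels
    changed. The talk describes doing this inside the frame-buffer SRAM
    (in-memory computing); here it is modeled in plain software."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    changed = diff > CHANGE_THRESH
    return bool(changed.mean() > ACTIVITY_FRAC)

# usage: a 40x40 object entering a 320x240 scene covers ~2.08% of pixels
prev = np.zeros((240, 320), dtype=np.uint8)
curr = prev.copy()
curr[:40, :40] = 200
print(should_wake_dnn(prev, curr))  # True
print(should_wake_dnn(prev, prev))  # False
```

The point of the pre-screen is that the deep network only runs on the small fraction of frames containing activity, which is where the energy saving comes from.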
With our new photon number resolving mode, the ORCA-Quest enables photon-counting resolution across a full 9.4-megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.
Accurate, Mobile Object Dimensioning using Time of Flight Technology
ADI's High Resolution 3D Depth Sensing Technology, coupled with advanced image-stitching algorithms, enables the dimensioning of non-conveyable large objects for logistics applications. Rather than moving the object to a fixed dimensioning gantry, ADI's 3D technology enables operators to take the camera to the object to perform the dimensioning function. Because the mobile system matches the accuracy of fixed dimensioners, it reduces the time and cost of measurement while enhancing energy efficiency.
Nikon D80 RETRO review
CEA-Leti IEDM 2023 papers on emerging devices
From Semiconductor Digest: https://www.semiconductor-digest.com/cea-leti-will-present-gains-in-ultimate-3d-rf-power-and-quantum-neuromorphic-computing-with-emerging-devices/
CEA-Leti Will Present Gains in Ultimate 3D, RF & Power, and Quantum & Neuromorphic Computing with Emerging Devices
CEA-Leti papers at IEDM 2023, Dec. 9-13, in San Francisco, will present results in multiple fields, including ultimate 3D and advances in radio frequency, such as performance improvement at cryogenic temperatures.
The institute will present nine papers during the conference this year. Two presentations will highlight a breakthrough in 3D sequential integration and results pushing GaN/Si HEMT closer to GaN/SiC performance at 28 GHz:
“3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel” reports the world’s first 3D sequential integration of CMOS over CMOS with advanced metal line levels, which brings 3DSI with intermediate BEOL closer to commercialization.
“6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts” reports the development of a CMOS-compatible 200mm SiN/AlN/GaN MIS-HEMT on a silicon substrate that brings GaN/Si high electron mobility transistors (HEMTs) closer to GaN/SiC performance in power density at 28 GHz. It also highlights that the SiN/AlN/GaN-on-silicon metal-insulator-semiconductor HEMT (MIS-HEMT) is a potential candidate for high-power Ka-band power amplifiers.
Leti Devices Workshop
“Semiconductor Devices: Moving Towards Efficiency & Sustainability”
Dec. 10 @ 5:30 pm, Nikko Hotel, 222 Mason Street, Third Floor
The workshop will present CEA-Leti experts’ visions for and key results in efficient computing and radiofrequency devices for More than Moore applications.
CEA-Leti Presentations
Radio Frequency
RF: “A Cost Effective RF-SOI Drain Extended MOS Transistor Featuring PSAT=19dBm @28GHz & VDD=3V for 5G Power Amplifier Application”, by Xavier Garros
Session 34.2: Wednesday, Dec. 13 @ 9:30 am (Continental 7-9)
RF cryo: “RF Performance Enhancement of 28nm FD-SOI Transistors Down to Cryogenic Temperature Using Back Biasing”, by Quentin Berlingard
Session 34.3: Wednesday, Dec. 13 @ 9:55 am (Continental 7-9)
GaN RF: “6.6W/mm 200mm CMOS Compatible AlN/GaN/Si MIS-HEMT with In-Situ SiN Gate Dielectric and Low Temperature Ohmic Contacts”, by Erwan Morvan
Session 38.3: Wednesday, Dec. 13 @ 2:25 pm (Continental 4)
3D Sequential Stacking
“Ultimate Layer Stacking Technology for High Density Sequential 3D Integration”, a collaborative paper with Ionut Radu of Soitec
Session 19.5: Tuesday, Dec. 12 @ 4:00 pm (Grand Ballroom A)
“3D Sequential Integration with Si CMOS Stacked on 28nm Industrial FDSOI with Cu-ULK iBEOL Featuring RO and HDR Pixel”, by Perrine Batude
Session 29.3: Wednesday, Dec. 13 @ 9:55 am (Grand Ballroom B)
Emerging Device and Compute Technology (EDT)
“Designing Networks of Resistively-Coupled Stochastic Magnetic Tunnel Junctions for Energy-Based Optimum Search”, by Kamal Danouchi
Session 22.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 5)
Neuromorphic Computing
“Hybrid FeRAM/RRAM Synaptic Circuit Enabling On-Chip Inference and Learning at the Edge”, by Michele Martemucci (LIST)
Session 23.3: Tuesday, Dec. 12 @ 3:10 pm (Continental 6)
“Bayesian In-Memory Computing with Resistive Memories”, a collaborative paper with Damien Querlioz of CNRS-C2N
Session 12.3: Tuesday, Dec. 12 @ 9:55 am (Continental 1-3)
Quantum Technology
“Tunnel and Capacitive Coupling Optimization in FDSOI Spin-Qubit Devices”, by H. Niebojewski and B. Bertrand
Session 22.6: Tuesday, Dec. 12 @ 4:25 pm (Continental 5)
STMicroelectronics releases new multizone time-of-flight sensor
Original article: https://www.eejournal.com/industry_news/next-generation-multizone-time-of-flight-sensor-from-stmicroelectronics-boosts-ranging-performance-and-power-saving/
Next-generation multizone time-of-flight sensor from STMicroelectronics boosts ranging performance and power saving
Target applications include human-presence sensing, gesture recognition, robotics, and other industrial uses
Geneva, Switzerland, December 14, 2023 – STMicroelectronics’ VL53L8CX, the latest-generation 8×8 multizone time-of-flight (ToF) ranging sensor, delivers a range of improvements including greater ambient-light immunity, lower power consumption, and enhanced optics.
ST’s direct-ToF sensors combine a 940nm vertical cavity surface emitting laser (VCSEL), a multizone SPAD (single-photon avalanche diode) detector array, and an optical system comprising filters and diffractive optical elements (DOE) in an all-in-one module that outperforms conventional micro lenses typically used with similar alternative sensors. The sensor projects a wide square field of view of 45° x 45° (65° diagonal) and receives reflected light to calculate the distance of objects up to 400cm away, across 64 independent zones, and up to 30 captures per second.
The new VL53L8CX boosts ranging performance with a new-generation VCSEL and advanced silicon-based meta-optics. Compared with the current VL53L5CX, the enhancements increase immunity to interference from ambient light, extending the sensor’s maximum range in daylight from 170cm to 285cm and reducing power consumption from 4.5mW to 1.6mW in low-power mode.
ST released the first multizone time-of-flight sensor with the VL53L5CX in 2021. By increasing performance, the new VL53L8CX now further extends the advantages of these sensors over alternatives with conventional optics, which have fewer native zones and lose sensitivity in the outer areas. Thanks to its true 8×8 multizone sensing, the VL53L8CX ensures uniform sensitivity and accurate ranging throughout the field of view, with superior range in ambient light.
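The ranging math behind a direct-ToF sensor like this is straightforward: the SPAD array timestamps the photon's round trip to the target and back, and distance is half that path. A quick sketch of the generic physics (not ST's firmware or API):

```python
C_MM_PER_NS = 299.792458  # speed of light in mm/ns

def tof_to_distance_mm(round_trip_ns: float) -> float:
    """Direct time-of-flight: the laser pulse travels out to the target and
    back to the SPAD detector, so the one-way distance is half the
    round-trip path length."""
    return C_MM_PER_NS * round_trip_ns / 2.0

# A target at the VL53L8CX's 400 cm maximum range implies a round trip
# of roughly 26.7 ns:
print(round(tof_to_distance_mm(26.685), 1))  # 4000.0 (mm)
```

In a multizone sensor this conversion runs independently for each of the 64 zones, each zone's SPADs accumulating a timing histogram whose peak gives the round-trip time for that part of the field of view.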
When used for system activation and human presence detection, the VL53L8CX’s greater ambient-light immunity enables equipment to respond more consistently and quickly. As part of ST’s STGesture™ platform that also includes the STSW-IMG035 turnkey gesture-recognition software and Gesture EVK development tool, the new sensor delivers the precision needed for repeatable gesture-based interaction. In addition to motion gesture recognition, hand posture recognition is also possible leveraging the latest AI models available in the STM32ai-modelzoo on GitHub.
Moreover, the VL53L8CX provides increased accuracy for monitoring the contents of bins, containers, silos, and tanks, including liquid-level monitoring, in industrial bulk storage and warehousing. The superior accuracy can also enhance the performance of drinks machines such as coffee makers and beverage dispensers.
Mobile robots including autonomous vacuum cleaners can leverage the VL53L8CX to improve guidance capabilities like floor sensing, small-object detection, collision avoidance, and cliff detection. The synchronization pin enables projectors and cameras to benefit from coordinated autofocus. The sensor also provides a motion indicator and an auto-stop feature for real-time actions, and is immune to cover-glass crosstalk beyond 60cm. Now supporting SPI connectivity in addition to the 1MHz I2C interface, the new sensor handles host data transfers at up to 3MHz.
Designers can quickly evaluate the VL53L8CX and jump-start their projects by taking advantage of the supporting ecosystem, which includes the X-NUCLEO-53L8A1 expansion board and SATEL-VL53L8 breakout boards. The P-NUCLEO-53L8A1 pack is also available; it contains an STM32F401 Nucleo microcontroller board and the X-NUCLEO-53L8A1 expansion board, ready to power up and start exploring.
The VL53L8CX is available now, housed in a 6.4mm x 3.0mm x 1.75mm leadless package, from $3.60 for orders of 1000 pieces.
Please visit www.st.com/VL53L8CX for more information.
3D cameras at CES 2024: Orbbec and MagikEye
Announcements below from (1) Orbbec and (2) MagikEye about their upcoming CES demos.
Orbbec releases Persee N1 camera-computer kit for 3D vision enthusiasts, powered by the NVIDIA Jetson platform
Orbbec’s feature-rich RGB-D camera-computer is a ready-to-use out-of-the box solution for 3D vision application developers and experimenters
Troy, Mich., 13 December 2023 — Orbbec, an industry leader in 3D vision systems, has developed the Persee N1, an all-in-one combination of a popular stereo-vision 3D camera and a purpose-built computer based on the NVIDIA Jetson platform, equipped with industry-standard interfaces for the most useful accessories and data connections. Developers using the newly launched camera-computer will also enjoy the benefits of the Ubuntu OS and OpenCV libraries. Orbbec recently became an NVIDIA Partner Network (NPN) Preferred Partner.
Persee N1 delivers highly accurate and reliable data for indoor/semi-outdoor operation, ideally suited for healthtech, dimensioning, interactive gaming, retail, and robotics applications, and features:
- An easy setup process using the Orbbec SDK and Ubuntu-based software environment.
- Industry-proven Gemini 2 camera, based on active stereo IR technology, which includes Orbbec’s custom ASIC for high-quality, in-camera depth processing.
- The powerful NVIDIA Jetson platform for edge AI and robotics.
- HDMI and USB ports for easy connections to a monitor and keyboard.
- Multiple USB ports for data and a POE (Power over Ethernet) port for combined data and power connections.
- Expandable storage with MicroSD and M.2 slots.
“The self-contained Persee N1 camera-computer makes it easy for computer vision developers to experiment with 3D vision,” said Amit Banerjee, Head of Platform and Partnerships at Orbbec. “This combination of our Gemini 2 RGB-D camera and the NVIDIA Jetson platform for edge AI and robotics allows AI development while at the same time enabling large-scale cloud-based commercial deployments.”
The new camera module also features official support for the widely used Open Computer Vision (OpenCV) library. OpenCV is used in an estimated 89% of all embedded vision projects according to industry reports. This integration marks the beginning of a deeper collaboration between Orbbec and OpenCV, which is operated by the non-profit Open Source Vision Foundation.
“The Persee N1 features robust support for the industry-standard computer vision and AI toolset from OpenCV,” said Dr. Satya Mallick, CEO of OpenCV. “OpenCV and Orbbec have entered a partnership to ensure OpenCV compatibility with Orbbec’s powerful new devices and are jointly developing new capabilities for the 3D vision community.”
MagikEye's Pico Image Sensor: Pioneering the Eyes of AI for the Robotics Age at CES
From Businesswire.
December 20, 2023 09:00 AM Eastern Standard Time
STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc. (www.magik-eye.com), a trailblazer in 3D sensing technology, is set to showcase its groundbreaking Pico Depth Sensor at the 2024 Consumer Electronics Show (CES) in Las Vegas, Nevada. Embarking on a mission to "Provide the Eyes of AI for the Robotics Age," the Pico Depth Sensor represents a key milestone in MagikEye’s journey towards AI and robotics excellence.
The heart of the Pico Depth Sensor’s innovation lies in its use of MagikEye’s proprietary Invertible Light™ Technology (ILT), which operates efficiently on a “bare-metal” Arm Cortex-M0+ processor within the Raspberry Pi RP2040. This noteworthy feature underscores the sensor's ability to deliver high-quality 3D sensing without the need for specialized silicon. Moreover, while the Pico Sensor showcases its capabilities using the RP2040, the underlying technology is designed with adaptability in mind, allowing seamless operation on a variety of microcontroller cores, including those based on the popular RISC-V architecture. This flexibility signifies a major leap forward in making advanced 3D sensing accessible and adaptable across different platforms.
Takeo Miyazawa, Founder & CEO of MagikEye, emphasizes the sensor's transformative potential: “Just as personal computers democratized access to technology and spurred a revolution in productivity, the Pico Depth Sensor is set to ignite a similar transformation in the realms of AI and robotics. It is not just an innovative product; it’s a gateway to new possibilities in fields like autonomous vehicles, smart home systems, and beyond, where AI and depth sensing converge to create smarter, more intuitive solutions.”
Attendees at CES 2024 are cordially invited to visit MagikEye's booth for an exclusive first-hand experience of the Pico Sensor. Live demonstrations of MagikEye’s latest ILT solutions for next-gen 3D sensing solutions will be held from January 9-11 at the Embassy Suites by Hilton Convention Center Las Vegas. Demonstration times are limited and private reservations will be accommodated by contacting ces2024@magik-eye.com.
imec paper at IEDM 2023 on a waveguide design for color imaging
News article: https://optics.org/news/14/12/11
imec presents new way to render colors with sub-micron pixel sizes
This week at the International Electron Devices Meeting, in San Francisco, CA, (IEEE IEDM 2023), imec, a Belgium-based research and innovation hub in nanoelectronics and digital technologies, has demonstrated a new method for “faithfully splitting colors with sub-micron resolution using standard back-end-of-line processing on 300mm wafers”.
imec says that the technology is poised to elevate high-end camera performance, delivering a higher signal-to-noise ratio and enhanced color quality at unprecedented spatial resolution.
Designing next-generation CMOS imagers requires striking a balance between collecting all incoming photons, achieving resolution down to the diffraction limit, and accurately recording the color of the light.
Traditional image sensors with color filters on the pixels are still limited in combining all three requirements. While higher pixel densities would increase the overall image resolution, smaller pixels capture even less light and are prone to artifacts that result from interpolating color values from neighboring pixels.
Even though diffraction-based color splitters represent a leap forward in increasing color sensitivity and capturing light, they are still unable to improve image resolution.
'Fundamentally new' approach
imec is now proposing a fundamentally new way for splitting colors at sub-micron pixel sizes (i.e., beyond the fundamental Abbe diffraction limit) using standard back-end processing. The approach is said to “tick all the boxes” for next-generation imagers by collecting nearly all photons, increasing resolution by utilizing very small pixels, and rendering colors faithfully.
To achieve this, imec researchers built an array of vertical Si3N4 multimode waveguides in an SiO2 matrix. The waveguides have a tapered, diffraction-limited-size input (e.g., 800 × 800 nm²) to collect all the incident light.
“In each waveguide, incident photons are exciting both symmetric and asymmetric modes, which propagate through the waveguide differently, leading to a unique “beating” pattern between the two modes for a given frequency. This beating pattern enables a spatial separation at the end of the waveguides corresponding to a specific color,” said Prof. Jan Genoe, scientific director at imec.
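The two-mode beating that Prof. Genoe describes can be written in standard waveguide notation (generic symbols, not taken from the paper):

```latex
% Two guided modes with propagation constants \beta_s (symmetric) and
% \beta_a (asymmetric) accumulate a relative phase along the waveguide:
\Delta\phi(z) = (\beta_s - \beta_a)\, z
% The intensity pattern therefore repeats with the beat length
L_b = \frac{2\pi}{\beta_s - \beta_a}
    = \frac{\lambda}{n_{\mathrm{eff},s} - n_{\mathrm{eff},a}}
% Because the effective indices are wavelength-dependent, each color
% arrives at the output facet with a different phase \Delta\phi(L) and
% thus exits at a different lateral position.
```

This wavelength dependence of the beat pattern is what lets a fixed-length waveguide map color onto exit position, replacing an absorptive color filter with a lossless splitter.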
Cost-efficient structures
The total output light from each waveguide is estimated to reach over 90% within the range of human color perception (wavelength range 400-700nm), making it superior to color filters, says imec.
Robert Gehlhaar, principal member of technical staff at imec, said, “Because this technique is compatible with standard 300-mm processing, the splitters can be produced cost-efficiently. This enables further scaling of high-resolution imagers, with the ultimate goal to detect every incident photon and its properties.
“Our ambition is to become the future standard for color imaging with diffraction-limited resolution. We are welcoming industry partners to join us on this path towards full camera demonstration.”
RGB camera measurement (100x magnification) of an array of waveguides with alternating 5 left-side-open-aperture and 5 right-side-open-aperture (the others being occluded by TiN) waveguides at a 1-micron pitch. Yellow light exits at the right part of the waveguide, whereas the blue exits at the left. The wafer is illuminated using plane wave white light. Credit: imec.
3D visualization (left) and TEM cross-section (right) of the vertical waveguide array for color splitting in BY-CR imaging. Credit: imec.
OmniVision 15MP/1MP hybrid RGB/event vision sensor (ISSCC 2023)
Guo et al. from OmniVision presented a hybrid RGB/event vision sensor in a paper titled "A 3-Wafer-Stacked Hybrid 15MPixel CIS + 1 MPixel EVS with 4.6GEvent/s Readout, In-Pixel TDC and On-Chip ISP and ESP Function" at ISSCC 2023.
Abstract: Event Vision Sensors (EVS) determine, at pixel level, whether a temporal contrast change beyond a predefined threshold is detected [1–6]. Compared to CMOS image sensors (CIS), this new modality inherently provides data-compression functionality and hence, enables high-speed, low-latency data capture while operating at low power. Numerous applications such as object tracking, 3D detection, or slow-motion are being researched based on EVS [1]. Temporal contrast detection is a relative measurement and is encoded by so-called “events” being further characterized through x/y pixel location, event time-stamp (t) and the polarity (p), indicating whether an increase or decrease in illuminance has been detected.
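The event encoding in the abstract, an event (x, y, t, p) fired wherever the temporal contrast crosses a threshold, can be modeled per frame in a few lines. This is a simplified software model of the asynchronous pixel circuit, and the threshold value is an assumption:

```python
import numpy as np

THETA = 0.2  # log-intensity contrast threshold (assumed value)

def generate_events(prev_log, curr_log, t):
    """Emit (x, y, t, p) events wherever the log-intensity change exceeds
    the contrast threshold, matching the EVS modality in the abstract:
    polarity p = +1 for an illuminance increase, -1 for a decrease."""
    diff = curr_log - prev_log
    ys, xs = np.nonzero(np.abs(diff) > THETA)
    return [(int(x), int(y), t, 1 if diff[y, x] > 0 else -1)
            for y, x in zip(ys, xs)]

# usage: two pixels change in a 4x4 scene; everything else stays silent,
# which is the data-compression property the paper highlights
prev = np.log1p(np.full((4, 4), 100.0))
curr = prev.copy()
curr[1, 2] += 0.3      # brightness increase at (x=2, y=1)
curr[3, 0] -= 0.5      # brightness decrease at (x=0, y=3)
print(generate_events(prev, curr, t=0))  # [(2, 1, 0, 1), (0, 3, 0, -1)]
```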
Job Postings – Week of 17 Dec 2023
Meta | Image Sensor Application Engineer | Sunnyvale, California, USA or Redmond, Washington, USA
California Institute of Technology | Detector Engineer | Pasadena, California, USA
University of Southampton | PhD Studentship: Greenhouse Gas Detection using Silicon Photonics Platform | Southampton, UK (follow “How to Apply” instructions)
Space Dynamics Laboratory | Electro-Optical Sensor Systems Engineer | North Logan, Utah, USA
Raytheon | Process Code Engineer | Andover, Massachusetts, USA
SOLEIL Synchrotron | Detector Group Leader | Saint-Aubin, France
Teledyne e2v Technologies | Focal Plane Engineer | Camarillo, California, USA
Pixxel | Sensors Specialist (EO/IR) | Bengaluru, Karnataka, India
onsemi | Summer 2024 Device Engineering Intern | Hopewell Junction, New York, USA
X-FAB introduces NIR SPADs on their 180nm process
X-FAB Introduces New Generation of Enhanced Performance SPAD Devices focused on Near-Infrared Applications
NEWS – Tessenderlo, Belgium – Nov 16, 2023
X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has introduced a dedicated near-infrared version of its single-photon avalanche diode (SPAD) device portfolio. Like the previous SPAD generation, launched in 2021, this version is based on the company’s 180nm XH018 process. An additional step in the fabrication workflow yields a significant increase in signal while retaining the same low noise floor, without negatively affecting parameters such as dark count rate, afterpulsing, and breakdown voltage.
Through this latest variant, X-FAB is successfully expanding the scope of its SPAD offering, improving its ability to address numerous emerging applications where NIR operation proves critically important. Among these are time-of-flight sensing in industrial applications, vehicle LiDAR imaging, biophotonics and FLIM research work, plus a variety of different medical-related activities. Sensitivity is boosted over the whole near-infrared (NIR) band, with respective improvements of 40% and 35% at the key wavelengths of 850nm and 905nm.
Using the new SPAD devices will reduce the complexity of visible light filtering, since UV and visible light is already suppressed. Filter designs will consequently be simpler, with fewer component parts involved. Furthermore, having exactly the same footprint dimensions as the previous SPAD generation provides a straightforward upgrade route. Customers’ existing designs can gain major performance benefits by just swapping in the new devices.
X-FAB has compiled a comprehensive PDK for the near-infrared SPAD variant, with extensive documentation and application notes featured. Models for optical and electrical simulation will provide engineers the additional design support they need, enabling them to integrate these devices into their circuitry within a short time period.
As Heming Wei, Product Marketing Manager Sensors at X-FAB, explains: “Our SPAD technology has already gained a very positive market response, seeing uptake with a multitude of customers. Thanks to continuing innovation at the process level, we have now been able to develop a solution that will secure business for us within various NIR applications, across automotive, healthcare and life sciences.”
The new NIR enhanced SPAD is available now. Engineers can start their design with the new device immediately.
Apple is looking for two sensor designers
These just arrived direct to us from the Apple Camera Silicon team:
Image Sensor Analog Design Engineer - Cupertino, California, USA - Link
Image Sensor Digital Design Engineer - Cupertino, California, USA - Link
Lecture by Dr. Tobi Delbruck on the history of silicon retina and event cameras
Silicon Retina: History, Live Demo, and Whiteboard Pixel Design
Rockwood Memorial Lecture 2023 (11/20/23): Tobi Delbruck, Institute of Neuroinformatics, UZH-ETH Zürich
https://inc.ucsd.edu/events/rockwood/
Hosted by: Terry Sejnowski, Ph.D. and Gert Cauwenberghs, Ph.D.
Organized by: Institute for Neural Computation, https://inc.ucsd.edu
Abstract: Event cameras electronically model spike-based sparse output from biological eyes to reduce latency, increase dynamic range, and sparsify activity in comparison to conventional imagers. Driven by the need for more efficient battery-powered, always-on machine vision in future wearables, event cameras have emerged as a next step in the continued evolution of electronic vision. This lecture has three parts: (1) a brief history of silicon retina development, starting from Fukushima’s Neocognitron and Mahowald and Mead’s earliest spatial retinas; (2) a live demo of a contemporary frame-event DAVIS camera that includes an inertial measurement unit (IMU) vestibular system; and (3) (targeted at neuromorphic analog circuit design students in the BENG 216 class) a whiteboard discussion of event camera pixel design at the transistor level, highlighting the design aspects that endow event camera pixels with fast response even under low lighting, precise threshold matching even under large transistor mismatch, and a temperature-independent event threshold.
A couple of direct job postings from Teledyne
Teledyne sent us an e-mail asking us to post these jobs for the consideration of our readers:
Staff Pixel Process Engineer – CMOS Image Sensor R&D - Waterloo, Ontario, Canada - Link
CMOS Sensor Product Support - Waterloo, Ontario, Canada - Link
3D stacked BSI SPAD sensor with on-chip lens
Fujisaki et al. from Sony Semiconductor (Japan) presented a paper titled "A back-illuminated 6 μm SPAD depth sensor with PDE 36.5% at 940 nm via combination of dual diffraction structure and 2×2 on-chip lens" at the 2023 IEEE Symposium on VLSI Technology and Circuits.
Abstract: We present a back-illuminated 3D-stacked 6 μm single-photon avalanche diode (SPAD) sensor with very high photon detection efficiency (PDE) performance. To enhance PDE, a dual diffraction structure was combined with a 2×2 on-chip lens (OCL) for the first time. The dual diffraction structure comprises a pyramid surface for diffraction (PSD) and periodic uneven structures formed by shallow trenches for diffraction, on the light-facing and opposite sides of the Si surface, respectively. Additionally, the PSD pitch and the thickness of the SiO2 film buried in the full trench isolation were optimized. Consequently, a PDE of 36.5% was achieved at λ = 940 nm, the world’s highest value. Owing to a shield ring contact, crosstalk was reduced by about half compared to a conventionally plugged one.
Schematics of Gapless and 2x2 on-chip lens.
Cross sectional SPAD image of (a) our previous work and (b) this work.
Conference List – June 2024
International SPAD Sensor Workshop (ISSW) - 4-6 Jun 2024 - Trento, Italy - Website
Advances in Imaging and Visualization at the Junction of Physics, Engineering, and Data Science - 9-14 Jun 2024 - Newry, Maine, USA - Website
Sensor+Test - 11-13 Jun 2024 - Nuremberg, Germany - Website
Smart Sensing - 12-14 Jun 2024 - Tokyo, Japan - Website
SPIE Astronomical Telescopes + Instrumentation - 15-20 Jun 2024 - Yokohama, Japan - Website
Sensors Converge - 24-26 Jun 2024 - Santa Clara, California, USA - Website
International Workshop on Radiation Imaging Detectors - 30 Jun-4 July 2024 - Lisbon, Portugal - Website
Return to Conference List index
Job Postings – Week of 10 Dec 2023
Dyson | Senior Camera Systems Engineer, Electronics | Singapore
Johnson & Johnson | Optical Engineer – R&D | Cincinnati, Ohio, USA
Italian Space Agency | PostDoc: Development of new technologies for particle detection in space | Rome, Italy
Apple | Senior Firmware Engineer - Camera | Cupertino, California, USA
University of Arizona – Wyant College of Optical Sciences | Postdoctoral Research Associate I | Tucson, Arizona, USA
Image Tuning Engineer, Pixel Camera | Taipei, Taiwan
Stanford University | Professor, Departments of Photon Science and of Particle Physics and Astrophysics | Menlo Park, California, USA
NDI Europe GmbH | Development Engineer Sensor Technology | Radolfzell, Germany
McGill University | Postdoctoral Fellow - nEXO Detector Development (send e-mail) | Montreal, Quebec, Canada
Early announcement: Single Photon Workshop 2024
Single Photon Workshop 2024
EICC Edinburgh 18-22 Nov 2024
www.spw2024.org
The 11th Single Photon Workshop (SPW) 2024 will be held 18-22 November 2024, hosted at the Edinburgh International Conference Centre.
SPW is the largest conference in the world dedicated to single-photon generation and detection technology and applications. The biennial international conference brings together a broad range of experts across academia, industry and government bodies with interests in single-photon sources, single-photon detectors, photon entanglement, photonic quantum technologies and their use in scientific and industrial applications. It is an exciting opportunity for those interested in these technologies to learn about the state of the art and to foster continuing partnerships with others seeking to advance the capabilities of such technologies.
In tandem with the scientific programme, SPW 2024 will include a major industry exhibition and networking events.
Please register your interest at www.spw2024.org
Official registration will open in January 2024.
The 2024 workshop is being jointly organized by Heriot-Watt University and University of Glasgow.
IISW2023 special issue paper: Small-pitch InGaAs photodiodes
In a new paper titled "Design and Characterization of 5 μm Pitch InGaAs Photodiodes Using In Situ Doping and Shallow Mesa Architecture for SWIR Sensing" Jules Tillement et al. from STMicroelectronics, U. Grenoble and CNRS Grenoble write:
Abstract: This paper presents the complete design, fabrication, and characterization of a shallow-mesa photodiode for short-wave infrared (SWIR) sensing. We characterized and demonstrated photodiodes collecting 1.55 μm photons with a pixel pitch as small as 3 μm. For a 5 μm pixel pitch photodiode, we measured an external quantum efficiency as high as 54%. With substrate removal and an ideal anti-reflective coating, we estimated the internal quantum efficiency at 77% at 1.55 μm. The best measured dark current density reached 5 nA/cm2 at −0.1 V and 23 °C. The main contributors to this dark current were investigated through the study of its evolution with temperature. We also highlight the importance of passivation with a perimetric contribution analysis and the correlation between MIS capacitance characterization and dark current performance.
Full paper (open access): https://www.mdpi.com/1424-8220/23/22/9219
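The temperature study mentioned in the abstract (and plotted in Figure 11) distinguishes dark-current mechanisms by activation energy: diffusion-limited current scales roughly as exp(-Eg/kT), while generation-recombination current scales as exp(-Eg/2kT). A sketch of the Arrhenius extraction follows; the bandgap value and the synthetic data are illustrative, not taken from the paper:

```python
import numpy as np

K_B = 8.617e-5      # Boltzmann constant in eV/K
EG_INGAAS = 0.75    # approximate InGaAs bandgap in eV (assumed value)

def activation_energy(temps_K, dark_currents):
    """Arrhenius analysis: fit ln(I) against 1/kT; the negated slope is the
    activation energy Ea. Ea near Eg indicates diffusion-limited dark
    current; Ea near Eg/2 indicates generation-recombination."""
    x = 1.0 / (K_B * np.asarray(temps_K))
    slope, _intercept = np.polyfit(x, np.log(np.asarray(dark_currents)), 1)
    return -slope

# synthetic GR-limited data: I proportional to exp(-Eg / 2kT)
T = np.array([280.0, 300.0, 320.0, 340.0])
I = 1e-9 * np.exp(-EG_INGAAS / (2 * K_B * T))
print(round(activation_energy(T, I), 3))  # 0.375, i.e. Eg/2
```

Extracting an Ea of about half the bandgap from measured data is what points to generation-recombination in the depletion region rather than diffusion as the dominant contributor.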
Figure 1. Schematic cross section of the photodiode after different processes. (a) Photodiode fabricated by Zn diffusion or Be implantation; (b) photodiode fabrication using shallow mesa technique.
Figure 2. Band diagram of simulated structure at equilibrium with the photogenerated pair schematically represented with their path of collection.
Figure 3. Top zoom of the structure—Impact of the N-InP (a) thickness and (b) doping on the band diagram at equilibrium.
Figure 4. Simulated dark current with TCAD Synopsys tools [28]. (a) Shows evolution of the dark current when the InP SRH lifetime is modulated; (b) evolution of the dark current when the InGaAs SRH lifetime is modulated.
Figure 5. Impact of the doping concentration of the InP barrier on the carrier collection.
Figure 6. Simplified and schematic process flow of the shallow mesa-type process. (a) The full stack; (b) the definition of the pixel by etching the P layer and (c) the encapsulation and fabrication of contacts.
Figure 7. SEM views after the whole process. (a) A cross-section of the top stack where the P layer is etched and (b) a top view of the different configurations of the test structures (single in-array diode is not shown on this SEM view).
Figure 8. Schematic cross section of the structure with its potential sources of the dark current.
Figure 9. Dark current measurement on 15 μm pitch diodes in a matrix-like environment. The curve is the median of more than 100 single in-array diodes.
Figure 10. Dark current measurement of the ten-by-ten diode bundle. This measurement is from process B.
Figure 11. Evolution of the dark current with temperature at −0.1 V. The solid lines show the theoretical evolution of the current limited by diffusion (light blue line) and by generation recombination (purple line). The temperature measurement is performed on a bundle of ten-by-ten 5 μm pixel pitch diodes.
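The diffusion-limited vs. generation-recombination-limited regimes compared in Figure 11 differ in their apparent activation energies, since diffusion current scales with n_i² (activation near Eg) while G-R current scales with n_i (near Eg/2). A hedged sketch of that Arrhenius analysis (Eg = 0.75 eV is a textbook value for In0.53Ga0.47As; the prefactors are arbitrary, so only the slopes are meaningful):

```python
# Arrhenius sketch of the two dark-current regimes (illustrative only).
import math

K_B = 8.617e-5   # Boltzmann constant, eV/K
EG = 0.75        # approx. room-temperature bandgap of InGaAs, eV

def j_diffusion(t_k: float) -> float:
    """Diffusion-limited dark current (arb. units): ~ T^3 exp(-Eg/kT)."""
    return t_k**3 * math.exp(-EG / (K_B * t_k))

def j_gr(t_k: float) -> float:
    """G-R-limited dark current (arb. units): ~ T^1.5 exp(-Eg/2kT)."""
    return t_k**1.5 * math.exp(-EG / (2 * K_B * t_k))

def apparent_ea_ev(j, t1: float = 280.0, t2: float = 320.0) -> float:
    """Activation energy from the slope of ln(J) vs 1/T."""
    return -K_B * (math.log(j(t2)) - math.log(j(t1))) / (1/t2 - 1/t1)

# Diffusion shows ~Eg (plus a small prefactor term), G-R roughly Eg/2:
print(f"diffusion: {apparent_ea_ev(j_diffusion):.2f} eV")
print(f"G-R:       {apparent_ea_ev(j_gr):.2f} eV")
```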
Figure 12. Perimetric and bulk contribution to the global dark current from measurements performed on diodes with diameter ranging from 10 to 120 μm.
Figure 13. (a) Capacitance measurement on metal–insulator–semiconductor structure. The measurement starts at 0 V then ramps to +40 V then goes to −40 V and ends at +40 V. (b) A cross section of the MIS structure. The MIS is a 300 μm diameter circle.
Figure 14. Dark current performances compared to the hysteresis measured on several different wafers.
Figure 15. Dark current measurement of a ten-by-ten bundle of 5 μm pixel pitch photodiodes. The measurements are conducted at 23 °C.
Figure 16. (a) Schematic test structure for QE measurement; (b) the results of the 3D FDTD simulations conducted with Lumerical to estimate the internal QE of the photodiode.
Figure 18. Current noise for a ten-by-ten 5 μm pixel pitch photodiode bundle measured at −0.1 V.
Figure 19. Median current measurement for bundles of one hundred 3 μm pixel pitch photodiodes under dark and SWIR illumination conditions. The dark blue line represents the dark current and the pink line is the photocurrent under 1.55 μm illumination.
Figure 20. Comparison of our work in blue versus the state of the art for the fabrication of InGaAs photodiodes.
Sony announces new 5MP SWIR sensor IMX992
Image Sensors World Go to the original article...
Product page: https://www.sony-semicon.com/en/products/is/industry/swir/imx992-993.html
Press release: https://www.sony-semicon.com/en/news/2023/2023112901.html
Sony Semiconductor Solutions to Release SWIR Image Sensor for Industrial Applications with Industry-Leading 5.32 Effective Megapixels
Expanding the lineup for delivering high-resolution and low-light performance
Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX992 short-wavelength infrared (SWIR) image sensor for industrial equipment, with the industry’s highest pixel count, at 5.32 effective megapixels.
The new sensor uses SSS’s proprietary Cu-Cu connection to achieve the industry’s smallest pixel size of 3.45 μm among SWIR image sensors. It also features an optimized pixel structure for efficiently capturing light, enabling high-definition imaging across a broad spectrum ranging from the visible to invisible short-wavelength infrared regions (wavelength: 0.4 to 1.7 μm). Furthermore, new shooting modes deliver high-quality images with significantly reduced noise in dark environments compared to conventional products.
In addition to this product, SSS will also release the IMX993 with a pixel size of 3.45 μm and an effective pixel count of 3.21 megapixels to further expand its SWIR image sensor lineup. These new SWIR image sensors with high pixel counts and high sensitivity will help contribute to the evolution of various industrial equipment.
In the industrial equipment domain in recent years, there has been increasing demand for improving productivity and preventing defective products from leaving the plant. In this context, the capacity to sense not only visible light but also light in the invisible band is in demand. SSS’s SWIR image sensors, which are capable of seamless wide spectrum imaging in the visible to invisible short-wavelength infrared range using a single camera, are already being used in various processes such as semiconductor wafer bonding and defect inspection, as well as ingredient and contaminant inspections in food production.
The new sensors enable imaging with higher resolution using pixel miniaturization, while enhancing imaging performance in low-light environments to provide higher quality imaging in inspection and monitoring applications conducted in darker conditions. By making the most of the characteristics of short-wavelength infrared light, whose light reflection and absorption properties are different from those of visible light, these products help to further expand applications in such areas as inspection, recognition and measurement, thereby contributing to improved industrial productivity.
Main Features
* High pixel count made possible by the industry’s smallest pixels at 3.45 μm, delivering high-resolution imaging
A Cu-Cu connection is used between the indium-gallium arsenide (InGaAs) layer that forms the photodiode of the light receiving unit and the silicon (Si) layer that forms the readout circuit. This design allows for a smaller pixel pitch, resulting in the industry’s smallest pixel size of 3.45 μm. This, in turn, helps achieve a compact form factor that still delivers the industry’s highest pixel count of approximately 5.32 effective megapixels on the IMX992, and approximately 3.21 effective megapixels on the IMX993. The higher pixel count enables detection of tiny objects or imaging across a wide range, contributing to significantly improved recognition and measurement precision in various inspections using short-wavelength infrared light.
Comparison of SWIR images with different resolutions: Lighting wavelength 1550 nm
(Left: Other SSS product, 1.34 effective megapixels; Right: IMX992)
* Low-noise imaging even in dark locations possible by switching the shooting mode
Inclusion of new shooting modes enables low-noise imaging regardless of environmental brightness. In dark environments with limited light, High Conversion Gain (HCG) mode amplifies the signal immediately after the light is converted to an electrical signal, before significant noise is added, thereby reducing the relative contribution of downstream noise. This minimizes the impact of noise in dark locations, leading to greater recognition precision. In bright environments with plenty of light, on the other hand, Low Conversion Gain (LCG) mode enables imaging that prioritizes dynamic range.
Furthermore, enabling Dual Read Rolling Shutter (DRRS) makes the sensor output two distinct types of images, which are then composited on the camera to produce an image with significantly reduced noise.
(Left: Other SSS product, 1.34 effective megapixels; Center: IMX992, HCG mode selected; Right: IMX992, HCG mode selected, DRRS enabled)
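The conversion-gain trade-off described above can be illustrated numerically (all figures below are invented for illustration; Sony does not publish these values in the release). Downstream read noise is fixed at the output, so a larger conversion gain shrinks it when referred back to electrons at the pixel, and averaging two reads with uncorrelated noise, as a DRRS-style composite might, improves by a further √2:

```python
# HCG vs LCG input-referred noise sketch; all numbers are assumptions.
import math

def input_referred_noise_e(read_noise_uv: float, cg_uv_per_e: float) -> float:
    """Refer fixed downstream read noise back to electrons at the pixel."""
    return read_noise_uv / cg_uv_per_e

READ_NOISE_UV = 200.0  # assumed downstream (amplifier/ADC) noise, uV rms

noise_lcg = input_referred_noise_e(READ_NOISE_UV, cg_uv_per_e=40.0)
noise_hcg = input_referred_noise_e(READ_NOISE_UV, cg_uv_per_e=160.0)
noise_two_reads = noise_hcg / math.sqrt(2)  # two uncorrelated reads averaged

print(noise_lcg, noise_hcg, noise_two_reads)  # 5.0, 1.25, ~0.88 e- rms
```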
* Optimized pixel structure for high-sensitivity imaging across a wide range
SSS’s SWIR image sensors employ a thinner top indium phosphide (InP) layer, which would otherwise inevitably absorb visible light, thereby allowing visible light to reach the indium gallium arsenide (InGaAs) layer underneath and delivering high quantum efficiency even at visible wavelengths. The new products deliver even higher quantum efficiency by optimizing the pixel structure, enabling more uniform sensitivity characteristics across a wide wavelength band from 0.4 to 1.7 μm. Minimizing the image quality differences between wavelengths makes it possible to use the image sensor in a variety of industrial applications and contributes to improved reliability in inspection, recognition, and measurement applications.
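Quantum efficiency over the 0.4 to 1.7 μm band relates to responsivity in A/W through the standard relation R = QE·q·λ/(hc); the QE value below is illustrative, not a published figure for these sensors:

```python
# Standard QE -> responsivity conversion (example QE is an assumption).
H = 6.626e-34    # Planck constant, J*s
C = 2.998e8      # speed of light, m/s
Q_E = 1.602e-19  # elementary charge, C

def responsivity_a_per_w(qe: float, wavelength_um: float) -> float:
    """Responsivity R = QE * q * lambda / (h * c), in A/W."""
    lam_m = wavelength_um * 1e-6
    return qe * Q_E * lam_m / (H * C)

# e.g. an assumed 70% QE at 1.55 um:
print(f"{responsivity_a_per_w(0.70, 1.55):.2f} A/W")
```

Note how responsivity rises with wavelength at constant QE, since each SWIR photon carries less energy than a visible one.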
Product Overview
Prof. Edoardo Charbon’s Talk on IR SPADs for LiDAR & Quantum Imaging
SWIR/NIR SPAD Image Sensors for LIDAR and Quantum Imaging Applications, by Prof. Charbon
In this talk, Prof. Charbon will review the evolution of solid-state photon counting sensors from avalanche photodiodes (APDs) to silicon photomultipliers (SiPMs) to single-photon avalanche diodes (SPADs). The impact of these sensors on LiDAR has been remarkable; however, more innovations are to come with the continuous advance of integrated SPADs and the introduction of powerful computational imaging techniques directly coupled to SPADs/SiPMs. New technologies, such as 3D-stacking in combination with Ge and InP/InGaAs SPAD sensors, are accelerating the adoption of SWIR/NIR image sensors, while enabling new sensing functionalities. Prof. Charbon will conclude the talk with a technological perspective on how all these technologies could come together in low-cost, computation-intensive image sensors for affordable, yet powerful quantum imaging.
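One practical aspect of the photon counting the talk covers: a SPAD is blind for a dead time after each detection, so measured count rates saturate at high flux. The non-paralyzable dead-time correction below is textbook material, not something specific to this talk; the numbers are illustrative:

```python
# Non-paralyzable dead-time correction for a SPAD count rate.
def true_rate(measured_hz: float, dead_time_s: float) -> float:
    """Recover the incident photon rate from a dead-time-limited SPAD."""
    return measured_hz / (1.0 - measured_hz * dead_time_s)

# With an assumed 50 ns dead time, a measured 5 Mcps corresponds to:
r = true_rate(5e6, 50e-9)
print(f"{r / 1e6:.2f} Mcps incident")  # ~6.67 Mcps
```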
Edoardo Charbon (SM’00 F’17) received the Diploma from ETH Zurich, the M.S. from the University of California at San Diego, and the Ph.D. from the University of California at Berkeley in 1988, 1991, and 1995, respectively, all in electrical engineering and EECS. He has consulted with numerous organizations, including Bosch, X-Fab, Texas Instruments, Maxim, Sony, Agilent, and the Carlyle Group. He was with Cadence Design Systems from 1995 to 2000, where he was the Architect of the company's initiative on information hiding for intellectual property protection. In 2000, he joined Canesta Inc., as the Chief Architect, where he led the development of wireless 3-D CMOS image sensors.
Since 2002 he has been a member of the faculty of EPFL, where he is a full professor. From 2008 to 2016 he was with Delft University of Technology as Chair of VLSI Design. Dr. Charbon has been the driving force behind the creation of deep-submicron CMOS SPAD technology, which has been mass-produced since 2015 and is present in telemeters, proximity sensors, and medical diagnostics tools. His interests span from 3-D vision, LiDAR, FLIM, FCS, and NIROT to super-resolution microscopy, time-resolved Raman spectroscopy, and cryo-CMOS circuits and systems for quantum computing. He has authored or co-authored over 400 papers and two books, and he holds 24 patents. Dr. Charbon is the recipient of the 2023 IISS Pioneering Achievement Award; he is a distinguished visiting scholar of the W. M. Keck Institute for Space at Caltech, a fellow of the Kavli Institute of Nanoscience Delft, a distinguished lecturer of the IEEE Photonics Society, and a fellow of the IEEE.
Job Postings – Week of 3 Dec 2023
Johnson & Johnson | Principal Electrical Engineer – Vision | Santa Clara, California, USA; Cincinnati, Ohio, USA
Shenzhen Institute of Advanced Technology | Faculty positions in Research Center for Intelligent Biomedical Materials and Devices (IBMD) | Shenzhen, Guangdong, China
Andor Technology | Physicist | Belfast, Northern Ireland, UK
Bruker | Application Scientist, Magnetic Particle Imaging | Ettlingen, Germany
Raytheon | EO – Senior Principal Optical Subsystems Engineer | Tucson, Arizona, USA
Telops | R&D Project Manager | Quebec City, Quebec, Canada
CERN | R&D on CMOS detectors for the new experiments at the Future Circular Collider | Geneva, Switzerland
University of Sussex | PhD studentship on novel opaque scintillator detector R&D | Brighton, UK
Prophesee event sensor in 2023 VLSI symposium
Schon et al. from Prophesee published a paper titled "A 320 x 320 1/5" BSI-CMOS stacked event sensor for low-power vision applications" at the 2023 VLSI Symposium. The paper presents technical details of their recently announced GenX320 sensor.
Abstract
Event vision sensors acquire sparse data, making them suited for edge vision applications. However, unconventional data format, non-constant data rates, and non-standard interfaces restrain wide adoption. A 320x320, 6.3 μm pixel, BSI stacked event sensor, specifically designed for embedded vision, features multiple data pre-processing, filtering, and formatting functions, variable MIPI and CPI interfaces, and a hierarchy of power modes, facilitating operability in power-sensitive vision applications.
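The sparsity argument in the abstract can be made concrete with a small sketch. As a hedged illustration (this is a generic address-event tuple, not Prophesee's actual EVT encoding or any documented format), pack and unpack one event as (x, y, polarity, timestamp in microseconds):

```python
# Generic address-event packing sketch (illustrative format only).
import struct

EVENT_FMT = "<HHBxI"  # x:u16, y:u16, polarity:u8, 1 pad byte, t:u32

def pack_event(x: int, y: int, polarity: int, t_us: int) -> bytes:
    return struct.pack(EVENT_FMT, x, y, polarity, t_us)

def unpack_event(buf: bytes) -> tuple:
    x, y, pol, t = struct.unpack(EVENT_FMT, buf)
    return (x, y, pol, t)

ev = pack_event(120, 300, 1, 123_456)
assert unpack_event(ev) == (120, 300, 1, 123_456)
# 320x320 coordinates fit comfortably in u16; each event costs
# struct.calcsize(EVENT_FMT) == 10 bytes, versus shipping a full
# 320*320 frame every readout when the scene is mostly static.
```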