Exosens - Photonis Infrared | R&D Engineer in Infrared & CMOS Image Sensor | Grenoble, France | Link
(Note: The listing at the link is in French)
OSRAM OS San Jose Sensor Characterization Engineer | Boise, Idaho, USA
DESY Instrument Scientist (Gamma Rays and UV) | Hamburg, Germany
Rockwell Automation EDGE – Support Engineer (Engineer in Training position) | Mayfield Heights, Ohio, USA
Jozef Stefan Institute Cherenkov-based PET Detector R&D, Postdoc | Ljubljana, Slovenia
onsemi Entry Level Analog Engineer | Richardson, Texas, USA
IDTechEx Sensors Analyst: Market Research & Consultancy | London, England, UK
Lockheed-Martin Corporation IR Camera Systems Engineer Early Career | Santa Barbara, California, USA
INSION GmbH Engineer Product Development Spectral Sensor Technology | Obersulm, Germany
Sony Semiconductor Solutions - America | Automotive Image Sensor Field Applications Engineer | Novi, Michigan, USA | Link
From: https://hokuyo-usa.com/resources/blog/pioneering-autonomous-capabilities-solid-state-3d-lidar
Autonomous technologies are proliferating across industries at breakneck speed. Various sectors, like manufacturing, agriculture, storage, freight, etc., are rushing to embrace robotics, automation, and self-driving capabilities.
At the helm of this autonomous transformation is LiDAR, the eyes that allow these technologies to perceive and understand their surroundings. Like a hawk scanning the landscape with sharp vision, LiDAR gives clarity and insight into what stands before it. Market research supports the trend of increasing LiDAR adoption and anticipates that the global LiDAR market will reach 5.35 billion USD by 2030.
While spinning mechanical LiDAR sensors have paved the way, acting as the eyes of autonomous systems, they remain too bulky, delicate, and expensive for many real-world applications. New solid-state 3D LiDAR is here to change the game: by integrating beam steering onto a single durable chip with no moving parts, it promises reliability and affordability that mechanical designs cannot match.
How YLM-X001 3D LiDAR Range Sensor is Transforming Scanning Capabilities
The YLM-X001 outdoor-use 3D LiDAR by Hokuyo sets new standards with groundbreaking features. The range sensor has a small form factor, measuring 119 (W) x 85 (D) x 79 (H) mm, allowing it to integrate seamlessly into any vehicle. Despite its small size, it boasts a scanning range of 120° horizontally and 90° vertically, so it can scan a large scene and provide data in real time to avoid collisions with any object.
Furthermore, at the heart of this LiDAR range sensor is the Light Control Metasurface (LCM) beam-steering technology patented by Lumotive, Inc. The jointly developed sensor steers light using the deflection angle of liquid crystals, without relying on mechanical parts. This digital scanning technology combines a VCSEL-based line laser with liquid crystal deflection, enabling the LiDAR to perform efficient, high-resolution 3D object recognition.
The LCM not only eliminates mechanical components but also reduces multipath interference and inter-sensor interference. Reducing both yields a level of measurement stability that was previously unattainable with mechanical LiDARs.
The YLM-X001 3D LiDAR range sensor offers dynamic digital scanning that maintains stable distance accuracy under multipath and LiDAR-to-LiDAR interference. Through continuous, dynamic scanning, it can measure the distance of stationary and moving objects in the direction of travel and on the road surface.
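To make the scanning geometry concrete, here is a minimal sketch of turning a range image from such a sensor into a 3D point cloud. The 120° x 90° field of view comes from the description above; the angular grid resolution and the range values are illustrative placeholders, not YLM-X001 specifications.

```python
import numpy as np

# Toy conversion of a solid-state LiDAR range image into a 3D point cloud.
H_FOV, V_FOV = np.deg2rad(120.0), np.deg2rad(90.0)  # from the YLM-X001 description
n_az, n_el = 160, 120                               # assumed angular grid, not a spec

az = np.linspace(-H_FOV / 2, H_FOV / 2, n_az)       # azimuth per column
el = np.linspace(-V_FOV / 2, V_FOV / 2, n_el)       # elevation per row
az_g, el_g = np.meshgrid(az, el)

r = np.full((n_el, n_az), 5.0)                      # placeholder range image: 5 m everywhere

# Spherical -> Cartesian (x forward, y left, z up).
x = r * np.cos(el_g) * np.cos(az_g)
y = r * np.cos(el_g) * np.sin(az_g)
z = r * np.sin(el_g)
points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
print(points.shape)  # (19200, 3)
```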
Notable Features of YLM-X001
New and market-leading features are packed inside this LiDAR, making it a better choice than mechanical LiDARs.
Using 3D LiDAR in Real World Applications
The YLM-X001 finds its stride in various applications, making it an invaluable asset in robotics.
AGV/AMR Integration
Our 3D LiDAR sensors enhance AGV/AMR navigation and obstacle detection precision. They continuously scan the environment, providing real-time data, ideal for autonomous vehicles in dynamic environments.
Additionally, forklifts can utilize the capabilities of 3D LiDAR for accurate detection of container and pallet entrances. The sensor can also support path planning and ensure accurate positioning of the forklift.
Service Robot Operations
Robots equipped with 3D LiDAR gain an enhanced framework for avoiding obstacles and monitoring road surface conditions. Whether navigating complex indoor or outdoor spaces, these robots can adapt to changing conditions with unmatched accuracy.
Enhance Autonomous Mobility with Hokuyo YLM-X001 3D LiDAR
As industries embrace autonomous technology, the need for accurate range scanning sensors increases. Solid-state LiDARs offer a small form factor and precise measurements, becoming an ideal replacement for mechanical LiDARs.
Our team at Hokuyo is working relentlessly to help you achieve the pinnacle of autonomous mobility. We are developing high-end sensor solutions for a variety of autonomous applications. Our recent development, the YLM-X001 3D LiDAR range sensor, delivers accurate obstacle detection and continuous scanning.
Technical specifications of the YLM-X001 3D LiDAR range sensor: https://www.hokuyo-aut.jp/search/single.php?serial=247#drawing
In a paper titled "Silver telluride colloidal quantum dot infrared photodetectors and image sensors" Wang et al. from ICFO, ICREA, and Qurv Technologies (Spain) write:
Photodetectors that are sensitive in the shortwave-infrared (SWIR) range (1–2 µm) are of great interest for applications such as machine vision, autonomous driving and three-dimensional, night and adverse weather imaging, among others. Currently available technologies in the SWIR range rely on costly epitaxial semiconductors that are not monolithically integrated with complementary metal–oxide–semiconductor electronics. Solution-processed quantum dots can address this challenge by enabling low-cost manufacturing and simple monolithic integration on silicon in a back-end-of-line process. So far, colloidal quantum dot materials to access the SWIR regime are mostly based on lead sulfide and mercury telluride compounds, imposing major regulatory concerns for their deployment in consumer electronics due to the presence of toxic heavy metals. Here we report a new synthesis method for environmentally friendly silver telluride quantum dots and their application in high-performance SWIR photodetectors. The colloidal quantum dot photodetector stack employs materials compliant with the Restriction of Hazardous Substances directives and is sensitive in the spectral range from 350 nm to 1,600 nm. The room-temperature detectivity is of the order of 10^{12} Jones, the 3 dB bandwidth is in excess of 0.1 MHz and the linear dynamic range is over 118 dB. We also realize a monolithically integrated SWIR imager based on solution-processed, toxic-heavy-metal-free materials, thus paving the way for this technology to the consumer electronics market.
Full paper (behind paywall): https://www.nature.com/articles/s41566-023-01345-3
Coverage in phys.org: https://phys.org/news/2024-01-toxic-quantum-dots-pave-cmos.html
Non-toxic quantum dots pave the way towards CMOS shortwave infrared image sensors for consumer electronics
Invisible to our eyes, shortwave infrared (SWIR) light can enable unprecedented reliability, function and performance in high-volume, computer-vision-first applications in the service robotics, automotive and consumer electronics markets.
Image sensors with SWIR sensitivity can operate reliably under adverse conditions such as bright sunlight, fog, haze and smoke. Furthermore, the SWIR range provides eye-safe illumination sources and opens up the possibility of detecting material properties through molecular imaging.
Colloidal quantum dots (CQD)-based image sensor technology offers a promising technology platform to enable high-volume compatible image sensors in the SWIR.
CQDs, nanometric semiconductor crystals, are a solution-processed material platform that can be integrated with CMOS and enables access to the SWIR range. However, a fundamental roadblock exists in translating SWIR-sensitive quantum dots into key enabling technology for mass-market applications, as they often contain heavy metals like lead or mercury (IV-VI Pb, Hg-chalcogenide semiconductors).
These materials are subject to regulations by the Restriction of Hazardous Substances (RoHS), a European directive that regulates their use in commercial consumer electronic applications.
In a study published in Nature Photonics, ICFO researchers Yongjie Wang, Lucheng Peng, and Aditya Malla led by ICREA Prof. at ICFO Gerasimos Konstantatos, in collaboration with researchers Julien Schreier, Yu Bi, Andres Black, and Stijn Goossens, from Qurv, have reported on the development of high-performance infrared photodetectors and an SWIR image sensor operating at room temperature based on non-toxic colloidal quantum dots.
The study describes a new method for synthesizing size tunable, phosphine-free silver telluride (Ag2Te) quantum dots while preserving the advantageous properties of traditional heavy-metal counterparts, paving the way to the introduction of SWIR colloidal quantum dot technology in high-volume markets.
While investigating how to synthesize silver bismuth telluride (AgBiTe2) nanocrystals to extend the spectral coverage of their AgBiS2 technology and enhance the performance of photovoltaic devices, the researchers obtained silver telluride (Ag2Te) as a by-product.
This material showed a strong and tunable quantum-confined absorption akin to quantum dots. They realized its potential for SWIR photodetectors and image sensors and pivoted their efforts to achieve and control a new process to synthesize phosphine-free versions of silver telluride quantum dots, as phosphine was found to have a detrimental impact on the optoelectronic properties of the quantum dots relevant to photodetection.
In their new synthetic method, the team used phosphine-free tellurium and silver precursor complexes, which yielded quantum dots with well-controlled size distributions and excitonic peaks across a very broad range of the spectrum.
After fabrication and characterization, the newly synthesized quantum dots exhibited remarkable performance, with distinct excitonic peaks beyond 1,500 nm, an achievement not reached with previous phosphine-based synthesis techniques.
The researchers then decided to implement the obtained phosphine-free quantum dots to fabricate a simple laboratory scale photodetector on the common standard ITO (Indium Tin Oxide)-coated glass substrate to characterize the devices and measure their properties.
"Those lab-scale devices are operated with shining light from the bottom. For CMOS integrated CQD stacks, light comes from the top, whereas the bottom part of the device is taken by the CMOS electronics," said Yongjie Wang, postdoc researcher at ICFO and first author of the study. "So, the first challenge we had to overcome was reverting the device setup. A process that in theory sounds simple, but in reality proved to be a challenging task."
Initially, the photodiode exhibited a low performance in sensing SWIR light, prompting a redesign that incorporated a buffer layer. This adjustment significantly enhanced the photodetector performance, resulting in a SWIR photodiode exhibiting a spectral range from 350nm to 1,600nm, a linear dynamic range exceeding 118 dB, a -3dB bandwidth surpassing 110 kHz and a room temperature detectivity of the order 10^{12} Jones.
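For reference, the detectivity quoted in Jones normalizes the noise-equivalent power (NEP) by the detector area A and the measurement bandwidth Δf; this is the textbook definition, not a formula from the paper:

```latex
\[
D^{*} = \frac{\sqrt{A\,\Delta f}}{\mathrm{NEP}}
\qquad \left[\,\mathrm{cm\,Hz^{1/2}\,W^{-1}} \equiv \text{Jones}\,\right]
\]
```

A larger D* therefore means a smaller area- and bandwidth-normalized NEP, which is what allows comparison against detectors of different sizes.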
"To the best of our knowledge, the photodiodes reported here have for the first time realized solution processed, non-toxic shortwave infrared photodiodes with figures of merit on par with other heavy-metal containing counterparts," Gerasimos Konstantatos, ICREA Prof. at ICFO and leading author of the study mentions.
"These results further support the fact that Ag2Te quantum dots emerge as a promising RoHS-compliant material for low-cost, high-performance SWIR photodetectors applications."
With the successful development of this heavy-metal-free quantum dot based photodetector, the researchers went further and teamed up with Qurv, an ICFO spin-off, to demonstrate its potential by constructing a SWIR image sensor as a case study.
The team integrated the new photodiode with a CMOS based read-out integrated circuit (ROIC) focal plane array (FPA) demonstrating for the first time a proof-of-concept, non-toxic, room temperature-operating SWIR quantum dot based image sensor.
The authors of the study tested the imager to prove its operation in the SWIR by taking several pictures of a target object. In particular, they imaged through silicon wafers, which transmit SWIR light, and visualized the contents of plastic bottles that are opaque in the visible range.
“Accessing the SWIR with a low-cost technology for consumer electronics will unleash the potential of this spectral range with a huge range of applications, including improved vision systems for the automotive industry that enable vision and driving under adverse weather conditions,” says Gerasimos Konstantatos.
"SWIR band around 1.35–1.40 µm, can provide an eye-safe window, free of background light under day/night conditions, thus, further enabling long-range light detection and ranging (LiDAR), three-dimensional imaging for automotive, augmented reality and virtual reality applications."
Now the researchers want to increase the performance of photodiodes by engineering the stack of layers that comprise the photodetector device. They also want to explore new surface chemistries for the Ag2Te quantum dots to improve the performance and the thermal and environmental stability of the material on its way to the market.
Surrey Satellite Technology Ltd. Imager Electronics Engineer | Guildford, Surrey, UK
Booz Allen Hamilton Electro-Optical and Infrared Subject Matter Expert | Crane, Indiana, USA
SOITEC BU Director Mixed Signal | Singapore or Grenoble, France
Space Dynamics Laboratory Imaging Sensor and Detector Engineer | Logan, Utah, USA
University of Science and Technology of China Postdoctoral R&D of Monolithic Active Pixel Sensors | Hefei, Anhui, China
Nokia Silicon Photonics Design Engineer | New York, New York, USA
Nokia Silicon Photonics Design Summer Co-op | New York, New York, USA
Blue River Technology Camera Systems Engineer | Santa Clara, California, USA
Thorlabs – Imaging Systems Summer Intern | Sterling, Virginia, USA
We just received a request to list two new jobs from Transformative Optics in Portland, Oregon, USA. They describe these as:
From: https://newsroom.st.com/media-center/press-item.html/t4598.html
Sphere Studios and STMicroelectronics reveal new details on the world’s largest cinema image sensor
Jan 11, 2024 Burbank, CA, and Geneva, Switzerland
Sensor custom created for Big Sky – the world’s most advanced camera system – and is used to capture ultra-high-resolution content for Sphere in Las Vegas
Sphere Entertainment Co. (NYSE: SPHR) today revealed new details on its work with STMicroelectronics (NYSE: STM) (“ST”), a global semiconductor leader serving customers across the spectrum of electronics applications, to create the world’s largest image sensor for Sphere’s Big Sky camera system. Big Sky is the groundbreaking, ultra-high-resolution camera system being used to capture content for Sphere, the next-generation entertainment medium in Las Vegas.
Inside the venue, Sphere features the world’s largest, high-resolution LED screen which wraps up, over, and around the audience to create a fully immersive visual environment. To capture content for this 160,000 sq. ft., 16K x 16K display, the Big Sky camera system was designed by the team at Sphere Studios – the in-house content studio developing original live entertainment experiences for Sphere. Working with Sphere Studios, ST manufactured a first-of-its-kind, 18K sensor capable of capturing images at the scale and fidelity necessary for Sphere’s display. Big Sky’s sensor – now the world’s largest cinema camera sensor in commercial use – works with the world’s sharpest cinematic lenses to capture detailed, large-format images in a way never before possible.
“Big Sky significantly advances cinematic camera technology, with each element representing a leap in design and manufacturing innovation,” said Deanan DaSilva, lead architect of Big Sky at Sphere Studios. “The sensor on any camera is critical to image quality, but given the size and resolution of Sphere’s display, Big Sky’s sensor had to go beyond any existing capability. ST, working closely with Sphere Studios, leveraged their extensive expertise to manufacture a groundbreaking sensor that not only expands the possibilities for immersive content at Sphere, but also across the entertainment industry.”
“ST has been on the cutting edge of imaging technology, IP, and tools to create unique solutions with advanced features and performance for almost 25 years,” said Alexandre Balmefrezol, Executive Vice President and Imaging Sub-Group General Manager, STMicroelectronics. “Building a custom sensor of this size, resolution, and speed, with low noise, high dynamic range, and seemingly impossible yield requirements, presented a truly novel challenge for ST – one that we successfully met from the very first wafer out of our 12” (300mm) wafer fab in Crolles, France.”
As a leader in the development and manufacturing of image sensors, ST’s imaging technologies and foundry services cater to a wide range of markets, including professional photography and cinematography. Big Sky’s 316 megapixel sensor is almost 7x larger and 40x higher resolution than the full-frame sensors found in high-end commercial cameras. The die, which measures 9.92cm x 8.31cm (82.4 cm2), is twice as large as a wallet-sized photograph, and only four full die fit on a 300mm wafer. The system is also capable of capturing images at 120 fps and transferring data at 60 gigabytes per second.
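As a quick sanity check of those figures (the 12-bit readout depth below is our assumption; ST does not state a bit depth):

```python
# Sanity-check the quoted Big Sky numbers.
die_w_cm, die_h_cm = 9.92, 8.31
print(f"die area: {die_w_cm * die_h_cm:.1f} cm^2")   # ~82.4 cm^2, matching the text

pixels = 316e6      # 316 megapixels
fps = 120           # frames per second, as quoted
bits_per_px = 12    # assumed readout bit depth (not an ST specification)
rate_gb_s = pixels * fps * bits_per_px / 8 / 1e9
print(f"raw data rate: {rate_gb_s:.1f} GB/s")        # ~56.9 GB/s, close to the quoted 60 GB/s
```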
Big Sky also allows filmmakers to capture large-format images from a single camera without having to stitch content together from multiple cameras – avoiding issues common to stitching including near distance limitations and seams between images. Ten patents and counting have been filed by Sphere Studios in association with Big Sky’s technology.
Darren Aronofsky’s Postcard from Earth, currently showing at Sphere as part of The Sphere Experience, is the first cinematic production to utilize Big Sky. Since its debut, Postcard from Earth has transported audiences, taking them on a journey spanning all seven continents, and featuring stunning visuals captured with Big Sky that make them feel like they have traveled to new worlds without leaving their seats in Las Vegas. More information about The Sphere Experience is available at thesphere.com.
OPPO, AlpsenTek and Qualcomm Boost AI Motion, Image Quality For Mobile Applications
Jan. 11, 2024, Las Vegas, USA – OPPO, AlpsenTek and Qualcomm Technologies, Inc. have teamed up with the goal of enhancing innovative Hybrid Vision Sensing (HVS) technology to better extract valuable motion and image data, improving picture quality for mobile phone applications.
OPPO and AlpsenTek will collaborate to pioneer the use of Hybrid Vision Sensing technologies, developing a data processing chain to collect relevant camera information to help enhance picture quality and allow for deblurring, augmented resolution, and slow-motion reconstruction, as well as other features required for machine sensing. This will be accomplished by leveraging Snapdragon® Mobile Platforms from Qualcomm Technologies.
“The HVS solution, with the support of hardware and algorithms, significantly enhances the capacities of smartphone cameras,” said Judd Heape, VP, Product Management at Qualcomm Technologies, Inc. “We are pleased to contribute to the optimization of this new technology on our Snapdragon platforms – which will help consumers to get the best performance from their smartphone cameras, and capture what’s most precious to them.”
Xuan Zhang, Image Product Director at OPPO, commented: “Over the years, we have conducted extensive research in new sensor technologies, with a particular focus on HVS (Hybrid Vision System) technology. We have engaged in substantial collaborative developments with AlpsenTek and Qualcomm, involving numerous iterations in both chip design and algorithms. Our confidence in the potential of this technology has driven us to invest time and effort into refining it collaboratively, with the ultimate goal of pushing it towards the application on OPPO’s HyperTone Camera System.”
Motion information is crucial in photography and machine vision. Traditional image sensors collapse motion information within a period (i.e. the exposure) into a single image. This leads to motion blurs and loss of valuable motion data essential for image/video processing and machine vision algorithms.
Effectively obtaining high-fidelity motion information with a vision sensor is a top demand across various fields today. Current solutions based on conventional image sensors often rely on increasing the frame rate, which is expensive and impractical for many applications. High frame rates lead to a significant amount of data (much of it redundant) and short shutter durations, causing high system resource usage, low efficiency, and poor adaptation to lighting conditions for high-frame-rate cameras.
Event-based Vision Sensing (EVS) is an imaging technology that continuously records change/motion information through its shutter-free mechanism. It provides motion information with high temporal resolution and at lower cost for machine vision. With an in-pixel processing chain featuring logarithmic amplification, EVS achieves a balance between high frame rate, high dynamic range, and low data redundancy for recording motion information.
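As a rough illustration of that event-generation principle, the sketch below applies a log-intensity contrast threshold to a synthetic frame sequence. It is a frame-based toy model with assumed parameters; real EVS pixels perform this comparison asynchronously in analog circuitry.

```python
import numpy as np

# Toy model of EVS event generation: an event fires whenever the log intensity
# of a pixel moves by more than a contrast threshold C since its last event.
rng = np.random.default_rng(1)
C = 0.2                                   # assumed log-intensity contrast threshold
frames = np.cumsum(rng.normal(0, 0.1, size=(50, 4, 4)), axis=0) + 5.0  # fake video

ref = np.log(frames[0])                   # per-pixel reference log intensity
events = []
for t, frame in enumerate(frames[1:], start=1):
    logI = np.log(frame)
    diff = logI - ref
    fired = np.abs(diff) >= C
    for y, x in zip(*np.nonzero(fired)):
        events.append((t, x, y, int(np.sign(diff[y, x]))))  # (time, x, y, polarity)
    ref[fired] = logI[fired]              # reset the reference only where events fired
print(f"{len(events)} events from {frames.shape[0]} frames")
```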
However, EVS sensors often lack critical static pictorial information that is needed for many machine vision applications. It typically works alongside a separate traditional image sensor (RGB) to compensate for this drawback, introducing challenges in cost, system complexity, and image registration between the two types of images (EVS and RGB), offsetting many of EVS's advantages.
AlpsenTek's Hybrid Vision Sensing (HVS) technology, introduced in 2019, combines EVS and conventional imaging technology into a single sensor. The ALPIX® sensor from AlpsenTek simultaneously outputs high-quality RGB images and EVS data stream, providing a cost-effective and algorithm-friendly solution for capturing images with embedded motion information.
Jian Deng, Founder and CEO of AlpsenTek, stated, "In the current landscape of vision sensors, there is a growing expectation for more than just 2D RGB information; sensors are now anticipated to provide additional data, such as distance, spectrum, and motion. Collaborating with OPPO and Qualcomm, we collectively designed the ALPIX-Eiger® to seamlessly integrate into mobile phone applications. Considered an enhanced RGB image sensor, it boasts image quality comparable to leading mobile sensors on the market, while introducing the added functionality of EVS. Witnessing the process of bringing our technology from conception to product brings us immense excitement."
Deng further emphasized, "It's important to recognize that what truly changes the world is not the technology itself but the products that it enables. Our passion lies in bringing Hybrid Vision Sensing (HVS) into the hands of everyone. This commitment has been our driving force from the very beginning. We look forward to fruitful outcomes from this collaboration”.
This news was also featured on EETimes: https://www.eetimes.com/oppo-alpsentek-and-qualcomm-boost-ai-motion-image-quality-for-mobile-applications/
Image Processing Engineer | Mountain View, California, USA
University of Southampton PhD Studentship: Integration of Detectors for Mid-Infrared Sensors | Southampton, England, UK
CMOS Sensor, Inc. Integrated Circuit Design Engineer | San Jose, California, USA
CMOS Sensor, Inc. Product Marketing and Sales Manager | San Jose, California, USA
University of Melbourne Detector Assembly Technical Officer: ATLAS-ITk Silicon Detector Modules | Parkville, Victoria, Australia
Sandia National Laboratories Integrated Photonics Postdoctoral Appointee | Albuquerque, New Mexico, USA
Rutherford Appleton Laboratory Integrated Circuit and Microelectronic System Graduate Engineers | Harwell, Oxfordshire, UK
California Institute of Technology Detector Engineer | Pasadena, California, USA
ASML Design Engineer - Optical Sensor System | Wilton, Connecticut, USA
Tontini et al. from FBK and University of Trento recently published an article titled "Histogram-less LiDAR through SPAD response linearization" in the IEEE Sensors journal.
Open access link: https://ieeexplore.ieee.org/document/10375298
Abstract: We present a new method to acquire the 3D information from a SPAD-based direct-Time-of-Flight (d-ToF) imaging system which does not require the construction of a histogram of timestamps and can withstand high flux operation regime. The proposed acquisition scheme emulates the behavior of a SPAD detector with no distortion due to dead time, and extracts the TOF information by a simple average operation on the photon timestamps ensuring ease of integration in a dedicated sensor and scalability to large arrays. The method is validated through a comprehensive mathematical analysis, whose predictions are in agreement with a numerical Monte Carlo model of the problem. Finally, we show the validity of the predictions in a real d-ToF measurement setup under challenging background conditions well beyond the typical pile-up limit of 5% detection rate up to a distance of 3.8m.
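The core idea, extracting ToF from a simple average of photon timestamps rather than from a histogram peak, can be illustrated with a toy Monte Carlo model. The rates and jitter below are assumed values, and the background correction shown is a simplification, not the authors' exact linearization scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

T = 100e-9          # laser repetition period (s), assumed
t0 = 25e-9          # true time of flight (s), assumed
n_cycles = 100_000  # number of laser cycles observed
p_sig = 0.02        # per-cycle prob. of a signal photon detection (assumed)
p_bkg = 0.05        # per-cycle prob. of a background photon detection (assumed)

# Signal photons arrive around t0 (with jitter); background photons are uniform.
n_sig = rng.binomial(n_cycles, p_sig)
n_bkg = rng.binomial(n_cycles, p_bkg)
sig_ts = t0 + rng.normal(0, 0.2e-9, n_sig)   # 200 ps timing jitter, assumed
bkg_ts = rng.uniform(0, T, n_bkg)
ts = np.concatenate([sig_ts, bkg_ts])

# Histogram-free estimate: the sample mean mixes the signal arrival time with
# the background mean T/2, so solve
#   mean = (n_sig*t0 + n_bkg*T/2) / (n_sig + n_bkg)
# for t0, using the (here, known) photon counts.
mean_t = ts.mean()
t0_hat = (mean_t * (n_sig + n_bkg) - n_bkg * T / 2) / n_sig
print(f"true ToF {t0*1e9:.2f} ns, mean-based estimate {t0_hat*1e9:.2f} ns")
```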
Abstract: The miniaturization of image sensors in recent decades has made today’s cameras ubiquitous across many application domains, including medical imaging, smartphones, security, robotics, and autonomous transportation. However, only imagers that are an order of magnitude smaller could enable novel applications in nano-robotics, in vivo imaging, mixed reality, and health monitoring. While sensors with sub-micron pixels exist now, further miniaturization has been primarily prohibited by fundamental limitations of conventional optics. Traditional imaging systems consist of a cascade of refractive elements that correct for aberrations, and these bulky lenses impose a lower limit on camera footprint. In recent years, sub-wavelength diffractive optics, also known as meta-optics, have been touted as a promising replacement for bulky refractive optics. However, the images taken with meta-optics to date remain significantly inferior to those taken with refractive optics. In particular, full-color imaging with a large-aperture meta-lens remains an important unsolved problem. We employ computationally designed meta-optics to solve this problem and enable ultra-compact cameras. Our solution is to design the meta-optics such that the modulation transfer functions (MTFs) of all wavelengths across the desired optical bandwidth are the same at the sensor plane. Additionally, the volume under the MTF curve is maximized to ensure enough information is captured to enable computational reconstruction of the image. The same intuition can be employed for different angles to mitigate geometric aberrations as well. In this talk, I will describe our efforts on achieving full-color imaging using a single meta-optic and a computational backend. Starting from traditional extended-depth-of-focus lenses [1,2], I will describe inverse-designed meta-optics [3], end-to-end designed meta-optics [4] and hybrid refractive/meta-optics [5] for visible full-color imaging. I will also talk about how these techniques can be extended to thermal imaging [6,7].
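Schematically, the design criterion described above can be written as equalizing the per-wavelength MTFs at the sensor plane while maximizing the volume under them; the notation below is ours, not the speaker's:

```latex
\[
\mathrm{MTF}_{\lambda}(\boldsymbol{\nu}) \approx \mathrm{MTF}_{\lambda'}(\boldsymbol{\nu})
\quad \forall\, \lambda, \lambda' \in \Lambda,
\qquad
\max_{\text{meta-optic}} \; \sum_{\lambda \in \Lambda} \int \mathrm{MTF}_{\lambda}(\boldsymbol{\nu})\, d\boldsymbol{\nu}
\]
```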
[1] S. Colburn et al., Sci Adv 4, eaar2114 (2018).
[2] L. Huang et al., Photon. Res. 8, 1613 (2020).
[3] E. Bayati et al., Nanophotonics 11, 2531 (2022).
[4] E. Tseng et al., Nature Communications 12, 6493 (2021).
[5] S. Pinilla et al., Science Advances 9, eadg7297 (2023).
[6] L. Huang et al., Opt. Mater. Express 11, 2907 (2021).
[7] V. Saragadam et al., arXiv:2212.06345 (2023).
Biography
Professor Arka Majumdar is an associate professor in the departments of electrical and computer engineering and physics at the University of Washington (UW). He received his B.Tech. from IIT-Kharagpur (2007), where he was honored with the President’s Gold Medal. He completed his MS (2009) and PhD (2012) in Electrical Engineering at Stanford University. He spent one year at the University of California, Berkeley (2012-13), and then at Intel Labs (2013-14) as a postdoc before joining UW. His research interests include developing a hybrid nanophotonic platform using emerging material systems for optical information science, imaging, and microscopy. Professor Majumdar is the recipient of multiple Young Investigator Awards from the AFOSR (2015), NSF (2019), ONR (2020) and DARPA (2021), the Intel early career faculty award (2015), Amazon Catalyst Award (2016), Alfred P. Sloan fellowship (2018), UW College of Engineering outstanding junior faculty award (2020), iCANX Young Scientist Award (2021), IIT-Kharagpur Young Alumni Achiever Award (2022) and DARPA Director’s Award (2023). He is co-founder and technical advisor of Tunoptix, a startup commercializing software-defined meta-optics.
The annual Image Sensors Europe 2024 will be held in London on March 20-21, 2024.
See below for the speakers confirmed to present at the 2024 edition in London.
Link: https://www.image-sensors.com/image-sensors-europe/2020-speakers?EventId=4047&page=2
Lindsay Grant - OmniVision Technology
Federico Canini - Datalogic
Nasim Sahraei - Edgehog Advanced Technologies Inc.
Pawel Latawiec - Metalenz
Emilie Huss - STMicroelectronics
Nicolas Roux - STMicroelectronics
Abhinav Agarwal - Forza Silicon (Ametek Inc.)
Dr Claudio Jakobson - SCD
Jan Bogaerts - Gpixel
Christian Mourad - VoxelSensors
Carl Philipp Koppen - pmdtechnologies AG
Dr Artem Shulga - QDI systems
Albert Theuwissen - Harvest Imaging
Anthony Huggett - onsemi
Matthias Schaffland - Sensor to Image GmbH
Dr. Kazuhiro Morimoto - Canon Inc.
Svorad Štolc - Photoneo
Florian Domengie - Yole Intelligence
Adi Xhakoni - ams-osram
CIS Masterclass
Dr. Albert Theuwissen will give a Masterclass on "Recent Developments in the CIS World over the last 12 months" which will cover the following topics: Numbers and Market Trends, High Dynamic Range, Global Shutter, Low Noise, Colour Filter News, Phase Detection Auto-Focus Pixels, New Materials, Beyond Silicon in the Near-IR, Event-Based Imagers
About Image Sensors Europe
Image Sensors Europe established and held its first conference in 2007, and has since grown to be the go-to annual image sensors technical and business conference. Each year this ever-evolving market continuously prompts new and exciting opportunities for the entire supply chain.
This esteemed event provides a platform for over 250 representatives from across the digital imaging supply chain to engage in high-calibre discussions and face-to-face networking opportunities with key industry experts and colleagues.
2024 Key Themes:
Eliminating Stray Light Image Artifacts via Invisible Image Sensor Coverglass
High-quality images are critical for machine vision applications like autonomous vehicles, surveillance systems, and industrial automation. However, lens flare caused by internal light reflections can significantly degrade image quality. This “ghosting” effect manifests as spots, starbursts, and other artifacts that obscure objects and details.
Traditional anti-reflective coatings help reduce flare by creating destructive interference that cancels light reflections, but they fall short at wider angles of incidence, where reflections still occur. Stray light reaching the image sensor causes flares; these artifacts interfere with image clarity and create glare, decreasing the signal-to-noise ratio, especially in scenes with high dynamic range.
Omnidirectional Anti-Reflection CMOS Coverglass
Edgehog’s Omnidirectional Anti-Reflection (OAR) nanotexturing technology takes a fundamentally different approach to eliminating reflections. Instead of coatings, OAR uses nano-scale surface textures that create a gradual transition in refractive index from air to glass. Edgehog’s texturing allows light to transmit through the surface without internal reflections, regardless of angle.
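For context, the reflection that coatings and nanotextures combat originates in the abrupt refractive-index step at the air-glass interface. At normal incidence, the Fresnel reflectance of uncoated glass (n ≈ 1.5) is about 4% per surface (standard optics, not an Edgehog figure); a graded-index texture removes the step that this formula penalizes:

```latex
\[
R = \left(\frac{n_1 - n_2}{n_1 + n_2}\right)^{2}
  = \left(\frac{1 - 1.5}{1 + 1.5}\right)^{2} = 0.04
\]
```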
By treating the image sensor cover glass with OAR nanotexturing, Edgehog enables flare-free imaging under any lighting condition. Edgehog delivers crisper images and videos with enhanced contrast, sharpness, and color accuracy.
Case Study
Edgehog recently showcased the impact of its technology by retrofitting a camera’s stock CMOS cover glass with an OAR-treated replacement. Simulations showed OAR’s superiority in mitigating flare irradiance compared to the original glass. Real-world testing also exhibited significant flare reduction in challenging high-glare environments.
Images taken from two identical camera models, showing a significant reduction in lens flare in the bottom left of the images. The image on the left (A) was taken using an off-the-shelf FLIR Blackfly S camera whose sensor coverglass uses conventional anti-reflection coatings. The image on the right (B) was taken using an identical camera with the sensor coverglass replaced with Edgehog coverglass, as shown in the schematic above.
Photos captured simultaneously in an indoor garage: (A) off-the-shelf FLIR Blackfly S camera and (B) identical camera setup with Edgehog-enhanced sensor coverglass.
Photos captured simultaneously outdoors on a sunny day: (A) off-the-shelf FLIR Blackfly S camera and (B) identical camera setup with Edgehog-enhanced sensor coverglass.
Edgehog Is Seeking Manufacturing Partners
Overall, Edgehog’s nanotextured anti-reflection technology represents a revolutionary leap forward for imaging components. OAR enables reliable, high-performance vision capabilities for autonomous systems by stopping flare at the source. We are looking for partners to help scale up manufacturing.
To learn more about eliminating lens flare with omnidirectional anti-reflection, download Edgehog’s full white paper today or email us to discover how nanotexturing can enhance image quality and enable the next generation of machine vision.
Download Edgehog’s whitepaper - https://www.edgehogtech.com/machine-vision-whitepaper
Visit Edgehog’s Website - www.edgehogtech.com
After a 2-week holiday break, the listings are back. Happy New Year.
Qualcomm Technologies Camera Sensor Engineer | San Diego, California, USA
Apple Sensing Hardware Development Engineer - Electrical | Singapore
Teledyne Hybridization & Bonding Technician | Camarillo, California, USA
Gpixel Analog Design Engineer (Senior) | Antwerp, Belgium
Apple Pixel Development Engineer - RGB and Depth Sensors | Irvine, California, USA
Karlsruhe Institute of Technology KSETA Doctoral Fellow – Particle and Astrophysics | Karlsruhe, Germany
Austrian Academy of Sciences Ph.D. Student Position in Ultra-Fast Silicon Detectors for 4D tracking | Vienna, Austria
MITRE Quantum Sensors Engineer/Scientist | Bedford, Massachusetts, USA
Intuitive Surgical Image Sensor Engineer Intern | Sunnyvale, California, USA
Samsung press release: https://semiconductor.samsung.com/emea/news-events/news/samsung-unveils-two-new-isocell-vizion-sensors-tailored-for-robotics-and-xr-applications/
Samsung Unveils Two New ISOCELL Vizion Sensors Tailored for Robotics and XR Applications
The ISOCELL Vizion 63D, a time-of-flight sensor, captures high-resolution 3D images with exceptional detail
The ISOCELL Vizion 931, a global shutter sensor, captures dynamic moments with clarity and precision
Samsung Electronics Co., Ltd., a world leader in advanced semiconductor technology, today introduced two new ISOCELL Vizion sensors — a time-of-flight (ToF) sensor, the ISOCELL Vizion 63D, and a global shutter sensor, the ISOCELL Vizion 931. First introduced in 2020, Samsung’s ISOCELL Vizion lineup includes ToF and global shutter sensors specifically designed to offer visual capabilities across an extensive range of next-generation mobile, commercial and industrial use cases.
“Engineered with state-of-the-art sensor technologies, Samsung’s ISOCELL Vizion 63D and ISOCELL Vizion 931 will be essential in facilitating machine vision for future high-tech applications like robotics and extended reality (XR),” said Haechang Lee, Executive Vice President of the Next Generation Sensor Development Team at Samsung Electronics. “Leveraging our rich history in technological innovation, we are committed to driving the rapidly expanding image sensor market forward.”
ISOCELL Vizion 63D: Tailored for capturing high-resolution 3D images with exceptional detail
Similar to how bats use echolocation to navigate in the dark, ToF sensors measure distance and depth by calculating the time it takes the emitted light to travel to and from an object.
Particularly, Samsung’s ISOCELL Vizion 63D is an indirect ToF (iToF) sensor that measures the phase shift between emitted and reflected light to sense its surroundings in three dimensions. With exceptional accuracy and clarity, the Vizion 63D is ideal for service and industrial robots as well as XR devices and facial authentication where high-resolution and precise depth measuring are crucial.
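For context, an iToF sensor converts the measured phase shift Δφ between emitted and received modulated light into distance via the standard relation below; f_mod is a generic symbol for the modulation frequency, since the press release does not disclose the 63D's value:

```latex
\[
d = \frac{c}{2} \cdot \frac{\Delta\varphi}{2\pi f_{\mathrm{mod}}},
\qquad
d_{\max} = \frac{c}{2 f_{\mathrm{mod}}}
\]
```

For example, a hypothetical f_mod of 15 MHz would give an unambiguous range d_max of 10 m, matching the extended range quoted below.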
The ISOCELL Vizion 63D sensor is the industry’s first iToF sensor with an integrated depth-sensing hardware image signal processor (ISP). With this innovative one-chip design, it can precisely capture 3D depth information without the help of another chip, enabling up to a 40% reduction in system power consumption compared to the previous ISOCELL Vizion 33D product. The sensor can also process images at up to 60 frames per second in QVGA resolution (320x240), which is a high-demand display resolution used in commercial and industrial markets.
Based on the industry’s smallest 3.5 μm pixel size in iToF sensors, the ISOCELL Vizion 63D achieves high Video Graphics Array (VGA) resolution (640x480) within a 1/6.4” optical format, making it an ideal fit for compact, on-the-go devices.
Thanks to backside scattering technology (BST) that enhances light absorption, the Vizion 63D sensor boasts the highest level of quantum efficiency in the industry, reaching 38% at an infrared light wavelength of 940 nanometers (nm). This enables enhanced light sensitivity and reduced noise, resulting in sharper image quality with minimal motion blur.
Moreover, the ISOCELL Vizion 63D supports both flood (high-resolution at short range) and spot (long-range) lighting modes, significantly extending its measurable distance range from its predecessor’s five meters to 10 meters.
ISOCELL Vizion 931: Optimized for capturing dynamic movements without distortion
The ISOCELL Vizion 931 is a global shutter image sensor tailored for capturing rapid movements without the “jello effect”. Unlike rolling shutter sensors that scan the scene line by line from top to bottom in a “rolling” manner, global shutter sensors capture the entire scene at once or “globally,” similar to how human eyes see. This allows the ISOCELL Vizion 931 to capture sharp, undistorted images of moving objects, making it well-suited for motion-tracking in XR devices, gaming systems, service and logistics robots as well as drones.
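To illustrate the distortion being avoided, the toy sketch below renders a moving vertical bar under a global shutter and under a rolling shutter with a per-row readout delay; all numbers are illustrative, not sensor specifications.

```python
import numpy as np

# Toy illustration of rolling-shutter skew vs. global shutter.
# A vertical bar moves right at constant speed; a rolling shutter reads rows
# top-to-bottom with a per-row delay, so each row sees the bar at a different
# position and the bar comes out slanted.
H, W = 8, 16
speed = 0.5          # bar motion in pixels per row-readout time (assumed)
x0 = 4               # bar position at the start of the frame

def bar_position(t):
    return int(round(x0 + speed * t))

global_img = np.zeros((H, W), dtype=int)
rolling_img = np.zeros((H, W), dtype=int)
for row in range(H):
    global_img[row, bar_position(0)] = 1        # whole frame sampled at t = 0
    rolling_img[row, bar_position(row)] = 1     # row sampled at t = row

print("global shutter:", *("".join(".#"[v] for v in r) for r in global_img), sep="\n")
print("rolling shutter:", *("".join(".#"[v] for v in r) for r in rolling_img), sep="\n")
```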
Designed in a one-to-one ratio VGA resolution (640 x 640) that packs more pixels in a smaller form factor, the ISOCELL Vizion 931 is optimal for iris recognition, eye tracking as well as facial and gesture detection in head-mounted display devices like XR headsets.
The ISOCELL Vizion 931 also achieves the industry’s highest level of quantum efficiency, delivering an impressive 60% at 850nm infrared light wavelength. This feat was made possible by incorporating Front Deep Trench Isolation (FDTI) which places an insulation layer between pixels to maximize light absorption, in addition to the BST method used in the ISOCELL Vizion 63D.
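For reference, quantum efficiency (QE) here is the fraction of incident photons at the stated wavelength that are converted into collected photoelectrons; it relates to responsivity by the standard expression below (textbook definitions, not figures from the press release):

```latex
\[
\eta(\lambda) = \frac{N_{e^-}}{N_{\text{photons}}}, \qquad
R(\lambda) = \frac{\eta\, q\, \lambda}{h c}
\approx \eta \cdot \frac{\lambda\,[\mathrm{nm}]}{1240}\ \mathrm{A/W}
\]
```

For example, η = 0.60 at 850 nm corresponds to a responsivity of roughly 0.41 A/W.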
The Vizion 931 supports multi-drop, which can seamlessly connect up to four cameras to the application processor using a single wire. With minimal wiring required, the sensor provides greater design flexibility for device manufacturers.
Samsung ISOCELL Vizion 63D and ISOCELL Vizion 931 sensors are currently sampling to OEMs worldwide.
Coverage on PetaPixel: https://petapixel.com/2023/12/20/samsungs-new-vizion-sensors-boast-wild-tech-for-industrial-use/
From Business Korea: https://www.businesskorea.co.kr/news/articleView.html?idxno=208769
Samsung, SK hynix Advance in AI-embedded Image Sensors
Samsung Electronics and SK hynix are making strides in commercializing “On-sensor AI” technology for image sensors, aiming to elevate their image sensor technologies centered around AI and challenge the market leader, Japan’s Sony, in dominating the next-generation market.
At the “SK Tech Summit 2023” held last month, SK hynix revealed its progress in developing On-sensor AI technology. This technology integrates the image sensor with an AI chip, processing data directly at the sensor level, unlike traditional sensors that relay image information to the central processing unit (CPU) for computation and inference. This advance is expected to be a key technology in enabling evolved Internet of Things (IoT) and smart home services, reducing power consumption and processing time.
SK hynix’s approach involves integrating an AI accelerator into the image sensor. The company is currently conducting proof-of-concept research focused on facial and object recognition features, using a Computing In Memory (CIM) accelerator, a next-generation technology capable of performing multiplication and addition operations required for AI model computations.
Additionally, SK hynix has presented its technologies for implementing On-sensor AI, including AI software and AI lightweighting, at major academic conferences like the International Conference on Computer Vision and the IEEE EDTM seminar on semiconductor manufacturing and next-generation devices.
Samsung Electronics is also rapidly incorporating AI into its image sensor business. This year, the company unveiled a 200-megapixel image sensor with an advanced zoom feature called Zoom Anyplace, which uses AI technology for automatic object tracking during close-ups. Samsung has set a long-term business goal to commercialize “Humanoid Sensors” capable of sensing and replicating human senses, with a road map to develop image sensors that can capture even the invisible by 2027.
In October, Park Yong-in, president of Samsung Electronics’ System LSI Business, emphasized at the Samsung System LSI Tech Day in Silicon Valley, the goal of pioneering the era of “Proactive AI,” advancing from generative AI through high-performance IP, short and long-range communication solutions, and System LSI Humanoids based on sensors mimicking human senses.
The push by both companies into On-sensor AI technology development is seen as a strategy to capture new AI-specific demands and increase their market share. The image sensor market, which temporarily contracted post-COVID-19 due to a downturn in the smartphone market, is now entering a new growth phase, expanding its applications from mobile to autonomous vehicles, extended reality devices, and robotics.
According to Counterpoint Research, Sony dominated the global image sensor market with a 54% share in the last year, while Samsung Electronics held second place with 29%, and SK hynix, struggling to close the gap, barely made it into the top five with 5%.
In an ASME J. Electron. Packag. paper titled "Advancement of Chip Stacking Architectures and Interconnect Technologies for Image Sensors" Mei-Chien Lu writes:
Numerous technology breakthroughs have been made in image sensor development in the past two decades. Image sensors have evolved into a technology platform to support many applications. Their successful implementation in mobile devices has accelerated market demand and established a business platform to propel continuous innovation and performance improvement extending to surveillance, medical, and automotive industries. This overview briefs the general camera module and the crucial technology elements of chip stacking architectures and advanced interconnect technologies. This study will also examine the role of pixel electronics in determining the chip stacking architecture and interconnect technology of choice. It is conducted by examining a few examples of CMOS image sensors (CIS) for different functions such as visible light detection, single photon avalanche photodiode (SPAD) for low light detection, rolling shutter, and global shutter, and depth sensing and light detection and ranging (LiDAR). Performance attributes of different architectures of chip stacking are overviewed. Direct bonding followed by Via-last through silicon via (Via-last TSV) and hybrid bonding (HB) technologies are identified as newer and favorable chip-to-chip interconnect technologies for image sensor chip stacking. The state-of-the-art ultrahigh-density interconnect manufacturability is also highlighted.
Figure captions from the paper:
- Schematics of an imaging pixel array, circuit blocks, and typical 4T-APS pixel electronics
- Exemplary schematics of front-side illuminated (FSI-CIS) and back-side illuminated (BSI-CIS) sensors
- Schematics of two camera modules with image sensor packages at the bottom, under the lens modules
- A micrograph of the partitioned top and bottom circuit blocks of the first stacked image sensor from Sony
- Schematics of a stacked BSI pixel chip and circuit chip bonded at dielectric surfaces with peripheral via-last TSVs
- Dual-photodiode stacked-chip BSI-CIS processed in 65 nm/14 nm technologies
- Chip-to-chip bonding and interconnect methods with (a) direct dielectric bonding followed by via-last TSVs for chip-to-chip interconnect, (b) hybrid bonding at the peripheral area, and (c) hybrid bonding under the pixel arrays
- Pixel array, DRAM, and logic three-chip stacked image sensor by Sony Corp. using dielectric-to-dielectric bonding followed by via-last TSV interconnects at peripheral areas
- A Sony stacked-chip global shutter (GS) sensor using pixel-level integration, with (a) the pixel array chip, (b) the processor chip, and (c) a cross section of the stacked chips using hybrid bonding interconnects
- A schematic of pixel electronics for a ToF SPAD image sensor
Link to full paper (open access): https://asmedigitalcollection.asme.org/electronicpackaging/article/144/2/020801/1115637/Advancement-of-Chip-Stacking-Architectures-and