3D stacked edge-AI chip with CIS + deep neural network


In a recent preprint titled "J3DAI: A tiny DNN-Based Edge AI Accelerator for 3D-Stacked CMOS Image Sensor," Tain et al. write:

This paper presents J3DAI, a tiny deep neural network-based hardware accelerator for a 3-layer 3D-stacked CMOS image sensor featuring an artificial intelligence (AI) chip integrating a Deep Neural Network (DNN)-based accelerator. The DNN accelerator is designed to efficiently perform neural network tasks such as image classification and segmentation. This paper focuses on the digital system of J3DAI, highlighting its Performance-Power-Area (PPA) characteristics and showcasing advanced edge AI capabilities on a CMOS image sensor. To support hardware, we utilized the Aidge comprehensive software framework, which enables the programming of both the host processor and the DNN accelerator. Aidge supports post-training quantization, significantly reducing memory footprint and computational complexity, making it crucial for deploying models on resource-constrained hardware like J3DAI.
Our experimental results demonstrate the versatility and efficiency of this innovative design in the field of edge AI, showcasing its potential to handle both simple and computationally intensive tasks.
Future work will focus on further optimizing the architecture and exploring new applications to fully leverage the capabilities of J3DAI. As edge AI continues to grow in importance, innovations like J3DAI will play a crucial role in enabling real-time, low-latency, and energy-efficient AI processing at the edge.
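
Since the abstract singles out post-training quantization as the step that makes DNNs fit on resource-constrained hardware like J3DAI, a minimal generic sketch of what such a step does may help. This is our own illustration of symmetric per-tensor int8 quantization, not the Aidge API.

```python
import numpy as np

# Minimal sketch of symmetric per-tensor int8 post-training quantization.
# This is a generic illustration of the technique, not the Aidge framework.

def quantize_int8(w: np.ndarray):
    """Map float32 weights to int8 values plus a single scale factor."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(64, 64).astype(np.float32)   # a toy weight tensor
q, s = quantize_int8(w)
err = np.abs(w - dequantize(q, s)).max()
print(f"memory: {w.nbytes} B -> {q.nbytes} B, max abs error: {err:.4f}")
```

The 4x reduction in weight storage (float32 to int8) is the kind of memory-footprint saving the abstract refers to.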


 





Call for Papers: Image Sensors at ISSCC 2026


New for IEEE ISSCC 2026, we are pleased to announce the creation of a new sub-committee dedicated to Image Sensors & Displays. The Call for Papers includes, but is not limited to, the following topics:

Image sensors • vision sensors, event-based and computer vision sensors • LIDAR, time-of-flight, depth sensing • machine learning and edge computing for imaging applications • display drivers, touch sensing • haptic displays • interactive display and sensing technologies for AR/VR

ISSCC is the foremost global forum for presentation of advances in solid-state circuits and systems-on-a-chip. This is a great opportunity to increase the presence of image sensors at the Conference, and it gives engineers working at the cutting edge of IC design and application a unique chance to maintain technical currency and to network with leading experts.

For more information, contact the sub-committee chair, Bruce Rae (STMicroelectronics), via LinkedIn.



STMicro and Metalenz sign new licensing deal


 STMicroelectronics and Metalenz have signed a license agreement to scale the production of metasurface optics for high-volume applications in consumer, automotive, and industrial markets.
 
This collaboration aims to meet the growing demand in sectors like smartphone biometrics, LIDAR, and robotics, as the metasurface optics market is projected to reach $2 billion by 2029.
 
ST will leverage its 300mm semiconductor and optics manufacturing platform to integrate Metalenz’s technology, ensuring greater precision and cost-efficiency at scale. Since 2022, ST has already shipped over 140 million units of metasurface optics and FlightSense modules using Metalenz IP.

Full press release below. https://newsroom.st.com/media-center/press-item.html/t4717.html 

STMicroelectronics and Metalenz Sign a New License Agreement to Accelerate Metasurface Optics Adoption
 
New license agreement enabling the proliferation of metasurface optics across high-volume consumer, automotive and industrial markets: from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection.
 
The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.

 
STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, and Metalenz, the pioneer of metasurface optics, announced a new license agreement. The agreement broadens ST’s capability to use Metalenz IP to produce advanced metasurface optics while leveraging ST’s unique technology and manufacturing platform combining 300mm semiconductor and optics production, test and qualification.
 
“STMicroelectronics is the unique supplier on the market offering a groundbreaking combination of optics and semiconductor technology. Since 2022, we have shipped well over 140 million metasurface optics and FlightSense™ modules using Metalenz IP. The new license agreement with Metalenz bolsters our technology leadership in consumer, industrial and automotive segments, and will enable new opportunities from smartphone applications like biometrics, LIDAR and camera assist, to robotics, gesture recognition, or object detection,” underlined Alexandre Balmefrezol, Executive Vice President and General Manager of STMicroelectronics’s Imaging Sub-Group. “Our unique model, processing optical technology in our 300mm semiconductor fab, ensures high precision, cost-effectiveness, and scalability to meet the requests of our customers for high-volume, complex applications.”
 
“Our agreement with STMicroelectronics has the potential to further fast-track the adoption of metasurfaces from their origins at Harvard to adoption by market leading consumer electronics companies,” said Rob Devlin, co-founder and CEO of Metalenz. “By enabling the shift of optics production into semiconductor manufacturing, this agreement has the possibility to further redefine the sensing ecosystem. As use cases for 3D sensing continue to expand, ST’s technology leadership in the market together with our IP leadership solidifies ST and Metalenz as the dominant forces in the emergent metasurface market we created.”
 
The new license agreement aims to address the growing market opportunity for metasurface optics, projected to experience significant growth and reach $2B by 2029*, largely driven by emerging display and imaging applications. (*Yole Group, Optical Metasurfaces, 2024 report)
 
In 2022, metasurface technology from Metalenz, which spun out of Harvard and holds the exclusive license rights to the foundational Harvard metasurface patent portfolio, debuted with ST’s market leading direct Time-of-Flight (dToF) FlightSense modules.
 
Replacing the traditional lens stacks and shifting to metasurface optics instead has improved the optical performance and temperature stability of the FlightSense modules while reducing their size and complexity.
 
The use of 300mm wafers ensures high precision and performance in optical applications, as well as the inherent scalability and robustness advantages of the semiconductor manufacturing process.


Turn your global shutter CMOS sensor into a LiDAR


In a paper titled "A LiDAR Camera with an Edge" in IOP Measurement Science and Technology journal, Oguh et al. describe an interesting approach of turning a conventional global shutter CMOS image sensor into a LiDAR. The key idea is neatly explained by these two sentences in the paper: "... we recognize a simple fact: if the shutter opens before the arrival time of the photons, the camera will see them. Otherwise, the camera will not. Thus, if the shutter jitter range remains the same and its distribution is uniform, the average intensity of the object in many camera frames will be uniquely associated with the arrival time of the photons."

Abstract: "A novel light detection and ranging (LiDAR) design was proposed and demonstrated using just a conventional global shutter complementary metal-oxide-semiconductor (CMOS) camera. Utilizing the jittering rising edge of the camera shutter, the distance of an object can be obtained by averaging hundreds of camera frames. The intensity (brightness) of an object in the image is linearly proportional to the distance from the camera. The achieved time precision is about one nanosecond while the range can reach beyond 50 m using a modest setup. The new design offers a simple yet powerful alternative to existing LiDAR techniques."
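
The mapping from average brightness to distance is easy to see numerically. Below is a toy simulation of the idea (our own sketch under the stated uniform-jitter assumption, not the authors' code): a frame records the return only if the shutter's rising edge falls before the photon arrival time, so the mean intensity over many frames is proportional to the arrival time, and hence to distance.

```python
import numpy as np

c = 3e8                 # speed of light, m/s
T_jitter = 400e-9       # assumed uniform jitter window of the rising edge (400 ns)
I0 = 1.0                # normalized return intensity when the frame captures the pulse
true_distance = 30.0    # metres, ground truth for the simulation

t_arrival = 2 * true_distance / c                      # round-trip time of flight
rng = np.random.default_rng(0)
shutter_edges = rng.uniform(0.0, T_jitter, 500)        # jittering rising edges, 500 frames

# A frame sees the return only if the shutter opened before the photons arrived.
frames = np.where(shutter_edges <= t_arrival, I0, 0.0)

# Under uniform jitter: mean(I) = I0 * t_arrival / T_jitter, so invert for distance.
t_est = frames.mean() / I0 * T_jitter
d_est = c * t_est / 2
print(f"estimated distance: {d_est:.1f} m (true {true_distance} m)")
```

Averaging more frames tightens the estimate, which is why the paper accumulates hundreds of frames to reach nanosecond-level time precision.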

 



Full paper (paywalled): https://iopscience.iop.org/article/10.1088/1361-6501/adcb5c


Samsung blog article on nanoprism pixels


News: https://semiconductor.samsung.com/news-events/tech-blog/nanoprism-optical-innovation-in-the-era-of-pixel-miniaturization/

Nanoprism: Optical Innovation in the Era of Pixel Miniaturization 

The evolution of mobile image sensors is ultimately linked to the advancement of pixel technology. The market's demand for high-quality images with smaller and thinner devices is becoming increasingly challenging, making 'fine pixel' technology a core task in the mobile image sensor industry.
In this trend, Samsung System LSI continues to advance its technology, drawing on its experience in the field of small-pixel image sensors. The recently released mobile image sensor ISOCELL JNP is the industry's first to apply Nanoprism, pushing beyond the physical limitations of pixels.
Let's explore how Nanoprism, the first technology to apply Meta-Photonics to image sensors, was created and how it was implemented in ISOCELL JNP.
 
Smaller Pixels, More Light
Sensitivity in image sensors is a key factor in realizing clear and vivid images. Pixel technology has evolved over time to capture as much light as possible. Examples include the development from front-side illumination (FSI) to back-side illumination (BSI) and various technologies such as deep trench isolation (DTI).
In particular, technology has evolved in the direction of making pixels smaller and smaller to realize high-resolution images without increasing the size of smartphone camera modules. However, this has gradually reduced the sensitivity of unit pixels and caused image quality degradation due to crosstalk between pixels. As a result, a sharp decline in image quality in low-light environments was hard to avoid.
To solve this problem, Samsung introduced a Front Deep Trench Isolation (FDTI) structure that creates a physical barrier between pixels and also developed ISOCELL 2.0, which isolates even the color filters on top of the pixels. Furthermore, Samsung considered an approach that innovates the optical structure of the pixel itself, so that even the peripheral light the existing structure could not accept can be utilized. Nanoprism was born out of this consideration.
More details on the pixel technology of Samsung can be found at the link below.
Pixel Technology
 
Nanoprism: Refracting Light to Collect More
Nanoprism is a new technology, first proposed in 2017, based on Meta-Photonics source technology that the Samsung Advanced Institute of Technology (SAIT) has accumulated over many years. Unlike metalens research, which was the most active area of Meta-Photonics research at the time and aimed to minimize light dispersion, Nanoprism uses the reverse idea of maximizing dispersion to separate colors. Nanoprism is a meta-surface-based prism structure that performs color separation.
So, what has changed from the existing pixel structure? In conventional microlens-based optics, the microlens and the color filter of each pixel are matched 1:1, so a pixel can only accept light of the color corresponding to its own color filter. In other words, the light each pixel can collect is physically limited to its own footprint.

However, Nanoprism sets an optimized optical path so that light is directed to the pixel of the matching color, by placing a nanoscale structure in the position of the microlens. Simply put, the amount of light received by each pixel increases, because light that was previously lost due to color mismatch is redirected to adjacent pixels through refraction and dispersion. Nanoprism thus allows pixels to receive more light than the existing microlens structure, mitigating the sensitivity loss that comes with smaller pixels.

 
Applying Nanoprism to Image Sensors
Commercializing Meta-Photonics technology in image sensors was a challenging task. Securing both customer reliability and technical completeness was vital. To operate properly as a product, not only did the Nanoprism structure have to be implemented, but dozens of performance indicators also had to be satisfied.
Samsung's relevant teams worked closely together, repeating the design-process-measurement loop, and made the best efforts to secure performance by considering and reflecting various scenarios from the initial design stage and establishing a reliable verification procedure.
As can be inferred from its name Nanoprism, it was especially difficult from process development to mass production because precise and complex nanometer (nm) structures had to be implemented in pixels. In order to bring the new technology to life, special techniques and methods were introduced, including CMP (Chemical Mechanical Polishing) and low-temperature processes for Nanoprism implementation as well as TDMS (Thermal Desorption Mass Spectrometry) for image sensor production.
 
ISOCELL JNP Enables Brighter and Clearer Images
ISOCELL JNP with Nanoprism has been in mass production this year, and is incorporated in recent smartphones, contributing to an enhanced user experience. Because more light can be received without loss, it is possible to take bright and clear pictures, especially in challenging light conditions. In fact, the ISOCELL JNP with Nanoprism has 25% improved sensitivity compared to the previous ISOCELL JN5 with the same specifications.


Of course, increasing the size of the image sensor can improve the overall performance of the camera, but in mobile there is a limit to how much the image sensor can grow, due to design constraints such as the 'camera bump'. Samsung System LSI tried to break through this limitation head-on with Nanoprism. Even as pixels get smaller, this technology improves the sensitivity and color reproduction of each pixel, and it has been applied to ISOCELL JNP.
More details on the product can be found at the link below.

https://semiconductor.samsung.com/image-sensor/mobile-image-sensor/isocell-jnp/ 

The need for high-resolution image implementation in the mobile market will continue. Accordingly, the trend of pixel miniaturization will continue, and even if pixels become smaller, the development of pixel technology to secure high sensitivity, quantum efficiency, and noise reduction will be required. Nanoprism is a technology to increase sensitivity among these, and Samsung aims to move towards further innovation in a direction that goes beyond the existing physical limitations.
Building on this collaboration, continued cross-functional, cross-team efforts aim to explore new directions for next-generation image sensor technologies.


iToF webinar – onsemi’s Hyperlux ID solution


Overcoming iToF Challenges: Enabling Precise Depth Sensing for Industrial and Commercial Innovation


 


Single-photon computer vision workshop @ ICCV 2025


📸✨ Join us at ICCV 2025 for our workshop on Computer Vision with Single-Photon Cameras (CVSPC)!

🗓️  Sunday, Oct 19th, 8:15am-12:30pm at the Hawai'i Convention Center

🔗 Full Program: https://cvspc.cs.pdx.edu/

🗣️ Invited Speakers: Mohit Gupta, Matthew O'Toole, Dongyu Du, David Lindell, Akshat Dave

📍 Submit your poster and join the conversation! We welcome early ideas & in-progress work.

📝 Poster submission form: https://forms.gle/qQ7gFDwTDexy6e668

🏆 Stay tuned for a CVSPC competition announcement!

👥Organizers: Atul Ingle, Sotiris Nousias, Mel White, Mian Wei and Sacha Jungerman.


Single-photon cameras (SPCs) are an emerging class of camera technology with the potential to revolutionize the way today’s computer vision systems capture and process scene information, thanks to their extreme sensitivity, high speed capabilities, and increasing commercial availability.

They provide extreme dynamic range and long-range high-resolution 3D imaging, well beyond the capabilities of CMOS image sensors. SPCs thus facilitate various downstream computer vision applications such as low-cost, long-range cameras for self-driving cars and autonomous robots, high-sensitivity cameras for night photography and fluorescence-guided surgeries, and high dynamic range cameras for industrial machine vision and biomedical imaging applications.

The goal of this half-day workshop at ICCV 2025 is to showcase the myriad ways in which SPCs are used today in computer vision and inspire new applications. The workshop features experts on several key topics of interest, as well as a poster session to highlight in-progress work. 

We welcome submissions to CVSPC 2025 for the poster session, which we will host during the workshop. We invite posters presenting research relating to any aspect of single-photon imaging, such as those using or simulating SPADs, APDs, QIS, or other sensing methods that operate at or near the single-photon limit. Posters may be of new or prior work. If the content has been previously presented in another conference or publication, please note this in the abstract. We especially encourage submissions of in-progress work and student projects.

Please submit a 1-page abstract via this Google Form. These abstracts will be used for judging poster acceptance/rejection, and will not appear in any workshop proceedings. Please use any reasonable format that includes a title, list of authors and a short description of the poster. If this poster is associated with a previously accepted conference or journal paper please be sure to note this in the abstract and include a citation and/or a link to the project webpage.

Final poster size will be communicated to the authors upon acceptance.

Questions? Please email us at cvspc25 at gmail.

Poster Timeline:
📅 Submission Deadline: August 15, 2025
📢 Acceptance Notification: August 22, 2025 



X-FAB’s new 180nm process for SPAD integration


News link: https://www.xfab.com/news/details/article/x-fab-expands-180nm-xh018-process-with-new-isolation-class-for-enhanced-spad-integration

X-FAB Expands 180nm XH018 Process with New Isolation Class for Enhanced SPAD Integration

NEWS – Tessenderlo, Belgium – Jun 19, 2025

New module enables more compact designs resulting in reduced chip size

X-FAB Silicon Foundries SE, the leading analog/mixed-signal and specialty foundry, has released a new isolation class within its 180nm XH018 semiconductor process. Designed to support more compact and efficient single-photon avalanche diode (SPAD) implementations, this new isolation class enables tighter functional integration, improved pixel density, and higher fill factor – resulting in smaller chip area.
SPADs are critical components in a wide range of emerging applications, including LiDAR for autonomous vehicles, 3D imaging, depth sensing in AR/VR systems, quantum communication and biomedical sensing. X-FAB already offers several SPAD devices built on its 180nm XH018 platform, with active areas ranging from 10µm to 20µm. This includes a near-infrared optimized diode for elevated photon detection probability (PDP) performance.

To enable high-resolution SPAD arrays, a compact pitch and elevated fill factor are essential. The newly released module ISOMOS1, a 25V isolation class module, allows for significantly more compact transistor isolation structures, eliminating the need for an additional mask layer and aligning perfectly with X-FAB’s other SPAD variants.

The benefits of this enhancement are evident when comparing SPAD pixel layouts. In a typical 4x3 SPAD array with 10x10µm² optical areas, the adoption of the new isolation class enables a ~25% reduction in total area and boosts fill factor by ~30% compared to the previously available isolation class. With carefully optimized pixel design, even greater gains in area efficiency and detection sensitivity are achievable.
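
As a rough sanity check on those figures (our own back-of-the-envelope arithmetic, assuming the twelve 10x10 µm² optical areas stay fixed and only the surrounding isolation/transistor area shrinks):

```python
# Back-of-the-envelope check of the quoted layout gains. The 30% starting
# fill factor is an arbitrary placeholder (assumption); the relative gain
# does not depend on its value.
optical_area = 4 * 3 * 10 * 10              # um^2 of photosensitive area in the 4x3 array
old_total = optical_area / 0.30             # hypothetical total array area before
new_total = old_total * (1 - 0.25)          # ~25% total-area reduction, as stated
old_ff, new_ff = optical_area / old_total, optical_area / new_total
print(f"fill factor {old_ff:.2f} -> {new_ff:.2f} "
      f"(+{(new_ff / old_ff - 1) * 100:.0f}% relative)")
```

A 25% reduction in total area with unchanged optical area implies a relative fill-factor gain of roughly 1/0.75, i.e. about 33%, in line with the ~30% improvement quoted above.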

X-FAB’s SPAD solution has been widely used in applications that require direct Time-of-Flight, such as smartphones, drones, and projectors. This new technological advancement directly benefits these applications in which high-resolution sensing with a compact footprint is essential. It enables accurate depth sensing in multiple scenarios, including industrial distance detection and robotics sensing, for example, by protecting the area around a robot and avoiding collisions when robots are working as cobots. Beyond increasing performance and integration density, the new isolation class opens up opportunities for a broader range of SPAD-based systems requiring low-noise, high-speed single-photon detection within a compact footprint.

Heming Wei, X-FAB’s Technical Marketing Manager for Optoelectronics, explains: “The introduction of a new isolation class in XH018 marks an important step forward for SPAD integration. It enables tighter layouts and better performance, while allowing for more advanced sensing systems to be developed using our proven, reliable 180 nanometer platform.”

Models and PDKs, including the new ISOMOS1 module, are now available, supporting efficient evaluation and development of next-generation SPAD arrays. X-FAB will be exhibiting at Sensors Converge 2025 in Santa Clara, California (June 24–26) at booth #847, showcasing its latest sensor technologies. 

 

 

Example design of 4x3 SPAD pixel using new compact 25 V isolation class with ISOMOS1 module (right) and with previous module (left)


Hamamatsu webinar on SPAD and SPAD arrays


 

 

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, discussing technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPAD's unique ability to timestamp events and enhance photon detection efficiency.


Images from the world’s largest camera


Story in Nature news: https://www.nature.com/articles/d41586-025-01973-5

First images from world’s largest digital camera leave astronomers in awe

The Rubin Observatory in Chile will map the entire southern sky every three to four nights.

The Trifid Nebula (top right) and the Lagoon Nebula, in an image made from 678 separate exposures taken at the Vera C. Rubin Observatory in Chile. Credit: NSF-DOE Vera C. Rubin Observatory

 

The Vera C. Rubin Observatory in Chile has unveiled its first images, leaving astronomers in awe of the unprecedented capabilities of the observatory’s 3,200-megapixel digital camera — the largest in the world. The images were created from shots taken during a trial that started in April, when construction of the observatory’s Simonyi Survey Telescope was completed.

...

One image (pictured) shows the Trifid Nebula and the Lagoon Nebula, in a region of the Milky Way that is dense with ionized hydrogen and with young and still-forming stars. The picture was created from 678 separate exposures taken by the Simonyi Survey Telescope in just over 7 hours. Each exposure was monochromatic and taken with one of four filters; they were combined to give the rich colours of the final product. 


ETH Zurich and Empa develop perovskite image sensor


In a new paper in Nature, a team from ETH Zurich and Empa have demonstrated a new lead halide perovskite thin-film photodetector.

Tsarev et al., "Vertically stacked monolithic perovskite colour photodetectors," Nature (2025)
Open access paper link: https://www.nature.com/articles/s41586-025-09062-3 

News release: https://ethz.ch/en/news-und-veranstaltungen/eth-news/news/2025/06/medienmitteilung-bessere-bilder-fuer-mensch-und-maschine.html

Better images for humans and computers

Researchers at ETH Zurich and Empa have developed a new image sensor made of perovskite. This semiconductor material enables better colour reproduction and fewer image artefacts with less light. Perovskite sensors are also particularly well suited for machine vision. 

Image sensors are built into every smartphone and every digital camera. They distinguish colours in a similar way to the human eye. In our retinas, individual cone cells recognize red, green and blue (RGB). In image sensors, individual pixels absorb the corresponding wavelengths and convert them into electrical signals.

The vast majority of image sensors are made of silicon. This semiconductor material normally absorbs light over the entire visible spectrum. In order to manufacture it into RGB image sensors, the incoming light must be filtered. Pixels for red contain filters that block (and waste) green and blue, and so on. Each pixel in a silicon image sensor thus only receives around a third of the available light.

Maksym Kovalenko and his team associated with both ETH Zurich and Empa have proposed a novel solution, which allows them to utilize every photon of light for colour recognition. For nearly a decade, they have been researching perovskite-based image sensors. In a new study published in the renowned journal Nature, they show: The new technology works.

Stacked pixels
The basis for their innovative image sensor is lead halide perovskite. This crystalline material is also a semiconductor. In contrast to silicon, however, it is particularly easy to process – and its physical properties vary with its exact chemical composition. This is precisely what the researchers are taking advantage of in the manufacture of perovskite image sensors.

If the perovskite contains slightly more iodine ions, it absorbs red light. For green, the researchers add more bromine, for blue more chlorine – without any need for filters. The perovskite pixel layers remain transparent for the other wavelengths, allowing them to pass through. This means that the pixels for red, green and blue can be stacked on top of each other in the image sensor, unlike with silicon image sensors, where the pixels are arranged side-by-side.


Thanks to this arrangement, perovskite-based image sensors can, in theory, capture three times as much light as conventional image sensors of the same surface area while also providing three times higher spatial resolution. Researchers from Kovalenko's team were able to demonstrate this a few years ago, initially with individual oversized pixels made of millimeter-large single crystals.

Now, for the first time, they have built two fully functional thin-film perovskite image sensors. “We are developing the technology further from a rough proof of principle to a dimension where it could actually be used,” says Kovalenko. A normal course of development for electronic components: “The first transistor consisted of a large piece of germanium with a couple of connections. Today, 60 years later, transistors measure just a few nanometers.”

Perovskite image sensors are still in the early stages of development. With the two prototypes, however, the researchers were able to show that the technology can be miniaturized. Manufactured using thin-film processes common in industry, the sensors have reached their target size in the vertical dimension at least. “Of course, there is always potential for optimization,” notes co-author Sergii Yakunin from Kovalenko's team.

In numerous experiments, the researchers put the two prototypes, which differ in their readout technology, through their paces. Their results prove the advantages of perovskite: The sensors are more sensitive to light, more precise in colour reproduction and can offer a significantly higher resolution than conventional silicon technology. The fact that each pixel captures all the light also eliminates some of the artifacts of digital photography, such as demosaicing artifacts and the moiré effect.

Machine vision for medicine and the environment
However, consumer digital cameras are not the only area of application for perovskite image sensors. Due to the material's properties, they are also particularly suitable for use in machine vision. The focus on red, green and blue is dictated by the human eye: These image sensors work in RGB format because our eyes see in RGB mode. However, when solving specific tasks, it is advisable to specify other optimal wavelength ranges that the computer image sensor should read. Often there are more than three – so-called hyperspectral imaging.

Perovskite sensors have a decisive advantage in hyperspectral imaging. Researchers can precisely control the wavelength range they absorb by each layer. “With perovskite, we can define a larger number of colour channels that are clearly separated from each other,” says Yakunin. Silicon, with its broad absorption spectrum, requires numerous filters and complex computer algorithms. “This is very impractical even with a relatively small number of colours,” Kovalenko sums up. Hyperspectral image sensors based on perovskite could be used in medical analysis or in automated monitoring of agriculture and the environment, for example.

In the next step, the researchers want to further reduce the size and increase the number of pixels in their perovskite image sensors. Their two prototypes have pixel sizes between 0.5 and 1 millimeters. Pixels in commercial image sensors fall in the micrometer range (1 micrometre is 0.001 millimetre). “It should be possible to make even smaller pixels from perovskite than from silicon,” says Yakunin. The electronic connections and processing techniques need to be adapted for the new technology. “Today's readout electronics are optimized for silicon. But perovskite is a different semiconductor, with different material properties,” says Kovalenko. However, the researchers are convinced that these challenges can be overcome. 


STMicro releases image sensor solution for human presence detection


New technology delivers more than 20% power consumption reduction per day in addition to improved security and privacy

ST solution combines market leading Time-of-Flight (ToF) sensors and unique AI algorithms for a seamless user experience

Geneva, Switzerland, June 17, 2025 -- STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, introduces a new Human Presence Detection (HPD) technology for laptops, PCs, monitors and accessories, delivering more than 20% power consumption reduction per day in addition to improved security and privacy. ST’s proprietary solution combines market-leading FlightSense™ Time-of-Flight (ToF) sensors with unique AI algorithms to deliver hands-free, fast Windows Hello authentication, along with a range of benefits such as longer battery lifetime and user-privacy or wellness notifications. 

“Building on the integration of ST FlightSense technology in more than 260 laptops and PC models launched in recent years, we are looking forward to seeing our new HPD solution contribute to making devices more energy-efficient, secure, and user-friendly,” said Alexandre Balmefrezol, Executive Vice President and General Manager of the Imaging Sub-Group at STMicroelectronics. “As AI and sensor technology continue to advance, with greater integration of both hardware and software, we can expect to see even more sophisticated and intuitive ways of interacting with our devices, and ST is best positioned to continue to lead this market trend.” 

“Since 2023, 3D sensing in consumer applications has gained new momentum, driven by the demand for better user experiences, safety, personal robotics, spatial computing, and enhanced photography and streaming. Time-of-Flight (ToF) technology is expanding beyond smartphones and tablets into drones, robots, AR/VR headsets, home projectors, and laptops. In 2024, ToF modules generated $2.2 billion in revenue, with projections reaching $3.8 billion by 2030 (9.5% CAGR). Compact and affordable, multizone dToF sensors are now emerging to enhance laptop experiences and enable new use cases,” said Florian Domengie, PhD Principal Analyst, Imaging at Yole Group. 

The 5th generation turnkey ST solution
By integrating hardware and software components by design, the new ST solution is a readily deployable system based on the FlightSense 8x8 multizone Time-of-Flight sensor (VL53L8CP), complemented by proprietary AI-based algorithms enabling functionalities such as human presence detection, multi-person detection, and head orientation tracking. This integration creates a unique ready-to-use solution that requires no additional development from OEMs. 
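
To make the data flow concrete, here is a hypothetical sketch of how walk-away-lock / wake-on-approach logic could consume an 8x8 multizone distance map. All names, thresholds and the debounce scheme are illustrative assumptions, not ST's algorithm, which additionally relies on the trained AI networks described below.

```python
from dataclasses import dataclass
import numpy as np

# Illustrative assumptions, not ST parameters.
PRESENCE_DIST_MM = 900        # assumed "user in front of laptop" range
MIN_ZONES = 4                 # assumed minimum zones that must see the user
ABSENT_FRAMES_TO_LOCK = 30    # assumed debounce before walk-away lock

@dataclass
class PresenceState:
    present: bool = False
    absent_frames: int = 0

def update(state: PresenceState, zones_mm: np.ndarray) -> str:
    """zones_mm: 8x8 array of per-zone distances in millimetres.
    Returns "wake", "lock" or "none"."""
    near_zones = int((zones_mm < PRESENCE_DIST_MM).sum())
    if near_zones >= MIN_ZONES:
        action = "wake" if not state.present else "none"
        state.present, state.absent_frames = True, 0
        return action
    state.absent_frames += 1
    if state.present and state.absent_frames >= ABSENT_FRAMES_TO_LOCK:
        state.present = False
        return "lock"
    return "none"
```

In the real product, simple thresholding like this is replaced by the trained Presence and Head Orientation networks, which is what enables the screen-dimming and shoulder-surfing features rather than plain proximity detection.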

This 5th generation of sensors also integrates advanced features such as gesture recognition, hand posture recognition, and wellness monitoring through human posture analysis. 

ST’s Human Presence Detection (HPD) solution enables enhanced features such as:
-- Adaptive Screen Dimming tracks head orientation to dim the screen when the user isn’t looking, reducing power consumption by more than 20%.
-- Walk-Away Lock & Wake-on-Attention automatically locks the device when the user leaves and wakes up upon return, improving security and convenience.
-- Multi-Person Detection alerts the user if someone is looking over their shoulder, enhancing privacy.

Tailored AI algorithm
STMicroelectronics has implemented a comprehensive AI-based development process spanning data collection, labeling, cleaning, AI training and integration into a mass-market product. This effort relied on thousands of data-logs from diverse sources, including contributions from workers who uploaded personal seating and movement data over several months, enabling the continuous refinement of the AI algorithms. 

One significant achievement is the transformation of a Proof-of-Concept (PoC) into a mature solution capable of detecting a laptop user's head orientation using only 8x8 pixels of distance data. This success was driven by a meticulous development process that included four global data capture campaigns, 25 solution releases over the course of a year, and rigorous quality control of the AI training data. The approach also involved a tailored pre-processing method for the VL53L8CP ranging data, and the design of four specialized AI networks: Presence AI, HOR (Head Orientation) AI, Posture AI, and Hand Posture AI. Central to this accomplishment was the VL53L8CP ToF sensor, engineered to optimize the signal-to-noise ratio (SNR) per zone. 

Enhanced user experience & privacy protection
The ToF sensor ensures complete user privacy without capturing images or relying on the camera, unlike previous versions of webcam-based solutions. 

Adaptive Screen Dimming:
-- Uses AI algorithms to analyze the user's head orientation. If the user is not looking at the screen, the system gradually dims the display to conserve power.
-- Extends battery life by minimizing energy consumption.
-- Optimizes for low power consumption with AI algorithms and can be seamlessly integrated into existing PC sensor hubs.

Walk-Away Lock (WAL) & Wake-on-Approach (WOA):
-- The ToF sensor automatically locks the PC when the user moves away and wakes it upon their return, eliminating the need for manual interaction.
-- This feature enhances security, safeguards sensitive data, and offers a seamless, hands-free user experience.
-- Advanced filtering algorithms help prevent false triggers, ensuring the system remains unaffected by casual passersby.

Multi-Person Detection (MPD):
-- The system detects multiple people in front of the screen and alerts the user if someone is looking over their shoulder.
-- Enhances privacy by preventing unauthorized viewing of sensitive information.
-- Advanced algorithms enable the system to differentiate between the primary user and other nearby individuals.

Technical highlights: VL53L8CP: ST FlightSense 8x8 multizones ToF sensor. https://www.st.com/en/imaging-and-photonics-solutions/time-of-flight-sensors.html 
-- AI-based: compact, low-power algorithms suitable for integration into PC sensor hubs.
-- A complete ready-to-use solution includes hardware (ToF sensor) and software (AI algorithms).


MIPI C-PHY v3.0 upgrades data rates


News: https://www.businesswire.com/news/home/20250507526963/en/MIPI-C-PHY-v3.0-Adds-New-Encoding-Option-to-Support-Next-Generation-of-Image-Sensor-Applications

The MIPI Alliance, an international organization that develops interface specifications for mobile and mobile-influenced industries, today announced a major update to its high-performance, low-power and low electromagnetic interference (EMI) C-PHY interface specification for connecting cameras and displays. Version 3.0 introduces support for an 18-Wirestate mode encoding option, increasing the maximum performance of a C-PHY lane by approximately 30 to 35 percent. This enhancement delivers up to 75 Gbps over a short channel, supporting the rapidly growing demands of ultra-high-resolution, high-fidelity image sensors.

The new, more efficient encoding option, 32b9s, transports 32 bits over nine symbols and maintains MIPI C-PHY’s industry-leading low EMI and low power properties. For camera applications, the new mode enables the use of lower symbol rates or lane counts for existing use cases, or higher throughput with current lane counts to support new use cases involving very high-end image sensors such as:

  •  Next-generation prosumer video content creation on smartphones, with high dynamic range (HDR), smart region-of-interest detection and advanced motion vector generation
  •  Machine vision quality-control systems that can detect the smallest of defects in fast-moving production lines
  •  Advanced driver assistance systems (ADAS) in automotive that can analyze the trajectory and behavior of fast-moving objects in the most challenging lighting conditions 
C-PHY Capabilities and Performance Highlights
MIPI C-PHY supports the MIPI Camera Serial Interface 2 (MIPI CSI-2) and MIPI Display Serial Interface 2 (MIPI DSI-2) ecosystems in low-power, high-speed applications for the typical interconnect lengths found in mobile, PC compute and IoT applications. The specification:
  •  Provides high throughput, a minimized number of interconnect signals and superior power efficiency to connect cameras and displays to an application processor. This is due to efficient three-phase coding unique to C-PHY that reduces the number of system interconnects and minimizes electromagnetic emissions to sensitive RF receiver circuitry that is often co-located with C-PHY interfaces.
  •  Offers flexibility to reallocate lanes within a link because C-PHY functions as an embedded clock link
  •  Enables low-latency transitions between high-speed and low-power modes
  •  Includes an alternate low power (ALP) feature, which enables a link operation using only C-PHY’s high-speed signaling levels. An optional fast lane turnaround capability utilizes ALP and supports asymmetrical data rates, which enables implementers to optimize the transfer rates to system needs.
  •  Can coexist on the same device pins as MIPI D-PHY, so designers can develop dual-mode devices
Support for C-PHY v3.0 was included in the most recent MIPI CSI-2 v4.1 embedded camera and imaging interface specification, published in April 2024. To aid implementation, C-PHY v3.0 is backward-compatible with previous C-PHY versions.

“C-PHY is MIPI's ternary-based PHY for smartphones, IoT, drones, wearables, PCs, and automotive cameras and displays,” said Hezi Saar, chair of MIPI Alliance. “It supports low-cost, low-resolution image sensors with fewer wires and high-performance image sensors in excess of 100 megapixels. The updated specification enables forward-looking applications like cinematographic-grade video on smartphones, machine vision quality-control systems and ADAS applications in automotive.”

Forthcoming MIPI D-PHY Updates
Significant development work is continuing on MIPI's other primary shorter-reach physical layer, MIPI D-PHY. D-PHY v3.5, released in 2023, includes an embedded clock option for display applications, while the forthcoming v3.6 specification will expand embedded clock support for camera applications, targeting PC / client computing platforms. The next full version, v4.0, will further expand D-PHY’s embedded clock support for use in mobile and beyond-mobile machine vision applications, and further increase D-PHY’s data rate beyond its current 9 Gbps per lane.

Also, MIPI Alliance last year conducted a comprehensive channel signal analysis to document the longer channel lengths of both C- and D-PHY. The resulting member application note, "Application Note for MIPI C-PHY and MIPI D-PHY IT/Compute," demonstrated that both C-PHY and D-PHY can be used in larger end products, such as laptops and all-in-ones, with minimal or no changes to the specifications as originally deployed in mobile phones or tablets, or for even longer lengths by operating at a reduced bandwidth. 



NIT announces SWIR line sensor


New SWIR InGaAs Line Scan Sensor NSC2301 for High-Speed Industrial Inspection

New Imaging Technologies (NIT) announces the release of its latest SWIR InGaAs line scan sensor, the NSC2301, designed for demanding industrial inspection applications. With advanced features and performance, this new sensor sets a benchmark in SWIR imaging for production environments.

Key features

  • 0.9µm to 1.7µm spectrum
  • 2048x1px @8µm pixel pitch
  • 90e- readout noise
  • Line rate >80kHz @ 2048 pixel resolution
  • Single stage TEC cooling
  • Configurable exposure times
  • ITR & IWR readout modes

The NSC2301 features a 2048 x 1 resolution with an 8 µm pixel pitch, delivering sharp, detailed line-scan imaging. The sensor format is well suited to standard 1.1'' optical format optics. This SWIR line-scan sensor supports line rates over 80 kHz, making it ideal for fast-moving inspection tasks. With configurable exposure times and both ITR (Integration Then Read) and IWR (Integration While Read) readout modes, the sensor offers unmatched adaptability for various lighting and motion conditions.

Thanks to its three available gain modes, the NSC2301 provides the perfect combination of high sensitivity (90 e- readout noise in High Gain) and high dynamic range, crucial for imaging challenging materials or capturing subtle defects in high-speed production lines.
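
For a sense of the data volumes implied by these specifications, here is a rough throughput estimate (our own arithmetic; the 12-bit output depth is an assumption, not stated in the release):

```python
# Rough throughput estimate from the quoted NSC2301 figures.
pixels_per_line = 2048
line_rate_hz = 80_000            # ">80 kHz" lower bound from the release
bit_depth = 12                   # assumed output depth (assumption)

pixel_rate = pixels_per_line * line_rate_hz        # ~164 Mpix/s
raw_gbps = pixel_rate * bit_depth / 1e9            # ~2 Gbps raw
print(f"{pixel_rate / 1e6:.0f} Mpix/s, ~{raw_gbps:.1f} Gbps raw data")
```

Roughly 164 Mpix/s, or on the order of 2 Gbps of raw data, which is consistent with the fast CameraLink interface planned for the companion camera described below.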

This new sensor expands NIT’s proprietary SWIR sensor portfolio and will be officially introduced at Laser World of Photonics 2025 in Munich.

Applications
Typical use cases for the NSC2301 include silicon wafer inspection, solar panel inspection, hot glass quality control, waste sorting, and optical coherence tomography, especially where high-resolution and high-speed line-scan imaging is critical.

Camera
Complementing the launch of the sensor, NIT will release LiSaSWIR v2, a high-performance camera integrating the NSC2301, in late summer. The camera will feature Smart CameraLink for fast data transmission and plug-and-play integration. With the NSC2301, NIT continues its mission of delivering cutting-edge SWIR imaging technology, developed and produced in-house.


TechInsights blog on Samsung’s hybrid bond image sensor


Link: https://www.techinsights.com/blog/samsung-unveils-first-imager-featuring-hybrid-bond-technology

In a recent breakthrough discovery by TechInsights, the Samsung GM5 imager, initially thought to be a standard back-illuminated CIS, has been revealed to feature a pioneering hybrid bond design. This revelation comes after a year-long investigation following its integration into the Google Pixel 7 Pro.

Initially cataloged as a regular back-illuminated CIS due to the absence of through silicon vias (TSVs), further analysis was prompted by its appearance in the Google Pixel 8 Pro, boasting remarkable resolution. This led to an exploratory cross-section revealing the presence of a hybrid bond, also known as Direct Bond Interconnect (DBI). 

 



Webinar on image sensors for astronomy


 

 

The Future of Detectors in Astronomy


In this webinar, experts from ESO and Caeleste explore the current trends and future directions of detector technologies in astronomy. From ground-based observatories to cutting-edge instrumentation, our speakers share insights into how sensor innovations are shaping the way we observe the universe.
 

Speakers:
Derek Ives (ESO) – Head of Detector Systems at ESO
Elizabeth George (ESO) – Detector Physicist
Ajit Kalgi (Caeleste) – Director of Design Center
Jan Vermeiren (Caeleste) – Business Development Manager


Open Letter from Johannes Solhusvik, New President of the International Image Sensor Society (IISS)


Dear all, 
 
As announced by Junichi Nakamura during the IISW’25 banquet dinner, I have now taken over as President of the International Image Sensor Society (IISS). I will do my best to serve the imaging community and to ensure the continued success of our flagship event the International Image Sensor Workshop (IISW). 
 
The workshop objective is to provide an opportunity to exchange the latest progress in image sensor and related R&D activities amongst the top image sensor technologists in the world in an informal atmosphere. 
 
With the retirement of Junichi Nakamura from the Board, as well as Vladimir Koifman who also completed his service period, two very strong image sensor technologists have joined the IISS Board, namely Min-Woong Seo (Samsung) and Edoardo Charbon (EPFL). Please join me in congratulating them. 
 
Finally, I would like to solicit any suggestions and insights from the imaging community on how to improve the IISS, and to encourage you to start planning your paper submission to the next workshop in Canada in 2027. More information will be provided soon at our website www.imagesensors.org 
 
Best regards, 
 
Johannes Solhusvik 
President of IISS 
VP, Head of Sony Semiconductor Solutions Europe


Sony IMX479 520-pixel SPAD LiDAR sensor


Press release: https://www.sony-semicon.com/en/news/2025/2025061001.html

Sony Semiconductor Solutions to Release Stacked SPAD Depth Sensor for Automotive LiDAR Applications, Delivering High-Resolution, High-Speed Performance

High-resolution, high-speed distance measuring performance contributes to safer, more reliable future mobility

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX479 stacked, direct Time of Flight (dToF) SPAD depth sensor for automotive LiDAR systems, delivering both high-resolution and high-speed performance.

The new sensor product employs a dToF pixel unit composed of 3×3 (horizontal × vertical) SPAD pixels as a minimum element to enhance measurement accuracy using a line scan methodology. In addition, SSS's proprietary device structure enables a frame rate of up to 20 fps, the fastest for such a high-resolution SPAD depth sensor with 520 dToF pixels. 

The new product enables the high-resolution and high-speed distance measuring performance demanded for an automotive LiDAR required in advanced driver assistance systems (ADAS) and automated driving (AD), contributing to safer and more reliable future mobility. 

LiDAR technology is crucial for the high-precision detection and recognition of road conditions and of the position and shape of objects such as vehicles and pedestrians. There is growing demand for further technical advancement and development of LiDAR toward Level 3 automated driving, which allows for autonomous control. SPAD depth sensors use the dToF measurement method, one of the LiDAR ranging methods, which measures the distance to an object by detecting the time of flight (time difference) of light emitted from a source until it returns to the sensor after being reflected by the object.

The new sensor harnesses SSS’s proprietary technologies acquired in the development of CMOS image sensors, including the back-side illuminated, stacked structure and Cu-Cu (copper-copper) connections. By integrating the newly developed distance measurement circuits and dToF pixels on a single chip, the new product has achieved a high-speed frame rate of up to 20 fps while delivering a high resolution of 520 dToF pixels with a small pixel size of 10 μm square.

Main Features
■ Up to 20 fps frame rate, the fastest for a 520 dToF pixel SPAD depth sensor

This product stacks a pixel chip (top) with back-illuminated dToF pixels onto a logic chip (bottom) equipped with newly developed distance measurement circuits, joined by Cu-Cu connections into a single chip. This design enables a small pixel size of 10 μm square, achieving a high resolution of 520 dToF pixels. The new distance measurement circuits handle multiple processes in parallel for even faster processing.

These technologies achieve a frame rate of up to 20 fps, the fastest for a 520 dToF pixel SPAD depth sensor. They also deliver the equivalent of 0.05 degrees of vertical angular resolution, improving vertical detection accuracy to 2.7 times that of conventional products. These elements allow detection of the three-dimensional objects vital to automotive LiDAR, including objects only 25 cm high (such as a tire or other debris on the road) at a distance of 250 m.
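A quick back-of-the-envelope check of that claim (the 0.05-degree and 250 m figures come from the release; the simple geometry is the only assumption):

import math

# Vertical extent subtended by one 0.05-degree pixel at 250 m.
angular_res_deg = 0.05
distance_m = 250.0
subtended_m = distance_m * math.tan(math.radians(angular_res_deg))
print(round(subtended_m, 3))  # ~0.218 m, so a 25 cm object spans roughly one vertical pixel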

■ Excellent distance resolution of 5 cm intervals
The proprietary circuits SSS developed to enhance the distance resolution of this product process each SPAD pixel's data individually and calculate the distance, improving the LiDAR distance resolution to 5 cm intervals.
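For context, a 5 cm distance bin corresponds to a round-trip timing resolution of roughly 330 ps, from Δt = 2Δd/c (a derived figure, not one stated in the release):

C = 299_792_458.0  # speed of light, m/s
delta_d_m = 0.05   # 5 cm distance resolution
delta_t_s = 2.0 * delta_d_m / C
print(delta_t_s * 1e12)  # ~333.6 ps of round-trip timing resolution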

■ High photon detection efficiency of 37%, enabling detection of objects up to 300 m away
This product features an uneven texture on both the incident plane and the bottom of the pixels, along with an optimized on-chip lens shape. Incident light is diffracted to enhance the absorption rate, achieving a high photon detection efficiency of 37% at the 940 nm wavelength commonly used for automotive LiDAR laser light sources. This allows the system to detect and recognize objects with high precision up to 300 m away, even in bright conditions where the background light is 100,000 lux or higher.

Go to the original article...

Artilux and VisEra metalens collaboration

Image Sensors World        Go to the original article...

News release: https://www.artiluxtech.com/resources/news/1023

Artilux, the leader of GeSi (germanium-silicon) photonics technology and pioneer of CMOS (complementary metal-oxide-semiconductor) based SWIR (short-wavelength infrared) optical sensing, imaging and communication, today announced its collaboration with VisEra Technologies (TWSE: 6789) on the latest Metalens technology. The newly unveiled Metalens technology differs from traditional curved lens designs by directly fabricating, on a 12” silicon substrate, fully-planar and high-precision nanostructures for precise control of light waves. By synergizing Artilux’s core GeSi technology with VisEra’s advanced processing capabilities, the demonstrated mass-production-ready Metalens technology significantly enhances optical system performance, production efficiency and yield. This cutting-edge technology is versatile and can be broadly applied in areas such as optical sensing, optical imaging, optical communication, and AI-driven commercial applications.

Scaling the Future: Opportunities and Breakthroughs in Metalens Technology
With the rise of artificial intelligence, robotics, and silicon photonics applications, silicon chips for optical sensing, imaging, and communication are set to play a pivotal role in advancing these industries. For example, smartphones and wearables with built-in image sensing, physiological signal monitoring, and AI-assistant capabilities will become increasingly prevalent. Moreover, with its advantages in bandwidth, reach, and power efficiency, silicon photonics is poised to become a critical component supporting future AI model training and inference in data centers. As hardware designs demand greater miniaturization at the chip level, silicon-based "Metalens" technology will lead and accelerate the deployment of these applications.

Metalens technology offers the benefits of single-wafer process integration and compact optical module design, paving the way for silicon chips to gain growth momentum in the optical field. According to Valuates Reports, the global metalens market was valued at US$41.8 million in 2024 and is projected to reach a revised US$2.4 billion by 2031, growing at a CAGR of up to 80% over the 2025-2031 forecast period.
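Those endpoint figures are consistent with the quoted growth rate; a quick check (assuming a seven-year span from 2024 to 2031):

# Sanity check of the projected metalens market CAGR (assumption: 7-year span, 2024 -> 2031).
start_musd, end_musd, years = 41.8, 2400.0, 7
cagr = (end_musd / start_musd) ** (1 / years) - 1
print(f"{cagr:.0%}")  # ~78%, in line with the "up to 80%" figure quoted above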

Currently, most optical systems rely on traditional optical lenses, which utilize parabolic or spherical surface structures to focus light and control its amplitude, phase, and polarization properties. However, this approach is constrained by physical limitations, and requires precise mechanical alignment. Additionally, the curved designs of complex optical components demand highly accurate coating and lens-formation processes. These challenges make it difficult to achieve wafer-level integration with CMOS-based semiconductor processes and optical sensors, posing a significant hurdle to the miniaturization and integration of optical systems.

Innovative GeSi and SWIR Sensing Technology Set to Drive Application Deployment via Ultra-Thin Optical Modules
Meta-Surface technology is redefining optical innovation by replacing traditional curved microlenses with ultra-thin, fully planar optical components. This advancement significantly reduces chip size and thickness, increases design freedom for optical modules, minimizes signal interference, and enables precise control of the optical wavefront. Unlike emitter-side DOE (Diffractive Optical Element) technology, Artilux's Metalens technology fabricates silicon-based nanostructures directly on 12” silicon substrates with ultra-high precision. By seamlessly integrating CMOS processes and core GeSi technology on a silicon wafer, this work improves production efficiency and yield while supporting SWIR wavelengths. With increased optical coupling efficiency, the technology offers versatile solutions for AI applications in optical sensing, imaging, and communication, catering to a wide range of industries such as wearables, biomedical, LiDAR, mixed reality, aerospace, and defense.
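As general background (a textbook relation, not Artilux's or VisEra's design data), a flat metalens is commonly designed to impose a hyperbolic phase profile so that light from every radius arrives at the focal point in phase; the nanostructure placed at each position is then chosen to realize the required local phase delay:

import math

def metalens_phase(r_m: float, wavelength_m: float, focal_length_m: float) -> float:
    # Target phase (radians) at radius r for an ideal flat lens of focal length f:
    # phi(r) = (2*pi/lambda) * (f - sqrt(r^2 + f^2))
    lam, f = wavelength_m, focal_length_m
    return (2.0 * math.pi / lam) * (f - math.sqrt(r_m ** 2 + f ** 2))

# Illustrative numbers only: 940 nm light, 1 mm focal length, phase sampled every 10 um.
for r_um in range(0, 51, 10):
    phi = metalens_phase(r_um * 1e-6, 940e-9, 1e-3) % (2.0 * math.pi)
    print(f"r = {r_um:2d} um -> required phase = {phi:.2f} rad")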

Neil Na, Co-Founder and Chief Technology Officer of Artilux, stated, "Artilux has gained international recognition for its innovations in semiconductor technology. We are delighted to once again share our independently designed Meta-Surface solution, integrating VisEra's leading expertise in 12” wafer-level optical manufacturing processes. This collaboration successfully creates ultra-thin optical components that can precisely control light waves, and enables applications across SWIR wavelengths for optical sensing, optical imaging, optical communication, and artificial intelligence. We believe this technology not only holds groundbreaking value in the optical field but will also accelerate the development and realization of next-generation optical technologies."

JC Hsieh, Vice President of the Research and Development Organization at VisEra, emphasized, "At VisEra, we continuously engage in global CMOS imaging and optical sensor industry developments while leveraging our semiconductor manufacturing strengths, key-technology R&D, and partnerships to enhance productivity and efficiency. We are pleased that our business partner, Artilux, has incorporated VisEra’s silicon-based Metalens process technology to advance micro-optical element integration. This collaboration allows us to break through conventional form-factor limitations in design and manufacturing. We look forward to our collaboration driving more innovative applications in the optical sensing industry and accelerating the adoption of Metalens technology."

Metalens technology demonstrates critical potential in industries related to silicon photonics, particularly in enabling miniaturization, improved integration, and enhanced performance of optical components. As advances in materials and manufacturing processes continue to refine the technology, many existing challenges are gradually being overcome. Looking ahead, metalenses are expected to become standard optical components in silicon photonics and sensing applications, driving the next wave of innovation in optical chips and expanding market opportunities.

Go to the original article...

Zaber application note on image sensors for microscopy

Image Sensors World        Go to the original article...

Full article link: https://www.zaber.com/articles/machine-vision-cameras-in-automated-microscopy

When to Use Machine Vision Cameras in Microscopy
Situation #1: High Throughput Microscopy Applications with Automated Image Analysis Software
Machine vision cameras are ideally suited to applications which require high throughput, are not limited by low light, and where a human will not look at the raw data. Designers of systems where the acquisition and analysis of images will be automated must change their perspective of what makes a “good” image. Rather than optimizing for images that look good to humans, the goal should be to capture the “worst” quality images which can still yield unambiguous results as quickly as possible when analyzed by software. If you are using “AI”, a machine vision camera is worth considering.
A common example is imaging consumables in which fluorescent markers hybridize to specific sites. To read these consumables, one must check each possible hybridization site for the presence or absence of a fluorescent signal.
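A minimal sketch of that kind of automated read-out (hypothetical site coordinates, ROI size, and threshold rule; not Zaber's code): each hybridization site is reduced to the mean intensity of a small region of interest and compared against a background-derived threshold.

import numpy as np

def call_sites(image: np.ndarray, sites: list[tuple[int, int]],
               roi: int = 5, k_sigma: float = 5.0) -> dict[tuple[int, int], bool]:
    """Return present/absent calls for a fluorescent signal at each (row, col) site.
    Threshold = whole-image mean + k_sigma * std, a rough background estimate when spots are sparse."""
    threshold = float(image.mean()) + k_sigma * float(image.std())
    calls = {}
    for r, c in sites:
        patch = image[r - roi:r + roi + 1, c - roi:c + roi + 1]
        calls[(r, c)] = float(patch.mean()) > threshold
    return calls

# Usage with synthetic data: one bright (hybridized) site, one dark (empty) site.
img = np.random.poisson(100, size=(256, 256)).astype(float)
img[100:110, 100:110] += 2000.0  # simulated fluorescent spot
print(call_sites(img, sites=[(105, 105), (200, 200)]))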

Situation #2: When a Small Footprint is Important
The small size, integration-friendly features, and cost effectiveness of machine vision cameras make them an attractive option for OEM devices where minimizing the device footprint and retail price are important considerations. How are machine vision cameras different from scientific cameras? The distinction between machine vision and scientific cameras is not as clear as it once was. The term "Scientific CMOS" (sCMOS) was introduced in the mid-2010s as advances in CMOS image sensor technology led to the first CMOS image sensor cameras that could challenge the performance of the then-dominant CCD image sensor technology. These new "sCMOS" sensors delivered improved performance relative to the CMOS sensors prevalent in MV cameras of the time. Since then, thanks to the rapid pace of CMOS image sensor development, the current generation of MV-oriented CMOS sensors boasts impressive performance. There are now many scientific cameras with MV sensors, and many MV cameras with scientific sensors.

Go to the original article...

Videos of the day: UArizona and KAIST

Image Sensors World        Go to the original article...

 

UArizona Imaging Technology Laboratory's sensor processing capabilities

 


KAIST: Design parameters of freeform color splitters for image sensors

Go to the original article...

Panasonic single-photon vertical APD pixel design

Image Sensors World        Go to the original article...

In a paper titled "Robust Pixel Design Methodologies for a Vertical Avalanche Photodiode (VAPD)-Based CMOS Image Sensor" Inoue et al. from Panasonic Japan write:

We present robust pixel design methodologies for a vertical avalanche photodiode-based CMOS image sensor, taking account of three critical practical factors: (i) “guard-ring-free” pixel isolation layout, (ii) device characteristics “insensitive” to applied voltage and temperature, and (iii) stable operation subject to intense light exposure. The “guard-ring-free” pixel design is established by resolving the tradeoff relationship between electric field concentration and pixel isolation. The effectiveness of the optimization strategy is validated both by simulation and experiment. To realize insensitivity to voltage and temperature variations, a global feedback resistor is shown to effectively suppress variations in device characteristics such as photon detection efficiency and dark count rate. An in-pixel overflow transistor is also introduced to enhance the resistance to strong illumination. The robustness of the fabricated VAPD-CIS is verified by characterization of 122 different chips and through a high-temperature and intense-light-illumination operation test with 5 chips, conducted at 125 °C for 1000 h subject to 940 nm light exposure equivalent to 10 kLux. 

 

Open access link to full paper:  https://www.mdpi.com/1424-8220/24/16/5414

Cross-sectional views of a pixel: (a) a conventional SPAD and (b) a VAPD-CIS. N-type and P-type regions are drawn in blue and red, respectively.
 

(a) A chip photograph of VAPD-CIS overlaid with circuit block diagrams. (b) A circuit diagram of the VAPD pixel array. (c) A schematic timing diagram of the pixel circuit illustrated in (b).
 
(a) An illustrative time-lapsed image of the sun. (b) Actual images of the sun taken at each time after starting the experiment. The test lasted for three hours, and as time passed, the sun, initially visible on the left edge of the screen, moved to the right.

Go to the original article...

Nobel winner and co-inventor of CCD technology passes away

Image Sensors World        Go to the original article...

DPReview: https://www.dpreview.com/news/2948041351/ccd-image-sensor-pioneer-george-e-smith-passes-away-at-95 

NYTimes:  https://www.nytimes.com/2025/05/30/science/george-e-smith-dead.html

George E. Smith has died at the age of 95. Working with Willard S. Boyle at Bell Labs, he co-invented the charge-coupled device (CCD), the basis of CCD image sensor technology, for which the two shared the 2009 Nobel Prize in Physics.

Go to the original article...

Photonic color-splitting image sensor startup Eyeo raises €15mn

Image Sensors World        Go to the original article...

Eyeo raises €15 million seed round to give cameras perfect eyesight

  • Eyeo replaces traditional filters with advanced color-splitting technology originating from imec, world-leading research and innovation hub in nanoelectronics and digital technologies. For the first time, photons are not filtered but guided to single pixels, delivering maximum light sensitivity and unprecedented native color fidelity, even in challenging lighting conditions.
  • Compatible with any sensor, eyeo’s single photon guiding technology breaks resolution limits - enabling truly effective sub-0.5-micron pixels for ultra-compact, high-resolution imaging in XR, industrial, security, and mobile applications - where image quality is the top purchasing driver.

Eindhoven (Netherlands), May 7, 2025 – eyeo today announced it has raised €15 million in seed funding, co-led by imec.xpand and Invest-NL and joined by QBIC fund, High-Tech Gründerfonds (HTGF) and Brabant Development Agency (BOM). Eyeo revolutionizes the imaging market for consumer, industrial, XR and security applications by drastically increasing the light sensitivity of image sensors. This breakthrough unlocks levels of picture quality, color accuracy, resolution, and cost efficiency never before possible in smartphones and beyond.

The €15 million raised will drive evaluation kit development, prepare for scale manufacturing of a first sensor product, and expand commercial partnerships to bring this breakthrough imaging technology to market.

The Problem: Decades-old color filter technology throws away 70% of light, crippling sensor performance
For decades, image sensors have relied on red, green, and blue color filters applied to pixels to produce everyday color pictures and video. Color filters, however, block a large portion of the incoming light and thereby limit the sensitivity of the camera. They also limit scaling of the pixel size below roughly 0.5 micron. These longstanding issues have stalled advances in camera technology, constraining both image quality and sensor efficiency. In smartphone cameras, manufacturers have compensated for this limitation by increasing the sensor (and thus camera) size to capture more light. While this improves low-light performance, it also leads to larger, bulkier cameras. Compact, high-sensitivity image sensors are essential for slimmer smartphones and emerging applications such as robotics and AR/VR devices, where size, power efficiency, and image quality are crucial.

The Breakthrough: Color-splitting via vertical waveguides
Eyeo introduces a novel image sensor architecture that eliminates the need for traditional color filters, making it possible to maximize sensitivity without increasing sensor size. Leveraging breakthrough vertical waveguide-based technology that splits light into colors, eyeo develops sensors that efficiently capture and utilize all incoming light, tripling sensitivity compared to existing technologies. This is particularly valuable in low-light environments, where current sensors struggle to gather enough light for clear, reliable imaging. Additionally, unlike traditional filters that block certain colors (information that is then interpolated through software processing), eyeo’s waveguide technology allows pixels to receive complete color data. This approach instantly doubles resolution, delivering sharper, more detailed images for applications that demand precision, such as computational photography, machine vision, and spatial computing. 
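A toy photon-budget comparison of the two approaches (idealized numbers consistent with the roughly one-third transmission of a color filter mosaic; not eyeo's measured data):

# Idealized comparison: color filter mosaic vs. lossless color splitting.
photons_at_pixel = 900  # photons arriving over one pixel during the exposure

bayer_detected = photons_at_pixel / 3       # a color filter passes roughly one spectral band
splitter_detected = photons_at_pixel * 1.0  # a splitter redirects photons instead of absorbing them

print(bayer_detected, splitter_detected)    # 300.0 vs 900.0 -> roughly 3x sensitivity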

Jeroen Hoet, CEO of eyeo: “Eyeo is fundamentally redefining image sensing by eliminating decades-old limitations. Capturing all incoming light and drastically improving resolution is just the start—this technology paves the way for entirely new applications in imaging, from ultra-compact sensors to enhanced low-light performance, ultra-high resolution, and maximum image quality. We’re not just improving existing systems; we’re creating a new standard for the future of imaging.”

Market Readiness and Roadmap
Eyeo has already established partnerships with leading image sensor manufacturers and foundries to ensure the successful commercialization of its technology. The €15M seed funding will be used to improve its current camera sensor designs further, optimizing the waveguide technology for production scalability and accelerating the development of prototypes for evaluation. By working closely with industry leaders, eyeo aims to bring its advanced camera sensors to a wide range of applications, from smartphones and VR glasses to any compact device that uses color cameras. The first evaluation kits are expected to be available for selected customers within the next two years. 

Eyeo is headquartered in Eindhoven (NL), with an R&D office in Leuven (BE).

Go to the original article...

Glass Imaging raises $20mn

Image Sensors World        Go to the original article...

PR Newswire: https://www.prnewswire.com/news-releases/glass-imaging-raises-20-million-funding-round-to-expand-ai-imaging-technologies-302451849.html

Glass Imaging Raises $20 Million Funding Round To Expand AI Imaging Technologies

LOS ALTOS, Calif., May 12, 2025 /PRNewswire/ -- Glass Imaging, a company harnessing the power of artificial intelligence to revolutionize digital image quality, today unveiled a Series A funding round led by global software investor Insight Partners. The $20 million round will allow Glass Imaging to continue to refine and implement their proprietary GlassAI technologies across a wide range of camera platforms - from smartphones to drones to wearables and more. The Series A round was joined by previous Glass Imaging investors GV (Google Ventures), Future Ventures and Abstract Ventures.

Glass Imaging uses artificial intelligence to extract the full image quality potential of current and future cameras by reversing lens aberrations and sensor imperfections. Glass works with manufacturers to integrate GlassAI software to boost camera performance by 10x, resulting in sharper, more detailed images under various conditions that remain true to life, with no hallucinations or optical distortions.

"At Glass Imaging we are building the future of imaging technology," said Ziv Attar, Founder and CEO, Glass Imaging. "GlassAI can unlock the full potential of all cameras to deliver stunning ultra-detailed results and razor sharp imagery. The range of use cases and opportunities across industry verticals are huge."

"GlassAI leverages edge AI to transform Raw burst image data from any camera into stunning, high-fidelity visuals," said Tom Bishop, Ph.D., Founder and CTO, Glass Imaging. "Our advanced image restoration networks go beyond what is possible on other solutions: swiftly correcting optical aberrations and sensor imperfections while efficiently reducing noise, delivering fine texture and real image content recovery that outperforms traditional ISP pipelines."

"We're incredibly proud to lead Glass Imaging's Series A round and look forward to what the team will build next as they seek to redefine just how great digital image quality can be," said Praveen Akkiraju, Managing Director, Insight Partners. "The ceiling for GlassAI integration across any number of platforms and use cases is massive. We're excited to see this technology expand what we thought cameras and imaging devices were capable of." Akkiraju will join Glass Imaging's board and Insight's Jonah Waldman will join Glass Imaging as a board observer.

Glass Imaging previously announced a $9.3M extended Seed funding round in 2024 led by GV and joined by Future Ventures, Abstract and LDV Capital. That funding round followed an initial Seed investment in 2021 led by LDV Capital along with GroundUP Ventures.

For more information on Glass Imaging and GlassAI visit https://www.glass-imaging.com/

Go to the original article...

Sony-Leopard Imaging collaboration LI-IMX454

Image Sensors World        Go to the original article...

From PR Newswire: https://www.prnewswire.com/news-releases/leopard-imaging-and-sony-semiconductor-solutions-collaborate-to-showcase-li-imx454-multispectral-cameras-at-automate-and-embedded-vision-summit-302452836.html

Leopard Imaging and Sony Semiconductor Solutions Collaborate to Showcase LI-IMX454 Multispectral Cameras at Automate and Embedded Vision Summit

FREMONT, Calif., May 12, 2025 /PRNewswire/ -- Leopard Imaging Inc., a global innovator in intelligent vision solutions, is collaborating with Sony Semiconductor Solutions Corporation (Sony) to present the cutting-edge LI-IMX454 Multispectral Camera at both Automate and Embedded Vision Summit.

Leopard Imaging launched the LI-USB30-IMX454-MIPI-092H camera, which delivers high-resolution imaging across diverse lighting spectrums, powered by Sony's advanced IMX454 multispectral image sensor. Unlike conventional RGB sensors, Sony's IMX454 integrates eight distinct spectral filters directly onto the photodiodes, allowing the camera to capture light across 41 wavelengths from 450 nm to 850 nm in a single shot using Sony's dedicated signal processing, without the need for mechanical scanning or bulky spectral elements.
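Recovering 41 spectral bands from 8 filter channels can be sketched as a linear inverse problem; below is a generic regularized least-squares example with synthetic filter curves (Sony's actual signal processing is proprietary and not described in the release).

import numpy as np

n_bands, n_filters = 41, 8            # 450-850 nm sampled in 41 bands; 8 on-chip filters

# Synthetic filter response matrix A (n_filters x n_bands): each row is one filter's
# spectral sensitivity. In practice this would come from sensor calibration data.
centers = np.linspace(0, n_bands - 1, n_filters)
bands = np.arange(n_bands)
A = np.exp(-0.5 * ((bands[None, :] - centers[:, None]) / 4.0) ** 2)

true_spectrum = np.exp(-0.5 * ((bands - 25) / 6.0) ** 2)   # unknown scene spectrum
measurements = A @ true_spectrum                            # what the 8 channels record

# Ridge-regularized least-squares estimate of the 41-band spectrum.
lam = 1e-2
spectrum_hat = np.linalg.solve(A.T @ A + lam * np.eye(n_bands), A.T @ measurements)
print(np.round(spectrum_hat[20:30], 2))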

Multispectral imaging has historically been underutilized due to cost and complexity. With the LI-IMX454, Leopard Imaging and Sony aim to democratize access to this powerful technology by offering a compact, ready-to-integrate solution for a wide range of industries: from industrial inspection to medical diagnostics, precision agriculture, and many more.

"We're excited to collaborate with Sony to bring this next-generation imaging solution to market," said Bill Pu, President and Co-Founder of Leopard Imaging. "The LI-IMX454 cameras not only deliver high-resolution multispectral data but also integrate seamlessly with AI and machine vision systems for intelligent decision-making."

The collaboration also incorporates Sony's proprietary signal processing software, optimized to support key functions essential to multispectral imaging: defect correction, noise reduction, auto exposure control, robust non-RGB based classification, and color image generation.

Leopard Imaging and Sony will showcase live demos of LI-IMX454 cameras at both Automate and Embedded Vision Summit. To visit Automate: Huntington Place, Booth #8000 on May 12-13. To visit Embedded Vision Summit: Santa Clara Convention Center, Booth #700 on May 21 - 22. To arrange a meeting at the event, please contact marketing@leopardimaging.com.

Go to the original article...

Counterpoint Research’s CIS report

Image Sensors World        Go to the original article...

Global Smartphone CIS Shipments Climb 2% YoY in 2024

Samsung is no longer in the top-3 smartphone CIS suppliers.


  •  Global smartphone image sensor shipments rose 2% YoY to 4.4 billion units in 2024.
  • Meanwhile, the average number of cameras per smartphone declined further to 3.7 units in 2024 from 3.8 units in 2023.
  • Sony maintained its leading position, followed by GalaxyCore in second place and OmniVision in third.
  • Global smartphone image sensor shipments are expected to fall slightly YoY in 2025.

 

https://www.counterpointresearch.com/insight/post-insight-research-notes-blogs-global-smartphone-cis-shipments-climbs-2-yoy-in-2024/

Go to the original article...

IS&T EI 2025 plenary talk on imaging and AI

Image Sensors World        Go to the original article...


 

This plenary presentation was delivered at the Electronic Imaging Symposium held in Burlingame, CA over 2-6 February 2025. For more information see: http://www.electronicimaging.org

Title: Imaging in the Age of Artificial Intelligence

Abstract: AI is revolutionizing imaging, transforming how we capture, enhance, and experience visual content. Advances in machine learning are giving mobile phones far better cameras, with capabilities like enhanced zoom, state-of-the-art noise reduction, blur mitigation, and post-capture features such as intelligent curation and editing of your photo collections, directly on device.
This talk will delve into some of these breakthroughs, and describe a few of the latest research directions that are pushing the boundaries of image restoration and generation, pointing to a future where AI empowers us to better capture, create, and interact with visual content in unprecedented ways.

Speaker: Peyman Milanfar, Distinguished Scientist, Google (United States)

Biography: Peyman Milanfar is a Distinguished Scientist at Google, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz for 15 years, two of those as Associate Dean for Research. From 2012 to 2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Over the last decade, Peyman's team at Google has developed several core imaging technologies used in many products. Among these are the zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution ("Super Res Zoom") pipeline, and several generations of state-of-the-art digital upscaling algorithms. Most recently, his team led the development of the "Photo Unblur" feature launched in Google Photos for Pixel devices.
Peyman received his undergraduate education in electrical engineering and mathematics from UC Berkeley and his MS and PhD in electrical engineering from MIT. He holds more than two dozen patents and founded MotionDSP, which was acquired by Cubic Inc. Along with his students and colleagues, he has won multiple best paper awards for introducing kernel regression in imaging, the RAISR upscaling algorithm, NIMA: neural image quality assessment, and Regularization by Denoising (RED). He has been a Distinguished Lecturer of the IEEE Signal Processing Society and is a Fellow of the IEEE "for contributions to inverse problems and super-resolution in imaging".

Go to the original article...
