Paper on SPADs at the NATO Science & Technology Organization meeting


A paper titled "SPAD Image Sensors for Quantum and Classical Imaging" by Prof. Edoardo Charbon was published in the STO Meeting Proceedings in January 2024.

Paper link: https://www.sto.nato.int/publications/STO%20Meeting%20Proceedings/STO-MP-IST-SET-198/MP-IST-SET-198-C1-03.pdf

Abstract:
Single-photon avalanche diodes (SPADs) have been demonstrated on a variety of CMOS technologies since the early 2000s. While initially inferior to their counterparts implemented in dedicated technologies, modern CMOS SPADs have recently matched them in sensitivity, noise, and timing jitter. Indeed, high time resolution, enabled by low jitter, has helped demonstrate the most impressive developments in the fields of imaging and detection, including fluorescence lifetime imaging microscopy (FLIM), Förster resonance energy transfer (FRET), fluorescence correlation spectroscopy (FCS), time-of-flight positron emission tomography (TOF-PET), and light detection and ranging (LiDAR), to name just a few. The SPAD’s power of detecting single photons in pixels that can be replicated in great numbers, typically in the millions, is currently having a major impact on computational imaging and quantum imaging. These two emerging disciplines stand to take advantage of larger and larger SPAD image sensors with increasingly low jitter and noise, and high sensitivity. Finally, due to the computational power required at the pixel level, power consumption must be reduced; we thus advocate the use of in situ computational engines, which, thanks to CMOS’s economy of scale and 3D-stacking, enable vast computation density. Some examples of this trend are given, along with a general perspective on SPAD image sensors.




Sony releases 247MP sensors


Sony recently released a new 247MP rolling shutter CIS available in monochrome and color variants: IMX811-AAMR and IMX811-AAQR.








Four new videos about the industry


Here are a few new videos from image sensor companies.

Two about new hardware built around image sensors:

  • trinamiX-ST under-OLED face recognition camera

 


  • Prophesee AR glasses demo

 


One about new facilities:

  • The official opening of the TSMC-Sony plant in Kumamoto, where Sony will manufacture its new image sensors:

 


And one about a new sensor series:

  • OmniVision presents its new generation of automotive HDR sensors:

 


Artilux announces room temperature GeSi SPAD


 
HSINCHU, Feb. 22, 2024 /PRNewswire/ -- Artilux, the renowned leader of GeSi (germanium-silicon) photonics technology for CMOS (complementary metal-oxide-semiconductor) based SWIR (short-wavelength infrared) sensing and imaging, announced today that the research team at Artilux has made a breakthrough in advancing SWIR GeSi SPAD (single-photon avalanche diode) technology, which has been recognized and published by Nature, one of the world's most prestigious scientific journals. The paper, titled "Room temperature operation of germanium-silicon single-photon avalanche diode," presented the Geiger-mode operation of a high-performing GeSi avalanche photodiode at room temperature, an operation that in the past was limited to low temperatures of roughly 200 Kelvin or below. Nature's rigorous peer-review process ensures that only research of the highest caliber and broadest interest is published, and the acceptance and publication of the paper in Nature is another pivotal mark exemplifying Artilux's leadership in CMOS-based SWIR sensing and imaging.

The research work, led by Dr. Neil Na, CTO of Artilux, has unveiled a CMOS-compatible GeSi SPAD operated at room temperature and elevated temperatures, featuring a noise-equivalent power improvement over previously demonstrated Ge-based SPADs by several orders of magnitude. The paper showcases key parameters of the GeSi SPAD, including dark count rate, single-photon detection probability in the SWIR spectrum, timing jitter, after-pulsing characteristic time, and after-pulsing probability, at a low breakdown voltage and a small excess bias. As a proof of concept, three-dimensional point-cloud images were captured with a direct time-of-flight (dTOF) technique using the GeSi SPAD. "When we started the project, there was overwhelming evidence in the literature indicating that room-temperature operation of a GeSi SPAD is simply not possible," said Dr. Na, "and I am proud of our team turning the scientific research into a commercial reality against all odds."

The findings set a new milestone in CMOS photonics. The potential deployment of single-photon sensitive SWIR sensors, imagers, and photonic integrated circuits shall unlock critical applications in TOF sensors and imagers, LiDAR (light detection and ranging), bio-photonics, quantum computing and communication, artificial intelligence, robotics, and more. Artilux is committed to continuing its leadership in CMOS photonics technology, aiming to further contribute to the scientific community and photonics industry.

Abstract of article in Nature (Feb 2024): https://www.nature.com/articles/s41586-024-07076-x
The ability to detect single photons has led to the advancement of numerous research fields. Although various types of single-photon detector have been developed, because of two main factors—that is, (1) the need for operating at cryogenic temperature and (2) the incompatibility with complementary metal–oxide–semiconductor (CMOS) fabrication processes—so far, to our knowledge, only Si-based single-photon avalanche diode (SPAD) has gained mainstream success and has been used in consumer electronics. With the growing demand to shift the operation wavelength from near-infrared to short-wavelength infrared (SWIR) for better safety and performance, an alternative solution is required because Si has negligible optical absorption for wavelengths beyond 1 µm. Here we report a CMOS-compatible, high-performing germanium–silicon SPAD operated at room temperature, featuring a noise-equivalent power improvement over the previous Ge-based SPADs by 2–3.5 orders of magnitude. Key parameters such as dark count rate, single-photon detection probability at 1,310 nm, timing jitter, after-pulsing characteristic time and after-pulsing probability are, respectively, measured as 19 kHz µm−2, 12%, 188 ps, ~90 ns and <1%, with a low breakdown voltage of 10.26 V and a small excess bias of 0.75 V. Three-dimensional point-cloud images are captured with direct time-of-flight technique as proof of concept. This work paves the way towards using single-photon-sensitive SWIR sensors, imagers and photonic integrated circuits in everyday life.



Nikon to acquire RED.com


From Nikon newsroom: https://www.nikon.com/company/news/2024/0307_01.html

Nikon to Acquire US Cinema Camera Manufacturer RED.com, LLC

March 7, 2024

TOKYO - Nikon Corporation (Nikon) hereby announces its entry into an agreement to acquire 100% of the outstanding membership interests of RED.com, LLC (RED) whereby RED will become a wholly-owned subsidiary of Nikon, pursuant to a Membership Interest Purchase Agreement with Mr. James Jannard, its founder, and Mr. Jarred Land, its current President, subject to the satisfaction of certain closing conditions thereunder.

Since its establishment in 2005, RED has been at the forefront of digital cinema cameras, introducing industry-defining products such as the original RED ONE 4K to the cutting-edge V-RAPTOR [X] with its proprietary RAW compression technology. RED's contributions to the film industry have not only earned it an Academy Award but have also made it the camera of choice for numerous Hollywood productions, celebrated by directors and cinematographers worldwide for its commitment to innovation and image quality optimized for the highest levels of filmmaking and video production.

This agreement was reached as a result of the mutual desires of Nikon and RED to meet the customers’ needs and offer exceptional user experiences that exceed expectations, merging the strengths of both companies. Nikon's expertise in product development, exceptional reliability, and know-how in image processing, as well as optical technology and user interface along with RED’s knowledge in cinema cameras, including unique image compression technology and color science, will enable the development of distinctive products in the professional digital cinema camera market.

Nikon will leverage this acquisition to expand the fast-growing professional digital cinema camera market, building on both companies' business foundations and networks, promising an exciting future of product development that will continue to push the boundaries of what is possible in film and video production.


IEEE ICCP 2024 Call for Papers, Submission Deadline March 22, 2024


Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2024
https://iccp-conference.org/iccp2024/call-for-papers/
Submission Deadline: March 22, 2024 @ 23:59 CET

ICCP is an international venue for disseminating and discussing new scholarly work in computational photography and novel imaging, sensor, and optics techniques. This year, ICCP will take place at EPFL in Lausanne, Switzerland, on July 22-24!

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference.

 ICCP 2024 seeks novel and high-quality submissions in all areas of computational photography, including, but not limited to:

  •  High-performance imaging.
  •  Computational cameras, illumination, and displays.
  •  Advanced image and video processing.
  •  Integration of imaging, physics, and machine learning.
  •  Organizing and exploiting photo / video collections.
  •  Structured light and time-of-flight imaging.
  •  Appearance, shape, and illumination capture.
  •  Computational optics (wavefront coding, digital holography, compressive sensing, etc.).
  •  Sensor and illumination hardware.
  •  Imaging models and limits.
  •  Physics-based rendering, neural rendering, and differentiable rendering.
  •  Applications: imaging on mobile platforms, scientific imaging, medicine and biology, user interfaces, AR/VR systems.

Learn more on the ICCP 2024 website, and submit your latest advancements by Friday, 22nd March, 2024.

The call for posters and demos will be published soon, with a deadline at the end of April. It will also be a great opportunity to advertise your work.

 




Prophesee Qualcomm demo at Mobile World Congress


Prophesee and Qualcomm recently showcased their "blur free" mobile photography technology at the Mobile World Congress in Barcelona.

Press release: https://prophesee-1.reportablenews.com/pr/prophesee-s-metavision-image-deblur-solution-for-smartphones-is-now-production-ready-seamlessly-optimized-for-the-snapdragon-8-gen-3-mobile-platform

February 27, 2024 – Paris, France - Prophesee SA, inventor of the most advanced neuromorphic vision systems, today announced that the progress achieved through its collaboration with Qualcomm Technologies, Inc. has now reached production stage. A live demo during Mobile World Congress Barcelona is showcasing Prophesee’s native compatibility with premium Snapdragon® mobile platforms, bringing the speed, efficiency, and quality of neuromorphic-enabled vision to cameras in mobile devices.

Prophesee’s event-based Metavision sensors and AI, optimized for use with Snapdragon platforms, now bring motion blur cancellation and overall image quality to unprecedented levels, especially in the most challenging scenarios faced by conventional frame-based RGB sensors: fast-moving and low-light scenes.

“We have made significant progress since we announced this collaboration in February 2023, achieving the technical milestones that demonstrate the impressive impact on image quality our event-based technology has in mobile devices containing Snapdragon mobile platforms. As a result, our Metavision Deblur solution has now reached production readiness,” said Luca Verre, CEO and co-founder of Prophesee. “We look forward to unleashing the next generation of smartphone photography and video with Prophesee's Metavision.”

“Qualcomm Technologies is thrilled to continue our strong collaboration with Prophesee, joining efforts to efficiently optimize Prophesee’s event-based Metavision technology for use with our flagship Snapdragon 8 Gen 3 Mobile Platform. This will deliver significant enhancements to image quality and bring new features enabled by event cameras’ shutter-free capability to devices powered by Snapdragon mobile platforms,” said Judd Heape, VP of Product Management at Qualcomm Technologies, Inc.

How it works
Prophesee’s breakthrough sensors add a new sensing dimension to mobile photography. They change the paradigm in traditional image capture by focusing only on changes in a scene, pixel by pixel, continuously, at extreme speeds.

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron. Each pixel activates itself intelligently and asynchronously depending on the number of photons it senses. A pixel activating itself is called an event. In essence, events are driven by the scene’s dynamics rather than an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.

High-performance event-based deblurring is achieved by synchronizing a frame-based sensor with Prophesee’s event-based sensor. The system then fills the gaps between and within the frames with microsecond-resolution events to algorithmically extract pure motion information and repair motion blur.
Learn more: https://www.prophesee.ai/event-based-vision-mobile/
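For readers curious how event data can repair a blurred frame, here is a minimal sketch of the double-integral idea from the event-deblurring literature. It is not Prophesee's production algorithm; the event format, contrast threshold, and time-grid resolution are assumptions for illustration only.

```python
import numpy as np

def deblur_with_events(blurred, events, exposure_s, contrast_thresh=0.2, n_steps=50):
    """Sketch of event-assisted deblurring (double-integral idea).

    blurred: HxW float array, the motion-blurred frame (linear intensity).
    events:  iterable of (t, y, x, polarity) with t in seconds within the exposure.
    Assumes each event means log-intensity changed by +/- contrast_thresh.
    """
    h, w = blurred.shape
    t_grid = np.linspace(0.0, exposure_s, n_steps)
    log_change = np.zeros((n_steps, h, w))
    for t, y, x, p in events:
        # every grid point at or after the event sees its log-intensity step
        idx = np.searchsorted(t_grid, t)
        log_change[idx:, y, x] += contrast_thresh * p

    # The blurred frame is the average of latent frames over the exposure:
    #   B = L(0) * mean_t exp(E(t))   =>   L(0) = B / mean_t exp(E(t))
    denom = np.exp(log_change).mean(axis=0)
    return blurred / np.maximum(denom, 1e-6)
```

With a synchronized event stream, the same relation can be evaluated at any timestamp inside the exposure to recover a sharp frame at that instant.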


Preprint on "Skipper-in-CMOS" image sensor


A recent preprint on arXiv (https://arxiv.org/abs/2402.12516) presents a new CMOS image sensor designed to achieve sub-electron read noise and photon-number-resolving capability.

Skipper-in-CMOS: Non-Destructive Readout with Sub-Electron Noise Performance for Pixel Detectors

Abstract: The Skipper-in-CMOS image sensor integrates the non-destructive readout capability of Skipper Charge Coupled Devices (Skipper-CCDs) with the high conversion gain of a pinned photodiode in a CMOS imaging process, while taking advantage of in-pixel signal processing. This allows both single photon counting and high frame rate readout through highly parallel processing. The first results obtained from a 15 x 15 um^2 pixel cell of a Skipper-in-CMOS sensor fabricated in Tower Semiconductor's commercial 180 nm CMOS Image Sensor process are presented. Measurements confirm the expected reduction of the readout noise with the number of samples, down to a deep sub-electron noise of 0.15 e- rms, and demonstrate the charge transfer operation from the pinned photodiode and single photon counting when the sensor is exposed to light. The article also discusses new testing strategies employed for its operation and characterization.
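The key idea behind the deep sub-electron figure is that non-destructive samples of the same charge packet can be averaged, so uncorrelated read noise falls as 1/sqrt(N). A quick back-of-the-envelope sketch follows; the single-sample noise value below is an assumption for illustration, not a number from the paper.

```python
import numpy as np

single_sample_noise_e = 3.0   # hypothetical read noise of one sample (e- rms)
target_noise_e = 0.15         # deep sub-electron level reported in the abstract

def averaged_noise(sigma_1, n_samples):
    """Uncorrelated read noise averages down as 1/sqrt(N)."""
    return sigma_1 / np.sqrt(n_samples)

n_needed = int(np.ceil((single_sample_noise_e / target_noise_e) ** 2))
print(f"~{n_needed} non-destructive samples needed to reach {target_noise_e} e- rms")
print(f"noise after {n_needed} samples: "
      f"{averaged_noise(single_sample_noise_e, n_needed):.3f} e- rms")
```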








Samsung defends AI editing on photos


From TechRadar: https://www.techradar.com/phones/samsung-galaxy-phones/there-is-no-such-thing-as-a-real-picture-samsung-defends-ai-photo-editing-on-galaxy-s24

"There is no such thing as a real picture": Samsung defends AI photo editing on Galaxy S24

Like most technology conferences in recent months, Samsung’s latest Galaxy Unpacked event was dominated by conversations surrounding AI. From two-way call translation to gesture-based search, the Samsung Galaxy S24 launched with several AI-powered tricks up its sleeve – but one particular feature is already raising eyebrows.

Set to debut on the Galaxy S24 and its siblings, Generative Edit will allow users to artificially erase, recompose and remaster parts of an image in a bid to achieve photographic perfection. This isn’t a new concept, and any edits made using this generative AI tech will result in a watermark and metadata changes. But the seamlessness with which the Galaxy S24 enables such edits has understandably left some Unpacked-goers concerned.

Samsung, however, is confident that its new Generative Edit feature is ethical, desirable and even necessary in today’s misinformation-filled world. In a revealing interview with TechRadar, Samsung’s Head of Customer Experience, Patrick Chomet, defended the company’s position on AI and its implications.

“There was a very nice video by Marques Brownlee last year on the moon picture,” Chomet told us. “Everyone was like, ‘Is it fake? Is it not fake?’ There was a debate around what constitutes a real picture. And actually, there is no such thing as a real picture. As soon as you have sensors to capture something, you reproduce [what you’re seeing], and it doesn’t mean anything. There is no real picture. [...] You can try to define a real picture by saying, ‘I took that picture’, but if you used AI to optimize the zoom, the autofocus, the scene – is it real? Or is it all filters? There is no real picture, full stop.”
“But still, questions around authenticity are very important,” Chomet continued, “and we [Samsung] go about this by recognizing two consumer needs; two different customer intentions. Neither of them are new, but generative AI will accelerate one of them.

“One intention is wanting to capture the moment – wanting to take a picture that’s as accurate and complete as possible. To do that, we use a lot of AI filtering, modification and optimization to erase shadows, reflections and so on. But we are true to the user's intention, which was to capture that moment.

“Then there is another intention, which is wanting to make something. When people go on Instagram, they add a bunch of funky black and white stuff – they create a new reality. Their intention isn’t to recreate reality, it’s to make something new. So [Generative Edit] isn’t a totally new idea. Generative AI tools will accelerate that intention exponentially in the next few years [...] so there is a big customer need to distinguish between the real and the new. That’s why our Generative Edit feature adds a watermark and edits the metadata, and we’re working with regulatory bodies to ensure people understand the difference.”

On the subject of AI regulation, Chomet said that Samsung "is very aligned with European regulations on AI," noting that governments are right to express early concerns around the potential implications of widespread AI use.

"The industry needs to be responsible and it needs to be regulated," added Chomet, noting that Samsung is actively working on that. "Our new technology is amazing and powerful – but like anything, it can be used in good and bad ways. So, it’s appropriate to think deeply about the bad ways.”

As for how Generative Edit will end up being used on Samsung's new Galaxy phones, only time will tell. Perhaps the feature will simply help average smartphone users (i.e. those unfamiliar with Photoshop) get the photos they really want, rather than facilitate mass photo fakery. Indeed, it still remains to be seen whether generative AI tech as a whole will be a benefit or a hindrance to society as we know it.



GPixel on the verge of IPO?


From: http://www.myzaker.com/article/65d3ce24b15ec01a56438179

(Translated with Google Translate)

...

Against the backdrop of an improving market, Changchun Changguangchenxin Microelectronics Co., Ltd. (hereinafter "Changguangchenxin"), a domestic company specializing in CMOS image sensors, has recently advanced its IPO application on the Shanghai Stock Exchange's Science and Technology Innovation Board (STAR Market) to the inquiry stage.

In this IPO, Changguangchenxin plans to raise 1.557 billion yuan, to be invested in the R&D and industrialization of CMOS image sensors for several areas, including machine vision, scientific instruments, and professional imaging. Funds are also earmarked for the construction of a high-end CMOS image sensor R&D center and to supplement working capital.

However, Changguangchenxin's net profit swung from profit to loss during the reporting period. Moreover, for a company seeking a listing on the Science and Technology Innovation Board, its R&D expense ratio has been decreasing year by year, and the detailed breakdown of its R&D expenses has drawn scrutiny from the Shanghai Stock Exchange.

...


Andes and MetaSilicon collaborate on automotive CIS


From Yahoo Finance news:

Andes Technology and MetaSilicon Collaborate to Build the World’s First Automotive-Grade CMOS Image Sensor Product Using RISC-V IP SoC

Hsinchu, Taiwan, Feb. 22, 2024 (GLOBE NEWSWIRE) -- RISC-V IP vendor Andes Technology and edge computing chip provider MetaSilicon jointly announced that the MetaSilicon MAT Series is the world's first automotive-grade CMOS image sensor series built on a RISC-V SoC, using Andes' AndesCore™ N25F-SE processor. The sensors are designed in accordance with the ISO 26262 functional safety standard to achieve ASIL-B level and follow AEC-Q100 Grade 2 for a high level of safety and reliability. By using technologies such as HDR, advanced imaging can be achieved in a simple, economical, and efficient system. The sensors not only deliver high dynamic range, high sensitivity, and high color reproduction, but also meet the application requirements of ADAS decision-making.

The N25F-SE from Andes Technology is a 32-bit RISC-V CPU core that can support the standard IMACFD instruction set, which includes an efficient integer instruction set and a single/double precision floating point operation instruction set. The N25F-SE's high-efficiency five-stage pipeline achieves a good balance between high operating frequency and a streamlined design. It also has rich configurable options and flexible interface configuration, which greatly simplify the SoC development. In addition, the N25F-SE has obtained the ISO 26262 ASIL-B full compliance certification, which enables the image sensor chip to meet the vehicle-level safety requirement. For the development of MetaSilicon's automotive-grade chips, the N25F-SE and its safety package provide a good fit CPU solution and together with Andes’ technical support shorten the chip development time significantly.

MetaSilicon has first-class innovative R&D capabilities and has developed several cutting-edge technologies, including LOFIC (Lateral Overflow Integration Capacitor) + DCG (Dual Conversion Gain) HDR (High Dynamic Range), which meet the high-quality image requirements of smart-car vision applications. The MAT Series 1MP CMOS image sensor chip offers low power consumption and high dynamic range (HDR). Its effective resolution is 1280 H x 960 V, and it supports HDR image output at up to 60fps @ 120dB. The MAT Series 3MP CIS offers low power consumption, ultra-high dynamic range, on-chip ISP, LFM, and more. Its effective resolution is 1920 H x 1536 V, it supports frame rates up to 60fps, and its dynamic range reaches an industry-leading 140dB+. These chips can provide reliable, high-quality image information for intelligent automotive applications.

"The N25F-SE provides a safety package, which includes a safety manual, safety analysis report and a development interface outline. The N25F-SE and its safety package are effective, high-performance and flexible automotive solutions. They can significantly reduce the time required to design automotive grade SoCs and to comply with the ISO 26262 standard", said Dr. Charlie Su, President and CTO of Andes Technology. "We are very pleased that N25F-SE's IP and safety package efficiently support MetaSilicon shorten the development time for its two automotive-grade chips. We also look forward to more cooperation between the two companies in the future to create more innovative products."

Jianhua Zheng, CTO of MetaSilicon said, “Among the various sensors used in automotive ADAS applications, visual image processing is particularly important. If the image is not accurate and timely enough, it directly leads to errors in the judgment of the back-end algorithm, so HDR performance requirements are extremely high. MetaSilicon's LOFIC+DCG HDR technology can achieve an ultra-high dynamic range of 140dB+ to meet practical application needs in the automotive ADAS field. We are honored to work closely with Andes Technology on two high-performance chips, using the world's first ISO 26262 certified RISC-V core, the N25F-SE, that meets the functional safety standards. As a result, we can shorten the product development time and achieve functional safety goals."


VPS Semi presents a 600MP image sensor


From: http://www.vpssemi.com/NewsDetail?id=72 (Translated to English with Google Translate)

New product release 
VPS800 - New large area array image sensor chip released for wide-area surveillance
 


On September 6, the 24th China International Optoelectronics Expo kicked off at the Shenzhen Baoan International Convention and Exhibition Center. At the Expo, Nanjing VPS Semiconductor Technology Co., Ltd. released a new product for the wide-area monitoring field, the VPS800 large-area-array image sensor. This series of chips has a pixel count of over 600 million, a pixel size of 0.7 microns, and supports 16 ROIs (regions of interest). It can provide imaging at longer distances and over a wider range, expanding the boundaries of existing wide-area monitoring solutions.

The VPS800 large-area-array imaging chip is built around the internally developed vertical charge transfer imaging device (VPS). It has a single-chip pixel count of more than 600 million, which addresses the complexity, large volume, and high power consumption of existing large-area-array camera systems. It achieves long range and a large field of view while reducing size, weight, power consumption, and cost, allowing clear coverage of a wider area while capturing finer details. Currently, it is mainly used in security monitoring, commercial satellites, industrial inspection, etc.

For scenarios that require both large-scale observation and the capture of many fine details, the VPS800 large-area image sensor chip supports long-distance fixed-point shooting. With a single capture, large-scale observation can be achieved while the fine details of the entire image are retained.
 
For scenarios with large target areas and high resolution requirements, such as commercial satellite surveillance, imaging sensors need to be "small" and "light". The VPS800 large-area imaging chip supports a single-chip pixel count of more than 600 million without the need for stitching. It is small in size and light in weight, making it well suited to micro- and nano-satellite applications.
 
It is worth mentioning that the chip supports 16 ROI (region of interest) windows, which allow users to read out data from any selected area and thus reduce the amount of information read. A target can be observed continuously within a single frame, and multiple targets can be tracked synchronously. It can serve as a supplementary solution to existing security monitoring systems, expanding their observation scope and application boundaries.
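To give a feel for why ROI readout matters at this pixel count, here is a rough back-of-the-envelope calculation. The bit depth and per-ROI window size are hypothetical assumptions, not VPS specifications.

```python
FULL_FRAME_PIXELS = 600_000_000   # >600 MP full array, per the announcement
BIT_DEPTH = 10                    # hypothetical ADC bit depth
N_ROIS = 16                       # the chip supports 16 regions of interest
ROI_W, ROI_H = 1920, 1080         # hypothetical per-ROI window size

full_frame_bits = FULL_FRAME_PIXELS * BIT_DEPTH
roi_bits = N_ROIS * ROI_W * ROI_H * BIT_DEPTH

print(f"full frame: {full_frame_bits / 8e9:.2f} GB per frame")
print(f"16 ROIs   : {roi_bits / 8e9:.3f} GB per frame "
      f"(~{full_frame_bits / roi_bits:.0f}x less data to read out)")
```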

Note: This startup was previously featured in a blogpost from 2022: https://image-sensors-world.blogspot.com/2022/01/vps-semiconductor-raises-100m-rmb-in.html


STMicroelectronics announces new ToF Sensors


VD55H1 Low-Noise Low-Power iToF Sensor
-- New design feat, packing 672 x 804 sensing pixels into a tiny chip; it can map a three-dimensional surface in great detail by measuring distance to over half a million points.
-- Lanxin Technology will use the VD55H1 for intelligent obstacle avoidance and high-precision docking in mobile robots; MRDVS will enhance its 3D cameras by adding high-accuracy depth sensing.



VL53L9 dToF 3D Lidar Module
-- New high-resolution sensor with 5cm – 9m ranging distance ensures accurate depth measurements for camera assistance, hand tracking, and gesture recognition.
-- VR systems use the VL53L9 to depict depth more accurately within 2D and 3D imaging, improving mapping for immersive gaming and other applications like 3D avatars.

The two new products will enable safer mobile robots in industrial environments and smart homes, as well as advanced VR applications.



The VL53L9CA is a state-of-the-art dToF 3D LiDAR (light detection and ranging) module with market-leading resolution of up to 2.3k zones and accurate ranging from 5 cm to 10 m.


Full press release:

STMicroelectronics expands into 3D depth sensing with latest time-of-flight sensors

STMicroelectronics (NYSE: STM), a global semiconductor leader serving customers across the spectrum of electronics applications, announced an all-in-one, direct Time-of-Flight (dToF) 3D LiDAR (Light Detection And Ranging) module with market-leading 2.3k resolution, and revealed an early design win for the world’s smallest 500k-pixel indirect Time-of-Flight (iToF) sensor.
 
“ToF sensors, which can accurately measure the distance to objects in a scene, are driving exciting new capabilities in smart devices, home appliances, and industrial automation. We have already delivered two billion sensors into the market and continue to extend our unique portfolio, which covers all types from the simplest single-zone devices up to our latest high-resolution 3D indirect and direct ToF sensors,” said Alexandre Balmefrezol, General Manager, Imaging Sub-Group at STMicroelectronics. “Our vertically integrated supply chain, covering everything from pixel and metasurface lens technology and design to fabrication, with geographically diversified in-house high-volume module assembly plants, lets us deliver extremely innovative, highly integrated, and high-performing sensors.”
 
The VL53L9, announced today, is a new direct ToF 3D LiDAR device with a resolution of up to 2.3k zones. Integrating a dual scan flood illumination, unique in the market, the LiDAR can detect small objects and edges and captures both 2D infrared (IR) images and 3D depth map information. It comes as a ready-to-use low power module with its on-chip dToF processing, requiring no extra external components or calibration. Additionally, the device delivers state-of-the-art ranging performance from 5cm to 10 meters.
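As a rough illustration of what direct ToF ranging over 5 cm to 10 m implies for timing, the generic round-trip relation d = c·Δt/2 can be evaluated at the quoted limits. This is only the textbook relation and says nothing about ST's internal processing.

```python
C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m):
    """Photon round-trip time a direct-ToF sensor must resolve for a given range."""
    return 2.0 * distance_m / C

for d in (0.05, 1.0, 10.0):   # the quoted 5 cm to 10 m ranging span
    print(f"{d:6.2f} m  ->  {round_trip_time(d) * 1e9:7.2f} ns round trip")
# Ranging 5 cm corresponds to ~0.33 ns of round-trip time; 10 m to ~67 ns.
```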
 
VL53L9’s suite of features elevates camera-assist performance, supporting macro up to telephoto photography. It enables features such as laser autofocus, bokeh, and cinematic effects for still and video at 60fps (frame per second). Virtual reality (VR) systems can leverage accurate depth and 2D images to enhance spatial mapping for more immersive gaming and other VR experiences like virtual visits or 3D avatars. In addition, the sensor’s ability to detect the edges of small objects at short and ultra-long ranges makes it suitable for applications such as virtual reality or SLAM (simultaneous localization and mapping).
 
ST is also announcing news of its VD55H1 ToF sensor, including the start of volume production and an early design win with Lanxin Technology, a China-based company focusing on mobile-robot deep-vision systems. MRDVS, a subsidiary company, has chosen the VD55H1 to add high-accuracy depth-sensing to its 3D cameras. The high-performance, ultra-compact cameras with ST’s sensor inside combine the power of 3D vision and edge AI, delivering intelligent obstacle avoidance and high-precision docking in mobile robots.

In addition to machine vision, the VD55H1 is ideal for 3D webcams and PC applications, 3D reconstruction for VR headsets, people counting and activity detection in smart homes and buildings. It packs 672 x 804 sensing pixels in a tiny chip size and can accurately map a three-dimensional surface by measuring distance to over half a million points. ST’s stacked-wafer manufacturing process with backside illumination enables unparalleled resolution with smaller die size and lower power consumption than alternative iToF sensors in the market. These characteristics give the sensors their excellent credentials in 3D content creation for webcams and VR applications including virtual avatars, hand modeling and gaming.

First samples of the VL53L9 are already available for lead customers and mass production is scheduled for early 2025. The VD55H1 is in full production now.

Pricing information and sample requests are available at local ST sales offices. ST will showcase a range of ToF sensors including the VL53L9 and explain more about its technologies at Mobile World Congress 2024, in Barcelona, February 26-29, at booth 7A61.
 


Teledyne acquires Adimec


From Metrology News: https://metrology.news/teledyne-to-acquire-high-performance-camera-specialist-adimec/

Teledyne to Acquire High-Performance Camera Specialist Adimec

Teledyne Technologies has announced that it has entered into an agreement to acquire Adimec Holding B.V. and its subsidiaries (Adimec). Adimec, founded in 1992 and headquartered in Eindhoven, Netherlands, develops customized high-performance industrial and scientific cameras for applications where image quality is of paramount importance.

​“Adimec possesses uniquely complementary technology, products and customers in the shared strategic focus areas of healthcare, global defense, and semiconductor and electronics inspection,” said Edwin Roks, Chief Executive Officer of Teledyne. “For decades and from our own X-ray imaging business headquartered in Eindhoven, I have watched Adimec grow to become a leader in niche applications requiring truly accurate images for precise decision making in time-critical processes.”

Joost van Kuijk, Adimec’s Chief Executive Officer, commented, “It is with great pleasure that we are able to announce publicly that Adimec will become part of Teledyne. Adimec’s success has always been built on ensuring imaging excellence in demanding applications through an unwavering focus on individual customer requirements by our expert engineers and designers.”

Adimec co-Chief Executive Officer Alex de Boer added, “As a leader in advanced imaging technologies for industrial and scientific markets, Teledyne is the perfect company to build further on the strong foundation the founders and management have established over the past three decades. The entire Adimec team is looking forward to contributing to an exciting future with Teledyne while extending technical boundaries to support our customers with cameras – perfectly optimized to their application needs.”



Computational Imaging Photon by Photon




Arizona Optical Sciences Colloquium: Andreas Velten, "Computational Imaging Photon by Photon"

Abstract
Our cameras usually measure light as an analog flux that varies as a function of space and time. This approximation ignores the quantum nature of light, which is actually made of discrete photons, each collected at a sensor pixel at an instant in time. Single photon cameras have pixels that can detect individual photons and the timing of their arrival, resulting in cameras with unprecedented capabilities. Concepts like motion blur, exposure time, and dynamic range that are essential to conventional cameras do not really apply to single photon sensors. In this presentation I will cover computational imaging capabilities enabled by single photon cameras and their applications.

The extreme time resolution of single photon cameras enables the time-of-flight measurements we use for Non-Line-of-Sight (NLOS) imaging. NLOS systems reconstruct images of a scene using indirect light from reflections off a diffuse relay surface. After illuminating the relay surface with short pulses, the returning light is detected with high-time-resolution single photon cameras. We thereby capture video of the light propagation in the visible scene and reconstruct images of hidden parts of the scene.
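To make the reconstruction step a little more concrete, below is a naive ellipsoidal backprojection sketch of the kind used in the NLOS literature for confocal measurements. It is only a minimal illustration under simplifying assumptions (confocal scanning, no occlusion or radiometric falloff modeling), not the speaker's actual pipeline.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def nlos_backproject(transients, relay_pts, voxels, bin_width_s):
    """Naive ellipsoidal backprojection for confocal NLOS imaging (sketch).

    transients: (P, T) photon-count histograms, one per scanned relay-wall point,
                with time measured from the bounce off the wall.
    relay_pts:  (P, 3) positions of the scanned relay points.
    voxels:     (V, 3) candidate hidden-scene positions behind the wall.
    Returns a brightness score per voxel; peaks indicate hidden surfaces.
    """
    P, T = transients.shape
    scores = np.zeros(len(voxels))
    for p in range(P):
        # relay point -> voxel -> relay point path length (confocal assumption)
        d = 2.0 * np.linalg.norm(voxels - relay_pts[p], axis=1)
        t_bin = np.round(d / C / bin_width_s).astype(int)
        valid = t_bin < T
        scores[valid] += transients[p, t_bin[valid]]
    return scores
```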

Over the past decade NLOS imaging has seen rapid progress, and we can now capture and reconstruct hidden scenes in real time and with high image quality. In this presentation I will give an overview of imaging using single photon avalanche diodes, reconstruction methods, and the applications driving NLOS imaging, and provide an outlook for future development.

Bio
Andreas Velten is Associate Professor at the Department of Biostatistics and Medical Informatics and the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison and directs the Computational Optics Group. He obtained his PhD with Prof. Jean-Claude Diels in Physics at the University of New Mexico in Albuquerque and was a postdoctoral associate of the Camera Culture Group at the MIT Media Lab. He has been included in the MIT TR35 list of the world's top innovators under the age of 35 and is a senior member of NAI, OSA, and SPIE, as well as a member of Sigma Xi. He is a co-founder of Onlume, a company that develops surgical imaging systems, and of Ubicept, a company developing single photon imaging solutions.


SolidVue develops solid-state LiDAR chip


From PR Newswire: https://www.prnewswire.com/news-releases/solidvue-koreas-exclusive-developer-of-lidar-sensor-chips-showcasing-world-class-technological-capabilities-302018487.html

SolidVue, Korea's Exclusive Developer of LiDAR Sensor Chips Showcasing World-Class Technological Capabilities 

SEOUL, South Korea, Dec. 19, 2023 /PRNewswire/ -- SolidVue Inc., Korea's exclusive enterprise specialized in CMOS LiDAR (Light Detection and Ranging) sensor IC development, once again proved its global technological prowess by announcing that two LiDAR-related papers have been accepted at the upcoming ISSCC (International Solid-State Circuits Conference) 2024.

Established in 2020, SolidVue focuses on designing SoCs (Systems-on-Chip) for LiDAR sensors that comprehensively assess the shapes and distances of surrounding objects. This is a pivotal technology assured to see significant growth in industries such as, but not limited to, autonomous vehicles and smart cities.

Jaehyuk Choi, the CEO of SolidVue, disclosed the company's development of Solid-State LiDAR sensor chips, aiming to replace all components of traditional mechanical LiDAR with semiconductors. This innovation is expected to reduce volume to as little as one-tenth and costs to around one-hundredth of the aforementioned mechanical LiDAR.

Utilizing its proprietary CMOS SPAD (Single Photon Avalanche Diode) technology, SolidVue's LiDAR sensor chips flawlessly detect even minute amounts of light, enhancing measurement precision. The company focuses on all LiDAR detection ranges (short, medium, long), notably making advancements in the medium-to-long distance sector suited for autonomous vehicles and robotics. By the third quarter of this year, they had developed an Engineering Sample (ES) of a Solid-State LiDAR sensor chip capable of measuring up to 150 meters, and are aiming for mass production by the end of 2024.

Choi emphasized SolidVue's independent development of various core technologies such as SPAD devices, LiDAR sensor architectures, and integrated image signal processors, while also highlighting the advantage of SolidVue's single-chip design in cost and size reduction compared to the multi-chip setup of traditional mechanical LiDAR sensors.

SolidVue's technological prowess has been repeatedly acknowledged at the ISSCC, marking a remarkable achievement for a Korean fabless company. At the forthcoming ISSCC 2024, SolidVue is set to showcase its groundbreaking advancements, including a 50-meter mid-range Solid-State LiDAR sensor that features a resolution of 320x240 pixels and optimized memory efficiency. Additionally, a 10-meter short-range Flash LiDAR will be presented, characterized by its 160x120 pixel resolution and an ultra-low power consumption of 3 µW per pixel. These significant innovations are the result of collaborative efforts between SolidVue, Sungkyunkwan University, and UNIST.

Ahead of full product commercialization, SolidVue's focal point is securing domestic and international clients as well as attracting investments. In January, they plan to make their debut at the 'CES 2024', the world's largest electronics exhibition, by showcasing their 150-m LiDAR sensor chip ES products with the aim of initiating discussions and collaborations with leading global LiDAR suppliers.

Since its establishment, SolidVue has secured a cumulative $6 million in investments. Key Korean VCs such as KDB Bank, Smilegate Investment, Quantum Ventures Korea, Quad Ventures, among others, have participated as financial investors. Additionally, Furonteer, a company specializing in automated equipment for automotive camera modules, joined as SolidVue's first strategic investor.

CEO Choi stated, "Aligning with the projected surge in LiDAR demand post-2026, we are laying the groundwork for product commercialization." He added, "We are heavily engaged in joint research and development with major Korean corporations, discussing numerous LiDAR module supply deals, and exploring collaborations with global companies for overseas market penetration."

SolidVue’s LiDAR sensor chip and demonstration images (Photo=SolidVue)



Semiconductor Engineering article about noise in CMOS image sensors


Semiconductor Engineering published an article on dealing with noise in CMOS image sensors: https://semiengineering.com/dealing-with-noise-in-image-sensors/

Dealing With Noise In Image Sensors

The expanding use and importance of image sensors in safety-critical applications such as automotive and medical devices has transformed noise from an annoyance into a life-threatening problem that requires a real-time solution.

In consumer cameras, noise typically results in grainy images, often associated with poor lighting, the speed at which an image is captured, or a faulty sensor. Typically, that image can be cleaned up afterward, such as reducing glare in a selfie. But in cars, glare in an ADAS image system can affect how quickly the brakes are applied. And in vehicles or medical devices, systems are so complex that external effects can affect images, including heat, electromagnetic interference, and vibration. This can be particularly problematic in AI-enabled computer vision systems where massive amounts of data need to be processed at extremely high speeds. And any of this can be affected by aging circuits, due to dielectric breakdown or changes in signal paths due to electromigration.

Thresholds for noise tolerance vary by application. “A simple motion-activated security camera or animal-motion detection system at a zoo can tolerate much more noise and operate at much lower resolution than a CT scanner or MRI system used in life-saving medical contexts,” said Brad Jolly, senior applications engineer at Keysight. “[Noise] can mean anything that produces errors in a component or system that acquires any form of image, including visible light, thermal, X-ray, radio frequency (RF), and microwave.”

Tolerance is also determined by human perception, explained Andreas Suess, senior manager for novel image sensor systems in OmniVision’s Office of the CTO. “Humans perceive an image as pleasing with a signal-to-noise ratio (SNR) of >20dB, ideally >40dB. But objects can often be seen at low SNR levels of 1dB or less. For computational imaging, in order to deduce what noise level can be accepted one needs to be aware of their application-level quality metrics and study the sensitivity of these metrics against noise carefully.”

Noise basics for imaging sensors
No noise is ideal, but it’s an unrealistic goal. “With an image sensor, noise is inevitable,” said Isadore Katz, senior marketing director at Siemens Digital Industries Software. “It’s when you’ve got a pixel value that’s currently out of range with respect to what you would have expected at that point. You can’t design it out of the sensor. It’s just part of the way image sensors work. The only thing you can do is post-process it away. You say to yourself, ‘That’s not the expected value. What should it have been?’”

Primarily, noise is categorized as fixed-pattern noise and temporal noise, and both explain why engineers must cope with its inevitability. “Temporal noise is a fundamental process based on the quantization of light (photons) and charge (electrons),” said Suess. “When capturing an amount of light over a given exposure, one will observe a varying number of photons, which is known as photon shot noise, a fundamental noise process present in all imaging devices.” In fact, even without the presence of light, the dark signal, also known as dark current, can exhibit shot noise.
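Because photon arrivals follow Poisson statistics, shot-noise-limited SNR grows only as the square root of the collected signal. A short simulation of that standard model (illustrative only, not tied to any particular sensor) makes the scaling concrete and ties it back to the dB thresholds quoted above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson photon arrivals: a pixel collecting a mean of N photons has a
# standard deviation of sqrt(N), so shot-noise-limited SNR = sqrt(N).
for mean_photons in (10, 100, 1_000, 10_000):
    samples = rng.poisson(mean_photons, size=100_000)
    snr = samples.mean() / samples.std()
    print(f"mean {mean_photons:6d} photons: SNR ≈ {snr:6.1f} "
          f"({20 * np.log10(snr):5.1f} dB)")
```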

Worse, even heat alone can cause noise, which can cause difficulties for ADAS sensors under extreme conditions. “An image sensor has to work over the brightest and darkest conditions; it also has to work at -20 degrees and up to 120 degrees,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “All CMOS sensors run slower and get noisier when it’s hotter. They run faster, a little cleaner, when it’s cold, but only up to a certain point. When it gets too cold, they start to have other negative effects. Most of these ICs self-heat when they’re running, so noise gets inserted there too. The only way to get rid of that is to filter it out digitally.”

Fixed-pattern noise stems from process non-uniformities as well as design choices, and it can cause offset, gain, or settling artifacts. It can manifest itself as variations in quantum efficiency, offset, or gain, as well as read noise. Mitigating fixed-pattern noise requires effort at the process, device, circuit design, and signal processing levels.
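At the signal-processing level, one widely used mitigation (a generic technique, not attributed to any vendor quoted here) is a two-point correction built from calibration dark and flat-field frames; a minimal sketch:

```python
import numpy as np

def two_point_fpn_correction(raw, dark, flat):
    """Classic per-pixel offset/gain (DSNU/PRNU) correction.

    raw:  captured frame to correct
    dark: average of many frames taken with no light (per-pixel offset, DSNU)
    flat: average of many frames of a uniform target (per-pixel gain, PRNU)
    """
    gain = flat - dark
    gain = np.where(gain > 0, gain, 1.0)      # guard against dead pixels
    corrected = (raw - dark) / gain           # remove offset, equalize gain
    return corrected * gain.mean()            # restore the overall signal level
```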

Fig. 1: Noise issues and resolution. Source: Flex Logix

In addition, noise affects both digital and analog systems. “Digital systems always start by digitizing data from some analog source, so digital systems start with all the same noise issues that analog systems do,” Jolly said. “In addition, digital systems must deal with quantization and pixelation issues, which always arise whenever some analog signal value is converted into a bit string. If the bits are then subjected to a lossy compression algorithm, this introduces additional noise. Furthermore, the increase in high-speed digital technologies such as double data rate memory (DDRx), quadrature amplitude modulation (QAM-x), non-return-to-zero (NRZ) line coding, pulse amplitude modulation (PAM), and other complex modulation schemes means that reflections and cross-channel coupling introduce noise into the system, possibly to the point of bit slipping and bit flipping. Many of these issues may be automatically handled by error correcting mechanisms within the digital protocol firmware or hardware.”
 
Noise can be introduced anywhere along the imaging chain and create a wide range of problems. “For example, the object being imaged may have shadows, occlusions, internal reflections, non-coplanarity issues, parallax, or even subtle vibrations, especially in a manufacturing environment,” Jolly explained. “In such situations, noise can complicate inspections. For example, a multi-layer circuit board being imaged with X-ray technology could have solder joint shadows if there are overlapping grid array components on the top and bottom of the board.”
 
Variability in the alignment between the image sensor and the subject of the image — rotational or translational offset, and planar skew — may add to the variability. And thermal gradients in the gap between the subject and the sensor may introduce noise, such as heat shimmer on a hot road. Low light and too-fast image capture also may introduce noise.
 
There are other issues to consider, as well. “A lens in the imaging chain may introduce noise, including chromatic aberration, spherical aberration, and errors associated with microscopic dust or lens imperfections. The lens controls the focus, depth of field, and focal plane of the image, all of which are key aspects of image acquisition. Finally, the imaging sensing hardware itself has normal manufacturing variability and thermal responses, even when operating in its specified range. A sensor with low resolution or low dynamic range is also likely to distort an image. Power integrity issues in the lines that power the sensor may show up as noise in the image. Finally, the camera’s opto-electronic conversion function (OECF) will play a key role in image quality,” Jolly added.
 
External sources of noise also can include flicker, which needs to be resolved for clear vision.

Fig. 2: Flicker from LED traffic lights or traffic signs poses a serious challenge for HDR solutions, preventing driver-assistance and autonomous driving systems from being able to correctly detect lighted traffic signs. Source: OmniVision

Imaging basics for ADAS 

While noise would seem to be a critical problem for ADAS sensors, given the potential for harm or damage, it’s actually less of an issue than for something like a consumer camera, where out-of-range pixels can ruin an image. ADAS is not concerned with aesthetics. It focuses on a binary decision — brake or not brake. In fact, ADAS algorithms are trained on lower-resolution images, and ignore noise that would be a product-killer in a consumer camera.

For example, to find a cat in the middle of an image, first the image is “segmented,” a process in which a bounding box is drawn around a potential object of interest. Then the image is fed into a neural net, and each bounding region is evaluated. The images are labeled, and then an algorithm can train itself to identify what’s salient. “That’s a cat. We should care about it and brake. It’s a skunk. We don’t care about it. Run it over,” said Katz. That may sound like a bad joke, but ADAS algorithms actually are trained to assign lower values to certain animals.

“It is about safety in the end, not so much ethics,” Katz said. “Even if someone does not care about moose, the car still has to brake because of the danger to the passengers. Hitting the brakes in any situation can pose a risk.” But higher values are assigned to cats and dogs, rather than skunks and squirrels.

If an object is fully or partly occluded by another object or obscured by light flare, it will require more advanced algorithms to correctly discern what it is. After the frame is received from the camera and has gone through basic image signal processing, the image is then presented to a neural net.

“Now you’ve left the domain of image signal processing and entered the domain of computer vision, which starts with a frame or sequence of frames that have been cleaned up and are ready for presentation,” said Katz. “Then you’re going to package those frames up and send them off to an AI algorithm for training, or you’re going to take those images and then process them on a local neural net, which will start by creating bounding boxes around each of the artifacts that are inside the frame. If the AI can’t recognize an object in the frame it’s examining, it will try to recognize it in the following or preceding frames.”

In a risky situation, the automatic braking system has about 120ms to respond, so all of this processing needs to happen within the car. In fact, there may not even be time to route from the sensor to the car’s own processor. “Here are some numbers to think about,” said Katz. “At 65 mph, a car is moving at 95 feet per second. At 65 mph, it takes about 500 feet to come to a complete stop. So even at 32.5 mph in a car, it will travel 47 feet in 1 second. If the total round trip from sensor to AI to brake took a half-second, you would be 25 feet down the road and still need to brake. Now keep in mind that the sensor is capturing images at about 30 frames per second. So every 33 milliseconds, the AI has to make another decision.”
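The latency arithmetic in that quote is easy to reproduce; here is a small worked example using the same figures (unit conversion only, no new data).

```python
MPH_TO_FPS = 5280 / 3600  # feet per second per mile per hour

def feet_travelled(speed_mph, seconds):
    """Distance covered while the sensing/processing pipeline is still deciding."""
    return speed_mph * MPH_TO_FPS * seconds

print(f"65 mph = {65 * MPH_TO_FPS:.1f} ft/s")                 # ~95 ft/s
print(f"0.5 s of latency at 32.5 mph covers "
      f"{feet_travelled(32.5, 0.5):.1f} ft")                  # ~24 ft
print(f"frame period at 30 fps: {1000 / 30:.1f} ms")          # ~33 ms
```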

In response, companies are using high-level synthesis to develop smart sensors, in which an additional die — with all the traditional functions of an image signal processor (ISP), such as noise reduction, deblurring, and edge detection — is sandwiched directly adjacent to the sensor.

“It’s now starting to include computer vision capability, which can be algorithmic or AI-driven,” said Katz. “You’ll start to see a smart sensor that has a neural net built inside. It could even be a reprogrammable neural net, so you can make updates for the different weights and parameters as soon as it gets smarter.”

If such a scheme succeeds, it means that a sensor could perform actions locally, allowing for real-time decisions. It also could repackage the information to be stored and processed in the cloud or car, for later training to increase accurate, rapid decision-making. In fact, many modern ISPs can already dynamically compensate for image quality. “For example, if there is a sudden change from bright light to low light, or vice-versa, the ISP can detect this and change the sensor settings,” he said. “However, this feedback occurs well before the image gets to the AI and object detection phase, such that subsequent frames are cleaner going into the AI or object detection.”

One application that already exists is driver monitoring, which presents another crucial noise issue for designers. “The car can have the sun shining right in your face, saturating everything, or the complete opposite where it’s totally dark and the only light is emitting off your dashboard,” said Bethurem. “To build an analog sensor and the associated analog equipment to have that much dynamic range and the required level of detail, that’s where noise is a challenge, because you can’t build a sensor of that much dynamic range to be perfect. On the edges, where it’s really bright or over-saturated bright, it’s going to lose quality, which has to get made up. And those are sometimes the most dangerous times, when you want to make sure the driver is doing what they’re supposed to be doing.”

AI and noise

The challenges of noise and the increasing intelligence of sensors have also attracted the attention of the AI community.

“There are already AI systems capable of filling in occluded parts of a digital image,” said Tony Chan Carusone, CTO at Alphawave Semi. “This has obvious potential for ADAS. However, to perform this at the edge in real-time will require new dedicated processing elements to provide the immediate feedback required for safety-critical systems. This is a perfect example of an area where we can expect to see new custom silicon solutions.”

Steve Roddy, chief marketing officer at Quadric, notes that this path is already being pioneered. “Look at Android’s/Google’s ‘Magic Eraser’ functionality in phones – quickly deleting photo-bombers and other background objects and filling in the blanks. Doing the same on an automotive sensor to remove occlusions and ‘fill in the blanks’ is a known solved problem. Doing it in real time is a simple compute scaling problem. In 5nm technology today, ~10mm2 can get you a full 40 TOPs of fully programmable GPNPU capability. That’s a tiny fraction of the large (> 400 mm2) ADAS chips being designed today. Thus, there’s likely to be more than sufficient programmable GPNPU compute capability to tackle these kinds of use cases.”

Analyzing noise 

Analyzing noise in image sensors is a challenging and active area of research that dates back more than 50 years. The general advice from vendors is to talk to them directly to determine if their instrumentation aligns with a project’s specific needs.

“Noise is of a lot of interest to customers,” said Samad Parekh, product manager for analog/RF simulation at Synopsys. “There are many different ways of dealing with it, and some are very well understood. You can represent the noise in a closed form expression, and because of that you can very accurately predict what the noise profile is going to look like. Other mechanisms are not as well understood or are not as linear. Because those are more random, there’s a lot more effort required to characterize the noise or design with that constraint in mind.”

Best practices 

Keysight’s Jolly offered day-to-day advice for reducing and managing noise in image sensor projects:

  • Clearly define the objectives of the sensor as part of the overall system. For example, a slow, low-resolution thermal imager or vector network analyzer may reveal information about subcutaneous or subdural disease or injury that would be invisible to a high-resolution, high-speed visible light sensor. Work with your component and module vendors to understand what noise analysis and denoising they have already done. You will learn a lot and be able to leverage a lot of excellent work that has already been accomplished. Also, consider image noise throughout the total product life cycle and use simulation tools early in your design phase to minimize issues caused by sub-optimal signal integrity or power integrity.
  • Analyze the problem from the perspective of the end user. What are their objectives? What are their concerns? What skills do they possess? Can they make appropriate interventions and modifications? What is their budget? It may turn out, for example, that a fully automated system with a higher amount of noise may be more appropriate for some applications than a more complex system that can achieve much lower noise.
  • Become familiar with camera, optical, and imaging standards that are available, such as ISO 9358, 12232, 12233, 14524, and 15739, as well as European Machine Vision Association (EMVA) 1288.
  • Investigate the latest research on the use of higher mathematics, statistics, and artificial intelligence in de-noising. Some of these techniques include expectation maximization estimation, Bayesian estimation, linear minimum mean square error estimation, higher-order partial differential equations, and convolutional neural networks (a minimal sketch of a local LMMSE filter follows this list).
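To make the linear minimum mean square error item concrete, here is a minimal local LMMSE (Lee/Wiener-style) filter, assuming additive noise with a known variance; it is a generic illustration of the estimator, not a recommendation from Keysight.

import numpy as np
from numpy.lib.stride_tricks import sliding_window_view

def lmmse_denoise(img, noise_var, win=5):
    # Local LMMSE estimate: x_hat = mu + s2 / (s2 + n2) * (y - mu),
    # where mu, s2 are the local mean and signal variance and n2 the noise variance.
    pad = win // 2
    padded = np.pad(img.astype(float), pad, mode="reflect")
    windows = sliding_window_view(padded, (win, win))
    local_mean = windows.mean(axis=(-1, -2))
    local_var = windows.var(axis=(-1, -2))
    signal_var = np.maximum(local_var - noise_var, 0.0)
    gain = signal_var / np.maximum(signal_var + noise_var, 1e-12)
    return local_mean + gain * (img - local_mean)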

Future approaches 

While current ADAS systems may tolerate more noise than other forms of imaging, that may not be the case in the future. A greater variety of use cases will push image sensors towards higher resolutions, which in turn will require more localized processing and noise reduction.

“A lot of the image processing in the past was VGA, but applications like internal cabin monitoring, such as eye-tracking the driver and passengers to recognize what’s going on inside the cabin — including monitoring driver alertness or whether someone got left behind in the backseat — are going to start to drive us towards higher-resolution images,” Katz said. “In turn, that’s going to start to mandate increasing levels of noise reduction, dealing with image obstructions, and with being able to process a lot more data locally. When you go from VGA to 720 to 1080 up to 4K, you’re increasing the number of pixels you have to operate with by 4X. Every one of these demands more and more localized processing. That’s where we’ll end up going.”
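The pixel-count scaling is easy to tabulate; the short sketch below uses nominal frame sizes (an assumed mapping of the resolution names to standard dimensions).

# Pixels per frame and raw pixel rate at 30 fps for nominal frame sizes.
resolutions = {"VGA": (640, 480), "720p": (1280, 720), "1080p": (1920, 1080), "4K": (3840, 2160)}
for name, (w, h) in resolutions.items():
    pixels = w * h
    print(name, pixels, pixels * 30)    # pixels per frame, pixels per second at 30 fps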

Go to the original article...

Talk on digital camera misunderstandings and HDR

Image Sensors World        Go to the original article...

Wayne Prentice presented a talk titled "Digital Camera Myths, Mis-statements and Misunderstandings" at the NY chapter meeting of IS&T (Society for Imaging Science and Technology) on 17 Jan. 2024.

Abstract: The digital camera system is deceptively complex.  Understanding camera operation/design requires some knowledge of the parts:  photometry, radiometry, optics, sensor physics, sensor design, signal processing, image processing, color science, statistics, human perception, and image/video encoding. With all these parts, it is easy to miss something. This talk was inspired by interactions with co-workers and clients.  It has been my experience that some subtle, yet important points are often missed and can lead to suboptimal product and design decisions that could be avoided. The goal of this talk is to fill in some of those gaps.


Another version of the talk at RIT imaging science weekly seminar on Feb 7, 2024:

 
CIS Weekly Seminar: Wayne Prentice - Digital Camera Myths, Misstatements, and Misunderstandings

Wayne Prentice also gave a talk in 2022 on HDR at the NY IS&T chapter meeting:

 
High Dynamic Range (HDR) Imaging: Theory and Practical Considerations


Bio: Wayne has been working in the imaging industry for over 35 years. He has a BSEE from Clarkson University and a Master's in Imaging Science from RIT. Wayne has worked on imaging equipment ranging from x-ray, CAT scanners, MRI, and extra-terrestrial imaging to digital cameras. Much of Wayne's digital camera experience came from 17 years at Kodak R&D in digital camera product development. He holds 16 US patents in digital imaging. At Kodak, Wayne became the lead image scientist and manager of the Digital Camera R&D group. He was responsible for competitive testing, image quality testing, new feature development, and the image science aspects of product commercialization. Wayne has worked as an independent contractor over the past 5 years, providing solutions to a wide range of imaging challenges, mostly in the areas of custom camera applications, computer vision, and HDR imaging.

Go to the original article...

A message from IEEE Sensors 2024 conference co-chair

Image Sensors World        Go to the original article...

In my role as Industrial Co-chair of the IEEE SENSORS 2024 conference, to be held this October in Kobe, Japan, I want to invite the participation of the image sensor community. SENSORS is a vibrant conference – 1000 attendees in Vienna for SENSORS 2023 – covering sensor devices and systems. I can testify that there is much overlap between the issues it addresses and those of the image sensor field, but for historical reasons it appears to be a conference that the image sensor community has not had on its radar. I, along with my Industrial Co-Chair Sozo Yokogawa of SONY Semiconductor, would like to change this.

Our proposal is to highlight image sensor technology at the conference through a combination of focused sessions, keynote speakers, a workshop, a tutorial, and networking opportunities. I would like to use as a model the successful efforts I have been involved in over many years with the technical committees at IEDM and ISSCC. To accomplish this, we would like to reach out to our image sensor community to help promote this goal through networking and through volunteering, informally or formally.

The sponsoring IEEE Sensors Council, of which I am an AdCom member, has two initiatives of note related to this proposal. One is to increase industrial involvement in a way that prioritizes healthy technical interaction among industry, academia, and laboratories. The other is to develop close ties between conference participation and the high-impact, council-sponsored Sensors Journal and Sensors Letters, enabling both the publication of conference work in the journals and a path for papers accepted in the journals to also be presented at SENSORS.

I have discussed this informally within our community over the last year and received positive comments. I look forward to feedback and, most importantly, to support of this goal. I look forward to hearing from you and to seeing many of you in Kobe.


Dan McGrath
TechInsights Inc.
AdCom member, IEEE Solid-State Circuits Society & IEEE Sensors Council
dmcgrath@ieee.org

 

Go to the original article...

More videos: Vision Research, Sick IVP, Teledyne e2v, onsemi

Image Sensors World        Go to the original article...

Vision Research publishes an EMVA 1288 webinar on camera performance evaluation:


SICK IVP explains recent image sensor innovations:


Teledyne e2v talks about selecting and matching the optics to an image sensor:



Onsemi explains its eHDR approach:


Go to the original article...

NIST develops SNSPD detector array for mid-IR

Image Sensors World        Go to the original article...

Phys.org covered a recently published paper titled "A 64-pixel mid-infrared single-photon imager based on superconducting nanowire detectors" by a team from NIST in the journal Applied Physics Letters. 

Abstract:

A large-format mid-infrared single-photon imager with very low dark count rates would enable a broad range of applications in fields like astronomy and chemistry. Superconducting nanowire single-photon detectors (SNSPDs) are a mature photon-counting technology as demonstrated by their figures of merit such as high detection efficiencies and very low dark count rates. However, scaling SNSPDs to large array sizes for mid-infrared applications requires sophisticated readout architectures in addition to superconducting materials development. In this work, an SNSPD array design that combines a thermally coupled row-column multiplexing architecture with a thermally coupled time-of-flight transmission line was developed for mid-infrared applications. The design requires only six cables and can be scaled to larger array sizes. The demonstration of a 64-pixel array shows promising results for wavelengths between 3.4 μm and 10 μm, which will enable the use of this single-photon detector technology for a broad range of new applications.

From phys.org: https://phys.org/news/2024-01-wavelength-scientific-exploration-photon-detectors.html

NIST researchers have unveiled a new kind of single-photon detector array that can identify individual particles of light (photons). It's useful for spectroscopy, where scientists observe how molecules absorb different colors (or wavelengths) of light. Each molecule has its own color fingerprint on the light spectrum.

This particular detector can catch single photons in the mid-infrared. Here's how the array works: multiple super-cold detectors are connected to one another in a grid of sorts, with an electrical current flowing through. When a photon strikes one of the detectors, it creates a hot spot that acts as a dam, blocking the current for a short amount of time.

The researchers developed a new technique to determine where, along the columns and rows, the hot spot is. From there, they can create single-photon pictures.
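In generic delay-line (time-of-flight) readouts of this kind, the hit position along a line is recovered from the difference in pulse arrival times at its two ends, and repeated hits are histogrammed into an image. The sketch below illustrates only that general principle; it makes no claim about the actual parameters of the NIST readout, and the 8 x 8 layout is simply assumed for the 64 pixels.

import numpy as np

def hit_position(t_left_ns, t_right_ns, line_length_m, velocity_m_per_ns):
    # Position along a delay line, measured from the left end, given the
    # pulse arrival times at both ends and the propagation velocity.
    return 0.5 * (line_length_m - velocity_m_per_ns * (t_right_ns - t_left_ns))

def accumulate_image(row_col_events, n_rows=8, n_cols=8):
    # Histogram decoded (row, column) hits into a single-photon image.
    image = np.zeros((n_rows, n_cols), dtype=np.int64)
    for row, col in row_col_events:
        image[row, col] += 1
    return image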

The whole setup is challenging because mid-infrared waves are longer and have less energy to cause the hot spots, compared to visible light, for example. But the scientists have a few tricks up their sleeve and used them to make it work.


Go to the original article...

Canon’s twisted photodiodes improve autofocus

Image Sensors World        Go to the original article...

IEEE Spectrum has a recent article discussing a 2023 IEDM paper from Canon.

Paper: Shirahige et al., "40-5. Cross Dual-Pixel Twisted-Photodiode Image Sensor for All-Directional Auto Focus", IEDM 2023.

Spectrum article: https://spectrum.ieee.org/autofocus-canon-twisted-diode

Diodes at Right Angles Double Autofocus Capacity: Canon twists photosensor rules to build new tech from familiar parts

Figure from the Spectrum article: images of a rotating object using Canon's twisted-photodiode autofocus [middle column] and a standard dual-pixel autofocus [right column]. The gray column is the raw image, and the top and bottom rows were taken at different times. Courtesy: Canon
 

In 2013, Canon introduced its first dual-pixel autofocus, a technology that allows almost every pixel in a photo sensor to help focus the image it takes. Now Canon researchers say they’ve developed a new improvement on their previous improvement to autofocus tech. And this new approach finds its focus faster, better, and in lower light—without requiring new components and technologies to be invented first. It simply involves one small twist.

Shirahige said they have developed a new image sensor whose photodiodes are perpendicular to each other. This “cross dual-pixel twisted-photodiode,” they note, performs better than autofocus sensors on the market today, [which place] two photodiodes under a shared lens so the sensor can detect when incoming light on both diodes is in phase, and therefore in focus. [An even earlier technique was to] sample [a few image] pixels and adjust the camera lens based on the contrast in the image, a slower method. [In any case, the] focusing pixels could not record image data, so there was always a trade-off between autofocusing ability and image quality. The dual-pixel autofocus approach instead made it possible for almost every pixel in the sensor to contribute to focusing the lens ahead of shooting, and then to contribute information to the final photo. The advantages included speed, better focus in low-light situations, and better focus across a greater fraction of the image.
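The phase-detection principle behind dual-pixel autofocus can be sketched in one dimension: compare the sub-images seen by the left and right photodiodes along a row of pixels and find the shift that best aligns them; zero shift means the region is in focus. The sketch below is a generic illustration of that idea, not Canon's algorithm.

import numpy as np

def pdaf_shift(left, right, max_shift=8):
    # left, right: 1-D numpy arrays from the two sub-pixel photodiodes of a line.
    # Returns the shift (in pixels) that maximizes their overlap correlation;
    # a shift of zero indicates the region is in focus.
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        a = left[max(0, s): len(left) + min(0, s)]
        b = right[max(0, -s): len(right) + min(0, -s)]
        score = float(np.dot(a - a.mean(), b - b.mean())) / len(a)
        if score > best_score:
            best_shift, best_score = s, score
    return best_shift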

However, these multipixel photodiodes have a disadvantage: the arrangement of photodiodes favors light on one axis at the cost of the other. [...] Canon’s new structure, which they call a twisted-photodiode image sensor, stacks two identical photodiodes, one oriented to capture horizontal patterns and the other rotated ninety degrees to capture vertical patterns. Because the horizontally and vertically oriented photodiodes are the same type of components, the data each generates requires no extra processing time or power compared with any other diode in the system. So the overall autofocus speed is higher. The orthogonal-diode arrangement, by virtue of its comparative simplicity, also achieves faster readouts than more complex quadruple or other elaborate photodiode structures.

Canon’s team reported that their system also captures the electrons transferred from the photodiodes much faster, handling as many as 121,000 electrons with the same lag as previous photodiodes, more than double the capacity of comparable earlier systems.

[Canon did] not provide an estimate of when the technology might appear in commercial systems.

Go to the original article...

New Videos from onsemi, AI Storm, ST, EPFL

Image Sensors World        Go to the original article...

Onsemi emphasizes its internal fab capabilities:

 

At CES, AI Storm describes the analog AI approach contained in its image sensor:
 


Also at CES, ST presents its new 3D sensing approach:
 

 

EPFL presents a combination of a spiking neuron processor and a SPAD sensor:

EPFL also presents a burst SPAD imager:


Go to the original article...

NIT and INSP collaboration on quantum dot SWIR imager

Image Sensors World        Go to the original article...

A video on the different stages in the development of sensors for infrared cameras from Institut des NanoSciences de Paris (INSP):

  

A press release from November 2023 related to this technology:

NIT and INSP will exhibit the world’s first HgTe CQD SWIR camera during the Forum Innovation Defense held in Paris on 23-28 November.

NIT (New Imaging Technologies) and INSP (Institute of Nanosciences of Paris) are proud to announce the debut of the world’s first Short-Wave Infrared (SWIR) camera featuring an innovative HgTe (Mercury Telluride) Quantum Dot focal plane array sensor. This groundbreaking technological achievement will be showcased during the Forum Innovation Defense, taking place in Paris from November 23 to November 28, 2023.

The collaboration between NIT and INSP has resulted in a pioneering SWIR infrared camera, utilizing the advanced HgTe quantum dot sensor technology, which promises unprecedented capabilities in defense and security applications.

Selected by the French Ministry of Defense, NIT, and INSP will present the culmination of years of dedicated research and development efforts in this revolutionary camera. The development of the CQD (Colloidal Quantum Dot) sensor was made possible through funding provided by the French Defense Procurement Agency (DGA) and the National Research Agency, part of a rigorous three-year R&D program.

Go to the original article...

Optical Imaging and Photography Book Announcement

Image Sensors World        Go to the original article...

De Gruyter published a second edition of "Optical Imaging and Photography" book by Ulrich Teubner and Hans Josef Brückner

Different imaging systems and sensors are reviewed, as well as lenses and aberrations, image intensification, and image processing. The second, enlarged edition has been updated with recent developments and complemented by the topic of smartphone camera photography.

Go to the original article...

SWIR Systems Announces Handheld Mobile Camera

Image Sensors World        Go to the original article...

SWIR Vision Systems Announces Acuros GO 6 MP Handheld SWIR Camera Empowering Mobile SWIR Imaging with Cutting-Edge CQD Sensor Technology


Durham, North Carolina, January 22, 2024 — SWIR Vision Systems, a leader in short-wavelength infrared (SWIR) imaging technology, proudly introduces the Acuros® GO 6 MP SWIR camera, a groundbreaking portable, handheld mirrorless camera featuring the company's high-resolution Colloidal Quantum Dot SWIR sensor technology.

The Acuros GO provides users with unprecedented flexibility, portability, and performance for diverse imaging applications and markets including defense, law enforcement, first responder applications, agricultural imaging, industrial vision, scientific, and consumer photography. 

The SWIR capabilities of the Acuros GO make it valuable for imaging through degraded visual environments such as rain, snow, haze, smog, smoke, and dust. The reduced atmospheric scattering of SWIR photons enables exceptional long-range imaging, allowing photographers to capture sweeping panoramas and immersive vistas. By combining the camera's broad spectral response with optical filters, the camera can be used for detecting and imaging moisture, sugar content, hydrocarbons, and other infrared chemical signatures.

The Acuros GO is a ruggedized, IP67-rated camera with a mirrorless design, offering versatility and durability for on-the-go imaging needs.

Key Features of the Acuros GO 6 MP Mirrorless Camera include:
  • 3064 x 2040 pixel resolution using the new 7µm pitch Acuros CQD sensor
  • Broadband spectral sensitivity from 400 nm to 1700 nm
  • Battery powered operation
  • Global snapshot shutter design with video frame rates of 30 fps
  • Digital shutter speeds up to 1/100,000 s (10 µs) to capture high-speed events without motion blur
  • Automatic Gain Control (AGC), Auto Exposure (AE), and dynamic sensor calibrations (NUCs) for high-quality image capture across various light intensities and environmental conditions
Ethan Klem, SWIR Vision’s Chief Technology Officer commented, “The Acuros GO brings portable infrared imaging to vision professionals and photography enthusiasts looking to leverage the capabilities of near and shortwave infrared imaging.”

For more information about the Acuros GO 6 MP SWIR Camera and SWIR Vision Systems' CQD sensor technology, please visit  www.swirvisionsystems.com/acuros-go-camera/.
 

The Camera:

 
Acuros GO 6 MP Camera Front

 
Acuros GO 6 MP Camera Back

Acuros GO 6 MP Camera Specification

 

Go to the original article...

Hokuyo solid-state LiDAR uses Lumotive’s beamsteering technology

Image Sensors World        Go to the original article...

From: https://hokuyo-usa.com/resources/blog/pioneering-autonomous-capabilities-solid-state-3d-lidar

Hokuyo YLM-X001

Autonomous technologies are proliferating across industries at breakneck speed. Various sectors, like manufacturing, agriculture, storage, freight, etc., are rushing to embrace robotics, automation, and self-driving capabilities.

At the helm of this autonomous transformation is LiDAR, the eyes that allow technologies to perceive and understand their surroundings. LiDAR is like a hawk scanning the landscape with sharp vision, giving clarity and insight into what stands before it. Additionally, market research supports claims of increasing LiDAR adoption, anticipating that the global LiDAR market will reach USD 5.35 billion by 2030.

While spinning mechanical LiDAR sensors have paved the way, acting as the eyes of autonomous systems, they remain too bulky, delicate, and expensive for many real-world applications. However, new solid-state 3D LiDAR is here to change the game. These LiDARs pack thousands of tiny, durable laser beams onto a single chip to provide unmatched reliability and affordability.

How YLM-X001 3D LiDAR Range Sensor is Transforming Scanning Capabilities
The YLM-X001 outdoor-use 3D LiDAR by Hokuyo sets new standards with groundbreaking features. The range sensor has a small form factor, with dimensions of 119 (W) x 85 (D) x 79 (H) mm, allowing it to be integrated seamlessly into a vehicle. Despite the small size, it boasts a scanning range of 120° horizontally and 90° vertically, so it can scan a larger scene and provide data in real time to avoid collisions with any object.

Furthermore, at the heart of this LiDAR range sensor is the Light Control Metasurface (LCM) beam-steering technology patented by Lumotive, Inc. The jointly developed light detection and ranging sensor steers light using the deflection angle of liquid crystals, without relying on mechanical parts. This digital scanning technology combines a VCSEL-based line laser with liquid-crystal deflection, enabling the LiDAR to perform efficient, high-resolution 3D object recognition.

The LCM not only eliminates mechanical components but also helps reduce multipath interference and inter-sensor interference. Reducing both yields a level of measurement stability that was previously unattainable with mechanical LiDARs.

The YLM-X001 3D LiDAR range sensor offers dynamic digital scanning, providing stable distance accuracy even in the presence of multipath and LiDAR-to-LiDAR interference. It can measure the distance to stationary and moving objects in the direction of travel and on the road surface via continuous, dynamic scanning.

Notable Features of YLM-X001
New and market-leading features are packed inside this LiDAR, making it a better choice than mechanical LiDARs.

  • ROS2 Compatible: A globally accepted standard software platform with open-source libraries helping you to develop and run robotics applications efficiently.
  • Ethernet 1000BASE-T: The interface is Ethernet 1000BASE-T compatible, ensuring fast, precise, and stable integration into various robotic systems.
  • 0.5m to 7m Detection Range: The wide range makes it suitable for close and distant monitoring.
  • Distance x 0.5% Deviation: Distance accuracy is within 0.5% of the measured distance; at a distance of 5 m under 100,000 lx illumination, the LiDAR provides an accuracy of 25 mm (see the sketch after this list).
  • 10Hz or More Frame Rate: YLM-X001 delivers real-time data for dynamic environments with a frame rate of 10 Hz or more. It offers QVGA (320 x 240) resolution in standard mode and VGA (640 x 480) in high-resolution mode. The angular resolution is 0.375° or less (0.188° in high-resolution mode) for detailed and accurate scanning.
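A quick check of the figures quoted in the list above; the values are taken directly from the post, and nothing beyond it is assumed.

def range_error_mm(distance_m, deviation_pct=0.5):
    # "Distance x 0.5%" deviation expressed in millimetres.
    return distance_m * 1000.0 * deviation_pct / 100.0

def points_per_second(width, height, frame_rate_hz):
    return width * height * frame_rate_hz

print(range_error_mm(5.0))                 # 25.0 mm at 5 m, matching the quoted accuracy
print(points_per_second(320, 240, 10))     # 768,000 points/s in standard mode at 10 Hz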

Using 3D LiDAR in Real World Applications
The YLM-X001 finds its stride in various applications, making it an invaluable asset in robotics.

AGV/AMR Integration
Our 3D LiDAR sensors enhance AGV/AMR navigation and obstacle detection precision. They continuously scan the environment, providing real-time data, ideal for autonomous vehicles in dynamic environments.
Additionally, fork trucks can utilize 3D LiDAR for accurate detection of container and pallet entrances, for path planning, and for ensuring accurate positioning of the forklift.

Service Robot Operations
Robots with the capabilities of 3D LiDAR will have an enhanced framework for avoiding obstacles and monitoring road surface conditions. Whether navigating complex indoor or outdoor spaces, these robots can adapt to changing conditions with unmatched accuracy.

Enhance Autonomous Mobility with Hokuyo YLM-X001 3D LiDAR
As industries embrace autonomous technology, the need for accurate range scanning sensors increases. Solid-state LiDARs offer a small form factor and precise measurements, becoming an ideal replacement for mechanical LiDARs.

Our team at Hokuyo is working relentlessly to help you achieve the pinnacle of autonomous mobility. We are developing high-end sensor solutions for a variety of autonomous applications. Our recent development, the YLM-X001 3D LiDAR range sensors, is here for accurate obstacle detection and continuous scanning.

Technical specifications of the YLM-X001 3D LiDAR range sensor: https://www.hokuyo-aut.jp/search/single.php?serial=247#drawing

Go to the original article...

Paper on non-toxic quantum dot SWIR sensors in Nature Photonics

Image Sensors World        Go to the original article...

In a paper titled "Silver telluride colloidal quantum dot infrared photodetectors and image sensors" Wang et al. from  ICFO, ICREA, and Qurv Technologies (Spain) write:

Photodetectors that are sensitive in the shortwave-infrared (SWIR) range (1–2 µm) are of great interest for applications such as machine vision, autonomous driving and three-dimensional, night and adverse weather imaging, among others. Currently available technologies in the SWIR range rely on costly epitaxial semiconductors that are not monolithically integrated with complementary metal–oxide–semiconductor electronics. Solution-processed quantum dots can address this challenge by enabling low-cost manufacturing and simple monolithic integration on silicon in a back-end-of-line process. So far, colloidal quantum dot materials to access the SWIR regime are mostly based on lead sulfide and mercury telluride compounds, imposing major regulatory concerns for their deployment in consumer electronics due to the presence of toxic heavy metals. Here we report a new synthesis method for environmentally friendly silver telluride quantum dots and their application in high-performance SWIR photodetectors. The colloidal quantum dot photodetector stack employs materials compliant with the Restriction of Hazardous Substances directives and is sensitive in the spectral range from 350 nm to 1,600 nm. The room-temperature detectivity is of the order of 10^{12} Jones, the 3 dB bandwidth is in excess of 0.1 MHz and the linear dynamic range is over 118 dB. We also realize a monolithically integrated SWIR imager based on solution-processed, toxic-heavy-metal-free materials, thus paving the way for this technology to the consumer electronics market.
Full paper (behind paywall): https://www.nature.com/articles/s41566-023-01345-3

Coverage in phys.org:  https://phys.org/news/2024-01-toxic-quantum-dots-pave-cmos.html

Non-toxic quantum dots pave the way towards CMOS shortwave infrared image sensors for consumer electronics

Invisible to our eyes, shortwave infrared (SWIR) light can enable unprecedented reliability, function and performance in high-volume, computer vision first applications in service robotics, automotive and consumer electronics markets.

Image sensors with SWIR sensitivity can operate reliably under adverse conditions such as bright sunlight, fog, haze and smoke. Furthermore, the SWIR range provides eye-safe illumination sources and opens up the possibility of detecting material properties through molecular imaging.

Colloidal quantum dots (CQD)-based image sensor technology offers a promising technology platform to enable high-volume compatible image sensors in the SWIR.

CQDs, nanometric semiconductor crystals, are a solution-processed material platform that can be integrated with CMOS and enables access to the SWIR range. However, a fundamental roadblock exists in translating SWIR-sensitive quantum dots into key enabling technology for mass-market applications, as they often contain heavy metals like lead or mercury (IV-VI Pb, Hg-chalcogenide semiconductors).
These materials are subject to regulations by the Restriction of Hazardous Substances (RoHS), a European directive that regulates their use in commercial consumer electronic applications.

In a study published in Nature Photonics, ICFO researchers Yongjie Wang, Lucheng Peng, and Aditya Malla led by ICREA Prof. at ICFO Gerasimos Konstantatos, in collaboration with researchers Julien Schreier, Yu Bi, Andres Black, and Stijn Goossens, from Qurv, have reported on the development of high-performance infrared photodetectors and an SWIR image sensor operating at room temperature based on non-toxic colloidal quantum dots.


The study describes a new method for synthesizing size-tunable, phosphine-free silver telluride (Ag2Te) quantum dots while preserving the advantageous properties of traditional heavy-metal counterparts, paving the way for the introduction of SWIR colloidal quantum dot technology into high-volume markets.
While investigating how to synthesize silver bismuth telluride (AgBiTe2) nanocrystals to extend the spectral coverage of the AgBiS2 technology and enhance the performance of photovoltaic devices, the researchers obtained silver telluride (Ag2Te) as a by-product.

This material showed a strong and tunable quantum-confined absorption akin to quantum dots. They realized its potential for SWIR photodetectors and image sensors and pivoted their efforts to achieve and control a new process to synthesize phosphine-free versions of silver telluride quantum dots, as phosphine was found to have a detrimental impact on the optoelectronic properties of the quantum dots relevant to photodetection.

In their new synthetic method, the team used different phosphine-free complexes, such as tellurium and silver precursors, which led them to obtain quantum dots with a well-controlled size distribution and excitonic peaks over a very broad range of the spectrum.

After fabricating and characterizing them, the newly synthesized quantum dots exhibited remarkable performances, with distinct excitonic peaks over 1,500nm—an unprecedented achievement compared to previous phosphine-based techniques for quantum dot fabrication.

The researchers then decided to implement the obtained phosphine-free quantum dots to fabricate a simple laboratory scale photodetector on the common standard ITO (Indium Tin Oxide)-coated glass substrate to characterize the devices and measure their properties.

"Those lab-scale devices are operated with shining light from the bottom. For CMOS integrated CQD stacks, light comes from the top, whereas the bottom part of the device is taken by the CMOS electronics," said Yongjie Wang, postdoc researcher at ICFO and first author of the study. "So, the first challenge we had to overcome was reverting the device setup. A process that in theory sounds simple, but in reality proved to be a challenging task."

Initially, the photodiode exhibited a low performance in sensing SWIR light, prompting a redesign that incorporated a buffer layer. This adjustment significantly enhanced the photodetector performance, resulting in a SWIR photodiode exhibiting a spectral range from 350nm to 1,600nm, a linear dynamic range exceeding 118 dB, a -3dB bandwidth surpassing 110 kHz and a room temperature detectivity of the order 10^{12} Jones.
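For reference, specific detectivity relates responsivity, active area, and noise current density as D* = R · sqrt(A) / i_n, expressed in Jones (cm·Hz^0.5/W). The sketch below uses purely hypothetical values for illustration; it does not reproduce the paper's actual device parameters.

import math

def specific_detectivity_jones(responsivity_a_per_w, area_cm2, noise_a_per_rthz):
    # D* = R * sqrt(A) / i_n, with A in cm^2 and i_n in A/Hz^0.5.
    return responsivity_a_per_w * math.sqrt(area_cm2) / noise_a_per_rthz

print(specific_detectivity_jones(0.8, 1e-4, 1e-14))   # ~8e11 Jones for these assumed values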

"To the best of our knowledge, the photodiodes reported here have for the first time realized solution processed, non-toxic shortwave infrared photodiodes with figures of merit on par with other heavy-metal containing counterparts," Gerasimos Konstantatos, ICREA Prof. at ICFO and leading author of the study mentions.

"These results further support the fact that Ag2Te quantum dots emerge as a promising RoHS-compliant material for low-cost, high-performance SWIR photodetectors applications."
With the successful development of this heavy-metal-free quantum dot based photodetector, the researchers went further and teamed up with Qurv, an ICFO spin-off, to demonstrate its potential by constructing a SWIR image sensor as a case study.

The team integrated the new photodiode with a CMOS based read-out integrated circuit (ROIC) focal plane array (FPA) demonstrating for the first time a proof-of-concept, non-toxic, room temperature-operating SWIR quantum dot based image sensor.

The authors of the study tested the imager to prove its operation in the SWIR by taking several pictures of a target object. In particular, they were able to image the transmission of silicon wafers under the SWIR light as well as to visualize the content of plastic bottles that were opaque in the visible light range.

"Accessing the SWIR with a low-cost technology for consumer electronics will unleash the potential of this spectral range with a huge range of applications including improved vision systems for automotive industry (cars) enabling vision and driving under adverse weather conditions," says Gerasimos Konstantatos.

"SWIR band around 1.35–1.40 µm, can provide an eye-safe window, free of background light under day/night conditions, thus, further enabling long-range light detection and ranging (LiDAR), three-dimensional imaging for automotive, augmented reality and virtual reality applications."
Now the researchers want to increase the performance of photodiodes by engineering the stack of layers that comprise the photodetector device. They also want to explore new surface chemistries for the Ag2Te quantum dots to improve the performance and the thermal and environmental stability of the material on its way to the market.

 

Go to the original article...

STMicroelectronics manufactured Sphere’s Big Sky 18K custom image sensor

Image Sensors World        Go to the original article...

From: https://newsroom.st.com/media-center/press-item.html/t4598.html

Sphere Studios and STMicroelectronics reveal new details on the world’s largest cinema image sensor 

Jan 11, 2024 Burbank, CA, and Geneva, Switzerland
Sensor custom created for Big Sky – the world’s most advanced camera system – and is used to capture ultra-high-resolution content for Sphere in Las Vegas


 

Sphere Entertainment Co. (NYSE: SPHR) today revealed new details on its work with STMicroelectronics (NYSE: STM) (“ST”), a global semiconductor leader serving customers across the spectrum of electronics applications, to create the world’s largest image sensor for Sphere’s Big Sky camera system. Big Sky is the groundbreaking, ultra-high-resolution camera system being used to capture content for Sphere, the next-generation entertainment medium in Las Vegas.
 
Inside the venue, Sphere features the world’s largest, high-resolution LED screen which wraps up, over, and around the audience to create a fully immersive visual environment. To capture content for this 160,000 sq. ft., 16K x 16K display, the Big Sky camera system was designed by the team at Sphere Studios – the in-house content studio developing original live entertainment experiences for Sphere. Working with Sphere Studios, ST manufactured a first-of-its-kind, 18K sensor capable of capturing images at the scale and fidelity necessary for Sphere’s display. Big Sky’s sensor – now the world’s largest cinema camera sensor in commercial use – works with the world’s sharpest cinematic lenses to capture detailed, large-format images in a way never before possible.
 
“Big Sky significantly advances cinematic camera technology, with each element representing a leap in design and manufacturing innovation,” said Deanan DaSilva, lead architect of Big Sky at Sphere Studios. “The sensor on any camera is critical to image quality, but given the size and resolution of Sphere’s display, Big Sky’s sensor had to go beyond any existing capability. ST, working closely with Sphere Studios, leveraged their extensive expertise to manufacture a groundbreaking sensor that not only expands the possibilities for immersive content at Sphere, but also across the entertainment industry.”
 
“ST has been on the cutting edge of imaging technology, IP, and tools to create unique solutions with advanced features and performance for almost 25 years,” said Alexandre Balmefrezol, Executive Vice President and Imaging Sub-Group General Manager, STMicroelectronics. “Building a custom sensor of this size, resolution, and speed, with low noise, high dynamic range, and seemingly impossible yield requirements, presented a truly novel challenge for ST – one that we successfully met from the very first wafer out of our 12” (300mm) wafer fab in Crolles, France.”
 
As a leader in the development and manufacturing of image sensors, ST’s imaging technologies and foundry services cater to a wide range of markets, including professional photography and cinematography. Big Sky’s 316 megapixel sensor is almost 7x larger and 40x higher resolution than the full-frame sensors found in high-end commercial cameras. The die, which measures 9.92cm x 8.31cm (82.4 cm2), is twice as large as a wallet-sized photograph, and only four full die fit on a 300mm wafer. The system is also capable of capturing images at 120 fps and transferring data at 60 gigabytes per second.
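The quoted figures are self-consistent, as a quick check shows; the readout bit depth below is an assumption for illustration, since the press release does not state it.

# Sanity check of the die area and data-rate figures quoted above.
die_w_cm, die_h_cm = 9.92, 8.31
print(die_w_cm * die_h_cm)                       # ~82.4 cm^2, matching the stated die area

pixels, fps, bits_per_pixel = 316e6, 120, 12      # bit depth assumed, not from the press release
print(pixels * fps * bits_per_pixel / 8 / 1e9)    # ~57 GB/s, close to the quoted 60 GB/s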
 
Big Sky also allows filmmakers to capture large-format images from a single camera without having to stitch content together from multiple cameras – avoiding issues common to stitching including near distance limitations and seams between images. Ten patents and counting have been filed by Sphere Studios in association with Big Sky’s technology.
 
Darren Aronofsky’s Postcard from Earth, currently showing at Sphere as part of The Sphere Experience, is the first cinematic production to utilize Big Sky. Since its debut, Postcard from Earth has transported audiences, taking them on a journey spanning all seven continents, and featuring stunning visuals captured with Big Sky that make them feel like they have traveled to new worlds without leaving their seats in Las Vegas. More information about The Sphere Experience is available at thesphere.com.

Go to the original article...
