Teledyne acquires Adimec

Image Sensors World        Go to the original article...

From Metrology News: https://metrology.news/teledyne-to-acquire-high-performance-camera-specialist-adimec/

Teledyne to Acquire High-Performance Camera Specialist Adimec

Teledyne Technologies has announced that it has entered into an agreement to acquire Adimec Holding B.V. and its subsidiaries (Adimec). Adimec, founded in 1992 and headquartered in Eindhoven, Netherlands, develops customized high-performance industrial and scientific cameras for applications where image quality is of paramount importance.

​“Adimec possesses uniquely complementary technology, products and customers in the shared strategic focus areas of healthcare, global defense, and semiconductor and electronics inspection,” said Edwin Roks, Chief Executive Officer of Teledyne. “For decades and from our own X-ray imaging business headquartered in Eindhoven, I have watched Adimec grow to become a leader in niche applications requiring truly accurate images for precise decision making in time-critical processes.”

Joost van Kuijk, Adimec’s Chief Executive Officer, commented, “It is with great pleasure that we are able to announce publicly that Adimec will become part of Teledyne. Adimec’s success has always been built on ensuring imaging excellence in demanding applications through an unwavering focus on individual customer requirements by our expert engineers and designers.”

Adimec co-Chief Executive Officer Alex de Boer added, “As a leader in advanced imaging technologies for industrial and scientific markets, Teledyne is the perfect company to build further on the strong foundation the founders and management have established over the past three decades. The entire Adimec team is looking forward to contributing to an exciting future with Teledyne while extending technical boundaries to support our customers with cameras – perfectly optimized to their application needs.”

Go to the original article...

Computational Imaging Photon by Photon

Image Sensors World        Go to the original article...



Arizona Optical Sciences Colloquium: Andreas Velten, "Computational Imaging Photon by Photon"

Abstract
Our cameras usually measure light as an analog flux that varies as a function of space and time. This approximation ignores the quantum nature of light, which is actually made of discrete photons, each of which is collected at a sensor pixel at an instant in time. Single photon cameras have pixels that can detect individual photons and the timing of their arrival, resulting in cameras with unprecedented capabilities. Concepts like motion blur, exposure time, and dynamic range that are essential to conventional cameras do not really apply to single photon sensors. In this presentation I will cover computational imaging capabilities enabled by single photon cameras and their applications.

The extreme time resolution of single photon cameras enables the time-of-flight measurements we use for Non-Line-of-Sight (NLOS) Imaging. NLOS systems reconstruct images of a scene using indirect light from reflections off a diffuse relay surface. After illuminating the relay surface with short pulses, the returning light is detected with high time resolution single photon cameras. We thereby capture video of the light propagation in the visible scene and reconstruct images of hidden parts of the scene.
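
To make the time-of-flight relationship concrete, here is a minimal sketch (not code from the talk; the 6 ns figure is purely illustrative) showing how a photon's arrival time maps to the total optical path length that NLOS reconstruction algorithms work from.

```python
# Minimal sketch (not Velten's code): converting a single-photon arrival time
# into a total optical path length, the raw quantity NLOS reconstructions use.
C = 299_792_458.0  # speed of light, m/s

def path_length_m(arrival_time_s, laser_emit_time_s=0.0):
    """Total travel distance of a detected photon: laser -> relay wall ->
    hidden object -> relay wall -> SPAD, given its time of flight."""
    return C * (arrival_time_s - laser_emit_time_s)

# Example: a photon detected 6 ns after the laser pulse has travelled ~1.8 m,
# so the hidden-scene bounce lies on an ellipsoid defined by that path length.
print(f"{path_length_m(6e-9):.3f} m")
```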

Over the past decade NLOS imaging has seen rapid progress, and we can now capture and reconstruct hidden scenes in real time and with high image quality. In this presentation I will give an overview of imaging using single photon avalanche diodes, reconstruction methods, and the applications driving NLOS imaging, and provide an outlook on future development.

Bio
Andreas Velten is an Associate Professor in the Department of Biostatistics and Medical Informatics and the Department of Electrical and Computer Engineering at the University of Wisconsin-Madison and directs the Computational Optics Group. He obtained his PhD in Physics with Prof. Jean-Claude Diels at the University of New Mexico in Albuquerque and was a postdoctoral associate of the Camera Culture Group at the MIT Media Lab. He has been included in the MIT TR35 list of the world's top innovators under the age of 35 and is a senior member of NAI, OSA, and SPIE as well as a member of Sigma Xi. He is co-founder of Onlume, a company that develops surgical imaging systems, and Ubicept, a company developing single photon imaging solutions.

Go to the original article...

SolidVue develops solid-state LiDAR chip

Image Sensors World        Go to the original article...

From PR Newswire: https://www.prnewswire.com/news-releases/solidvue-koreas-exclusive-developer-of-lidar-sensor-chips-showcasing-world-class-technological-capabilities-302018487.html

SolidVue, Korea's Exclusive Developer of LiDAR Sensor Chips Showcasing World-Class Technological Capabilities 

SEOUL, South Korea, Dec. 19, 2023 /PRNewswire/ -- SolidVue Inc., Korea's exclusive enterprise specialized in CMOS LiDAR (Light Detection and Ranging) sensor IC development, once again proved its global technological prowess by announcing its achievement of two LiDAR-related papers being accepted at the upcoming 'ISSCC (International Solid-State Circuits Conference) 2024'.

Established in 2020, SolidVue focuses on designing SoCs (System-on-Chip) for LiDAR sensors that comprehensively assess the shapes and distances of surrounding objects. This is a pivotal technology expected to see significant growth in industries such as autonomous vehicles and smart cities, among others.

Jaehyuk Choi, the CEO of SolidVue, disclosed the company's development of Solid-State LiDAR sensor chips, aiming to replace all components of traditional mechanical LiDAR with semiconductors. This innovation is expected to reduce volume to as little as one-tenth and cost to around one-hundredth of that of traditional mechanical LiDAR.

Utilizing its proprietary CMOS SPAD (Single Photon Avalanche Diode) technology, SolidVue's LiDAR sensor chips flawlessly detect even minute particles of light, enhancing measurement precision. The company focuses on all LiDAR detection ranges (short, medium, long), notably making advancements in the medium-to-long distance sector suited for autonomous vehicles and robotics. By the third quarter of this year, they meticulously developed an Engineering Sample (ES) of a Solid-State LiDAR sensor chip capable of measuring up to 150 meters, and are aiming for mass production by the end of 2024.
Choi emphasized SolidVue's independent development of various core technologies such as SPAD devices, LiDAR sensor architectures, and integrated image signal processors, while also highlighting the advantage of SolidVue's single-chip design in cost and size reduction compared to the multi-chip setup of traditional mechanical LiDAR sensors.

SolidVue's technological prowess has been repeatedly acknowledged at the ISSCC, marking a remarkable achievement for a Korean fabless company. At the forthcoming ISSCC 2024, SolidVue is set to showcase its groundbreaking advancements, including a 50-meter mid-range Solid-State LiDAR sensor that features a resolution of 320x240 pixels and optimized memory efficiency. Additionally, a 10-meter short-range Flash LiDAR will be presented, characterized by its 160x120 pixel resolution and an ultra-low power consumption of 3 µW per pixel. These significant innovations are the result of collaborative efforts between SolidVue, Sungkyunkwan University, and UNIST.

Ahead of full product commercialization, SolidVue's focal point is securing domestic and international clients as well as attracting investments. In January, they plan to make their debut at the 'CES 2024', the world's largest electronics exhibition, by showcasing their 150-m LiDAR sensor chip ES products with the aim of initiating discussions and collaborations with leading global LiDAR suppliers.

Since its establishment, SolidVue has secured a cumulative $6 million in investments. Key Korean VCs such as KDB Bank, Smilegate Investment, Quantum Ventures Korea, Quad Ventures, among others, have participated as financial investors. Additionally, Furonteer, a company specializing in automated equipment for automotive camera modules, joined as SolidVue's first strategic investor.

CEO Choi stated, "Aligning with the projected surge in LiDAR demand post-2026, we are laying the groundwork for product commercialization." He added, "We are heavily engaged in joint research and development with major Korean corporations, discussing numerous LiDAR module supply deals, and exploring collaborations with global companies for overseas market penetration."

SolidVue’s LiDAR sensor chip and demonstration images (Photo=SolidVue)


Go to the original article...

Job Postings – Week of 18 February 2024

Image Sensors World        Go to the original article...

Pacific Biosciences

Staff CMOS Sensor Test Engineer

Menlo Park, California, USA

Link

Vilnius University

PostDoc in Experimental HEP

Vilnius, Lithuania

Link

Precision Optics Corporation

Electrical Engineer

Windham, Maine, USA

Link

CEDES

(Senior) Design Engineer in Product Development

Singapore

Link

Karl Storz

Image Processing Engineer V

Goleta, California, USA

Link

Karl Storz

Development Engineer - Image Processing

Tuttlingen, Germany

Link

University of Warwick

PhD Studentship: Towards Silicon Photonics Based Gas Sensors

Coventry, UK

Link

Leidos

Optical Test Engineer

Dayton, Ohio, USA

Link

Teledyne e2v Semiconductors

MBE Growth Production Engineer

Camarillo, California, USA

Link

Go to the original article...

Conference List – July 2024

Image Sensors World        Go to the original article...

The 9th International Smart Sensor Technology Exhibition - 3-5 Jul 2024 - Seoul, Korea (South) - Website

17th International Conference on Scintillating Materials and their Applications - 8-12 Jul 2024 - Milan, Italy - Website

Optica Sensing and Imaging Congresses - 15-19 Jul 2024 - Toulouse, France  - Website

International Conference on Imaging, Signal Processing and Communications - 19-21 Jul 2024 - Fukuoka, Japan - Website

American Association of Physicists in Medicine Annual Meeting - 21-25 Jul 2024 - Los Angeles, California, USA - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

Semiconductor Engineering article about noise in CMOS image sensors

Image Sensors World        Go to the original article...

Semiconductor Engineering published an article on dealing with noise in CMOS image sensors: https://semiengineering.com/dealing-with-noise-in-image-sensors/

Dealing With Noise In Image Sensors

The expanding use and importance of image sensors in safety-critical applications such as automotive and medical devices has transformed noise from an annoyance into a life-threatening problem that requires a real-time solution.

In consumer cameras, noise typically results in grainy images, often associated with poor lighting, the speed at which an image is captured, or a faulty sensor. Typically, that image can be cleaned up afterward, such as reducing glare in a selfie. But in cars, glare in an ADAS image system can affect how quickly the brakes are applied. And in vehicles or medical devices, systems are so complex that external effects can affect images, including heat, electromagnetic interference, and vibration. This can be particularly problematic in AI-enabled computer vision systems where massive amounts of data need to be processed at extremely high speeds. And any of this can be affected by aging circuits, due to dielectric breakdown or changes in signal paths due to electromigration.

Thresholds for noise tolerance vary by application. “A simple motion-activated security camera or animal-motion detection system at a zoo can tolerate much more noise and operate at much lower resolution than a CT scanner or MRI system used in life-saving medical contexts,” said Brad Jolly, senior applications engineer at Keysight. “[Noise] can mean anything that produces errors in a component or system that acquires any form of image, including visible light, thermal, X-ray, radio frequency (RF), and microwave.”

Tolerance is also determined by human perception, explained Andreas Suess, senior manager for novel image sensor systems in OmniVision’s Office of the CTO. “Humans perceive an image as pleasing with a signal-to-noise ratio (SNR) of >20dB, ideally >40dB. But objects can often be seen at low SNR levels of 1dB or less. For computational imaging, in order to deduce what noise level can be accepted one needs to be aware of their application-level quality metrics and study the sensitivity of these metrics against noise carefully.”

Noise basics for imaging sensors
No noise is ideal, but it’s an unrealistic goal. “With an image sensor, noise is inevitable,” said Isadore Katz, senior marketing director at Siemens Digital Industries Software. “It’s when you’ve got a pixel value that’s currently out of range with respect to what you would have expected at that point. You can’t design it out of the sensor. It’s just part of the way image sensors work. The only thing you can do is post-process it away. You say to yourself, ‘That’s not the expected value. What should it have been?’”

Noise is primarily categorized as fixed-pattern noise and temporal noise, and both explain why engineers must cope with its inevitability. “Temporal noise is a fundamental process based on the quantization of light (photons) and charge (electrons),” said Suess. “When capturing an amount of light over a given exposure, one will observe a varying amount of photons which is known as photon shot noise, which is a fundamental noise process present in all imaging devices.” In fact, even without the presence of light, a dark signal, also known as dark current, can exhibit shot noise.
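
The shot-noise statistics Suess refers to follow directly from Poisson counting: the signal is the mean number of collected photons N and the noise is sqrt(N). The short illustrative calculation below (ours, not from the article) connects this to the 20 dB and 40 dB SNR thresholds quoted earlier.

```python
# Illustrative sketch (not from the article): photon shot noise is Poisson,
# so the shot-noise-limited SNR of a pixel that collects N photons is
# sqrt(N). The 20 dB / 40 dB thresholds quoted above then correspond to
# roughly 100 and 10,000 collected photons.
import math

def shot_noise_limited_snr_db(mean_photons):
    snr = math.sqrt(mean_photons)          # signal N divided by noise sqrt(N)
    return 20 * math.log10(snr)            # amplitude SNR expressed in dB

for n in (1, 100, 10_000):
    print(f"{n:>6} photons -> {shot_noise_limited_snr_db(n):5.1f} dB")
# 1 photon ~ 0 dB, 100 photons ~ 20 dB ("acceptable"), 10,000 ~ 40 dB ("pleasing")
```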

Worse, even heat alone can cause noise, which can cause difficulties for ADAS sensors under extreme conditions. “An image sensor has to work over the brightest and darkest conditions; it also has to work at -20 degrees and up to 120 degrees,” said Jayson Bethurem, vice president of marketing and business development at Flex Logix. “All CMOS sensors run slower and get noisier when it’s hotter. They run faster, a little cleaner, when it’s cold, but only up to a certain point. When it gets too cold, they start to have other negative effects. Most of these ICs self-heat when they’re running, so noise gets inserted there too. The only way to get rid of that is to filter it out digitally.”

Fixed-pattern noise stems from process non-uniformities as well as design choices, and can cause offset, gain, or settling artifacts. It can manifest itself as variations in quantum efficiency, offset, or gain, as well as read noise. Mitigating fixed-pattern noise requires effort at the process, device, circuit design, and signal processing levels.

Fig. 1: Noise issues and resolution. Source: Flex Logix

In addition, noise affects both digital and analog systems. “Digital systems always start by digitizing data from some analog source, so digital systems start with all the same noise issues that analog systems do,” Jolly said. “In addition, digital systems must deal with quantization and pixelation issues, which always arise whenever some analog signal value is converted into a bit string. If the bits are then subjected to a lossy compression algorithm, this introduces additional noise. Furthermore, the increase in high-speed digital technologies such as double data rate memory (DDRx), quadrature amplitude modulation (QAM-x), non-return-to-zero (NRZ) line coding, pulse amplitude modulation (PAM), and other complex modulation schemes means that reflections and cross-channel coupling introduce noise into the system, possibly to the point of bit slipping and bit flipping. Many of these issues may be automatically handled by error correcting mechanisms within the digital protocol firmware or hardware.”
 
Noise can be introduced anywhere along the imaging chain and create a wide range of problems. “For example, the object being imaged may have shadows, occlusions, internal reflections, non-coplanarity issues, parallax, or even subtle vibrations, especially in a manufacturing environment,” Jolly explained. “In such situations, noise can complicate inspections. For example, a multi-layer circuit board being imaged with X-ray technology could have solder joint shadows if there are overlapping grid array components on the top and bottom of the board.”
 
Variability in the alignment between the image sensor and the subject of the image — rotational or translational offset, and planar skew — may add to the variability. And thermal gradients in the gap between the subject and the sensor may introduce noise, such as heat shimmer on a hot road. Low light and too-fast image capture also may introduce noise.
 
There are other issues to consider, as well. “A lens in the imaging chain may introduce noise, including chromatic aberration, spherical aberration, and errors associated with microscopic dust or lens imperfections. The lens controls the focus, depth of field, and focal plane of the image, all of which are key aspects of image acquisition. Finally, the imaging sensing hardware itself has normal manufacturing variability and thermal responses, even when operating in its specified range. A sensor with low resolution or low dynamic range is also likely to distort an image. Power integrity issues in the lines that power the sensor may show up as noise in the image. Finally, the camera’s opto-electronic conversion function (OECF) will play a key role in image quality,” Jolly added.
 
External sources of noise also can include flicker, which needs to be resolved for clear vision.

Fig. 2: Flicker from LED traffic lights or traffic signs poses a serious challenge for HDR solutions, preventing driver-assistance and autonomous driving systems from being able to correctly detect lighted traffic signs. Source: OmniVision

Imaging basics for ADAS 

While noise would seem to be a critical problem for ADAS sensors, given the potential for harm or damage, it’s actually less of an issue than for something like a consumer camera, where out-of-range pixels can ruin an image. ADAS is not concerned with aesthetics. It focuses on a binary decision — brake or not brake. In fact, ADAS algorithms are trained on lower-resolution images, and ignore noise that would be a product-killer in a consumer camera.

For example, to find a cat in the middle of an image, first the image is “segmented,” a process in which a bounding box is drawn around a potential object of interest. Then the image is fed into a neural net, and each bounding region is evaluated. The images are labeled, and then an algorithm can train itself to identify what’s salient. “That’s a cat. We should care about it and brake. It’s a skunk. We don’t care about it. Run it over,” said Katz. That may sound like a bad joke, but ADAS algorithms actually are trained to assign lower values to certain animals.

“It is about safety in the end, not so much ethics,” Katz said. “Even if someone does not care about moose, the car still has to brake because of the danger to the passengers. Hitting the brakes in any situation can pose a risk.” But higher values are assigned to cats and dogs, rather than skunks and squirrels.

If an object is fully or partly occluded by another object or obscured by light flare, it will require more advanced algorithms to correctly discern what it is. After the frame is received from the camera and has gone through basic image signal processing, the image is then presented to a neural net.

“Now you’ve left the domain of image signal processing and entered the domain of computer vision, which starts with a frame or sequence of frames that have been cleaned up and are ready for presentation,” said Katz. “Then you’re going to package those frames up and send them off to an AI algorithm for training, or you’re going to take those images and then process them on a local neural net, which will start by creating bounding boxes around each of the artifacts that are inside the frame. If the AI can’t recognize an object in the frame it’s examining, it will try to recognize it in the following or preceding frames.”
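
As a rough illustration of the flow Katz describes, the sketch below uses hypothetical stand-in functions (segment and classify are placeholders, not any real ADAS or vision library) to show frames being segmented into bounding boxes, each region classified, and low-confidence detections retried against neighboring frames.

```python
# Conceptual sketch only (hypothetical helpers, not an actual ADAS stack):
# cleaned-up frames are segmented into bounding boxes, each region is
# classified, and unrecognized objects are retried in adjacent frames.
def segment(frame):          # stand-in: return candidate bounding boxes
    return [(0, 0, 32, 32)]

def classify(frame, box):    # stand-in: return (label, confidence)
    return ("cat", 0.91)

def detect(frames, index, min_conf=0.5):
    results = []
    for box in segment(frames[index]):
        label, conf = classify(frames[index], box)
        if conf < min_conf:
            # Fall back to the preceding/following frame, as described above.
            for j in (index - 1, index + 1):
                if 0 <= j < len(frames):
                    label, conf = classify(frames[j], box)
                    if conf >= min_conf:
                        break
        results.append((box, label, conf))
    return results

print(detect(["frame0", "frame1", "frame2"], 1))
```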

In a risky situation, the automatic braking system has about 120ms to respond, so all of this processing needs to happen within the car. In fact, there may not even be time to route from the sensor to the car’s own processor. “Here are some numbers to think about,” said Katz. “At 65 mph, a car is moving at 95 feet per second. At 65 mph, it takes about 500 feet to come to a complete stop. So even at 32.5 mph in a car, it will travel 47 feet in 1 second. If the total round trip from sensor to AI to brake took a half-second, you would be 25 feet down the road and still need to brake. Now keep in mind that the sensor is capturing images at about 30 frames per second. So every 33 milliseconds, the AI has to make another decision.”
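
The figures in that quote are straightforward unit conversions, checked below (a quick arithmetic sanity check, not a braking model).

```python
# Quick check of the numbers in the quote above (simple unit conversions).
FT_PER_MILE, S_PER_HOUR = 5280, 3600

def mph_to_fps(mph):
    return mph * FT_PER_MILE / S_PER_HOUR

print(f"65 mph = {mph_to_fps(65):.0f} ft/s")           # ~95 ft/s
print(f"32.5 mph covers {mph_to_fps(32.5) * 0.5:.0f} ft "
      f"during a 0.5 s sensor-to-brake round trip")     # ~24 ft
print(f"frame period at 30 fps = {1000 / 30:.1f} ms")   # ~33 ms per decision
```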

In response, companies are using high-level synthesis to develop smart sensors, in which an additional die — with all the traditional functions of an image signal processor (ISP), such as noise reduction, deblurring, and edge detection — is sandwiched directly adjacent to the sensor.

“It’s now starting to include computer vision capability, which can be algorithmic or AI-driven,” said Katz. “You’ll start to see a smart sensor that has a neural net built inside. It could even be a reprogrammable neural net, so you can make updates for the different weights and parameters as soon as it gets smarter.”

If such a scheme succeeds, it means that a sensor could perform actions locally, allowing for real-time decisions. It also could repackage the information to be stored and processed in the cloud or car, for later training to increase accurate, rapid decision-making. In fact, many modern ISPs can already dynamically compensate for image quality. “For example, if there is a sudden change from bright light to low light, or vice-versa, the ISP can detect this and change the sensor settings,” he said. “However, this feedback occurs well before the image gets to the AI and object detection phase, such that subsequent frames are cleaner going into the AI or object detection.”

One application that already exists is driver monitoring, which presents another crucial noise issue for designers. “The car can have the sun shining right in your face, saturating everything, or the complete opposite where it’s totally dark and the only light is emitting off your dashboard,” said Bethurem. “To build an analog sensor and the associated analog equipment to have that much dynamic range and the required level of detail, that’s where noise is a challenge, because you can’t build a sensor of that much dynamic range to be perfect. On the edges, where it’s really bright or over-saturated bright, it’s going to lose quality, which has to get made up. And those are sometimes the most dangerous times, when you want to make sure the driver is doing what they’re supposed to be doing.”

AI and noise

The challenges of noise and the increasing intelligence of sensors have also attracted the attention of the AI community.

“There are already AI systems capable of filling in occluded parts of a digital image,” said Tony Chan Carusone, CTO at Alphawave Semi. “This has obvious potential for ADAS. However, to perform this at the edge in real-time will require new dedicated processing elements to provide the immediate feedback required for safety-critical systems. This is a perfect example of an area where we can expect to see new custom silicon solutions.”

Steve Roddy, chief marketing officer at Quadric, notes that path already is being pioneered. “Look at Android’s/Google’s ‘Magic Eraser’ functionality in phones – quickly deleting photo-bombers and other background objects and filling in the blanks. Doing the same on an automotive sensor to remove occlusions and ‘fill in the blanks’ is a known solved problem. Doing it in real time is a simple compute scaling problem. In 5nm technology today, ~10mm2 can get you a full 40 TOPs of fully programmable GPNPU capability. That’s a tiny fraction of the large (> 400 mm2) ADAS chips being designed today. Thus, there’s likely to be more than sufficient programmable GPNPU compute capability to tackle these kinds of use cases.”

Analyzing noise 

Analyzing noise in image sensors is a challenging and active area of research that dates back more than 50 years. The general advice from vendors is to talk to them directly to determine if their instrumentation aligns with a project’s specific needs.

“Noise is of a lot of interest to customers,” said Samad Parekh, product manager for analog/RF simulation at Synopsys. “There are many different ways of dealing with it, and some are very well understood. You can represent the noise in a closed form expression, and because of that you can very accurately predict what the noise profile is going to look like. Other mechanisms are not as well understood or are not as linear. Because those are more random, there’s a lot more effort required to characterize the noise or design with that constraint in mind.”

Best practices 

Keysight’s Jolly offered day-to-day advice for reducing and managing noise in image sensor projects:

  • Clearly define the objectives of the sensor as part of the overall system. For example, a slow, low-resolution thermal imager or vector network analyzer may reveal information about subcutaneous or subdural disease or injury that would be invisible to a high-resolution, high-speed visible light sensor. Work with your component and module vendors to understand what noise analysis and denoising they have already done. You will learn a lot and be able to leverage a lot of excellent work that has already been accomplished. Also, consider image noise throughout the total product life cycle and use simulation tools early in your design phase to minimize issues caused by sub-optimal signal integrity or power integrity.
  • Analyze the problem from the perspective of the end user. What are their objectives? What are their concerns? What skills do they possess? Can they make appropriate interventions and modifications? What is their budget? It may turn out, for example, that a fully automated system with a higher amount of noise may be more appropriate for some applications than a more complex system that can achieve much lower noise.
  • Become familiar with camera, optical, and imaging standards that are available, such as ISO 9358, 12232, 12233, 14524, and 15739, as well as European Machine Vision Association (EMVA) 1288.
  •  Investigate the latest research on the use of higher mathematics, statistics, and artificial intelligence in de-noising. Some of these techniques include expectation maximization estimation, Bayesian estimation, linear minimum mean square error estimation, higher-order partial differential equations, and convolutional neural networks.

Future approaches 

While current ADAS systems may tolerate more noise than other forms of imaging, that may not be the case in the future. A greater variety of use cases will push image sensors towards higher resolutions, which in turn will require more localized processing and noise reduction.

“A lot of the image processing in the past was VGA, but applications like internal cabin monitoring, such as eye-tracking the driver and passengers to recognize what’s going on inside the cabin — including monitoring driver alertness or whether someone got left behind in the backseat — are going to start to drive us towards higher-resolution images,” Katz said. “In turn, that’s going to start to mandate increasing levels of noise reduction, dealing with image obstructions, and with being able to process a lot more data locally. When you go from VGA to 720 to 1080 up to 4K, you’re increasing the number of pixels you have to operate with by 4X. Every one of these demands more and more localized processing. That’s where we’ll end up going.”
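
For reference, a quick calculation of the pixel counts behind that remark (common frame sizes; the exact formats used in a given cabin-monitoring camera may differ):

```python
# Rough pixel-count comparison behind the "4X" remark above.
formats = {"VGA": (640, 480), "720p": (1280, 720),
           "1080p": (1920, 1080), "4K UHD": (3840, 2160)}
prev = None
for name, (w, h) in formats.items():
    px = w * h
    note = f" ({px / prev:.1f}x previous)" if prev else ""
    print(f"{name:7s}: {px / 1e6:4.2f} MP{note}")
    prev = px
# Each step multiplies the pixel count by roughly 3-4x; 4K is 4x 1080p.
```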

Go to the original article...

Talk on digital camera misunderstandings and HDR

Image Sensors World        Go to the original article...

Wayne Prentice presented a talk titled "Digital Camera Myths, Mis-statements and Misunderstandings" at the NY chapter meeting of IS&T (Society for imaging Science and Tech.) on 17 Jan. 2024.

Abstract: The digital camera system is deceptively complex.  Understanding camera operation/design requires some knowledge of the parts:  photometry, radiometry, optics, sensor physics, sensor design, signal processing, image processing, color science, statistics, human perception, and image/video encoding. With all these parts, it is easy to miss something. This talk was inspired by interactions with co-workers and clients.  It has been my experience that some subtle, yet important points are often missed and can lead to suboptimal product and design decisions that could be avoided. The goal of this talk is to fill in some of those gaps.


Another version of the talk at RIT imaging science weekly seminar on Feb 7, 2024:

 
CIS Weekly Seminar: Wayne Prentice - Digital Camera Myths, Misstatements, and Misunderstandings

Wayne Prentice also gave a talk in 2022 on HDR at the NY IS&T chapter meeting:

 
High Dynamic Range (HDR) Imaging: Theory and Practical Considerations


Bio: Wayne has been working in the imaging industry for over 35 years. He has a BSEE from Clarkson University and a Master's in Imaging Science from RIT. Wayne has worked on imaging equipment ranging from X-ray, CAT scanners, MRI, and extra-terrestrial imaging to digital cameras. Much of Wayne's digital camera experience came from 17 years working in Kodak R&D on product development for digital cameras. He holds 16 US patents in digital imaging. At Kodak, Wayne became the lead image scientist and manager of the Digital Camera R&D group. He was responsible for competitive testing, image quality testing, new feature development, and the image science aspects of product commercialization. Wayne has worked as an independent contractor over the past 5 years, providing solutions to a wide range of imaging challenges, mostly in the areas of custom camera applications, computer vision, and HDR imaging.

Go to the original article...

A message from IEEE Sensors 2024 conference co-chair

Image Sensors World        Go to the original article...

In my role as Industrial Co-Chair of the IEEE SENSORS 2024 conference, to be held in Kobe, Japan, this October, I want to invite the participation of the image sensor community. SENSORS is a vibrant conference – 1000 attendees in Vienna for SENSORS 2023 – covering sensor devices and systems. I can testify that there is much overlap in the issues addressed, but for historical reasons it appears that this is a conference the image sensor community has not had on its radar. I, along with my Industrial Co-Chair Sozo Yokogawa of SONY Semiconductor, would like to change this.

Our proposal is to highlight image sensor technology at the conference through a combination of focused sessions, keynote speakers, a workshop, a tutorial, and networking opportunities. I would like to use as a model the successful efforts I have been involved in over many years with the technical committees at IEDM and ISSCC. To accomplish this, we would like to reach out to the image sensor community to help promote this goal through networking and through volunteering, informally or formally.

The sponsoring IEEE Sensors Council, of which I am an AdCom member, has two initiatives of note related to this proposal. One is to increase industrial involvement in a way that prioritizes healthy technical interaction among industry, academia, and laboratories. The other is to develop close ties between conference participation and the high-impact council-sponsored Sensors Journal and Sensors Letters, enabling both the publication of conference work in the journals and a path for papers accepted in the journals to also be presented at SENSORS.

I have discussed this informally in our community over the last year with positive comments. I look forward to feedback, but most importantly, support of this goal. I look forward to hearing from you and seeing many of you in Kobe.


Dan McGrath
TechInsights Inc.
AdCom member, IEEE Solid State Circuits Society & IEEE Sensor Council
dmcgrath@ieee.org

 

Go to the original article...

Job posting request from onsemi – Ireland

Image Sensors World        Go to the original article...

 onsemi

Analog Design Engineer      Cork, Ireland      Link

Go to the original article...

Job Postings – Week of 11 February 2024

Image Sensors World        Go to the original article...

IBM Research

Device Engineer

Albany, New York, USA

Link

Onsemi

Product Engineer

Meridian, Idaho, USA

Link

MicroTech Global

Experienced Characterization Engineer

Mechelen, Belgium

Link

Apple

Camera Electrical Engineer

San Diego, California, USA

Link

Leonardo UK

Principal Test Engineer

Southampton, Hampshire,  UK

Link

Ball Aerospace

Project Engineer II - Focal Plane Subsystems

Boulder Colorado, USA

Link

Institute of High Energy Physics

Postdoc positions with the JUNO program – TAO detector

Beijing, China

Link

Cepton Technologies

System Test Technician

San Jose, California, USA

Link

The Open University

Spaceflight Detector Characterisation Engineer

Milton Keynes, UK

Link

Go to the original article...

More videos: Vision Research, Sick IVP, Teledyne e2v, onsemi

Image Sensors World        Go to the original article...

Vision Research publishes an EMVA 1288 webinar on camera performance evaluation:


SICK IVP explains the recent image sensor innovations:


Teledyne e2v talks about selecting and matching the optics to an image sensor:



Onsemi explains its eHDR approach:


Go to the original article...

NIST develops SNSPD detector array for mid-IR

Image Sensors World        Go to the original article...

Phys.org covered a recently published paper titled "A 64-pixel mid-infrared single-photon imager based on superconducting nanowire detectors" by a team from NIST in the journal Applied Physics Letters. 

Abstract:

A large-format mid-infrared single-photon imager with very low dark count rates would enable a broad range of applications in fields like astronomy and chemistry. Superconducting nanowire single-photon detectors (SNSPDs) are a mature photon-counting technology as demonstrated by their figures of merit such as high detection efficiencies and very low dark count rates. However, scaling SNSPDs to large array sizes for mid-infrared applications requires sophisticated readout architectures in addition to superconducting materials development. In this work, an SNSPD array design that combines a thermally coupled row-column multiplexing architecture with a thermally coupled time-of-flight transmission line was developed for mid-infrared applications. The design requires only six cables and can be scaled to larger array sizes. The demonstration of a 64-pixel array shows promising results for wavelengths between 3.4 μm and 10 μm, which will enable the use of this single-photon detector technology for a broad range of new applications.

From phys.org: https://phys.org/news/2024-01-wavelength-scientific-exploration-photon-detectors.html

NIST researchers have unveiled a new kind of single-photon detector array that can identify individual particles of light (photons). It's useful for spectroscopy, where scientists observe how molecules absorb different colors (or wavelengths) of light. Each molecule has its own color fingerprint on the light spectrum.

This particular detector can catch single photons in the mid-infrared. Here's how the array works: Multiple super-cold detectors are connected to one another (shown above) in a grid of sorts with an electrical current flowing through. When a photon strikes one of the detectors, it creates a hot spot and acts as a dam to block the current for a short amount of time.

The researchers developed a new technique to determine where, along the columns and rows, the hot spot is. From there, they can create single-photon pictures.
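
A toy sketch of that row-column idea (our simplification, not NIST's readout electronics): each detection yields one row pulse and one column pulse, the coincidence identifies the pixel, and accumulating events over time builds the single-photon image.

```python
# Simplified sketch of the row-column readout idea (not NIST's actual
# design): each detection produces one row signal and one column signal;
# pairing them locates the hot spot, and accumulating events builds a
# photon-count image for the 8x8 (64-pixel) array.
import numpy as np

def accumulate_image(events, n_rows=8, n_cols=8):
    """events: iterable of (row_index, col_index) pairs from coincident
    row/column pulses."""
    image = np.zeros((n_rows, n_cols), dtype=int)
    for r, c in events:
        image[r, c] += 1
    return image

img = accumulate_image([(2, 3), (2, 3), (7, 0)])
print(img.sum(), img[2, 3])   # 3 photons total, 2 of them at pixel (2, 3)
```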

The whole setup is challenging because mid-infrared waves are longer and have less energy to cause the hot spots, compared to visible light, for example. But the scientists have a few tricks up their sleeve and used them to make it work.


Go to the original article...

A Sony Job Opening in Switzerland

Image Sensors World        Go to the original article...

Sony Advanced Visual Sensing AG

Image Sensor Design Engineer        Schlieren, Switzerland        Link

Go to the original article...

AI Chat Engines Express Their Opinions on CMOS Image Sensor Origins

Image Sensors World        Go to the original article...

We asked four AI chat engines a simple question, "Who invented CMOS Image Sensors?" There seems to have been a lot of variety in the training sets.

Microsoft Copilot (Bing) based on GPT-4:

The inventor of the CMOS (complementary metal-oxide semiconductor) active pixel sensor camera-on-a-chip is Eric Fossum. He invented this technology while working at NASA’s Jet Propulsion Laboratory in 1993. The CMOS image sensor technology is now used in more than 6 billion cameras produced each year, including nearly every smartphone.

ChatGPT-3.5:

The invention of CMOS (complementary metal-oxide-semiconductor) image sensors is attributed to several individuals and organizations. However, the basic concept of CMOS technology was first proposed by Chih-Tang Sah and Frank Wanlass in the late 1960s. They were researchers working at Fairchild Semiconductor, a company that played a crucial role in the development of several semiconductor technologies.

Since then, numerous engineers, scientists, and companies have contributed to the advancements and commercialization of CMOS image sensors. Sony is one major company known for its contributions to image sensor technology, as they introduced the first commercial CMOS image sensors in the 1990s. Other notable companies involved in image sensor development include Canon, Samsung, Panasonic, and OmniVision Technologies. It is important to note that the development of CMOS image sensors is a result of collective efforts from several individuals and organizations over the years.

Anthropic Claude: 

The invention of the CMOS image sensor can be attributed to a few key pioneers:

- Eric Fossum - While working at NASA's Jet Propulsion Laboratory in the early 1990s, Eric Fossum led the team that developed the first functioning CMOS image sensor with intra-pixel conversion. This type of active pixel sensor became the basis for modern CMOS image sensors.

- Nobukazu Teranishi - Working at Olympus in the late 1960s, Teranishi patented some of the earliest concepts for using MOS technology for imaging, laying the groundwork for future CMOS sensors.

- Peter Noble - In the late 1960s, while at Plessey Semiconductors in the UK, Peter Noble patented ideas for using MOS capacitors for image sensing. This helped establish early CMOS imaging concepts.

- Wen H. Ko and Robert C. Gee - In the late 1960s at Fairchild Semiconductor, Ko and Gee developed one of the first MOS imaging chips, building on emerging CMOS technology.

- Steve Sasson - While an engineer at Kodak in the 1970s, Sasson worked on some of the earliest CCD digital camera sensors, which inspired the transition from CCD to CMOS sensors.


So in summary, the foundational invention of CMOS image sensors relied on key contributions from Eric Fossum, Nobukazu Teranishi, Peter Noble, Wen H. Ko, and Robert C. Gee between the 1960s to 1990s, as CMOS fabrication techniques matured. The CMOS sensor revolutionized digital photography.

Google Bard:

The question of "who invented CMOS image sensors" has a slightly nuanced answer because the technology evolved through different stages and improvements by various individuals and teams. Here's a breakdown:

Conceptual foundation: The underlying technology, Complementary Metal-Oxide-Semiconductor (CMOS), was conceived by Frank Wanlass at Fairchild Semiconductor in 1963.

Active Pixel Sensor (APS) invention: While CCDs dominated early image sensors, the CMOS Active Pixel Sensor (APS), a crucial component of modern CMOS image sensors, was developed in the mid-1980s by Olympus in Japan. Tsutomu Nakamura's team fabricated the first NMOS APS in 1985.

Advancements for commercial viability: Eric Fossum, an American engineer, is often credited with making CMOS image sensors commercially viable. He and his team at the NASA Jet Propulsion Laboratory in the 1990s significantly improved the technology by inventing a new pixel design with all necessary functions on a single chip, addressing limitations like power consumption and sensitivity.

Further refinements: Since then, numerous researchers and companies have continued to contribute to the development and improvement of CMOS image sensors. These advancements have led to their widespread adoption in various applications, from digital cameras and smartphones to medical imaging and scientific instruments.

Therefore, attributing the invention solely to one person might not be entirely accurate. However, Eric Fossum's contribution in improving the technology and making it commercially viable deserves considerable recognition.

Chat engines tend to present topics as if they were students who never went to class. Do you think they did any better here?

 

Go to the original article...

Canon’s twisted photodiodes improve autofocus

Image Sensors World        Go to the original article...

IEEE Spectrum has a recent article discussing a 2023 IEDM paper from Canon.

Paper: Shirahige et al., "40-5. Cross Dual-Pixel Twisted-Photodiode Image Sensor for All-Directional Auto Focus", IEDM 2023.

Spectrum article: https://spectrum.ieee.org/autofocus-canon-twisted-diode

Diodes at Right Angles Double Autofocus Capacity: Canon twists photosensor rules to build new tech from familiar parts

Above are images of a rotating object using Canon's twisted photodiode autofocus [middle column] and a standard dual pixel autofocus [right column]. The gray column is the raw image and the top and bottom rows were taken at different times. Courtesy: Canon
 

In 2013, Canon introduced its first dual-pixel autofocus, a technology that allows almost every pixel in a photo sensor to help focus the image it takes. Now Canon researchers say they’ve developed a new improvement on their previous improvement to autofocus tech. And this new approach finds its focus faster, better, and in lower light—without requiring new components and technologies to be invented first. It simply involves one small twist.

Shirahige said they have developed a new image sensor whose photodiodes are perpendicular to each other. This “cross dual-pixel twisted-photodiode,” they note, performs better than autofocus sensors in the marketplace today, [which place] two photodiodes under a shared lens, allowing the sensor to detect when incoming light on both diodes is in phase, and therefore in focus. [An even earlier technique was to] sample [a few image] pixels to adjust the camera lens based on the contrast in the image, a slower method. [In any case, those] focusing pixels could not record image data, so there was always a trade-off between autofocusing ability and image quality. Instead, the dual-pixel autofocus approach made it possible for almost every pixel in the sensor to contribute to focusing the lens ahead of shooting, and then to contribute information to the final photo. The advantages included speed, better focus in low-light situations, and better focus across a greater fraction of the image.
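
The phase-detection principle behind dual-pixel autofocus can be illustrated with a toy example (not Canon's algorithm): the left- and right-facing photodiodes see slightly shifted copies of the scene when it is out of focus, and the shift that best aligns the two signals tells the camera how far, and in which direction, to drive the lens.

```python
# Toy illustration of phase-detect autofocus (not Canon's implementation):
# find the shift that best aligns the left- and right-photodiode signals
# along one row; its size and sign indicate the defocus amount and direction.
import numpy as np

def phase_disparity(left, right, max_shift=8):
    """Return the integer shift of `right` that best matches `left`."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = np.sum((left - np.roll(right, s)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

edge = np.zeros(64); edge[24:40] = 1.0          # a simple edge profile
left, right = np.roll(edge, -3), np.roll(edge, 3)
print(phase_disparity(left, right))              # -6: defocus amount and sign
```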

However, these multipixel photodiodes have a disadvantage: the arrangement of photodiodes favors light on one axis at the cost of the other. [...] Canon’s new structure, which they call a twisted-photodiode image sensor, stacks two identical photodiodes, one oriented to capture horizontal patterns and the other rotated ninety degrees to capture vertical patterns. Because the horizontally and vertically oriented photodiodes are the same type of component, the data each generates requires no extra processing time or power compared with any other diode in the system, so the overall autofocus speed is higher. The orthogonal-diode arrangement, by virtue of its comparative simplicity, also achieves faster readouts than quadruple or other more elaborate photodiode structures.

Canon’s team reported that their system is also much faster at capturing the electrons transferred from the photodiodes, capturing as many as 121,000 electrons with the same lag as previous photodiodes, which is more than double the capacity of comparable earlier systems.

[Canon did] not provide an estimate of when the technology might appear in commercial systems.

Go to the original article...

Job Postings – Week of 4 February 2024

Image Sensors World        Go to the original article...

Ametek – Forza Silicon

Principal Pixel Architect

Pasadena, California, USA

Link

L3Harris

Operations Manager

Katowice, Poland

Link

L3Harris

Infrared Sensor Engineering Technician D

Mason, Ohio, USA

Link

National Research Council Canada

Postdoctoral Fellowship, Quantum Plasmonic Sensors

Ottawa, Ontario, Canada

Link

GlobalFoundries Fab8

Principal Metrology Process Engineer

Malta, New York, USA

Link

SICK AG

SICK Summer University (for teens)

Waldkirch, Germany

(webpage in German)

Link

Sandia National Laboratories

Integrated Photonics Postdoctoral Appointee (US citizen)

Albuquerque, New Mexico, USA

Link

OPPO

Camera Image Quality Design and Evaluation Engineer

Yokohama, Japan

(webpage in Japanese)

Link

Brookhaven National Laboratory

Detector Development Scientist

Upton, New York, USA

Link

Go to the original article...

Job Postings from Spain

Image Sensors World        Go to the original article...

Qurv, in Barcelona, Spain, plans to expand its sensor design team with staff additions:

  • Process Integration Engineer
  • Wide-Spectrum Image Sensor CMOS Pixel Designer
  • Quantum Film Process Engineer

These openings and more are on the Qurv website.

Go to the original article...

New Videos from onsemi, AI Storm, ST, EPFL

Image Sensors World        Go to the original article...

Onsemi emphasizes its internal fab capabilities:

 

At CES, AI Storm describes the analog AI approach contained in its image sensor:
 


Also at CES, ST presents its new 3D sensing approach:
 

 

EPFL presents a combination of a spiking neuron processor and a SPAD sensor:

EPFL also presents a burst SPAD imager:


Go to the original article...

NIT and INSP collaboration on quantum dot SWIR imager

Image Sensors World        Go to the original article...

A video on the different stages in the development of sensors for infrared cameras from Institut des NanoSciences de Paris (INSP):

  

A press release from November 2023 related to this technology:

NIT and INSP will exhibit the world’s first HgTe CQD SWIR camera during the Forum Innovation Defense held in Paris on 23-28 November.

NIT (New Imaging Technologies) and INSP (Institute of Nanosciences of Paris) are proud to announce the debut of the world’s first Short-Wave Infrared (SWIR) camera featuring an innovative HgTe (Mercury Telluride) Quantum Dot focal plane array sensor. This groundbreaking technological achievement will be showcased during the Forum Innovation Defense, taking place in Paris from November 23 to November 28, 2023.

The collaboration between NIT and INSP has resulted in a pioneering SWIR infrared camera, utilizing the advanced HgTe quantum dot sensor technology, which promises unprecedented capabilities in defense and security applications.

Selected by the French Ministry of Defense, NIT, and INSP will present the culmination of years of dedicated research and development efforts in this revolutionary camera. The development of the CQD (Colloidal Quantum Dot) sensor was made possible through funding provided by the French Defense Procurement Agency (DGA) and the National Research Agency, part of a rigorous three-year R&D program.

Go to the original article...

Another Job Opening in France

Image Sensors World        Go to the original article...

Exosens - Photonis Infrared

R&D Engineer in Infrared & CMOS Image Sensor       Grenoble, France       Link

(Note: The listing at the link is in French)

Go to the original article...

Optical Imaging and Photography Book Announcement

Image Sensors World        Go to the original article...

De Gruyter published a second edition of "Optical Imaging and Photography" book by Ulrich Teubner and Hans Josef Brückner

Different imaging systems and sensors are reviewed, as well as lenses and aberrations, image intensification, and processing. The second, enlarged edition has been updated with recent developments and complemented by the topic of smartphone camera photography.

Go to the original article...

Job Postings – Week of 28 January 2024

Image Sensors World        Go to the original article...

OSRAM OS San Jose

Sensor Characterization Engineer

Boise, Idaho, USA

Link

DESY

Instrument Scientist (Gamma Rays and UV)

Hamburg, Germany

Link

Rockwell Automation

EDGE – Support Engineer (Engineer in Training position)

Mayfield Heights, Ohio, USA

Link

Jozef Stefan Institute

Cherenkov-based PET Detector R&D, Postdoc

Ljubljana, Slovenia

Link

onsemi

Entry Level Analog Engineer

Richardson, Texas, USA

Link

IDTechEx

Sensors Analyst: Market Research & Consultancy

London, England, UK

Link

Lockheed-Martin Corporation

IR Camera Systems Engineer Early Career

Santa Barbara, California, USA

Link

INSION GmbH

Engineer Product Development Spectral Sensor Technology

Obersulm, Germany

Link

Go to the original article...

SWIR Systems Announces Handheld Mobile Camera

Image Sensors World        Go to the original article...

SWIR Vision Systems Announces Acuros GO 6 MP Handheld SWIR Camera Empowering Mobile SWIR Imaging with Cutting-Edge CQD Sensor Technology


Durham, North Carolina, January 22, 2024 — SWIR Vision Systems, a leader in short-wavelength infrared (SWIR) imaging technology, proudly introduces the Acuros® GO 6 MP SWIR camera, a groundbreaking portable, handheld mirrorless camera featuring the company's high-resolution Colloidal Quantum Dot SWIR sensor technology.

The Acuros GO provides users with unprecedented flexibility, portability, and performance for diverse imaging applications and markets including defense, law enforcement, first responder applications, agricultural imaging, industrial vision, scientific, and consumer photography. 

The SWIR capabilities of the Acuros GO make it valuable for imaging through degraded visual environments such as rain, snow, haze, smog, smoke, and dust. The reduced atmospheric scattering of SWIR photons enables exceptional long-range imaging, allowing photographers to capture sweeping panoramas and immersive vistas. By combining the camera's broad spectral response with optical filters, the camera can be used for detecting and imaging moisture, sugar content, hydrocarbons, and other infrared chemical signatures.

The Acuros GO is a ruggedized, IP67-rated camera with a mirrorless design, offering versatility and durability for on-the-go imaging needs.

Key Features of the Acuros GO 6 MP Mirrorless Camera include:
  • 3064 x 2040 pixel resolution using the new 7µm pitch Acuros CQD sensor
  • Broadband spectral sensitivity from 400 nm to 1700 nm
  • Battery powered operation
  • Global snapshot shutter design with video frame rates of 30 fps
  • Digital shutter speeds up to 1/100,000 (10 us) to capture high-speed events without motion blur
  • Automatic Gain Control (AGC), Auto Exposure (AE), and dynamic sensor calibrations (NUCs) for high-quality image capture across various light intensities and environmental conditions
Ethan Klem, SWIR Vision’s Chief Technology Officer commented, “The Acuros GO brings portable infrared imaging to vision professionals and photography enthusiasts looking to leverage the capabilities of near and shortwave infrared imaging.”

For more information about the Acuros GO 6 MP SWIR Camera and SWIR Vision Systems' CQD sensor technology, please visit  www.swirvisionsystems.com/acuros-go-camera/.
 

The Camera:

 
Acuros GO 6 MP Camera Front

 
Acuros GO 6 MP Camera Back

Acuros GO 6 MP Camera Specification

 

Go to the original article...

A New FAE Opening at Sony USA

Image Sensors World        Go to the original article...

Sony Semiconductor Solutions - America

Automotive Image Sensor Field Applications Engineer     Novi, Michigan, USA    Link

Go to the original article...

Hokuyo solid-state LiDAR uses Lumotive’s beamsteering technology

Image Sensors World        Go to the original article...

From: https://hokuyo-usa.com/resources/blog/pioneering-autonomous-capabilities-solid-state-3d-lidar

Hokuyo YLM-X001

Autonomous technologies are proliferating across industries at breakneck speed. Various sectors, like manufacturing, agriculture, storage, freight, etc., are rushing to embrace robotics, automation, and self-driving capabilities.

At the helm of this autonomous transformation is LiDAR, the eyes that allow technologies to perceive and understand their surroundings. LiDAR is like a hawk scanning the landscape with sharp vision, giving clarity and insight into what stands before it. Additionally, research solidifies the claims of increasing LiDAR usage and anticipates that the global LiDAR market will reach 5.35 billion USD by 2030.

While spinning mechanical LiDAR sensors have paved the way, acting as the eyes of autonomous systems, they remain too bulky, delicate, and expensive for many real-world applications. However, new solid-state 3D LiDAR is here to change the game. These LiDARs pack thousands of tiny, durable laser beams onto a single chip to provide unmatched reliability and affordability.

How YLM-X001 3D LiDAR Range Sensor is Transforming Scanning Capabilities
The YLM-X001 outdoor-use 3D LiDAR by Hokuyo sets new standards with groundbreaking features. The range sensor has a small form factor, with dimensions of 119 (W) x 85 (D) x 79 (H), allowing it to be integrated seamlessly into any vehicle. Despite its small size, it boasts a scanning range of 120° horizontally and 90° vertically, so it can scan a larger scene and provide data in real time to avoid collisions with any object.

Furthermore, at the heart of this LiDAR range sensor is the Light Control Metasurface (LCM) beam-steering technology patented by Lumotive, Inc. The jointly developed light detection and ranging sensor steers its beam by controlling the deflection angle of liquid crystals, without relying on mechanical parts. This digital scanning technology combines a VCSEL line laser with liquid-crystal deflection, enabling the LiDAR to perform efficient 3D object recognition at high resolution.

The LCM not only eliminates mechanical components but also helps reduce multipath interference and inter-sensor interference, achieving a level of measurement stability that was previously unattainable with mechanical LiDARs.
The YLM-X001 3D LiDAR range sensor offers dynamic digital scanning, providing stable distance accuracy even in the presence of multipath and LiDAR-to-LiDAR interference. Through continuous and dynamic scanning, it can measure the distance to stationary and moving objects in the direction of travel and on the road surface.

Notable Features of YLM-X001
New and market-leading features are packed inside this LiDAR, making it a better choice than mechanical LiDARs.

  • ROS2 Compatible: The sensor supports ROS 2, the globally adopted open-source robotics software platform, making it straightforward to develop and run robotics applications (see the subscriber sketch after this list).
  • Ethernet 1000BASE-T: The 1000BASE-T Ethernet interface ensures fast, precise, and stable integration into a variety of robotic systems.
  • 0.5 m to 7 m Detection Range: The wide range makes it suitable for both close and distant monitoring.
  • Distance x 0.5% Deviation: The sensor delivers exceptional distance accuracy with a deviation of 0.5% of distance; at 5 m under 100,000 lx illumination, this corresponds to an accuracy of 25 mm.
  • 10 Hz or Higher Frame Rate: The YLM-X001 delivers real-time data for dynamic environments at a frame rate of 10 Hz or more. It offers QVGA (320 x 240) resolution in standard mode and VGA (640 x 480) in high-resolution mode, with an angular resolution of 0.375° or better (0.188° in high-resolution mode) for detailed and accurate scanning.
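
Because the sensor advertises ROS 2 compatibility, integration typically amounts to subscribing to the point cloud published by its ROS 2 driver. The minimal rclpy sketch below illustrates that pattern; the topic name '/ylm/points' is an assumed placeholder for illustration and should be replaced with whatever topic the actual Hokuyo driver publishes.

    # Minimal ROS 2 (rclpy) subscriber sketch for a ROS 2-compatible LiDAR.
    # The topic name '/ylm/points' is an assumption for illustration only;
    # check the sensor's ROS 2 driver documentation for the real topic name.
    import rclpy
    from rclpy.node import Node
    from sensor_msgs.msg import PointCloud2

    class LidarListener(Node):
        def __init__(self):
            super().__init__('lidar_listener')
            # Queue depth of 10 is a typical default for sensor streams
            self.sub = self.create_subscription(
                PointCloud2, '/ylm/points', self.on_cloud, 10)

        def on_cloud(self, msg: PointCloud2):
            # For an organized cloud, width x height gives the number of points
            self.get_logger().info(
                f'Cloud received: {msg.width} x {msg.height} points')

    def main():
        rclpy.init()
        node = LidarListener()
        rclpy.spin(node)
        node.destroy_node()
        rclpy.shutdown()

    if __name__ == '__main__':
        main()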

Using 3D LiDAR in Real World Applications
The YLM-X001 finds its stride in various applications, making it an invaluable asset in robotics.

AGV/AMR Integration
Our 3D LiDAR sensors enhance the precision of AGV/AMR navigation and obstacle detection. They continuously scan the environment and provide real-time data, which is ideal for autonomous vehicles operating in dynamic environments.
Additionally, fork trucks can use 3D LiDAR to accurately detect container and pallet entrances, plan paths, and maintain an accurate forklift position.

Service Robot Operations
Robots equipped with 3D LiDAR gain an enhanced framework for avoiding obstacles and monitoring road-surface conditions. Whether navigating complex indoor or outdoor spaces, these robots can adapt to changing conditions with unmatched accuracy.

Enhance Autonomous Mobility with Hokuyo YLM-X001 3D LiDAR
As industries embrace autonomous technology, the need for accurate range scanning sensors increases. Solid-state LiDARs offer a small form factor and precise measurements, becoming an ideal replacement for mechanical LiDARs.

Our team at Hokuyo is working relentlessly to help you achieve the pinnacle of autonomous mobility. We are developing high-end sensor solutions for a variety of autonomous applications. Our recent development, the YLM-X001 3D LiDAR range sensor, provides accurate obstacle detection and continuous scanning.

Technical specifications of the YLM-X001 3D LiDAR range sensor: https://www.hokuyo-aut.jp/search/single.php?serial=247#drawing

Go to the original article...

Two Job Postings from Employers in France

Image Sensors World        Go to the original article...

CEA Leti

CMOS Imagers Senior Expert             Grenoble, France            Link

ISAE SUPAERO

Postdoctoral Position                        Toulouse, France            Link
"Characterization of avalanche effects and radiation robustness evaluation of charge transfer
devices for imagers dedicated to space applications" (EU citizenship required)

Go to the original article...

Paper on non-toxic quantum dot SWIR sensors in Nature Photonics

Image Sensors World        Go to the original article...

In a paper titled "Silver telluride colloidal quantum dot infrared photodetectors and image sensors" Wang et al. from ICFO, ICREA, and Qurv Technologies (Spain) write:

Photodetectors that are sensitive in the shortwave-infrared (SWIR) range (1–2 µm) are of great interest for applications such as machine vision, autonomous driving and three-dimensional, night and adverse weather imaging, among others. Currently available technologies in the SWIR range rely on costly epitaxial semiconductors that are not monolithically integrated with complementary metal–oxide–semiconductor electronics. Solution-processed quantum dots can address this challenge by enabling low-cost manufacturing and simple monolithic integration on silicon in a back-end-of-line process. So far, colloidal quantum dot materials to access the SWIR regime are mostly based on lead sulfide and mercury telluride compounds, imposing major regulatory concerns for their deployment in consumer electronics due to the presence of toxic heavy metals. Here we report a new synthesis method for environmentally friendly silver telluride quantum dots and their application in high-performance SWIR photodetectors. The colloidal quantum dot photodetector stack employs materials compliant with the Restriction of Hazardous Substances directives and is sensitive in the spectral range from 350 nm to 1,600 nm. The room-temperature detectivity is of the order of 10^{12} Jones, the 3 dB bandwidth is in excess of 0.1 MHz and the linear dynamic range is over 118 dB. We also realize a monolithically integrated SWIR imager based on solution-processed, toxic-heavy-metal-free materials, thus paving the way for this technology to the consumer electronics market.
Full paper (behind paywall): https://www.nature.com/articles/s41566-023-01345-3

Coverage in phys.org:  https://phys.org/news/2024-01-toxic-quantum-dots-pave-cmos.html

Non-toxic quantum dots pave the way towards CMOS shortwave infrared image sensors for consumer electronics

Invisible to our eyes, shortwave infrared (SWIR) light can enable unprecedented reliability, functionality, and performance in high-volume, computer-vision-first applications in the service robotics, automotive, and consumer electronics markets.

Image sensors with SWIR sensitivity can operate reliably under adverse conditions such as bright sunlight, fog, haze and smoke. Furthermore, the SWIR range provides eye-safe illumination sources and opens up the possibility of detecting material properties through molecular imaging.

Colloidal quantum dot (CQD)-based image sensor technology offers a promising platform for enabling high-volume-compatible image sensors in the SWIR.

CQDs, nanometre-scale semiconductor crystals, form a solution-processed material platform that can be integrated with CMOS and enables access to the SWIR range. However, a fundamental roadblock exists in translating SWIR-sensitive quantum dots into a key enabling technology for mass-market applications, as they often contain heavy metals such as lead or mercury (Pb- and Hg-chalcogenide semiconductors).
These materials are regulated under the Restriction of Hazardous Substances (RoHS) directive, which restricts their use in commercial consumer electronics in Europe.

In a study published in Nature Photonics, ICFO researchers Yongjie Wang, Lucheng Peng, and Aditya Malla led by ICREA Prof. at ICFO Gerasimos Konstantatos, in collaboration with researchers Julien Schreier, Yu Bi, Andres Black, and Stijn Goossens, from Qurv, have reported on the development of high-performance infrared photodetectors and an SWIR image sensor operating at room temperature based on non-toxic colloidal quantum dots.


The study describes a new method for synthesizing size-tunable, phosphine-free silver telluride (Ag2Te) quantum dots while preserving the advantageous properties of traditional heavy-metal counterparts, paving the way for the introduction of SWIR colloidal quantum dot technology in high-volume markets.
While investigating how to synthesize silver bismuth telluride (AgBiTe2) nanocrystals to extend the spectral coverage of their AgBiS2 technology and enhance the performance of photovoltaic devices, the researchers obtained silver telluride (Ag2Te) as a by-product.

This material showed a strong and tunable quantum-confined absorption akin to quantum dots. They realized its potential for SWIR photodetectors and image sensors and pivoted their efforts to achieve and control a new process to synthesize phosphine-free versions of silver telluride quantum dots, as phosphine was found to have a detrimental impact on the optoelectronic properties of the quantum dots relevant to photodetection.

In their new synthetic method, the team used different phosphine-free complexes, such as tellurium and silver precursors, which led them to obtain quantum dots with a well-controlled size distribution and excitonic peaks across a very broad range of the spectrum.

Once fabricated and characterized, the newly synthesized quantum dots exhibited remarkable performance, with distinct excitonic peaks beyond 1,500 nm, an unprecedented achievement compared with previous phosphine-based quantum dot syntheses.

The researchers then implemented the phosphine-free quantum dots in a simple laboratory-scale photodetector on a standard ITO (indium tin oxide)-coated glass substrate in order to characterize the devices and measure their properties.

"Those lab-scale devices are operated with shining light from the bottom. For CMOS integrated CQD stacks, light comes from the top, whereas the bottom part of the device is taken by the CMOS electronics," said Yongjie Wang, postdoc researcher at ICFO and first author of the study. "So, the first challenge we had to overcome was reverting the device setup. A process that in theory sounds simple, but in reality proved to be a challenging task."

Initially, the photodiode exhibited low performance in sensing SWIR light, prompting a redesign that incorporated a buffer layer. This adjustment significantly enhanced photodetector performance, resulting in a SWIR photodiode with a spectral range from 350 nm to 1,600 nm, a linear dynamic range exceeding 118 dB, a -3 dB bandwidth surpassing 110 kHz, and a room-temperature detectivity of the order of 10^{12} Jones.
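
To put those figures of merit in perspective, the sketch below converts the reported 118 dB linear dynamic range into an intensity ratio and relates the 10^{12} Jones detectivity to a noise-equivalent power. The 1 mm^2 pixel area and 1 Hz bandwidth are assumed values used only for illustration, and the dB convention (20 log10 versus 10 log10) varies between photodetector papers, so both readings are shown.

    # Back-of-the-envelope interpretation of the reported figures of merit.
    # The detector area (1 mm^2) and bandwidth (1 Hz) are assumed values, and
    # photodetector papers quote linear dynamic range (LDR) with either a
    # 20*log10 or 10*log10 convention, so both readings are shown.
    import math

    ldr_db = 118.0
    ratio_20log = 10 ** (ldr_db / 20)   # ~7.9e5  (20*log10 convention)
    ratio_10log = 10 ** (ldr_db / 10)   # ~6.3e11 (10*log10 convention)

    # Specific detectivity D* (Jones = cm*Hz^0.5/W) relates detector area A,
    # bandwidth B, and noise-equivalent power:  D* = sqrt(A * B) / NEP
    d_star = 1e12          # Jones, as reported
    area_cm2 = 0.01        # assumed 1 mm^2 pixel area
    bandwidth_hz = 1.0     # assumed 1 Hz measurement bandwidth
    nep_w = math.sqrt(area_cm2 * bandwidth_hz) / d_star   # ~1e-13 W

    print(f"LDR ratio: {ratio_20log:.2e} (20log) or {ratio_10log:.2e} (10log)")
    print(f"Implied NEP for assumed area/bandwidth: {nep_w:.1e} W")
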

"To the best of our knowledge, the photodiodes reported here have for the first time realized solution processed, non-toxic shortwave infrared photodiodes with figures of merit on par with other heavy-metal containing counterparts," Gerasimos Konstantatos, ICREA Prof. at ICFO and leading author of the study mentions.

"These results further support the fact that Ag2Te quantum dots emerge as a promising RoHS-compliant material for low-cost, high-performance SWIR photodetectors applications."
With the successful development of this heavy-metal-free quantum dot based photodetector, the researchers went further and teamed up with Qurv, an ICFO spin-off, to demonstrate its potential by constructing a SWIR image sensor as a case study.

The team integrated the new photodiode with a CMOS-based read-out integrated circuit (ROIC) focal plane array (FPA), demonstrating for the first time a proof-of-concept, non-toxic, room-temperature SWIR quantum dot image sensor.

The authors of the study tested the imager to prove its operation in the SWIR by taking several pictures of a target object. In particular, they were able to image the transmission of silicon wafers under the SWIR light as well as to visualize the content of plastic bottles that were opaque in the visible light range.

"Accessing the SWIR with a low-cost technology for consumer electronics will unleash the potential of this spectral range with a huge range of applications including improved vision systems for automotive industry (cars) enabling vision and driving under adverse weather conditions," says Gerasimos Konstantatos.

"SWIR band around 1.35–1.40 µm, can provide an eye-safe window, free of background light under day/night conditions, thus, further enabling long-range light detection and ranging (LiDAR), three-dimensional imaging for automotive, augmented reality and virtual reality applications."
Now the researchers want to increase the performance of photodiodes by engineering the stack of layers that comprise the photodetector device. They also want to explore new surface chemistries for the Ag2Te quantum dots to improve the performance and the thermal and environmental stability of the material on its way to the market.

 

Go to the original article...

Job Postings – Week of 21 January 2024

Image Sensors World        Go to the original article...

Surrey Satellite Technology Ltd.

Imager Electronics Engineer

Guildford, Surrey, UK

Link

Booz Allen Hamilton

Electro-Optical and Infrared Subject Matter Expert

Crane, Indiana, USA

Link

SOITEC

BU Director Mixed Signal

Singapore or Grenoble, France

Link

Space Dynamics Laboratory

Imaging Sensor and Detector Engineer

Logan, Utah, USA

Link

University of Science and Technology of China

Postdoctoral R&D of Monolithic Active Pixel Sensors

Hefei, Anhui, China

Link

Nokia

Silicon Photonics Design Engineer

New York, New York, USA

Link

Nokia

Silicon Photonics Design Summer Co-op

New York, New York, USA

Link

Blue River Technology

Camera Systems Engineer

Santa Clara, California, USA

Link

Thorlabs – Imaging Systems

Summer Intern

Sterling, Virginia, USA

Link

Go to the original article...
