Event cameras for GPS-free drone navigation

Image Sensors World        Go to the original article...

Link: https://spectrum.ieee.org/drone-gps-alternatives

A recent article in IEEE Spectrum titled "Neuromorphic Camera Helps Drones Navigate Without GPS: High-end positioning tech comes to low-cost UAVs" discusses efforts to use neuromorphic cameras for GPS-free drone navigation.

Some excerpts:

[GPS] signals are vulnerable to interference from large buildings, dense foliage, or extreme weather and can even be deliberately jammed. [GPS-free navigation systems that rely only on] accelerometers and gyroscopes [suffer from] errors [that] accumulate over time and can ultimately cause a gradual drift. ... Visual navigation systems [consume] considerable computing and data resources.
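The drift problem quoted above can be made concrete with a toy dead-reckoning calculation; the bias value below is an illustrative assumption, not a figure from the article. A small constant accelerometer bias, integrated twice, produces a position error that grows quadratically with time:

```python
# Illustrative dead-reckoning drift: a constant accelerometer bias,
# integrated twice, gives a position error of 0.5 * bias * t^2.
bias = 0.01      # m/s^2, assumed residual accelerometer bias
dt = 0.01        # s, integration step
velocity_err = 0.0
position_err = 0.0
for step in range(60 * 100):      # simulate 60 seconds of flight
    velocity_err += bias * dt     # integrate bias into velocity error
    position_err += velocity_err * dt  # integrate velocity into position error

print(f"position error after 60 s: {position_err:.1f} m")  # 18.0 m
# Matches the closed form 0.5 * bias * t^2 = 0.5 * 0.01 * 3600 = 18 m.
```

Even a centimeter-per-second-squared bias drifts tens of meters within a minute, which is why an INS needs periodic position fixes from another source.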

A pair of navigation technology companies has now teamed up to merge the approaches and get the best of both worlds. NILEQ, a subsidiary of British missile-maker MBDA based in Bristol, UK, makes a low-power visual navigation system that relies on neuromorphic cameras. This will now be integrated with a fiber optic-based INS developed by Advanced Navigation in Sydney, Australia, to create a positioning system that lets low-cost drones navigate reliably without GPS.

[...]

[Their proprietary algorithms] process the camera output in real-time to create a terrain fingerprint for the particular patch of land the vehicle is passing over. This is then compared against a database of terrain fingerprints generated from satellite imagery, which is stored on the vehicle. [...]
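NILEQ's algorithms are proprietary, but the fingerprint-and-lookup idea can be sketched roughly. Everything below (the binary descriptor, patch size, and database layout) is a hypothetical illustration, not NILEQ's method:

```python
import numpy as np

rng = np.random.default_rng(0)

def fingerprint(patch):
    """Hypothetical terrain fingerprint: threshold a terrain patch
    against its own mean to get a compact binary descriptor."""
    return (patch > patch.mean()).flatten()

# Database of fingerprints built offline from satellite imagery
# (simulated here with random terrain tiles).
tiles = [rng.random((8, 8)) for _ in range(100)]
database = np.array([fingerprint(t) for t in tiles])

# In flight: the camera observes tile 42 with some sensor noise.
observed = tiles[42] + rng.normal(0, 0.05, (8, 8))
query = fingerprint(observed)

# Match by minimum Hamming distance against the stored fingerprints.
distances = (database != query).sum(axis=1)
best = int(distances.argmin())
print(best)  # expected: 42
```

Binary descriptors keep the on-board database small and make matching cheap, which fits the low-power framing of the article.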

The companies are planning to start flight trials of the combined navigation system later this year, adds Shaw, with the goal of getting the product into customers' hands by the middle of 2025.


Quantum Solutions announces SWIR camera based on quantum dot technology


Link: https://quantum-solutions.com/product/q-cam-swir-camera/#description

Oxford, UK – November 26, 2024 – Quantum Solutions proudly announces the release of the Q.Cam™, an advanced Short-Wave Infrared (SWIR) camera designed for outdoor applications.

Redefining SWaP for Outdoor Applications:
The Q.Cam™ sets a new standard for low Size, Weight, and Power (SWaP) in SWIR cameras, making it ideal for outdoor applications where space is limited and visibility in challenging conditions like smoke, fog, and haze is crucial.

Developed in collaboration with a leading partner, the Q.Cam™ is the first USB 3.0 camera featuring Quantum Solutions’ state-of-the-art Quantum Dot SWIR sensor, offering VGA resolution (640 x 512 pixels) with a wide spectral range of 400 nm to 1700 nm.

The Q.Cam™ is incredibly compact, weighing only 35 grams with dimensions of 35 x 25 x 25 mm³, making it perfect for integration in space-constrained environments. Its TEC-less design minimizes power consumption to an impressive <1.3 Watts, ideal for battery-powered operation.

Overcoming Outdoor Challenges:
Using SWIR cameras outdoors has traditionally been challenging due to varying lighting conditions and temperature-related image quality fluctuations that require re-calibration of the camera to adjust to changing conditions. The Q.Cam™ addresses these issues with its advanced image correction technology, which automatically adjusts for factors like gain, temperature offset, and illumination. The camera can perform more than 150 automatic calibrations on the fly, ensuring consistent, high-quality images even in challenging and constantly changing outdoor environments. This advanced correction capability enables a TEC-less design, significantly reducing power consumption without compromising image quality.
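The release does not describe the correction math, but a classic two-point (gain and offset) non-uniformity correction of the general kind it alludes to can be sketched as follows; all values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical per-pixel two-point correction:
#   corrected = (raw - offset) * gain_map
shape = (4, 4)
true_scene = np.full(shape, 100.0)   # uniform 100-unit scene

# Simulated fixed-pattern non-uniformity: each pixel has its own
# gain and dark offset (values are illustrative).
pixel_gain = rng.normal(1.0, 0.1, shape)
pixel_offset = rng.normal(10.0, 2.0, shape)
raw = true_scene * pixel_gain + pixel_offset

# Calibration frames: a dark frame (no light) and a flat field
# (uniform known illumination).
dark = pixel_offset                        # response to zero light
flat = 100.0 * pixel_gain + pixel_offset   # response to the 100-unit field

gain_map = 100.0 / (flat - dark)
corrected = (raw - dark) * gain_map
print(np.allclose(corrected, true_scene))  # True
```

In a real TEC-less camera the offset and gain maps drift with sensor temperature, which is why the correction has to be re-derived on the fly rather than stored once at the factory.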


The integration of proprietary Quantum Dot technology allows Quantum Solutions to offer the Q.Cam™ as a cost-effective and accessible solution for bringing SWIR imaging to a wider range of outdoor applications.

Seamless Integration and Flexibility:
The Q.Cam™ comes equipped with a user-friendly USB 3.0 interface, a Graphical User Interface (GUI), and Python scripts for easy integration and control.

ITAR-Free and Ready for Global Deployment:
The Q.Cam™ is an ITAR-free product with a short lead time of 3 weeks, making it readily available for global deployment in a variety of sectors, including:
• Security and Surveillance
• Defence
• Search and Rescue
• Environmental Monitoring
• Robotics and Machine Vision
• Automotive

Key Features of Q.Cam™:
• Quantum Dot SWIR Sensor: 640 x 512 pixels, 400 nm - 1700 nm spectral range
• Best-in-class SWaP: 35 g, 35 x 25 x 25 mm³, <1.3 W power consumption
• Built-in Automatic Image Correction: Up to 150+ automatic image corrections (Gain, Offset, Temperature, and Illumination)
• Cost-Effective and Accessible: Among the most affordable SWIR cameras available in the market
• Frame Rate up to 60 Hz; Global Shutter
• Operating Temperature: -20°C to 50°C


Video of the Day: tutorial on iToF imagers



Abstract:
"Indirect Time of Flight 3D imaging is an emerging technology used in 3D cameras. The technology is based on measuring the time of flight of modulated light. It allows to generate fine grain depth images with several hundreds of thousand image points. I-TOF has become a standard solution for face recognition and authentication. Recently I-TOF is also used in various new applications, such as computational photography, gesture recognition and robotics. This talk will introduce the basic operation principle of an I-TOF 3D imager IC. The integrated building blocks will be discussed and the analog operation of an I-TOF pixel will be addressed in detail. System level topics of the camera module will also be covered to provide a complete overview of the technology."
This presentation was recorded as part of the lecture "Selected Topics of Advanced Analog Chip Design" from the Institute of Electronics at TU Graz.
Special thanks to Dr. Timuçin Karaca for the insightful presentation.
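As a rough illustration of the four-phase iToF principle the talk covers: the sensor correlates the received modulated light at four phase offsets (0°, 90°, 180°, 270°) and recovers depth from the measured phase shift. The modulation frequency and depth below are assumed values for the sketch:

```python
import math

c = 299_792_458.0   # speed of light, m/s
f_mod = 100e6       # modulation frequency, an illustrative assumption
                    # (unambiguous range = c / (2 * f_mod) = 1.5 m)

true_depth = 0.6    # m, within the unambiguous range
phase = 4 * math.pi * f_mod * true_depth / c  # round-trip phase shift

# Ideal four-phase correlation samples of a sinusoidal envelope.
a0 = math.cos(phase)
a1 = math.cos(phase - math.pi / 2)
a2 = math.cos(phase - math.pi)
a3 = math.cos(phase - 3 * math.pi / 2)

# a0 - a2 = 2*cos(phase), a1 - a3 = 2*sin(phase), so atan2 recovers phase.
measured_phase = math.atan2(a1 - a3, a0 - a2)
depth = c * measured_phase / (4 * math.pi * f_mod)
print(f"recovered depth: {depth:.3f} m")  # 0.600
```

The four-sample scheme also cancels the unknown signal amplitude and background offset, which is why it is the standard pixel readout for iToF.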


Exosens (prev. Photonis) acquires Noxant


News link: https://optics.org/news/15/11/29

Exosens eyes further expansion with Noxant deal
20 Nov 2024

French imaging and analytical technology group aiming to add MWIR camera specialist to growing portfolio.

Exosens, the France-based technology group previously known as Photonis, is set to further grow its burgeoning camera portfolio with the acquisition of Noxant.

Located in the Paris suburbs, Noxant specializes in high-performance cooled imagers operating at mid-infrared wavelengths.

The agreement between the two firms allows Exosens to enter into exclusive negotiations to pursue the acquisition, and if consummated it would complement existing camera expertise in the form of Xenics, Telops, and another pending acquisition, Night Vision Laser Spain (NVLS).

Gas imaging
Noxant sells its range of cameras for applications including surveillance, scientific research, industrial testing, and gas detection - the latter said to represent a “strong synergistic addition” to Exosens’ existing camera offering.

Exosens CEO Jérôme Cerisier said: “Through this acquisition, we would broaden Exosens' technological spectrum by offering cutting-edge cooled infrared solutions to meet the growing demands of our OEM customers.

“Noxant's expertise in cooled infrared technology aligns perfectly with our mission to deliver high-performance, reliable imaging solutions for critical applications.

“Furthermore, the synergies between Noxant and Telops would strengthen our research and development capabilities and accelerate our innovation in infrared technologies.”

At the moment Noxant serves OEMs primarily, whereas Telops tends to target end users, meaning opportunities for cross-selling under the Exosens umbrella organization.

Its products include the “NoxCore” range of camera cores, “NoxCam” cameras, and the “GasCore” series of high-performance optical gas imaging cameras. Offering a spectral range of 3-5 µm in the MWIR or 7-10 µm in the long-wave infrared (LWIR), these are able to image a large number of process and pollutant gases including methane, carbon dioxide, and nitrous oxide.

Commenting on the likely business combination, Noxant chairman Laurent Dague suggested that joining forces with Exosens would represent a “perfect match”, and a deal that would enable Noxant to continue delivering advanced cooled infrared technology while benefiting from Exosens' much larger scale and customer reach.

Growing business
While Noxant’s 22 employees generated annual revenues of approximately €12 million in the 12 months ending June 2024, Exosens’ most recent financial results showed sales of €274 million for the nine months up to September 30 this year.

That figure represented a 33 per cent jump on the same period in 2023, largely due to much higher sales of the firm’s microwave amplification products, which contributed €200 million to the total.

Meanwhile Exosens’ detection and imaging businesses contributed close to €77 million, up from €47 million for the same nine-monthly period last year - partly through the addition of Telops and Photonis Germany (formerly ProxiVision).

Not all of those sales relate to optical technology, with the company also selling neutron and gamma-ray detectors used in the nuclear industry.

Last month Exosens announced that it had signed a definitive agreement to acquire NVLS, which produces man-portable night vision and thermal devices from its base in Madrid.

That deal should see NVLS further develop its business in Spain, Latin America and Asia, while also broadening Exosens’ know-how in optical and mechanical technologies.


Gpixel announces GSPRINT5514 global shutter CIS


Press release: https://www.einpresswire.com/article/761834209/gsprint5514-a-new-high-sensitivity-14mp-bsi-global-shutter-cis-targeting-high-speed-machine-vision-and-4k-video

GSPRINT5514, a new High Sensitivity 14MP BSI Global Shutter CIS targeting high-speed machine vision and >4k video.

CHANGCHUN, CHINA, November 19, 2024 /EINPresswire.com/ -- Gpixel announces GSPRINT5514BSI, the fifth sensor in the popular GSPRINT series of high-speed global shutter CMOS image sensors. The sensor is pin compatible with GSPRINT4510 and GSPRINT4521 for easy design into existing camera platforms.

GSPRINT5514BSI features 4608 x 3072 pixels, each 5.5 µm square – a 4/3 aspect ratio 4k sensor compatible with APS-C optics. With 10-bit output GSPRINT5514BSI achieves 670 frames per second. In 12-bit mode the sensor outputs 350 fps.

Using backside illumination technology, the sensor achieves 86% quantum efficiency at 510 nm and 17% at 200 nm for UV applications. The sensor offers dual gain HDR readout, maximizing 15 ke- full well capacity with a minimum < 2.0 e- noise to achieve an outstanding 78.3 dB of dynamic range. Analog 1x2 binning increases the full well capacity to 30 ke-.
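As a quick sanity check, the quoted dynamic range follows from DR = 20·log10(FWC / read noise) using the figures in the release:

```python
import math

# Dynamic range from full well capacity and read noise:
#   DR = 20 * log10(FWC / noise_rms)
fwc = 15_000      # e-, full well capacity
noise = 2.0       # e-, the stated upper bound ("< 2.0 e-")
dr_db = 20 * math.log10(fwc / noise)
print(f"{dr_db:.1f} dB")  # 77.5 dB at exactly 2.0 e- noise

# The quoted 78.3 dB implies a noise floor slightly below the 2.0 e- bound:
implied_noise = fwc / 10 ** (78.3 / 20)
print(f"implied noise: {implied_noise:.2f} e-")  # 1.82 e-
```

So the 78.3 dB figure is consistent with the "< 2.0 e-" noise specification, corresponding to roughly 1.8 e- rms.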

Up to 8 vertically oriented regions of interest can be defined to operate the sensor at increased frame rates. The image data is output via 84 sub-LVDS channels at 1.2 Gbps. For applications in which the maximum frame rate is not required, multiplexing modes are available to reduce the number of output channels by any multiple of two.
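A back-of-the-envelope check (raw pixel payload only, ignoring framing overhead) shows the 84-channel interface comfortably covers the stated 10-bit pixel rate:

```python
# Sanity check: does the sub-LVDS interface cover the stated pixel rate?
channels = 84
rate_per_channel = 1.2e9            # bits/s per sub-LVDS channel
interface_bw = channels * rate_per_channel

pixels = 4608 * 3072                # sensor resolution
fps_10bit = 670                     # frame rate in 10-bit mode
pixel_data_rate = pixels * fps_10bit * 10   # bits/s of raw pixel data

print(f"interface:  {interface_bw / 1e9:.1f} Gbps")     # 100.8 Gbps
print(f"pixel data: {pixel_data_rate / 1e9:.1f} Gbps")  # 94.8 Gbps
```

The ~6 Gbps of headroom leaves room for line blanking and protocol overhead.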

The sensor features an on-chip sequencer, SPI control, PLL, and both analog and digital temperature sensors.
“The GSPRINT family of image sensors have enabled new use cases in high-speed machine vision and offer unprecedented value to the 4k video market,” says Wim Wuyts, Gpixel’s Chief Commercial Officer. “We will continue to expand this product line to meet the needs of customers across the growing diversity of applications demanding high speed, excellent image quality, and a high dynamic range. From a technology perspective we are proud to extend our GSPRINT series with the second BSI Global Shutter product, opening a wavelength extension into DUV.”

The GSPRINT5514BSI is available in monochrome or color variants with either sealed or removable cover glass and is assembled in a 454-pin µPGA package.
Samples and evaluation systems are available now.




Sony releases IMX925 stacked global-shutter CIS


Press release: https://www.sony-semicon.com/en/news/2024/2024111901.html

Product page: https://www.sony-semicon.com/en/products/is/industry/gs/imx925-926.html

Sony Semiconductor Solutions to Release an Industrial CMOS Image Sensor with Global Shutter for High-Speed Processing and High Pixel Count Offering an Expanded, High-Precision Product Lineup Supporting Faster Recognition and Inspection

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX925 stacked CMOS image sensor with back-illuminated pixel structure and global shutter. This new product offers 394 fps high-speed processing and a high, 24.55-effective-megapixel count and is optimized for industrial equipment imaging.

The new sensor product is equipped with the Pregius S™ global shutter technology made possible by SSS’s original pixel structure, delivering a compact design with minimal noise and high-quality imaging performance. It also employs a new circuit structure that optimizes pixel reading and sensor drive in the A/D converter, making processing approximately four times faster and twice as energy efficient as conventional products.

Along with the IMX925, SSS will also release three models with different sensor sizes and frame rates. The expanded product lineup will help make recognition and inspection tasks faster and more precise, improving productivity in the industrial equipment domain, where this kind of superior performance is increasingly in demand.

With factory automation progressing, demand continues to grow for machine vision cameras capable of fast, high-quality imaging for a variety of objects in the industrial equipment domain. By employing a global shutter capable of capturing moving subjects free of distortion together with a proprietary back-illuminated pixel structure, SSS’s global-shutter CMOS image sensors deliver superb pixel characteristics, including high sensitivity and saturation capacity. They are mainly being used to recognize and inspect precision components such as electronic devices.

The IMX925 sensor is compact enough to be C mount compatible, the most common mounting standard for machine vision cameras. It has a total of 24.55 effective megapixels and offers a higher frame rate than previous models thanks to the enhanced high-speed signal processing. These features enable increased image capture per unit of time, thereby reducing measurement and inspection process times and helping to save energy. The product is also expected to be useful in advanced inspection processes such as 3D inspections which employ multiple image data.

Main Features
■New circuit structure with optimized sensor drive for high-speed imaging and power saving
The new sensor models employ a new circuit structure that optimizes pixel reading and sensor drive in the A/D converter. Reducing the data output time enables high-speed imaging, so the IMX925 delivers a frame rate of 394 fps, about four times faster than conventional products. Power consumption is also more than twice as efficient as on conventional products. The reduced power consumption and shorter measurement and inspection times will contribute to improved productivity in various applications.
■Global shutter with original pixel structure for high-definition imaging in a compact package
The new products are equipped with SSS’s proprietary Pregius S global shutter technology. The back-illuminated pixels and stacked structure enable high sensitivity and saturation capacity on very small, 2.74 µm pixels. This structure delivers 24.55 effective megapixels on the IMX925 in a C-mount-compatible 1.2-type size, delivering a high pixel count in a compact package. This design also ensures that the sensors can capture fast-moving objects free of distortion, which in turn makes the products highly useful in compact, high-definition machine vision cameras that can be easily installed on equipment and manufacturing lines.
■Higher data transmission per lane for higher camera precision and speed
The new products employ SSS’s own embedded clock high-speed interface SLVS-EC™, which supports up to 12.5 Gbps/lane. With high-resolution image data transmitted on fewer data lanes than in the past, FPGA options are expanded, supporting the development of high-precision, high-speed cameras.
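A rough lane-count estimate illustrates why fewer lanes suffice at 12.5 Gbps/lane. The 10-bit depth is an assumption for the sketch (the release does not state the output bit depth), and protocol overhead is ignored:

```python
import math

# Rough SLVS-EC lane-count estimate for the stated resolution and frame rate.
pixels = 24.55e6       # effective pixel count
fps = 394              # frames per second
bits_per_pixel = 10    # an assumption; not stated in the release

data_rate = pixels * fps * bits_per_pixel   # bits/s of raw pixel data
lanes = math.ceil(data_rate / 12.5e9)       # lanes at 12.5 Gbps each
print(f"{data_rate / 1e9:.1f} Gbps -> {lanes} lanes")  # 96.7 Gbps -> 8 lanes
```

At an older 5 Gbps/lane rate the same payload would need 20 lanes, which is the FPGA-pin saving the release is pointing at.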

 




2029 forecast: Image sensors market will be worth $29.62B


Link: https://www.novuslight.com/image-sensors-market-worth-29-62-billion-by-2029_N13346.html

The global image sensor market is expected to be valued at USD 20.66 billion in 2024 and is projected to reach USD 29.62 billion by 2029, growing at a CAGR of 7.5% from 2024 to 2029, according to a new report by MarketsandMarkets. New applications across industries and technological advancements in image sensor product offerings are key factors driving the market's expansion. Restraints such as high manufacturing costs hinder growth, while integration with other technologies provides lucrative opportunities for market players in the coming years.
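The quoted CAGR checks out against the two market values:

```python
# Compound annual growth rate: (end / start) ** (1 / years) - 1
start, end, years = 20.66, 29.62, 5   # USD billions, 2024 -> 2029
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr * 100:.1f}%")  # 7.5%
```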

Area Scan image sensors by array type to hold the highest CAGR during the forecast period.
Area scan image sensors are expected to post the highest CAGR in the image sensor market thanks to versatile applications across numerous industries. Because they capture images in a two-dimensional format, area scan sensors are widely used for machine vision in manufacturing, quality assurance, and automated inspection systems. Growing demand for process automation is a key driver: area scan sensors enable high-speed image acquisition with good measurement accuracy, both essential for maintaining product quality and operational efficiency. Improvements in resolution and sensitivity, along with AI integration, are further boosting their performance. Their capacity for high-speed, real-time image processing supports applications in the automotive, healthcare, and logistics sectors, where swift decision-making is important, and the rise of smart factories and Industry 4.0 projects has increased demand for area scan sensors in automation and data analytics. As more industries embrace high-performance imaging solutions, area scan image sensors are positioned to lead the market in both growth rate and innovation.


More than 16 MP by resolution to exhibit highest market share during the forecast period
Over the next few years, image sensors with more than 16 MP resolution are likely to dominate the market because they meet fast-growing demands for high-quality imaging across applications. Manufacturers are increasingly incorporating higher-resolution sensors into smartphones, digital cameras, and professional equipment in response to consumer demand for superior image quality, and the proliferation of social media and digital content creation further fuels demand for striking images and videos. Other drivers come from the automotive, healthcare, and security fields: advanced driver-assistance systems (ADAS) require detailed imaging of lanes and pedestrians, which calls for higher pixel counts, while medical imaging devices need high resolution for accurate diagnostics. Improved low-light sensitivity and higher readout speeds make high-resolution sensors even more attractive. The above-16 MP segment of the image sensor market is therefore expected to grow significantly, in line with industry trends.


Industrial sector to hold the highest CAGR during the forecast period
The industrial sector, the largest application segment for image sensors, is anticipated to post the highest CAGR in the image sensor market over the next few years due to the growing adoption of automation, robotics, and machine vision systems. Rising demands for efficiency and precision in manufacturing create enormous demand for advanced imaging technologies: image sensors play an important role in quality control, enabling real-time inspection and monitoring of products to ensure compliance with stringent quality standards. The rise of Industry 4.0, which converges IoT devices and smart technologies in manufacturing, further drives demand for high-performance image sensors capable of collecting more data for predictive analytics that minimize downtime.
Furthermore, growing applications in autonomous vehicles, logistics, and warehousing contribute significantly to the need for advanced imaging solutions. Industrial applications will be transformed by sensor technologies such as 3D imaging and AI-enhanced vision systems, which promise clear gains in operational efficiency.
 

Asia Pacific in the image sensor industry to exhibit the highest CAGR during the forecast period
Asia Pacific is expected to exhibit the highest CAGR in image sensors over the coming years, spurred by several strong regional drivers. The region hosts leading electronics manufacturing bases in strong economies such as China, Japan, South Korea, and Taiwan, and sustained investment in research and development is accelerating image sensor innovation in these markets. Growth is driven by increasing demand for consumer electronics, including smartphones and tablets with high-quality cameras, as consumers expect better imaging. Rapid industrialization and urbanization also contribute, with image sensors adopted across many industries, including automotive, healthcare, and surveillance. The automotive segment is booming, especially for ADAS, which relies on high-quality image sensors for safety features. Furthermore, government initiatives such as smart city projects encourage surveillance and monitoring solutions, further raising demand in the image sensor market. Collectively, these factors position the Asia Pacific region for robust growth, making it a key player in the global image sensor landscape.


Key Players
The image sensor market includes major Tier I and II players such as Sony Corporation (Japan), Samsung (South Korea), Omnivision (US), Semiconductor Components Industries, LLC (US), STMicroelectronics (Switzerland), Panasonic Holdings Corporation (Japan), Canon Inc. (Japan), Hamamatsu Photonics K.K. (Japan), Teledyne Technologies Incorporated (US), SK hynix Inc. (South Korea), Himax Technologies, Inc. (Taiwan), and others. These players have a strong market presence across various countries in North America, Europe, Asia Pacific, and the Rest of the World (RoW).


Single Photon Avalanche Diodes – Buyer’s Guide


Photoniques magazine published an article titled "Single photon avalanche diodes" by Angelo Gulinatti (Politecnico di Milano).

Abstract: Twenty years ago the detection of single photons was little more than a scientific curiosity reserved to a few specialists. Today it is a flourishing field with an ecosystem that extends from university laboratories to large semiconductor manufacturers. This change of paradigm has been stimulated by the emergence of critical applications that rely on single photon detection, and by technical progresses in the detector field. The single photon avalanche diode has unquestionably played a major role in this process.

Full article [free access]: https://www.photoniques.com/articles/photon/pdf/2024/02/photon2024125p63.pdf

 


Figure 1: Fluorescence lifetime measured by time-correlated single-photon counting (TCSPC). The sample is excited by a pulsed laser and the delay between the excitation pulse and the emitted photon is measured by a precision clock. By repeating multiple times, it is possible to build a histogram of the delays that reproduces the shape of the optical signal.
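The histogram-building procedure in the caption can be simulated in a few lines; the lifetime, bin layout, and pulse count below are arbitrary illustrative values:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy TCSPC experiment: over many laser pulses, record one photon delay
# per pulse and histogram the delays. For a single-exponential fluorophore
# the histogram reproduces exp(-t / tau).
tau = 3.0            # ns, assumed fluorescence lifetime
n_pulses = 200_000
delays = rng.exponential(tau, n_pulses)   # one photon delay per pulse

counts, edges = np.histogram(delays, bins=50, range=(0, 15))

# Recover the lifetime from the histogram decay: a straight-line fit
# of log(counts) vs. time has slope -1/tau.
centers = (edges[:-1] + edges[1:]) / 2
slope, _ = np.polyfit(centers, np.log(counts), 1)
print(f"estimated lifetime: {-1 / slope:.2f} ns")
```

Real TCSPC also convolves the decay with the detector's instrument response function and must keep the photon rate low to avoid pile-up; both effects are omitted here.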



Figure 3: By changing the operating conditions or the design parameters, it is possible to improve some performance metrics at the expense of others.




IEDM 2024 Program is Live


The 70th Annual IEEE International Electron Devices Meeting (IEDM) will be held December 7-11, 2024 in San Francisco, California. Session #41 is on the topic of "Advanced Image Sensors":

https://iedm24.mapyourshow.com/8_0/sessions/session-details.cfm?scheduleid=58

Title: 41 | ODI | Advanced Image Sensors
Description:
This session includes six papers on the latest image sensor technology developments. Notable this year are the multiple approaches to layer stacking with new features. The first stack involves a dedicated AI image-processing layer based on neural networks for a 50 Mpix sensor. The second shows progress on small-pixel noise with a 2-layer pixel and an additional intermediate interconnection. The third is very innovative, with an organic pixel on top of a conventional Si-based iToF pixel for a true single-device RGB-Z sensor. All three papers are authored by Sony Semiconductor. InAs QD image sensors are also reported for the first time as a lead-free option for SWIR imaging, by both imec and Sony Semiconductor. Finally, progress in conventional IR global shutter, with a newly integrated MIM capacitor and optimized DTI filling for crosstalk and QE improvement, is presented by Samsung Semiconductor.

Wednesday, December 11, 2024 - 01:35 PM
41-1 | A Novel 1/1.3-inch 50 Megapixel Three-wafer-stacked CMOS Image Sensor with DNN Circuit for Edge Processing
This study reports the first ever 3-wafer-stacked CMOS image sensor with DNN circuit. The sensor was fabricated using wafer-on-wafer-on-wafer process and DNN circuit was placed on the bottom wafer to ensure heat dissipation. This device can incorporate the HDR function and enlarge the pixel array area to remarkably improve image-recognition.


Wednesday, December 11, 2024 - 02:00 PM
41-2 | Low Dark Noise and 8.5k e− Full Well Capacity in a 2-Layer Transistor Stacked 0.8μm Dual Pixel CIS with Intermediate Poly-Si Wiring
This paper demonstrates a 2-layer transistor pixel stacked CMOS image sensor with the world’s smallest 0.8μm dual pixel. We improved the layout flexibility with intermediate poly-Si wiring technique. Our advanced 2-layer pixel device achieved low input-referred random noise of 1.3 e−rms and high full well capacity of 8.5k e−.


Wednesday, December 11, 2024 - 02:25 PM
41-3 | A High-Performance 2.2μm 1-Layer Pixel Global Shutter CMOS Image Sensor for Near-Infrared Applications
A high performance and low cost 2.2μm 1-layer pixel near infrared (NIR) global shutter (G/S) CMOS image sensor (CIS) was demonstrated. In order to improve quantum efficiency (QE), thick silicon with high aspect ratio full-depth deep trench isolation (FDTI) and backside scattering technology are implemented. Furthermore, thicker sidewall oxide for deep trench isolation and oxide filled FDTI were applied to enhance a modulation transfer function (MTF). In addition, 3-dimensional metal-insulator-metal capacitors were introduced to suppress temporal noise (TN). As a result, we have demonstrated industry-leading NIR G/S CIS with 2.71e- TN, dark current of 8.8e-/s, 42% QE and 58% MTF.


Wednesday, December 11, 2024 - 03:15 PM
41-4 | First Demonstration of 2.5D Out-of-Plane-Based Hybrid Stacked Super-Bionic Compound Eye CMOS Chip with Broadband (300-1600 nm) and Wide-Angle (170°) Photodetection
We propose a hybrid stacked CMOS bionic chip. The surface employs a fabrication process involving binary-pore anodic aluminum oxide (AAO) templates and integrates monolayer graphene (Gr) to mimic the compound eyes, thereby enhancing detection capabilities in the ultraviolet and visible ranges. Utilizing a 2.5D out-of-plane architecture, it achieves a wide-angle detection effect (170°) equivalent to curved surfaces while enhancing absorption in the 1550 nm communication band to nearly 100%. Additionally, through-silicon via (TSV) technology is integrated for wafer-level fabrication, and a CMOS 0.18-µm integrated readout circuit is developed, achieving the super-bionic compound eye chip based on hybrid stacked integration.


Wednesday, December 11, 2024 - 03:40 PM
41-5 | Pseudo-direct LiDAR by deep-learning-assisted high-speed multi-tap charge modulators
A virtually direct LiDAR system based on an indirect ToF image sensor and charge-domain temporal compressive sensing combined with deep learning is demonstrated. This scheme has high spatio-temporal sampling efficiency and offers advantages such as high pixel count, high photon-rate tolerance, immunity to multipath interference, constant power consumption regardless of incident photon rates, and motion artifact-free. The importance of increasing the number of taps of the charge modulator is suggested by simulation.


Wednesday, December 11, 2024 - 04:05 PM
41-6 | A Color Image Sensor Using 1.0-μm Organic Photoconductive Film Pixels Stacked on 4.0-μm Si Pixels for Near-Infrared Time-of-Flight Depth Sensing
We have developed an image sensor capable to simultaneously acquire high-resolution RGB images with good color reproduction and parallax-free ranging information by 1.0-μm organic photoconductive film RGB pixels stacked on 4.0-μm NIR silicon pixels for iToF depth sensing.


Wednesday, December 11, 2024 - 04:30 PM
41-7 | Pb-free Colloidal InAs Quantum Dot Image Sensor for Infrared
We developed an image sensor using colloidal InAs quantum dot (QD) for photoconversion. After spincoating the QDs on a wafer and standard semiconductor processing, the sensor exhibited infrared sensitivity and imaging capability. This approach facilitates easier production of lead-free infrared sensors for consumer use.


Wednesday, December 11, 2024 - 04:55 PM
41-8 | Lead-Free Quantum Dot Photodiodes for Next Generation Short Wave Infrared Optical Sensors
Colloidal quantum dot sensors are disrupting imaging beyond the spectral limits of silicon. In this paper, we present imagers based on InAs QDs as alternative for 1st generation Pb-based stacks. New synthesis method yields 9 nm QDs optimized for 1400 nm and solution-phase ligand exchange results in uniform 1-step coating. Initial EQE is 17.4% at 1390 nm on glass and 5.8% EQE on silicon (detectivity of 7.4 × 10⁹ Jones). Using metal-oxide transport layers and >300 hour air-stability enable compatibility with fab manufacturing. These results are a starting point towards the 2nd generation quantum dot SWIR imagers.


Also of interest is the following talk in Tuesday's session "Major Consumer Image Sensor Innovations Presented at IEDM":

Author: Albert Theuwissen, Harvest Imaging
Title: Image Sensors past, and progress made over the years


AMS Osram Q3 2024 Results






Videos from EPIC Neuromorphic Cameras Online Meeting


Presentations from the recent EPIC Online Technology Meeting on Neuromorphic Cameras are available on YouTube:

 

IKERLAN – DVS Pre-processor & Light-field DVS – Xabier Iturbe, NimbleAI EU Project Coordinator
IKERLAN has been a leading technology centre providing competitive value to industry since 1974. It offers integral solutions in three main areas: digital technologies and artificial intelligence, embedded electronic systems and cybersecurity, and mechatronic and energy technologies. It currently has a team of more than 400 people and offices in Arrasate-Mondragón, Donostialdea and Bilbao. As a cooperative member of the MONDRAGON Corporation and the Basque Research and Technology Alliance (BRTA), IKERLAN represents a sustainable, competitive business model in permanent transformation.


FlySight – Neuromorphic Sensor for Security and Surveillance – Niccolò Camarlinghi, Head of Research
FlySight S.r.l. (a single-member company) is the defense and security subsidiary of Flyby Group, a satellite remote sensing solutions company.
The FlySight team offers integrated solutions for data exploitation, image processing, and avionic data/sensor fusion. Our products are mainly dedicated to the exploitation of data captured by many sensor types, and our solutions are intended both for the on-ground and the on-board segments.
Through its experience in C4ISR (Command, Control, Communications, Computers, Intelligence, Surveillance and Reconnaissance), FlySight offers innovative software development and geospatial application (GIS) technology programs customized for the best results.
Our staff can apply the right COTS for your specific mission.
The instruments and products developed for this sector can also find application as dual-use tools in many civil fields such as Environmental Monitoring, Oil & Gas, Precision Farming and Maritime/Coastal Planning.


VoxelSensors – Active Event Sensors : an Event-based Approach to Single-photon Sensing of Sparse Optical Signals – Ward van der Tempel, CTO
VoxelSensors is at the forefront of 3D perception, providing cutting-edge sensors and solutions for seamless integration of the physical and digital worlds. Our patented Switching Pixels® Active Event Sensor (SPAES) technology represents a novel category of efficient 3D perception systems, delivering exceptionally low latency with ultra-low power consumption by capturing a new Voxel with fewer than 10 photons. SPAES is a game-changing innovation that unlocks the true potential of fully immersive experiences for both consumer electronics and enterprise AR/VR/MR wearables.


PROPHESEE – Christoph Posch, Co-Founder and CTO
Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Prophesee’s patented sensors and AI algorithms introduce a new computer vision paradigm based on how the human eye and brain work. Like human vision, it sees events: essential, actionable motion information in the scene, not a succession of conventional images.


SynSense – Neuromorphic Processing and Applications – Dylan Muir, VP, Global Research Operations
SynSense is a leading-edge neuromorphic computing company. It provides dedicated mixed-signal/fully digital neuromorphic processors which overcome the limitations of legacy von Neumann computers to provide an unprecedented combination of ultra-low power consumption and low-latency performance. SynSense was founded in March 2017 based on advances in neuromorphic computing hardware developed at the Institute of Neuroinformatics of the University of Zurich and ETH Zurich. SynSense is developing “full-stack” custom neuromorphic processors for a variety of artificial-intelligence (AI) edge-computing applications that require ultra-low-power and ultra-low-latency features, including autonomous robots, always-on co-processors for mobile and embedded devices, wearable health-care systems, security, IoT applications, and computing at the network edge.


Thales – Eric Belhaire, Senior Expert in the Technical Directorate
Thales (Euronext Paris: HO) is a global leader in advanced technologies specialized in three business domains: Defence & Security, Aeronautics & Space, and Cybersecurity & Digital identity. It develops products and solutions that help make the world safer, greener and more inclusive.

Go to the original article...

"Photon inhibition" to reduce SPAD camera power consumption

Image Sensors World        Go to the original article...

In a paper titled "Photon Inhibition for Energy-Efficient Single-Photon Imaging", presented at the European Conference on Computer Vision (ECCV) 2024, Lucas Koerner et al. write:

Single-photon cameras (SPCs) are emerging as sensors of choice for various challenging imaging applications. One class of SPCs based on the single-photon avalanche diode (SPAD) detects individual photons using an avalanche process; the raw photon data can then be processed to extract scene information under extremely low light, high dynamic range, and rapid motion. Yet, single-photon sensitivity in SPADs comes at a cost — each photon detection consumes more energy than that of a CMOS camera. This avalanche power significantly limits sensor resolution and could restrict widespread adoption of SPAD-based SPCs. We propose a computational-imaging approach called photon inhibition to address this challenge. Photon inhibition strategically allocates detections in space and time based on downstream inference task goals and resource constraints. We develop lightweight, on-sensor computational inhibition policies that use past photon data to disable SPAD pixels in real-time, to select the most informative future photons. As case studies, we design policies tailored for image reconstruction and edge detection, and demonstrate, both via simulations and real SPC captured data, considerable reduction in photon detections (over 90% of photons) while maintaining task performance metrics. Our work raises the question of “which photons should be detected?”, and paves the way for future energy-efficient single-photon imaging.
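To make the idea concrete, here is a toy inhibition policy (a hypothetical sketch, not the paper's on-sensor policies, which are task-specific and more sophisticated): cap each pixel's detections per frame, so bright regions stop consuming avalanche energy once they have been measured well enough.

```python
import numpy as np

def inhibited_capture(photon_frames, budget=4):
    """Toy photon-inhibition policy for a SPAD array.

    photon_frames: (T, H, W) boolean photon detections per time bin.
    Once a pixel has registered `budget` detections, further avalanches
    at that pixel are inhibited (its energy cost is saved).
    Returns (per-pixel counts, number of detections inhibited)."""
    t, h, w = photon_frames.shape
    counts = np.zeros((h, w), dtype=np.int32)
    saved = 0
    for frame in photon_frames:
        active = counts < budget               # pixels still allowed to detect
        counts += frame * active               # accept photons only on active pixels
        saved += int((frame * ~active).sum())  # photons inhibited in this bin
    return counts, saved

rng = np.random.default_rng(0)
frames = rng.random((100, 4, 4)) < 0.2   # bright scene: ~20 photons per pixel
counts, saved = inhibited_capture(frames, budget=4)
# counts saturate at the budget; most detections in bright regions are inhibited
```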


Lucas Koerner, Shantanu Gupta, Atul Ingle, and Mohit Gupta. "Photon Inhibition for Energy-Efficient Single-Photon Imaging." In European Conference on Computer Vision, pp. 90-107 (2024)
[preprint link]

Go to the original article...

Hamamatsu acquires BAE Systems Imaging [Update: Statement from Fairchild Imaging]

Image Sensors World        Go to the original article...

Press release: https://www.hamamatsu.com/us/en/news/announcements/2024/20241105000000.html

Acquisition of BAE Systems Imaging Solutions, Inc. Strengthening the Opto-semiconductor segment and accelerating value-added growth

2024/11/05
Hamamatsu Photonics K.K.

Photonics Management Corp. (Bridgewater, New Jersey, USA), a subsidiary of Hamamatsu Photonics K.K. (Hamamatsu City, Japan), has purchased the stock of BAE Systems Imaging Solutions, Inc., a subsidiary of BAE Systems, Inc. (Falls Church, Virginia, USA). In recognition of the company’s deep roots starting in 1920 as the Fairchild Aerial Camera Corporation, the company will return to the name first used in 2001, Fairchild Imaging.

Fairchild Imaging is a semiconductor manufacturer specializing in high-performance CMOS image sensors in the visible to near-infrared and X-ray regions, and it has the world’s best low-noise CMOS image sensor design technology. Fairchild Imaging’s core products include scientific CMOS image sensors for scientific measurement applications that simultaneously realize high sensitivity, high-speed readout, and low noise, as well as X-ray CMOS image sensors for dental and medical diagnostic applications.

Fairchild Imaging’s core products are two-dimensional CMOS image sensors that take pictures in dark conditions where low noise is essential. These products complement Hamamatsu Photonics’ one-dimensional CMOS image sensors, which are used for analytical instruments and factory automation applications such as displacement meters and encoders. Therefore, Fairchild Imaging’s technologies will enhance Hamamatsu’s CMOS image sensor product line.

Through the acquisition of shares, we expect the following:

1. Promote sales activities of Fairchild Imaging’s products by utilizing the global sales network currently established by Hamamatsu Photonics Group.
2. While Hamamatsu Photonics’ dental business serves the European and the Asian regions including Japan, Fairchild Imaging serves North America. This will lead to the expansion and strengthening of our worldwide dental market share.
3. Fairchild Imaging will become Hamamatsu’s North American design center for 2D, low-noise image sensors. This will strengthen CMOS image sensor design resources and utilize our North American and Japanese locations to provide worldwide marketing and technical support.
4. Create new opportunities and products by combining Fairchild Imaging’s CMOS image sensor design technology with Hamamatsu Photonics’ MEMS technology to support a wider range of custom CMOS image sensors and provide higher value-added products.

BAE Systems is retaining the aerospace and defense segment of the BAE Systems Imaging Solutions portfolio, which was transferred to the BAE Systems, Inc. Electronic Systems sector prior to the closing of this stock purchase transaction.

Fairchild Imaging will continue their operating structure and focus on developing and providing superior products and solutions to their customers.
 
 
[Update Nov 6, 2024: statement from Fairchild Imaging]
 
We are very happy to announce a new chapter in the storied history of Fairchild Imaging! BAE Systems, Inc., which had owned the stock of Fairchild Imaging, Inc. for the past 13 years, has processed a stock sale to Photonics Management Corporation, a subsidiary of Hamamatsu Photonics K.K. Resuming the identity as Fairchild Imaging, Inc., we will operate as an independent, yet wholly owned, US entity.

Fairchild Imaging is a CMOS imaging sensor design and manufacturing company, specializing in high-performance image sensors. Our x-ray and visible spectrum sensors provide class leading performance in x-ray, and from ultraviolet through visible and into near-infrared wavelengths. Fairchild Imaging’s core products include medical x-ray sensors for superior diagnostics, as well as scientific CMOS (sCMOS) sensors for measurement applications that simultaneously realize high sensitivity, fast readout, high dynamic range, and ultra-low noise in 4K resolution.
 
Marc Thacher, CEO of Fairchild Imaging, said:
“Joining the Hamamatsu family represents a great opportunity for Fairchild Imaging. Building upon decades of imaging excellence, we look forward to bringing new innovations and technologies to challenging imaging applications like scientific, space, low-light, machine vision, inspection, and medical diagnostics. The acquisition by Hamamatsu will help drive growth and agility as we continue as a design leader for our customers, partners, and employees.”
 
As part of this new chapter, Fairchild Imaging is unveiling its latest evolution of sCMOS sensors: sCMOS 3.1. These patented, groundbreaking imagers redefine the limits of what is possible in CMOS sensors for the most demanding of imaging applications.

Go to the original article...

Lynred announces 8.5um pitch thermal sensor

Image Sensors World        Go to the original article...

Link: https://ala.associates/wp-content/uploads/2024/09/241001-Lynred-8.5-micron-EN-.pdf

Lynred demonstrates smallest thermal imaging sensor for future Automatic Emergency Braking Systems (AEB) at AutoSens Europe 

Prototype 8.5 µm pixel pitch technology that halves the volume of thermal cameras is designed to help automotive OEMs meet tougher future AEB system requirements, particularly at night.

Grenoble, France, October 1, 2024 – Lynred, a leading global provider of high-quality infrared sensors for the aerospace, defense and commercial markets, today announces it will demonstrate a prototype 8.5 µm pixel pitch sensor during AutoSens Europe, a major international event for automotive engineers, in Barcelona, Spain, October 8 – 10, 2024. The 8.5 µm pixel pitch technology is the smallest infrared sensor candidate for future Automatic Emergency Braking (AEB) and Advanced Driver Assistance Systems (ADAS).

The prototype, featuring half the surface area of current 12 µm thermal imaging sensors for automotive applications, will enable system developers to build much smaller cameras for integration into AEB systems.
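The "half the surface" figure follows from simple geometry, since pixel area scales with the square of the pitch. As a quick check:

```python
# Moving from a 12 um pitch to an 8.5 um pitch roughly halves the die
# area for the same pixel count, since area scales with pitch squared.
area_ratio = (8.5 / 12.0) ** 2   # ~0.50, i.e. half the surface of a 12 um design
```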

Following a recent ruling by the US National Highway Traffic Safety Administration (NHTSA), AEB systems will be mandatory in all light vehicles by 2029. The ruling also sets tougher requirements for road safety at night.

The NHTSA sees driver assistance technologies and the deployment of sensors and subsystems as holding the potential to reduce traffic crashes and save thousands of lives per year. The European Traffic Safety Council (ETSC) also recognizes that AEB systems need to work better in wet, foggy and low-light conditions.

Thermal imaging sensors can detect and identify objects in total darkness. As automotive OEMs need to upgrade the performance of AEB systems in all light vehicles, Lynred is preparing a full roadmap of solutions to help achieve this compliance. Currently gearing up for high-volume production of its automotive-qualified 12 µm product offering, Lynred is ready to deliver the key component enabling Pedestrian Automatic Emergency Braking (PAEB) systems to work in adverse conditions, particularly at night, when more than 75% of pedestrian fatalities occur.

Lynred is among the first companies to demonstrate a longwave infrared (LWIR) pixel pitch technology for ADAS and PAEB systems that will optimize the size-to-performance ratio of future-generation cameras. The 8.5 µm pixel pitch technology will halve the volume of a thermal imaging camera, resulting in easier integration for OEMs, while maintaining the same performance standards as larger-sized LWIR models.

Go to the original article...

Pixelplus new product videos

Image Sensors World        Go to the original article...

 

PKA210 Seamless RGB-IR Image Sensor

PG7130KA Global shutter Image Sensor


Go to the original article...

IISW 2025 Final Call for Papers is out

Image Sensors World        Go to the original article...

The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on Awaji Island in Hyōgo, Japan.

The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.

Submission of abstracts:
An abstract should consist of a single page of text (500 words maximum) with up to two pages of illustrations (3 pages maximum in total), and include the authors’ name(s), affiliation, mailing address, telephone number, and e-mail address.


The deadline for abstract submission is 11:59pm, Thursday Dec 19, 2024 (GMT).
To submit an abstract, please go to: https://cmt3.research.microsoft.com/IISW2025 

 

Go to the original article...

Space & Scientific CMOS Image Sensors Workshop

Image Sensors World        Go to the original article...

The preliminary program for Space & Scientific CMOS Image Sensors Workshop to be held on 26th & 27th November in Toulouse Labège is available.

Registration: https://evenium.events/space-and-scientific-cmos-image-sensors-2024/


Go to the original article...

Call for Nominations for the 2025 Walter Kosonocky Award

Image Sensors World        Go to the original article...

International Image Sensor Society calls for nominations for the 2025 Walter Kosonocky Award for Significant Advancement in Solid-State Image Sensors.
 
The Walter Kosonocky Award is presented biennially for THE BEST PAPER presented in any venue during the prior two years representing significant advancement in solid-state image sensors. The award commemorates the many important contributions made by the late Dr. Walter Kosonocky to the field of solid-state image sensors. Personal tributes to Dr. Kosonocky appeared in the IEEE Transactions on Electron Devices in 1997. Founded in 1997 by his colleagues in industry, government and academia, the award is also funded by proceeds from the International Image Sensor Workshop.
 
The award is selected from nominated papers by the Walter Kosonocky Award Committee, announced and presented at the International Image Sensor Workshop (IISW), and sponsored by the International Image Sensor Society (IISS). The winner is presented with a certificate, complimentary registration to the IISW, and an honorarium.
 
Please send us an email nomination for this year's award, with a pdf file of the nominated paper (the one you judge to be the best paper published/presented in calendar years 2023 and 2024) as well as a brief description (less than 100 words) of your reason for nominating the paper. Nomination of a paper from your own company/institute is also welcome.
 
The deadline for receiving nominations is January 15th, 2025.
 
Your nominations should be sent to Yusuke Oike (2025nominations@imagesensors.org), Secretary of the IISS Award Committee.

Go to the original article...

Single Photon Workshop 2024 Program Available

Image Sensors World        Go to the original article...

The 11th Single Photon Workshop will be held at the Edinburgh International Conference Centre (EICC) over the five-day period, 18-22nd November 2024.

The full program is available here: https://fitwise.eventsair.com/2024singlephotonworkshop/programme

Here are some image-sensor specific sessions and talks:

Wednesday Nov 20, 2024 Session Title: Superconducting Photon Detectors 1
Chair: Dmitry Morozov
4:40 PM - 5:10 PM
Demonstration of a 400,000 pixel superconducting single-photon camera
Invited Speaker - Adam McCaughan - National Institute of Standards and Technology (NIST)
5:10 PM - 5:15 PM
Company Symposium: Photon Spot Platinum Sponsor Speaker: Vikas Anant
5:15 PM - 5:30 PM
Development of Superconducting Wide Strip Photon Detector Paper Number: 112 Speaker: Shigehito Miki - National Institute of Information and Communications Technology (NICT)
5:30 PM - 5:45 PM
Superconducting nanowire single photon detectors arrays for quantum optics Paper Number: 34 Speaker: Val Zwiller - KTH Royal Institute of Technology
5:45 PM - 6:00 PM
Single photon detection up to 2 µm in pair of parallel microstrips based on NbRe ultrathin films
Paper Number: 80 Speaker: Loredana Parlato - University of Naples Federico II
6:00 PM - 6:15 PM
Reading out SNSPDs with Opto-Electronic Converters Paper Number: 87 Speaker: Frederik Thiele - Paderborn University
6:15 PM - 6:30 PM
Development of Mid to Far-Infrared Superconducting Nanowire Single Photon Detectors Paper Number: 195 Speaker: Sahil Patel - California Institute Of Technology

Thursday Nov 21, 2024 Session Title: Superconducting Photon Detectors 2
Chair: Martin J Stevens
8:30 AM - 8:45 AM
Opportunities and challenges for photon-number resolution with SNSPDs Paper Number: 148 Speaker: Giovanni V Resta - ID Quantique
8:45 AM - 9:00 AM
Detecting molecules at the quantum yield limit for mass spectroscopy with arrays of NbTiN superconducting nanowire detectors Paper Number: 61 Speaker: Ronan Gourgues - Single Quantum
9:00 AM - 9:30 AM
Current state of SNSPD arrays for deep space optical communication Invited Speaker - Emma E Wollman - California Institute Of Technology
9:30 AM - 9:35 AM
Company Symposium: Quantum Opus/MPD presentation Platinum Sponsors
9:35 AM - 9:50 AM
Novel kinetic inductance current sensor for transition-edge sensor readout Paper Number: 238 Speaker: Paul Szypryt - National Institute of Standards and Technology (NIST)
9:50 AM - 10:05 AM
Quantum detector tomography for high-Tc SNSPDs Paper Number: 117 Speaker: Mariia Sidorova - Humboldt University of Berlin
10:05 AM - 10:20 AM
Enhanced sensitivity and system integration for infrared waveguide-integrated superconducting nanowire single-photon detectors Paper Number: 197 Speaker: Adan Azem - University Of British Columbia

 

Thursday Nov 21, 2024 Session Title: SPADs 1
Chair: Chee Hing Tan
11:00 AM - 11:30 AM
A 3D-stacked SPAD Imager with Pixel-parallel Computation for Diffuse Correlation Spectroscopy
Invited Speaker - Robert Henderson - University of Edinburgh
11:30 AM - 11:45 AM
High temporal resolution 32 x 1 SPAD array module with 8 on-chip 6 ps TDCs
Paper Number: 182 Speaker: Chiara Carnati - Politecnico Di Milano
11:45 AM - 12:00 PM
A 472 x 456 SPAD Array with In-Pixel Temporal Correlation Capability and Address-Based Readout for Quantum Ghost Imaging Applications
Paper Number: 186 Speaker: Massimo Gandola - Fondazione Bruno Kessler
12:00 PM - 12:15 PM
High Performance Time-to-Digital Converter for SPAD-based Single-Photon Counting applications
Paper Number: 181 Speaker: Davide Moschella - Politecnico Di Milano
12:15 PM - 12:30 PM
A femtosecond-laser-written programmable photonic circuit directly interfaced to a silicon SPAD array
Paper Number: 271 Speaker: Francesco Ceccarelli - The Istituto di Fotonica e Nanotecnologie (CNR-IFN)

Thursday Nov 21, 2024 Session Title: SPADs 2
Chair: Alberto Tosi
2:00 PM - 2:30 PM
Ge-on-Si Technology Enabled SWIR Single-Photon Detection
Invited Speaker - Neil Na - Artilux
2:30 PM - 2:45 PM
The development of pseudo-planar Ge-on-Si single-photon avalanche diode detectors for photon detection in the short-wave infrared spectral region
Paper Number: 254 Speaker: Lisa Saalbach - Heriot-Watt University
2:45 PM - 3:00 PM
Hybrid integration of InGaAs/InP single photon avalanche diodes array and silicon photonics chip
Paper Number: 64 Speaker: Xiaosong Ren - Tsinghua University
3:00 PM - 3:15 PM
Dark Current and Dark Count Rate Dependence on Anode Geometry of InGaAs/InP Single-Photon Avalanche Diodes
Paper Number: 248 Speaker: Rosemary Scowen - Toshiba Research Europe
3:15 PM - 3:30 PM
Compact SAG-based InGaAs/InP SPAD for 1550nm photon counting
Paper Number: 111 Speaker: Ekin Kizilkan - École Polytechnique Fédérale de Lausanne (EPFL)

Thursday Nov 21, 2024 Session Title: Single-photon Imaging and Sensing 1
Chair: Aurora Maccarone
4:15 PM - 4:45 PM
Single Photon LIDAR goes long Range
Invited Speaker - Feihu Xu - USTC China
4:45 PM - 5:00 PM
The Deep Space Optical Communication Photon Counting Camera
Paper Number: 11 Speaker: Alex McIntosh - MIT Lincoln Laboratory
5:00 PM - 5:15 PM
Human activity recognition with Single-Photon LiDAR at 300 m range
Paper Number: 232 Speaker: Sandor Plosz - Heriot-Watt University
5:15 PM - 5:30 PM
Detection Times Improve Reflectivity Estimation in Single-Photon Lidar
Paper Number: 273 Speaker: Joshua Rapp - Mitsubishi Electric Research Laboratories
5:30 PM - 5:45 PM
Bayesian Neuromorphic Imaging for Single-Photon LiDAR
Paper Number: 57 Speaker: Dan Yao - Heriot-Watt University
5:45 PM - 6:00 PM
Single Photon FMCW LIDAR for Vibrational Sensing and Imaging
Paper Number: 23 Speaker: Theodor Staffas - KTH Royal Institute of Technology

Friday Nov 22, 2024 Session Title: Single-photon Imaging 2
9:00 AM - 9:15 AM
Quantum-inspired Rangefinding for Daytime Noise Resistance
Paper Number: 208 Speaker: Weijie Nie - University of Bristol
9:15 AM - 9:30 AM
High resolution long range 3D imaging with ultra-low timing jitter superconducting nanowire single-photon detectors
Paper Number: 296 Speaker: Aongus McCarthy - Heriot-Watt University
9:30 AM - 9:45 AM
A high-dimensional imaging system based on an SNSPD spectrometer and computational imaging
Paper Number: 62 Speaker: Mingzhong Hu - Tsinghua University
9:45 AM - 10:00 AM
Single-photon detection techniques for real-time underwater three-dimensional imaging
Paper Number: 289 Speaker: Aurora Maccarone - Heriot-Watt University
10:00 AM - 10:15 AM
Photon-counting measurement of singlet oxygen luminescence generated from PPIX photosensitizer in biological media
Paper Number: 249 Speaker: Vikas - University of Glasgow
10:15 AM - 10:30 AM
A Plug and Play Algorithm for 3D Video Super-Resolution of single-photon data
Paper Number: 297 Speaker: Alice Ruget - Heriot-Watt University

Friday Nov 22, 2024 Session Title: Single-photon Imaging and Sensing 2
11:00 AM - 11:30 AM
Hyperspectral Imaging with Mid-IR Undetected Photons
Invited Speaker - Sven Ramelow - Humboldt University of Berlin
11:30 AM - 11:45 AM
16-band Single-photon imaging based on Fabry-Perot Resonance
Paper Number: 35 Speaker: Chufan Zhou - École Polytechnique Fédérale de Lausanne (EPFL)
11:45 AM - 12:00 PM
High-frame-rate fluorescence lifetime microscopy with megapixel resolution for dynamic cellular imaging
Paper Number: 79 Speaker: Euan Millar - University of Glasgow
12:00 PM - 12:15 PM
Beyond historical speed limitation in time correlated single photon counting without distortion: experimental measurements and future developments
Paper Number: 237 Speaker: Giulia Acconcia - Politecnico Di Milano
12:15 PM - 12:30 PM
Hyperspectral mid-infrared imaging with undetected photons
Paper Number: 268 Speaker: Emma Pearce - Humboldt University of Berlin
12:30 PM - 12:45 PM
Determination of scattering coefficients of brain tissues by wide-field time-of-flight measurements with single photon camera.
Paper Number: 199 Speaker: André Stefanov - University Of Bern

Go to the original article...

Image sensor basics

Image Sensors World        Go to the original article...

These lecture slides by Prof. Yuhao Zhu at U. Rochester are a great first introduction to how an image sensor works. A few selected slides are shown below. For the full slide deck visit: https://www.cs.rochester.edu/courses/572/fall2022/decks/lect10-sensor-basics.pdf


Go to the original article...

SLVS-EC IF Standard v3 released

Image Sensors World        Go to the original article...

Link: http://jiia.org/en/slvs-ec-if-standard-version-3-0-has-been-released/

The Embedded Vision I/F WG has released the "SLVS-EC IF Standard Version 3.0".
Version 3.0 supports up to 10 Gbps/lane, which is 2x faster than Version 2.0, and improves data transmission efficiency.
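As a back-of-envelope illustration of what 10 Gbps/lane allows (the sensor format, lane count, and link efficiency below are illustrative assumptions, not figures from the standard or this announcement):

```python
def max_frame_rate(h_px, v_px, bits_per_px, lanes, lane_gbps, efficiency=0.9):
    """Rough achievable frame rate over a multi-lane serial link.

    `efficiency` is a hypothetical placeholder for protocol overhead;
    consult the JIIA SLVS-EC specification for actual overheads."""
    payload_bps = lanes * lane_gbps * 1e9 * efficiency
    return payload_bps / (h_px * v_px * bits_per_px)

# e.g. a hypothetical 12-bit 4096 x 3072 sensor on 8 lanes at 10 Gbps/lane:
fps = max_frame_rate(4096, 3072, 12, 8, 10.0)   # roughly 470-480 fps
```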

Link: https://www.m-pression.com/solutions/hardware/slvs-ec-rx-30-ip

SLVS-EC v3.0 Rx IP is an interface IP core that runs on Altera® FPGAs. Using this IP, you can quickly and easily implement products that support the latest SLVS-EC standard v3.0. You will also receive an "Evaluation kit" for early adoption.

  •  Altera® FPGAs can receive signals directly from the SLVS-EC Interface.
  •  Compatible with the latest SLVS-EC Specification Version 3.0.
  •  Supports powerful De-Skew function. Enables board design without considering Skew that occurs between lanes.
  •  "Evaluation kit”(see below) is available for speedy evaluation at the actual device level.

 About SLVS-EC:

SLVS-EC (Scalable Low Voltage Signaling with Embedded Clock) is an interface standard for high-speed & high-resolution image sensors developed by Sony Semiconductor Solutions Corporation. The SLVS-EC standard is standardized by JIIA (Japan Industrial Imaging Association).



Go to the original article...

Emberion 50 euro CQD SWIR imager

Image Sensors World        Go to the original article...


From: https://invision-news.de/allgemein/extrem-kostenguenstiger-swir-sensor/

Emberion is introducing an extremely cost-effective SWIR sensor that covers a range from 400 to 2,000 nm and whose manufacturing cost in large quantities is less than €50. The sensors are smaller and lighter, which expands the application possibilities of this technology across a wide range of use cases. They combine Emberion's existing patented quantum dot technology with the company's patented wafer-level packaging.

Press release from Emberion: https://www.emberion.com/emberion-oy-introduces-groundbreaking-ultra-low-cost-swir-sensor/

The unique SWIR image sensor’s manufacturing cost is less than 50€ in large volume production.

Espoo, Finland — 1.10.2024 — The current cost level of SWIR imaging technology seriously limits the use of SWIR imaging in a variety of industrial, defense & surveillance, automotive and professional/consumer applications. Emberion Oy, a leading innovator in quantum dot based shortwave infrared sensing technology, is excited to announce its new ultra-low cost SWIR (Short-Wave Infrared) sensor that brings the sensor production cost down to €50 level in large volumes. This revolutionary product is set to deliver high-performance infrared imaging to truly mass-market applications such as automotive and consumer electronics as well as enabling much wider deployment of SWIR imaging in industrial, defence and surveillance applications. The revolutionary sensors are also smaller in size and weight, further extending the possibilities to use this technology in a variety of use cases. Emberion is already shipping extended range high speed SWIR cameras and will bring first ultra-low cost sensor based products to the market in 2025.

 

Bringing Advanced Imaging to Everyday Devices at a fraction of current cost

The new Emberion sensor family is designed to make advanced shortwave infrared technology accessible to wider markets, including large-volume markets such as automotive sensing and consumer electronics. The new ultra-low-cost SWIR sensor combines Emberion’s existing patented quantum dot sensor technology with Emberion’s patented wafer-level packaging to drastically reduce the manufacturing costs of packaged sensors. Current InGaAs and quantum dot based image sensors are typically packaged in metal or ceramic casings, with a total production cost for packaged imagers in the range of several hundred euros to a few thousand euros depending on sensor technology, imager wavelength range, packaging choices and production volumes. Emberion’s sensors are manufactured and packaged on a full wafer, with up to 100 imagers on a single 8” wafer, making the production cost of a single sensor a fraction of that of current alternatives. In addition to low cost, the sensor enables high integration of functionality into the in-house designed read-out IC, reduces size and weight, and provides stable performance, enabling new functionalities in everyday technology that were once only available in high-end or niche markets.
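The cost argument is simple amortization. A hypothetical sketch (the wafer cost and yield below are made-up placeholders; only the figure of up to 100 dies per 8" wafer comes from the release):

```python
def cost_per_sensor(wafer_cost_eur, dies_per_wafer, yield_fraction):
    """Per-die cost when packaging happens at wafer level: the whole wafer's
    processing cost is amortized over the good dies."""
    return wafer_cost_eur / (dies_per_wafer * yield_fraction)

# Hypothetical 4000 EUR processed wafer, 100 imagers per wafer, 85% yield:
c = cost_per_sensor(4000, 100, 0.85)   # ~47 EUR per packaged sensor
```

Under these placeholder assumptions the per-sensor cost lands near the sub-€50 level claimed in the release, which is the point of wafer-level packaging: cost falls roughly linearly with dies per wafer.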

Examples of applications that require low-cost, compact sensors:

  • Automotive Industry: Enhanced driver assistance systems (ADAS) with improved visibility in demanding weather conditions for increased safety and performance.
  • Consumer Electronics: Integrating SWIR sensors into smartphones and wearable devices, allowing for facial recognition in all lighting conditions, gesture control, and material identification.
  • Augmented and Virtual Reality (AR/VR): Enabling more accurate environmental sensing for immersive, real-world interaction in AR/VR environments.
  • Drones: Precision vision systems for navigation and object detection in both consumer and defence markets.

Some of the key benefits of the Emberion SWIR sensor include:

  • Cost Efficiency: Thanks to wafer-level packaging, the production process is streamlined, making this sensor an order of magnitude more affordable than any existing SWIR solution. Also, the high level of sensor integration, with image processing embedded into the sensor, significantly decreases the need for image post-processing and for camera components at the system level.
  • Size, weight and power (SWaP) optimization: The miniature and power efficient design is ideal for space-constrained applications like consumer electronics and automotive components. The high sensor integration level is also a significant contributor to the system SWaP optimization.
  • Stability: The wafer-level packaging improves the sensor stability and protection and makes it suitable for demanding environments like automotive and outdoor applications. It can also be integrated into external packaging if needed, e.g. LCC or metal packaging.
  • Extended Wavelength Sensitivity: Covering a range from 400 nm to 2000 nm, extending the spectral range beyond traditional SWIR sensors for detecting a broader variety of objects and scenes.

Go to the original article...

CVSENS raises series A funding

Image Sensors World        Go to the original article...

CVSENS is a high-performance CIS design company headquartered in Shenzhen:  http://www.cvsens.com/language/en/

Original news in Chinese: https://laoyaoba.com/n/919232

Translation from Google Translate:

CVSENS completes a new round of financing of hundreds of millions of yuan to accelerate domestic production of high-end CMOS image sensor chips

Recently, CVSENS successfully completed its Series A financing of several hundred million yuan. The round was led by Hanlian Semiconductor Industry Fund, with co-investment from the Zhejiang University Education Foundation and Shanghai Anchuang Chuangxin, indicating the market's high recognition of and confidence in CVSENS.

As a leading CMOS image sensor chip developer in China, CVSENS focuses on the design and development of high-value-added CMOS image sensor chips and is committed to providing customers with higher-quality and more efficient services and products. With more than 15 years of experience in high-end product development, CVSENS's core team has broken through the core technology barriers of high-end CIS in various application fields. More than ten CIS chips have been launched to date, all successful on first tape-out, covering applications such as smart security, low-power IoT, smart automotive, and machine vision. Many industry-first innovative products have won praise from clients. Going forward, CVSENS will continue to deepen its image sensor technology, promote industrial upgrading, and lead new directions in the industry's development.

Hanlian Semiconductor Industry Fund said: We are optimistic about the large growth potential in the image sensor field and the market opportunities for domestic manufacturers. The CVSENS team has excellent technical capabilities, business focus, and product innovation capabilities, and is a comprehensively competitive new force in the industry. Working with CVSENS is also an important part of Hanlian Semiconductor Industry Fund's strategy in the vision field. We hope that CVSENS will work closely with other projects in our portfolio to jointly develop first-class products and forward-looking innovative technologies, and to provide better product solutions for more application scenarios.

Shanghai Anchuang Chuangxin Enterprise Management Consulting Partnership stated: As a corporate consulting and investment institution focused on the high-tech field, we are very optimistic about CVSENS's image sensor chip R&D team and its outstanding product innovation capabilities. This investment not only provides financial support for CVSENS but also leverages our ecosystem resources and industry-leading technologies to give CVSENS deep industrial connections through innovation empowerment, helping it achieve longer-term development goals. We are confident in this investment and look forward to helping CVSENS achieve greater success in technological innovation, market expansion, and brand building, and to writing a new chapter in the image sensor industry together with CVSENS.

The founder of CVSENS said: "I am very honored to receive joint investment from Hanlian Semiconductor Industry Fund, the Education Foundation of my alma mater Zhejiang University, and Shanghai Anchuang Chuangxin Enterprise Management Consulting Partnership. This is not only recognition of CVSENS's past achievements but will also help the company further promote technological innovation, enhance market competitiveness, and inject vitality into its long-term development. Since its establishment, CVSENS has focused on the research, development, and innovation of CMOS image sensor chips. Its products and services are widely used in fields such as automotive vision, smart security, low-power IoT, machine vision, and medical vision, continuously advancing the technology and meeting market demand. We look forward to working with more partners to jointly promote the innovative development of the image sensor industry."

CVSENS will continue to take technological innovation as its core driving force, uphold its core values of "gratitude, pragmatism, and courage to innovate", actively seize market opportunities, continuously expand market share, strengthen industry-chain collaboration, and pursue sustainable development, aiming to become a globally leading CIS solution provider offering customers higher-quality and more efficient services and products.

Go to the original article...

Galaxycore chip-on-module packaging for CIS

Image Sensors World        Go to the original article...

Link: https://en.gcoreinc.com/news/detail-69

The performance of an image sensor relies not only on its design and manufacturing but also on the packaging technology.

CIS packaging is particularly challenging, as any particle from the environment that lands on the sensor surface during the process can have a significant effect on the final image quality. GalaxyCore’s COM (Chip on Module) packaging technology has revolutionized traditional CSP (Chip Scale Package) and COB (Chip on Board) methods, enhancing the performance, reliability, and applicability of the camera module's optical system.

Birth of the COM Packaging

Before the advent of COM packaging, CSP and COB were the predominant packaging choices for CIS. CSP places a layer of glass on the sensor to prevent dust. However, the glass also reflects some light, thus degrading image quality. COB requires an exceptionally demanding environment, typically a Class 100 clean room.

Is there an alternative? GalaxyCore’s technical team developed an innovative solution: suspending gold wires directly to serve as pins. At microscopic scales, a short gold wire is stiff and elastic enough to be used directly as a pin.

At GalaxyCore’s Class 100 clean rooms in the packaging and testing factory in Jiashan City, Zhejiang Province, fully automated high-precision equipment bonds the gold wire to the image sensor with exacting accuracy. The sensor is then mounted on a filter base, and the other end of the gold wire is suspended as the pin. The pin is subsequently soldered by the camera module manufacturer to the FPCB. When assembled with a lens and the actuator, a complete camera module is formed.

We were pleasantly surprised to discover that the performance and reliability of the COM packaging are on par with, or even exceed, those of high-end COB packaging.

Three Advantages for System-level Improvement

1. Enhanced Optical System Performance
The COM packaging notably enhances the optical performance of camera modules. In the COB packaging, the chip is mounted directly on the FPCB; however, the FPCB is prone to deformation during production, which can tilt the optical axis and degrade image quality.
In GalaxyCore’s COM packaging, both the chip and the lens use the filter base as their common reference, mitigating the optical-axis tilt caused by FPCB deformation. This significantly improves the edge resolution of images, especially in large-aperture, high-pixel-count camera modules.

2. Improved Module Reliability and Flexibility
In the COM packaging, because there is a certain distance between the chip and the FPCB, the camera module can tolerate greater back pressure, improving its reliability and durability.
In the COB packaging, the CIS mounted directly on the FPCB is more sensitive to back pressure, and the SFR (i.e., image resolution) is more likely to be affected. By contrast, in the COM packaging, the CIS chip is relatively isolated and suspended, making it hard for back pressure to act directly on the chip, so better image resolution can be achieved. Unlike the COB packaging, the COM packaging connects the chip pins and pads through soldering. This reduces the material requirements for the FPCB and further enhances its adaptability and flexibility.

3. Minimized Module Size
In the COM packaging, the FPCB can be hollowed out to allow the chip to sink into it. Compared to the COB packaging, which mounts the chip directly on the FPCB or reinforces it with steel sheets, the COM solution controls back pressure more effectively and relaxes the requirements on steel-sheet thickness. This reduces the overall module height, meeting cell phones' stringent space requirements; the advantage is most notable in devices pursuing thin and light designs.

GalaxyCore’s COM packaging ensures both high performance and reliability for the optical system while simplifying the subsequent production processes for module manufacturers. This method reduces the dependence on dust-free environments and enhances quality, yield, and efficiency. With the mass production of COM chips and further application of this technology, it will deliver improved imaging performance across a broader range of end products.

Go to the original article...

EI2025 late submissions deadline tomorrow Oct 15, 2024

Image Sensors World        Go to the original article...

Electronic Imaging 2025 is accepting submissions --- late submission deadline is tomorrow (Oct 15, 2024). The Electronic Imaging Symposium comprises 17 technical conferences to be held in person at the Hyatt Regency San Francisco Airport in Burlingame, California.


IMPORTANT DATES

Journal-first (JIST/JPI) Submissions Due: 15 Aug
Final Journal-first Manuscripts Due: 31 Oct
Late Submission Deadline: 15 Oct
FastTrack Proceedings Manuscripts Due: 8 Jan 2025
All Outstanding Manuscripts Due: 21 Feb 2025
Registration Opens: mid-Oct
Demonstration Applications Due: 21 Dec
Early Registration Ends: 18 Dec
Hotel Reservation Deadline: 10 Jan
Symposium Begins: 2 Feb
Non-FastTrack Proceedings Manuscripts Due: 21 Feb

There are three submission options to fit your publication needs: journal, conference, and abstract-only.



Go to the original article...

Another PhD Defense Talk on Event Cameras

Image Sensors World        Go to the original article...

Thesis title: A Scientific Event Camera: Theory, Design, and Measurements
Author: Rui Graça
Advisor: Tobi Delbrück


 See also, earlier post about the PhD thesis abstract and full text link: https://image-sensors-world.blogspot.com/2024/08/phd-thesis-on-scidvs-event-camera.html

The full thesis text will be available here after the embargo ends in July 2026: https://www.research-collection.ethz.ch/handle/20.500.11850/683623

Go to the original article...

Artilux paper on room temperature quantum computing using Ge-Si SPADs

Image Sensors World        Go to the original article...

Neil Na et al from Artilux and UMass Boston have published a paper titled "Room-temperature photonic quantum computing in integrated silicon photonics with germanium–silicon single-photon avalanche diodes" in APL Quantum.

Abstract: Most, if not all, photonic quantum computing (PQC) relies upon superconducting nanowire single-photon detectors (SNSPDs) typically based on niobium nitride (NbN) operated at a temperature <4 K. This paper proposes and analyzes 300 K waveguide-integrated germanium–silicon (GeSi) single-photon avalanche diodes (SPADs) based on the recently demonstrated normal-incidence GeSi SPADs operated at room temperature, and shows that their performance is competitive against that of NbN SNSPDs in a series of metrics for PQC with a reasonable time-gating window. These GeSi SPADs become photon-number-resolving avalanche diodes (PNRADs) by deploying a spatially-multiplexed M-fold-waveguide array of M GeSi SPADs. Using on-chip waveguided spontaneous four-wave mixing sources and waveguided field-programmable interferometer mesh circuits, together with the high-metric SPADs and PNRADs, high-performance quantum computing at room temperature is predicted for this PQC architecture.

Link: https://doi.org/10.1063/5.0219035

Schematic plot of the proposed room-temperature PQC paradigm with integrated SiPh using the path degree of freedom of single photons: single photons are generated through SFWM (green pulses converted to blue and red pulses) in SOI rings (orange circles), followed by active temporal multiplexers (orange boxes that block the blue pulses), and active spatial multiplexers (orange boxes that convert serial pulses to parallel pulses) (quantum sources), manipulated by a FPIM using cascaded MZIs (quantum circuits), and measured by the proposed waveguide GeSi SPADs as SPDs and/or NPDs (quantum detectors). An application-specific integrated circuit (ASIC) layer is assumed to be flipped and bonded on the PIC layer with copper (Cu)–Cu pillars (yellow lines) connected wafer-level hybrid bond, or with metal bumps (yellow lines) connected chip-on-wafer-on-substrate (CoWoS) packaging. The off-chip fiber couplings are either for the pump lasers or the optical delay lines.

 


(a) Top view of the proposed waveguide GeSi SPAD, in which the materials assumed are listed. (b) Cross-sectional view of the proposed waveguide GeSi SPAD, in which the variables for optimizing QE are illustrated.

(a) QE of the proposed waveguide GeSi SPAD without the Al back mirror, simulated at 1550 nm as a function of coupler length and Ge length. (b) QE of the proposed waveguide GeSi SPAD with the Al back mirror, simulated at 1550 nm as a function of gap length and Ge length. (c) QE of the proposed waveguide GeSi SPAD with the Al back mirror, simulated as a function of wavelength centered at 1550 ± 50 nm (around the C band) and 1310 ± 50 nm (around the O band), given the optimal conditions, that is, coupler length equal to 1.4 μm, gap length equal to 0.36 μm, and Ge length equal to 14.2 μm. While the above data are obtained by 2D FDTD simulations, we also verify that for Ge width >1 μm and mesa design rule <200 nm, there is little difference between the data obtained by 2D and 3D FDTD simulations.


Dark current of GeSi PD at −1 V reverse bias, normalized by its active region circumference, plotted as a function of active region diameter. The experimental data (blue dots) consist of the average dark current between two device repeats (the ratio of the standard deviation to the average is <2%) for five different active region diameters. The linear fitting (red line) shows the bulk dark current density and the surface dark current density with its slope and intercept, respectively.
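The slope/intercept decomposition in this plot follows from diode geometry: total dark current for a circular diode of diameter d is I_dark = J_bulk * (pi d^2 / 4) + J_surf * (pi d), so dividing by the circumference pi*d leaves a straight line in d whose slope is J_bulk/4 and whose intercept is J_surf. The following NumPy sketch illustrates the extraction on synthetic data; the density values and diameters are made-up illustrative numbers, not from the paper:

```python
import numpy as np

# Hypothetical dark-current densities (not the paper's measured values).
j_bulk = 2.0e-8   # bulk dark current density, A/um^2 (assumed)
j_surf = 5.0e-9   # surface dark current density, A/um (assumed)

d = np.array([10.0, 20.0, 40.0, 80.0, 160.0])            # diameters, um
i_dark = j_bulk * np.pi * d**2 / 4 + j_surf * np.pi * d  # total dark current, A

# Normalize by circumference and fit a line in d:
#   I_dark / (pi d) = (J_bulk / 4) * d + J_surf
slope, intercept = np.polyfit(d, i_dark / (np.pi * d), 1)
bulk_recovered = 4 * slope   # recovers j_bulk
surf_recovered = intercept   # recovers j_surf
```

On real data the fit residuals would indicate how well the bulk-plus-surface model describes the diode.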



For the scheme of photon-based PQC: (a) The probability of successfully detecting N photon state and (b) the fidelity of detecting N photon state, using M spatially-multiplexed waveguide GeSi SPADs at 300 K as an NPD. (c) The difference in the probabilities of successfully detecting N photon state, and (d) the difference in the fidelities of detecting N photon state, using M spatially-multiplexed waveguide GeSi SPADs at 300 K and NbN SNSPDs at 4 K as NPDs. Note that no approximation is used in the formula for plotting these figures.
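The central combinatorial quantity here, the probability that M binary (click/no-click) SPADs resolve an N-photon state, can be approximated with a simple toy model: assume each photon is routed to one of the M detectors uniformly at random, so success requires all N photons to land on distinct detectors and each to be detected. This is an illustrative sketch of the general idea behind spatially multiplexed photon-number resolution, not the exact formula used in the paper:

```python
from math import perm

def pnr_success_probability(n_photons, m_detectors, efficiency=1.0):
    """Toy probability that an N-photon state yields N distinct clicks
    from M spatially multiplexed binary SPADs.

    Assumes uniform random routing of photons to detectors and an
    independent per-photon detection efficiency. Illustrative model
    only (function name and routing assumption are ours).
    """
    if n_photons > m_detectors:
        return 0.0  # more photons than detectors: some must collide
    # P(all N photons land on distinct detectors) = M!/(M-N)! / M^N
    distinct = perm(m_detectors, n_photons) / m_detectors**n_photons
    return distinct * efficiency**n_photons
```

Even this toy model shows why M must grow faster than N: with M = 4 and N = 2, collisions alone already cap the success probability at 0.75.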



For the scheme of qubit-based PQC: (a) The probability of successfully detecting N qubit state, and the fidelity of detecting N qubit state, using single waveguide GeSi SPADs at 300 K as SPDs. (b) The difference in the probabilities of successfully detecting N qubit state, and the difference in the fidelities of detecting N qubit state, using single waveguide GeSi SPADs at 300 K and NbN SNSPDs at 4 K as SPDs. Note that no approximation is used in the formula for plotting these figures.




Go to the original article...

Image sensors review paper

Image Sensors World        Go to the original article...

Eric Fossum, Nobukazu Teranishi, and Albert Theuwissen have published a review paper titled "Digital Image Sensor Evolution and New Frontiers" in the Annual Review of Vision Science.

Link: https://doi.org/10.1146/annurev-vision-101322-105538

Abstract:

This article reviews nearly 60 years of solid-state image sensor evolution and identifies potential new frontiers in the field. From early work in the 1960s, through the development of charge-coupled device image sensors, to the complementary metal oxide semiconductor image sensors now ubiquitous in our lives, we discuss highlights in the evolutionary chain. New frontiers, such as 3D stacked technology, photon-counting technology, and others, are briefly discussed.



Figure 1  Illustration of a four-phase charge-coupled device diagram, a potential well diagram, and clock charts. As four clocks switch sequentially, the potential wells move rightward together with the charge packets.
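The four-phase clocking in Figure 1 effectively makes the CCD a bucket brigade: one full clock cycle moves every charge packet one position toward the output, where it reaches the sense node. A toy Python model of that serial readout (the function shift_ccd and its one-gate-per-step abstraction are illustrative assumptions, not from the paper):

```python
def shift_ccd(gates, steps):
    """Toy CCD shift register: move charge packets right by `steps`
    positions. In a real four-phase CCD, each rightward step is
    produced by sequencing the four clock phases once.

    `gates` is a list of charge values, one per position. Charge
    leaving the register on the right is "read out" in order, and an
    empty well enters on the left.
    """
    gates = list(gates)
    readout = []
    for _ in range(steps):
        readout.append(gates.pop())  # rightmost packet reaches the sense node
        gates.insert(0, 0)           # empty well shifts in on the left
    return gates, readout
```

For example, shifting [1, 2, 3] by two steps reads out packets 3 then 2, leaving [0, 0, 1] in the register.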

Figure 2  Illustration of a (three-phase) interline-transfer (ILT) charge-coupled device (CCD) showing (left) a unit cell with a photodiode (PD) and vertical CCD and (right) the entire ILT CCD image sensor. The photosignal moves from the PD into the vertical CCD, and then into the horizontal CCD to the sense node and output amplifier.



Figure 3  A pinned PD in an interline-transfer CCD with one phase of the CCD shift register (VCCD) shown. (a) A physical cross-section and (b) a potential diagram showing the electrons transferring from the PD to the VCCD. Abbreviations: CCD, charge-coupled device; CS, channel stop; PD, photodiode; TG, transfer gate; VCCD, vertical CCD.



Figure 4  Microlenses to concentrate light on the photoactive area of a pixel. (a) Top view. (b) Cross-sections for different thermal-flow times. Images courtesy of NEC Corp.

Figure 5  A 16-Mpixel stitched complementary metal oxide semiconductor image sensor on a 6-inch-diameter wafer. Figure reproduced from Ay & Fossum (2006).


Figure 6  (a) Complementary metal oxide semiconductor (CMOS) image sensor block diagram. (b) Photograph of early Photobit CMOS image sensor chip for webcams. (Left) Digital logic for control and input-output (I/O) functions. (Top right) The pixel array. (Bottom right) The column-parallel analog signal processing and analog-to-digital converter (ADC) circuits. Photo courtesy of E.R.F.


Figure 7  An illustrative PPD 4-T active pixel with intrapixel charge transfer. (a) A circuit schematic (Fossum & Hondongwa 2014). (b) A band diagram looking vertically through the PPD showing the photon, electron–hole pair, and SW. (c) A physical cross-section showing doping levels (Fossum 2023). Abbreviations: COL BUS, column bus line; FD, floating diffusion; PPD, pinned photodiode; RST, reset gate; SEL, select gate; SF, source-follower; SW, storage well; TG, transfer gate.



Figure 8  Illustrative example of (a) a frontside-illuminated pixel and (b) a backside-illuminated (BSI) pixel showing the better light gathering capability of the BSI pixel.



Figure 9  Illustrative cross-sectional comparison of (a) a backside-illuminated device and (b) 3D stacked image sensors where the lower layer is used for additional circuitry.



Figure 10  Quanta image sensor concept showing the spatial distribution of jot outputs (left), an expanded view of jot output bit planes at different time slices (center), and gray-scale image pixels formed from spatiotemporal neighborhoods of jots (right). Figure adapted from Ma et al. (2022a).
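The quanta image sensor concept in Figure 10, forming gray-scale pixels by summing binary jot outputs over spatiotemporal neighborhoods, can be sketched in a few lines of NumPy. This is an illustrative sketch only; the function name, the block-sum kernel, and the truncation of non-divisible dimensions are our assumptions, and real QIS image formation uses more sophisticated processing:

```python
import numpy as np

def qis_image(bit_planes, spatial=4, temporal=8):
    """Form gray-scale pixels from binary jot bit planes (QIS sketch).

    bit_planes: array of shape (T, H, W) with 0/1 jot outputs.
    Each output pixel sums jot hits over a spatial x spatial
    neighborhood across `temporal` consecutive bit planes.
    """
    T, H, W = bit_planes.shape
    t, h, w = T // temporal, H // spatial, W // spatial
    # Truncate to full blocks, then block-sum via reshape.
    x = bit_planes[:t * temporal, :h * spatial, :w * spatial]
    x = x.reshape(t, temporal, h, spatial, w, spatial)
    # Result shape (t, h, w); max pixel value = temporal * spatial**2.
    return x.sum(axis=(1, 3, 5))
```

With spatial=4 and temporal=8, each gray-scale pixel aggregates 128 jot samples, so fully saturated jots produce pixel value 128.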

Go to the original article...

Hamamatsu completes acquisition of NKT Photonics

Image Sensors World        Go to the original article...

Press release: https://www.hamamatsu.com/us/en/news/featured-products_and_technologies/2024/20240531000000.html

Acquisition completion of NKT Photonics. Accelerating growth in the semiconductor, quantum, and medical fields through laser business enhancement.

Hamamatsu Photonics K.K. (hereinafter referred to as “Hamamatsu Photonics”) is pleased to announce the completion of the previously published acquisition of NKT Photonics A/S (hereinafter referred to as “NKT Photonics”).
 
NKT Photonics is the leading supplier of high-performance fiber lasers and photonic crystal fibers. Based on their unique fiber technology, the laser products fall within three major product lines:

  1.  Supercontinuum White Light Lasers (SuperK): The SuperK lasers deliver high brightness in a broad spectral range (400 nm-2500 nm), and are used within bio-imaging, semiconductor metrology, and device-characterization.
  2.  Single-Frequency DFB Fiber Lasers (Koheras): The Koheras lasers have extremely high wavelength stability and low noise, and are ideal for fiber sensing, quantum computing, and quantum sensing.
  3.  Ultra-short pulse Lasers (aeroPULSE and Origami): This range of lasers consists of picosecond and femtosecond pulsed lasers with excellent beam quality and stability. The lasers are mainly used within ophthalmic surgery, bio-imaging, and optical processing applications.

 
The acquisition enables us to combine Hamamatsu Photonics’ detectors and cameras with NKT Photonics' lasers and fibers, thereby offering unique system solutions to the customers.
 
One special market of interest is the rapidly growing quantum computing area. Here NKT Photonics’ Koheras lasers serve customers with trapped ions systems requiring high power narrow linewidth lasers with extremely high wavelength stability and low noise. The same customers use Hamamatsu Photonics’ high-sensitivity cameras and sensors to detect the quantum state of the qubits. Together, we will be able to provide comprehensive solutions including lasers, detectors, and optical devices for the quantum-technology market.
 
Another important area of collaboration is the semiconductor market. With the trend toward more complex three-dimensional semiconductor devices, there is an increasing demand for high precision measurement equipment covering a wide range of wavelengths. By combining NKT Photonics' broadband SuperK lasers with Hamamatsu Photonics’ optical sensors and measuring devices, we can supply expanded solutions for semiconductor customers needing broader wavelength coverage, multiple measurement channels, and higher sensitivity.
 
Finally, in the hyperspectral imaging market, high-brightness light sources with a broad spectral range from visible to near-infrared (400 nm-2500 nm) are essential. Additionally, unlike halogen lamps, the SuperK generates no heat, so demand for NKT Photonics' SuperK is increasing. We can provide optimal solutions by integrating it with Hamamatsu Photonics' image sensors and cameras, leveraging our unique compound semiconductor technologies.
 
With this acquisition, Hamamatsu Photonics Group now possesses a very broad range of technologies within light sources, lasers, and detectors. The combination of NKT Photonics and Hamamatsu Photonics will help us to drive our technology to the next level. NKT Photonics will continue their operating structure and focus on providing superior products and solutions to their customers.

Go to the original article...

SeeDevice Inc files complaint

Image Sensors World        Go to the original article...

From GlobeNewswire: https://www.globenewswire.com/news-release/2024/09/13/2945864/0/en/SeeDevice-Inc-Files-Complaint-In-U-S-District-Court-Against-Korean-Broadcasting-System.html

SeeDevice Inc. Files Complaint In U.S. District Court Against Korean Broadcasting System

ORANGE, California, Sept. 13, 2024 (GLOBE NEWSWIRE) -- SeeDevice Inc. (“SeeDevice”), together with its CEO and founder Dr. Hoon Kim, has filed a Complaint in the U.S. District Court for the Central District of California against Korean Broadcasting System (KBS), and its U.S. subsidiary KBS America, Inc. (collectively, “KBS”) for trade libel and defamation. The claims are based on an August 25, 2024, broadcast KBS is alleged to have published on its YouTube channel and KBS-america.com (“The KBS Broadcast”).

The complaint asserts that the KBS Broadcast published false and misleading statements regarding the viability and legitimacy of SeeDevice and Dr. Kim’s QMOS™ (quantum effect CMOS) SWIR image sensor, by omitting the fact that in 2009, and again in 2012, the Seoul High Court and Seoul Administrative Court found Dr. Kim’s sensor to be legitimate.

Dr. Kim’s QMOS™ sensor has garnered industry praise and recognition and is the subject of numerous third-party awards. In the past year alone, SeeDevice has been recognized with four awards for outstanding leadership and innovative technology: "20 Most Innovative Business Leaders to Watch 2023" by Global Business Leaders, "Top 10 Admired Leaders 2023" by Industry Era, "Most Innovative Image Technology Company 2023" by Corporate Vision, and “Company of the Year” of the Top 10 Semiconductor Tech Startups 2023 by Semiconductor Review. 

In their lawsuit, SeeDevice and Dr. Kim seek retraction of KBS’s defamatory broadcast, and a correction of the record, in addition to significant monetary damages and injunctive relief preventing further misconduct by KBS.

Go to the original article...
