Glass Imaging raises $20mn

Image Sensors World        Go to the original article...

PR Newswire: https://www.prnewswire.com/news-releases/glass-imaging-raises-20-million-funding-round-to-expand-ai-imaging-technologies-302451849.html

Glass Imaging Raises $20 Million Funding Round To Expand AI Imaging Technologies

LOS ALTOS, Calif., May 12, 2025 /PRNewswire/ -- Glass Imaging, a company harnessing the power of artificial intelligence to revolutionize digital image quality, today unveiled a Series A funding round led by global software investor Insight Partners. The $20 million round will allow Glass Imaging to continue to refine and implement their proprietary GlassAI technologies across a wide range of camera platforms - from smartphones to drones to wearables and more. The Series A round was joined by previous Glass Imaging investors GV (Google Ventures), Future Ventures and Abstract Ventures.

Glass Imaging uses artificial intelligence to extract the full image-quality potential of current and future cameras by reversing lens aberrations and sensor imperfections. Glass works with manufacturers to integrate GlassAI software to boost camera performance 10x, resulting in sharper, more detailed images under various conditions that remain true to life, with no hallucinations or optical distortions.

"At Glass Imaging we are building the future of imaging technology," said Ziv Attar, Founder and CEO, Glass Imaging. "GlassAI can unlock the full potential of all cameras to deliver stunning ultra-detailed results and razor-sharp imagery. The range of use cases and opportunities across industry verticals is huge."

"GlassAI leverages edge AI to transform Raw burst image data from any camera into stunning, high-fidelity visuals," said Tom Bishop, Ph.D., Founder and CTO, Glass Imaging. "Our advanced image restoration networks go beyond what is possible on other solutions: swiftly correcting optical aberrations and sensor imperfections while efficiently reducing noise, delivering fine texture and real image content recovery that outperforms traditional ISP pipelines."

"We're incredibly proud to lead Glass Imaging's Series A round and look forward to what the team will build next as they seek to redefine just how great digital image quality can be," said Praveen Akkiraju, Managing Director, Insight Partners. "The ceiling for GlassAI integration across any number of platforms and use cases is massive. We're excited to see this technology expand what we thought cameras and imaging devices were capable of." Akkiraju will join Glass Imaging's board and Insight's Jonah Waldman will join Glass Imaging as a board observer.

Glass Imaging previously announced a $9.3M extended Seed funding round in 2024 led by GV and joined by Future Ventures, Abstract and LDV Capital. That funding round followed an initial Seed investment in 2021 led by LDV Capital along with GroundUP Ventures.

For more information on Glass Imaging and GlassAI visit https://www.glass-imaging.com/


Sony-Leopard Imaging collaboration LI-IMX454


From PR Newswire: https://www.prnewswire.com/news-releases/leopard-imaging-and-sony-semiconductor-solutions-collaborate-to-showcase-li-imx454-multispectral-cameras-at-automate-and-embedded-vision-summit-302452836.html

Leopard Imaging and Sony Semiconductor Solutions Collaborate to Showcase LI-IMX454 Multispectral Cameras at Automate and Embedded Vision Summit

FREMONT, Calif., May 12, 2025 /PRNewswire/ -- Leopard Imaging Inc., a global innovator in intelligent vision solutions, is collaborating with Sony Semiconductor Solutions Corporation (Sony) to present the cutting-edge LI-IMX454 Multispectral Camera at both Automate and Embedded Vision Summit.

Leopard Imaging launched the LI-USB30-IMX454-MIPI-092H camera, which delivers high-resolution imaging across diverse lighting spectrums, powered by Sony's advanced IMX454 multispectral image sensor. Unlike conventional RGB sensors, Sony's IMX454 integrates eight distinct spectral filters directly onto each photodiode, allowing the camera to capture light across 41 wavelengths from 450 nm to 850 nm in a single shot using Sony's dedicated signal processing, without the need for mechanical scanning or bulky spectral elements.
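As a rough sketch of how a dense spectrum can be estimated from only a handful of filter channels, the snippet below solves a ridge-regularized least-squares problem mapping 8 filter readings to 41 bands. The Gaussian filter shapes and the solver are invented for illustration; Sony's actual filter responses and signal processing are proprietary and not described in the release.

```python
import numpy as np

wavelengths = np.linspace(450, 850, 41)   # 41 target bands (nm), per the release
centers = np.linspace(470, 830, 8)        # 8 made-up filter center wavelengths
# Hypothetical Gaussian filter responses, shape (8, 41): row k = filter k.
S = np.exp(-((wavelengths[None, :] - centers[:, None]) / 60.0) ** 2)

def reconstruct(measurements, S, ridge=1e-3):
    """Ridge-regularized least-squares estimate of the 41-band spectrum
    from 8 filter measurements m = S @ spectrum."""
    A = S.T @ S + ridge * np.eye(S.shape[1])
    return np.linalg.solve(A, S.T @ measurements)

# Simulate a smooth ground-truth spectrum, "measure" it, and recover it.
true_spec = 0.5 + 0.4 * np.sin(wavelengths / 80.0)
m = S @ true_spec                         # the 8 values a pixel would report
est = reconstruct(m, S)
print("RMS error:", np.sqrt(np.mean((est - true_spec) ** 2)))
```

Because 8 measurements cannot pin down 41 bands exactly, this only works well for smooth spectra; the regularizer supplies the missing smoothness prior.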

Multispectral imaging has historically been underutilized due to cost and complexity. With the LI-IMX454, Leopard Imaging and Sony aim to democratize access to this powerful technology by offering a compact, ready-to-integrate solution for a wide range of industries: from industrial inspection to medical diagnostics, precision agriculture, and many more.

"We're excited to collaborate with Sony to bring this next-generation imaging solution to market," said Bill Pu, President and Co-Founder of Leopard Imaging. "The LI-IMX454 cameras not only deliver high-resolution multispectral data but also integrate seamlessly with AI and machine vision systems for intelligent decision-making."

The collaboration also incorporates Sony's proprietary signal processing software, optimized to support key functions essential to multispectral imaging: defect correction, noise reduction, auto exposure control, robust non-RGB based classification, and color image generation.

Leopard Imaging and Sony will showcase live demos of LI-IMX454 cameras at both Automate and Embedded Vision Summit. Automate: Huntington Place, Booth #8000, May 12-13. Embedded Vision Summit: Santa Clara Convention Center, Booth #700, May 21-22. To arrange a meeting at either event, please contact marketing@leopardimaging.com.


Conference List – November 2025


IEEE Nuclear Science Symposium, Medical Imaging Conference, and Room-Temperature Semiconductor Detectors Symposium - 1-8 November 2025 - Yokohama, Japan - Website

SPIE Future Sensing Technologies 2025 - 11-13 November 2025 - Yokohama, Japan - Website

14th International "Hiroshima" Symposium on the Development and Application of Semiconductor Tracking Detectors (HSTD 14) - 16-21 November 2025 - Taipei, Taiwan - Website

Compamed - 17-20 November 2025 - Düsseldorf, Germany - Website

SEMI MEMS & Imaging Sensors Summit 2025 - 19-20 November 2025 - Munich, Germany - Website

17th Symposium Sensor Data Fusion: Trends, Solutions and Applications - Bonn, Germany - 24-26 November 2025 - Website

RSNA 2025 - 30 November-4 December 2025 - Chicago, Illinois, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index


Counterpoint Research’s CIS report


Global Smartphone CIS Shipments Climb 2% YoY in 2024

Samsung is no longer in the top-3 smartphone CIS suppliers.


  • Global smartphone image sensor shipments rose 2% YoY to 4.4 billion units in 2024.
  • Meanwhile, the average number of cameras per smartphone declined further to 3.7 in 2024 from 3.8 in 2023.
  • Sony maintained its leading position, followed by GalaxyCore in second place and OmniVision in third.
  • Global smartphone image sensor shipments are expected to fall slightly YoY in 2025.
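As a back-of-envelope check, the two figures above (total sensor shipments and average cameras per phone) imply the underlying smartphone volume; this is a rough division, not a number from the report:

```python
# Figures from the Counterpoint summary above.
cis_units_2024 = 4.4e9            # smartphone CIS shipments, units
cameras_per_phone_2024 = 3.7      # average cameras per smartphone

# Implied smartphone volume: total sensors / sensors per phone.
implied_smartphones = cis_units_2024 / cameras_per_phone_2024
print(f"Implied 2024 smartphone volume: ~{implied_smartphones / 1e9:.2f} billion units")
```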

 

https://www.counterpointresearch.com/insight/post-insight-research-notes-blogs-global-smartphone-cis-shipments-climbs-2-yoy-in-2024/


IS&T EI 2025 plenary talk on imaging and AI

This plenary presentation was delivered at the Electronic Imaging Symposium held in Burlingame, CA over 2-6 February 2025. For more information see: http://www.electronicimaging.org

Title: Imaging in the Age of Artificial Intelligence

Abstract: AI is revolutionizing imaging, transforming how we capture, enhance, and experience visual content. Advancements in machine learning are giving mobile phones far better cameras, enabling capabilities like enhanced zoom, state-of-the-art noise reduction, blur mitigation, and post-capture features such as intelligent curation and editing of photo collections, directly on device.
This talk will delve into some of these breakthroughs, and describe a few of the latest research directions that are pushing the boundaries of image restoration and generation, pointing to a future where AI empowers us to better capture, create, and interact with visual content in unprecedented ways.

Speaker: Peyman Milanfar, Distinguished Scientist, Google (United States)

Biography: Peyman Milanfar is a Distinguished Scientist at Google, where he leads the Computational Imaging team. Prior to this, he was a Professor of Electrical Engineering at UC Santa Cruz for 15 years, two of those as Associate Dean for Research. From 2012-2014 he was on leave at Google-x, where he helped develop the imaging pipeline for Google Glass. Over the last decade, Peyman's team at Google has developed several core imaging technologies that are used in many products. Among these are the zoom pipeline for the Pixel phones, which includes the multi-frame super-resolution ("Super Res Zoom") pipeline, and several generations of state of the art digital upscaling algorithms. Most recently, his team led the development of the "Photo Unblur" feature launched in Google Photos for Pixel devices.
Peyman received his undergraduate education in electrical engineering and mathematics from the UC Berkeley and his MS and PhD in electrical engineering from MIT. He holds more than two dozen patents and founded MotionDSP, which was acquired by Cubic Inc. Along with his students and colleagues, he has won multiple best paper awards for introducing kernel regression in imaging, the RAISR upscaling algorithm, NIMA: neural image quality assessment, and Regularization by Denoising (RED). He's been a Distinguished Lecturer of the IEEE Signal Processing Society and is a Fellow of IEEE "for contributions to inverse problems and super-resolution in imaging".


Brillnics mono-IR global shutter sensor


Miyauchi et al. from Brillnics Inc., Japan published a paper titled "A 3.96-μm, 124-dB Dynamic-Range, Digital-Pixel Sensor With Triple- and Single-Quantization Operations for Monochrome and Near-Infrared Dual-Channel Global Shutter Operation" in IEEE JSSC (May 2025).

Abstract: This article presents a 3.96-μm, 640×640-pixel stacked digital pixel sensor capable of capturing co-located monochrome (MONO) and near-infrared (NIR) frames simultaneously in a dual-channel global shutter (GS) operation. A super-pixel structure is proposed with diagonally arranged 2×2 MONO and NIR sub-pixels. To enhance visible-light sensitivity, large and small non-uniform micro-lenses are formed on the MONO and NIR sub-pixels, respectively. Each floating-diffusion (FD) shared super-pixel is connected to an in-pixel analog-to-digital converter and two banks of 10-bit static random access memories (SRAMs) to enable the dual-channel GS operation. To achieve high dynamic range (DR) in the MONO channel, a triple-quantization (3Q) operation is performed. Furthermore, a single-channel digital-correlated double sampling (D-CDS) 3Q operation is implemented. The fabricated sensor achieved 6.2-mW low power consumption at 30 frames/s with dual-channel capture. The MONO channel achieved 124-dB DR in the 3Q operation and 60 dB for the NIR channel. The sensor fits the stringent form-factor requirement of an augmented reality headset by consolidating MONO and NIR imaging capabilities.
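For context, dynamic range in dB converts to a linear signal ratio via DR = 20·log10(max signal / noise floor); the 124 dB quoted above corresponds to roughly a 1.6-million-to-one range. The electron counts in the example below are illustrative, not measured values from the paper:

```python
import math

def dynamic_range_db(full_well_e, noise_floor_e):
    """Dynamic range in dB from max signal and noise floor (in electrons)."""
    return 20.0 * math.log10(full_well_e / noise_floor_e)

# The paper's 124 dB figure implies a max-signal/noise ratio of 10**(124/20).
ratio = 10 ** (124 / 20)
print(f"124 dB corresponds to about {ratio:.2e}:1 linear signal range")

# Illustrative (made-up) electron counts, to show the conversion direction:
print(f"FW = 8000 e-, noise = 2 e-  ->  {dynamic_range_db(8000, 2):.1f} dB")
```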

Open access link: https://ieeexplore.ieee.org/document/10706075 

Concept of HDR dual-channel GS operation.

Pixel-level co-located MONO and NIR sub-pixels.

Sub-pixel and SRAM-bank usage. (a) Dual-channel operation. (b) Single-channel digital-CDS operation.

Fabricated chip. (a) Chip micrograph. (b) Chip top-level block diagram.

Photo-response and SNR curves of digital-CDS operation (after linearization).

Sample images captured by dual-channel operation. (a) MONO frame (HDR image). (b) NIR frame (2× gain for visualization).


Sony SSSpeculations


Several news sources are repeating a Bloomberg report that Sony is considering partially spinning off its semiconductor business.

https://finance.yahoo.com/news/sony-reportedly-mulling-semiconductor-unit-155046940.html 

Sony Group is contemplating a spinoff of its semiconductor unit, a move that could see Sony Semiconductor Solutions become an independent entity as early as this year, reports Bloomberg. The move, which is still under discussion, is part of the group's strategy to streamline business operations and concentrate on its core entertainment sector. The potential spinoff would involve distributing most of Sony's holding in the chip business to its shareholders while retaining a minority stake.

https://www.trendforce.com/news/2025/04/29/news-sony-reportedly-mulls-chip-division-spinoff-and-listing-to-strengthen-entertainment-focus/ 

According to Bloomberg, sources indicate that Sony Group is weighing the spin-off of its semiconductor subsidiary, Sony Semiconductor Solutions, with an IPO potentially taking place as early as this year. Another report from Bloomberg adds that the move would mark the PlayStation maker’s latest step in streamlining its operations and strengthening its focus on entertainment. As noted by the report, sources indicate that Sony is exploring a “partial spin-off” structure, under which the parent company would retain a stake in the subsidiary.


Paper on pixel reverse engineering technique


In an arXiv preprint titled "Multi-Length-Scale Dopants Analysis of an Image Sensor via Focused Ion Beam-Secondary Ion Mass Spectrometry and Atom Probe Tomography", Guerguis et al. write:

The following article presents a multi-length-scale characterization approach for investigating doping chemistry and spatial distributions within semiconductors, as demonstrated using a state-of-the-art CMOS image sensor. With an intricate structural layout and varying doping types/concentration levels, this device is representative of the current challenges faced in measuring dopants within confined volumes using conventional techniques. Focused ion beam-secondary ion mass spectrometry is applied to produce large-area compositional maps with a sub-20 nm resolution, while atom probe tomography is used to extract atomic-scale quantitative dopant profiles. Leveraging the complementary capabilities of the two methods, this workflow is shown to be an effective approach for resolving nano- and micro-scale dopant information, crucial for optimizing the performance and reliability of advanced semiconductor devices.

Preprint: https://arxiv.org/pdf/2501.08980 




Lecture on fundamentals of CMOS image sensors


The Fundamentals of CMOS Image Sensors with Richard Crisp


This video provides a sneak peek of "CMOS Image Sensors: Technology, Applications, and Camera Design Methodology," an SPIE course taught by imaging systems expert Richard Crisp. The course covers everything from the basics of photon capture to sensor architecture and real-world system implementation.
The preview highlights key differences between CCD and CMOS image sensors, delves into common sensor architectures such as rolling shutter and global shutter, and explains the distinction between frontside and backside illumination.
It also introduces the primary noise sources in image sensors and how they can be managed through design and optimization techniques such as photon transfer analysis and MTF assessment.
You'll also see how the course approaches imaging system design using a top-down methodology. This includes considerations regarding pixel architecture, optics, frame rate, and data bandwidth, all demonstrated through practical examples, such as a networked video camera design.
Whether you're an engineer, scientist, or technical manager working with imaging systems, this course is designed to help you better understand the technology behind modern CMOS image sensors and how to make informed design choices. Enjoy!


3D effects in time-delay integration sensor pixels


Guo et al. from Changchun Institute of Optics, University of Chinese Academy of Sciences, and Gpixel Inc. published a paper titled "Study on 3D Effects on Small Time Delay Integration Image Sensor Pixels" in Sensors.

Abstract: This paper demonstrates the impact of 3D effects on performance parameters in small-sized Time Delay Integration (TDI) image sensor pixels. In this paper, 2D and 3D simulation models of 3.5 μm × 3.5 μm small-sized TDI pixels were constructed, utilizing a three-phase pixel structure integrated with a lateral anti-blooming structure. The simulation experiments reveal the limitations of traditional 2D pixel simulation models by comparing the 2D and 3D structure simulation results. This research validates the influence of the 3D effects on the barrier height of the anti-blooming structure and the full well potential and proposes methods to optimize the full well potential and the operating voltage of the anti-blooming structure. To verify the simulation results, test chips with pixel sizes of 3.5 μm × 3.5 μm and 7.0 μm × 7.0 μm were designed and manufactured based on a 90 nm CCD-in-CMOS process. The measurement results of the test chips matched the simulation data closely and demonstrated excellent performance: the 3.5 μm × 3.5 μm pixel achieved a full well capacity of 9 ke- while maintaining a charge transfer efficiency of over 0.99998.
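The quoted charge transfer efficiency matters because TDI operation shifts charge through many sequential transfers, so per-transfer losses compound geometrically. A small sketch of that compounding (the transfer counts below are illustrative, not taken from the paper):

```python
def retained_fraction(cte, n_transfers):
    """Fraction of signal charge surviving n sequential transfers at a
    given per-transfer charge transfer efficiency (CTE)."""
    return cte ** n_transfers

# At the paper's CTE floor of 0.99998, losses stay small even after
# thousands of transfers; illustrative counts only.
for n in (100, 1000, 5000):
    print(f"CTE 0.99998 over {n:5d} transfers -> {retained_fraction(0.99998, n):.4f} retained")
```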

Paper link [open access]: https://www.mdpi.com/1424-8220/25/7/1953


Hamamatsu SPAD tutorial


SPAD and SPAD Arrays: Theory, Practice, and Applications

The video is a comprehensive webinar on Single Photon Avalanche Diodes (SPADs) and SPAD arrays, addressing their theory, applications, and recent advancements. It is led by experts from the New Jersey Institute of Technology and Hamamatsu, discussing technical fundamentals, challenges, and innovative solutions to improve the performance of SPAD devices. Key applications highlighted include fluorescence lifetime imaging, remote gas sensing, quantum key distribution, and 3D radiation detection, showcasing SPAD's unique ability to timestamp events and enhance photon detection efficiency.


Speculation about Samsung exiting CIS business?


Recent speculative news articles suggest that Samsung is weighing an exit from the CIS business, following SK Hynix's recent exit.

News source: https://www.digitimes.com/news/a20250312PD213/cis-samsung-sk-hynix-business-lsi.html

SK Hynix is shutting down its CMOS image sensor (CIS) business, fueling industry speculation over whether Samsung Electronics will follow suit. Samsung's system LSI division, which oversees its CIS operations, is undergoing an operational diagnosis...


ICCP 2024 Keynote on Event Cameras

In this keynote, held at the 2024 International Conference on Computational Photography, Prof. Davide Scaramuzza from the University of Zurich presents a vision for event cameras, bio-inspired vision sensors that outperform conventional cameras with ultra-low latency, high dynamic range, and minimal power consumption. He dives into the motivation behind event-based cameras, explains how these sensors work, and explores their mathematical modeling and processing frameworks. He highlights cutting-edge applications across computer vision, robotics, autonomous vehicles, virtual reality, and mobile devices, while also addressing the open challenges and future directions shaping this exciting field.
00:00 - Why event cameras matter to robotics and computer vision

07:24 - Bandwidth-latency tradeoff
08:24 - Working principle of the event camera
10:50 - Who sells event cameras
12:27 - Relation between event cameras and the biological eye
13:19 - Mathematical model of the event camera
15:35 - Image reconstruction from events
18:32 - A simple optical-flow algorithm
20:20 - How to process events in general
21:28 - 1st order approximation of the event generation model
23:56 - Application 1: Event-based feature tracking
25:03 - Application 2: Ultimate SLAM
26:30 - Application 3: Autonomous navigation in low light
27:38 - Application 4: Keeping drones flying when a rotor fails
31:06 - Contrast maximization for event cameras
34:14 - Application 1: Video stabilization
35:16 - Application 2: Motion segmentation
36:32 - Application 3: Dodging dynamic objects
38:57 - Application 4: Catching dynamic objects
39:41 - Application 5: High-speed inspection at Boeing and Strata
41:33 - Combining events and RGB cameras and how to apply deep learning
45:18 - Application 1: Slow-motion video
48:34 - Application 2: Video deblurring
49:45 - Application 3: Advanced Driving Assistant Systems
56:34 - History and future of event cameras
58:42 - Reading material and Q&A


Sony releases SPAD-based depth sensor


From PetaPixel: https://petapixel.com/2025/04/15/sony-unveils-the-worlds-smallest-and-lightest-lidar-depth-sensor/

Sony announced the AS-DT1, the world’s smallest and lightest miniature precision LiDAR depth sensor.

Measuring a mere 29 by 29 by 31 millimeters (1.14 by 1.14 by 1.22 inches) excluding protrusions, the Sony AS-DT1 LiDAR Depth Sensor relies upon sophisticated miniaturization and optical lens technologies from Sony’s machine vision industrial cameras to accurately measure distance and range. The device utilizes “Direct Time of Flight” (dToF) LiDAR technology and features a Sony Single Photon Avalanche Diode (SPAD) image sensor. 
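dToF LiDAR measures the round-trip time of a laser pulse and converts it to distance via d = c·t/2. A minimal sketch of that conversion, applied to figures quoted in Sony's spec list (40 m indoor range, ~25 mm distance resolution), to show the photon-timing scales a SPAD sensor must resolve:

```python
C = 299_792_458.0  # speed of light, m/s

def dtof_distance_m(round_trip_s):
    """Direct time-of-flight: distance = c * t / 2 (t is the round trip)."""
    return C * round_trip_s / 2.0

def round_trip_time_s(distance_m):
    """Inverse: round-trip time for a given target distance."""
    return 2.0 * distance_m / C

# The 40 m indoor range implies a ~267 ns round trip; the 0.98 in (~25 mm)
# resolution implies distinguishing timing steps of ~167 ps.
print(f"40 m target:  round trip {round_trip_time_s(40.0) * 1e9:.1f} ns")
print(f"25 mm step:   timing step {round_trip_time_s(0.025) * 1e12:.0f} ps")
```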

From the official Sony webpage: https://pro.sony/ue_US/products/lidar/as-dt1

  • Dimensions: 1.14 (W) x 1.14 (H) x 1.22 (D) in (29 x 29 x 31 mm), excluding protrusions
  • Weight: 50 g (1.1 oz) or less
  • dToF LiDAR technology with a Single Photon Avalanche Diode (SPAD) sensor
  • Maximum ranging distance (at 15 fps, 50% reflectivity, center): 40 m (131 ft) indoor, 20 m (65.6 ft) outdoor
  • Measurement accuracy at 10 m (indoor/outdoor): ±0.2 in
  • Distance resolution: 0.98 in
  • Frame rate: 30 fps (15 fps in maximum ranging distance mode)
  • Ranging points: 576 (24 x 24)
  • Laser wavelength: 940 nm
  • HFoV: 30° or more
  • Lightweight aluminum alloy housing
  • 2 USB-C ports; connector for external power, UART interface and trigger


 


Conference List – October 2025


ASNT Annual Conference - 6-9 October 2025 - Orlando, Florida, USA - Website

Scientific Detector Workshop - 6-10 October 2025 - Canberra, Australia - Website

AutoSens Europe - 7-9 October 2025 - Barcelona, Spain - Website

SPIE/COS Photonics Asia - 12-14 October 2025 - Beijing, China - Website

BioPhotonics Conference - 14-16 October 2025 - Online - Website 

IEEE Sensors Conference - 19-22 October 2025 - Vancouver, British Columbia, Canada - Website 

Optica Laser Congress and Exhibition - 19-23 October 2025 - Prague, Czech Republic - Website

OPTO Taiwan - 22-24 October 2025 - Taipei, Taiwan - Website

Image Sensors Asia - 30-31 October 2025 - Seoul, South Korea - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index


Paper on RGBC-IR color filter array


Tripurari Singh and Mritunjay Singh of Image Algorithmics (US) presented a paper titled "RGBC-IR: A CFA for single exposure dark flash" at Electronic Imaging 2025.

Abstract: Modern RGB-IR cameras have evolved to capture accurate colors and NIR from a single sensor. While these cameras can employ their RGB images to effectively denoise IR, they contain too few IR pixels to do the reverse: denoise RGB with IR. Improving low-light RGB with an IR illuminator is an important feature for upcoming automotive applications, where cabins have to be kept dark at night so as not to distract the driver. Current solutions to this problem either discard the IR cut filter and take separate RGB and IR exposures, suffering from poor colors, or employ a bulky beam-splitter architecture with separate RGB and IR sensors. We propose a camera with a novel RGBC-IR color filter array containing clear pixels that are sensitive to both visible light and IR. Its RGB pixels feature an IR-attenuating coating, while its IR pixels contain a black filter that blocks visible light. Multispectral demosaicking techniques are used to reconstruct RGB and IR images, as well as a high-SNR luminance image combining the Clear, RGB and IR signals. Fusion techniques developed for beam-splitter RGB-IR cameras are used to denoise RGB and IR using the luminance.
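The abstract does not specify the mosaic layout, so as an illustration of what a CFA with R, G, B, Clear, and IR sites looks like to downstream demosaicking code, here is a sketch using an invented 2×4 tile (the actual RGBC-IR pattern may differ):

```python
import numpy as np

# Hypothetical 2x4 CFA tile with R, G, B, Clear (C) and IR sites.
# This layout is invented for illustration, not taken from the paper.
TILE = np.array([["R", "C", "G", "C"],
                 ["C", "IR", "C", "B"]])

def mosaic(scene, tile=TILE):
    """Sample a dict of full-resolution channel planes through the CFA,
    producing the single raw plane a real sensor would read out."""
    h, w = next(iter(scene.values())).shape
    th, tw = tile.shape
    raw = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            raw[i, j] = scene[tile[i % th, j % tw]][i, j]
    return raw

# Demo: random channel planes standing in for a real multichannel scene.
rng = np.random.default_rng(0)
scene = {ch: rng.random((8, 8)) for ch in ("R", "G", "B", "C", "IR")}
raw = mosaic(scene)
print(raw.shape)  # one value per pixel, channel chosen by the CFA tile
```

A demosaicker's job is then to invert this sampling: estimate all five planes at every pixel from the single `raw` plane.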


Conference List – September 2025


IEEE 2025 International Conference on Multisensor Fusion and Integration for Intelligent Systems - 2-4 September 2025 - College Station, Texas, USA - Website

IEEE European Solid-State Electronics Research Conference - 8-11 September 2025 - Munich, Germany - Website

IEEE International Conference on Sensors and Nanotechnology (SENNANO) - 10-11 September 2025 - Selangor, Malaysia - Website

Sensor Expo Japan - 10-12 September 2025 - Tokyo, Japan - Website

IEEE International Conference on Image Processing - 14-17 September 2025 - Anchorage, Alaska, USA - Website

SPIE Sensors + Imaging 2025 - 15-18 September 2025 - Madrid, Spain - Website

17th Topical Seminar on Innovative Particle and Radiation Detectors (IPRD25) - 15-19 September 2025 - Siena, Italy - Website

Sensor China Expo & Conference - 24-26 September 2025 - Shanghai, China - Website

11th International Conference on Sensors and Electronic Instrumentation Advances - 24-26 September 2025 - Ponta Delgada (Azores), Portugal - Website

RADiation and its Effects on Components and Systems (RADECS) - 29 September-3 October 2025 - Antwerp, Belgium - Website

If you know about additional local conferences, please add them as comments.

Return to Conference List index


Conference List – August 2025


Low Temperature Quantum Detectors - 3-6 August 2025 - Helsinki, Finland - Website

SPIE Optics & Photonics - 3-7 Aug 2025 - San Diego, California, USA - Website

VERTEX 2025: 33rd International Workshop on Vertex Detectors - 25-29 August 2025 - Knoxville, Tennessee, USA - Website

BNL Physics and Detector Simulation Meeting - 26 August 2025 - Zoom online - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index


Conference List – July 2025


10th International Smart Sensor Technology Exhibition - 2-4 July 2025 - Goyang, South Korea - Website

26th International Workshop on Radiation Detectors - 6-10 July 2025 - Bratislava, Slovakia - Website

IEEE Sensors Applications Symposium - 8-10 July 2025 - Newcastle Upon Tyne, United Kingdom - Website

Ninth International Conference on Imaging, Signal Processing and Communications - 11-13 July 2025 - Osaka, Japan - Website

IEEE Nuclear & Space Radiation Effects Conference (NSREC) - 14-18 July 2025 - Nashville, Tennessee, USA - Website

Optica Sensing Congress - 20-24 July 2025 - Long Beach, California, USA - Website

American Association of Physicists in Medicine 67th Annual Meeting and Exhibition - 27-30 July 2025 - Washington, D.C., USA - Website

The 2nd International Conference on AI Sensors and Transducers - 29 July–3 August 2025 - Kuala Lumpur, Malaysia - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index


IDS launches new industrial camera series featuring Prophesee


PARIS, France and OBERSULM, Germany – March 5, 2025 - IDS Imaging Development Systems GmbH, a market leader in industrial machine vision, and Prophesee SA, inventor of the most advanced neuromorphic vision systems, today announced that IDS' new uEye EVS camera line incorporates the high speed, dynamic range and data efficiency of the Prophesee-Sony IMX636HD event-based vision sensor to offer new capabilities for industrial machine vision applications.

The result of extensive collaboration between the two companies, the solution features Prophesee’s proven neuromorphic approach to capturing fast-moving objects with significantly less data processing, power and blur than traditional frame-based methods. With these capabilities, the uEye EVS camera is the ideal solution for applications that require real-time machine vision processing at very high speed, such as optical monitoring of vibrations or high-speed motion analysis.

The camera benefits from Prophesee’s event-based vision’s ability to capture only relevant events in a scene. In contrast to conventional image sensors, it does not capture every image completely at regular intervals (frames) but only reacts to changes within a scene. It transmits events depending on when and where the brightness in its field of view changes - for each individual sensor pixel. The temporal resolution, i.e. the minimum measurable time difference between two successive changes in brightness, can be less than 100 microseconds.
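The per-pixel change-detection behavior described above is commonly modeled as emitting an event whenever log-intensity drifts beyond a contrast threshold since the last event. A toy single-pixel sketch (the threshold value and samples are illustrative, not sensor parameters):

```python
import math

def events_from_intensity(samples, threshold=0.25):
    """Toy event-generation model for one pixel: emit (+1/-1) whenever
    log-intensity changes by more than `threshold` since the last event.
    Static periods produce no output, mirroring the data-efficiency
    property described for event sensors."""
    events = []
    _, i0 = samples[0]
    ref = math.log(i0)                      # log-intensity at last event
    for t, i in samples[1:]:
        delta = math.log(i) - ref
        if abs(delta) >= threshold:
            events.append((t, +1 if delta > 0 else -1))
            ref = math.log(i)               # reset reference on each event
    return events

# A pixel watching a brightening then darkening edge: only the changes
# produce events; the flat stretches are silent.
samples = [(0, 1.0), (1, 1.0), (2, 1.5), (3, 2.5), (4, 2.5), (5, 1.0)]
print(events_from_intensity(samples))  # → [(2, 1), (3, 1), (5, -1)]
```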

The sensor is supported by Metavision SDK, a seamlessly integrated suite of software tools and models, APIs, and other training and development resources from Prophesee for efficient evaluation, visualization, and customization.

"This partnership combines our mutual areas of expertise to realize the benefits of event-based vision, including remarkable temporal resolution which make the cameras optimised for analysing highly dynamic scenes. It enables best conditions for capturing fast object movements without loss of information, comparable to an image-based frame rate of more than 10,000 images per second," explains Patrick Schick, Product Owner 3D & Vision Software. “At the same time, the sensor ignores all motionless areas of its field of view and thus generates 10 to 1000 times less data than image-based variants. This saves memory and computing time.”

“IDS cameras are well known to address the toughest machine vision use cases and with the incorporation of Prophesee event-based vision technologies, it strengthens its offering to provide far more performance, power efficiency and accuracy, even in the most challenging conditions,” says Luca Verre, CEO and co-founder of Prophesee. “We are excited to see how the efforts of this tight collaboration have resulted in the new uEye EVS camera which leverages the potential of our sensors and development environment to deliver new value to its customers.”

About IDS Imaging Development Systems GmbH:
IDS Imaging Development Systems GmbH is a leading manufacturer of industrial cameras and a pioneer in industrial image processing. The owner-managed, environmentally certified company develops high-performance, versatile 2D and 3D cameras as well as models with artificial intelligence (AI) or streaming/event-recording features. The nearly unlimited range of applications covers multiple industrial and non-industrial sectors, including equipment, plant and mechanical engineering.
Since its foundation in 1997 as a two-man company, IDS has grown into an independent, ISO-certified and environmentally certified family business with around 320 employees. The headquarters in Obersulm, Germany, is both a development and production site. With subsidiaries in the USA, Japan, South Korea and the UK, as well as further representative offices in France, Benelux and India, the technology company has a global presence.

About Prophesee
Prophesee is the inventor of the world’s most advanced neuromorphic vision systems. Prophesee’s patented sensors and AI algorithms introduce a new computer vision paradigm based on how the human eye and brain work. Like human vision, they see events: essential motion information in the scene, not a succession of conventional images. This breakthrough method allows for unprecedented speed (>10,000 fps time-resolution equivalent), dynamic range (>120 dB), data volume (10x to 1000x less) and power efficiency (<10 mW). Prophesee’s bio-inspired revolution opens a new path to absolute efficiency and safety in autonomous driving, IoT and Industry 4.0. Prophesee reveals the invisible. For more information, please visit www.prophesee.ai.


SK hynix plans to exit CMOS image sensor business


Various news agencies are reporting that SK hynix is exiting the CIS business to focus on AI.

https://www.trendforce.com/news/2025/03/06/news-sk-hynix-reportedly-exits-cis-to-focus-on-ai-memory-amid-weak-demand-and-fierce-china-competition/

Amid the AI-driven HBM boom, SK hynix is exiting its non-core CMOS image sensor (CIS) business, according to ZDNet and Edaily.

The ZDNet report suggests that SK hynix used to supply CMOS sensors for Samsung’s Galaxy Z3 and Chinese smartphones, but struggled to expand due to weak market demand and rising competition from Chinese newcomers.

According to SK hynix, its CIS division, launched in 2007, gained expertise in logic semiconductors beyond memory. However, the company decided to shift resources from CIS to AI memory to strengthen its AI-focused strategy, as per ZDNet.

Another report from fnews notes that SK hynix entered the image sensor market in 2008 by acquiring Silicon File. In 2019, it established a CIS R&D center in Japan and launched the “Black Pearl” sensor brand.

However, trailing behind Sony and Samsung in the CIS business, SK hynix has been gradually downsizing the division, according to Edaily.

In late 2024, the company placed its CIS development team under the Future Technology Research Institute amid ongoing discussions about the business’s declining profitability, the Edaily report indicates.

https://www.thelec.net/news/articleView.html?idxno=5177 

SK Hynix is exiting the CMOS image sensor (CIS) business, TheElec has learned.

The company will instead focus fully on AI memory products. Those working at its CIS business unit will be transferred to teams working on high-bandwidth memory (HBM).

In a recent internal communication event with employees, SK Hynix said the AI era has come and that the company has achieved “great results” in the AI memory sector.

The company was in the middle of a “great transition” to become a core AI company, SK Hynix told employees.

The technology and expertise of its CIS business unit will be crucial in solidifying its position as a global AI company, SK Hynix added.

SK Hynix started its CIS business in 2007 and since then attempted to expand its market share in the mobile market. But the unit continued to post low profitability, and its existence was repeatedly questioned.

In its year-end reshuffle last year, the business unit was moved under the supervision of the Future Technology Lab, whose teams are more research-oriented than those reporting to the CEO.
SK Hynix CEO Kwak Noh-jung was also known to have been strongly in favor of continuing the CIS business unit prior to the exit.

The company, during the event, also said it plans to become a full-stack AI memory provider.


Conference List – June 2025


Low-Temperature Detectors Conference - 1-6 June 2025 - Santa Fe, New Mexico, USA - Website

International Image Sensor Workshop - 2-5 June 2025 - Hyogo, Japan - Website

Symposium on VLSI Technology and Circuits - 8-12 June 2025 - Kyoto, Japan - Website

AutoSens USA 2025 - 10-12 June 2025 - Detroit, Michigan, USA - Website

Photonics for Quantum - 16-19 June 2025 - Waterloo, Ontario, Canada - Website

Smart Sensing - 18-20 June 2025 - Tokyo, Japan - Website

Sensors and Sensing Technology - 19-21 June 2025 - Zurich, Switzerland - Website

22nd International Conference on IC Design and Technology (ICICDT) - 23-25 June 2025 - Lecce, Italy - Website

Sensors Converge - 24-26 June 2025 - Santa Clara, California, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index


Conference List – May 2025


CLEO - Congress on Lasers and Electro-Optics - 4-9 May 2025 - Long Beach, California, USA - Website

Sensor+Test - 6-8 May 2025 - Nuremberg, Germany - Website

Automate - 12-15 May 2025 - Detroit, Michigan, USA - Website

Quantum Photonics Conference, Networking and Trade Exhibition - 13-14 May 2025 - Erfurt, Germany - Website

IEEE Sensors in Spotlight - 16 May 2025 - Boston, Massachusetts, USA - Website

AllSensors 2025 - 18-22 May 2025 - Nice, France - Website

Biosensors 2025 - 19-22 May 2025 - Lisbon, Portugal - Website

Embedded Vision Summit - 20-22 May 2025 - Santa Clara, California, USA - Website

IEEE International Symposium on Integrated Circuits and Systems - 25-28 May 2025 - London, UK - Website

5th International Electronic Conference on Biosensors - 26-28 May 2025 - Online - Website

LOPS 2025 - 31 May-2 June 2025 - Fort Lauderdale, Florida, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index



International Conference on Computational Photography (ICCP) 2025 call for papers


ICCP is the premier annual conference on computational imaging. The conference brings together researchers with interests broadly related to advancing computational imaging, from theory to systems to applications, including sensors, optics, algorithms, machine intelligence, vision science and perception.
ICCP 2025 will be an in-person event at the University of Toronto in Toronto, Canada, from July 21 – 23, 2025.

ICCP 2025 seeks novel and high-quality submissions in all areas of computational imaging—from theory to systems to applications, including sensors, optics, algorithms, machine intelligence, vision science, and perception—as well as the following topics of interest.

  •  High-performance imaging
  •  Computational cameras, illumination, and displays
  •  Advanced image and video processing
  •  Integration of imaging, physics, and machine learning
  •  Organizing and exploiting photo/video collections
  •  Structured light and time-of-flight imaging
  •  Appearance, shape, and illumination capture
  •  Computational optics (wavefront coding, digital holography, compressive sensing, etc.)
  •  Sensor and illumination hardware
  •  Imaging models and limits
  •  Physics-based rendering, neural rendering, and differentiable rendering
  •  Applications: imaging on mobile platforms, scientific imaging, medicine and biology, user interfaces, AR/VR systems
Two Integrated Paper Tracks
As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference. All submissions to ICCP will undergo a common review process and be judged for acceptance to either:
  1.  The PAMI special issue: for papers that describe entirely novel work (i.e., not extensions of published conference papers) and are also of archival quality with comprehensive evaluation and analysis.
  2.  The ICCP Proceedings: for papers that meet traditional conference criteria for quality and novelty but do not meet the criteria for (1) above.

Reviewing will be double-blind, and authors will be allowed a rebuttal after initial reviews. After review, the program chairs will inform the authors of accepted papers whether their paper has been selected for the special issue or the conference proceedings (see the Review and Decision Process section below for further details). Both sets of accepted papers will be presented as talks at the conference.

Please visit this page for more details and submission link: https://iccp2025.iccp-conference.org/#callforpapers

Paper submission deadline (firm, no extensions): April 9, 2025, at 23:59 Pacific Time.



Future of Image Sensors: IS&T Rochester NY chapter talk by John McCarten



The Future of Image Sensors, John McCarten

John McCarten presented a talk as part of the Society for Imaging Science and Technology (IS&T) Rochester NY chapter seminar series on 22 Jan. 2025.


John McCarten studied Physics at Cornell University and currently works for L3Harris. Since 2001, John’s focus has been on image sensors and cameras. He has worked with semiconductor foundries on four continents. He has over 30 patents and has been the technical lead on development projects that have brought in over a billion dollars in sales.


00:00 - Introduction
00:45 - Future of Image Sensors
50:22 - Discussion


IISW 2025 preliminary technical program available


 

New! IISW25 Technical Program (preliminary): Link

Venue: Awaji Yumebutai International Conference Center, Hyogo, Japan.
Date: 2–5 June, 2025
 
New! Pre-registration information: Link

Authors can find the Paper Numbers or Poster Numbers in the Program to complete the required pre-registration.

After collecting all the camera-ready files, the final Program with timetables will be posted here.

New! IISW25 Right to Publish Form: Link

Authors need to download, sign, print to PDF, and submit it along with the camera-ready 4-page paper by 03/22/25.

Submit files to the same CMT site where you submitted the abstracts: https://cmt3.research.microsoft.com/IISW2025
 
 
General Workshop Co-Chairs
Yusuke Oike – Sony (Japan)
Shoji Kawahito – Shizuoka University and SUiCTE
Technical Program Chairs
Calvin Chao – TSMC
Rihito Kuroda – Tohoku University
IISS Board of Directors
Calvin Chao – TSMC
Boyd Fowler – OmniVision
Robert Henderson – The University of Edinburgh
Vladimir Koifman – Analog Value
Rihito Kuroda – Tohoku University
Guy Meynants – Photolitics
Junichi Nakamura – Brillnics
Shouleh Nikzad – Jet Propulsion Lab.
Yusuke Oike – Sony (Japan)
Johannes Solhusvik – Sony (Norway)
Daniel Van Blerkom – Forza Silicon-Ametek
Yibing Michelle Wang – Samsung Semiconductor


TriEye SWIR machine vision solutions


 


Discover TriEye's high-performance SWIR-based machine vision solutions, designed to enhance visibility and accuracy across various applications. This webinar explores the unique capabilities of SWIR (Short-Wave Infrared) technology and its impact on machine vision systems.


Improving color sensitivity in low light using nano-prisms, light pillars, and color splitters


A recent article in IEEE Spectrum discusses three approaches to improve color throughput in low-light settings: nano-prisms, light pillars, and color splitters.

Link: https://spectrum.ieee.org/smartphone-camera-sensors-next-gen/nano-light-pillars-bring-low-light-images-into-focus

 


Using color splitters, an image sensor can increase its overall sensitivity by having the light appropriate to each pixel channeled directly to it. (imec)


“Nano-pillars” are a light-channeling form of metasurface that, a little like Imec's color splitter, also directs specific wavelengths of light to the detector pixels best suited to receive them. (VisEra Technologies)

Samsung's new nano-prism image sensor is sensitive to light arriving at more oblique angles than some conventional pixel technology today. (Samsung)
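The throughput argument behind color splitters can be made concrete with a back-of-the-envelope calculation (the photon counts below are illustrative assumptions, not figures from imec, VisEra, or Samsung):

```python
# Toy photon-budget comparison: absorptive color filter vs. ideal color splitter.
# All numbers are illustrative assumptions, not vendor measurements.

photons_per_pixel = 900  # photons arriving at each pixel site during one exposure

# Absorptive (Bayer-style) filter: each pixel keeps roughly its own third
# of the visible spectrum and absorbs the other two thirds.
bayer_collected = photons_per_pixel / 3

# Ideal splitter: out-of-band photons are redirected to the neighboring
# pixels that want them, so each pixel collects its third of the spectrum
# from three pixel sites' worth of incoming light.
splitter_collected = 3 * (photons_per_pixel / 3)

# In the shot-noise limit, SNR scales with the square root of photon count.
snr_gain = (splitter_collected / bayer_collected) ** 0.5

print(bayer_collected, splitter_collected, round(snr_gain, 2))  # 300.0 900.0 1.73
```

Real splitters are lossy and angle-dependent, so practical low-light gains fall short of this ideal 3x photon budget, but the direction of the benefit is the same.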



IISW 2025 pre-registration is open!


TL;DR: Register as soon as possible starting February 17, as the number of attendees is limited!

IISW 2025 Announcement of Pre-registration
 
The 2025 International Image Sensor Workshop (IISW) provides a biennial opportunity to present innovative work in the area of solid-state image sensors and share new results with the image sensor community. The event is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. As is the tradition, the 2025 workshop will emphasize an open exchange of information among participants in an informal, secluded setting on the Awaji Island in Hyōgo, Japan.
 
The pre-registration, along with the workshop program, will be open from February 17th, 2025. Details about pre-registration have been made available in advance at:
 
https://imagesensors.org/2025-international-image-sensor-workshop/
 
Priority seating will be given to presenters of accepted papers, resulting in a limited number of seats available for other attendees. Registration will generally be on a first-come, first-served basis. However, in line with the workshop’s commitment to fostering diverse and lively discussions, the organizers reserve the right to adjust allocations to ensure a balanced representation of affiliations.


