Conference List – July 2026

Image Sensors World        Go to the original article...

2nd International Conference on Optical Imaging and Detection Technology (OIDT 2026) - 3-5 July 2026 - Yulin, China - Website

New Developments in Photodetection - 6-10 July 2026 - Troyes, France - Website

11th International Smart Sensor Technology Exhibition - 8-10 July 2026 - Goyang, South Korea - Website

Tenth International Conference on Imaging, Signal Processing and Communications - 11-13 July 2026 - Kobe, Japan - Website

IEEE International Conference on Flexible Printable Sensors and Systems - 12-15 July 2026 - Atlanta, Georgia, USA - Website

Optica Sensing Congress - 12-17 July 2026 - Maastricht, Netherlands - Website

IEEE Sensors Applications Symposium - 15-17 July 2026 - Vitoria, Brazil - Website

American Association of Physicists in Medicine 67th Annual Meeting and Exhibition - 19-22 July 2026 - Vancouver, BC, Canada - Website

IEEE Nuclear & Space Radiation Effects Conference (NSREC) - 20-24 July 2026 - San Juan, Puerto Rico, USA - Website

34th International Workshop on Vertex Detectors - 20-24 July 2026 - Stoos, Switzerland - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

Synthetic aperture imager

Image Sensors World        Go to the original article...

Link: scitechdaily.com/this-breakthrough-image-sensor-lets-scientists-see-tiny-details-from-far-away/

Open-access paper: Multiscale aperture synthesis imager  https://www.nature.com/articles/s41467-025-65661-8

A new lens-free imaging system uses software to see finer details from farther away than optical systems ever could before.

Imaging technology has reshaped how scientists explore the universe – from charting distant galaxies using radio telescope arrays to revealing tiny structures inside living cells. Despite this progress, one major limitation has remained unresolved. Capturing images that are both highly detailed and wide in scope at optical wavelengths has required bulky lenses and extremely precise physical alignment, making many applications difficult or impractical.

Researchers at the University of Connecticut may have found a way around this obstacle. A new study led by Guoan Zheng, a biomedical engineering professor and director of the UConn Center for Biomedical and Bioengineering Innovation (CBBI), along with his team at the University of Connecticut College of Engineering, was published in Nature Communications. The work introduces a new imaging strategy that could significantly expand what optical systems can do in scientific research, medicine, and industrial settings.

Why Synthetic Aperture Imaging Breaks Down at Visible Light

“At the heart of this breakthrough is a longstanding technical problem,” said Zheng. “Synthetic aperture imaging – the method that allowed the Event Horizon Telescope to image a black hole – works by coherently combining measurements from multiple separated sensors to simulate a much larger imaging aperture.”

This approach works well in radio astronomy because radio waves have long wavelengths, which makes precise coordination between sensors achievable. Visible light operates on a much smaller scale. At those wavelengths, the physical accuracy needed to keep multiple sensors synchronized becomes extremely difficult to maintain, placing strict limits on traditional optical synthetic aperture systems.
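
To put the scale difference in numbers (a back-of-the-envelope comparison with assumed values, not figures from the paper): coherent combination typically needs path lengths stable to a small fraction of a wavelength, so moving from millimetre-wave radio to visible light tightens the required mechanical stability by more than three orders of magnitude.

    # Rough alignment-tolerance comparison for coherent aperture synthesis
    # (illustrative assumptions only).
    wavelengths_m = {
        "EHT-style radio (230 GHz)": 1.3e-3,   # ~1.3 mm
        "visible light": 550e-9,               # ~550 nm
    }
    stability_fraction = 0.1  # assume ~lambda/10 path-length stability is required

    for name, wavelength in wavelengths_m.items():
        tolerance_nm = stability_fraction * wavelength * 1e9
        print(f"{name}: required stability ~ {tolerance_nm:.0f} nm")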

Letting Software Do the Synchronizing

The Multiscale Aperture Synthesis Imager (MASI) addresses this challenge in a fundamentally different way. Instead of requiring sensors to remain perfectly synchronized during measurement, MASI allows each optical sensor to collect light on its own. Computational algorithms are then used to align and synchronize the data after it has been captured.

Zheng describes the concept as similar to several photographers observing the same scene. Rather than taking standard photographs, each one records raw information about the behavior of light waves. Software later combines these independent measurements into a single image with exceptionally high detail.

This computational approach to phase synchronization removes the need for rigid interferometric setups, which have historically prevented optical synthetic aperture imaging from being widely used in real-world applications.

How MASI Captures and Rebuilds Light

MASI differs from conventional optical systems in two major ways. First, it does not rely on lenses to focus light. Instead, it uses an array of coded sensors placed at different locations within a diffraction plane. Each sensor records diffraction patterns, which describe how light waves spread after interacting with an object. These patterns contain both amplitude and phase information that can later be recovered using computational methods.

After the complex wavefield from each sensor is reconstructed, the system digitally extends the data and mathematically propagates the wavefields back to the object plane. A computational phase synchronization process then adjusts the relative phase differences between sensors. This iterative process increases coherence and concentrates energy in the combined image.
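
As a rough illustration of the two computational ingredients described above (propagating each recovered complex wavefield back to the object plane, then searching for per-sensor phase offsets that concentrate energy), here is a minimal Python sketch using a generic angular-spectrum propagator and a brute-force phase search. It is not the algorithm from the Nature Communications paper; the propagator formula is standard, and the search strategy is an assumption for illustration only.

    import numpy as np

    def propagate_angular_spectrum(field, wavelength, pitch, distance):
        """Propagate a complex wavefield by `distance` using the angular spectrum method."""
        ny, nx = field.shape
        fx = np.fft.fftfreq(nx, d=pitch)
        fy = np.fft.fftfreq(ny, d=pitch)
        FX, FY = np.meshgrid(fx, fy)
        # Free-space transfer function; evanescent components are suppressed.
        kz = 2 * np.pi * np.sqrt(np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0))
        return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

    def synchronize_phases(object_fields, n_trials=90):
        """Greedy per-sensor phase search that maximizes the peak intensity
        (energy concentration) of the coherently summed object-plane fields."""
        phases = np.zeros(len(object_fields))
        trials = np.linspace(0.0, 2 * np.pi, n_trials, endpoint=False)
        for i in range(1, len(object_fields)):
            best_score, best_phi = -np.inf, 0.0
            for phi in trials:
                phases[i] = phi
                combined = sum(f * np.exp(1j * p) for f, p in zip(object_fields, phases))
                score = np.abs(combined).max()
                if score > best_score:
                    best_score, best_phi = score, phi
            phases[i] = best_phi
        return phases

In the reported system the synchronization is an iterative optimization across sensors; the brute-force loop above only makes the idea of maximizing energy concentration concrete.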

This software-based optimization is the central advance. By aligning data computationally rather than physically, MASI overcomes the diffraction limit and other restrictions that have traditionally governed optical imaging.

A Virtual Aperture With Fine Detail

The final result is a virtual synthetic aperture that is larger than any single sensor. This allows the system to achieve sub-micron resolution while still covering a wide field of view, all without using lenses.

Traditional lenses used in microscopes, cameras, and telescopes force engineers to balance resolution against working distance. To see finer details, lenses usually must be placed very close to the object, sometimes just millimeters away. That requirement can limit access, reduce flexibility, or make certain imaging tasks invasive.

MASI removes this constraint by capturing diffraction patterns from distances measured in centimeters and reconstructing images with sub-micron detail. Zheng compares this to being able to examine the fine ridges of a human hair from across a desk rather than holding it just inches from your eye.

Scalable Applications Across Many Fields

“The potential applications for MASI span multiple fields, from forensic science and medical diagnostics to industrial inspection and remote sensing,” said Zheng, “But what’s most exciting is the scalability – unlike traditional optics that become exponentially more complex as they grow, our system scales linearly, potentially enabling large arrays for applications we haven’t even imagined yet.”

The Multiscale Aperture Synthesis Imager represents a shift in how optical imaging systems can be designed. By separating data collection from synchronization and replacing bulky optical components with software-controlled sensor arrays, MASI shows how computation can overcome long-standing physical limits. The approach opens the door to imaging systems that are highly detailed, adaptable, and capable of scaling to sizes that were previously out of reach.

Go to the original article...

Eric Fossum receives 2026 Draper Prize for Engineering

Image Sensors World        Go to the original article...

Link: https://home.dartmouth.edu/news/2026/01/eric-fossum-awarded-draper-prize-engineering

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies, has been awarded the 2026 Charles Stark Draper Prize for Engineering, which is granted every two years by the National Academy of Engineering and is one of the world’s preeminent honors for engineering achievement.
The NAE recognized Fossum “for innovation, development, and commercialization of the complementary metal-oxide semiconductor active pixel image sensor,” an invention that remains the core technology behind roughly 7 billion cameras produced each year.

“Eric Fossum is a pioneering semiconductor device physicist and engineer whose invention of the CMOS active pixel image sensor, or ‘camera on a chip,’ has transformed imaging across everyday life, industry, and scientific discovery,” the NAE said in announcing the prize, which includes a $500,000 cash award.
The honor is the latest in a string of accolades for Fossum, who in addition to his role as a professor at Thayer School of Engineering also serves as vice provost for entrepreneurship and technology transfer and directs the PhD Innovation Program.

His other honors include the Queen Elizabeth Prize for Engineering, the National Medal for Technology and Innovation awarded at a White House ceremony last year, and a Technical Emmy Award recognizing the transformative impact of Fossum’s invention. 
Today, CMOS image sensors, which were intended to make digital cameras for space faster, better, and cheaper, are behind billions of captures in a vast variety of settings—selfies, high-definition videos, dental X-rays, and space images.

“Eric Fossum’s inventions have revolutionized digital imaging across industries,” says President Sian Leah Beilock. “His work is a prime example of how the applied research our faculty foster and undertake can drive innovation and improve our world.” 

Research for NASA

Tasked with creating smaller cameras for NASA spacecraft that would use less energy, Fossum led the team that invented and developed the CMOS image sensor technology at the Jet Propulsion Laboratory at the California Institute of Technology in the 1990s. The CMOS image sensor integrated all the essential camera functions on a single piece of silicon—each chip contained arrays of light-sensitive pixels, each with its own amplifier.

Fossum recalls the moment when their first image sensor worked flawlessly in testing. It was a eureka moment, but only in hindsight. His initial reaction was tempered by caution. “It seemed so straightforward that I figured others must have tried this before, and there must be a fatal flaw somewhere. So, it was exhilarating to see that it was working,” he says.

The CMOS sensor was commercialized through Photobit, the company he co-founded and helped lead until its acquisition by Micron. 

As the CMOS sensor grew in sophistication, so too did its impact, finding applications in both predictable and surprising ways, such as swallowable pill cameras that can take images inside the body and the explosion of smartphone cameras, which forever changed how we capture and share our lives.
“The impact it has had on social justice has been huge, which I did not anticipate at all, and is truly gratifying. It protects people that might otherwise be powerless, and those with power from false accusations,” Fossum says.

Fossum, a Connecticut native, received a bachelor of science degree in physics and engineering from Trinity College, and a PhD in engineering and applied science from Yale in 1984. Prior to his work at the Jet Propulsion Lab, he was a faculty member at Columbia University. After leading several startups, consulting, and co-founding the International Image Sensor Society, he joined Dartmouth in 2010.
Fossum’s many other honors include the NASA Exceptional Achievement Medal, the IEEE Jun-ichi Nishizawa Medal, and induction into the U.S. Space Foundation Technology Hall of Fame in 1999 and the National Inventors Hall of Fame in 2011. He also served as CEO of Siimpel, developing MEMS devices for autofocus in smartphone camera modules, and worked as a consultant for Samsung on time-of-flight sensor development. He is a member of the National Academy of Engineering and a fellow of the National Academy of Inventors, the Institute of Electrical and Electronics Engineers, and Optica.

Counting photons: The future of imaging

Fossum continues to push the boundaries of imaging. His more recent invention, the quanta image sensor, was developed at Dartmouth and enables high-resolution imaging in extremely low-light conditions.

“We’re working on sensors that can count photons, one at a time,” he says. “Imagine being able to take a photo in almost complete darkness or measuring extremely faint signals in biology. It’s like turning the lights on in a place that was previously invisible to us.” 

Fossum and two of his former Dartmouth students co-founded Gigajot to commercialize the technology.
“Eric’s achievements are not the result of a single breakthrough, but of sustained curiosity and a focus on real-world impact,” says Douglas Van Citters ’99, Thayer ’03, ’06, interim Thayer dean. “To this day, he brings exceptional dedication to teaching and research, along with a passion for entrepreneurship that permeates Dartmouth, especially Thayer. And that spirit has inspired generations of engineers at Dartmouth who, like Eric, are committed to improving lives through the technologies they create.”

When asked about where he sees the field of imaging in the next decade, Fossum imagines a world where great images can be captured using a handful of photons and where computational imaging allows humans to see the world in ways eyes themselves never could. 

“The ability to capture images in low light will continue to improve,” he predicts. “And we’re likely to see a proliferation of augmented reality technologies that will change the way we experience the world around us.”

 In his mind, the grand challenge ahead is miniaturization—creating sensors with pixels so tiny that they become smaller than the wavelength of light itself. With this breakthrough, imaging technology could scale to the point where a single chip contains billions of pixels, opening new possibilities for everything from medical diagnostics to space exploration.

Along with his continuing work on sensors, Fossum draws from his extensive experience in innovation and entrepreneurship in his role as vice provost and in overseeing the PhD Innovation Program.
He says that the program trains students not just to think creatively but to apply their research in ways that have a meaningful impact.

“It is just so much more satisfying to make a real impact with the work that you do,” he says.
The awards ceremony is scheduled for Feb. 18 in Washington, D.C. As he did with the Queen Elizabeth prize, Fossum plans to donate the majority of the Draper Prize funds to STEM-related charities.

Go to the original article...

Mythic image sensor

Image Sensors World        Go to the original article...

Link: https://www.eetimes.com/mythic-rises-from-the-ashes-with-125-million-funding-round/

Mythic Rises from the Ashes with $125 Million Funding Round 

Excerpt: 

A separate product family, dubbed “Starlight,” will use a Mythic compute chiplet hybrid-bonded under a vision sensor’s photodiode array. The two dies will use less than 1 W between them.
Ozcelik said he noticed a gap in the market for this type of device while previously working at ON Semiconductor.

“One of the biggest challenges for image sensors is low light performance,” he said. “Dynamic range is another major problem, especially in mission critical applications.”

A Mythic AI accelerator could run a neural network to improve low-light performance and dynamic range directly next to the sensor. Image sensors made for applications like cellphones are very small (one-third of an inch), and performance suffers as they get smaller, Ozcelik said. Mythic has a unique opportunity here as its technology is compact, and crucially, it uses very little power, according to Ozcelik (photodiode arrays are extremely thermally sensitive, meaning even a small DSP couldn’t be placed directly under the photodiode array).

Mythic is going to build this sensor and AI accelerator combination itself, and both the accelerator chiplet and the image sensor product will tape out this year, Ozcelik said.

Overall, Ozcelik is pragmatic about the scale of the challenges ahead, particularly given the company’s move into the data center where it will compete with Nvidia.

“[Our advantage] has to be incredibly material,” he said. “It has to be at least one hundred times, hopefully more.”

Go to the original article...

Voyant releases solid-state FMCW LiDAR

Image Sensors World        Go to the original article...

Press release: https://voyantphotonics.com/news/1075/

New York, NY – December 17, 2025 – Voyant Photonics, the leader in chip-scale frequency-modulated continuous-wave (FMCW) LiDAR, today announced its Helium™ Platform of fully solid-state LiDAR sensors and modules. The solution is built on a silicon photonics chip, enabling a breakthrough architecture designed to deliver unprecedented reliability, integration, and performance for industrial automation, robotics, and mobile autonomy.

Leveraging Voyant’s proprietary Photonic Integrated Circuit (PIC), Helium offers camera-like simplicity and unmatched flexibility. Helium uses a dense two-dimensional photonic focal plane array with fully integrated 2D on-chip beam steering — eliminating unreliable scanning methods such as MEMS and mirrors, and leaving no moving parts. The FMCW LiDAR chip leverages a two-dimensional array of surface emitters to create a fully solid-state LiDAR in an ultra-compact, rugged design. Helium also supports multi-sensor configurations, combining, for instance, wide-FoV short-range and narrow-FoV long-range sensing in one system — delivering the most versatile and cost-effective LiDAR solution for advanced perception applications.

The first Helium prototype will be demonstrated at Voyant’s booth (LVCC, West Hall, Booth #4875) at CES 2026 in Las Vegas, January 6-9, marking a major milestone in advancing silicon-photonics LiDAR from R&D into the high-volume systems driving the proliferation of Physical AI.

“Helium represents the next step in our mission to deliver the most affordable high-performance LiDAR sensor ever,” said Voyant CEO Clément Nouvel. “Industrial and consumer markets demand sensors that are small, cost-efficient, and highly reliable. Helium provides all of that while delivering performance that unlocks new classes of intelligent machines.”

A Flexible Platform to Move Solid-State LiDAR Forward

Helium extends the technology foundation proven in Voyant’s Carbon™ product line, bringing full two-dimensional beam steering to a silicon-photonics platform for the first time. The result is a compact, high-precision 4D sensor that meets the highest industry standards for safety and reliability.

Key advantages include:

  •  True solid-state — no MEMS, polygon scanners, or rotating assemblies
  •  High-resolution FPA architecture spanning from 12,000 pixels to over 100,000 pixels
  •  Long-range FMCW performance, per-pixel radial velocity
  •  Software-defined LiDAR (SDL) enabling adaptive scan patterns and region of interest
  •  Ultra-compact size — as small as a matchbox (<150 g mass and <50 cm³ volume), ideal for drones, mobile robots, and compact industrial systems

Field of view and range can be tailored with different lenses, and the platform scales from core module options to fully enclosed sensors. Helium is built on a 2D array of surface-emitting photonic antennas combined with a fixed lens and integrated electronics, forming a rugged module ideal for embedded perception.

With no moving parts and monolithic photonic integration, Helium offers an estimated 20× improvement in MTBF over legacy ToF LiDAR architectures — a critical reliability requirement for high-duty-cycle industrial fleets.

Engineered for Scalable Manufacturing 

As with the Carbon family, Helium is built entirely on Voyant’s leading proprietary silicon-photonics platform, enabling new levels of performance and integration. This deep integration eliminates the unreliable optical alignments that limit traditional ToF LiDAR manufacturability. Helium leverages the same mature photonics foundry ecosystem as the optical datacom industry — allowing Voyant to scale production toward semiconductor-class cost structures.

From Carbon to Helium — Voyant Advances a Modular LiDAR Platform for Broader Adoption

Voyant established its leadership in compact, cost-optimized FMCW sensing for compute-constrained platforms with its first-generation Carbon™ family, extended last week with the new Carbon 32 and Carbon 64 variants. Helium builds directly on these advances, expanding the architecture from 1D to 2D on-chip beam steering, with higher resolution and a fully solid-state scan engine. Voyant now enables OEMs to integrate its sensing technology directly into their machines by offering module-only access along with full design-in support. This allows partners to build customized, high-performance sensor solutions tailored to their exact requirements.

Helium sensors and modules will be available with multiple resolution and range configurations, supporting a wide choice of field-of-view options—from ultra-wide coverage approaching 180° down to narrower, long-range targeting optics. These modular variants enable OEMs and developers to select and integrate lenses that best suit their application, allowing LiDAR architectures to be tailored for mobile robots, material-handling systems, smart infrastructure, and emerging edge-compute platforms. 

Go to the original article...

Fujifilm instax mini EVO Cinema review

Cameralabs        Go to the original article...

The instax mini EVO Cinema looks like a vintage cine camera, takes photos or short video clips, and features a built-in printer to make instant prints. Is this the coolest camera of the year? Find out in my review!…

Go to the original article...

Leica image sensor development?

Image Sensors World        Go to the original article...

There are some recent news reports that Leica is developing its own image sensor.

Petapixel: https://petapixel.com/2026/01/02/leica-is-developing-its-own-image-sensors-again/

Leica rumors: https://leicarumors.com/2026/01/01/leica-is-developing-its-own-camera-sensor-again-most-likely-for-the-leica-m12-camera.aspx/

Excerpt:

In a recent podcast, Dr. Andreas Kaufmann (Chairman of the Supervisory Board and majority shareholder of Leica Camera AG) confirmed that Leica is again developing their own sensor, most likely for the next Leica M12 camera (Google translation):

Furthermore, as has already become somewhat known, we are also developing our own sensor again. […] Up until the M10, we had a sensor of European origin. It was manufactured by AMS in Graz, or rather, developed by their Dutch development office. And the foundry itself was in Grenoble, a French company. And then there was the transition with the M11 to Sony sensors. It’s no secret that they’re in there. At the same time, we started developing our own sensor again, in a more advanced version. I think we’ve made significant progress with that. We can’t say more at the moment. 

Go to the original article...

Eric Fossum receives 2026 IEEE Nishizawa Medal

Image Sensors World        Go to the original article...

Link: https://engineering.dartmouth.edu/news/eric-fossum-to-receive-2026-ieee-jun-ichi-nishizawa-medal

Eric Fossum Named 2026 Recipient of IEEE Jun-ichi Nishizawa Medal
Dec 17, 2025

Eric R. Fossum, the John H. Krehbiel Sr. Professor for Emerging Technologies and vice provost for entrepreneurship and technology transfer at Dartmouth, has been named the 2026 recipient of the Institute of Electrical and Electronics Engineers' (IEEE) Jun-ichi Nishizawa Medal for the "invention, development, and commercialization of the CMOS image sensor" that revolutionized digital imaging around the world.

Fossum joins a distinguished group of some of the world's most renowned engineers and scientists selected by IEEE to receive the organization's highest honors for their contributions to technology, society, and the engineering profession. 

The prize is awarded annually by IEEE, the largest technical professional organization in the world dedicated to advancing technology for humanity.

Eric Fossum and the team that invented the CMOS image sensor, at NASA's Jet Propulsion Laboratory. (Photo courtesy of NASA/JPL-Caltech)

Fossum led the team at NASA's Jet Propulsion Laboratory that developed the complementary metal-oxide-semiconductor (CMOS) sensor during the early 1990s, an innovation that dramatically miniaturized cameras used in space missions onto a single chip. The "camera on a chip" sensor subsequently made digital photography and imaging widely accessible worldwide. 

Today, the CMOS sensor is integrated in nearly every smartphone, as well as in countless other devices including webcams, medical imaging devices, and automobile cameras.

Fossum will formally receive the medal at a ceremony in New York City in April 2026. Named in honor of the "father of Japanese microelectronics," the Nishizawa prize also comes with an honorarium, which Fossum plans to donate to STEM-related charities.

Fossum co-founded Photobit Corporation to commercialize the CMOS sensor, serving as CEO, before the company was acquired by Micron. He also served as CEO of Siimpel Corporation, which developed MEMS-based camera modules with autofocus and shutter functions for cell phones. More recently, he served as chairman of Gigajot Technology Inc., which he co-founded with two former Dartmouth PhD students to develop and commercialize quanta image sensors, which they developed at Dartmouth.

Fossum joined Dartmouth's engineering faculty in 2010 and helped launch the PhD Innovation Program, the nation's first doctoral level program focused on research translation and entrepreneurship.

Fossum is a member of the National Academy of Engineering. He was inducted in the National Inventors Hall of Fame in 2011, and to date, holds 185 US patents. He is a fellow of the National Academy of Inventors, an IEEE life fellow, an Optica fellow, and a member of the Society of Motion Picture and Television Engineers and the American Association for the Advancement of Science.

Throughout his career, Fossum has earned numerous accolades for his work, including the Queen Elizabeth Prize for Engineering in 2017, the Emmy for technology and engineering from the National Academy of Television Arts and Sciences in 2021, and most recently the National Medal of Technology and Innovation from President Biden in 2025.

Go to the original article...

Conference List – June 2026

Image Sensors World        Go to the original article...

The International SPAD Sensor Workshop - 1-4 June 2026 - Seoul, South Korea - Website

SPIE Photonics for Quantum - 8-11 June 2026 - Waterloo, Ontario, Canada - Website

AutoSens USA 2026 - 9-11 June 2026 - Detroit, Michigan, USA - Website

Sensor+Test - 9-11 June 2026 - Nuremberg, Germany - Website

Smart Sensing - 10-12 June 2026 - Tokyo, Japan - Website

IEEE/JSAP Symposium on VLSI Technology and Circuits - 14-18 June 2026 - Honolulu, Hawaii, USA - Website

Quantum Structure Infrared Photodetector - 14-19 June 2026 - Sète, France - Website

International Conference on Sensors and Sensing Technology (ICCST2026) - 15-17 June 2026 - Florence, Italy - Website

International Conference on IC Design and Technology (ICICDT) - 22-24 June 2026 - Dresden, Germany - Website

Automate 2026 - 22-25 June 2026 - Chicago, Illinois, USA - Website

27th International Workshop on Radiation Imaging Detectors - 28 June-2 July 2026 - Ghent, Belgium - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...

Prophesee leadership change

Image Sensors World        Go to the original article...

Prophesee Appoints Jean Ferré as Chief Executive Officer to Lead Event-based Vision Sensing Pioneer in Next Stage of Growth

Paris, France – December 23, 2025 – Prophesee, a pioneer and global leader in event-based vision technology, today announced the appointment of Jean Ferré as Chief Executive Officer. He has been designated by the board to succeed Luca Verre, Prophesee’s co-founder and former CEO, who is leaving the company. This leadership transition comes as the company enters a new phase of commercialization and growth, building on a strong technological and organizational foundation and welcoming new investors. The company is sharpening its near-term focus on sectors whose high-value use cases currently show the strongest demand and adoption momentum, such as security, defense and aerospace, and industrial automation. Prophesee will continue to support high-volume vision-enabled application markets where it has achieved initial commercial success, such as IoT, AR/VR, and consumer electronics.

[...]

Full press release is available here: https://www.prophesee.ai/2025/12/23/prophesee-appoints-jean-ferre-as-chief-executive-officer-to-lead-event-based-vision-sensing-pioneer-in-next-stage-of-growth/ 

Go to the original article...

MagikEye’s real-time 3D system at CES

Image Sensors World        Go to the original article...

MagikEye to Showcase New High-Resolution Real-Time 3D Evaluation System at CES

Reference platform delivers a 3D point cloud of more than 8,000 points at 30 FPS for robotics, low-cost LiDAR, and automotive in-cabin deployments

STAMFORD, Conn.--(BUSINESS WIRE)--Magik Eye Inc (www.magik-eye.com), a developer of advanced 3D depth sensing based on its ILT™ (Invertible Light Technology), will be showcasing a new high-resolution, real-time ILT evaluation system at the upcoming Consumer Electronics Show. The system is designed to help customers evaluate ILT performance, validate configurations, and begin application development for robotics, low-cost LiDAR-class replacement, and automotive in-cabin applications.

The new evaluation system is a reference implementation, not a commercial sensor product. It delivers a 3D point cloud of over 8,600 points per frame at 30 frames per second, corresponding to more than 259,000 depth-points per second, while maintaining real-time operation and low latency (~33 ms). This represents roughly 2× the spatial point density of MagikEye’s prior evaluation platform without sacrificing frame rate.
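
For context, the quoted throughput and latency follow directly from the per-frame figures; a trivial check (the exact per-frame point count is an assumption chosen to be consistent with the stated totals):

    # Throughput/latency implied by the stated figures (illustrative only).
    points_per_frame = 8_634   # "over 8,600" points; exact value assumed here
    fps = 30
    print(points_per_frame * fps)   # ~259,000 depth-points per second
    print(1000 / fps)               # ~33 ms frame period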

“Customers evaluating depth sensing technologies want realistic, real-time data they can actually build on,” said Skanda Visvanathan, VP of Business Development at MagikEye. “This reference system is designed to shorten the path from evaluation to application development by delivering higher-resolution ILT depth at a full 30 FPS, in a form factor and performance envelope aligned with embedded systems.”

Designed for real-world evaluation and development, the evaluation system enables customers to evaluate ILT depth sensing in their own environments, begin application software development using live 3D point cloud output, and validate specific ILT configurations—including field of view, operating range, optical setup, and processing pipeline—prior to custom module design.

Key characteristics of the evaluation platform include a wide 105° × 79° field of view, a wide operating range of 0.3 m to 2 m (with support for near-field proximity use cases), and operation in bright indoor lighting conditions of up to ~50,000 lux, dependent on distance and target reflectance.

Unlike depth solutions that increase point density by reducing frame rate, MagikEye’s ILT evaluation system maintains a full 30 FPS, enabling depth perception suitable for dynamic, real-time environments. ILT™ can scale to even higher frame rates with increased processing performance.

At CES, MagikEye will demonstrate how the evaluation system supports development and prototyping across robotics applications such as real-time perception and navigation, low-cost LiDAR-class embedded sensing, and automotive in-cabin occupancy and interior monitoring.

The evaluation system integrates with MagikEye’s MKE API, allowing customers to stream point clouds and integrate ILT depth data into existing software stacks.

MagikEye will be showcasing the new evaluation system at CES in Las Vegas. To schedule a meeting or request a demonstration, please contact ces2026@magik-eye.com. 

Go to the original article...

AZO Sensors interview article on Teledyne e2v CCD imagers

Image Sensors World        Go to the original article...

The Enduring Relevance of CCD Sensors in Scientific and Space Imaging

(Interview with Marc Watkins, Teledyne e2v)

While CMOS technology has become the dominant force in many imaging markets, Charge-Coupled Devices (CCDs) continue to hold an essential place in scientific and space imaging. From the Euclid Space Telescope to cutting-edge microscopy and spectroscopy systems, CCDs remain the benchmark for precision, low-noise performance, and reliability in mission-critical environments.

In this interview, Marc Watkins from Teledyne e2v discusses why CCD technology continues to thrive, the company’s long-standing heritage in space missions and scientific discovery, and how ongoing innovation is ensuring CCDs remain a trusted solution for the most demanding imaging applications.

To begin, could you provide an overview of your role at Teledyne e2v and the types of imaging applications your team typically supports?

I manage the CCD product portfolio and associated sales globally. Our CCDs are mostly used in scientific applications such as astronomy, microscopy, spectroscopy, in vivo imaging, X-ray imaging, and space imaging. Almost every large telescope worldwide uses our CCDs for their visible light instruments.

CCDs are vital for medical research, especially for in vivo preclinical trials in areas such as cancer research. Advanced microscopy techniques such as Super Resolution Microscopy require the extreme sensitivity of EMCCDs. Not all CCDs are hidden in labs, on top of mountains, or in space; you’ll likely have passed a CCD in airport security without realising it.

In a time when CMOS technology has become dominant in most imaging markets, what are the primary reasons CCD sensors still maintain relevance in scientific, astronomical, and space-based applications?

We observe that in many markets, CMOS has made significant advances; however, CCDs remain the best overall solution for many niche applications, such as the ones I just described. The technical advantages vary greatly between applications.

Could you elaborate on some of the technical advantages CCD sensors offer over CMOS in high-performance or mission-critical imaging environments?

CCDs are great for long integrations where larger charge capacities, higher linearity, and low noise provide the best performance. They can be deeply cooled, making dark noise negligible. CCDs can be manufactured on thicker silicon, which gives better Red/near-infrared sensitivity. CCD pixels can be combined or “binned” together noiselessly, a technique widely used in spectroscopy. Specialized “Electron Multiplying” CCDs are sensitive enough to count individual photons.
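
A simple way to see why noiseless on-chip binning matters is to compare it with summing already-digitized pixels off-chip: on-chip binning pays the read-noise penalty once per binned super-pixel, whereas off-chip summation pays it once per pixel. The sketch below uses assumed, illustrative numbers, not Teledyne specifications.

    import math

    # Illustrative 4x4 binning comparison (assumed numbers, not device specs).
    read_noise_e = 3.0        # read noise per readout, electrons rms
    signal_per_pixel_e = 20   # faint, spectroscopy-like signal, electrons
    n_binned = 16             # 4x4 pixels combined

    signal = n_binned * signal_per_pixel_e
    shot_noise = math.sqrt(signal)

    # On-chip (CCD) binning: charge is summed before readout, one read-noise penalty.
    snr_on_chip = signal / math.sqrt(shot_noise**2 + read_noise_e**2)
    # Off-chip summation: each pixel read separately, read noise adds in quadrature.
    snr_off_chip = signal / math.sqrt(shot_noise**2 + n_binned * read_noise_e**2)

    print(f"SNR, binned on-chip : {snr_on_chip:.1f}")
    print(f"SNR, summed off-chip: {snr_off_chip:.1f}")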

What are some of the unique requirements in space or astronomy applications that make CCDs a more suitable choice than CMOS?

Most astronomy applications use very long integration times, require excellent Red/NIR response, and have no problem cooling to -100 °C, making CCDs a much better solution.

For space, the answer can be as simple as our mission heritage, making them a low-risk option. Since 1986, Teledyne’s sensors have unlocked countless scientific discoveries from over 160 flown missions. Our CCDs can be found exploring the deep expanses of space with the Hubble and Euclid Space Telescopes, imaging the sun from solar observatories, navigating Mars with rovers, and monitoring the environment with the Copernicus Earth observation Sentinel satellites.

As CMOS technology continues to advance, are you seeing any significant closing of the performance gap in areas where CCDs have traditionally been stronger, such as low noise, uniformity, or quantum efficiency?

For most of our applications, recent advances in CMOS technology have had little impact on the CCD business. An example of this might be the development of improved high-speed CMOS. If high speed is critical, then CMOS is already the incumbent technology. Where quantum efficiency is concerned, we can offer the same backthinning and AR coatings for both CCD and CMOS technologies, with a peak QE of up to 95 %.

One area of transition for us is in space applications, such as Earth observation, where improvements in areas such as radiation hardness, frame rate, and TDI are steering many of our customers from our CCD to our CMOS solutions.

How has Teledyne e2v continued to innovate or evolve its CCD product lines to meet the demands of modern applications while CMOS continues to gain market share?

Our CCD product lines have a long development heritage. In general, we aim to optimize existing designs by tailoring specifications, such as anti-reflective coatings, to benefit specific applications. With in-house sensor design, manufacture, assembly, and testing, all our CCDs can be supplied partially or fully customized to fit the application and achieve the best possible performance.

Our CCD wafer fab and processing facility in England was established in 1985 and quickly became the world’s major supplier for space imaging missions and large ground-based astronomical telescopes. We continue to develop a vertically integrated, dedicated CCD fab and are committed to the development of high-performance, customized CCD detectors.

The CCD fabrication facility is critical to the success and quality of future space and science projects. At Teledyne, we remain committed to being the long-term supplier of high-specification and high-quality devices for the world’s major space agencies and scientific instrument producers.

Are there particular missions or projects, either current or upcoming, where CCD technology remains critical? What makes CCDs indispensable in those scenarios?

A prototype for a new intraoperative imaging technique incorporates CCDs, which we hope will have a significant impact on cancer treatments in the future.

In astronomy, one example is the Vera C. Rubin Observatory, which utilizes an enormous 3.2 Gigapixel camera composed of an array of HiRho CCDs, offering NIR sensitivity and close butting, features not currently available in CMOS technology.

In space, ESA’s recently completed Gaia mission relied completely on the functionality (TDI) and performance of our CCDs. The second Aeolus mission, which will continue to measure the Earth’s wind profiles to improve weather forecasting, uses a unique ‘Accumulation CCD’ that allows noiseless summing of many LIDAR signals to achieve measurable signal levels.

How do you address customer questions or misconceptions around CCDs being considered legacy technology in an industry that often pushes toward the latest advancements?

Consider what is best for your application; it may well be a CCD. You can find our range of available CCDs and their performance on our website, or I would be happy to discuss your application directly. If you would like to speak with me in person, I’ll be attending SPIE Astronomical Telescopes + Instrumentation in July 2026.

Looking ahead, what do you see as the long-term future of CCD sensors within the broader imaging ecosystem? Will they continue to coexist with CMOS, or is the industry moving toward complete CMOS dominance?

The sheer variety of imaging requirements, combined with the continued advantages of CCDs, suggests a long-term demand. We continue to see instruments baselining CCD products into 2030 and beyond.

How does Teledyne e2v position itself within this evolving landscape, and what message would you give to organizations evaluating sensor technologies for specialized imaging applications?

Teledyne e2v is technology agnostic and will recommend what's best for the application, be it CMOS, MCT, or of course CCD.


Go to the original article...

Singular Photonics and Renishaw collaboration

Image Sensors World        Go to the original article...

Singular Photonics and Renishaw Shed New Light on Spectroscopy
 
Strategic collaboration integrates next-generation SPAD-based image sensor into Renishaw’s new Raman spectroscopy module to allow measurements of highly fluorescent samples
 
Edinburgh, UK – December 17, 2025 – Image-sensor innovator Singular Photonics today announced a major milestone in its strategic collaboration with Renishaw, a global leader in metrology and analytical instrumentation. The companies have been co-developing next-generation spectroscopy capabilities powered by Singular’s new suite of single-photon avalanche diode (SPAD) image sensors.
 
Renishaw today revealed the launch of its latest breakthrough in Raman spectroscopy: the addition of Time-Resolved Raman Spectroscopy (TRRS) to its renowned inVia™ confocal Raman microscope. At the core of this innovation is Singular’s Sirona SPAD sensor, enabling researchers and engineers to overcome one of Raman spectroscopy’s most persistent challenges – capturing Raman signals obscured by intense fluorescence backgrounds. With TRRS and Sirona, inVia users can now acquire high-quality Raman spectra from samples previously considered too difficult or impossible to measure.
 
“We are always on the lookout for new, innovative technology to maintain our lead in this market, and we believe we have achieved this with our partnership with Singular Photonics,” said Dr Tim Batten, Director and General Manager, Spectroscopy Products Division, Renishaw. “Our TRRS solution for the inVia microscope offers customers a multitude of benefits when dealing with highly fluorescent samples, such as those containing pigments. We have had an in-depth collaboration with Singular Photonics dating back to their inception and have been developing this product in tandem with their cutting-edge Sirona SPAD sensor.”
 
Built on advanced CMOS SPAD architecture, Singular’s Sirona is a 512-pixel SPAD-based line sensor integrating on-chip time-resolved processing and histogramming functionality. This allows simultaneous acquisition of both fluorescence and Raman signals with high temporal precision, unlocking new measurement modalities for scientific and industrial applications.
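
The release does not describe Renishaw's processing in detail, but the basic reason time resolution helps is that Raman scattering is effectively instantaneous (it arrives with the laser pulse) while fluorescence decays over nanoseconds. A minimal Python sketch of that separation on a per-pixel timing histogram, with all parameters assumed purely for illustration:

    import numpy as np

    rng = np.random.default_rng(0)

    # Simulated photon arrival times (ns) at one pixel; assumed parameters only.
    irf_sigma_ns = 0.1         # instrument-response width around the laser pulse
    fluor_lifetime_ns = 3.0    # fluorescence decay constant
    n_raman, n_fluor = 2_000, 50_000

    raman_times = rng.normal(0.0, irf_sigma_ns, n_raman)        # prompt, pulse-correlated
    fluor_times = rng.exponential(fluor_lifetime_ns, n_fluor)   # slow decay

    bins = np.arange(-1.0, 20.0, 0.05)
    counts, _ = np.histogram(np.concatenate([raman_times, fluor_times]), bins=bins)

    # Keep only counts arriving within ~3 sigma of the laser pulse (the "prompt" gate).
    gate = (bins[:-1] > -0.3) & (bins[:-1] < 0.3)
    print("counts inside prompt gate:", int(counts[gate].sum()))
    print("counts rejected          :", int(counts[~gate].sum()))

Most of the slowly decaying fluorescence falls outside the prompt gate and can be rejected or modeled, which is the kind of separation an on-chip histogramming SPAD line sensor makes possible at acquisition time.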
 
“By integrating the Sirona sensor into Renishaw’s new TRRS system, they have created a spectrometer that showcases the clear performance advantages of our SPAD technology,” said Shahida Imani, CEO of Singular Photonics. “We’ve built a strong relationship with the Renishaw team since before our spin-out from the University, fostering trust and deep technical collaboration. This partnership opens a significant opportunity to expand our market reach, especially in high-precision scientific and industrial sectors.”

Go to the original article...

Fujifilm color filter optimizations for small pixels

Image Sensors World        Go to the original article...

Link: https://www.fujifilm.com/pl/en/news/hq/13164

Fujifilm Launches World’s First Color Filter Material for Image Sensors Compatible with KrF Lithography “WAVE CONTROL MOSAIC™”
PFAS-Free, Contributing to Higher Image Quality in Smartphone Cameras

TOKYO, December 9, 2025 – FUJIFILM Corporation announced the launch of a new color filter material for image sensors, “WAVE CONTROL MOSAIC™*1”, compatible with KrF*2 lithography. This innovative product is the world’s first color filter material for image sensors that supports KrF exposure, and is entirely PFAS-free, addressing environmental and ecological concerns. The new material is designed for use in cutting-edge image sensors requiring ultra-miniaturization and high sensitivity, contributing to higher image quality in smartphone cameras.

Image sensors are semiconductors that convert light into electrical signals to produce images, and are incorporated into devices such as smartphones and digital cameras. In recent years, the range of applications for image sensors has expanded to include automobiles, security equipment such as surveillance cameras, and AR/VR devices. As a result, the image sensor market is expected to grow at an annual rate of approximately 6%*3. With the increasing opportunities for photo and video capture—such as taking pictures and streaming videos shot on smartphones—there is a growing demand for capturing bright and smooth images and videos in any scene, as well as for editing and cropping images after shooting. These trends are driving the need for even higher image quality in image sensors. To achieve higher image quality, it is necessary to miniaturize sensor pixels to create more detailed and high-resolution images. However, as pixels become smaller, the amount of light that can be captured decreases, resulting in lower sensitivity—a key challenge in image sensor development.

The newly launched product in Fujifilm’s WAVE CONTROL MOSAIC™ is the world’s first color filter material for image sensors compatible with KrF lithography, enabling the formation of finer pixels than was previously attainable with conventional i-line*4 exposure. Building on its expertise in functional molecule design and organic synthesis cultivated through silver halide photographic R&D, Fujifilm has developed new additives optimized for KrF exposure and a proprietary dye with outstanding heat and light resistance. In addition, through its unique formulation technology, the company combined this newly developed dye with conventional pigments to increase light transmittance and compensate for the reduction in light caused by pixel miniaturization, resulting in a color filter material that achieves both miniaturization and high sensitivity. With this new product, users can capture bright, smooth images and videos in various scenes.
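
The release does not quote achievable feature sizes, but the benefit of the shorter exposure wavelength can be seen from the standard lithography resolution relation CD ≈ k1·λ/NA. A small sketch with assumed k1 and NA values (not Fujifilm process figures):

    # Rough minimum-feature comparison via CD = k1 * lambda / NA (assumed k1, NA).
    k1, numerical_aperture = 0.6, 0.6

    for source, wavelength_nm in [("i-line", 365), ("KrF", 248)]:
        cd_nm = k1 * wavelength_nm / numerical_aperture
        print(f"{source}: ~{cd_nm:.0f} nm minimum feature")

    # At the same k1 and NA, KrF's 248 nm wavelength patterns roughly 1.5x finer
    # features than 365 nm i-line, which is what enables smaller color-filter pixels.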

Furthermore, the product is PFAS-free*5, containing no per- or polyfluoroalkyl substances, which are of increasing environmental concern. Fujifilm has long been committed to reducing and replacing substances that pose potential risks to human health and the environment, having previously developed PFAS-free negative-tone ArF immersion photoresists and nanoimprint resists. Building on the PFAS-free technology established through this product, Fujifilm will extend these efforts to all WAVE CONTROL MOSAIC™ materials and photoresists*6, accelerating the transition of its semiconductor materials portfolio to PFAS-free solutions.

As a leading manufacturer of color filter materials for image sensors, Fujifilm will continue to develop materials that not only enhance image quality but also enable applications such as infrared photography for low-light environments. Under the concept of “Transforming the invisible world into the visible, delivering new vision and value to society,” Fujifilm remains committed to contributing to the expansion of the image sensor market. 

*1 General term referring to a group of functional materials for controlling electromagnetic light waves in a broad range of wavelengths, including photosensitive color materials for manufacturing color filters for image sensors such as CMOS sensors, used in digital cameras and smartphones. WAVE CONTROL MOSAIC is a registered trademark or trademark of FUJIFILM Corporation.
*2 KrF (Krypton Fluoride): A 248nm wavelength laser light source used in the photolithography process for semiconductor manufacturing.
*3 Source: Techno System Research, “2025 First Half Edition CCD & CMOS Market Marketing Analysis.”
*4 i-line: A mercury spectral line with a wavelength of 365nm, also used as a light source in photolithography processes.
*5 PFAS refers to a collective term for perfluoroalkyl compounds, polyfluoroalkyl compounds, and their salts, as defined in the OECD's 2021 report “Reconciling Terminology of the Universe of Per- and Polyfluoroalkyl Substances: Recommendations and Practical Guidance.” Accordingly, the claim ‘PFAS-Free’ denotes the absence of substances falling within this defined group.
*6 Material used to coat wafer substrate when circuit patterns are drawn in the process of semiconductor manufacturing. 

Go to the original article...

Intelligent SPAD Sensor [PhD Thesis]

Image Sensors World        Go to the original article...

Thesis title: "SPAD Image Sensors with Embedded Intelligence"

Yang Lin, EPFL (2025)

Abstract: Single-photon avalanche diodes (SPADs) are solid-state photodetectors that can detect individual photons with picosecond timing precision, enabling powerful time-resolved imaging across scientific, industrial, and biomedical applications. Despite their unique sensitivity, conventional SPAD imaging workflows passively collect photons, transfer large volumes of raw data off-chip, and reconstruct results through offline post-processing, leading to inefficiencies in photon usage, high latency, and limited adaptability. This thesis explores the potential of embedded artificial intelligence (AI) for efficient, real-time, intelligent processing in SPAD imaging through hardware-software co-design, bringing computation directly to the sensor to process photon data in its native form. Two general frameworks are proposed, each representing a paradigm shift from the conventional process. The first framework is inspired by the power of artificial neural networks (ANNs) in computer vision. It employs recurrent neural networks (RNNs) that operate directly on timestamps of photon arrival, extracting temporal information in an event-driven manner. The RNN is trained and evaluated for fluorescence lifetime estimation, achieving high precision and robustness. Quantization and approximation techniques are explored to enable FPGA implementation. Based on this, an imaging system integrating a SPAD image sensor with an on-FPGA RNN is developed, enabling real-time fluorescence lifetime imaging and demonstrating generalizability to other time-resolved tasks. The second framework is inspired by the human visual system, employing spiking neural networks (SNNs) that operate directly on the asynchronous pulses generated by SPAD avalanche breakdown upon photon arrival, thereby enabling temporal analysis with ultra-low latency and energy-efficient computation. Two hardware-friendly SNN architectures, Transporter SNN and Reversed start-stop SNN are proposed, which transform the phase-coded spike trains into density-coded and inter-spike-interval-coded representations, enabling more efficient training and processing. Dedicated training methods are explored, and both architectures are validated through fluorescence lifetime imaging. Based on the Transporter SNN architecture, the first SPAD image sensor with on-chip spike encoder for active time-resolved imaging is developed. This thesis encompasses a full-stack imaging workflow, spanning SPAD image sensor design, FPGA implementation, software development, neural network training and evaluation, mathematical modeling, fluorescence lifetime imaging, and optical system setup. Together, these contributions establish new paradigms of intelligent SPAD imaging, where sensing and computation are deeply integrated. The proposed frameworks demonstrate significant gains in photon efficiency, processing speed, robustness, and adaptability, illustrating how embedded AI can transform SPAD systems from passive detectors into intelligent, adaptive, and autonomous imaging platforms for next-generation applications.
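
For readers unfamiliar with the underlying task, the classical non-neural baseline that such networks are compared against is simple: for an ideal mono-exponential decay, the fluorescence lifetime equals the mean photon arrival time after the excitation pulse. A minimal sketch of that baseline (illustrative numbers, not taken from the thesis):

    import numpy as np

    rng = np.random.default_rng(1)

    # Center-of-mass lifetime estimate from SPAD photon timestamps
    # (ideal mono-exponential decay, no IRF or background; assumed values).
    true_lifetime_ns = 2.5
    timestamps_ns = rng.exponential(true_lifetime_ns, size=5_000)

    estimated_lifetime_ns = timestamps_ns.mean()
    print(f"estimated lifetime: {estimated_lifetime_ns:.2f} ns")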

Full thesis is available for download at this link: https://infoscience.epfl.ch/entities/publication/c6ecfd11-dd30-4693-a104-c27c66aecad9 

Go to the original article...

Teradyne blog on automated test equipment – part 2

Image Sensors World        Go to the original article...

Part 2 of the blog is here: https://www.teradyne.com/2025/12/08/high-throughput-image-sensors-smart-testing-powers-progress/ 

High-Throughput Image Sensors: Smart Testing Powers Progress

As data streams grow, modern ATE ensures flexible, scalable accuracy

In the race to produce higher resolution image sensors—now pushing beyond 500 megapixels—the industry faces significant challenges. These sensors aren’t just capturing more pixels; they’re handling massive streams of data, validating intricate on-chip AI functions, and doing it all at breakneck speeds. For manufacturers, the challenge is as unforgiving as it is critical: test more complex devices, in less time, all while maintaining or even reducing costs.

Today’s high-resolution sensors must deliver more than just pixel perfection. They must demonstrate pixel uniformity, identify and compensate for defective pixels, verify electrical performance under varying conditions, and prove their resilience under strict power efficiency requirements. As AI functionality becomes integrated with image sensors, testing must also account for new processing capabilities and system interactions.   

The move toward higher resolutions introduces not only more data but significant production constraints as well. As sensors grow larger to accommodate more pixels, fewer can be tested simultaneously under the illumination field of automated test equipment (ATE). This site count constraint can be well-handled with strategies for faster data processing and smarter testing. This is where Teradyne plays an important industry role, moving beyond supplier, and stepping in as a strategic partner to help manufacturers redefine what’s possible.  

Why High Resolution Means High Stakes  
As image sensor resolutions soar—from smartphones to cars to industrial systems—so do data demands. Each leap in resolution increases the volume of data that must be captured and processed, including during testing. For example, a single 50-megapixel image sensor produces around 100 megabytes of data per image. Multiple images, as many as 25 or more, must be captured under different lighting conditions to validate pixel response, uniformity, and defect detection. When multiplied across millions of units, data quickly scales into terabytes for every production batch.
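
The arithmetic behind those figures is straightforward; here is a rough sketch (bytes per pixel and batch size are assumptions, not Teradyne figures):

    # Rough test-data volume estimate for image-sensor production test.
    megapixels = 50
    bytes_per_pixel = 2          # e.g. 10-16 bit raw stored in 2 bytes (assumption)
    images_per_device = 25       # captures under different illumination conditions
    devices_per_batch = 10_000   # assumed batch size

    image_mb = megapixels * 1e6 * bytes_per_pixel / 1e6   # ~100 MB per image
    device_gb = image_mb * images_per_device / 1e3        # ~2.5 GB per device
    batch_tb = device_gb * devices_per_batch / 1e3        # tens of TB per batch
    print(f"{image_mb:.0f} MB/image, {device_gb:.1f} GB/device, {batch_tb:.0f} TB/batch")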

Without innovation, this increase in data threatens to overwhelm production lines. Test times can double or even quadruple, slashing throughput and driving up costs. Manufacturers are left with a critical challenge: how to test sensors faster, without sacrificing accuracy or profitability.  

Teradyne Delivers High-throughput, Scalable Solutions  
Teradyne addresses these high-stakes dynamics with a combination of powerful, modular hardware and flexible software tools. At the heart of Teradyne’s approach is the belief that high performance and flexibility are essential. The company’s UltraSerial20G capture instrument for Teradyne UltraFLEXplus was built precisely for this moment and is designed to handle the enormous data loads of modern image sensors. It offers a modular architecture that enables manufacturers to respond to new interface protocols without expensive, time-consuming hardware redesigns.  

At the same time, Teradyne’s engineers kept the future in focus, looking beyond meeting current requirements by adding future capacity in the UltraSerial20G. Essentially, this provides customers with room to grow without replacing critical hardware. When new protocols emerge or higher data rates become the standard, manufacturers can count on the capabilities of their capture platform. While competitors scramble to keep pace, Teradyne customers are already testing the next generation of devices. 

Teradyne also recognizes that hardware is just part of the story. The company’s IG-XL software platform is where this flexibility comes to life. It gives engineers the tools to write custom test strategies at the pin level, controlling everything from voltage and timing to the finest adjustments of signal slopes. Importantly, this software environment allows manufacturers to build and refine their test programs without exposing sensitive intellectual property, a crucial advantage in an industry where secrecy is a competitive necessity.  

Overall, Teradyne’s flexible hardware and software architecture offers an integrated approach that enables manufacturers to manage increasing data volumes while maintaining production schedules.  Teradyne’s Alexander Metzdorf describes it as giving customers the tools to write their own destiny: “Our role is to provide a toolbox that’s flexible and powerful enough for our customers to test whatever they need—when they need it—without being held back by fixed systems.” 

Go to the original article...

MRAM forum colocated with IEDM on Dec 11 (today!)

Image Sensors World        Go to the original article...

MRAM applications to image sensors may be of interest to our readers.

Masanori Hosomi, Sony - eMRAM for image sensor applications

eMRAM for Image Sensor Applications  

Since the logic chip of Sony's stacked image sensors, in which pixel wafers are bonded to logic wafers, is limited to be equal to or smaller than the pixel chip, we have been using DRAM and SRAM while balancing the enhancement of image sensor functions against this area limitation. To incorporate further functionality, there is a demand for higher-performance memory that can keep the memory macro area small through small bit cells, as well as for non-volatile memory that can retain code or administration data without external memories. Embedded STT-MRAM has a bit cell size that is less than one-third of that of SRAM and possesses non-volatile characteristics, and eMRAM is suitable for small-scale systems such as smart MCUs that do not have external memory. Therefore, eMRAM has at this point been commercialized in GNSS (GPS), smartwatch, and wireless communication systems. In the same way, it is presumed to be suitable for frame buffer memory and data memory in image sensor systems. This talk will present how the device can enhance image sensor functionality, assuming its application in image sensors.
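
To see why the smaller bit cell matters for an on-sensor frame buffer, here is a toy macro-area estimate. The cell areas and frame-buffer size are assumptions chosen only for illustration (roughly consistent with the "less than one-third of SRAM" statement), not Sony figures:

    # Illustrative frame-buffer bit-cell area comparison (assumed numbers).
    frame_bits = 8_000_000 * 12                      # e.g. one 12-bit, 8 MP frame
    cell_area_um2 = {"SRAM": 0.030, "eMRAM": 0.009}  # assumed bit-cell areas

    for kind, area in cell_area_um2.items():
        macro_mm2 = frame_bits * area / 1e6
        print(f"{kind}: ~{macro_mm2:.2f} mm^2 of bit cells")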

The full program is available here: https://sites.google.com/view/mramforum/program-booklet 

Go to the original article...

IEDM Image Sensors Session Dec 10 (today!)

Image Sensors World        Go to the original article...

Program link: https://iedm25.mapyourshow.com/8_0/sessions/session-details.cfm?ScheduleID=36&

Advanced Image Sensors
Wednesday, December 10
1:30 PM - 5:20 PM PST

This session includes 8 papers on the latest in image sensor technology. The first is an invited paper on progress in flash LiDAR using heterodyne detection. The next two papers present HDR imagers using LOFIC pixels that reach or exceed 120 dB. This is followed by papers on specialty imagers: the first describes an all-organic flexible imager, followed by an extremely high frame-rate burst CIS. The final three papers of the session cover the latest technologies for shrinking pixels to sub-micron dimensions. Of special note is the last paper, which shrinks the dual-photodiode pixel to 0.7 µm.

3D FMCW Wide-angle Flash Lidar: Towards System Integration and In-pixel Frequency Measurement (Invited)

Frequency-modulated continuous-wave light detection and ranging (FMCW LiDAR) usually scans the scene point by point and measures distance via a Fourier transform (FT) of the heterodyne signal. A promising solution for higher frame rates with larger image resolution is the flash version of FMCW LiDAR, using floodlight illumination and an image sensor. We first review our recent achievements in FMCW flash LiDAR system development with commercially available components and post-processing. Because the FT is difficult to implement in small pixels, we then introduce the principle of a heterodyne image sensor with in-pixel frequency measurement combined with a multi-chirp laser modulation strategy, targeting video-rate real-time measurements.
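
To make the range-from-beat-frequency relation concrete, here is a minimal Python sketch of the conventional FT-based FMCW measurement the abstract refers to; the chirp bandwidth, chirp duration, sample rate, and target range below are assumed illustrative values, not parameters from the paper.

```python
import numpy as np

# Illustrative FMCW range estimation (assumed parameters, not from the paper).
c = 3.0e8           # speed of light, m/s
B = 1.0e9           # chirp bandwidth, Hz (assumed)
T = 10e-6           # chirp duration, s (assumed)
fs = 100e6          # sample rate of the heterodyne signal, Hz (assumed)
R_true = 15.0       # simulated target range, m (assumed)

f_beat_true = 2 * R_true * B / (c * T)         # beat frequency produced by that range
t = np.arange(int(fs * T)) / fs
beat = np.cos(2 * np.pi * f_beat_true * t)     # idealized, noise-free heterodyne signal

# Conventional FT-based frequency measurement, as used in point-scanning FMCW
spectrum = np.abs(np.fft.rfft(beat))
f_beat = np.fft.rfftfreq(beat.size, 1 / fs)[np.argmax(spectrum)]
R_est = c * f_beat * T / (2 * B)               # invert the range-to-beat-frequency relation
print(f"estimated range: {R_est:.2f} m")
```

The in-pixel frequency measurement proposed in the paper replaces exactly this FT step, which is why it is attractive for small pixels and video-rate operation.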

A 120 dB Dynamic Range 3D Stacked 2-Stage LOFIC CMOS Image Sensor with Illuminance-Adaptive Signal Selection Function

This work presents a 3D stacked 2-stage lateral overflow integration capacitor (LOFIC) CMOS image sensor with an illuminance-adaptive signal selection function. To reduce the high data rates of conventional wide dynamic range sensors, this work proposes an illuminance-adaptive signal selection circuit that non-destructively determines the light intensity level using electrons accumulated in the first LOFIC stage. This allows the developed sensor to selectively output the one or two most appropriate signals out of three, reducing the data rate while maintaining a wide dynamic range. Furthermore, a 3D-stacked Si trench capacitor is employed to achieve over 8.6 Me- FWC with a 5.6 µm pixel pitch. The fabricated chip demonstrates a dynamic range of 120 dB with selected signal readout and a maximum SNR of 67.5 dB.
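
As a rough back-of-the-envelope check of how such dynamic range figures relate full-well capacity to the noise floor, the sketch below inverts the usual single-exposure definition DR = 20·log10(FWC / noise floor). The abstract reports 120 dB and 8.6 Me- FWC but does not state the noise floor, so the value computed here is derived for illustration only, not a reported figure.

```python
import math

# Back-of-envelope: single-exposure dynamic range DR = 20*log10(FWC / noise floor).
# The noise floor below is derived from the reported numbers, not stated in the paper.
fwc_e = 8.6e6                                   # reported full-well capacity, electrons
dr_db = 120.0                                   # reported dynamic range, dB

implied_noise_e = fwc_e / 10 ** (dr_db / 20)    # ~8.6 e- rms equivalent noise floor
check_db = 20 * math.log10(fwc_e / implied_noise_e)
print(f"implied noise floor ≈ {implied_noise_e:.1f} e-, check: {check_db:.1f} dB")
```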

A 129 dB Dynamic Range Triple Readout CMOS Image Sensor with FWC Enhancement Technology

We present a 2.1 μm pixel CMOS image sensor for automotive applications achieving 129 dB single exposure dynamic range with triple readout. The advanced sub-pixel architecture incorporates FDTI, conformal doping and 3D-MIM technologies, significantly enhancing full-well capacity. The sensor enables a seamless triple-image composition with 29 dB SNR at connection points, suitable for high-temperature automotive environments. 

Flexible 256×256 All-Organic-Transistor Active-Matrix Optical Imager with Integrated Gate Driver

Solution-processed organic thin-film transistors (OTFTs) provide a promising platform for truly flexible, large-area integrated sensor systems. Here, an all-organic-transistor active-matrix imager using OTFTs for both the backplane and the optical sensing layer is developed. By reducing the density of states at the channel interface for a steep subthreshold swing and low dark current, the resulting organic phototransistor (OPT) presents a high detectivity of 2.2×10^16 Jones. The OPT is stacked on top of an OTFT switch with a high ON/OFF ratio of 4.7×10^10 to form the active matrix, and the gate driver is also integrated. Finally, a 256 × 256 (213 PPI) flexible active-matrix imager is demonstrated for fingerprint and low-distortion imaging with the constructed real-time imaging system.
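
For readers unfamiliar with the Jones unit, the sketch below evaluates the standard dark-current-limited specific detectivity expression D* = R / sqrt(2·q·J_dark). The responsivity and dark current density used are assumed toy values, not numbers from this paper; the point is only to show why lowering dark current raises detectivity.

```python
import math

# Standard dark-current-limited specific detectivity: D* = R / sqrt(2*q*J_dark),
# expressed in Jones (cm*sqrt(Hz)/W). All inputs are assumed toy values.
q = 1.602e-19        # elementary charge, C
R = 1.0              # assumed responsivity, A/W
J_dark = 1e-10       # assumed dark current density, A/cm^2

d_star = R / math.sqrt(2 * q * J_dark)
print(f"D* ≈ {d_star:.2e} Jones")   # lower J_dark -> higher detectivity
```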

A Global Shutter Burst CMOS Image Sensor with 6-Tpixel/s Readout Speed, 256-recording Frames and -170dB Parasitic Light Sensitivity

This paper presents an ultra-high-speed (UHS) global shutter burst CMOS image sensor (CIS) featuring pixel-wise analog memory arrays. The developed CIS with 628H x 480V pixels achieves a maximum frame rate of 20 Mfps and a readout speed of 6.03 Tpixel/s. A recording length of 256 frames and a parasitic light sensitivity (PLS) of -170 dB were also achieved simultaneously in a UHS camera. This low PLS is achieved through comprehensive metal shielding of the pixel circuit and memory regions, and by spatial separation between the photodiode and memory regions, implemented using Si trench capacitors. The introduced bias adjustment circuit compensates for voltage variations among pixel positions due to the ground resistance and pixel circuit current during the pixel driving period, enabling high-resolution video recording with an effective 628H x 480V pixels and a 48 μm pitch.
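
As a quick sanity check, the readout-speed figure follows directly from the resolution and burst frame rate quoted in the abstract:

```python
# Arithmetic check of the readout-speed figure quoted above.
pixels_per_frame = 628 * 480            # effective 628H x 480V resolution
burst_frame_rate = 20e6                 # 20 Mfps maximum burst frame rate
pixel_rate = pixels_per_frame * burst_frame_rate
print(f"{pixel_rate:.2e} pixel/s")      # ~6.03e12 pixel/s, i.e. ~6 Tpixel/s
```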

Silicon-on-Insulator Pixel FinFET Technology for a High Conversion Gain and Low Dark Noise 2-Layer Transistor Pixel Stacked CIS

This study presents a 2-Layer transistor pixel stacked 0.8-µm dual-pixel (DP) CIS with silicon-on-insulator (SOI) fin field-effect transistor (FinFET) technology. The application of SOI FinFETs as pixel transistors, featuring a body-less configuration on buried oxide, reduces parasitic capacitance at the floating diffusion node, thereby enhancing conversion gain and noise characteristics. The SOI FinFET achieves improved transconductance and source follower gain compared to a previous pixel FinFET. The resolution of challenges associated with the SOI structure is demonstrated through a 0.8μm DP CIS with SOI FinFETs. 

A 0.43µm Quad-Photodiode CMOS Image Sensor by 3-Wafer-Stacking and Dual-Backside Deep Trench Isolation Technologies

Scaling the pixel pitch below 0.5 µm has become highly challenging in conventional 2-wafer stacked CMOS image sensors, due to the limited silicon area shared among the photodiodes, photodiode-to-photodiode isolation, and the associated functional transistors, while maintaining excellent pixel performance. In this work, several advanced pixel technologies, including 3-wafer stacking, dual-backside deep trench isolation, and an enhanced composite metal grid, were proposed and employed to realize the world's smallest 0.43 µm pitch quad-photodiode pixel, achieving exceptional performance metrics of 6000 e- full well capacity, 1.3 e-/s dark current, and 1.5 e-rms read noise, without degradation in conversion gain.

A 2-layer, 0.7μm-pitch Dual Photodiode Pixel CMOS Image Sensor with Metaphotonic Color Router

In this article, the world's smallest 0.7 μm-pitch dual-photodiode pixel is presented. We integrated the 2-layer pixel with a hybrid Cu-Cu bonding process only, without introducing pixel-level deep contacts. By optimizing the layout of the Cu pad layer, we suppressed capacitive coupling between neighboring floating diffusion nodes while still achieving a conversion gain similar to that of a 0.7 μm-pitch, 1-layer single-photodiode pixel. We overcame the degradation of the auto-focus (AF) separation ratio by incorporating multi-focal metaphotonic color routers (MPCR).

Go to the original article...

Sony enters the 200MP race

Image Sensors World        Go to the original article...

Link: https://www.sony-semicon.com/en/info/2025/2025112701.html

Sony Semiconductor Solutions to Release Approx. 200-Effective-Megapixel Image Sensor for Mobile Applications with Built-in AI Technology
Achieving high definition and high image quality for high-powered zooming on monocular cameras

Atsugi, Japan — Sony Semiconductor Solutions Corporation (Sony) today announced the upcoming release of the 1/1.12-type large-format LYTIA 901 mobile image sensor with a high resolution of approximately 200-effective megapixels.*1 This product uses a pixel array format that delivers both high resolution and high sensitivity, and further incorporates an image processing circuit utilizing AI technology within the sensor. It achieves high-definition image quality even with high-powered zooming of up to 4x on monocular cameras and offers new experiential value when shooting on mobile cameras.

Main Features
■Approximately 200-effective megapixels and Quad-Quad Bayer Coding (QQBC) array deliver both high resolution and high sensitivity
 The new sensor uses a pixel pitch of 0.7 μm for an approximately 200-effective megapixel resolution on a 1/1.12 large-format sensor. Advances in pixel structure and color filter design increase the saturation signal level, contributing to improved dynamic range.
 To leverage the high resolution of approximately 200-effective megapixels, the new product employs a Quad-Quad Bayer Coding (QQBC) array in which 16 (4×4) adjacent pixels are clustered with filters of the same color. During normal shooting, the signals of the 16 clustered pixels are processed as a single pixel unit, allowing the camera to maintain high sensitivity even at night and in dim indoor shooting conditions. On the other hand, during zoom shooting, a form of array conversion processing known as remosaicing reverts the clustered pixels to a normal pixel array, to deliver high-resolution imaging. 
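
The sketch below illustrates (not Sony's implementation) what the 4×4 same-color clustering means at readout: summing each QQBC cluster into one output pixel trades resolution for signal, which is why sensitivity improves in the binned mode, while the remosaiced full-resolution array is used when zooming.

```python
import numpy as np

# Toy 4x4 same-color binning, mimicking the QQBC clustered readout mode
# (illustrative only, not Sony's pipeline).
rng = np.random.default_rng(0)
plane = rng.poisson(50, size=(64, 64)).astype(np.float64)   # one color plane, toy data

def bin_clusters(p: np.ndarray, k: int = 4) -> np.ndarray:
    """Sum each k x k cluster of same-color pixels into one output pixel."""
    h, w = p.shape
    return p.reshape(h // k, k, w // k, k).sum(axis=(1, 3))

binned = bin_clusters(plane)    # 16x the signal per output pixel -> better low-light SNR
print(plane.shape, "->", binned.shape)
```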


 

■Equipped with an AI learning-based remosaicing function for high quality imaging while zooming
Array conversion processing (remosaicing), which reverts the QQBC array to a normal pixel array, requires extremely advanced calculation processes. For this product, Sony has developed a new AI learning-based remosaicing for the QQBC array and mounted the processing circuit inside the sensor, for another Sony industry-first.*2 This new technology makes it possible to process high-frequency component signals, which are generally difficult to reproduce, offering superior reproduction of details such as fine patterns and letters. Furthermore, incorporating AI learning-based remosaicing directly in the sensor enables high-speed processing and up to 30 fps high-quality video capture when shooting with up to 4x zoom in 4K resolution.

■High dynamic range and rich tonal expression enabled by various HDR technologies
 DCG-HDR and Fine12bit ADC technologies deliver high dynamic range and tonal expressions across the entire zoom range up to 4x
 In addition to Dual Conversion Gain‐HDR (DCG-HDR) technology, which composites data read at different gain settings in a single frame, the new sensor is equipped with Fine12bit ADC (AD converter) technology that improves the quantization bit depth from the conventional 10 bits to 12. These features deliver a high dynamic range and rich tonal expression across the entire zoom range up to 4x.
 HF-HDR technology delivers over 100 dB*3 high dynamic range performance
 Hybrid Frame-HDR (HF-HDR) is an HDR technology that composites frames captured in short exposures with DCG data on a post-processing application processor. HF-HDR significantly improves the dynamic range compared to conventional HDR technology, delivering performance of over 100 dB.*3 This significantly suppresses highlight blowout in bright areas, as well as blackout in dark areas, delivering images that more closely resemble what the human eye actually sees. 
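
The sketch below captures the generic idea behind this kind of frame composition: keep the long/DCG frame where it is unsaturated and substitute a gain-corrected short exposure in clipped highlights. It is a toy linear merge under assumed exposure-ratio and full-scale values, not Sony's HF-HDR pipeline.

```python
import numpy as np

# Toy linear HDR merge (not Sony's HF-HDR pipeline).
full_scale = 4095.0      # assumed 12-bit code range
exposure_ratio = 16.0    # assumed long/short exposure ratio

rng = np.random.default_rng(1)
scene = rng.uniform(0, 8 * full_scale, (4, 4))              # toy linear scene radiance
long_frame = np.clip(scene, 0, full_scale)                  # long exposure clips highlights
short_frame = np.clip(scene / exposure_ratio, 0, full_scale)

merged = np.where(long_frame < 0.95 * full_scale,           # keep long frame where unsaturated
                  long_frame,
                  short_frame * exposure_ratio)             # rescaled short frame in highlights
print(merged.round(1))
```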

 


 

Go to the original article...

Teradyne blog on automated test equipment for image sensors

Image Sensors World        Go to the original article...

Link: https://www.teradyne.com/2025/11/11/invisible-interfaces/

Invisible Interfaces: The Hidden Challenge Behind Every Great Image Sensor

Flexible, future-ready test strategies are crucial to the irregular cycle of sensor design and standards development.

Alexander Metzdorf, Teradyne Inc. 

When you snap a photo on your phone or rely on a car’s camera for lane detection, you’re trusting an unseen network of technologies to deliver or interpret image data flawlessly. But behind the scenes, the interface between the image sensor and its processor is doing the heavy lifting, moving megabytes of data without error or delay.

While much of the industry conversation focuses on advances in resolution and sensor technology, another challenging aspect of modern imaging innovation is the interfaces—the invisible pathways that connect these sensors to the systems around them, including the processors tasked with interpreting their data. One of the most pressing and underappreciated imaging challenges lies in the ability of the interfaces to handle growing demands for speed, bandwidth, and reliability. The challenge isn’t one-size-fits-all. Smartphone cameras may need ultra-high resolution over short distances, while automotive sensors prioritize robustness and wider areas.

As image sensors and the technologies used to interpret the data evolve to deliver higher resolutions and even integrate artificial intelligence directly onto the chip, these interfaces are under more pressure than ever before. The challenge is both technical and practical: how do you design and test interfaces that must support vastly different applications, from the low-power demands of smartphones to the rugged, long-distance requirements of automotive systems?

And even more critically, how do you keep up when the rules change every few months?

The Growing Challenge in Image Sensor Development

The industry’s insatiable appetite for higher resolutions is well known, but what often goes unnoticed is the corresponding explosion in data traffic. A single image sensor on a smartphone might capture 500 megabytes of data in one shot. In automotive systems, that sensor could be sending critical visual information across several meters of cabling to a centralized processor, where decisions like emergency braking or obstacle detection happen in real-time. Industrial imaging is pushing resolutions even higher (up to 500 megapixels in some cases) to support inspection and automation systems, creating enormous data handling and processing demands.

Each of these scenarios represents wildly different demands on the interfaces connecting sensors to the rest of the system. In smartphones, the processor is typically located just millimeters away from the image sensor. Power efficiency is paramount, and interfaces must support blisteringly fast data rates to process high-resolution images without draining the battery. In an automotive application, a vehicle’s safety system might require those same sensors to transmit data over longer distances, and deliver real-time information and decision-making in harsh environments, while meeting stringent reliability and safety standards.

It’s a challenge compounded by the fact that image sensor manufacturers rarely control these interface requirements. Industry-wide, sensor manufacturers are generally forced to adopt a growing variety of interface standards and proprietary solutions, each with unique requirements for bandwidth, distance, latency, and power consumption.

This creates a relentless cycle of adaptation, where manufacturers are forced to develop and validate new interfaces almost as quickly as they can design the sensors themselves. It’s not uncommon for entirely new interface requirements to be handed down with lead times as short as six months. Unpredictability follows for both image sensor designers and the teams responsible for testing these devices.

The Shift Toward Proprietary Interfaces

While MIPI remains the dominant open standard for image sensor interfaces, the use of proprietary protocols is growing. These custom protocols are typically developed privately by major technology companies to support their unique product requirements, for example, to achieve specific performance advantages. These custom interfaces are closely guarded secrets and often remain entirely undocumented outside of the companies that develop them, making it extremely difficult for test equipment vendors to keep pace.
Even a full teardown of a high-end smartphone won’t reveal how its camera interfaces are engineered. Yet, despite having no access to these underlying specifications, test teams are still expected to validate sensor performance against them.

For manufacturers and test engineers, this creates a near-constant state of uncertainty. New protocols can emerge rapidly and without warning, and must be supported almost immediately, which can cause test equipment providers to scramble to retool systems.

Teradyne’s Approach: Flexibility as a Strategic Imperative

Teradyne has set out to solve this challenge, developing a modular, future-ready approach that gives manufacturers the flexibility they need to thrive in unpredictable environments.

At the hardware level, Teradyne’s UltraSerial20G capture instrument for the UltraFLEXplus is designed for adaptability. Its modular architecture allows changes in key components and software to quickly accommodate new protocols.

Additional flexibility is added with Teradyne’s IG-XL software. Customers are empowered to develop highly customized test strategies, controlling every detail of the testing process, from voltage and timing to signal slopes and data handling.

The Path Ahead: Staying Competitive in a Fragmented, Fast-moving Market

For image sensor makers, the message is clear: choose test platforms that are prepared for proprietary protocols, evolving standards, and ever-tighter time-to-market demands.

In this landscape, Teradyne’s modular hardware and powerful, agile software ensure that manufacturers are meeting current demands and are prepared for whatever comes next. With early interface testing capabilities and scalable solutions that can adapt on the fly, Teradyne customers stay ahead of integration risks, control costs, and accelerate time-to-market.

In an industry where speed, innovation, and reliability are everything, that kind of flexibility is more than just a technical feature. It’s a strategic necessity that offers manufacturers the freedom to innovate, knowing they have the flexibility they need in their test solutions.

Go to the original article...

A-SSCC Circuit Insights CMOS Image Sensor

Image Sensors World        Go to the original article...

 

A-SSCC 2025 - Circuit Insights #4: Introduction to CMOS Image Sensors - Prof. Chih-Cheng Hsieh

About Circuit Insights: Circuit Insights features internationally renowned researchers in circuit design, who will deliver engaging and accessible lectures on fundamental circuit concepts and diverse application areas, tailored to a level suitable for senior undergraduate students and early graduate students. The event will provide a valuable and inspiring opportunity for those who are considering or pursuing a career in circuit design.

About the Presenter: Chih-Cheng Hsieh received the B.S., M.S., and Ph.D. degrees from the Department of Electronics Engineering, National Chiao Tung University, Hsinchu, Taiwan, in 1990, 1991, and 1997, respectively. From 1999 to 2007, he was with an IC design house, Pixart Imaging Inc., Hsinchu. He led the Mixed-Mode IC Department as a Senior Manager and was involved in the development of CMOS image sensor ICs for PC, consumer, and mobile phone applications. In 2007, he joined the Department of Electrical Engineering, National Tsing Hua University, Hsinchu, where he is currently a Full Professor. His current research interests include low-voltage low-power smart CMOS image sensor ICs, ADCs, and mixed-mode IC development for artificial intelligence (AI), Internet of Things (IoT), biomedical, space, robot, and customized applications. Dr. Hsieh serves as a TPC member of ISSCC and A-SSCC, and as an Associate Editor of IEEE Solid-State Circuits Letters (SSC-L) and IEEE Circuits and Systems Magazine (CASM). He was the SSCS Taipei Chapter Chair and the Student Branch Counselor of NTHU, Taiwan.

Go to the original article...

Sony A7 V review so far

Cameralabs        Go to the original article...

The Sony A7 V is a full-frame mirrorless camera with a 33 Megapixel partially-stacked sensor, 4k video up to 120p, IBIS and 30fps burst shooting. Check out my review-so-far.…

Go to the original article...

Canon EOS R6 Mark III review

Cameralabs        Go to the original article...

The Canon EOS R6 Mark III is a full-frame camera with 32.5 Megapixels, IBIS, 40fps bursts, and 7k RAW video. Here's my hands-on review so far!…

Go to the original article...

Time-mode CIS paper

Image Sensors World        Go to the original article...

In a recent paper titled "An Extended Time-Mode Digital Pixel CMOS Image Sensor for IoT Applications," Kim et al. from Yonsei University write:

Time-mode digital pixel sensors have several advantages in Internet-of-Things applications, which require a compact circuit and low-power operation under poorly illuminated environments. Although the time-mode digitization technique can theoretically achieve a wide dynamic range by overcoming the supply voltage limitation, its practical dynamic range is limited by the maximum clock frequency and device leakage. This study proposes an extended time-mode digitization technique and a low-leakage pixel circuit to accommodate a wide range of light intensities with a small number of digital bits. The prototype sensor was fabricated in a 0.18 μm standard CMOS process, and the measurement results demonstrate its capability to accommodate a 0.03 lx minimum light intensity, providing a dynamic range figure-of-merit of 1.6 and a power figure-of-merit of 37 pJ/frame·pixel. 
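
For readers new to time-mode digitization, the sketch below captures the basic idea the abstract builds on: the digital code is the number of clock cycles until the integrating pixel node crosses a reference, so bright pixels yield small codes and the counter's full scale (set by the clock period and bit depth) bounds the practical dynamic range. The capacitance, voltage swing, clock period, and photocurrents are assumed illustrative values, not the paper's E-TMD parameters.

```python
import numpy as np

# Toy time-mode digitization: count clock cycles until the integrating pixel node
# crosses the reference. All values are assumed, not the paper's E-TMD parameters.
def time_mode_code(i_photo, c_pd=10e-15, v_swing=1.0, t_ck=2e-6, n_bits=6):
    """Digital code = clock count until the photodiode node discharges by v_swing."""
    t_cross = c_pd * v_swing / i_photo          # brighter pixel -> earlier crossing
    code = int(np.ceil(t_cross / t_ck))
    return min(code, 2**n_bits - 1)             # counter saturates at full scale

for i_photo in (1e-9, 1e-10, 1e-11):            # assumed photocurrents, bright to dim
    print(f"{i_photo:.0e} A -> code {time_mode_code(i_photo)}")
```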

Sensors 2025, 25(23), 7228; https://doi.org/10.3390/s25237228

 



Figure 1. Operation principle of conventional CISs: (a) voltage mode; (b) fixed reference; and (c) ramp-down TMD.
Figure 2. Theoretical photo-transfer curve of conventional 6-bit TMDs.
Figure 3. The operation principle of the proposed E-TMD technique.
Figure 4. Theoretical photo-transfer curve of the proposed E-TMD: (a) TS = TU = TD = 2000tCK, Δ = 0; (b) TS = TU = TD = 100tCK, Δ = 0; (c) TS = TU = 0, TD = 45tCK, Δ = 0; and (d) TS = 0, TU = 25tCK, TD = 45tCK, Δ = 0.7.
Figure 5. The conventional time-mode digital pixel CIS adapted from [11]: (a) architecture; (b) pixel schematic diagram.
Figure 6. Architecture and schematic diagram of the proposed time-mode digital pixel CIS.
Figure 7. Operation of the proposed time-mode digital pixel CIS with α representing VDD-vREF-VT: (a) six operation phases and (b) timing diagram.
Figure 8. Transistor-level simulated photo-transfer curve comparison.

Figure 9. Chip micrograph.

 

Figure 10. Captured sample images: (a) 190 lx, TS = 17 ms, tCK = 50 µs; (b) 1.9 lx, TS = 400 ms, tCK = 2 µs.
Figure 11. Captured sample images and their histograms: (a) 20.5 lx, TS = 32.6 ms; (b) 200.6 lx, TS = 4.6 ms; (c) 2106 lx, TS = 0.64 ms; (d) 2106 lx, TS = 0.64 ms, TU = 0.74 ms, TD = 1.84 ms, Δ = 0.5.

Go to the original article...

ISSCC 2026 Image Sensors session

Image Sensors World        Go to the original article...

ISSCC 2026 will be held Feb 15-19, 2026 in San Francisco, CA.

The advance program is now available: https://submissions.mirasmart.com/ISSCC2026/PDF/ISSCC2026AdvanceProgram.pdf 

Session 7 Image Sensors and Ranging (Feb 16)

Session Chair: Augusto Ximenes, CogniSea, Seattle, WA
Session Co-Chair: Andreas Suess, Google, Mountain View, CA

54×42 LiDAR 3D-Stacked System-On-Chip with On-Chip Point Cloud Processing and Hybrid On-Chip/Package-Embedded 25V Boost Generation

VoxCAD: A 0.82-to-81.0mW Intelligent 3D-Perception dToF SoC with Sector-Wise Voxelization and High-Density Tri-Mode eDRAM CIM Macro

A Multi-Range, Multi-Resolution LiDAR Sensor with 2,880-Channel Modular Survival Histogramming TDC and Delay Compensation Using Double Histogram Sampling

A 480×320 CMOS LiDAR Sensor with Tapering 1-Step Histogramming TDCs and Sub-Pixel Echo Resolvers

A 26.0mW 30fps 400x300-pixel SWIR Ge-SPAD dToF Range Sensor with Programmable Macro-Pixels and Integrated Histogram Processing for Low-Power AR/VR Applications

A 128×96 Multimodal Flash LiDAR SPAD Imager with Object Segmentation Latency of 18μs Based on Compute-Near-Sensor Ising Annealing Machine

A Fully Reconfigurable Hybrid SPAD Vision Sensor with 134dB Dynamic Range Using Time-Coded Dual Exposures

A 55nm Intelligent Vision SoC Achieving 346TOPS/W System Efficiency via Fully Analog Sensing-to-Inference Pipeline

A 1.09e- Random-Noise 1.5μm-Pixel-Pitch 12MP Global-Shutter-Equivalent CMOS Image Sensor with 3μm Digital Pixels Using Quad-Phase-Staggered Zigzag Readout and Motion Compensation

A 200MP 0.61μm-Pixel-Pitch CMOS Imager with Sub-1e- Readout Noise Using Interlaced-Shared Transistor Architecture and On-Chip Motion Artifact-Free HDR Synthesis for 8K Video Applications

Go to the original article...

Ubicept releases toolkit for SPAD and CIS

Image Sensors World        Go to the original article...

Ubicept Extends Availability of Perception Technology to Make Autonomous Systems Using Conventional Cameras More Reliable

Computer vision processing unlocks higher quality, more trustworthy visual data for machines whether they use advanced sensors from Pi Imaging Technology or conventional vision systems

BOSTON--(BUSINESS WIRE)--Ubicept, the computer vision startup operating at the limits of physics, today announced the release of the Ubicept Toolkit, which will bring its physics-based imaging to any modern vision system. Whether for single-photon avalanche diode (SPAD) sensors in next-generation vision systems or immediate image quality improvements with existing hardware, Ubicept provides a unified, physics-based approach that delivers high quality, trustworthy data.

“Ubicept’s technology revolutionizes how machines see the world by unlocking the full potential of today's and tomorrow's image sensors. Our physics-based approach captures the full complexity of motion, even in low-light or high-dynamic-range conditions, providing more trustworthy data than AI-based video enhancement,” said Sebastian Bauer, CEO of Ubicept. “With the Ubicept Toolkit, we’re now making our advanced single-photon imaging more accessible for a broad range of applications from robotics to automotive to industrial sensing.”

Ubicept’s solution is designed for the most advanced sensors to maximize image data quality and reliability. Now, the Toolkit will support any widely available CMOS camera with raw uncompressed output, giving perception developers immediate quality gains.

“Autonomous systems need a better way to understand the world. Our mission is to turn raw photon data into outputs that are specifically designed for computer vision, not human consumption,” said Tristan Swedish, CTO of Ubicept. “By making our technology available for more conventional vision systems, we are giving engineers the opportunity to experience the boost in reliability now while creating an easier pathway to SPAD sensor adoption.”

SPAD sensors – traditionally used in 3D systems – are poised to reshape the image sensor and computer vision landscape. While the CMOS sensor market is projected to grow to $30B by 2029 at 7.5% CAGR, the SPAD market is growing nearly three times faster, expected to reach $2.55B by 2029 at 20.1% CAGR.

Pi Imaging Technology is a leader in the field with its SPAD Alpha, a next-generation 1-megapixel single-photon camera that delivers zero read noise, nanosecond-level exposure control, and frame rates up to 73,000 fps. Designed for demanding scientific applications, it offers researchers and developers extreme temporal precision and light sensitivity. The Ubicept Toolkit builds on these strengths by transforming the SPAD Alpha’s raw photon data into clear, ready-to-use imagery for perception and analysis.

“Ubicept shares our deep commitment to advancing perception technology,” said Michel Antolović, CEO of Pi Imaging Technology. “By combining our SPAD Alpha’s state-of-the-art hardware with Ubicept’s real-time processing, perception engineers can get the most from what single-photon imaging has to offer.”

The Toolkit provides engineering teams with everything they need to visualize, capture, and process video data efficiently with the Ubicept Photon Fusion (UPF) algorithm. The SPAD Toolkit also includes Ubicept’s FLARE (Flexible Light Acquisition and Representation Engine) firmware for optimized photon capture. In addition, the Toolkit includes white-glove support to early adopters for a highly personalized and premium experience.

The Ubicept Toolkit will be available in December 2025. To learn how it can elevate perception performance and integrate into existing workflows, contact Ubicept here.

Go to the original article...

Job Postings – Week of November 23 2025

Image Sensors World        Go to the original article...


ByteDance

Image Sensor Digital Design Lead- Pico

San Jose, California, USA

Link

ST Microelectronics

Silicon Photonics Product Development Engineer

Grenoble, France

Link

DigitalFish

Senior Systems Engineer, Cameras/Imaging

Sunnyvale, California, USA [Remote]

Link

Imasenic

Digital IC Design Engineer

Barcelona, Spain

Link

Meta

Technical Program Manager, Camera Systems

Sunnyvale, California, USA

Link

Westlake University

Ph.D. Positions in Dark Matter & Neutrino Experiments

Hangzhou, Zhejiang, China

Link

General Motors

Advanced Optical Sensor Test Engineer

Warren, Michigan, USA [Hybrid]

Link

INFN

Post-Doc senior research grant in experimental physics

Frascati, Italy

Link

Northrop Grumman

Staff EO/IR Portfolio Technical Lead

Melbourne, Florida, USA

Link

Go to the original article...

"Camemaker" image sensors search tool

Image Sensors World        Go to the original article...

An avid reader of the blog shared this handy little search tool for image sensors: 

https://www.camemaker.com/shop

Although it isn't comprehensive (only covers a few companies), you can filter by various sensor specs. Try it out? 

Go to the original article...

Event cameras: applications and challenges

Image Sensors World        Go to the original article...

Gregor Lenz (roboticist, and cofounder of Open Neuromorphic and Neurobus) has written a two-part blogpost that readers of ISW might find enlightening:

https://lenzgregor.com/posts/event-cameras-2025-part1/

https://lenzgregor.com/posts/event-cameras-2025-part2/ 

Gregor goes into various application domains where event cameras have been tried, but faced challenges, technical and otherwise.

Wide adoption will depend less on technical merit and more on how well the new sensor modality fits into existing pipelines for X, where X can be supply chain, hardware, software, manufacturing, assembly, testing... pick your favorite!

Go to the original article...

Conference List – May 2026

Image Sensors World        Go to the original article...

Quantum Photonics Conference, Networking and Trade Exhibition - 5-6 May 2026 - Erfurt, Germany - Website

Sensors Converge - 5-7 May 2026 - Santa Clara, California, USA -  Website

LOPS 2026 - 8-9 May 2026 - Chicago, Illinois, USA - Website

Embedded Vision Summit - 11-13 May 2026 - Santa Clara, California, USA - Website

CLEO - Congress on Lasers and Electro-Optics - 17-20 May 2026 - Charlotte, North Carolina, USA 

IEEE International Symposium on Robotic and Sensors Environments - 18-19 May 2026 - Norfolk, Virginia, USA - Website

IEEE International Symposium on Integrated Circuits and Systems - 24-27 May 2026 - Shanghai, China - Website

ALLSENSORS 2026 - 24-28 May 2026 - Venice, Italy - Website

Robotics Summit and Expo - 27-28 May 2026 - Boston, Massachusetts, USA - Website


If you know about additional local conferences, please add them as comments.

Return to Conference List index

Go to the original article...
