EETimes article about Prophesee-Qualcomm deal


Full article here: https://www.eetimes.com/experts-weigh-impact-of-prophesee-qualcomm-deal/

Experts Weigh Impact of Prophesee-Qualcomm Deal

Some excerpts:

Frédéric Guichard, CEO and CTO of DXOMARK, a French company that specializes in testing cameras and other consumer electronics, and that is unconnected with Paris-based Prophesee, told EE Times that the ability to deblur in these circumstances could provide definite advantages.

“Reducing motion blur [without increasing noise] would be equivalent to virtually increasing camera sensitivity,” Guichard said, noting two potential benefits: “For the same sensitivity [you could] reduce the sensor size and therefore camera thickness,” or you could maintain the sensor size and use longer exposures without motion blur.
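Guichard's equivalence is, at heart, shot-noise arithmetic. A minimal sketch, assuming a purely shot-noise-limited pixel (my illustration, not DXOMARK's model): a longer blur-free exposure collects more photons, and SNR grows with the square root of the photon count.

    import math

    # Illustrative shot-noise arithmetic (not from the article): if deblurring
    # lets you expose 4x longer without motion blur, you collect 4x the photons.
    def snr_db(photons):
        # Shot-noise-limited SNR: signal N over noise sqrt(N).
        return 20 * math.log10(photons / math.sqrt(photons))

    base = 1000            # photons collected in a blur-free short exposure
    for factor in (1, 2, 4):
        n = base * factor
        print(f"{factor}x exposure: {n} photons, SNR = {snr_db(n):.1f} dB")
    # Each doubling of exposure adds ~3 dB of SNR -- the "virtual sensitivity"
    # gain Guichard describes; equivalently, a smaller sensor could match the
    # original SNR at the same exposure.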

Judd Heape, VP for product management of camera, computer vision and video at Qualcomm Technologies, told EE Times that this image enhancement can be achieved with roughly a 20-30% increase in power consumption to run the extra image sensor and execute the processing.

“The processing can be done slowly and offline because you don’t really care about how long it takes to complete,” Heape added.

...

“We have many, many low-power use cases,” he said. Lifting a phone to your ear to wake it up is one example. Gesture-recognition to control the car when you’re driving is another.

“These event-based sensors are much more efficient for that because they can be programmed to easily detect motion at very low power,” he said. “So, when the sensor is not operating, when there’s no movement or no changes in the scene, the sensor basically consumes almost no power. So that’s really interesting to us.”

Eye-tracking could also be very useful, Heape added, because Qualcomm builds devices for augmented and virtual reality. “Eye-tracking, motion-tracking of your arms, hands, legs… are very efficient with image sensors,” he said. “In those cases, it is about power, but it’s also about frame rate. We need to track the eyes at like 90 [or 120] frames per second. It’s harder to do that with a standard image sensor.”

Prophesee CEO Luca Verre told EE Times the company is close to launching its first mobile product with one OEM. “The target is to enter into mass production next year,” he said. 


TechCrunch article on future of computer vision


Everything you know about computer vision may soon be wrong

Ubicept wants half of the world's cameras to see things differently


Some excerpts from the article:

Most computer vision applications work the same way: A camera takes an image (or a rapid series of images, in the case of video). These still frames are passed to a computer, which then does the analysis to figure out what is in the image. 

Computers, however, don't care about frames, and Ubicept believes it can make computer vision far better and more reliable by ignoring the idea of frames altogether.

The company’s solution is to bypass the “still frame” as the source of truth for computer vision and instead measure the individual photons that hit an imaging sensor directly. That can be done with a single-photon avalanche diode array (or SPAD array, among friends). This raw stream of data can then be fed into a field-programmable gate array (FPGA, a type of super-specialized processor) and further analyzed by computer vision algorithms.
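As a rough intuition for frameless capture, here is a toy model (my assumptions, not Ubicept's actual pipeline): treat one SPAD pixel as a Poisson process that reports photon timestamps, and note that frames are just one of many possible binnings of that stream.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy model: a SPAD pixel reports individual photon arrival times;
    # scene brightness sets the Poisson rate.
    rate_hz = 5e4                                  # mean photon rate at one pixel
    t_end = 0.1                                    # observe for 100 ms
    n = rng.poisson(rate_hz * t_end)
    arrivals = np.sort(rng.uniform(0, t_end, n))   # photon timestamps

    # A conventional camera commits to fixed frames up front:
    frames = np.histogram(arrivals, bins=10, range=(0, t_end))[0]

    # With the raw photon stream you can re-bin after the fact -- long windows
    # for low noise, short windows for fast motion -- from the same data:
    fine = np.histogram(arrivals, bins=1000, range=(0, t_end))[0]
    print(frames)            # 10 "frames"
    print(fine.sum() == n)   # no information was discarded by framing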

The newly founded company demonstrated its tech at CES in Las Vegas in January, and it has some pretty bold plans for the future of computer vision.


Visit www.ubicept.com for more information.

Check out their recent demo of low-light license plate recognition here: https://www.ubicept.com/blog/license-plate-recognition-in-low-light


Hailo-15 AI-centric Vision Processor


From Yole: https://www.yolegroup.com/industry-news/leading-edge-ai-chipmaker-hailo-introduces-hailo-15-the-first-ai-centric-vision-processors-for-next-generation-intelligent-cameras/

Leading edge AI chipmaker Hailo introduces Hailo-15: the first AI-centric vision processors for next-generation intelligent cameras


The powerful new Hailo-15 Vision Processor Units (VPUs) bring unprecedented AI performance directly to cameras deployed in smart cities, factories, buildings, retail locations, and more.

Hailo, the pioneering chipmaker of edge artificial intelligence (AI) processors, today announced its groundbreaking new Hailo-15™ family of high-performance vision processors, designed for integration directly into intelligent cameras to deliver unprecedented video processing and analytics at the edge.

With the launch of Hailo-15, the company is redefining the smart camera category by setting a new standard in computer vision and deep learning video processing, capable of delivering unprecedented AI performance in a wide range of applications for different industries.

With Hailo-15, smart city operators can more quickly detect and respond to incidents; manufacturers can increase productivity and machine uptime; retailers can protect supply chains and improve customer satisfaction; and transportation authorities can recognize everything from lost children, to accidents, to misplaced luggage.

“Hailo-15 represents a significant step forward in making AI at the edge more scalable and affordable,” stated Orr Danon, CEO of Hailo. “With this launch, we are leveraging our leadership in edge solutions, which are already deployed by hundreds of customers worldwide; the maturity of our AI technology; and our comprehensive software suite, to enable high performance AI in a camera form-factor.”

The Hailo-15 VPU family includes three variants — the Hailo-15H, Hailo-15M, and Hailo-15L — to meet the varying processing needs and price points of smart camera makers and AI application providers. Ranging from 7 TOPS (tera operations per second) up to an astounding 20 TOPS, this VPU family enables over 5x higher performance than currently available solutions in the market, at a comparable price point. All Hailo-15 VPUs support multiple input streams at 4K resolution and combine powerful CPU and DSP subsystems with Hailo’s field-proven AI core.

By introducing superior AI capabilities into the camera, Hailo is addressing the growing demand in the market for enhanced video processing and analytics capabilities at the edge. With this unparalleled AI capacity, Hailo-15-empowered cameras can carry out significantly more video analytics, running several AI tasks in parallel, including faster detection at high resolution to enable identification of smaller and more distant objects with higher accuracy and fewer false alarms.

As an example, the Hailo-15H is capable of running the state-of-the-art object detection model YoloV5M6 with high input resolution (1280×1280) at real-time sensor rate, or the industry classification model benchmark, ResNet-50, at an extraordinary 700 FPS.
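A quick back-of-envelope check of that ResNet-50 claim, using the commonly cited complexity of roughly 4.1 GMACs (about 8.2 GOPs) per 224×224 inference; these are my assumptions, not Hailo's published methodology:

    # Back-of-envelope sanity check (my assumptions, not Hailo's numbers):
    # ResNet-50 at 224x224 is commonly cited as ~4.1 GMACs, i.e. ~8.2 GOPs
    # counting a multiply-accumulate as two operations.
    ops_per_inference = 8.2e9
    fps = 700                               # claimed ResNet-50 throughput
    sustained_tops = ops_per_inference * fps / 1e12
    print(f"sustained ~{sustained_tops:.1f} TOPS")          # ~5.7 TOPS

    peak_tops = 20                          # Hailo-15H peak rating
    print(f"implied utilization ~{sustained_tops / peak_tops:.0%}")  # ~29%

An implied utilization near 30% of peak is plausible for a real network, so the 700 FPS figure is at least arithmetically consistent with the 20 TOPS rating.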

With this family of high-performance AI vision processors, Hailo is also pioneering the use of vision-based transformers in cameras for real-time object detection. The added AI capacity can also be utilized for video enhancement and much better video quality in low-light environments, for video stabilization, and high dynamic range performance.

Hailo-15-empowered cameras lower the total cost of ownership in massive camera deployments by offloading cloud analytics to save video bandwidth and processing, while improving overall privacy through data anonymization at the edge. The result is an ultra-high-quality AI-based video analytics solution that keeps people safer while ensuring their privacy, and allows organizations to operate more efficiently, at a lower cost and complexity of network infrastructure.

The Hailo-15 vision processor family, like the already widely deployed Hailo-8™ AI accelerator, is engineered to consume very little power, making it suitable for every type of IP camera and enabling the design of fanless edge devices. The small power envelope means camera designers can develop lower-cost products by leaving out an active cooling component. Fanless cameras are also better suited for industrial and outdoor applications where dirt or dust can otherwise impact reliability.

“By creating vision processors that offer high performance and low power consumption directly in cameras, Hailo has pushed the limits of AI processing at the edge,” said KS Park, Head of R&D for Truen, specialists in edge AI and video platforms. “Truen welcomes the Hailo-15 family of vision processors, embraces their potential, and plans to incorporate the Hailo-15 in the future generation of Truen smart cameras.”

“With Hailo-15, we’re offering a unique, complete and scalable suite of edge AI solutions,” Danon concluded. “With a single software stack for all our product families, camera designers, application developers, and integrators can now benefit from an easy and cost-effective deployment supporting more AI, more video analytics, higher accuracy, and faster inference time, exactly where they’re needed.”

Hailo will be showcasing its Hailo-15 AI vision processor at ISC-West in Las Vegas, Nevada, from March 28-31, at booth #16099.


Sony’s new SPAD-based dToF Sensor IMX611


https://www.sony-semicon.com/en/news/2023/2023030601.html

Sony Semiconductor Solutions to Release SPAD Depth Sensor for Smartphones with High-Accuracy, Low-Power Distance Measurement Performance, Powered by the Industry’s Highest*1 Photon Detection Efficiency

Atsugi, Japan — Sony Semiconductor Solutions Corporation (SSS) today announced the upcoming release of the IMX611, a direct time-of-flight (dToF) SPAD depth sensor for smartphones that delivers the industry’s highest*1 photon detection efficiency.

The IMX611 has a photon detection efficiency of 28%, the highest in the industry,*1 thanks to its proprietary single-photon avalanche diode (SPAD) pixel structure.*2 This reduces the power consumption of the entire system while enabling high-accuracy measurement of the distance of an object.

This new sensor will generate opportunities to create new value in smartphones, including functions and applications that utilize distance information.

In general, SPAD pixels are used as a type of detector in a dToF sensor, which acquires distance information by detecting the time of flight of light emitted from a source until it returns to the sensor after being reflected off an object.
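The underlying arithmetic is the standard dToF relation d = c·t/2. A minimal sketch of peak-finding on a SPAD timing histogram (a generic dToF illustration; the bin width and counts are assumptions, not IMX611 specifics):

    import numpy as np

    C = 299_792_458.0                    # speed of light, m/s

    # Generic dToF arithmetic: SPAD pixels time-stamp returning photons;
    # the histogram peak gives the round-trip time.
    bin_width_s = 250e-12                # 250 ps timing bins (assumed)
    histogram = np.zeros(1024)
    histogram[267] = 900                 # strong return in bin 267 (synthetic)
    histogram += np.random.default_rng(1).poisson(20, 1024)  # ambient photons

    peak_bin = int(np.argmax(histogram))
    round_trip_s = peak_bin * bin_width_s
    distance_m = C * round_trip_s / 2    # halve: light travels out and back
    print(f"{distance_m:.2f} m")         # ~10 m for bin 267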

The IMX611 uses a proprietary SPAD pixel structure that gives the sensor the industry’s highest*1 photon detection efficiency, at 28%, which makes it possible to detect even very weak photons that have been emitted from the light source and reflected off the object. This allows for highly accurate measurement of object distance. It also means the sensor can offer high distance-measurement performance even with lower light source laser output, thereby helping to reduce the power consumption of the whole smartphone system.

This sensor can accurately measure the distance to an object, making it possible to improve autofocus performance in low-light environments with poor visibility, to apply a bokeh effect to the subject’s background, and to seamlessly switch between wide-angle and telephoto cameras. All of these capabilities will improve the user experience of smartphone cameras. This sensor also enables 3D spatial recognition, AR occlusion,*4 motion capture/gesture recognition, and other such functions. With the spread of the metaverse in the future, this sensor will contribute to the functional evolution of VR head mounted displays and AR glasses, which are expected to see increasing demand.

By incorporating a proprietary signal processing function into the logic chip inside the sensor, the RAW information acquired from the SPAD pixels is converted into distance information entirely within the sensor. This approach reduces the load of post-processing, thereby simplifying overall system development.


CIS Revenues Fall


From Counterpoint Research: https://www.counterpointresearch.com/global-cis-market-annual-revenue-falls-for-first-time-in-a-decade/

Global CIS Market Annual Revenue Falls for First Time in a Decade


  • The global CIS market’s revenue fell 7% YoY in 2022 to $19 billion.
  • The mobile phone segment entered a period of contraction and its CIS revenue share fell below 70%.
  • Automotive CIS share rose to 9% driven by strong demand for ADAS and autonomous driving.
  • The surveillance and PC and tablet segments’ shares dipped as demand weakened in the post-COVID era.
  • We expect growth recovery in 2023 in the low single digits on improving smartphone markets and continued automotive growth.


Summary of ISSCC 2023 presentations


Please visit Harvest Imaging's recent blog post at https://harvestimaging.com/blog/?p=1828 for a summary of interesting papers at ISSCC 2023 written by Dan McGrath.


Prophesee Collaboration with Qualcomm


https://www.prophesee.ai/2023/02/27/prophesee-qualcomm-collaboration-snapdragon/ 

Prophesee Announces Collaboration with Qualcomm to Optimize Neuromorphic Vision Technologies For the Next Generation of Smartphones, Unlocking a New Image Quality Paradigm for Photography and Video

Highlights

  •  The world is neither raster-based nor frame-based. Inspired by the human eye, Prophesee Event-Based sensors repair motion blur and other image quality artefacts caused by conventional sensors, especially in highly dynamic scenes and low-light conditions, bringing photography and video closer to our true experiences.
  •  Collaborating with Qualcomm Technologies, Inc., a leading provider of premium mobile technologies, to help accelerate mobile industry adoption of Prophesee’s solutions.
  •  Companies join forces to optimize Prophesee’s neuromorphic Event-Based Metavision Sensors and software for use with the premium Snapdragon mobile platforms. Development kits expected to be available from Prophesee this year.

PARIS, February 27, 2023 – Prophesee today announced a collaboration with Qualcomm Technologies, Inc. that will optimize Prophesee’s Event-based Metavision sensors for use with premium Snapdragon® mobile platforms to bring the speed, efficiency, and quality of neuromorphic-enabled vision to mobile devices.

The technical and business collaboration will provide mobile device developers a fast and efficient way to leverage the Prophesee sensor’s ability to dramatically improve camera performance, particularly in fast-moving dynamic scenes (e.g. sport scenes) and in low light, through its breakthrough event-based continuous and asynchronous pixel sensing approach. Prophesee is working on a development kit to support the integration of the Metavision sensor technology for use with devices that contain next generation Snapdragon platforms.

How it works

Prophesee’s breakthrough sensors add a new sensing dimension to mobile photography. They change the paradigm in traditional image capture by focusing only on changes in a scene, pixel by pixel, continuously, at extreme speeds.

Each pixel in the Metavision sensor embeds a logic core, enabling it to act as a neuron.

They each activate themselves intelligently and asynchronously depending on the number of photons they sense. A pixel activating itself is called an event. In essence, events are driven by the scene’s dynamics, not by an arbitrary clock, so the acquisition speed always matches the actual scene dynamics.

High-performance event-based deblurring is achieved by synchronizing a frame-based and Prophesee’s event-based sensor. The system then fills the gaps between and inside the frames with microsecond events to algorithmically extract pure motion information and repair motion blur.
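For intuition, one published approach along these lines is the event-based double integral of Pan et al. (CVPR 2019); the sketch below is a simplified single-pixel version under an assumed contrast threshold, not Prophesee's or Qualcomm's actual algorithm.

    import numpy as np

    # Simplified single-pixel sketch of event-assisted deblurring, loosely
    # following the published "event-based double integral" idea -- not the
    # Prophesee/Qualcomm algorithm. Assumptions: contrast threshold c, and a
    # list of events (time, polarity) recorded during the exposure [0, T].
    c = 0.2
    T = 10e-3
    events = [(2e-3, +1), (3e-3, +1), (6e-3, -1)]     # synthetic events

    # Accumulated event polarity E(t), sampled densely over the exposure:
    ts = np.linspace(0, T, 1000)
    E = np.array([sum(p for (te, p) in events if te <= t) for t in ts])

    # Blur model: the recorded value averages the latent intensity over T,
    # i.e. blurry = latent_start * mean(exp(c * E)).  Invert it:
    blurry = 0.8                                      # recorded (blurred) value
    latent_start = blurry / np.exp(c * E).mean()
    print(f"deblurred start-of-exposure intensity: {latent_start:.3f}")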

Availability

A development kit featuring compatibility with Prophesee sensor technologies is expected to be available this year.


Panasonic introduces high sensitivity hyperspectral imager


From Imaging and Machine Vision Europe: https://www.imveurope.com/news/panasonic-develops-low-light-hyperspectral-imaging-sensor-worlds-highest-sensitivity 

Panasonic develops low-light hyperspectral imaging sensor with "world's highest" sensitivity

Panasonic has developed what it says is the world's highest sensitivity hyperspectral imaging technology for low-light conditions.

Based on ‘compressed sensing’, an approach previously used in medicine and astronomy, the technology was first demonstrated last month in Nature Photonics.

Conventional hyperspectral imaging technologies use optical elements such as prisms and filters to selectively pass and detect light of a specific wavelength assigned to each pixel of the image sensor. However, these technologies have a physical restriction in that light of the non-assigned wavelengths cannot be detected at each pixel, decreasing the sensitivity in inverse proportion to the number of wavelengths being captured.

Therefore, illumination with a brightness comparable to that of the outdoors on a sunny day (10,000 lux or more) is required to use such technologies, which decreases their usability and versatility.

The newly developed hyperspectral imaging technology instead employs ‘compressed’ sensing, which efficiently acquires images by "thinning out" the data and then reconstructing it. Such techniques have previously been deployed in medicine for MRI examinations, and in astronomy for black hole observations.

A distributed Bragg reflector (DBR) structure that transmits multiple wavelengths of light is implemented on the image sensor. This special filter transmits around 45% of incident light between 450 and 650 nm, divided into 20 wavelengths. It offers a sensitivity around 10 times higher than conventional technologies, which demonstrate a light-use efficiency of less than 5%. The filter is designed to appropriately thin out the captured data by transmitting incident light with randomly changing intensity for each pixel and wavelength. The image data is then reconstructed rapidly using a newly optimised algorithm. By leaving part of the colour-separating function to software, Panasonic has been able to overcome the previous trade-off between the number of wavelengths and sensitivity – the fundamental issue of conventional hyperspectral technologies.
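To make the "thin out, then reconstruct" idea concrete, here is a toy compressed-sensing recovery using a generic ISTA solver; the sensing matrix, sparsity level, and parameters are my illustrative choices, not Panasonic's filter design or algorithm.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy compressed sensing: recover a sparse 20-band spectrum x from
    # m < 20 randomized filter measurements y = A @ x.
    n, m = 20, 10
    x_true = np.zeros(n)
    x_true[[3, 11]] = [1.0, 0.6]                   # two active spectral bands
    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x_true                                 # what the sensor records

    # ISTA: gradient step on ||y - Ax||^2, then soft-threshold (sparsity prior).
    x = np.zeros(n)
    step = 1.0 / np.linalg.norm(A, 2) ** 2
    lam = 0.02
    for _ in range(1000):
        x = x + step * A.T @ (y - A @ x)
        x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

    print(np.round(x, 2))   # the two active bands should dominate the estimate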

This approach has made it possible to capture hyperspectral images and video with what Panasonic says is the world's highest sensitivity, under indoor levels of illumination (550 lux). This level of sensitivity enables a fast shutter speed of more than 30 fps, previously unachievable using conventional hyperspectral technologies due to their low sensitivity and consequently low frame rate, and significantly increases the new technology’s usability by making it easier to focus and align.

Application examples of the new technology, which was initially demonstrated alongside Belgian research institute imec, include the inspection of tablets and foods, as this can now be done without the risk of the previously required high levels of illumination raising their temperature.



Sony’s high-speed camera interface standard SLVS-EC


https://www.sony-semicon.com/en/technology/is/slvsec.html?cid=em_nl_20230228 

Scalable Low-Voltage Signaling with Embedded Clock (SLVS-EC) is a high-speed interface standard developed by Sony Semiconductor Solutions Corporation (SSS) for fast, high-resolution image sensors. The interface's simple protocol makes it easy to build camera systems. Featuring an embedded clock signal, it is ideal for applications that require larger capacity, higher speed, or transmission over longer distances. While introducing a wide range of SLVS-EC compliant products, SSS will continue to promote SLVS-EC as a standard interface for industrial image sensors that face increasing demands for more pixels and higher speed.



Enables implementation of high-speed, high-resolution image sensors without adding pins or enlarging the package. Supports up to 5 Gbps/lane. (As of November 2020.)

Uses the same 8b/10b encoding as in common interfaces. Can be connected to FPGAs or other common industrial camera components. With an embedded clock signal, SLVS-EC requires no skew adjustment between lanes and is a good choice for long-distance transmission. Simple protocol facilitates implementation.
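The 8b/10b figure implies a simple payload budget. A sketch with an assumed, hypothetical sensor format (my example, not from the SSS page):

    import math

    # 8b/10b encoding carries 8 payload bits in every 10 line bits, so a
    # 5 Gbps lane moves at most 4 Gbps of pixel data before protocol overhead.
    line_rate_bps = 5e9
    payload_per_lane = line_rate_bps * 8 / 10            # 4 Gbps

    # Hypothetical stream: 12 Mpixel, 12-bit, 60 fps.
    needed = 12e6 * 12 * 60                              # 8.64 Gbps
    print(math.ceil(needed / payload_per_lane), "lanes") # 3 lanes, before margin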

SLVS-EC is standardized by the Japan Industrial Imaging Association (JIIA).


ON Semi sensor sales jumped 42% in 2022


From Counterpoint Research: https://www.counterpointresearch.com/sensing-power-solutions-drive-onsemis-record-revenue-in-2022/

Sensing, Power Solutions Drive onsemi’s Record Revenue in 2022

2022 highlights
  • Delivered a record revenue of $8.3 billion at 24% YoY growth, primarily driven by strength in automotive and industrial businesses.
  • Reduction in price-to-value discrepancies, exiting volatile and competitive businesses and pivoting portfolio to high-margin products helped onsemi deliver strong earnings.
  • Revenue from auto and industrial end-markets increased 38% YoY to $4 billion and accounted for 68% of total revenues.
  • Intelligent Sensing Group revenue increased 42% YoY to $1.28 billion driven by the transition to higher-resolution sensors at elevated ASPs.
  • Non-GAAP gross margin was at 49.2%, an increase of 880 basis points YoY. The expansion was driven by manufacturing efficiencies, favorable mix and pricing, and reallocation of capacity to strategic and high-margin products.
  • Revenue from silicon carbide (SiC) shipments in 2022 was more than $200 million.
  • Revenue committed from SiC solutions through long-term supply agreements (LTSAs) increased to $4.5 billion.
  • Total LTSAs across the entire portfolio were at $16.6 billion exiting 2022.
  • Revenue from new product sales increased by 34% YoY.
  • Design wins increased 38% YoY.


Stanford University talk on Pixel Design


Dan McGrath (Senior Consultant) recently gave a talk titled "Insider’s View on Pixel Design" at the Stanford Center for Image Systems Engineering (SCIEN), Stanford University. It is a survey of challenges based on Dan's 40+ years of experience.

The full 1+ hour talk is available here:

Description:
The success of solid-state image sensors has been the cost-effective integration of mega-arrays of transducers into the design flow and manufacturing process that has been the basis of the success of integrated circuits in our industry. This talk will provide, from a front-line designer’s perspective, key challenges that have been overcome and that still exist to enable this: device physics, integration, manufacturing, and meeting customer expectations.

Further Information:
Dan McGrath has worked for over 40 years specializing in the device physics of pixels, both CCD and CIS, and in the integration of image-sensor process enhancements in the manufacturing flow. He received his doctorate in physics from Johns Hopkins University. He chose his first job because it offered that designing image sensors “means doing physics” and has kept this passion front-and-center in his work. He has worked at Texas Instruments, Polaroid, Atmel, Eastman Kodak, Aptina, BAE Systems and GOODiX Technology, and with manufacturing facilities in France, Italy, Taiwan, China and the USA. He has been involved with astronomers on the Galileo mission to Jupiter and on observations of Halley’s Comet, with commercial companies on cell phone imagers and biometrics, with the scientific community on microscopy and lab-on-a-chip, with robotics on 3D mapping sensors, and with defense contractors on night vision. His publications include the first megapixel CCD and the basis for dark current spectroscopy (DCS).



Ambient light resistant long-range time-of-flight sensor


Kunihiro Hatakeyama et al. of Toppan Inc. and Brookman Technology Inc. (Japan) published an article titled "A Hybrid ToF Image Sensor for Long-Range 3D Depth Measurement Under High Ambient Light Conditions" in the IEEE Journal of Solid-State Circuits.

Abstract: 

A new indirect time of flight (iToF) sensor realizing long-range measurement of 30 m has been demonstrated by a hybrid ToF (hToF) operation, which uses multiple time windows (TWs) prepared by multi-tap pixels and range-shifted subframes. The VGA-resolution hToF image sensor with 4-tap and 1-drain pixels, fabricated in a BSI process, can measure a depth of up to 30 m indoors and 20 m outdoors under high ambient light of 100 klux. The new hToF operation overlaps TWs between subframes to mitigate motion artifacts. The sensor works at 120 frames/s for single-subframe operation. Interference between multiple ToF cameras in IoT systems is suppressed by a technique of emission cycle-time changing.
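For intuition about why range-shifted time windows extend reach, here is generic iToF time-window arithmetic; the tap counts and window widths below are my illustrative choices, not the paper's exact timings.

    C = 299_792_458.0

    # Light needs ~6.67 ns of round trip per meter of range, so 30 m of
    # range corresponds to a 200 ns round-trip span to cover.
    round_trip_ns_per_m = 2 / C * 1e9
    print(f"{round_trip_ns_per_m:.2f} ns/m")        # ~6.67 ns per meter

    max_range_m = 30
    span_ns = max_range_m * round_trip_ns_per_m     # ~200 ns total
    # With, say, 4 taps each gating a 10 ns window, one subframe observes a
    # 40 ns slice (~6 m); range-shifting the windows across subframes tiles
    # the full span -- the "hybrid ToF" idea described in the abstract.
    tap_window_ns, taps = 10, 4
    per_subframe_m = taps * tap_window_ns / round_trip_ns_per_m
    print(f"{per_subframe_m:.1f} m per subframe")   # ~6 m
    print(f"{span_ns / (taps * tap_window_ns):.0f} subframes to reach 30 m")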

Full paper: https://doi.org/10.1109/JSSC.2023.3238031


PetaPixel article on limits of computational photography


Full article: https://petapixel.com/2023/02/04/the-limits-of-computational-photography/

Some excerpts below:

On the question of whether dedicated cameras are better than today's smartphone cameras, the author argues: “yes, dedicated cameras have some significant advantages. Primarily, the relevant metric is what I call ‘photographic bandwidth’ – the information-theoretic limit on the amount of optical data that can be absorbed by the camera under given photographic conditions (ambient light, exposure time, etc.).”

Cell phone cameras only get a fraction of the photographic bandwidth that dedicated cameras get, mostly due to size constraints. 
 
There are various factors that enable a dedicated camera to capture more information about the scene:
  • Objective Lens Diameter
  • Optical Path Quality
  • Pixel Size and Sensor Depth
Computational photography algorithms try to correct the following types of errors (a toy sketch follows the list):
  • “Injective” errors. Errors where photons end up in the “wrong” place on the sensor, but they don’t necessarily clobber each other. E.g. if our lens causes the red light to end up slightly further out from the center than it should, we can correct for that by moving red light closer to the center in the processed photograph. Some fraction of chromatic aberration is like this, and we can remove a bit of chromatic error by re-shaping the sampled red, green, and blue images. Lenses also tend to have geometric distortions which warp the image towards the edges – we can un-warp them in software. Computational photography can actually help a fair bit here.
  • “Informational” errors. Errors where we lose some information, but in a non-geometrically-complicated way. For example, lenses tend to exhibit vignetting effects, where the image is darker towards the edges of the lens. Computational photography can’t recover the information lost here, but it can help with basic touch-ups like brightening the darkened edges of the image.
  • “Non-injective” errors. Errors where photons actually end up clobbering pixels they shouldn’t, such as coma. Computational photography can try to fight errors like this using processes like deconvolution, but it tends to not work very well.
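A toy numpy sketch of the first two error classes from the list above (my example, not the article's code): an invertible geometric remap can be undone almost exactly, while vignetting can be re-brightened but its lost SNR cannot be recovered.

    import numpy as np

    # Work on a 1-D "image" for brevity.
    truth = np.sin(np.linspace(0, np.pi, 100)) + 1.0

    # "Injective" error: a known geometric warp moves samples around without
    # destroying them -- invert it by resampling at the warped positions.
    xs = np.linspace(0, 1, 100)
    warped_positions = xs ** 1.2                  # barrel-like distortion
    captured = np.interp(warped_positions, xs, truth)
    unwarped = np.interp(xs, warped_positions, captured)

    # "Informational" error: vignetting darkens edges. Dividing by the known
    # falloff re-brightens them, but any photon noise there is amplified too;
    # the lost SNR is not recoverable, which is the author's point.
    falloff = 0.5 + 0.5 * np.cos(np.linspace(-1, 1, 100))
    vignetted = truth * falloff
    restored = vignetted / falloff

    print(np.max(np.abs(unwarped - truth)))      # small interpolation residual
    print(np.allclose(restored, truth))          # True (noise-free toy)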
The author then goes on to criticize the practice of imposing too strong a "prior" in computational photography algorithms, so much that the camera might "just be guessing" what the image looks like with very little real information about the scene. 


TRUMPF industrializes SWIR VCSELs above 1.3 micron wavelength


From Yole industry news: https://www.yolegroup.com/industry-news/trumpf-reports-breakthrough-in-industrializing-swir-vcsels-above-1300-nm/

TRUMPF reports breakthrough in industrializing SWIR VCSELs above 1300 nm

TRUMPF Photonic Components, a global leader in VCSEL and photodiode solutions, is industrializing the production of SWIR VCSELs above 1300 nm to support high-volume applications such as under-OLED sensing in smartphones. The company demonstrates outstanding results regarding the efficiency of infrared laser components with long wavelengths beyond 1300 nm on an industrial-grade manufacturing level. This takes TRUMPF one step further towards mass production of indium-phosphide-based (InP) VCSELs in the range from 1300 nm to 2000 nm.

“At TRUMPF we are working hard to mature this revolutionary production process and to implement standardization, which would further develop this outstanding technology into a cost-attractive solution. We aim to bring the first products to the high-volume market in 2025,” said Berthold Schmidt, CEO at TRUMPF Photonic Components.

By developing the new industrial production platform, TRUMPF is expanding its current portfolio of gallium-arsenide-based (GaAs) VCSELs in the 760 nm to 1300 nm range for NIR applications. The new platform is more flexible in the longer-wavelength spectrum than GaAs, but it still provides the same benefits of compact, robust and economical light sources. “The groundwork for the successful implementation of long-wavelength VCSELs in high volumes has been laid. But we also know that it is still a way to go, and major production equipment investments have to be made before ramping up mass production,” said Schmidt.

VCSELs to conquer new application fields

A broad application field can be revolutionized by the industrialization of long-wavelength VCSELs, as SWIR VCSELs can be used in applications with higher output power while remaining eye-safe compared to shorter-wavelength VCSELs. The long-wavelength solution is also less susceptible to disturbing light, such as sunlight, across a broader wavelength regime. One popular example from the mass markets of smartphones and consumer electronics devices is under-OLED applications. The InP-based VCSELs can easily be placed below OLED displays, without disturbing other functionalities and with the benefit of higher eye-safety standards. OLED displays are a huge application field for long-wavelength sensor solutions. “In future we expect high-volume projects not only in the fields of consumer sensing, but automotive LiDAR, data communication applications for longer reach, medical applications such as spectroscopy, as well as photonic integrated circuits (PICs) and quantum photonic integrated circuits (QPICs). The related demands enable the SWIR VCSEL technology to make a breakthrough in mass production,” said Schmidt.

Exceptional test results

TRUMPF presents results showing VCSEL laser performance up to 140°C at ~1390 nm wavelength. The technology used for fabrication is scalable for mass production and the emission wavelength can be tuned between 1300 nm to 2000 nm, resulting in a wide range of applications. Recent results show good reproducible behavior and excellent temperature performance. “I’m proud of my team, as it’s their achievement that we can present exceptional results in the performance and robustness of these devices”, said Schmidt. “We are confident that the highly efficient, long wavelength VCSELs can be produced at high yield to support cost-effective solutions”, Schmidt adds.


ON Semi announces that it will be manufacturing image sensors in New York


Press release: https://www.onsemi.com/company/news-media/press-announcements/en/onsemi-commemorates-transfer-of-ownership-of-east-fishkill-new-york-facility-from-globalfoundries-with-ribbon-cutting-ceremony

onsemi Commemorates Transfer of Ownership of East Fishkill, New York Facility from GlobalFoundries with Ribbon Cutting Ceremony

  • Acquisition and investments planned for ramp-up at the East Fishkill (EFK) fab create onsemi’s largest U.S. manufacturing site
  • EFK enables accelerated growth and differentiation for onsemi’s power, analog and sensing technologies
  • onsemi retains more than 1,000 jobs at the site
PHOENIX – Feb. 10, 2023 – onsemi (Nasdaq: ON) a leader in intelligent power and sensing technologies, today announced the successful completion of its acquisition of GlobalFoundries’ (GF’s) 300 mm East Fishkill (EFK), New York site and fabrication facility, effective December 31, 2022. The transaction added more than 1,000 world-class technologists and engineers to the onsemi team. Highlighting the importance of manufacturing semiconductors in the U.S., the company celebrated this milestone event with a ribbon-cutting ceremony led by Senate Majority Leader Chuck Schumer (NY), joined by Senior Advisor to the Secretary of Commerce on CHIPS Implementation J.D. Grom. Also in attendance were several other local governmental dignitaries.

Over the last three years, onsemi has been focusing on securing a long-term future for the EFK facility and its employees, making significant investments in its 300 mm capabilities to accelerate growth in the company’s power, analog and sensing products, and enable an improved manufacturing cost structure. The EFK fab is the largest onsemi manufacturing facility in the U.S., adding advanced CMOS capabilities (including 40 nm and 65 nm technology nodes with the specialized processing required for image sensor production) to the company’s manufacturing profile. The transaction includes an exclusive commitment to supply GF with differentiated semiconductor solutions and investments in research and development as both companies collaborate to build on future growth.

“With today’s ribbon cutting, onsemi will preserve more than 1,000 local jobs, continue to boost the state’s leadership in the semiconductor industry, and supply ‘Made in New York' chips for everything from electric vehicles to energy infrastructure across the country,” said Senator Schumer. “I am elated that onsemi has officially made East Fishkill home to its leading and largest manufacturing fab in the U.S. onsemi has already hired nearly 100 new people and committed $1.3 billion to continue the Hudson Valley’s rich history of science and technology for future generations. I have long said that New York had all the right ingredients to rebuild our nation’s semiconductor industry, and personally met with onsemi’s top brass multiple times to emphasize this as I was working on my historic CHIPS legislation. Thanks to my CHIPS and Science Act, we are bringing manufacturing back to our country and strengthening our supply chains with investments like onsemi’s in the Hudson Valley.”

The EFK facility contributes to the community by retaining more than 1,000 jobs. With the recent passage of the Federal CHIPS and Science Act as well as the New York Green CHIPS Program, onsemi will continue to evaluate opportunities for expansion and growth in East Fishkill and its contribution to the surrounding community. Earlier today, the Rochester Institute of Technology (RIT) announced that onsemi has pledged to donate $500,000 over 10 years to support projects and education aimed at increasing the pipeline of engineers in the semiconductor industry.

“onsemi appreciates Senate Majority Leader Schumer’s unwavering commitment to ensure American leadership in semiconductors and chip manufacturing investments in New York,” said Hassane El-Khoury, president and chief executive officer, onsemi. “With the addition of EFK to our manufacturing footprint, onsemi will have the only 12-inch power discrete and image sensor fab in the U.S., enabling us to accelerate our growth in the megatrends of vehicle electrification, ADAS, energy infrastructure and factory automation. We look forward to working with Empire State Development and local government officials to find key community programs and educational partnerships that will allow us to identify, train and employ the next generation of semiconductor talent in New York.”


ST introduces new sensors for computer vision, AR/VR


ST has released a new line of global-shutter image sensors with an embedded optical flow feature that is fully autonomous, requiring no host computation or assistance. This can save power and bandwidth and free up host resources that would otherwise be needed for optical flow computation. From the optical flow data, a host processor can compute visual odometry (SLAM or camera trajectory) without needing the full RGB image. The optical flow data can be interlaced with the standard image stream on any of the monochrome, RGB Bayer or RGB-IR sensor versions.


Canon Announces 148dB (24 f-stop) Dynamic Range Sensor


Canon develops CMOS sensor for monitoring applications with industry-leading dynamic range, automatic exposure optimization function for each sensor area that improves accuracy for recognizing moving subjects


TOKYO, January 12, 2023—Canon Inc. announced today that the company has developed a 1.0-inch, back-illuminated stacked CMOS sensor for monitoring applications that achieves an effective pixel count of approximately 12.6 million pixels (4,152 x 3,024) and provides an industry-leading1 dynamic range of 148 decibels2 (dB). The new sensor divides the image into 736 areas and automatically determines the best exposure settings for each area. This eliminates the need for synthesizing images, which is often necessary when performing high-dynamic-range photography in environments with significant differences in brightness, thereby reducing the amount of data processed and improving the recognition accuracy of moving subjects.

With the increasingly widespread use of monitoring cameras in recent years, there has been a corresponding growth in demand for image sensors that can capture high-quality images in environments with significant differences in brightness, such as stadium entrances and nighttime roads. Canon has developed a new sensor for such applications, and will continue to pursue development of sensors for use in a variety of fields.

The new sensor realizes a dynamic range of 148 dB—the highest-level performance in the industry among image sensors for monitoring applications. It is capable of image capture at light levels ranging from approximately 0.1 lux to approximately 2,700,000 lux. The sensor's performance holds the potential for use in such applications as recognizing both vehicle license plates and the driver's face at underground parking entrances during daytime, as well as combining facial recognition and background monitoring at stadium entrances.
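The quoted lux span is consistent with the headline figure:

    import math

    # Quick consistency check of the quoted numbers: a 0.1 lux to
    # 2,700,000 lux scene span, expressed in decibels.
    ratio = 2_700_000 / 0.1
    print(f"{20 * math.log10(ratio):.1f} dB")   # ~148.6 dB, matching the claim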

 1Among market for CMOS sensors used in monitoring applications. As of January 11, 2023. Based on Canon research.

 2Dynamic range at 30 fps is 148 dB. Dynamic range at approx. 60 fps is 142 dB.

In order to produce a natural-looking image when capturing images in environments with both bright and dark areas, conventional high-dynamic-range image capture requires taking multiple separate photos under different exposure conditions and then synthesizing them into a single image. Because exposure times vary in length, this synthesis processing often results in a problem called "motion artifacts," in which images of moving subjects are merged but do not overlap completely, resulting in a final image that is blurry. Canon's new sensor divides the image into 736 distinct areas, each of which can automatically be set to the optimal exposure time based on brightness level. This prevents the occurrence of motion artifacts and makes possible facial recognition with greater accuracy even when scanning moving subjects. What's more, image synthesizing is not required, thereby reducing the amount of data to be processed and enabling high-speed image capture at speeds of approximately 60 frames-per-second3 (fps) and a high pixel count of approximately 12.6 million pixels.

 3Dynamic range at 30 fps is 148 dB. Dynamic range at approx. 60 fps is 142 dB.

Video is comprised of a series of individual still images (single frames). However, if exposure conditions for each frame are not specified within the required time for that frame, it becomes difficult to track and capture subjects in environments subject to significant changes in brightness, or in scenarios where the subject is moving at high speed. Canon's new image sensor is equipped with multiple CPUs and dedicated processing circuitry, enabling it to quickly and simultaneously specify exposure conditions for all 736 areas within the allotted time per frame. In addition, image capture conditions can be specified according to environment and use case. Thanks to these capabilities, the sensor is expected to serve a wide variety of purposes, including fast and highly accurate subject detection on roads or in train stations, as well as stadium entrances and other areas where there are commonly significant changes in brightness levels.

Example use case for new sensor
  • Parking garage entrance, afternoon: with conventional cameras, the vehicle's license plate is not legible due to whiteout, while the driver's face is not visible due to crushed blacks. The new sensor enables recognition of both the license plate and the driver's face.
  • The new sensor realizes an industry-leading high dynamic range of 148 dB, enabling image capture in environments with brightness levels ranging from approx. 0.1 lux to approx. 2,700,000 lux. For reference, 0.1 lux is equivalent to the brightness of a full moon at night, while 500,000 lux is equivalent to filaments in lightbulbs and vehicle headlights.

Technology behind the sensor's wide dynamic range

With conventional sensors, in order to produce a natural-looking image when capturing images in environments with both bright and dark areas, high-dynamic-range image capture requires taking multiple separate photos under different exposure conditions and then synthesizing them into a single image. (In the diagram below, four exposure types are utilized per single frame).

With Canon's new sensor, optimal exposure conditions are automatically specified for each of the 736 areas, thus eliminating the need for image synthesis.

Technology behind per-area exposure

Per-area exposure is determined in four steps:

  • (1) Generate movement map: the portions in which the subject moves are detected from discrepancies between the first image (one frame prior) and the second image (two frames prior).
  • (2) Generate luminance map: in the first image (one frame prior), the brightness of the subject is recognized for each area.4
  • (3) Reduce adjacent exposure discrepancy: differences in brightness levels between adjacent areas are checked to ensure they are not excessive.
  • (4) Specify final exposure conditions: exposure conditions are corrected based on information from the movement map.

The final exposure conditions (4) are then applied to the images of the corresponding frames.
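Read as an algorithm, the four steps map onto a small grid computation. Below is a schematic re-creation of my reading of the description; the grid is shrunk from 736 areas to 4×4, and the thresholds and limits are invented for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    prev1 = rng.uniform(0, 1, (4, 4))              # frame n-1 luminance per area
    prev2 = prev1 + rng.normal(0, 0.05, (4, 4))    # frame n-2

    movement = np.abs(prev1 - prev2) > 0.08        # (1) movement map
    luminance = prev1                              # (2) luminance map

    # Brighter area -> shorter exposure (clip avoids division by zero):
    exposure = 1.0 / np.clip(luminance, 0.05, None)

    # (3) crude pass limiting each area to within 2x of its neighbor
    # (wrap-around at the edges ignored for brevity):
    for axis in (0, 1):
        neighbor = np.roll(exposure, 1, axis)
        ratio = np.clip(exposure / neighbor, 0.5, 2.0)
        exposure = neighbor * ratio

    exposure[movement] *= 0.5                      # (4) shorten where motion seen
    print(np.round(exposure, 2))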


4 The original diagram is a simplified visualization. The actual sensor is divided into 736 areas.


New SWIR Sensor from NIT


NSC2001 is NIT's "Triple H" SWIR sensor:
  • High Dynamic Range: operating with a combined linear and logarithmic response, it exhibits more than 120 dB of dynamic range (see the arithmetic sketch after this list)
  • High Speed: capable of generating up to 1,000 frames per second in full-frame mode, and much more with sub-windowing
  • High Sensitivity: low noise (< 50 e-)
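The arithmetic sketch referenced above: with a roughly 50 e- noise floor, 120 dB cannot come from a linear response alone, which is why the logarithmic mode matters (my arithmetic, not NIT's datasheet):

    import math

    # A purely linear pixel would need an enormous full well to span 120 dB
    # above a ~50 e- noise floor.
    noise_e = 50
    dr_db = 120
    full_well_needed = noise_e * 10 ** (dr_db / 20)
    print(f"{full_well_needed:.0f} e-")    # 50,000,000 e-, far beyond typical
    # linear full wells (tens of thousands of e-), hence the log-mode extension.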

NSC2001 fully benefits from NIT's new manufacturing line, installed in their brand-new clean room, which includes their high-yield hybridization process. The new facility allows NIT to cover the entire design and manufacturing cycle of these sensors in volume, with a level of quality never achieved before.

Moreover, NSC2001 was designed to address new markets that could not invest in expensive and difficult-to-use SWIR cameras. As a result, NIT's WiDy SenS 320 camera, based on NSC2001, has the lowest price point on the market, even in unit quantities.

Typical applications for NSC2001 are optical metrology and testing, additive manufacturing, welding, & laser communication, etc.


Workshop on Infrared Detection for Space Applications June 7-9, 2023 in Toulouse, France


CNES, ESA, ONERA, CEA-LETI, Labex Focus, Airbus Defence & Space and Thales Alenia Space are pleased to announce that they are organising the second workshop dedicated to Infrared Detection for Space Applications, to be held in Toulouse from June 7th to 9th, 2023, within the framework of the Optics and Optoelectronics Technical Expertise Community (COMET).

The aim of this workshop is to focus on infrared detector technologies and components, focal plane arrays and associated subsystems, control and readout ASICs, manufacturing, characterization and qualification results. The workshop will only address IR spectral bands between 1 µm and 100 µm. Due to the commonalities with space applications and the increasing interest of space agencies in qualifying and using COTS IR detectors, companies and laboratories involved in defence applications, scientific applications and non-space cutting-edge developments are very welcome to attend this workshop.

The workshop will comprise several sessions addressing the following topics:

  • Detector needs for future space missions,
  • Infrared detectors and technologies including (but not limited to):
    • Photon detectors: MCT, InGaAs, InSb, XBn, QWIP, SL, intensified, Si:As, ...
    • Uncooled thermal detectors: microbolometers (a-Si, VOx), pyroelectric detectors ...
    • ROIC (including design and associated Si foundry aspects).
    • Optical functions on detectors
  • Focal Plane technologies and solutions for Space or Scientific applications including subassembly elements such as:
    • Assembly techniques for large FPAs,
    • Flex and cryogenic cables,
    • Passive elements and packaging,
    • Cold filters, anti-reflection coatings,
    • Proximity ASICs for IR detectors,
  • Manufacturing techniques from epitaxy to package integration,
  • Characterization techniques,
  • Space qualification and validation of detectors and ASICs,
  • Recent Infrared Detection Chain performances and Integration from a system point of view.

Three tutorials will be given during this workshop.

Please send a short abstract giving the title, the authors’ names and affiliations, and presenting the subject of your talk, to following contacts: anne.rouvie@cnes.fr and nick.nelms@esa.int.

The workshop official language is English (oral presentation and posters).

After abstract acceptance notification, authors will be requested to prepare their presentation in pdf or PowerPoint format, to be presented at the workshop. Authors will also be required to provide a version of their presentation to the organization committee along with an authorization to make it available for Workshop attendees and on-line for COMET members. No proceedings will be compiled and so no detailed manuscript needs to be submitted.


Recent Industry News: Sony, SK Hynix


Sony separates production of cameras for China and non-China markets

Link: https://asia.nikkei.com/Business/Electronics/Sony-separates-production-of-cameras-for-China-and-non-China-markets


TOKYO -- Sony Group has transferred production of cameras sold in the Japanese, U.S. and European markets to Thailand from China, part of growing efforts by manufacturers to protect supply chains by reducing their Chinese dependence.

Sony's plant in China will in principle produce cameras for the domestic market. Until now, Sony cameras were exported from China and Thailand. The site will retain some production facilities to be brought back online in emergencies. 

After tensions heightened between Washington and Beijing, Sony first shifted manufacturing of cameras bound for the U.S. The transfer of the production facilities for Japan- and Europe-bound cameras was completed at the end of last year. 

Sony offers the Alpha line of high-end mirrorless cameras. The company sold roughly 2.11 million units globally in 2022, according to Euromonitor. Of those, China accounted for 150,000 units, with the rest, or 90%, sold elsewhere, meaning the bulk of Sony's Chinese production has been shifted to Thailand. 

On the production shift, Sony said it "continues to focus on the Chinese market and has no plans of exiting from China."

Sony will continue making other products, such as TVs, game consoles and camera lenses, in China for export to other countries. 

The manufacturing sector has been working to address a heavy reliance on Chinese production following supply chain disruptions caused by Beijing's zero-COVID policy.

Canon in 2022 closed part of its camera production in China, shifting it back to Japan. Daikin Industries plans to establish a supply chain to make air conditioners without having to rely on Chinese-made parts within fiscal 2023.

Sony ranks second in global market share for cameras, following Canon. Its camera-related sales totaled 414.8 billion yen ($3.2 billion) in fiscal 2021, about 20% of its electronics business.


SK Hynix reshuffles CIS team to focus on high-end products

Link: https://www.thelec.net/news/articleView.html?idxno=4379

SK Hynix has reshuffled its CMOS image sensor (CIS) team in a bid to shift focus from expanding market share to developing high-end products, TheElec has learned.

Its CIS team was a singular organization prior to the changes, but the company has now created sub-teams that focus on specific functions and features of image sensors.

Overall, the team is now more of a research and development team rather than sales and marketing.

CIS is used widely in smartphones and IT products for its camera features.

Sony is the world's largest producer of the component, followed by Samsung.

The pair focus on high-resolution, multi-function sensors and together control between 70% and 80% of the market; Sony is the overwhelming leader with around 50% market share.

SK Hynix is a smaller player in the field and in the past had focused on low-end CIS with 20MP or below resolution.

The company, however, started to supply its CIS to Samsung in 2021. It provided its 13MP CIS for Samsung's foldable phones and last year provided 50MP sensors for the Galaxy A series.

Still, the overall demand for CIS has dropped in recent years as smartphones that mainly use them are suffering from a slowdown in demand.

This has been especially pronounced for mid-tier phones, whose unit prices have dropped in response to low consumer demand.

SK Hynix has been reducing its CIS output in light of this and is also reducing its inventory, the sources said.


Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2023


Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2023 


Submission Deadline: April 7, 2023
Contact: iccp2023programchairs@googlegroups.com

The ICCP 2023 Call-for-Papers is released on the conference website. ICCP is an international venue for disseminating and discussing new scholarly work in computational photography and novel imaging, sensor, and optics techniques.

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference. 

Learn more on the ICCP 2023 website, and submit your latest advancements by Friday, 7th April, 2023. 


Global Image Sensor Market Forecast to Grow Nearly 11% through 2030


Link: https://www.novuslight.com/global-image-sensor-market-forecast-at-17-6-billion-in-2020_N12654.html

The global image sensor market was valued at ~US$17.6 billion in 2020. It is forecast to reach ~US$48 billion in revenue by 2030, registering a compound annual growth rate (CAGR) of 10.7% over the forecast period from 2021 to 2030.
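The forecast figures are internally consistent:

    # Consistency check of the quoted forecast:
    start, cagr, years = 17.6, 0.107, 10          # $B in 2020, 10.7%, 2021-2030
    print(f"${start * (1 + cagr) ** years:.1f}B") # ~$48.6B, matching ~$48B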

Factors Influencing
The global image sensor market is expected to gain traction in the upcoming years because of growing demand for image sensor technology in the automotive industry. Image sensors convert optical images into electronic signals, and demand for them is expected to increase due to their applications in digital cameras.

Moreover, constant advancements in Complementary metal-oxide-semiconductor (CMOS) imaging technology would positively impact the growth of the global image sensors market. Recent advancements in CMOS technology have improved visualization presentations of the machines. Moreover, the cost-effectiveness of these technologies, together with better performance, would bolster the growth of the global image sensor market during the analysis period.

The growing adoption of smartphones and advancements in the industry are driving the growth of the global image sensor market. The dual-camera trend in smartphones and tablets is forecast to accelerate this growth. In addition, strong demand for advanced medical imaging systems would present promising opportunities for the prominent market players during the forecast timeframe.

Various companies are coming up with advanced image sensors with artificial intelligence capabilities. Sony Corporation (Japan) recently launched the IMX500, the world's first intelligent vision sensor, which carries out machine learning and boosts computer vision operations automatically. Such advancements are forecast to prompt the growth of the global image sensor market in the coming years.

Furthermore, the growing trend of smartphone photography has surged demand for image sensors that provide clear, high-quality output. Growing demand for 48 MP and 64 MP cameras would also contribute to the growth of the global image sensor market in the future.

Regional Analysis
Asia-Pacific is forecast to hold the maximum share, with the highest revenue, in the global image sensor market. The region's growth is attributed to increasing research and development activities. Moreover, the growing number of accidents in the region is boosting the use of ADAS (advanced driver assistance systems), together with progressive image sensing proficiencies, which is expected to drive demand for image sensors in the region during the forecast period.

Covid-19 Impact Analysis
The use of image sensors in smartphones has been a key driver of the market's growth. However, demand for smartphones declined severely during the pandemic, which rapidly slowed the growth of the global image sensor market.


International Image Sensors Workshop (IISW) 2023 Program and Pre-Registration Open


The 2023 International Image Sensors Workshop announces the technical programme and opens the pre-registration to attend the workshop.

Technical Programme is announced: The Workshop programme runs from May 22nd to 25th, with attendees arriving on May 21st. The programme features 54 regular presentations and 44 posters, with presenters from industry and academia, organized into 10 engaging sessions across 4 days in a single-track format. On one afternoon, there are social trips to Stirling Castle or the Glenturret Whisky Distillery. Click here to see the technical programme.

Pre-Registration is Open: Pre-registration is now open until Monday, 6th Feb. Click here to pre-register and express your interest in attending.

Go to the original article...

PhotonicsSpectra article on quantum dots-based SWIR Imagers

Image Sensors World        Go to the original article...

Full article available here:
https://www.photonics.com/Articles/New_Sensor_Materials_and_Designs_Deepen_SWIR/a68543

Some excerpts below:

Cameras that sense wavelengths between 1000 and 2500 nm can often pick up details that would otherwise be hidden in images captured by conventional CMOS image sensors (CIS) that operate in the visible range. SWIR cameras can not only view details obscured by plastic sunglasses (a) and packaging (b), they can also peer through silicon wafers to spot voids after the bonding process (c). QD: quantum dot. Courtesy of imec.

A SWIR imaging forecast shows emerging sensor materials taking a larger share of the market, while incumbent InGaAs sees little gain, and the use of other materials grows at a faster rate. OPD: organic photodetector. Courtesy of IDTechEx.


Quantum dots act as SWIR photodetectors if they are sized correctly. When placed on a readout circuit, they form a SWIR imaging sensor.


The price for SWIR cameras today can run in the tens of thousands of dollars, which is too expensive for many applications and has inhibited wider use of the technology.

Silicon, the dominant sensor material for visible imaging, does not absorb SWIR photons without surface modification — and even then, it performs poorly. As a result, most SWIR cameras today use sensors based on indium gallium arsenide (InGaAs), ...

... sensors based on colloidal quantum dots (QDs) are gaining interest. The technology uses nanocrystals made of semiconductor materials, such as lead sulfide (PbS), that absorb in the SWIR. By adjusting the size of the nanocrystals used, sensor fabricators can create photodetectors that are sensitive from the visible to 2000 nm or even longer wavelengths.
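
To make the size-to-wavelength tuning concrete, a simple effective-mass (Brus) approximation can estimate how a PbS dot's absorption edge shifts with diameter. The sketch below is illustrative only: the bulk bandgap, effective masses, and dielectric constant are rough assumed literature-style values, not figures from the article, and the Brus model is known to be only qualitative for strongly confined lead-salt dots:

    import math

    # Physical constants (SI units)
    HBAR = 1.0545718e-34    # reduced Planck constant, J*s
    M0   = 9.1093837e-31    # electron rest mass, kg
    E0   = 1.602176634e-19  # elementary charge, C
    EPS0 = 8.8541878e-12    # vacuum permittivity, F/m

    # Rough PbS parameters (assumed, order-of-magnitude values)
    EG_BULK = 0.41          # bulk bandgap, eV
    ME, MH  = 0.09, 0.09    # electron/hole effective masses, units of m0
    EPS_R   = 17.0          # relative permittivity

    def pbs_dot_gap_ev(diameter_nm):
        """Brus-model bandgap (eV) of a PbS quantum dot of given diameter."""
        r = diameter_nm * 1e-9 / 2.0
        confinement = (HBAR**2 * math.pi**2) / (2 * r**2) \
                      * (1 / (ME * M0) + 1 / (MH * M0))
        coulomb = 1.8 * E0**2 / (4 * math.pi * EPS0 * EPS_R * r)
        return EG_BULK + (confinement - coulomb) / E0

    for d in (4, 6, 8, 10):  # dot diameters in nm
        e_gap = pbs_dot_gap_ev(d)
        print(f"{d} nm dot: Eg ~ {e_gap:.2f} eV, cutoff ~ {1240/e_gap:.0f} nm")

Larger dots confine carriers less, so the gap shrinks toward the 0.41 eV bulk value and the cutoff moves deeper into the SWIR; that size dependence is the tuning knob the article describes.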

... performance has steadily improved with the underlying materials and processing science, according to Pawel Malinowski, program manager of pixel innovations at imec. The organization’s third-generation QD-based image sensor debuted a couple of years ago with an efficiency of 45%. Newer sensors have delivered above 60% efficiency.

Fabricating QD photodiodes and sensors is also inexpensive because the sensor stack consists of a QD layer a few hundred nanometers thick, along with conducting, structural, and protective layers, Klem said. The stack goes atop a CMOS readout circuit in a pixel array. The technique can accommodate high-volume manufacturing processes and produce either large or small pixel arrays. Compared to InGaAs technology, QD sensors offer higher resolution and lower noise levels, along with fast response times.

Emberion, a startup spun out of Nokia, also makes QD-based SWIR cameras ... The quantum efficiency of these sensors is only 20% at 1800 nm... [but] ... at about half the price of InGaAs-based systems... .

[Another company TriEye is secretive about whether they use QD detectors but...] Academic papers co-authored by one of the company’s founders around the time that TriEye came into existence discuss pyramid-shaped silicon nanostructures that detect SWIR photons via plasmonic enhancement of internal photoemission.

Go to the original article...

Registrations Open for Harvest Imaging Forum (Apr 5-6, 2023)

Image Sensors World        Go to the original article...

When: April 5 and 6, 2023
Where: Delft, the Netherlands
Forum Topic: Imaging Beyond the Visible
Speaker: Prof. dr. Pierre Magnan (ISAE-SUPAERO, France)
Registration link: https://harvestimaging.com/forum_registration_2023_new.php

More information can be found here: https://harvestimaging.com/forum_introduction_2023_new.php

Following the Harvest Imaging forums of the past decade, the next, ninth edition will be organized on April 5 & 6, 2023 in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is a scientific and technical in-depth discussion of one particular topic of great importance and value to digital imaging. The 2023 forum will again be organized in hybrid form:

  • You can attend in person and benefit fully from live interaction with the speaker and audience,
  • There will also be a live broadcast of the forum, with interaction with the speaker possible through a chat box,
  • Finally, the forum can also be watched online at a later date.

The 2023 Harvest Imaging forum will deal with a single topic from the field of solid-state imaging and will have only one world-level expert as the speaker.

Register here: https://harvestimaging.com/forum_registration_2023_new.php

 

"Imaging Beyond the Visible"
Prof. dr. Pierre MAGNAN (ISAE-SUPAERO, Fr)
 

Abstract:
Two decades of intensive and tremendous effort have pushed imaging capabilities in the visible domain closer to physical limits, but they have also extended attention to new areas beyond visible-light intensity imaging. Examples can be found at higher photon energies, with the appearance of CMOS ultraviolet imaging capabilities, or in other dimensions of light, with polarization imaging possibilities, both in monolithic form suitable for common camera architectures.

But one of the most active and impressive fields is the extension of interest to spectral ranges significantly beyond the visible, in the infrared domain. Special focus is put on the short-wave infrared (SWIR), used in reflective imaging mode, but also on the thermal infrared spectral ranges used in self-emissive 'thermal' imaging mode: the medium-wave infrared (MWIR) and long-wave infrared (LWIR). Initially motivated mostly by military and scientific applications, these spectral domains have now found new, higher-volume application needs.

This has been made possible by new technical approaches enabling cost reduction, stimulated by the efficient collective manufacturing processes offered by the microelectronics industry. CMOS, even if no longer sufficient on its own to address the non-visible spectral range, is still a key part of the solution.

The goal of this Harvest Imaging forum is to go through the various aspects of imaging concepts, device principles, materials, and imager characteristics involved in beyond-visible imaging, with a particular focus on imaging in the infrared spectral bands.

Emphasis will be put on the materials used for detection:

  • Germanium, quantum dot devices and InGaAs for SWIR,
  • III-V and II-VI semiconductors for MWIR and LWIR,
  • Microbolometer and thermopile thermal imagers.

Besides the material aspects, attention will also be given to the associated CMOS circuit architectures that enable implementation of the imaging arrays, both at the pixel and the imager level.
A status report on current and new trends will be provided.
 

Bio:
Pierre Magnan graduated in E.E. from the University of Paris in 1980. After working as a research scientist in analog and digital CMOS design at French research labs until 1994, he moved in 1995 to CMOS image sensor research at SUPAERO (now ISAE-SUPAERO) in Toulouse, France, an educational and research institute funded by the French Ministry of Defense. There, Pierre was involved in setting up and growing the CMOS active-pixel sensor research and development activities. From 2002 to 2021, as a Full Professor and Head of the Image Sensor Research Group, he was involved in CMOS image sensor research. His team worked in cooperation with European companies (including STMicroelectronics, Airbus Defence & Space and Thales Alenia Space, as well as the European and French space agencies) and developed custom image sensors dedicated to space instruments, in recent years extending the scope of the group to CMOS design for infrared imagers.
In 2021, Pierre was appointed Emeritus Professor of the ISAE-SUPAERO Institute, where he now focuses on research through PhD supervision, mostly with STMicroelectronics.

Pierre has supervised more than 20 PhD candidates in the field of image sensors and co-authored more than 80 scientific papers. He has been involved in various expert missions for French agencies, companies, and the European Commission. His research interests include solid-state image sensor design for visible and non-visible imaging, modelling, technologies, hardening techniques, and circuit design for imaging applications.

He served on the IEEE IEDM Display and Sensors subcommittee in 2011-2012 and on the International Image Sensor Workshop (IISW) Technical Program Committee, serving as General Technical Chair of the 2015 IISW. He is currently a member of the 2022 IEDM ODI subcommittee and the IISW 2023 Technical Program Committee.



Go to the original article...

Samsung Tech Blog about ISOCELL Color, HDR and ToF Imaging

Image Sensors World        Go to the original article...

Link: https://semiconductor.samsung.com/newsroom/tech-blog/how-isocell-unlock-the-future-of-camera-experiences/

Some excerpts below.

The science of creating pixels has made substantial progress in recent years. As a rule, high-resolution image sensors need small, light-sensitive pixels. To capture as much light as possible, the pixel structure has evolved from front-side illumination (FSI) to back-side illumination (BSI), which places the photodiode layer above the metal lines rather than below them. By locating the photodiode closer to the light source, each pixel is able to capture more light. The downside of this structure is higher crosstalk between pixels, leading to color contamination.

“To remedy such a drawback, Samsung introduced ISOCELL, its first technology that isolates pixels from each other by adding barriers. The name ISOCELL is a compound of the words ‘isolate’ and ‘cell,’” Kim explained. “By isolating each pixel, ISOCELL can increase a pixel’s full well capacity to hold more light and reduce crosstalk from one pixel to another.”




With ISOCELL technology, ISOCELL image sensors achieve very high full-well capacity. Pixels in the newest ISOCELL image sensor hold up to 70,000 electrons, allowing the sensor to reach a huge signal range. ... “To reduce noise, we perform two readouts: one with high gain to show the dark details and another with low gain to show the bright details. The two readouts are then merged in the sensor. Each readout has 10 bits. With the high-conversion-gain readout at 4x, it adds an additional 2 bits, producing 12-bit HDR image output. This technology is called Smart-ISO Pro, also known as iDCG (intra-scene Dual Conversion Gain).”
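
The bit arithmetic of that merge can be shown with a toy example. The sketch below is not Samsung's actual pipeline; it is a minimal illustration of combining a 10-bit high-gain readout (4x conversion gain) with a 10-bit low-gain readout into one 12-bit code, with the selection rule invented for the example:

    import numpy as np

    GAIN_RATIO = 4       # high-gain readout is 4x the low-gain one
    SAT_10BIT = 1023     # full scale of each 10-bit readout

    def merge_dcg(high_gain, low_gain):
        """Toy intra-scene dual-conversion-gain merge: 2x 10-bit -> 12-bit.

        high_gain, low_gain: arrays of 10-bit codes from the two readouts.
        Where the high-gain readout is not saturated it is used directly
        (better SNR in the shadows); elsewhere the low-gain readout is
        scaled up by the gain ratio. Output spans the 12-bit range.
        """
        high_gain = np.asarray(high_gain, dtype=np.int32)
        low_gain = np.asarray(low_gain, dtype=np.int32)
        use_high = high_gain < SAT_10BIT                    # not clipped
        merged = np.where(use_high, high_gain, low_gain * GAIN_RATIO)
        return np.clip(merged, 0, SAT_10BIT * GAIN_RATIO)   # ~12-bit codes

    # Dark pixel: high gain kept as-is; bright pixel: low gain scaled up.
    print(merge_dcg([40, 1023], [10, 800]))  # -> [  40 3200]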



Samsung plans to release a new generation of iToF sensor with an integrated image signal processor (ISP). All depth-information processing is done on the ISP within the sensor, rather than being delegated to the SoC, so the overall operation consumes less power. In addition, the new solution offers improved depth quality even in difficult scenarios such as low-light environments, narrow objects, or repetitive patterns. For future applications, Samsung's ISP-integrated ToF will help provide high-quality depth images with little to no motion blur or lag, at a high frame rate.




Go to the original article...

SD Optics releases MEMS-based system "WiseTopo" for 3D microscopy

Image Sensors World        Go to the original article...

SD Optics has released WiseTopo, a MEMS-based microarray lens system that transforms a 2D microscope into a 3D one.
 
Attendees at Photonics West can see a demonstration at booth #4128 from Jan 31 to Feb 2, 2023 at the Moscone Center in San Francisco, California.
 


SD Optics introduces WiseTopo, built on its core technology, the MALS lens, a MEMS-based microarray lens system. WiseTopo transforms a 2D microscope into a 3D microscope with a simple plug-in installation, and it fits all microscopes. A conventional system has a limited depth of field, so the user has to adjust focus manually by moving the z-axis, which makes it difficult to identify the exact shape of an object instantly. These manual movements can cause deviations in observation, missing information, incomplete inspection, and increased user workload.

SD Optics' WiseTopo converts a 2D microscope into a 3D microscope by replacing the image sensor; with this simple installation, WiseTopo resolves the depth-of-field issue without z-axis movement. MALS is an optical MEMS-based, ultra-fast variable-focus lens that implements curvature changes through the motion of individual micromirrors. MALS shifts focus at a speed of 12 kHz without mechanical z-axis movement. It is a semi-permanent digital lens technology that operates at any temperature and has no life-cycle limit.

WiseTopo provides ideal features in combination with the accompanying software, letting users better understand an object in real time. It offers an all-in-focus function, in which everything is in focus; an auto-focus function that automatically focuses on a region of interest (ROI); focus lock, which maintains focus when multiple focus ROIs are set along the z-axis; multi-focus lock, which stays in focus even when moving along the x- and y-axes; and auto-focus lock, which retains auto-focus during z-axis movement. These functions maximize user convenience.

WiseTopo and its 3D images reveal information that is hidden when using a 2D microscope. It obtains in-focus images with fast focus-sweeping technology and instantly computes many 3D attributes, such as shape matching and point clouds. WiseTopo supports various 3D data formats for analysis; for example, reference 3D data can easily be compared with real-time 3D data. On a microscope, objective lenses with different magnifications are mounted on the turret, and WiseTopo provides all functions even when the magnification is changed. WiseTopo provides all 3D features on any microscope, regardless of brand.
3D images created in WiseTopo can be viewed in AR/VR, letting users observe and interact with 3D data in a metaverse space.
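
As background on the "all-in-focus" idea, a generic focus-stacking sketch is shown below: per pixel, keep the frame from the focus sweep with the highest local sharpness. This is not SD Optics' proprietary MALS pipeline, just a common way to produce the same kind of output from a stack of differently focused images (assumes NumPy and OpenCV):

    import cv2
    import numpy as np

    def all_in_focus(stack):
        """Naive focus stacking: per-pixel choice of the sharpest frame.

        stack: list of same-size BGR images captured at different focus
        depths. Sharpness is the absolute Laplacian of the grayscale
        image, lightly blurred to stabilize the per-pixel vote.
        """
        sharpness = []
        for img in stack:
            gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
            lap = np.abs(cv2.Laplacian(gray, cv2.CV_64F))
            sharpness.append(cv2.GaussianBlur(lap, (9, 9), 0))
        best = np.argmax(np.stack(sharpness), axis=0)   # per-pixel winner
        stack_arr = np.stack(stack)                     # (N, H, W, 3)
        h, w = best.shape
        rows, cols = np.mgrid[0:h, 0:w]
        return stack_arr[best, rows, cols]              # fused image

    # fused = all_in_focus([cv2.imread(f"focus_{i}.png") for i in range(20)])

The argmax index map doubles as a coarse depth map, which is essentially how sweeping focus at high speed yields 3D topography.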
 

Go to the original article...

poLight’s paper on passive athermalisation of compact camera/lens using its TLens® tunable lens

Image Sensors World        Go to the original article...

Image defocus over a wide temperature range is a challenge in many applications. poLight's TLens technology behaves oppositely to plastic lenses over temperature, so simply adding it to the optics stack addresses this issue.

A whitepaper is available here: [link]

Abstract: poLight ASA is the owner and developer of the TLens product family as well as other patented micro-opto-electro-mechanical systems (MOEMS) technologies. TLens is a focusable tunable optics device based on lead zirconium titanate (PZT) microelectromechanical systems (MEMS) technology and a novel optical polymer material. The advantages of the TLens have already been demonstrated in multiple products launched on the market since 2020. Compactness, low power consumption, and fast speed are clear differentiators in comparison with incumbent voice coil motor (VCM) technology, thanks to the patented MEMS architecture. In addition, using the TLens in a simple manner (adding it onto a fixed-focus lens camera, or inserting it inside the lens stack) enables stable focusing over an extended operating range. It has been demonstrated that the TLens passively compensates the thermal defocus of the plastic lens stack/camera structure. Fixed-focus plastic lens stack cameras, usually used in consumer devices, typically exhibit a thermal defocus of a few diopters over the operating temperature range. Results of simulations as well as experimental data are presented, together with a principal athermal lens design using the TLens in a purely passive manner (without the use of its electro-tunability), while the electro-tunability can additionally be used to secure an extended depth of focus with further enhanced image quality.
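
To illustrate the passive-compensation idea numerically, net defocus can be modeled as the sum of two opposite-signed linear thermal coefficients. Both coefficients below are invented for the example (the whitepaper's measured figures are in the linked document); the point is only that an element whose focal power drifts opposite to the plastic stack cancels most of the net drift:

    # Toy model of passive athermalization: defocus contributions in
    # diopters per degree C. Values are illustrative assumptions, not
    # poLight's measured coefficients.
    K_PLASTIC = -0.04   # plastic lens stack: defocus drifts one way with T
    K_TLENS   = +0.035  # TLens drifts the opposite way (per the article)
    T_REF     = 25.0    # reference temperature, deg C

    def net_defocus(temp_c):
        """Net camera defocus (diopters) at a given temperature."""
        dt = temp_c - T_REF
        return (K_PLASTIC + K_TLENS) * dt

    for t in (-20, 25, 60):
        plastic_only = K_PLASTIC * (t - T_REF)
        print(f"{t:4} C: plastic alone {plastic_only:+.2f} D, "
              f"with TLens {net_defocus(t):+.2f} D")

With these assumed values the plastic stack alone swings about 3 diopters across the range (matching the abstract's "few diopters"), while the combined system stays within a fraction of a diopter.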


Go to the original article...

Towards a Colorimetric Camera – Talk from EI 2023 Symposium

Image Sensors World        Go to the original article...

Tripurari Singh and Mritunjay Singh of Image Algorithmics presented a talk titled "Towards a Colorimetric Camera" at the recent Electronic Imaging 2023 symposium. They show that, for low-light color imaging, a long/medium/short (LMS) filter set that more closely mimics human color vision outperforms the traditional RGB Bayer pattern.
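
Some background on how such filter-choice tradeoffs are usually quantified: raw sensor channels are mapped to a standard color space by a color correction matrix (CCM), and for independent, equal-variance noise on the raw channels, the row norms of that matrix give the per-channel noise amplification. The sketch below applies that standard metric to two invented matrices; neither comes from the talk, they merely contrast a well-conditioned CCM with a poorly conditioned one:

    import numpy as np

    def ccm_noise_gain(ccm):
        """Per-channel noise amplification of a color correction matrix.

        For independent, equal-variance noise on the raw channels, the
        output-channel noise std is the L2 norm of the matrix row.
        """
        return np.linalg.norm(ccm, axis=1)

    # Hypothetical CCM for filters well matched to the target space:
    ccm_matched = np.array([[ 1.1, -0.1,  0.0],
                            [-0.1,  1.2, -0.1],
                            [ 0.0, -0.2,  1.2]])

    # Hypothetical CCM for a poorly matched filter set:
    ccm_mismatched = np.array([[ 2.0, -1.2,  0.2],
                               [-0.9,  2.3, -0.4],
                               [ 0.3, -1.5,  2.2]])

    print("matched   :", ccm_noise_gain(ccm_matched))     # ~1.1-1.2x
    print("mismatched:", ccm_noise_gain(ccm_mismatched))  # ~2.3-2.7x

The talk's argument concerns where this tradeoff lands for LMS-like filters once light collection and modern processing are accounted for; the metric above is only the standard starting point for such comparisons.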

Go to the original article...

Jabil Inc. collaboration with ams OSRAM and Artilux

Image Sensors World        Go to the original article...

Link: https://www.jabil.com/news/swir-3d-camera-prototype.html

ST. PETERSBURG, FL – January 18, 2023 – Jabil Inc. (NYSE: JBL), a leading manufacturing solutions provider, today announced that its renowned optical design center in Jena, Germany, is currently demonstrating a prototype of a next-generation 3D camera with the ability to seamlessly operate in both indoor and outdoor environments up to a range of 20 meters. Jabil, ams OSRAM and Artilux combined their proprietary technologies in 3D sensing architecture design, semiconductor lasers and germanium-silicon (GeSi) sensor arrays based on a scalable complementary metal-oxide-semiconductor (CMOS) technology platform, respectively, to demonstrate a 3D camera that operates in the short-wavelength infrared (SWIR), at 1130 nanometers.

Steep growth in automation is driving performance improvements for robotic and mobile automation platforms in industrial environments. The industrial robot market is forecast to grow at a compound annual growth rate of over 11%, to more than $35 billion by 2029. The 3D sensor data from these innovative depth cameras will improve automated functions such as obstacle identification, collision avoidance, localization and route planning: key applications necessary for autonomous platforms.

“For too long, industry has accepted 3D sensing solutions limiting the operation of their material handling platforms to environments not impacted by the sun. The new SWIR camera provides a glimpse of the unbounded future of 3D sensing where sunlight no longer impinges on the utility of autonomous platforms,” said Ian Blasch, senior director of business development for Jabil’s Optics division. “This new generation of 3D cameras will not only change the expected industry standard for mid-range ambient light tolerance but will usher in a new paradigm of sensors capable of working across all lighting environments.”

“1130 nm is a first-of-its-kind SWIR VCSEL technology from ams OSRAM, offering enhanced eye safety, outstanding performance in high-sunlight environments, and skin detection capability, which is of critical importance for collision avoidance when, for example, humans and industrial robots interact,” says Dr. Joerg Strauss, senior vice president and general manager at ams OSRAM for the visualization and sensing business line. “We are excited to partner with Jabil to make the next-generation 3D sensing and machine vision solutions a reality.”

Dr. Stanley Yeh, vice president of platform at Artilux, concurs, “We are glad to work with Jabil and ams OSRAM to deliver the first mid-range SWIR 3D camera with the use of near infrared (NIR)-like components such as CMOS-based sensor and VCSEL. It's a significant step toward the mass-adoption of SWIR spectrum sensing and being the leader of CMOS SWIR 2D/3D imaging technology.”

For nearly two decades, Jabil's optical division has been recognized by leading technology companies as the premier service provider for advanced optical design, industrialization, and manufacturing. The Optics division has more than 170 employees across four locations. Jabil's optics designers, engineers, and researchers specialize in solving complex optical problems for its customers in the 3D sensing, augmented and virtual reality, action camera, automotive, industrial, and healthcare markets. Additionally, Jabil customers leverage its expertise in product design, process development, testing, in-house active alignment (from Kasalis, a technology division of Jabil), supply chain management, and manufacturing.

More information and test data can be found at the following website: www.jabil.com/3DCamera



Go to the original article...
