Sony announces full-frame global shutter camera

Image Sensors World        Go to the original article...

Link: https://www.sony.com/lr/electronics/interchangeable-lens-cameras/ilce-9m3

Sony recently announced a full-frame global shutter camera, which was covered in several press articles below:


PetaPixel https://petapixel.com/2023/11/07/sony-announces-a9-iii-worlds-first-global-sensor-full-frame-camera/

DPReview https://www.dpreview.com/news/7271416294/sony-announces-a9-iii-world-s-first-full-frame-global-shutter-camera

The Verge https://www.theverge.com/2023/11/7/23950504/sony-a9-iii-mirrorless-camera-global-shutter-price-release


From Sony's official webpage:

[This camera uses the] Newly developed full-frame stacked 24.6 MP Exmor RS™ image sensor with global shutter [...] a stacked CMOS architecture and integral memory [...] advanced A/D conversion enable high-speed processing to proceed with minimal delay. [AI features are implemented using the] BIONZ XR™ processing engine. With up to eight times more processing power than previous versions, the BIONZ XR image processing engine minimises processing latency [...] It's able to process the high volume of data generated by the newly developed Exmor RS image sensor in real-time, even while shooting continuous bursts at up to 120 fps, and it can capture high-quality 14-bit RAW images in all still shooting modes. [...] [The] α9 III can use subject form data to accurately recognise movement. Human pose estimation technology recognises not just eyes but also body and head position with high precision. 
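The quoted figures imply a hefty raw readout rate. As a rough back-of-envelope sketch (assuming uncompressed 14-bit samples and ignoring overheads or compression, which Sony does not detail):

```python
# Back-of-envelope raw readout rate from Sony's quoted figures:
# 24.6 MP, 14-bit samples, 120 frames per second.  This assumes
# uncompressed readout and ignores overheads -- the actual readout
# pipeline is not publicly documented.
pixels = 24.6e6
bits_per_sample = 14
fps = 120

bits_per_second = pixels * bits_per_sample * fps
print(f"{bits_per_second / 1e9:.1f} Gbit/s")  # ~41.3 Gbit/s
```

At tens of gigabits per second of raw pixel data, the motivation for a stacked architecture with integral memory is apparent.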


Go to the original article...

Tsuzuri Project donates to Kanazawa College of Art a high-resolution facsimile of Sotatsu’s “Screen with Scattered Fans,” to be displayed to the public with five previous works

Newsroom | Canon Global        Go to the original article...

Go to the original article...


Job Postings – Week of 26 Nov 2023

Image Sensors World        Go to the original article...

Apple

Hardware Sensing Systems Engineer

Cupertino, California, USA

Link

Friedrich-Schiller-Universität Jena

2 PhD scholarships in Optics & Photonics

Jena, Germany

Link

Rice University

Open Rank Faculty Position in Advanced Materials

Houston, Texas, USA

Link

Caeleste

Image Sensor Architect

Mechelen, Belgium

Link

BAE Systems (Secret Clearance)

FAST Labs - Multi-Function Sensor Systems Chief Scientist (US only)

Merrimack, New Hampshire, USA

Link

LYNRED

HgCdTe Epitaxy on Quaternary Substrates (6-month contract)

Veurey-Voroize, France

Link

Princeton Infrared Technologies

Camera Development Manager/Engineer

Monmouth Junction, New Jersey, USA

Link

Rutherford Appleton Laboratory

Senior Project Manager, Imaging Systems Division

Didcot, Oxfordshire, England

Link

Excelitas

Wafer Fab Engineering Mgr

Billerica, Massachusetts, USA

Link

 

Go to the original article...

2024 International SPAD Sensor Workshop Submission Deadline Approaching!

Image Sensors World        Go to the original article...

The December 8, 2023 deadline for the 2024 ISSW is fast approaching! The paper submission portal is now open!

The 2024 International SPAD Sensor Workshop will be held from 4-6 June 2024 in Trento, Italy.

Paper submission

Workshop papers must be submitted online on Microsoft CMT. Click here to be redirected to the submission website. You may need to register first, then search for the "2024 International SPAD Sensor Workshop" within the list of conferences using the dedicated search bar.

Paper format

Please note that the ISSW employs a single-stage submission process requiring camera-ready papers. Each submission should comprise a 1000-character abstract and a 3-page paper (1 page of text and 2 pages of images). The submission must include the authors' name(s) and affiliation, mailing address, telephone, and email address. The formatting can follow either a style that integrates text and figures, akin to the standard IEEE format, or a structure with a page of text followed by figures, mirroring the format of the International Solid-State Circuits Conference (ISSCC) or the IEEE Symposium on VLSI Technology and Circuits. Examples of these formats are available in the online database of the International Image Sensor Society.

The deadline for paper submission is 23:59 CET, Friday December 8th, 2023.

Papers will be considered on the basis of originality and quality. High-quality papers on work in progress are also welcome. Papers will be reviewed confidentially by the Technical Program Committee. Accepted papers will be made freely available for download from the International Image Sensor Society website. Please note that no major modifications are allowed. Authors will be notified of the acceptance of their papers and posters by Wednesday, January 31st, 2024, at the latest.
 
Poster submission 

In addition to talks, we wish to offer all graduate students, post-docs, and early-career researchers an opportunity to present a poster on their research projects or other research relevant to the workshop topics. If you wish to take up this opportunity, please submit a 1000-character abstract and a 1-page description (including figures) of the proposed research activity, along with authors’ name(s) and affiliation, mailing address, telephone, and e-mail address.

The deadline for poster submission is 23:59 CET, Friday December 8th, 2023.

Go to the original article...

Detecting hidden defects using a single-pixel THz camera

Image Sensors World        Go to the original article...

 

Li et al. present a new THz imaging technique for defect detection in a recent paper in the journal Nature Communications. The paper is titled "Rapid sensing of hidden objects and defects using a single-pixel diffractive terahertz sensor".

Abstract: Terahertz waves offer advantages for nondestructive detection of hidden objects/defects in materials, as they can penetrate most optically-opaque materials. However, existing terahertz inspection systems face throughput and accuracy restrictions due to their limited imaging speed and resolution. Furthermore, machine-vision-based systems using large-pixel-count imaging encounter bottlenecks due to their data storage, transmission and processing requirements. Here, we report a diffractive sensor that rapidly detects hidden defects/objects within a 3D sample using a single-pixel terahertz detector, eliminating sample scanning or image formation/processing. Leveraging deep-learning-optimized diffractive layers, this diffractive sensor can all-optically probe the 3D structural information of samples by outputting a spectrum, directly indicating the presence/absence of hidden structures or defects. We experimentally validated this framework using a single-pixel terahertz time-domain spectroscopy set-up and 3D-printed diffractive layers, successfully detecting unknown hidden defects inside silicon samples. This technique is valuable for applications including security screening, biomedical sensing and industrial quality control. 

Paper (open access): https://www.nature.com/articles/s41467-023-42554-2

News coverage: https://phys.org/news/2023-11-hidden-defects-materials-single-pixel-terahertz.html

In the realm of engineering and material science, detecting hidden structures or defects within materials is crucial. Traditional terahertz imaging systems, which rely on the unique property of terahertz waves to penetrate visibly opaque materials, have been developed to reveal the internal structures of various materials of interest.


This capability provides unprecedented advantages in numerous applications for industrial quality control, security screening, biomedicine, and defense. However, most existing terahertz imaging systems have limited throughput and bulky setups, and they need raster scanning to acquire images of the hidden features.


To change this paradigm, researchers at UCLA Samueli School of Engineering and the California NanoSystems Institute developed a unique terahertz sensor that can rapidly detect hidden defects or objects within a target sample volume using a single-pixel spectroscopic terahertz detector.
Instead of the traditional point-by-point scanning and digital image formation-based methods, this sensor inspects the volume of the test sample illuminated with terahertz radiation in a single snapshot, without forming or digitally processing an image of the sample.


Led by Dr. Aydogan Ozcan, the Chancellor's Professor of Electrical & Computer Engineering and Dr. Mona Jarrahi, the Northrop Grumman Endowed Chair at UCLA, this sensor serves as an all-optical processor, adept at searching for and classifying unexpected sources of waves caused by diffraction through hidden defects. The paper is published in the journal Nature Communications.


"It is a shift in how we view and harness terahertz imaging and sensing as we move away from traditional methods toward more efficient, AI-driven, all-optical sensing systems," said Dr. Ozcan, who is also the Associate Director of the California NanoSystems Institute at UCLA.


This new sensor comprises a series of diffractive layers, automatically optimized using deep learning algorithms. Once trained, these layers are transformed into a physical prototype using additive manufacturing approaches such as 3D printing. This allows the system to perform all-optical processing without the burdensome need for raster scanning or digital image capture/processing.


"It is like the sensor has its own built-in intelligence," said Dr. Ozcan, drawing parallels with their previous AI-designed optical neural networks. "Our design comprises several diffractive layers that modify the input terahertz spectrum depending on the presence or absence of hidden structures or defects within materials under test. Think of it as giving our sensor the capability to 'sense and respond' based on what it 'sees' at the speed of light."


To demonstrate their novel concept, the UCLA team fabricated a diffractive terahertz sensor using 3D printing and successfully detected hidden defects in silicon samples. These samples consisted of stacked wafers, with one layer containing defects and the other concealing them. The smart system accurately revealed the presence of unknown hidden defects with various shapes and positions.
The team believes their diffractive defect sensor framework can also work across other wavelengths, such as infrared and X-rays. This versatility heralds a plethora of applications, from manufacturing quality control to security screening and even cultural heritage preservation.


The simplicity, high throughput, and cost-effectiveness of this non-imaging approach promise transformative advances in applications where speed, efficiency, and precision are paramount.

Go to the original article...

A 400 kilopixel resolution superconducting camera

Image Sensors World        Go to the original article...

Oripov et al. from NIST and JPL recently published a paper titled "A superconducting nanowire single-photon camera with 400,000 pixels" in Nature.

Abstract: For the past 50 years, superconducting detectors have offered exceptional sensitivity and speed for detecting faint electromagnetic signals in a wide range of applications. These detectors operate at very low temperatures and generate a minimum of excess noise, making them ideal for testing the non-local nature of reality, investigating dark matter, mapping the early universe and performing quantum computation and communication. Despite their appealing properties, however, there are at present no large-scale superconducting cameras—even the largest demonstrations have never exceeded 20,000 pixels. This is especially true for superconducting nanowire single-photon detectors (SNSPDs). These detectors have been demonstrated with system detection efficiencies of 98.0%, sub-3-ps timing jitter, sensitivity from the ultraviolet to the mid-infrared and microhertz dark-count rates, but have never achieved an array size larger than a kilopixel. Here we report on the development of a 400,000-pixel SNSPD camera, a factor of 400 improvement over the state of the art. The array spanned an area of 4 × 2.5 mm with 5 × 5-μm resolution, reached unity quantum efficiency at wavelengths of 370 nm and 635 nm, counted at a rate of 1.1 × 10^5 counts per second (cps) and had a dark-count rate of 1.0 × 10^−4 cps per detector (corresponding to 0.13 cps over the whole array). The imaging area contains no ancillary circuitry and the architecture is scalable well beyond the present demonstration, paving the way for large-format superconducting cameras with near-unity detection efficiencies across a wide range of the electromagnetic spectrum.

Link: https://www.nature.com/articles/s41586-023-06550-2

a, Imaging at 370 nm, with raw time-delay data from the buses shown as individual dots in red and binned 2D histogram data shown in black and white. b, Count rate as a function of bias current for various wavelengths of light as well as dark counts. c, False-colour scanning electron micrograph of the lower-right corner of the array, highlighting the interleaved row and column detectors. Lower-left inset, schematic diagram showing detector-to-bus connectivity. Lower-right inset, close-up showing 1.1-μm detector width and effective 5 × 5-μm pixel size. Scale bar, 5 μm.


 

a, Circuit diagram of a bus and one section of 50 detectors with ancillary readout components. SNSPDs are shown in the grey boxes and all other components are placed outside the imaging area. A photon that arrives at time t0 has its location determined by a time-of-flight readout process based on the time-of-arrival difference t2 − t1. b, Oscilloscope traces from a photon detection showing the arrival of positive (green) and negative (red) pulses at times t1 and t2, respectively.
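The time-of-flight readout described in this caption can be sketched as follows. A detection launches pulses in opposite directions along the bus, and the difference of their arrival times locates the firing detector; the propagation velocity and bus length below are illustrative placeholders, not values from the paper:

```python
# Sketch of delay-based position readout on a detector bus: a pulse
# travelling toward the near end arrives at t1 = t0 + x/v, one toward
# the far end at t2 = t0 + (L - x)/v, so x can be recovered from the
# difference t2 - t1 without knowing the absolute detection time t0.
def position_on_bus(t1, t2, v, bus_length):
    """Position x along the bus (same units as bus_length)."""
    return (bus_length - v * (t2 - t1)) / 2.0

# Illustrative numbers only: a detector at x = 1.0 mm on a 4 mm bus,
# with a placeholder pulse velocity of 0.1 mm/ps.
v, L = 0.1, 4.0
t0, x_true = 0.0, 1.0
t1 = t0 + x_true / v        # 10 ps
t2 = t0 + (L - x_true) / v  # 30 ps
print(position_on_bus(t1, t2, v, L))  # recovers x_true = 1.0
```

Because only the delay difference matters, adjacent detectors are distinguishable as long as their delay separation exceeds the system's timing resolution.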

a, Histogram of the pulse differential time delays Δt = t1 − t2 from the north bus during flood illumination with a Gaussian spot. All 400 detectors resolved clearly, with gaps indicating detectors that were pruned. Inset, zoomed-in region showing that counts from adjacent detectors are easily resolvable and no counts were generated by a pruned detector. b, Plot of raw trow and tcol time delays when flood illuminated at 370 nm. c, Zoomed-in subsection of the array with 25 × 25 detectors. d, Histogram of time delays for a 2 × 2 detector subset with 10-ps bin size showing clear distinguishability between adjacent detectors.

a, Count rate versus optical attenuation for a section of detectors biased at 45 μA per detector. The dashed purple line shows a slope of 1, with deviations from that line at higher rates indicating blocking loss. b, System jitter of a 50-detector section. Detection delay was calculated as the time elapsed between the optical pulse being generated and the detection event being read out.



News coverage: https://www.universetoday.com/163959/a-new-superconducting-camera-can-resolve-single-photons/


A New Superconducting Camera can Resolve Single Photons

Researchers have built a superconducting camera with 400,000 pixels, which is so sensitive it can detect single photons. It comprises a grid of superconducting wires with no resistance until a photon strikes one or more wires. This shuts down the superconductivity in the grid, sending a signal. By combining the locations and intensities of the signals, the camera generates an image.


The researchers who built the camera, from the US National Institute of Standards and Technology (NIST) say the architecture is scalable, and so this current iteration paves the way for even larger-format superconducting cameras that could make detections across a wide range of the electromagnetic spectrum. This would be ideal for astronomical ventures such as imaging faint galaxies or extrasolar planets, as well as biomedical research using near-infrared light to peer into human tissue.


Superconducting cameras of this type have been possible for decades, but only with a fraction of the pixel count, and their low-resolution output made them impractical. This new version has 400 times more pixels than any other device of its type.

In the past, it was found to be difficult-to-impossible to chill the camera’s superconducting components – which would be hundreds of thousands of wires – by connecting them each to a cooling system.
According to NIST, researchers Adam McCaughan and Bakhrom Oripov and their collaborators at NASA’s Jet Propulsion Laboratory in Pasadena, California, and the University of Colorado Boulder overcame that obstacle by constructing the wires to form multiple rows and columns, like those in a tic-tac-toe game, where each intersection point is a pixel. Then they combined the signals from many pixels onto just a few room-temperature readout wires.


The detectors can discern differences in the arrival time of signals as short as 50 trillionths of a second. They can also detect up to 100,000 photons a second striking the grid.
McCaughan said the readout technology can easily be scaled up for even larger cameras, and predicted that a superconducting single-photon camera with tens or hundreds of millions of pixels could soon be available.


In the meantime, the team plans to improve the sensitivity of their prototype camera so that it can capture virtually every incoming photon. That will enable the camera to tackle quantum imaging techniques that could be a game changer for many fields, including astronomy and medical imaging.

Go to the original article...

Voigtlander 50mm f1 Nokton review

Cameralabs        Go to the original article...

The Voigtlander 50mm f1 Nokton may have a standard focal length, but its f1 aperture is a whole stop brighter than f1.4 and allows dramatic, buttery bokeh. I tested the version for Canon EOS R!…

Go to the original article...

Job Postings – Week of 19 Nov 2023

Image Sensors World        Go to the original article...

onsemi

Principal Process Engineer - CMOS Image Sensor

Nampa, Idaho, USA

Link

Institute of Photonic Sciences

Post-doctoral position in development of low-cost CMOS compatible intersubband optoelectronics

Barcelona, Spain

Link

HP

Camera Engineer for Personal Systems

Austin, Texas, USA

Link

Ring of Security Asia

Image Quality Engineer

Taipei, Taiwan

Link

Omnivision

Sensor Characterization Engineer

Santa Clara, California, USA

Link

National University of Singapore

Research Engineer (Sensors Technology)

Kent Ridge Campus, Singapore

Link

University of Maine

Assistant Professor of Physics (tenure)

Orono, Maine, USA

Link

Framos

Technical Imaging Expert – Image Sensors

Munich, Germany

Link

 

Go to the original article...

Conference List – May 2024

Image Sensors World        Go to the original article...

Robotics Summit & Expo - 1-2 May 2024 - Boston, Massachusetts, USA - Website

CLEO - Conference on Lasers and Electro-Optics  -  5-10 May 2024 - Charlotte, North Carolina, USA - Website

Automate - 6-9 May 2024 - Detroit, Michigan, USA - Website

8th International Conference on Bio-Sensing Technology - 12-15 May 2024 - Seville, Spain - Website

16th Optatec - 14-16 May 2024 - Frankfurt, Germany - Website

The 4th International Electronic Conference on Biosensors - 20-22 May 2024 - Online - Website

Auto-Sens USA - 21-23 May 2024 - Detroit, Michigan, USA - Website

ALLSENSORS 2024 - 26-30 May 2024 - Barcelona, Spain - Website

20th WCNDT - 27-31 May 2024 - Incheon, Korea - Website

 

Return to Conference List Index 

Go to the original article...

DJI Osmo Pocket 3 review

Cameralabs        Go to the original article...

The DJI Osmo Pocket 3 is a compact camera aimed at vloggers and content creators. It continues the concept of previous models, but improves on them in every regard. Find out why you'll want it in our review!…

Go to the original article...

RADOPT 2023 Nov 29-30 in Toulouse, France

Image Sensors World        Go to the original article...

The 2023 workshop on Radiation Effects on Optoelectronic Detectors and Photonics Technologies (RADOPT) will be co-organised by CNES, UJM, SODERN, ISAE-SUPAERO, AIRBUS DEFENCE & SPACE, and THALES ALENIA SPACE in Toulouse, France on November 29 and 30, 2023.

After the success of RADOPT 2021, this second edition of the workshop will continue to combine and replace two well-known events from the Photonic Devices and ICs community: the “Optical Fibers in Radiation Environments Days” (FMR) and the Radiation Effects on Optoelectronic Detectors Workshop, traditionally organized every two years by the COMET OOE of CNES.

The objective of the workshop is to provide a forum for the presentation and discussion of recent developments regarding the use of optoelectronics and photonics technologies in radiation-rich environments. The workshop also offers the opportunity to highlight future prospects in the fast-moving space, high energy physics, fusion and fission research fields and to enhance exchanges and collaborations between scientists. Participation of young researchers (PhD) is especially encouraged.




Go to the original article...

SWIR Vision Systems announces 6 MP SWIR sensor to be released in 2024

Image Sensors World        Go to the original article...

The sensor is based on quantum dot crystals deposited on silicon.

Link: https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/

Acuros® CQD® sensors are fabricated via the deposition of quantum dot semiconductor crystals upon the surface of silicon wafers. The resulting CQD photodiode array enables high resolution, small pixel pitch, broad bandwidth, low noise, and low inter-pixel crosstalk arrays, eliminating the prohibitively expensive hybridization process inherent to InGaAs sensors. CQD sensor technology is silicon wafer-scale compatible, opening its potential to very low-cost high-volume applications.

Features:

  •  3072 x 2048 Pixel Array
  •  7 µm Pixel Pitch
  •  Global Snapshot Shutter
  •  Enhanced QE
  •  100 Hz Framerate
  •  Integrated 12-bit ADC
  •  Full Visible-to-SWIR bandwidth
  •  Compatible with a range of SWIR lenses

Applications:

  •  Industrial Inspection: Inspection and quality control in various industries, including semiconductors, electronics, and pharmaceuticals.
  •  Agriculture: Crop health monitoring, food quality control, and moisture content analysis.
  •  Medical Imaging: Blood vessel imaging, tissue differentiation, and endoscopy.
  •  Degraded Visual Environment: Penetrating haze, smoke, rain & snow for improved situational awareness.
  •  Security and Defense: Target recognition, camouflage detection, and covert surveillance.
  •  Scientific Research: Astronomy, biology, chemistry, and material science.
  •  Remote Sensing: Environmental monitoring, geology, and mineral exploration.

 

Full press release:

SWIR Vision Systems to release industry-leading 6 MP SWIR sensors for defense, scientific, automotive, and industrial vision markets
 
The company’s latest innovation, the Acuros® 6, leverages its pioneering CQD® Quantum Dot image sensor technology, further contributing to the availability of very high resolution and broad-band sensors for a diversity of applications.

Durham, N.C., October 31, 2023 – SWIR Vision Systems today announces the upcoming release of two new models of short-wavelength infrared (SWIR) image sensors for Defense, Scientific, Automotive, and Industrial Users. The new sensors are capable of capturing images in the visible, the SWIR, and the extended SWIR spectral ranges. These very high resolution SWIR sensors are made possible by the company’s patented CQD Quantum Dot sensor technology.

SWIR Vision’s new products include both the Acuros 6 and the Acuros 4 CQD SWIR image sensors, featuring 6.3 megapixel and 4.2 megapixel global shutter arrays. Each sensor has a 7-micron pixel-pitch, 12-bit digital output, low read noise, and enhanced quantum efficiency, resulting in excellent sensitivity and SNR performance for a broad array of applications.

The new products employ SWIR Vision’s CQD photodiode technology, in which photodiodes are created via the deposition of low-cost films directly on top of silicon readout ICs. This approach enables small pixel sizes, affordable prices, broad spectral response, and industry-leading high-resolution SWIR focal plane arrays.

SWIR Vision is now engaging global camera makers, automotive, industrial, and defense system integrators, who will leverage these breakthrough sensors to tackle challenges in laser inspection and manufacturing, semiconductor inspection, automotive safety, long-range imaging, and defense.
“Our customers challenged us again to deliver more capability to their toughest imaging problems. The Acuros 4 and the Acuros 6 sensors deliver the highest resolution and widest spectral response available today,” said Allan Hilton, SWIR Vision’s Chief Product Officer. “The industry can expect to see new camera and system solutions based on these latest innovations from our best-in-class CQD sensor engineering group”.

About SWIR Vision Systems – SWIR Vision Systems (www.swirvisionsystems.com), a North Carolina-based startup, has pioneered the development and introduction of high-definition Colloidal Quantum Dot (CQD®) infrared image sensor technology for infrared cameras, delivering breakthrough sensor capability. Imaging in the short-wavelength IR has become critical for key applications within industrial, defense, mobile phone, and autonomous vehicle markets.
To learn more about our 6MP Sensors, go to https://www.swirvisionsystems.com/acuros-6-mp-swir-sensor/.

Go to the original article...


imec paper on thin film pinned photodiode

Image Sensors World        Go to the original article...

Kim et al. from imec and coauthors from universities in Belgium and Korea recently published a paper titled "A Thin-Film Pinned-Photodiode Imager Pixel with Fully Monolithic Fabrication and beyond 1Me- Full Well Capacity" in MDPI Sensors. This paper describes imec's recent thin film pinned photodiode technology.

Open access paper link: https://www.mdpi.com/1424-8220/23/21/8803

Abstract
Thin-film photodiodes (TFPD) monolithically integrated on the Si Read-Out Integrated Circuitry (ROIC) are promising imaging platforms when beyond-silicon optoelectronic properties are required. Although TFPD device performance has improved significantly, the pixel development has been limited in terms of noise characteristics compared to the Si-based image sensors. Here, a thin-film-based pinned photodiode (TF-PPD) structure is presented, showing reduced kTC noise and dark current, accompanied with a high conversion gain (CG). Indium-gallium-zinc oxide (IGZO) thin-film transistors and quantum dot photodiodes are integrated sequentially on the Si ROIC in a fully monolithic scheme with the introduction of photogate (PG) to achieve PPD operation. This PG brings not only a low noise performance, but also a high full well capacity (FWC) coming from the large capacitance of its metal-oxide-semiconductor (MOS). Hence, the FWC of the pixel is boosted up to 1.37 Me- with a 5 μm pixel pitch, which is 8.3 times larger than the FWC that the TFPD junction capacitor can store. This large FWC, along with the inherent low noise characteristics of the TF-PPD, leads to the three-digit dynamic range (DR) of 100.2 dB. Unlike a Si-based PG pixel, dark current contribution from the depleted semiconductor interfaces is limited, thanks to the wide energy band gap of the IGZO channel material used in this work. We expect that this novel 4 T pixel architecture can accelerate the deployment of monolithic TFPD imaging technology, as it has worked for CMOS Image sensors (CIS).
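The quoted dynamic range follows from the ratio of full well capacity to the noise floor, DR = 20·log10(FWC / noise). A minimal sketch backing out the implied noise floor from the quoted FWC and DR (the paper's actual read-noise figure is not quoted in this post):

```python
import math

# Dynamic range relation: DR[dB] = 20 * log10(FWC / noise_floor).
# Using the quoted FWC (1.37 Me-) and DR (100.2 dB) to back out the
# implied noise floor in electrons.
fwc = 1.37e6    # full well capacity, electrons
dr_db = 100.2   # dynamic range, dB

noise_floor = fwc / 10 ** (dr_db / 20)
print(f"implied noise floor: {noise_floor:.1f} e-")  # ~13.4 e-
```

A noise floor of roughly a dozen electrons is consistent with the low-kTC-noise pinned-photodiode operation the paper describes.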


Figure 1. Pixel cross-section for the monolithic TFPD image sensor (a) 3 T and (b) 4 T (TF-PPD) structure (TCO: transparent conductive oxide, HTL: hole transport layer, PG: photogate, TG: transfer gate, FD: floating diffusion). Electric potential and signal readout configuration for 3 T pixel (c) and for 4 T pixel (d). Pixel circuit diagram for 3 T pixel (e) and for the 4 T pixel (f).

 


Figure 2. I-V characteristic of QDPD test structure (a) and of IGZO TFT (b), a micrograph of the TF-PPD passive pixel array (c), and its measurement schematic (d). Band diagrams for the PD (e) and PG (f).


Figure 3. Silvaco TCAD simulation results; (a) simulated structure, (b) lateral potential profile along the IGZO layer, and (c) potential profile when TG is turned off and (d) on.


Figure 4. Signal output vs. integration time with different VPG and VTG values with the illumination. Signal curves with the fixed VTG (−1 V), varying VPG (−4~−1 V) (a), the same graphs for the fixed VPG (−2 V), and different VTGs (−6.5~−1 V) (b).

Figure 5. (a) Pixel output vs. integration time for different pixel pitches. (b) FWC comparison between estimation and measurement.

Figure 6. FWC comparison by different pixel fill factors. Pixel schematics for different shapes (a), and FWC by different pixel shapes and pitches (b).



Figure 7. Potential diagram describing FWC increase by the larger VPG (a), and FWC vs. VPG (b).

Figure 8. Passive pixel dark current (a) and Arrhenius plots (b) for the QDPD test structure and the passive pixel.

Figure 9. FWC vs. pixel area. A guideline showing the FWC density per unit area for this work (blue) and a trend line for most CISs (red).

 



Go to the original article...

Job Postings – Week of 12 Nov 2023

Image Sensors World        Go to the original article...


EPIR Technologies

MBE Growth Postdoctoral Fellow

Bolingbrook, Illinois, USA

Link

Apple

Camera Image Sensor Digital Design Engineer

Cupertino, California, USA

Link

Validation & Engineering Group

Process Development Scientist

Juncos, Puerto Rico

Link

Oak Ridge National Laboratory

Applied Sensor Analytics Engineer

Oak Ridge, Tennessee, USA

Link

PhD Studentship: A multi-spectral single photon sensor for enhanced 3D vision

The University of Edinburgh

Edinburgh, Scotland, UK

Link

Research Fellow for Silicon Optomechanical Levitated Sensors

University of Southampton

Southampton, UK

Link

Sensor Engineer

BRS

Tokyo, Japan

Link

CMOS Sensor Expert

Talisman

Tokyo, Japan

Link

Field Application Engineer - Image Signal Processing/Image Sensor

Samsung Semiconductor Europe

Munich, Germany (Hybrid)

Link

 

Go to the original article...

Conference List – April 2024

Image Sensors World        Go to the original article...

SPIE Photonics Europe - 7-11 Apr 2024 - Strasbourg, France - Website

Position Sensitive Neutron Detectors (PSND 24) - 8-11 Apr 2024 - Oxford, UK - Website

IEEE Silicon Photonics Conference - 15-18 Apr 2024 - Hilton Tokyo Bay, Japan - Website

Compound Semiconductor International Conference - 16-17 Apr 2024 - Brussels, Belgium - Website

IEEE Custom Integrated Circuits Conference (CICC) - 21-24 Apr 2024 - Denver, Colorado, USA - Website

2024 MRS Spring Meeting & Exhibit - 22-26 Apr 2024 - Seattle, Washington, USA - Website

SPIE Defense and Commercial Sensing - 23-25 Apr 2024 - National Harbor, Maryland, USA - Website

SPIE Future Sensing Technologies - 23-25 Apr 2024 - Yokohama, Japan - Website

OPIE ’24 (OPTICS & PHOTONICS International Exhibition 2024) - 24-26 Apr 2024 - Yokohama, Japan - Website


Return to Conference List Index   

Go to the original article...

EETimes article about imec’s new thin film pinned photodiode

Image Sensors World        Go to the original article...

Full article: https://www.eetimes.eu/imec-taps-pinned-photodiode-to-build-a-better-swir-sensor/

Imec Taps Pinned Photodiode to Build a Better SWIR Sensor

‘Monolithic hybrid’ prototype integrates PPD into the TFT structure to lower the cost of light detection in the nonvisible range, with improved noise performance. 

Silicon-based image sensors can detect light within a limited range of wavelengths and thus have limitations in applications like automotive and medical imaging. Sensors that capture light beyond the visible range, such as short-wave infrared (SWIR), can be built using III-V materials, which combine elements such as gallium, indium, aluminum and phosphorus. But while those sensors perform well, their manufacture requires a high degree of precision and control, increasing their cost.

Research into less expensive alternatives has yielded thin-film absorbers such as quantum-dot (QD) and other organic photodiode (OPD) materials that are compatible with the CMOS readout circuits found in electronic devices, an advantage that has boosted their adoption for IR detection. But thin-film absorbers exhibit higher levels of noise when capturing IR light, resulting in lower image quality. They are also known to have lower sensitivity to IR.

The challenge, then, is to design a cost-effective image sensor that uses thin-film absorbers but offers better noise performance. Imec has taken aim at the problem by revisiting a technology first used in the 1980s to improve noise in early CCD image sensors: the pinned photodiode (PPD).

The PPD structure’s ability to completely remove electrical charges before starting a new capture cycle makes it an efficient approach, as the sensor can reset without unwanted background noise (kTC noise) or any lingering influence from the previous image frame. PPDs quickly became the go-to choice for consumer-grade silicon-based image sensors; their low noise and high power efficiency made them a favorite among camera manufacturers.
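The reset (kTC) noise being eliminated here scales as sqrt(kT·C) on the sense node. A quick order-of-magnitude estimate in electrons; the 5 fF capacitance is illustrative, not a value from the article:

```python
import math

K_B = 1.380649e-23      # Boltzmann constant, J/K
Q_E = 1.602176634e-19   # elementary charge, C

def ktc_noise_electrons(capacitance_f: float, temp_k: float = 300.0) -> float:
    """RMS reset (kTC) noise, in electrons, for a given sense-node capacitance."""
    return math.sqrt(K_B * temp_k * capacitance_f) / Q_E

# e.g. an (assumed) 5 fF floating diffusion at room temperature
print(f"{ktc_noise_electrons(5e-15):.1f} e- rms")  # roughly 28 e-
```

Tens of electrons of uncancelled reset noise would swamp a low-light signal, which is why the PPD's complete charge transfer matters.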

Researchers at imec integrated a PPD structure into thin-film–transistor (TFT) image sensors to yield a hybrid prototype. The sensor structure also uses imec’s proprietary indium gallium zinc oxide (IGZO) technology for electron transport.

“You can call such systems ‘monolithic hybrid’ sensors, where the photodiode is not a part of the CMOS circuit [as in CMOS image sensors, in which silicon is used for light absorption], but is formed with another material as the photoactive layer,” Pawel Malinowski, Pixel Innovations program manager at imec, told EE Times Europe. “The spectrum this photodiode captures is something separate … By introducing an additional thin-film transistor in between, it enables separation of the storage and readout nodes, making it possible to fully deplete the photodiode and transfer all charges to the readout, [thereby] preventing the generation of kTC noise and reducing image lag.”

Unlike the conventional thin-film-based pixel architecture, imec’s TFT hybrid PPD structure introduces a separate thin-film transistor to the design, which acts as a transfer gate alongside a photogate—in other words, it functions as a middleman. Here, imec’s IGZO technology serves as an effective electron transport layer, as it has higher electron mobility. Also acting as the gate dielectric, it contributes to the performance of the sensor by controlling the flow of charges and enhancing absorption characteristics.

With the new elements strategically placed within the traditional PPD structure, the prototype 4T image sensor showed a low readout noise of 6.1 e-, compared with >100 e- for the conventional 3T sensor, demonstrating its superior noise performance, imec stated. Because of IGZO’s large bandgap, the TFT hybrid PPD structure also exhibits lower dark current than traditional CMOS image sensors. This means the image sensor can capture infrared images with less noise, distortion and interference, and with more accuracy and detail, according to imec.
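The 4T advantage comes from correlated double sampling (CDS): with a transfer gate, the reset level can be sampled before the photocharge arrives, so the reset offset cancels in the difference. A toy simulation of that cancellation (all noise figures are illustrative, not imec's measurements):

```python
import random

def read_4t_pixel(signal_e: float, read_noise_e: float = 2.0) -> float:
    """Sketch of a 4T PPD readout with correlated double sampling.
    The reset (kTC) offset appears in both samples and cancels in the
    difference; all numbers are illustrative."""
    ktc_offset = random.gauss(0.0, 30.0)   # reset noise, frozen after reset
    sample_reset = ktc_offset + random.gauss(0.0, read_noise_e)
    # transfer gate pulses: photodiode charge moves to the floating diffusion
    sample_signal = ktc_offset + signal_e + random.gauss(0.0, read_noise_e)
    return sample_signal - sample_reset    # kTC offset cancels exactly

random.seed(0)
estimates = [read_4t_pixel(500.0) for _ in range(2000)]
mean = sum(estimates) / len(estimates)
print(f"mean recovered signal: {mean:.0f} e-")  # close to 500
```

In a 3T pixel there is no transfer gate, so the reset level of the frame being read cannot be sampled this way and the kTC term survives.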


Figure 1: Top (a) and cross-sectional (b) view of structure of TF-PPD pixels


By using thin-film absorbers, imec’s prototype image sensor can detect at SWIR wavelengths and beyond, imec said. Image sensors operating in the near-infrared range are already used in automotive applications and consumer apps like iPhone Face ID. Going to longer wavelengths, such as SWIR, enables better transmission through OLED displays, which leads to better “hiding” of the components behind the screen and reduction of the “notch.”


Malinowski said, “In automotive, going to longer wavelengths can enable better visibility in adverse weather conditions, such as visibility through fog, smoke or clouds, [and achieve] increased contrast of some materials that are hard to distinguish against a dark background—for example, high contrast of textiles against poorly illuminated, shaded places.” Using the thin-film image sensor could make intruder detection and monitoring in dark conditions more effective and cost-efficient. It could also aid in medical imaging, which uses SWIR to study veins, blood flow and tissue properties.


Looking ahead, imec plans to diversify the thin-film photodiodes that can be used in the proposed architecture. The current research has tested for two types of photodiodes: a photodiode sensitive to near-infrared and a QD photodiode sensitive to SWIR.


“Current developments were focused on realizing a proof-of-concept device, with many design and process variations to arrive at a generic module,” Malinowski said. “Further steps include testing the PPD structure with different photodiodes—for example, other OPD and QDPD versions. Furthermore, next-generation devices are planned to focus on a more specific use case, with a custom readout suitable for a particular application.


“SWIR imaging with quantum dots is one of the avenues for further developments and is also a topic with high interest from the imaging community,” Malinowski added. “We are open to collaborations with industrial players to explore and mature this exciting sensor technology.”

Go to the original article...

onsemi announces Hyperlux low power CIS for smart home

Image Sensors World        Go to the original article...

Press release: https://www.onsemi.com/company/news-media/press-announcements/en/onsemi-introduces-lowest-power-image-sensor-family-for-smart-home-and-office

onsemi Introduces Lowest Power Image Sensor Family for Smart Home and Office 

Hyperlux LP Image Sensors can extend battery life by up to 40%¹



What's New: Today onsemi introduced the Hyperlux LP image sensor family ideally suited for industrial and commercial cameras such as smart doorbells, security cameras, AR/VR/XR headsets, machine vision and video conferencing. These 1.4 µm pixel sensors deliver industry-leading image quality and low power consumption while maximizing performance to capture crisp, vibrant images even in difficult lighting conditions.

The product family also features a stacked architecture design that minimizes its footprint and at its smallest approaches the size of a grain of rice, making it ideal for devices where size is critical. Depending on the use case, customers can choose between the 5-megapixel AR0544, the 8-megapixel AR0830 or the 20-megapixel AR2020.

Why It Matters: Home and business owners continue to choose cameras to protect themselves more than any other security measure, with the market expected to triple by the end of the decade.² As a result, consumers are demanding devices that offer better image quality, reliability and longer battery life to improve the overall user experience.

With the image sensors, cameras can deliver clearer images and more accurate object detection even in harsh weather and lighting conditions. Additionally, these cameras are often placed in locations that can be difficult to access to replace or recharge batteries, making low power consumption a critical feature.

How It Works: The Hyperlux LP family is packed with features and proprietary technologies that optimize performance and resolution including:

  •  Wake on Motion – Enables the sensors to operate in a low-power mode that draws a fraction of the power needed in the full-performance mode. Once the sensor detects movement, it moves to a higher performance state in less time than it takes to snap a photo.
  •  Smart ROI – Delivers more than one region of interest (ROI) to give a context view of the scene at reduced bandwidth and a separate ROI in original detail.
  •  Near-Infrared (NIR) Performance – Delivers superior image quality due to the innovative silicon design and pixel architecture, with minimal supplemental lighting.
  •  Low Power – Reduces thermal noise which negatively impacts image quality and eliminates the need for heat sinks, reducing the overall cost of the vision system.
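The press release describes Wake on Motion behaviorally rather than as an API. As a rough sketch, the two power states and the transitions between them might look like this (the type names, function, and timeout are invented for illustration):

```python
from enum import Enum

class Mode(Enum):
    LOW_POWER = "low_power"         # coarse, low-frame-rate motion detection
    FULL_PERFORMANCE = "full"       # full-resolution, full-rate capture

def next_mode(mode: Mode, motion_detected: bool, idle_frames: int,
              idle_timeout: int = 30) -> Mode:
    """Illustrative wake-on-motion policy: wake immediately on motion,
    drop back to low power after a stretch of still frames."""
    if mode is Mode.LOW_POWER and motion_detected:
        return Mode.FULL_PERFORMANCE
    if mode is Mode.FULL_PERFORMANCE and idle_frames >= idle_timeout:
        return Mode.LOW_POWER
    return mode
```

The real sensor implements this transition in hardware; the point of the sketch is simply that the sensor spends most of its life in the cheap state and pays full power only while the scene is active.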

Supporting Quotes:
“By leveraging our superior analog design and pixel architecture, our sensors elevate the two most important elements people consider when buying a device, picture quality and battery life. Our new image sensor family delivers performance that matters with a significantly increased battery life and exquisite, highly detailed images,” said Ross Jatou, senior vice president and general manager, Intelligent Sensing Group, onsemi.

In addition to smart home devices, one of the other applications the Hyperlux LP family can improve is the office meeting experience with more intuitive, seamless videoconferencing solutions.
“Our video collaboration solutions require high-quality image sensors that bring together multiple factors for the best user experience. The superior optical performance, innovative features and extremely low power consumption of the Hyperlux LP image sensors enable us to deliver a completely immersive virtual meeting experience in highly intelligent and optimized videoconferencing systems,” said Ashish Thanawala, Sr. Director of Systems Engineering, Owl Labs.

What's Next: The Hyperlux LP Image Sensor Family will be available in the fourth quarter of 2023.

More Information:
 Learn more about the AR2020, the AR0830 and the AR0544.
 Read the blog: A Closer Look - Hyperlux LP Image Sensors

¹ Based on internal tests conducted under specific conditions. Actual results may vary based on device, usage patterns, and other external factors.
² Status of the CMOS Image Sensor Industry, Yole Intelligence Report, 2023.

Go to the original article...

Tamron 70-180mm f2.8 Di III VC G2 review

Cameralabs        Go to the original article...

The 70-180mm f2.8 Di III VC VXD G2 is the second generation of Tamron’s small and light telephoto zoom. Designed for Sony’s E-mount mirrorless cameras, it adds optical image stabilization plus a highly configurable focus set button, custom switch, and focus ring. Can it improve upon the optical qualities of its predecessor? Find out in my full review!…

Go to the original article...

ESSCIRC 2023 Lecture on "circuit insights" by Dr. Sara Pellegrini

Image Sensors World        Go to the original article...


In this invited talk at ESSCIRC 2023, Dr. Pellegrini shares her insights on circuit and sensor design from her research career at Politecnico di Milano, Heriot-Watt University and now STMicroelectronics. The lecture covers the basics of LiDAR and SPAD sensors, and various design challenges such as low signal strength and background illumination.

Go to the original article...

Canon comprises number one share of press cameras used during Rugby World Cup France 2023

Newsroom | Canon Global        Go to the original article...

Go to the original article...


Job Postings – Week of 5 Nov 2023

Image Sensors World        Go to the original article...

MIT Lincoln Laboratory

Imager Characterization Engineer

Lexington, Massachusetts, USA

Link

Forza Silicon

Director, Business Development

Pasadena, California, USA

Link

ASML

Co-op - Optical Sensor System

Wilton, Connecticut, USA

Link

Heidelberg University

Interdisciplinary Postdoc Fellowships

Heidelberg, Germany

Link

SpaceX

Sr. MEMS Sensor Integration Engineer

Hawthorne, California, USA

Link

Heimann Sensors

Development engineer with a focus on optics or electronics

Eltville am Rhein, Germany

Link

PhotonForce

Digital Design Engineer

Edinburgh, Scotland, UK

Link

University of Valencia

PostDoc - Experiments: LUXE, CALICE

Valencia, Spain

Link

 

Go to the original article...

Dr. Robert Henderson’s lecture on time-of-flight SPAD cameras

Image Sensors World        Go to the original article...


 

Imaging Time: Cameras for the Fourth Dimension

Abstract
Time is often considered the fourth dimension, along with the length, width and depth that form the fabric of space-time. Conventional cameras observe only two of those dimensions, inferring depth from spatial cues, and record time only coarsely relative to many fast phenomena in the natural world. In this talk, I will introduce the concept of time cameras: devices based on single photon avalanche diodes (SPADs) that can record the time dimension of a scene at the picosecond scales commensurate with the speed of light. This talk will chart two decades of my research into these devices, which have been transformed from a research curiosity into a mainstream semiconductor technology, with billions of SPAD devices in consumer use in mobile phones for depth-sensing autofocus-assist. We will illustrate the talk with videos and demonstrations of ultrafast SPAD cameras developed at the University of Edinburgh. I am proud that my group’s research maintains the University’s position at the forefront of imaging technology, which has transformed our lives through the transition from chemical film to digital cameras and the omnipresence of camera phones and video meetings. In the near future, SPAD-based time cameras can also be expected to play a major societal role, within optical radars (LIDARs) for robotic vision and driverless cars, in surgical guidance for cancer, and perhaps even to add two further dimensions to the phone camera in your pocket!
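The picosecond timescales in the abstract map directly onto depth resolution through the direct time-of-flight relation d = c·t/2 (the factor of two accounts for the out-and-back path). A minimal sketch:

```python
C = 299_792_458.0  # speed of light, m/s

def tof_distance_m(round_trip_s: float) -> float:
    """Distance implied by a round-trip (out-and-back) photon flight time."""
    return C * round_trip_s / 2.0

# ~100 ps of timing resolution corresponds to ~1.5 cm of depth resolution
print(f"{tof_distance_m(100e-12) * 100:.1f} cm")
```

Equivalently, a target 1 m away returns photons after roughly 6.7 ns, which is why SPAD timestamping at picosecond granularity is what makes centimeter-scale depth maps possible.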

Biography
Robert K. Henderson is a Professor of Electronic Imaging in the School of Engineering at the University of Edinburgh. He obtained his PhD in 1990 from the University of Glasgow. From 1991, he was a research engineer at the Swiss Centre for Microelectronics, Neuchatel, Switzerland. In 1996, he was appointed senior VLSI engineer at VLSI Vision Ltd, Edinburgh, UK where he worked on the world’s first single chip video camera. From 2000, as principal VLSI engineer in STMicroelectronics Imaging Division he developed image sensors for mobile phone applications. He joined University of Edinburgh in 2005, designing the first SPAD image sensors in nanometer CMOS technologies in the MegaFrame and SPADnet EU projects. This research activity led to the first volume SPAD time-of-flight products in 2013 in the form of STMicroelectronics FlightSense series, which perform an autofocus-assist now present in over 1 billion smartphones. He benefits from a long-term research partnership with STMicroelectronics in which he explores medical, scientific and high speed imaging applications of SPAD technology. In 2014, he was awarded a prestigious ERC advanced fellowship. He is an advisor to Ouster Automotive and a Fellow of the IEEE and the Royal Society of Edinburgh.

Go to the original article...

Canon RF 200-800mm f6.3-9 review so far

Cameralabs        Go to the original article...

The Canon RF 200-800 f6.3-9 IS USM is a super-telephoto zoom for EOS R cameras. Could it be ideal for distant wildlife and sports? Find out in my review so far!…

Go to the original article...

Canon RF 24-105mm f2.8L IS USM Z review so far

Cameralabs        Go to the original article...

The Canon RF 24-105 f2.8L IS USM Z is a high-end zoom aimed at those who shoot photos and video. It looks set to be popular at events and weddings. Here's my review so far.…

Go to the original article...

Canon RF-S 10-18mm f4.5-6.3 review so far

Cameralabs        Go to the original article...

The Canon RF-S 10-18mm f4.5-6.3 IS STM is a compact ultra-wide angle zoom for EOS R cameras with cropped APS-C sensors. Find out more in my review so far!…

Go to the original article...

Image Sensing Topics at Upcoming IEDM 2023 Dec 9-13 in San Francisco

Image Sensors World        Go to the original article...

The 69th annual IEEE International Electron Devices Meeting (IEDM) will be held in San Francisco Dec. 9-13. This year there are three sessions dealing with advanced image sensing topics. You can find summaries of all of these papers by going here (https://submissions.mirasmart.com/IEDM2023/Itinerary/EventsAAG.aspx) and then clicking on the relevant sessions and papers within each one:
 
Session #8 on Monday, Dec. 11 is “Advanced Photonics for Image Sensors and High-Speed Communications.” It features six papers. The first three deal with device and integration concepts for sub-diffraction color filters targeting imaging key performance indicators, while the remaining three deal with devices and technologies for high-speed communication systems.

  1.  IMEC will describe a novel sub-micron integration approach to color-splitting, to match human eye color sensitivity.
  2.  VisEra Technologies will describe the use of nano-light pillars to improve the quantum efficiency and signal-to-noise ratio (SNR) of color filters on CMOS imaging arrays under low-light conditions.
  3.  Samsung will detail a metasurface nano-prism structure for wide field-of-view lenses, demonstrating 25% higher sensitivity and 1.2 dB increased SNR vs. conventional micro-lenses.
  4.  National University of Singapore will describe the integration of ferroelectric material into a LiNbO3-on-insulator photonic platform, demonstrating non-volatile memory and high-efficiency modulators with an efficiency of 66 pm/V.
  5.  IHP will discuss the first germanium electro-optical modulator operating at 100 GHz in a SiGe BiCMOS photonics technology.
  6.  An invited paper from Intel will discuss the first 256 Gbps WDM transceiver with eight 200 GHz-spaced wavelengths simultaneously modulated at 32 Gbps, and with a bit-error-rate less than 1e-12.

 
Session #20 on Tuesday, Dec. 12 is “Emerging Photodetectors.” It features five papers describing recent developments in emerging photodetectors spanning the MIR to the DUV spectral range, and from group IV and III-V sensors to organic detectors.

  1.  The first paper by KAIST presents a fully CMOS-compatible Ge-on-Insulator platform for detection of wavelengths beyond 4 µm.
  2.  The second paper by KIST (not a typo) presents a new record-low-jitter SPAD device integrated into a CIS process technology, covering a spectral range of visible up to NIR.
  3.  The third paper by KAIST describes a wavelength-tunable detection device combining optical gratings and phase-change materials, reaching wavelengths up to 1700 nm.
  4.  The University of Science and Technology of China will report on a dual-function tunable emitter and NIR photodetector combination based on III-V GaN/AlGaN nanowires on silicon.
  5.  An invited paper from France’s CNRS gives an overview on next-generation sustainable organic photodetectors and emitters.

 
Session #40 on Wednesday, Dec. 13 features six papers describing the most recent advances in image sensors.

  1.  Samsung will describe a 0.5 µm pixel, 3 layers-stacked, CMOS image sensor (CIS) with in-pixel Cu-Cu bonding technology featuring improved conversion gain and noise.
  2.  Omnivision will present a 2.2 µm-2 layer stacked high dynamic range VDGS CIS with 1x2 shared structure offering dual conversion gain and achieving low FPN.
  3.  STMicroelectronics will describe a 2.16 µm 6T BSI VDGS CIS using deep trench capacitors and achieving 90 dB dynamic range using spatially-split exposure.
  4.  Meta will describe a 2-megapixel CIS with a 4.23 µm pixel pitch, offering a block-parallel A/D architecture and featuring programmable sparse capture with a fine-grained gating scheme for power saving.
  5.  Canon will introduce a new twisted-photodiode CIS structure with a 6 µm pixel pitch, enabling all-directional autofocus for high speed and accuracy and 95 dB DR.
  6.  Shanghai Jiao Tong University will present a 64x64-pixel organic imager prototype, based on a novel hole transporting layer (HTL)-free structure achieving the highest recorded low-light performance.

 
Full press release about the conference is below.

2023 IEEE International Electron Devices Meeting to Highlight Advances in Critical Semiconductor Technologies with the Theme, “Devices for a Smart World Built Upon 60 Years of CMOS”

Four Focus Sessions on topics of intense research interest:

  •  3D Stacking for Next-Generation Logic & Memory by Wafer Bonding and Related Technologies
  •  Logic, Package and System Technologies for Future Generative AI
  •  Neuromorphic Computing for Smart Sensors
  •  Sustainability in Semiconductor Device Technology and Manufacturing

 
SAN FRANCISCO, CA – Since it began in 1955, the IEEE International Electron Devices Meeting (IEDM) has been where the world’s best and brightest electronics technologists go to learn about the latest breakthroughs in semiconductor and related technologies. That tradition continues this year, when the 69th annual IEEE IEDM conference takes place in-person December 9-13, 2023 at the Hilton San Francisco Union Square hotel, with online access to recorded content available afterward.
 
The 2023 IEDM technical program, supporting the theme, “Devices for a Smart World Built Upon 60 Years of CMOS,” will consist of more than 225 presentations plus a full slate of panels, Focus Sessions, Tutorials, Short Courses, a career luncheon, supplier exhibit and IEEE/EDS award presentations.
 
“The IEDM offers valuable insights into where the industry is headed, because the leading-edge work presented at the conference showcases major trends and paradigm shifts in key semiconductor technologies,” said Jungwoo Joh, IEDM 2023 Publicity Chair and Process Development Manager at Texas Instruments. “For example, this year many papers discuss ways to stack devices in 3D configurations. This is of course not new, but two things are especially noteworthy about this work. One is that it isn’t just happening with conventional logic and memory devices, but with sensors, power, neuromorphic and other devices as well. Also, many papers don’t describe futuristic laboratory studies, but rather specific hardware demonstrations that have generated solid results, opening pathways to commercial feasibility.”
 
“Finding the right materials and device configurations to develop transistors that will perform well with acceptable levels of reliability remains a key challenge,” said Kang-ill Seo, IEDM 2023 Publicity Vice Chair and Vice President, Semiconductor R&D, Samsung Semiconductor. “This year’s program shows that electrothermal considerations remain a key focus, particularly with attempts to add functionality to a chip’s interconnect, or wiring, which is fabricated using low-temperature processes.”
 
Here are details of the 2023 IEEE International Electron Devices Meeting:
 
Tutorial Sessions – Saturday, Dec. 9
The Saturday tutorial sessions on emerging technologies are presented by experts in the field to bridge the gap between textbook-level knowledge and leading-edge current research, and to introduce attendees to new fields of interest. There are three time slots, each with two tutorials running in parallel:
1:30 p.m. - 2:50 p.m.
• Innovative Technology for Beyond 2 nm, Matthew Metz, Intel
• CMOS+X: Functional Augmentation of CMOS for Next-Generation Electronics, Sayeef Salahuddin, UC-Berkeley
3:05 p.m. - 4:25 p.m.
• Reliability Challenges of Emerging FET Devices, Jacopo Franco, Imec
• Advanced Packaging and Heterogeneous Integration - Past, Present & Future, Madhavan Swaminathan, Penn State
4:40 p.m. - 6:00 p.m.
• Synapses, Circuits, and Architectures for Analog In-Memory Computing-Based Deep Neural Network Inference Hardware Acceleration, Irem Boybat, IBM
• Tools for Device Modeling: From SPICE to Scientific Machine Learning, Keno Fischer, JuliaHub
 
Short Courses – Sunday, Dec. 10
In contrast to the Tutorials, the full-day Short Courses are focused on a single technical topic. They offer the opportunity to learn about important areas and developments, and to network with global experts.

• Transistor, Interconnect, and Chiplets for Next-Generation Low-Power & High-Performance Computing, organized by Yuri Y. Masuoka, Samsung

  •  Advanced Technology Requirement for Edge Computing, Jie Deng, Qualcomm
  •  Process Technology toward 1nm and Beyond, Tomonari Yamamoto, Tokyo Electron
  •  Empowering Platform Technology with Future Semiconductor Device Innovation, Jaehun Jeong, Samsung
  •  Future Power Delivery Process Architectures and Their Capability and Impact on Interconnect Scaling, Kevin Fischer, Intel
  •  DTCO/STCO in the Era of Vertical Integration, YK Chong, ARM
  •  Low Power SOC Design Trends/3D Integration/Packaging for Mobile Applications, Milind Shah, Google

 
• The Future of Memory Technologies for High-Performance Memory and Computing, organized by Ki Il Moon, SK Hynix

  •  High-Density and High-Performance Technologies for Future Memory, Koji Sakui, Unisantis Electronics Singapore/Tokyo Institute of Technology
  •  Advanced Packaging Solutions for High Performance Memory and Compute, Jaesik Lee, SK Hynix
  •  Analog In-Memory Computing for Deep Learning Inference, Abu Sebastian, IBM
  •  The Next Generation of AI Architectures: The Role of Advanced Packaging Technologies in Enabling Heterogeneous Chiplets, Raja Swaminathan, AMD
  •  Key Challenges and Directional Path of Memory Technology for AI and High-Performance Computing, Keith Kim, NVIDIA
  •  Charge-Trapping Memories: From the Fundamental Device Physics to 3D Memory Architectures (3D NAND, 3D NOR, 3D DRAM) and Computing in Memory (CIM), Hang-Ting (Oliver) Lue, Macronix

 
Plenary Presentations – Monday, Dec. 11

  •  Redefining Innovation: A Journey forward in the New Dimension Era, Siyoung Choi, President & GM, Samsung Foundry Business, Device Solutions Division
  •  The Next Big Thing: Making Memory Magic and the Economics Beyond Moore's Law, Thy Tran, Vice President of Global Frontend Procurement, Micron
  •  Semiconductor Challenges in the 5G and 6G Technology Platforms, Björn Ekelund, Corporate Research Director, Ericsson

 
Evening Panel Session – Tuesday evening, Dec. 12
The IEDM evening panel session is an interactive forum where experts give their views on important industry topics, and audience participation is encouraged to foster an open exchange of ideas. This year’s panel will be moderated by Dan Hutcheson, Vice Chair at Tech Insights.

  •  AI: Semiconductor Catalyst? Or Disrupter? Artificial Intelligence (AI) has long been a hot topic. In 2023 it became super-heated when large language models became readily available to the public. This year’s IEDM will not rehash what’s been dragged through media. Instead, it will bring together industry experts to have a conversation about how AI is changing the semiconductor industry and to ask them how they are using AI to transform their efforts. The topics will be wide-ranging, from how AI will drive demand for semiconductors, to how it’s changing design and manufacturing, and even to how it will change the jobs and careers of those working in it.

 
Luncheon – Tuesday, Dec. 12
There will be a career-focused luncheon featuring industry and scientific leaders talking about their personal experiences in the context of career growth. The discussion will be moderated by Jennifer Zhao, President/CEO, ams OSRAM USA Inc. The speakers will be:

  •  Ilesanmi Adesida, University Provost and Acting President, Nazarbayev University, Kazakhstan -- Professor Ilesanmi Adesida is a scientist/engineer and an experienced administrator in both scientific and educational circles, with more than 350 peer-reviewed articles/250 presentations at international conferences.
  •  Isabelle Ferain, Vice-President of Technology Development, GlobalFoundries -- Dr. Ferain oversees GF’s technology development mission in its 300mm fabs in the US and Europe.

 
Vendor Exhibition/MRAM Poster Session/MRAM Global Innovation Forum

  •  A vendor exhibition will be held once again.
  •  A special poster session dedicated to MRAM (magnetoresistive RAM memory) will take place during the IEDM on Tuesday, Dec. 12 from 2:20 pm to 5:30 p.m., sponsored by the IEEE Magnetics Society.
  •  Also sponsored by the IEEE Magnetics Society, the 15th MRAM Global Innovation Forum will be held in the same venue after the IEDM conference concludes, on Thursday, Dec. 14.

 
For registration and other information, visit www.ieee-iedm.org.
 
 
About IEEE & EDS
IEEE is the world’s largest technical professional organization dedicated to advancing technology for the benefit of humanity. Through its highly cited publications, conferences, technology standards, and professional and educational activities, IEEE is the trusted voice on a wide variety of areas ranging from aerospace systems, computers, and telecommunications to biomedical engineering, electric power, and consumer electronics. The IEEE Electron Devices Society is dedicated to promoting excellence in the field of electron devices, and sponsors the IEEE IEDM.

Go to the original article...

Canon’s VR imaging lens wins the Bronze Award at the Design for Asia Awards 2023

Newsroom | Canon Global        Go to the original article...

Go to the original article...
