Archives for April 2018

Anitoa Ultra-low Light Bio-optical Sensor in Volume Production

Image Sensors World        Go to the original article...

PRNewswire: Anitoa, a Menlo Park, CA startup founded in 2012, announces volume production of its ultra-low-light CMOS bio-optical sensor, the ULS24. Capable of detecting light levels down to 3×10⁻⁶ lux, the Anitoa ULS24 is said to be "the world's most sensitive image sensor manufactured with proven low-cost CMOS image sensor (CIS) process technology."

Until now, molecular testing, such as DNA or RNA assays, and immunoassay testing (e.g. ELISA) have relied on traditional bulky and expensive photomultiplier tube (PMT) or cooled CCD technologies. "Following the trend of CMOS image sensors replacing CCDs in consumer cameras, many customers are exploring this CMOS bio-optical sensor to replace CCD or PMT designs for new products," says Anitoa SVP Yuping Chung. With the Anitoa ULS24 now in volume production, its low-light sensitivity is said to rival the PMTs and CCDs used in molecular and immunoassay testing devices. The ULS24 achieves this level of sensitivity through an innovative temperature-compensated dark current management algorithm.
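
Anitoa has not published the algorithm's details, so as an illustration only: a common rule of thumb is that dark current roughly doubles every ~6 °C, and a temperature-compensated correction can scale a reference dark frame accordingly before subtraction. The doubling interval and all values below are assumptions:

```python
import numpy as np

# Illustrative sketch only -- not Anitoa's published algorithm.
# Assumption: dark current roughly doubles every 6 degC.
DOUBLING_TEMP_C = 6.0

def compensate_dark(raw, dark_ref, t_ref_c, t_now_c):
    """Subtract a reference dark frame scaled for the current die temperature."""
    scale = 2.0 ** ((t_now_c - t_ref_c) / DOUBLING_TEMP_C)
    return raw - dark_ref * scale

# Toy example: dark reference captured at 25 degC, frame read out at 31 degC,
# where the dark signal has doubled from 10 to 20 counts.
dark_ref = np.full((4, 4), 10.0)
raw = np.full((4, 4), 120.0)   # true signal (100) + doubled dark signal (20)
corrected = compensate_dark(raw, dark_ref, 25.0, 31.0)
print(corrected[0, 0])  # 100.0
```

Without the temperature term, subtracting the unscaled reference would leave half of the doubled dark signal in the image.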

Cadence Presents Tensilica Vision Q6 DSP

Cadence announces the Tensilica Vision Q6 DSP, its latest DSP for embedded vision and AI built on a new, faster processor architecture. The fifth-generation Vision Q6 DSP offers 1.5X greater vision and AI performance than its predecessor, the Vision P6 DSP, and 1.25X better power efficiency at the Vision P6 DSP’s peak performance. The Vision Q6 DSP is targeted for embedded vision and on-device AI applications in the smartphone, surveillance camera, automotive, augmented reality (AR)/virtual reality (VR), drone and robotics markets.

NIT Announces HDR Sensor with LED Flicker Mitigation

NIT presents its NSC1701 HDR global shutter CMOS sensor, featuring a new light flicker mitigation mode and a 12-bit digital output.
  • 1280 x 1024 Pixels Resolution (1.3 MP)
  • 6.8µm Pixel Pitch
  • On-Chip 12-bit ADC
  • Light Flicker Mitigation
  • Color or Monochrome
With its new flicker mitigation mode, the NSC1701 sensor is aimed at industrial and emerging embedded applications as well as the automotive market. Engineering samples are available now, and mass production is planned for June 2018.

ABI Research: Face Recognition is 5x Easier to Spoof than Fingerprint Recognition

ABI Research: “Face recognition on smartphones is five times easier to spoof than fingerprint recognition,” stated ABI Research Industry Analyst Dimitrios Pavlakis. “Despite the decision to forgo its trademark sapphire sensor in the iPhone X in favor of face recognition (Face ID), Apple may now be forced to return to fingerprints in the next iPhone,” added Pavlakis.

ABI also comments on the Synaptics under-display fingerprint sensor in the Vivo X20+ smartphone:

“Vivo may have been cautious to fully commit to the new technology and left room to fall back to a traditional sensor below the display,” said Jim Mielke, ABI Research’s VP of the Teardowns service. “The performance of this first implementation does warrant some caution as the sensor seemed less responsive and required increased pressure to unlock the phone.”

Samsung 0.9um TetraCell Sensor Reverse Engineered

TechInsights publishes a reverse engineering analysis of Samsung's first 0.9µm pixel sensor, found in the Vivo V7+ smartphone's 24MP selfie camera:

"We did not expect to see the Samsung S5K2X7SP image sensor until Q2, 2018…but here it is in the Vivo V7+."

Omnivision Announces 8MP 2um Nyxel Sensor

PRNewswire: OmniVision announces the 8MP OS08A20, equipped with 2µm pixels featuring Nyxel NIR technology. The OS08A20 is the first sensor to combine Nyxel technology with the PureCel pixel architecture.

"With 8 megapixels of resolution and our industry-leading Nyxel technology, the OS08A20 allows surveillance cameras to capture accurate and detailed images at night, without the need for high-power LEDs," said Brian Fang, Business Development Director at OmniVision. "With such capabilities, this sensor also fills a need in emerging applications, such as video analytics, where accurate object and facial recognition is aided by higher resolution and sensitivity."

Demand for surveillance cameras continues to grow, with well over 125 million such cameras expected to ship globally in 2018, according to IHS Markit. Other applications with similar requirements, such as body-worn cameras for law enforcement, represent an additional growth opportunity.

Nyxel technology delivers a QE improvement at 850nm and 940nm while maintaining a high modulation transfer function, allowing the OS08A20 to monitor a larger area compared with legacy technologies. Eliminating the need for external lighting sources reduces power consumption and enables covert surveillance for improved security. The OS08A20 is also a color CMOS image sensor, employing the PureCel pixel architecture with BSI to capture color images during the daytime.

The OS08A20 is currently sampling and is expected to start volume production in Q2 2018.

Omnivision Proposes Adding Shield Bumps to Pixel Level Interconnect

Omnivision patent application US20180097030 "Stacked image sensor with shield bumps between interconnects" by Sohei Manabe, Keiji Mabuchi, Takayuki Goto, Vincent Venezia, Boyd Albert Fowler, and Eric A. G. Webster reduces coupling in a pixel-level interconnected stacked sensor:

"One of the challenges presented with conventional stacked image sensors is the unwanted capacitive coupling that exists between the adjacent interconnection lines between the first and second dies of the stacked image sensors that connect the photodiodes to the pixel support circuits. The capacitive coupling between the adjacent interconnection lines can cause interference or result in other unwanted consequences between adjacent interconnection lines when reading out image data from the photodiodes."


"As such, there are also shield bumps 520 disposed between adjacent interconnection lines 518 along each of the diagonals A-A′ and/or B-B′ of the pixel array of stacked imaging system 500 in accordance with the teachings of the present invention. As such, when every other pixel cell in two rows of the pixel array included in stacked imaging system 500 are read out at a time, there is a shield bump 520 disposed between the corresponding interconnect lines 518 in accordance with the teachings of the present invention. With a shield bump 520 disposed between adjacent interconnection lines 518, the coupling capacitance is eliminated to reduce unwanted interference, crosstalk, and the like, during readouts of stacked image sensor 500 in accordance with the teachings of the present invention."
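
A first-order circuit model (an illustration, not taken from the patent application) shows the effect: the crosstalk coupled onto a victim line is roughly the capacitive divider between the line-to-line coupling capacitance Cc and the victim line's capacitance to ground Cg, and a grounded shield between the lines collapses Cc. The capacitance values below are arbitrary assumptions:

```python
# Illustrative first-order crosstalk model -- not from the patent itself.
# A voltage step on an aggressor line couples onto the victim line through
# the divider formed by the line-to-line capacitance Cc and the victim's
# capacitance to ground Cg.

def crosstalk_fraction(c_couple_fF, c_ground_fF):
    """Fraction of an aggressor voltage step appearing on the victim line."""
    return c_couple_fF / (c_couple_fF + c_ground_fF)

# Unshielded: assumed 0.5 fF line-to-line coupling, 5 fF to ground.
unshielded = crosstalk_fraction(0.5, 5.0)
print(unshielded)   # ~0.091, i.e. ~9% of the aggressor swing

# A grounded shield bump between the lines diverts most of the direct
# line-to-line capacitance to ground (here assumed to leave 0.05 fF).
shielded = crosstalk_fraction(0.05, 5.0)
print(shielded)     # ~0.0099, i.e. ~1%
```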

Image Sensors at 2018 VLSI Symposia

The VLSI Symposia, to be held June 18-22 in Honolulu, Hawaii, has published its official Circuits and Technology programs. In total, there are 8 image sensor papers:
  • C7‐1 A 252 × 144 SPAD Pixel FLASH LiDAR with 1728 Dual‐clock 48.8 ps TDCs, Integrated Histogramming and 14.9‐to‐1 Compression in 180nm CMOS Technology,
    S. Lindner, C. Zhang*, I. Antolovic*, M. Wolf**, E. Charbon***,
    EPFL/University of Zurich, *TUDelft, **University of Zurich, ***EPFL/TUDelft 
  • C7‐2 A 220 m‐Range Direct Time‐of‐Flight 688 × 384 CMOS Image Sensor with Sub‐Photon Signal Extraction (SPSE) Pixels Using Vertical Avalanche Photo Diodes and 6 kHz Light Pulse Counters,
    S. Koyama, M. Ishii, S. Saito, M. Takemoto, Y. Nose, A. Inoue, Y. Sakata, Y. Sugiura, M. Usuda, T. Kabe, S. Kasuga, M. Mori, Y. Hirose, A. Odagawa, T. Tanaka,
    Panasonic Corporation 
  • C7‐3 Multipurpose, Fully‐Integrated 128x128 Event‐Driven MD‐SiPM with 512 16‐bit TDCs with 45 ps LSB and 20 ns Gating,
    A. Carimatto, A. Ulku, S. Lindner*, E. D’Aillon, S. Pellegrini**, B. Rae**, E. Charbon*,
    TU Delft, *EPFL, **ST Microelectronics 
  • C7‐4 A Two‐Tap NIR Lock‐In Pixel CMOS Image Sensor with Background Light Cancelling Capability for Non‐Contact Heart Rate Detection,
    C. Cao, Y. Shirakawa, L. Tan, M. W. Seo, K. Kagawa, K. Yasutomi, T. Kosugi*, S. Aoyama*, N. Teranishi, N. Tsumura**, S. Kawahito,
    Shizuoka University, *Brookman Technology, **Chiba University
  • T7‐2 An Over 120 dB Wide‐Dynamic‐range 3.0 μm Pixel Image Sensor with In‐pixel Capacitor of 41.7 fF/µm2 and High Reliability Enabled by BEOL 3D Capacitor Process,
    M. Takase, S. Isono, Y. Tomekawa, T. Koyanagi, T. Tokuhara, M. Harada, Y. Inoue,
    Panasonic Corporation
  • T15‐4 Next‐generation Fundus Camera with Full Color Image Acquisition in 0‐lx Visible Light by 1.12‐micron Square Pixel, 4K, 30‐fps BSI CMOS Image Sensor with Advanced NIR Multi‐spectral Imaging System,
    H. Sumi, T. Takehara*, S. Miyazaki*, D. Shirahige*, K. Sasagawa*, T. Tokuda*, Y. Watanabe*, N. Kishi, J. Ohta*, M. Ishikawa,
    The University of Tokyo, *NAIST
  • T15‐2 A Near‐ & Short‐Wave IR Tunable InGaAs Nanomembrane PhotoFET on Flexible Substrate for Lightweight and Wide‐Angle Imaging Applications,
    Y. Li, A. Alian*, L. Huang, K. Ang, D. Lin*, D. Mocuta*, N. Collaert*, A. V.-Y. Thean,
    National University of Singapore, *IMEC
  • C23‐2 A 2pJ/pixel/direction MIMO Processing based CMOS Image Sensor for Omnidirectional Local Binary Pattern Extraction and Edge Detection,
    X. Zhong, Q. Yu, A. Bermak**, C.‐Y. Tsui, M.‐K. Law*,
    Hong Kong University of Science and Technology, *University of Macau, **also with Hamad Bin Khalifa University

Luminar Acquires Black Forest Engineering

Optics.org, Techcrunch: Colorado Springs-based image sensor and ROIC design house Black Forest Engineering has been acquired by LiDAR startup Luminar:

"“This year for us is all about scale. Last year it took a whole day to build each unit — they were being hand assembled by optics PhDs,” said Luminar’s wunderkind founder Austin Russell. “Now we’ve got a 136,000 square foot manufacturing center and we’re down to 8 minutes a unit.”

...the production unit is about 30 percent lighter and more power efficient, can see a bit further (250 meters vs 200), and detect objects with lower reflectivity (think people wearing black clothes in the dark).

The secret is the sensor. Most photosensors in other lidar systems use a silicon-based photodetector. Luminar, however, decided to start from the ground up with InGaAs.

The problem is that indium gallium arsenide is like the Dom Perignon of sensor substrates. It’s expensive as hell and designing for it is a highly specialized field. Luminar only got away with it by minimizing the amount of InGaAs used: only a tiny sliver of it is used where it’s needed, and they engineered around that rather than use the arrays of photodetectors found in many other lidar products. (This restriction goes hand in glove with the “fewer moving parts” and single laser method.)

Last year Luminar was working with a company called Black Forest Engineering to design these chips, and finding their paths inextricably linked, Luminar bought them. The 30 employees at Black Forest, combined with the 200 hired since coming out of stealth, brings the company to 350 total.

By bringing the designers in house and building their own custom versions of not just the photodetector but also the various chips needed to parse and pass on the signals, they brought the cost of the receiver down from tens of thousands of dollars to… three dollars.

“We’ve been able to get rid of these expensive processing chips for timing and stuff,” said Russell. “We build our own ASIC. We only take like a speck of InGaAs and put it onto the chip. And we custom fab the chips.”

“This is something people have assumed there was no way you could ever scale it for production fleets,” he continued. “Well, it turns out it doesn’t actually have to be expensive!”
"


Update: IEEE Spectrum publishes a larger image of Luminar's InGaAs sensors:

Spectral Edge Raises $5.3M

Remember the times when ISP startups were popular - Nethra, Insilica, Alphamosaic, Nucore, Atsana, Mtekvision, etc.? With AI and machine learning in fashion, those times might come back. EETimes reports that UK-based Spectral Edge has raised $5.3M. The startup bets on fusion of IR and RGB images, claiming improved image quality:

SWIR Camera Market

Esticast Research and Consulting publishes its Shortwave Infrared (SWIR) Camera Market report. The key findings in the report:

  • North America held the largest chunk of market share in 2016 owing to rapid technical development and increasing applications.
  • China and other Asian countries are expected to grow the fastest during the forecast period.
  • Area cameras held more than 50% of the global market share. However, linear cameras are expected to grow with the fastest growth rate of 8.31% during the forecast period.
  • Optical communication dominated the global market in 2017, holding nearly 3/7th of the global market.
  • Aerial SWIR cameras are expected to witness the highest CAGR of 10.03% during the forecast period.

SWIR Camera Market

Qualcomm Unveils Vision Intelligence Platform

PRNewswire: Qualcomm announces its Vision Intelligence Platform featuring the company's first family of SoCs for IoT, built in a 10nm FinFET process. The QCS605 and QCS603 SoCs deliver computing for on-device camera processing and machine learning across a wide range of IoT applications. The SoCs integrate Qualcomm's most advanced ISP to date and an Artificial Intelligence (AI) Engine, along with a heterogeneous compute architecture including an ARM-based multicore CPU, vector processor and GPU. The Vision Intelligence Platform also includes Qualcomm Technologies' advanced camera processing software, machine learning and computer vision SDKs, as well as connectivity and security technologies.

"Our goal is to make IoT devices significantly smarter as we help customers bring powerful on-device intelligence, camera processing and security. AI is already enabling cameras with object detection, tracking, classification and facial recognition, robots that avoid obstacles autonomously, and action cameras that learn and generate a video summary of your latest adventure, but this is really just the beginning," said Joseph Bousaba, VP, product management, Qualcomm. "The Qualcomm Vision Intelligence Platform is the culmination of years of advanced research and development that brings together breakthrough advancements in camera, on-device AI and heterogeneous computing. The platform is a premier launchpad for manufacturers and developers to create a new world of intelligent IoT devices."

The Vision Intelligence Platform supports up to 4K video resolution at 60 fps, or 5.7K at 30 fps, as well as multiple concurrent video streams at lower resolutions. The platform integrates a dual 14-bit Spectra 270 ISP supporting dual 16 MP sensors. In addition, the Vision Intelligence Platform includes vision processing capabilities necessary for IoT segments such as staggered HDR to prevent the "ghost" effect in HDR video, electronic image stabilization, de-warp, de-noise, chromatic aberration correction, and motion compensated temporal filters in hardware.

The QCS605 and QCS603 are sampling now.

SPAD-based HDR Imaging

MDPI Sensors keeps publishing expanded papers from the 2017 International Image Sensor Workshop. ST Micro and the University of Edinburgh present "High Dynamic Range Imaging at the Quantum Limit with Single Photon Avalanche Diode-Based Image Sensors" by Neale A.W. Dutton, Tarek Al Abbas, Istvan Gyongy, Francescopaolo Mattioli Della Rocca, and Robert K. Henderson.

"This paper examines methods to best exploit the High Dynamic Range (HDR) of the single photon avalanche diode (SPAD) in a high fill-factor HDR photon counting pixel that is scalable to megapixel arrays. The proposed method combines multi-exposure HDR with temporal oversampling in-pixel. We present a silicon demonstration IC with 96 × 40 array of 8.25 µm pitch 66% fill-factor SPAD-based pixels achieving >100 dB dynamic range with 3 back-to-back exposures (short, mid, long). Each pixel sums 15 bit-planes or binary field images internally to constitute one frame providing 3.75× data compression, hence the 1k frames per second (FPS) output off-chip represents 45,000 individual field images per second on chip. Two future projections of this work are described: scaling SPAD-based image sensors to HDR 1 MPixel formats and shrinking the pixel pitch to 1–3 µm."
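
The in-pixel oversampling described above can be illustrated with a small simulation, assuming each bit-plane is a 1-bit record of whether at least one photon was detected in its sub-exposure (the photon rate and array size below are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def spad_frame(mean_photons_per_field, n_fields=15, n_pixels=10_000):
    # Photon arrivals per sub-exposure are Poisson distributed; a 1-bit SPAD
    # field only records whether at least one photon was detected.
    photons = rng.poisson(mean_photons_per_field, size=(n_fields, n_pixels))
    bits = (photons >= 1).astype(np.uint8)
    # The pixel sums the bit-planes on chip, giving a 0..15 count per frame.
    return bits.sum(axis=0)

counts = spad_frame(0.2)
# Expected mean count: 15 * (1 - exp(-0.2)), which is about 2.72
print(counts.mean())
```

The saturating (1 - exp(-λ)) response of each bit-plane is also what gives the binary pixel its soft overexposure roll-off, which the multi-exposure scheme then extends further.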

Dual Camera Phone Market

IFNews quotes an Isaiah Research forecast on the dual camera mobile phone market:

"Even though the growth of overall smartphone shipments only reaches 2%~3% in 2017, Isaiah Research is still optimistic about the dual camera market growth in 2018. They expect that the dual camera smartphone shipments may reach 450M units in 2018, showing 66.7% YoY growth."
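
The forecast numbers are self-consistent; working backwards from the 2018 figure gives the implied 2017 base:

```python
# Back out the implied 2017 dual-camera shipments from the quoted forecast:
# 450M units in 2018 at 66.7% year-over-year growth.
shipments_2018_m = 450
yoy_growth = 0.667

implied_2017_m = shipments_2018_m / (1 + yoy_growth)
print(round(implied_2017_m))  # ~270 (million units in 2017)
```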

Photon Shot Noise in Photolithography

IEEE Spectrum says that photon shot noise is becoming a major limiting factor in EUV photolithography. The EUV wavelength of 13.5nm is much shorter than the commonly used 193nm, meaning there are far fewer photons per watt of light power.
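
The photon-budget argument is easy to verify: photon energy is E = hc/λ, so the number of photons per joule scales linearly with wavelength:

```python
# Photons delivered per joule of optical energy scale with wavelength,
# since each photon carries E = h*c/lambda.
H = 6.62607015e-34   # Planck constant, J*s
C = 2.99792458e8     # speed of light, m/s

def photons_per_joule(wavelength_m):
    return wavelength_m / (H * C)

duv = photons_per_joule(193e-9)   # ArF DUV lithography
euv = photons_per_joule(13.5e-9)  # EUV lithography
print(duv / euv)  # ~14.3x fewer photons per watt at 13.5 nm
```

With ~14x fewer photons for the same dose, the relative shot-noise fluctuation at each spot on the wafer grows accordingly.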

"...when Imec engineers began producing experimental features for 5-nm chips last year, they noticed many more defects than they’d expected.

They built rows of trenches of the kind that would form a chip’s wiring once filled with metal and arrays of holes that would become the contacts from the wiring to the parts of the transistors below. But there were “nanobridges” between the trenches, holes that were missing, and holes that had merged with their neighbors, Ronse says. Such random snafus are collectively called stochastic defects.

What causes them? ...They can be caused by what’s called photon shot noise. It’s the fact that there are few photons falling on the chip and you just don’t always get the same number at every spot on the chip.

Another culprit is likely a sort of chemical version of shot noise. The photoactive chemicals in the photoresist may not be perfectly uniformly distributed on the wafer, leaving spots that act as though they are underexposed.
"

Noisy EUV image

ULIS Breaks Bolometric Pixel Speed-Resolution Trade-offs

ALA News: ULIS says it has achieved unmatched performance with its bolometer sensor, which now meets the speeds required by fast imaging applications. It has achieved a response time four times faster than standard bolometers without any trade-off in sensitivity. The company says it is the first to overcome this technological challenge in its aim to significantly raise the overall figure of merit (FoM) of bolometers.

“ULIS is thrilled with the outstanding results we have achieved in improving both the response time and sensitivity of the bolometer. This is proof of the continued strength of our affordable pixel technology and the skillset within our R&D teams,” said Sébastien Tinnes, marketing team leader at ULIS. “We feel camera makers will benefit tremendously from thermal image sensors, which, when used in conjunction with visible or SWIR cameras, can provide valuable additional information on product quality.”

ULIS presented its results in a technical paper entitled ‘ULIS Bolometer for Fast Imaging Applications Sets New Response Time Record’ at OPTO2018 in Paris. The company has shown an improvement in FoM from the usual NETD = 50mK multiplied by TTC = 10 to 12ms (meaning a FoM of 500 to 600 mK·ms) to a new level of NETD = 50mK multiplied by TTC = 2.5 to 3ms (meaning a FoM of 125 to 150 mK·ms), a 4x improvement:
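
The FoM quoted here is simply NETD multiplied by the thermal time constant (TTC), with lower being better; the paper's numbers check out as a 4x gain:

```python
# Bolometer figure of merit: NETD (mK) times thermal time constant (ms),
# lower is better. Reproducing the numbers quoted from the ULIS paper:
def fom_mK_ms(netd_mK, ttc_ms):
    return netd_mK * ttc_ms

standard = fom_mK_ms(50, 10)    # 500 mK*ms (up to 600 at TTC = 12 ms)
improved = fom_mK_ms(50, 2.5)   # 125 mK*ms (up to 150 at TTC = 3 ms)
print(standard / improved)      # 4.0x improvement
```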

DALSA Announces Camera with Sony Polarization Sensor

Sony polarization sensors are gaining more and more customers. Teledyne DALSA introduces its Genie Nano camera built around the Sony Pregius 5.1MP polarized image sensor. The Genie Nano-M2450-Polarized model features a monochrome quad polarization filter, a resolution of 2448 x 2048 pixels, and image capture at 35 fps.

ON Semi Unveils 4.2um BSI Pixel Sensor

BusinessWire: ON Semi introduces the industry’s first 1/1.7-inch 2.1MP CMOS sensor featuring the newly developed 4.2μm BSI pixels. The AR0221 is said to deliver class-leading low light sensitivity for industrial applications. The AR0221 also offers 3-exposure line-interleaved HDR, supporting frame rates of 1080p at 30 fps.

Gianluca Colli, VP and GM, Consumer Solution Division of Image Sensor Group at ON Semi, says: “The AR0221 represents the industry’s best CMOS image sensor in this class, thanks to its outstanding low light sensitivity and SNR performance. By including features like windowing, auto black level correction and an onboard temperature sensor, ON Semiconductor has produced an image sensor that will enable a new generation of security and surveillance cameras.”

The AR0221 is in production now.

TechInsights Reviews Samsung Galaxy S9 3-Layer Stacked Sensors

TechInsights compares the Sony and Samsung 3-layer stacked sensors found in different versions of the Galaxy S9 smartphone:


Sony: "IMX345 general structure is similar to the IMX400, the world’s first 3-layer stacked imager from the year-old Sony Xperia XZs:"


"In contrast to Sony’s custom-DRAM-in-the-middle wafer stack, Samsung has chosen to assemble its stack by micro-bumping a standard DRAM chip face-to-back on the ISP:"


Thanks to RF for the pointer!

Ambarella Self-Driving Vehicle Has 20 Cameras and No LiDAR

ExtremeTech: Ambarella's self-driving test vehicle is a rare case of a LiDAR-less platform:

"its EVA (Embedded Vehicle Autonomy) development vehicle built on its CVFlow architecture and a phalanx of 20 cameras powered by 16 of its CV1 image and vision processing SoCs.

For long range detection, EVA relies on six of Ambarella’s SuperCam3 4K stereo cameras, each with a 75-degree field of view. They are arranged in a hexagon, providing full 360-degree coverage. Ambarella claims that the system can detect and identify a pedestrian out to 150 meters, and as far as 180 meters if coupled with additional neural network software.

Each stereo camera is powered by its own pair of Ambarella’s CV1 vision SoCs. The cameras use Sony IMX317 sensors... [which] have lower resolution to give them bigger pixels and improved low-light and high-dynamic range performance.

The short-range stereo cameras are placed in the front, back, and both sides of the car. Utilizing fisheye lenses, this provides the EVA with a full 360-degree view of the area immediately around the car. The four stereo cameras are supported by a dedicated pair of CV1 SoCs, and use an array of Sony’s 2MP IMX290, which feature very large pixels.

Rounding out the system are a front-facing radar for redundancy and driving in poor visibility, and a PC to run higher-level sensor fusion, localization, and path planning tasks.
"

Fast Camera Interviews

Samsung publishes an interview with Galaxy S9 fast camera design team:

"We were able to achieve a readout speed that is four times faster than conventional cameras thanks to a three-layer stacked image sensor that includes the CMOS image sensor itself, a fast readout circuit, and a dedicated dynamic random-access (DRAM) memory chip, which previously was not added to image sensors,” explained Dongsoo Kim. “Integrating DRAM allowed us to overcome obstacles such as speed limits between the sensor and application processor (AP) in a high-speed camera with 960fps features.

“It wasn’t easy testing out Motion Detection for Super Slow-mo, especially with people staring at us when we brought our laptops to these amusement parks and took videos of fast-moving rides. However, we were all committed to getting the feature exactly right,” said Dongsoo Kim.

“We also shot for two hours in the middle of a mountain range, on a freezing night, to complete the low-light camera function,” recalled Sungwook Choi.
"

Samsung also publishes a picture showing the 3-Layer structure - pixel layer on top, logic layer in the middle and DRAM on the bottom:


The Phoblographer publishes a short explanation by Sony's Mark Weir of the fast full-frame stacked sensor used in the A9 camera (dated April 2017):

ON Semi Rolls Out Its Enhanced NIR QE Sensors

BusinessWire: ON Semi introduces its first CMOS sensors with NIR+ technology combining HDR with enhanced low light performance for high-end security and surveillance cameras.

The AR0522 is a 1/2.5-inch 5.1MP sensor based on a 2.2 μm BSI pixel platform, developed for industrial applications that require high resolution, high quality video capture in low light conditions. The AR0522 image sensor delivers approximately twice the sensitivity in NIR over the existing AR0521.

The AR0431 is a 1/3.2-inch 4 MP sensor based on a 2.0 μm BSI pixel technology platform. Offering low power modes and a frame rate of up to 120 fps, it is aimed at applications where slow-motion video capture is required. Its low operating power suits battery-powered security cameras, action/sports cameras, in-car DVRs and general surveillance cameras.

ON Semi has developed NIR+ technology to increase NIR QE without sacrificing color fidelity in the visible spectrum. Through this technique, security cameras can use fewer IR LEDs in the BOM, driven at lower voltages, and still achieve high quality images in low light and NIR conditions.

Gianluca Colli, VP and GM, Consumer Solution Division of Image Sensor Group at ON Semiconductor said: “Our NIR+ technology and the back-side illuminated pixel platforms utilised in the AR0522 and AR0431 really come together to deliver outstanding image quality and low-light performance, resulting in brighter and sharper images even in challenging lighting conditions.”

The AR0522 engineering samples are available now, with mass production planned for May 2018. Engineering samples of the AR0431 will be available later in April 2018, with mass production starting in July 2018.

IS Auto Europe 2018

The Image Sensors Auto Europe 2018 program has been published and includes a number of image sensing papers, primarily about LiDARs:

  • LiDAR past, present, and future: seeing through the noise
    Anand Gopalan | CTO of Velodyne LiDAR, Inc.
    The presenter will trace the evolution of LiDAR from its beginnings to the current state of the art.
    He will then discuss the multitude of LiDAR approaches in the public domain and attempt to separate the hype from the reality.
    Finally, he will explore future trends toward greater integration and “edge compute.”
  • Improving LIDAR sensitivity by using a DMD to mask ambient light
    John Fenske | DLP Automotive Systems Engineer of Texas Instruments
    Why ambient light is a problem in LIDAR systems and how it defines the noise floor
    How to use a DMD to block ambient light and increase LIDAR sensitivity
    How to estimate the performance improvement given by the enhanced sensitivity
  • How to improve robustness of automotive LiDARs
    Celine Canal | Senior Project Leader & ADAS Application Engineer of Quantel Laser
    Trends in solid-state LiDARs
    Challenges and design considerations of novel beam steering approaches and flash LiDARs
    Short-pulse edge-emitter diode arrays
  • Moving from legacy LiDAR to next-generation iDAR (Intelligent Detection and Ranging)
    John Stockton | VP of Product of AEye
    Traditional LiDAR systems, because of siloed sensors and rigid data collection methods, tend to oversample or undersample information. This then requires significant processing power and time to extract critical objects, leading to latency. Second-generation systems are emerging that fuse intelligence with the data collection process - enabling the system to dynamically track targets and objects of interest, with almost no computational penalty.
    In this session, AEye will delve into this new form of intelligent data collection, called “iDAR”, or “Intelligent Detection and Ranging”. The session will help industry players understand iDAR’s role in optimising data collection, reducing bandwidth, improving vision perception and intelligence, and speeding up motion planning for autonomous vehicles.

iPhone 8 Ambient Light Sensor

SystemPlus publishes a reverse engineering report on the ams ambient light sensor found in the iPhone 8:

"ams AG is a long-time supplier for Apple’s iPhone and has the biggest share of the ambient light sensor (ALS) market.

By introducing the RGBC sensor into high-end smartphones, ams has placed a key milestone leading to the next-generation, true-color sensor inside the iPhone X.

ams’ iPhone 8 color sensor is located at the phone’s front, sharing the flex PCB with the proximity sensor and the front camera module. The sensor is integrated in a 6-pin LGA package with dimensions 2.85 mm x 2.61 mm x 0.53 mm.

For the iPhone 8, Apple chose to continue its partnership with ams, but with a next-generation ALS that allows for detecting a wider range of wavelengths (color and infrared). The uniqueness of this color sensor lies in the organic material used for certain filters, which involves a more complex process. The die design has also changed, with around 4x more photodiodes than the ALS in the iPhone 6S or 7, and there’s a circular arrangement for the sensing part which improves sensing ability. The central filter region is for the infrared sensor, and the color sensors are positioned around it. They are composed of organic color filters and interferometric filters.
"

Brookman ToF Ambient Light Suppression

Brookman Technology publishes a YouTube demo of its ToF camera's dynamic ambient light suppression (DALS) using multi-tap sensing technology with short pulse modulation (SPM):

ON Semi Announces NIR-Enhanced EMCCD

BusinessWire: ON Semiconductor announces the KAE-08152, which shares the same 8.1MP resolution and 4/3 optical format as the existing KAE-08151 but incorporates an enhanced pixel design that doubles NIR QE at 850 nm – critical for surveillance, microscopy, and ophthalmology. The KAE-08152 is fully drop-in compatible with the existing device, simplifying adoption for camera manufacturers.

In addition, all devices in ON Semiconductor’s IT-EMCCD portfolio are now available with packages that incorporate an integrated thermoelectric cooler.

“Interline Transfer EMCCD technology enables image capture with video frame rates even under extreme low-light conditions – down to moonless starlight,” said Herb Erhardt, VP and GM of Industrial Solutions Division, Image Sensor Group at ON Semiconductor. “With new options that provide enhanced NIR sensitivity and integrated cooling, this expanded portfolio allows customers to identify the best low-light imaging solution for their application.”

Interline Transfer EMCCD devices combine two established imaging technologies with a unique output structure to enable a new class of low-noise, high-dynamic range imaging. Interline Transfer CCD provides excellent image quality and uniformity with a highly efficient electronic shutter, while EMCCD excels under low-light conditions. Combining these technologies allows the low-noise architecture of EMCCD to be extended to multi-megapixel resolutions, and an innovative output design allows both standard CCD (normal-gain) and EMCCD (high-gain) outputs to be utilized for a single image capture - extending intrascene dynamic range and scene detection from sunlight to starlight in a single image.
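A dual-gain merge of this kind can be sketched generically: use the high-gain EMCCD channel in the shadows, and fall back to the normal-gain CCD channel where the multiplied output has saturated. This is a hedged, simplified illustration, not ON Semiconductor's actual readout pipeline; all names and thresholds here are assumptions.

```python
import numpy as np

def merge_dual_gain(normal, emccd, em_gain, sat_level=0.9):
    """Combine a normal-gain CCD readout with a high-gain EMCCD readout.

    normal, emccd -- same-scene frames (float arrays, normalized 0..1)
    em_gain       -- electron-multiplication gain applied to the EMCCD output
    sat_level     -- illustrative saturation threshold for the EMCCD channel
    """
    emccd_linear = emccd / em_gain   # refer EMCCD signal back to input electrons
    use_em = emccd < sat_level       # EMCCD pixel still in its linear range
    # Low-noise EMCCD channel in the shadows, unmultiplied CCD in the highlights:
    # this is what extends intrascene dynamic range in a single capture.
    return np.where(use_em, emccd_linear, normal)
```

The key point the paragraph makes is that both outputs come from a single exposure, so the merge needs no motion compensation between frames.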

Engineering grade versions of the KAE-08152 are now available, with production planned for 2Q18. KAE-04471 and KAE-08151 devices with integrated TEC coolers will be in production in 2Q18, while cooled versions of the KAE-02150 and KAE-02152 are available today. All IT-EMCCD devices ship in ceramic micro-PGA packages, and are available in both Monochrome and Bayer Color configurations. The KAE-02152 is also available with the Sparse CFA color pattern.

Go to the original article...

Quanta Image Sensor Lecture

Image Sensors World        Go to the original article...

University of Illinois publishes Eric Fossum March 29, 2018 presentation on QIS:

"The Quanta Image Sensor (QIS) is a possible 3rd generation solid-state image sensor technology based on photon-counting. Primarily focused on scientific and defense applications, it may also be useful for consumer applications. The specialized QIS pixel device and its deep sub-electron read noise will be discussed. The specialized pixel uses ultra-low capacitance rather than avalanche multiplication to achieve single photoelectron detection capability. The high frame rate, low power readout will also be described. The QIS opens new possibilities for computational imaging."
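Why deep sub-electron read noise matters for photon counting can be shown with a small simulation: when the read noise is well below 0.5 e-, a simple threshold at 0.5 e- separates "zero photons" from "one or more photons" almost perfectly. The parameters below are illustrative and are not taken from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def read_jots(mean_photons, read_noise_e, n_jots=100_000):
    """Simulate single-bit QIS jot readouts (illustrative parameters)."""
    photons = rng.poisson(mean_photons, n_jots)              # true photoelectrons
    signal = photons + rng.normal(0, read_noise_e, n_jots)   # add Gaussian read noise
    # Threshold at 0.5 e-: with deep sub-electron read noise this
    # distinguishes 0 photons from >=1 photon with negligible error.
    return (signal > 0.5).astype(np.uint8)

bits = read_jots(mean_photons=0.1, read_noise_e=0.15)
# For Poisson-distributed arrivals, the fraction of "1" bits should
# approximate 1 - exp(-0.1), i.e. about 0.095.
```

At higher read noise (say 0.5 e-), the same threshold misclassifies a large fraction of jots, which is why conventional pixels cannot count photons this way.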

Go to the original article...

Image Sensors at 2018 VLSI Symposium

Image Sensors World        Go to the original article...

The VLSI Symposium, to be held on June 18-22, 2018 in Honolulu, HI, publishes its lists of accepted circuits and technology papers, including a few image sensor related ones:
  • A Two‐Tap NIR Lock‐In Pixel CMOS Image Sensor with Background Light Cancelling Capability for Non-Contact Heart Rate Detection,
    Chen Cao, Shizuoka University
  • A 252 × 144 SPAD pixel FLASH LiDAR with 1728 Dual‐clock 48.8 ps TDCs, Integrated Histogramming and 14.9‐to‐1 Compression in 180nm CMOS Technology,
    Scott Lindner, EPFL and University of Zurich
  • A 220 m‐Range Direct Time‐of‐Flight 688 × 384 CMOS Image Sensor with Sub‐Photon Signal Extraction (SPSE) Pixels Using Vertical Avalanche Photo‐Diodes and 6 kHz Light Pulse Counters,
    Shinzo Koyama, Panasonic Corp.
  • An over 120 dB wide‐dynamic‐range 3.0 μm pixel image sensor with in‐pixel capacitor of 41.7 fF/um2 and high reliability enabled by BEOL 3D capacitor process,
    Masayuki Takase, Panasonic Corporation
  • Next‐generation Fundus Camera with Full Color Image Acquisition in 0‐lx Visible Light by 1.12‐micron Square Pixel, 4K, 30‐fps BSI CMOS Image Sensor with Advanced NIR Multi‐spectral Imaging System,
    Hirofumi Sumi, The University of Tokyo
  • A Near‐ & Short‐Wave IR Tunable InGaAs Nanomembrane PhotoFET on Flexible Substrate for Lightweight and Wide‐Angle Imaging Applications,
    Yida Li, National University of Singapore

Go to the original article...
