Archives for July 2018

NIST Publishes LiDAR Characterization Standard

Image Sensors World        Go to the original article...

Spar3D: NIST's Physical Measurement Laboratory has developed an international performance evaluation standard for LiDARs.

An invitation was sent to all leading manufacturers of 3D laser scanners to visit NIST and run the approximately 100 tests specified in the draft standard. Four of the manufacturers (including one each from Germany and France) traveled to NIST to participate in the runoff. Another sent an instrument during the week for testing. Two other manufacturers who could not attend have expressed interest in visiting NIST soon to try out the tests. These seven manufacturers represent about four-fifths of the entire market for large-volume laser scanners.

The new standard ASTM E3125 is available here.

PMD Presentation at AWE 2018

Image Sensors World

PMD VP of Business Development Mitchel Reifel presents the company's ToF solutions for AR and VR applications at the Augmented World Expo:

Parrot Anafi review

Cameralabs

The Parrot Anafi is a mid-range drone with 4k video and a powered gimbal. Parrot's most sophisticated drone to date, it's pitched squarely against the DJI Mavic Air, undercutting it on price and boasting some unique features. Adam takes it for a spin in his full review!…

The post Parrot Anafi review appeared first on Cameralabs.

Panasonic PIR and Thermopile Sensor Presentation

Image Sensors World

Panasonic publishes a video presenting its PIR and Thermopile sensor lineup:

TI Unveils AFE for ToF Proximity Sensor

Image Sensors World

The TI ToF proximity sensor AFE OPT3101 integrates most of the ToF system on a single chip:

Rode NT USB review

Cameralabs

The Rode NT USB is a broadcast-quality USB microphone designed to capture a wide range of sound from vocals to musical instruments. The condenser design captures a broader range of frequencies than dynamic mics like the Podcaster, allowing it to deliver more transparent audio. Check out my review!…

The post Rode NT USB review appeared first on Cameralabs.

Magic Leap Gets Investment from AT&T

Image Sensors World

Techcrunch reports that AT&T has made a strategic investment in Magic Leap, a developer of AR glasses. Magic Leap's latest Series D round valued the startup at $6.3b, and the companies have confirmed that the AT&T investment completes the $963m Series D round.

So far, Magic Leap has raised $2.35b from a number of strategic backers including Google, Alibaba and Axel Springer.

AutoSens Announces its Awards Finalists

Image Sensors World

The AutoSens Awards reveal the shortlisted finalists for 2018, with several of them related to imaging:

Most Engaging Content:

  • Mentor Graphics, Andrew Macleod
  • videantis, Marco Jacobs
  • Toyota Motor North America, CSRC, Rini Sherony
  • 2025AD, Stephan Giesler
  • EE Times, Junko Yoshida

Hardware Innovation:

  • NXP Semiconductors
  • Cepton
  • Renesas
  • OmniVision
  • Velodyne Lidar
  • Robert Bosch

Software Innovation:

  • Dibotics
  • Algolux
  • Brodmann17
  • Civil Maps
  • Dataspeed
  • Immervision
  • Prophesee

Most Exciting Start-Up:

  • Hailo
  • Metamoto
  • May Mobility
  • AEye
  • Ouster
  • Arbe Robotics

Game Changer:

  • Siddartha Khastgir, WMG, University of Warwick, UK
  • Marc Geese, Robert Bosch
  • Kalray
  • Prof. Nabeel Riza, University College Cork
  • Intel
  • NVIDIA and Continental partnership

Greatest Exploration:

  • Ding Zhao, University of Michigan
  • Prof Philip Koopman, Carnegie Mellon University
  • Prof Alexander Braun, University of Applied Sciences Düsseldorf
  • Cranfield University Multi-User Environment for Autonomous Vehicle Innovation (MUEAVI)
  • Professor Natasha Merat, Institute for Transport Studies
  • Dr Valentina Donzella, WMG University of Warwick

Best Outreach Project:

  • NWAPW
  • Detroit Autonomous Vehicle Group
  • DIY Robocars
  • RobotLAB
  • Udacity

Image Sensors America Agenda

Image Sensors World

Image Sensors America, to be held on October 11-12, 2018 in San Francisco, announces its agenda with many interesting papers:

State of the Art Uncooled InGaAs Short Wave Infrared Sensors
Dr. Martin H. Ettenberg | President of Princeton Infrared Technologies

Super-Wide-Angle Cameras – The Next Smartphone Frontier Enabled by Miniature Lens Design and the Latest Sensors
Patrice Roulet Fontani | Vice President, Technology and Co-Founder of ImmerVision

SPAD vs. CMOS Image Sensor Design Challenges – Jitter vs. Noise
Dr. Daniel Van Blerkom | CTO & Co-Founder of Forza Silicon

sCMOS Technology: The Most Versatile Imaging Tool in Science
Dr. Scott Metzler | PCO Tech

Image Sensor Architecture
Presentation by Sub2R

Using Depth Sensing Cameras for 3D Eye Tracking
Kenneth Funes Mora | CEO and Co-founder of Eyeware

Autonomous Driving: The Development of Image Sensors?
Ronald Mueller | CEO of Vision Markets and Associate Consultant at Smithers Apex

SPAD Arrays for LiDAR Applications
Carl Jackson | CTO and Founder of SensL Division, OnSemi

Future Image Sensors for SLAM and Indoor 3D Mapping
Vitality Goncharuk | CEO & Founder of Augmented Pixels

Future Trends in Imaging Beyond the Mobile Market
Amos Fenigstein | Senior Director of R&D for Image Sensors of TowerJazz

Presentation by Gigajot

ST FlightSense Presentation

Image Sensors World

ST publishes its presentation on ToF proximity sensor products:

Four Challenges for Automotive LiDARs

Image Sensors World

DesignNews publishes a list of four challenges that LiDARs have to overcome on the way to wide acceptance in vehicles:

Price reduction:

“Every technology gets commoditized at some point. It will happen with LiDAR,” said Angus Pacala, co-founder and CEO of LiDAR startup Ouster. “Automotive radars used to be $15,000. Now, they are $50. And it did take 15 years. We’re five years into a 15-year lifecycle for LiDAR. So, cost isn’t going to be a problem.”

Increase detection range:

“Range isn’t always range,” said John Eggert, director of automotive sales and marketing at Velodyne. “[It’s] dynamic range. What do you see and when can you see it? We see a lot of ‘specs’ around 200 meters. What do you see at 200 meters if you have a very reflective surface? Most any LiDAR can see at 100, 200, 300 meters. Can you see that dark object? Can you get some detections off a dark object? It’s not just a matter of reputed range, but range at what reflectivity? While you’re able to see something very dark and very far away, how about something very bright and very close simultaneously?”

Improve robustness:

“It comes down to vibration and shock, wear and tear, cleaning—all the aspects that we see on our cars,” said Jada Smith, VP of engineering and external affairs at Aptiv, a Delphi spin-off. “LiDAR systems have to be able to withstand that. We need perfection in the algorithms. We have to be confident that the use cases are going to be supported time and time again.”

Withstand the environment and different weather conditions:

Jim Schwyn, CTO of Valeo North America, said: “What if the LiDAR is dirty? Are we in a situation where we are going to take the gasoline tank from a car and replace it with a windshield washer reservoir to be able to keep these things clean?”

The potentially fatal LiDAR flaws that need to be corrected:

  • Bright sun against a white background
  • A blizzard that causes whiteout conditions
  • Early morning fog

Another article on a somewhat similar matter has been published by Lidarradar.com:

SmartSens Unveils GS BSI VGA Sensor

Image Sensors World

PRNewswire: SmartSens launches the SC031GS, calling it "the world's first commercial-grade 300,000-pixel Global Shutter CMOS image sensor based on BSI pixel technology." While other companies have announced GS BSI sensors, those have higher-than-VGA resolution.

The SC031GS is aimed at a wide range of commercial products, including smart barcode readers, drones, smart modules (Gesture Recognition/vSLAM/Depth Information/Optical Flow) and other image recognition-based AI applications, such as facial recognition and gesture control.

The SC031GS uses large 3.75um pixels (1/6" optical size) and SmartSens' single-frame HDR technology, combined with a global shutter. The maximum frame rate is 240fps.

Leo Bai, GM of SmartSens' AI Image Sensors Division, stated: "SmartSens is not only a new force in the global CMOS image sensor market, but also a company that commits to designing and developing products that meet the market needs and reflect industry trends. We partnered with key players in the AI field to integrate AI functions into the product design. SC031GS is such a revolutionary product that is powered by our leading Global Shutter CMOS image sensing technology and designed for trending AI applications."

SC031GS is now in mass production.

SenseTime to Expand into Automotive Applications

Image Sensors World

South China Morning Post: Face recognition startup SenseTime announces its plans to expand in automotive applications.

“Our leading algorithms for facial recognition have already proven a big success,” said SenseTime co-founder Xu Bing, “and now comes [new technologies for] autonomous driving, which enable machines to recognise images both inside and outside cars, and an augmented reality engine, integrating know-how in reading facial expressions and body movement.”

SenseTime raised $620m in May, calling itself the world’s most valuable AI start-up, with a valuation of $4.5b. Known for providing AI-powered surveillance software for China’s police, SenseTime said it achieved profitability last year, selling AI-powered applications for smart cities, surveillance, smartphones, internet entertainment, finance, retail and other industries.

Last year, Honda announced a partnership with SenseTime for automated driving technologies.

Nvidia AI-Enhanced Noise Removal

Image Sensors World

DPReview quotes an Nvidia blog post presenting joint research with MIT and Aalto University on AI-enhanced noise removal, with pretty impressive results:


Omnivision Releases Sensor Optimized for Structured Light FaceID Applications

Image Sensors World

OmniVision announces a global shutter sensor targeting facial authentication in mobile devices, along with other machine vision applications such as AR/VR, drones and robotics. The high-resolution OV9286 sensor, with 20% more pixels than the previous-generation sensor, is said to enable a new level of accuracy in facial authentication for smartphone applications requiring high levels of security. The OV9286 is optimized for payment-level facial authentication using a structured light solution for high-quality 3D images.

The market for facial recognition components is expected to grow rapidly to $9.2b by 2022, according to a report from Allied Market Research.

“A higher level of image-sensing accuracy is required to safely authenticate smartphones for payment applications, compared to using facial authentication for unlocking a device,” said Arun Jayaseelan, senior marketing manager at OmniVision. “The increased resolution of the OV9286 image sensor meets these requirements, while using the global shutter technology to optimize system power consumption as well as to eliminate motion artifacts and blurring.”

The sensor is available in two versions: the OV9286 for smartphone applications, and the OV9285 for other machine vision applications that also need high-resolution sensors to enable a broad range of image-sensing functions. The OV9286 has a high CRA of 26.7 degrees for low z-height and slim-profile smartphone designs. The OV9285 has a lower CRA of 9 degrees for applications where that tight z-height restriction does not apply, supporting wide field-of-view lens designs.

Both the OV9285 and the OV9286 incorporate 1328 x 1120 resolution at 90 fps, an optical format of 1/3.5-inch and 3x3-micrometer OmniPixel3-GS technology. These global shutter sensors, in combination with excellent NIR sensitivity at 850 nm and 940 nm, reduce device power consumption to extend battery life.

The OV9285 and OV9286 sensors are available now.

Nikon COOLPIX P1000 review – preview

Cameralabs

The Nikon COOLPIX P1000 is a DSLR-styled super-zoom camera with a massive 125x range, taking it from an equivalent of 24-3000mm. The P1000 has a 16 Megapixel sensor, can film 4k video and has a built-in OLED viewfinder and fully-articulated screen. Check out my preview!…

The post Nikon COOLPIX P1000 review – preview appeared first on Cameralabs.

3D Imaging Fundamentals (Open Access)

Image Sensors World

OSA Advances in Optics and Photonics publishes "Fundamentals of 3D imaging and displays: a tutorial on integral imaging, light-field, and plenoptic systems" by Manuel Martínez-Corral (University of Valencia, Spain) and Bahram Javidi (University of Connecticut, Storrs).

"This tutorial is addressed to the students and researchers in different disciplines who are interested to learn about integral imaging and light-field systems and who may or may not have a strong background in optics. Our aim is to provide the readers with a tutorial that teaches fundamental principles as well as more advanced concepts to understand, analyze, and implement integral imaging and light-field-type capture and display systems. The tutorial is organized to begin with reviewing the fundamentals of imaging, and then it progresses to more advanced topics in 3D imaging and displays. More specifically, this tutorial begins by covering the fundamentals of geometrical optics and wave optics tools for understanding and analyzing optical imaging systems. Then, we proceed to use these tools to describe integral imaging, light-field, or plenoptics systems, the methods for implementing the 3D capture procedures and monitors, their properties, resolution, field of view, performance, and metrics to assess them. We have illustrated with simple laboratory setups and experiments the principles of integral imaging capture and display systems. Also, we have discussed 3D biomedical applications, such as integral microscopy."
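A standard operation in the light-field systems such tutorials cover is synthetic refocusing by shift-and-add: each sub-aperture view is shifted in proportion to its position in the aperture and the views are averaged. The sketch below is a minimal illustration, not code from the tutorial; the function name, the toy scene, and the 2 px-per-aperture-unit disparity are all assumptions made for the example:

```python
import numpy as np

def refocus(views, positions, slope):
    """Shift-and-add refocusing: shift each sub-aperture view in
    proportion to its aperture coordinate (u, v), then average.
    `slope` selects the synthetic focal plane."""
    out = np.zeros_like(views[0], dtype=float)
    for img, (u, v) in zip(views, positions):
        out += np.roll(np.roll(img, int(round(slope * v)), axis=0),
                       int(round(slope * u)), axis=1)
    return out / len(views)

# Toy light field: one bright point whose image shifts by 2 px per unit
# of aperture coordinate u across five views.
views, positions = [], []
for u in range(-2, 3):
    img = np.zeros((9, 9))
    img[4, 4 + 2 * u] = 1.0
    views.append(img)
    positions.append((u, 0))

sharp = refocus(views, positions, slope=-2)   # matching slope: views realign
blurry = refocus(views, positions, slope=0)   # wrong plane: energy spread out
print(sharp[4, 4], blurry.max())
```

With the matching slope the five views realign into a single full-intensity point; with the wrong slope the same energy is spread across five dim copies, which is exactly the defocus blur a plenoptic camera can undo after capture.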


The OSA Advances in Optics and Photonics site also has a 2011 open access paper "Structured-light 3D surface imaging: a tutorial" by Jason Geng. Since the structured light approach has progressed a lot in recent years, the information in this tutorial is largely obsolete. Still, it could be a good start for learning the basics or for history-inclined readers.
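At the heart of the structured-light systems the tutorial surveys is plain triangulation: a projector and camera separated by a known baseline observe a projected feature, and depth follows from the feature's disparity in the image. A minimal sketch (the numbers are illustrative, not from the tutorial):

```python
def structured_light_depth(baseline_mm, focal_px, disparity_px):
    """Classic triangulation behind structured-light depth sensing:
    a projector-camera pair with baseline b observes a projected
    feature displaced by disparity d; depth is z = f * b / d."""
    return focal_px * baseline_mm / disparity_px

# A feature with 40 px disparity, 75 mm baseline, 800 px focal length:
print(structured_light_depth(baseline_mm=75, focal_px=800, disparity_px=40))  # 1500.0 (mm)
```

The inverse relation between depth and disparity is why such systems lose range resolution quadratically with distance.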

Espros on ToF FaceID Calibration Challenges

Image Sensors World

The presentation "3D Facial Scanning" by Espros' Dieter Kaegi at the Swiss Photonics Workshop, held in Chur on June 21, 2018, talks about the many challenges in developing a ToF-based FaceID module:

AMS Presentation on 3D Sensing for Consumer Applications

Image Sensors World

The presentation "3D Cameras for Consumer Application" by AMS' Markus Rossi at the Swiss Photonics Workshop, held in Chur on June 21, 2018, has interesting comparisons between different depth sensing approaches:

Leti-CNRS Full-Frame Curved Sensor Paper

Image Sensors World

The Leti-CNRS curved sensor paper "Curved detectors developments and characterization: application to astronomical instruments" by Simona Lombardo, Thibault Behaghel, Bertrand Chambion, Wilfried Jahn, Emmanuel Hugot, Eduard Muslimov, Melanie Roulet, Marc Ferrari, Christophe Gaschet, Stephane Caplet, and David Henry is available on-line. This work was first announced a year ago.

"We describe here the first concave curved CMOS detector developed within a collaboration between CNRS-LAM and CEA-LETI. This fully-functional 20 Mpix detector (CMOSIS CMV20000) has been curved down to a radius of Rc = 150 mm over a size of 24x32 mm2. We present here the methodology adopted for its characterization and describe in detail all the results obtained. We also discuss the main components of noise, such as the readout noise, the fixed pattern noise and the dark current. Finally we provide a comparison with the flat version of the same sensor in order to establish the impact of the curving process on the main characteristics of the sensor.

The curving process of these sensors consists of two steps: firstly the sensors are thinned with a grinding equipment to increase their mechanical flexibility, then they are glued onto a curved substrate. The required shape of the CMOS is, hence, due to the shape of the substrate. The sensors are then wire bonded keeping the packaging identical to the original one before curving. The final product is, therefore, a plug-and-play commercial component ready to be used or tested (figure 1B)."


"The PRNU factor of the concave sensor shows an increase of 0.8% with respect to the flat sensor one. The difference between the two is not significant. However more investigations are required as it might be due to the curving process and it could explain the appearance of a strong 2D pattern for higher illumination levels."
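For readers who want to reproduce this kind of characterization, PRNU and read noise are commonly estimated from pairs of flat-field and dark frames by pair differencing in the EMVA 1288 style: the frame difference isolates temporal noise (fixed patterns cancel), and what remains in the averaged flat beyond temporal noise is spatial non-uniformity. The sketch below runs on synthetic data; all names and numbers are illustrative, not from the paper:

```python
import numpy as np

def characterize(flat_a, flat_b, dark_a, dark_b):
    """EMVA 1288-style pair differencing: temporal noise from frame
    differences (fixed patterns cancel), spatial noise from the
    averaged flat field. Returns (read noise in DN, PRNU fraction)."""
    flat_a, flat_b = flat_a.astype(float), flat_b.astype(float)
    dark_a, dark_b = dark_a.astype(float), dark_b.astype(float)
    # Var(A - B) is twice the per-frame temporal variance.
    read_noise = np.std(dark_a - dark_b) / np.sqrt(2)
    mean_signal = 0.5 * (flat_a.mean() + flat_b.mean() - dark_a.mean() - dark_b.mean())
    temporal_var = 0.5 * np.var(flat_a - flat_b)      # per-frame temporal variance
    avg_flat = 0.5 * (flat_a + flat_b)                # averaging halves temporal variance
    spatial_var = max(np.var(avg_flat) - temporal_var / 2, 0.0)
    return read_noise, np.sqrt(spatial_var) / mean_signal

# Synthetic frames: 1% PRNU gain map, shot noise, 2 DN read noise, 100 DN offset
rng = np.random.default_rng(0)
gain = 1 + 0.01 * rng.standard_normal((128, 128))
flat = lambda: rng.poisson(1000 * gain) + rng.normal(100, 2, gain.shape)
dark = lambda: rng.normal(100, 2, gain.shape)
rn, prnu = characterize(flat(), flat(), dark(), dark())
print(f"read noise = {rn:.2f} DN, PRNU = {100 * prnu:.2f}%")
```

On these synthetic frames the estimates recover the injected 2 DN read noise and ~1% PRNU, which is the same kind of separation the authors perform when comparing curved and flat versions of the sensor.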

Yole Webcast on Autonomous Driving

Image Sensors World

Yole Developpement publishes a recording of its April 2018 webcast "Core Technologies for Robotic Vehicle" that talks about cameras and LiDARs among other key technologies:


Harvard University Proposes Flat Lens

Image Sensors World

Photonics.com: Harvard University Prof. Federico Capasso and his group present "a single flat lens that can focus the entire visible spectrum of light in the same spot and in high resolution. Professor Federico Capasso and members of the Capasso Group explain why this breakthrough in metalenses could have major implications in the field of optics, and could replace bulky, curved lenses currently used in optical devices."

CEA-Leti with Partners to Develop LiDAR Benchmarks

Image Sensors World

LiDAR performance claims are a bit of a Wild West today, as there are no standardized performance tests: every company can claim basically anything, measuring performance in its own unique way. Not anymore. Leti is aiming to change that.

CEA-Leti and its partner companies Transdev and IRT Nanoelec are to develop a list of criteria and objective parameters by which various commercial LiDAR systems could be evaluated and compared. Leti teams will focus on perception requirements and challenges from a LiDAR system perspective and evaluate the sensors in real-world conditions. Vehicles will be exposed to objects with varying reflectivity, such as tires and street signs, as well as environmental conditions, such as weather, available light, and fog.

e2v Unveils 67MP APS-C Sensor with 2.5um Global Shutter Pixels

Image Sensors World

Teledyne e2v announces its Emerald 67MP CMOS image sensor. The new sensor features the smallest global shutter pixel (2.5µm) on the market, making it ideal for high-end automated optical inspection, microscopy and surveillance.

Emerald 67M has 2.8e- of readout noise, 70% QE, and high speed, which significantly enhances production line throughput.

Vincent Richard, Marketing Manager at Teledyne e2v, said, “We are very pleased to widen our sensor portfolio with the addition of Emerald 67M, the first 8192 x 8192 global shutter sensor, running at high frame rates and offering a comprehensive set of features. Developed through close discussions with leading OEMs in the automated optical inspection market, this new sensor offers application features such as our unique Region of Interest mode, which helps to improve customer yield. Combined with its 67M resolution, our newest Emerald sensor tackles the challenge of image instability as a result of inspection system vibration.”

EVG Wafer Bonding Machine Alignment Accuracy Improved to 50nm

Image Sensors World

PRNewswire: EV Group (EVG) unveiled the SmartView NT3 aligner, which is available on the company's GEMINI FB XT integrated fusion bonding system for high-volume manufacturing (HVM) applications. The SmartView NT3 aligner provides sub-50-nm wafer-to-wafer alignment accuracy—a 2-3X improvement—as well as significantly higher throughput (up to 20 wafers per hour) compared to the previous-generation platform.

Eric Beyne, imec fellow and program director of 3D system integration, says: "[An] area of particular focus is wafer-to-wafer bonding, where we are achieving excellent results in part through our work with industry partners such as EV Group. Last year, we succeeded in reducing the distance between the chip connections, or pitch, in hybrid wafer-to-wafer bonding to 1.4 microns, which is four times smaller than the current standard pitch in the industry. This year we are working to reduce the pitch by at least half again."

"EVG's GEMINI FB XT fusion bonding system has consistently led the industry in not only meeting but exceeding performance requirements for advanced packaging applications, with key overlay accuracy milestones achieved with several industry partners within the last year alone," stated Paul Lindner, executive technology director, EV Group. "With the new SmartView NT3 aligner specifically engineered for the direct bonding market and added to our widely adopted GEMINI FB XT fusion bonder, EVG once again redefines what is possible in wafer bonding—helping the industry to continue to push the envelope in enabling stacked devices with increasing density and performance, lower power consumption and smaller footprint."

Digitimes Image Sensor Market Forecast

Image Sensors World

Digitimes Research forecasts global CMOS sensor and CCD sales to reach $15b in 2020. Sales increased by over 15% YoY to $12.2b in 2017. Sony's market share in CMOS sensors is estimated at 45% in both 2016 and 2017.

As the smartphone market slows down, Sony is moving its resources to the automotive CIS market, where its share was a relatively low 9% in 2017. Sony sells its image sensors to Toyota and looks to expand its customer base to include Bosch, Nissan and Hyundai this year.

Apple to Integrate Rear 3D Camera in Next Year's iPhone

Image Sensors World

DeviceSpecifications quotes Korean site ETNews as saying that Hynix-group assembly house JSCK is working with Apple on the next-generation 3D sensing camera:

"Apple has revealed the iPhone of 2019 will have a triple rear camera setup with 3D sensing capability that will be a step ahead of the technology that was used for the front-facing camera of the iPhone X released in 2017. The front camera will be used for unlocking purposes, and the rear ones will be used to provide augmented reality (AR) experience. According to industry sources, Jesset Taunch Chippak Korea (JSCK), a Korean company in China, has been developing the 3D sensing module since the beginning of this year. It will be placed in the middle of the rear triple camera module... Apple used infrared (IR) as a light source for the iPhone's front-facing camera 3D sensing, but the rear camera plans to use a different light source than the IR because it needs to sense a wider range."

Peter Noble to Publish a Book

Image Sensors World

Peter Noble, the inventor of the active pixel sensor in 1966, is about to publish his autobiography:

MTA Special Section on Advanced Image Sensor Technology

Image Sensors World

The Japanese ITE Transactions on Media Technology and Applications publishes a Special Section on Advanced Image Sensor Technology with many interesting papers, all in open access:

Statistical Analyses of Random Telegraph Noise in Pixel Source Follower with Various Gate Shapes in CMOS Image Sensor
Shinya Ichino, Takezo Mawaki, Akinobu Teramoto, Rihito Kuroda, Shunichi Wakashima, Tomoyuki Suwa, Shigetoshi Sugawa
Tohoku University

Random telegraph noise (RTN) that occurs at in-pixel source follower (SF) transistors and column amplifiers is one of the most important issues in CMOS image sensors (CIS), and reducing RTN is a key to the further development of CIS. In this paper, we clarified the influence of transistor shapes on RTN from statistical analysis of SF transistors with various gate shapes including rectangular, trapezoidal and octagonal structures by using an array test circuit. From the analysis of RTN parameters such as amplitude and the current-voltage characteristics by the measurement of a large number of transistors, the influence of the shallow trench isolation (STI) edge on channel carriers and the influence of the trap location along the source-drain direction are discussed by using the octagonal SF transistors which have no STI edge and the trapezoidal SF transistors which have an asymmetric gate width at the source and drain sides.

Impacts of Random Telegraph Noise with Various Time Constants and Number of States in Temporal Noise of CMOS Image Sensors
Rihito Kuroda, Akinobu Teramoto, Shigetoshi Sugawa
Tohoku University

This paper describes the impacts of random telegraph noise (RTN) with various time constants and numbers of states on the temporal noise characteristics of CMOS image sensors (CISs), based on a statistical measurement and analysis of a large number of MOSFETs. The obtained results suggest that from a trap located relatively far from the gate insulator/Si interface, the trapped carrier is emitted to the gate electrode side. Also, an evaluation of RTN using only root mean square values tends to underestimate the effect of RTN with large signal transition values and relatively long time constants or multiple states, especially for movie capturing applications in low light environments. It is proposed that the signal transition values of RTN should be incorporated during the evaluation.
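The point about time constants can be illustrated by simulating a two-state telegraph signal with exponentially distributed dwell times. A slow trap and a fast trap with the same amplitude produce similar long-run RMS values, even though their transition behavior, which is what matters for movie capture, is completely different. This is a hedged illustration of the general RTN model, not the authors' measurement setup; all parameters are made up:

```python
import numpy as np

def simulate_rtn(n, amp, tau_low, tau_high, seed=0):
    """Two-state random telegraph signal: exponentially distributed dwell
    times (in samples) in each state, toggling between 0 and `amp`."""
    rng = np.random.default_rng(seed)
    sig = np.zeros(n)
    i, state = 0, 0
    while i < n:
        dwell = max(1, int(rng.exponential(tau_low if state == 0 else tau_high)))
        sig[i:i + dwell] = amp * state
        i += dwell
        state ^= 1
    return sig

# Same amplitude and similar long-run RMS, but very different dynamics:
# the slow trap makes rare, large transitions that an RMS-only metric hides.
fast = simulate_rtn(100_000, amp=1.0, tau_low=10, tau_high=10)
slow = simulate_rtn(100_000, amp=1.0, tau_low=5_000, tau_high=5_000)
for name, s in (("fast", fast), ("slow", slow)):
    print(f"{name}: rms = {s.std():.2f}, transitions = {int(np.abs(np.diff(s)).sum())}")
```

The transition count differs by orders of magnitude while the RMS barely moves, which is the underestimation effect the abstract describes.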

Quantum Efficiency Simulation and Electrical Cross-talk Index Development with Monte-Carlo Simulation Based on Boltzmann Transport Equation
Yuichiro Yamashita, Natsumi Minamitani, Masayuki Uchiyama, Dun-Nian Yaung, Yoshinari Kamakura
TSMC and Osaka University

This paper explains a new method to model a photodiode for accurate quantum efficiency simulation. Individual photo-generated particles are modeled by the Boltzmann transport equation, and simulated by the Monte-Carlo method. Good accuracy is confirmed in terms of similarities of quantum efficiency curves, as well as color correction matrices and SNR10s. Three attributes - "initial energy of the electron", "recombination of electrons at the silicon surface" and "impurity scattering" - are tested to examine their effectiveness in the new model. The theoretical difference from the conventional method with the drift-diffusion equation is discussed as well. Using the simulation result, the relationship among the cross-talk, potential barrier, and distance from the boundary has been studied to develop a guideline for cross-talk suppression. It is found that a product of the normal distance from the pixel boundary and the electric field perpendicular to the Z-axis needs to be more than 0.02 V to suppress the probability of electron leakage to the adjacent pixel to less than 10%.
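The reported guideline makes an easy back-of-the-envelope check. Assuming the distance is expressed in um and the field in V/um (so their product is in volts, consistent with the 0.02 V threshold), a sketch:

```python
def leakage_suppressed(distance_um, field_v_per_um, threshold_v=0.02):
    """Apply the reported guideline: the product of the normal distance
    from the pixel boundary and the perpendicular electric field should
    exceed ~0.02 V to keep adjacent-pixel electron leakage below ~10%."""
    return distance_um * field_v_per_um > threshold_v

def min_field(distance_um, threshold_v=0.02):
    """Minimum field (V/um) meeting the guideline at a given distance (um)."""
    return threshold_v / distance_um

print(leakage_suppressed(0.5, 0.05))  # product 0.025 V -> True
print(min_field(0.5))                 # 0.04 V/um
```

The units are an assumption for the example; the paper states only that the product must exceed 0.02 V.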

A Multi Spectral Imaging System with a 71dB SNR 190-1100 nm CMOS Image Sensor and an Electrically Tunable Multi Bandpass Filter
Yasuyuki Fujihara, Yusuke Aoyagi, Maasa Murata, Satoshi Nasuno, Shunichi Wakashima, Rihito Kuroda, Kohei Terashima, Takahiro Ishinabe, Hideo Fujikake, Kazuhiro Wako, Shigetoshi Sugawa
Tohoku University

This paper demonstrates a multi spectral imaging system utilizing a linear response, high signal to noise ratio (SNR) and wide spectral response CMOS image sensor (CIS), and an electrically tunable multi bandpass optical filter with narrow full width at half maximum (FWHM) of transmitted waveband. The developed CIS achieved 71dB SNR, 1.5x10^7 e- full well capacity (FWC), 190-1100nm spectral response with very high quantum efficiency (QE) in near infrared (NIR) waveband using low impurity concentration Si wafer (~10^12 cm^-3). With the developed CIS, diffusion of 5mg/dl glucose into physiological saline solution, as a preliminary experiment for non-invasive blood glucose measurement, was successfully visualized under 960nm and 1050nm wavelengths, at which absorptions of water molecules and glucose appear among UV to NIR waveband, respectively.

Single Exposure Type Wide Dynamic Range CMOS Image Sensor With Enhanced NIR Sensitivity
Shunsuke Tanaka, Toshinori Otaka, Kazuya Mori, Norio Yoshimura, Shinichiro Matsuo, Hirofumi Abe, Naoto Yasuda, Kenichiro Ishikawa, Shunsuke Okura, Shinji Ohsawa, Takahiro Akutsu, Ken Wen-Chien Fu, Ho-Ching Chien, Kenny Liu, Alex YL Tsai, Stephen Chen, Leo Teng, Isao Takayanagi
Brillnics Japan

In new markets such as in-vehicle cameras, surveillance cameras and sensing applications that are rising rapidly in recent years, there is a growing need for better NIR sensing capability for clearer night vision imaging, in addition to wider dynamic range imaging without motion artifacts and a higher signal-to-noise (S/N) ratio, especially in low-light situations. We have improved the previously reported single exposure type wide dynamic range CMOS image sensor (CIS) by optimizing the optical structure such as the micro lens shape, forming an absorption structure on the Si surface and adding back side deep trench isolation (BDTI). We achieved a high angular response of 91.4%, a high Gr/Gb ratio of 98.0% at ±20°, 610nm, and high NIR sensitivity of QE 35.1% at 850nm and 20.5% at 940nm, without degrading the wide dynamic range performance of 91.3dB and keeping a low noise floor of 1.1e-rms.

Separation of Multi-path Components in Sweep-less Time-of-flight Depth Imaging with a Temporally-compressive Multi-aperture Image Sensor
Futa Mochizuki, Keiichiro Kagawa, Ryota Miyagi, Min-Woong Seo, Bo Zhang, Taishi Takasawa, Keita Yasutomi, Shoji Kawahito
Shizuoka University

This paper demonstrates the separation of multi-path components caused by specular reflection with temporally compressive time-of-flight (CToF) depth imaging. Because a multi-aperture ultra-high-speed (MAUHS) CMOS image sensor is utilized, no sweeping or changing of frequency, delay, or shutter code is necessary. Therefore, the proposed scheme is suitable for capturing dynamic scenes. A short impulse light is used for excitation, and each aperture compresses the temporal impulse response with a different shutter pattern at the pixel level. In the experiment, a transparent acrylic plate was placed 0.3m away from the camera. An objective mirror was placed at a distance of 1.1m or 1.9m from the camera. A set of 15 compressed images was captured at an acquisition rate of 25.8 frames per second. Then, 32 subsequent images were reconstructed from it. The multi-path interference from the transparent acrylic plate was distinguished.

CMOS Image Sensor with Pseudorandom Pixel Placement for Image Measurement using Hough Transform
Junichi Akita, Masashi Toda
Kanazawa University, Kumamoto University

Pixels in conventional image sensors are placed at lattice positions, which causes the perceived jaggies at the edges of slanted lines, an effect that is hard to resolve by pixel-size reduction alone. The authors have proposed reducing the jaggies by arranging the photodiodes at pseudorandom positions while keeping the lattice arrangement of pixel boundaries, which remains compatible with conventional image sensor architectures. In this paper, the authors discuss the design of a CMOS image sensor with pseudorandom pixel placement, as well as the evaluation of image-measurement accuracy for line parameters using the Hough transform.
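As a rough illustration of the line-measurement step, here is a minimal Hough transform on a synthetic edge image. The pseudorandom photodiode placement itself is not modeled; the image size, bin widths, and test line are assumptions for the sketch:

```python
import numpy as np

# Toy edge map: the 45-degree line y = x on a 64x64 grid
H = W = 64
points = [(i, i) for i in range(H)]  # edge pixels as (y, x)

# Hough accumulator over (theta, rho), with rho = x*cos(theta) + y*sin(theta)
thetas = np.deg2rad(np.arange(0, 180))  # 1-degree steps
diag = int(np.ceil(np.hypot(H, W)))
rhos = np.arange(-diag, diag + 1)       # 1-pixel rho bins
acc = np.zeros((len(thetas), len(rhos)), dtype=int)

for y, x in points:
    rho = x * np.cos(thetas) + y * np.sin(thetas)       # one vote per theta
    acc[np.arange(len(thetas)), np.round(rho).astype(int) + diag] += 1

# The strongest accumulator cell gives the line parameters
t_idx, r_idx = np.unravel_index(np.argmax(acc), acc.shape)
print(np.rad2deg(thetas[t_idx]), rhos[r_idx])  # line y = x -> theta = 135 deg, rho = 0
```

The measurement accuracy studied in the paper then amounts to how sharply the accumulator peak localizes (theta, rho) for jagged versus pseudorandomly sampled edges.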

Go to the original article...

2018 Harvest Imaging Forum Agenda

Image Sensors World        Go to the original article...

The 6th Harvest Imaging Forum is to be held on Dec. 6th and 7th, 2018 in Delft, the Netherlands. The agenda includes two topics, each one taking one day:

"Efficient embedded deep learning for vision applications" by Prof. Marian VERHELST (KU Leuven, Belgium)
Abstract:

Deep learning has become popular for smart camera applications, showing unprecedented recognition, tracking, and segmentation capabilities. Deep learning, however, comes with significant computational complexity, which until recently made it feasible only on power-hungry server platforms. In recent years we have seen a trend towards embedded processing of deep learning networks. It is crucial to understand that this evolution is enabled neither by novel processing architectures nor by novel deep learning algorithms alone; the breakthroughs clearly come from a close co-optimization between algorithms and implementation architectures.

After an introduction to deep neural network processing and its implementation challenges, this forum will give an overview of recent trends enabling efficient network evaluation on embedded platforms such as smart cameras. This discussion involves a tight interplay between newly emerging hardware architectures and implementation-driven algorithmic innovations. We will review a wide range of recent techniques that make learning algorithms implementation-aware, drastically improving inference efficiency. The forum will give the audience a better understanding of the opportunities and implementation challenges of embedded deep learning, and enable them to follow research on deep learning processors.
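One concrete example of the implementation-aware techniques such a forum typically covers is post-training quantization, which shrinks weights to 8-bit integers for cheaper embedded inference. The sketch below applies symmetric per-tensor int8 quantization to a made-up dense layer (layer sizes and data are invented for illustration, not from the forum material):

```python
import numpy as np

def quantize_int8(w):
    """Symmetric per-tensor post-training quantization to int8."""
    scale = np.abs(w).max() / 127.0
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, size=(256, 256)).astype(np.float32)  # a dense layer's weights
x = rng.normal(0, 1.0, size=256).astype(np.float32)         # one input activation vector

q, s = quantize_int8(w)
y_fp = w @ x                    # full-precision output
y_q = dequantize(q, s) @ x      # output with int8-stored weights

rel_err = np.linalg.norm(y_q - y_fp) / np.linalg.norm(y_fp)
print(f"weights stored 4x smaller; relative output error = {rel_err:.3%}")
```

The point of the co-optimization argument above is that such a scheme only pays off when the hardware actually exploits the int8 storage and arithmetic, which is why algorithm and architecture must be designed together.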


"Image and Data Fusion" by Prof. Wilfried PHILIPS (Ghent University, Belgium)
Abstract:

Large-scale video surveillance networks are now commonplace, and smart cameras with advanced video analytics have been introduced to alleviate the resulting problem of information overload. However, the true power of video analytics comes from fusing information from various cameras and sensors, with applications such as people tracking over wide areas or inferring 3D shape from multi-view video. Fusion also helps to overcome the limitations of individual sensors. For instance, thermal imaging helps to detect pedestrians in difficult lighting conditions, while pedestrians are more easily (re)identified in RGB images. Automotive sensing and traffic control applications are another major driver for sensor fusion; typical examples include lidar, radar, and depth imaging to complement optical imaging. In fact, as the spatial resolution of lidar and radar gradually increases, these devices can nowadays produce image-like outputs.

The workshop will introduce the theoretical foundations of sensor fusion and the various options for fusion, ranging from fusion at the pixel level, over decision fusion, to more advanced cooperative and assistive fusion. It will address handling heterogeneous data, e.g., video with different spatial, temporal, or spectral resolution and/or representing different physical properties. It will also address fusion frameworks to create scalable systems based on communicating smart cameras and distributed processing. Such cooperative and assistive fusion facilitates the integration of cameras into the Internet of Things.
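A minimal sketch of the decision-level fusion mentioned above: combining pedestrian-detection confidences from an RGB and a thermal camera under a naive conditional-independence assumption. The log-odds formulation and all probability values are illustrative assumptions, not part of the workshop material:

```python
import numpy as np

def fuse_log_odds(p_rgb, p_thermal, prior=0.5):
    """Naive-Bayes decision fusion: combine two detectors' pedestrian
    probabilities, assuming they are conditionally independent given the class.
    Adding log-odds and subtracting the prior's log-odds implements Bayes' rule."""
    def logit(p):
        return np.log(p / (1 - p))
    l = logit(p_rgb) + logit(p_thermal) - logit(prior)
    return 1.0 / (1.0 + np.exp(-l))

# Daytime: RGB is confident, thermal is ambiguous
day = fuse_log_odds(0.9, 0.6)
# Night: RGB struggles, thermal is confident
night = fuse_log_odds(0.3, 0.95)
print(f"day: {day:.3f}, night: {night:.3f}")  # each fused belief stays high
```

Note that a detector reporting exactly 0.5 contributes zero log-odds and leaves the other detector's belief unchanged, which is the sense in which the fusion is "assistive": each modality only moves the decision when it actually carries evidence.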

Go to the original article...
