Archives for May 2017

Xperi: Pelican Array Camera Getting Traction

Xperi's earnings call transcript has an interesting statement by the company's President, Jon Kirchner:

"For the longer term, we gained traction with perspective customers at CES and Mobile World Congress for various vision and imaging solutions, including our multi-camera depth-sensing technology that was acquired from Pelican Imaging in 2016."

Harvest Imaging Project on Reproducibility, Variability and Reliability is About to Start

Albert Theuwissen publishes more info on his new project on image sensor reproducibility, variability and reliability. As with any serious project, it starts with definitions:
  • Reproducibility: quantitative information about how well particular measurements and retrieved performance data reproduce when the devices are measured over and over again with the same calibrated measurement equipment;
  • Variability: quantitative information about the spread of the performance data from sensor to sensor and from camera to camera;
  • Reliability: quantitative information about the stability of the sensor and camera performance over time.

The measurements will be performed on a higher-end, more expensive camera with a global shutter CMOS sensor and on a lower-end, cheaper camera with a rolling shutter CMOS sensor. The cameras will be thoroughly measured every 6 months over a period of 5 years, from 2017 to 2021.
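
Purely as an illustration of how these three definitions could translate into numbers, here is a minimal sketch (the data array, its dimensions, and the chosen statistics are made-up assumptions, not Theuwissen's actual procedure):

```python
import numpy as np

# Hypothetical measurement cube: data[t, s, r] is one performance
# parameter (say, dark current) at session t, for sensor s, repeat r.
rng = np.random.default_rng(0)
data = rng.normal(loc=10.0, scale=0.1, size=(10, 2, 5))

# Reproducibility: spread of back-to-back repeats on the same device
# with the same calibrated equipment (std over the repeat axis).
reproducibility = data.std(axis=2).mean()

# Variability: spread of the per-device means from sensor to sensor.
variability = data.mean(axis=(0, 2)).std()

# Reliability: drift of the per-session mean over the measurement
# campaign (slope of a linear fit over the session axis).
per_session = data.mean(axis=(1, 2))
drift = np.polyfit(np.arange(len(per_session)), per_session, 1)[0]

print(f"reproducibility sigma:  {reproducibility:.4f}")
print(f"sensor-to-sensor sigma: {variability:.4f}")
print(f"drift per session:      {drift:+.4f}")
```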

Microsoft Hololens Challenges

EETimes quotes Marc Pollefeys, Microsoft's director of science, who runs a computer vision lab at ETH Zurich:

“Most of the energy is spent moving bits around … so it would seem natural that … the first layers of processing should happen in the sensor,” Pollefeys told EE Times in a brief interview... “I’m following the neuromorphic work that promises very power-efficient systems with layers of processing in the sensor — that’s a direction where we need a lot of innovation — it’s the only way to get a device that’s not heavier than glasses and can do tracking all day.”

Researchers are still working on ways to map a user’s hands accurately into an environment so that they can be used to control virtual objects. Occlusions, segmentation failures, and noisy data have hindered such efforts for years, Pollefeys said.
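
Pollefeys' point about moving bits is easy to check with a rough back-of-envelope estimate; all numbers below are illustrative assumptions (the ~10 pJ/bit off-chip I/O cost is a commonly cited order of magnitude), not HoloLens figures:

```python
# Back-of-envelope: power spent just streaming raw pixels off a sensor.
width, height = 1920, 1080   # assumed tracking-camera resolution
fps = 30                     # assumed frame rate
bits_per_pixel = 10
energy_per_bit = 10e-12      # J/bit, assumed off-chip I/O cost

bandwidth = width * height * fps * bits_per_pixel  # bits/s
power_mw = bandwidth * energy_per_bit * 1e3
print(f"{bandwidth / 1e9:.2f} Gbit/s -> ~{power_mw:.1f} mW per camera, before any computation")
```

Multiply that by several always-on cameras, and the appeal of doing the first processing layers in the sensor and emitting only sparse features becomes clear.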
"

MS Hololens Block Diagram

TechInsights Publishes Sony 3-layer Stacked 960fps Sensor Analysis

TechInsights publishes a reverse engineering report on the Sony IMX400 3-layer stacked image sensor with integrated DRAM, extracted from the Xperia XZs phone. According to Sony's ISSCC 2017 paper, the pixel array is in the top die, the DRAM array and row drivers are in the middle die, and the remaining blocks are in the bottom ISP die:


"Given that the DRAM die also has the CIS row drivers on it, then it must have been designed as a custom part, and is not one of the TSV-enabled (TSV = through-silicon via) commodity DRAMs that we have seen in recent years. Sony’s cross-section also shows that the center die has a thick back oxide and landing pad for TSVs coming down from the CIS above.

We can also see (if the scale bar is accurate), that the CIS and DRAM die substrates have been thinned down to ~2.6 µm, normal for a back-illuminated CIS (BI-CIS), but that’s the thinnest DRAM we have ever seen. Our own image (above) confirms that the CIS and DRAM silicon are of the same order in thickness, and the landing pads are also visible.

It seems likely that the CIS/ISP connection could use the DRAM landing pad layer as an interconnect, to avoid the challenge of drilling through two dies after the full stack was formed."


Thanks to RF for the info!

Quanergy LiDAR Presentation

SPIETV publishes Quanergy CEO Louay Eldada's presentation:

AutoSens Detroit Starts on May 22

The AutoSens Detroit 2017 conference offers significant image sensor content:

  • IEEE P2020 Working Group Meeting - Official meeting for the IEEE Standards Association Working Group on Automotive System Image Quality – P2020
  • Challenges in automotive image quality testing, Norman Koren, Imatest
  • Optimizing Imaging and Vision Systems for Autonomous Driving, Felix Heide, Algolux
  • Image Processing, Gregory Rofett, ST Micro
  • Image processing challenges for self-driving cars, Joyce Farrell, Stanford University
  • Image quality and safety in automotive video applications, Marc Geese, Bosch
  • Toward visible-light-based imaging for autonomous vehicles, Guy Satat, MIT
  • Improving Image Quality through Camera Radiometric Calibration, Mary Pagnutti, Innovative Imaging and Research
  • Novel, affordable automotive lidar solution, Filip Geuens, XenomatiX

eWBM Publishes More Info on its Dual Aperture Depth Processor

Korea-based eWBM publishes more info on its DR1152 dual aperture depth processor announced last year:

More Info on Qualcomm Always-On Vision Module

Qualcomm publishes a page about its previously reported always-on vision camera module and its use cases:


"Operating at less than 2mW of end-to-end power, and expected to be sold at low cost, the CVM provides smartphones and IoT devices with affordable, always-on computer vision awareness. By emitting CV data about what’s happening in a field of view rather than transmitting images, the CVM also delivers a much more privacy sensitive vision solution."

Face Detection for Smartphones
"For smartphones, it provides passive, always-on face detection that can auto wake the device when a face is detected and auto sleep (save power) when a face is not detected. The CVM can also auto stop the screen brightness for as long as a face is detected. Additionally, the CVM also supports intelligent screen orientation where, upon face detection, it adjusts and holds the screen orientation based on the position of the user's face. The always-on CVM can also trigger third party applications and hardware. For example, it could enable a biometric-type iris authentication process to be initiated when a face is detected or a QR scanner could be initiated when a QR code is detected.

Beyond face detection, simple gestures can wake and trigger the smartphone handset. Lastly, the CVM can provide ambient light sensing (ALS) and proximity functionality."

Interactivity Trigger for Toys and Smart Appliances
"For example, it could enable a smart refrigerator door to turn transparent and light up when it detects that a user is interested in seeing what's inside, or it can enable an appliance screen to present a menu of options when it sees a user is interested in engaging with it."

Occupancy Trigger (OT) for Smart Home Devices
"With a smaller form factor than passive infrared (PIR) sensors, and at a very low cost and power, the CVM will more accurately identify humans in a field of view, and track their specific motion, noting whether they are moving toward the device or perpendicular to the device. It also discriminates between them and other moving objects, such as vehicles and pets, which normally trigger PIR sensors. The OT also provides associated metadata about the humans, reporting on variables such as location and the number of people within the sensor's field of view. Lastly, the highly flexible OT is trainable for varied types of object detection such as vehicles or pets using machine learning approaches."

Standalone Data Tracker (SDT)
"Our CVM is being engineered with a coin cell battery and Bluetooth radio into a stamp-sized SDT that can be used in commercial, residential, and smart city applications."

Cadence Unveils its First CNN DSP IP

PRNewswire: Cadence unveils the Tensilica Vision C5 DSP, its first neural network DSP IP core for vision, radar/lidar and fused-sensor applications. Camera-based vision systems in automobiles, drones and security systems require two types of vision-optimized computation. First, the input from the camera is enhanced using traditional computational photography/imaging algorithms. Second, neural-network-based recognition algorithms perform object detection and recognition. Existing neural network accelerator solutions are hardware accelerators attached to imaging DSPs, with the neural network code split between running some network layers on the DSP and offloading convolutional layers to the accelerator. This combination is inefficient and consumes unnecessary power.

Architected as a dedicated neural-network-optimized DSP, the Vision C5 DSP accelerates all neural network computational layers (convolution, fully connected, pooling and normalization), not just the convolution functions. This frees up the main vision/imaging DSP to run image enhancement applications independently while the Vision C5 DSP runs inference tasks.
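
A toy sketch of the architectural argument (device assignments and the layer list are hypothetical): in a split DSP-plus-accelerator design, every change of device along the layer sequence moves an intermediate tensor across the bus, while a single NN-optimized DSP such as the Vision C5 runs every layer type locally:

```python
LAYERS = ["conv", "pool", "conv", "conv", "normalize", "fully_connected"]

def bus_hops_split(layers):
    """Imaging DSP + convolution accelerator: count cross-device
    hand-offs, each shipping an intermediate tensor over the bus."""
    hops, prev = 0, None
    for layer in layers:
        device = "accelerator" if layer == "conv" else "dsp"
        if prev is not None and device != prev:
            hops += 1
        prev = device
    return hops

def bus_hops_unified(layers):
    """One NN-optimized DSP runs conv, pooling, normalization and
    fully connected layers alike: no cross-device traffic."""
    return 0

print("split design bus hops:  ", bus_hops_split(LAYERS))    # 3
print("unified design bus hops:", bus_hops_unified(LAYERS))  # 0
```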


EETimes writes: "The [Cadence Tensilica C5] cores are among as many as 50 silicon products now available to run various forms of computer vision and machine-learning tasks, said Jeff Bier, founder of the Embedded Vision Alliance, chairman of its event this week, and president of consulting firm BDTI. “There are so many [chips], with new ones popping up weekly, that it’s difficult to get a reliable count.”"

Renesas Autonomy Platform

EETimes: the Renesas Autonomy platform for ADAS applications features ISP and vision processors:


Key features of the Renesas R-Car V3M SoC solution:
  • Efficient image recognition engine and functional safety
    The R-Car V3M SoC implements a computer vision platform using different accelerators, including a versatile pipeline engine (IMP) and a computer vision engine (CVE), allowing the R-Car V3M to manage algorithms like optical flow, object detection and classification, and convolutional neural networks.
  • High level of integration for reduced cost
    The R-Car V3M includes an integrated ISP that makes the image ready for computer vision. The integration eliminates the need for an external ISP component in the front camera or in the sensor itself.
  • Open solution for front camera

Samples of the R-Car V3M SoC will be available from December 2017. Mass production is scheduled to begin in June 2019.

Sony Semiconductor Factsheet

Sony publishes its updated semiconductor business factsheet:
