Archives for October 2017

Smartphone Imaging Market Reports

Image Sensors World        Go to the original article...

InstantFlashNews quotes Taiwan-based Isaiah Research on 3D-imaging-capable smartphones, saying the market will remain dominated by Apple through 2019, when the iPhone is expected to account for 55% of the total 290M units:


Isaiah Research increases its forecast for dual-camera phones to 270M units in 2017:


The earlier Isaiah reports from July 2016 gave significantly lower numbers of 170-180M units:


Gartner reports a decrease in the global image sensor revenue from 2015 to 2016: "The CMOS image sensor market declined 1.7% in 2016, mainly because of a price drop and saturation of smartphones, especially high-end smartphones." The top 5 companies accounted for 88.9% of global CIS revenue in 2016, with the top 3 holding 78.9% of the market:

Go to the original article...

3 Layer Color and Polarization Sensitive Imager

Image Sensors World        Go to the original article...

A group of researchers from the University of Illinois at Urbana-Champaign, Washington University in St. Louis, and the University of Cambridge, UK published a paper "Bio-inspired color-polarization imager for real-time in situ imaging" by Missael Garcia, Christopher Edmiston, Radoslav Marinov, Alexander Vail, and Viktor Gruev. The image sensor is said to be inspired by the mantis shrimp's vision, although its stacked-photodetector approach rather reminds me of Foveon:

"Nature has a large repertoire of animals that take advantage of naturally abundant polarization phenomena. Among them, the mantis shrimp possesses one of the most advanced and elegant visual systems nature has developed, capable of high polarization sensitivity and hyperspectral imaging. Here, we demonstrate that by shifting the design paradigm away from the conventional paths adopted in the imaging and vision sensor fields and instead functionally mimicking the visual system of the mantis shrimp, we have developed a single-chip, low-power, high-resolution color-polarization imaging system.

Our bio-inspired imager captures co-registered color and polarization information in real time with high resolution by monolithically integrating nanowire polarization filters with vertically stacked photodetectors. These photodetectors capture three different spectral channels per pixel by exploiting wavelength-dependent depth absorption of photons.
"


"Our bio-inspired imager comprises 1280 by 720 pixels with a dynamic range of 62 dB and a maximum signal-to-noise ratio of 48 dB. The quantum efficiency is above 30% over the entire visible spectrum, while achieving high polarization extinction ratios of ~40 on each spectral channel. This technology is enabling underwater imaging studies of marine species, which exploit both color and polarization information, as well as applications in biomedical fields."
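For readers less used to dB specs, the dynamic range and SNR figures convert to linear ratios via the standard 20·log10 amplitude convention (a generic conversion, not taken from the paper):

```python
def db_to_ratio(db):
    # Sensor DR and SNR are amplitude ratios: dB = 20 * log10(ratio)
    return 10 ** (db / 20)

print(round(db_to_ratio(62)))  # 62 dB dynamic range -> ~1259:1
print(round(db_to_ratio(48)))  # 48 dB peak SNR      -> ~251:1
```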

A YouTube video shows nice pictures coming out of the camera:

Go to the original article...

Infineon, Autoliv on Automotive Imaging Market

Image Sensors World        Go to the original article...

Infineon publishes an "Automotive Conference Call" presentation dated Oct. 10, 2017. A few interesting slides show camera and LiDAR content in cars of the future:


An Autoliv CEO presentation dated Sept. 28, 2017 gives a bright outlook on automotive imaging:

Go to the original article...

Trinamix Distance Sensing Technology Explained

Image Sensors World        Go to the original article...

BASF spin-off Trinamix publishes a nice technology page with YouTube videos explaining its depth sensing principles. They call it "Focus-Induced Photoresponse (FIP):"






"FIP takes advantage of a particular phenomenon in photodetector devices: an irradiance-dependent photoresponse. The photoresponse of these devices depends not only on the amount of light incident, but also on the size of the light spot on the detector. This phenomenon allows to distinguish whether the same amount of light is focused or defocused on the sensor. We call this the “FIP effect” and use it to measure distance.

The picture illustrates how the FIP effect can be utilized for distance measurements. The photocurrent of the photodetector reaches its maximum when the light is in focus and decreases symmetrically outside the focus. A change of the distance between light source and lens results in such a change of the spot size on the sensor. By analyzing the photoresponse, the distance between light source and lens can be deduced.
"
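To illustrate the principle, here is a toy thin-lens model of how spot size encodes distance. This is my sketch with made-up lens parameters, not Trinamix's actual FIP algorithm; it only shows that the geometric blur circle grows as the source moves away from the in-focus distance:

```python
def image_distance(f, s_o):
    # Thin-lens equation: 1/f = 1/s_o + 1/s_i
    return 1.0 / (1.0 / f - 1.0 / s_o)

def spot_diameter(f, aperture, s_o, sensor_pos):
    # Geometric blur-circle diameter for a point source at distance s_o
    s_i = image_distance(f, s_o)
    return aperture * abs(sensor_pos - s_i) / s_i

# Fix the sensor where a 1 m source is in focus; nearer or farther
# sources defocus, enlarging the spot and changing the photoresponse.
f, D = 0.05, 0.01  # hypothetical 50 mm lens, 10 mm aperture
sensor = image_distance(f, 1.0)
for s_o in (0.5, 1.0, 2.0):
    print(f"{s_o} m -> spot {spot_diameter(f, D, s_o, sensor) * 1e3:.2f} mm")
```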

Trinamix also started production of Hertzstück PbS SWIR photodetectors with PbSe ones to follow:

Go to the original article...

Invisage Acquired by Apple?

Image Sensors World        Go to the original article...

Reportedly, some sort of acquisition deal has been reached between Invisage and Apple. Some Invisage employees have joined Apple; others are apparently looking for jobs. While the deal has never been officially announced, I got unofficial confirmation of this story from 3 independent sources.

Update: According to 2 sources, the deal was closed in July this year.

Older Invisage YouTube videos are still available and show the company's visible-light technology, although Invisage has worked on IR sensing in more recent years:



Update #2: There are a few more indications that Invisage has been acquired. Nokia Growth Partners (NGP), which participated in the 2014 investment round, shows Invisage in its exits list:


InterWest Partners also invested in 2014 and now lists Invisage among its non-current investments:

Go to the original article...

Samsung VR Camera Features 17 Imagers

Image Sensors World        Go to the original article...

Samsung introduces the 360 Round, a camera for developing and streaming high-quality 3D content for VR experiences. The 360 Round uses 17 lenses—eight stereo pairs positioned horizontally and one single lens positioned vertically—to livestream 4K 3D video and spatial audio, and to create engaging 3D images with depth.

If such cameras see wide adoption, they could easily become a major market for image sensors:

Go to the original article...

Google Pixel 2 Smartphone Features Stand-Alone HDR+ Processor

Image Sensors World        Go to the original article...

Ars Technica reports that the Google Pixel 2 smartphone features a separate Google-designed image processor chip, "Pixel Visual Core." It's said "to handle the most challenging imaging and machine learning applications," and the company is "already preparing the next set of applications" designed for the hardware. The Pixel Visual Core has its own CPU (a low-power ARM A53 core), DDR4 RAM, eight IPU cores, and PCIe and MIPI interfaces. Google says the company's HDR+ image processing can run "5x faster and at less than 1/10th the energy" compared with running on the main CPU. The new core will be enabled in the forthcoming Android Oreo 8.1 (MR1) update.

The new IPU cores are intended to use Halide language for image processing and TensorFlow for machine learning. A custom Google-made compiler optimizes the code for the underlying hardware.
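To get an intuition for why burst processing pays off, here is a toy sketch of frame merging. It assumes a static, pre-aligned scene with made-up noise numbers; real HDR+ also performs robust tile-based alignment, which is omitted here:

```python
import numpy as np

rng = np.random.default_rng(1)

# Averaging N aligned noisy frames reduces temporal noise by roughly sqrt(N).
scene = np.full((64, 64), 100.0)                       # ideal static scene
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(8)]
merged = np.mean(frames, axis=0)                       # simple burst merge

print(f"single frame noise ~{frames[0].std():.1f}, merged ~{merged.std():.1f}")
```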


Google also publishes an article explaining its HDR+ and portrait modes, which the new core is supposed to accelerate, as well as a video explaining the Pixel 2 camera features:

Go to the original article...

Basler Compares Image Sensors for Machine Vision and Industrial Applications

Image Sensors World        Go to the original article...

Basler presents EMVA 1288 measurements of the image sensors in its cameras. It's quite interesting to compare CCD with CMOS sensors, and Sony with other companies, in terms of QE, Qsat, dark noise, etc.:

Go to the original article...

5 Things to Learn from AutoSens 2017

Image Sensors World        Go to the original article...

EMVA publishes "AutoSens Show Report: 5 Things We Learned This Year" by Marco Jacobs, VP of Marketing, Videantis. The five important things are:
  1. The devil is in the detail
    Sort of obvious. See some examples in the article.
  2. No one sensor to rule them all
    Different image sensors and LiDARs, each optimized for a different sub-task
  3. No bold predictions
    That is, nobody knows when autonomous driving will arrive on the market
  4. Besides the drive itself, what will an autonomous car really be like?
  5. Deep learning a must-have tool for everyone
    A fairly common statement, although the approaches vary: some put the intelligence into the sensors, while others keep the sensors dumb and concentrate the processing in a central unit.

Go to the original article...

DENSO and Fotonation Collaborate

Image Sensors World        Go to the original article...

BusinessWire: DENSO and Xperi's FotoNation start joint technology development of cabin sensing based on image recognition. DENSO expects to significantly improve the performance of its Driver Status Monitor, an active safety product used in trucks since 2014. Improved versions of such products will also be used in next-generation passenger vehicles, including a system to help drivers return to driving mode during Level 3 autonomous driving.

Using FotoNation’s facial image recognition and neural networks technologies, detection accuracy will be increased remarkably by detecting much more features instead of using the conventional detection method based on the relative positions of the eyes, nose, mouth, and other facial regions. Moreover, DENSO will develop new functions, such as those to detect the driver’s gaze direction and facial expressions more accurately, to understand the state of mind of the driver in order to help create more comfortable vehicles.

“Understanding the status of the driver and engaging them at the right time is an important component for enabling the future of autonomous driving,” said Yukihiro Kato, senior executive director, Information & Safety Systems Business Group of DENSO. “I believe this collaboration with Xperi will help accelerate our innovative ADAS product development by bringing together the unique expertise of both our companies.”

“We are excited to partner with DENSO to innovate in such a dynamic field,” said Jon Kirchner, CEO of Xperi Corporation. “This partnership will play a significant role in paving the way to the ultimate goal of safer roadways through use of our imaging and facial analytics technologies and DENSO’s vast experience in the space.”


Go to the original article...

AutoSens 2017 Awards

Image Sensors World        Go to the original article...

The AutoSens conference, held on Sept. 20-21 in Brussels, Belgium, has published its awards. Some of the image-sensor-relevant ones:

Most Engaging Content
  • First place: Vladimir Koifman, Image Sensors World (yes, this is me!)
  • Highly commended: Junko Yoshida, EE Times

Hardware Innovation
  • First place: Renesas
  • Highly commended: STMicroelectronics

Most Exciting Start-Up
  • Winner: Algolux
  • Highly commended: Innoviz Technologies

Go to the original article...

LG, Rockchip and CEVA Partner on 3D Imaging

Image Sensors World        Go to the original article...

PRNewswire: CEVA partners with LG to deliver a high-performance, low-cost smart 3D camera for consumer electronics and robotic applications.

The 3D camera module incorporates a Rockchip RK1608 coprocessor with multiple CEVA-XM4 imaging and vision DSPs to perform biometric face authentication, 3D reconstruction, gesture/posture tracking, obstacle detection, AR and VR.

"There is a clear demand for cost-efficient 3D camera sensor modules to enable an enriched user experience for smartphones, AR and VR devices and to provide a robust localization and mapping (SLAM) solution for robots and autonomous cars," said Shin Yun-sup, principal engineer at LG Electronics. "Through our collaboration with CEVA, we are addressing this demand with a fully-featured compact 3D module, offering exceptional performance thanks to our in-house algorithms and the CEVA-XM4 imaging and vision DSP."

Go to the original article...

Ambarella Loses Key Customers

Image Sensors World        Go to the original article...

The Motley Fool publishes an analysis of Ambarella's performance over the last year. The company lost some of its key customers, including GoPro, Hikvision and DJI, while the new Google Clips camera also opted for a non-Ambarella processor:

"Faced with shrinking margins, GoPro needed to buy cheaper chipsets to cut costs. It also wanted a custom design which wasn't readily available to competitors like Ambarella's SoCs. That's why it completely cut Ambarella out of the loop and hired Japanese chipmaker Socionext to create a custom GP1 SoC for its new Hero 6 cameras.

DJI also recently revealed that its portable Spark drone didn't use an Ambarella chipset. Instead, the drone uses the Myriad 2 VPU (visual processing unit) from Intel's Movidius. DJI previously used the Myriad 2 alongside an Ambarella chipset in its flagship Phantom 4, but the Spark uses the Myriad 2 for both computer vision and image processing tasks.

Google also installed the Myriad 2 in its Clips camera, which automatically takes burst shots by learning and recognizing the faces in a user's life.

Ambarella needs the CV1 to catch up to the Myriad 2, but that could be tough with the Myriad's first-mover's advantage and Intel's superior scale.

To top it all off, Chinese chipmakers are putting pressure on Ambarella's security camera business in China.
"

Go to the original article...

Pikselim Demos Low-Light Driver Vision Enhancement

Image Sensors World        Go to the original article...

Pikselim publishes a night-time Driver Vision Enhancement (DVE) video using its low-light CMOS sensor behind the windshield of the vehicle with the headlights off (sensor is operated in the 640x512 format at 15 fps in the Global Shutter mode, using an f/0.95 optics and off-chip de-noising):

Go to the original article...

Canon G1X Mark III preview

Cameralabs        Go to the original article...

Canon's PowerShot G1X Mark III is a high-end compact sporting the same 24 Megapixel APS-C sensor as the EOS 80D. The Mark III also squeezes a 3x / 24-72mm zoom, EVF, fully-articulated touchscreen, Wifi and 1080p video into its weatherproof body. Check out my hands-on preview!…

The post Canon G1X Mark III preview appeared first on Cameralabs.

Go to the original article...

Yole on Automotive LiDAR Market

Image Sensors World        Go to the original article...

Yole Developpement publishes its AutoSens Brussels 2017 presentation "Application, market & technology status of the automotive LIDAR." A few slides from the presentation:

Go to the original article...

Sony Announces Three New Sensors

Image Sensors World        Go to the original article...

Sony has added three new sensors to its flyers table: the 8.3MP IMX334LQR based on a 2μm pixel, and the 2.9MP IMX429LLJ and 2MP IMX430LLJ based on a 4.5μm global shutter pixel. The new sensors are said to have high sensitivity and are aimed at security and surveillance applications.

Go to the original article...

Yole Image Sensors M&A Review

Image Sensors World        Go to the original article...

IMVE publishes article "Keeping Up With Consolidation" by Pierre Cambou, Yole Developpement image sensor analyst. There is a nice chart showing the large historical mergers and acquisitions:


"For the source of future M&A, one should rather look toward the decent number of machine vision sensor technology start-ups, companies like Softkinetic, which was purchased by Sony in 2015, and Mesa, which was acquired by Ams, in 2014. There are a certain number of interesting start-ups right now, such as PMD, Chronocam, Fastree3D, SensL, Sionyx, and Invisage. Beyond the start-ups, and from a global perspective, there is little room for a greater number of deals at sensor level, because almost all players have recently been subject to M&A."

Go to the original article...

Waymo Self-Driving Car Relies on 5 LiDARs and 1 Surround-View Camera

Image Sensors World        Go to the original article...

Alphabet's Waymo publishes a Safety Report with some details on its self-driving car sensors - 5 LiDARs and one 360-deg color camera:

LiDAR (Laser) System
LiDAR (Light Detection and Ranging) works day and night by beaming out millions of laser pulses per second—in 360 degrees—and measuring how long it takes to reflect off a surface and return to the vehicle. Waymo’s system includes three types of LiDAR developed in-house: a short-range LiDAR that gives our vehicle an uninterrupted view directly around it, a high-resolution mid-range LiDAR, and a powerful new generation long-range LiDAR that can see almost three football fields away.
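For reference, the range math behind any pulsed LiDAR is simple time-of-flight (a generic textbook relation, not Waymo-specific; the microsecond figure below is my own example):

```python
C = 299_792_458.0  # speed of light, m/s

def lidar_range_m(round_trip_s):
    # The pulse travels out to the target and back, so halve the round trip
    return C * round_trip_s / 2.0

# "Almost three football fields" (~300 m) implies a ~2 microsecond round trip:
print(f"{lidar_range_m(2e-6):.1f} m")  # -> 299.8 m
```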

Vision (Camera) System

Our vision system includes cameras designed to see the world in context, as a human would, but with a simultaneous 360-degree field of view, rather than the 120-degree view of human drivers. Because our high-resolution vision system detects color, it can help our system spot traffic lights, construction zones, school buses, and the flashing lights of emergency vehicles. Waymo’s vision system comprises several sets of high-resolution cameras, designed to work well at long range, in daylight and low-light conditions.


Half a year ago, Bloomberg published an animated GIF showing the cleaning of the Waymo 360-deg camera:

Go to the original article...

Chronocam Partners with Huawei

Image Sensors World        Go to the original article...

French sites L'Usine Nouvelle, InfoDSI, and Chine report that Chronocam partners with Huawei. Huawei is said to cooperate with Chronocam on face recognition technology for its smartphones, similar to Face ID in the iPhone X.

Go to the original article...

Hynix Proposes TrenchFET TG

Image Sensors World        Go to the original article...

SK Hynix patent application US20170287959 "Image Sensor" by Pyong-su Kwag, Yun-hui Yang, and Young-jun Kwon leverages the company's DRAM trench technology:

Go to the original article...

Omron Improves Its Driver Monitoring System

Image Sensors World        Go to the original article...

OMRON's driver monitoring system uses three barometers to judge whether the driver is capable of focusing on driving responsibilities: (1) whether the driver is observing the vehicle's operation (Eyes ON/OFF); (2) how quickly the driver will be able to resume driving (Readiness High/Mid/Low); and (3) whether the driver is behind the wheel (Seating ON/OFF). Additionally, the company's facial image sensing technology, OKAO Vision, now makes it possible to sense the state of the driver even when the driver is wearing a mask or sunglasses - something that had previously not been possible.

Go to the original article...

Magic Leap Seeks $1b Funding on $6b Valuation

Image Sensors World        Go to the original article...

Reuters reports that AR glasses startup Magic Leap has filed with the SEC to raise $1b at a $6b valuation. The filing does not indicate the amount Magic Leap has secured from investors so far; it may end up raising less than $1b.

Go to the original article...

Compressed Sensing Said to Save Image Sensor Power

Image Sensors World        Go to the original article...

Pravir Singh Gupta and Gwan Seong Choi from Texas A&M University publish an open access paper "Image Acquisition System Using On Sensor Compressed Sampling Technique." They say that "Compressed Sensing has the potential to increase the resolution of image sensors for a given technology and die size while significantly decreasing the power consumption and design complexity. We show that it has potential to reduce power consumption by about 23%-65%."

The proposed sensor architecture implementing this claim is given below:


"Now we demonstrate the reconstruction results of our proposed novel system flow. We use both binary and non-binary block diagonal matrices to compressively sample the image. The binary block diagonal (ΦB) and non-binary block diagonal (ΦNB) sampling matrices are mentioned below."
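For intuition, here is a toy Python sketch of block-wise compressive sampling with a random binary block-diagonal matrix. It is illustrative only: the paper's exact matrix construction, block size, and reconstruction algorithm are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Each 16-pixel block is reduced to 8 measurements (2x compression),
# so the sensor would read out half as many values as it has pixels.
block, m = 16, 8
phi_block = rng.integers(0, 2, size=(m, block)).astype(float)  # binary ΦB

def sample(image):
    x = image.reshape(-1, block)  # split the pixel stream into blocks
    return x @ phi_block.T        # y = ΦB x, applied per block

img = rng.random((32, 32))        # stand-in for a raw sensor frame
y = sample(img)
print(img.size, "pixels ->", y.size, "measurements")
```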

Go to the original article...

EI 2018, "Image Sensors and Imaging Systems" Preliminary Program

Image Sensors World        Go to the original article...

Electronic Imaging 2018, "Image Sensors and Imaging Systems" Symposium is about to publish its preliminary program. I was given an early preview:

There will be five invited keynotes:
  • "Dark Current Limiting Mechanisms in CMOS Image Sensors"
    Dan McGrath, BAE Systems (California)
  • "Security imaging in an unsecure world"
    Anders Johanesson, AXIS COMMUNICATIONS AB (Sweden)
  • "Quantum Efficiency and Color"
    Jörg Kunze, Basler AG (Germany)
  • "Sub-Electron Low Noise CMOS image sensors"
    Angel Rodriguez Vasquez, University of Sevilla (Spain)
  • "Advances in automotive image sensors"
    Boyd Fowler, OmniVision Technologies (California)
The regular papers are grouped into several sessions with the following themes (the exact names are still under discussion):
  • QE curves, color and spectral imaging
  • Depth sensing
  • High speed and ultra high speed imaging
  • Noise, performance and characterization
  • Technology and design for high performance image sensors
  • Image sensors and technologies for automotive and autonomous vehicles
  • Applications
  • Interactive posters
The program spans two days within the 5-day Electronic Imaging symposium, which is held at the same time as Photonics West and the week after the P2020 meeting.

Go to the original article...

Intel Unveils D400 Realsense Camera Family

Image Sensors World        Go to the original article...

Intel publishes an official page for the D400 camera family, currently consisting of the D415 and D435 active stereo cameras. Reportedly, the earlier Realsense cameras SR300, R200 and F200 are being discontinued, while the D400 series will be expanded to include passive and active stereo models:
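As a reminder of how any stereo camera, active or passive, turns disparity into depth, here is the generic rectified-stereo relation with made-up numbers (not Intel's calibration parameters):

```python
def stereo_depth_m(focal_px, baseline_m, disparity_px):
    # Classic pinhole-stereo relation: Z = f * B / d
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 640 px focal length, 55 mm baseline,
# 35.2 px disparity -> a point 1 m away.
print(f"{stereo_depth_m(640, 0.055, 35.2):.2f} m")  # -> 1.00 m
```

The projector in an active stereo camera only adds texture for matching; the depth math itself is unchanged.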

Go to the original article...

Velodyne More Than Quadruples LiDAR Manufacturing

Image Sensors World        Go to the original article...

BusinessWire: Velodyne has more than quadrupled production for its LiDAR sensors to meet strong global demand. As a result, Velodyne LiDAR’s sensors are immediately available via distribution partners in Europe, Asia Pacific, and North America, with industry standard lead-times for direct contracts.

To support that expansion, Velodyne has doubled the number of its full-time employees over the past six months. These employees operate across three facilities in California, including the company’s new Megafactory in San Jose, its long-standing manufacturing facility in Morgan Hill, and the Velodyne Labs research center in Alameda.

“Velodyne leads the market in real-time 3D LiDAR systems for fully autonomous vehicles,” said David Hall, Velodyne LiDAR Founder and CEO. “With the tremendous surge in autonomous vehicle orders and new installations across the last 12 months, we scaled capacity to meet this demand, including a significant increase in production from our 200,000 square-foot Megafactory.”

Velodyne Megafactory in San Jose, CA

Looking at GM's autonomous driving fleet, one can understand why Velodyne needs so much production capacity:

Go to the original article...

Samsung Announces 0.9um Pixel Sensor for Smartphones, More

Image Sensors World        Go to the original article...

BusinessWire: Samsung introduces two new ISOCELL sensors: the 12MP Fast 2L9 with 1.28μm Dual Pixel technology, and the ultra-small 24MP Slim 2X7 with 0.9μm pixels and Tetracell technology.

The Fast 2L9 reduces the pixel size from the previous Dual Pixel sensor’s 1.4μm to 1.28μm.

At 0.9μm, the Slim 2X7 is said to be the first sensor in the industry with pixel size below 1.0μm. The pixel uses improved ISOCELL technology with deeper DTI that reduces color crosstalk and expands the full-well capacity to hold more light information. In addition, the small 0.9μm pixel size enables a 24Mp image sensor to be fitted in a thinner camera module.

The Slim 2X7 also features Tetracell technology. Tetracell improves performance in low-light situations by merging four neighboring pixels to work as one, increasing light sensitivity. In bright environments, Tetracell uses a re-mosaic algorithm to produce full-resolution images. This enables consumers to use the same front camera to take photos in various lighting conditions.
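The low-light half of Tetracell is essentially 2x2 binning. Here is a minimal sketch of that step (my illustration, not Samsung's actual pipeline, which also includes the proprietary re-mosaic algorithm):

```python
import numpy as np

def tetracell_bin(raw):
    # Merge each 2x2 neighborhood into one pixel (summing the charge),
    # trading resolution for ~4x the collected signal in low light.
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.ones((4, 6))        # tiny stand-in for a 24MP raw frame
binned = tetracell_bin(raw)
print(binned.shape)          # half the resolution in each dimension
```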

“Samsung ISOCELL Fast 2L9 and ISOCELL Slim 2X7 are new image sensors that fully utilize Samsung’s advanced pixel technology, and are highly versatile as they can be placed in both front and rear of a smartphone,” said Ben K. Hur, VP of System LSI Marketing at Samsung.

In an earlier news, Samsung Tetracell technology received Korea Multimedia Technology Award:

Go to the original article...

ON Semi Announces Two 1MP Sensors

Image Sensors World        Go to the original article...

BusinessWire: ON Semi announces the 3μm pixel-based AS0140 and AS0142, 1/4-inch 1MP sensors with integrated ISP for automotive applications. The new sensors support 45 fps at full resolution or 60 fps at 720p. Key features include distortion correction, multi-color overlays, and both analog (NTSC) and digital (Ethernet) interfaces. Both SoC devices achieve enhanced image quality by using adaptive local tone mapping (ALTM) to eliminate artifacts in the acquisition process while achieving a DR of 93 dB.

Both new devices are said to have class-leading power efficiency: running at 30 fps in HDR mode, they consume just 530 mW. The operating temperature range is -40°C to +105°C. Engineering samples are available now; the AS0140 will be in production in 4Q17, and the AS0142 in 1Q18.

AS0140 ISP pipeline

Go to the original article...

Image Fusion in Dual Cameras

Image Sensors World        Go to the original article...

Corephotonics publishes a presentation on image fusion in dual cameras:

Go to the original article...
