Archives for January 2018

Samsung Presents 3-Layer Stacked Image Sensor

Image Sensors World        Go to the original article...

Samsung's mobile image sensor page unveils its 3-layer stacked image sensor capturing 1080p video at 480fps:


Thanks to DJ for the link!

Go to the original article...

Samsung Applies for Under-Display Fingerprint Image Sensor Patent

Image Sensors World        Go to the original article...

Samsung patent application US20180012069 "Fingerprint sensor, fingerprint sensor package, and fingerprint sensing system using light sources of display panel" by Dae-young Chung, Hee-chang Hwang, Kun-yong Yoon, Woon-bae Kim, Bum-suk Kim, Min Jang, Min-chul Lee, and Jung-woo Kim proposes an optical fingerprint image sensor under an OLED display panel:

Go to the original article...

ST 115dB Linear HDR Pixel

Image Sensors World        Go to the original article...

MDPI Special Issue on IISW 2017 publishes ST paper "A 750 K Photocharge Linear Full Well in a 3.2 μm HDR Pixel with Complementary Carrier Collection" by Frédéric Lalanne, Pierre Malinge, Didier Hérault, Clémence Jamin-Mornet, and Nicolas Virollet.

"The native HDR pixel concept based on a parallel electron and hole collection for, respectively, a low signal level and a high signal level is particularly well-suited for this performance challenge. The theoretical performance of this pixel is modeled and compared to alternative HDR pixel architectures. This concept is proven with the fabrication of a 3.2 μm pixel in a back-side illuminated (BSI) process including capacitive deep trench isolation (CDTI). The electron-based image uses a standard 4T architecture with a pinned diode and provides state-of-the-art low-light performance, which is not altered by the pixel modifications introduced for the hole collection. The hole-based image reaches 750 kh+ linear storage capability thanks to a 73 fF CDTI capacitor. Both images are taken from the same integration window, so the HDR reconstruction is not only immune to the flicker issue but also to motion artifacts."

Go to the original article...

Espros LiDAR Sensor Presentation at AutoSens 2017

Image Sensors World        Go to the original article...

AutoSens publishes a video of Espros CEO Beat De Coi's presentation of a pulsed ToF sensor, recorded in October 2017:

Go to the original article...

Intel Starts Shipments of D400 RealSense Cameras

Image Sensors World        Go to the original article...

Intel begins shipping two RealSense D400 depth cameras from its next-generation D400 product family: the D415 and D435, based on the previously announced D400 3D modules.

RealSense D415

Intel is also offering its D4 and D4M (mobile version) depth processor chips for stereo cameras:


Go to the original article...

ams Bets on 3D Sensing

Image Sensors World        Go to the original article...

SeekingAlpha publishes an analysis of the recent ams business moves:

"ams has assembled strong capabilities in 3D sensing - one of the strongest emerging new opportunities in semiconductors. 3D sensing can detect image patterns, distance, and shape, allowing for a wide range of uses, including facial recognition, augmented reality, machine vision, robotics, and LIDAR.

Although ams is not currently present in the software side, the company has recently begun investing in software development as a way to spur future adoption. Ams has also recently begun a collaboration with Sunny Optical, a leading Asian sensor manufacturer, to take advantage of Sunny's capabilities in module manufacturing.

At this point it remains to be seen how widely adopted 3D sensing will be; 3D sensing could become commonplace on all non-entry level iPhones in a short time and likewise could gain broader adoption in Android devices. What's more, there is the possibility of adding 3D sensing to other consumer devices like tablets, not to mention adding 3D sensing to the back of phones in future models.
"

Go to the original article...

RGB to Hyperspectral Image Conversion

Image Sensors World        Go to the original article...

Ben Gurion University, Israel, researchers implement a seemingly physically impossible thing - converting regular RGB consumer camera images into hyperspectral ones, purely in software. Their paper "Sparse Recovery of Hyperspectral Signal from Natural RGB Images" by Boaz Arad and Ohad Ben-Shahar, presented at the European Conference on Computer Vision (ECCV) in Amsterdam, The Netherlands, in October 2016, says:

"We present a low cost and fast method to recover high quality hyperspectral images directly from RGB. Our approach first leverages hyperspectral prior in order to create a sparse dictionary of hyperspectral signatures and their corresponding RGB projections. Describing novel RGB images via the latter then facilitates reconstruction of the hyperspectral image via the former. A novel, larger-than-ever database of hyperspectral images serves as a hyperspectral prior. This database further allows for evaluation of our methodology at an unprecedented scale, and is provided for the benefit of the research community. Our approach is fast, accurate, and provides high resolution hyperspectral cubes despite using RGB-only input."


"The goal of our research is the reconstruction of the hyperspectral data from natural images from their (single) RGB image. Prima facie, this appears a futile task. Spectral signatures, even in compact subsets of the spectrum, are very high (and in the theoretical continuum, infinite) dimensional objects while RGB signals are three dimensional. The back-projection from RGB to hyperspectral is thus severely underconstrained and reversal of the many-to-one mapping performed by the eye or the RGB camera is rather unlikely. This problem is perhaps expressed best by what is known as metamerism – the phenomenon of lights that elicit the same response from the sensory system but having different power distributions over the sensed spectral segment.

Given this, can one hope to obtain good approximations of hyperspectral signals from RGB data only? We argue that under certain conditions this otherwise ill-posed transformation is indeed possible; First, it is needed that the set of hyperspectral signals that the sensory system can ever encounter is confined to a relatively low dimensional manifold within the high or even infinite-dimensional space of all hyperspectral signals. Second, it is required that the frequency of metamers within this low dimensional manifold is relatively low. If both conditions hold, the response of the RGB sensor may in fact reveal much more on the spectral signature than first appears and the mapping from the latter to the former may be achievable.

Interestingly enough, the relative frequency of metameric pairs in natural scenes has been found to be as low as 10^−6 to 10^−4. This very low rate suggests that at least in this domain spectra that are different enough produce distinct sensor responses with high probability.

The eventual goal of our research is the ability to turn consumer grade RGB cameras into hyperspectral acquisition devices, thus permitting truly low cost and fast HISs.
"

Go to the original article...

X-Ray Imaging at 30fps

Image Sensors World        Go to the original article...

Teledyne Dalsa publishes a nice demo of its 1MP 30fps X-Ray sensor:

Go to the original article...

SD Optics Depth Sensing Camera

Image Sensors World        Go to the original article...

SD Optics publishes two videos of depth sensing by means of fast focus variations of its MEMS lens:




Go to the original article...

Imec 3D Stacking Aims at 100nm Contact Pitch

Image Sensors World        Go to the original article...

Imec article on 3D bonding technology by Eric Beyne, imec fellow and program director of 3D system integration, presents solutions that are supposed to reach a 100nm contact pitch:

Go to the original article...

Gate/Body-tied MOSFET Image Sensor Proposed

Image Sensors World        Go to the original article...

Sensors and Materials publishes a paper "Complementary Metal Oxide Semiconductor Image Sensor Using Gate/Body-tied P-channel Metal Oxide Semiconductor Field Effect Transistor-type Photodetector for High-speed Binary Operation" by Byoung-Soo Choi, Sang-Hwan Kim, Jimin Lee, Chang-Woo Oh, Sang-Ho Seo, and Jang-Kyoo Shin from Kyungpook National University, Korea.

"In this paper, we propose a CMOS image sensor that uses a gate/body-tied p-chnnel metal oxide semiconductor field effect transistor (PMOSFET)-type photodetector for highspeed binary operation. The sensitivity of the gate/body-tied PMOSFET-type photodetector is approximately six times that of the p–n junction photodetector for the same area. Thus, an active pixel sensor with a highly sensitive gate/body-tied PMOSFET-type photodetector is more appropriate for high-speed binary operation."

The 3T-style pixel uses a PMOS device in place of the photodiode and has a non-linear response. Its inherent non-linearity is probably the main reason that the binary operation mode is proposed:
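
A minimal sketch of why a compressive, high-sensitivity response suits 1-bit readout, using an illustrative logarithmic model (the real GBT PMOSFET transfer curve is device-specific and not reproduced here):

    import numpy as np

    def pixel_response(light, sensitivity=6.0):
        # Illustrative log-like response; the ~6x sensitivity echoes the paper's claim
        return np.log1p(sensitivity * light)

    light = np.array([0.01, 0.05, 0.2, 1.0, 5.0])     # relative illuminance
    threshold = pixel_response(0.1)                   # reference illuminance level
    binary = (pixel_response(light) > threshold).astype(int)
    print(binary)   # [0 0 1 1 1] - the 1-bit decision tolerates the non-linearity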

Go to the original article...

Sony E 18-135mm f3.5-5.6 review

Cameralabs        Go to the original article...

The Sony E 18-135mm f3.5-5.6 is a general-purpose 7.5x zoom for its cropped-frame mirrorless bodies, including the A6000, A5000 and NEX series, on which it delivers an equivalent range of 27-203mm. Compact and light but capable of quality results, find out if it's for you in my review!…

The post Sony E 18-135mm f3.5-5.6 review appeared first on Cameralabs.

Go to the original article...

The Rise of Smartphone Spectrometers

Image Sensors World        Go to the original article...

MDPI publishes a paper "Smartphone Spectrometers" by Andrew J.S. McGonigle, Thomas C. Wilkes, Tom D. Pering, Jon R. Willmott, Joseph M. Cook, Forrest M. Mims, and Alfio V. Parisi from the University of Sheffield, UK, and the University of Sydney and the University of Southern Queensland, Australia.

"Smartphones are playing an increasing role in the sciences, owing to the ubiquitous proliferation of these devices, their relatively low cost, increasing processing power and their suitability for integrated data acquisition and processing in a ‘lab in a phone’ capacity. There is furthermore the potential to deploy these units as nodes within Internet of Things architectures, enabling massive networked data capture. Hitherto, considerable attention has been focused on imaging applications of these devices. However, within just the last few years, another possibility has emerged: to use smartphones as a means of capturing spectra, mostly by coupling various classes of fore-optics to these units with data capture achieved using the smartphone camera. These highly novel approaches have the potential to become widely adopted across a broad range of scientific e.g., biomedical, chemical and agricultural application areas. In this review, we detail the exciting recent development of smartphone spectrometer hardware, in addition to covering applications to which these units have been deployed, hitherto. The paper also points forward to the potentially highly influential impacts that such units could have on the sciences in the coming decades."

Go to the original article...

GM Self-Driving Car Has 5 LiDARs and 16 Cameras

Image Sensors World        Go to the original article...

GM autonomous car safety report details the sensors on board of Cruise self-driving vehicle: "To perform Perception functions, the vehicle has five LiDARs, 16 cameras and 21 radars. Their combined data provides sensor diversity allowing Perception to see complex environments."

Go to the original article...

Brillnics 90dB DR Image Sensor Paper

Image Sensors World        Go to the original article...

MDPI Sensors Special Issue on the 2017 International Image Sensor Workshop publishes Brillnics paper "An Over 90 dB Intra-Scene Single-Exposure Dynamic Range CMOS Image Sensor Using a 3.0 μm Triple-Gain Pixel Fabricated in a Standard BSI Process" by Isao Takayanagi, Norio Yoshimura, Kazuya Mori, Shinichiro Matsuo, Shunsuke Tanaka, Hirofumi Abe, Naoto Yasuda, Kenichiro Ishikawa, Shunsuke Okura, Shinji Ohsawa, and Toshinori Otaka.

"To respond to the high demand for high dynamic range imaging suitable for moving objects with few artifacts, we have developed a single-exposure dynamic range image sensor by introducing a triple-gain pixel and a low noise dual-gain readout circuit. The developed 3 μm pixel is capable of having three conversion gains. Introducing a new split-pinned photodiode structure, linear full well reaches 40 ke−. Readout noise under the highest pixel gain condition is 1 e− with a low noise readout circuit. Merging two signals, one with high pixel gain and high analog gain, and the other with low pixel gain and low analog gain, a single exposure dynamic rage (SEHDR) signal is obtained. Using this technology, a 1/2.7”, 2M-pixel CMOS image sensor has been developed and characterized. The image sensor also employs an on-chip linearization function, yielding a 16-bit linear signal at 60 fps, and an intra-scene dynamic range of higher than 90 dB was successfully demonstrated. This SEHDR approach inherently mitigates the artifacts from moving objects or time-varying light sources that can appear in the multiple exposure high dynamic range (MEHDR) approach."

Go to the original article...

Innoviz LiDAR Prototype

Image Sensors World        Go to the original article...

PRNewswire: Innoviz presents its prototype LiDAR (model Pro) at CES, with quite a complete performance spec, a rarity among LiDAR startups (Velodyne excepted):


Looking forward, the company intends to bring the automotive-grade InnovizOne model to market sometime in 2019. It requires quite a leap in technology to reach the targets in resolution, FOV, range and size set at last year's CES:


Update: As of January 16, 2018, the following design targets are presented for InnovizOne automotive-qualified product on the company page:

Go to the original article...

Sony, Samsung Talk on Automotive Imaging

Image Sensors World        Go to the original article...

Kazuo Hirai, Sony CEO and President, spent a few minutes talking about automotive imaging in his CES keynote:



Samsung announces an open DRVLINE platform that includes "a brand-new ADAS forward-facing camera system, created by Samsung and HARMAN, which is engineered to meet upcoming New Car Assessment Program (NCAP) standards. These include lane departure warning, forward collision warning, pedestrian detection, and automatic emergency braking."


On the sensors side, Samsung DRVLINE partners include 3 LiDAR startups:

Go to the original article...

3D Imaging News: TI, Omnivision, Spreadtrum

Image Sensors World        Go to the original article...

TI publishes YouTube demos on use cases for its ToF image sensors (based on Softkinetic pixels):





PRNewswire: OmniVision and Spreadtrum announced a turnkey active stereo 3D camera reference design for smartphones.

"We anticipate face login to be one of the key features for upcoming mobile devices. Achieving this requires complete camera hardware and software system development, which can be an extremely complicated, resource-intensive and expensive process. The aim of this active stereo 3D camera reference design is to reduce time to market and other key hurdles for our customers," said Sylvia Zhang, head of marketing for OmniVision's mobile segment. "This offering reflects our strong partnership with Spreadtrum and our continued determination to deliver innovative solutions that ease our customers' pain points."

Pairing OmniVision's OV9282 and OV7251 global shutter sensors with the Spreadtrum SC9853 AP has resulted in a compelling active stereo camera solution. SC9853 is based on 14nm octa-core 64-bit Intel Airmont processor architecture.
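
For context, an active stereo pair like this recovers depth by triangulation: Z = f*B/d, where f is the focal length in pixels, B the baseline and d the disparity between the two global shutter images. A minimal sketch with placeholder optics (the reference design's actual baseline and lens parameters are not public here):

    # Depth from disparity for a calibrated, rectified stereo pair
    F_PX = 700.0        # assumed focal length in pixels (placeholder)
    BASELINE_M = 0.05   # assumed 50mm baseline between the two OV9282s (placeholder)

    def depth_m(disparity_px):
        if disparity_px <= 0:
            return float("inf")   # zero disparity corresponds to a point at infinity
        return F_PX * BASELINE_M / disparity_px

    for d in (70, 35, 7):
        print(f"disparity {d:3d} px -> depth {depth_m(d):.2f} m")
    # 70 px -> 0.50 m, 35 px -> 1.00 m, 7 px -> 5.00 m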

Go to the original article...

Yole CIS Market Observations

Image Sensors World        Go to the original article...

Yole Developpement presentation "Machine Vision sensors & cameras competitive landscape" is available for free but requires registration. The June 23, 2017, 33-page presentation has many interesting slides on the image sensor market:

Go to the original article...

Yole Forecasts Fast Vision Processors Growth

Image Sensors World        Go to the original article...

Yole Developpement article on CNN and ISP forecasts dramatic growth in the vision processor market:

Go to the original article...

NXP, LG, and HELLA Announce Open Automotive Vision Platform

Image Sensors World        Go to the original article...

GlobeNewswire: NXP, LG, and HELLA Aglaia announce a strategic collaboration for automotive vision applications. The new vision platform will be made available to all automakers to promote best-in-class detection and classification of vulnerable road users to save lives.

Many of today’s automotive vision platforms are proprietary, inflexible and provide little room for automakers to differentiate in the global marketplace. They also inhibit further software integration and innovation and generally lock out the ability to combine the best available sensing technology and software sources in the market. The collaborators’ joint development work, led by LG, is based on the conviction that vision platforms must be open and safe to meet NCAP guidelines and to pave the way for level 3 to 5 automated driving.

The camera-based vision system, developed by LG together with NXP and HELLA Aglaia, is designed to detect and classify vulnerable road users such as pedestrians and cyclists and activate auto emergency braking (AEB). The cameras can be attached to the windshield behind a car’s rear-view mirror.

Additionally, the system detects traffic signs, notifies the driver of speed limits, monitors lane keeping and facilitates steering correction in case of unintentional drift. NXP’s accelerated vision processing IP, together with algorithm expertise from HELLA Aglaia and LG, allows such applications to run at very low latency and within a market leading power envelope.

“Openness and collaboration empower innovation. Now is the time for partners such as LG Electronics, NXP and HELLA Aglaia to combine resources in an open vision system for the safety of this generation and generations to come,” said Lee Woo-jong, president of LG’s Vehicle Components Company. “We look forward to working with NXP and HELLA Aglaia on our camera-based vision system, one that carmakers around the world will use to help bring about the autonomous revolution.”

Go to the original article...

Toyota Paper on SPAD-based LiDAR

Image Sensors World        Go to the original article...

MDPI Sensors publishes Toyota paper "Small Imaging Depth LIDAR and DCNN-Based Localization for Automated Guided Vehicle" by Seigo Ito, Shigeyoshi Hiratsuka, Mitsuhiko Ohta, Hiroyuki Matsubara, and Masaru Ogawa. The paper belongs to the Special Issue on Imaging Depth Sensors—Sensors, Algorithms and Applications.

"We present our third prototype sensor and a localization method for Automated Guided Vehicles (AGVs), for which small imaging LIght Detection and Ranging (LIDAR) and fusion-based localization are fundamentally important. Our small imaging LIDAR, named the Single-Photon Avalanche Diode (SPAD) LIDAR, uses a time-of-flight method and SPAD arrays. A SPAD is a highly sensitive photodetector capable of detecting at the single-photon level, and the SPAD LIDAR has two SPAD arrays on the same chip for detection of laser light and environmental light. Therefore, the SPAD LIDAR simultaneously outputs range image data and monocular image data with the same coordinate system and does not require external calibration among outputs. As AGVs travel both indoors and outdoors with vibration, this calibration-less structure is particularly useful for AGV applications."

Go to the original article...

Bellus 3D Face Recognition for Smartphones

Image Sensors World        Go to the original article...

PRWeb: Bellus3D presents its structured light camera for face recognition in smartphones:

"Bellus3D worked closely with Spreadtrum to define and implement a dual-sensor structured light depth camera module that captures up to 200k 3D points with sub-millimeter depth accuracy. The new reference design contains two NIR sensors and a structured light projector, and offers greater depth resolution than the single-sensor structured light camera design deployed by iPhone X. It enables smartphone manufacturers to use off-the-shelf components to reduce costs and streamlines smartphone manufacturing by relaxing the tight calibration tolerances associated with single-sensor structured light depth camera module designs. In addition, Bellus3D software can automatically recalibrate the depth camera module after a smartphone is accidently dropped, which otherwise may require the product to be returned to the manufacturer for repair."

The camera specifications:
  • Two 1 MP IR sensors (1280 x 800)
  • One 2MP Color sensor (1600 x 1200)
  • Two Infrared structured light VCSEL projectors
  • Working range: 25cm to 60cm (optimal at 30-45cm)
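
The sub-millimeter claim can be sanity-checked with the standard stereo depth-error relation sigma_Z ~ Z^2 * sigma_d / (f * B). A minimal sketch with assumed optics (baseline, focal length and matching precision below are placeholders, not Bellus3D figures):

    F_PX = 900.0        # assumed focal length in pixels for a 1280 x 800 NIR sensor
    BASELINE_M = 0.04   # assumed 40mm baseline between the two NIR sensors
    SIGMA_D_PX = 0.1    # assumed sub-pixel matching precision

    for z in (0.25, 0.35, 0.60):   # the module's stated working range
        sigma_z_mm = 1000 * z**2 * SIGMA_D_PX / (F_PX * BASELINE_M)
        print(f"Z = {z:.2f} m -> depth noise ~ {sigma_z_mm:.2f} mm")
    # ~0.17 mm at 25 cm, ~0.34 mm at 35 cm, ~1.00 mm at 60 cm

Under these assumptions the error stays around or below a millimeter across the stated 25-60cm working range, which makes the sub-millimeter figure at the optimal 30-45cm distance plausible.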


Go to the original article...

Seek Thermal Presents Automotive Thermal Camera

Image Sensors World        Go to the original article...

PRNewswire: Seek Thermal presents its QVGA thermal imaging camera for the automotive aftermarket, priced at under $999. The thermal camera easily integrates with existing automotive infotainment systems, making it easier than ever to add high-resolution thermal imaging to a vehicle at a low price point.

"The goal of thermal imaging is simple: to give drivers information about their surroundings and help them react to potential hazards," said Tim LeBeau of Seek Thermal. "Risky driving factors such as darkness, rain, fog, snow, and glare become far less dangerous when a driver has access to real-time thermal data. These benefits will also be key in increasing safety in autonomous vehicles. Due to high cost though, this technology has only been available in a select number of high-end automobiles. It is our mission to provide life-saving technology that is available to everyone."

Go to the original article...

AI News: Ambarella, Geo, Nvidia, Omron, Rockchip, Intel, Mediatek

Image Sensors World        Go to the original article...

BusinessWire: Ambarella introduces the CV22 camera SoC, the second chip in the CVflow family, combining image processing, 4Kp60 video encoding and CVflow computer vision processing in a single, low power design in a 10nm process. The CV22’s CVflow architecture provides the DNN processing for the next generation of home monitoring, automotive, drone and wearable cameras.

“CV22 enables customers to deploy cameras with high-performance deep-learning capabilities,” said Fermi Wang, CEO of Ambarella. “Compared with inefficient CPU and GPU solutions, CV22 provides powerful computer vision performance combined with high-quality image processing, efficient video encoding and the low-power operation required to be deployed in mass production. It will enable a new level of intelligence in cameras, ranging from person recognition in home monitoring cameras to advanced ADAS capabilities in automobiles.”

CV22 SoC features:

  • CVflow processor with CNN/deep learning support
  • 4Kp60/8-Megapixel AVC and HEVC encoding with multi-stream support
  • Real-time hardware-accelerated 360-degree de-warping and Lens Distortion Correction (LDC) engine
  • Multi-channel ISP with up to 800-Megapixel/s input pixel rate
  • Multi-exposure HDR and WDR processing
  • LED flicker mitigation
  • SmartAVC and SmartHEVC intelligent rate control for lowest bitrates in security applications
  • Multi-sensor support for multi-imager security cameras, 3-channel electronic mirror and 4-channel AVM systems

BusinessWire: GEO Semiconductor announces the GW5200 and GW5400 CVPs for automotive cameras. The GW5400 has in-camera computer vision to enable ADAS functionality.

“The GW5 product is a major advance as automotive camera technologies evolve to combine viewing and computer vision applications in a single camera. At GEO we are focused on bringing safety benefits to the edge by adding computer vision and ADAS to lower cost automotive camera solutions,” said Dave Orton, GEO CEO, “with design wins at many of the major automotive OEMs, we are looking to build on our market success by enabling smart backup camera, eMirror, Driver Monitoring Systems, and other innovative automotive camera solutions.”

Nvidia unveils the Xavier processor with AI capabilities for its Drive platform. With more than 9 billion transistors, Xavier is said to be the most complex SoC ever created, representing the work of more than 2,000 NVIDIA engineers over a four-year period and an investment of $2 billion in R&D.

It’s built around a custom 8-core CPU, a new 512-core Volta GPU, a new deep learning accelerator, new computer vision accelerators and new 8K HDR video processors. All previous NVIDIA DRIVE software development carries over and runs. DRIVE Xavier puts more processing power to work using less energy, delivering 30 trillion operations per second while consuming just 30 watts. It’s 15 times more energy efficient than our previous generation architecture.

GlobeNewsWire: NVIDIA, ZF and Baidu today announced that they are creating a production-ready AI autonomous vehicle platform designed for China, the world’s largest automotive market.

PRNewswire: OMRON too is putting AI to work to promote automotive safety. OMRON VOR technology can detect early-stage signs of drowsiness by sensing eye movements using a remotely installed automotive camera. While other current technology typically observes blinking to detect drowsy driving, OMRON's new technology observes the correlation between head and eye movements – a reflex motion that is difficult for drivers to control.

This technology enables simultaneous measurement of the driver's gaze angle and 3-D eye position using a single camera for highly precise gaze detection with accuracy of plus or minus 1 degree. By understanding the movement of the driver’s pupil, OMRON's VOR system can detect signs of drowsiness one to two minutes before the driver is even aware he or she feels sleepy, and promote safe driving.

"Artificial intelligence identifies the expression and attitude of the driver and analyzes that data over time to determine if the driver is paying attention to the road or incapacitated. The car can accordingly enact safety measures, like automatic control, warnings or actions to make our roads safer," says Deron Jackson, CTO of OMRON Adept Technologies.


PRNewswire: Rockchip released its first AI processor, the RK3399Pro, a one-stop turnkey solution for AI. The computing performance of its NPU (Neural Network Processing Unit) reaches 2.4 TOPS. The NPU's computing performance is said to be 150% higher than that of other NPU processors of the same type, while its power consumption is claimed to be less than 1% of that of solutions adopting a GPU as the AI computing unit.

MIT Technology Review: Intel shows its Loihi AI processor that learns to recognize objects in pictures captured by a webcam. The new chip uses about a thousandth as much power as a conventional processor.




Update: PRNewswire: MediaTek reveals more details of its ongoing AI platform strategy to enable AI edge computing with its NeuroPilot AI platform. Through a combination of hardware (an AI processing unit, or APU) and software (the NeuroPilot SDK), MediaTek intends to bring AI across its wide-ranging portfolio of consumer products: smartphones, smart homes, autos and more.

Go to the original article...

LiDAR News: Velodyne, AEye, Cepton, TetraVue, Valeo

Image Sensors World        Go to the original article...

BusinessWire: In a surprise move, Velodyne announced a volume increase and cost reduction of its most popular LiDAR product, the VLP-16. Demand for the VLP-16 was off the charts in 2017, and in just a matter of days the demand has increased even more. “With this cost reduction, we’ll be able to get more Pucks into the hands of more customers, support the growing number of autonomous vehicle development fleets around the world, and start creating a better tomorrow,” said David Hall, Velodyne CEO and Founder.

BusinessWire: AEye announces its AE100 iDAR:

“The AE100 is a game-changer for the autonomous vehicle and ADAS markets, as it makes iDAR technology commercially available for the first time,” said Luis Dussan, founder and CEO of AEye. “iDAR-based robotic perception allows sensors to mimic the visual cortex – bringing real-time intelligence to data collection. As a result, the system not only captures everything in a scene - it actually brings higher resolution to key objects and exceeds industry required speeds and distances. By solving for the limitations of first generation LiDAR-only solutions, AEye is enabling the safe, timely rollout of failsafe commercial autonomous vehicles.”

Leveraging a more powerful, yet safer, 1550-nm laser, AEye’s adaptive-scanning iDAR AE100 system is said to be able to interrogate 100% of the scene, while typical fixed pattern LiDAR systems are only capable of interrogating 6% of any scene, due to huge vertical gaps inherent in raster or spinning systems. With its embedded intelligence, the AE100 offers key advantages:
  • Coverage: The AE100 can use dynamic patterns for mapping the environment, as it is not tied to one fixed mode. In evaluating any given scene, AE100’s software definable scanning delivers more than 10x the 3D resolution over legacy systems.
  • Speed: It’s 3x faster, and does not miss any objects between scans while identifying and solving any temporal anomalies. This reduces scan gaps, resulting in more than 25 feet of faster response distance at average highway speeds – more than two car lengths.
  • Range: The AE100 extends effective range at comparable resolution by 7-10x over currently deployed LiDAR systems.
Customizable general performance specifications include up to 200Hz frame rates, less than 60μs object or blob revisit rates, software-definable resolution of 0.09 degrees H/V, and a maximum range of 300-400 meters. It will have a limited release in mid-2018, and a larger commercial release later in Q3.
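
As a quick unit check on the "more than 25 feet of faster response distance" claim above (assuming an average highway speed of 65 mph, which is our figure, not AEye's):

    MPH_TO_FTPS = 5280 / 3600            # 1 mph = 1.4667 ft/s
    speed_ftps = 65 * MPH_TO_FTPS        # ~95.3 ft/s at the assumed 65 mph
    print(f"{25 / speed_ftps:.2f} s")    # ~0.26 s to cover 25 ft

So 25 feet corresponds to roughly a quarter second of reaction time, and at a typical ~13 ft car length it is indeed about two car lengths.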


BusinessWire: Cepton announces a partnership with May Mobility, a developer of autonomous driving technology, to replace an existing transportation solution with self-driving fleet vehicles on public roads. “Cepton is driving the high performance LiDAR evolution to break the $1,000 price point; we are very excited about our collaboration and what it will mean for the future of the industry,” said Edwin Olson, May Mobility CEO and Co-Founder. Cepton LiDARs feature a patented frictionless micro-motion technology that eliminates large spinning parts and does not rely on custom components. The result is faster, more scalable production at a lower price.

GlobeNewsWire: TetraVue partners with NVIDIA, CVedia and AGC/Wideye for next generation ADAS and self-driving applications. TetraVue’s solid-state 4D LIDAR cameras capture multi-megapixel images at up to 30 fps with accurate depth for each individual pixel, providing what is said to be over 100x more spatial and motion data than competing low resolution scanning LIDARs.

TetraVue presents its "never before seen high definition" image example with per-pixel depth data:


PRNewswire: NAVYA (France) introduces the AUTONOM CAB, said to be a first for North America (what about the Waymo service in Phoenix?). AUTONOM CAB uses a sophisticated multi-sensor technology with no fewer than 10 LiDAR sensors, 6 cameras, 4 radars, 2 GNSS antennae and 1 inertial measurement unit. These sensors provide at least triple redundancy across all functions, guaranteeing exceptional reliability.

PRNewswire: It turns out that Navya actually relies on Valeo for its Autonom Cab platform. This all-electric, driverless vehicle is fitted with seven Valeo SCALA laser scanners (which appear to be re-branded Ibeo products). The SCALA is said to be the first and only mass-produced LiDAR scanner on the market designed specifically for cars, and is a key component of the autonomous vehicle.

BusinessWire: The remaining three LiDARs on the Navya autonomous car are Velodyne’s low-end VLP-16s.

Go to the original article...

Omnivision Announces RGB-Ir Biometric Sensor and Notebook Sensor Family

Image Sensors World        Go to the original article...

PRNewswire: OmniVision introduces the OV9738, a 1/9" 1.4um RGB-Ir sensor with PureCel Plus pixel supporting both color and IR imaging. The OV9738 can lower overall system cost, and can bring IR-based biometric capabilities for facial recognition, as well as gesture interfaces, to a broader range of devices, including mainstream laptops and mobile devices. The OV9738 image sensor is currently available for sampling.

Market analysts predict that the global facial recognition market will reach $7.75b by 2022. According to a recent report by Variant Market Research, the gesture-recognition market is expected to reach $43.6b by 2024.

PRNewswire: OmniVision announces the OV01A product family of image sensors, designed specifically for notebook computers with ultra-thin bezels. The OV01A is built on OmniVision's 1.12-µm PureCel Plus stacked architecture, with a very small form factor for space-constrained applications such as narrow-bezel notebooks and handheld mobile devices.

"In today's high-end clamshell notebooks, camera module size is one of a few limiting factors when designing devices with ever-narrower bezels," said Arun Jayaseelan, Senior Marketing Manager at OmniVision. "The OV01A enables a camera module size of 2.5 mm in the 'y' dimension, which allows developers to create notebook bezel designs that are narrower than ever before."

This sensor's target module package size is also less than 2 mm in the "z" dimension, enabling ultra-thin bezels. Additionally, pad locations and image array offsets are optimized for smaller module sizes.

This product family is available in three versions:

  • OV01A10: RGB camera for Bayer color imaging in the visible range
  • OV01A1B: monochrome IR camera for biometric imaging; optimized for high NIR QE
  • OV01A1S: RGB-Ir, combining RGB and IR imaging capabilities in a single sensor

IR applications like biometric authentication in consumer devices demand sensors that can combine both RGB and IR imaging. This is important for achieving accurate gesture and facial recognition in applications such as Windows Hello.

The OV01A image sensors are currently sampling and are expected to start volume production in Q1 2018.

Go to the original article...

Panasonic Lumix GH5S preview

Cameralabs        Go to the original article...

The Panasonic GH5S is a high-end mirrorless camera aimed at pro videographers. It delivers the best movie and low-light quality from a Lumix G body thanks to a new 10MP sensor that supports multiple aspect ratios and Dual Native ISO. Find out more in my hands-on preview!…

The post Panasonic Lumix GH5S preview appeared first on Cameralabs.

Go to the original article...

ON Semi AR0430 Sensor Delivers Image and Depth Simultaneously from a Single Sensor

Image Sensors World        Go to the original article...

BusinessWire: ON Semiconductor announces the AR0430, a 1/3.2-inch BSI 4MP CMOS sensor delivering 120fps at full resolution. The sensor’s embedded functionality allows customers to capture a color image and a simultaneous depth map from a single device, a feature that is normally only possible when using a second sensor for independent depth mapping.

Simultaneous video and depth mapping is enabled by ON Semiconductor’s Super Depth technology. Techniques in the sensor, the CFA and the micro-lenses create a data stream containing both image and depth data. This data is combined via an algorithm to deliver a 30 fps video stream and a depth map of anything within one meter of the camera. This allows capabilities such as interpretation of hand gestures to control smart IoT devices, as well as the creation of simple 3D models for AR/VR use.

The device offers low-power performance of 125mW when operating with a 4MP data stream at 30 fps; this reduces to just 8mW in low power monitoring mode - especially valuable in battery-powered applications.

“What really sets it apart is simultaneous depth mapping capability,” said Gianluca Colli, VP and GM of Consumer Solution Division at ON Semiconductor. “This is unprecedented in a single-sensor solution and, along with the low power consumption, opens up a multitude of interactive IoT and AR/VR applications. Now our customers can use a single camera where two, with all the design, cost and implementation issues, would have been required in the past. We are honored to be recognized at CES 2018 for this unique technology.”

The AR0430 engineering samples are available in bare die format, and the sensor will be in full production later in Q1 2018.

Go to the original article...

PMD Presents World’s Smallest 3D Camera

Image Sensors World        Go to the original article...

PMD presents what it calls the world's smallest 3D camera, useful for face unlock in smartphones and other applications:



BusinessWire: The IRS238XC features Suppression of Background Illumination (SBI) circuitry in each pixel, which enables outdoor depth sensing in full sunlight. The 38,000 pixels of the IRS238XC are said to provide a higher resolution than any existing integrable 3D depth sensing chip and are tuned to work also at 940nm wavelength to improve outdoor operation.

The reference camera modules for the IRS238XC have a footprint of 12mm x 8mm, including imager, lens, IR emitter and all relevant circuitry, and are therefore said to be the smallest 3D camera modules available worldwide.
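
pmd's sensors are continuous-wave ToF devices: each pixel correlates the received modulated light with the emitter reference at several phase offsets, and depth follows from the recovered phase. A minimal sketch of the standard 4-phase computation (the modulation frequency is an assumed placeholder, not a published IRS238XC spec):

    import math

    C = 299_792_458.0   # speed of light, m/s
    F_MOD = 60e6        # assumed modulation frequency; unambiguous range c/(2f) = 2.5 m

    def tof_depth(a0, a90, a180, a270):
        """Depth from four correlation samples taken 90 degrees apart.
        The differential pairs (a0-a180, a90-a270) cancel constant background
        light - the job the SBI circuitry does at the pixel level."""
        phase = math.atan2(a90 - a270, a0 - a180) % (2 * math.pi)
        return C * phase / (4 * math.pi * F_MOD)

    print(f"{tof_depth(100, 140, 60, 20):.3f} m")   # ~0.497 m for these sample values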

Go to the original article...
