Archives for January 2019

Swiss Eye Tracking Startup Raises $1.9m

Image Sensors World

Swiss startup Eyeware develops 3D eye tracking software for depth sensing enabled consumer devices, such as Microsoft Kinect, Intel RealSense, Orbbec Astra, etc. How do they track gaze from low-resolution 3D images?

"Generally speaking: We require a low-resolution 3D pixel cloud of the face and eye regions and are agnostic to the underlying technology. It works as well for ToF sensors (i.e. non-RGB images). Our software uses that input to model the head pose and eye regions, providing the gaze vector in real-time."

Eyeware announces the closing of its seed financing round of 1.9M CHF ($1.9M USD). The seed round was led by High-Tech Gründerfonds (HTGF), in partnership with TRUMPF Venture GmbH, Swiss Startup Group, and Zurich Kantonalbank.

Eyeware is a spin-off of the Idiap Research Institute and EPFL created in September 2016. Eyeware software can use automotive-grade ToF cameras to estimate driver attention for in-cabin monitoring and infotainment systems. The capital will be used by the Eyeware team to make its 3D eye tracking development kit ready for integration into consumer applications.



SiOnyx Receives $20m Award for US Army Night Vision Project

Image Sensors World

BusinessWire: SiOnyx announces a $19.9m award for the delivery of digital night vision cameras for the IVAS (Integrated Visual Augmentation System) program.


BBC on Camera Damage by LiDAR

Image Sensors World

BBC publishes its version of the camera damage by CES LiDAR story. It turns out that the owner of the camera has quite extensive experience with LiDARs. He more or less confirms that the problem might be specific to the AEye LiDAR:

"I have personally tested many lidar systems and taken pictures up close and [they] did not harm my camera."


A review of Optical Phased Array LiDAR

Image Sensors World

AutoSens starts publishing videos from its Brussels 2018 conference. One of the first is a presentation by Michael Watts, CEO of Analog Photonics:


e2v TDI CMOS Sensor

Image Sensors World

Teledyne-e2v publishes a poster on its TDI CMOS sensor presented in October 2018:


Sony A6400 review

Cameralabs

The Sony A6400 is a mid-range mirrorless camera with a 24 Megapixel APS-C sensor, 4k video, powerful AF, a built-in viewfinder and a touchscreen that flips up by 180 degrees to face the subject. Successor to the A6300, it's Sony’s best camera yet for vlogging. Check out my full review!…

The post Sony A6400 review appeared first on Cameralabs.


Yole Forecasts Uncooled LWIR Sensors Market

Image Sensors World

Yole Developpement report on "Uncooled Infrared Imagers and Detectors 2019" forecasts:

"The uncooled IR detector and imager market looks promising, with annual growth of 7% in value over the 2018-2024 period. For microbolometers, numerous commercial applications have driven the imager market growth. These include thermography, surveillance, personal vision systems (PVS) and firefighting. Most noticeably, shipments exceeded a million units in 2017, mostly due to FLIR’s Lepton core and SEEK Thermal’s success, heading to two million units in 2021!

The uncooled IR imager business is still driven by commercial markets, which will continue to expand quickly, with a compound annual growth rate (CAGR 2018-2024) of 15.8% in shipment volumes. These markets will represent 93% of all shipments by 2024. Thermography will have a 9% shipment volume CAGR 2018-2024. This application is a sure thing for the microbolometer market, with IR imagers used here for industrial and commercial applications including home diagnosis and hot spot identification. Firefighting is an attractive opportunity with 20% shipment volume CAGR 2018-2024. There is a very large potential market, with 37 million firefighters worldwide."

Some of the key events on the market are:

  • LG Innotek announced that it is promoting its thermal imaging infrared camera module business. This sounds exciting as LG is the leader for visible camera modules and is Apple’s supplier.
  • Amazon introduced 9 cashierless stores in 2018, which include hundreds of cameras including infrared sensors. 3,000 stores are expected by 2021.
  • Following the success of the Cat S60 smartphone, Caterpillar launched the Cat S61 with increased functionality. In two years, Caterpillar shipped more than 500,000 units of the Cat S60.
  • Leonardo DRS launched its TENUM 640 core, based on a Wafer-Level-Packaged VOx microbolometer with 10μm x 10μm pixel size, making it the smallest available anywhere.


Pixart New Web Site

Image Sensors World

Pixart has updated its web site, making it much more useful. It now has much more info about its products, including those from PrimeSensor, a Pixart subsidiary:

"PrimeSensor Technology Inc., part of the PixArt Group, is a professional CMOS Image Sensor chip design house and provider. The PrimeSensor company has been delivering high quality CMOS Image Sensors over 20 years to various industries, including Surveillance Security, Consumer Electronics and Automotive Camera."

Pixart also publishes a couple of new video demos including one with its gesture recognition module:


Robosense Interview on $200, 200m-Range, 120-deg FoV LiDAR

Image Sensors World

A Robosense representative confirms in an interview that the company's new RS-LiDAR-M1 will cost $200 in mass production:




1550nm LiDAR Damaged Sony Camera at CES

Image Sensors World

Arstechnica reports that a man who photographed an AEye LiDAR at CES claims that the 1550nm LiDAR permanently damaged his expensive Sony ILC camera. Every image he takes now has two bright spots with vertical and horizontal lines emanating from them:


In an email to Arstechnica, AEye CEO Luis Dussan stressed that AEye lidars pose no danger to human eyes, but he did not deny that AEye's lidars can damage camera sensors. AEye has offered to buy the man a new camera to replace the damaged one.

"Cameras are up to 1000x more sensitive to lasers than eyeballs," Dussan wrote. "Occasionally, this can cause thermal damage to a camera's focal plane array."

The 1550nm LiDARs leverage the fact that this wavelength is absorbed in the eye before it reaches the retina, which allows them to dramatically increase the laser power. AEye is reported to use a powerful fiber laser. While remaining eye safe, that high power apparently can damage image sensors inside cameras.
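A rough back-of-the-envelope estimate shows why a camera is so much more vulnerable than an eye: the lens focuses a nearly collimated beam onto a spot only a few microns across, concentrating whatever power it collects. All numbers below are illustrative assumptions, not AEye or Sony specifications:

```python
import math

# Why a lens-focused laser can damage a sensor even when it is eye-safe.
power_entering_lens_w = 1e-3      # assumed laser power collected by the camera aperture (1 mW)
spot_diameter_m = 10e-6           # assumed focused spot size on the sensor (~a few pixels)

spot_area_m2 = math.pi * (spot_diameter_m / 2) ** 2
irradiance_w_per_cm2 = power_entering_lens_w / spot_area_m2 / 1e4

print(f"Irradiance on the focal plane: {irradiance_w_per_cm2:.0f} W/cm^2")
# ~1,270 W/cm^2 from just 1 mW: concentrating even a small collected power onto a
# tiny spot produces an enormous local irradiance, which is how thermal damage occurs.
```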

Thanks to TG for the link!


Event-Driven Camera Module for Smartphones

Image Sensors World

iniVation and aiCTX announce Speck, the world's first low-power neuromorphic SoC for mobile and IoT applications. Speck is a fully event-driven neuromorphic chip, combining a vision sensor with a convolutional neural network processor in a single package. Speck is aimed at applications requiring always-on, real-time response at extremely low power budgets.

“iniVation’s event-based Dynamic Vision Sensor technology provides a true neuromorphic vision sensing solution,” says Kynan Eng, CEO of iniVation. “Our sensor enables low-bandwidth, low-power, real-time vision solutions.”

“Event-based vision is a perfect match for our event-based convolutional network processing architecture,” adds Ning Qiao, CEO of aiCTX. “By putting our processor as close to the sensor as possible, we can provide an ultra-low-power, low-cost solution for vision in mobile and embedded applications.”

Samples of Speck devices will be available starting in Q3 2019.


Thanks to TL for the pointer!


Aeye Announces 200m Range, Hella and LG Partnerships

Image Sensors World

BusinessWire: AEye announces the AE200 Series of its iDAR-based systems for Level 3 ADAS applications. The company says it has entered into strategic partnerships with Hella and LG Electronics. The AE200 Series will be one of the iDAR-based artificial perception systems that both Hella and LG Electronics will use to deliver ADAS solutions to global OEMs at scale.

The AE200 Series includes AEye’s anti-spoofing and interference mitigation and is undergoing ASIL-B functional safety certification. Delivering ADAS L3 long-range performance of up to 200m at 10% reflectivity at 0.1° resolution, and a short-range configuration of 50m range at 10% reflectivity, the AE200 Series will be modular in design and capable of up to a 120° x 45° field of view.

Key features of the AE200 include:
  • Software-definable LiDAR
  • Designed to meet ISO 26262 functional safety requirements, eye safe in all modes
  • Four layers of interference mitigation and anti-spoofing technology
  • Software and capabilities to support specific ADAS solutions such as Cross Traffic Alerts, Emergency Braking, Front and Rear Collision Warning, and Blind Spot Monitoring, etc.
  • Modular for supporting critical ADAS packaging locations including headlamp, windscreen, side mirror, front grill, A-pillar, and C-pillar.
  • Optional High resolution RGB or IR camera fused and bore sighted with AEye’s agile LiDAR.


Phantom Intelligence LiDAR is Powered by ON Semi SiPM

Image Sensors World

Phantom Intelligence's demo at CES reveals that an ON Semi (SensL) SiPM powers its flash LiDAR:


Motorola Solutions Acquires 4-year Old Computer Vision Startup for $445m

Image Sensors World

BusinessWire, Techcrunch, Venturebeat: Motorola Solutions acquires VaaS, a maker of automated license plate recognition technology, for $445m in cash and equity. The startup was founded in 2014 and had raised $5m. It is divided into two companies, with 350 employees in total:


Samsung Rumored to Acquire Corephotonics for $150m

Image Sensors World

Globes, CTech, JNS report that Samsung is in talks to acquire Corephotonics for $150m. In 2016, Samsung and its investment arm invested $15m in Corephotonics. So far, Corephotonics has raised $50m.

A year ago, Corephotonics sued Apple for alleged infringement on its dual camera zoom patents. Half a year ago, Corephotonics filed a second lawsuit.

Corephotonics has licensed its dual camera technology to Oppo and Xiaomi.

Corephotonics CEO David Mendlovic said in response to the rumors: “Samsung is an investor in our company. They have expressed interest by investing in us. There is nothing concrete [about an acquisition] at this stage. We will be happy to inform the public when we know anything.”


AMS New Products Announcements

Image Sensors World

ams launches a spectral sensor chip that is said to bring "laboratory-grade multi-channel color analysis capability to portable and mobile devices."

“The AS7341 marks a breakthrough in the category of spectral sensors in a small package suitable for mounting in a mobile phone or consumer device. It is the smallest such device to offer 11 measurement channels, and also offers higher light sensitivity than any other multi-channel spectral sensor aimed at the consumer market,” says Kevin Jensen, Senior Marketing Manager in the Optical Sensors business line at ams. Mass production of the new sensor starts in February 2019.

Key Features:
  • 8 optical channels distributed over the visible range
  • 3 extra channels: Clear, Flicker and NIR channel
  • 6 parallel ADCs for signal processing
  • Ultra-low-profile package 3.1mm x 2mm x 1mm
  • Unit pricing is $2.00 in quantities of 10,000 units.
Benefits:
  • Spectral information enables highly accurate object color measurements
  • Detection and rejection of environmental influences such as light sources
  • Optimized channel count and signal processing for fast measurements
  • Mobile phone compatible package
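For a sense of what "multi-channel color analysis" involves in practice, the raw channel counts are typically mapped to CIE XYZ tristimulus values through a sensor-specific calibration matrix. The sketch below uses placeholder numbers and is not ams' calibration or API:

```python
import numpy as np

# Hypothetical illustration: map counts from the 8 visible channels to CIE XYZ with a
# 3x8 calibration matrix M. M below is a random placeholder; in practice it comes from
# a calibration against known reference spectra.
rng = np.random.default_rng(0)
M = rng.random((3, 8))                       # placeholder calibration matrix
channel_counts = rng.integers(100, 4000, 8)  # placeholder raw channel readings

XYZ = M @ channel_counts
x, y = XYZ[0] / XYZ.sum(), XYZ[1] / XYZ.sum()
print(f"Chromaticity estimate: x={x:.3f}, y={y:.3f}")
```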

ams also introduces what it calls the world’s smallest integrated 1D ToF distance measurement and proximity sensing module.

The sensor's use cases include presence detection, for example to trigger the operation of a facial recognition system when the user's face is in range. The new TMF8701 sensor fits in a narrow bezel, helping smartphone manufacturers to realize widescreen phone designs with a high screen-to-body ratio.

“Smartphone manufacturers have been clear about the drawbacks of today’s 1D time-of-flight sensors: they are too large, and their performance degrades noticeably in adverse lighting conditions and when the display screen is dirty,” said Dave Moon, Senior Product Marketing Manager in the Integrated Optical Sensors business line of ams. “The TMF8701 addresses all of these concerns, providing customers with a device in a smaller footprint which also offers exceptional rejection of contamination and interference.”

Key Features:
  • Direct ToF technology with high sensitivity SPAD detection
  • Fast Time-to-Digital Converter (TDC) architecture
  • Sub-nanosecond light pulse
  • 0 – 10cm proximity detection and 10 – 60cm distance sensing @ 60Hz
  • On-chip histogram processing
  • 940nm VCSEL Class 1 Eye Safety with 21° FOI
  • Sunlight on-chip rejection filter and algorithm
  • Industry’s smallest modular OLGA 2.2mm x 3.6mm x 1.0 mm package

The TMF8701 separately identifies reflections from fingerprint smudge contaminations on the display screen and optical reflections from objects beyond the cover glass, such as the user’s face, maintaining reliable performance even when the sensor’s aperture is dirty.

The Class 1 Eye Safe VCSEL emitter has excellent immunity to interference from ambient light and produces accurate distance measurement in all lighting conditions: the module achieves accuracy of ±5% when measuring distance in the range 20-60cm in normal lighting conditions. Even in bright sunlight (100klux), ±5% accuracy is maintained at a range of up to 35cm.

The TMF8701 draws only 940µA in proximity sensing mode when sampling at 10Hz. Always on, it triggers the higher-power face recognition system to start up when the ToF sensor detects the presence of an object up to 60cm from the display screen. The proximity sensing capability of the device can also be used to trigger the display and face recognition system to switch off when detecting a reflective surface at a distance of 0-10cm from the screen. The accuracy of the TMF8701’s distance measurements also supports the selfie camera’s LDAF (laser detect auto-focus) function, especially in low-light conditions.
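A toy sketch of the trigger logic described above, using the distance ranges quoted in this article as hypothetical thresholds (this is illustrative host-side logic, not ams firmware):

```python
def face_unlock_trigger(distance_cm):
    """Illustrative decision logic: wake the face recognition system when an object is
    within trigger range, and turn the display off when something is pressed right
    against the screen."""
    if distance_cm is None:          # nothing detected
        return "idle"
    if distance_cm <= 10:            # 0-10 cm: e.g. phone held against the ear
        return "display_off"
    if distance_cm <= 60:            # 10-60 cm: plausible face distance
        return "start_face_recognition"
    return "idle"

for d in (None, 5, 35, 80):
    print(d, "->", face_unlock_trigger(d))
```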

The TMF8701 sensor is in mass production now. Unit pricing is $2.60 in an order quantity of 5,000 units.


ams also releases the TCS3701, an RGB light and IR proximity sensor that can accurately measure the intensity of ambient light from behind an OLED screen. This capability supports today’s trend to maximize smartphone display area by eliminating front-facing bezels, where an ambient light/proximity sensor is typically located.

This ‘Behind OLED’ ambient light/proximity sensor enables smartphone manufacturers to achieve the highest possible ratio of display area to body size. The TCS3701 senses the sum of the ambient light passing through the display and the light emitted by the display’s pixels located just above the sensor. ams has developed unique algorithms which enable accurate detection of ambient light levels without knowledge of the display pixel brightness above the sensor. Light transmission through an OLED screen is limited by its opacity, but the TCS3701’s ultra-high sensitivity to light means that it can still produce accurate light measurements in all lighting conditions.
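ams does not disclose the algorithm, but one plausible approach, suggested by the "ambient light measurement during display blanking periods" feature listed below, is to sample only while the OLED pixels above the sensor are dark. The sketch below is an assumption, not the TCS3701's actual method:

```python
def ambient_lux(samples, blanking_mask, counts_per_lux):
    """Estimate ambient light behind an emitting OLED by averaging sensor counts taken
    only during the display's blanking intervals, then scaling by a calibration factor
    that accounts for the OLED stack's transmission.

    samples: list of raw ALS counts
    blanking_mask: list of booleans, True where the sample fell in a blanking period
    counts_per_lux: calibration constant measured through the OLED stack
    """
    dark_samples = [s for s, b in zip(samples, blanking_mask) if b]
    if not dark_samples:
        return None
    return (sum(dark_samples) / len(dark_samples)) / counts_per_lux

print(ambient_lux([120, 900, 118, 880], [True, False, True, False], counts_per_lux=2.4))
```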

The TCS3701 is available for sampling now. Unit pricing is $1.25 in an order quantity of 1,000 units.

“Smartphone OEMs today are striving to maximize their products’ screen-to-body ratio, reducing the bezel area as much as possible on the display’s face,” said David Moon, Senior Marketing Manager at ams. “The TCS3701 enables phone designers to take this trend to a new level, potentially eliminating the bezel entirely. This is only possible because the TCS3701 can operate behind an OLED display, a breakthrough enabled by the outstanding sensitivity of the device and by the implementation of sophisticated measurement algorithms to compensate for the optical distortion caused by the OLED display.”

Key Features:
  • High ALS and color sensitivity
  • 1024X dynamic range
  • 0.18µm process technology with 1.8V I²C
  • High proximity crosstalk compensation
  • Companion non-predictive asynchronous algorithm
  • Dark room to sunlight operation
  • Reduced power consumption
  • When combined with a VCSEL emitter enables operation behind OLED
  • Enables ambient light measurement during display blanking periods


Pamtek Presents Stereo Camera Calibration System

Image Sensors World

Pamtek presents its dual camera stereo calibration system, the TERA-Stereo:


LiDAR News: Blackmore, Cepton

Image Sensors World

BusinessWire: Blackmore introduces Autonomous Fleet Doppler Lidar (AFDL). The multi-beam Doppler AFDL lidar sensor delivers instantaneous velocity and range data beyond 450 meters, with power consumption and size similar to a small laptop. The system supports a 120 x 30-degree field of view, software defined operation, precise velocity measurements with accuracy down to 0.1 m/s on objects moving up to 150 m/s (335 mph), and measurement rates in excess of 2.4 million points/second. Blackmore’s AFDL is available for pre-order and will ship to customers in Q2 2019 for less than $20,000. Samples are shipping to strategic partners now.

With more than $1b invested in the lidar space since 2015, it’s increasingly difficult for end users to identify a solution that delivers as promised. This is because the majority of lidar vendors focus on using pulse-based lidar technologies. As a result, OEMs and suppliers are struggling with the inadequate data created by these power-hungry AM lidar sensors. And as lidar use becomes more prevalent, interference-prone AM systems are less effective and unsafe.

To address this, in 2015 Blackmore introduced the world’s first frequency-modulation (FM) lidar systems for autonomous vehicles, which measure both range and velocity simultaneously. “The reality is that physics ultimately wins, no matter how much funding chases inferior alternatives,” said Randy Reibel, CEO and co-founder of Blackmore. “But more importantly, FM-based Doppler lidar sensors are safer for self-driving applications.”
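The key quantity a coherent FM lidar measures directly is the Doppler shift of the returned light, f_d = 2v/λ. A quick calculation (assuming a 1550nm telecom-band source, which is typical for FMCW lidar but not confirmed in the announcement) shows the frequency shifts involved:

```python
# Doppler shift measured by an FM (coherent) lidar: f_d = 2 * v / wavelength.
wavelength_m = 1550e-9                # assumed telecom-band wavelength, for illustration
for v in (0.1, 30.0, 150.0):          # m/s: quoted accuracy floor, highway speed, max spec
    f_d = 2 * v / wavelength_m
    print(f"v = {v:6.1f} m/s  ->  Doppler shift = {f_d / 1e3:10.1f} kHz")
```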


BusinessWire: Cepton announces two new LiDAR products, Vista-M and Vista-X built with the company's patented Micro-Motion Technology (MMT).

The Vista-M LiDAR packs a 120-deg FOV with 150m range into a sensor the size of a typical box of crayons, for integration with a vehicle’s headlights, tail lights and side view mirrors. Currently, Cepton is providing this miniaturized solution to its automotive partners for different integration approaches. The Vista-X LiDAR extends the range to 200m at a 10% reflectivity target over a 120-deg FOV, with a uniform 0.2-deg spatial resolution across the entire FOV.

“Cepton is ushering in the next generation of autonomous driving with long range, high resolution 3D perception at an affordable price,” said Jun Pei, CEO and co-founder of Cepton Technologies. “Our Vista-X and Vista-M solutions enable seamless vehicle integration and provide designers with the freedom to create elegant vehicle designs. We are ready to mass produce the Vista Series LiDAR with our tier one suppliers and manufacturing partners for the immediate deployment of autonomous vehicles.”


LFoundry Presentation

Image Sensors World

The LFoundry presentation "Innovation in a Semiconductor Industry" by Sergio Galbiati discusses the company's technology and future plans:


Intel, Alibaba and Fujitsu Measure Athletes Achievements with Cameras and AI

Image Sensors World

Intel and Alibaba are teaming up to develop AI-powered athlete tracking technology that is intended to be deployed at the Tokyo 2020 Olympic Games and beyond. The technology uses existing and upcoming Intel hardware and Alibaba cloud computing technology to power a cutting-edge deep learning application that extracts 3D forms of athletes in training or competition. The performance is captured with regular video cameras, the AI algorithm is applied with a heavy dose of computing power, and a digital model of the performance is created that can be analyzed in different ways.


IEEE Spectrum: Fujitsu is to use LiDAR and AI to support judges at the gymnastics competitions of the Tokyo 2020 Olympics. Gymnastics’ scoring rules require judges to closely evaluate the angles of gymnasts’ joints, such as their knees and elbows, as athletes move through their routines. The Fujitsu judging support system aims to take scoring variability out of the equation by using LiDAR to track the angles of athletes’ joints during their competitions.
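The core geometric step is simple: given three tracked 3D points, the joint angle is the angle between the two limb vectors meeting at the joint. A minimal sketch (numpy assumed; not Fujitsu's actual pipeline):

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by 3D points a-b-c, e.g. shoulder-elbow-wrist
    extracted from a lidar point cloud."""
    v1, v2 = np.asarray(a) - np.asarray(b), np.asarray(c) - np.asarray(b)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Toy example: a right angle at the elbow.
print(joint_angle([0, 1, 0], [0, 0, 0], [1, 0, 0]))   # -> 90.0
```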


Ambarella, Toshiba Announce Vision Processors

Image Sensors World

BusinessWire: Ambarella introduces the CV25 camera SoC in the CVflow family, combining advanced image processing, high-resolution video encoding and CVflow computer vision processing in a single, low power design. The CV25’s CVflow architecture provides the Deep Neural Network (DNN) processing required for intelligent home monitoring, professional surveillance, and aftermarket automotive solutions, including smart dash-cameras, Driver Monitoring Systems (DMS), and electronic mirrors.

“CV25 brings computer vision at the edge into the mainstream,” said Fermi Wang, President and CEO of Ambarella. “With this new SoC, we are sharply focused on reducing our customer’s overall system cost for delivery of significant computer vision performance, high-quality image processing and advanced cyber-security features at very low power. CV25-based cameras are capable of performing Artificial Intelligence (AI) at the edge, allowing features like facial recognition to happen in real-time on the device, rather than in the cloud.”

BusinessWire: Toshiba announces the development of DNN hardware IP block that will help to realize ADAS and autonomous driving functions. The company will integrate the DNN hardware IP with conventional image processing technology and start sample shipments of Visconti5, the next generation of Toshiba’s image-recognition processor, in September 2019. It enables Visconti5 to recognize road traffic signs and road situations at high speed with low power consumption. Toshiba will promote Visconti5 equipped with DNN hardware IP as a key component of next-generation advanced driver assistance systems.


DBI Enables CIS Evolution

Image Sensors World

Paul Enquist from Xperi (Ziptronix, Invensas) publishes an article "Direct Bond Technology Enables CMOS Image Sensor Evolution." A few quotes:

"A hybrid bond improvement in direct bond technology that allowed the elimination of TSVs and associated cost and die area by enabling scalable interconnection between two stacked wafers as part of the direct bond process was initially mass produced by Sony to deliver a third generation 1.4-µm pixel, 12-Mpixel BSI CIS in the 2016 iPhone 6S. A 14-µm hybrid bond interconnect pitch was used outside the pixel array for row/column interconnect and an electrically isolated 6-µm hybrid bond interconnect pitch was used within the pixel array.

Earlier this year, a fourth generation BSI using two iterations of the second-generation direct bond plus TSV interconnect was subsequently used to mass produce a three-layer stack of photodiode, 1-Gb DRAM memory, and logic. The DRAM buffering enabled unprecedented performance of 960 fps at 1080p with a 1.4-µm pixel, 12-Mpixel BSI CIS used in the Samsung Galaxy S9. Also this year, Sony announced a fifth generation BSI CIS using a hybrid bond with a per pixel interconnect underneath the pixel array not possible with TSV technology. This enabled a 6.9-µm pixel, 1.5-Mpixels at 660 fps with a dedicated subthreshold ADC per pixel and a global shutter with a -75 dB parasitic light sensitivity.

Direct and hybrid bond technology has thus played an enabling role in the realization of BSI and stacked BSI with multiple generational variations of BSI CIS as summarized in Figure 1. The development of these technologies has been led by Xperi for over 15 years resulting in a substantial intellectual property portfolio that has been licensed to CIS industry leaders. The direct bond technology and hybrid bond technologies have been trademarked as ZiBond and DBI, respectively."


Nikon Z 14-30mm f4 S review

Cameralabs

The Nikon Z 14-30mm f4 S is a compact ultra-wide angle zoom designed for Nikon’s full-frame Z-series mirrorless cameras. This fourth native lens in the Z system is ideal for landscape and architecture, and is the first full-frame ultra-wide zoom to take relatively compact and affordable 82mm filters. Find out if it's right for you in my complete review!…

The post Nikon Z 14-30mm f4 S review appeared first on Cameralabs.


Valeo Demos its Automotive Vision Technology

Image Sensors World

Valeo publishes a couple of videos showing the capabilities of its camera-based systems:




ToF Imaging with 10ps Resolution

Image Sensors World

MDPI publishes University of Glasgow paper "Time-of-Flight Imaging at 10 ps Resolution with an ICCD Camera" by Lucrezia Cester, Ashley Lyons, Maria Chiara Braidotti, and Daniele Faccio.

"ICCD cameras can record low light events with extreme temporal resolution. Thus, they are used in a variety of bio-medical applications for single photon time of flight measurements and LIDAR measurements. In this paper, we present a method which allows improvement of the temporal resolution of ICCD cameras down to 10 ps (from the native 200 ps of our model), thus placing ICCD cameras at a better temporal resolution than SPAD cameras and in direct competition with streak cameras. The higher temporal resolution can serve for better tracking and visualization of the information carried in time-of-flight measurements."


Omnivision Announces Video-Centric Image Sensor for Smartphones

Image Sensors World

PRNewswire: OmniVision announces the OV02K, a video-centric, 2.9um pixel 1080p image sensor for smartphones. The OV02K allows the secondary camera in multi-camera configurations to capture high-quality videos, even in very low ambient light conditions. This large 2.9um pixel size results in an SNR10 of less than 10 lux. The OV02K, which comes in a 1/2.8" optical format, features 1080p resolution at up to 120 fps. The sensor also supports up to three exposures of staggered timing to enable HDR, and supports frame-to-frame dual conversion gain (DCG).
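As a rough illustration of how staggered multi-exposure HDR is typically merged (a simplified sketch, not OmniVision's pipeline), each exposure is normalized to a common scale and, per pixel, the longest unsaturated exposure is kept:

```python
import numpy as np

def merge_staggered_hdr(frames, exposure_ratios, full_scale=1023):
    """Simplified staggered-exposure HDR merge (illustrative only).

    frames: list of 10-bit arrays ordered shortest -> longest exposure.
    exposure_ratios: exposure times relative to the shortest, e.g. [1, 4, 16].
    Per pixel, the longest exposure that is not saturated wins, normalized to the
    shortest exposure's scale, so shadows come from long exposures and highlights
    from short ones.
    """
    out = frames[0].astype(np.float32) / exposure_ratios[0]
    for frame, ratio in zip(frames[1:], exposure_ratios[1:]):
        unsaturated = frame < full_scale * 0.98            # this exposure still has headroom
        out = np.where(unsaturated, frame.astype(np.float32) / ratio, out)
    return out
```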

"There is an increasing demand from smartphone users to capture high-quality videos, especially with social media sharing becoming more prevalent," said Arun Jayaseelan, senior marketing manager at OmniVision. "The OV02K, with its relatively large pixel size, is perfect for super-high-quality video captures, even in less-than-ideal lighting conditions."

According to TSR, the number of smartphones with two or three rear cameras will grow from 8% of the market in 2017, to over 20% by 2022, when the estimated total market size will be 5.5 billion sensors. Furthermore, the attachment rate for CMOS image sensors will be at 99.9%. Separately, Yole Developpement predicts that, on average, there will be three cameras per smartphone by 2022.


Automotive Imaging News

Image Sensors World

Toyota Research Institute (TRI) unveils its 4th generation P4 automated driving test vehicle featuring new image sensors:

"P4 adds two additional cameras to improve situational awareness on the sides and two new imaging sensors―one facing forward and one pointed to the rear―specifically designed for autonomous vehicles. The imaging sensors feature new chip technology with high dynamic range. The LIDAR sensing system with eight scanning heads carries over from the previous test model, Platform 3.0, and morphs into the new vehicle design."


PRNewswire: Ouster announced its 128-beam flash lidar sensor, the OS-1-128. The OS-1-128 is said to be the highest resolution lidar on the market and comes with no change in size, mass, power consumption, or ruggedness compared to the OS-1-64. The OS-1-128 has a 45° vertical FOV, the widest available of any commercially sold high-performance lidar sensor. It has 0.35° vertical angular resolution.
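A quick consistency check on the quoted numbers (assuming uniformly spaced beams spanning the full vertical FOV):

```python
# 128 beams spread over a 45-degree vertical field of view:
print(45 / (128 - 1))   # ~0.354 degrees between adjacent beams, matching the stated 0.35°
```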

The OS-1-128 is priced at $18,000, six times cheaper than the competing 128-channel lidar sensor. Ouster expects the OS-1-128 to be available for volume purchases in the summer of 2019.

"Ouster continues to push the frontier of what is possible in lidar; our multi-beam flash lidar architecture gives us the ability to constantly improve our performance while keeping prices low, reliability high, minimizing the sensor's size, weight and power consumption, and producing at volume. In a market where all of these metrics matter and under-delivering on even one of them is a dealbreaker, Ouster continues to provide performance without compromise," said Ouster CEO Angus Pacala.

PRNewswire: Sense Photonics, a startup building high-performance flash LiDAR, is partnering with Infineon. Sense Photonics' core technology – protected by over 200 patents – is said to enable a high-performance, solid state system, with no moving parts with performance, reliability and cost required for automotive and industrial automation applications.

"We are excited that our technology is capable of delivering high resolution, long-range flash LiDAR in a compact and energy efficient package," said Scott Burroughs, co-founder and CEO of Sense Photonics. "For customers pushing the limits in AI and machine learning, the 3D sensor is their view of the world, so point cloud quality and density is essential for exceptional product performance."

Thomas-PR: RoboSense, a CES 2019 Innovation Award Honoree, will publicly demonstrate at CES its new RS-LiDAR-M1 with patented MEMS technology that supports Level 5 driverless automated driving. The company claims a breakthrough on the measurement range limit of 905nm LiDAR, with a detection distance of 200 meters.

A major step forward from the previous version, the RoboSense RS-LiDAR-M1Pre, the new RS-LiDAR-M1 MEMS optomechanical LiDAR increases the horizontal FOV by nearly 100%, reaching 120°, so that only a few RS-LiDAR-M1s are needed to cover a 360° field of view. With only five RS-LiDAR-M1s, there is no blind zone around the car, and dual LiDAR sensing redundancy is provided in front of the car for L5 fully driverless driving. At the target production cost of $200 each, five RS-LiDAR-M1s cost only 1/100th as much as the highest-end mechanical LiDAR on the market, which is more in line with the cost requirements for mass production of autonomous vehicles.
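A quick check on the quoted figures (the implied mechanical-LiDAR price is an inference for illustration; RoboSense only states the 1/100th ratio):

```python
# Coverage and cost arithmetic behind the five-sensor claim.
fov_per_sensor_deg, sensors = 120, 5
print(f"Total scanned FOV: {fov_per_sensor_deg * sensors} deg for 360 deg of coverage "
      f"-> {fov_per_sensor_deg * sensors - 360} deg available for overlap/redundancy")
cost_five = 5 * 200
print(f"Five units at $200: ${cost_five}; the 1/100th ratio implies a ~${cost_five * 100:,} "
      "high-end mechanical LiDAR for comparison")
```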

The battle between 1550nm and 905nm LiDAR is about cost and performance. The low-cost 905nm RS-LiDAR-M1 is said to break through the usual measurement range limit of 905nm LiDARs, with a detection distance of 200 meters.

“The RoboSense RS-LiDAR-M1 LiDAR system is a giant leap forward for driverless technology,” said Mark Qiu, Co-founder, RoboSense. “We are committed to developing high-performance automotive-grade LiDAR at a low cost to advance the LiDAR market, so that LiDAR can be used in fully unmanned vehicles, as well as assisted autonomous driving with superior environmental information detection that ensures driving safety.”


BusinessWire: Velodyne unveils the VelaDome, a compact embeddable lidar that provides an ultra-wide 180° x 180° image for near-object avoidance. Powered by Velodyne’s Micro Lidar Array (MLA) technology, the VelaDome is optimized for manufacturability and designed to meet automotive-grade standards.


BusinessWire, VentureBeat: Australian lidar startup Baraja raises $32m in a Series A round led by Sequoia China and Main Sequence Ventures’ CSIRO Innovation Fund, with participation from Blackbird Ventures. The company's Spectrum-Scan lidar uses refraction in prisms to scan the scene with no mechanical moving parts. The startup only emerged from stealth in July 2018, and to date had raised around $1.5m in seed funding.


Xenomatix partners with AGC to put its LiDAR under the windshield:


AEI Active Alignment Assembly System

Image Sensors World

AEI Mycronic publishes a video on its automotive camera module assembly system:


FBK Presents Motion Detecting Vision Sensor

Image Sensors World

IEEE Sensors publishes a 2017 FBK presentation of its motion-detecting vision sensor with static background rejection:


Sofradir Uncooled InGaAs Imagers

Image Sensors World

Sofradir presents its uncooled "Tecless" InGaAs imagers for industrial and machine vision applications.


Thanks to TL for the link!

