Archives for April 2020

AutoSens Launches FREE Online Conference

Image Sensors World        Go to the original article...

AutoSens launches AutoSensONLINE, a new event promising high-quality technical content in the field of vehicle perception – all delivered online. While the world battles with the challenges of COVID-19, the AutoSens team have been busy forming plans to support their engineering community through these times of travel restrictions and social distancing.

This new digital portal offers technical presentations, market analysis and, most importantly, community interaction. It’s a fresh new virtual event-space offering the opportunity to learn and network from the safety and comfort of a home or office.

The first AutoSensONLINE runs 12-14 May, with subsequent editions in June and July 2020. Themes for each edition include:
  • 12-14 May – The Changing Dynamics of ADAS and Autonomous Vehicle Development
  • 11-12 June – Advances in Sensors for ADAS and Autonomous Vehicles
  • 14-15 July – Managing Data in ADAS and Autonomous Vehicles

Registration is free: attendees register just once to receive access to all seven days of AutoSensONLINE, and all content will remain available online for participants to watch after each event.

Sense Photonics Changes CEO

Image Sensors World        Go to the original article...

PRNewswire: LiDAR startup Sense Photonics hires Shauna McIntyre as CEO. McIntyre previously led Google's automotive services and Google Maps' automotive programs. Sense Photonics sought a dynamic business leader who could lead the company through an accelerated growth stage.

The company's co-founder Scott Burroughs, who has served as CEO since the company's inception in 2016, will now assume the roles of president and chief innovation officer.

Hamamatsu Presents TDI CCD-CMOS Combo

Image Sensors World        Go to the original article...

Hamamatsu unveils S14810 and S14813 TDI CCDs that use a CMOS readout and ADC for digital output:

Alibaba Unveils Automotive ISP with Improved Night Vision

Image Sensors World        Go to the original article...

cnTechPost reports that Alibaba DAMO Academy has developed an ISP processor for in-vehicle cameras that improves vision at night, thereby improving the safety of autonomous driving. The ISP is said to use a 3D noise reduction and image enhancement algorithm developed by the Alibaba DAMO Academy.
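The article does not detail DAMO Academy's algorithm. As a rough illustration only, "3D" noise reduction averages across frames (time) as well as across pixels (space); a minimal temporal sketch in Python, assuming a static scene and no motion compensation (a real automotive ISP would compensate motion to avoid ghosting):

```python
import numpy as np

def temporal_denoise(frames, alpha=0.2):
    """Recursive temporal noise reduction: blend each new frame into a
    running average. Lower alpha = stronger smoothing, slower response."""
    acc = frames[0].astype(np.float64)
    for f in frames[1:]:
        acc = (1 - alpha) * acc + alpha * f.astype(np.float64)
    return acc

# Noisy views of a static scene: averaging lowers the noise floor.
rng = np.random.default_rng(0)
scene = np.full((4, 4), 100.0)
frames = [scene + rng.normal(0, 10, scene.shape) for _ in range(50)]
out = temporal_denoise(frames)
print(f"per-frame |error| {np.abs(frames[-1] - scene).mean():.1f}"
      f" -> filtered {np.abs(out - scene).mean():.1f}")
```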

Omnivision on Eye-Tracking Automotive Cameras

Image Sensors World        Go to the original article...

EETimes publishes an article "Advancing Autonomous Vehicles Is All in the Eyes" by Mathew Arcoleo, Omnivision's Staff Product Marketing Manager. The company believes that eye and gaze tracking is becoming an essential part of a modern vehicle's driver monitoring system (DMS):

The first question that needs to be answered in creating an eye-tracking system is, what ASIL rating is needed to meet current and future requirements? For eye tracking, a sensor’s application may include more safety-critical functions in the future, so it’s likely to require a higher ASIL certification. We at Omnivision think ASIL B/C is the ideal rating for DMSes, because these systems are used both for autonomous driving and as a safety feature.

Specifically, the following DMS safety goals must be met:

  • The device shall not mirror the whole image or parts of the image in the horizontal or vertical direction.
  • The device shall not transfer images with the incorrect size in terms of rows and columns.
  • The device shall not send any data that is unprotected by a cyclical redundancy check (CRC) that includes the appropriate Hamming distance.
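The CRC requirement in the last goal can be illustrated with a toy example. The polynomial here (0x1D, as used in SAE J1850 automotive applications) and the per-row framing are assumptions for illustration, not Omnivision's actual scheme:

```python
def crc8(data, poly=0x1D, init=0xFF):
    """Bitwise CRC-8; the polynomial choice determines the Hamming
    distance (guaranteed detectable bit-error count) for a given frame size."""
    crc = init
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

row = bytes(range(32))                  # one (hypothetical) image row
protected = row + bytes([crc8(row)])    # append the check byte

# Receiver side: recompute the CRC and compare with the transmitted one.
payload, check = protected[:-1], protected[-1]
print(crc8(payload) == check)           # intact data passes

corrupted = bytes([payload[0] ^ 0x01]) + payload[1:]
print(crc8(corrupted) == check)         # a single-bit flip is detected
```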

Sony Engineers Talk about BSI ToF Sensor Development

Image Sensors World        Go to the original article...

The Sony Japan site publishes an article about its BSI ToF sensor development. A few quotes, via Google's automatic translation:

"In order to develop a single sensor, more than 100 engineers work across various areas. In the case of a ToF sensor in particular, the technology area is wider than for the sensor alone, covering the laser that serves as the light source and the signal processing that converts the detected light into distance. If we include everyone involved, 200 to 300 people may be working on the development.

Currently, we are creating an evaluation board for the ToF sensor and evaluating the total system as a module that includes not only the sensor itself but also a laser and lens.

The development of the ToF sensor was carried out in collaboration with a site in Belgium, and at first it was a bit of a puzzle. Besides the language barrier, there is a culture that values vacation: even in the middle of development, the person in charge may take a long holiday, so our usual common sense about scheduling did not apply. On the other hand, Belgium is surrounded by other countries and people there are used to working in an environment where various cultures coexist, so many are strongly curious about different cultures, and some understand Japanese thinking and culture.
"

Nature Paper on Camera-Equipped Toilet

Image Sensors World        Go to the original article...

Vice.com: Nature publishes a Stanford University paper on the use of cameras in a toilet, "A mountable toilet system for personalized health monitoring via the analysis of excreta" by Seung-min Park, Daeyoun D. Won, Brian J. Lee, Diego Escobedo, Andre Esteva, Amin Aalipour, T. Jessie Ge, Jung Ha Kim, Susie Suh, Elliot H. Choi, Alexander X. Lozano, Chengyang Yao, Sunil Bodapati, Friso B. Achterberg, Jeesu Kim, Hwan Park, Youngjae Choi, Woo Jin Kim, Jung Ho Yu, Alexander M. Bhatt, Jong Kyun Lee, Ryan Spitler, Shan X. Wang & Sanjiv S. Gambhir. Seoul Song Do Hospital, Salesforce Research, Case Western Reserve University, University of Toronto, Leiden University, Pohang University of Science and Technology, and Catholic University of Korea contributed to this work too.

"The ‘smart’ toilet, which is self-contained and operates autonomously by leveraging pressure and motion sensors, analyses the user’s urine using a standard-of-care colorimetric assay that traces red–green–blue values from images of urinalysis strips, calculates the flow rate and volume of urine using computer vision as a uroflowmeter, and classifies stool according to the Bristol stool form scale using deep learning, with performance that is comparable to the performance of trained medical personnel. Each user of the toilet is identified through their fingerprint and the distinctive features of their anoderm, and the data are securely stored and analysed in an encrypted cloud server. The toilet may find uses in the screening, diagnosis and longitudinal monitoring of specific patient populations."

To estimate the speed, size, and other spatial parameters, the prototype system reconstructs a 3D scene using a stereo pair of GoPro Hero 7 cameras in 1.2MP 240fps high-speed mode. Another two cameras are used for color and shape analysis and for person identification.
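Stereo reconstruction of this kind rests on the standard depth-from-disparity relation Z = f·B/d; the focal length, baseline, and disparity below are made-up illustrative values, not taken from the paper:

```python
def depth_m(focal_px, baseline_m, disparity_px):
    """Depth from a rectified stereo pair: Z = f * B / d, with the focal
    length f in pixels, baseline B in meters, disparity d in pixels."""
    return focal_px * baseline_m / disparity_px

# Hypothetical numbers: 800 px focal length, 6 cm baseline, 96 px disparity.
print(f"{depth_m(800, 0.06, 96):.2f} m")  # 0.50 m
```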

Starsky Post-Mortem

Image Sensors World        Go to the original article...

Stefan Seltz-Axmacher, CEO and founder of Starsky Robotics, publishes an analysis of the failure of his autonomous trucking company. Forbes reporter Brad Templeton publishes his view on the company's demise too. The two articles point to problems in AI technology for autonomous driving:


"In 2016, we became the first street-legal vehicle to be paid to do real work without a person behind the wheel. In 2018, we became the first street-legal truck to do a fully unmanned run, albeit on a closed road. In 2019, our truck became the first fully-unmanned truck to drive on a live highway. And in 2020, we’re shutting down.

There are too many problems with the AV industry to detail here: the professorial pace at which most teams work, the lack of tangible deployment milestones, the open secret that there isn’t a robotaxi business model, etc. The biggest, however, is that supervised machine learning doesn’t live up to the hype.

Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve.
"


"The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team. Or why at Starsky we were able to become one of three companies to do on-public road unmanned tests (with only 30 engineers)."
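The S-curve argument can be made concrete with a logistic toy model (the curve parameters are purely illustrative): performance saturates, so equal increments of engineering effort buy ever smaller gains.

```python
import math

def s_curve(effort, mid=5.0, rate=1.0):
    """Logistic model of system quality vs engineering effort."""
    return 1.0 / (1.0 + math.exp(-rate * (effort - mid)))

# Past the midpoint, doubling effort buys very little extra performance.
for effort in (5, 10, 20, 40):
    print(f"effort {effort:2d} -> performance {s_curve(effort):.4f}")
```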

Tamron 70-180mm f2.8 Di III VXD review

Cameralabs        Go to the original article...

The Tamron 70-180mm f2.8 Di III VXD is a telephoto zoom for Sony’s Alpha mirrorless cameras with a bright f2.8 aperture and an affordable price. Find out how it compares to the Sony FE 70-200mm f2.8 GM in our review!…

The post Tamron 70-180mm f2.8 Di III VXD review appeared first on Cameralabs.

Hybrid Bonding Thesis

Image Sensors World        Go to the original article...

University of Grenoble Alpes publishes a PhD Thesis "Numerical and Experimental Investigations on Mechanical Stress in 3D Stacked Integrated Circuits for Imaging Applications" by Clément Sart.

"In recent years, a number of physical and economical barriers have emerged in the race for miniaturization and speed of integrated circuits. To circumvent these issues, new processes and architectures are continuously developed. In particular, a progressive shift towards 3D integration strategies is currently observed in the semiconductor industry as an alternative path to further transistor downscaling. This innovative approach consists in combining chips of different technologies or different functionalities into a single module. A possible strategy to realize such heterogeneous systems is to stack chips on top of each other instead of tiling them on the plane, enabling considerable benefits in terms of compactness and versatility, but also increased performance.

This is especially true for image sensor chips, for which vertical stacking allows the incorporation of additional functionalities such as advanced image signal processing. Among various methods to achieve direct vertical interconnections between stacked chips, a promising method is Cu/SiO2 hybrid bonding, enabling simultaneous mechanical and electrical connection with a submicron interconnection pitch mostly limited by photolithography resolution and alignment accuracy. The mechanical integrity of the different electrical connection elements for such a 3D integrated imager-on-logic device is of critical importance.

The aim of this thesis is to investigate the mechanical robustness of this relatively new architecture in semiconductor manufacturing during its fabrication, aiming to address a number of possible issues from a thermomechanical perspective. In this work, thermomechanical stresses building up in the image sensor during chip processing and assembly onto a package are investigated, and the interactions between the different system components analyzed. The mechanical integrity of several key structures is studied, namely (i) interconnection pads at the hybrid bonding interface between the imager/logic chips, (ii) bondpad structures below the wires connecting the imager to the package substrate, and (iii) semiconductor devices in the image sensor, through in-situ evaluation of process-induced mechanical stresses using doped Si piezoresistive stress sensors. To do so, for each item a combined numerical and experimental approach was adopted, using morphological, mechanical and electrical characterizations, then correlated or extended by thermomechanical finite element analyses, allowing to secure product integration from a thermomechanical perspective.
"

Quanergy Changes CEO, Raises More Money

Image Sensors World        Go to the original article...

BusinessWire: Quanergy appoints Kevin J. Kennedy as the company’s new CEO and secures a new funding round. Kennedy continues to serve as a senior managing director of Blue Ridge Partners, one of Quanergy's investors. "While many LiDAR companies are focused on building LiDAR solely for transportation purposes, since its inception, Quanergy has emphasized the development of its technology for multiple industries,” says Kennedy. “With this new capital, we are deepening our investment in our team and our technology and are positioned to prove the value of LiDAR for broader market applications."

BusinessWire: Louay Eldada has stepped down from his positions as Quanergy CEO and board member, effective January 13, 2020. His new role in the company is defined as "Senior Evangelist."

Analog-to-Information CMOS Sensor for Image Recognition

Image Sensors World        Go to the original article...

CEA-Leti publishes a PhD Thesis "Exploring analog-to-information CMOS image sensor design taking advantage on recent advances of compressive sensing for low-power image classification" by Wissam Benjilali.

"Recent advances in the field of CMOS Image Sensors (CIS) tend to revisit the canonical image acquisition and processing pipeline to enable on-chip advanced image processing applications such as decision making. Despite the tremendous achievements made possible thanks to technology node scaling and 3D integration, designing a CIS architecture with on-chip decision making capabilities is still a challenging task due to the amount of data to sense and process, as well as the hardware cost to implement state-of-the-art decision making algorithms.

In this context, Compressive Sensing (CS) has emerged as an alternative signal acquisition approach to sense the data in a compressed representation. When based on randomly generated sensing models, CS enables drastic hardware saving through the reduction of Analog to Digital conversions and data off-chip throughput while providing a meaningful information for either signal recovery or signal processing. Traditionally, CS has been exploited in CIS applications for compression tasks coupled with a remote signal recovery algorithm involving high algorithmic complexity. To alleviate this complexity, signal processing on CS provides solid theoretical guarantees to perform signal processing directly on CS measurements without significant performance loss opening as a consequence new ways towards the design of low-power smart sensor nodes. Built on algorithm and hardware research axes, this thesis illustrates how Compressive Sensing can be exploited to design low-power sensor nodes with efficient on-chip decision making algorithms.

After an overview of the fields of Compressive Sensing and Machine Learning with a particular focus on hardware implementations, this thesis presents four main contributions to study efficient sensing schemes and decision making approaches for the design of compact CMOS Image Sensor architectures. First, an analytical study explores the interest of solving basic inference tasks on CS measurements for highly constrained hardware. It aims at finding the most beneficial setting to perform decision making on Compressive Sensing based measurements.

Next, a novel sensing scheme for CIS applications is presented. Designed to meet both theoretical and hardware requirements, the proposed sensing model is shown to be suitable for CIS applications addressing both image rendering and on-chip decision making tasks. On the other hand, to deal with on-chip computational complexity involved by standard decision making algorithms, new methods to construct a hierarchical inference tree are explored to reduce MAC operations related to an on-chip multi-class inference task. This leads to a joint acquisition-processing optimization when combining hierarchical inference with Compressive Sensing.

Finally, all the aforementioned contributions are brought together to propose a compact CMOS Image Sensor architecture enabling on-chip object recognition facilitated by the proposed CS sensing scheme, reducing as a consequence on-chip memory needs. The only additional hardware compared to a standard CIS architecture using a first order incremental Sigma-Delta Analog to Digital Converter (ADC) is a pseudo-random data mixing circuit, a ±1 in-Sigma-Delta modulator and a small Digital Signal Processor (DSP). Several hardware optimizations are presented to fit requirements of future ultra-low power (≈µW) CIS design.
"
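The thesis's core idea, inference directly on compressive measurements y = Φx, can be sketched in a few lines. The synthetic data, projection size, and nearest-centroid classifier below are illustrative stand-ins, not the thesis's actual methods:

```python
import numpy as np

rng = np.random.default_rng(1)

# Two synthetic "image classes" (flattened 16x16 patches).
n = 256
class_a = rng.normal(0.0, 1.0, n)
class_b = rng.normal(3.0, 1.0, n)

# Compressive sensing: a random projection Phi maps x to m << n
# measurements while roughly preserving distances, so a simple
# classifier can operate on y = Phi @ x without image recovery.
m = 32
phi = rng.normal(0, 1 / np.sqrt(m), (m, n))

def sense(x):
    return phi @ x

# Nearest-centroid decision made entirely in the measurement domain.
centroids = {"a": sense(class_a), "b": sense(class_b)}
sample = sense(class_a + rng.normal(0, 0.1, n))  # noisy view of class a
pred = min(centroids, key=lambda k: np.linalg.norm(sample - centroids[k]))
print(pred)
```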

Velodyne Moves Production Overseas, Lays Off 140 Employees

Image Sensors World        Go to the original article...

Bloomberg reports that Velodyne Lidar was sued for laying off 140 workers with one day’s notice. Velodyne was expected to provide 60 days’ notice, but instead told employees in a written notice that they were being let go because of the pandemic. The ex-employees’ complaint claims that the company “had already begun transferring production jobs overseas beginning in the summer of 2019 and had planned to continue doing so prior to the outbreak of Covid-19.”

It appears to be another indication that Velodyne's LiDAR mega-factory project in San Jose is not going well. Just a year ago, David Hall, Velodyne Founder and then-CEO, said "San Jose has a large and available skilled labor force that, while not price competitive with anywhere in Asia, does a higher quality job than we would get by assembling the units elsewhere."

Silicon Valley Business Journal: Velodyne is valued at about $1.8B after raising about $225M from investors including Nikon, Ford, and Baidu.


Update: The Register publishes the lawsuit document.

Single-Photon CMOS Pixel Using Multiple Non-Destructive Signal Sampling

Image Sensors World        Go to the original article...

MDPI paper "Simulations and Design of a Single-Photon CMOS Imaging Pixel Using Multiple Non-Destructive Signal Sampling" by Konstantin D. Stefanov, Martin J. Prest, Mark Downing, Elizabeth George, Naidu Bezawada, and Andrew D. Holland from The Open University, UK, and the European Southern Observatory, Germany, describes a 10um pixel with 0.15e- noise in a 180nm process.

"A single-photon CMOS image sensor (CIS) design based on pinned photodiode (PPD) with multiple charge transfers and sampling is described. In the proposed pixel architecture, the photogenerated signal is sampled non-destructively multiple times and the results are averaged. Each signal measurement is statistically independent and by averaging, the electronic readout noise is reduced to a level where single photons can be distinguished reliably. A pixel design using this method was simulated in TCAD and several layouts were generated for a 180-nm CMOS image sensor process. Using simulations, the noise performance of the pixel was determined as a function of the number of samples, sense node capacitance, sampling rate and transistor characteristics. The strengths and limitations of the proposed design are discussed in detail, including the trade-off between noise performance and readout rate and the impact of charge transfer inefficiency (CTI). The projected performance of our first prototype device indicates that single-photon imaging is within reach and could enable ground-breaking performances in many scientific and industrial imaging applications."
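The averaging argument is easy to verify numerically. In the sketch below, the per-sample read noise of 1.5 e- is an illustrative figure; averaging N = 100 independent non-destructive samples reduces it by √N, down to the 0.15 e- level the paper reports:

```python
import numpy as np

rng = np.random.default_rng(2)

signal_e = 1.0          # one photoelectron on the sense node
read_noise_e = 1.5      # per-sample read noise (illustrative value)
n_samples = 100         # non-destructive reads of the same charge packet

# Because readout does not destroy the charge, the same packet can be
# sampled N times; independent read noise averages down as 1/sqrt(N).
samples = signal_e + rng.normal(0, read_noise_e, n_samples)
estimate = samples.mean()
effective_noise = read_noise_e / np.sqrt(n_samples)
print(f"effective read noise ~{effective_noise:.2f} e-")
```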

Ibeo 4D LiDAR Looks Similar to Apple iPad Pro

Image Sensors World        Go to the original article...

Ibeo presented its 4D solid-state LiDAR at the EPIC World Photonics Technology Summit in San Francisco on Feb 3, 2020. It looks quite similar to the one inside the Apple iPad Pro 2020, apart from the Ibeo LiDAR's much longer range:

iPad Pro 2020 LiDAR:


Ibeo LiDAR:



Emberion Graphene-based SWIR Sensor Presentation

Image Sensors World        Go to the original article...

Emberion CEO Tapani Ryhanen presented the company's technology at the EPIC World Photonics Technology Summit 2020, held on Feb. 3 in San Francisco:


IWISS2020 Cancellation

Image Sensors World        Go to the original article...

The biennial International Workshop on Imaging Systems and Image Sensors (IWISS), which was to be held in Tokyo, Japan in November 2020, has been cancelled due to the coronavirus pandemic. The next IWISS is scheduled for November 2022.

Fraunhofer Converts IR Photons to Visible Through Quantum Entanglement

Image Sensors World        Go to the original article...

Fraunhofer IOF reports: "Bio-substances such as proteins, lipids and other biochemical components can be distinguished based on their characteristic molecular vibrations. These vibrations are stimulated by light in the mid-infrared to terahertz range and are very difficult to detect with conventional measurement techniques.

But how can information from these extreme wavelength ranges be made visible? The quantum mechanical effect of photon entanglement helps the researchers by allowing them to harness twin beams of light with different wavelengths. In an interferometric setup, a laser beam is sent through a nonlinear crystal in which it generates two entangled light beams. These two beams can have very different wavelengths depending on the crystal’s properties, but they are still connected to each other due to their entanglement.

“So now, while one photon beam in the invisible infrared range is sent to the object for illumination and interaction, its twin beam in the visible spectrum is captured by a camera. Since the entangled light particles carry the same information, an image is generated even though the light that reaches the camera never interacted with the actual object,” explains [Markus] Gräfe. The visible twin essentially provides insight into what is happening with the invisible twin.
"
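The twin wavelengths are tied together by energy conservation in the crystal: 1/λ_pump = 1/λ_signal + 1/λ_idler. The pump and signal values below are hypothetical, chosen only to show how a visible signal photon for the camera can pair with a mid-infrared idler that probes the sample:

```python
# Energy conservation for photon-pair generation in a nonlinear crystal:
# 1/lambda_pump = 1/lambda_signal + 1/lambda_idler.
def idler_wavelength_nm(pump_nm, signal_nm):
    return 1.0 / (1.0 / pump_nm - 1.0 / signal_nm)

# Hypothetical numbers: a 660 nm pump and an 800 nm (visible) signal
# photon yield an entangled idler near 3.8 um in the mid-infrared.
print(f"{idler_wavelength_nm(660, 800):.0f} nm")
```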

Actlight Announces Array of DPDs

Image Sensors World        Go to the original article...

Yahoo, PRNewswire: ActLight announces that the Dynamic PhotoDiode (DPD) sensor array has been fabricated and passed the first set of tests.

"The development of a very performant 3D image sensor based on our patented DPD technology is a great challenge for us at ActLight," said Serguei Okhonin, ActLight Co-Founder and CEO. "Seeing the performance of the first prototypes, in particular the absence of crosstalk between pixels and the first pictures produced by the array, and also considering that prototypes were built with standard CMOS image sensors technology give us the highest level of motivation to continue to invest in this project to build the high performance 3D image sensor that exceed the market expectations in terms of precision and efficiency."

International SPAD Sensor Workshop Goes Virtual

Image Sensors World        Go to the original article...

Due to the coronavirus pandemic, the International SPAD Sensor Workshop 2020 (ISSW2020) will run as a virtual conference on June 8-9, 2020. The agenda is tightly packed with excellent presentations:

  • Charge-Focusing SPAD Image Sensors for Low Light Imaging Applications
    Kazuhiro Morimoto, Canon
  • Custom silicon technologies for high detection efficiency SPAD arrays
    Angelo Gulinatti, Politecnico di Milano
  • LFoundry: SPAD, status and perspective
    Giovanni Margutti, Lfoundry
  • Device and method for a precise breakdown voltage detection of APD/SPAD in a dark environment
    Alexander Zimmer, XFAB
  • Ge on Si SPADs for LIDAR and Quantum Technology Applications
    Douglas Paul, University of Glasgow
  • 3D-Stacked SPAD in 40/45nm BSI Technology
    Georg Rohrer, AMS
  • BSI SPAD arrays based on wafer bond technology
    Werner Brockherde, Fraunhofer
  • Planar Microlenses for SPAD sensors
    Norbert Moussy, CEA-LETI
  • 3D Integrated Frontside Illuminated Photon-to-Digital Converters: Status and Applications
    Jean-Francois Pratte, University of Sherbrooke
  • Combining linear and SPAD-mode diode operation in pixel for wide dynamic range CMOS optical sensing
    Matthew Johnston, Oregon State University
  • ToF Image Sensor Systems using SPADs and Photodiodes
    Simon Kennedy, Monash University
  • A 1.1 mega-pixels vertical avalanche photodiode (VAPD) CMOS image sensor for a long range time-of-flight (TOF) system
    Yutaka Hirose, Panasonic
  • Single photon detector for space active debris removal and exploration
    Alexandre Pollini, CSEM
  • 4D solid state LIDAR – NEXT Generation NOW
    Unsal Kabuk, IBEO
  • Depth and Intensity LiDAR imaging with Pandion SPAD array
    Salvatore Gnecchi, OnSemi
  • LIDAR using SPADs in the visible and short-wave infrared
    Gerald Buller, Heriot-Watt University
  • InP-based SPADs for Automotive Lidar
    Mark Itzler, Argo AI
  • Custom Focal Plane Arrays of SWIR SPADs
    Erik Duerr, MIT Lincoln Labs
  • CMOS SPAD Sensors with Embedded Smartness
    Angel Rodriguez-Vasquez, University of Seville
  • Modelling TDC Circuit Performance for SPAD Sensor Arrays
    Daniel van Blerkom, Ametek (Forza)
  • Data processing of SPAD sensors for high quality imaging
    Chao Zhang, Adaps Photonics
  • Scalable, Multi-functional CMOS SPAD arrays for Scientific Imaging
    Leonardo Gasparini, FBK
  • Small and Smart SPAD Pixels
    Edoardo Charbon, EPFL
  • High-resolution imaging of the spatio-temporal dynamics of protein interactions via fluorescence lifetime imaging with SPAD arrays
    Simon Ameer-Beg, King's College
  • Image scanning microscopy with classical and quantum correlation contrasts
    Ron Tenne, Weizmann Institute
  • Imaging oxygenation by near-infrared optical tomography based on SPAD image sensors
    Martin Wolf, ETH Zurich
  • Raman spectroscopy utilizing a time resolving CMOS SPAD line sensor with a pulsed laser excitation
    Ilkka Nissinen, University of Oulu
  • Optical wireless communication with SPAD receivers
    Hiwa Mahmoudi, TU Wien
  • SPAD Arrays for Non-Line-of-Sight Imaging
    Andreas Velten, University of Wisconsin

LiDAR News: Blickfeld, Cepton, SiLC, Velodyne, Espros

Image Sensors World        Go to the original article...

Munich, Germany-based LiDAR start-up Blickfeld completes its Series A financing round led by the VC unit of Continental together with Wachstumsfonds Bayern, with participation of the existing investors Fluxunit – OSRAM Ventures, High-Tech Gründerfonds, TEV (Tengelmann Ventures) and Unternehmertum Venture Capital Partners. Blickfeld will use the new financial resources to ramp up production, qualify its LiDAR sensors for the automotive market and strengthen the application development and sales teams for industrial markets.

“The safety of autonomous vehicles is based on LiDAR sensor technology. We see Blickfeld in a unique position here, as our technology stands out due to its mass market capability,” says Blickfeld co-founder Florian Petit. “But the mobility sector is not the only area of application for our LiDAR sensors and recognition software: Numerous other successful customer projects in logistics, smart cities or the security sector confirm our approach, as does the financial commitment of the venture capital unit of Continental, Bayern Kapital and our previous investors. We are now looking forward to taking the next steps into series production.”

The start-up Blickfeld, founded three years ago by Mathias Müller, Florian Petit and Rolf Wojtech, has since grown to a team of over 100 people.


Mission publishes an interview with Cepton CEO Jun Pei:

"In the next decade or two Lidar will be just as common as cameras. The third dimension gives you an extra piece of data that’s critical while also removing a concern. Jun explains that there are more concerns with privacy when dealing with cameras. Lidar doesn’t have that issue because it doesn’t worry about facial recognition or color. It doesn’t measure the privacy-related data that people have issues with.

So with that said, the future is not about improving accuracy, it’s more about cost, reliability, and deployment in applications.
"

PRNewswire: SiLC Technologies, the developer of single-chip FMCW LiDAR, closes $12M in seed funding led by Dell Technologies Capital and joined by Decent Capital, ITIC Ventures, and several angel investors. SiLC will use the funding to scale its R&D and operations to develop its FMCW silicon photonic 4D+ Vision Chip platform.

The announcement follows a successful demo of the fully-integrated FMCW chip able to detect objects smaller than one and a half inches at a range of nearly 200 meters, translating to an effective resolution of around 0.01 degrees vertically and horizontally. This level of performance capability can enable a vehicle traveling at highway speed to stop or avoid objects at more than 200 meters range, a critical aspect of autonomous vehicle navigation and safety.
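The quoted figures are mutually consistent, as a quick back-of-envelope check shows:

```python
import math

# Resolving a ~1.5 inch (3.8 cm) object at ~200 m corresponds to an
# angular resolution of roughly 0.01 degrees, matching the claim.
object_m = 1.5 * 0.0254   # 1.5 inches in meters
range_m = 200.0
resolution_deg = math.degrees(math.atan2(object_m, range_m))
print(f"{resolution_deg:.3f} deg")  # about 0.011 deg
```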

"This is my third startup and by far the most exciting, both at a technology level and the size of the markets it addresses. We believe we have an opportunity to transform several industries," said Mehdi Asghari, founder and CEO, SiLC. "Our 4D+ Vision Chip technology will not only make LiDAR a commercial reality but will also enable applications ranging from robotics to AR/VR to biometric scanning."


Here is the SiLC CEO presentation at AutoSens Brussels 2019:



TechBriefs interviews Velodyne CEO Anand Gopalan about the challenges in autonomous car design:

"On the autonomous side, there are two things that are very challenging. The first is that you are dealing with the tyranny of corner cases. There are a lot of critical corner scenarios that autonomous vehicles have to deal with, which require a lot more innovation in software, sensor, and computing hardware. For example, say you have an autonomous robo-taxi that has dropped a pedestrian at a curbside and now needs to pull back into the main traffic. It needs to make sure everything around the vehicle is safe: the passenger has moved away from the vehicle, there are no bicyclists zooming by, vehicles trying to pull in — all sorts of things you might not encounter in just riding down the street. People are dealing with what I call the tyranny of corner cases by sometimes modifying software and in some cases going back to the drawing board in terms of hardware.

The second aspect is speed. Fleets of vehicles are being deployed in some very dense urban environments, driving at 30 miles per hour or so. But in order to make a viable car you need to go to at least 40 to 45 miles per hour. This introduces many new challenges in terms of perception as well as speed of reaction.
"


AutoSens publishes the Espros CCD LiDAR presentation given by Founder and CEO Beat De Coi in Brussels:

