PreAct Technologies announces world’s first software-defined flash LiDAR


Press release: https://www.preact-tech.com/news/preact-technologies-announces-mojave-the-first-release-in-its-3rd-generation-family-of-near-field-software-definable-flash-lidar/

PreAct Technologies Announces Mojave, the First Release in its 3rd Generation Family of Near-field, Software-definable Flash LiDAR



Portland, OR – August 1, 2023 – PreAct Technologies (PreAct), an Oregon-based developer of near-field flash LiDAR technology, today announced the release of its Mojave LiDAR, a high-performance, low-cost sensor solution for a variety of applications including smart cities, robotics, cargo monitoring, education and university research, building monitoring, patient monitoring, agriculture, and more.

“As more industries are discovering the power of LiDAR sensors to provide high quality data while also maintaining individual privacy, we knew that our technology would be a perfect fit for these applications,” said Paul Drysch, CEO of PreAct. “We created the sensor to allow companies to monitor volume and movement through high-density point clouds, which gives them the information they need to adjust their services without the ‘creepy’ factor of watching individuals on camera. In addition, you get much more useful data with a point cloud – such as precise object location and volume.”

Mojave is the only flash LiDAR on the market designed for both automotive and non-automotive applications. With its software-definable capabilities, depth accuracy error of less than 2%, and a single-unit retail cost of $350, Mojave is positioned to be the first truly mass-market LiDAR, addressing crucial spatial-awareness challenges without the high price of competing sensors.

Currently, specific use cases include elevator passenger monitoring, retail, patient monitoring in medical facilities, security cameras, robotics, smart cities, education and university research, and entrepreneurship.

Retail – Mojave addresses key concerns in a retail setting, including customer traffic patterns and behavior, shrinkage protection, product stocking, warehouse logistics, and violence detection. Addressing these areas improves both the customer experience and profitability.

Patient Monitoring & Security – Medical and rehabilitation facilities can use the Mojave sensor to monitor patient movement, reducing the risk of falls, detecting prolonged inactivity, and flagging other potential dangers such as security breaches by unauthorized visitors.

Robotics – Mojave meets the stringent automation needs in manufacturing, logistics, and other industries that have come to rely on robotics applications. Outperforming other sensors on the market with its precision, safety, and spatial awareness capabilities, Mojave stands out as a premier sensor choice.

Smart Cities – Gathering information about travel patterns, public transit passenger behavior, and similar data has become critical as smart cities implement intelligent transportation systems (ITS). PreAct’s Mojave LiDAR delivers the high performance, accuracy, and speed such systems require.

Education and University Research – Technology is moving at record speed, and universities are knowledge-rich forums where professors and students collaborate on the next generation of sensor innovation and applications. Worldwide, university labs and centers provide dedicated testing environments to explore how sensor technology can better our lives.

Entrepreneurship and Inventors – Most educational institutions teach some form of entrepreneurship, and universities worldwide run entrepreneurship centers where educators guide student innovators in solving the world’s most pressing problems. PreAct’s sensor technology awaits the next solution to everyday business and life challenges.

The PreAct Mojave LiDAR will be available in September of this year, distributed globally by Digi-Key Electronics and Amazon. Engineering samples will be available August 16, and both the samples and production units can be pre-ordered now by contacting PreAct.

For Mojave LiDAR specs, visit www.preact-tech.com/mojave

About PreAct Technologies  
PreAct Technologies is the market leader in near-field software-definable flash LiDAR technology and its integrated SDK (software development kit). Its patent-pending suite of sensor technologies provides high-resolution, affordable LiDAR solutions to a wide range of industries including robotics, healthcare, ITS, logistics, security, industrial, consumer electronics, trucking, and automotive. PreAct’s edge-processing algorithms produce 3D depth maps of small objects with sub-centimeter accuracy at ranges up to 20 meters. PreAct’s LiDARs and SDK enable companies and innovators to address the industry’s most pressing business and technology needs. The firm is headquartered in Portland, Oregon, with offices in Ashburn, Virginia, and Barcelona, Spain. For sales inquiries, please contact sales@preact-tech.com. For more information, visit www.preact-tech.com.


Detailed depth maps from gated cameras


Recent work from Princeton University's computational imaging lab shows a new method for generating highly detailed depth maps from a gated camera. 

This work was presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR 2022) in New Orleans.

Abstract: Gated cameras hold promise as an alternative to scanning LiDAR sensors with high-resolution 3D depth that is robust to back-scatter in fog, snow, and rain. Instead of sequentially scanning a scene and directly recording depth via the photon time-of-flight, as in pulsed LiDAR sensors, gated imagers encode depth in the relative intensity of a handful of gated slices, captured at megapixel resolution. Although existing methods have shown that it is possible to decode high-resolution depth from such measurements, these methods require synchronized and calibrated LiDAR to supervise the gated depth decoder – prohibiting fast adoption across geographies, training on large unpaired datasets, and exploring alternative applications outside of automotive use cases. In this work, we propose an entirely self-supervised depth estimation method that uses gated intensity profiles and temporal consistency as a training signal. The proposed model is trained end-to-end from gated video sequences, does not require LiDAR or RGB data, and learns to estimate absolute depth values. We take gated slices as input and disentangle the estimation of the scene albedo, depth, and ambient light, which are then used to learn to reconstruct the input slices through a cyclic loss. We rely on temporal consistency between a given frame and neighboring gated slices to estimate depth in regions with shadows and reflections. We experimentally validate that the proposed approach outperforms existing supervised and self-supervised depth estimation methods based on monocular RGB and stereo images, as well as supervised methods based on gated images. Code is available at https://github.com/princeton-computationalimaging/Gated2Gated.
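To make the training signal concrete, here is a minimal, illustrative PyTorch sketch of the cyclic reconstruction step described in the abstract. The function names, tensor shapes, and the form of the range-intensity profiles are assumptions for exposition, not the authors' implementation (see the linked repository for the actual code).

```python
import torch
import torch.nn.functional as F

def reconstruct_slices(albedo, depth, ambient, profiles):
    """Re-render the K gated slices from the disentangled scene estimates.

    albedo, depth, ambient: (B, 1, H, W) tensors predicted by the decoder.
    profiles: callable mapping depth -> (B, K, H, W), giving each gate's
              range-intensity response C_k(z) at the estimated depth.
    """
    C = profiles(depth)          # per-gate response at the estimated depth
    return albedo * C + ambient  # intensity model: I_k = albedo * C_k(z) + ambient

def cyclic_loss(gated_slices, albedo, depth, ambient, profiles):
    """Self-supervised objective: the estimates must reproduce the measured slices."""
    recon = reconstruct_slices(albedo, depth, ambient, profiles)
    return F.l1_loss(recon, gated_slices)
```

The paper additionally enforces temporal consistency between neighboring gated frames to resolve shadows and reflections; that term is omitted here for brevity.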



An example gated imaging system is pictured at the bottom left; it consists of a synchronized camera and a VCSEL flash illumination source (not shown). The system integrates the scene response over narrow depth ranges, as illustrated in the bottom row, so the overlapping gated slices contain implicit depth information at full image resolution according to the time-of-flight principle. In comparison, the LiDAR sensor illustrated at the top left sends out point-wise illumination pulses, yielding the sparse depth representation depicted in the top row. The proposed self-supervised Gated2Gated learning technique recovers dense depth (middle row) from the set of three gated images shown, learning from temporal and gated illumination cues.

The paper shows results in a variety of challenging driving conditions such as nighttime, fog, rain and snow.

A. Walia et al., "Gated2Gated: Self-Supervised Depth Estimation from Gated Images", CVPR 2022.


Preprint on unconventional cameras for automotive applications


From arXiv.org --- You Li et al. write:

Autonomous vehicles rely on perception systems to understand their surroundings for further navigation missions. Cameras are essential for perception systems due to the advantages in object detection and recognition provided by modern computer vision algorithms, compared to other sensors such as LiDARs and radars. However, limited by its inherent imaging principle, a standard RGB camera may perform poorly in a variety of adverse scenarios, including but not limited to: low illumination, high contrast, bad weather such as fog/rain/snow, etc. Meanwhile, estimating 3D information from 2D image detection is generally more difficult when compared to LiDARs or radars. Several new sensing technologies have emerged in recent years to address the limitations of conventional RGB cameras. In this paper, we review the principles of four novel image sensors: infrared cameras, range-gated cameras, polarization cameras, and event cameras. Their comparative advantages, existing or potential applications, and corresponding data processing algorithms are all presented in a systematic manner. We expect that this study will assist practitioners in the autonomous driving community with new perspectives and insights.


PreAct and Espros working on new lidar solutions


From optics.org news:

PreAct Technologies, an Oregon-based developer of near-field flash lidar technology, and Espros Photonics of Sargans, Switzerland, a firm producing time-of-flight chips and 3D cameras, have announced a collaboration agreement to develop new flash lidar technologies for specific use cases in automotive, trucking, industrial automation, and robotics.

The collaboration combines the dynamic abilities of PreAct’s software-definable flash lidar and the “ultra-ambient-light-robust time-of-flight technology” from Espros with the aim of creating what the partners call “next-generation near-field sensing solutions”.

Paul Drysch, CEO and co-founder of PreAct Technologies, commented, “Our goal is to provide high performance, software-definable sensors to meet the needs of customers across various industries. Looking to the future, vehicles across all industries will be software-defined, and our flash lidar solutions are built to support that infrastructure from the beginning.”


The automotive and trucking industries continue to rapidly integrate ADAS and self-driving capabilities into vehicles, and with the US NHTSA recently ruling that fully automated vehicles no longer require human driving controls, the need for ultra-precise, high-performance sensors is paramount to ensuring safe autonomous driving.

The sensors created by PreAct and Espros are expected to address significant ADAS and self-driving features – such as traffic sign recognition, curb detection, night vision and pedestrian detection – with the highest frame rates and resolution of any sensor on the market, the partners state.


In addition to providing solutions for automotive and trucking, the partnership will also address the expanding robotics industry. According to a market report published by Allied Market Research, the global industrial robotics market size is expected to reach $116.8 billion by 2030.

The flash lidar solutions will also enable a wide range of robotics and automation applications, including QR code scanning, obstacle avoidance, and gesture recognition.

Beat De Coi, President and CEO of Espros, commented, “We have plans to demonstrate the capabilities of our 3D chipsets with PreAct’s hardware and software. By combining our best-in-class TOF chips with PreAct’s innovation and drive, we will see great results, with clients benefiting from this partnership.”

Link: https://optics.org/news/13/3/41


Apple iPhone LiDAR applications


Polycam makes apps that leverage the new lidar sensor on Apple's latest iPhone and iPad models.

Their website presents a gallery of objects scanned with their app: https://poly.cam/explore


Original press release about Polycam's new app:

Polycam launches a 3D scanning app for the new iPhone 12 Pro models with a LiDAR sensor. The app allows users to rapidly create high-quality color 3D scans that can be used for 3D visualization and more. Because the scans are dimensionally accurate, they can be used to take measurements of virtually anything in the scan at once, rapidly speeding up workflows for professionals such as architects and 3D designers. Perhaps most impressive about Polycam is the speed: scans that would have taken hours to process on a desktop without a LiDAR device can now be processed in seconds directly on an iPhone.
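As an illustration of what dimensional accuracy buys, here is a small hypothetical Python sketch that measures an object from an exported metric point cloud. The file name, format, and point indices are assumptions for illustration, not a Polycam API; the app exports standard 3D formats that common tools can read.

```python
import numpy as np

# Hypothetical export: an N x 3 text file of XYZ coordinates in meters.
points = np.loadtxt("scan_points.xyz")

# Axis-aligned bounding-box extents give quick overall object dimensions.
mins, maxs = points.min(axis=0), points.max(axis=0)
width, depth, height = maxs - mins
print(f"extents: {width:.3f} x {depth:.3f} x {height:.3f} m")

# Distance between two picked points (indices chosen arbitrarily here),
# e.g. to measure a doorway or a piece of furniture.
a, b = points[0], points[100]
print(f"point-to-point distance: {np.linalg.norm(a - b):.3f} m")
```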

As Chris Heinrich, the CEO of Polycam, puts it: "I've worked for years on 3D scanning with more conventional hardware, and what you can do on these LiDAR devices is literally 100x faster than what was possible before".

3D capture is a valuable tool for many industries, and Polycam is already seeing enthusiastic usage from architects, archaeologists, movie set designers and more, via an iPad Pro version that launched earlier this year. With the launch of the iPhone version, Heinrich expects to see adoption from many more users across a wider range of verticals. "Just as smartphones dramatically expanded the reach of photo and video", Heinrich says, "we expect these new LiDAR-enabled smartphones to dramatically increase the reach of 3D capture".

While the launch of the iPhone version is an important milestone, "this is just the beginning", says Heinrich. Many new features and improvements are in the pipeline, from enabling users to create even larger scans and improving scanning accuracy for smaller objects, to a suite of 3D editing and AI-driven post-processing tools to supercharge professional workflows that utilize 3D capture.

Polycam is available to download on the App Store for the iPhone 12 Pro, 12 Pro Max, and the 2020 iPad Pro family. Sample 3D scans can be found on Sketchfab. Polycam was built by a small team of individuals with a passion for 3D capture, and deep experience in computer vision and 3D design.


[I am curious to know what the real-world challenges and limitations are. In particular, how much do the final results rely on lidar data vs. traditional "photogrammetry" that fuses multiple RGB images with minimal supervision from the lidar for, say, absolute scale? If you have an iPhone/iPad and get to try this app out, please share your thoughts in comments below! ---AI]


High Resolution MEMS LiDAR Paper in Nature Magazine


Researchers from the Integrated Photonics Lab at UC Berkeley recently published a paper titled "A large-scale microelectromechanical-systems-based silicon photonics LiDAR" proposing a CMOS-compatible, high-resolution scanning MEMS LiDAR system.

Three-dimensional (3D) imaging sensors allow machines to perceive, map and interact with the surrounding world. The size of light detection and ranging (LiDAR) devices is often limited by mechanical scanners. Focal plane array-based 3D sensors are promising candidates for solid-state LiDARs because they allow electronic scanning without mechanical moving parts. However, their resolutions have been limited to 512 pixels or smaller. In this paper, we report on a 16,384-pixel LiDAR with a wide field of view (FoV, 70° × 70°), a fine addressing resolution (0.6° × 0.6°), a narrow beam divergence (0.050° × 0.049°) and a random-access beam addressing with sub-MHz operation speed. The 128 × 128-element focal plane switch array (FPSA) of grating antennas and microelectromechanical systems (MEMS)-actuated optical switches are monolithically integrated on a 10 × 11-mm2 silicon photonic chip, where a 128 × 96 subarray is wire bonded and tested in experiments. 3D imaging with a distance resolution of 1.7 cm is achieved with frequency-modulated continuous-wave (FMCW) ranging in monostatic configuration. The FPSA can be mass-produced in complementary metal–oxide–semiconductor (CMOS) foundries, which will allow ubiquitous 3D sensors for use in autonomous cars, drones, robots and smartphones.
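For intuition about the FMCW figures quoted above, here is a back-of-the-envelope Python sketch. In FMCW ranging, the laser frequency is swept over a bandwidth B during a chirp of duration T, and distance follows from the beat frequency between outgoing and returned light; the chirp parameters below are illustrative assumptions, not taken from the paper.

```python
C = 3.0e8  # speed of light, m/s

def range_resolution(bandwidth_hz):
    """Two targets are separable if their ranges differ by more than c / (2B)."""
    return C / (2.0 * bandwidth_hz)

def fmcw_distance(f_beat_hz, bandwidth_hz, chirp_s):
    """Target distance from the measured beat frequency of a linear chirp."""
    return C * f_beat_hz * chirp_s / (2.0 * bandwidth_hz)

# The reported 1.7 cm distance resolution implies a sweep bandwidth of
# roughly c / (2 * 0.017 m), i.e. about 8.8 GHz:
print(range_resolution(8.8e9))            # ~0.017 m
print(fmcw_distance(1e6, 8.8e9, 10e-6))   # 1 MHz beat over a 10 us chirp -> ~0.17 m
```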



