PreAct Technologies announces world’s first software-defined flash LiDAR


Press release: https://www.preact-tech.com/news/preact-technologies-announces-mojave-the-first-release-in-its-3rd-generation-family-of-near-field-software-definable-flash-lidar/

PreAct Technologies Announces Mojave, the First Release in its 3rd Generation Family of Near-field, Software-definable Flash LiDAR



Portland, OR – August 1, 2023 – PreAct Technologies (PreAct), an Oregon-based developer of near-field flash LiDAR technology, today announced the release of its Mojave LiDAR as a high-performance, low-cost sensor solution to address a variety of applications including smart cities, robotics, cargo monitoring, education and university research, building monitoring, patient monitoring, agriculture, and more.

“As more industries are discovering the power of LiDAR sensors to provide high quality data while also maintaining individual privacy, we knew that our technology would be a perfect fit for these applications,” said Paul Drysch, CEO of PreAct. “We created the sensor to allow companies to monitor volume and movement through high-density point clouds, which gives them the information they need to adjust their services without the ‘creepy’ factor of watching individuals on camera.  In addition, you get much more useful data with a point cloud – such as precise object location and volume.”

Mojave is the only flash LiDAR on the market designed to meet the needs of non-automotive as well as automotive applications. With its software-definable capabilities, depth accuracy error of less than 2%, and a single-unit retail cost of $350, Mojave will be the first truly mass-market LiDAR. Mojave addresses crucial spatial-awareness challenges without the high prices of other sensors on the market.

Currently, specific use cases include elevator passenger monitoring, retail, patient monitoring in medical facilities, security cameras, robotics, smart cities, education and university research, and entrepreneurship.

Retail – Mojave addresses key concerns in a retail setting, including customer traffic patterns and behavior, shrinkage prevention, product stocking, warehouse logistics, and violence detection. Addressing these areas provides greater peace of mind, a better customer experience, and improved profitability.

Patient Monitoring & Security – Medical and rehabilitation facilities can use the Mojave sensor to monitor patient movements and minimize the risks of falls, prolonged immobility, and other potential dangers such as security breaches from unauthorized visitors.

Robotics – Mojave meets the stringent automation needs in manufacturing, logistics, and other industries that have come to rely on robotics applications. Outperforming other sensors on the market with its precision, safety, and spatial awareness capabilities, Mojave stands out as a premier sensor choice.

Smart Cities – As smart cities expand their use of technology, gathering information about travel patterns, public transit passenger behavior, and similar data has become critical to implementing an intelligent transportation system (ITS). PreAct’s Mojave LiDAR provides the high performance, accuracy, and speed such systems require.

Education and University Research – Technology is moving at record speed, and universities provide a knowledge-rich forum for professors and students to collaborate on the next generation of sensor innovation and application. Worldwide, university labs and centers are dedicated spaces providing testing environments to explore how sensor technology will better our lives.

Entrepreneurship and Inventors – With creativity abounding, most educational institutions teach some form of entrepreneurship. Universities worldwide maintain dedicated entrepreneurship centers where educators guide student innovators in solving the world’s most pressing problems. PreAct’s sensor technology awaits the next solution to everyday business and life challenges.

The PreAct Mojave LiDAR will be available in September of this year and distributed globally by Digi-Key Electronics and Amazon. Engineering samples will be available August 16, and both can be pre-ordered now by contacting PreAct.

For Mojave LiDAR specs, visit www.preact-tech.com/mojave

About PreAct Technologies  
PreAct Technologies is the market leader in near-field software-definable flash LiDAR technology and its integrated SDK (software development kit). Its patent-pending suite of sensor technologies provides high-resolution, affordable LiDAR solutions to a wide range of industries including robotics, healthcare, ITS, logistics, security, industrial, consumer electronics, trucking, and automotive. PreAct’s edge-processing algorithms produce 3D depth maps of small objects with sub-centimeter accuracy at ranges of up to 20 meters. PreAct’s LiDARs and SDK enable companies and innovators to address the industry’s most pressing business and technology needs. The firm is headquartered in Portland, Oregon, with offices in Ashburn, Virginia, and Barcelona, Spain. For sales inquiries, please contact sales@preact-tech.com. For more information, visit www.preact-tech.com.


VoxelSensors announces Switching Pixels technology for AR/VR applications


GlobalNewswire: https://www.globenewswire.com/news-release/2023/05/29/2677822/0/en/VoxelSensors-Debuts-the-Global-Premiere-of-Revolutionary-Switching-Pixels-Active-Event-Sensor-Evaluation-Kit-for-3D-Perception-to-Seamlessly-Blend-the-Physical-and-Digital-Worlds.html

VoxelSensors Debuts the Global Premiere of Revolutionary Switching Pixels® Active Event Sensor Evaluation Kit for 3D Perception to Seamlessly Blend the Physical and Digital Worlds

BRUSSELS, Belgium, May 29, 2023 (GLOBE NEWSWIRE) -- VoxelSensors will reveal its innovative 3D Perception technology, the Switching Pixels® Active Event Sensor (SPAES), and globally premiere the related Andromeda Evaluation Kit at AWE USA 2023. Experience this breakthrough technology from May 31 to June 2 at AWE booth #914 in Santa Clara, California, USA.

VoxelSensors’ Switching Pixels® Active Event Sensor is a novel category of ultra-low power and ultra-low latency 3D perception sensors for Extended Reality (XR) to seamlessly blend the physical and digital worlds.

Extended Reality device manufacturers require low-power, low-latency 3D Perception technology to flawlessly blend the physical and digital worlds and unlock the true potential of immersive experiences. VoxelSensors’ patented Switching Pixels® Active Event Sensor technology resolves these significant challenges and is the world’s first solution to achieve power consumption below 10 milliwatts combined with latency below 5 milliseconds. It does so while remaining resistant to indoor and outdoor lighting at distances over 5 meters and immune to crosstalk.

This breakthrough technology offers an alternative to traditional 3D sensors, eliminating the need for slow frames. It streams 3D data points serially to the device and application in real time, at nanosecond refresh rates. Designed for efficiency, SPAES delivers the lowest latency for perception applications at minimal power consumption, addressing previously unmet needs such as precise segmentation, spatial mapping, anchoring, and natural interaction.
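To make the contrast with frame-based sensing concrete, the sketch below shows one way an application might consume an asynchronous stream of timestamped 3D points rather than whole depth frames. It is purely illustrative: the event record, field names, and processing loop are hypothetical and not VoxelSensors' API.

```python
from dataclasses import dataclass
from typing import Iterable

@dataclass
class Voxel:
    """Hypothetical event record: one 3D point plus a timestamp (seconds)."""
    x: float
    y: float
    z: float
    t: float

def track_nearest(events: Iterable[Voxel], horizon_s: float = 0.005):
    """Update an estimate as each point arrives, instead of once per frame.

    Keeps only points newer than `horizon_s` (e.g. 5 ms) and yields the
    closest one after every event, so latency is bounded by the event
    rate rather than by a frame period.
    """
    window: list[Voxel] = []
    for ev in events:
        window.append(ev)
        window = [v for v in window if ev.t - v.t <= horizon_s]
        nearest = min(window, key=lambda v: v.x**2 + v.y**2 + v.z**2)
        yield ev.t, nearest

# Usage with synthetic data: an object approaching the sensor
stream = (Voxel(0.1 * i, 0.0, 1.0 - 0.01 * i, 1e-4 * i) for i in range(50))
for t, nearest in track_nearest(stream):
    pass  # feed `nearest` into hand tracking, anchoring, etc.
```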

“SPAES disrupts the standard in 3D Perception,” says Christian Mourad, co-founder and VP of Engineering at VoxelSensors. “The Andromeda Evaluation Kit, available to selected OEMs and integrators in the summer of 2023, demonstrates our commitment to advancing XR/AR/MR and VR applications. This innovation, however, isn’t limited to Extended Reality and extends into robotics, the automotive industry, drones, and medical applications.”

VoxelSensors was founded in 2020 by a team of seasoned experts in the field of 3D sensing and perception, with over 50 years of collective experience. The team’s success includes co-inventing an efficient 3D Time-of-Flight sensor and camera technology, which leading tech company Sony acquired in 2015.

In May 2023, VoxelSensors announced a €5M investment led by Belgian venture capital firms Capricorn Partners and Qbic, with contributions from the investment firm finance&invest.brussels, along with existing investors and the team. The funding will bolster VoxelSensors' roadmap and talent acquisition and enhance customer relations in the U.S. and Asia.

“At VoxelSensors, we aim to fuse the physical and digital realms until they're indistinguishable,” says Johannes Peeters, co-founder and CEO of VoxelSensors. “With Extended Reality gaining momentum it is our duty to discover, create, work, and play across sectors like gaming, healthcare, and manufacturing. Our Switching Pixels® Active Event Sensor technology stands ready to pioneer transformative user experiences!”

For information about the Andromeda Evaluation Kit or to arrange a purchase, contact sales@voxelsensors.com.


Videos du jour — onsemi, CEA-Leti, Teledyne e2v [June 7, 2023]



 

Overcoming Challenging Lighting Conditions with eHDR: onsemi’s AR0822 is an innovative image sensor that produces high-quality 4K video at 60 frames per second.


Discover the Wafer-to-wafer Process: CEA-Leti demonstrates its hybrid bonding expertise, walking through the stages of the wafer-to-wafer process in the CEA-Leti clean room, starting with chemical mechanical planarization (CMP) and continuing through wafer-to-wafer bonding, alignment measurement, characterization of bonding quality, grinding, and results analysis.

 

Webinar - Pulsed Time-of-Flight: a complex technology for a simpler and more versatile system: Hosted by Vision Systems Design and presented by Yoann Lochardet, 3D Marketing Manager at Teledyne e2v, in June 2022, this webinar discusses how, at first glance, Pulsed Time-of-Flight (ToF) can be seen as a very complex technology that is difficult to understand and use. That is true in the sense that the technology is state-of-the-art and requires the latest technical advancements. However, it is also very flexible, with features and capabilities that reduce complexity and allow for a simpler and more versatile overall system.



SD Optics releases MEMS-based system "WiseTopo" for 3D microscopy


SD Optics has released WiseTopo, a MEMS-based microarray lens system that transforms a 2D microscope into a 3D one.
 
Attendees at Photonics West can see a demonstration at SD Optics' booth #4128 from Jan 31 to Feb 2, 2023, at the Moscone Center in San Francisco, California.
 


SD Optics introduces WiseTopo with its core technology, the MALS lens, a MEMS-based microarray lens system. WiseTopo transforms a 2D microscope into a 3D microscope with a simple plug-in installation, and it fits all microscopes. A conventional system has a limited depth of field, so the user has to adjust focus manually by moving the z-axis, making it difficult to identify the exact shape of an object instantly. These manual movements can cause deviations in observation, missing information, incomplete inspection, and increased user workload.

SD Optics' WiseTopo is a 3D microscope module powered by the patented MALS technology. It converts a 2D microscope into a 3D microscope by replacing the image sensor; with this simple installation, WiseTopo resolves the depth-of-field issue without z-axis movement. MALS is an optical MEMS-based, ultra-fast variable-focus lens that changes lens curvature through the motion of individual micro-mirrors. MALS refocuses at a speed of 12 kHz without mechanical z-axis movement. It is a semi-permanent digital lens technology that operates at any temperature and has no life-cycle limit.

Combined with SD Optics' software, WiseTopo provides features that help users understand an object in real time. An all-in-focus function keeps everything in focus; auto-focus automatically focuses on a region of interest (ROI); focus lock maintains focus when multiple focus ROIs are set along the z-axis; multi-focus lock stays in focus even when moving along the x- and y-axes; and auto-focus lock retains auto-focus during z-axis movement. These functions maximize user convenience.

WiseTopo and its 3D images reveal information that is hidden when using a 2D microscope. WiseTopo obtains in-focus images with its fast varying-focus technology and instantly processes many 3D attributes such as shape matching and point clouds. It supports various 3D data formats for analysis; for example, reference 3D data can easily be compared with real-time 3D data. When objective lenses with different magnifications are mounted on the microscope turret, WiseTopo retains all functions as the magnification is changed, and it provides all 3D features on any microscope, regardless of brand.

3D images created in WiseTopo can be viewed in AR/VR, letting users observe 3D data in a metaverse space.
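The all-in-focus capability described above can be approximated in software with classic focus stacking over a focal sweep. The sketch below is a generic illustration using OpenCV and NumPy, not SD Optics' WiseTopo software: per pixel it keeps the value from the frame with the highest local sharpness, and the winning frame index doubles as a coarse depth map.

```python
import cv2
import numpy as np

def all_in_focus(stack):
    """Fuse a focal sweep (list of grayscale images) into one sharp image.

    Generic focus-stacking sketch: per pixel, keep the value from the frame
    whose local Laplacian response (a sharpness proxy) is largest.
    """
    frames = np.stack(stack).astype(np.float32)            # (N, H, W)
    sharpness = np.stack([
        np.abs(cv2.Laplacian(frame, cv2.CV_32F, ksize=3))
        for frame in frames
    ])
    # Smooth the sharpness maps so per-pixel decisions are less noisy
    sharpness = np.stack([cv2.GaussianBlur(s, (9, 9), 0) for s in sharpness])
    best = np.argmax(sharpness, axis=0)                     # (H, W) frame index
    fused = np.take_along_axis(frames, best[None, ...], axis=0)[0]
    # `best` maps each pixel to a focus plane, i.e. a coarse depth index
    return fused.astype(np.uint8), best
```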
 


ams OSRAM announces new sensor Mira220


  • New Mira220 image sensor’s high quantum efficiency enables operation with low-power emitter and in dim lighting conditions
  • Stacked chip design uses ams OSRAM back side illumination technology to shrink package footprint to just 5.3mm x 5.3mm, giving greater design flexibility to manufacturers of smart glasses and other space-constrained products
  • Low-power operation and ultra-small size make the Mira220 ideal for active stereo vision or structured lighting 3D systems in drones, robots and smart door locks, as well as mobile and wearable devices

Press Release: https://ams-osram.com/news/press-releases/mira220

Premstaetten, Austria (14th July 2022) -- ams OSRAM (SIX: AMS), a global leader in optical solutions, has launched a 2.2Mpixel global shutter visible and near infrared (NIR) image sensor which offers the low-power characteristics and small size required in the latest 2D and 3D sensing systems for virtual reality (VR) headsets, smart glasses, drones and other consumer and industrial applications.

The new Mira220 is the latest product in the Mira family of pipelined high-sensitivity global shutter image sensors. ams OSRAM uses back side illumination (BSI) technology in the Mira220 to implement a stacked chip design, with the sensor layer on top of the digital/readout layer. This allows it to produce the Mira220 in a chip-scale package with a footprint of just 5.3mm x 5.3mm, giving manufacturers greater freedom to optimize the design of space-constrained products such as smart glasses and VR headsets.

The sensor combines excellent optical performance with very low-power operation. The Mira220 offers a high signal-to-noise ratio as well as high quantum efficiency of up to 38% (per internal tests) at the 940nm NIR wavelength used in many 2D and 3D sensing systems. 3D sensing technologies such as structured light or active stereo vision, which require an NIR image sensor, enable functions such as eye and hand tracking, object detection and depth mapping. The Mira220 will support 2D or 3D sensing implementations in augmented reality and virtual reality products, in industrial applications such as drones, robots and automated vehicles, as well as in consumer devices such as smart door locks.

The Mira220’s high quantum efficiency allows device manufacturers to reduce the output power of the NIR illuminators used alongside the image sensor in 2D and 3D sensing systems, reducing total power consumption. The Mira220 itself draws very little power: only 4 mW in sleep mode, 40 mW in idle mode, and 350 mW at full resolution and 90 fps. By keeping system power consumption low, the Mira220 enables wearable and portable device manufacturers to save space by specifying a smaller battery, or to extend run-time between charges.
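As a rough illustration of what those power modes mean for a duty-cycled wearable (the duty cycle and energy budget below are assumptions, not ams OSRAM figures):

```python
# Power modes quoted for the Mira220 (mW)
P_SLEEP, P_IDLE, P_ACTIVE = 4.0, 40.0, 350.0

def average_power_mw(active_frac, idle_frac):
    """Average power for a simple three-state duty cycle (illustrative only)."""
    sleep_frac = 1.0 - active_frac - idle_frac
    return P_ACTIVE * active_frac + P_IDLE * idle_frac + P_SLEEP * sleep_frac

# Example: sensing 10% of the time, idle 20%, sleeping the rest
p_avg = average_power_mw(active_frac=0.10, idle_frac=0.20)
print(f"{p_avg:.1f} mW average")                 # 45.8 mW
# A hypothetical 500 mWh budget for the sensor would then last ~10.9 hours
print(f"{500 / p_avg:.1f} h on a 500 mWh budget")
```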

“Growing demand in emerging markets for VR and augmented reality equipment depends on manufacturers’ ability to make products such as smart glasses smaller, lighter, less obtrusive and more comfortable to wear. This is where the Mira220 brings new value to the market, providing not only a reduction in the size of the sensor itself, but also giving manufacturers the option to shrink the battery, thanks to the sensor’s very low power consumption and high sensitivity at 940nm,” said Brian Lenkowski, strategic marketing director for CMOS image sensors at ams OSRAM.

Superior pixel technology

The Mira220’s advanced back-side illumination (BSI) technology gives the sensor very high sensitivity and quantum efficiency with a pixel size of 2.79 µm. Effective resolution is 1600 x 1400 pixels and maximum bit depth is 12 bits. The sensor is supplied in a 1/2.7” optical format.
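The quoted pixel size, resolution, and optical format are mutually consistent, as this short check shows (using the common convention that a 1-inch optical format corresponds to a roughly 16 mm image-circle diagonal):

```python
import math

pixel_um = 2.79
width_px, height_px = 1600, 1400

w_mm = width_px * pixel_um / 1000     # ~4.46 mm
h_mm = height_px * pixel_um / 1000    # ~3.91 mm
diag_mm = math.hypot(w_mm, h_mm)      # ~5.93 mm

optical_format = 16.0 / diag_mm       # 1-inch type ~= 16 mm diagonal
print(f"diagonal {diag_mm:.2f} mm -> 1/{optical_format:.1f}-inch format")
# -> diagonal 5.93 mm -> 1/2.7-inch format
```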

The sensor supports on-chip operations including external triggering, windowing, and horizontal or vertical mirroring. The MIPI CSI-2 interface allows for easy interfacing with a processor or FPGA. On-chip registers can be accessed via an I2C interface for easy configuration of the sensor.
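As a purely illustrative sketch of what register configuration over I2C typically looks like from a host processor, using the smbus2 Python package; the device address and register offsets below are hypothetical placeholders, not values from the Mira220 datasheet:

```python
from smbus2 import SMBus

I2C_BUS = 1
SENSOR_ADDR = 0x36          # hypothetical 7-bit device address
REG_EXPOSURE = 0x0100       # hypothetical register offsets
REG_TRIGGER_MODE = 0x0200

def write_reg(bus, reg, value):
    """Write one register: high address byte as command, low byte + data as payload."""
    hi, lo = (reg >> 8) & 0xFF, reg & 0xFF
    bus.write_i2c_block_data(SENSOR_ADDR, hi, [lo, value & 0xFF])

with SMBus(I2C_BUS) as bus:
    write_reg(bus, REG_EXPOSURE, 0x20)       # set an exposure value
    write_reg(bus, REG_TRIGGER_MODE, 0x01)   # enable external triggering
```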

Digital correlated double sampling (CDS) and row noise correction result in excellent noise performance.

ams OSRAM will continue to innovate and extend the Mira family of solutions, offering customers a choice of resolution and size options to fit various application requirements.

The Mira220 NIR image sensor is available for sampling; more information about the Mira220 is available from ams OSRAM.


Mira220 image sensor achieves high quantum efficiency at 940nm to allow for lower power illumination in 2D and 3D sensing systems (Image: ams OSRAM)

The miniature Mira220 gives extra design flexibility in space-constrained applications such as smart glasses and VR headsets (Image: ams OSRAM)




3D cameras for metaverse


A press release from II-VI Inc. announces a joint effort with Artilux on a SWIR 3D camera for the "metaverse".

https://ii-vi.com/news/ii-vi-incorporated-and-artilux-demonstrate-a-3d-camera-for-enhanced-user-experience-in-the-metaverse/


 

PITTSBURGH and HSINCHU, TAIWAN, July 18, 2022 (GLOBE NEWSWIRE) – II‐VI Incorporated (Nasdaq: IIVI), a leader in semiconductor lasers, and Artilux, a leader in germanium silicon (GeSi) photonics and CMOS SWIR sensing technology, today announced a joint demonstration of a next-generation 3D camera with much longer range and higher image resolution to greatly enhance user experience in the metaverse.


Investments in the metaverse infrastructure are accelerating and driving the demand for sensors that enable more realistic and immersive virtual experiences. II-VI and Artilux combined their proprietary technologies in indium phosphide (InP) semiconductor lasers and GeSi sensor arrays, respectively, to demonstrate a miniature 3D camera that operates in the short-wavelength infrared (SWIR), at 1380 nm, resulting in significantly higher performance than existing cameras operating at 940 nm.


“The longer infrared wavelength provides better contrasts and reveals material details that are otherwise not visible with shorter-wavelength illumination, especially in outdoor environments,” said Dr. Julie Sheridan Eng, Sr. Vice President, Optoelectronic Devices & Modules Business Unit, II-VI. “By designing a camera that operates at 1380 nm instead of 940 nm, we can illuminate the scene with greater brightness and still remain well within the margins of eye safety requirements. In addition, the atmosphere absorbs more sunlight at 1380 nm than at 940 nm, which reduces background light interference, greatly improving the signal-to-noise ratio and enabling cameras with longer range and better image resolution.”


“The miniature SWIR 3D camera can be seamlessly integrated into next-generation consumer devices, many of which are under development for augmented-, mixed-, and virtual-reality applications,” said Dr. Neil Na, co-founder and CTO of Artilux. “II‑VI and Artilux demonstrated a key capability that will enable the metaverse to become a popular venue for entertainment, work, and play. The SWIR camera demonstration provides a glimpse of the future of 3D sensing in the metaverse, with displays that can identify, delineate, classify, and render image content, or with avatars that can experience real-time eye contact and facial expressions.” 


II-VI provided the highly integrated SWIR illumination module comprising InP edge-emitting lasers that deliver up to 2 W of output power and optical diffusers, in surface-mount technology (SMT) packages for low-cost and high-quality assembly. Artilux’s camera features a high-bandwidth and high-quantum-efficiency GeSi SWIR sensor array based on a scalable CMOS technology platform. Combined, the products enable a broad range of depth-sensing applications in consumer and automotive markets. 


About II-VI Incorporated
II-VI Incorporated, a global leader in engineered materials and optoelectronic components, is a vertically integrated manufacturing company that develops innovative products for diversified applications in communications, industrial, aerospace & defense, semiconductor capital equipment, life sciences, consumer electronics, and automotive markets. Headquartered in Saxonburg, Pennsylvania, the Company has research and development, manufacturing, sales, service, and distribution facilities worldwide. The Company produces a wide variety of application-specific photonic and electronic materials and components, and deploys them in various forms, including integrated with advanced software to support our customers. For more information, please visit us at www.ii-vi.com.


About Artilux
Artilux, renowned as the world leader in GeSi photonic technology, has been at the forefront of wide-spectrum 3D sensing and consumer optical connectivity since 2014. Built on fundamental technology breakthroughs, Artilux has delivered multidisciplinary innovations spanning integrated optics, system architecture, and computing algorithms, and has emerged as an innovation enabler for smartphones, autonomous driving, augmented reality, and beyond. Our vision is to keep pioneering the frontier of photonic technologies and transform them into enrichment for real-life experiences. We enlighten the path from information to intelligence. Find out more at www.artiluxtech.com.



Review of indirect time-of-flight 3D cameras (IEEE TED June 2022)


C. Bamji et al. from Microsoft published a paper titled "A Review of Indirect Time-of-Flight Technologies" in IEEE Trans. Electron Devices (June 2022).

Abstract: Indirect time-of-flight (iToF) cameras operate by illuminating a scene with modulated light and inferring depth at each pixel by combining the back-reflected light with different gating signals. This article focuses on amplitude-modulated continuous-wave (AMCW) time-of-flight (ToF), which, because of its robustness and stability properties, is the most common form of iToF. The figures of merit that drive iToF performance are explained and plotted, and system parameters that drive a camera’s final performance are summarized. Different iToF pixel and chip architectures are compared and the basic phasor methods for extracting depth from the pixel output values are explained. The evolution of pixel size is discussed, showing performance improvement over time. Depth pipelines, which play a key role in filtering and enhancing data, have also greatly improved over time with sophisticated denoising methods now available. Key remaining challenges, such as ambient light resilience and multipath invariance, are explained, and state-of-the-art mitigation techniques are referenced. Finally, applications, use cases, and benefits of iToF are listed.
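The phasor method the abstract refers to can be illustrated with the standard four-phase AMCW depth calculation. The sketch below is not code from the paper; it assumes an idealized pixel that returns four correlation samples at gating offsets of 0°, 90°, 180°, and 270°.

```python
import numpy as np

C = 299_792_458.0  # speed of light, m/s

def itof_depth(q0, q90, q180, q270, f_mod):
    """Standard four-phase AMCW depth estimate (illustrative, not from the paper).

    q0..q270 : correlation samples per pixel at 0/90/180/270 degree gating offsets
    f_mod    : modulation frequency in Hz
    """
    # Phase of the returning modulated light relative to the emitted signal
    phase = np.arctan2(q90 - q270, q0 - q180)
    phase = np.mod(phase, 2 * np.pi)          # wrap into [0, 2*pi)

    # Amplitude is a useful per-pixel confidence measure
    amplitude = 0.5 * np.sqrt((q0 - q180) ** 2 + (q90 - q270) ** 2)

    # Phase maps to distance; range is ambiguous beyond c / (2 * f_mod)
    depth = (C * phase) / (4 * np.pi * f_mod)
    return depth, amplitude

# Example: a single pixel observed with 100 MHz modulation (ambiguity range ~1.5 m)
d, a = itof_depth(q0=1.0, q90=0.3, q180=0.2, q270=0.9, f_mod=100e6)
print(f"depth ~ {d:.3f} m, amplitude {a:.2f}")
```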



Figures in the paper illustrate: use of time gates to integrate returning light; iToF camera measurement; modulation contrast vs. modulation frequency used in iToF cameras; trends of pixel sizes and pixel array sizes since 2012; the trend of near-infrared pixel quantum efficiencies since 2010; multigain column readout; and multipath mitigation.

DOI link: https://doi.org/10.1109/TED.2022.3145762


ams OSRAM VCSELs in Melexis’ in-cabin monitoring solution


ams OSRAM VCSEL illuminator brings benefits of integrated eye safety to Melexis automotive in-cabin monitoring solution

Premstaetten, Austria (11 May, 2022) – ams OSRAM (SIX: AMS), a global leader in optical solutions, announces that it is supplying a high-performance infrared laser flood illuminator for the latest automotive indirect Time-of-Flight (iToF) demonstrator from Melexis.

The ams OSRAM vertical-cavity surface-emitting laser (VCSEL) flood illuminator from the TARA2000-AUT family has been chosen for the new, improved version of the EVK75027 iToF sensing kit because it features an integrated eye safety interlock. This provides for a more compact, more reliable and faster system implementation than other VCSEL flood illuminators that require an external photodiode and processing circuitry.

The Melexis evaluation kit demonstrates the new ams OSRAM 940nm VCSEL flood illuminator in combination with an interface board, a processor board, and the MLX75027 iToF sensor. The evaluation kit provides a complete hardware implementation of iToF depth sensing on which automotive OEMs can run software for cabin monitoring functions such as occupant detection and gesture sensing.


More reliable operation, faster detection of eye safety risks

In the new ams OSRAM VCSEL, the eye safety interlock is implemented directly on the micro-lens array of the VCSEL module and detects any cracks or openings that could cause an eye safety risk. Earlier automotive implementations of iToF sensing used VCSEL illuminators that require an external photodiode, a fault-prone, indirect method of providing the eye safety interlock function.

The read-out circuit requires no additional components other than an AND gate or a MOSFET, producing an almost instant (<1 µs) reaction to fault conditions. The lower component count also reduces the bill-of-materials cost compared to photodiode-based systems, and removing the external photodiode avoids the false signals created by objects such as a passenger’s hand obscuring the camera module.

“Automotive OEMs are continually looking for ways to simplify system designs and reduce component count. By integrating an eye safety interlock into the VCSEL illuminator module, ams OSRAM has found a new way to bring value to automotive customers. Not only will it reduce component count, but also increase reliability while offering the very highest levels of optical performance,” said Firat Sarialtun, Global Segment Manager for In-Cabin Sensing at ams OSRAM.

“With the EVK75027, Melexis has gone beyond the provision of a stand-alone iToF sensor to offer automotive customers a high-performance platform for 3D in-cabin sensing. We are pleased to be able to improve the value of the EVK75027 by now offering the option of a more integrated VCSEL flood illuminator on the kit’s illuminator board,” said Gualtiero Bagnuoli, Marketing Manager for Optical Sensors at Melexis.

The EVK75027 evaluation kit with ams OSRAM illumination board can be ordered from authorized distributors of Melexis products (https://www.melexis.com/en/product/EVK75027/Evaluation-Kit-VGA-ToF-Sensor).

There is also a white paper on the new illumination board for the EVK75027, describing the benefits of implementing an iToF system with a VCSEL flood illuminator that includes an eye safety interlock. The white paper can be downloaded here: https://www.melexis.com/Eye-safe-IR-illumination-for-3D-TOF

Article: https://ams-osram.com/news/press-releases/melexis-eye-safety-itof


Google AI Blog article on Lidar-Camera Fusion


A team from Google Research has a new blog article on fusing Lidar and camera data for 3D object detection. The motivating problem here seems to be the issue of misalignment between 3D LiDAR data and 2D camera data.


The blog discusses the team's forthcoming paper titled "DeepFusion: Lidar-Camera Deep Fusion for Multi-Modal 3D Object Detection," which will be presented at the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) in June 2022. A preprint of the paper is available here.

Some excerpts from the blog and the associated paper:

LiDAR and visual cameras are two types of complementary sensors used for 3D object detection in autonomous vehicles and robots. To develop robust 3D object detection models, most methods need to augment and transform the data from both modalities, making the accurate alignment of the features challenging.

Existing algorithms for fusing LiDAR and camera outputs generally follow two approaches --- input-level fusion where the features are fused at an early stage, decorating points in the LiDAR point cloud with the corresponding camera features, or mid-level fusion where features are extracted from both sensors and then combined. Despite realizing the importance of effective alignment, these methods struggle to efficiently process the common scenario where features are enhanced and aggregated before fusion. This indicates that effectively fusing the signals from both sensors might not be straightforward and remains challenging.



In our CVPR 2022 paper, “DeepFusion: LiDAR-Camera Deep Fusion for Multi-Modal 3D Object Detection”, we introduce a fully end-to-end multi-modal 3D detection framework called DeepFusion that applies a simple yet effective deep-level feature fusion strategy to unify the signals from the two sensing modalities. Unlike conventional approaches that decorate raw LiDAR point clouds with manually selected camera features, our method fuses the deep camera and deep LiDAR features in an end-to-end framework. We begin by describing two novel techniques, InverseAug and LearnableAlign, that improve the quality of feature alignment and are applied to the development of DeepFusion. We then demonstrate state-of-the-art performance by DeepFusion on the Waymo Open Dataset, one of the largest datasets for automotive 3D object detection.
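For readers who want a concrete picture of the deep-feature fusion step, here is a minimal PyTorch-style sketch in the spirit of the paper's LearnableAlign idea, where each LiDAR feature attends over its associated camera features via cross-attention. It is a simplified illustration with assumed dimensions and module names, not the authors' implementation.

```python
import torch
import torch.nn as nn

class LearnableAlignSketch(nn.Module):
    """Simplified cross-attention fusion in the spirit of DeepFusion's
    LearnableAlign (not the authors' code).

    Each LiDAR feature (query) attends over the camera features (keys/values)
    associated with it, and the attended camera feature is concatenated back
    onto the LiDAR feature.
    """
    def __init__(self, lidar_dim=128, cam_dim=64, attn_dim=64):
        super().__init__()
        self.q = nn.Linear(lidar_dim, attn_dim)
        self.k = nn.Linear(cam_dim, attn_dim)
        self.v = nn.Linear(cam_dim, attn_dim)
        self.out = nn.Linear(lidar_dim + attn_dim, lidar_dim)

    def forward(self, lidar_feat, cam_feat):
        # lidar_feat: (N, lidar_dim) voxel/pillar features
        # cam_feat:   (N, K, cam_dim) camera features projected to each voxel
        q = self.q(lidar_feat).unsqueeze(1)                  # (N, 1, attn_dim)
        k, v = self.k(cam_feat), self.v(cam_feat)            # (N, K, attn_dim)
        attn = torch.softmax(q @ k.transpose(1, 2) / k.shape[-1] ** 0.5, dim=-1)
        fused_cam = (attn @ v).squeeze(1)                    # (N, attn_dim)
        return self.out(torch.cat([lidar_feat, fused_cam], dim=-1))

# Usage with dummy shapes: 1000 voxels, 5 camera features each
align = LearnableAlignSketch()
out = align(torch.randn(1000, 128), torch.randn(1000, 5, 64))  # -> (1000, 128)
```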






We evaluate DeepFusion on the Waymo Open Dataset, one of the largest 3D detection challenges for autonomous cars, using the Average Precision with Heading (APH) metric under difficulty level 2, the default metric to rank a model’s performance on the leaderboard. Among the 70 participating teams from all over the world, the DeepFusion single and ensemble models achieve state-of-the-art performance in their corresponding categories.








Yole report on 3D imaging technologies


Full article here: https://www.i-micronews.com/will-3d-depth-cameras-return-to-android-phones/

Some excerpts:

Apple started using structured light for facial recognition in the iPhone X in 2017, ushering in an era of 3D depth imaging in the mobile field. The following year, in 2018, Android players Oppo, Huawei, and Xiaomi also launched front 3D depth cameras, using structured light technologies very similar to Apple's.

The Android camp also attempted another 3D imaging technology, indirect Time of Flight (iToF). It was used for rear 3D depth cameras, enabling quick focus, imaging bokeh, and some highly anticipated AR games and other applications.

The hardware for this technique is more compact than structured light, requiring only a ToF sensor chip and a flood illuminator. The distance is computed from the time difference between emission and reception. Compared to structured light, it does not need much computing power, software integration is relatively simple, and overall it has cost advantages.

LG, Samsung and Huawei used this kind of technology both for front and/or rear implementations.

For a while, no Android player included 3D depth cameras in their flagship phones. However, during Mobile World Congress 2022, Honor unexpectedly released the Magic 4 Pro with a 3D depth camera on the front of the phone. Will 3D depth cameras return to Android phones?







Market report: https://www.i-micronews.com/products/3d-imaging-and-sensing-technology-and-market-trends-2021/



