Image Sensors World Go to the original article...
Gigapixel X-Ray Camera
Arxiv.org paper "Billion-pixel X-ray camera (BiPC-X)" by Zhehui Wang, Kaitlin Anagnost, Cris W. Barnes, D. M. Dattelbaum, Eric R. Fossum, Eldred Lee, Jifeng Liu, J. J. Ma, W. Z. Meijer, Wanyi Nie, C. M. Sweeney, Audrey C. Therrien, Hsinhan Tsai, and Xin Que from Los Alamos National Laboratory, Dartmouth College, Gigajot, and Université de Sherbrooke presents a 21MP tiled prototype.
Intel Announces RealSense ID
Intel unveils the RealSense ID 3D camera F450, based on an active stereo approach.
LiDAR News: Livox, Aeva, Sense, Fraunhofer, Xilinx
BusinessWire: Xpeng, a Chinese electric vehicle company, is to deploy Livox automotive-grade lidar technology in Xpeng’s new production model in 2021. Livox is Xpeng’s first partner in lidar technology.
Livox is enhancing the detection range of its Horiz sensor to 150m (for objects at 10% reflectivity), enabling Xpeng’s XPILOT system to detect remote obstacles on highways and urban roads. Livox’s customized solution for Xpeng also includes a new “ultra FPS” lidar technology concept. Through a cleverly designed rotating-mirror technology, objects within the lidar’s ROI are captured as a 20Hz point cloud while the whole system runs at a 10Hz frame rate. The ROI point cloud density is hence increased to the equivalent of 144 lines per 0.1 second without the need for extra laser transmitters. The increased point cloud density enables faster detection of small objects on the road surface, as well as pedestrians, bicycles, and even traffic cones. The horizontal FOV of Horiz has also reached 120°, which greatly enhances the smart-driving experience by resolving persistent challenges faced by drivers, including blind spots against cut-in vehicles.
PRNewswire: Aeva and InterPrivate SPAC announce that Sylebra Capital (Hong Kong) invests $200M on top of the investment in the merger deal between Aeva and InterPrivate. The combined proceeds from this financing, the previously announced private placement, and InterPrivate's cash in trust are now expected to exceed $560M.
PRNewswire: Sense Photonics announces that it has achieved an industry-first by successfully demonstrating 200-meter detection with its proprietary global shutter flash LiDAR system.
The Sense system uses proprietary emitter and SPAD sensor technologies. Sense Silicon, a BSI SPAD device with more than 140,000 pixels, is designed to work seamlessly with the Sense Illuminator, a distributed 940nm laser array of more than 15,000 VCSELs. Together, they are the core building blocks of Sense's camera-like architecture enabling the first high-resolution, eye-safe, global shutter flash LiDAR that can detect 10% reflective targets at 200 meters range in full sunlight outputting tens of millions of points per second. Global shutter acquisition sets a new standard in the long-range LiDAR industry by removing the need for complex motion blur correction while allowing pixel-level, frame-by-frame fusion with RGB camera data.
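For scale, the timing physics behind direct time-of-flight flash lidar at that range can be sketched as follows (generic physics, not Sense-specific; the function names are illustrative):

```python
# Illustrative round-trip timing for a direct time-of-flight flash lidar.
# Only the 200 m range figure comes from the announcement above.

C = 299_792_458.0  # speed of light in vacuum, m/s

def round_trip_time(distance_m: float) -> float:
    """Time for a light pulse to reach a target and return, in seconds."""
    return 2.0 * distance_m / C

def distance_from_time(t_s: float) -> float:
    """Invert the measurement: range implied by a measured round-trip time."""
    return C * t_s / 2.0

t = round_trip_time(200.0)
print(f"{t * 1e6:.2f} us")  # ~1.33 us for a 200 m target

# Depth resolution is set by timing resolution: a 100 ps timing bin
# corresponds to roughly 1.5 cm of range.
print(f"{distance_from_time(100e-12) * 100:.1f} cm")
```

The ~1.33 us round trip also bounds the maximum unambiguous frame rate per acquisition: the sensor must wait at least that long after each pulse before a 200 m return can be attributed to it.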
Sense Photonics says that "Our core technology of VCSELs and SPADs can be paired with different lenses and diffusers to create short-range and long-range products of various FOV and resolutions. We can go as wide as 180 x 90 FOV with uniform angular resolution that exceeds 0.4 degrees for short-range needs or we can go as narrow as 15 x 7.5 to achieve uniform angular resolution of 0.025 degrees for mid or long-range needs. The inherent architecture flexibility enables us to deliver on a broad range of automotive applications. More to come in a forthcoming announcement."
"We have delivered what industry experts thought was impossible due to our 940nm wavelength, and have created a revolutionary new architecture with the Sense Illuminator, Sense Silicon, and our state-of-the-art signal processing pipeline to miniaturize the data output," said Hod Finkelstein, CTO, Sense Photonics. "Our LiDAR systems will solve the shortcomings that OEMs, Tier 1 suppliers, and Robotaxi companies have been dealing with in competing LiDAR technologies."
Geared for mass-market automotive adoption, Sense uses mature manufacturing and cost-effective assembly processes used in today's consumer technology industries. Sense's flash architecture eliminates the need for fine alignment between emitter & receiver, maintaining sensor calibration and depth accuracy during shock and vibration. Additionally, the architecture is designed as a platform to allow for customer-specific product variations with a simple change in optics and the first to be able to provide both short- and long-range capabilities from the same architecture.
Customer evaluation systems are being finalized and will be available in mid-2021 to meet current demand, with start of production planned for late 2024.
Fraunhofer Institute for Photonic Microsystems (IPMS) in Dresden designs and manufactures MEMS mirrors for AEye's LiDAR:
"The MEMS scanner is made of monocrystalline silicon, a material with several advantages: it is not only robust and proof against material fatigue but it has a high temperature and shock resistance. The silicon has a reflective coating that intensifies the reflection of the light. Thanks to positioning technology integrated in the chip, it is possible to continuously track where the mirror steers the laser beam and which position is being measured. This in turn enables correction to the point of operation."
SemiconductorEngineering publishes a Xilinx promotional article saying that ZVision, Robosense, Baraja, Benewake, Blickfeld, Hesai, Innovusion, Opsys, OURS, Ouster, Phantom Intelligence, Pointcloud, SureStar, and many others use Xilinx platforms for their LiDARs' signal processing.
Omnivision Announces 40MP Sensor with 1.008um Pixel and Multi-Sampling CDS
BusinessWire: OmniVision announces the OV40A, a 40MP, 1.008um pixel sensor that features super high gain and de-noise technologies in the 1/1.7” optical format. This sensor also offers multiple HDR options and supports 1080p slow-motion and high-speed video captures at 240 fps with PDAF.
“TSR estimates there will be 855 million image sensors with 40MP or higher resolution shipped to smartphone manufacturers in 2021, which presents a huge opportunity for this new image sensor,” said James Liu, senior technical marketing manager at OmniVision. “The OV40A’s unique combination of features is bringing flagship-level performance to the main, wide, ultrawide and video cameras in this fast-growing market segment.”
The OV40A supports super high gain of up to 256x with embedded multi-sampling de-noise functionality for enhanced low-light performance. This sensor also offers HDR through selective conversion gain, along with 2- and 3-exposure staggered HDR timing.
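The 2-exposure staggered HDR mode can be illustrated with a toy linear merge. This is a generic HDR combine, not OmniVision's actual pipeline; the 16x exposure ratio, 10-bit levels, thresholds, and function name are all illustrative:

```python
import numpy as np

# Minimal sketch of a 2-exposure HDR merge: take a long and a short
# exposure of the same scene, scale the short one up by the exposure
# ratio, and use it wherever the long exposure clips.

def merge_2exp_hdr(long_exp, short_exp, ratio, sat_level=1000):
    """Combine two linear raw exposures into one extended-range image.

    long_exp, short_exp: arrays of linear raw values (10-bit here).
    ratio: exposure-time ratio long/short (e.g. 16).
    sat_level: raw value above which the long exposure is considered clipped.
    """
    long_exp = np.asarray(long_exp, dtype=np.float64)
    short_exp = np.asarray(short_exp, dtype=np.float64)
    return np.where(long_exp < sat_level, long_exp, short_exp * ratio)

long_exp = np.array([100.0, 500.0, 1023.0])   # last pixel is clipped
short_exp = np.array([6.25, 31.25, 200.0])    # 16x shorter exposure
merged = merge_2exp_hdr(long_exp, short_exp, ratio=16)
print(merged)  # clipped pixel replaced by 200 * 16 = 3200
```

The extended range comes from the short exposure: with a 16x ratio, a 10-bit sensor effectively covers 4 extra stops of highlight detail, at the cost of more noise in the regions taken from the short frame.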
Built on OmniVision’s PureCel Plus-S stacked die technology, the OV40A integrates an on-chip, 4-cell (4C) color filter array and hardware remosaic, which provides high quality, 40MP Bayer output in real time. For low-light conditions, this sensor can use near-pixel binning to output a 10MP image, as well as 4K2K and 1080p video, with four times the sensitivity, yielding 2.0um pixel-equivalent low-light performance.
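Arithmetically, 4-cell binning amounts to combining each 2x2 block of same-color small pixels into one larger effective pixel. The on-chip implementation is near-pixel and not public; this NumPy version is only a sketch of the idea:

```python
import numpy as np

# Sketch of 4-cell ("quad Bayer") binning: in a 4C color filter array each
# Bayer color sample is repeated as a 2x2 block of small pixels, so
# averaging every non-overlapping 2x2 block yields a quarter-resolution
# Bayer mosaic with ~4x the collected signal per output pixel.

def bin_4cell(raw: np.ndarray) -> np.ndarray:
    """Average each non-overlapping 2x2 block of a quad-Bayer raw frame."""
    h, w = raw.shape
    assert h % 2 == 0 and w % 2 == 0
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

# Toy 4x4 quad-Bayer frame: each 2x2 block holds one color sample.
raw = np.array([
    [10, 12, 50, 52],
    [14, 16, 54, 56],
    [30, 32, 20, 22],
    [34, 36, 24, 26],
], dtype=np.float64)
print(bin_4cell(raw))  # 2x2 Bayer output: [[13, 53], [33, 23]]
```

This is how a 40MP quad-Bayer sensor yields a 10MP binned output: each output pixel gathers the light of four 1.008um pixels, which is the basis of the "2.0um pixel-equivalent" low-light claim.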
Output formats include 40MP at 30 fps, 10MP with 4C binning at 120 fps, 4K2K video at 60 fps and 1080p video at 240 fps. All of these formats can be captured with PDAF. Other features include a CPHY interface, multi-camera sync and a 34.7 degree CRA.
Samples of the new OV40A image sensor are available now.
sCMOS Sensors Stand-Off
Arxiv.org paper "Evaluation of scientific CMOS sensors for sky survey applications" by Sergey Karpov, Armelle Bajat, Asen Christov, Michael Prouza, and Grigory Beskin from the Institute of Physics, Czech Academy of Sciences, Prague, Czech Republic, and Kazan Federal University, Kazan, Russia, compares a BAE-Fairchild Imaging sCMOS sensor with Gpixel's.
Omnivision Presents 1.998um NIR-enhanced Pixel and Sensor
BusinessWire: OmniVision announces the OS04C10, a 1.998um pixel, 4 MP sensor for both IoT and home security cameras.
“AI-enabled IoT and home security cameras require excellent performance across all lighting conditions for accurate algorithm detection of faces, license plates and other items. Additionally, these cameras are often battery-powered,” said Cheney Zhang, senior marketing manager for the security segment at OmniVision. “The OS04C10 maintains the same high 4MP resolution as our popular OV4689 sensor, while adding improved NIR, ultra low light and HDR performance for these IoT and home security cameras, along with a new ultra low power mode that consumes 98.9% less power than the normal mode for longer battery life.”
The sensor features NIR response-enhancing Nyxel technology and 2-exposure HDR.
Omnivision Unveils 32MP Sensor with 0.702um Pixels for Selfies
BusinessWire: OmniVision announces the OV32B sensor, featuring a 0.702um pixel to provide 32 MP resolution in a 1/3” optical format. The OV32B also supports 2- and 3-exposure HDR timing for up to 8MP video modes and still previews. It features a 4-cell color filter array and on-chip hardware re-mosaic, which provides high quality, 32 MP Bayer output in real time—a feature that can be challenging for competitors to achieve in the 1/3” optical format.
“In 2021, TSR estimates there will be 244 million image sensors with 32 MP resolution shipped to smartphone manufacturers for use in selfie cameras,” said Arun Jayaseelan, staff marketing manager at OmniVision. “The OV32B strikes the perfect balance between small size and high resolution that the designers of these cameras are looking for.”
To boost autofocus accuracy, especially in low light, this sensor offers the option to integrate type-2, 2x2 microlens phase detection autofocus (ML-PDAF). It also provides a CPHY interface for greater throughput using fewer pins, as well as a DPHY interface. Output formats include 32 MP at 15 fps, 8 MP at 60 fps and 6 MP (16:9) at 90 fps—all with 4-cell binning. Additionally, the sensor can output 1080p video at 120 fps, 1.5 MP captures (16:9) at 240 fps and 720p video at 360 fps.
Samples of the OV32B image sensor are available now.
Teledyne to Acquire FLIR for $8B
Teledyne and FLIR jointly announce that they have entered into a definitive agreement under which Teledyne will acquire FLIR in a cash and stock transaction valued at approximately $8.0 billion.
“At the core of both our companies is proprietary sensor technologies. Our business models are also similar: we each provide sensors, cameras and sensor systems to our customers. However, our technologies and products are uniquely complementary with minimal overlap, having imaging sensors based on different semiconductor technologies for different wavelengths,” said Robert Mehrabian, Executive Chairman of Teledyne. “For two decades, Teledyne has demonstrated its ability to compound earnings and cash flow consistently and predictably. Together with FLIR and an optimized capital structure, I am confident we shall continue delivering superior returns to our stockholders.”
Materials Recognition with ToF Camera
Springer Machine Vision and Applications Journal publishes the paper "Classification of materials using a pulsed time-of-flight camera" by ShiNan Lang, Jizhong Zhang, Yiheng Cai, Xiaoqing Zhu, and Qiang Wu from Beijing University of Technology, China.
Omnivision Announces Stacked AI Processor for DMS Applications
BusinessWire: OmniVision announces the OAX8000 AI-enabled automotive ASIC for entry-level, stand-alone driver monitoring systems (DMS). The OAX8000 uses a stacked-die architecture to provide the industry’s only DMS processor with on-chip DDR3 SDRAM (1GB). It is also the only dedicated DMS processor to integrate a neural processing unit (NPU) and an ISP, providing processing speeds of up to 1.1 trillion operations per second for eye-gaze and eye-tracking algorithms. These fast processing speeds with 1K-MAC CNN acceleration, along with the integrated SDRAM, enable the lowest power consumption available for DMS systems: the OAX8000 and an OmniVision automotive image sensor together consume just 1W in typical conditions. Further optimizing DMS systems, this integration also reduces the board area of the engine control unit (ECU).
Current and Future Technologies of Capsule Endoscopy
Archives of Preventive Medicine publishes a paper "Analysis of current and future technologies of capsule endoscopy: A mini review" by Alexander P Brown and Ahalapitiya H Jayatissa from the University of Toledo, OH, USA.
"Many existing methods of endoscopy can be very uncomfortable and potentially even painful for a patient. Using a conventional endoscope is also limited in its usable range, unable to access a majority of the small bowel. Recent advancements in LEDs, optical design, and MEMS (microelectromechanical systems) technologies have provided the ability to create a wireless endoscope. Since its inception, the capsule endoscope has seen advancements in existing technology as well as the introduction of new components. As the capsule endoscope continues to advance, more application possibilities will grow as well."
History and Future of Radiation Imaging at CERN
Elsevier Radiation Measurements Journal publishes a paper "History and future of radiation imaging with single quantum processing pixel detectors" by Erik H.M. Heijne from Czech Technical University in Prague.
"This introductory article treats aspects of the evolution of early semiconductor detectors towards modern radiation imaging instruments, now with millions of signal processing cells, exploiting the potential of silicon nano-technology. The Medipix and Timepix assemblies are among the prime movers in this evolution. Imaging the impacts in the detecting matrix from the individual ionizing particles and photons can be used to study these elementary quanta themselves, or allows one to visualize various characteristics of objects under irradiation. X-ray imaging is probably the most-used modality of the latter, and the new imagers can process each single incident X–photon to obtain an image with additional information about the structure and composition of the object. The atomic distribution can be imaged, taking advantage of the energy-specific X-ray absorption. A myriad of other applications is appearing, as reported in the special issue of this journal. As an example, in molecular spectroscopy, the sub-nanosecond timing in each pixel can deliver in real-time the mapping of the molecular composition of a specimen by time-of-flight for single molecules, a revolution compared with classical gel electrophoresis. References and some personal impressions are provided to illuminate radiation detection and imaging over more than 50 years. Extrapolations and wild guesses for future developments conclude the article."
Thesis on Low Power ToF Imaging
MIT publishes a PhD Thesis "Algorithms and systems for low power time-of-flight imaging" by James Noraky.
"Time-of-flight (ToF) cameras are appealing depth sensors because they obtain dense depth maps with minimal latency. However, for mobile and embedded devices, ToF cameras, which obtain depth by emitting light and estimating its roundtrip time, can be power-hungry and limit the battery life of the underlying device. To reduce the power for depth sensing, we present algorithms to address two scenarios. For applications where RGB images are concurrently collected, we present algorithms that reduce the usage of the ToF camera and estimate new depth maps without illuminating the scene. We exploit the fact that many applications operate in nearly rigid environments, and our algorithms use the sparse correspondences across the consecutive RGB images to estimate the rigid motion and use it to obtain new depth maps.
Our techniques can reduce the usage of the ToF camera by up to 85%, while still estimating new depth maps within 1% of the ground truth for rigid scenes and 1.74% for dynamic ones. When only the data from a ToF camera is used, we propose algorithms that reduce the overall amount of light that the ToF camera emits to obtain accurate depth maps. Our techniques use the rigid motions in the scene, which can be estimated using the infrared images that a ToF camera obtains, to temporally mitigate the impact of noise. We show that our approaches can reduce the amount of emitted light by up to 81% and the mean relative error of the depth maps by up to 64%. Our algorithms are all computationally efficient and can obtain dense depth maps at up to real-time on standard and embedded computing platforms.
Compared to applications that just use the ToF camera and incur the cost of higher sensor power and to those that estimate depth entirely using RGB images, which are inaccurate and have high latency, our algorithms enable energy-efficient, accurate, and low latency depth sensing for many emerging applications."
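The rigid-motion idea from the abstract can be sketched in a few lines: if the motion between frames is rigid, a previous depth map plus an estimated pose change predicts the new depth map without firing the ToF illuminator. Here the rigid motion is recovered from sparse 3D correspondences with a standard Kabsch/Umeyama least-squares fit, used as a stand-in for the thesis's pose-estimation step (which works from sparse RGB feature matches); all names and data are illustrative:

```python
import numpy as np

def fit_rigid(src: np.ndarray, dst: np.ndarray):
    """Least-squares R, t such that dst ~= src @ R.T + t (points are Nx3)."""
    src_c, dst_c = src.mean(0), dst.mean(0)
    H = (src - src_c).T @ (dst - dst_c)       # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))    # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Ground-truth motion: a small rotation about z plus a translation.
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.05, -0.02, 0.30])

rng = np.random.default_rng(0)
pts_prev = rng.uniform(-1, 1, size=(20, 3)) + np.array([0, 0, 3.0])
pts_next = pts_prev @ R_true.T + t_true      # sparse correspondences

R, t = fit_rigid(pts_prev, pts_next)

# Propagate a dense point cloud from the previous ToF frame without
# illuminating the scene again:
dense_prev = rng.uniform(-1, 1, size=(1000, 3)) + np.array([0, 0, 3.0])
dense_pred = dense_prev @ R.T + t            # predicted new depth geometry
print(np.allclose(R, R_true), np.allclose(t, t_true))
```

With noise-free correspondences the fit recovers the motion exactly; in practice the accuracy figures quoted above depend on how well the rigid-scene assumption holds.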
Millimeter Wave Sensing Review
Eindhoven University of Technology publishes an Arxiv.org paper "Millimeter Wave Sensing: A Review of Application Pipelines and Building Blocks" by Bram van Berlo, Amany Elkelany, Tanir Ozcelebi, and Nirvana Meratnia.