Archives for January 2021

Gigapixel X-Ray Camera

Image Sensors World        Go to the original article...

Arxiv.org paper "Billion-pixel X-ray camera (BiPC-X)" by Zhehui Wang, Kaitlin Anagnost, Cris W. Barnes, D. M. Dattelbaum, Eric R. Fossum, Eldred Lee, Jifeng Liu, J. J. Ma, W. Z. Meijer, Wanyi Nie, C. M. Sweeney, Audrey C. Therrien, Hsinhan Tsai, and Xin Que from Los Alamos National Laboratory, Dartmouth College, Gigajot, and Université de Sherbrooke presents a 21MP tiled prototype:

"The continuing improvement in quantum efficiency (above 90% for single visible photons), reduction in noise (below 1 electron per pixel), and shrink in pixel pitch (less than 1 micron) motivate billion-pixel X-ray cameras (BiPC-X) based on commercial CMOS imaging sensors. We describe BiPC-X designs and prototype construction based on flexible tiling of commercial CMOS imaging sensors with millions of pixels. Device models are given for direct detection of low energy X-rays (< 10 keV) and indirect detection of higher energies using scintillators. Modified Birks's law is proposed for light-yield nonproportionality in scintillators as a function of X-ray energy. Single X-ray sensitivity and spatial resolution have been validated experimentally using laboratory X-ray source and the Argonne Advanced Photon Source. Possible applications include wide field-of-view (FOV) or large X-ray aperture measurements in high-temperature plasmas, the state-of-the-art synchrotron, X-ray Free Electron Laser (XFEL), and pulsed power facilities."
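For reference, the classical Birks relation that the paper builds on ties scintillation light yield to stopping power; the energy-dependent modification proposed in the paper is not reproduced here:

\[
\frac{dL}{dx} = \frac{S\,\dfrac{dE}{dx}}{1 + k_B\,\dfrac{dE}{dx}}
\]

where \(L\) is the light yield, \(S\) the scintillation efficiency, \(dE/dx\) the energy loss per unit path length, and \(k_B\) Birks's constant.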
Intel Announces RealSense ID

Image Sensors World        Go to the original article...

Intel unveils the RealSense ID F450, a 3D camera based on an active stereo approach.

LiDAR News: Livox, Aeva, Sense, Fraunhofer, Xilinx

Image Sensors World        Go to the original article...

BusinessWire: Xpeng, a Chinese electric vehicle company, will deploy Livox automotive-grade lidar technology in its new production model in 2021. Livox is Xpeng’s first partner in lidar technology.

Livox is enhancing the detection range of its Horiz sensor to 150m (for objects at 10% reflectivity), enabling Xpeng’s XPILOT system to easily detect remote obstacles on highways and urban roads. Livox’s customized solution for Xpeng also includes a new “ultra FPS” lidar technology concept. Through a cleverly designed rotating-mirror technology, objects within the lidar’s ROI receive 20Hz point cloud data while the whole system works at a frame rate of 10Hz. The ROI point cloud density is thus increased to a 144-line equivalent at 0.1 second without the need for extra laser transmitters. The increased point cloud density enables faster detection of small objects on the road surface, including pedestrians, bicycles and even traffic cones. The horizontal FOV of Horiz has also reached 120°, which greatly enhances the smart driving experience by resolving many persistent challenges faced by drivers, including the removal of blind spots against cut-in vehicles.
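
The arithmetic behind the "ultra FPS" claim can be sketched as follows; the per-pass line count below is an illustrative guess, not a Livox specification — the point is only that revisiting the ROI twice per 0.1 s system frame doubles both the ROI update rate and the accumulated line density:

```python
# Back-of-envelope check of the ROI scheme described above.
frame_rate_hz = 10          # whole-system frame rate
roi_passes_per_frame = 2    # rotating mirror sweeps the ROI twice per frame
lines_per_pass = 72         # hypothetical lines per ROI pass (illustrative)

roi_update_rate_hz = frame_rate_hz * roi_passes_per_frame   # 20 Hz ROI updates
equivalent_lines = lines_per_pass * roi_passes_per_frame    # 144-line equivalent
print(roi_update_rate_hz, equivalent_lines)
```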


PRNewswire: Aeva and InterPrivate SPAC announce that Sylebra Capital (Hong Kong) invests $200M on top of the investment in the merger deal between Aeva and InterPrivate. The combined proceeds from this financing, the previously announced private placement, and InterPrivate's cash in trust are now expected to exceed $560M.


PRNewswire: Sense Photonics announces that it has achieved an industry-first by successfully demonstrating 200-meter detection with its proprietary global shutter flash LiDAR system.

The Sense system uses proprietary emitter and SPAD sensor technologies. Sense Silicon, a BSI SPAD device with more than 140,000 pixels, is designed to work seamlessly with the Sense Illuminator, a distributed 940nm laser array of more than 15,000 VCSELs. Together, they are the core building blocks of Sense's camera-like architecture, enabling the first high-resolution, eye-safe, global shutter flash LiDAR that can detect 10% reflective targets at a range of 200 meters in full sunlight while outputting tens of millions of points per second. Global shutter acquisition sets a new standard in the long-range LiDAR industry by removing the need for complex motion blur correction while allowing pixel-level, frame-by-frame fusion with RGB camera data.

Sense Photonics says that "Our core technology of VCSELs and SPADs can be paired with different lenses and diffusers to create short-range and long-range products of various FOV and resolutions. We can go as wide as 180 x 90 FOV with uniform angular resolution that exceeds 0.4 degrees for short-range needs or we can go as narrow as 15 x 7.5 to achieve uniform angular resolution of 0.025 degrees for mid or long-range needs. The inherent architecture flexibility enables us to deliver on a broad range of automotive applications. More to come in a forthcoming announcement."

"We have delivered what industry experts thought was impossible due to our 940nm wavelength, and have created a revolutionary new architecture with the Sense Illuminator, Sense Silicon, and our state-of-the-art signal processing pipeline to miniaturize the data output," said Hod Finkelstein, CTO, Sense Photonics. "Our LiDAR systems will solve the shortcomings that OEMs, Tier 1 suppliers, and Robotaxi companies have been dealing with in competing LiDAR technologies."

Geared for mass-market automotive adoption, Sense uses mature manufacturing and cost-effective assembly processes used in today's consumer technology industries. Sense's flash architecture eliminates the need for fine alignment between emitter and receiver, maintaining sensor calibration and depth accuracy under shock and vibration. Additionally, the architecture is designed as a platform that allows for customer-specific product variations with a simple change in optics, and is the first to provide both short- and long-range capabilities from the same architecture.

Customer evaluation systems are being finalized and will be available in mid-2021 to meet current demand, with start of production planned for late 2024.

The Fraunhofer Institute for Photonic Microsystems (IPMS) in Dresden designs and manufactures MEMS mirrors for AEye LiDARs:

"The MEMS scanner is made of monocrystalline silicon, a material with several advantages: it is not only robust and proof against material fatigue but it has a high temperature and shock resistance. The silicon has a reflective coating that intensifies the reflection of the light. Thanks to positioning technology integrated in the chip, it is possible to continuously track where the mirror steers the laser beam and which position is being measured. This in turn enables correction to the point of operation."


SemiconductorEngineering publishes a Xilinx promotional article saying that ZVision, Robosense, Baraja, Benewake, Blickfeld, Hesai, Innovusion, Opsys, OURS, Ouster, Phantom Intelligence, Pointcloud, SureStar, and many others use the Xilinx platform for their LiDAR signal processing.

Omnivision Announces 40MP Sensor with 1.008um Pixel and Multi-Sampling CDS

Image Sensors World        Go to the original article...

BusinessWire: OmniVision announces the OV40A, a 40MP, 1.008um pixel sensor that features super high gain and de-noise technologies in the 1/1.7” optical format. This sensor also offers multiple HDR options and supports 1080p slow-motion and high-speed video captures at 240 fps with PDAF.

“TSR estimates there will be 855 million image sensors with 40MP or higher resolution shipped to smartphone manufacturers in 2021, which presents a huge opportunity for this new image sensor,” said James Liu, senior technical marketing manager at OmniVision. “The OV40A’s unique combination of features is bringing flagship-level performance to the main, wide, ultrawide and video cameras in this fast-growing market segment.”

The OV40A supports super high gain of up to 256x with embedded multi-sampling de-noise functionality for enhanced low-light performance. This sensor also offers HDR through selective conversion gain, along with 2- and 3-exposure staggered HDR timing. 
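
The benefit of multi-sampling de-noise can be illustrated with a toy simulation: averaging N reads of the same pixel reduces temporal read noise by roughly the square root of N. The numbers below are illustrative, not OV40A specifications:

```python
import numpy as np

rng = np.random.default_rng(0)
sigma = 2.0            # single-read noise, e- rms (illustrative)
n_samples = 4          # reads averaged per pixel
n_trials = 200_000     # pixels simulated

reads = rng.normal(0.0, sigma, size=(n_trials, n_samples))
averaged = reads.mean(axis=1)
print(round(averaged.std(), 2))  # ~ sigma / sqrt(n_samples) = 1.0
```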

Built on OmniVision’s PureCel Plus-S stacked die technology, the OV40A integrates an on-chip, 4-cell (4C) color filter array and hardware remosaic, which provides high quality, 40MP Bayer output in real time. For low-light conditions, this sensor can use near-pixel binning to output a 10MP image, as well as 4K2K and 1080p video, with four times the sensitivity, yielding 2.0um pixel-equivalent low-light performance.
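
The near-pixel binning step can be sketched in a few lines: in a quad-Bayer (4C) mosaic each color occupies an aligned 2x2 block, so summing each block trades resolution for roughly 4x signal — about 2.0um pixel-equivalent sensitivity from 1.008um pixels. This is an illustrative sketch, not OmniVision's implementation:

```python
import numpy as np

def bin_4c(raw):
    """Sum each aligned 2x2 same-color cell of a 4C (quad Bayer) mosaic."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.ones((8, 8))              # toy mosaic, unit signal per pixel
binned = bin_4c(raw)
print(binned.shape, binned[0, 0])  # (4, 4) 4.0
```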

Output formats include 40MP at 30 fps, 10MP with 4C binning at 120 fps, 4K2K video at 60 fps and 1080p video at 240 fps. All of these formats can be captured with PDAF. Other features include a CPHY interface, multi-camera sync and a 34.7 degree CRA.

Samples of the new OV40A image sensor are available now.

sCMOS Sensors Stand-Off

Image Sensors World        Go to the original article...

Arxiv.org paper "Evaluation of scientific CMOS sensors for sky survey applications" by Sergey Karpov, Armelle Bajat, Asen Christov, Michael Prouza, and Grigory Beskin from the Institute of Physics, Czech Academy of Sciences, Prague, Czech Republic, and Kazan Federal University, Kazan, Russia compares a BAE-Fairchild Imaging sCMOS sensor with a Gpixel one:

"Scientific CMOS image sensors are a modern alternative for a typical CCD detectors, as they offer both low read-out noise, large sensitive area, and high frame rates. All these makes them promising devices for a modern wide-field sky surveys. However, the peculiarities of CMOS technology have to be properly taken into account when analyzing the data. In order to characterize these, we performed an extensive laboratory testing of two Andor cameras based on sCMOS chips -- Andor Neo and Andor Marana. Here we report its results, especially on the temporal stability, linearity and image persistence."


"...we may safely conclude that Andor Marana sCMOS is indeed a very promising camera for a sky survey applications, especially requiring high temporal resolution, and exceeds Andor Neo in nearly all aspects of it."

Omnivision Presents 1.998um NIR-enhanced Pixel and Sensor

Image Sensors World        Go to the original article...

BusinessWire: OmniVision announces the OS04C10, a 1.998um pixel, 4MP sensor for both IoT and home security cameras.

“AI-enabled IoT and home security cameras require excellent performance across all lighting conditions for accurate algorithm detection of faces, license plates and other items. Additionally, these cameras are often battery-powered,” said Cheney Zhang, senior marketing manager for the security segment at OmniVision. “The OS04C10 maintains the same high 4MP resolution as our popular OV4689 sensor, while adding improved NIR, ultra low light and HDR performance for these IoT and home security cameras, along with a new ultra low power mode that consumes 98.9% less power than the normal mode for longer battery life.”

The sensor features NIR response-enhancing Nyxel technology and 2-exposure HDR.

Best vegetarian meals

Cameralabs        Go to the original article...

I love cooking and wanted to increase the number of vegetarian dishes for everyday family meals. Here are my favourites: quick, easy and very tasty!…

Omnivision Unveils 32MP Sensor with 0.702um Pixels for Selfies

Image Sensors World        Go to the original article...

BusinessWire: OmniVision announces the OV32B sensor, featuring a 0.702um pixel to provide 32 MP resolution in a 1/3” optical format. The OV32B also supports 2- and 3-exposure HDR timing for up to 8MP video modes and still previews. It features a 4-cell color filter array and on-chip hardware re-mosaic, which provides high quality, 32 MP Bayer output in real time—a feature that can be challenging for competitors to achieve in the 1/3” optical format.

“In 2021, TSR estimates there will be 244 million image sensors with 32 MP resolution shipped to smartphone manufacturers for use in selfie cameras,” said Arun Jayaseelan, staff marketing manager at OmniVision. “The OV32B strikes the perfect balance between small size and high resolution that the designers of these cameras are looking for.”

To boost autofocus accuracy, especially in low light, this sensor offers the option to integrate type-2, 2x2 microlens phase detection autofocus (ML-PDAF). It also provides a CPHY interface for greater throughput using fewer pins, as well as a DPHY interface. Output formats include 32 MP at 15 fps, 8 MP at 60 fps and 6 MP (16:9) at 90 fps—all with 4-cell binning. Additionally, the sensor can output 1080p video at 120 fps, 1.5 MP captures (16:9) at 240 fps and 720p video at 360 fps.

Samples of the OV32B image sensor are available now.

Teledyne to Acquire FLIR for $8B

Image Sensors World        Go to the original article...

Teledyne and FLIR jointly announce that they have entered into a definitive agreement under which Teledyne will acquire FLIR in a cash and stock transaction valued at approximately $8.0 billion.

“At the core of both our companies is proprietary sensor technologies.  Our business models are also similar: we each provide sensors, cameras and sensor systems to our customers.  However, our technologies and products are uniquely complementary with minimal overlap, having imaging sensors based on different semiconductor technologies for different wavelengths,” said Robert Mehrabian, Executive Chairman of Teledyne.  “For two decades, Teledyne has demonstrated its ability to compound earnings and cash flow consistently and predictably.  Together with FLIR and an optimized capital structure, I am confident we shall continue delivering superior returns to our stockholders.”

Materials Recognition with ToF Camera

Image Sensors World        Go to the original article...

Springer Machine Vision and Applications Journal publishes a paper "Classification of materials using a pulsed time-of-flight camera" by ShiNan Lang, Jizhong Zhang, Yiheng Cai, Xiaoqing Zhu, and Qiang Wu from Beijing University of Technology, China.

"We propose an innovative method of material classification based on the imaging model of pulsed time-of-flight (ToF) camera integrated with the unique signature that describes physical properties of each material named reflection point spread function (RPSF). First, the optimization method reduces the effect of material surface interreflection, which would affect RPSF and lead to decreased accuracy in classification, by alternating direction method of multipliers (ADMM). A method named feature vector normalization is proposed to extract material RPSF features. Second, according to the nonlinearity of the feature vectors, the structure of hidden layer neurons of radial basis function (RBF) neural network is optimized based on singular value decomposition (SVD) to improve generalization. Finally, the similar appearance of plastics and metals are classified on turntable-based measurement system by own design. The average classification accuracy reaches 93.3%, and the highest classification accuracy reaches 94.6%."

Omnivision Announces Stacked AI Processor for DMS Applications

Image Sensors World        Go to the original article...

Businesswire: OmniVision announces the OAX8000, an AI-enabled automotive ASIC for entry-level, stand-alone driver monitoring systems (DMS). The OAX8000 uses a stacked-die architecture to provide the industry’s only DMS processor with on-chip DDR3 SDRAM memory (1GB). This is also the only dedicated DMS processor to integrate a neural processing unit (NPU) and an ISP, providing dedicated processing speeds of up to 1.1 trillion operations per second for eye gaze and eye tracking algorithms. These fast processing speeds with 1K MAC of CNN acceleration, along with integrated SDRAM, enable the lowest power consumption available for DMS systems—the OAX8000 and an OmniVision automotive image sensor consume just 1W combined in typical conditions. Further optimizing DMS systems, this integration also reduces the board area for the engine control unit (ECU).

According to Yole Développement, the accelerated market drive for DMS is expected to generate a 56% CAGR between 2020 and 2025. This is being driven by the European Union’s Euro NCAP requirement that all new cars sold in the region have a DMS camera by 2022.

“Most DMS processors on the market today are not dedicated to this application, requiring added circuitry to perform other system functions that consumes more power, occupies more board space and doesn’t allow room for on-chip SDRAM,” said Brian Pluckebaum, automotive product marketing manager at OmniVision. “By focusing the design of our OAX8000 ASIC on entry-level DMS, we were able to create the automotive industry’s most optimized solution.”

The OAX8000’s on-chip NPU is supported by the popular TensorFlow, Caffe, MXNet and ONNX tool chains. Additionally, this ASIC embeds quad Arm Cortex A5 CPU cores with Neon technology for accelerated video encoding/decoding and on-chip video analytics algorithms, along with hardware for image processing, video encoding and RGB/IR processing. Its HDR processing capability allows the ASIC to accept input from RGB/IR image sensors. The integrated video encoder accepts up to 5MP captures from OmniVision’s automotive image sensors, and outputs up to 2K resolution video at 30fps.

Boot-up time for the OAX8000 is significantly faster than its nearest competitor. This rapid startup eliminates any delay between ignition and activation of the DMS camera. Additionally, it supports secure boot features to provide cybersecurity.

Other applications include processing occupant detection algorithms, such as distinguishing a baby from a grocery bag, and providing alerts when objects are left behind in the vehicle. Additionally, this ASIC can be used in automotive video security systems to perform functions such as FaceID, as well as preset driver-comfort settings (e.g., seat position) that are activated when the DMS first scans the driver’s face.

Samples of the new OAX8000 ASIC are available now. It is AEC-Q100 Grade 2 certified for automotive applications.

Canon RF 24-105mm f4-7.1 STM review

Cameralabs        Go to the original article...

The Canon RF 24-105mm f4-7.1 STM is a compact, light and affordable general-purpose zoom for the full-frame EOS R mirrorless system. Costing less than half the price of the RF 24-105mm f4L, the cheaper STM version actually turned out to be a lot better than I expected and in my review I’ll show you why it’s the kit zoom the RF system has been waiting for!…

Current and Future Technologies of Capsule Endoscopy

Image Sensors World        Go to the original article...

Archives of Preventive Medicine publishes a paper "Analysis of current and future technologies of capsule endoscopy: A mini review" by Alexander P Brown and Ahalapitiya H Jayatissa from the University of Toledo, OH, USA.

"Many existing methods of endoscopy can be very uncomfortable and potentially even painful for a patient. Using a conventional endoscope is also limited in its usable range, unable to access a majority of the small bowel. Recent advancements in LEDs, optical design, and MEMS (microelectromechanical systems) technologies have provided the ability to create a wireless endoscope. Since its inception, the capsule endoscope has seen advancements in existing technology as well as the introduction of new components. As the capsule endoscope continues to advance, more application possibilities will grow as well."

History and Future of Radiation Imaging at CERN

Image Sensors World        Go to the original article...

Elsevier Radiation Measurements Journal publishes a paper "History and future of radiation imaging with single quantum processing pixel detectors" by Erik H.M. Heijne from Czech Technical University in Prague.

"This introductory article treats aspects of the evolution of early semiconductor detectors towards modern radiation imaging instruments, now with millions of signal processing cells, exploiting the potential of silicon nano-technology. The Medipix and Timepix assemblies are among the prime movers in this evolution. Imaging the impacts in the detecting matrix from the individual ionizing particles and photons can be used to study these elementary quanta themselves, or allows one to visualize various characteristics of objects under irradiation. X-ray imaging is probably the most-used modality of the latter, and the new imagers can process each single incident X–photon to obtain an image with additional information about the structure and composition of the object. The atomic distribution can be imaged, taking advantage of the energy-specific X-ray absorption. A myriad of other applications is appearing, as reported in the special issue of this journal. As an example, in molecular spectroscopy, the sub-nanosecond timing in each pixel can deliver in real-time the mapping of the molecular composition of a specimen by time-of-flight for single molecules, a revolution compared with classical gel electrophoresis. References and some personal impressions are provided to illuminate radiation detection and imaging over more than 50 years. Extrapolations and wild guesses for future developments conclude the article."

Thesis on Low Power ToF Imaging

Image Sensors World        Go to the original article...

MIT publishes a PhD Thesis "Algorithms and systems for low power time-of-flight imaging" by James Noraky.

"Time-of-flight (ToF) cameras are appealing depth sensors because they obtain dense depth maps with minimal latency. However, for mobile and embedded devices, ToF cameras, which obtain depth by emitting light and estimating its roundtrip time, can be power-hungry and limit the battery life of the underlying device. To reduce the power for depth sensing, we present algorithms to address two scenarios. For applications where RGB images are concurrently collected, we present algorithms that reduce the usage of the ToF camera and estimate new depth maps without illuminating the scene. We exploit the fact that many applications operate in nearly rigid environments, and our algorithms use the sparse correspondences across the consecutive RGB images to estimate the rigid motion and use it to obtain new depth maps.

Our techniques can reduce the usage of the ToF camera by up to 85%, while still estimating new depth maps within 1% of the ground truth for rigid scenes and 1.74% for dynamic ones. When only the data from a ToF camera is used, we propose algorithms that reduce the overall amount of light that the ToF camera emits to obtain accurate depth maps. Our techniques use the rigid motions in the scene, which can be estimated using the infrared images that a ToF camera obtains, to temporally mitigate the impact of noise. We show that our approaches can reduce the amount of emitted light by up to 81% and the mean relative error of the depth maps by up to 64%. Our algorithms are all computationally efficient and can obtain dense depth maps at up to real-time on standard and embedded computing platforms.

Compared to applications that just use the ToF camera and incur the cost of higher sensor power and to those that estimate depth entirely using RGB images, which are inaccurate and have high latency, our algorithms enable energy-efficient, accurate, and low latency depth sensing for many emerging applications."
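
The rigid-motion step at the heart of these algorithms can be sketched with the standard Kabsch/SVD least-squares solution. The function name and toy setup below are ours; the thesis's actual pipeline (sparse correspondence selection, validation, depth warping) is considerably more involved:

```python
import numpy as np

def estimate_rigid_motion(P, Q):
    """Least-squares rigid transform (R, t) with Q ~= P @ R.T + t,
    estimated from matched 3D points via the Kabsch/SVD method."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation about z and a translation.
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0,            0.0,           1.0]])
t_true = np.array([0.1, -0.2, 0.5])
P = np.random.default_rng(1).normal(size=(50, 3))
Q = P @ R_true.T + t_true
R, t = estimate_rigid_motion(P, Q)
```

Once (R, t) is known, the previous depth map's 3D points can be transformed and re-projected to predict the new depth map without firing the ToF illuminator.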

Millimeter Wave Sensing Review

Image Sensors World        Go to the original article...

Eindhoven University of Technology publishes an Arxiv.org paper "Millimeter Wave Sensing: A Review of Application Pipelines and Building Blocks" by Bram van Berlo, Amany Elkelany, Tanir Ozcelebi, and Nirvana Meratnia.

"The millimeter wave spectrum is part of 5G and covers frequencies between 30 and 300 GHz corresponding to wavelengths ranging from 10 to 1 mm. Although millimeter wave is often considered as a communication medium, it has also proved to be an excellent 'sensor', thanks to its narrow beams, operation across a wide bandwidth, and interaction with atmospheric constituents. In this paper, which is to the best of our knowledge the first review that completely covers millimeter wave sensing application pipelines, we provide a comprehensive overview and analysis of different basic application pipeline building blocks, including hardware, algorithms, analytical models, and model evaluation techniques. The review also provides a taxonomy that highlights different millimeter wave sensing application domains. By performing a thorough analysis, complying with the systematic literature review methodology and reviewing 165 papers, we not only extend previous investigations focused only on communication aspects of the millimeter wave technology and using millimeter wave technology for active imaging, but also highlight scientific and technological challenges and trends, and provide a future perspective for applications of millimeter wave as a sensing technology."
