Pig Facial Recognition

Image Sensors World        Go to the original article...

CounterpointResearch: Chinese companies Alibaba, JD.com, and Tencent are developing AI-based Smart Agriculture platforms that they believe will lead to improved agricultural efficiency, particularly with respect to pig rearing. Pig facial recognition works in a similar way to human facial recognition, recording details of the pig’s eyes, ears, snout and bristles.

Start-up Yingzi Technology, one of the first Chinese companies to come up with a pig facial recognition system, is trialing its system on a farm with 3,000 pigs. Yingzi claims that identification accuracy is more than 98% and that a pig can be identified even if it is moving in a herd.


Low Cost SWIR Options


IMVE publishes an article "Saving on SWIR" that discusses options cheaper than InGaAs for SWIR imaging. One of them is Imec's quantum dot technology:

"Quantum dots are nanocrystals that, depending on their size, offer different light absorption properties. For example, particles approximately 3nm in size absorb at 940nm, while particles around 5.5nm in size absorb at 1,450nm. The pixel stacks of the new sensor can be tuned to target a spectrum from visible light all the way up to 2µm wavelength.

‘Right now there isn’t much of a SWIR imaging market, because there is such a high [price] threshold for acquiring a SWIR camera,’ said Pawel Malinowski, Imec’s thin-film imagers programme manager. ‘In a lot of machine vision applications people are not using SWIR because they cannot get a camera, so what we are hoping for is that because we can offer SWIR imaging at orders of magnitude lower price, then new applications will pop up.’

The first generation of Imec’s quantum dot sensor has a resolution of 758 x 512 pixels and a pixel pitch of 5µm. According to Malinowski, however, the second-generation chips, currently being tested, will have a pixel pitch as low as 1.8μm. He noted that the typical pixel pitch of an InGaAs sensor is between 15μm and 20μm.

Despite the lower fabrication cost and higher resolutions achievable with the new sensor technology, Malinowski said quantum efficiency – the performance achieved for the amount of light – will only be around 30 to 40 per cent; InGaAs sensors are able to offer 80 to 90 per cent quantum efficiency. He added: ‘I think that InGaAs will remain unbeatable in terms of high-end performance for the time being.’
"
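The two size-to-wavelength data points quoted above can be turned into a rough interpolation. This is purely illustrative: the real relationship follows nonlinear quantum-confinement physics, and the function name and linear model are simplifying assumptions, not Imec's actual tuning curve.

```python
# Naive linear interpolation between the two dot sizes quoted above
# (3 nm -> 940 nm, 5.5 nm -> 1450 nm). The real size-to-wavelength
# relationship is nonlinear (quantum confinement), so treat this purely
# as an illustration of the quoted data points.

def absorption_nm(dot_size_nm: float) -> float:
    """Interpolate absorption wavelength from quantum dot size."""
    s1, w1 = 3.0, 940.0    # quoted: ~3 nm dots absorb at 940 nm
    s2, w2 = 5.5, 1450.0   # quoted: ~5.5 nm dots absorb at 1450 nm
    slope = (w2 - w1) / (s2 - s1)  # ~204 nm of wavelength per nm of size
    return w1 + slope * (dot_size_nm - s1)

print(absorption_nm(3.0))   # 940.0
print(absorption_nm(5.5))   # 1450.0
```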

SWIR Vision Systems is also pursuing quantum dots:

"Quantum dot-based SWIR imaging technologies are also available from US-based SWIR Vision Systems, which has been selling its Acuros colloidal quantum dot (CQD) VIS-SWIR cameras from Q3 2018. The cameras are available in VGA (640 x 512 pixels), one-megapixel (1,280 x 1024 pixels), and full HD (1,920 x 1,080 pixels) formats.

‘Demand for these cameras has been increasing throughout 2019,’ said George Wildeman, CEO of SWIR Vision Systems, who remarked that the 1,920 x 1,080-pixel model, which has a resolution six times higher than the current standard 640 x 512-pixel InGaAs cameras, is the first of its kind to be commercially available. ‘There are a few high-resolution InGaAs cameras with 1,280 x 1,024-pixel sensor arrays, but these are very high cost,’ he said. ‘It is a big challenge to scale InGaAs cameras to larger array sizes without a large increase in their price point.’
"


Yet another option is Emberion graphene imagers:

"The first product samples of the sensor, which offers VGA resolution, 20µm pixel pitch, 100fps frame rate, and a spectral range from 400nm to 2,000nm, will be available in June 2020.

‘This wide spectral range is the key advantage that our sensor provides over standard InGaAs sensors, which tend to go between 900nm to 1,700nm,’ said Jyri Hämäläinen, director of sales and marketing at Emberion. ‘Beyond 1,700nm is usually called “extended InGaAs”, and it is here that InGaAs technology becomes very expensive. In comparison our sensor is much more affordable while being able to detect these wavelengths.’
"


Thanks to TL for the link!


Vision Processing Limitations in Stacked Image Sensors


Arizona State University, Tempe, publishes an arxiv.org paper "Stagioni: Temperature management to enable near-sensor processing for energy-efficient high-fidelity imaging" by Venkatesh Kodukula, Saad Katrawala, Britton Jones, Carole-Jean Wu, and Robert LiKamWa.

"Many researchers advocate pushing processing close to the sensor to substantially reduce data movement. However, continuous near-sensor processing raises the sensor temperature, impairing the fidelity of imaging/vision tasks. We characterize the thermal implications of using 3D stacked image sensors with near-sensor vision processing units. Our characterization reveals that near-sensor processing reduces system power but degrades image quality. For reasonable image fidelity, the sensor temperature needs to stay below a threshold, situationally determined by application needs. Fortunately, our characterization also identifies opportunities -- unique to the needs of near-sensor processing -- to regulate temperature based on dynamic visual task requirements and rapidly increase capture quality on demand. Based on our characterization, we propose and investigate two thermal management strategies -- stop-capture-go and seasonal migration -- for imaging-aware thermal management. We present parameters that govern the policy decisions and explore the trade-offs between system power and policy overhead. Our evaluation shows that our novel dynamic thermal management strategies can unlock the energy-efficiency potential of near-sensor processing. For our evaluated tasks, our strategies save up to 53% of system power with negligible performance impact and sustained image fidelity."
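The paper's stop-capture-go policy can be sketched as a simple hysteresis loop on sensor temperature: pause near-sensor processing when the die exceeds a fidelity threshold and resume once it has cooled. The thresholds, function name, and control logic below are hypothetical simplifications, not the paper's actual policy parameters.

```python
# A minimal "stop-capture-go"-style sketch: stop processing whenever the
# sensor temperature exceeds a fidelity threshold, and resume only after
# it has cooled by a margin (hysteresis avoids rapid toggling). All
# names and thresholds here are hypothetical.

def stop_capture_go(temps_c, fidelity_limit_c=55.0, resume_margin_c=5.0):
    """Return a processing on/off decision for each temperature sample."""
    decisions = []
    processing = True
    for t in temps_c:
        if processing and t > fidelity_limit_c:
            processing = False               # stop: image fidelity at risk
        elif not processing and t < fidelity_limit_c - resume_margin_c:
            processing = True                # go: sensor has cooled enough
        decisions.append(processing)
    return decisions

print(stop_capture_go([50, 56, 54, 52, 49]))  # [True, False, False, False, True]
```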


LiDAR News: Velodyne, Luminar, Innovusion, Insight, Outsight, Baraja, Ouster, Sony, Mobileye


BusinessWire: Velodyne announces Anand Gopalan as its new CEO. Gopalan, who previously was Velodyne’s CTO, assumes the position from Velodyne’s legendary founder David Hall. Hall will continue as full-time Chairman of the Board and remain actively involved in directing the company’s technology, product vision and business strategy.

“David Hall is more than the founder of Velodyne, he is also the founder of our industry. I am grateful for the trust he has placed in me and excited to lead a company with such a deep history in innovation as Velodyne. We are at the forefront of our market, ready to drive the age of autonomy. Velodyne is bringing improved mobility and safety through versatility, responsiveness and agility,” said Gopalan.


BusinessWire: Luminar attempts to switch to a recurring revenue model. The company introduces Hydra Perception Compute Unit (PCU) reference design powered by the NVIDIA Xavier SoC. This solution is said to substantially shorten the industry timelines, enabling autonomy to be commercialized in production in 2022.

Hydra begins shipping this quarter and is available through a new subscription model -- the first of its kind for LiDAR. With the release of Hydra, Luminar has transitioned its core business from selling sensors to a subscription-based service for its autonomous vehicle development partners that enables a deeper integration throughout development cycles, increasing development speed as well as enabling more focused feature development.

“Luminar LiDAR is now the established industry gold standard for performance and safety, and the perfect platform to enable the dramatic software and perception improvements required for automakers to transition from test vehicles to commercial autonomy,” said Austin Russell, Founder and CEO, Luminar. “We’ve been quietly developing Hydra, the most advanced 3D perception system in the industry, for over three years now and it’s time for our 40 partners and the rest of the world to see it.”

Hydra is an integrated product of three key self-driving technologies:
  • Luminar’s LiDAR, built from the chip-level up;
  • Luminar’s new software suite, built and optimized specifically for Luminar LiDAR;
  • Luminar’s new perception computer, a reference design built on the NVIDIA Xavier SoC


BusinessWire: Innovusion announces Falcon long-range LiDAR. With a vertical and horizontal resolution of 0.07 degrees at 10 fps and a FoV of 110 degrees x 30 degrees, Falcon reaches a range of 120 meters on pedestrians for the entire 110-degree FoV.
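As a back-of-envelope check, the quoted angular resolution and field of view imply roughly the following point budget. This is illustrative arithmetic only; Innovusion does not publish point rates in this announcement.

```python
# Back-of-envelope point budget from the quoted Falcon specs:
# 110 x 30 degree field of view at 0.07 degree resolution, 10 fps.
h_points = 110 / 0.07          # ~1571 points across the horizontal FoV
v_points = 30 / 0.07           # ~429 points across the vertical FoV
per_frame = h_points * v_points
per_second = per_frame * 10    # 10 frames per second
print(round(per_frame))        # ~673,000 points per frame
print(round(per_second))       # ~6.7 million points per second
```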

“In the last year, there has been an industry-wide delay in roadmaps for the deployment of Level 4 autonomous driving while driver assistance technology has become more prevalent. There are still gaps in autonomous system performance and disengagement rates are still too high,” said Ian Zhu, Managing Partner at NIO Capital. “We are confident that with the release of Falcon, Innovusion is enabling the automotive industry to take the necessary steps towards LiDAR adoption for the greater market, whether that is in self-driving cars or otherwise.”


BusinessWire: Insight LiDAR announced its Digital Coherent LiDAR, an ultra-high resolution, long-range LiDAR sensor targeted at the emerging autonomous vehicle (AV) market. Among the breakthroughs built into Digital Coherent LiDAR are:

  • Long Range – 200 meters to 10 percent reflectivity targets
  • Ultra-High resolution – up to 0.025 x 0.025 degrees
  • Large Field of View – 120 x 340 degrees
  • Direct Doppler velocity in every pixel
  • True solid-state, flexible fast-axis scanning
  • Complete immunity from sunlight and other lidar
  • Low-cost chip scale, all-semiconductor approach

Insight LiDAR’s patent portfolio covers not only the design and control of the laser source, critical for the FMCW detection technique, but also includes key system IP enabling Insight’s high-resolution, foveation, large field of view and long-range performance.


LaTribune: As reported earlier, Outsight has raised a $20M seed investment to develop a hyperspectral LiDAR that analyzes object material simultaneously with distance. The production model is expected to be completed by 2021.


BusinessWire: Baraja, developer of Spectrum-Scan LiDAR, unveils its sensing platform with inherent interference immunity. Baraja LiDAR is said to be the only system available today using randomly modulated continuous wave technology, which completely blocks interference from other LiDARs and environmental light sources.

“Sensor interference is one of the leading causes of disengagements for autonomous vehicles today and the issue will only continue to grow as more LiDAR-equipped vehicles hit the road,” said Baraja Co-Founder and CEO, Federico Collarte. “Interference risks leaving the vehicle with blind-spots, and driving blind is obviously unacceptable. Our experience developing technology in the telecom industry uniquely positions Baraja to address the problem of interference by encoding the light transmitted by our laser, using the same mature, volume-produced components that encode information for interference-free communications.”

Interference occurs when a LiDAR transmits laser light and picks up another source of light, from a different laser or environmental source, like bright sunlight, creating errors and uncertainty that manifest as vehicle blind spots. Today, this situation triggers the autonomous technology to disengage and hand over to the safe driver.

Baraja is addressing interference at the sensor level with its Spectrum-Scan technology, which forms the basis of its sensing platform. Spectrum-Scan works by rapidly switching the laser’s wavelength and transmitting light through a prism, which diffracts each color of light in a different direction. When the light returns to the sensor, it is only processed if wavelength, angle, timing and encoding match on all signals, ensuring immunity to interference. Baraja’s LiDAR operates at 1550nm and exceeds the industry long-range sensing requirement of detecting 10% reflectivity objects at more than 200m.
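The four-way matching rule described above can be sketched as a simple gating predicate. All field names and tolerances below are hypothetical, since Baraja's actual implementation is not public.

```python
# Sketch of the return-gating rule described above: a detected pulse is
# accepted only when its wavelength, arrival angle, timing window, and
# code all match what was transmitted. Field names and tolerances are
# hypothetical illustrations.

def accept_return(tx, rx, wl_tol_nm=0.1, angle_tol_deg=0.05, window_s=2e-6):
    return (abs(rx["wavelength_nm"] - tx["wavelength_nm"]) <= wl_tol_nm
            and abs(rx["angle_deg"] - tx["angle_deg"]) <= angle_tol_deg
            and 0.0 <= rx["time_s"] - tx["time_s"] <= window_s
            and rx["code"] == tx["code"])

tx = {"wavelength_nm": 1550.0, "angle_deg": 12.0, "time_s": 0.0, "code": 0b1011}
echo = {"wavelength_nm": 1550.05, "angle_deg": 12.01, "time_s": 1e-6, "code": 0b1011}
other_lidar = {"wavelength_nm": 1550.05, "angle_deg": 12.01, "time_s": 1e-6, "code": 0b0110}
print(accept_return(tx, echo))         # True: all four checks match
print(accept_return(tx, other_lidar))  # False: wrong code, rejected
```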



BusinessWire: Ouster introduces the ultra-wide 90-deg FoV OS0-128 LiDAR. “High-resolution perception has always been reserved for expensive, long-range applications. That’s finally beginning to change,” said Angus Pacala, CEO and co-founder of Ouster. “With Ouster’s full range of 128-channel sensors, we have a complete high-resolution sensor suite for every application, and for short-range applications, the OS0-128 is in a class of its own.”

The OS0 and OS2 series offer a full range of resolution options, with the OS0 available with 32 or 128 channels, while the OS2 is available in 32, 64, and 128 configurations. The OS0-32 is priced at $6,000 and the OS0-128 at $18,000. The OS2-32 is priced at $16,000, the OS2-64 at $20,000 and the OS2-128 at $24,000.


PRNewswire: Sony is to demo its "Solid State LiDAR which uses highly accurate distance measurement to gain a precise 3D grasp of real-life spaces" this week at CES.

Meanwhile, Intel Mobileye presents VIDAR - LiDAR functionality with cameras only. The name VIDAR was coined in academic circles.


PoLight Announces First Design Win


poLight announces that its TLens AF is being used in a smartwatch for children, launched to market on 7th January 2020. The OEM is undisclosed. The watch has two cameras: one main camera used to take pictures, which includes an advanced autofocus (AF) function delivered by poLight, and one camera integrated in the screen, used as a face camera without AF.

“This is an important milestone for poLight and we are very proud to be included in this innovative flagship smartwatch,” said Øyvind Isaksen, CEO of poLight.


Infineon and PMD Present 5th Generation REAL3 Sensor


Webwire: Infineon has collaborated with software and 3D ToF system company pmdtechnologies to develop the world’s smallest 3D image sensor measuring just 4.4 x 5.1 mm. It can be incorporated into even the smallest devices with just a few elements.

“With the fifth generation of our REAL3 chip we are once again demonstrating our leading position in the field of 3D sensors,” says Andreas Urschitz, President of the Power Management and Multimarket Division at Infineon, which also includes sensor business. “It’s robust, reliable, powerful, energy efficient and at the same time decisively small. We see great growth potential for 3D sensors, since the range of applications in the areas of security, image use and context-based interaction with the devices will steadily increase.” The 3D sensor also allows the device to be controlled via gestures, so that human-machine interaction is context-based and without touch.

The new 3D image sensor chip (IRS2887C) was developed in Graz, Dresden and Siegen and combines the expertise of Infineon’s and pmdtechnologies’ German and Austrian locations. Series production will begin in the middle of 2020. In addition, Infineon Technologies offers an optimized illumination driver (IRS9100C) that further improves performance, size and cost as a complete solution.


BusinessWire: pmdtechnologies is presenting its latest 3D ToF camera module based on the 5th generation REAL3 ToF image sensor from pmd and Infineon. The IRS2877C offers VGA-resolution depth data output and a newly designed 5µm pmd pixel core.

The new VGA 3D imager is the highest resolution, most flexible and robust depth sensor ever developed by pmd and Infineon. “We’re passionate about setting new standards – and with the new VGA 3D camera module, which uses our IRS2877C imager, we did it again. Not only do we feature best-in-class performance, but we also provide the most dedicated depth-sensing platform for our customers to develop their 3D applications and bring them to market,” says Jochen Penne, Executive Board Member and Head of Business Development at pmd.


Automotive News: Bosch, Sense Photonics, Trieye


Bosch presents camera-based Virtual Visor:

"Bosch is offering a solution with the revolutionary Virtual Visor, a transparent LCD and intuitive camera, which replaces the traditional vehicle sun visor completely. As the first reimagined visor in nearly a century, Bosch’s technology utilizes intelligent algorithms to intuitively block the sun’s glare and not the view of the road ahead.

Virtual Visor links an LCD panel with a driver or occupant-monitoring camera to track the sun’s casted shadow on the driver’s face. The system uses artificial intelligence to locate the driver within the image from the driver-facing camera. It also utilizes AI to determine the landmarks on the face ‒ including where the eyes, nose and mouth are located ‒ so that it can identify shadows on the face. The algorithm analyzes the driver’s view, darkening only the section of the display through which light hits the driver’s eyes. The rest of the display remains transparent, no longer obscuring a large section of the driver’s field of vision.
"
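The selective darkening described above can be sketched as a tiny grid computation: project the driver's eye position onto the visor plane along the sun direction and darken only the LCD cell that the ray crosses. The grid size, coordinates, and one-step projection below are hypothetical simplifications of Bosch's system.

```python
# Conceptual sketch: darken only the LCD cell through which sunlight
# reaches the driver's eyes; leave the rest of the visor transparent.
# Grid dimensions, units, and the simplified projection are assumptions.

def visor_mask(eye_xy, sun_dir, grid_w=8, grid_h=4, cell_size=0.05):
    """Return a grid of booleans; True = darken that LCD cell."""
    # Project the eye position onto the visor plane along the sun ray
    # (simplified here to a fixed offset in visor-plane coordinates).
    hit_x = eye_xy[0] + sun_dir[0]
    hit_y = eye_xy[1] + sun_dir[1]
    col = int(hit_x / cell_size)
    row = int(hit_y / cell_size)
    return [[(r == row and c == col) for c in range(grid_w)]
            for r in range(grid_h)]

mask = visor_mask(eye_xy=(0.10, 0.05), sun_dir=(0.02, 0.01))
print(sum(cell for row in mask for cell in row))  # 1: only one cell darkened
```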




PRNewswire: Sense Photonics announces Osprey, a short range automotive flash LiDAR based on Infineon-PMD ToF sensor:

"Infineon is very excited to be working closely with Sense Photonics as it continues to push the limits in terms of near-field LiDAR solutions based on our automotive qualified REAL3 Time-of-Flight imager," said Christian Herzum, Head of 3D-Sensing product line at Infineon.

"Our simple, camera-like architecture is a significant benefit to customers looking for a scalable LiDAR product," said Sense Photonics CEO Scott Burroughs. "By eliminating mechanical-scanning mechanisms, we've made Osprey much more manufacturable than other approaches. We believe this is critical to bringing the vision of autonomous driving to life."

Sense Photonics is now accepting pre-orders, with initial product availability beginning in Q2 2020. The cost per unit is $3,200 (plus shipping).


TriEye shows the advantages of its SWIR camera.


OmniVision Unveils 48MP Smartphone Sensor with 1.2um Pixels


PRNewswire: OmniVision announces the OV48C, a 48MP image sensor with a large 1.2um pixel size for flagship smartphone cameras. The OV48C is the industry's first image sensor for high resolution mobile cameras with on-chip dual conversion gain HDR, which reduces motion artifacts and produces better SNR. This sensor also offers a staggered HDR option with on-chip combination for the maximum flexibility to select the best HDR method for a given scene.

"The combination of high resolution, large pixel size and high dynamic range is essential to providing the image quality required by flagship mobile phone designers for features such as night mode," said Arun Jayaseelan, staff marketing manager at OmniVision. "The OV48C is the only flagship mobile image sensor in the industry to offer the combination of high 48MP resolution, a large 1.2 micron pixel, high speed, and on-chip high dynamic range, which provides superior SNR, unparalleled low light performance and high quality 4K video."

Built on OmniVision's PureCel Plus stacked die technology, this 1/1.3" format sensor also integrates an on-chip, 4-cell color filter array and hardware remosaic, which provides 48MP Bayer output, or 8K video, in real time. In low light conditions, this sensor can use near-pixel binning to output a 12MP image for 4K2K video with four times the sensitivity, yielding a 2.4 micron-equivalent performance. The OV48C also uses 4C Half Shield phase detection for fast autofocus support.
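The near-pixel binning described above amounts to summing 2x2 neighborhoods, trading resolution for roughly four times the collected signal (48MP to 12MP). The sketch below is a generic software illustration, not OmniVision's on-chip pipeline.

```python
import numpy as np

# Sketch of 2x2 (4-cell) binning: summing each 2x2 neighborhood trades
# resolution for ~4x the collected signal, e.g. 48MP -> 12MP. Generic
# illustration only; the actual binning is done on-chip.

def bin_2x2(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 block; output is half-resolution, ~4x signal."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.ones((8, 8))          # uniform scene, one unit of signal per pixel
binned = bin_2x2(raw)
print(binned.shape)            # (4, 4): a quarter of the pixel count
print(binned[0, 0])            # 4.0: four times the per-pixel signal
```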

Output formats include 48MP at 15 fps, 12MP with 4-cell binning at 60 fps, and 4K2K video at 60 fps with the extra pixels needed for EIS. This sensor also offers 1080p video with slow motion support at 240 fps, as well as 720p at 360 fps. OV48C samples are available now.


Isorg Optical Fingerprint Promo Video


Isorg publishes a video promoting its optical fingerprint sensor for smartphones:



Some more info from the company's page:


SmartSens Releases "Full HD Pro" Sensors


PRNewswire: SmartSens releases two 3MP CMOS sensors, the SC3235 and SC3320, for webcams and security cameras. With the new SmartPixel-2 DSI technology, the SC3320 comes equipped with a larger 1/2.5" optical format and Full HD Pro pixel count. It also achieves a DR of up to 100dB at 60fps, supporting 2-exposure HDR and NIR (850nm-940nm) imaging. The SC3235 has a 1/2.7" 2304H x 1296V array and leverages SmartSens' mature SmartPixel architecture.


Ambarella, Lumentum, and ON Semi Collaborate on 3D Sensing


BusinessWire: Ambarella, Lumentum, and ON Semiconductor announce a joint 3D sensing platform for access control systems and smart video security products such as video doorbells and door locks. The platform is based on Ambarella’s CV25 CVflow AI vision SoC, structured-light powered by Lumentum’s VCSEL technology, and ON Semiconductor’s AR0237IR image sensor.

“ON Semiconductor’s RGB-IR sensor technology enables single sensor solutions to provide both visible and IR images in security and vision IoT applications,” said Gianluca Colli, VP and GM of the Commercial Sensing Division at ON Semiconductor. “Ambarella’s CV25 computer vision SoC, with its next-generation image signal processor (ISP), brings out the best image quality of our RGB-IR sensor, while providing powerful AI processing capability for innovative use cases in security applications.” Ambarella’s CV25 chip includes native support for RGB-IR CFA and HDR processing.

BusinessWire: Ambarella also announces CV22FS and CV2FS automotive camera ADAS SoCs with native support for RGGB, RCCB, RCCC, RGB-IR, and monochrome sensor formats:


Panasonic Presents Smart Fridge with Image Sensor


Panasonic publishes an arxiv.org paper "Smart Home Appliances: Chat with Your Fridge" by Denis Gudovskiy, Gyuri Han, Takuya Yamaguchi, and Sotaro Tsukizawa proposing an AI-equipped camera in a fridge:

"Current home appliances are capable to execute a limited number of voice commands such as turning devices on or off, adjusting music volume or light conditions. Recent progress in machine reasoning gives an opportunity to develop new types of conversational user interfaces for home appliances. In this paper, we apply state-of-the-art visual reasoning model and demonstrate that it is feasible to ask a smart fridge about its contents and various properties of the food with close-to-natural conversation experience. Our visual reasoning model answers user questions about existence, count, category and freshness of each product by analyzing photos made by the image sensor inside the smart fridge. Users may chat with their fridge using off-the-shelf phone messenger while being away from home, for example, when shopping in the supermarket. We generate a visually realistic synthetic dataset to train machine learning reasoning model that achieves 95% answer accuracy on test data. We present the results of initial user tests and discuss how we modify distribution of generated questions for model training based on human-in-the-loop guidance. We open source code for the whole system including dataset generation, reasoning model and demonstration scripts."


IEDM 2019: Sony SWIR Imager


Sony IEDM paper "High-definition Visible-SWIR InGaAs Image Sensor using Cu-Cu Bonding of III-V to Silicon Wafer" by S. Manda, R. Matsumoto, S. Saito, S. Maruyama, H. Minari, T. Hirano, T. Takachi, N. Fujii, Y. Yamamoto, Y. Zaizen, T. Hirano, and H. Iwamoto describes a process of bonding small InGaAs dies onto a Si wafer:

"We developed a back-illuminated InGaAs image sensor with 1280x1040 pixels at 5-um pitch by using Cu-Cu hybridization connecting different materials, a III-V InGaAs/InP of photodiode array (PDA), and a silicon readout integrated circuit (ROIC). A new process architecture using an InGaAs/InP dies-to-silicon wafer and Cu-Cu bonding was established for high productivity and pixel-pitch scaling. We achieved low dark current and high sensitivity for wavelengths ranging from visible to short-wavelength infrared (SWIR)."


BSI Pixel Passivation Quality Tracking


MDPI paper "Electrical Characterization of the Backside Interface on BSI Global Shutter Pixels with Tungsten-Shield Test Structures on CDTI Process" by Célestin Doyen, Stéphane Ricq, Pierre Magnan, Olivier Marcelot, Marios Barlas, and Sébastien Place from ST Micro and Université de Toulouse is a part of the Special issue on the 2019 International Image Sensor Workshop (IISW2019).

"A new methodology is presented using well known electrical characterization techniques on dedicated single devices in order to investigate backside interface contribution to the measured pixel dark current in BSI CMOS image sensors technologies. Extractions of interface states and charges within the dielectric densities are achieved. The results show that, in our case, the density of state is not directly the source of dark current excursions. The quality of the passivation of the backside interface appears to be the key factor. Thanks to the presented new test structures, it has been demonstrated that the backside interface contribution to dark current can be investigated separately from other sources of dark current, such as the frontside interface, DTI (deep trench isolation), etc."


"With these MOS capacitor and W-shield gate transistor test structures, it is possible to electrically characterize the backside interface of BSI technology at the end of a process using a tungsten shield. By means of two known characterization methods, Dit and NEFF, which are the two important parameters for dark current, can be extracted. It is, therefore, possible to investigate if the dark current mainly comes from the backside interface, and to discriminate the origin of the backside dark current.

In the case presented in this study, the difference in Idark behavior is explained by quality passivation differences of the backside interface between wafers. COCOS measurements are useful to characterize the interface just after a material deposit, however, it cannot be used with a fully processed wafer, unlike the methodology used on the new structures presented in this study. A drawback of this method is the presence of a charging effect that forces some caution on the execution of measurements, but this effect can be recovered and is not present in pixel operating conditions. In addition to these Idark contribution studies, these dedicated devices with associated characterizations can be helpful for process monitoring, TCAD calibration, and reliability works.
"


Omnivision Announces 11.3MP HDR Security Sensor, VGA Sensors with 2.2um GS Nyxel Pixels


PRNewswire: OmniVision announces the OS12D40, a 1.4um pixel, 11.3MP image sensor with an on-chip remosaic (4-cell to Bayer) color converter and on-chip HDR processing. When in full-HD 1080p mode, this sensor's 3-exposure HDR with on-chip combination and tone mapping provides best-in-class video capture. This is said to be superior to the competing method, known as staggered HDR, which relies on additional passes that introduce motion artifacts, especially in low light. Additionally, OmniVision's PureCel Plus-S stacked architecture enables each pixel to perform optimally to further improve HDR in scenes with widely contrasting bright and dark areas.

"With this new image sensor, we're setting the standard for best in class, mass market security camera performance," said David Shin, product marketing manager for the security segment at OmniVision. "This means both commercial and home security systems will now be able to better capture moving objects across all lighting conditions in full-HD 1080p mode, while using artificial intelligence (AI) or human operators to selectively take 4K2K images without HDR. The latter is important when the need for greater detail is identified, such as capturing an intruder's facial features or reading a car's license plate number. Additionally, we achieve 2.8 micron-equivalent pixel performance using 4-cell binning, to provide excellent low light image quality in 1080p mode."

Industry analysts predict that the security and surveillance camera market is growing at a more than 15% CAGR, and will exceed 400M units in 2024.

Other features include a large 1/2.49" optical format, 9 degree CRA, a 10b ADC and a 4-lane MIPI transceiver (2.5 Gbps/lane).

Integrated selective conversion gain technology allows the pixel conversion gain to be dynamically switched between low and high, depending on the scene being captured in combination with the sensor's other features, including PureCel Plus-S stacked pixel technology for reduced crosstalk and maximum QE.

The OS12D40 uses a 4-cell color filter pattern. It has an on-chip 4-cell to Bayer remosaic converter, in order to provide 4K video at 60fps with 20% additional pixels for EIS. In a 4-cell binned mode, it can output an impressive 2.8MP/1080p resolution with 20% additional pixels for EIS video and images at four times the sensitivity. This sensor also supports both CPHY and DPHY interfaces, and can output 11.3MP, 4512x2512 16:9 captures at 60fps, 4K video at 60fps and 1080p video at 240fps.

OS12D40 samples are available now in a fan-out and chip-scale wafer level package.


PRNewswire: OmniVision announces the expansion of its BSI GS sensor family with new VGA imagers that feature the industry's smallest pixel size of 2.2um—the OG0VA image sensor and OC0VA CameraCubeChip™ wafer-level camera module. Additionally, the OC0VA is the first CameraCubeChip with Nyxel technology. Both devices offer a high QE of 40% at 940nm and 60% at 850nm.

The OG0VA image sensor provides 640x480 VGA resolution at 240fps and 320x240 QVGA resolution at 480 fps, in the optical format of 1/10 inches. The OC0VA CameraCubeChip combines this sensor with image signal processing and optics into a 2.69 x 3.04 x 3.04mm wafer-level camera module. Additionally, their low light sensitivity is excellent, with significantly lower gain than the industry's typical 3.0um pixel size for an improved SNR.

"There is a growing need for global shutter technology at a variety of resolution levels to accurately capture the images of moving objects, along with excellent NIR performance and small size," said Devang Patel, senior staff marketing manager for the security and emerging segments at OmniVision. "The OG0VA and OC0VA expand our family of the industry's smallest GS imagers by providing VGA resolution options with the best NIR performance in a global shutter device."


OmniVision also continues its series of videos on its 8MP automotive sensor features.

Porsche to Adopt TriEye SWIR Sensor


VentureBeat, EETimes, JerusalemPost: TriEye is partnering with Porsche to use its SWIR sensing technology, with the hope of advancing the performance of ADAS and autonomous vehicles.

“Our collaboration with Porsche has been exceptional from day one, and we look forward to growing this potential,” said TriEye CEO and co-founder Avi Bakal. “The fact that Porsche, a leading car manufacturer, has decided to invest in TriEye and evaluate TriEye’s CMOS-based SWIR camera to help further improve Advanced Driver Assistance Systems is a significant vote of confidence in our technology.”

Porsche executive board and development member Michael Steiner says, “We see great potential in this sensor technology that paves the way for the next generation of driver assistance systems and autonomous driving functions. SWIR can be a key element: It offers enhanced safety at a competitive price.”

Go to the original article...

LiDAR News: Robosense, Bosch, Velodyne, Sony

Image Sensors World        Go to the original article...

BusinessWire, Thomas-PR: RoboSense solid-state LiDAR RS-LiDAR-M1Simple (Simple Sensor Version) is now ready for customer delivery, priced at $1,898. The new RS-LiDAR-M1Simple is less than half the size of the previous version, with dimensions of 4.3” x 1.9” x 4.7” (110mm x 50mm x 120mm), and its hardware performance is "virtually equal to the serial production version provided to OEMs." The main body design of this automotive-grade solid-state LiDAR is finalized and ready for shipment.

In addition, RoboSense will demonstrate the world’s first smart solid-state LiDAR, the RS-LiDAR-M1Smart (Smart Sensor Version) with an on-vehicle public road test. The RS-LiDAR-M1 family is said to have the performance advantages of traditional mechanical LiDAR, simultaneously also taking into consideration requirements for the mass production of vehicles. The RS-LiDAR-M1 family meets every automotive-grade requirement, including intelligence, low cost, stability, simplified structure and small size, vehicle body design friendliness, and algorithm processed semantic-level perception output results.

“The RS-LiDAR-M1 is an optimal choice for the serial production of self-driving cars, far superior to mechanical LiDAR. The sooner solid-state LiDAR is used, the sooner production will be accelerated to mass-market levels,” said Mark Qiu, RoboSense COO.

RS-LiDAR-M1 Family Features:
  • 125 laser beams: the RS-LiDAR-M1 has a 120° × 25° field of view, the largest of any released MEMS solid-state LiDAR worldwide. RoboSense uses low-cost, automotive-grade, compact 905nm lasers instead of expensive 1550nm lasers, yet extends the ranging limit to 150m on a 10%-reflectivity NIST target, which is also the longest detection range for a MEMS solid-state LiDAR. The frame rate of the RS-LiDAR-M1 is increased to 15Hz, which reduces point cloud distortion caused by target movement.
  • World’s smallest MEMS solid-state LiDAR: the size has been halved and is now one-tenth that of a conventional 64-beam mechanical LiDAR.
  • Reduced parts for lower cost, shorter production time, and large-scale production capacity: the part count has dropped from hundreds to dozens compared with traditional mechanical LiDARs, greatly reducing cost and shortening production time -- a breakthrough in manufacturability. The coin-sized module integrates the optical-mechanical system to meet autonomous driving performance and mass production requirements.
  • Modular design: the scalability and layout flexibility of the optical module lay the foundation for subsequent MEMS LiDAR products and support the customization of products for different application cases.
  • Stable and reliable: the RS-LiDAR-M1 uses VDA6.3 as the basis for project management, and the development of every module follows a complete V-model closed loop. RoboSense has fully implemented the IATF16949 quality management system and the ISO26262 functional safety standard, and combines ISO16750 test requirements with other automotive-grade reliability specifications to verify the RS-LiDAR-M1 series. The MEMS mirror is the core component of the RS-LiDAR-M1; following the AEC-Q100 standard and accounting for the characteristics of the MEMS micro-mirror, a total of ten verification test groups were designed covering temperature, humidity, packaging process, electromagnetic compatibility, mechanical vibration and shock, lifetime, and others. The cumulative test time across all samples now exceeds 100,000 hours. The RS-LiDAR-M1 uses 905nm lasers to achieve long range while meeting Class 1 laser safety. The longest-running prototype has been tested for more than 300 days, and total road-test mileage exceeds 150,000 kilometers, with no degradation found across the various testing scenarios.
  • All-weather: In Vienna, Austria, the RS-LiDAR-M1 was tested for rain and fog under different light and wind speed conditions. The test results prove that the RS-LiDAR-M1 has met the standards, and the final mass-produced RS-LiDAR-M1 will adapt to all climatic and working conditions.
  • Minimal wear and tear: as a solid-state LiDAR, the RS-LiDAR-M1 has minimal wear and tear vs. movable mechanical structures, eliminating potential optoelectronic device failures due to mechanical rotation. The characteristics of solid-state provide a reasonable internal layout, heat dissipation, and stability -- a leap in quality as compared to mechanical LiDAR.
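
The range and frame-rate figures in the list above lend themselves to a quick sanity check. The sketch below (illustrative Python, not RoboSense code) relates the 150m range to photon round-trip time, and the 15Hz frame rate to how far a moving target travels within one frame -- the point-cloud distortion the higher frame rate is meant to reduce.

```python
# Back-of-envelope checks on the RS-LiDAR-M1 figures quoted above
# (150 m range, 15 Hz frame rate). Values are illustrative only.

C = 299_792_458.0  # speed of light, m/s

def round_trip_time(distance_m: float) -> float:
    """Photon round-trip time for a given target distance."""
    return 2.0 * distance_m / C

def motion_per_frame(speed_mps: float, frame_rate_hz: float) -> float:
    """Distance a target moves during one frame -- a proxy for the
    point-cloud distortion that a higher frame rate reduces."""
    return speed_mps / frame_rate_hz

# 150 m target: ~1 microsecond round trip
print(f"{round_trip_time(150) * 1e6:.2f} us")

# Car at 30 m/s (108 km/h): displacement per frame at 10 Hz vs 15 Hz
print(f"{motion_per_frame(30, 10):.2f} m at 10 Hz")
print(f"{motion_per_frame(30, 15):.2f} m at 15 Hz")
```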

The hardware-only version of the RS-LiDAR-M1 is currently available to customers at a retail price of $1,898. RoboSense will deliver current orders from key customers and upgrade its production line in Q1 2020, completing retail deliveries within the same quarter.




Reuters: Bosch announces that it has developed LiDAR too: "The new Bosch sensor will cover both long and close ranges – on highways and in the city. By exploiting economies of scale, Bosch wants to reduce the price for the sophisticated technology and render it suitable for the mass market."

BusinessWire: Velodyne is to present its Five Diamonds rating system to clarify and standardize terminology for ADAS features. The system aims to encourage transparency in the marketplace and promote the maximum positive effect of ADAS technologies.

FinancialTimes: Sony joins the race to develop automotive LiDAR.

“Smartphones probably made the biggest impact in the 21st century in terms of changing people’s lives. Mobility is next,” says Kenichiro Yoshida, Sony CEO. The company's new solid-state LiDAR is said to be Si-based, long-range, lower-cost, compact, and insensitive to vibrations, according to FT sources.

Despite its success in the image sensor market, Sony's penetration into automotive applications has been quite limited. According to TSR, in 2018 Sony held fifth place in automotive image sensors with a 3% market share, compared with ON Semi's 62% and OmniVision's 20%.

“I kept on asking why we couldn’t reverse our market position despite our sensors obviously being better than others,” says Terushi Shimizu, EVP of Sony Semiconductor group. “But we didn’t want to be drawn into the cost-cutting competition. We want our sensors to be used because our technology is better.”

Update: Zacks Research too publishes an article on Sony automotive LiDAR plans.

Go to the original article...

Omnivision Announces Low Power ISP for Security Cameras

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the OA805, a video processor that supports HEVC compression with the lowest power consumption in the industry.

The OA805 has a boot-up time that is significantly faster than its nearest competitor. This rapid startup eliminates any delay between motion detection and video recording, potentially allowing the camera to instantly alert users of suspicious activities. Within 0.1 seconds, the OA805 can go from completely powered off to fully functional.

"High-end surveillance cameras need video processors that can cope with high-definition 4K resolution video streams. However, high resolution video translates into high power consumption, and manufacturers have had to either settle for lower resolution video to conserve power in their battery-powered systems, or to rely on hard-wired solutions," said David Ho, product marketing manager at OmniVision. "With the OA805, this power versus resolution trade-off is eliminated. Its support for both HEVC and H.264 video compression, in combination with the industry's lowest power consumption and fastest boot-up time, allows designers to incorporate leading-edge performance into products that their customers can quickly and easily install anywhere, so they never miss a thing."

Its HDR processing capability allows the OA805 to accept input from RGB-IR image sensors, for videos taken during the day or at night, in conditions with widely contrasting bright and dark images.

As an upgrade from OmniVision's OV798, the OA805 adds HEVC capability, consumes less power, boots up faster and offers higher resolution processing. This video processor accepts up to 16-megapixel captures from an image sensor and outputs up to 4K resolution video at 30 fps using HEVC encoding and decoding. It also supports multiple video streams at lower resolution, including H.264 1080p resolution at 60fps, as well as HDR and RGB-IR.
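
The resolution and frame-rate figures above translate directly into pixel throughput, which is what drives the processor's power budget. A rough calculation (illustrative only, not OmniVision data) shows why HEVC's better compression matters at these rates:

```python
# Rough pixel-rate arithmetic for the OA805 modes quoted above
# (4K at 30 fps via HEVC, 1080p at 60 fps via H.264).

def pixel_rate(width: int, height: int, fps: float) -> float:
    """Raw pixels per second the encoder must ingest for a given mode."""
    return width * height * fps

uhd_30 = pixel_rate(3840, 2160, 30)   # 4K UHD at 30 fps
fhd_60 = pixel_rate(1920, 1080, 60)   # 1080p at 60 fps

print(f"4K30   : {uhd_30 / 1e6:.1f} Mpix/s")
print(f"1080p60: {fhd_60 / 1e6:.1f} Mpix/s")

# 4K30 pushes exactly twice the pixel rate of 1080p60, so without a more
# efficient codec the bitstream (and power) would scale accordingly.
print(uhd_30 / fhd_60)  # -> 2.0
```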

The OA805 video processor is available now.


In unrelated news, OmniVision product manager Celine Baron appears in a series of videos about the company's 8MP automotive sensor:

Go to the original article...

EPFL & Canon Create 1MP SPAD Sensor

Image Sensors World        Go to the original article...

EPFL and Canon publish Arxiv.org paper "A megapixel time-gated SPAD image sensor for 2D and 3D imaging applications" by Kazuhiro Morimoto, Andrei Ardelean, Ming-Lo Wu, Arin Can Ulku, Ivan Michel Antolovic, Claudio Bruschini, and Edoardo Charbon.

"We present the first 1Mpixel SPAD camera ever reported. The camera features 3.8ns time gating and 24kfps frame rate; it was fabricated in 180nm CIS technology. Two pixels have been designed with a pitch of 9.4μm in 7T and 5.75T configurations, respectively, achieving a maximum fill factor of 13.4%. The maximum PDP is 27%, median DCR 2.0cps, variation in gating length 120ps, position skew 410ps, and rise/fall time less than 550ps, all FWHM at 3.3V of excess bias. The sensor was used to capture 2D/3D scenes over 2m with an LSB of 5.4mm and a precision better than 7.8mm. Extended dynamic range is demonstrated in dual exposure operation mode. Spatially overlapped multi-object detection is experimentally demonstrated in single-photon time-gated ToF for the first time."
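
The depth figures in the abstract follow from the usual time-of-flight relation d = c·t/2, where the round trip halves the distance per unit time. The sketch below (illustrative Python, not from the paper) connects the quoted 5.4mm LSB and 3.8ns gate to their time and depth equivalents:

```python
# Time-gated ToF depth quantization, using figures quoted from the
# EPFL/Canon paper (5.4 mm LSB, 3.8 ns gate). Illustrative sketch.

C = 299_792_458.0  # speed of light, m/s

def depth_step(time_bin_s: float) -> float:
    """Depth step for a given time bin (round trip halves the distance)."""
    return C * time_bin_s / 2.0

def time_bin(depth_step_m: float) -> float:
    """Inverse: time resolution needed for a given depth LSB."""
    return 2.0 * depth_step_m / C

# A 5.4 mm depth LSB corresponds to a ~36 ps time step
print(f"{time_bin(5.4e-3) * 1e12:.0f} ps")

# Conversely, the 3.8 ns gate window spans ~57 cm of depth
print(f"{depth_step(3.8e-9) * 100:.0f} cm")
```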

Go to the original article...

Koito Presents Headlights with Image Sensors

Image Sensors World        Go to the original article...

Japanese automotive lighting solutions manufacturer Koito presents its headlight products with embedded image sensors:

Go to the original article...

Multispectral Cameras in 3D Imaging

Image Sensors World        Go to the original article...

Fraunhofer and University Ilmenau SPIE paper "Single-frame three-dimensional imaging using spectral-coded patterns and multispectral snapshot cameras" by Chen Zhang, Anika Brahm, Andreas Breitbarth, Maik Rosenberger, and Gunther Notni extends structured light 3D concept to multispectral imaging:

"We present an approach for single-frame three-dimensional (3-D) imaging using multiwavelength array projection and a stereo vision setup of two multispectral snapshot cameras. Thus a sequence of aperiodic fringe patterns at different wavelengths can be projected and detected simultaneously. For the 3-D reconstruction, a computational procedure for pattern extraction from multispectral images, denoising of multispectral image data, and stereo matching is developed. In addition, a proof-of-concept is provided with experimental measurement results, showing the validity and potential of the proposed approach."
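
The pattern-extraction step described in the abstract can be sketched as follows: a multispectral snapshot camera delivers an (H, W, bands) cube, and each projected fringe pattern lives in its own spectral band. The band indices and the simple normalization below are hypothetical simplifications of the paper's actual procedure:

```python
# Sketch of spectral pattern extraction: pull the channels that carry
# the projected fringes and normalize each so stereo matching sees
# comparable codes. Band choices here are illustrative assumptions.
import numpy as np

def extract_patterns(cube: np.ndarray, pattern_bands: list[int]) -> np.ndarray:
    """Select the spectral channels carrying the fringes and scale
    each one independently to the [0, 1] range."""
    patterns = cube[:, :, pattern_bands].astype(np.float64)
    lo = patterns.min(axis=(0, 1), keepdims=True)
    hi = patterns.max(axis=(0, 1), keepdims=True)
    return (patterns - lo) / np.maximum(hi - lo, 1e-9)

# Toy 8-band snapshot cube; assume fringes were projected in bands 1, 3, 5
cube = np.random.default_rng(0).integers(0, 4096, size=(64, 64, 8))
seq = extract_patterns(cube, [1, 3, 5])
print(seq.shape)  # one aperiodic pattern per selected wavelength
```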

Go to the original article...

Over 90% QE Soft-X-ray CMOS Sensor

Image Sensors World        Go to the original article...

Japanese Applied Physics Express Journal publishes a paper "High-exposure-durability, high-quantum-efficiency (>90%) backside-illuminated soft-X-ray CMOS sensor" by Tetsuo Harada, Nobukazu Teranishi, Takeo Watanabe, Quan Zhou, Jan Bogaerts, and Xinyang Wang from University of Hyogo, Shizuoka University, and Gpixel.

"We develop a high-quantum-efficiency, high-exposure-durability backside-illuminated CMOS image sensor for soft-X-ray detection. The backside fabrication process is optimized to reduce the dead-layer thickness, and the Si-layer thickness is increased to 9.5 μm to reduce radiation damage. Our sensor demonstrates a high quantum efficiency of greater than 90% in the photon-energy range of 80–1000 eV. Further, its EUV-regime efficiency is ~100% because the dead-layer thickness is only 5 nm. The readout noise is as low as 2.5 e− rms and the frame rate as high as 48 fps, which makes the device practical for general soft X-ray experiments.

...we developed a new CMOS sensor with further improvements to the backside process to afford a thicker Si layer of 9.5 μm; we called this sensor the SP3 sensor. This soft-X-ray/EUV-regime SP3 image sensor is also based on the Gpixel BSI CMOS image sensor, GSENSE400SQBSI. ...We made two changes to the backside fabrication process for the SP3 relative to the SBSA: the silicon thickness was changed from 3.5 to 9.5 μm to suppress radiation damage, and the implantation energy was decreased by one digit to reduce the non-sensitive-layer thickness. Our CMOS sensor adopts a rolling shutter and a high dynamic range (HDR) scheme using the double-conversion gain method, and has 2048 (H) × 2048 (V) 11 μm pixels."
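
The quoted figures imply that individual soft-X-ray photons are easily countable. Using the standard ~3.65 eV-per-electron-hole-pair value for silicon (a textbook figure, not taken from the paper) together with the paper's 2.5 e- rms readout noise, a rough estimate looks like this:

```python
# Signal-chain arithmetic for the soft-X-ray sensor described above.
# EV_PER_PAIR is the standard silicon value; READ_NOISE and QE are from
# the paper. Illustrative estimate only.

EV_PER_PAIR = 3.65   # mean energy to create one e-h pair in Si
READ_NOISE = 2.5     # e- rms, from the paper

def electrons_per_photon(photon_ev: float, qe: float = 0.9) -> float:
    """Mean photo-electrons generated per absorbed X-ray photon."""
    return qe * photon_ev / EV_PER_PAIR

def single_photon_snr(photon_ev: float) -> float:
    """Rough SNR of one detected photon against readout noise alone."""
    return electrons_per_photon(photon_ev) / READ_NOISE

# A 1000 eV soft-X-ray photon yields ~250 electrons at 90% QE,
# for a single-photon SNR near 100 -- photons are individually countable.
print(f"{electrons_per_photon(1000):.0f} e-")
print(f"SNR ~ {single_photon_snr(1000):.0f}")
```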


Thanks to NT for the link!

Go to the original article...

IEDM 2019: Sony 3-Layer Organic+Si Sensor

Image Sensors World        Go to the original article...

Sony IEDM 2019 paper "Three-layer Stacked Color Image Sensor With 2.0-μm Pixel Size Using Organic Photoconductive Film" by H. Togashi, T. Watanabe, M. Joei, T. Hayashi, S. Hirata, S. Fukuoka, Y. Ando, Y. Sato, J. Yamamoto, I. Yagi, M. Murata, M. Kuribayashi, F. Koga, T. Yamaguchi, Y. Oike, T. Ezaki, and T. Hirayama combines 3T organic PD pixel with 4T Si-based pixel ideas:

"A three-layer stacked color image sensor was formed using an organic film. The sensor decreases the false color problem as it does not require demosaicing. Furthermore, with the 2.0-μm pixel image sensor, improved spectral characteristics owing to green absorption by the organic film above the red/blue photodiode, were successfully demonstrated."

Go to the original article...

Kingpak and Tong Hsing to Merge

Image Sensors World        Go to the original article...

Digitimes: Two major Taiwan-based CMOS sensor packaging companies, Tong Hsing and Kingpak, have agreed to merge, creating one of the world's largest CIS packaging houses.

Taipei Times: Tong Hsing plans to acquire all of the shares in Kingpak after its board of directors approved a share-swap deal. The companies hope to complete the deal by June 30, 2020, and expect to pursue emerging opportunities in the smartphone, ADAS, IoT, Internet of Vehicles, VR, and AR markets, the filing said.

“The two companies have little overlap in customers and products, but they are highly complementary,” Tong Hsing president Heinz Ru said at a press conference at the Taiwan Stock Exchange. Kingpak is to become a wholly owned unit of Tong Hsing and be delisted from the Taipei Exchange when the merger is completed.

Go to the original article...

IEDM 2019: Sony 48MP All-Pixel PDAF Sensor

Image Sensors World        Go to the original article...

Sony paper at IEDM 2019 presents "A 1/2inch 48M All PDAF CMOS Image Sensor Using 0.8µm Quad Bayer Coding 2×2OCL with 1.0lux Minimum AF Illuminance Level" by T. Okawa, S. Ooki, H. Yamajo, M. Kawada, M. Tachi, K. Goi, T. Yamasaki, H. Iwashita, M. Nakamizo, T. Ogasahara, Y. Kitano, and K. Tatani.

"Currently, there are two coding trends in mobile image sensors: Quad Bayer coding (QBC) and dual photodiode (DPD). QBC realizes high resolution and high dynamic range (HDR), whereas DPD achieves high phase detection auto focus (PDAF) performance. We propose a QBC with 2×2 on-chip lens (2×2OCL) architecture as a potential next-generation high-performance CMOS image sensor. This combines high resolution, HDR, and high PDAF performance in one sensor. The critical issues of 2×2OCL are degradation of the resolution due to the sensitivity difference between 4 pixels under the same color filter and increasing the crosstalk among different colors. To overcome these issues, the OCL and pixel isolation shapes were optimized respectively. The world's first image sensor using 2×2OCL architecture we prepared in this paper, has 1/2 inch 48M pixels with 0.8µm QBC for high resolution, and all pixel PDAF achieved a minimum AF illuminance level of 1 lux."
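
The Quad Bayer binning that the abstract refers to -- summing the four same-color sub-pixels under one color filter in low light, trading the 48MP resolution for SNR -- can be illustrated with a toy example (not Sony's actual readout pipeline):

```python
# Toy Quad Bayer (QBC) binning: each 2x2 block shares one color filter,
# so summing it halves the resolution in each axis and yields a
# conventional Bayer mosaic with ~4x the signal per output pixel.
import numpy as np

def qbc_bin(raw: np.ndarray) -> np.ndarray:
    """Sum each 2x2 same-color quad of a quad-Bayer raw frame."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

# Toy 8x8 quad-Bayer raw frame (values stand in for photo-electrons)
raw = np.arange(64, dtype=np.int64).reshape(8, 8)
binned = qbc_bin(raw)
print(binned.shape)   # (4, 4): 4x fewer pixels, each with the summed signal
print(binned[0, 0])   # 0 + 1 + 8 + 9 = 18
```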

Go to the original article...

PDAF Pixel Analysis

Image Sensors World        Go to the original article...

OSA Optics Express publishes an open access paper "Joint electromagnetic and ray-tracing simulations for quad-pixel sensor and computational imaging" by Guillaume Chataignier, Benoit Vandame, and Jérôme Vaillant from InterDigital and University Grenoble Alpes, France.

"Since Canon released the first dual-pixel autofocus in 2013, this technique has been used in many cameras and smartphones. Quad-pixel sensors, where a microlens covers 2x2 sub-pixels, will be the next development. In this paper we describe the design for such sensors; related wave optics simulations; and results, especially in terms of angular response. Then we propose a new method for mixing wave optics simulations with ray tracing simulations in order to generate physically accurate synthetic images. Those images are useful in a co-design approach by linking the pixel architecture, the main lens design and the computer vision algorithms."
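
The principle behind the angular responses studied in the paper is that left and right sub-pixels see the scene through opposite halves of the main-lens pupil, so their mutual shift encodes defocus. A minimal 1D toy of that phase-detection step (my own sketch, not the paper's joint wave-optics/ray-tracing simulation):

```python
# Minimal 1D phase-detection illustration for dual/quad-pixel sensors:
# find the shift that best aligns the two sub-pixel signals by
# minimizing the sum of absolute differences.
import numpy as np

def pdaf_disparity(left: np.ndarray, right: np.ndarray, max_shift: int = 8) -> int:
    """Integer shift (pixels) that best aligns the two sub-pixel views."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = np.abs(np.roll(left, s) - right).sum()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

# A defocused edge: the right view is the left view shifted by 3 pixels
x = np.linspace(-1, 1, 64)
left = np.tanh(10 * x)
right = np.roll(left, 3)
print(pdaf_disparity(left, right))  # -> 3
```

In a real pipeline the disparity would be refined to sub-pixel precision and mapped to a lens-drive command through a calibrated conversion factor.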

Go to the original article...

DIY Image Sensor

Image Sensors World        Go to the original article...

Instructables publish a Sean Hodgins' project on DIY 32 x 32 image sensor and a camera based on it:


Go to the original article...

TSR Market Data

Image Sensors World        Go to the original article...

BusinessKorea quotes TSR report on image sensor market:

"According to market research firm TSR, the global image sensor market is expected to have grown from US$13,116 million to US$15,883.9 million this year. At present, the market shares of Sony, Samsung Electronics and SK Hynix are 48.3 percent, 21 percent and 2.1 percent, respectively. The current market size is about 25 percent of the size of the NAND flash market and the former is predicted to catch up with the latter in the near future.

In the third quarter of this year, Sony was the world’s eighth-largest semiconductor company in terms of sales despite the fact that image sensors are almost the only semiconductor product it produces.
"

Go to the original article...

Toshiba Teli on Machine Vision Sensor Trends

Image Sensors World        Go to the original article...

Toshiba Teli publishes a presentation on trends in machine vision cameras:

Go to the original article...
