Archives for January 2020

Ex-Awaiba Team Founds Optasensor and Unveils its First Product

Image Sensors World

Optasensor GmbH, a recently founded German company with expertise in photonics integration and in the design and manufacturing of specially tailored small-form-factor visualization and sensing solutions, announces the Osiris Micro Camera Module Series: a camera module with a 1mm x 1mm footprint and customized wafer-level optics, including an NIR cut filter, allowing easy vision integration into constrained spaces.


As an independent optics and sensor module design and manufacturing company, it offers Osiris micro camera module adaptations using customized and standard CMOS image sensors from worldwide manufacturers. The camera module integrates tailor-made optical elements enabling endoscope-specific solutions, including customized wafer-level or single-replicated optics (allowing miniature lensing down to 500um diameter), filters and prisms for light shaping, as well as the possibility of integrating miniature illumination solutions.


The founders pioneered the development and high-volume production of micro cameras for the worldwide disposable endoscope market at their former company, Awaiba. OptaSensor GmbH thus draws on more than a decade of knowledge of miniature, fully enclosed camera system modules, with a focus on the medical market.

The company will exhibit the Osiris MCM Series at the upcoming SPIE Photonics West trade show in San Francisco as well as at the MD&M trade show in Anaheim.

Gigajot Wins NASA SBIR Phase II Project to Develop a Photon-Counting UV/EUV Sensor

Image Sensors World

PRNewswire: Gigajot has been awarded a NASA Phase II SBIR project to develop a UV/EUV photon-number-resolving image sensor based on its Quanta Image Sensor (QIS) technology.

The QIS is a platform technology that can be used in a wide range of imaging and photonics applications, from consumer to high-end scientific. It provides excellent photon-counting capability, low dark current, high resolution, and high-speed operation, and is compatible with mainstream CMOS image sensor processes. QIS devices can be designed and implemented in different formats (from a few pixels to hundreds of megapixels), different pixel sizes (from sub-micron to more than ten microns), and different spectral sensitivities (UV-VIS-NIR).
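The key enabler of photon counting is read noise deep below one electron. Below is a minimal simulation sketch (my illustration, not Gigajot's implementation) showing why: photon arrivals are Poisson-distributed, and if the Gaussian read noise is well under 0.5 e- rms, rounding the readout to the nearest integer recovers the exact photon number for nearly every pixel.

# Illustrative photon-number-resolving model; all numbers are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def exact_count_probability(mean_photons, read_noise_e, n_pixels=100_000):
    photons = rng.poisson(mean_photons, n_pixels)                 # true photon counts
    readout = photons + rng.normal(0.0, read_noise_e, n_pixels)   # add read noise (e- rms)
    counted = np.clip(np.rint(readout), 0, None)                  # round to nearest integer
    return np.mean(counted == photons)                            # fraction counted exactly

for rn in (0.15, 0.30, 0.50):
    p = exact_count_probability(mean_photons=2.0, read_noise_e=rn)
    print(f"read noise {rn:.2f} e- rms -> exact-count probability {p:.3f}")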

In this project, Gigajot will deliver a novel platform photon-counting image sensor technology, QIS, for future NASA mission concept studies as well as other scientific and consumer applications. The outcome of this project will be a large-format, visible-blind CMOS UV/EUV sensor with accurate photon-counting capability. The novel sensor will provide several features and capabilities that are not available with other high-sensitivity detectors, including accurate photon-number resolving, zero dead time, low voltage and power requirements, high spatial resolution, and room-temperature operation, among others.

Jiaju Ma, the project's principal investigator and the CTO of Gigajot, noted, "Enabled by our patented innovations, the novel image sensor will combine a linear multi-bit photon-counting response in each detection cycle, zero dead time, low dark current, low operating voltage, the capability of room temperature operation, and strong radiation hardness. When combined with the existing advanced back-surface passivation techniques and band-pass filters, it can produce accurate visible-blind UV/EUV photon-counting with high quantum efficiency."

"Besides scientific applications, there are also sizeable markets for the proposed technology in automotive, medical, industrial, defense, and security applications." Said Gigajot's President & CEO, Saleh Masoodian, "A large-format UV/EUV CMOS photon-counting sensor can potentially have numerous applications in these markets. For example, UV imaging is used in dermatology to identify and visualize epidermal dermatologic conditions. With the high spatial resolution and single-photon sensitivity provided by Gigajot innovative imaging technologies, the details and features of the subcutaneous skin can be more accurately visualized. The reflected-UV imaging is also widely used to detect scratches and digs on the optical surfaces. For instance, the semiconductor industry uses UV imaging to perform an automated inspection on the photomasks. This inspection needs sensors with ultra-high spatial resolution and large format to quickly scan a large area and detect defects with submicron size. Gigajot is looking for partners interested in specific applications for this new imaging technology."

Imec Announces New Hyperspectral Devices

Image Sensors World

Imec presents a number of new hyperspectral cameras. The SWIR one is based on a new InGaAs sensor:

"Compared to VNIR-range based systems, SWIR-based hyperspectral imaging solutions capture more of the molecular structure of the screened objects (like moisture, lipid, protein contents) and allow for high added value distinctive material analysis. This widens the application potential in domains such as recycling, food sorting and quality grading, security & surveillance solution, etc."

Gpixel Announces PulSar Technology for VUV/EUV, Soft X-Ray and Electron Direct Imaging

Image Sensors World

Gpixel announces PulSar (PS) technology, extending the range of all GSENSE BSI sensors to detect vacuum UV (VUV) light, extreme UV (EUV) light and soft x-ray photons with QE approaching 100%. In addition, the technology demonstrates good resistance to radiation damage in soft x-ray detection applications.

Gpixel’s new PulSar (PS) technology eliminates the AR coating on the BSI sensor surface and incorporates a new passivation technique that reduces the thickness of the non-sensitive layer at the sensor surface, reducing the dark current and boosting resistance to radiation damage.
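To see why a thinner non-sensitive surface layer matters so much at these wavelengths, here is a crude first-order sketch (my own simplification, not Gpixel data): in the VUV/deep-UV range the absorption depth in silicon is only of order 10 nm, so photons absorbed in a surface dead layer of comparable thickness never reach the collecting volume, roughly capping QE at exp(-t_dead / l_abs).

# Back-of-envelope dead-layer model; the 10 nm absorption depth is an assumed,
# order-of-magnitude value for VUV/deep-UV photons in silicon.
import math

def qe_limit(dead_layer_nm, absorption_depth_nm=10.0):
    return math.exp(-dead_layer_nm / absorption_depth_nm)

for dead_nm in (20.0, 5.0, 1.0):
    print(f"dead layer {dead_nm:4.1f} nm -> QE limit ~ {qe_limit(dead_nm):.2f}")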

“Gpixel is very proud to offer our new technology to address the needs of our scientific customers,” says Wim Wuyts, CCO of Gpixel. “We are excited to have the opportunity to make this technology available across our broad range of large format BSI image sensors, offering our customers the unique combination of VUV/EUV and soft x-ray imaging options using our standard scientific CMOS GSENSE product family. PulSar (PS) variants are electronically compatible with their corresponding standard BSI sensors, allowing customers to quickly and easily accommodate the new sensors in existing designs.”

Prototype samples of the GSENSE2020BSI-PS are available now for evaluation, and GSENSE400BSI-PS samples will be available in mid-2020. Other variants are available upon request.

Photo-Gates are Back, but in a New Form

Image Sensors World

MDPI paper "Fully Depleted, Trench-Pinned Photo Gate for CMOS Image Sensor Applications" by Francois Roy, Andrej Suler, Thomas Dalleau, Romain Duru, Daniel Benoit, Jihane Arnaud, Yvon Cazaux, Catherine Chaton, Laurent Montes, Panagiota Morfouli, and Guo-Neng Lu from ST Micro, IMEP-LaHC, LETI-CEA, and University Lyon is a part of Special issue on the 2019 International Image Sensor Workshop (IISW2019). The paper proposes a solution of quite an important issue associated with deep high energy implants:

"Tackling issues of implantation-caused defects and contamination, this paper presents a new complementary metal–oxide–semiconductor (CMOS) image sensor (CIS) pixel design concept based on a native epitaxial layer for photon detection, charge storage, and charge transfer to the sensing node. To prove this concept, a backside illumination (BSI), p-type, 2-µm-pitch pixel was designed. It integrates a vertical pinned photo gate (PPG), a buried vertical transfer gate (TG), sidewall capacitive deep trench isolation (CDTI), and backside oxide–nitride–oxide (ONO) stack. The designed pixel was fabricated with variations of key parameters for optimization. Testing results showed the following achievements: 13,000 h+ full-well capacity with no lag for charge transfer, 80% quantum efficiency (QE) at 550-nm wavelength, 5 h+/s dark current at 60 °C, 2 h+ temporal noise floor, and 75 dB dynamic range. In comparison with conventional pixel design, the proposed concept could improve CIS performance."

Sigma saves Canon EOS-M

Cameralabs

Sigma has finally released its fabulous triplet of f1.4 APSC lenses in the EF-M mount, transforming Canon's neglected EOS M system. Find out why you'll want them in my video review!…

The post Sigma saves Canon EOS-M appeared first on Cameralabs.

Sony Opens Automotive Design Center in Oslo, Norway

Image Sensors World

Johannes Solhusvik becomes the Head of the Automotive Design Centre at Sony Europe B.V. He was previously BU-CTO of Aptina's Automotive and Industrial Business Unit and, later, Omnivision's GM of its Europe Design Center located in Norway. One can now expect a big boost to Sony's automotive image sensor business in Europe.

e2v Celebrates 50th Anniversary of the CCD

Image Sensors World

Teledyne e2v releases the first of a few publications on the 50th anniversary of the invention of the CCD. These publications are aimed at highlighting e2v's long-term commitment to CCD design and fabrication in the UK for space, science, astronomy, and other demanding applications:

"Teledyne e2v’s CCD fabrication facility is critical to the success and quality of future space science missions and remains committed to being the long-term supplier of high specification and quality devices for the world’s major space agencies and scientific instruments producers."

Event-based Detection Dataset for Automotive Applications

Image Sensors World

Prophesee publishes "A Large Scale Event-based Detection Dataset for Automotive" by Pierre de Tournemire, Davide Nitti, Etienne Perot, Davide Migliore, and Amos Sironi:

"We introduce the first very large detection dataset for event cameras. The dataset is composed of more than 39 hours of automotive recordings acquired with a 304x240 ATIS sensor. It contains open roads and very diverse driving scenarios, ranging from urban, highway, suburbs and countryside scenes, as well as different weather and illumination conditions. Manual bounding box annotations of cars and pedestrians contained in the recordings are also provided at a frequency between 1 and 4Hz, yielding more than 255,000 labels in total. We believe that the availability of a labeled dataset of this size will contribute to major advances in event-based vision tasks such as object detection and classification. We also expect benefits in other tasks such as optical flow, structure from motion and tracking, where for example, the large amount of data can be leveraged by self-supervised learning methods."

LiDAR News: IHS Markit, Blickfeld, Espros, Xenomatix

Image Sensors World

Autosens publishes the presentation "The race to a low-cost LIDAR system" by Dexin Chen, Senior Technology Analyst at IHS Markit, given in Brussels in Sept. 2019:




Another presentation at Autosens Brussels 2019, "The New Generation of MEMS LiDAR for Automotive Applications" by Timor Knudsen, Blickfeld's Head of Embedded Software, explains the technology behind the Blickfeld LiDAR:




Yet another presentation, "A novel CCD LiDAR imager" by Beat De Coi, Founder and CEO of ESPROS Photonics Corporation, says that there is no value in the ability to detect single photons in LiDAR, as SPADs do:




XenomatiX CEO Filip Geuens presents "Invisible integration of solid-state LIDAR to make beautiful self-driving cars"


ISP News: ARM, Geo, Qualcomm

Image Sensors World

Autosens publishes the talk "ISP optimization for ML/CV automotive applications" by Alexis Lluis Gomez, ARM Director of ISP Algorithms & Image Quality, given in Brussels in Sept. 2019:



GEO Semi's Bjorn Grubelich, Product Manager for Automotive Camera Solutions, presents CV challenges in pedestrian detection in a compact automotive camera:



Qualcomm announces the mid-range and low-end mobile SoCs Snapdragon 720G, 662, and 460. Even the low-end 460 chip supports 48MP imaging:


The mid-range 720G supports a 192MP camera:

FLIR Demos Automotive Thermal Cameras for Pedestrian Detection

Image Sensors World

FLIR publishes a couple of videos about the advantages of thermal cameras for pedestrian detection:



Autosens Detroit 2020 Agenda

Image Sensors World

Autosens Detroit, to be held in mid-May 2020, publishes its agenda with many image sensor-related presentations:

  • The FIR Revolution: How FIR Technology Will Bring Level 3 and Above Autonomous Vehicles to the Mass Market
    Yakov Shaharabani, CEO, Adasky
  • The Future of Driving: Enhancing Safety on the Road with Thermal Sensors
    Tim LeBeau, CCO, Seek Thermal
  • RGB-IR Sensors for In-Cabin Automotive Applications
    Boyd Fowler, CTO, OmniVision
  • Robust Inexpensive Frequency Domain LiDAR using Hamiltonian Coding
    Andreas Velten, Director, Computational Optics Group, University of Wisconsin-Madison
  • The Next Generation of SPAD Arrays for Automotive LiDAR
    Wade Appelman, VP Lidar Technology, ON Semiconductor
  • Progress with P2020 – developing standards for automotive camera systems
    Robin Jenkin, Principal Image Quality Engineer, NVIDIA
  • What’s in Your Stack? Why Lidar Modulation Should Matter to Every Engineer
    Bill Paulus, VP of Manufacturing, Blackmore Sensors and Analytics, Inc
  • Addressing LED flicker
    Brian Deegan, Senior Expert, Valeo Vision Systems
  • The influence of colour filter pattern and its arrangement on resolution and colour reproducibility
    Tsuyoshi Hara, Solution Architect, Sony
  • Highly Efficient Autonomous Driving with MIPI Camera Interfaces
    Hezi Saar, Sr. Staff Product Marketing Manager, Synopsys
  • Tuning image processing pipes (ISP) for automotive use
    Manjunath Somayaji, Director of Imaging R&D, GEOSemiconductor
  • ISP optimization for ML/CV automotive applications
    Alexis Lluis Gomez, Director ISP Algorithms & Image Quality, ARM
  • Computational imaging through occlusions; seeing through fog
    Guy Satat, Researcher, MIT
  • Taming the Data Tsunami
    Barry Behnken, Co-Founder and SVP of Engineering, AEye

Samsung and Oxford Propose Image Sensors Instead of Accelerometers for Activity Recognition in Smartwatches

Image Sensors World

Arxiv.org paper "Are Accelerometers for Activity Recognition a Dead-end?" by Catherine Tong, Shyam A. Tailor, and Nicholas D. Lane from Oxford University and Samsung says that image sensors might be better suited for tracking calories in smartwatches and smartbands:

"Accelerometer-based (and by extension other inertial sensors) research for Human Activity Recognition (HAR) is a dead-end. This sensor does not offer enough information for us to progress in the core domain of HAR---to recognize everyday activities from sensor data. Despite continued and prolonged efforts in improving feature engineering and machine learning models, the activities that we can recognize reliably have only expanded slightly and many of the same flaws of early models are still present today.

Instead of relying on acceleration data, we should instead consider modalities with much richer information---a logical choice are images. With the rapid advance in image sensing hardware and modelling techniques, we believe that a widespread adoption of image sensors will open many opportunities for accurate and robust inference across a wide spectrum of human activities.

In this paper, we make the case for imagers in place of accelerometers as the default sensor for human activity recognition. Our review of past works has led to the observation that progress in HAR had stalled, caused by our reliance on accelerometers. We further argue for the suitability of images for activity recognition by illustrating their richness of information and the marked progress in computer vision. Through a feasibility analysis, we find that deploying imagers and CNNs on device poses no substantial burden on modern mobile hardware. Overall, our work highlights the need to move away from accelerometers and calls for further exploration of using imagers for activity recognition.
"

CIS Production Short of Demand, Prices Rise

Image Sensors World

IFNews quotes a Chinese-language Soochow Securities report saying that Sony, Samsung, and OmniVision have recently increased their image sensor prices by 10%-20% due to supply shortages.

A Soochow Securities analyst says: "According to our grassroots research, the size of this [CIS] market may even exceed that of memory." (Google translation)


Digitimes also reports that Omnivision "will raise its quotes for various CMOS image sensors (CIS) by 10-40% in 2020 to counter ever-expanding demand for application to handsets, notebooks, smart home devices, automotive and security surveillance systems."

Trieye Automotive SWIR Presentation

Image Sensors World

Autosens publishes Trieye CTO Uriel Levy's presentation "ShortWave Infrared Breaking the Status Quo", given in Brussels in Sept. 2019:

LiDAR News: Ibeo, Velodyne, Outsight

Image Sensors World

ArsTechnica: "Ibeo Operations Director Mario Brumm told Ars that Ibeo's next-generation lidar, due out later this year, would feature an 128-by-80 array of VCSELs coupled with a 128-by-80 array of SPADs. Ibeo is pursuing a modular design that will allow the company to use different optics to deliver a range of models with different capabilities—from a long-range lidar with a narrow field of view to a wide-angle lidar with shorter range. Ibeo is aiming to make these lidars cheap enough that they can be sold to automakers for mass production starting in late 2022 or early 2023."

IEEE Spectrum quotes Velodyne's new CEO Anand Gopalan talking about the company's new solid-state Velabit LiDAR:

Gopalan wouldn’t say much about how it works, only that the beam-steering did not use tiny mirrors based on micro-electromechanical systems (MEMS). “It differs from MEMS in that there’s no loss of form factor or loss of light,” he said. “In the most general language, it uses a metamaterial activated by low-cost electronics.”

Autosens publishes Outsight President Raul Bravo's presentation at Brussels Sept. 2019 conference:

SmartSens Announces SmartClarity H Series

Image Sensors World

PRNewswire: SmartSens launches six new sensors as part of the SmartClarity H Series family. The SC8238H, SC5238H, SC4210H, SC4238H, SC2210H, and SC2310H sensors have resolutions ranging from 2MP to 8MP and are said to provide a significant improvement over previous-generation products in lowering dark current and reducing white pixels and temperature noise, making them suitable for home surveillance and other video-based applications.

"Overheating and high temperature has been the culprit that creates bottlenecks to excellent CIS image quality, especially in the field of non-stop running security applications. As the leader in Security CIS sensors, we've always set our sight in tackling this issue. As we see the improvements from our high-performing CIS technologies," noted Chris Yiu, CMO of SmartSens, "We're reaching the level comparable to other top-notch industry leaders."

Samples of these six products are now available.

Fujifilm XT200 review – preview

Cameralabs

The Fujifilm X-T200 is an entry-level mirrorless camera, featuring a 24 Megapixel APSC sensor, built-in viewfinder, fully-articulated 3.5in touchscreen, mic input and uncropped 4k video. Find out more in my preview! …

The post Fujifilm XT200 review – preview appeared first on Cameralabs.

Himax Unveils Low Power VGA Sensor

Image Sensors World

GlobeNewswire: Himax announces the commercial availability of the HM0360, said to be an industry-first ultra-low-power and low-latency BSI CMOS sensor with autonomous modes of operation for always-on, intelligent visual sensing applications such as human presence detection and tracking, gaze detection, behavioral analysis, and pose estimation, for growing markets like smart home, smart building, healthcare, smartphone, and AR/VR devices. Himax is currently working with leading AI framework providers such as Google and industry partners to develop reference designs that can enable low-power hardware and platform options to reduce time to market for intelligent edge vision solutions.

“One of the key challenges to 2-dimensional image sensing for computer vision is the high power consumption and data bandwidth of the sensor and processing,” said Amit Mittra, CTO of Himax Imaging. “The HM0360 addresses this opportunity by delivering a very low power image sensor that achieves excellent image quality with high signal-to-noise ratio and dynamic range, which allows algorithms to operate under challenging lighting conditions, from bright sunlight to moonlight. The VGA resolution can double the range of detection over Himax’s HM01B0 QVGA sensor, especially to support greater than 90-degree wide field of view lens. Additionally, the HM0360 introduces several new features to reduce camera latency, system overhead and power consumption.”
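A back-of-envelope check of the "double the range" claim (my reasoning, not Himax's): for a fixed field of view, the number of pixels a target subtends falls off linearly with distance, so doubling the horizontal pixel count (QVGA 320 to VGA 640) keeps the same pixels-on-target at twice the range.

# Small-angle estimate; the 90-degree FOV and 0.5 m target width are illustrative assumptions.
import math

def pixels_on_target(h_pixels, fov_deg, target_width_m, range_m):
    return h_pixels * target_width_m / (2 * range_m * math.tan(math.radians(fov_deg / 2)))

print(pixels_on_target(320, 90.0, 0.5, 5.0))   # QVGA at 5 m  -> 16 px
print(pixels_on_target(640, 90.0, 0.5, 10.0))  # VGA at 10 m  -> same 16 px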

“Smart sensors that can run on batteries or energy harvesting for years will enable a massive number of new applications over the next decade,” said Pete Warden, Technical Lead of TensorFlow Lite for Microcontrollers at Google. “TensorFlow Lite's microcontroller software can supply the brains behind these products but having low-power sensors is essential. Himax’s camera can operate at less than one milliwatt, which allows us to create a complete sensing system that's able to run continuously on battery power alone.”

Unique features of HM0360 include:

  • Several autonomous modes of operation
  • Pre-metering function to ensure exposure quality for every event frame
  • Short sensor initialization time
  • Extremely low wake-up time of less than 2ms
  • Fast context switching and frame configuration update
  • Multi-camera synchronization
  • 150 parallel addressable regions of interest
  • Event sensing modes with programmable interrupts to allow the host processor to be placed in low power sleep until notified by the sensor
  • Operation at up to VGA resolution at 60 frames per second consuming 14.5mW, and less than 500µW using binning-mode readout at 3fps (see the back-of-envelope energy estimate after this list)
  • Supporting multiple power supply configurations with minimal passive components to enable a highly compact camera module design for next generation smart camera devices.
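For battery-powered always-on sensing, the figure that matters is energy per frame; converting the quoted power numbers (my arithmetic, not Himax's datasheet):

# Energy per frame implied by the quoted power and frame-rate figures.
vga_power_w, vga_fps = 14.5e-3, 60      # full VGA readout at 60 fps
bin_power_w, bin_fps = 500e-6, 3        # binning-mode readout at 3 fps
print(f"VGA @ 60 fps  : {vga_power_w / vga_fps * 1e6:.0f} uJ per frame")   # ~242 uJ
print(f"binned @ 3 fps: {bin_power_w / bin_fps * 1e6:.0f} uJ per frame")   # ~167 uJ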

The HM0360 is currently available for sampling and will be ready for mass production in the second quarter of 2020.

ADI Demos ToF Applications

Image Sensors World

Analog Devices publishes a video presenting its ToF solution inside PicoZense camera for face recognition:



Another demo talks about in-cabin driver status monitoring:



Yet another demo shows ToF camera-equipped autonomous robot:

ON Semi on Automotive Image Sensor Challenges

Image Sensors World

AutoSens publishes a video "Overview of the Challenges and Requirements for Automotive Image Sensors" by Geoff Ballew, Senior Director of Marketing, Automotive Sensing Division, ON Semiconductor presented at Autosens Brussels in Sept. 2019:

Reverse Engineered: Sony CIS inside Denso Automotive Camera, Tesla Model 3 Triple Camera, Samsung Galaxy Note 10+ ToF Camera

Image Sensors World

SystemPlus has published a few interesting reverse engineering reports. "Denso’s Monocular Forward ADAS Camera in the Toyota Alphard" reveals a 1.27MP Sony automotive sensor with improved night vision:

"The monocular camera, manufactured by Denso in Japan, is 25% smaller and uses 30% less components and parts than the previous model. Also, for the first time in an automotive camera, we have found a Sony CMOS image sensor. This 1.27M-pixel sensor offers higher sensitivity to the near-infrared sensor, and the low-light use-designed lens housing improves the camera’s night-time identification of other road users and road signs. Regarding processing, the chipset is composed of a Sony image signal processor, a Toshiba image recognition processor, and a Renesas MCU."


"Triple Forward Camera from Tesla Model 3" "captures front images over up to 250 meters that are used by the Tesla Model 3 Driver Assist Autopilot Control Module Unit. The system integrates a serializer but no computing.

Tesla chose dated but well known and reliable components with limited bandwidth, speed and dynamic range. This is probably due to the limited computing performance of the downstream Full Self-Driving chipset. This three-cameras setup is therefore very far from the robotic autonomous vehicle (AV) technology used on Waymo and GM Cruise vehicles. The level of autonomy Tesla will achieve with this kind of hardware is the trillion-dollar question you will be able to assess with this report.

We assume this system is manufactured in the USA. It is an acquisition module without treatment using three cameras, narrow, main and wide, with 3 CMOS Image Sensors with 1280×960 1.2Mp resolution.
"


"Samsung Galaxy Note 10+ 3D Time of Flight Depth Sensing Camera Module" "is using the latest generation of Sony Depth sensing Solutions’ Time-of-Flight (ToF) camera, which is unveiled in this report.

It includes a backside illumination (BSI) Time of Flight Image Sensor array featuring 5µm size pixels and 323 kilopixel VGA resolution developed by Sony Depth sensing Solutions. It has one VCSEL for the flood illuminator coming from a major supplier.

The complete system features a main red/green/blue camera, a Telephoto, a Wide-angle Camera Module and a 3D Time of Flight Camera.
"

5 CMOS Sensor Companies Order 34 Inspection Machines from Camtek

Image Sensors World

PRNewswire: Camtek has received orders for 34 systems for 2D inspection of CMOS sensors from five leading manufacturers, of which 25 are from two customers. Most of the systems are expected to be installed during the first half of 2020.

Ramy Langer, COO, comments, "Our longstanding expertise in inspection technologies designed specifically for the CMOS image sensors market, makes the EagleT and EagleT Plus the ultimate inspection tools for this market."

1 Tera-fps Camera Needed to Observe Signal Travel through Neurons

Image Sensors World

Optics.org: Science Magazine paper "Picosecond-resolution phase-sensitive imaging of transparent objects in a single shot" by Taewoo Kim, Jinyang Liang, Liren Zhu, and Lihong V. Wang from Caltech says that a fast frame rate is needed to study some biological processes:

"As signals travel through neurons, there is a minute dilation of nerve fibers that we hope to see. If we have a network of neurons, maybe we can see their communication in real time," Professor of Medical Engineering and Electrical Engineering Lihong Wang says.

"Here, we present phase-sensitive compressed ultrafast photography (pCUP) for single-shot real-time ultrafast imaging of transparent objects by combining the contrast of dark-field imaging with the speed and the sequence depth of CUP. By imaging the optical Kerr effect and shock wave propagation, we demonstrate that pCUP can image light-speed phase signals in a single shot with up to 350 frames captured at up to 1 trillion frames per second. We expect pCUP to be broadly used for a vast range of fundamental and applied sciences."

JPEG vs HEIF: Canon commits to HIF

Cameralabs

Time for JPEG to retire? With the 1Dx III, Canon becomes the first traditional camera company to back the more efficient HEIF format which delivers better quality in smaller files. Here's how Canon makes it work.…

The post JPEG vs HEIF: Canon commits to HIF appeared first on Cameralabs.

Samsung-Corephotonics Unveils Foveated Automotive Camera

Image Sensors World

Smartphone folded-zoom lens and multi-camera solutions developer Corephotonics, acquired by Samsung a year ago, announces its first product since the acquisition - the Roadrunner automotive camera:


Thanks to AB for the pointer!

Omnivision Aims to Close the Gap with Sony and Samsung and Lead the Market in 1 Year

Image Sensors World

IFNews quotes a Laoyaoba interview with Omnivision's SVP of Global Sales, Wu Xiaodong, giving a lot of interesting info about the company's plans:
  • Omnivision's 64MP high-end smartphone sensor is expected to enter mass production soon this year
  • Although in terms of global market share Omnivision ranks third with 12.4%, it scores first with 48% share in security, second with 30% share in autonomous vehicles, first with 50% in computing, first with 48% in emerging businesses such as IoT, and first with 81% share on medical CIS market
  • From 2018 to 2019, the overall CIS market size grew at an AAGR of 20%. After 2020, the AAGR is expected to go down to 10%.
  • At the end of August 2019, Will Semi completed the acquisition of Omnivision and Superpix and officially renamed them the Omnivision Group
  • Omnivision Group currently has more than 2,000 customers, with annual chip shipments exceeding 13 billion.
  • Omnivision has R&D centers in the US, Japan, Europe, China, and Singapore.
  • So far, Omnivision employs a total of 1,300 engineers and has more than 4,000 patents.
  • Omnivision Group cooperates with TSMC, SMIC, Huali (HLMC), Dongfang (DDF), and other foundries.
"In the past, our gap [with Sony and Samsung has been,] may be, about one year. Last year, we were half a year behind, and our goal is to achieve new products to be leveled this year, and to achieve a lead next year," says Wu Xiaodong.

IRNova on LWIR Polarimetric Imaging

Image Sensors World

As mentioned in the comments, Sweden-based IRNova publishes an application note "Polarimetric QWIP infrared imaging sensor" talking about its Garm LW Pol camera.

"Quantum well infrared photodetectors (QWIP) are by design inherently suited for polarization sensitive imaging. The detection principle in regular QWIPs relies on etched 2-D gratings to couple the light to the quantum wells for absorption. By replacing the 2D gratings with 1D (lamellar) gratings polarization sensitivity is added to the thermal detection.

Thermal imaging is a great way to detect objects, but it requires the objects to be of different temperature or to have different emissivity than the background. Polarization detection further extends the possibility to differentiate between objects that have the same temperature but consist of different materials, since infrared polarized light can be generated by reflection or emission of radiation from planar surfaces. This allows for detecting objects that are previously undetectable by an infrared detector since they may be covered under a canvas or they may have a low thermal signature like an UAV.
"

Event-Based News: Prophesee, Inivation, Samsung

Image Sensors World

EETimes publishes Junko Yoshida's interview with Luca Verre, Prophesee CEO. A few quotes:

"The commercial product we have is a VGA sensor. It’s in mass production. We are currently deploying shipping for industrial applications.

We have a new sensor generation, which is an HD sensor, so one million pixels, 720p. This is the result of joint cooperation we have done with Sony, which will be published at ISEC [ISSCC, probably] in February in San Francisco.

There has been some research work done together with Sony. Yes, Sony is indeed interested in event-based technology, but unfortunately I cannot tell you more than that. One of the main challenges we have been solving, moving from the VGA sensor to the HD sensor is the capability now to stack the sensor, to use a very advanced technology node that enables us to reduce the pixel pitch. So to make actually the sensor much smaller and cost-effective.

Automotive remains one of the key verticals we are targeting, because our technology, event-based technology, shows clear benefit in that space with respect to low latency detection, low data rate and high dynamic range.

...we did some tests in some controlled environments with one of the largest OEMs in Europe, and we compared side by side the frame-based sensor with an event-based sensor, showing that, while the frame-based camera system was failing even in fusion with a radar system, our system was actually capable to detect pedestrians in both daylight conditions and night light conditions.
"


iniVation wins a Best of Innovation award at CES 2020 in the category ‘Embedded Vision’.



The award is for the company's newest product, the DVXplorer, which uses an all-new custom-designed sensor from Samsung. The DVXplorer is said to be the world’s first neuromorphic camera employing technologies suitable for mass-production applications.

Thanks to TL for the link!


Samsung's Hyunsurk Eric Ryu presented the company's event-driven pixels at the 2nd International Workshop on Event-based Vision and Smart Cameras:
