Eric Fossum, ON Semi, and Kodak Win Emmy Award


Dartmouth: The US National Academy of Television Arts & Sciences announces the 2021 Technology & Engineering Emmy Awards. The Award for Invention and Pioneering Development of Intra-Pixel Charge Transfer CMOS Image Sensors goes to:
  • Eric Fossum
  • ON Semiconductor
  • Eastman Kodak

Go to the original article...

Yole on Machine Vision Market


Yole Developpement publishes a report "The industrial vision market matters, and the ecosystem is reconfiguring."

"The supply chain of key image sensor components has centralized. Yole estimates that the top five camera players have 53% market share in industrial cameras. The top three image sensor players have more than 78% market share.

As software further improves camera function, leading players like Cognex and Basler have acquired software companies to strengthen their competitiveness. Smaller players are merging and gradually becoming larger players, for example TKH Group has merged many smaller camera players.

There have also been strong alliances, including upstream and downstream mergers to become giants, such as the recent Teledyne acquisition of FLIR. We have also seen some Chinese players come to the surface, such as Hikrobot, Huaray, and Imavision. They have grown by absorbing the technology of external players. As global manufacturing shifts to China, the Chinese machine vision market will be huge. Chinese machine vision players will therefore become important to watch in this market."

Go to the original article...

Gpixel Starts a Line of Charge-Domain TDI Sensors


Gpixel announces the first sensor in a new family of line scan CMOS sensors supporting true charge-domain time delay integration (TDI). GLT5009BSI is a BSI TDI image sensor with 5 um pixels and a horizontal resolution of 9072 pixels. The sensor has two photosensitive bands, of 256 stages and 32 stages respectively, enabling an HDR mode.

GLT5009BSI’s 5 um pixel provides a full well capacity of 16 ke- and read noise of 8 e-, which delivers more than 66 dB of DR. Readout of the image data is achieved through 86 pairs of sub-LVDS channels at a combined maximum data rate of 72.58 Gbps. This output architecture supports line rates of up to 600 kHz in 10-bit single-band mode and 300 kHz in 12-bit single-band mode.
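The headline numbers are self-consistent. Here is a quick sanity check, a sketch using only the figures quoted above (bit depth and line length from the announcement, protocol overhead ignored):

```python
import math

# Figures quoted in the announcement
full_well_e = 16_000      # full well capacity, e-
read_noise_e = 8          # read noise, e- rms
pixels_per_line = 9072    # horizontal resolution
bit_depth = 10            # 10-bit single-band mode
line_rate_hz = 600_000    # maximum line rate in that mode

# Dynamic range in dB: 20*log10(full well / read noise)
dr_db = 20 * math.log10(full_well_e / read_noise_e)
print(f"DR = {dr_db:.1f} dB")   # ~66.0 dB, matching the '>66 dB' claim

# Payload bandwidth at the maximum 10-bit line rate (no protocol overhead)
payload_gbps = pixels_per_line * bit_depth * line_rate_hz / 1e9
print(f"payload = {payload_gbps:.1f} Gbps of the 72.58 Gbps interface")  # ~54.4 Gbps
```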

The length of the photosensitive area is 45.36 mm and the sensor is assembled in a 269 pin uPGA package.

“With the launch of the first sensor in the GLT family, Gpixel is able to address a new segment of applications requiring higher speed and more sensitivity than can be achieved with existing line scan products. We are excited to bring this high-end technology to our customers, enabling them to address these demanding applications,” says Wim Wuyts, CCO of Gpixel.

GLT5009BSI engineering samples can be ordered now for delivery in March, 2021.

Go to the original article...

ams Announces 13.8MP and 8MP Global Shutter Sensors


BusinessWire: ams introduces the CSG family of image sensors for industrial vision equipment, which achieves higher resolution at very high frame rates. The new CSG14K and CSG8K sensors are supplied in a 1” and a 1/1.1” optical format, respectively.

The CSG14K is a global shutter image sensor that combines a resolution of 13.8MP with high-speed operation: in 10-bit mode at full resolution, the sensor can capture images at a maximum rate of 140fps, and at 93.6fps in 12-bit mode. The CSG8K achieves even higher speeds of 231fps in 10-bit and 155fps in 12-bit mode at its full resolution of 8MP.
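These modes imply payload data rates in the 15-19 Gbps range. A rough back-of-envelope check based only on the quoted resolutions and frame rates (real interface rates also include blanking and protocol overhead):

```python
# Approximate payload throughput implied by the quoted modes
modes = {
    "CSG14K, 10-bit": (13.8e6, 140, 10),
    "CSG14K, 12-bit": (13.8e6, 93.6, 12),
    "CSG8K, 10-bit":  (8.0e6, 231, 10),
    "CSG8K, 12-bit":  (8.0e6, 155, 12),
}
for name, (pixels, fps, bits) in modes.items():
    gbps = pixels * fps * bits / 1e9
    print(f"{name}: ~{gbps:.1f} Gbps")
```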

They are the first products to benefit from a new pixel design notable for its low noise, high sensitivity, and HDR mode.

Peter Vandersteegen, Marketing Manager of the CMOS Image Sensors business line at ams, said: “AOI is a vital part of the quality control process in modern factories. By delivering a fast frame rate and higher resolution, the CSG image sensors provide a simple way for industrial camera manufacturers to upgrade the performance of their products, and to enable their customers to raise throughput, productivity and quality – all in a standard optical format.”

The CSG sensors feature a sub-LVDS data interface like that of the ams CMV family of image sensors. Both sensors are supplied in a 20mm x 22mm LGA package, share the same footprint and pinout, and are software-compatible. The CSG14K has a 1:1 aspect ratio, and is ideal for use in C-mount, 29mm x 29mm industrial cameras. The CSG8K has a 16:9 aspect ratio, suitable for video.

The CSG14K and CSG8K sensors are available for sampling.

Go to the original article...

Security Vulnerability of Rolling Shutter CMOS Sensors


Arxiv.org paper "They See Me Rollin': Inherent Vulnerability of the Rolling Shutter in CMOS Image Sensors" by Sebastian Köhler, Giulio Lovisotto, Simon Birnbach, Richard Baker, and Ivan Martinovic from Oxford University, UK, warns of a security problem in machine vision systems relying on rolling shutter sensors.

"As a balance between production costs and image quality, most modern cameras use Complementary Metal-Oxide Semiconductor image sensors that implement an electronic rolling shutter mechanism, where image rows are captured consecutively rather than all-at-once.

In this paper, we describe how the electronic rolling shutter can be exploited using a bright, modulated light source (e.g., an inexpensive, off-the-shelf laser), to inject fine-grained image disruptions. These disruptions substantially affect camera-based computer vision systems, where high-frequency data is crucial in extracting informative features from objects.

We study the fundamental factors affecting a rolling shutter attack, such as environmental conditions, angle of the incident light, laser to camera distance, and aiming precision. We demonstrate how these factors affect the intensity of the injected distortion and how an adversary can take them into account by modeling the properties of the camera. We introduce a general pipeline of a practical attack, which consists of: (i) profiling several properties of the target camera and (ii) partially simulating the attack to find distortions that satisfy the adversary's goal. Then, we instantiate the attack to the scenario of object detection, where the adversary's goal is to maximally disrupt the detection of objects in the image. We show that the adversary can modulate the laser to hide up to 75% of objects perceived by state-of-the-art detectors while controlling the amount of perturbation to keep the attack inconspicuous. Our results indicate that rolling shutter attacks can substantially reduce the performance and reliability of vision-based intelligent systems."
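The row-sequential exposure is exactly what makes the attack work: only the rows whose exposure window overlaps a laser pulse are brightened, so a modulated source paints controllable horizontal stripes into the frame. Below is a minimal toy model of that mechanism (not the authors' code; the row time, exposure time and pulse schedule are assumed illustrative values):

```python
import numpy as np

rows = 480
t_row = 30e-6         # assumed row-to-row readout interval (s)
t_exp = 200e-6        # assumed per-row exposure time (s)
pulse_period = 2e-3   # assumed laser modulation period (s)
pulse_width = 150e-6  # assumed laser on-time per period (s)

def laser_on(t):
    """1 while the modulated laser is on, 0 otherwise (vectorized)."""
    return (t % pulse_period) < pulse_width

# Fraction of each row's exposure window that overlaps a laser pulse
dt = 5e-6
stripe = np.zeros(rows)
for r in range(rows):
    start = r * t_row                          # rolling shutter: rows expose sequentially
    ts = np.arange(start, start + t_exp, dt)   # sample this row's exposure window
    stripe[r] = laser_on(ts).mean()

print("rows affected:", int(np.count_nonzero(stripe)), "of", rows)
```

Shifting the pulse period and phase moves and widens the stripes, which is the degree of freedom the paper's attack pipeline uses to maximally disrupt object detection.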

Go to the original article...

Photonfocus Presents First Global Shutter UV Camera


Photonfocus unveils the MV4-D1280U-H01-GT camera, said to be the world's first global shutter UV camera. The 1.3MP BSI sensor is custom designed and has a QE of more than 40% over the 170-800 nm band.


Thanks to TL for the pointer!

Go to the original article...

Next Generation EDOF


OSA Optics Express publishes a paper "Depth-of-field engineering in coded aperture imaging" by Mani Ratnam Rai and Joseph Rosen from Ben-Gurion University of the Negev, Israel.

"Extending the depth-of-field (DOF) of an optical imaging system without effecting the other imaging properties has been an important topic of research for a long time. In this work, we propose a new general technique of engineering the DOF of an imaging system beyond just a simple extension of the DOF. Engineering the DOF means in this study that the inherent DOF can be extended to one, or to several, separated different intervals of DOF, with controlled start and end points. Practically, because of the DOF engineering, entire objects in certain separated different input subvolumes are imaged with the same sharpness as if these objects are all in focus. Furthermore, the images from different subvolumes can be laterally shifted, each subvolume in a different shift, relative to their positions in the object space. By doing so, mutual hiding of images can be avoided. The proposed technique is introduced into a system of coded aperture imaging. In other words, the light from the object space is modulated by a coded aperture and recorded into the computer in which the desired image is reconstructed from the recorded pattern. The DOF engineering is done by designing the coded aperture composed of three diffractive elements. One element is a quadratic phase function dictating the start point of the in-focus axial interval and the second element is a quartic phase function which dictates the end point of this interval. Quasi-random coded phase mask is the third element, which enables the digital reconstruction. Multiplexing several sets of diffractive elements, each with different set of phase coefficients, can yield various axial reconstruction curves. The entire diffractive elements are displayed on a spatial light modulator such that real-time DOF engineering is enabled according to the user needs in the course of the observation. Experimental verifications of the proposed system with several examples of DOF engineering are presented, where the entire imaging of the observed scene is done by single camera shot."

Go to the original article...

LiDAR News: Levandowski, Aeva, DENSO, Ouster, Outsight, Argo, Valeo, Hyundai, Velodyne


World IP Review: The outgoing Trump administration has granted a full pardon to Anthony Levandowski, the former LiDAR head at Waymo, who was sentenced to 18 months in prison for stealing trade secrets.

In a memo released on January 20, 2021, the administration says Levandowski “paid a significant price for his actions and plans to devote his talents to advance the public good.”

It also cited a quote from the sentencing judge in the case, in which he described Levandowski as a “brilliant, groundbreaking engineer that our country needs.”

BusinessWire: Ouster and Outsight partner on the first integrated solution in the lidar industry with embedded pre-processing software. This plug-and-play system is designed to deliver real-time, processed 3D data and to be integrated into any application within minutes. The solution combines Ouster’s high-resolution digital lidar sensors with Outsight’s perception software, which detects, classifies, and tracks objects without relying on machine learning.

Reuters: DENSO partners with Aeva to develop next-generation sensing and perception systems. Together, the companies will advance FMCW LiDAR and bring it to the mass vehicle market.

MSN, GroundTruthAutonomy: Argo.ai presents its new platform featuring 6 LiDARs and 11 cameras. Some of the versions even have a multi-storied LiDAR pyramid on the roof:


ETNews reports that Hyundai is contemplating using Valeo SCALA LiDAR in its first autonomous vehicle scheduled to release in 2022. The reason for choosing Valeo is quite interesting:

"This decision is likely based on the fact that Velodyne has yet to reach a level to mass-produce LiDAR sensors even though it is working with Hyundai Mobis, which invested $54.3 million (60 billion KRW) in Velodyne, on the development. 

Velodyne received a $50 million investment (3% stake) from Hyundai Mobis back in 2019. Although it stands at the top of the global market for LiDAR sensors, supplying automotive LiDAR sensors for research and development purposes is its only experience with automotive LiDAR sensors. It is reported that it has yet to meet Hyundai Motor Group’s requests due to its lack of experience with mass production of automotive LiDAR sensors. Although it was planning to supply LiDAR sensors that will be used for a level 3 autonomous driving system, its plan is now facing a setback.

Velodyne is currently working with Hyundai Mobis at Hyundai Mobis’s Technical Center of Korea in Mabuk and is focusing on securing its ability to mass-produce automotive LiDAR sensors while having the sensors satisfy the reliability that future cars require. The key is for Velodyne to minimize any difference in quality between products during mass production.

Valeo is the only company in the world that has succeeded in mass-producing automotive LiDAR sensors. It supplied “SCALA Gen. 1” to Audi for Audi’s full-size sedan “A8”. SCALA Gen. 1 is a 4-channel LiDAR sensor and it has a detection range of about 150 meters."

Go to the original article...

International Image Sensor Society on LinkedIn


The International Image Sensor Society (IISS) has opened a LinkedIn page. Feel free to follow it to stay updated on the latest events and announcements:

Go to the original article...

12-ps Resolution Vernier Time-to-Digital Converter for SPAD Sensor


MDPI paper "A 13-Bit, 12-ps Resolution Vernier Time-to-Digital Converter Based on Dual Delay-Rings for SPAD Image Sensor" by Zunkai Huang, Jinglin Huang, Li Tian,Ning Wang, Yongxin Zhu, Hui Wang, and Songlin Feng from Shanghai Advanced Research Institute, Chinese Academy of Sciences, presents a fairly complex pixel.

"In this paper, we propose a novel high-performance TDC for a SPAD image sensor. In our design, we first present a pulse-width self-restricted (PWSR) delay element that is capable of providing a steady delay to improve the time precision. Meanwhile, we employ the proposed PWSR delay element to construct a pair of 16-stages vernier delay-rings to effectively enlarge the dynamic range. Moreover, we propose a compact and fast arbiter using a fully symmetric topology to enhance the robustness of the TDC. To validate the performance of the proposed TDC, a prototype 13-bit TDC has been fabricated in the standard 0.18-µm complementary metal–oxide–semiconductor (CMOS) process. The core area is about 200 µm × 180 µm and the total power consumption is nearly 1.6 mW. The proposed TDC achieves a dynamic range of 92.1 ns and a time precision of 11.25 ps. The measured worst integral nonlinearity (INL) and differential nonlinearity (DNL) are respectively 0.65 least-significant-bit (LSB) and 0.38 LSB, and both of them are less than 1 LSB. The experimental results indicate that the proposed TDC is suitable for SPAD-based 3D imaging applications."

Go to the original article...

WDR Sensor with Binary Image Feature


IET Electronics Letters publishes a paper "CMOS image sensor for wide dynamic range feature extraction in machine vision" by Hyeon‐June Kim from Kangwon National University, Korea.

"The proposed pixel structure has two operating modes, the normal and WDR modes. In the normal operating mode, the proposed CIS captures a normal image with high sensitivity. In addition, as a unique function, a bi‐level image is obtained for real‐time FE even if a pixel is saturated in strong illumination conditions. Thus, compared to typical CISs for machine vison, the proposed CIS can reveal object features that are blocked by light in real time. In the WDR operating mode, the proposed CIS produces a WDR image with its corresponding bi‐level image. A prototype CIS was fabricated using a standard 0.35‐μm 2P4M CMOS process with a 320 × 240 format (QVGA) with 10‐μm pitch pixels. At 60 fps, the measured power consumption was 5.98 mW at 3.3 V for pixel readout and 2.8 V for readout circuitry. The dynamic range of 73.1 dB was achieved in the WDR operating mode."

Go to the original article...

Smartsens Released More than 30 Tapeouts in 2020


Smartsens reports that it released more than 30 tapeouts in 2020, or one tapeout every 12 days on average. The company also won the "Unicorn Enterprise of the Year" award at the 2021 China Semiconductor Investment Alliance Annual Conference and China IC Billboard:

Go to the original article...

CMOS Sensors Design with Synopsys Custom Compiler


While most analog design in the industry is done with Cadence EDA tools, Imasenic CTO Adria Bofill Petit presents an alternative path using Synopsys Custom Compiler:

Go to the original article...

Call for Papers for Special Issue of 2022 IEEE TED on Solid-State Image Sensors


Over the last decade, solid-state image sensors have sustained impressive technological developments as well as growth in existing markets such as camera phones, automotive cameras, security and industrial cameras and medical/scientific cameras. This has included:
  • sub-micron pixels,
  • high dynamic range sensors for automotive and machine vision,
  • time-of-flight sensors for 3D imaging,
  • 3-dimensional integration (wafer level stacking) for small and efficient imaging systems on a chip,
  • sub-electron read noise pixels and avalanche photodetectors for single-photon imaging,
  • detector structures for non-cooled infrared imaging,
  • and many others.
Solid-state image sensors are also taking off into new applications and markets (IoT, 3D imaging, medical, biometrics and others). Solid-state image sensors are now key components in a vast array of consumer and industrial products. This special issue will provide a focal point for reporting these advancements in an archival journal and serve as an educational tool for the solid-state image sensor community. Previous special issues on solid-state image sensors were published in 1968, 1976, 1985, 1991, 1997, 2003, 2009 and 2016.
Topics of interest include, but are not limited to:
  • Pixel device physics (New devices and structures, Advanced materials, Improved models and scaling, Advanced pixel circuits, Performance enhancement for QE, Dark current, Noise, Charge Multiplication Devices, etc.)
  • Image sensor design and performance (New architectures, Small pixels and Large format arrays, High dynamic range, 3D range capture, Low voltage, Low power, High frame rate readout, Scientific-grade, Single-Photon Sensitivity)
  • Image-sensor-specific peripheral circuits (ADCs and readout electronics, Color and image processing, Smart sensors and computational sensors, System on a chip)
  • Non-visible “image” sensors (Enhanced spectral response e.g., UV, NIR, High energy photon and particle detectors e.g., electrons, X-rays, Ions, Hybrid detectors, THz imagers)
  • Stacked image sensor architectures, fabrication, packaging and manufacturing (two or more tiers, back-side illuminated devices)
  • Miscellaneous topics related to image sensor technology
Submission deadline: July 30, 2021
Publication date: June 2022

Go to the original article...

GEO Semi Reports the 250 Automotive OEM Design Win Milestone


BusinessWire: GEO Semiconductor announces surpassing a major milestone for the company: 250 automotive OEM design wins. These camera video processor (CVP) design wins represent engagements with over 30 different Tier 1 suppliers and over a dozen of the world’s top automotive OEMs.

“GEO released its first automotive product in 2015 and made the strategic decision to exclusively develop CVPs for automotive from that point forward. In the past 5 years we leveraged our world class team, our focused product strategy, and our customers to propel us to grow to the position of market leadership,” said Dave Orton, GEO Semiconductor CEO. “The world’s leading automotive companies chose GEO due to our camera, video, and computer vision expertise, and our ability to provide timely cutting edge solutions for these complex applications.”
 

Go to the original article...

Bucket-Brigade Device Inventor Kees Teer Passed Away at the Age of 95


ED: Former Philips Research Labs head Kees Teer has passed away at the age of 95. Kees was the inventor of the bucket-brigade device, the predecessor of the CCD.

Go to the original article...

Smartsens Claims #1 Spot in CIS Volume for Machine Vision Applications


Smartsens publishes a promotional video on global shutter advantages in which it claims to be #1 in machine vision image sensor shipment volume:



Update: Smartsens has updated the video with an explanation of its machine vision market positioning:

Go to the original article...

Samsung Aims to Take a Lead on Automotive CIS Market


PulseNews reports that Samsung's current market share in automotive image sensors is only 2%, behind ON Semi, Omnivision, and Sony. Samsung intends to increase it and take the lead in automotive sensors.



Go to the original article...

ams’ NanEye Endoscopic Camera Reverse Engineering


SystemPlus publishes a reverse engineering of ams’ NanEye endoscopic camera:

"To achieve an exceedingly small size and minimal cost, the NanEye relegates memory and image processing functionality off-chip and uses low-voltage differential signaling to stream image data at 38 Mbps. The NanEye includes a wafer-level packaged (WLP) 1 x 1 mm2 249 x 250-pixel front-side illuminated CMOS image sensor designed by AWAIBA (acquired by ams in 2015) and WLO technology developed by Heptagon (acquired by ams in 2016). Through-silicon via technology connects the sensor to the 4-pad solder-masked ball grid array package on the backside, facilitating integration into novel imaging products. The camera can be ordered with several preset optical configurations with an F-stop range of F2.4 – 6.0 and a field of view (FOV) range of 90° – 160°. The version analyzed in this report has an F-stop of F#4.0 and FOV of 120°."

Go to the original article...

Samsung CIS Capacity Expansion Chart


IFNews quotes an HSBC report showing a Samsung CIS capacity expansion chart:

Go to the original article...

SPAD Super-Resolution Sensing


Nature publishes a joint paper from Bonn University, Germany, and Glasgow University, UK, "Super-resolution time-resolved imaging using computational sensor fusion" by C. Callenberg, A. Lyons, D. den Brok, A. Fatima, A. Turpin, V. Zickus, L. Machesky, J. Whitelaw, D. Faccio, and M. B. Hullin.

"Imaging across both the full transverse spatial and temporal dimensions of a scene with high precision in all three coordinates is key to applications ranging from LIDAR to fluorescence lifetime imaging. However, compromises that sacrifice, for example, spatial resolution at the expense of temporal resolution are often required, in particular when the full 3-dimensional data cube is required in short acquisition times. We introduce a sensor fusion approach that combines data having low-spatial resolution but high temporal precision gathered with a single-photon-avalanche-diode (SPAD) array with data that has high spatial but no temporal resolution, such as that acquired with a standard CMOS camera. Our method, based on blurring the image on the SPAD array and computational sensor fusion, reconstructs time-resolved images at significantly higher spatial resolution than the SPAD input, upsampling numerical data by a factor 12×12, and demonstrating up to 4×4 upsampling of experimental data. We demonstrate the technique for both LIDAR applications and FLIM of fluorescent cancer cells. This technique paves the way to high spatial resolution SPAD imaging or, equivalently, FLIM imaging with conventional microscopes at frame rates accelerated by more than an order of magnitude."

Go to the original article...

Brigates Prepares $207M IPO at Shanghai Stock Exchange


EastMoney, CapitalWhale, ElecFans: Yet another China-based image sensor company prepares an IPO on the Shanghai Stock Exchange: Brigates (Chinese name Ruixinwei, also rendered as Ruixin Micro-Tech Innovation or Kunshan Ruixin).

"The IPO of the Science and Technology Innovation Board intends to raise 1.347 billion yuan for the R&D and industrialization projects of high-end image sensor chips and movement, as well as development and technology reserve funds.

So, what is the advantage of Ruixinwei?

The prospectus declares that the company’s technologies and products in the field of high-end custom image chips and high-sensitivity camera cores have reached a domestically leading and internationally advanced level; that it “has a number of domestically leading and internationally advanced core technologies” and “breaks the technology monopoly of foreign giants”; that “the company has become one of the few companies in the world that master ECCD technology”; that it “has replaced and surpassed similar foreign products, and filled many gaps in the field of domestic image sensors”; that it is “one of the few global suppliers”; and that it is “in a dominant position.”

The Shanghai Stock Exchange took note of the statements above and requested that the company provide the basis for its claimed market position.

In the reply letter, Ruixin Micro stated that it has revised "replaces and surpasses similar foreign products" in the prospectus to "partially replaces similar foreign products", and at the same time changed "achieves a subversive replacement of vacuum analog signal device technology" to "realizes the renewal of vacuum analog signal device technology".

For the other statements, Ruixinwei maintains that they are well-founded. In particular, the company repeats that it is "one of the few companies in the world that master ECCD technology."

“The MCCD and ECCD technology independently developed by Ruixin Micro is helpful to improve the imaging quality of the image sensor.”

“At present, CMOS image sensors are the mainstream technology route, accounting for nearly 90% of the image sensor market. Ruixin Micro is effectively taking a new technological path: it has developed a high-sensitivity camera core with MCCD technology at its core and has achieved industrialization. However, ECCD process development is very difficult, and there are currently relatively few publicly available materials."

Go to the original article...

Luminar CES Presentation Compares LiDAR Approaches


Luminar publishes its presentations from CES 2021. The first one, presented by Matt Weed, compares LiDAR technologies:


In its investor presentation, Luminar also shows its single-pixel InGaAs sensor integrated onto a Si ROIC and costing $3:

Go to the original article...

Modeling of Current-Assisted Photonic Demodulator for ToF Sensor


Hong Kong University publishes a video presentation "Compact Modeling of Current-Assisted Photonic Demodulator for Time-of-Flight CMOS Image Sensor" by Cristine Jin Delos Santos. The work won the Best Student Paper Award at the IEEE Student Symposium on Electron Devices and Solid-State Circuits (s-EDSSC) in October 2020.

Go to the original article...

Review of SPAD Photon-to-Digital Converters


MDPI paper "3D Photon-to-Digital Converter for Radiation Instrumentation: Motivation and Future Works" by Jean-François Pratte, Frédéric Nolet, Samuel Parent, Frédéric Vachon, Nicolas Roy, Tommy Rossignol, Keven Deslandes, Henri Dautet, Réjean Fontaine, and Serge A. Charlebois from Université de Sherbrooke, Canada, reviews the new opportunities coming from SPAD stacked chip integration.

"Analog and digital SiPMs have revolutionized the field of radiation instrumentation by replacing both avalanche photodiodes and photomultiplier tubes in many applications. However, multiple applications require greater performance than the current SiPMs are capable of, for example timing resolution for time-of-flight positron emission tomography and time-of-flight computed tomography, and mitigation of the large output capacitance of SiPM array for large-scale time projection chambers for liquid argon and liquid xenon experiments. In this contribution, the case will be made that 3D photon-to-digital converters, also known as 3D digital SiPMs, have a potentially superior performance over analog and 2D digital SiPMs. A review of 3D photon-to-digital converters is presented along with various applications where they can make a difference, such as time-of-flight medical imaging systems and low-background experiments in noble liquids. Finally, a review of the key design choices that must be made to obtain an optimized 3D photon-to-digital converter for radiation instrumentation, more specifically the single-photon avalanche diode array, the CMOS technology, the quenching circuit, the time-to-digital converter, the digital signal processing and the system level integration, are discussed in detail."

Go to the original article...

Comments on Hamamatsu Patents Ownership Transfer to Sionyx


It appears that the Federal Circuit decision to transfer ownership of a number of Hamamatsu patents to SiOnyx has attracted quite a lot of attention from lawyers.

Troy & Schwartz comments: "These days, Non-Disclosure Agreement (NDA) templates are readily available on-line, often free-of-charge, making them an attractive alternative for many.  The problem with these templates is they are not necessarily applicable to the contracting parties’ unique circumstances and/or do not properly anticipate dealings between the parties. A poorly drafted, one-size-fits-all NDA can make or break a patent-infringement case many years into the future.

The outcome may well have been different had the NDA not “directed” the ownership of all future patents emanating from Sionyx’s confidential information to Sionyx. Furthermore, any resulting patent relying on confidential information emanating from both parties should have designated both Cary and a Hamamatsu inventor as joint inventors no matter where the patent applications were filed. Inventorship does not, however, mean that the inventor(s) is also the owner(s) of the patent.

As this case illustrates, an NDA can be a critical factor in determining patent (and other IP) ownership.  An NDA should be tailor-made for the particular situation at hand with particular emphasis on protecting the disclosing party which is often an individual inventor or a small start-up company."

Finnegan comments: "So, even though both parties ended up with co-inventors on the disputed US and foreign patents, SiOnyx ended up as the sole owner of all those patents.  The proofs established that confidential information relating to the patents came solely from SiOnyx.  Hence, the terms of the NDA led to SiOnyx being the sole owner of all disputed patents."

Oliff writes: "Hamamatsu argued on appeal that because the district court acknowledged that Hamamatsu's personnel were co-inventors of the patents, it should have at most granted SiOnyx co-ownership rights. The Federal Circuit stated that inventorship was irrelevant to the issue of ownership, and that the terms of the NDA stated that when patents arose from a party's confidential information, that party would fully own such patents."

Go to the original article...

Apple iPhone 12 Cameras are Cheaper than iPhone 11’s


Counterpoint Research says that iPhone 12 cameras cost $3.6 less than iPhone 11's:

Go to the original article...

ToF News: Chronoptics, Opnous, Microsoft


Chronoptics announces its ToF noise filter and a ToF camera based on a Melexis VGA sensor:



Opnous publishes a datasheet of its OPNCAM8508 QVGA ToF camera based on the company's imager:


Opnous also unveils OPN6001 ToF ISP chip:


The Opnous OPNM8518A VGA camera module consumes 500 mW (typical) and has a range of 1.2 m:


MDPI paper "Evaluation of the Azure Kinect and Its Comparison to Kinect V1 and Kinect V2" by Michal Tölgyessy, Martin Dekan, Ľuboš Chovanec, and Peter Hubinský from Institute of Robotics and Cybernetics, Slovakia shows Microsoft progress over the years.

"The Azure Kinect is the successor of Kinect v1 and Kinect v2. In this paper we perform brief data analysis and comparison of all Kinect versions with focus on precision (repeatability) and various aspects of noise of these three sensors. Then we thoroughly evaluate the new Azure Kinect; namely its warm-up time, precision (and sources of its variability), accuracy (thoroughly, using a robotic arm), reflectivity (using 18 different materials), and the multipath and flying pixel phenomenon. Furthermore, we validate its performance in both indoor and outdoor environments, including direct and indirect sun conditions. We conclude with a discussion on its improvements in the context of the evolution of the Kinect sensor. It was shown that it is crucial to choose well designed experiments to measure accuracy, since the RGB and depth camera are not aligned. Our measurements confirm the officially stated values, namely standard deviation ≤17 mm, and distance error <11 mm in up to 3.5 m distance from the sensor in all four supported modes. The device, however, has to be warmed up for at least 40–50 min to give stable results. Due to the time-of-flight technology, the Azure Kinect cannot be reliably used in direct sunlight. Therefore, it is convenient mostly for indoor applications."

Go to the original article...

Samsung Adds 4th Sensor to its 108MP Lineup


BusinessWire: Samsung introduces its latest 108MP mobile sensor, the 0.8um 1/1.33-inch ISOCELL HM3. This is the 4th 108MP sensor in the company's lineup after the HMX, HM1, and HM2.

“Samsung has been at the forefront of bringing the most pixels to mobile image sensors as well as various supporting technologies that take sensor performances to the next level,” says Duckhyun Chang, EVP of the sensor business at Samsung Electronics. “The ISOCELL HM3 is the culmination of Samsung’s latest sensor technologies that will help deliver premium mobile experiences to today’s smart-device users.”

For faster AF, the HM3 integrates an improved Super PD Plus feature. Super PD Plus adds AF-optimized micro-lenses over the phase detection focusing agents, increasing measurement accuracy of the agents by 50%.

The HM3 also adopts Smart ISO Pro, an HDR technology that uses an intra-scene dual conversion gain (iDCG) solution. Smart ISO Pro simultaneously captures a frame in both high and low ISO, then merges them into a single image with 12-bit color depth and reduced noise. As Smart ISO Pro does not require multiple exposure shots to create a standard HDR image, it can significantly reduce motion artifacts. In addition, with a low-noise mode, it improves light sensitivity by 50% over its predecessor.
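Conceptually, an iDCG merge keeps the low-noise high-gain sample wherever it is not clipped and falls back to the rescaled low-gain sample in the highlights, which is one way a pair of lower-bit-depth readouts can yield a 12-bit result. A toy sketch of that idea, with assumed bit depths and gain ratio (illustrative only, not Samsung's actual pipeline):

```python
import numpy as np

def merge_dcg(low_gain_dn, high_gain_dn, gain_ratio=4.0, sat_dn=1023):
    """Toy dual-conversion-gain merge: prefer the low-noise high-gain sample,
    fall back to the rescaled low-gain sample where the high-gain one clips."""
    low = low_gain_dn.astype(np.float32) * gain_ratio   # bring LCG onto the HCG scale
    high = high_gain_dn.astype(np.float32)
    merged = np.where(high_gain_dn >= sat_dn, low, high)
    return np.clip(merged, 0, 4095).astype(np.uint16)   # 12-bit output

# Assumed 10-bit high-gain frame clips in the highlights; the low-gain frame covers them
lcg = np.array([[100, 250], [400, 900]], dtype=np.uint16)
hcg = np.array([[400, 1000], [1023, 1023]], dtype=np.uint16)
print(merge_dcg(lcg, hcg))
```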

The HM3’s pixel layout is arranged in three-by-three single-color structures suitable for nine-pixel binning. By merging nine neighboring pixels, the 108MP HM3 mimics a 12MP image sensor with large 2.4μm pixels. With an improved binning hardware IP, the HM3 supports seamless transitions between 108MP and 12MP resolutions.
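The binning arithmetic is straightforward: a 3 x 3 group of 0.8 um pixels behaves like one 2.4 um pixel, and 108MP / 9 = 12MP. A trivial check of the quoted numbers:

```python
pixel_um = 0.8
full_res_mp = 108
print(f"binned pixel pitch: {3 * pixel_um:.1f} um")     # 2.4 um
print(f"binned resolution: {full_res_mp / 9:.0f} MP")   # 12 MP
```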

Design of the new sensor has also been optimized to reduce power in preview mode by 6.5%.

Samsung ISOCELL HM3 is currently in mass production.

Go to the original article...

CES News: Intel Demos Realsense ID and MEMS LiDAR


Intel posted two videos with demos of its recently announced Realsense ID and L515 MEMS LiDAR:

Go to the original article...
