Nature Paper on Camera-Equipped Toilet

Image Sensors World

Vice.com: Nature publishes a Stanford University paper on a camera-equipped toilet, "A mountable toilet system for personalized health monitoring via the analysis of excreta" by Seung-min Park, Daeyoun D. Won, Brian J. Lee, Diego Escobedo, Andre Esteva, Amin Aalipour, T. Jessie Ge, Jung Ha Kim, Susie Suh, Elliot H. Choi, Alexander X. Lozano, Chengyang Yao, Sunil Bodapati, Friso B. Achterberg, Jeesu Kim, Hwan Park, Youngjae Choi, Woo Jin Kim, Jung Ho Yu, Alexander M. Bhatt, Jong Kyun Lee, Ryan Spitler, Shan X. Wang & Sanjiv S. Gambhir.

"The ‘smart’ toilet, which is self-contained and operates autonomously by leveraging pressure and motion sensors, analyses the user’s urine using a standard-of-care colorimetric assay that traces red–green–blue values from images of urinalysis strips, calculates the flow rate and volume of urine using computer vision as a uroflowmeter, and classifies stool according to the Bristol stool form scale using deep learning, with performance that is comparable to the performance of trained medical personnel. Each user of the toilet is identified through their fingerprint and the distinctive features of their anoderm, and the data are securely stored and analysed in an encrypted cloud server. The toilet may find uses in the screening, diagnosis and longitudinal monitoring of specific patient populations."

To estimate the speed, size, and other spatial parameters, the prototype system creates a 3D image using a stereo pair of GoPro Hero 7 cameras in 1.2MP 240fps high-speed mode. Another two cameras are used for color and shape analysis and for person identification.
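
For a rough sense of the geometry (a generic sketch, not the paper's actual pipeline): depth from a calibrated stereo pair follows Z = f·B/d, and tracking a matched feature across the 240fps frames gives its speed. In the Python sketch below, the focal length, baseline, and disparities are assumed values for illustration only.

    # Hedged sketch of stereo depth and speed estimation; the camera
    # parameters and disparities are assumed, not taken from the paper.

    def depth_from_disparity(focal_px, baseline_m, disparity_px):
        """Pinhole stereo model: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    focal_px = 1400.0    # focal length in pixels (assumed)
    baseline_m = 0.06    # 6 cm camera baseline (assumed)
    fps = 240.0          # the high-speed mode mentioned above

    # The same feature matched in two consecutive stereo frames:
    z1 = depth_from_disparity(focal_px, baseline_m, 70.0)
    z2 = depth_from_disparity(focal_px, baseline_m, 69.5)
    axial_speed = abs(z2 - z1) * fps
    print(f"depth {z1:.3f} m -> {z2:.3f} m, axial speed ~{axial_speed:.2f} m/s")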


Starsky Post-Mortem

Image Sensors World

Stefan Seltz-Axmacher, CEO and founder of Starsky Robotics, has published an analysis of the failure of his autonomous trucking company. Forbes reporter Brad Templeton has published his view on the company's demise too. The two articles point to problems in AI technology for autonomous driving:


"In 2016, we became the first street-legal vehicle to be paid to do real work without a person behind the wheel. In 2018, we became the first street-legal truck to do a fully unmanned run, albeit on a closed road. In 2019, our truck became the first fully-unmanned truck to drive on a live highway. And in 2020, we’re shutting down.

There are too many problems with the AV industry to detail here: the professorial pace at which most teams work, the lack of tangible deployment milestones, the open secret that there isn’t a robotaxi business model, etc. The biggest, however, is that supervised machine learning doesn’t live up to the hype.

Rather than seeing exponential improvements in the quality of AI performance (a la Moore’s Law), we’re instead seeing exponential increases in the cost to improve AI systems — supervised ML seems to follow an S-Curve.
"


"The S-Curve here is why Comma.ai, with 5–15 engineers, sees performance not wholly different than Tesla’s 100+ person autonomy team. Or why at Starsky we were able to become one of three companies to do on-public road unmanned tests (with only 30 engineers)."


Tamron 70-180mm f2.8 Di III VXD review – preview

Cameralabs

The Tamron 70-180mm f2.8 Di III VXD is a telephoto zoom for Sony’s Alpha mirrorless cameras and corrected for full-frame sensors. It complements Tamron's other e-mount zooms with fast f2.8 focal ratios: the 17-28mm f2.8 Di III RXD and the 28-75mm f2.8 Di III RXD. Check out our preview!…

The post Tamron 70-180mm f2.8 Di III VXD review – preview appeared first on Cameralabs.


Hybrid Bonding Thesis

Image Sensors World

University of Grenoble Alpes publishes a PhD Thesis "Numerical and Experimental Investigations on Mechanical Stress in 3D Stacked Integrated Circuits for Imaging Applications" by Clément Sart.

"In recent years, a number of physical and economical barriers have emerged in the race for miniaturization and speed of integrated circuits. To circumvent these issues, new processes and architectures are continuously developed. In particular, a progressive shift towards 3D integration strategies is currently observed in the semiconductor industry as an alternative path to further transistor downscaling. This innovative approach consists in combining chips of different technologies or different functionalities into a single module. A possible strategy to realize such heterogeneous systems is to stack chips on top of each other instead of tiling them on the plane, enabling considerable benefits in terms of compactness and versatility, but also increased performance.

This is especially true for image sensor chips, for which vertical stacking allows the incorporation of additional functionalities such as advanced image signal processing. Among various methods to achieve direct vertical interconnections between stacked chips, a promising method is Cu/SiO2 hybrid bonding, enabling simultaneous mechanical and electrical connection with a submicron interconnection pitch mostly limited by photolithography resolution and alignment accuracy. The mechanical integrity of the different electrical connection elements for such a 3D integrated imager-on-logic device is of critical importance.

The aim of this thesis is to investigate the mechanical robustness of this relatively new architecture in semiconductor manufacturing during its fabrication, aiming to address a number of possible issues from a thermomechanical perspective. In this work, thermomechanical stresses building up in the image sensor during chip processing and assembly onto a package are investigated, and the interactions between the different system components analyzed. The mechanical integrity of several key structures is studied, namely (i) interconnection pads at the hybrid bonding interface between the imager/logic chips, (ii) bondpad structures below the wires connecting the imager to the package substrate, and (iii) semiconductor devices in the image sensor, through in-situ evaluation of process-induced mechanical stresses using doped Si piezoresistive stress sensors. To do so, for each item a combined numerical and experimental approach was adopted, using morphological, mechanical and electrical characterizations, then correlated or extended by thermomechanical finite element analyses, allowing to secure product integration from a thermomechanical perspective.
"


Quanergy Changes CEO, Raises More Money

Image Sensors World

BusinessWire: Quanergy appoints Kevin J. Kennedy as the company’s new CEO and secures a new funding round. Kennedy continues to serve as a senior managing director of Blue Ridge Partners, one of the Quanergy investors. "While many LiDAR companies are focused on building LiDAR solely for transportation purposes, since its inception, Quanergy has emphasized the development of its technology for multiple industries,” says Kennedy. “With this new capital, we are deepening our investment in our team and our technology and are positioned to prove the value of LiDAR for broader market applications."

BusinessWire: Louay Eldada has stepped down from his positions as Quanergy CEO and board member, effective January 13, 2020. His new role in the company is defined as "Senior Evangelist."


Analog-to-Information CMOS Sensor for Image Recognition

Image Sensors World

CEA-Leti publishes a PhD Thesis "Exploring analog-to-information CMOS image sensor design taking advantage on recent advances of compressive sensing for low-power image classification" by Wissam Benjilali.

"Recent advances in the field of CMOS Image Sensors (CIS) tend to revisit the canonical image acquisition and processing pipeline to enable on-chip advanced image processing applications such as decision making. Despite the tremendous achievements made possible thanks to technology node scaling and 3D integration, designing a CIS architecture with on-chip decision making capabilities still a challenging task due to the amount of data to sense and process, as well as the hardware cost to implement state-of-the-art decision making algorithms.

In this context, Compressive Sensing (CS) has emerged as an alternative signal acquisition approach to sense the data in a compressed representation. When based on randomly generated sensing models, CS enables drastic hardware saving through the reduction of Analog to Digital conversions and data off-chip throughput while providing a meaningful information for either signal recovery or signal processing. Traditionally, CS has been exploited in CIS applications for compression tasks coupled with a remote signal recovery algorithm involving high algorithmic complexity. To alleviate this complexity, signal processing on CS provides solid theoretical guarantees to perform signal processing directly on CS measurements without significant performance loss opening as a consequence new ways towards the design of low-power smart sensor nodes. Built on algorithm and hardware research axes, this thesis illustrates how Compressive Sensing can be exploited to design low-power sensor nodes with efficient on-chip decision making algorithms.

After an overview of the fields of Compressive Sensing and Machine Learning with a particular focus on hardware implementations, this thesis presents four main contributions to study efficient sensing schemes and decision making approaches for the design of compact CMOS Image Sensor architectures. First, an analytical study explores the interest of solving basic inference tasks on CS measurements for highly constrained hardware. It aims at finding the most beneficial setting to perform decision making on Compressive Sensing based measurements.

Next, a novel sensing scheme for CIS applications is presented. Designed to meet both theoretical and hardware requirements, the proposed sensing model is shown to be suitable for CIS applications addressing both image rendering and on-chip decision making tasks. On the other hand, to deal with on-chip computational complexity involved by standard decision making algorithms, new methods to construct a hierarchical inference tree are explored to reduce MAC operations related to an on-chip multi-class inference task. This leads to a joint acquisition-processing optimization when combining hierarchical inference with Compressive Sensing.

Finally, all the aforementioned contributions are brought together to propose a compact CMOS Image Sensor architecture enabling on-chip object recognition facilitated by the proposed CS sensing scheme, reducing as a consequence on-chip memory needs. The only additional hardware compared to a standard CIS architecture using first order incremental Sigma-Delta Analog to Digital Converter (ADC) are a pseudo-random data mixing circuit, an +/-1 in-Sigma-Delta modulator and a small Digital Signal Processor (DSP). Several hardware optimization are presented to fit requirements of future ultra-low power (≈µW) CIS design.
"


Velodyne Moves Production Overseas, Lays Off 140 Employees

Image Sensors World

Bloomberg reports that Velodyne Lidar was sued for laying off 140 workers with one day’s notice. Velodyne was expected to provide 60 days’ notice, but instead told employees in a written notice that they were being let go because of the pandemic. The ex-employees’ complaint claims that the company “had already begun transferring production jobs overseas beginning in the summer of 2019 and had planned to continue doing so prior to the outbreak of Covid-19.”

It appears to be another indication that the LiDAR mega-factory project in San Jose is not going well. Just a year ago, David Hall, Velodyne founder and then-CEO, said "San Jose has a large and available skilled labor force that, while not price competitive with anywhere in Asia, does a higher quality job than we would get by assembling the units elsewhere."

Silicon Valley Business Journal: Velodyne is valued at about $1.8B after raising about $225M from investors including Nikon, Ford, and Baidu.


Single-Photon CMOS Pixel Using Multiple Non-Destructive Signal Sampling

Image Sensors World

MDPI paper "Simulations and Design of a Single-Photon CMOS Imaging Pixel Using Multiple Non-Destructive Signal Sampling" by Konstantin D. Stefanov, Martin J. Prest, Mark Downing, Elizabeth George, Naidu Bezawada, and Andrew D. Holland from The Open University, UK, and European Southern Observatory, Germany, describes a 10µm pixel with 0.15e- noise in a 180nm process.

"A single-photon CMOS image sensor (CIS) design based on pinned photodiode (PPD) with multiple charge transfers and sampling is described. In the proposed pixel architecture, the photogenerated signal is sampled non-destructively multiple times and the results are averaged. Each signal measurement is statistically independent and by averaging, the electronic readout noise is reduced to a level where single photons can be distinguished reliably. A pixel design using this method was simulated in TCAD and several layouts were generated for a 180-nm CMOS image sensor process. Using simulations, the noise performance of the pixel was determined as a function of the number of samples, sense node capacitance, sampling rate and transistor characteristics. The strengths and limitations of the proposed design are discussed in detail, including the trade-off between noise performance and readout rate and the impact of charge transfer inefficiency (CTI). The projected performance of our first prototype device indicates that single-photon imaging is within reach and could enable ground-breaking performances in many scientific and industrial imaging applications."


Ibeo 4D LiDAR Looks Similar to Apple iPad Pro

Image Sensors World

Ibeo presented its 4D solid-state LiDAR at the EPIC World Photonics Technology Summit in San Francisco on Feb 3, 2020. It looks quite similar to the one inside the Apple iPad Pro 2020, apart from the much longer range of the Ibeo LiDAR:

iPad Pro 2020 LiDAR:


Ibeo LiDAR:




Emberion Graphene-based SWIR Sensor Presentation

Image Sensors World

Emberion CEO Tapani Ryhanen presented the company's technology at the EPIC World Photonics Technology Summit 2020 held on Feb. 3 in San Francisco:



IWISS2020 Cancellation

Image Sensors World

The biennial International Workshop on Imaging Systems and Image Sensors (IWISS), which was supposed to be held in Tokyo, Japan, in November 2020, is cancelled due to the coronavirus pandemic. The next IWISS is scheduled for November 2022.


Fraunhofer Converts IR Photons to Visible Through Quantum Entanglement

Image Sensors World

Fraunhofer IOF reports: "Bio-substances such as proteins, lipids and other biochemical components can be distinguished based on their characteristic molecular vibrations. These vibrations are stimulated by light in the mid-infrared to terahertz range and are very difficult to detect with conventional measurement techniques.

But how can information from these extreme wavelength ranges be made visible? The quantum mechanical effect of photon entanglement is helping the researchers allowing them to harness twin beams of light with different wavelengths. In an interferometric setup, a laser beam is sent through a nonlinear crystal in which it generates two entangled light beams. These two beams can have very different wavelengths depending on the crystal’s properties, but they are still connected to each other due to their entanglement.

“So now, while one photon beam in the invisible infrared range is sent to the object for illumination and interaction, its twin beam in the visible spectrum is captured by a camera. Since the entangled light particles carry the same information, an image is generated even though the light that reaches the camera never interacted with the actual object,” explains [Markus] Gräfe. The visible twin essentially provides insight into what is happening with the invisible twin.
"


Actlight Announces Array of DPDs

Image Sensors World

Yahoo, PRNewswire: ActLight announces that the Dynamic PhotoDiode (DPD) sensor array has been fabricated and passed the first set of tests.

"The development of a very performant 3D image sensor based on our patented DPD technology is a great challenge for us at ActLight," said Serguei Okhonin, ActLight Co-Founder and CEO. "Seeing the performance of the first prototypes, in particular the absence of crosstalk between pixels and the first pictures produced by the array, and also considering that prototypes were built with standard CMOS image sensors technology give us the highest level of motivation to continue to invest in this project to build the high performance 3D image sensor that exceed the market expectations in terms of precision and efficiency."


International SPAD Sensor Workshop Goes Virtual

Image Sensors World

Due to the coronavirus pandemic, the International SPAD Sensor Workshop 2020 (ISSW2020) will be run as a virtual conference on June 8-9 this year. The agenda is tightly packed with excellent presentations:

  • Charge-Focusing SPAD Image Sensors for Low Light Imaging Applications
    Kazuhiro Morimoto, Canon
  • Custom silicon technologies for high detection efficiency SPAD arrays
    Angelo Gulinatti, Politecnico di Milano
  • LFoundry: SPAD, status and perspective
    Giovanni Margutti, LFoundry
  • Device and method for a precise breakdown voltage detection of APD/SPAD in a dark environment
    Alexander Zimmer, XFAB
  • Ge on Si SPADs for LIDAR and Quantum Technology Applications
    Douglas Paul, University of Glasgow
  • 3D-Stacked SPAD in 40/45nm BSI Technology
    Georg Rohrer, AMS
  • BSI SPAD arrays based on wafer bond technology
    Werner Brockherde, Fraunhofer
  • Planar Microlenses for SPAD sensors
    Norbert Moussy, CEA-LETI
  • 3D Integrated Frontside Illuminated Photon-to-Digital Converters: Status and Applications
    Jean-Francois Pratte, University of Sherbrooke
  • Combining linear and SPAD-mode diode operation in pixel for wide dynamic range CMOS optical sensing
    Matthew Johnston, Oregon State University
  • ToF Image Sensor Systems using SPADs and Photodiodes
    Simon Kennedy, Monash University
  • A 1.1 mega-pixels vertical avalanche photodiode (VAPD) CMOS image sensor for a long range time-of-flight (TOF) system
    Yutaka Hirose, Panasonic
  • Single photon detector for space active debris removal and exploration
    Alexandre Pollini, CSEM
  • 4D solid state LIDAR – NEXT Generation NOW
    Unsal Kabuk, IBEO
  • Depth and Intensity LiDAR imaging with Pandion SPAD array
    Salvatore Gnecchi, OnSemi
  • LIDAR using SPADs in the visible and short-wave infrared
    Gerald Buller, Heriot-Watt University
  • InP-based SPADs for Automotive Lidar
    Mark Itzler, Argo AI
  • Custom Focal Plane Arrays of SWIR SPADs
    Erik Duerr, MIT Lincoln Labs
  • CMOS SPAD Sensors with Embedded Smartness
    Angel Rodriguez-Vazquez, University of Seville
  • Modelling TDC Circuit Performance for SPAD Sensor Arrays
    Daniel van Blerkom, Ametek (Forza)
  • Data processing of SPAD sensors for high quality imaging
    Chao Zhang, Adaps Photonics
  • Scalable, Multi-functional CMOS SPAD arrays for Scientific Imaging
    Leonardo Gasparini, FBK
  • Small and Smart SPAD Pixels
    Edoardo Charbon, EPFL
  • High-resolution imaging of the spatio-temporal dynamics of protein interactions via fluorescence lifetime imaging with SPAD arrays
    Simon Ameer-Beg, King's College
  • Image scanning microscopy with classical and quantum correlation contrasts
    Ron Tenne, Weizmann Institute
  • Imaging oxygenation by near-infrared optical tomography based on SPAD image sensors
    Martin Wolf, ETH Zurich
  • Raman spectroscopy utilizing a time resolving CMOS SPAD line sensor with a pulsed laser excitation
    Ilkka Nissinen, University of Oulu
  • Optical wireless communication with SPAD receivers
    Hiwa Mahmoudi, TU Wien
  • SPAD Arrays for Non-Line-of-Sight Imaging
    Andreas Velten, University of Wisconsin


LiDAR News: Blickfeld, Cepton, SiLC, Velodyne, Espros

Image Sensors World

Munich, Germany-based LiDAR start-up Blickfeld completes its Series A financing round led by the VC unit of Continental together with Wachstumsfonds Bayern, with participation of the existing investors Fluxunit – OSRAM Ventures, High-Tech Gründerfonds, TEV (Tengelmann Ventures) and Unternehmertum Venture Capital Partners. Blickfeld will use the new financial resources to ramp up production, qualify its LiDAR sensors for the automotive market and strengthen the application development and sales teams for industrial markets.

“The safety of autonomous vehicles is based on LiDAR sensor technology. We see Blickfeld in a unique position here, as our technology stands out due to its mass market capability,” says Blickfeld co-founder Florian Petit. “But the mobility sector is not the only area of application for our LiDAR sensors and recognition software: Numerous other successful customer projects in logistics, smart cities or the security sector confirm our approach, as does the financial commitment of the venture capital unit of Continental, Bayern Kapital and our previous investors. We are now looking forward to taking the next steps into series production.”

The start-up, founded three years ago by Mathias Müller, Florian Petit and Rolf Wojtech, has grown to a team of over 100 people.


Mission publishes an interview with Cepton CEO Jun Pei:

"In the next decade or two Lidar will be just as common as cameras. The third dimension gives you an extra piece of data that’s critical while also removing a concern. Jun explains that there are more concerns with privacy when dealing with cameras. Lidar doesn’t have that issue because it doesn’t worry about facial recognition or color. It doesn’t measure the privacy-related data that people have issues with.

So with that said, the future is not about improving accuracy, it’s more about cost, reliability, and deployment in applications.
"

PRNewswire: SiLC Technologies, the developer of single-chip FMCW LiDAR, closes $12M in seed funding led by Dell Technologies Capital and joined by Decent Capital, ITIC Ventures, and several angel investors. SiLC will use the funding to scale its R&D and operations to develop its FMCW silicon photonic 4D+ Vision Chip platform.

The announcement follows a successful demo of the fully-integrated FMCW chip able to detect objects smaller than one and a half inches at a range of nearly 200 meters, translating to an effective resolution of around 0.01 degrees vertically and horizontally. This level of performance capability can enable a vehicle traveling at highway speed to stop or avoid objects at more than 200 meters range, a critical aspect of autonomous vehicle navigation and safety.
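
The quoted figures are self-consistent: a 1.5-inch object at 200 m subtends roughly 0.01 degrees, as a quick check shows:

    import math

    # Consistency check of the SiLC figures quoted above.
    size_m = 1.5 * 0.0254                # 1.5 inches in meters
    range_m = 200.0
    angle_deg = math.degrees(math.atan2(size_m, range_m))
    print(f"subtended angle: {angle_deg:.4f} deg")   # ~0.011 deg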

"This is my third startup and by far the most exciting, both at a technology level and the size of the markets it addresses. We believe we have an opportunity to transform several industries," said Mehdi Asghari, founder and CEO, SiLC. "Our 4D+ Vision Chip technology will not only make LiDAR a commercial reality but will also enable applications ranging from robotics to AR/VR to biometric scanning."


Here is the SiLC CEO presentation at AutoSens Brussels 2019:



TechBriefs interviews Velodyne CEO Anand Gopalan about the challenges in autonomous car design:

"On the autonomous side, there are two things that are very challenging. The first is that you are dealing with the tyranny of corner cases. There are a lot of critical corner scenarios that autonomous vehicles have to deal with, which require a lot more innovation in software, sensor, and computing hardware. For example, say you have an autonomous robo-taxi that has dropped a pedestrian at a curbside and now needs to pull back into the main traffic. It needs to make sure everything around the vehicle is safe: the passenger has moved away from the vehicle, there are no bicyclists zooming by, vehicles trying to pull in — all sorts of things you might not encounter in just riding down the street. People are dealing with what I call the tyranny of corner cases by sometimes modifying software and in some cases going back to the drawing board in terms of hardware.

The second aspect is speed. Fleets of vehicles are being deployed in some very dense urban environments, driving at 30 miles per hour or so. But in order to make a viable car you need to go to at least 40 to 45 miles per hour. This introduces many new challenges in terms of perception as well as speed of reaction.
"


AutoSens publishes the Espros CCD LiDAR presentation given in Brussels by Beat De Coi, founder and CEO:



Yole Forecasts 1M Robotic Vehicles by 2032

Image Sensors World

Yole Developpement report "Sensors for Robotic Mobility 2020" forecasts:

"Regardless of the naysayers, robotic vehicle technology will provide the Netflix of mobility before 2032.

Carmakers developing Advanced Driver Assistance System (ADAS) technology have now mainly chosen a camera-and-radar approach. As Mr E. Musk, the CEO of Tesla, said: “LiDAR is a fool’s errand […] in the automotive context”

Growth rates are expected to be impressive. In 2019 production of robotic vehicles was in the range of a few thousand worldwide. Yole analysts expect production volumes to reach 400K units annually, with cumulative production of 1M units, by 2032. This ramp-up forecast is based on a 51% compound annual growth rate (CAGR) for the next 15 years. By then, the total revenue associated with the production of robotic vehicles will reach $60B. 40% of that figure will originate from the vehicles themselves, 28% will come from sensing hardware, 28% from computing hardware and the remaining 4% will be from integration. This means that within 15 years complete industries will be structured around robotic vehicle technologies.

When looking closer to the present, in 2024 Yole analysts expect sensor revenues to reach $0.4B for LiDAR, $60M for radar, $160M for cameras, $230M for IMUs and $20M for GNSS devices. The split between the different sensor modalities may not stay the same for the 15 years to come.

Nevertheless the total envelope for sensing hardware should reach $17B in 2032, while, for comparative purposes, computing should be in the same range.
"


Quantum Semiconductor Proposes SiGeC-on-CMOS Integration for Imaging and LiDARs

Image Sensors World

An SBIR Transition flyer on Quantum Semiconductor LLC's work on SiGeC super-lattice integration in a CMOS process describes the advantages of this approach:

"Product development is being planned for chip design and manufacture at a US-based BiCMOS foundry. The Gen 1 prototype sensor is a 128x128 CMOS sensor array with near single photon counting, high dynamic range, suitable for passive imaging and LIDAR, operating in Visible and NIR, with large internal gain (greater than 100K) at low voltages (less than 3.5V).

The development of Group-IV superlattice films capable of covering SWIR to 1.6µm with a coefficient of absorption comparable to that of InGaAs, is currently underway. Gen 2 sensors will incorporate Group IV superlattices into photo-diodes with large internal gain, to make large 1 MegaPixel CMOS Image Sensor arrays for near-photon counting, high dynamic range in SWIR.
"


UX Factory Image Sensor with Integrated AI

Image Sensors World

engnews24h.com quotes Park Jun-young, CEO of the Korean startup UX Factory, saying that the company has developed an image sensor with an integrated AI engine:

"This 'cognitive' sensor, which will be located next to the main image sensor, is designed to be used only for face recognition, object recognition, and QR code recognition. AI chip technology from UX Factory and technology from a domestic image sensor design company are combined. This chip is an ultra-low-power chip that reduces the amount of power when an electronic device recognizes an object by a hundredth of a conventional sensor.

“If the existing image sensor's object recognition operating power was 1 W (watt), the product developed this time can reduce the power to a maximum of 10 mW (milliwatt).”


The new image sensor appears to be a continuation of the K-Eye cooperation project with KAIST presented at ISSCC in 2017.

"The goal of Park is to promote a sample chip in the second half, and then to mass-produce the chip in the first half of next year and apply it to home appliances."


Huawei Smartphone Claimed to Measure Human Body Temperature with Si-based Cameras

Image Sensors World

cnTechPost quotes an online interview with Huawei Consumer Business CEO Richard Yu:

"When asked if the P40 series has some functional design in terms of hygiene, Yu mentioned that the P40 Pro+ can detect human body temperature very accurately through a rear camera with a unique algorithm.

Yu added that its global sales team initially stated that such a feature was not needed, but the Chinese team insisted. Of course, Yu pointed out that Huawei cares about user privacy, and related functions require consumer authorization to turn on.

According to Yu's outlook, with AI training and sensor cooperation, Huawei products will have a bigger stage in the future. He also previewed an app that can detect data such as breathing rate, pressure value, heart rate, etc., which is currently ready and will be the first to be launched for Chinese users.
"


Tamron 20mm f2.8 Di III M1:2 review – sample images

Cameralabs

The Tamron 20mm f2.8 Di III OSD M1:2 is an ultra-wide prime lens for Sony’s E-mount mirrorless cameras. It's part of a three-lens series from Tamron, all of which let you focus closer than most rivals. Ahead of my full review I have a selection of sample images for you!…

The post Tamron 20mm f2.8 Di III M1:2 review – sample images appeared first on Cameralabs.


TechInsights Finds Sony ToF Sensor Inside iPad Pro LiDAR, iFixit Tests LiDAR Operation

Image Sensors World

TechInsights tweets the first info from the Apple iPad Pro 2020 teardown, saying that the LiDAR sensor is made by Sony.

Update: TechInsights fixed a typo in the original tweet. The spatial resolution is 0.03MP, 10x lower than initially reported.

"TechInsights has begun the teardown process of #Apple iPad Pro (Model A2068). Our early findings indicate a 4.18 mm x 4.30 mm (18.0 mm²) #Sony ToF sensor with 0.03 MP resolution & 10 µm pitch pixels within the #LiDAR system. Our analysis continues with in-depth reports to follow."


iFixit publishes a teardown video showing that the LiDAR IR illumination pattern is less dense than the FaceID one:



Current-Assisted SPAD

Image Sensors World

Vrije Universiteit Brussel, Belgium, publishes a paper "Current-Assisted Single Photon Avalanche Diode (CASPAD) Fabricated in 350 nm Conventional CMOS" by Gobinath Jegannathan, Hans Ingelberts, and Maarten Kuijk.

"A current-assisted single-photon avalanche diode (CASPAD) is presented with a large and deep absorption volume combined with a small p-n junction in its middle to perform avalanche trigger detection. The absorption volume has a drift field that serves as a guiding mechanism to the photo-generated minority carriers by directing them toward the avalanche breakdown region of the p-n junction. This drift field is created by a majority current distribution in the thick (highly-resistive) epi-layer that is present because of an applied voltage bias between the p-anode of the avalanching region and the perimeter of the detector. A first CASPAD device fabricated in 350-nm CMOS shows functional operation for NIR (785-nm) photons; absorbed in a volume of 40 × 40 × 14 μm3. The CASPAD is characterized for its photon-detection probability (PDP), timing jitter, dark-count rate (DCR), and after pulsing."


Sony Statement on Coronavirus Impact

Image Sensors World

Sony releases "Statement Regarding the Impact of the Spread of the Novel Coronavirus:"

"At this time, there has been no material impact on the production of CMOS image sensors, including any impact on the procurement of materials. However, Sony's primary customers in this segment are smartphone makers who rely on supply chains in China, and although recovery in these supply chains has led to sales gradually returning to normal levels, there is a risk that going forward sales could be impacted by a slowdown in the smartphone market."


Cambridge Mechatronics 3D Sensing Technology

Image Sensors World

Cambridge Mechatronics uses the Apple iPad Pro LiDAR announcement as an opportunity to emphasize the advantages of its 3D sensing technology:

"Systems using Indirect Time of Flight (iToF) technology have shipped in Android smartphones for some time, but their practical working range is only around two metres. This has limited their use to camera enhancements such as portrait photo background blurring. Apple advise their Direct Time of Flight (dToF) technology has a useful range of five metres.

To unlock the broadest range of AR user experiences, accurately measuring depth of ten metres or more is necessary. All technologies in use today compromise system resolution and performance when increasing range. However, CML has developed technology combining optical components, actuators and software to increase working range to ten metres and more without any compromise to measurement resolution or performance. This gives a best of both worlds solution targeted at smartphones, tablets and other mobile devices.

CML’s 3D sensing enhancement technology is available to licence now. We are working with our global partners, including major device brands and their supply chains, to bring the most engaging and immersive next generation AR experiences to consumers.
"


Update: A PCT Patent Application WO2020030916 "Improved 3D Sensing" by David Richards and Joshua Carr describes the company's approach:

"...there is provided an apparatus for use in generating a three-dimensional representation of a scene, the apparatus comprising: a time-of-flight (ToF) imaging camera system comprising a multipixel sensor and a light source and arranged to emit illumination having a spatially-nonuniform intensity over the field of view of the sensor; and an actuation mechanism for moving the illumination across at least part of the field of view of the sensor, thereby enabling generation of the representation. This may be achieved without moving the sensor.

The non-uniform illumination may be any form of illumination, including a beam of light, a pattern of light, a striped pattern of light, a dot pattern of light.
"


ST Announces 3rd Generation Global Shutter Stacked Sensors

Image Sensors World

GlobeNewswire: STMicro aims its new high-speed global shutter image sensors at computer-vision applications. The new stacked sensors feature class-leading pixel size, high sensitivity, and low crosstalk.

The VD55G0 with 640 x 600 pixels and the VD56G3 with 1.5MP measure 2.6mm x 2.5mm and 3.6mm x 4.3mm, respectively, and are said to be the smallest on the market in relation to resolution. Embedded optical-flow processing in the VD56G3 calculates movement vectors without the need for host computer processing. Samples are shipping now to lead customers.

“These new global shutter image sensors are based on our third generation of advanced pixel technology and deliver significant improvements in performance, size, and system integration,” said Eric Aussedat, Imaging Sub-Group General Manager and EVP of the Analog, MEMS and Sensors Group, STMicro. “They are enabling another step forward in computer-vision applications, empowering designers to create tomorrow’s smart, autonomous industrial and consumer devices.”


Senseeker Announces 8 µm and 12 µm Pitch Dual-Band IR DROICs

Image Sensors World

Senseeker Engineering announces the Oxygen RD0092, the world's first 8 µm pitch dual-band digital readout IC (DROIC). The Oxygen RD0092 supports a 1280 x 720 frame size at over 500 fps and dual-polarity inputs to provide compatibility with all industry-standard direct-injection detector materials. The solution was designed to optimize infrared imaging system performance through state-of-the-art integrated features and multiple operating modes that offer flexibility for a wide range of high-performance application requirements.

"The RD0092 is our first off-the-shelf readout product and we wanted to make sure that it strikes the right balance between being feature-rich and easy to operate," said Thomas Poonnen, Director of Engineering. "You can change operating modes or window sizes on the fly and toggle detector polarity or checkerboard integration pattern between frames, all of which can be accomplished by flipping just a few bits."


Senseeker Engineering announces the Magnesium MIL RP0092, an advanced 12 µm pitch high dynamic range dual-band digital pixel readout IC (DPROIC). Product sales are restricted to customers who have approval from the U.S. Government. The Magnesium MIL RP0092 supports a 1280 x 720 frame size at up to 120 fps, with dual-polarity inputs to provide compatibility with all industry-standard direct-injection compatible detector materials.


Thanks to MJ for the pointer!


Yole on Coronavirus Impact on CIS Market

Image Sensors World

Yole Developpement's Q4 2019 quarterly monitor "CIS: Q4 2019 went way above forecast but this was before COVID-19" states:

MARKET DYNAMICS:

  • Q4 2019 is 17% above the revenue forecast and reaches US$5,746 million: 11.3% of the upside is due to volume and 9.1% to ASP.
  • The coronavirus outbreak will mostly affect the mobile and consumer CIS market, with a drop in the global smartphone market forecast expected in Q1 and Q2 2020.
  • 2019 YoY revenue growth is higher than expected and reaches 25%, with quarter-to-quarter growth of 38% in Q4 2019.
  • CIS YoY growth should slow down to 7% in 2020, and this number will be further hurt by the outbreak of COVID-19.
  • Long term growth should go below 10% within 5 years.

Y2019 NUMBERS: The best ever year for the CIS industry

This time reality exceeded Yole Développement (Yole) forecast quite significantly. Yole had predicted revenue of US$17.2b for 2019, and this prediction ended 11% below the confirmed numbers for the year. The extensive growth of CIS has brought this semiconductor specialty to revenues of US$19.3b in 2019, exceeding 4.6% of total semiconductor sales.

Reflecting on the year’s dynamics, Q1 and Q2 2019 had been underwhelming, both running 6% below expectation in a context of smartphone market saturation and trade war rhetoric. Q3 and Q4 totally reversed the gloomy trend of H1 2019, and the release of exciting smartphones with numerous cameras propelled the industry to overcapacity, bringing US$5.7 billion per quarter to the ecosystem…

Q1 & Q2 2020: Short term forecast will be impacted by COVID-19

“What we cannot predict at Yole is the possibility of a systemic recession,” comments Pierre Cambou, Principal Analyst, Imaging at Yole. “People will still buy smartphones and smart speakers in 2021 so the risk is more a contamination coming from the financial sector than a biological threat,” he adds.


CNN for Event-Based Sensors

Image Sensors World

The University of Zurich and ETH Zurich publish a video supplement to the paper "Event-based Asynchronous Sparse Convolutional Networks" by Nico Messikommer, Daniel Gehrig, Antonio Loquercio, and Davide Scaramuzza.

"Recently, pattern recognition algorithms, such as learning-based methods, have made significant progress with event cameras by converting events into synchronous dense, image-like representations and applying traditional machine learning methods developed for standard cameras. However, these approaches discard the spatial and temporal sparsity inherent in event data at the cost of higher computational complexity and latency. In this work, we present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output, thus directly leveraging the intrinsic asynchronous and sparse nature of the event data. We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks without sacrificing accuracy.

In addition, our framework has several desirable characteristics: (i) it exploits spatio-temporal sparsity of events explicitly, (ii) it is agnostic to the event representation, network architecture, and task, and (iii) it does not require any train-time change, since it is compatible with the standard neural networks' training process.

We thoroughly validate the proposed framework on two computer vision tasks: object detection and object recognition. In these tasks, we reduce the computational complexity up to 20 times with respect to high-latency neural networks. At the same time, we outperform state-of-the-art asynchronous approaches up to 24% in prediction accuracy.
"


Organic Photodetector Review

Image Sensors World

Applied Physics Reviews publishes a paper "Photodetectors based on solution-processable semiconductors: Recent advances and perspectives" by Yalun Xu and Qianqian Lin from Wuhan University, China.

"Along with the remarkable progress in the field of organics, those based on quantum dots, and recently emerged perovskite optoelectronics, photodetectors based on these solution-processable semiconductors have shown unprecedented success. In this review, we present the basic operation mechanism and the characterization of the performance metrics based on these novel materials systems. Then, we focus on the current research status and recent advances with the following five aspects: (i) spectral tunability, (ii) cavity enhanced photodetectors, (iii) photomultiplication type photodetectors, (iv) sensitized phototransistors, and (v) ionizing radiation detection. At the end, we discuss the key challenges facing these novel photodetectors toward manufacture and viable applications. We also point out the opportunities, which are promising to explore and may require more research activities."


Daniel Loeb Prepares New Push to Split Sony

Image Sensors World

The Financial Times and Reuters report that Daniel Loeb’s hedge fund Third Point LLC is using the lower Sony stock price to again build a stake in the company and push for changes that possibly include spinning off its image sensor business.

