Archives for July 2020

One Investor’s View on Sony CIS Business

Image Sensors World        Go to the original article...

Forbes contributor Stephen McBride publishes his view on Sony's CIS business:

"The growth in sensor sales over the past three years has been nothing short of remarkable. This year Sony will generate more profits from imaging than any of its other business lines. Sensors are on track to generate $1.92 billion in profits… 10% more than Sony’s long-established gaming arm.

Sony absolutely dominates the image sensor industry, accounting for over 50% of global sales.

The battle among smartphone firms to make better cameras has been a boon for Sony. It controls over 70% of the smartphone sensor market. And it’s been the exclusive maker of image sensors for every iPhone since 2010.

And when it comes to quality, Sony is in a league of its own. Its image sensors are so far ahead, it charges 2X as much as its closest competitor.

Right now, Sony’s world-class imaging business is flying under the radar. But as the computer vision boom takes off, I expect the stock to attract a lot of hype. And with image sensors becoming a larger part of Sony’s business, it could easily soar 300%+ in the coming years."


3D Camera and Analytics Startup Raises $51M


VentureBeat, Forbes: Density, a San Francisco-based startup building AI-powered 3D cameras, closes a $51M investment round, following $23M in previous funding.

Density uses 3D depth cameras and cloud-based AI software to enable social distancing and occupancy analytics. Pepsi, Delta, Verizon, Uber, Marriott, and ExxonMobil are among the clients that use its service to figure out which parts of their offices get the most use and which the least, and to deliver people-counting metrics to hundreds or thousands of their employees.

The Density 3D camera consists of over 800 components sourced from 137 supply chains. The camera attaches above a doorway and tracks movement with two Class 1 infrared lasers. The data is transferred via Wi-Fi to Density’s cloud-hosted backend, where it’s processed and analyzed. A web dashboard, SMS messages, signage, and mobile apps provide insights like the real-time capacity of a room and historical crowd sizes, while an API allows third-party apps, services, and websites to make use of the data in many other ways.
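The pipeline described above, anonymous door-crossing events aggregated into real-time capacity and historical crowd sizes, can be sketched in a few lines. This is an illustrative model only; the class, event format, and method names are invented for this sketch and are not Density's actual API.

```python
from dataclasses import dataclass, field

@dataclass
class DoorwaySensor:
    """Toy model of a doorway-mounted depth sensor: it only sees
    anonymous crossing directions, never identities or faces."""
    occupancy: int = 0
    history: list = field(default_factory=list)

    def crossing(self, direction: str) -> None:
        # direction is "in" or "out"; occupancy never goes negative
        if direction == "in":
            self.occupancy += 1
        elif direction == "out":
            self.occupancy = max(0, self.occupancy - 1)
        self.history.append(self.occupancy)

    def peak(self) -> int:
        """Historical crowd size, as a dashboard might surface it."""
        return max(self.history, default=0)

room = DoorwaySensor()
for event in ["in", "in", "in", "out", "in", "out"]:
    room.crossing(event)
print(room.occupancy, room.peak())  # → 2 3
```

Everything identity-related happens nowhere: only counts leave the doorway, which is the privacy property the article highlights.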

Density’s infrared tracking method offers a major advantage over other approaches: privacy. Unlike a security camera, its sensors cannot determine the gender or ethnicity of the people they track, nor perform invasive facial recognition. “It’s far easier to do a camera,” says Density CEO Andrew Farah. “But we believe the data pendulum has swung too far in one direction. It’s good to see people ask about data being collected … We knew that the right market was corporate clients with office space because our sensor can do occupancy detection inside of a room where a camera can’t go.”


Tower Updates on its Q2 2020 Imaging Business


SeekingAlpha: Tower's Q2 2020 earnings call has a few updates on the foundry's imaging business:

"Moving to our Sensor Business unit, we forecast single-digit year-over-year growth in spite of several segments being strongly adversely impacted by COVID-19; for example, dental X-ray sensors, where we have major market share. The impact is mainly seen in the form of customer requests to push out orders: dental clinics were closed for a long period and are still not back to full speed, hence customers are very inventory-cautious. Industrial sensors, namely manufacturing-line inspection, are also down as a function of reduced new manufacturing-line build-out. Due to our very strong CIS platform and compatibility, our fingerprint sensors, namely the under-OLED 101 sensor and the under-LCD lens-type modules, were developed and qualified in very short times. We target our first volume-production revenue in the fourth quarter, ramping throughout 2021.

Our time-of-flight program is moving along very well. We are prototyping the first sensor for our lead customer this quarter, with plans for production in the first half of next year. This would be our first product moving to mass production utilizing our stacked-wafer BSI pixel-level bonding platform.

And as stated, for us, one of the beauties of that market is that we were able to very quickly develop and qualify flows, because of the very strong and diverse CIS core that we have throughout the company, and take advantage of a market need. As far as time-of-flight, there is a variety of applications going out for time-of-flight. It is predominantly at 300 millimeter, and it is focused on backside illumination and stacked wafers. There is also more than one customer: blue-chip customers are signed up on long-term agreements. Of course, we have to perform, but that is a very substantial market for us.

But we have seen a very big reduction in the forecast of the image sensors for dental X-ray."


Yole Forecasts 2020-25 Camera Module Market


Yole Développement's report "CMOS Camera Module Industry for Consumer & Automotive 2020" forecasts the revenue of the global camera module market to expand from $31.3B in 2019 to $57.0B in 2025, at a 12.8% CAGR.
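As a quick sanity check of the arithmetic (our own calculation, not Yole's), the quoted 12.8% rate reproduces the $57.0B endpoint when compounded over five annual steps:

```python
base_2019, target_2025, cagr = 31.3, 57.0, 0.128  # $B, $B, 12.8%

# Five compounding steps reproduce the forecast endpoint:
projected = base_2019 * (1 + cagr) ** 5
print(round(projected, 1))  # → 57.2

# The rate implied by the two endpoints over five steps:
implied = (target_2025 / base_2019) ** (1 / 5) - 1
print(round(implied * 100, 1))  # → 12.7
```

Note that the stated CAGR fits five compounding periods; over the six calendar years 2019 to 2025 the implied rate would be about 10.5%, so the 12.8% figure presumably counts growth from a 2020 baseline.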

"Beyond the sensor itself, innovations in all the different subcomponents of the camera module are in high demand. The introduction of periscope lenses was a major event that allowed 5x or even 10x optical magnification within the existing thickness of mobile phones. Optical image stabilization (OIS) is another critical technology for photography especially for telephoto, hence players are also looking for innovations in this area, using new materials, MEMS or liquid lens to replace the Voice Coil Motor (VCM) approach.

Technical upgrades of camera modules include the “Active Alignment” process to precisely align multiple cameras. There will be several innovations in camera module integration, like pop-up cameras, side-up cameras, and under-screen cameras in the future.

The industry leader LG Innotek continues to maintain its position due to the large orders from Apple. Closing in are Ofilm and Sunny Optical, who have climbed to the second and third positions by relying on the strong domestic market in China, replacing Semco and Foxconn/Sharp. In the years to come the US-China trade war could play a big role in reorganizing the ranking of Compact Camera Module (CCM) players.


Memristor-Based Smart Image Sensor


National Science Review paper "Networking retinomorphic sensor with memristive crossbar for brain-inspired visual perception" by Shuang Wang, Chen-Yu Wang, Pengfei Wang, Cong Wang, Zhu-An Li, Chen Pan, Yitong Dai, Anyuan Gao, Chuan Liu, Jian Liu, Huafeng Yang, Xiaowei Liu, Bin Cheng, Kunji Chen, Zhenlin Wang, Kenji Watanabe, Takashi Taniguchi, Shi-Jun Liang, and Feng Miao from Nanjing University, China, and the National Institute for Materials Science, Japan, proposes a pixel array that can recognize objects:

"Compared to human vision, conventional machine vision composed of an image sensor and processor suffers from high latency and large power consumption due to physically separated image sensing and processing. A neuromorphic vision system with brain-inspired visual perception provides a promising solution to this challenge. Here we propose and demonstrate a prototype neuromorphic vision system by networking a retinomorphic sensor with a memristive crossbar. We fabricate the retinomorphic sensor by using WSe2/h-BN/Al2O3 van der Waals heterostructures with gate-tunable photoresponses, to closely mimic the human retinal capabilities in simultaneously sensing and processing images. We then network such a sensor with a large-scale Pt/Ta/HfO2/Ta one-transistor-one-memristor (1T1R) memristive crossbar, which serves a role similar to the visual cortex in the human brain. The realized neuromorphic vision system allows for fast letter recognition and object tracking, indicating the capabilities of image sensing, processing and recognition in the full analog regime. Our work suggests that such a neuromorphic vision system may open up unprecedented opportunities in future visual perception applications."


Brookman Presents its ToF Products Catalog


Brookman has updated its ToF products page with a number of cameras based on the BT008D sensor.

Brookman also congratulates Shizuoka University Prof. Shoji Kawahito, one of the company's founding members and its chairman, on receiving the “Tamba Takayanagi Award/Achievement Award”. "This is the result of excellent achievements in the pioneering research on high performance and high functionality of image sensors, which the professor has been working on for a long time, and their practical application. Currently, CMOS image sensors continue to evolve from conventional 2D imaging to 3D sensing. We will continue to do our utmost to return the accumulated research results to society by continuing to challenge the possibilities of image sensing."


e2v Unveils ToF Image Sensor


Teledyne e2v introduces Hydra3D, an 832 x 600 pixel resolution ToF CMOS sensor. Hydra3D is based on a 10µm three-tap pixel.

Update: GlobeNewswire: Ha Lan Do Thu, Marketing Manager for 3D imaging at Teledyne e2v, says, “We are very pleased to announce our newest Time-of-Flight sensor, the first multi-tap high resolution sensor on the market. Our partnership with Tower allows us to provide customers with the highest level of 3D performance, including uncompromised image quality in both 2D and 3D modes, in all operating conditions.”

Hydra3D comes with an evaluation kit (Hydra3D EK), enabling customers to evaluate the sensor in multiple application setups. The kit includes a compact 2/3-inch optical format calibrated module with a near-infrared light source and optics. Two versions will be available, targeting the time-of-flight principle at short-range distances (up to 5 metres) or mid-range distances (up to 10 metres), with a field of view of 60° x 45° or 40° x 30°, while capturing real-time 3D information at full resolution.
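From the quoted specifications one can estimate the scene footprint each kit variant covers at its maximum range. This is our own back-of-the-envelope sketch, assuming a rectilinear projection and pairing the wider field of view with the short-range variant (the pairing is our assumption, not stated explicitly above):

```python
import math

def scene_size(fov_h_deg: float, fov_v_deg: float, distance_m: float):
    """Width and height of the imaged area at a given distance for a
    rectilinear lens with the given horizontal/vertical field of view."""
    w = 2 * distance_m * math.tan(math.radians(fov_h_deg / 2))
    h = 2 * distance_m * math.tan(math.radians(fov_v_deg / 2))
    return w, h

# Short-range variant: 60° x 45° at up to 5 m
print(scene_size(60, 45, 5))   # ≈ (5.8 m, 4.1 m)
# Mid-range variant: 40° x 30° at up to 10 m
print(scene_size(40, 30, 10))  # ≈ (7.3 m, 5.4 m)
```

Both variants thus frame roughly a room-sized scene at their respective working distances.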

Rafael Romay, VP of Professional Imaging at Teledyne e2v, says: “The great technology innovation and partnership with Tower has been key in the development of this innovative new ToF image sensor, helping us to bring to market this best-in-class solution.”

Avi Strum, SVP and GM of the Sensors and Display BU at Tower, added: “We are very excited about the release of Hydra3D. Our strong partnership with Teledyne e2v goes back more than 15 years, and many of their state-of-the-art products are manufactured by Tower. The Hydra3D ToF product aligns well with Tower’s strategic investment in the ToF market. We look forward to many other Teledyne e2v products utilizing our world-class CIS technology.”

Samples will be available in August 2020 and evaluation kits will be available in September 2020.


Sony Transfers its Fabs Management to NEC Facilities


Sony: NEC Facilities and Sony Semiconductor Manufacturing have agreed to jointly establish SSN Facilities, a new company that manages facilities at semiconductor production bases. SSN Facilities will undertake facility management, repair work, maintenance work, etc. for Sony Semiconductor Manufacturing from September 1 this year. “SSN” stands for Sustainable & Smart Next generation.

NEC Facilities has a wealth of experience and specialized human resources, knowledge, and know-how regarding factory facility management at manufacturing bases, mainly semiconductors and electronic components. With the establishment of SSN Facilities, as an outsourcer of facility management in the manufacturing industry, we aim to further expand our business, centering on the semiconductor manufacturing field, where demand continues to grow as digitalization continues.

Sony Semiconductor Manufacturing outsources facility management operations at seven production sites in Japan to SSN Facilities to ensure stable operation and maintenance of clean rooms and facilities, and to enhance and streamline both these operations and its own business.


Sony A7S III review

Cameralabs

The Sony A7S III is a high-end full-frame mirrorless camera aimed at pro videographers. It films 4k 120p, has a flip-screen, the best EVF to date and much more! Find out why it could be the best pro video camera in my review!…



QURV Startup to Develop SWIR Image Sensors


The Barcelona, Spain-based Institute of Photonic Sciences (ICFO) launches a spin-off company, Qurv Technologies. The new company develops wide-spectrum image sensor technologies and integrated solutions for computer vision applications, addressing the needs of an autonomous and intelligent new world.

Qurv’s graphene/quantum-dot image sensor platform technology allows operation from the visible to the SWIR range and can be integrated with current low-cost, high-manufacturability CMOS processes. Qurv’s "plug and play" approach aims to bring advanced machine vision capabilities to markets that are not accessible to current state-of-the-art SWIR sensors.

Qurv was incubated in the KTT Launchpad for more than six years and holds a portfolio of more than 10 patent families. The incubation received support from the Government of Catalonia, the Ministry of Economy, Industry and Competitiveness of Spain, the European Research Council, the Barcelona City Council and the Castelldefels City Council, the European Regional Development Funds allocated to Generalitat de Catalunya for emerging technology clusters, and the European Union’s Horizon 2020 research and innovation program.

Antonios Oikonomou, Qurv’s CEO, comments, "Nature itself hides a vast amount of information beyond what is visible. By harnessing and efficiently processing this information, a new era in health, security and decision-making will emerge. However, no mass-deployable solution exists to provide these capabilities at scale and to everyone. With the immense support of the KTT unit at ICFO, we are now ready to achieve precisely this: to bring a technology once available only in the lab to the world."

Stijn Goossen, the company’s CTO, adds, “Our unprecedented expertise in the graphene/quantum-dot stack puts us in an optimal position to leverage the benefits of integration with silicon CMOS technology in terms of functionality, performance and addressable markets. World-renowned experts in graphene (Prof. Frank Koppens) and quantum dots (Prof. Gerasimos Konstantatos) have been key in the early technology development. We are delighted to announce that they will take up the role of scientific advisors to the company while further maturing the technology.”


Photron Presents FastCam with 4MP 1440fps APS-C Image Sensor


Photron unveils FASTCAM NOVA R2 with "unique CMOS image sensor technologies:"

  • 4MP resolution
  • 2048 x 2048 pixels at 1,440fps
  • 1920 x 1080 pixels at 2,560fps
  • ISO 8,000 monochrome
  • ISO 2,500 color
  • Global Electronic Shutter: 1ms to 2.7μs independent of frame rate
  • 12b ADC


US Blacklists O-Film


Nikkei, Reuters: Camera module maker O-Film is one of 11 Chinese companies added to the U.S. Commerce Department's Entity List over alleged human rights abuses involving China's Uighur Muslim minority. The Shenzhen-listed O-Film supplies its camera module to many companies, including Apple, Microsoft, HP, Dell, General Motors, Amazon, Samsung, Huawei, Oppo, ZTE, and Sony.

"Since the establishment of its CCM business unit in 2012, OFILM has been focusing on the development and manufacturing of image modules. In just four years, it has become the largest manufacturer of image modules in the world, and the gap with other module manufacturers has expanded year by year. According to the market research firm TSR, the market share of OFILM’s CCM shipments exceeded 20% in 2018, ranking first in the industry."

O-Film publishes an official statement on the blacklisting:

"As a global leader in technological innovation and advanced high-technology manufacturing, OFILM has always abided by the laws and regulations of the nations where we operate. We treat our employees equally and protect their rights and interests. Every year, multiple times, including in 2020, OFILM has passed independent third-party Corporate Social Responsibility workplace audits organized by our customers, including surprise inspections. We have received the RBA certification of the Responsible Business Alliance.

Ofilm employees are never coerced to work for us.

With great respect, we call on the United States to re-examine its recent decision. We look forward to communicating fully with the relevant US government departments. We also look forward to a full airing of the facts and to a just outcome."


LiDAR News: Leddartech, Xaos, IDTechEx, Velodyne, Uber


Leddartech's Frantz Saintellemy, President and COO, features in a podcast.

China-based Xaos Sensor presents its MEMS-based LiDARs priced at $200.

IDTechEx publishes a video reviewing the different LiDAR approaches on the market.

Forbes contributor Sabbir Rangwala publishes his analysis of Velodyne's merger with GRAF and going public:

  • Velodyne's valuation post-deal close has grown from ~$1.8B (Velodyne was estimated at a valuation of $1.6B before the merger announcement) to ~$3B as of July 21, 2020
  • The ASP per unit drops from $7K in 2019 to $600 in 2024, roughly a 10X reduction, which the plan offsets with a corresponding ~10X increase in volumes shipped
  • The growth in market share and unit volumes is based on a transition from the 360° FOV LiDAR products (Surround LiDAR, which Velodyne has traditionally dominated) to the Vela series of products, in which there is significant competition and which Velodyne is just starting to develop
  • Finally, profitability and cash flow: Velodyne currently loses about $50M/year and breaks even by 2023 as the Vela products kick in
  • The above analysis indicates that a part of their growth will need to come from acquisitions
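The pricing arithmetic in the bullets above is easy to verify (our own calculation; the 2019 baseline unit volume below is hypothetical, chosen only to illustrate the revenue-neutral multiple):

```python
asp_2019, asp_2024 = 7_000, 600  # dollars per unit, from the analysis above

reduction = asp_2019 / asp_2024
print(round(reduction, 1))  # → 11.7, i.e. roughly the "10X" cited

# For unit revenue to stay flat at these prices, shipped volumes
# would have to grow by the same multiple; with a hypothetical
# baseline of 10,000 units in 2019:
units_2019 = 10_000
print(round(units_2019 * reduction))  # → 116667
```

In other words, the plan implicitly assumes volume growth of at least an order of magnitude just to offset the price decline.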

Bloomberg reports that Uber considers the guilty plea by its former LiDAR engineer Anthony Levandowski proof that he’s a liar, and it supports its decision to make Levandowski alone shoulder the $180M legal award Google won against him.

He agreed to plead guilty to Google-Waymo LiDAR trade-secret theft and was driven into bankruptcy when Google won a contract-breach arbitration case against him. Levandowski was counting on the promise Uber made when it first hired him: to provide legal cover, known as indemnification, against claims from his former employer.

Uber now says it has no obligation to reimburse Levandowski for the $180M.


Past, Present, and Future of Face Recognition


A preprint paper "Past, Present, and Future of Face Recognition: A Review" by Insaf Adjabi, Abdeldjalil Ouahabi, Amir Benzaoui, and Abdelmalik Taleb-Ahmed from the University of Bouira, Algeria, the University of Tours, France, and the University of Valenciennes, France, reviews the challenges facing face recognition algorithms:

"Face recognition is one of the most active research fields of computer vision and pattern recognition, with many practical and commercial applications including identification, access control, forensics, and human-computer interaction. Significant methods, algorithms, approaches, and databases have been proposed over recent years to study constrained and unconstrained face recognition. 2D approaches have reached some degree of maturity and report very high recognition rates. This performance is achieved in controlled environments where the acquisition parameters, such as lighting, angle of view, and camera-subject distance, are controlled. However, if the ambient conditions (e.g., lighting) or the facial appearance (e.g., pose or facial expression) change, this performance degrades dramatically. 3D approaches were proposed as an alternative solution to the problems mentioned above. The advantage of 3D data lies in its invariance to pose and lighting conditions, which has enhanced the efficiency of recognition systems. 3D data, however, is somewhat sensitive to changes in facial expression. This review presents the history of face recognition technology, the current state-of-the-art methodologies, and future directions."


Jim Janesick’s Work at SRI


SRI publishes an article about Jim Janesick's recent work on image sensors for space astronomy:

"Janesick, senior principal research scientist at SRI’s Advanced Imaging lab, has been with the institute for 20 years and before that was at NASA’s famed Jet Propulsion Laboratory (JPL) for 22 years.

Janesick is the designer of SRI’s CMOS spaceborne imagers onboard the European Space Agency’s (ESA) Solar Orbiter launched in 2020, and NASA’s Parker Solar Probe launched in 2018, missions that orbit the sun to study solar physics. Janesick notes that “after many years of advanced development, SRI’s CMOS imagers were awarded a TRL6 rating,” referring to the Technology Readiness Level (TRL) scale of 1 to 9 that NASA uses. “Once the team was at TRL6 along with successful ground-based prototype demonstrations, NASA gave the green light to use SRI’s CMOS imager in an instrument called the Solar and Heliospheric Imager, or SoloHI. This automatically gave the same rating to the Wide-Field Imager for Parker Solar Probe (WISPR) instrument since both missions use the same CMOS imager.”

NASA and ESA selected SRI’s imagers because they were designed and fabricated to withstand the sun’s harsh radiation environment over several years at close range. As such, the spacecraft are capable of capturing the closest images of the sun.

As the Parker Probe and Solar Orbiter proceed with their missions, Janesick continues his as well. These days, he is most excited about two upcoming SRI missions: the Europa Clipper spacecraft, scheduled for a 2024 launch, and the Geostationary Operational Environmental Satellite (GOES)-U, also scheduled for 2024. GOES will fly a solar instrument called the Compact Coronagraph (CCOR), and the Europa Clipper will fly a Jupiter-oriented instrument named the Europa Imaging System (EIS). GOES will use the same CMOS imager as SoloHI. The Europa spacecraft will have the first large-scale flight-approved CMOS imager ever flown (2k x 4k pixels). “We do extensive testing and a selection process to find several perfect flight candidates, and we’re at that stage now for Europa,” Janesick states.

Jim Janesick is known to broad circles of image sensor designers for writing a book on the Photon Transfer Curve (PTC), one of the most important characterization tools today. He received the Exceptional Lifetime Achievement Award from the International Image Sensor Society in 2019.


EETimes: CIS Business Not Affected by Coronavirus


EETimes reporter Junko Yoshida publishes an interview with Yole Développement analysts, "Covid Economy: How Damaged Are We?" The CIS business is still going strong in spite of the pandemic.


Espros on ToF Illuminator Importance


Espros' July 2020 Newsletter talks about the importance of the ToF light emitter:

Due to the high illumination power, significant heat generation by the illumination warms up not only the illuminator but the whole camera. Thus, good thermal management is key. Heat dissipation is required to keep the illumination as cool as possible.

Note that the illumination power decreases significantly at higher temperature. The radiance of the LED in Figure 1 drops by 20% from room temperature to 100°C (junction), which reduces the operating range of the ToF camera at high temperature.

Note also that the rise and fall times of LEDs are current-dependent, as shown in Figure 2. The lower the current, the longer the rise and fall times. A variation in rise or fall time generates a significant distance shift. In the example shown in Figure 2, the change in rise/fall time is approx. 18ns between currents of 100 and 3000mA. Without extra calibration and compensation, a distance shift of 2.7m can be observed! This is really significant.
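The 2.7m figure follows directly from the round-trip time-of-flight relation d = c·Δt/2; a quick check of the newsletter's numbers (our own arithmetic):

```python
C = 299_792_458.0  # speed of light, m/s

def distance_shift(delta_t_s: float) -> float:
    """Range error caused by a timing shift in a ToF system:
    light travels out and back, hence the factor of 1/2."""
    return C * delta_t_s / 2

shift = distance_shift(18e-9)  # 18 ns rise/fall-time change
print(round(shift, 2))  # → 2.7 m, matching the newsletter
```

Equivalently, every nanosecond of uncompensated timing error costs about 15 cm of range accuracy, which is why the calibration rules below matter.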

Rules of thumb:
  • A good thermal management of the illumination is key.
  • When operating the camera with different LED currents, a separate calibration with at least offset compensation is required.
  • Constant illumination power during the whole measurement cycle is key.
  • Make sure that the illumination covers the required field of view, but not more.
  • The modulation waveform is not important because 4th order harmonics or other effects are calibrated and compensated during runtime.


4D Light-in-Flight Imaging with SPADs


EPFL and Canon paper "Superluminal Motion-Assisted 4-Dimensional Light-in-Flight Imaging" by Kazuhiro Morimoto, Ming-Lo Wu, Andrei Ardelean, and Edoardo Charbon presents xyzt (4D) capture of light propagation.

"Advances in high speed imaging techniques have opened new possibilities for capturing ultrafast phenomena such as light propagation in air or through media. Capturing light-in-flight in 3-dimensional xyt-space has been reported based on various types of imaging systems, whereas reconstruction of light-in-flight information in the fourth dimension z has been a challenge. We demonstrate the first 4-dimensional light-in-flight imaging based on the observation of a superluminal motion captured by a new time-gated megapixel single-photon avalanche diode camera. A high resolution light-in-flight video is generated with no laser scanning, camera translation, interpolation, nor dark noise subtraction. A machine learning technique is applied to analyze the measured spatio-temporal data set. A theoretical formula is introduced to perform least-square regression, and extra-dimensional information is recovered without prior knowledge. The algorithm relies on the mathematical formulation equivalent to the superluminal motion in astrophysics, which is scaled by a factor of a quadrillionth. The reconstructed light-in-flight trajectory shows a good agreement with the actual geometry of the light path. Our approach could potentially provide novel functionalities to high speed imaging applications such as non-line-of-sight imaging and time-resolved optical tomography."


LiDAR News: Ibeo, Velodyne, Hesai, Cepton, Luminar, SiLC


Ibeo publishes a webinar explaining the company's approach to the solid-state LiDAR:

In another webinar, Ibeo presents its view on AI challenges and solutions in autonomous driving.

BusinessWire, BusinessWire: Velodyne announces a long-term global licensing agreement with Hesai Photonics Technology encompassing 360° surround-view lidar sensors. As a result of this agreement, Velodyne and Hesai have agreed to dismiss current legal proceedings in the U.S., Germany and China that exist between the two companies.

“We think this agreement will expand the adoption of lidar world-wide and save lives,” says David Hall, Velodyne Founder and Chairman of the Board. The relationship with Hesai is the third major licensing agreement for Velodyne’s lidar technology.

BusinessWire: Cepton hires Andrew Klaus as country manager for Japan; he previously held the same position at Innoviz.

Earlier this year, Cepton concluded a successful Series C financing round led by Koito Manufacturing, the world’s largest automotive lighting Tier 1. Around the same time, Cepton expanded its business team in Europe with the appointment of two Directors of Product Management and Marketing.

BusinessWire: Luminar announces an expansion of its leadership as it drives into its next phase of growth in automotive. Over the next 18 months, the company is scaling its technology into series production, starting with Volvo in 2022, and will begin shipping its Iris sensing and perception platform within the year. Luminar hired 5 new executives from ZF, Mobileye, Magic Leap, and Goldman Sachs.

PRNewswire: SiLC, a developer of single-chip FMCW LiDAR, announces that Frost & Sullivan has selected the company for its 2020 North American 3D/4D LiDAR Imaging Industry Technology Innovation Award. This recognition comes on the heels of SiLC being selected by EETimes as one of the emerging silicon startups to watch worldwide. According to the Frost & Sullivan report, SiLC's proprietary 4D LIDAR chip is ideally positioned to disrupt the global LIDAR market due to its unique capabilities with broad applications, including autonomous vehicles, machine vision, and augmented reality.

"We're delighted by this recognition and greatly appreciate the depth and quality of Frost & Sullivan's analysis of SiLC's breakthrough Smart 4D Vision Sensor technology," said Mehdi Asghari, CEO of SiLC. "We also appreciate that Frost & Sullivan highlighted the breadth of our technology, which has the potential to replace ToF-based LiDAR sensors used in applications from automotive advanced driver assistance systems (ADAS) and self-driving autonomous vehicles to augmented reality, security, and industrial machine vision."


ST Reports Decline in Imaging Sales


SeekingAlpha publishes ST Q2 earnings call with updates on the company's imaging business:

"Net revenues were $2.09 billion, down 6.5% on a sequential basis. As expected, this was due to a decline in Automotive, Analog and Imaging products, partially offset by growth in Microcontrollers, Digital and Power Discrete.

Net revenues decreased 4% year-over-year with lower sales in Imaging, Automotive and MEMS, partially offset by higher sales in microcontrollers, digital, analog and power discrete.

AMS [division] revenues decreased 10.1%, with MEMS and Imaging lower, while Analog sales were higher.

During the quarter, we also won sockets for our global shutter automotive imaging solution for driver monitoring systems from two major OEMs. And this is an important step in our diversification strategy related to optical sensing solutions."


Smartsens SC500AI Sensor Improves Read Noise to 0.63e-


PRNewswire: SmartSens announces the SC500AI widescreen smart image sensor. The majority of 5MP security camera sensors offer a 4:3 aspect ratio that is not optimized for the widescreen format of modern LCD displays. The SmartSens SC500AI addresses this shortcoming with 1620P 16:9 5MP video output in the same form factor.

In a side-by-side comparison with SmartSens' previous-generation sensor, the new SC500AI reduces the dark current at 80˚C from 389 e- to 210 e-. The total read noise (RN) is reduced from 0.75 e- to 0.63 e-. The sensitivity also shows a noticeable improvement, growing from 2800 mV/lux-sec to 3680 mV/lux-sec.
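Expressed as relative changes, the quoted figures work out as follows (our own arithmetic on the numbers above, not SmartSens' published percentages):

```python
metrics = {
    # metric: (previous generation, SC500AI)
    "dark current (e-, 80C)":   (389, 210),
    "read noise (e-)":          (0.75, 0.63),
    "sensitivity (mV/lux-sec)": (2800, 3680),
}

for name, (old, new) in metrics.items():
    change = (new - old) / old * 100
    print(f"{name}: {change:+.0f}%")
# → dark current -46%, read noise -16%, sensitivity +31%
```

So the headline read-noise gain is modest, while the dark-current and sensitivity improvements are the larger contributors to low-light image quality.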

These improvements are made possible by SmartSens' unique SFCPixel technology, which takes advantage of the close proximity between the source follower and the photodiodes to increase the sensitivity level, producing high-quality night-vision images. SmartSens' proprietary PixGain technology additionally enables the sensor to achieve excellent HDR performance even under glaring sunlight.

Existing customers of SmartSens' previous-generation P2P products can upgrade to the SC500AI with no system changes; the sensor is compatible with the wide array of 1/3-inch 5MP lenses used in professional security products.

"SmartSens continues to build on its AI series of sensors, offering our customers a range of solutions utilizing the latest photosensor technologies to address the most challenging lighting conditions," said Chris Yu, Chief Marketing Officer of SmartSens. "We continue to strengthen our portfolio to match new applications and our customers' quickly-evolving needs."

The SC500AI Image Sensor is available for testing immediately.

Go to the original article...

Development of Reliable WLCSP for Automotive Applications

Image Sensors World        Go to the original article...

MDPI paper "Development of Reliable, High Performance WLCSP for BSI CMOS Image Sensor for Automotive Application" by Tianshen Zhou, Shuying Ma, Daquan Yu, Ming Li, and Tao Hang from Shanghai Jiao Tong University, Xiamen University, and Huatian Technology (Kunshan) Electronics belongs to a Special Issue "Smart Image Sensors."

"To meet the urgent market demand for small package size and high reliability performance for automotive CMOS image sensor (CIS) application, wafer level chip scale packaging (WLCSP) technology using through silicon vias (TSV) needs to be developed to replace current chip on board (COB) packages. In this paper, a WLCSP with the size of 5.82 mm × 5.22 mm and thickness of 850 μm was developed for the backside illumination (BSI) CIS chip using a 65 nm node with a size of 5.8 mm × 5.2 mm. The packaged product has 1392 × 976 pixels and a resolution of up to 60 frames per second with more than 120 dB dynamic range. The structure of the 3D package was designed and the key fabrication processes on a 12-inch wafer were investigated. More than 98% yield and excellent optical performance of the CIS package was achieved after process optimization. The final packages were qualified by AEC-Q100 Grade 2."

Go to the original article...

Assorted News: ST, Sony, ON Semi, ASE, Photon Force

Image Sensors World        Go to the original article...

GlobeNewswire: ST reports that Aura Aware is using ST’s FlightSense technology in a smart distance-awareness portable device suitable for use at retail counters and check-in desks. The easy-to-setup device displays a green OK signal that changes to red if a person crosses a safe minimum-distance threshold.

AnandTech reports that, more than four years after being acquired by Sony, Altair Semiconductor is renaming itself Sony Semiconductor Israel. The AI inference processor integrated into the new IMX500/501 sensors was developed by Altair/Sony Semiconductor Israel.

“We have been honored to be part of Sony for the past four years, playing a key role in the company’s core business,” says Sony Semiconductor Israel CEO Nohik Semel. “To better reflect our long-term commitment to our partners and customers, as well as the quality of our offering, we have decided to change Altair’s company name to Sony.”

ON Semi publishes a promotional video about robotic vision applications:

Digitimes reports that ASE starts mass production of LiDAR modules in 2H2020: Taiwan's backend house ASE Technology is expected to start volume production of LiDAR modules in the second half of 2020 as it has indirectly entered supply chains of first-tier automakers through its international clients. ASE Technology is said to aggressively incorporate AI technology to support smart production of ToF LiDARs.

On the LiDAR topic, Forbes contributor Sabbir Rangwala publishes a comparison table of possible mounting spots for LiDAR in a car:

Edinburgh, UK-based Photon Force, a provider of time-resolved SPAD cameras, has received a Business Start-Up Award from the Institute of Physics (IOP).

"Founded in 2015 as a spin-out from Robert Henderson’s renowned CMOS Sensors and Systems Group at the University of Edinburgh, Photon Force has won the IOP accolade for the development of its ground-breaking sensors that enable ultrafast, single photon sensitive imaging. Photon Force sensors are used worldwide and facilitate progress in applications including quantum physics, communications and biomedical imaging/neuroscience."

Go to the original article...

Thesis on SPAD Integration into 28nm SOI Process

Image Sensors World        Go to the original article...

INL (Institut des Nanotechnologies de Lyon, France) publishes the PhD thesis "Integration of Single Photon Avalanche Diodes in Fully Depleted Silicon-on-Insulator Technology" by Tulio Chaves de Albuquerque. It starts with a nice introduction to generic SPAD technology and then covers the integration of SPADs into the FDSOI process.

Go to the original article...

AMOLED Displays with In-Pixel Photodetector

Image Sensors World        Go to the original article...

Intechopen publishes a book chapter "AMOLED Displays with In-Pixel Photodetector" by Nikolaos Papadopoulos, Pawel Malinowski, Lynn Verschueren, Tung Huei Ke, Auke Jisk Kronemeijer, Jan Genoe, Wim Dehaene, and Kris Myny from Imec.

"The focus of this chapter is to consider additional functionalities beyond the regular display function of an active matrix organic light-emitting diode (AMOLED) display. We will discuss how to improve the resolution of the array with OLED lithography pushing to AR/VR standards. Also, the chapter will give an insight into pixel design and layout with a strong focus on high resolution, enabling open areas in pixels for additional functionalities. An example of such additional functionalities would be to include a photodetector in pixel, requiring the need to include in-panel TFT readout at the peripherals of the full-display sensor array for applications such as finger and palmprint sensing."

Meanwhile, Vkansee works with China-based Tianma to productize its on-display optical fingerprint sensor:

"Vkansee’s proprietary Matrix Pinhole Image Sensing (MAPIS) – is integrated into the mobile phone OLED display panel, effectively turning the entire display into a high-resolution fingerprint lens allowing simple installation of the image sensor anywhere or everywhere under the OLED display screen. Unlike other solutions that implement FOD and yield poor quality fingerprint images, the MAPIS OLED solution captures high-quality images, because the in-panel optical design avoids the influence of obstructing TFT driver circuits."

“We are focused on bringing our novel MAPIS optical fingerprinting technology to users across the globe to improve security and convenience, and hope to make MAPIS optics a standard design of OLED,” stated Jason Chaikin, President of VKANSEE. “In partnership with Tianma, we’re confident this will happen in the near future. We believe integrating the MAPIS optics into the OLED screen will greatly change the fingerprint sensor industry, similar to the history of integrating touch sensing into the OLED screen.”

Go to the original article...

CMOS Sensor Pioneer Gene Weckler Passed Away

Image Sensors World        Go to the original article...

Gene Peter Weckler died of complications from Alzheimer’s on December 3, 2019. He was 87 years old.

Among his significant contributions to image sensor technology, in 1967 Gene published a seminal paper, “Operation of pn junction photodetectors in a photon flux integrating mode,” in the IEEE Journal of Solid-State Circuits. Nearly every image sensor built since then has operated in this mode. Gene also published several early papers on what we now call passive pixel image sensors during his time at Fairchild.

In 1971 he co-founded RETICON to further commercialize the technology. RETICON was acquired by EG&G in 1977. Gene stayed with EG&G for twenty years serving in many management roles including Director of Technology for the Opto Divisions. In 1997 Gene co-founded Rad-icon to commercialize the use of CMOS-based solid-state image sensors for use in x-ray imaging. Rad-icon was acquired by DALSA in 2008. Gene retired from full time work in 2009 but continued as a member of the Advisory Board for the College of Engineering at Utah State University.

In 2013, Gene Weckler received the International Image Sensor Society (IISS) Exceptional Lifetime Achievement Award.

An oral history recording can be found here:

Go to the original article...

Resolving Fast Movement in Low Light with QIS

Image Sensors World        Go to the original article...

Purdue University publishes its paper presented at the 16th European Conference on Computer Vision (ECCV) 2020, "Dynamic Low-light Imaging with Quanta Image Sensors," by Yiheng Chi, Abhiram Gnanasambandam, Vladlen Koltun, and Stanley H. Chan.

"Imaging in low light is difficult because the number of photons arriving at the sensor is low. Imaging dynamic scenes in low-light environments is even more difficult because as the scene moves, pixels in adjacent frames need to be aligned before they can be denoised. Conventional CMOS image sensors (CIS) are at a particular disadvantage in dynamic low-light settings because the exposure cannot be too short lest the read noise overwhelms the signal. We propose a solution using Quanta Image Sensors (QIS) and present a new image reconstruction algorithm. QIS are single-photon image sensors with photon counting capabilities. Studies over the past decade have confirmed the effectiveness of QIS for low-light imaging but reconstruction algorithms for dynamic scenes in low light remain an open problem. We fill the gap by proposing a student-teacher training protocol that transfers knowledge from a motion teacher and a denoising teacher to a student network. We show that dynamic scenes can be reconstructed from a burst of frames at a photon level of 1 photon per pixel per frame. Experimental results confirm the advantages of the proposed method compared to existing methods."
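The low-light regime the abstract describes is easy to illustrate with a toy Monte-Carlo sketch: a burst of short, nearly noise-free frames can beat a single long exposure once read noise dominates. The parameter values below (0.2 e- read noise for the QIS, 2 e- for the CIS, an 8-frame burst at 1 photon/pixel/frame) are illustrative assumptions, not figures from the paper, and the sketch omits the quantized readout and alignment/denoising steps of a real QIS pipeline.

```python
import math
import random
import statistics

random.seed(0)

def poisson(lam: float) -> int:
    """Knuth's method for Poisson sampling; adequate for small photon rates."""
    limit, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= limit:
            return k
        k += 1

def measure(frames: int, read_noise_e: float, flux: float) -> float:
    """Total detected signal: Poisson photons plus Gaussian read noise per frame."""
    return sum(poisson(flux) + random.gauss(0.0, read_noise_e) for _ in range(frames))

def snr(samples) -> float:
    return statistics.mean(samples) / statistics.stdev(samples)

trials = 20_000
# QIS: a burst of 8 short frames at ~1 photon/pixel/frame, deep sub-electron read noise.
qis = [measure(frames=8, read_noise_e=0.2, flux=1.0) for _ in range(trials)]
# CIS: one exposure collecting the same 8 photons on average, but ~2 e- read noise.
cis = [measure(frames=1, read_noise_e=2.0, flux=8.0) for _ in range(trials)]
print(f"QIS SNR ~ {snr(qis):.2f}  vs  CIS SNR ~ {snr(cis):.2f}")
```

Both pixels collect the same mean signal, so the SNR gap comes entirely from the read-noise difference; this is the margin the paper's reconstruction algorithm exploits for dynamic scenes.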

Go to the original article...

Pet Nose-Print Recognition Technology

Image Sensors World        Go to the original article...

CnTechPost: Chinese Alipay insurance platform has announced the opening of pet nose-print recognition technology and has joined forces with insurers to apply this technology to dog and cat insurance for the first time.

According to Alipay, the success rate of pet nose-print recognition technology exceeds 99% and is expected to be applied to urban pet management and lost pet scenarios in the future.

Go to the original article...

Microsoft Develops Under-Display Camera Solution for Videoconferencing

Image Sensors World        Go to the original article...

Microsoft Research works on embedding a camera under a display for videoconferencing:

"From the earliest days of videoconferencing it was recognized that the separation of the camera and the display meant the system could not convey gaze awareness accurately. Videoconferencing systems remain unable to recreate eye contact—a key element of effective communication.

Locating the camera above the display results in a vantage point that’s different from a face-to-face conversation, especially with large displays, which can create a sense of looking down on the person speaking.

Worse, the distance between the camera and the display means that participants will not experience a sense of eye contact. If I look directly into your eyes on the screen, you will see me apparently gazing below your face. Conversely, if I look directly into the camera to give you a sense that I am looking into your eyes, I’m no longer in fact able to see your eyes, and I may miss subtle non-verbal feedback cues."

"With transparent OLED displays (T-OLED), we can position a camera behind the screen, potentially solving the perspective problem. But because the screen is not fully transparent, looking through it degrades image quality by introducing diffraction and noise.

To compensate for the image degradation inherent in photographing through a T-OLED screen, we used a U-Net neural-network structure that both improves the signal-to-noise ratio and de-blurs the image.

We were able to achieve a recovered image that is virtually indistinguishable from an image that was photographed directly."

Via MSPowerUser
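Microsoft's published approach relies on a learned U-Net. Purely as a classical point of reference, the underlying deblurring problem can be sketched with 1-D Wiener deconvolution; the scene, the diffraction-like blur kernel, and the noise-to-signal ratio below are hypothetical toy values, not anything from the research.

```python
import cmath

N = 32  # length of the toy 1-D signal

def dft(x):
    """Naive discrete Fourier transform (O(N^2), fine for N = 32)."""
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / N) for t in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    return [(sum(X[k] * cmath.exp(2j * cmath.pi * k * t / N) for k in range(N)) / N).real
            for t in range(N)]

# Toy 1-D scene: two point sources of different brightness.
scene = [0.0] * N
scene[10], scene[20] = 1.0, 0.6

# Symmetric diffraction-like blur kernel (circular, energy sums to 1).
kernel = [0.0] * N
kernel[0], kernel[1], kernel[2], kernel[N - 1], kernel[N - 2] = 0.4, 0.2, 0.1, 0.2, 0.1

# Blurring = per-bin product in the frequency domain (circular convolution).
H, S = dft(kernel), dft(scene)
blurred = idft([h * s for h, s in zip(H, S)])

# Wiener deconvolution with an assumed small noise-to-signal ratio.
nsr = 1e-3
G = dft(blurred)
restored = idft([g * h.conjugate() / (abs(h) ** 2 + nsr) for g, h in zip(G, H)])

peak = max(range(N), key=lambda i: restored[i])
print("strongest restored source at index", peak)  # expect index 10
```

A learned network goes further than this linear filter: it can also suppress the noise amplified at frequencies where the screen's transfer function is weak, which is exactly where plain Wiener deconvolution degrades.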

Go to the original article...

Unispectral Announces Tunable NIR Filter for Multipectral Cameras

Image Sensors World        Go to the original article...

PRNewswire: Unispectral announces what it calls the industry’s first mass-market ColorIR tunable NIR filter and spectral IR camera. Unispectral’s tunable filter turns low-cost IR cameras into 700-950 nm spectral cameras. It is best suited for facial recognition, consumer portable devices, IoT, robotics, and mass-market cameras. ColorIR products enable advanced machine vision, material sensing, and computational photography.

The core product consists of a tunable MEMS filter assembled on a camera module. A Raspberry Pi handles capture parameters and interfaces over USB or Wi-Fi to a PC or mobile device. An SDK is included to develop additional applications.

“Our excellent team is proud to roll out this tunable filter, which connects seeing with sensing. It makes spectral cameras accessible for mass-market platforms. The market strives to find an effective solution for adding spectral information to cameras, and we believe our technology offers the best blend of performance and cost,” said Ariel Raz, CEO of Unispectral.

The ColorIR camera captures multiple frames in different NIR wavelengths, filtered by a miniature Fabry–Pérot optical cavity MEMS element. This unique solution breaks the price for legacy spectral cameras, thereby enabling new markets and use cases.
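Wavelength selection in such a cavity follows the standard Airy transmission function: resonance occurs when the round-trip optical path is an integer number of wavelengths, so changing the gap shifts the pass-band. The sketch below illustrates this; the gap values and mirror reflectivity are purely illustrative assumptions, not Unispectral's design parameters.

```python
import math

def fp_transmission(wavelength_nm: float, gap_nm: float, reflectivity: float = 0.9) -> float:
    """Airy transmission of an ideal lossless Fabry-Perot cavity with an air gap (n = 1)."""
    finesse_coeff = 4 * reflectivity / (1 - reflectivity) ** 2
    half_phase = 2 * math.pi * gap_nm / wavelength_nm  # half of the round-trip phase
    return 1.0 / (1.0 + finesse_coeff * math.sin(half_phase) ** 2)

# Sweeping the MEMS gap moves the first-order (m = 1) resonance, lambda = 2 * gap,
# across the camera's 700-950 nm range. Gap values here are hypothetical.
peaks = {gap: max(range(700, 951), key=lambda wl: fp_transmission(wl, gap))
         for gap in (350, 400, 450, 475)}
for gap, wl in peaks.items():
    print(f"gap {gap} nm -> pass-band peaks near {wl} nm")
```

Note that for these gaps the second-order resonance (lambda = gap) falls below 700 nm, so a single clean NIR band reaches the sensor at each setting.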

Use Cases of ColorIR:
  • Security Market: Facial Authentication, Access Control, Payment Terminals, Fake Bill Detection
  • Smartphone Camera: Image Enhancement, Low-Light and Shadow Picture Corrections
  • Medical Market: Remote Health Inspection
  • Agriculture: Fruit Inspection, Pesticide Detection
  • Industrial: Production Line Inspection
  • Vehicle: DMS

The ColorIR tunable MEMS EVK is available for pre-order, with shipping planned for the end of July.

Go to the original article...