Archives for August 2020

Axcelis Purion XEmax is Fully Capable of 15MeV Energy


After an initial misunderstanding over the 15MeV and 12MeV statements in Axcelis announcements, I got an official clarification from the company:

Dear Vladimir,

I saw your recent article on our system and reference to energy range and I wanted to clear up any confusion. Our system does indeed have an energy range of 15 MeV. I just forgot to update the chart on slide 21 on our IR presentation when we posted earnings last week. That update to our website posting is in the process, thank you for noticing that. I’ve attached the press release on the product launch and the amended IR presentation for reference.

Sorry about the confusion, and thanks for providing such good coverage on the exciting image sensor market.

Best regards,

Maureen Hart
Director, Corporate Communications
Axcelis Technologies, Inc.



Hamamatsu Builds New Fab


Hamamatsu announces the completion of construction of a new factory building at the Shingai Factory (Shingai-cho, Minami-ku, Hamamatsu City, Japan) to cope with increasing demand for opto-semiconductors, X-ray image sensors, and X-ray flat panel sensors. The new fab will start operations in October this year.


Is Yokogawa Minimal Fab Suitable for Image Sensor Production?


Yokogawa has brought the AIST-originated Minimal Fab project from Japan to production. It uses 0.5-inch wafers and does not need a clean room to operate:

The Minimal Fab process implements most of the steps needed for image sensor production, including a wafer thinning and bonding option. The most notable omission is high-energy ion implantation: so far, only implantation below 50KeV is supported. For higher energies, wafers still need to be processed at a large fab.


AIST presents Minimal Fab advantages:


There is similar work by Futrfab in the US promoting a 2-inch wafer fab. However, its progress appears to be slow, and it is far from a market-ready solution for now.


LiDAR News: Lumotive, Robosense, Light, Microvision


Lumotive announces Early Access Program (EAP) to accelerate adoption of its LiDAR technology. The EAP provides engineering support and early access to Lumotive's software-defined beam-steering technology to enable customized product design and rapid system integration.

“While our 3D-sensing products leverage innovative Liquid Crystal Metasurfaces (LCMs) to provide significant performance and cost advantages in several key markets, we know that customers want to accelerate time-to-market for their sensing systems with differentiated, application-specific features,” said Lumotive co-founder and CEO, William Colleran. “For companies developing LiDAR products targeting automotive, industrial and consumer markets, our EAP delivers access to Lumotive’s technology -- including our software-defined beam-steering API -- well before general availability. In exchange, Lumotive gains a number of early-adopter partners and valuable insight into their product requirements which drives our own core technology development.”


BusinessWire: RoboSense launches an 80-laser-beam LiDAR ready for customer delivery at an early-bird price of $12,800 and a standard price of $15,800. The performance of the RS-Ruby Lite is close to that of the 128-laser-beam RS-Ruby, with a vertical angular resolution of 0.1 degrees and a ranging ability of 160m @ 10% reflectivity (with a longest detection range of 230 meters), making it suitable for medium- and high-speed autonomous driving applications.
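
To put the 0.1-degree vertical resolution in perspective, here is a quick back-of-envelope calculation (mine, not RoboSense's) of the vertical gap between adjacent scan lines at several distances:

```python
import math

# Back-of-envelope only: vertical gap between adjacent beams of a LiDAR
# with 0.1 deg vertical angular resolution at a given distance.
def line_spacing_m(range_m: float, resolution_deg: float = 0.1) -> float:
    return range_m * math.tan(math.radians(resolution_deg))

for r in (50, 160, 230):
    print(f"{r:>3} m -> {line_spacing_m(r) * 100:.1f} cm between scan lines")
# ~8.7 cm at 50 m, ~27.9 cm at 160 m, ~40.1 cm at 230 m
```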



IEEE Spectrum publishes an article on Light.Co's new plan for a 3D camera for cars, "Will Camera Startup Light Give Autonomous Vehicles Better Vision than Lidar?" The article starts with Light.Co's history of the 16-lens L16 camera and the Nokia 9 smartphone project:

“I think our timing was bad,” Light CEO Dave Grannan says. “In 2019 smartphone sales started to shrink, the product became commoditized, and there was a shift from differentiating on quality and features to competing on price. We had a premium solution, involving extra cameras and an ASIC, and that was not going to work in that environment.”

Light began an R&D program in early 2019 to further refine its algorithms for use in autonomous vehicles. In mid-2019, with the consumer phone and camera market looking sour, the company announced that it was getting out of that business, and pivoted the entire company to focus on sensing systems for autonomous vehicles.

“We can cover a longer range—up to 1000 meters, compared with 200 or so for lidar,” says Grannan. “The systems we are building can cost a few thousand dollars instead of tens of thousands of dollars. And our systems use less power, a key feature for electric vehicles.”

These days, Light is testing the first prototypes, trying different numbers of cameras in the array and a variety of focal lengths, and optimizing the design. So far, Light uses an FPGA for the depth map calculations, but a dedicated ASIC should be available by early 2021.

The prototype is still in stealth. Light Co. expects to unveil it and announce partnerships with AV companies later this year.


Microvision presents its first automotive MEMS LiDAR, capable of a 20M points/s rate, a 200m range in sunlight, and an adaptive FOV:



Pointcloud Inc. Presents Coherent ToF Camera


San Francisco-based startup Pointcloud, Opris Consulting, and the University of Southampton, UK, publish an arXiv paper "A universal 3D imaging sensor on a silicon photonics platform" by Christopher Rogers, Alexander Y. Piggott, David J. Thomson, Robert F. Wiser, Ion E. Opris, Steven A. Fortune, Andrew J. Compston, Alexander Gondarenko, Fanfan Meng, Xia Chen, Graham T. Reed, and Remus Nicolaescu.

"A large-scale two-dimensional array of coherent detector pixels operating as a light detection and ranging (LiDAR) system could serve as a universal 3D imaging platform. Such a system would offer high depth accuracy and immunity to interference from sunlight, as well as the ability to directly measure the velocity of moving objects. However, due to difficulties in providing electrical and photonic connections to every pixel, previous systems have been restricted to fewer than 20 pixels. Here, we demonstrate the first large-scale coherent detector array consisting of 512 (32×16) pixels, and its operation in a 3D imaging system. Leveraging recent advances in the monolithic integration of photonic and electronic circuits, a dense array of optical heterodyne detectors is combined with an integrated electronic readout architecture, enabling straightforward scaling to arbitrarily large arrays. Meanwhile, two-axis solid-state beam steering eliminates any tradeoff between field of view and range. Operating at the quantum noise limit, our system achieves an accuracy of 3.1 mm at a distance of 75 metres using only 4 mW of light, an order of magnitude more accurate than existing solid-state systems at such ranges. Future reductions of pixel size using state-of-the-art components could yield resolutions in excess of 20 megapixels for arrays the size of a consumer camera sensor. This result paves the way for the development of low cost and high performance 3D imaging cameras, enabling new applications from robotics to autonomous navigation."


Samsung et al. Explore Subwavelength-Sized Color Pixel Approaches


Samsung, UCB, and Hong Kong University researchers publish a Nature paper "Subwavelength pixelated CMOS color sensors based on anti-Hermitian metasurface" by Joseph S. T. Smalley, Xuexin Ren, Jeong Yub Lee, Woong Ko, Won-Jae Joo, Hongkyu Park, Sui Yang, Yuan Wang, Chang Seung Lee, Hyuck Choo, Sungwoo Hwang, and Xiang Zhang.

"The demand for essential pixel components with ever-decreasing size and enhanced performance is central to current optoelectronic applications, including imaging, sensing, photovoltaics and communications. The size of the pixels, however, are severely limited by the fundamental constraints of lightwave diffraction. Current development using transmissive filters and planar absorbing layers can shrink the pixel size, yet there are two major issues, optical and electrical crosstalk, that need to be addressed when the pixel dimension approaches wavelength scale. All these fundamental constraints preclude the continual reduction of pixel dimensions and enhanced performance. Here we demonstrate subwavelength scale color pixels in a CMOS compatible platform based on anti-Hermitian metasurfaces. In stark contrast to conventional pixels, spectral filtering is achieved through structural color rather than transmissive filters leading to simultaneously high color purity and quantum efficiency. As a result, this subwavelength anti-Hermitian metasurface sensor, over 28,000 pixels, is able to sort three colors over a 100 nm bandwidth in the visible regime, independently of the polarization of normally-incident light. Furthermore, the quantum yield approaches that of commercial silicon photodiodes, with a responsivity exceeding 0.25 A/W for each channel. Our demonstration opens a new door to sub-wavelength pixelated CMOS sensors and promises future high-performance optoelectronic systems."


"Future directions for this work include integration with CMOS readout circuit arrays (Supplementary Fig. 11), extension of the spectral response to the blue end of the visible spectrum and optimization of the geometry to provide color-sorting for obliquely angled excitation.

As the demand for smaller pixel size and higher resolution in imaging and display technologies increases, our work advances the state-of-the-art by showing for the first time, PIN readout, three-color sorting over a two-dimensional surface without sacrificing responsivity. Furthermore, the sub-wavelength sized pixels are demonstrated based on the principle of AH coupling and fabricated via CMOS-compatible processes into vertical shallow junction PIN nanocylinders that efficiently convert optical energy to a clear electrical readout without crosstalk. Our work promises future compact, small pixelated, high-performance optoelectronic systems.
"


Infineon Expects ToF Sensor Market to Exceed 1.1B Euro in 2023


Infineon's quarterly earnings report presents the company's forecast of the 3D ToF sensor market:


Axcelis Ships its First 12MeV Implanter to CIS Customer for Evaluation


Axcelis reports that it has shipped the first Purion XEmax evaluation system to a large image sensor customer. In a previous version of this announcement, Axcelis claimed the system supports energies up to 15MeV, but now it just says "more than 12MeV."


Photodiode Array Cross-Section


i-Micronews: SystemPlus publishes a comparison of camera modules in Samsung smartphones. One of the image sensor cross-section pictures has an interesting diagonal line crossing a photodiode array:


Himax Low Power Sensor + AI Processor Solution Attracts Interest from the Industry


GlobeNewswire: Himax provides an update on its imaging business in its Q2 2020 earnings report:

"In order for Himax’s WiseEye technology to reach its maximum potential, the Company has adopted a flexible business model whereby, in addition to total solution where it provides processor, image sensor and AI algorithm, the Company also offers those individually as key parts in order to address the market’s different needs and widen its market coverage. For customers who own their own algorithm and wish to develop their own applications, Himax can provide its ultralow power AI processor and image sensor without algorithm. The customer can piggyback on Himax’s technology and focus their effort on bringing AI to edge devices by transforming a wide range of sensor data, including video, sound, movement, gesture, among others, into actionable information, all with extremely low power consumption. For those customers/partners whose main business is to provide AI processors, Himax can offer its ultralow power image sensors without its AI processor and algorithm.

For the total solution offering, Himax launched a computer vision human detection notebook solution which has been well recognized and is being incorporated into the next generation premium notebook models of key OEMs and ODMs. Himax’s total solutions are also being integrated into a wide range of other applications such as TV, doorbell, door lock, air conditioner, etc. by engaging leading players in those industries. For the other type of business model where Himax only offers key parts, the Company’s strategy is to actively participate in the ecosystems led by the world’s leading AI and cloud service providers. A recent illustration of this strategy is an announcement for the collaboration with Google whereby, running on Google’s TensorFlow Lite for Microcontrollers kernel, Himax provided its AI processor with CNN (convolutional neural network) based SDK (software development kit) for developers to generate deep learning inferences with video and voice commands data to boost overall system performance while consuming extremely low power.

Being an official partner of Google’s TensorFlow, Himax gets to enjoy the enormous network of its ecosystem participants. Just over a month after the announcement, Himax is already receiving inquiries from large corporations and individual AI developers alike with application ideas covering a broad range of industries. The Company is very encouraged by the enthusiastic discussions about possible WiseEye applications that are taking place in various user groups for emerging AI market ideas. Last but not least, Himax is working closely with other leading AI and cloud service providers worldwide to incorporate WiseEye edge AI solution into their ecosystems, in an attempt to reach the broadest market coverage possible. Himax is extremely excited about these developments.

Due to the accelerated adoption of work-from-home and online education, demand for Himax’s CMOS image sensor for notebook and IP camera will remain strong during the third quarter.

The Company’s industry-first 2-in-1 CMOS image sensor has penetrated into the laptop ecosystem for the most stylish super slim bezel design with 3 types of popular application features, namely RGB sensor for video conference, RGB/IR sensor for Windows Hello facial recognition, and/or ultralow power AI computer vision for human presence detection. Himax expects to see small volume in certain premium notebook models in late 2020 with more volume expected in the coming years.

For the traditional human vision segments, Himax also sees strong demand in multimedia applications such as car recorders, surveillance, drones, home appliances, and consumer electronics, among others.
"

Himax shows some use cases for its WiseEye solution:


Strained Si PD Sensitivity Enhanced into SWIR Band


Phys.org: Yonsei University, Korea, researchers have managed to enhance the sensitivity of a 10nm-thin Si layer all the way to 1550nm. The Science paper "Breaking the absorption limit of Si toward SWIR wavelength range via strain engineering" by Ajit K. Katiyar, Kean You Thai, Won Seok Yun, JaeDong Lee, and Jong-Hyun Ahn shows the results:

"Silicon has been widely used in the microelectronics industry. However, its photonic applications are restricted to visible and partial near-infrared spectral range owing to its fundamental optical bandgap (1.12 eV). With recent advances in strain engineering, material properties, including optical bandgap, can be tailored considerably. This paper reports the strain-induced shrinkage in the Si bandgap, providing photosensing well beyond its fundamental absorption limit in Si nanomembrane (NM) photodetectors (PDs). The Si-NM PD pixels were mechanically stretched (biaxially) by a maximum strain of ~3.5% through pneumatic pressure–induced bulging, enhancing photoresponsivity and extending the Si absorption limit up to 1550 nm, which is the essential wavelength range of the lidar sensors for obstacle detection in self-driving vehicles. The development of deformable three-dimensional optoelectronics via gas pressure–induced bulging also facilitated the realization of unique device designs with concave and convex hemispherical architectures, which mimics the electronic prototypes of biological eyes."


Edge AI Processing Presentation


A Hailo presentation on edge AI vision processing shows that one needs 80K MAC operations per pixel to achieve 86% image classification accuracy. This can require quite a high power consumption, depending on the image resolution and frame rate:
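
A rough worked example of what 80K MACs per pixel means for the compute and power budget; the resolution, frame rate, and TOPS/W efficiency below are illustrative assumptions, not Hailo figures:

```python
def compute_rate_tops(macs_per_pixel: float, width: int, height: int, fps: float) -> float:
    """Total MAC throughput in tera-operations per second."""
    return macs_per_pixel * width * height * fps / 1e12

def power_w(tops: float, efficiency_tops_per_w: float) -> float:
    return tops / efficiency_tops_per_w

# 80K MACs/pixel from the slide; 1080p30 and 2 TOPS/W are assumptions.
tops = compute_rate_tops(80e3, 1920, 1080, 30)
print(f"{tops:.1f} TOPS -> ~{power_w(tops, 2.0):.1f} W at 2 TOPS/W")  # ~5.0 TOPS, ~2.5 W
```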


Omdia: Samsung Reduces Market Share Gap with Sony


KoreanInvestors quotes Omdia Research as claiming that Samsung is gradually reducing the market share gap with Sony:

"According to market researcher OMDIA, the global market share gap in the CMOS image sensor between Sony and Samsung has narrowed significantly this year.

Sony’s global market share, which rose to as high as 56.2% in the third quarter of 2019, fell to an estimated 42.5% in the second quarter of 2020.

In the same period, the market share of Samsung, the world’s No. 2 image sensor maker, rose to 21.7% from 16.7%, narrowing the gap with the Japanese company to 20.8 percentage points from 39.5 percentage points.

Analysts attribute Samsung’s advancement in the image sensor market to the increase in shipments of its high-end products and a growing client base such as Chinese electronics firm Xiaomi Corp.

“Image sensors are one of the three key products of Samsung in its system chip product lines that can become the world’s number one,” said Inyup Kang, head of System LSI Business & President at Samsung.

The global spread of the Covid-19 coronavirus also played a role in Samsung’s greater market share, as many U.S. consumer electronics companies are delaying the launch of new products, using Sony’s image sensors, while Samsung’s major clients in China have raised their order volumes of image sensors.

Samsung plans to focus on high-end products to raise its market share as the global market of high-resolution image sensors are estimated to rise at an average annual rate of 87% until 2024.

SK Hynix unveiled new CMOS image sensors under the brand “Black Pearl,” targeting a mid-tier market last year. Its market share rose from around 2% in 2019 to 3.4% in the second quarter of this year.
"


Entangled Photon Imaging with SPADs


EPFL and Glasgow University publish an arXiv paper "Quantum illumination imaging with a single-photon avalanche diode camera" by Hugo Defienne, Jiuxuan Zhao, Edoardo Charbon, and Daniele Faccio.

"Single-photon-avalanche diode (SPAD) arrays are essential tools in biophotonics, optical ranging and sensing and quantum optics. However, their small number of pixels, low quantum efficiency and small fill factor have so far hindered their use for practical imaging applications. Here, we demonstrate full-field entangled photon pair correlation imaging using a 100-kpixels SPAD camera. By measuring photon coincidences between more than 500 million pairs of positions, we retrieve the full point spread function of the imaging system and subsequently high-resolution images of target objects illuminated by spatially entangled photon pairs. We show that our imaging approach is robust against stray light, enabling quantum imaging technologies to move beyond laboratory experiments towards real-world applications such as quantum LiDAR."


Brookman ToF Gesture Recognition Demo


Brookman shows how its BT008D pToF sensor excels at finger gesture recognition in comparison with competing ToF solutions. The demo was prepared in collaboration with Toppan Printing Co., Ltd., one of Brookman's major investors.


Synopsys Demos 24Gbps D-PHY/C-PHY IP


Synopsys demonstrates its MIPI C-PHY/D-PHY IP performance at 24 Gbps: the IP interoperates with an unspecified image sensor in C-PHY mode at up to 3.5 Gsps per trio and in D-PHY mode at up to 4.5 Gbps per lane. The IP is available in FinFET processes for camera applications:
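
The 24 Gbps headline is consistent with the quoted per-trio symbol rate once C-PHY's roughly 2.28 bits-per-symbol encoding is taken into account. In the quick check below, the trio and lane counts are assumptions, not figures from the announcement:

```python
# C-PHY carries 16 bits over 7 symbols, i.e. ~2.28 bits per symbol.
BITS_PER_SYMBOL = 16 / 7

def cphy_gbps(gsps_per_trio: float, trios: int) -> float:
    return gsps_per_trio * BITS_PER_SYMBOL * trios

def dphy_gbps(gbps_per_lane: float, lanes: int) -> float:
    return gbps_per_lane * lanes

print(f"C-PHY: {cphy_gbps(3.5, 3):.1f} Gbps over an assumed 3 trios")   # ~24 Gbps
print(f"D-PHY: {dphy_gbps(4.5, 4):.1f} Gbps over an assumed 4 lanes")   # 18 Gbps
```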


Newsight Imaging eToF Webinar


Newsight Imaging publishes a webinar explaining its NSI1000 eToF sensor operation:


Imaging Beyond the Speed of Light


A YouTube video explains the seemingly impossible effects described in the EPFL and Canon Archive.org paper published a few weeks ago:


More about Sony Quarterly Results


SeekingAlpha publishes the Sony earnings call transcript with some updates on the image sensor business:

"Next is in IS&S, Image Sensing & Solutions. Fiscal '20 quarter one sales decreased 11% year-on-year to ¥206.2 billion, and operating income decreased ¥24.1 billion to ¥25.4 billion.

Fiscal '20 sales are expected to decrease 7% to ¥1 trillion, and operating income is expected to decrease ¥105.6 billion to ¥130 billion. Now I will explain the state of our sensor business. Fiscal '20 sales of image sensors for mobile products are expected to decrease compared to fiscal '19, primarily due to a decrease in end-user product sales by one of our major customers, the deceleration of the smartphone market and a shift to mid-range and moderately priced models in that market resulting from the impact of the spread of COVID-19 and significant reduction in component and finished goods inventory by Chinese customer. Profitability is expected to be impacted by a decrease in gross margins and an increase in depreciation and manufacturing-related costs associated with production equipment we purchased in the previous fiscal year when we expected growth as well as higher research and development costs.

We do not expect to grow sales of mobile sensing products compared to fiscal '19 because adoption by smartphone makers has been slow and sales of flagship models, which already use our products have decreased due to the shift in market conditions. Sales of image sensors to AV have also decreased due to the contraction of the sensor market for digital cameras, resulting from the impact of the spread of COVID-19. We expect the market to contract in 1 year as much as we had previously expected it would contract over the next approximately 3 years.

In order to respond quickly to the changes in the environment, especially for image sensors for mobile products, we will modify our strategy, mainly in the areas of investment, research and development and customer base. We have already significantly reduced investment in capacity to supply demand in the fiscal year ending March 31, 2022, because we can supply that demand by stockpiling strategic inventory through utilization of our excess production capacity this fiscal year.

The forecast for cumulative capital expenditures for the 3 fiscal years began April 1, 2018, which we explained in the past, has been reduced ¥50 billion from approximately ¥700 billion to approximately ¥650 billion. And we are carefully reviewing the timing of planned capital expenditures in fiscal '21 and beyond. We will review the projects and priorities for research and development spending as well to ensure that they fit with the recent trends in the smartphone market and changes in our major customers' needs. However, in order to maintain and increase our future technological competitive advantage, we will not drastically reduce the number of projects or the budget. We intend to more proactively expand and diversify our customer base, which we're cautious to do previously due to production capacity constraints.

Over the mid- to long term, we will work to expand the applications for image sensors and the market overall by introducing edge-sensing products that use senses equipped with AI processing functionality, and we will steadfastly work to grow this business. We plan to complete within approximately 1 year an enhancement of our business model to adapt to the recent changes in the environment, and we expect to return the business to the path of profit growth from the second half of fiscal '21.

....About sensors, changes in the market and how are the changes occurring. For one thing, all over the world, there is poor sense in the market, deterioration of the market, and that is impacting the sensor sales. And also, the higher-priced products, well, it's, you could say, shifting to the moderate -- more moderate-priced models overall. So for our image sensors, especially the high-end image sensors that we sell, the high-end models are decreasing in sales. So that's impacting our business.

But as far as a large trend is concerned, the phones -- smartphones going larger and using multiple lenses, that will continue. The performance of -- for the cameras required for smartphones, for video and the camera photos, the demand for the higher quality will continue. Therefore, we believe the demand should come back sometime in the future.

...regarding image sensor, the capacity and the capacity factor and the second quarter. So the capacity for this quarter, for fiscal 2020 at the end of first quarter, and that's -- that's 133,000 per month at the master price; and also at the end of second -- of the second quarter, 135,000 per month. So we will gradually increase the capacity. That's our plan.

And also, the number of wafers to be input. The first quarter the actual figure is -- the average of 3 months is 126,000 for mobile and also for digital camera, and there were some adjustments made for production. And also for the projection for second quarter for that, the simple average for 3 months is 112,000. So for mobile and digital camera, I think there's going to be more production adjustment.

And then, well, for Sensing segment, the sales is expected to come down, and what is the magnitude of the impact? Well, last year, actual was a little over of ¥230 billion and it's a strong ¥230 billion. So generally, it's like 1/3 of that is the reduction in sensors or sensing products. That's 1/3. So a big point about that is that as of last year, we thought that the growth can be expected. So we made the capital investment and also, we have increased our R&D expenditures. And that has been the impact.
"

In separate news, a Twitter post presents a CIS market share chart from an unidentified Korean source. It shows Q2 2020 market share being taken away from both Sony and Samsung by somebody else:


Sony Reports Drop of Image Sensor Sales, Reduces FY Forecast


Sony reports a drop in image sensor sales in its latest fiscal quarter, which started on April 1, 2020:


Sony Edge Analytics Use Cases


Sony presents a number of use cases for its REA-C1000 Edge Analytics Appliance - a separate box for now. Possibly, Sony intends to integrate some of this functionality into its AI-enabled image sensors in the future. The power consumption of the Edge Analytics Appliance box is 40W.





Renesas Announces Reference Design for its 8MP Image Sensor


BusinessWire: Renesas introduced a UHD surveillance camera reference design to address today’s high-accuracy object detection and recognition needs for video security and surveillance systems. Developed in collaboration with Novatek Microelectronics and designed by Systemtec Corporation Ltd, the reference design includes a camera image sensor board with PDAF and an ISP board, along with autofocus zoom lens software.

Built around Renesas’ RAA462113FYL CMOS sensor and Novatek’s dual core SoC ISP, the surveillance camera reference design uses several other Renesas ICs that address its signal chain electrical functions. The CIS board includes the RAA462113FYL, DC/DC buck converters, LDOs, motor driver and lens. The ISP board features the SoC and associated signal chain components.

“An ever-increasing demand for security and surveillance camera systems drives the need for better object detection and recognition capabilities with higher imaging accuracy,” said DK Singh, Director, Systems and Solutions Team at Renesas. “Our surveillance camera with 4K resolution and PDAF function can deliver much faster autofocus results compared with conventional contrast-detection autofocus. We are excited that our close collaboration with Novatek and Systemtec makes this surveillance system reference design more accessible for customers worldwide.”


ASM Presents its Camera Assembly Capabilities


The ASM Q2 2020 investor presentation reports weaker bookings for CIS packaging and shows the company's camera module assembly solutions:


Galaxycore Overtakes Omnivision to Become #3 in Units Market Share, Prepares IPO


According to an IBK Securities report, Galaxycore has become the world's #3 in CIS unit market share (left chart below). However, Omnivision still keeps the #3 spot in terms of revenue (right chart).


i-Micronews: GalaxyCore files for an IPO on Shanghai’s Science and Technology Innovation Board (STAR Market) worth as much as CNY 6.96 billion (USD 991.62 million). According to Frost&Sullivan, in 2019, the company shipped 1.31 billion image sensors, occupying a 20.7% volume market share globally. In terms of revenue, it ranked eighth with CNY 3.19 billion.


Black Phosphorus Promise


AIP Applied Physics Reviews publishes a National University of Singapore paper "Black phosphorus photonics toward on-chip applications" by Li Huang and Kah-Wee Ang.

"Unceasing efforts have been devoted to photonics based on black phosphorus ever since it came under the spotlight of two-dimensional materials research six years ago. The direct bandgap of black phosphorus is tunable by layer number, vertical electric field, and chemical doping, covering a broad spectrum for efficient light manipulation. The optical anisotropy further enables the identification and control of light polarization. Along with high carrier mobility, nonlinear optical properties, and integration capability due to its layered lattice structure, black phosphorus manifests itself as a promising multipurpose material for chip-scale optoelectronics. In this manuscript, we review the research on black phosphorus photonics, with a focus on the most fundamental active functions in photonic circuits: photodetection, electro-optic modulation, light emission, and laser pulse generation, aiming at evaluating the feasibility of integrating these black phosphorus-based components as a compact system for on-chip applications."


ID Quantique Announces 2nd Smartphone with its Image Sensor-based Random Number Generator


ID Quantique (IDQ) announces that its image sensor-based Quantum Random Number Generator (QRNG) chip has been integrated into the Vietnamese ‘Vsmart Aris 5G’ smartphone.

“With its compact size and low power consumption, our latest Quantis QRNG chip can be embedded in a smartphone to ensure trusted authentication and encryption of sensitive information. It brings a new level of security to the mobile phone industry. This is truly the first mass market application of quantum technologies,” says Grégoire Ribordy, CEO and co-founder of ID Quantique.

“Implementing ID Quantique QRNG in the Aris 5G smartphone is part of getting VinSmart customers access to the most advanced technology in the world. This breakthrough in terms of quantum enhanced security technology offers benefits for services including banking, medical data and personal information. In the near future, Vinsmart will continue to research and perfect the next-generation of its 5G offering to accelerate the universalization of this technology in VietNam,” says Tran Minh Trung, Deputy CEO of VinSmart.


IDQ opens a dedicated web page "Quantum Random Number Generation (QRNG) for mobile phones."

"At its core, the QRNG chip contains a light-emitting diode (LED) and an image sensor. Due to quantum noise, the LED emits a random number of photons, which are captured and counted by the image sensor’s pixels, giving a series of raw random numbers that can be accessed directly by the user applications. These numbers are also fed to a deterministic random bit generator algorithm (DRBG) which distills further the entropy of quantum origin to produce random bits in compliancy to NIST 800-90A/B/C standard.

The Quantis QRNG Chip allows live status verification: if a failure is detected in the physical process, the random bit stream is immediately disabled, the user is notified, and an automatic recovery procedure is performed to produce QRNG data again.
"


From Single-pixel ToF Histogram to 3D Spatial Image


Phys.org, OSA Optica: The University of Glasgow, TU Delft, and Politecnico di Milano publish a paper "Spatial images from temporal data" by Alex Turpin, Gabriella Musarra, Valentin Kapitany, Francesco Tonolini, Ashley Lyons, Ilya Starshynov, Federica Villa, Enrico Conca, Francesco Fioranelli, Roderick Murray-Smith, and Daniele Faccio.

"Traditional paradigms for imaging rely on the use of a spatial structure, either in the detector (pixels arrays) or in the illumination (patterned light). Removal of the spatial structure in the detector or illumination, i.e., imaging with just a single-point sensor, would require solving a very strongly ill-posed inverse retrieval problem that to date has not been solved. Here, we demonstrate a data-driven approach in which full 3D information is obtained with just a single-point, single-photon avalanche diode that records the arrival time of photons reflected from a scene that is illuminated with short pulses of light. Imaging with single-point time-of-flight (temporal) data opens new routes in terms of speed, size, and functionality. As an example, we show how the training based on an optical time-of-flight camera enables a compact radio-frequency impulse radio detection and ranging transceiver to provide 3D images."


