Archives for July 2019

MEMS and Imaging Summit

Image Sensors World        Go to the original article...

The MEMS and Imaging Summit, to be held in Grenoble, France on Sept. 25-27, 2019, has published part of its agenda:

  • Considerations of Optical Fingerprint and 3D Face Recognition Sensors for Cellular Security Applications, Avi Strum, SVP, TowerJazz
  • Sensors in Mixed Reality, Sunil Acharya, Senior Director of Sensor Development Hololens, Microsoft
  • A Touch of Finnish Sense, Antti Vasara, President & CEO, VTT
  • Discovering New Dimensions: Our Vision to Sense the World, Christian Herzum, Senior Director 3D-Sensing & Discretes, Infineon Technologies
  • 7.2um Pixel Event-Based Vision Sensor with Frame Readout, Raphael Berner, Head of chip design, Insightness AG
  • PIXCURVE: a Global Approach for Curved Optical Components, David Henry, Head of Packaging and Assembly Laboratory, Optics and Photonics Division,CEA-Leti
  • Wafer Scale Image Sensors: New Developments and Technology Challenges for the X-ray Market, Thalis Anaxagoras, Founder, ISDI
  • Industrial Atomic Layer Deposition for Image Sensors, Mikko Söderlund, Director, Technical Sales Europe, Beneq
  • Excellence in Microlens Imprint Lithography and Wafer-Stacking, Reinhard Voelkel, CEO, SUSS MicroOptics SA
  • AI Close to the Sensor: an Approach for Energy Efficient and Real Time AI, Bram Senave, Business Development Manager, Easics
  • Organic Image Sensors for Fingerprint Acquisition in Smartphone Display and NIR Development for Hybrid CMOS Imager, Benjamin Bouthinon, Optics Manager, ISORG
  • 3D Depth Sensing: Time-of-Flight Technology and Application, Morin Dehan, Technology Engineer, Sony Depthsensing Solutions
  • Electrowetting Based Liquid Lenses – a Novel Technology that Improves Imaging Device Performance, Frederic Laune, Corning
  • Hyperspectral Imaging for Wafer Inspection, Jan Makowski, CEO / CTO, LuxFlux


Nikon Z 85mm f1.8 S review

Cameralabs        Go to the original article...

The Nikon Z 85mm f1.8 S is a short telephoto lens designed for Nikon’s full-frame Z mirrorless system. It becomes the first native prime lens in the system aimed at portrait photographers who desire classic head-and-shoulders compositions with shallow depth-of-field effects. Thomas compares it to rivals in his in-depth review!…

The post Nikon Z 85mm f1.8 S review appeared first on Cameralabs.


LeddarTech Settles LiDAR Patent Dispute with Phantom Intelligence

Image Sensors World        Go to the original article...

GlobeNewswire: LeddarTech announces that its patent infringement case against Phantom Intelligence was settled favorably in the Federal Court of Canada.

In December 2015, LeddarTech sued Phantom Intelligence over the unauthorized use of its patented LiDAR sensing technology protected by Canadian Patent No. 2,710,212, entitled “Detection and Ranging Methods and Systems” and relating to systems and methods for acquiring an optical signal and converting it into the digital domain. The patent was filed in 2007 and granted in 2014. It is said to be a core part of LeddarTech’s IP and is practiced in all of the company's LiDAR products for ADAS and autonomous driving (AD) applications. The settlement, whose amount, terms and conditions are confidential, was reached under a court-sponsored mediation process, following which Phantom Intelligence became a customer of LeddarTech.

“LeddarTech is proud of its years of innovation as a pioneer in LiDAR technology for ADAS and AD,” stated Pierre Olivier, CTO at LeddarTech. “We have always been confident that we would be successful in the defense of our intellectual property, and we will continue to be vigilant in protecting it.”

LeddarTech's patent portfolio consists of 72 patents: 52 granted and 20 pending. The company intends to continue monitoring the market for possible infringements of its proprietary LiDAR technology.


Sony Re-Orgs Renames Image Sensor Division, Reports Sales Growth

Image Sensors World        Go to the original article...

Sony reports quarterly results for the quarter ended June 30, 2019. Possibly as a response to the calls to spin off its semiconductor business, Sony splits it between different divisions, so that there is no Semiconductor Solutions division anymore. Image sensors become a part of the "Imaging & Sensing Solutions" segment. It is not immediately clear what the other products in this new segment are; I was unable to find any official statement on that. A European site, "Sony Image Sensing Solutions," talks about industrial cameras with no mention of image sensors.


Update: Sony official earnings call transcript explains the name change. It sounds like a reply to the spin-off proposals:

• From this quarter, we have changed the name of the Semiconductors segment to Imaging & Sensing Solutions (“I&SS”).
• Now I will explain the background and reasoning behind the change in name of the segment.
• The portion of Semiconductors segment revenue that comes from image sensors has been increasing every year, is expected to be approximately 85% of the segment this fiscal year and is expected to increase even more going forward.
• Image sensors are hybrids between analog and digital semiconductors, and, in terms of technology and business model, differ from logic LSI and memory, which most people think of when they hear the word semiconductors.
• Compared with logic LSI and memory, which require frequent capacity upgrades to maintain competitiveness due to quickly evolving process miniaturization, image sensors do not require regular, large capital investments because products can be differentiated through improvements in functionality and the addition of new features without having to upgrade production capacity.
• Moreover, since the image sensors business is focused on custom products that are differentiated through features and functionality, and because we have expanded our customer base the last several years and obtained a large share of the market, we have established a business model that experiences less impact from fluctuations in the market known as the silicon cycle.
• Over the last 10 years, we have achieved an extremely high level of compound annual sales growth at 17%, primarily from smartphone applications, and we have made significant investments to increase capacity as a result. However, we expect the investment requirements of this business to decrease significantly as the acute increase in demand transitions to a milder growth trajectory.
• The strategy for future growth in the I&SS segment is to develop AI sensors which make our sensors more intelligent by combining artificial intelligence with the sensors themselves.
• Development of these sensors will require us to leverage not only the strength of the hardware technology in the I&SS segment, such as the stacking of sensors on logic and copper-to-copper connections, but also the AI technology and diverse application technology in other parts of Sony, so our efforts in this area will span the entire Sony Group.
• We think that AI and sensing will be used across a wide range of applications such as autonomous driving, IoT, games and immersive entertainment. As such, we think there is a possibility that image sensors will evolve from the hardware they are today to solutions and platforms as visual data and sensing information is processed in a sophisticated manner inside sensors.
• The image sensor business is important because it is one of the pillars of the growth strategy of the Sony Group. We changed the name of the segment this time to assist your understanding of the characteristics and future strategy of this business, which I just explained.



Also, Sony gives a few more details on the CIS business:

• FY19 Q1 sales increased 14% year-on-year to 230.7 billion yen and operating income increased 20.4 billion yen to 49.5 billion yen, primarily due to a significant increase in image sensor sales for mobile devices.
• Demand for our image sensors continues to be strong and our market share of image sensors for mid-range and high-end models of major smartphone makers remains high, due to adoption of multiple sensors per camera and growing demand for high value-added sensors made using large die-sizes.
• We are currently utilizing 100% of our internal capacity.
• However, concerns about the impact of trade issues in the second half of the fiscal year remain. We have already been conservative when forecasting the impact of these issues, but, because we want to evaluate the risks over the course of the first half of the fiscal year, we have made no changes to our April forecast.


Update #2: Yahoo: “It surprised us that the image sensor business was really strong,” said Masahiro Wakasugi, an analyst with Bloomberg Intelligence. “We thought there might be a risk that Huawei could be cutting some orders. We think going forward the image sensor business will be one of the key contributors to positive sales and earnings results for Sony.”


TowerJazz Updates on its CIS Business

Image Sensors World        Go to the original article...

SeekingAlpha publishes a transcript of the TowerJazz Q2 2019 earnings call. A few updates on the image sensor business:

"Looking into our CMOS image sensor business, our largest application and market is the industrial market. As previously discussed, we have seen a pullback, which our customers attribute to the trade war. It is starting to pick up now with new projects, many of which are targeted towards large screen display inspection using very high resolution global shutter sensors. All of our new projects are based on our state-of-the-art global shutter pixels in our 65-nanometer, 12-inch line in Uozu. We expect these projects to ramp towards the end of next year. Orders for present products are forecast by customers to recover with wafer starts beginning during the fourth quarter of this year.

We have won a large face recognition sensor project for smartphones. It will be based on indirect time-of-flight technology and will use state-of-the-art stacking technology, utilizing our 300-millimeter, 65-nanometer platform. In parallel, for mobile applications, we are working with 3 leading fingerprint companies for under OLED and under LCD optical sensors, based upon our unique pixel technology. These projects are expected to begin to ramp in 2020, utilizing our well-established 0.18 micron, 200-millimeter CIS technology.

In the high-end photography area, we're moving along with the next-generation stacking sensor project, partnering with an undisputed leader in the market, targeted to ramp in 2021. Medical and dental x-ray demand has remained stable with strong margins. We see an increased demand for large CMOS-based panels. We're now in the final qualification stages of new products with one of the leading providers. Additionally, we are fully qualified and started to ship single-die wafer-scale medical x-ray sensors on 300-millimeter with 2 additional customers planning final product tapeout in the fourth quarter of this year.

...we have decided to accelerate our planned expansion and to allocate $100 million to increase the capacity of our 300-millimeter Uozu fab in Japan. Equipment should begin to arrive in this center, with most to all tools expected to be qualified during the first half of 2020. This investment not only increases our 300-millimeter wafer capacity but will drive additional benefits that tie to new and large 200-millimeter partnership activities. At image sensing, most of the capacity growth there is indeed at the end of 2020 and 2021, 2022.
"


Olympus TOUGH TG6 review

Cameralabs        Go to the original article...

The Olympus TOUGH TG-6 is the latest model in the company's massively popular rugged series. It’s waterproof to 15 metres, freezeproof to -10C, and can withstand a crushing force of 100kg and a drop from 2.1m. It keeps the 25-100mm zoom of its predecessor with its bright aperture and unsurpassed macro, but upgrades the screen and underwater modes. Find out if it's still the champion of tough cameras in Ken's review!…

The post Olympus TOUGH TG6 review appeared first on Cameralabs.


Image Sensors with Frustrated Charge Transport

Image Sensors World        Go to the original article...

Journal of Applied Physics publishes a paper "Organic photodetectors with frustrated charge transport for small-pitch image sensors" by Z. Ma and C. K. Renshaw from University of Central Florida, Orlando, FL.

"We demonstrate a frustrated organic photodetector (F-OPD) that utilizes frustrated charge-transport to quench forward-bias current and provide a low-current, light-independent OFF state. Photocurrent is collected efficiently with −3 V reverse-bias recovering the sensitive OPD response with higher than 10-bit dynamic range. This intrinsic switching mechanism eliminates the need for thin-film transistors (TFTs) to provide readout control in high-resolution image sensors. Eliminating TFTs simplifies fabrication processing, improves fill-factor, and enables higher resolution image sensors on nonplanar, stretchable, or large-area substrates for a variety of imaging applications. We simulate image sensors and show that the performance is limited by the OFF state uniformity experimentally observed across 45 devices. We simulate performance in a 900-pixel array and show that the demonstrated F-OPDs can scale into megapixel arrays with a noise-equivalent power of less than 0.6 mW/cm2 and a dynamic range of more than 6-bits; better uniformity can substantially improve this performance for large arrays."


"The F-OPD utilizes a blocking layer integrated with the OPD to frustrate charge collection and provide a low-current OFF state under forward bias. A few volts of reverse bias switches the pixel into a conducting ON state where the OPD photocurrent is efficiently collected."


"We have demonstrated F-OPDs utilizing frustrated charge-transport to enable transistor-free pixels for organic image sensors. A blocking layer at the anode reduces forward-bias current a thousand-fold and provides a low-current, light-independent OFF state; meanwhile, a few volts of reverse-bias recovers the high sensitivity and dynamic range typical of OPDs. The F-OPD operates like conventional passive pixels but elimination of the readout transistor avoids (1) allocation of pixel area to the transistor, (2) definition of subpixel features for drain/source/gate/insulator/channel, and (3) additional gate interconnects spanning the circuit. Pixel functionality is defined by a single, monolithic stack to allow nearly 100% FF and small-pixel-pitch using minimal fabrication steps. Combined with high-resolution transfer patterning for organic circuits,12 F-OPDs could enable scaling OISs to a less than 10 μm pixel-pitch limited only by edge effects and lateral photoconductive leakage. This streamlined processing can also reduce cost for curved, flexible, lightweight, large-area, and/or attritable OISs."
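The paper's dynamic-range claim follows directly from the ratio of the ON-state photocurrent to the residual OFF-state current. A minimal sketch of that arithmetic, using illustrative current values that are assumptions rather than figures from the paper:

```python
import math

# Illustrative F-OPD operating points (assumed values, not taken from
# the paper): the blocking layer suppresses forward-bias current to a
# small OFF-state leakage, while a few volts of reverse bias recover
# the full photocurrent.

def dynamic_range_bits(i_max, i_min):
    """Dynamic range in bits from the ratio of the largest to the
    smallest distinguishable current."""
    return math.log2(i_max / i_min)

i_off = 1e-12        # A, residual OFF-state current (assumption)
i_photo_max = 2e-9   # A, full-scale reverse-bias photocurrent (assumption)

bits = dynamic_range_bits(i_photo_max, i_off)
print(f"~{bits:.1f} bits of dynamic range")
```

With these assumed values the ON/OFF ratio is 2000:1, i.e. about 11 bits, consistent in spirit with the paper's better-than-10-bit figure; the thousand-fold forward-current reduction is what makes the OFF state dark enough for this to work.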


Automotive and Security Markets

Image Sensors World        Go to the original article...

IFNews: A Credit Suisse report on the Asia semiconductor market has a part about Kingpak's CIS packaging business for ON Semi. Kingpak uses its proprietary wirebonding technology to make tiny iBGA packages, which have been qualified to AEC-Q100 Grade 1. "Kingpak holds a strong position in the automotive CIS packaging market and the company should be able to sustain its leading position as the company is the only supplier certified by AEC-Q100 Grade 1 qualification."


Imaging through Noise with Quantum Illumination

Image Sensors World        Go to the original article...

ResearchGate, Arxiv.org: The University of Glasgow, UK, paper "Imaging Through Noise With Quantum Illumination" by Thomas Gregory, Paul-Antoine Moreau, Ermes Toninelli, and Miles J. Padgett proposes a detection technique that preferentially selects photon-pair events over isolated background events. This is somewhat similar to what SPAD designers do to reject sunlight, but not exactly the same:

"The contrast of an image can be degraded by the presence of background light and sensor noise. To overcome this degradation, quantum illumination protocols have been theorised (Science 321 (2008), Physics Review Letters 101 (2008)) that exploit the spatial correlations between photon-pairs. Here we demonstrate the first full-field imaging system using quantum illumination, by an enhanced detection protocol. With our current technology we achieve a rejection of background and stray light of order 5 and also report an image contrast improvement up to a factor of 5.5, which is resilient to both environmental noise and transmission losses. The quantum illumination protocol differs from usual quantum schemes in that the advantage is maintained even in the presence of noise and loss. Our approach may enable laboratory-based quantum imaging to be applied to real-world applications where the suppression of background light and noise is important, such as imaging under low-photon flux and quantum LIDAR."


"We have demonstrated a quantum illumination protocol to perform full-field imaging achieving a contrast enhancement through the suppression of both background light and sensor noise. Structure within the thermal background illumination is potentially a-priori unknown and therefore cannot be suppressed with a simple ad-hoc background subtraction. Through resilience to environmental noise and losses, such a quantum illumination protocol should find applications in real-world implementations including quantum microscopy for low light-level imaging, quantum LIDAR imaging applications, and quantum RADAR. Improvements in detector technologies such as SPAD arrays capable of time-tagging events should enable time-of-flight applications to be realised and applied outside of the laboratory through the increased acquisition speed and time resolution that they enable."
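The core idea, preferring coincident photon-pair events over isolated background counts, can be illustrated with a toy Monte Carlo sketch (the pixel counts, rates, and gating scheme below are illustrative assumptions, not the paper's actual protocol):

```python
import random

random.seed(1)

N_PIX = 64      # pixels per detector arm (assumption)
FRAMES = 5000   # number of frames acquired (assumption)
BG_PER_ARM = 3  # uncorrelated background counts per arm per frame (assumption)

def one_frame(signal_present):
    """Simulate one frame: background lands randomly on each arm;
    a photon pair, when present, hits correlated pixels in both arms."""
    a = {random.randrange(N_PIX) for _ in range(BG_PER_ARM)}
    b = {random.randrange(N_PIX) for _ in range(BG_PER_ARM)}
    if signal_present:
        p = random.randrange(N_PIX)
        a.add(p)
        b.add(p)
    return a, b

def acquire(signal_present):
    singles = coincidences = 0
    for _ in range(FRAMES):
        a, b = one_frame(signal_present)
        singles += len(a)
        coincidences += len(a & b)  # keep only pair-correlated events
    return singles, coincidences

s_on, c_on = acquire(True)
s_off, c_off = acquire(False)
print("contrast without gating:", s_on / s_off)
print("contrast with coincidence gating:", c_on / c_off)
```

Because uncorrelated background only rarely lands on the same pixel in both arms by accident, the coincidence-gated contrast comes out several times higher than the raw singles contrast, which is the qualitative effect the paper reports.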


Volvo LiDAR Cost Estimated at $22

Image Sensors World        Go to the original article...

Not all LiDARs are expensive. A TechInsights teardown report of the Volvo 31360888 Brake Assist LiDAR found in the V40 car estimates its manufacturing cost at only $22.61, although the retail price is more than $586:


Hynix Shifts Fab Capacity from Memory to CIS

Image Sensors World        Go to the original article...

KoreaTimes, KoreaHerald: SK Hynix said yesterday that it will convert part of its M10 DRAM fab in Icheon, Gyeonggi Province, to CIS production. “This is to reduce DRAM wafer capacity considering the DRAM demand environment and to strengthen the competitiveness of its CIS business,” said the company.



ST Q2 Earnings Call: Structured Light vs ToF in Smartphones

Image Sensors World        Go to the original article...

SeekingAlpha: ST updates on its Q2 imaging business results:

"During Q2, we had at least an Imaging sensor and/or a MEMS device in all of the top 10 smartphones currently on this market. We also continue to earn design wins and ramp shipments for our time-of-flight sensor, analog products and RF products for 4G front-end modules.

Clearly, in Q2, the performance of growth which was higher than the midpoint of our range is mainly related to specialized imaging product.

Well, about specialized imaging also, again, I will not comment on our competitor, but I will comment, okay, the visibility we have and what ST is doing. Well, it is clear that you know that ST is a key player in the 3D sensing for the face recognition. Since 2017 second half, the unique foolproof technology is based on what we call structured light where ST is a key player, and we are still, let's say, growing in this kind of application.

In parallel, components in the smartphone, there is clearly some other components like ambient licensing or, let's say, Time-of-Flight based proximity sensor or autofocus assist or ranging sensor. And clearly, okay, I repeat that we have accumulated a huge volume in this Time-of-Flight and we continue. That's the reason why, as I told you, in the top 10 smartphones on the market today, we have either a special imaging sensor, including this Time-of-Flight or a MEMS sensor. Also, we disclosed to you during the recent quarter that now ST is important player on imaging licensing on the smartphone or, let's say, other wearable application. Now in term of trend of the industry, it is clearly that one trend we are seeing is introduction of indirect Time-of-Flight for the world-facing camera first, which certainly will be, let's say, a future competitive solution to address the depth map sensing. Now ST here, I confirm to you that we have a very strong road map with a very competitive and high-performing product that we will deliver to the market, whatever in iOS or Android phone. So, this is, let's say, the dynamic specific to ST we have on this application for the smartphone.

I confirm to you that we are in competition -- for the time, you have only one unique solution for face recognition 3D sensing, it is the structured light. The other, let's say, architecture in term of system are, let's say, less foolproof. Well -- and again, this is what we confirmed. We said to the Capital Market Day and we know that in the near future, certainly, architecture like structured light improved and indirect Time-of-Flight will be in competition. ST addressed the two technology architecture. And for sure, certainly, Sony is more addressing the indirect Time-of-Flight kind of architecture, and we will be in competition. But we do not see, let's say, dramatic or material change in the dynamic for the short term."

...for the time being, the structured light for the front-facing is a technology, okay, again, since H2 2017 and certainly will continue for a while. As we disclosed to you, we do believe that at a certain moment of time, indirect Time-of-Flight base architecture will certainly come up on this kind of application. Presently, as far as the performance is, let's say, consistent with the structured light. Presently, okay, some advantages in form factor or something like that. Well, this trend, okay, is confirmed and ST will compete overall on both architecture. And then you know that as, let's say, generic trend for semiconductor, the challenge will be always to reduce the form factor to improve the sector offering and to reduce the cost of ownership.

Then on the world-facing, it is clear that indirect Time-of-Flight based solution will be certainly the winning architecture. And again, okay, here, you will see maybe this year and certainly next year introduction of this kind of technology. For the short term, I mean this year, it is not revenue for ST, but we are offering a solution for 2020 and beyond and we will be a key competitor in this market. Now then other competitors like ambient licensing or, let's say, ranging sensor based on direct Time-of-Flight will continue as the RGB camera will have more and more pixel and you need to have to focus assist and this kind of stuff. So, no major change, no change compared to what we said at Capital Market Day.
"


Sony RX100 VII review

Cameralabs        Go to the original article...

The Sony RX100 VII is a high-end compact designed for travel, action, video and vlogging, with a 20 Megapixel 1in sensor, 24-200mm zoom, flip screen, popup viewfinder, fast burst shooting, 4k video and mic input. Find out if it’s the compact for you in my review-so-far!…

The post Sony RX100 VII review appeared first on Cameralabs.


Image Sensor Americas Agenda

Image Sensors World        Go to the original article...

The Image Sensor Americas conference, to be held on October 15-16, 2019 in San Jose, CA, announces its agenda. Naturally, a good part of it is image sensor presentations:

  • Towards Large-Scale, SPAD-Based ToF Imagers for Automotive, Robotic and EdgeAI Applications
    Wade Appelman | VP of Sales and Marketing of SensL Technologies
  • CMOS Image Sensors for Bio-medical, Industrial and Scientific Applications: Current Challenges and Perspectives
    Renato Turchetta | CEO of IMASENIC Advanced Imaging S.L.
  • Near Field Depth Generation From a Single Image Sensor
    Paul Gallagher | Vice-President of Strategic Marketing of Airy3D
  • Ge-on-Si ToF Imager Sensor SoC
    Neil Na | Co-Founder and Chief Science Officer of Artilux
  • Imaging Sensors and Systems for a Genomics Revolution
    Tracy Fung | Sr. Staff Engineer, Product Development CMOS Lead of Illumina
  • Far-Infrared thermal camera an effortless solution for improving ADAS detection robustness
    Emmanuel Bercier | Strategy and Automotive Market Manager of ULIS-SOFRADIR


Ams Bets Big on 3D Sensing

Image Sensors World        Go to the original article...

The Ams Q2 2019 report emphasizes the company's focus on 3D sensing solutions:


TechInsights’ State of the Art of Smartphone Imagers Review – Part 3

Image Sensors World        Go to the original article...

TechInsights' series of posts "The state of the art of smartphone imagers" is based on Ray Fontaine's presentation at IISW 2019 in June. Part 3 covers "Back-Illuminated Active Si Thickness, Deep Trench Isolation (DTI)."

"DTI was first introduced to back-illuminated pixels with conventional or slightly thicker active Si, and then optimized to enable substantially thicker active Si over time. For example, DTI came to early 1.0 µm pixels with a 2.5 µm to 2.7 µm active Si thickness and later enabled active Si up to 3.9 µm thick. Studying the 0.8 µm and 0.9 µm pixel generations it is clear an active Si thickness of >3.5 µm was selected to achieve sufficient pixel performance."
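The motivation for thicker active Si is silicon's weak absorption at longer wavelengths, which a Beer-Lambert estimate makes concrete. The absorption coefficients below are approximate room-temperature textbook values for crystalline Si, used here as assumptions, not figures from the TechInsights report:

```python
import math

# Approximate room-temperature absorption coefficients of crystalline
# Si, in 1/um (textbook-order values, assumed for illustration):
ALPHA_PER_UM = {
    "green 530 nm": 0.78,
    "red 650 nm": 0.27,
    "NIR 850 nm": 0.054,
}

def absorbed_fraction(alpha_per_um, thickness_um):
    """Beer-Lambert: fraction of incident light absorbed in the layer,
    1 - exp(-alpha * thickness)."""
    return 1.0 - math.exp(-alpha_per_um * thickness_um)

for label, alpha in ALPHA_PER_UM.items():
    for t_um in (2.5, 3.9):
        frac = absorbed_fraction(alpha, t_um)
        print(f"{label}: {frac:.0%} absorbed in {t_um} um of Si")
```

Under these assumed coefficients, going from 2.5 µm to 3.9 µm of active Si barely changes green absorption but gives a substantial relative gain at the red/NIR end, which is consistent with thicker Si being selected for pixel performance.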


2013 Review of 3D Cameras

Image Sensors World        Go to the original article...

Not much has changed since 2013, when Nova Science Publishers released a book with a 3D imaging chapter, "A Review on Commercial Solid State 3D Cameras for Machine Vision." The review covers the PMD, MESA, Raytrix, TriDiCam, Fotonic, and Sick approaches:


Verge of CFA Diversity Era?

Image Sensors World        Go to the original article...

MDPI paper "The Effect of the Color Filter Array Layout Choice on State-of-the-Art Demosaicing" by Ana Stojkovic, Ivana Shopovska, Hiep Luong, Jan Aelterman, Ljubomir Jovanov, and Wilfried Philips from Ghent University, Belgium comes up with an interesting statement:

"In this study, by comparing performance of two state-of-the-art generic algorithms, we evaluate the potential of modern CFA-demosaicing. We test the hypothesis that, with the increasing power of NN-based demosaicing, the influence of optimal CFA design on system performance decreases. This hypothesis is supported with the experimental results. Such a finding would herald the possibility of relaxing CFA requirements, providing more freedom in the CFA design choice and producing high-quality cameras.

From this study, we derive a conclusion about the constantly increasing reconstruction power of the modern learning based demosaicing algorithms towards adaptiveness to any CFA design without loss in the reconstruction quality (which used to be dependent on the quality of the CFA design). This conclusion leads to a finding regarding the future opportunities for camera manufacturing and image reconstruction, specifically in combining lower hardware requirements with powerful reconstruction techniques. In other words, this means that, with the modern learning-based demosaicing methods, camera manufacturers have more freedom in the choice of the CFA pattern layout, without a noticeable loss in the image quality. In that direction, the patterns can be adapted to improve other image properties and facilitate various imaging tasks, such as the Quad Bayer that was designed to improve noise reduction in low-light imaging.
"
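For context on what demosaicing does: bilinear interpolation is the classical baseline that NN-based methods improve upon. The normalized-convolution form sketched below works for any CFA mask just by changing the sampling masks, which is exactly the kind of layout flexibility the paper discusses (a generic sketch, not the paper's algorithm):

```python
import numpy as np

def conv3x3(img, kernel):
    """3x3 correlation with zero padding (the kernel is symmetric,
    so this equals convolution)."""
    h, w = img.shape
    p = np.pad(img, 1)
    return sum(kernel[i, j] * p[i:i + h, j:j + w]
               for i in range(3) for j in range(3))

def bilinear_demosaic(mosaic):
    """Bilinear demosaic of an RGGB Bayer mosaic via normalized
    convolution: each channel is interpolated from its sampled sites
    only. Swapping the masks adapts the same code to any CFA layout."""
    h, w = mosaic.shape
    masks = np.zeros((h, w, 3), dtype=bool)
    masks[0::2, 0::2, 0] = True   # R at even rows, even cols
    masks[0::2, 1::2, 1] = True   # G
    masks[1::2, 0::2, 1] = True   # G
    masks[1::2, 1::2, 2] = True   # B
    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5],
                       [0.25, 0.5, 0.25]])
    out = np.zeros((h, w, 3))
    for c in range(3):
        sampled = np.where(masks[..., c], mosaic, 0.0)
        weight = masks[..., c].astype(float)
        den = conv3x3(weight, kernel)
        out[..., c] = conv3x3(sampled, kernel) / np.maximum(den, 1e-12)
    return out
```

A learning-based demosaicer replaces the fixed interpolation kernel with a trained network; the paper's finding is that such networks adapt to the mask layout well enough that the CFA choice matters much less than it did for hand-designed interpolators.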


SmartSens Launches Two Industrial Grade Sensors – SC2310T and SC4210T

Image Sensors World        Go to the original article...

PRNewswire: SmartSens announced two new CMOS sensors, SC2310T and SC4210T, with a unique pixel architecture that delivers superior low-light sensitivity and 100dB HDR, combined with an industrial temperature range from -30°C to 85°C.

The new products are said to be leading the market with an SNR1s of 0.21 lux. The 2MP SC2310T and 4MP SC4210T SmartClarity sensors are based on 3um BSI pixel technology and support video capture at 60 fps in HDR mode.
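For reference, a dynamic-range figure in dB converts to bits via bits = dB / (20·log10 2) ≈ dB / 6.02:

```python
import math

def db_to_bits(db):
    """Dynamic-range conversion: dB = 20*log10(ratio), bits = log2(ratio)."""
    return db / (20 * math.log10(2))

print(f"100 dB ~ {db_to_bits(100):.1f} bits")
```

So a 100dB HDR spec corresponds to roughly 16.6 bits between the brightest and darkest resolvable signals.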

"Over the last nine years, we have built a rich portfolio of imaging sensors used in electronics and hardware products, including sports cameras, drones, robot cleaners, consumer automotive cameras, surveillance and smart home cameras. With the growing demand for superior quality imaging sensors outside of traditional consumer applications, SC2310T and SC4210T will extend our product line for our customers," said Chris Yiu, CMO, SmartSens. "The ability to deliver unparalleled image quality in extreme lighting conditions, and being able to operate under critical environmental temperature ranges, makes the devices an attractive choice for next-generation catch-all video applications."

SC2310T and SC4210T are currently available for sampling and are expected to enter volume production in July.


Tamron 17-28mm f2.8 Di III review

Cameralabs        Go to the original article...

The Tamron 17-28mm f2.8 Di III RXD is a small and light wide-angle zoom with a fast f2.8 focal ratio that's designed for Sony E-mount full-frame cameras. It nicely complements Tamron’s earlier 28-75mm f2.8 and comes in at a lower price than Sony’s FE 16-35mm f2.8. See how they compare in Thomas's review!…

The post Tamron 17-28mm f2.8 Di III review appeared first on Cameralabs.


Rumor on 5 New Sony Full Frame Sensors

Image Sensors World        Go to the original article...

Sony E-Mount Rumors publishes what it calls "leaked datasheets" of 5 full-frame CMOS sensors: IMX311, IMX313, IMX409, IMX521, IMX554. The most unusual one is the IMX311 with 45°-angled pixels. Note that the resolution of ~12,000 x ~4,000 square pixels does not have the same aspect ratio as the 41mm x 30mm optical format, possibly due to the 45° pixel orientation:


IMX521 is a high speed sensor with quad CFA:


Depth Sensing in Automotive Applications

Image Sensors World        Go to the original article...

First part of RSIP webinar series on automotive AI talks about ways to sense depth in ADAS and autonomous driving applications:


ToF News: Broadcom, Renesas, Opnous

Image Sensors World        Go to the original article...

The ToF market is becoming rather crowded, as many companies enter it anticipating fast growth.

The Broadcom AFBR-S50MV85G is an APD-pixel-based distance and motion measurement ToF sensor. It supports up to 3,000 frames per second with up to 16 illuminated pixels. The sensor is aimed at industrial applications and gesture sensing and is said to have best-in-class ambient light suppression of up to 200k Lux, so use in outdoor environments should not be a problem.

Features:
  • Integrated 850 nm laser light source
  • Between 7-16 illuminated pixels
  • FoV of up to 12.4° x 6.2°
  • Very fast measurement rates of up to 3 kHz
  • Variable distance range up to 10m
  • Operation up to 200k Lux ambient light
  • Works well on all surface conditions
  • Laser Class 1 eye safe ready
  • Accuracy better than 1%
  • Drop-in compatible within the AFBR-S50 sensor platform
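The timing requirements implied by these specs follow from the basic direct-ToF relation d = c·Δt/2 (a generic calculation assuming pulsed operation, not figures from the Broadcom datasheet):

```python
C = 299_792_458.0  # speed of light in m/s

def round_trip_time_s(distance_m):
    """Direct ToF: the light pulse travels to the target and back."""
    return 2.0 * distance_m / C

def distance_from_time_m(round_trip_s):
    """Invert the relation: d = c * t / 2."""
    return C * round_trip_s / 2.0

# A 10 m range implies a ~67 ns round trip; 1% accuracy at 10 m (0.1 m)
# therefore means resolving sub-nanosecond pulse timing.
print(f"{round_trip_time_s(10.0) * 1e9:.1f} ns round trip at 10 m")
print(f"{distance_from_time_m(1e-9) * 100:.1f} cm per ns of timing")
```

The sub-nanosecond timing resolution this implies is the reason fast avalanche photodiode (APD) pixels are used in sensors of this class.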


The Renesas ISL29501 (Intersil) is a ToF processor with an external emitter and detector. The sensor operates on the indirect ToF in-phase/out-phase principle:
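The in-phase/out-phase (indirect ToF) principle recovers distance from the phase shift of a modulated illumination signal. A generic four-bucket formulation, sketched here as a textbook illustration rather than the ISL29501's specific implementation:

```python
import math

C = 299_792_458.0  # speed of light in m/s

def itof_distance_m(a0, a90, a180, a270, f_mod_hz):
    """Four-bucket indirect ToF: the phase of the returned modulation,
    estimated from four correlation samples taken 90 degrees apart,
    encodes the target distance."""
    phase = math.atan2(a90 - a270, a0 - a180) % (2.0 * math.pi)
    return C * phase / (4.0 * math.pi * f_mod_hz)

def unambiguous_range_m(f_mod_hz):
    """Distance at which the phase wraps around: c / (2 * f_mod)."""
    return C / (2.0 * f_mod_hz)

# Synthetic check: generate buckets for a target at 2.0 m with 10 MHz
# modulation and recover the distance.
f_mod = 10e6
true_phase = 4.0 * math.pi * f_mod * 2.0 / C
buckets = [math.cos(true_phase - k * math.pi / 2.0) for k in range(4)]
print(f"recovered distance: {itof_distance_m(*buckets, f_mod):.3f} m")
print(f"unambiguous range at 10 MHz: {unambiguous_range_m(f_mod):.2f} m")
```

The phase-wrap limit is why i-ToF devices trade modulation frequency (and hence depth precision) against maximum unambiguous range.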


Shanghai, China-based Opnous offers a number of ToF sensors with different resolutions:

Go to the original article...

Recent Image Sensor Videos: Omnivision, Prophesee, Intel

Image Sensors World        Go to the original article...

Omnivision publishes its CEO Boyd Fowler's explanation of HALE technology:



Prophesee CMO Guillaume Butin presents another use case of its event-driven sensors, vibration monitoring:



Other Prophesee videos explain differences between event-driven and frame-based sensors:





Intel explains how its coded light 3D camera works:


Go to the original article...

LiDAR News: Voyant Photonics, Aeye

Image Sensors World        Go to the original article...

Techcrunch: NYC-based LiDAR startup Voyant Photonics raises $4.3M from Contour Venture Partners, LDV Capital, and DARPA. The startup's founding team came from the Lipson Nanophotonics Group at Columbia University.

"In the past, attempts in chip-based photonics to send out a coherent laser-like beam from a surface of lightguides (elements used to steer light around or emit it) have been limited by a low field of view and power because the light tends to interfere with itself at close quarters.

Voyant’s version of these “optical phased arrays” sidesteps that problem by carefully altering the phase of the light traveling through the chip.
"

This is an enabling technology because it’s so small,” says Voyant CEO and co-founder Steven Miller. “We’re talking cubic centimeter volumes.

It’s a misconception that small lidars need to be low-performance. The silicon photonic architecture we use lets us build a very sensitive receiver on-chip that would be difficult to assemble in traditional optics. So we’re able to fit a high-performance lidar into that tiny package without any additional or exotic components. We think we can achieve specs comparable to lidars out there, but just make them that much smaller.



BusinessWire: AEye publishes a whitepaper "AEye Redefines the Three “R’s” of LiDAR – Rate, Resolution, and Range." Basically, it proposes to bend the performance metrics in such a way that AEye's LiDAR looks better:

Extended Metric #1: From Frame Rate to Object Revisit Rate

It is universally accepted that a single interrogation point, or shot, does not deliver enough confidence to verify a hazard. Therefore, passive LiDAR systems need multiple interrogations/detects on the same object or position over multiple frames to validate an object. New, intelligent LiDAR systems, such as AEye’s iDAR™, can revisit an object within the same frame. These agile systems can accelerate the revisit rate by allowing for intelligent shot scheduling within a frame, with the ability to interrogate an object or position multiple times within a conventional frame.

In addition, existing LiDAR systems are limited by the physics of fixed laser pulse energy, fixed dwell time, and fixed scan patterns. Next generation systems such as iDAR, are software definable by perception, path and motion planning modules so that they can dynamically adjust their data collection approach to best fit their needs. Therefore, Object Revisit Rate, or the time between two shots at the same point or set of points, is a more important and relevant metric than Frame Rate alone.
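The revisit idea can be illustrated with a toy scheduler that spends a frame's shot budget unevenly, so points inside a region of interest are interrogated several times per frame (a hypothetical sketch; this is not AEye's actual iDAR scheduler):

```python
def schedule_shots(points, roi, budget, roi_revisits=3):
    """Toy intra-frame shot scheduler: points inside a region of interest
    are interrogated several times per frame, shrinking their revisit
    interval. Hypothetical sketch - not AEye's actual iDAR scheduler."""
    shots = []
    for p in points:
        repeats = roi_revisits if p in roi else 1
        shots.extend([p] * repeats)
    return shots[:budget]  # never exceed the frame's shot budget

grid = [(x, y) for y in range(3) for x in range(3)]  # a tiny 3x3 scan field
roi = {(1, 1)}  # a detected hazard flagged for extra attention
plan = schedule_shots(grid, roi, budget=12)
print(plan.count((1, 1)))  # the hazard is interrogated 3 times in one frame
```

In a real agile system the extra shots would also be interleaved in time across the frame, which is what shortens the object revisit interval rather than merely repeating measurements back-to-back.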



Extended Metric #2: From Angular Resolution to Instantaneous (Angular) Resolution

The assumption behind the use of resolution as a conventional LiDAR metric is that the entire Field of View will be scanned with a constant pattern and uniform power. However, AEye’s iDAR technology, based on advanced robotic vision paradigms like those utilized in missile defense systems, was developed to break this assumption. Agile LiDAR systems enable a dynamic change in both temporal and spatial sampling density within a region of interest, creating instantaneous resolution. These regions of interest can be fixed at design time, triggered by specific conditions, or dynamically generated at run-time.

“Laser power is a valuable commodity. LiDAR systems need to be able to focus their defined laser power on objects that matter,” said Allan Steinhardt, Chief Scientist at AEye. “Therefore, it is beneficial to measure how much more resolution can be applied on demand to key objects in addition to merely measuring static angular resolution over a fixed pattern. If you are not intelligently scanning, you are either over sampling, or under sampling the majority of a scene, wasting precious power with no gain in information value.”



Extended Metric #3: From Detection Range to Classification Range

The traditional metric of detection range may work for simple applications, but for autonomy the more critical performance measurement is classification range. While it has been generally assumed that LiDAR manufacturers need not know or care about how the domain controller classifies or how long it takes, this can ultimately add latency and leave the vehicle vulnerable to dangerous situations. The more a sensor can provide classification attributes, the faster the perception system can confirm and classify. Measuring classification range, in addition to detection range, will provide better assessment of an automotive LiDAR’s capabilities, since it eliminates the unknowns in the perception stack, pinpointing salient information faster.

Unlike first generation LiDAR sensors, AEye’s iDAR is an integrated, responsive perception system that mimics the way the human visual cortex focuses on and evaluates potential driving hazards. Using a distributed architecture and edge processing, iDAR dynamically tracks objects of interest, while always critically assessing general surroundings. Its software-configurable hardware enables vehicle control system software to selectively customize data collection in real-time, while edge processing reduces control loop latency. By combining software-definability, artificial intelligence, and feedback loops, with smart, agile sensors, iDAR is able to capture more intelligent information with less data, faster, for optimal performance and safety.



Medium: Researchers from Baidu Research, the University of Michigan, and the University of Illinois at Urbana-Champaign demonstrate a way to hide objects from LiDAR detection:

Go to the original article...

Imec and Holst Centre Transparent Fingerprint Sensor

Image Sensors World        Go to the original article...

Charbax publishes a video interview with Hylke Akkerman (Holst Centre) and Pawel Malinowski (Imec) on the transparent fingerprint sensor that won the 2019 I-Zone Best Prototype Award at SID Display Week:

Go to the original article...

Basler Announces ToF Camera with Sony Sensor

Image Sensors World        Go to the original article...

Basler unveils the Blaze ToF camera based on Sony's DepthSense IMX556PLR sensor technology:

Go to the original article...

Under-Display News

Image Sensors World        Go to the original article...

IFNews: a Credit Suisse report on the smartphone display market talks about the under-display selfie camera in Oppo phones:

"Oppo also became the first smartphone brand to unveil an engineering sample with under-display selfie camera last week, by putting the front facing camera under the AMOLED display, although we believe its peers such as Xiaomi, Lenovo, Apple, Huawei, etc., are also working on similar solution. This technology allows a real full screen design as there is no hole or notch on the display, and the screen can act as a screen when the front camera is not in use. Nevertheless, the display image quality in the area surrounding the camera seems to be worse than the rest of the display as it requires special treatment and processing. Moreover, the native image quality (resolution, contrast, brightness, etc.) taken by the under-display selfie camera is also not comparable with current front facing camera. Our checks suggest the brands (not just Oppo) are currently working with software/AI companies for post-processing."


The report also talks about the efforts to reduce under-display fingerprint sensor thickness:

"All of the flagship Android smartphones showcased at the MWC Shanghai are equipped with under-display fingerprint sensing, mostly adopting optical sensor with only Samsung using ultrasonic sensor, and none of them is using Face ID-like biometric sensing. We believe under-display fingerprint is becoming the mainstream for Android's high-end smartphones and could further proliferate into mid-end as the overall cost comes down. We estimate overall under-display fingerprint shipment of ~200 mn units in 2019E (60 mn units for ultrasonic and 140 mn units for optical), up from ~30 mn units in 2018, and could further increase to 300 mn units in 2020E, excluding the potential adoption by iPhone.

For the optical under-display fingerprint, our checks suggest the industry is working on (1) thinner stacking for 5G; (2) half-screen sensing for OLED panel; (3) single-point sensing for LCD panel; and (4) full-screen in-cell solution for LCD panel. As mentioned earlier, 5G smartphone will consume more battery power and it will be necessary to reduce the thickness of the under-display fingerprint module for more room to house a bigger battery.

Currently, optical under-display fingerprint sensor module has a thickness of nearly 4 mm, as its structure requires certain distance between the CMOS sensor and the AMOLED display to have the best optical imaging performance. Given the overall thickness of the handset nowadays is around 7.5-9.0 mm, smartphone makers are required to sacrifice the battery capacity to make extra room for the optical under-display fingerprint sensor. The new structure for 5G smartphone that Goodix and Egis are working on will be adopting MEMS Pinhole structure, replacing the current 2P/3P optical lens structure, given the MEMS Pinhole design could achieve total thickness of 0.5-0.8 mm, versus ~4 mm for 2P/3P optical lens. Our checks suggest the supply chain is preparing sampling/qualification of the new structure in 2H19 for mass production in 2020."

Go to the original article...

Sony A7r IV review

Cameralabs        Go to the original article...

The Sony A7r Mark IV is a full-frame mirrorless camera with 61 Megapixels, 10fps shooting, 4k video up to 30p, built-in stabilisation and a Pixel Shift Composite mode that generates images with up to 240 Megapixels. Find out if it's the high-res body for you in my full review!…

The post Sony A7r IV review appeared first on Cameralabs.

Go to the original article...
