2018 Harvest Imaging Forum Agenda

Image Sensors World        Go to the original article...

Albert Theuwissen announces the agenda of the Harvest Imaging Forum, to be held on December 6-7 in Delft, the Netherlands.

Day 1 of the forum is devoted to "Efficient embedded deep learning for vision applications," presented by Marian VERHELST (KU Leuven, Belgium):
  1. Introduction to deep learning
    From neural networks (NN) to deep NN
    Benefits & applications
    Training and inference with deep NN
    Types of deep NN
    Sparse connectivity
    Residual networks
    Separable models
    Key enablers & challenges
  2. Computer architectures for deep NN inference
    Benefits and limitations of CPU and GPUs
    Exploiting NN structure in custom processors
    Architecture level exploitation: spatial reuse in efficient datapaths
    Architecture level exploitation: temporal reuse in efficient memory hierarchies
    Circuit level exploitation: near/in memory compute
    Exploiting NN precision in custom processors
    Architecture level exploitation: reduced and variable precision processors
    Circuit level exploitation: mixed signal neural network processors
    Exploiting NN sparsity:
    Architecture level exploitation: computational and memory gating
    Architecture level exploitation: I/O compression
  3. HW and SW optimization for efficient inference
    Co-optimizing NN topology and precision with hardware architectures
    Hardware modeling
    Hardware-aware network optimization
    Network-aware hardware optimization
  4. Trends and outlook
    Dynamic application-pipelines
    Dynamic SoCs
    Beyond deep learning, explainable AI
    Outlook
Day 2 is devoted to "Image and Data Fusion," presented by Wilfried PHILIPS (imec and Ghent University, Belgium):
  1. Data fusion: principles and theory
    Bayesian estimation
    Priors and likelihood
    Information content, redundancy, correlation
    Application to image processing: recursive maximum likelihood tracking, pixel fusion
  2. Pixel level fusion
    Sampling grids and spatio-temporal aliasing
    Multi-modal sensors, interpolation
    Temporal fusion and superresolution
    Multi-focal fusion
  3. Multi-camera image fusion
    Occlusion and inpainting
    Uni- and multimodal inter-camera pixel fusion
    Fusion of heterogeneous sources: camera, lidar, radar
    Applications: time of flight, hyperspectral, HDR, multiview imaging
    Fusion of heterogeneous sources: radar, video, lidar
  4. Geometric fusion
    Multi-view geometry
    Fusion of point clouds
    Image stitching
    Simultaneous localization and mapping
    Applications: remote sensing from drones and vehicles
  5. Inference fusion in camera networks
    Multi-camera calibration
    Occlusion reasoning for multiple cameras with an overlapping viewpoint
    Multi-camera tracking
    Cooperative fusion and distributed processing
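The Day 2 program opens with Bayesian estimation, whose simplest imaging instance is worth sketching: fusing two noisy measurements of the same pixel by inverse-variance weighting. A minimal illustration (my example, not forum material):

```python
def fuse_pixels(z1, var1, z2, var2):
    """Fuse two noisy measurements of the same scene point.

    Under a Gaussian noise model, the maximum-likelihood estimate is the
    inverse-variance weighted mean; the fused variance is lower than
    either input variance, which is the whole point of pixel fusion.
    """
    w1, w2 = 1.0 / var1, 1.0 / var2
    fused = (w1 * z1 + w2 * z2) / (w1 + w2)
    fused_var = 1.0 / (w1 + w2)
    return fused, fused_var

# Example: two sensors observe the same point; the less noisy one
# (variance 1.0) dominates the fused estimate.
est, var = fuse_pixels(100.0, 4.0, 110.0, 1.0)  # est = 108.0, var = 0.8
```

The same weighting generalizes to the recursive maximum-likelihood tracking listed above, where each new frame updates the running estimate.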

Go to the original article...

Pyxalis Presents GS HDR Sensor


Pyxalis appears to be expanding its activity beyond custom image sensors to standard products. At the Vision Show in Stuttgart, Germany, the company presented a flyer for its Robin chips with 3.2um global shutter pixels, said to provide "artifact-free in-pixel HDR." The new sensor outputs ASIL data with every frame, making it suitable for automotive applications:


Thanks to AB for the photo from the Pyxalis booth!


Ouster Discusses its LiDAR Principles


PRNewswire: Ouster unveils the details of its LiDAR technology. Several breakthroughs covered by recently granted patents have enabled Ouster's move toward state-of-the-art, high-volume, silicon-based sensors and lasers that operate in the near-infrared spectrum.

Ouster's multi-beam lidar is said to carry significant advantages over traditional approaches:

True solid state - Ouster's core technology is a two chip (one monolithic laser array, one monolithic receiver ASIC) solid state lidar core, which is integrated in the mechanically scanning product lines (the OS-1 and OS-2) and will be configured as a standalone device in a future solid state product. Unlike competing solid state technologies, Ouster's two chip lidar core contains no moving parts on the macro or micro scale while retaining the performance advantages of scanning systems through its multi-beam flash lidar technology.

Lower cost at higher resolution - Ouster's OS-1 64 sensor costs nearly 85% less than competing sensors, making it the most economical sensor on the market. In an industry first, Ouster has decoupled cost from increases in resolution by placing all critical functionality on scalable semiconductor dies.

Simplified architecture - Ouster's multi-beam flash lidar sensor contains a vastly simpler architecture than other systems. The OS-1 64 contains just two custom semiconductor chips capable of firing lasers and sensing the light that reflects back to the sensor. This approach replaces the thousands of discrete, delicately positioned components in a traditional lidar with just two.

Smaller size and weight - Because of the sensor's simpler architecture, Ouster's devices are significantly smaller, lighter weight and more power efficient, making them a perfect fit for unmanned aerial vehicles (UAVs), handheld and backpack-based mapping applications, and small robotic platforms. With lower power and more resolution, drone and handheld systems can run longer and scan faster for significant increases in system productivity.

In an article on the company's website, CEO Angus Pacala wrote, "I'm excited to announce that Ouster has been granted foundational patents for our unique multi-beam flash lidar technology which allows me to talk more openly about the incredible technology we've developed over the last three years and why we're going to lead the market with a portfolio of low-cost, compact, semiconductor-based lidar sensors in both scanning and solid state configurations."

Patents US10063849 "Optical system for collecting distance information within a field" and US9989406 "Systems and methods for calibrating an optical distance sensor" disclose a LiDAR with a Tx side consisting of an array of VCSEL lasers and an Rx side consisting of an array of SPADs. The VCSELs project a set of points onto the subject, while each SPAD has a small FOV aligned with its projection point in order to cut the ambient light. The Rx-side optics also has a 2nm-narrow spectral filter to cut more of the ambient illumination. All this is placed on a rotating platform:
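The patents describe a direct time-of-flight architecture: each SPAD timestamps returning laser photons, and range follows from half the round-trip delay. A sketch of that arithmetic only (my illustration of the general dToF relation, not Ouster's implementation):

```python
C = 299_792_458.0  # speed of light, m/s

def tof_range_m(round_trip_s: float) -> float:
    """Range from a measured round-trip photon time of flight."""
    return C * round_trip_s / 2.0

# A return arriving ~0.93 us after the VCSEL fires corresponds to
# roughly 140 m, the range quoted later for the OS-1-64.
d = tof_range_m(0.93e-6)
```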


Angus Pacala has also published an explanatory article on the company's Medium blog and given an interview to ArsTechnica. A few quotes:

"While our technology is applicable to a wide range of wavelengths, one of the more unique aspects of our sensors is their 850 nm operating wavelength. The lasers in a lidar sensor must overcome the ambient sunlight in the environment in order to see obstacles. As a result lidar engineers often choose operating wavelengths in regions of low solar flux to ease system design. Our decision to operate at 850 nm runs counter to this trend.

A plot of solar photon flux versus wavelength at ground level (the amount of sunlight hitting the earth versus wavelength) shows that at 850 nm there is almost 2x more sunlight than at 905 nm, up to 10x more sunlight than at 940nm, and up to 3x more sunlight than at 1550 nm — all operating wavelengths of legacy lidar systems.



We’ve gotten plenty of strange looks for our choice given that it runs counter to the rest of the industry. However, one of our patented breakthroughs is exceptional ambient light rejection which makes the effective ambient flux that our sensor sees far lower than the effective flux of other lidar sensors at other wavelengths, even accounting for the differences in solar spectrum. Our IP turns what would ordinarily be a disadvantage into a number of critical advantages:

  • Better performance in humidity
  • Improved sensitivity in CMOS: Silicon CMOS detectors are far more sensitive at 850 nm than at longer wavelengths. There is as much as a 2x reduction in sensitivity just between 850 and 905 nm. Designing our system at 850 nm allows us to detect more of the laser light reflected back towards our sensor which equates to longer range and higher resolution.
  • High quality ambient imagery
  • Access to lower power, higher efficiency technologies

...the flood illumination in a conventional flash lidar, while simpler to develop, wastes laser power on locations the detectors aren’t looking. By sending out precision beams only where our detectors are looking, we achieve a major efficiency improvement over a conventional flash lidar.

Our single VCSEL die has the added advantage of massively reducing system complexity and cost. Where other lidar sensors have tens or even hundreds of expensive laser chips and laser driver circuits painstakingly placed on a circuit board, Ouster sensors use a single laser driver and a single laser die. A sliver of glass no bigger than a grain of rice is all that’s needed for an OS-1–64 to see 140 meters in every direction. It’s an incredible achievement of micro-fabrication that our team has gotten this to work at all, let alone so well.

The second chip in our flash lidar is our custom designed CMOS detector ASIC that incorporates an advanced single photon avalanche diode (SPAD) array. Developing our own ASICs is key to our breakthrough performance and cost, but the approach is not without risk. Ouster’s ASIC team has distinguished themselves time and again and they’ve now delivered seven successful ASICs — each more powerful, more reliable, and more refined than the previous."
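The ambient-rejection argument in the quotes can be made concrete with a toy direct-ToF model: photon arrival times are histogrammed over many laser shots, ambient photons spread uniformly across the time bins, and the laser return piles up in one bin. All numbers below are invented for illustration and are not Ouster's:

```python
import random

random.seed(42)

N_BINS = 100     # time bins across the measurement window
TRUE_BIN = 37    # bin containing the real laser return
N_SHOTS = 400    # laser shots accumulated per range estimate

hist = [0] * N_BINS
for _ in range(N_SHOTS):
    # Ambient photons arrive uniformly in time; the narrow spectral
    # filter and small per-SPAD FOV keep their rate low.
    for b in range(N_BINS):
        if random.random() < 0.002:
            hist[b] += 1
    # The laser return is detected with much higher probability.
    if random.random() < 0.5:
        hist[TRUE_BIN] += 1

# The range estimate is simply the histogram peak.
estimated_bin = max(range(N_BINS), key=hist.__getitem__)
```

With these rates the signal bin accumulates on the order of 200 counts against an ambient floor of about one count per bin, which is why the peak survives even strong sunlight.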


Photoneo 3D Camera Wins Vision Show Award


Optics.org reports: "The winner of this year’s VISION Award, presented by Imaging & Machine Vision magazine, was named at the conference as Photoneo. Its new PhoXi 3D Camera is said to be the highest-resolution and highest-accuracy area-based 3D camera available. It is based on Photoneo’s patented technology called Parallel Structured Light, implemented in a custom CMOS image sensor.

The developer says this “novel approach” makes it the most efficient technology for high-resolution scanning in motion. The key features of Parallel Structured Light include: scanning in rapid motion – one-frame acquisition, with 40 m/s motion possible; 10x higher resolution and accuracy with a more efficient depth coding technique, with per-pixel measurement possible; no motion blur, resulting from its 10 µs per-pixel exposure time; and rapid acquisition of 1068x800 point clouds and texture at up to 60 fps."

Photoneo claims that its custom designed image sensor is the key to the high performance of its 3D camera:

"Photoneo has developed a new technique of one frame 3D sensing that can offer high resolution common for multiple frame structured light systems, with fast, one frame acquisition of TOF systems. We call it Parallel Structured Light and it runs thanks to our exotic image sensor."


The company's patent application US20180139398 updates the "exotic image sensor" design over the earlier version circa 2014:


The 3D camera offers a nice trade-off between the resolution and speed:




Update: IMVE has also published an article on the Photoneo technology.


Himax Updates on 3D Imaging, CIS Business


Globenewswire: Himax quarterly earnings release updates on the company's CIS and 3D imaging business:

"Himax has participated in most of the smartphone OEMs’ ongoing 3D sensing projects covering all three types of technologies, namely structured light, active stereo camera (ASC) and time-of-flight, where it provides 3D sensing total solution, or just the projector or optics inside the module, depending on the customers’ needs. By offering either the projector or critical optics, Himax has been collaborating with a small handful of smartphone names that have in-house capability to come up with their own customized 3D sensing solutions. Himax already has one such end customer using its technology for mass production with two more in the pipeline targeting 2019 product launch.

For most Android smartphone makers who don’t have such in-house capability, however, the Company aims to provide a total solution to enable their 3D sensing. At present, 3D sensing adoption in this market remains low. The adoption is hindered primarily by the prevailing high hardware cost of 3D sensing, the long development lead time required to integrate 3D sensing into the smartphone, and the lack of killer applications. Instead of 3D sensing, most of the Android phone makers have chosen the lower-cost fingerprint technology, which can achieve similar phone unlock and online payment functions with a somewhat compromised user experience.

Reacting to their lukewarm response, Himax is working on the next generation 3D sensing with an aim to leapfrog the market by providing high performance, easy to adopt and yet cost friendly total solutions, targeting most of the Android smartphone players. In addition, Himax is providing 3D sensing developer kit which is being used to develop applications over both smartphone and non-smartphone platforms. Himax believes that 3D sensing will be widely used by more Android smartphone makers when the ecosystem is able to substantially lower the cost of adoption while offering easy-to-use, fully-integrated total solutions, for which Himax is playing a key part.

The Company has mentioned previously that 3D sensing can have a wide range of applications beyond smartphones. While smartphones remain its top priority, the Company has started to explore business opportunities in various industries by leveraging its SLiM 3D sensing total solution. Such industries are typically less sensitive to cost and always require a total solution. Himax's recently announced collaboration with Kneron, an industry leader in edge-based artificial intelligence, to develop an AI-enabled 3D sensing security and surveillance solution is just one example of real-world applications using its 3D sensing technology.

On CMOS image sensor business updates, Himax continues to make great progress with its two machine vision sensor product lines, namely the near-infrared (“NIR”) sensor and the Always-on Sensor (“AoS”). The NIR sensor is a critical part of both of the Company’s structured light and ASC 3D sensing total solutions. On the AoS product line, the joint offering of Emza and Himax technologies uniquely positions the Company to provide ultra-low-power, smart image sensing total solutions, leveraging Himax’s industry-leading super-low-power CIS and ASIC designs and Emza’s unique AI-based computer vision algorithms. The Company is pleased with the status of engagement with leading players in areas such as connected home, smart building and security, all of which are new frontiers for Himax.
"


Caterpillar Develops LiDAR for Trucks


InternationalMining: Caterpillar’s Command for Hauling automation system has so far used a Velodyne LiDAR sensor for its trucks. Cat has now developed its own in-house LiDAR sensor, Cat LiDAR. While the Velodyne is used in hundreds of haul trucks across Western Australia and elsewhere, it is said to lack the reliability and capability to meet Cat’s long-term needs.

For example, the Velodyne was not able to work in cold climates below freezing, and the LiDAR would often detect dust as a hazard, causing unnecessary truck stops.

Cat LiDAR has been in field tests for the past year and one commercial unit has already been shipped to a new Command for Hauling customer. The OEM is expected to make it available as a replacement option for existing operations, Cat said.

The new system's improvements include greater tolerance of extreme temperatures (it has been tested down to -40°C), improved accuracy of operating distances between vehicles and obstructions, an enhanced ability to distinguish between hazards and non-hazards, and the ability to report the diagnostics and health of the LiDAR sensor.

Cat says the new LiDAR has been proven to last three times longer than the previous sensor before its first reported failure.


Gait Recognition in China


Techcrunch, AP: Chinese AI startup Watrix has recently raised $14.5m to further develop its gait recognition technology, which is intended to complement face recognition in security and surveillance cameras. The technology is already being used by police in Beijing and Shanghai, where it can identify individuals even when their face is obscured or their back is turned.

Huang Yongzhen, the CEO of Watrix, said that its system can identify people from up to 50 meters away, even with their back turned or face covered. This can fill a gap in facial recognition, which needs close-up, high-resolution images of a person’s face to work.

“You don’t need people’s cooperation for us to be able to recognize their identity,” Huang said in an interview in his Beijing office. “Gait analysis can’t be fooled by simply limping, walking with splayed feet or hunching over, because we’re analyzing all the features of an entire body.”


High Speed Imaging from Sparse Photon Counts


Arxiv.org paper "A 'Little Bit' Too Much? High Speed Imaging from Sparse Photon Counts" by Paramanand Chandramouli, Samuel Burri, Claudio Bruschini, Edoardo Charbon, and Andreas Kolb from the University of Siegen, Germany, and the Swiss Federal Institute of Technology, Lausanne, Switzerland shows the power of machine learning in recovering nice images from a single-photon mess:

"Recent advances in photographic sensing technologies have made it possible to achieve light detection in terms of a single photon. Photon counting sensors are being increasingly used in many diverse applications. We address the problem of jointly recovering spatial and temporal scene radiance from very few photon counts. Our ConvNet-based scheme effectively combines spatial and temporal information present in measurements to reduce noise. We demonstrate that using our method one can acquire videos at a high frame rate and still achieve good quality signal-to-noise ratio. Experiments show that the proposed scheme performs quite well in different challenging scenarios while the existing denoising schemes are unable to handle them."


Ams Pre-Releases NanEyeM Module


BusinessWire: ams announces the pre-release of the NanEyeM, a miniature integrated Micro Camera Module (MCM) assembly with a tiny footprint of just 1mm² at the image sensor end. The NanEyeM is aimed at integration into space-constrained industrial and consumer designs, providing new embedded vision capabilities in products such as smart toys and home appliances.

The NanEyeM offers a resolution of 100kpixel, 10-bit digital readout, and features a Single-Ended Interface Mode (SEIM). Like a standard SPI, the SEIM channel is easy to implement in any host processor and provides a cost-optimized solution without the need for LVDS deserialization. The maximum frame rate over the SEIM interface is 58 fps at a clock rate of 75MHz.
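The quoted 58 fps at a 75MHz clock is consistent with the sensor's format: 100 kpixels at 10 bits is about 1Mbit per frame, so 58 frames/s needs about 58Mbit/s, which fits on a single-ended serial link clocked at 75MHz. A quick sanity check (my arithmetic, assuming one bit per clock and ignoring blanking and protocol overhead):

```python
PIXELS_PER_FRAME = 100_000      # 100 kpixel resolution
BITS_PER_PIXEL = 10             # 10-bit digital readout
FPS = 58                        # maximum frame rate over SEIM
CLOCK_HZ = 75_000_000           # SEIM clock, one bit per cycle assumed

payload_bps = PIXELS_PER_FRAME * BITS_PER_PIXEL * FPS  # 58 Mbit/s
link_utilization = payload_bps / CLOCK_HZ   # fraction of the link used
```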

The NanEyeM features a custom multi-element lens which greatly improves the effective resolution of the sensor and reduces distortion compared to competing sensors that have a single-element lens. The MTF (Modulation Transfer Function) is >50% in the corners, distortion is <15% and color aberration is <1px. “Designers who wish to add high-resolution video capability in space-constrained enclosures have until now been hampered by the size of the industrial image sensors on the market. The introduction of the NanEyeM module opens up new possibilities to add camera capability in the smallest spaces,” said Tom Walschap, Marketing Director in the CMOS Image Sensor business line at ams. “Provided in an easy-to-use module format with a convenient digital output, designers can quickly add camera capability with little development effort.”

The NanEyeM image sensor will be available for sampling in Q2 2019.


ToF-Based People Counter


ST presents a use case for its ToF proximity sensor:


SiOnyx Night Vision Demo


SiOnyx publishes a demo of its Aurora camera:

"The Sionyx Aurora camera looking at buffalo grazing about 1.5 hours after sunset. The first part of the recording is taken using Aurora's Twilight mode and the second part using Color Night Vision. Notice the pinkish color of the grass and trees. This is "Earth Glow" where IR energy collected in the atmosphere during the day time is reflected by plants at night. Aurora is able to detect and take advantage of that IR light."


Yole on Components for 3D Sensing


Yole Developpement publishes a nice webcast "Components for 3D Sensing Revolution:"



The webcast has an interesting comparison of cost of various 3D cameras:


Ams Announces 3.2um GS Pixel Sensor, the Fastest among 1-inch Sensors


BusinessWire: ams introduces a new global shutter sensor for machine vision and Automated Optical Inspection (AOI) equipment which offers better image quality and higher throughput than any previous device that supports the 1” optical format.

The new CSG14k image sensor features 14MP resolution at a "frame rate considerably higher than any comparable device on the market offers today." The CSG14k’s 12-bit output provides sufficient dynamic range to handle wide variations in lighting conditions and subjects. The sensor’s global shutter with true CDS (Correlated Double Sampling) produces high-quality images of fast-moving objects free of motion artefacts.

The high performance and resolution of the CSG14k are the result of innovations in the design of the sensor’s 3.2µm x 3.2µm pixels. The new pixel design is 66% smaller than the pixel in the previous generation of 10-bit ams image sensors, while offering a 12-bit output and markedly lower noise.
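The "global shutter with true CDS" claim is worth unpacking: correlated double sampling reads each pixel's reset level and subtracts it from the signal level, so reset (kTC) noise and fixed offsets, which are common to both samples, cancel. A toy numeric model of generic CDS (not ams's circuit):

```python
import random

random.seed(1)

def read_pixel_cds(signal: float, fixed_offset: float) -> float:
    """One CDS read: the same random reset noise contaminates both the
    reset sample and the signal sample, so subtraction cancels it."""
    reset_noise = random.gauss(0.0, 5.0)        # kTC noise for this read
    reset_sample = fixed_offset + reset_noise
    signal_sample = fixed_offset + reset_noise + signal
    return signal_sample - reset_sample

# The recovered value is the signal alone, free of offset and kTC noise.
value = read_pixel_cds(1000.0, fixed_offset=37.0)
```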

“Future advances in factory automation technology are going to push today’s machine vision equipment beyond the limits of its capabilities. The breakthrough in image quality and performance offered by the CSG14k gives manufacturers of machine vision systems headroom to support new, higher throughput rates while delivering valuable improvements in image quality and resolution,” said Tom Walschap, Marketing Director in the CMOS Image Sensors business line at ams.

The CSG14k will be available for sampling in the first half of 2019.


TowerJazz Announces Automotive SPAD Parameters, LeddarTech Combines SPADs with CIS


GlobeNewswire: TowerJazz's 0.18um CIS SPAD platform offers an integrated solution with superb figures of merit. Its photon detection efficiency (PDE) is similar to, or better than, that of the leading stand-alone SPADs on the market. The dark count rate (DCR) is less than 100Hz/um^2 at 60°C and less than 1kHz/um^2 at 100°C (especially suited for automotive applications), and the jitter is less than one nanosecond. This sophisticated platform also saves silicon area and, therefore, reduces the cost of mass production.
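To put the DCR figures in perspective, dark counts scale with active area. Assuming an illustrative 10um x 10um SPAD (my example size, not a TowerJazz spec), the quoted rates translate to per-pixel dark count budgets:

```python
AREA_UM2 = 10 * 10             # illustrative 10um x 10um SPAD active area

dcr_60c_hz = 100 * AREA_UM2    # <100 Hz/um^2 at 60C  -> <10 kHz per pixel
dcr_100c_hz = 1000 * AREA_UM2  # <1 kHz/um^2 at 100C  -> <100 kHz per pixel

# Even at 100C, a 1us range gate sees only ~0.1 expected dark counts,
# small compared with a typical LiDAR signal return.
dark_counts_per_1us_gate = dcr_100c_hz * 1e-6
```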

TowerJazz's 0.18um CIS SPAD process has been chosen by LeddarTech for its next generation automotive LiDAR solutions, combining CMOS image sensors and SPAD on the same chip. Integration of everything on the same chip is said to save silicon cost.

“With our advanced CIS SPAD technology, we are able to provide groundbreaking manufacturing solutions for the growing LiDAR and automotive markets. We are pleased to work with LeddarTech, a true innovator in solid-state LiDAR technology,” said Avi Strum, TowerJazz SVP and GM, CMOS Image Sensor Business Unit.


Pinnacle Promises to Minimize Motion Artifacts in ON Semi HDR Sensor


PRWeb: Pinnacle Imaging Systems and ON Semiconductor jointly announce a lower cost HDR video surveillance solution capable of capturing 120 dB DR scenes with 1080p 30 fps output.

The Pinnacle Imaging Systems Denali-MC HDR ISP IP core running on a Xilinx Zynq 7030 SoC has been ported to support the ON Semi AR0239 sensor. Denali-MC’s motion compensation algorithms minimize the motion artifacts often associated with multi-exposure HDR capture, complemented by Pinnacle’s locally adaptive tone mapping.
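For context on what the ISP is doing: merging a bracketed exposure set is simple for a static scene (normalize each unsaturated sample by its exposure time and average), and the hard part that motion compensation addresses is that objects move between the exposures. A static-scene sketch of generic exposure fusion (not Pinnacle's Denali-MC algorithm):

```python
def fuse_exposures(readings, exposure_times, full_scale=4095):
    """Merge bracketed samples of one pixel into a linear radiance
    estimate: discard clipped samples, normalize the rest by exposure
    time, and average."""
    total, n = 0.0, 0
    for value, t in zip(readings, exposure_times):
        if value >= full_scale:   # saturated sample carries no information
            continue
        total += value / t        # counts per unit exposure time
        n += 1
    if n == 0:
        raise ValueError("all exposures saturated")
    return total / n

# A highlight clips the 8x exposure but not the 1x exposure.
radiance = fuse_exposures([4095, 800], [8.0, 1.0])   # -> 800.0
```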

“As a technology partner, ON Semiconductor has been instrumental in providing the critical support necessary to bring this project to fruition,” said Alfred Zee, CEO of Pinnacle Imaging Systems. “The high dynamic range capabilities of the ON Semiconductor AR0239 sensor, coupled with the performance of the Xilinx Zynq SoC, make an ideal foundation for our Ultra HDR Surveillance Platform. Working closely with the ON Semiconductor team, we’ve been able to achieve the best possible HDR and low light performance from the AR0239 CMOS image sensor.”


Himax on 3D Sales Projections


Himax Fact Sheet has the company's forecast on 3D imaging product sales:


2019 IISW Call For Papers


The 2019 International Image Sensor Workshop (IISW) is to be held on June 24-27 in Snowbird, Utah. Now in its 33rd year, the workshop is intended for image sensor technologists; in order to encourage attendee interaction and a shared experience, attendance is limited, with strong acceptance preference given to workshop presenters. The scope of the workshop includes all aspects of electronic image sensor design and development. In addition to regular oral and poster papers, the workshop will include invited talks and announcement of International Image Sensors Society (IISS) Award winners.


  • The deadline for abstract submission is February 4, 2019 (PST).
  • Authors will be notified of the acceptance of their abstract by March 14, 2019.
  • Final-form 4-page paper submission date is May 3, 2019.
  • Presentation material submission date is June 14, 2019.


Theoretical Way to Overcome Photon Shot Noise Limits


Nature publishes a fairly theoretical paper by Hoi-Kwan Lau and Aashish A. Clerk from the University of Chicago, "Fundamental limits and non-reciprocal approaches in non-Hermitian quantum sensing." The paper points to a way to overcome what is considered to be a fundamental limit: photon shot noise. The authors also plan to attack thermal noise in their future research.

"In a quantum setting, optical sensors are typically limited because light is made up of particles, and this discreteness leads to unavoidable noise. But this study revealed an unexpected method to combat that limitation... We think we’ve uncovered a new strategy for building extremely powerful quantum sensors."

Unfortunately, this revolution will not happen overnight: everything in this paper is highly theoretical. Nonreciprocity of the sensing system is said to be the key factor in increasing the signal while keeping the noise the same:


In conclusion, the authors say: "We... discussed a new method for enhancing dispersive measurement using effective non-Hermitian physics, namely the use of nonreciprocity to enhance sensing. We show that nonreciprocity allows one to arbitrarily exceed the fundamental bound on the measurement rate of a reciprocal sensor, and discussed a simple implementation that does not require any amplification processes. We also show that nonreciprocity can enhance the sensitivity of mode-splitting type sensor.

Finally, we note that the general theory developed in this work could be easily applied to more general kinds of sensing problems. For example, the same formalism could be used to understand the performance of non-Hermitian sensors when thermal noise dominates (as would be the case for systems deep in the classical limit).
"

Via Phys.org


Yole Presentation at European Imaging & Sensors Summit


Yole Developpement publishes its CIS market presentation from the European Imaging & Sensors Summit held in Grenoble, France, in September 2018. A few interesting slides:


ON Semi Reports Quarterly Results


SeekingAlpha: The ON Semi earnings call gives a few details about the company's image sensor business:

"Our momentum in automotive image sensors continues to accelerate. Key factors driving our growth in the automotive image sensor market are significant technology lead over competition and the industry’s most extensive product portfolio, giving customers more choices than before. With a complete line of image sensors, including one, two, and eight megapixels, we are the only provider of a complete range of pixel densities on a single platform for the next generation ADAS and autonomous driving applications.

Furthermore, with our recent acquisition of SensL we now have capability to provide LiDAR sensors in addition to image sensors, radar, and ultrasonic sensors. We are the only semiconductor supplier with capability to provide all four types of sensors for ADAS and autonomous driving. We believe that this capability will not only drive significant content for us but will also provide a key differentiating advantage to us as the automotive industry moves to sensor fusion architectures for ADAS and autonomous driving.

With recently introduced X-class image sensors, we expect to further strengthen our leadership in machine vision and robotics markets.
"

ON Semi also publishes a revenue dynamics comparison of its imaging business with other divisions:


AMETEK Acquires Forza Silicon for $40m


PRNewswire: AMETEK announces that it has completed acquisition of Forza Silicon Corporation, a leader in the design and production of high-performance imaging sensors used in medical, defense, commercial and industrial applications.

"Forza is a highly strategic technology acquisition for AMETEK," said David A. Zapico, AMETEK Chairman and CEO. "Customers rely on Forza's leading-edge design capability to support their advanced sensor development projects. Forza also provides our Vision Research business with custom sensor design and production capability, allowing for accelerated development of next generation sensor technology for use across our market leading, high-speed cameras."

Forza has annual sales of approximately $20m and was acquired for approximately $40m.

AMETEK is a global manufacturer of electronic instruments and electromechanical devices with annualized sales of approximately $4.8b. The common stock of AMETEK is a component of the S&P 500 Index.

Forza joins AMETEK as part of its Electronic Instruments Group (EIG) - a leader in advanced analytical, monitoring, testing, calibrating and display instruments with annualized sales of $3.0b.


Huawei Honor Magic 2 Phone Features 6 Cameras


Huawei Honor Magic 2 smartphone has 3 front and 3 rear cameras.

BGR reports: "In terms of imaging, there is a combination of 16-megapixel wide-angle, 24-megapixel monochrome and a 16-megapixel ultra wide-angle sensors on the back of the device. For selfies, there is a combination of 16-megapixel camera with two 2-megapixel depth sensors. This selfie shooter setup combined with IR face scanner for 3D face unlock is embedded within the slider mechanism."

Go to the original article...

Canon CIS Business Platform

Image Sensors World        Go to the original article...

PRNewswire: Canon USA reminds that it is now offering select CMOS sensor products for sale to the industrial market. "For several decades, Canon has been a leader in developing and manufacturing advanced CMOS sensors with state-of-the-art technologies, which until now, were for exclusive use in Canon products," said Kazuto Ogawa, president and COO, Canon USA, Inc. "It was a natural evolution to expand into a new business platform that leverages our expertise in sensor manufacturing to target the growing market demands for high-quality industrial imaging solutions."

Canon sensors include:

3U5MGXS Sensor - with an electronic global shutter and all-pixel progressive readout at 120fps, the Canon 3U5MGXS 5MP CMOS sensor features a low power consumption of 500mW. The 3U5MGXS is now available.


35MMFHDXSCA Sensor - featuring a large 19μm pixel pitch, the 35MMFHDXSCA uses new pixel and readout circuitry technologies that deliver a 2.76MP resolution. The 35MMFHDXSCA is now available.


120MXS Sensor - by incorporating close to the same number of pixels as photoreceptors in the human eye, the 120MXS CMOS sensor delivers 120 MP resolution at 9.4fps in an APS-H format. This sensor targets the needs of the inspection, aerial mapping, life sciences, digital archiving and transportation industries and is available now.


2U250MRXS Sensor – with a readout speed of 1.25 billion pixels per second, the prototype 2U250MRXS CMOS sensor delivers 250MP resolution in an APS-H format. No further information about this sensor is available.
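As a back-of-the-envelope check on the quoted readout figures (an illustrative sketch using only the specs above, not official Canon data):

```python
# Throughput implied by the quoted Canon sensor specs (illustrative only).

def frame_rate(pixels_per_second: float, resolution: float) -> float:
    """Frames per second implied by a given pixel readout rate."""
    return pixels_per_second / resolution

# 2U250MRXS: 1.25 billion pixels/s over 250 MP implies 5 fps.
fps_250mp = frame_rate(1.25e9, 250e6)
print(f"2U250MRXS: {fps_250mp:.1f} fps")  # 5.0 fps

# 120MXS: 120 MP at 9.4 fps implies a readout rate of ~1.13 billion pixels/s.
rate_120mxs = 120e6 * 9.4
print(f"120MXS: {rate_120mxs / 1e9:.2f} Gpix/s")
```

So the prototype 2U250MRXS reads out at nearly the same pixel rate as the shipping 120MXS, but spends it on resolution rather than frame rate.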

Go to the original article...

Samsung Image Sensor Sales Growth

Image Sensors World        Go to the original article...

Samsung quarterly report updates on the company CIS business:

"For the System LSI Business, overall earnings improved thanks to the growing demand for image sensors in China... In particular, the image sensor business achieved record-high quarterly results driven by greater adoption of multiple cameras and high-resolution sensors by smartphone makers.

For 2019, Samsung expects solid earnings growth to continue, bolstered by rising demand for image sensors used in more sophisticated camera specifications... The Company will also focus on diversifying its product line-up to include 3D sensors, fingerprint-on-display sensors, and chips used in automotive and IoT applications.

For the Foundry Business, earnings continued to grow QoQ thanks to increased demand for mobile APs and image sensors... Looking to the fourth quarter, demand for mobile APs and image sensors is expected to decline amid weak seasonality for smartphone components.
"

Go to the original article...

Nikon Z 35mm f1.8S review

Cameralabs        Go to the original article...

The Nikon Z 35mm f1.8 S is a mild wide-angle prime lens for Nikon’s full-frame Z-series mirrorless cameras. A popular focal length for street photography and general-purpose use, it features an f1.8 focal ratio and fast and quiet focusing. Thomas puts this first Z-mount prime lens through its paces in his review-so-far!…

The post Nikon Z 35mm f1.8S review appeared first on Cameralabs.

Go to the original article...

Espros 3D Imaging Market Forecast

Image Sensors World        Go to the original article...

Espros October Newsletter features the company's view on the 3D imaging market growth:

"The Compound Annual Growth Rate (CAGR) for 3D imaging is expected to be some 40%, starting at some $2 billion in 2017. We clearly see the hockey stick coming."
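Taking the quoted figures at face value, the projected market size is easy to tabulate (a sketch assuming a constant ~40% CAGR from a ~$2B 2017 base, which is the newsletter's claim, not a forecast of ours):

```python
# Projected 3D imaging market size under the quoted ~40% CAGR
# from a ~$2B base in 2017 (illustrative compounding only).

def market_size(base: float, cagr: float, years: int) -> float:
    """Compound growth: base * (1 + cagr) ** years."""
    return base * (1.0 + cagr) ** years

base_2017 = 2e9   # ~$2 billion in 2017
cagr = 0.40       # ~40% per the newsletter

for year in range(2017, 2023):
    size = market_size(base_2017, cagr, year - 2017)
    print(f"{year}: ${size / 1e9:.1f}B")
```

At that rate the market would pass $10B within five years, which is the "hockey stick" Espros refers to.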

Go to the original article...

Nano-Antenna Based IR Detectors Review

Image Sensors World        Go to the original article...

MDPI publishes Universiti Sains Malaysia, King Saud University, and Qassim University paper "Nano-Antenna Coupled Infrared Detector Design" by Mohamed H. Mubarak, Othman Sidek, Mohamed R. Abdel-Rahman, Mohd Tafir Mustaffa, Ahmad Shukri Mustapa Kamal, and Saad M. Mukras.

"Since the 1940s, infrared (IR) detection and imaging at wavelengths in the two atmospheric windows of 3 to 5 and 8 to 14 μm has been extensively researched. Through several generations, these detectors have undergone considerable developments and have found use in various applications in different fields including military, space science, medicine and engineering. For the most recently proposed generation, these detectors are required to achieve high-speed detection with spectral and polarization selectivity while operating at room temperature. Antenna coupled IR detectors appear to be the most promising candidate to achieve these requirements and has received substantial attention from research in recent years. This paper sets out to present a review of the antenna coupled IR detector family, to explore the main concepts behind the detectors as well as outline their critical and challenging design considerations. In this context, the design of both elements, the antenna and the sensor, will be presented individually followed by the challenging techniques in the impedance matching between both elements. Some hands-on fabrication techniques will then be explored. Finally, a discussion on the coupled IR detector is presented with the aim of providing some useful insights into promising future work."

Go to the original article...

Arm Develops Always-On Face Unlock

Image Sensors World        Go to the original article...

Arm reports that its machine learning-based always-on mobile face unlock achieves over 98% accuracy. It relies quite heavily on a low-power, low-resolution image sensor and an RGB-depth sensor:

Go to the original article...

e2v Unveils 5MP 2.8um-Pixel GS Sensor

Image Sensors World        Go to the original article...

Teledyne e2v announces the expansion of its Snappy family of sensors with a new 5MP device. The Snappy 5M is designed for barcode reading, 2D scanning and similar applications. Available in both monochrome and color, the Snappy 5M has a small 1/1.8-inch format, containing a 2.8μm low-noise global shutter pixel. The device can stream video at ~50fps at 10 bits over a 4-wire MIPI CSI-2 interface.

Snappy 5M is designed to enable fast, extended range scanning and includes unique features and region of interest modes:

  • A Fast Self Exposure (FSE) mode automatically calculates the optimum integration time, which is applied to the first image from the device. The mode is user programmable and provides continuous fast decoding, tolerating any kind of static or dynamic lighting environment. This is advantageous compared with conventional auto exposure methods, improving convergence speed and robustness.
  • A Smart ROI feature searches for barcodes in the image frame, and reports their locations as metadata in the image footer. The regions of the image containing barcodes are discerned from the background image to considerably reduce downstream image processing (FPGA/CPU/DSP) power, time and cost. Up to 16 different regions can be detected simultaneously. Other forms of repetitive signatures such as printed character strings can also be detected for document scanning and OCR applications.
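The core idea behind a fast self-exposure mode can be sketched in a few lines: scale the integration time so the scene's measured brightness lands on a target level. This is a hypothetical illustration of the concept; the function and parameter names are ours, not the Snappy 5M's actual register interface.

```python
# Hypothetical sketch of the idea behind a fast self-exposure mode:
# scale integration time so mean scene brightness hits a target level.
# Names and limits are illustrative, not the actual e2v sensor API.

def next_integration_time(current_time_us: float,
                          mean_level: float,
                          target_level: float = 128.0,
                          min_us: float = 10.0,
                          max_us: float = 10000.0) -> float:
    """One-shot exposure update: integration time scales roughly
    linearly with the desired-to-measured brightness ratio,
    clamped to the sensor's valid exposure range."""
    if mean_level <= 0:
        return max_us  # fully dark test frame: open up to the maximum
    scaled = current_time_us * (target_level / mean_level)
    return max(min_us, min(max_us, scaled))

# Example: a half-dark test frame (mean 64 of 255) captured at 500 us
# suggests doubling the integration time to 1000 us.
print(next_integration_time(500.0, 64.0))  # 1000.0
```

Because the correction is computed from a single test frame rather than iterated over many frames, this style of update converges faster than a conventional feedback auto-exposure loop, which matches the convergence-speed advantage e2v claims.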

Go to the original article...

css.php