Archives for September 2019

Epticore and Opnous

Image Sensors World        Go to the original article...

There are a number of ToF startups in China. Opnous has been mentioned here a couple of months ago. Apparently, the company has licensed Brookman (Japan) ToF sensor technology. A recent company presentation tells more about the company's technology and differentiation:


Epticore Microelectronics is another startup in China developing ToF technology. The company's presentation shows its products and plans:

Go to the original article...

WLO for Next Generation Under-Display Fingerprint Modules

Image Sensors World        Go to the original article...

IFNews: A GF Securities report on the CIS industry shows a next-generation under-display fingerprint module that uses WLO to reduce thickness. GF Securities forecasts ultra-thin under-display fingerprint sensor shipments of 94M units in 2020, accounting for 7% of global smartphone shipments.

Go to the original article...

Espros pToF Presentation

Image Sensors World        Go to the original article...

Espros presentation at MEMS Consulting seminar in China gives interesting info on the company's long range solution for automotive LiDARs:

Go to the original article...

ToF Sensors in Mobile Devices

Image Sensors World        Go to the original article...

TheElec reports that Sony ToF sensors inside LG Innotek modules will be used in Apple 2020 models of iPad and iPhone. Samsung Galaxy S10 5G and Galaxy Note 10+ use Sony ToF sensors too.

LG G8 ThinQ smartphones use ToF sensor from PMD-Infineon combined with ams VCSEL:

Go to the original article...

Autosens Brussels Awards

Image Sensors World        Go to the original article...

Autosens Brussels announces its awards in several categories, some of them image sensing-related:

Most Exciting Start-up – sponsored by Sony Semiconductor Solutions
  • Winner: TriEye
  • Silver: Outsight
  • Silver: WaveSense
Best in Class Perception System – sponsored by Varroc Lighting Systems
  • Winner: OmniVision Technologies – OAX4010 ISP ASIC
  • Silver: General Motors – Transparent Trailer for 2020 GMC Sierra and Silverado HD
  • Silver: Innoviz – InnovizOne LiDAR
Most Innovative In-Cabin Application
  • Winner: Daimler – MBUX Interior Assistant
  • Silver: Eyeris – In-vehicle Scene Understanding AI
  • Silver: Seeing Machines – FOVIO Driver Monitoring Technology


Go to the original article...

ams Hyperspectral Sensor for Consumer Applications

Image Sensors World        Go to the original article...

ams presents hyperspectral sensor for NIR sensing of spectral signatures in consumer applications:

Go to the original article...

Huawei Mate30 Pro 5G Camera

Image Sensors World        Go to the original article...

Huawei Mate30 Pro 5G smartphone has some unique camera features:

Go to the original article...

Interview with Eric Fossum

Image Sensors World        Go to the original article...

Art19 publishes an hour-long interview with Eric Fossum:

Go to the original article...

OmniVision’s Automotive SoC Claimed to Have Industry’s Best Low-Light Performance, Lowest Power, and Smallest Size

Image Sensors World        Go to the original article...

PRNewswire: OmniVision announces the 1.3MP OX01F10 SoC, a 1/4" 3.0um pixel image sensor with an integrated ISP for automotive rear view camera (RVC) and surround view system (SVS) applications.

"Analysts predict that SVS and RVC will continue to hold the majority share in the automotive camera market, with over 50% of the total market volume through 2023. SVS, in particular, is expected to double its growth between now and 2023 due to increased customer adoption," said Andy Hanvey, director of automotive marketing at OmniVision. "Our OX01F10 SoC provides the best option for automotive designers responding to this growing consumer demand for better RVCs, along with the expansion of SVS into the mainstream market. Additionally, this SOC's functional safety features allow module providers to create a single platform for both the viewing cameras and the machine vision applications that require ASIL B."

OmniVision's dual conversion gain (DCG) technology is employed in this SoC to achieve a high dynamic range of 120dB with only two captures, as opposed to three required by the competition, which minimizes motion artifacts while reducing power consumption and boosting low-light performance. The OX01F10 features less than 300mW typical power consumption, which is said to be 30% lower than competitors.
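
As a rough sanity check on the two-capture claim (illustrative numbers only, not OmniVision's published parameters): merging a long and a short capture extends dynamic range by roughly the exposure/gain ratio between them, so a ~72dB single capture combined at a 1:256 ratio lands near 120dB. A minimal Python sketch:

  import math

  def combined_hdr_db(single_capture_db, exposure_ratio):
      # Dynamic range of a merged long+short capture: the short capture
      # raises the maximum representable signal by the exposure/gain ratio.
      return single_capture_db + 20 * math.log10(exposure_ratio)

  # Illustrative values only: a 72dB single capture merged at a 1:256 ratio.
  print(combined_hdr_db(72, 256))  # ~120.2 dB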

Its integrated ISP features:
  • Lens chromatic aberration correction
  • Advanced noise reduction and local tone mapping
  • Optimizations for the on-chip image sensor's PureCel Plus technology

PureCel Plus technology provides what is said to be an industry-best SNR1 of 0.19 lux. This results in the OX01F10 performing better than the competition across challenging lighting conditions. The OX01F10 SoC is AEC-Q100 Grade 2 certified and samples are available now.

Go to the original article...

Outsight Develops 3D LiDAR-Spectrometer-on-Wheels for Automotive Applications

Image Sensors World        Go to the original article...

BusinessWire: Outsight launches its 3D Semantic Camera for autonomous driving and other industries. Outsight's founders, Raul Bravo (co-founder and CEO of the former company Dibotics) and Cedric Hutchings (co-founder of Withings and former VP of Nokia Technologies), joined forces to create a new entity that aims to combine the software assets of Dibotics with 3D sensor technology. Together with Dibotics' other co-founder Oliver Garcia and Scott Buchter, co-founder of Lasersec, the four have assembled a global team of top talent in San Francisco, Paris, and Helsinki to turn their vision into reality.

“We are excited to unveil our 3D Semantic Camera that brings an unprecedented solution for a vehicle to detect road hazards and prevent accidents.” - Cedric Hutchings, CEO and Co-founder of Outsight.

Outsight's 3D Semantic Camera is said to be able to bring Full Situation Awareness and new levels of safety/reliability to currently man-controlled machines like Level 1-3 ADAS, Construction/Mining equipment, Helicopters, etc., but also to accelerate the emergence of fully automated Smart Machines like Level 4-5 Self-Driving Cars, Robots, Drones, Autonomous flying taxis, etc.

"Our 3D Semantic Camera is not only able to tackle current driving safety problems , but bring driving safety to new levels. With being able to unveil the full reality of the world by providing information that was previously invisible, we at Outsight are convinced that a whole new world of applications will be unleashed. This is just the beginning." - Raul Bravo, President and Co-founder of Outsight.

The technology is claimed to be the first of its kind to provide Full Situation Awareness in a single device. It’s a mass-producible, “all in one solution” technology with the ability to simultaneously perceive and comprehend the environment from hundreds of meters away, including the key chemical composition of objects (Skin, Cotton, Ice, Snow, Plastic, Metal, Wood...).

This is partly made possible through the development of a low-power, long-range and eye-safe broadband laser that allows material composition to be identified through active hyperspectral analysis. Combined with its 3D SLAM-on-Chip capability (Simultaneous Localization and Mapping), Outsight's technology is able to unveil the Full Reality of the world in real time. Outsight's 3D Semantic Camera is capable of providing actionable information and object classification through an onboard SoC that does not rely on “Machine Learning”, resulting in lower power consumption and bandwidth requirements. This approach removes the need for massive training data sets, and guesswork is eliminated by actually “measuring” the objects. Being able to determine the material of an object adds a new level of confidence in determining what the camera is actually seeing.

It’s able to not only see and measure, but comprehend the world, as it provides the position, the size and the full velocity of all moving objects in its surroundings, providing valuable information for path planning and decision making. The 3D Semantic Camera can provide important information regarding road conditions and can, for example, identify black ice and other hazardous road conditions. This feature is vital for safety in ADAS systems for example. The system can also quickly identify pedestrians and bicyclists through its material identification capabilities.

Outsight has already started joint development programs with key OEMs and Tier1 providers in Automotive, Aeronautics and Security-Surveillance markets and will progressively open the technology to other partners in Q1-2020.






Thanks to JB for the info!

Go to the original article...

Synopsys, Himax Announce AI Vision Processor

Image Sensors World        Go to the original article...

GlobeNewswire: Himax announces the WiseEye WE-I Plus, an AI accelerator-embedded ASIC platform solution for developing and deploying CNN-based machine learning (ML) models in AIoT applications, including smart home appliances and surveillance systems.

The WiseEye WE-I Plus ASIC adopts a programmable processor with enhanced DSP features and power-efficient CDM, HOG and JPEG hardware accelerators for real-time motion detection, object detection, and image processing. To address the rising security risk surrounding AIoT applications, the WiseEye WE-I Plus ASIC is equipped with comprehensive, integrated hardware and software security solutions such as secure boot, secure OTA and secure metadata output over TLS. To meet the demand for ultra-low power and long battery life, in addition to the low-power-driven ASIC design, the embedded LDO and multi-state PMU have been purposely built to support shutdown, AoS (always-on sensing) and CV-efficient operation modes. Furthermore, an associated software library with a comprehensive tool chain is provided for efficient implementation of ML technology when processing captured data from image, voice and ambient sensors.

“The demand for battery-powered smart devices with AI-enabled intelligent sensing is rapidly growing, especially in markets such as home appliances, door locks, TVs, notebooks and building control or security. Our WiseEye WE-I Plus ASIC platform solution can be used with popular ML frameworks for the development of a wide range of applications in audio, video and signal processing where power is a strict constraint and on-device memory is limited. We are receiving positive feedback from our partners and leading industry players,” said Jordan Wu, President and CEO of Himax.

The chip is based on Synopsys ARC EV7x Vision Processor IP:



Go to the original article...

Sony E 16-55mm f2.8 G review

Cameralabs        Go to the original article...

The Sony E 16-55mm f2.8 G is a high-end general-purpose zoom for E-mount mirrorless cameras with APS-C sensors, like the A6000 series. It’s Sony’s first f2.8 zoom designed for APS-C mirrorless and one that owners of its higher-end cameras in the series have been crying out for. Find out if it meets expectations in my full review!…

The post Sony E 16-55mm f2.8 G review appeared first on Cameralabs.

Go to the original article...

Sony Officially Rejects Call to Spin-off Image Sensor Business

Image Sensors World        Go to the original article...

PRNewswire: Sony publishes a "Letter from the CEO to Sony’s Shareholders and All Stakeholders" rejecting the possibility of spinning off its image sensor business:

"...On June 13, 2019, Third Point LLC (“Third Point”) issued a public letter to investors suggesting that Sony should consider spinning-off and publicly listing our semiconductor business, which would effectively separate Sony into an entertainment company and a semiconductor (technology) company. We appreciate Third Point’s strong interest in Sony and welcome the fact that many people have been reminded of the value and further growth opportunities of that business.

Sony’s Board and management team, along with external financial and legal advisors in Japan and the U.S., conducted an extensive analysis of Third Point’s recommendations. Following this review, Sony’s Board, which is comprised of a majority of independent outside directors with diverse experience in a variety of industries, unanimously concluded that retaining the semiconductor business (now called the Imaging & Sensing Solutions (“I&SS”) business) is the best strategy for enhancing Sony’s corporate value over the long term. This is based on the fact that the I&SS business is a crucial growth driver for Sony that is expected to create even more value going forward through its close collaboration with the other businesses and personnel within the Sony Group. The Board also reaffirmed that to maintain and further strengthen its own competitiveness, it would be best for the I&SS business to stay within the Sony Group.

In its letter, Third Point described our semiconductor business, which is centered on image sensors, as a “Japanese crown jewel and technology champion.” Sony’s Board and management team share this view and are excited about the immense potential the I&SS business brings Sony. We expect it to not only further expand its current global number one position in imaging applications, but also continue to grow in new and rapidly developing markets such as the Internet of Things (“IoT”) and autonomous driving. We also expect it will contribute to the creation of a safer and more reliable society through its innovative technology.

While Sony’s Board and management team do not agree with Third Point’s recommendation to spin-off and publicly list the I&SS business, we will continue to proactively evaluate Sony’s business portfolio, pursue asset optimization within each business, and supplement our public disclosures as we execute on our strategy to increase shareholder value over the long term.

...Our strategy for future growth of the I&SS business is to develop AI sensors which make our sensors more intelligent by embedding artificial intelligence (AI) into the sensors themselves. We envisage AI and sensing being used across a wide range of applications such as IoT, autonomous driving, games and advanced medicine, and believe there is a potential for image sensors to evolve from the hardware they are today, to a solutions and platforms business.

...Our analysis, which was carried out in collaboration with outside financial advisors, also identified multiple meaningful sources of dis-synergy if the I&SS business was to separate from Sony and operate as a publicly listed independent company. These dissynergies include increased patent licensing fees, reduced ability to attract talent, increased costs and management resources as a publicly listed company, and tax inefficiencies, in addition to the time required for making the public listing.
"

Go to the original article...

NHK Future TV Technology Relies on 3D Vision

Image Sensors World        Go to the original article...

NHK STRL presentation from May 2019 talks about the company's vision for 2030-40 TV technology where 3D imaging takes a central role:

Go to the original article...

CNN Processor in Every Pixel

Image Sensors World        Go to the original article...

Manchester and Bristol Universities, UK, publish an arxiv.org paper "A Camera That CNNs: Towards Embedded Neural Networks on Pixel Processor Arrays" by Laurie Bose, Jianing Chen, Stephen J. Carey, Piotr Dudek, and Walterio Mayol-Cuevas (see the video presentation in an earlier post).

"We present a convolutional neural network implementation for pixel processor array (PPA) sensors. PPA hardware consists of a fine-grained array of general-purpose processing elements, each capable of light capture, data storage, program execution, and communication with neighboring elements. This allows images to be stored and manipulated directly at the point of light capture, rather than having to transfer images to external processing hardware. Our CNN approach divides this array up into 4x4 blocks of processing elements, essentially trading-off image resolution for increased local memory capacity per 4x4 "pixel". We implement parallel operations for image addition, subtraction and bit-shifting images in this 4x4 block format. Using these components we formulate how to perform ternary weight convolutions upon these images, compactly store results of such convolutions, perform max-pooling, and transfer the resulting sub-sampled data to an attached micro-controller. We train ternary weight filter CNNs for digit recognition and a simple tracking task, and demonstrate inference of these networks upon the SCAMP5 PPA system. This work represents a first step towards embedding neural network processing capability directly onto the focal plane of a sensor."

Go to the original article...

Omnivision Connects Arm ISP IP with its Automotive Sensor

Image Sensors World        Go to the original article...

PRNewswire: OmniVision has combined its OX03A1Y sensor with an FPGA-based Arm Mali-C71 ISP in a dual-mode automotive camera module.

"OmniVision's dual-mode image sensor showcases the Mali-C71's ability to process multiple real-time inputs with one pipeline, capturing both human display and computer vision images with a single image sensor, at the highest possible quality," said Tom Conway, director of product management, automotive and IoT Line of Business, Arm.

"Arm's ISP intellectual property is an important part of the automotive ecosystem, and they are a key partner for OmniVision," said Celine Baron, staff automotive product marketing manager at OmniVision. "This collaboration demonstrates the high performance that can be achieved by combining our premium 2.5MP image sensor with Arm's ISP, for automotive applications that need both computer vision and human displays from a single camera module."

OmniVision and Arm used an FPGA emulating the Mali-C71 ISP to simultaneously process images captured by the OX03A1Y sensor for both computer vision and human displays. This sensor uses an RCCB clear color filter pattern to capture high quality images in all lighting conditions. The Mali-C71 then processes the data concurrently, outputting two simultaneous image signals for both human viewing and machine vision.

The OX03A1Y is the industry's first image sensor to feature a 3.2µm pixel with 120dB HDR, dual conversion gain (DCG) and an RCCB color filter. DCG provides motion-free HDR up to ~85dB, for the best images when vehicles are in motion. The RCCB color filter allows in more light, which, in combination with the OmniBSI-2 pixel, produces low-light performance with SNR1 at 0.09 lux, while maintaining low power consumption. This is the first sensor to integrate all three capabilities. Additionally, the OX03A1Y is shipping in volume to automotive customers.

The OX03A1Y is available in a small 8.0 x 7.2mm chip-scale package, which is 35% smaller than competing image sensors. Additionally, this image sensor's power consumption is 20% lower than the competition.

The 2.5MP OX03A1Y image sensor integrates advanced ISO 26262 ASIL B functional safety features.

Go to the original article...

QIS Sensors to Help NASA Missions

Image Sensors World        Go to the original article...

EurekAlert: NASA is awarding a team of researchers from Rochester Institute of Technology and Dartmouth College a grant to develop a detector capable of sensing and counting single photons for future astrophysics missions. The detector leverages Quanta Image Sensor (QIS) technology and measures every photon that strikes the image sensor. While other sensors have been developed to see single photons, the QIS has several advantages including the ability to operate at room temperature, resistance to radiation and the ability to run on low power.

"This will deliver critical technology to NASA, its partners and future instrument principal investigators," said Don Figer, director of RIT's Center for Detectors, the Future Photon Initiative and principal investigator for the grant. "The technology will have a significant impact for NASA space missions and ground-based facilities. Our detectors will provide several important benefits, including photon counting capability, large formats, relative immunity to radiation, low power dissipation, low noise radiation and pickup, lower mass and more robust electronics."

The project's co-investigators include RIT Assistant Professor Michael Zemcov and Dartmouth Professor Eric R. Fossum. Fossum has focused on inventing the QIS technology while RIT is leading application-specific development that leverages their expertise in astrophysics.

"We're excited for this collaboration with RIT to build upon Dartmouth's proof-of-concept QIS technology to research and develop instrument-grade sensors that can detect single photons in the dimmest possible light," Fossum said. "This has tremendous implications for astrophysics and enables NASA scientists to collect light from extremely distance objects."

The researchers will develop the technology over the next two years. The Center for Detectors will publish results, reports and data processing and analysis software on their website at http://ridl.cfd.rit.edu.

Go to the original article...

CCD vs CMOS in Display QC Application

Image Sensors World        Go to the original article...

Radiant Vision, a Konica Minolta company, publishes an interesting comparison of CCD and CMOS cameras in display quality control applications:

Go to the original article...

Harvest Imaging Forum is 75% Full

Image Sensors World        Go to the original article...

The Harvest Imaging Forum, to be held in December 2019 in Delft, the Netherlands, is quickly approaching fully booked status. More than 75% of the seats have been sold. The Forum topics this year are:

  • "On-Chip Feature Extraction for Range-Finding and Recognition Applications" by Makoto IKEDA (Tokyo University, Japan)
  • "Direct ToF 3D Imaging : from the Basics to the System" by Matteo PERENZONI (FBK, Trento, Italy)

Go to the original article...

Image Sensors for Machine Vision

Image Sensors World        Go to the original article...

ON Semi publishes a webinar "The Current State of Machine Vision Technology: Image Sensor Challenges and Selection."



BusinessWire: ON Semi also announces a 0.3MP machine vision sensor with 2.2um BSI pixels, the 1/10-inch ARX3A0. The new sensor has 1:1 aspect ratio and features ON Semiconductor’s NIR+ technology.

The 560 x 560 pixel sensor can operate at up to 360fps. It consumes less than 19 mW when capturing images at 30 fps, and 2.5 mW when capturing at 1 fps.
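
A quick arithmetic note on the quoted power figures (the per-frame interpretation is mine, not ON Semi's): dividing average power by frame rate gives the implied energy per captured frame:

  # Implied energy per captured frame: power (mW = mJ/s) divided by fps.
  for power_mw, fps in [(19.0, 30), (2.5, 1)]:
      print(f"{power_mw} mW at {fps} fps -> {power_mw / fps:.2f} mJ/frame")
  # 19.0 mW at 30 fps -> 0.63 mJ/frame
  # 2.5 mW at 1 fps  -> 2.50 mJ/frame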

Gianluca Colli, VP and GM, Consumer Solution Division of Image Sensor Group at ON Semiconductor said: “As we approach an era where Artificial Intelligence (AI) is becoming an integral part of vision-based systems, it becomes clear that we now share this world with a new kind of intelligence. The ARX3A0 has been designed for that new breed of machine, where vision is as integral to their operation as it is ours.”

Go to the original article...

MCT and Microbolometric Imagers in China

Image Sensors World        Go to the original article...

China has made a lot of advances in cooled MCT and microbolometric imagers, including resolutions up to 2.7K x 2.7K and pixel sizes down to 10um. Below are imagers from Norinco, CETC, iRay, GST, HikVision, and Dali presented at the CIOE Show held in Shenzhen, China, last week:

-Norinco picture removed due to the absence of publishing permission-


Thanks to AB for the info!

Go to the original article...

Sony Unveils 61MP Full-Frame and 26MP APS-C Sensors for Security Applications

Image Sensors World        Go to the original article...

Sony unveils 4 new sensors for security and surveillance applications: IMX415-AAMR, IMX455AQK-K, IMX533CQK-D, IMX571BQR-J

Go to the original article...

UBS: Galaxy S10 5G Cameras Cost $73

Image Sensors World        Go to the original article...

IFNews: According to UBS report, Samsung Galaxy S10 5G cameras, including ToF ones, cost $73. The cameras are the 2nd most expensive component after the display:

Front cameras:
  • Selfie Camera
  • ToF Depth Camera

Rear cameras:
  • Telephoto Camera
  • Wide-angle Camera
  • Ultra Wide Camera
  • ToF Depth Camera


Go to the original article...

Huawei Kirin 990 5G Camera Features

Image Sensors World        Go to the original article...

HuaweiCentral: Huawei presents its new mobile processor Kirin 990 5G at IFA 2019 in Berlin, Germany. One of its most impressive imaging features is the AI-based ability to determine the heart rate and breath rate just from a selfie camera video stream:



Another impressive feature is a real-time video segmentation:



More pictures from the company's IFA presentation:

Go to the original article...

Sigma 50mm f1.4 Art review

Cameralabs        Go to the original article...

Sigma's 50mm f1.4 ART is a high-end standard lens with autofocus for Canon, Nikon, Leica-L or Sony mounts. We've completely updated our original review, retesting it at higher resolutions and comparing it to new prime lenses from a variety of manufacturers. Find out why it remains one of the best 50mm lenses around in our review!…

The post Sigma 50mm f1.4 Art review appeared first on Cameralabs.

Go to the original article...

DARPA Starts Curved IR Imagers Program

Image Sensors World        Go to the original article...

The DARPA FOcal arrays for Curved Infrared Imagers (FOCII) program has been created to expand upon the current commercial trend for curved visible sensor arrays by extending the capability to both large and medium format midwave (MWIR) and/or longwave (LWIR) infrared detectors. The program seeks to develop and demonstrate technologies for curving existing state-of-the-art large format, high performance IR FPAs to a small radius of curvature (ROC) to maximize performance, as well as curving smaller format FPAs to an extreme ROC to enable the smallest form factors possible while maintaining exquisite performance.

FOCII will address this challenge through two approaches to fabricating a curved FPA. The first involves curving existing state-of-the-art FPAs while keeping the underlying design intact. The focus of the research will be on achieving significant performance improvements over existing, flat FPAs, with a target radius of curvature of 70mm. The fundamental challenge researchers will work to address within this approach is mitigating the mechanical strain created by curving the FPA, particularly in silicon, which is very brittle.
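
For a feel of why the strain matters (a textbook thin-plate estimate, not DARPA's analysis, and the die thickness below is an assumed value): the surface bending strain of a die of thickness t curved to radius R is roughly t/(2R), so the 12.5mm ROC target of the second approach is several times more demanding than the 70mm one:

  def bending_strain(thickness_um, radius_mm):
      # First-order surface strain of a thin die bent to radius R: eps ~ t / (2R).
      return (thickness_um * 1e-6) / (2 * radius_mm * 1e-3)

  # Assuming a die thinned to ~50um (hypothetical value):
  print(bending_strain(50, 70))    # ~3.6e-4 at the 70mm target
  print(bending_strain(50, 12.5))  # ~2.0e-3 at the 12.5mm target, ~6x higher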

The second approach will focus on achieving an extreme ROC of 12.5 mm to enable a transformative reduction in the size and weight compared to current imagers. Unlike the first approach, researchers will explore possible modifications to the underlying design, including physical modifications to the silicon that could relieve or eliminate stress on the material and allow for creating the desired curvature in a smaller sized FPA. This approach will also require new methods to counter the effects of any modifications during image reconstruction in the underlying ROIC algorithm.


Thanks to TL for the link!

Go to the original article...

LiDAR News: Lumotive, LeiShen, CoreDAR, Hitachi

Image Sensors World        Go to the original article...

GlobeNewswire: Lumotive, a Bill Gates-funded LiDAR startup, used Himax’s LCOS display with Lumotive’s patented Liquid Crystal Metasurfaces (LCMs) to improve the performance and reliability and reduce the cost of LiDAR systems. Other LiDAR sensors utilize MEMS mirrors or optical phased arrays. However, both of these approaches lack performance due to the small optical aperture of MEMS mirrors and the low efficiency of phased arrays. In a first for LiDAR, Lumotive leverages Himax’s unique, tailor-made LCOS process to convert semiconductor chips into dynamic displays that steer laser pulses based on the light-bending principles of metamaterials.

Lumotive’s LiDAR systems offer performance advantages, including a combination of:
  • Large optical aperture (25 x 25 mm) which delivers long range
  • 120-degree FoV with high angular resolution
  • Fast, random-access beam steering


Leishen Intelligent System presents its broad range of low-cost LiDARs. The automotive-grade hybrid LiDAR CH16 3D is priced at $599 in quantities of 10,000:


Update: LeiShen kindly sent me their price list for small quantity purchases:


CoreDAR presents its tiny LiDAR concept:



Hitachi presents its view on LiDAR's role in smart city applications:

Go to the original article...

Samsung Exynos 980 Supports 108MP Camera

Image Sensors World        Go to the original article...

Samsung 5G 8nm Exynos 980 mobile processor supports up to 108MP camera:

"For advanced photography, the Exynos 980 delivers compelling camera performances with resolution support for up to 108-megapixels (Mp). The advanced image signal processor (ISP) supports up to five individual sensors and is able to process three concurrently for richer multi-camera experiences. Along with the NPU, the AI-powered camera is able to detect and understand scenes or objects, according to which the camera will then make optimal adjustments to its settings.

For an immersive multimedia experience, the Exynos 980’s multi-format codec (MFC) supports encoding and decoding of 4K UHD video at 120 frames per second (fps). HDR10+ support with dynamic mapping also offers more detailed and illuminant colors in video content.
"

Go to the original article...

sCMOS Sensors: Fairchild Imaging vs GPixel

Image Sensors World        Go to the original article...

Arxiv.org paper "Evaluation of scientific CMOS sensors for sky survey applications" by S. Karpov, A. Bajat, A. Christov, and M. Prouza from the Czech Academy of Sciences compares Andor cameras based on the Fairchild Imaging CIS2051 (Neo camera) and GPixel GSense400BSI (Marana camera) sCMOS sensors:

Scientific CMOS image sensors are a modern alternative to typical CCD detectors, as they offer low read-out noise, a large sensitive area, and high frame rates. All this makes them promising devices for modern wide-field sky surveys. However, the peculiarities of CMOS technology have to be properly taken into account when analyzing the data. In order to characterize these, we performed extensive laboratory testing of the Andor Marana sCMOS camera. Here we report its results, especially on temporal stability and linearity, and compare it to previous versions of Andor sCMOS cameras. We also present the results of on-sky testing of this sensor connected to a wide-field lens, and discuss its applications for astronomical sky surveys.
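
As an illustration of one common way linearity is quantified in this kind of sensor evaluation (a generic sketch, not the paper's actual pipeline, and the flat-field numbers below are made up): fit the mean flat-field signal against exposure time with a straight line and report the worst residual as a fraction of the maximum signal:

  import numpy as np

  def residual_nonlinearity(exposure_s, mean_signal_adu):
      # Fit mean flat-field signal vs exposure with a straight line and
      # report the worst residual as a fraction of the maximum signal.
      slope, offset = np.polyfit(exposure_s, mean_signal_adu, 1)
      fit = slope * exposure_s + offset
      return np.max(np.abs(mean_signal_adu - fit)) / np.max(mean_signal_adu)

  # Hypothetical flat-field series (exposure in seconds, mean signal in ADU):
  t = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 4.0])
  s = np.array([410, 820, 2040, 4100, 8150, 16200])
  print(f"residual nonlinearity ~ {100 * residual_nonlinearity(t, s):.2f} %")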

Go to the original article...

e2v Announces Fast Sensors

Image Sensors World        Go to the original article...

GlobeNewswire: Teledyne e2v announces its Flash CMOS sensor family, tailored for 3D laser profiling/displacement applications and high speed, high resolution inspection.

The new Flash sensors feature a 6μm CMOS global shutter pixel which effectively combines high resolution and fast frame rate. They are available in 4k or 2k horizontal resolutions, with respective frame rates of 1800fps and 1500fps (8 bits), and respective readout speeds of 61.4Gbps and 25.6Gbps (said to be the best Gbps/price ratio in the market). The sensors come in a µPGA ceramic package fitting standard optical formats: APS-like optics for the 4k and C-mount for the 2k.
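
A quick sanity check on the quoted readout figures: throughput is roughly width x height x frame rate x bit depth. The vertical resolutions below are assumptions (the post only gives the horizontal resolution), but they put the quoted numbers in the right ballpark:

  def readout_gbps(width, height, fps, bits):
      # Raw pixel throughput in Gbit/s.
      return width * height * fps * bits / 1e9

  # Assumed ~1k lines per sensor (the post does not state vertical resolution):
  print(readout_gbps(4096, 1040, 1800, 8))  # ~61.3 Gbps vs the quoted 61.4 Gbps
  print(readout_gbps(2048, 1040, 1500, 8))  # ~25.6 Gbps vs the quoted 25.6 Gbps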

Yoann Lochardet, Marketing Manager for 3D at Teledyne e2v said, “We are very pleased to announce the release of the new Flash family of CMOS sensors, which were developed after listening closely to the requirements of leading companies in the market. These new sensors feature a unique set of characteristics targeted at 3D laser triangulation applications including high resolution, very high frame rate, very high readout speed, HDR capability and a large set of additional features. All these capabilities allow our customers to solve the most challenging application demands in 3D laser profiling/displacement, such as quality control and 3D measurement.”

Evaluation Kits and samples of Flash 2K and Flash 4K are now available.

Go to the original article...
