Archives for April 2019

Ambarella Processor for Security Cameras Promises 2-5 Year Battery Life

Image Sensors World        Go to the original article...

BusinessWire: Ambarella introduces the S6LM camera SoC for both professional and home security cameras. The S6LM includes Ambarella’s latest HDR and low-light processing technology, 4K H.264 and H.265 encoding, multi-streaming, on-chip 360-degree dewarping, cyber-security features, and a quad-core Arm CPU. Fabricated in a 10nm process, the SoC offers very low-power operation, making it well suited to small-form-factor and battery-powered designs.

An S6LM-based battery-powered camera or PIR video camera can shut down in less than one second when something such as an animal, shadow, or rain causes a false alert, effectively extending the camera’s battery life to between 2 and 5 years.
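The claimed battery life comes down to duty-cycling arithmetic: the camera sleeps at micro-amp levels almost all the time and only wakes for genuine events. A minimal sketch of that estimate, with every number (battery capacity, sleep and recording currents, event rates) chosen purely for illustration rather than taken from Ambarella:

```python
# Illustrative duty-cycle battery-life estimate for an event-triggered camera.
# Every number here is an assumption for the sketch, not an Ambarella figure.

def battery_life_years(capacity_mah, idle_ma, active_ma,
                       events_per_day, seconds_per_event):
    """Battery life when the camera sleeps except for short recording bursts."""
    active_hours_per_day = events_per_day * seconds_per_event / 3600.0
    avg_ma = idle_ma + active_ma * active_hours_per_day / 24.0
    return capacity_mah / avg_ma / 24.0 / 365.0

# ~3000 mAh of lithium cells, 30 uA sleep, 150 mA while recording,
# three 15-second genuine events per day.
print(round(battery_life_years(3000, 0.03, 150, 3, 15), 1))  # -> 3.2
```

With these assumptions the amortized recording bursts dominate the average draw, which is why shutting down within a second on a false alert matters: every avoided second of recording per day directly extends the total life.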

Qualcomm Enhances Camera AI Capabilities

The Qualcomm Snapdragon 665 SoC is said to improve its AI capabilities over the previous generation:

"Snapdragon 665 is loaded with advanced AI capabilities to enhance your daily life. Powered by our third generation Qualcomm AI Engine, Hexagon 686 DSP, and Hexagon Vector eXtensions (HVX) for advanced on-device imaging and computing, you can enjoy features like AR Translate that instantly translates words in multiple languages. This latest platform also performs smart biometrics for enhanced security with features like 3D Face Unlock. Overall, these leading on-device AI features are 2X faster than the previous generation mobile platform, the Snapdragon 660.

Some of the Snapdragon 665’s most exciting AI features are related to the camera, opening up the possibilities for brilliant new capture capabilities. Take better shots thanks to object detection, auto scene detect, and smart cropping. Additionally, portrait mode, low-light night mode, and super resolution are designed to ensure you can capture the detail you want up close, at night, and in a multitude of different settings."




Snapdragon 730 and 730G SoCs feature the company's 4th generation AI processor:

"AI: Packing 2x the power of its predecessor, Qualcomm Technologies’ 4th generation multi-core Qualcomm® AI Engine accelerates intuitive on-device interactions for camera, gaming, voice and security. The Qualcomm® Hexagon™ 688 Processor inside Snapdragon 730 supports improved base scalar and Hexagon Vector eXtensions (HVX) performance, as well as the new Hexagon Tensor Accelerator—now adding dedicated AI processing into the Hexagon Processor. The combination of these provides a powerful blend of dedicated and programmable AI acceleration now in the 7 series.

Camera: For the first time in the 7 series, the Snapdragon 730 features the Qualcomm Spectra™ 350, featuring a dedicated Computer Vision (CV) ISP, to provide up to 4x overall power savings in Computer Vision compared to the previous generation. The lower power and faster CV can capture 4K HDR videos in Portrait Mode (Bokeh). The CV-ISP is also capable of high resolution depth sensing and the ability to support triple cameras that feature ultra-wide, portrait and telephoto lenses. It also captures photos and videos in the HEIF format so users can document life from multiple angles and store it all at half the file size to the previous generation."


Qualcomm's AI Day video shows the broad capabilities that the company expects to bring to the market:

Fraunhofer CSPAD-based LiDAR

Fraunhofer IMS shows its CSPAD-based flash LiDAR camera. CSPAD detectors are CMOS integrated SPADs with on-chip readout circuits. The implementation in a standard CMOS process allows cost efficient manufacturing and the design of compact sensors for applications that require high resolution imagers.


9th Fraunhofer IMS Workshop on CMOS Imaging

The 9th Fraunhofer IMS Workshop on CMOS Imaging, to be held in Duisburg, Germany on May 7-8, 2019, has published its agenda:

"After a series of very successful workshops since 2002 we are happy to announce our 9th workshop on CMOS Imaging, a forum for the European industry and academia to meet and exchange the latest developments in CMOS based imaging technology. 15 presentations of excellent speakers stand for the high quality level of the event.

This year’s key topics are 3D imaging and LiDAR technologies, detectors for space, quantum imaging, and new trends in CMOS imaging, among others."

  • Flash LiDAR with CSPAD Arrays, Jennifer Ruskowski, Fraunhofer IMS
  • Components for LiDAR in Industrial and Automotive Applications, Winfried Reeb & Jeff Britton, Laser Components
  • Scanning Solid State LiDAR, Michael Kiehn, IBEO Automotive
  • LiDAR Sensors for ADAS and AD, Alexis Debray, Yole Développement SA
  • LiDAR Receivers for Automotive Applications, Marc Schillgalies, First Sensor AG
  • CMOS SPAD Array for Flash LiDAR, Ralf Kühnold, ELMOS AG
  • Advanced optical inspection with in-line computational Imaging, Ernst Bodenstorfer, AIT Austrian Institute of Technology GmbH
  • Backside Illumination Technology for CMOS Imagers, Stefan Dreiner, Fraunhofer IMS
  • Datasheets and Real Performance of CMOS Image Sensors, Albert Theuwissen, Harvest Imaging
  • CMOS SPAD Arrays for Fundamental Research, Peter Fischer, Universität Heidelberg
  • Optical Imaging based on Quantum Technologies, Nils Trautmann, Carl Zeiss AG
  • Ghost Imaging Using Entangled Photons, Dominik Walter, Fraunhofer IOSB
  • ISS Rendezvous and Beyond – LiDAR Sensors in Space, Jakub Bikowski, Jena-Optronik GmbH
  • Challenges for Optical Detectors in Space, Dirk Viehmann, Airbus D+S
  • CMOS TDI Detector for Earth Observation, Stefan Gläsener, Fraunhofer IMS
  • Optional: Visit of Fraunhofer Wafer Fab

OmniVision Announces Industry’s Smallest Cabin-Monitoring Automotive Image Sensor

PRNewswire: OmniVision announces the OV2778 automotive image sensor, which is said to provide the best value of any 2MP RGB-IR sensor for cabin- and occupant-monitoring, such as detecting packages and unattended children. The OV2778 comes in the smallest package available for the automotive in-cabin market segment — a 6.5 x 5.7mm automotive CSP. It also offers advanced ASIL functional safety, which is important for in-cabin applications when the OV2778 is being integrated as part of an ADAS system.

“Demand for cabin and occupant monitoring is accelerating growth in the global automotive image sensor market,” said Thilo Rausch, product marketing manager at OmniVision. “Our new OV2778 image sensor enables these applications in mainstream vehicles by providing the best value with high sensitivity across all lighting conditions.”

The OV2778 is built on 2.8um OmniBSI-2 Deep Well pixel technology, which delivers a 16-bit linear output from a single exposure. With a second exposure, the DR increases to 120dB. Additionally, with an integrated RGB-IR 4x4-pattern color filter and external frame synchronization capability, the OV2778 yields top performance across varying lighting conditions.
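As a back-of-envelope check on those figures: a 16-bit linear output by itself spans about 96 dB, so reaching 120dB implies the second exposure adds roughly 24 dB, i.e. an exposure ratio in the mid-teens. A sketch of the arithmetic (the inferred ratio is our own estimate, not an OmniVision number):

```python
import math

# Back-of-envelope dynamic-range arithmetic for staggered-exposure HDR.
# The 16-bit and 120 dB figures come from the announcement; the implied
# exposure ratio is our own inference, not an OmniVision specification.

def dr_db(levels):
    """Dynamic range in dB spanned by `levels` distinguishable signal levels."""
    return 20 * math.log10(levels)

single = dr_db(2 ** 16)                   # one 16-bit linear exposure
extra = 120.0 - single                    # headroom the short exposure must add
ratio = 10 ** (extra / 20)                # implied long/short exposure ratio
print(round(single, 1), round(ratio, 1))  # -> 96.3 15.3
```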

This image sensor is AEC-Q100 Grade 2 certified for automotive applications. OV2778 samples are available now, along with a plug-and-play automotive reference design system that can be connected to any vehicle for rapid development.

More Pictures from Huawei RYYB Sensor Presentation

A few more photos from the Huawei P30 and P30 Pro presentation on the RYYB CFA have been published on Twitter:


IFNews quotes Cowen Research comparing camera BOM in flagship smartphones. Huawei invests the most in its camera:

aiCTX Neuromorphic CNN Processor for Event-Driven Sensors

Swiss startup aiCTX announces a fully-asynchronous event-driven neuromorphic AI processor for low power, always-on, real-time applications. DynapCNN opens new possibilities for dynamic vision processing, bringing event-based vision applications to power-constrained devices for the first time.

DynapCNN is a 12mm^2 chip, fabricated in 22nm technology, housing over 1 million spiking neurons and 4 million programmable parameters, with a scalable architecture optimally suited for implementing Convolutional Neural Networks. It is a first-of-its-kind ASIC that brings together the power of machine learning and the efficiency of event-driven neuromorphic computation in one device. DynapCNN is the most direct and power-efficient way of processing data generated by Event-Based and Dynamic Vision Sensors.

As a next-generation vision processing solution, DynapCNN is said to be 100–1000 times more power efficient than the state of the art, and delivers 10 times shorter latencies in real-time vision processing. Based on fully-asynchronous digital logic, the event-driven design of DynapCNN, together with custom IPs from aiCTX, allow it to perform ultra-low-power AI processing.

For real-time vision processing, almost all applications are movement-driven tasks (for example, gesture recognition, face detection/recognition, presence detection, and movement tracking/recognition). Conventional image processing systems analyse video data on a frame-by-frame basis. “Even if nothing is changing in front of the camera, computation is performed on every frame,” explains Ning Qiao, CEO of aiCTX. “Unlike conventional frame-based approaches, our system delivers always-on vision processing with close to zero power consumption if there is no change in the picture. Any movement in the scene is processed using the sparse computing capabilities of the chip, which further reduces the dynamic power requirements.”

Those savings in energy mean that applications based on DynapCNN can be always-on, and crunch data locally on battery powered, portable devices. “This is something that is just not possible using standard approaches like traditional deep learning ASICs,” adds Qiao.

Computation in DynapCNN is triggered directly by changes in the visual scene, without using a high-speed clock. Moving objects give rise to sequences of events, which are processed immediately by the processor. Since there is no notion of frames, DynapCNN’s continuous computation enables ultra-low-latency of below 5ms. This represents at least a 10x improvement from the current deep learning solutions available in the market for real-time vision processing.
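The sparsity argument behind these numbers is easy to illustrate: a frame-based pipeline touches every pixel of every frame, while an event-driven one only touches pixels whose brightness changed. A toy sketch (scene statistics and thresholds are invented for illustration, not taken from aiCTX):

```python
import numpy as np

# Toy frame-based vs event-driven workload comparison: count the pixel
# operations each approach performs on a mostly-static scene where ~1% of
# pixels change per frame. All parameters are invented for illustration.

rng = np.random.default_rng(0)
h, w, n_frames = 240, 320, 100
prev = rng.random((h, w))

frame_ops = 0   # frame-based: process every pixel of every frame
event_ops = 0   # event-driven: process only pixels that changed
for _ in range(n_frames):
    cur = prev.copy()
    idx = rng.integers(0, h * w, size=h * w // 100)  # sparse scene change
    cur.flat[idx] += 0.5
    frame_ops += h * w
    event_ops += int((np.abs(cur - prev) > 0.1).sum())
    prev = cur

print(frame_ops // event_ops)  # roughly 100x fewer operations
```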

Sadique Sheik, a senior R&D engineer at aiCTX, explains why having their processors do the computation locally would be a cost- and energy-efficient solution, and would bring additional privacy benefits. “Providing IoT devices with local AI allows us to eliminate the energy used to send heavy sensory data to the cloud for processing. Since our chips do all that processing locally, there’s no need to send the video off the device. This is a strong move towards providing privacy and data protection for the end user.”

DynapCNN Development Kits will be available in Q3 2019.

ULIS Releases World’s Smallest 60 Hz VGA 12um Thermal Image Sensor

ALA News: ULIS launches Atto640, a 60fps VGA thermal image sensor with a 12um pixel that reduces overall camera size and cost. The target market is commercial and defense applications, such as Thermal Weapon Sights (TWS), surveillance and handheld thermography cameras, as well as Personal Vision Systems (PVS), including portable monoculars and binoculars for consumer outdoor leisure, law enforcement and border control.

ULIS is adding a VGA format to its existing QVGA Atto320 to give camera manufacturers more choice in its 12 µm product range. The appeal for camera makers is that, compared to 17 µm pixel-pitch technology, the 12 µm pitch enables smaller and lower-cost optics.

Atto640 achieves its size advantage over competing models through its Wafer Level Packaging (WLP) technology, in which the detector window is directly bonded to the wafer, a technique enabling a significant reduction in the sensor’s overall dimensions. Atto640’s footprint is half the size of ULIS’ Pico640-046 (17µm) model. Since Atto640 is designed with WLP, a batch-processing technique, it is suited to high-volume production.

Samples of Atto640 are currently available, with production ramp-up slated for the end of 2019. ULIS intends to further extend its 12µm product line up with larger resolution sensors.



FBK Talks about Entangled Light Super-Resolution Microscopy

FBK publishes a video on SUPERTWIN project - the European entangled light super-resolution microscopy program:

SmartSens Unveils DSI Pixel Technology

PRNewswire: The DSI pixel is SmartSens’ next-generation sensor technology, offering better performance, faster time to market and higher cost-effectiveness than its previous technologies. The DSI pixel technology integrates SmartSens’ design, pixel and process knowledge into the foundry service of DB HiTek (Dongbu).

The DSI pixel is said to surpass both FSI and BSI in terms of performance. Compared to current SmartPixel FSI performance, the DSI pixel improves sensitivity by 2x and reduces dark current by 5x. When compared to another vendor’s BSI sensor, SmartSens’ DSI technology offers enhanced SNR1 and read-noise performance.

"With the rapid rise of IoT and AIoT, the market is demanding high-performance image sensors at a low cost of production and fast time to market," said William Ma, COO of SmartSens. "The SmartSens DSI technology goes beyond FSI and BSI technologies paving the way for unique technological advances in image recognition."

Sony Polarsens Videos

Sony publishes new videos explaining its polarization image sensors and showing some example pictures, such as Paris in polarized light:



Vision System Design 2019 Innovators Awards

VisionSystemDesign: The OmniVision OS02C10 1080p HDR CMOS sensor won Vision Systems Design's Silver Innovator's Award. The OS02C10 has a 2.9 µm pixel with QE of 60% at 850 nm and 40% at 940 nm. The sensor combines OmniVision’s ultra-low light (ULL) and Nyxel near-infrared (NIR) technologies to enable nighttime camera performance.


The Sony Image Sensing Solutions XCG-CP510 polarized camera and SDK won a Gold Award. In 2018, Sony Europe’s Image Sensing Solutions division launched the XCG-CP510, which uses Sony’s IMX250MZR sensor with on-chip polarization filters. In addition, Sony launched an SDK, which provides a dedicated image processing library to speed solution development, as well as numerous functions, such as stress measurement and glare reduction, and support functions such as demosaicing and raw extraction.

The LUCID Vision Labs Helios ToF 3D camera won a Gold Award too. The camera is based on Sony’s DepthSense IMX556PLR BSI ToF image sensor with high NIR sensitivity, 10μm pixel size and high modulation contrast ratio. The camera can produce depth data at 60 fps with 640×480 resolution over a PoE Gigabit Ethernet interface. The camera has a precision of 2.5mm at 1m and 4.5mm at 2m.
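For context on how such a continuous-wave ToF camera turns pixel measurements into millimetre-scale depth: the distance follows from the phase shift of the modulated illumination, d = c·φ/(4π·f_mod). A sketch of that relation (the 100 MHz modulation frequency is an assumed example, not a published IMX556PLR parameter):

```python
import math

# Continuous-wave indirect ToF: distance from the phase shift of modulated
# light, d = c * phi / (4 * pi * f_mod). The 100 MHz modulation frequency
# is an assumed example, not a published IMX556PLR parameter.

C = 299_792_458.0  # speed of light, m/s

def depth_m(phase_rad, f_mod_hz):
    return C * phase_rad / (4 * math.pi * f_mod_hz)

def unambiguous_range_m(f_mod_hz):
    return C / (2 * f_mod_hz)  # the phase wraps every 2*pi

f_mod = 100e6
print(round(depth_m(math.pi, f_mod), 3))     # -> 0.749 (half the max range)
print(round(unambiguous_range_m(f_mod), 3))  # -> 1.499
```

Higher modulation frequencies improve depth resolution but shrink the unambiguous range, which is why such cameras often combine several frequencies.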


The Photoneo MotionCam-3D won the Platinum Award for its "Parallel Structured Light Technology." The technology lets users capture high resolution images of moving objects at a maximum speed of 40 m/second. The camera also features a custom CMOS image sensor and can acquire 1068 x 800 point clouds at up to 60 fps. Additionally, the 3D camera features an NVIDIA Maxwell GPU and a recommended scanning distance of 350 to 2000 mm.

Sony vs Canon 135mm – can a 23 year-old lens really compete

Cameralabs        Go to the original article...

Owners of the Canon EF 135mm f2L USM consider it a legend, but can a 23 year old lens really compete with a modern design? Ben Harvey pitches the blisteringly-sharp Sony FE 135mm f1.8 GM against his beloved Canon to find out.…

The post Sony vs Canon 135mm – can a 23 year-old lens really compete appeared first on Cameralabs.

Framos on Industrial and Machine Vision Market Trends

Framos presents the results of its study of Industrial Cameras and Vision Systems Market and Trends:



Polarization Imaging Use Cases

Lucid Vision Labs presents its Sony sensor-based polarization camera use cases:



Fraunhofer IIS talks about polarization imaging applications too:

Image Processing News

Synopsys demos vision functions of its ARC EV6x Embedded Vision Processor IP:



Photron explains operation of its 6D marker and the accompanying software:



Light.co gives some details on its Light ASIC that does almost everything but image stabilization:

"The Light ASIC is a dedicated chip that can control and transfer image data for up to six cameras simultaneously. On popular SoC platforms up to four Light ASICs can be used to coordinate as many as 24 cameras. When necessary, multiple Light ASICs can be interfaced to one another to allow even larger camera arrays. This chip was built specifically for computational imaging applications in everything from mobile phones, to security systems, to automotive systems.

The Light ASIC is small and incredibly efficient. The Package-on-Package chip is only 14 millimeters square, and designed for its memory to be stacked thereby saving valuable board space. The ASIC is also built for efficiency. It actively manages power consumption for active, preview, and standby modes, optimizing your device’s battery usage and thermal profile.

The Light ASIC and camera array work with the latest chipsets including the Qualcomm Snapdragon series, and multiple peripherals such as LED flashes, Time-of-Flight sensors, and Inertial Measurement Units. The Light ASIC independently coordinates control of all camera modules, simultaneously. It can achieve focus for all modules at a given region-of-interest, adjust exposure levels per aperture, while calculating white balance, all using Light proprietary calibration."


Techcrunch: Elon Musk explains the functions of the camera that looks into Tesla’s Model 3 cabin from its rear-view mirror: Dog mode and Sentry mode. In Dog mode, the camera recognizes a dog left unattended in the car and adjusts the air conditioning to keep the cabin at a comfortable temperature. In Sentry mode, the car uses its cameras to guard itself, monitoring for any suspicious activity.

MIPI Test Board for Legacy ATE

PRWeb: Introspect Technology releases its SV4D Direct Attach MIPI Test Module that enables at-speed production testing for MIPI C-PHY or D-PHY transmitter or receiver interfaces.

“Whereas we could use conventional ATE for DC parametric testing and a loop-back methodology for high-speed testing on our standard SerDes interfaces, we could not find a solution that could provide the necessary fault coverage for the MIPI ports on our devices,” said Ibrahim Aljabiri, Sr. Manager, Product & Test Engineering, Synaptics. “The SV4D’s strong MIPI features, high operating speed, and compact size allowed us to deploy a high-parallelism multi-site solution on our existing ATE.”

Mohamed Hafed, CEO of Introspect Technology, explains, “We found that product engineers all over the world were looking to mimic system-level functionality as much as possible during wafer sort and final test. So, we set out to create a production test module that leveraged our unique monolithic MIPI physical layers to deliver exactly that. Not only is the SV4D able to perform structural testing using abbreviated device test modes, but it is also able to completely exercise the link and software layers of devices under test.”


Doppler LiDAR with Regular CMOS Sensor

The Arxiv.org paper "A Time-of-Flight Imaging System Based on Resonant Photoelastic Modulation" by Okan Atalar, Raphaël Van Laer, Christopher J. Sarabalis, Amir H. Safavi-Naeini, and Amin Arbabian from Stanford University proposes a Doppler LiDAR based on a regular CMOS sensor:

"To realize this system, a new device, a free-space optical mixer, is designed and fabricated. A scene is illuminated (flashed) with a megahertz level amplitude modulated light source and the reflected light from the scene is collected by a receiver. The receiver consists of the free-space optical mixer, comprising a photoelastic modulator sandwiched between polarizers, placed in front of a standard CMOS image sensor. This free-space optical mixer downconverts the megahertz level amplitude modulation frequencies into the temporal bandwidth of the image sensor. A full scale extension of the demonstrated system will be able to measure phases and Doppler shifts for the beat tones and use signal processing techniques to estimate the distance and velocity of each point in the illuminated scene with high accuracy."

Panasonic Lumix G90 G95 review

The Panasonic Lumix G90 / G95 is a mid-range mirrorless camera based on the Micro Four Thirds standard, with a 20 Megapixel sensor, built-in stabilisation, viewfinder, fully-articulated touchscreen, and unlimited 4k recording. Find out how it compares to its rivals in my in-depth review!…

1mW Always-On Imaging

1mW always-on imaging has become quite a popular topic. The TinyML Summit, held in Sunnyvale, CA on March 20-21, had a number of presentations on it.

Pixart presents its approach to low-power CIS:


Qualcomm presents its view on "Ultra-low Power Always-On Computer Vision:"

"The CVM is built on a custom ASIC, which is a 28nm ultra-low-power ARM-based SoC featuring a control processor, a DSP-like hardware accelerator, a dedicated vision processor, and embedded PMU. It also incorporates a lower-power QVGA CMOS grayscale image sensor and a custom-designed wide field-of-view lens. The image sensor is sensitive to near-IR wavelengths, and can be used for low-light scenarios with IR illumination. The entire CVM, including the image sensor and the ASIC, consumes less than 1 mW power while actively performing computer vision tasks such as object detection."

Automotive LiDARs in China

ResearchInChina publishes a report, "ADAS and Autonomous Driving Industry Chain Report, 2018-2019 – Automotive Lidar." A few interesting quotes, including Velodyne LiDAR wholesale prices going down to $150:

"In the markets where Chinese companies master core technologies, price of products is bound to plummet. Take IPG for example, its 20W fiber lasers were priced at over RMB150,000 per unit in 2010, compared with current quote at RMB8,800 from the peer -- Shenzhen REEKO Information Technology Co., Ltd.. Maxphotonics Co., Ltd. and Shenzhen JPT Opto-electronics Co., Ltd. are another two rivals in the fiber laser price war.

Similar stories echo in the LiDAR market, where price competition picks up in 2019 as Hu Xiaobo, a founder of Maxphotonics Co., Ltd., ventures into the LiDAR field for a new undertaking.

Velodyne’s new factory in San Jose, which is already operational, can produce as many as 1 million units a year. If it acquires orders for 100,000 units, Velodyne will cut the price of its VLS 128-channel products to less than $1,000, and that of the VLS 32 to roughly $650, let alone $500 for the mass-produced 32-channel Velarray solid-state LiDAR and $150 for 8-channel ones.

It is clear that LiDAR price may be 10 times lower than what it is now, and the reduction hinges on how many are demanded.

Comparing with the previous year, Chinese LiDAR vendors have come a long way in factory construction, mass production, shipment, financing and other aspects.

In 2018, Hesai Tech announced to close Series B funding rounds of RMB250 million, with its automotive LiDAR sales only second to Velodyne’s.

RoboSense raised RMB300 million from investors like Cainiao, SAIC and BAIC. Its shipments of 16/32-channel mechanical LiDARs boomed in 2018. The vendor also acquired a MEMS micromirror firm in the year.

Although the automotive market is “wintering”, the financing story in the LiDAR industry still goes on."

LFoundry Changes Hands Again

LFoundry and SMIC announce that they have entered into a binding agreement to sell LFoundry to Jiangsu CAS-IGBT Technology Co., Ltd. The transaction also includes LFoundry and SMIC groups in Bulgaria.

Jiangsu CAS-IGBT Technology Co., Ltd. is a group focusing on the research, design and development of new power and electronic chips such as IGBT (Insulated Gate Bipolar Transistor) and FRD (Fast Recovery Diode).

"We are setting the stage for a new era and we are satisfied with it," said Sergio Galbiati and Guenther Ernst, respectively Vice-Chairman and CEO of LFoundry. “The technological and production capacity of the Avezzano plant (specially focused on the automotive sector, but also on security and industrial field with applications such as CMOS image sensors, smart power, embedded memory and others) will provide Jiangsu CAS-IGBT a unique platform from which to grow existing and new Lines of Businesses that will allow for the potential of a brighter future in Avezzano by serving a more diverse set of applications."

The HK stock exchange document filed by SMIC says: "The Consideration is USD112,816,089, which was determined after arm’s length negotiation between the Vendor and the Purchaser by reference to fair value of LFoundry per the Company internal analysis and research, including the investment costs of a newly set up 200mm wafer fabrication facility, valuation of the property, plant and equipment and the market value of other 200mm wafer fabrication facility. The Directors consider that the Consideration is fair and reasonable and in the interest of the Company and its shareholders as a whole.

In accordance with the International Financial Reporting Standards, the net loss before or after taxation (unaudited) of the Target Group for the financial year ended 31 December 2018 and the financial year ended 31 December 2017 were USD8.1 million and USD14.9 million, respectively.

The unaudited total asset value of the Target Group as at 31 December 2018 was USD256.2 million."

The formal acquisition finalization is scheduled for the end of June.

Image Sensors Europe 2019 Notes

The Image Sensors Europe conference, held in London, UK on March 13-14, 2019, had a couple of interesting messages:

Mantis Vision reports that smartphone 3D cameras based on the structured-light approach have been largely rejected by the market due to the large display "notch" needed for the stereo base:


Amazon asks whether image sensors can be as power-efficient as audio sensors. For example, modern always-on audio solutions consume just 19uA while waiting for the wake-up phrase "OK Google":


Martin Wany shows that several key CMOSIS designers left the company after the AMS acquisition and started new companies:


NHK presented its Selenium-based image sensor:


ON Semi shows the capabilities of its AR0430 sensor with SuperDepth technology:


Sony quotes a few papers on the potential of DNNs to improve image quality:

Himax Presents 1mW Always-On Intelligent Camera

Globenewswire: Himax and its wholly-owned subsidiary Emza Visual Sense release their second-generation “WiseEye IoT” intelligent vision solution. Compared to first-generation solutions, WiseEye 2.0 is “IoT Ready,” adding a proprietary processor to Emza’s AI-based machine-learning computer vision algorithms and Himax’s low-power CMOS sensor. The new camera provides higher resolution and better efficiency with lower power consumption. These new developments enable the cost-effective addition of human presence detection and identification to next-generation consumer IoT devices in security systems, smart homes and buildings.

The key features of the WiseEye 2.0 IoT solution include:

  • Battery-powered human detection sensor: Designed with the combination of an ultra-low-power image sensor and energy efficient CV image processing algorithm, the battery-powered IoT visual sensor enables the always-on camera to wake up devices based on specific patterns or movements.
  • AI-based machine learning at the edge: Unique combination of ultra-low power consumption combined with AI-based machine learning, enables battery operated devices with advanced intelligence that were never previously available for smart home, security and consumer IoT applications.
  • No passive infrared (PIR) sensors required: Current PIR-based sensors used for low power motion detection have no intelligence and as a result deliver a costly level of false-positives. WiseEye 2.0 provides low power with high intelligence to significantly increase accuracy and decrease false alarms.
  • Pre-roll feature: The always-on camera stores all frames related to an alarm including footage from before the event occurred.
  • High accuracy human classification: With human recognition from up to 10 meters away, WiseEye 2.0 is significantly more accurate than first generation solutions.

“WiseEye 2.0 brings an enhanced user experience and better-informed decision-making based on minimal power and cost requirements. We plan to release the reference design in Q3 2019, including all components and functions for OEMs and ODMs to simplify integration of advanced vision functionality into their current and next generation IoT devices,” said Yoram Zylberberg, CEO of Emza Visual Sense.

"We are excited about WiseEye 2.0 and the level of integration we have achieved between the new HM0360 camera, algorithm and processor," said Amit Mittra, CTO of Himax Imaging. "The result is sub 1 mW always-on functionality, faster response times and power requirements 1-2 orders of magnitude lower than previous solutions. This is what our customers are specifying for their smart home/building, security, automotive, and consumer IoT applications."

Sony Rumored to Prepare 102MP Full-Frame Sensor Capable of 6K Video

SonyAlphaRumors publishes a rumor of a new Sony 102MP full-frame sensor capable of 6K video at 30fps:


"2.91um pixel architecture, 100MP @ 10fps, 6K video using 12bit ADC with on-chip binning/line-skipping. 4K RGB 4:4:4 video with on-chip colour-aware binning.

This 12288 x 8192 100MP sensor employs a unique, CFA-based column-parallel ADC design."

Event-based Face Detection

A neuromorphic vision paper claims that event-based sensors can detect faces with much lower power: "High Speed Event-based Face Detection and Tracking in the Blink of an Eye" by Gregor Lenz, Sio-Hoi Ieng, and Ryad Benosman.

"We present the first purely event-based method for face detection using the high temporal resolution of an event-based camera. We will rely on a new feature that has never been used for such a task that relies on detecting eye blinks. Eye blinks are a unique natural dynamic signature of human faces that is captured well by event-based sensors that rely on relative changes of luminance. Although an eye blink can be captured with conventional cameras, we will show that the dynamics of eye blinks combined with the fact that two eyes act simultaneously allows to derive a robust methodology for face detection at a low computational cost. We show that eye blinks have a unique temporal signature over time that can be easily detected by correlating the acquired local activity with a generic temporal model of eye blinks that has been generated from a wide population of users. We show that once the face is reliably detected it is possible to apply a probabilistic framework to track the spatial position of a face for each incoming event while updating the position of trackers. Results are shown for several indoor and outdoor experiments. We will also release an annotated data set that can be used for future work on the topic."
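The core idea — correlating local event activity against a generic temporal model of a blink — can be sketched as a sliding normalized cross-correlation. The template values below are invented for illustration, not taken from the paper's population-derived model:

```python
import numpy as np

def blink_score(activity, template):
    """Sliding normalized cross-correlation (Pearson r per window) of an
    event-rate signal against a generic blink template."""
    n = len(template)
    t = (template - template.mean()) / (template.std() + 1e-12)
    scores = np.empty(len(activity) - n + 1)
    for i in range(len(scores)):
        w = activity[i:i + n]
        w = (w - w.mean()) / (w.std() + 1e-12)
        scores[i] = float(np.dot(w, t)) / n
    return scores

# Toy event-rate signal: flat background with one blink-like burst.
template = np.array([0.0, 1.0, 3.0, 1.0, 0.0])  # hypothetical blink model
activity = np.array([0.0, 0.0, 0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0])
scores = blink_score(activity, template)
print(int(np.argmax(scores)))  # the burst aligns with the template at index 2
```

Because two eyes blink simultaneously, the method can additionally require two such peaks at a plausible inter-ocular distance, which is what makes the detection robust at low computational cost.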

Go to the original article...

SystemPlus on Mobile CIS Comparison

Image Sensors World        Go to the original article...

SystemPlus publishes "Mobile CMOS Image Sensor Comparison 2019:"

"Discover the comparative study to provide insights into the structure and technology of 28 CIS die in seven flagship smartphones from several major brands: the Apple iPhone X; Samsung Galaxy S9 Plus; Huawei P20 Pro; Huawei Mate 20 Pro; Xiaomi Mi8 Explorer Version; Oppo Find X; and Vivo X21UD.

The report has shown that the four manufacturers of CIS presented in the flagships, Sony, Samsung, Omnivision and STMicroelectronics, have totally different approaches. For example, Sony is the only manufacturer using hybrid bonding in the analyzed devices, having completely dropped fusion bonding with Through-Silicon Vias (TSVs). We have extracted further technical choices from the four players from the analysis and comparisons.
"

Comparison Omnivision-Samsung-Sony

Go to the original article...

Korean Companies Working to Expand CIS Business

Image Sensors World        Go to the original article...

KoreaHerald reports that Samsung and Hynix are increasing their efforts to expand their image sensor market share:

"President Moon Jae-in most recently ordered immediate measures to raise the domestic semiconductor industry’s competitiveness in the non-memory field at a state affairs meeting, saying, “Measures are needed to reduce the country’s overreliance on the memory chip market.”

Japan’s Sony leads the image sensor market. [Samsung] comes second in the image sensor market after Sony with about 30 percent share as of last year.

SK hynix, the second-largest player in the global memory market after Samsung, has so far remained silent about its non-memory business, shy of revealing the reality of its small business in the image sensor market.

According to market researcher TSR, the company claimed a 9.9 percent market share in the first quarter of 2018. But its image sensor sales -- at 800 billion won ($706 million) -- accounted for a mere 1 percent of the company’s total sales last year.
"

SK Hynix's company blog reviews the latest developments in smartphone imaging, mostly quoting other Korean companies:

Go to the original article...

Sub-Threshold 200GHz Detector

Image Sensors World        Go to the original article...

MDPI paper "Quasi-static Analysis Based on an Equivalent Circuit Model for a CMOS Terahertz Plasmon Detector in the Subthreshold Region" by Ju-Hee Son and Jong-Ryul Yang from Yeungnam University, Gyeongsan, Korea, claims that a sub-threshold-biased NMOS transistor in a 0.25um process is capable of detecting 200GHz radiation:

"An analytic method for a complementary metal-oxide-semiconductor (CMOS) terahertz plasmon detector operating in the subthreshold region is presented using the equivalent circuit model. With respect to design optimization of the detector, the signal transmission from the antenna port to the output of the detector is described by using the proposed circuit model, which does not include a complicated physical operating principle and mathematical expressions. Characteristics from the antenna port to the input gate node of the detector are analyzed through the superposition method by using the characteristic impedance of transmission lines. The superposition method shows that the effect of interconnection lines at the input is simplified with the optimum bias point. The characteristics of the plasmon detection are expressed by using small-signal analysis of the single transistor at the sub-threshold operation. The results of the small-signal analysis show that the unity gain preamplifier located between the detector core and the main amplifier can improve the detection performances such as the voltage responsivity and the noise equivalent power. The measurement results using the fabricated CMOS plasmon detector at 200 GHz suggest that the unity gain preamplifier improves the detector performances, which are the same results as we received from the proposed analytic method."
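The claimed benefit of the unity-gain preamplifier follows from the definition of noise-equivalent power, NEP = v_n / R_v: raising the voltage responsivity at constant input-referred noise lowers NEP proportionally. A sketch with purely illustrative numbers (not taken from the paper):

```python
def nep(noise_v_per_rt_hz, responsivity_v_per_w):
    """Noise-equivalent power, NEP = v_n / R_v, in W/sqrt(Hz).
    Lower NEP means a fainter THz signal is detectable."""
    return noise_v_per_rt_hz / responsivity_v_per_w

# Hypothetical example: a preamplifier that raises responsivity from
# 1 kV/W to 3 kV/W with unchanged 30 nV/sqrt(Hz) noise cuts NEP by 3x.
before = nep(30e-9, 1e3)
after = nep(30e-9, 3e3)
print(before, after)  # 3e-11 vs 1e-11 W/sqrt(Hz)
```

This is why the paper evaluates the preamplifier through both responsivity and NEP: the two figures of merit improve together when the gain stage does not add significant noise of its own.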

Go to the original article...
