Recent Industry News: Sony, SK Hynix

Image Sensors World        Go to the original article...

Sony separates production of cameras for China and non-China markets



TOKYO -- Sony Group has transferred production of cameras sold in the Japanese, U.S. and European markets to Thailand from China, part of growing efforts by manufacturers to protect supply chains by reducing their Chinese dependence.

Sony's plant in China will in principle produce cameras for the domestic market. Until now, Sony cameras were exported from China and Thailand. The site will retain some production facilities to be brought back online in emergencies. 

After tensions heightened between Washington and Beijing, Sony first shifted manufacturing of cameras bound for the U.S. The transfer of the production facilities for Japan- and Europe-bound cameras was completed at the end of last year. 

Sony offers the Alpha line of high-end mirrorless cameras. The company sold roughly 2.11 million units globally in 2022, according to Euromonitor. Of those, China accounted for 150,000 units, with the rest, more than 90%, sold elsewhere, meaning the bulk of Sony's Chinese production has been shifted to Thailand. 

On the production shift, Sony said it "continues to focus on the Chinese market and has no plans of exiting from China."

Sony will continue making other products, such as TVs, game consoles and camera lenses, in China for export to other countries. 

The manufacturing sector has been working to address a heavy reliance on Chinese production following supply chain disruptions caused by Beijing's zero-COVID policy.

Canon in 2022 closed part of its camera production in China, shifting it back to Japan. Daikin Industries plans to establish a supply chain to make air conditioners without having to rely on Chinese-made parts within fiscal 2023.

Sony ranks second in global market share for cameras, following Canon. Its camera-related sales totaled 414.8 billion yen ($3.2 billion) in fiscal 2021, about 20% of its electronics business.

SK Hynix reshuffles CIS team to focus on high-end products


SK Hynix has reshuffled its CMOS image sensor (CIS) team in a bid to shift focus from expanding market share to developing high-end products, TheElec has learned.

Its CIS team was a single organization prior to the changes, but the company has now created sub-teams that focus on specific functions and features of image sensors.

Overall, the team is now oriented more toward research and development than toward sales and marketing.

CIS chips are widely used for camera functions in smartphones and other IT products.

Sony is the world’s largest producer of the component, followed by Samsung.

The pair focus on high-resolution, multi-function sensors and together control 70% to 80% of the market; Sony is the overwhelming leader with around 50% market share.

SK Hynix is a smaller player in the field and in the past had focused on low-end CIS with 20MP or below resolution.

However, the company started supplying its CIS to Samsung in 2021. It provided its 13MP CIS for Samsung’s foldable phones and last year provided 50MP sensors for the Galaxy A series.

Still, overall demand for CIS has dropped in recent years as smartphones, their main application, suffer from a slowdown in demand.

This has been especially pronounced for mid-tier phones, whose unit prices have dropped in response to weak consumer demand.

SK Hynix has been reducing its CIS output in light of this and is also reducing its inventory, the sources said.


Call for Papers: IEEE International Conference on Computational Photography (ICCP) 2023



Submission Deadline: April 7, 2023

The ICCP 2023 Call-for-Papers is released on the conference website. ICCP is an international venue for disseminating and discussing new scholarly work in computational photography, novel imaging, sensors and optics techniques. 

As in previous years, ICCP is coordinating with the IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI) for a special issue on Computational Photography to be published after the conference. 

Learn more on the ICCP 2023 website, and submit your latest advancements by Friday, 7th April, 2023. 


Global Image Sensor Market Forecast to Grow Nearly 11% through 2030



The global image sensors market was valued at ~US$17.6 billion in 2020 and is forecast to reach ~US$48 billion in revenue by 2030, registering a compound annual growth rate of 10.7% over the forecast period from 2021 to 2030.
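The quoted figures are mutually consistent; a rough sanity check (an illustrative calculation, not taken from the report itself):

```python
def cagr(start, end, years):
    """Compound annual growth rate implied by a start value, an end value
    and the number of years between them."""
    return (end / start) ** (1 / years) - 1

# Forecast figures: ~US$17.6B in 2020 growing to ~US$48B by 2030
growth = cagr(17.6, 48.0, 10)
print(f"implied CAGR: {growth:.1%}")  # ~10.6%, consistent with the quoted 10.7%
```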

Factors Influencing
The global image sensor market is expected to gain traction in the upcoming years because of growing demand for image sensor technology in the automotive industry. Image sensors convert optical images into electronic signals, and demand for them is also expected to increase due to their applications in digital cameras.

Moreover, constant advancements in Complementary metal-oxide-semiconductor (CMOS) imaging technology would positively impact the growth of the global image sensors market. Recent advancements in CMOS technology have improved visualization presentations of the machines. Moreover, the cost-effectiveness of these technologies, together with better performance, would bolster the growth of the global image sensor market during the analysis period.

The growing adoption of smartphones and advancements in the industry are driving the growth of the global image sensor market. The dual-camera trend in smartphones and tablets is forecast to accelerate this growth. In addition, strong demand for advanced medical imaging systems would present promising opportunities for the prominent market players during the forecast timeframe.

Various companies are coming up with advanced image sensors with Artificial Intelligence capabilities. Sony Corporation (Japan) recently launched IMX500, the world's first intelligent vision sensor that carries out machine learning and boosts computer vision operations automatically. Thus, such advancements are forecast to prompt the growth of the global image sensor market in the coming years.
Furthermore, the growing trend of smartphone photography has driven up demand for image sensors that provide clear, high-quality output. Growing demand for 48 MP and 64 MP cameras would lead to further growth of the global image sensors market in the future.

Regional Analysis
Asia-Pacific is forecast to hold the largest share, with the highest revenue, in the global image sensors market. The region's growth is attributed to increasing research and development activities. Moreover, the growing number of accidents in the region is boosting the use of advanced driver assistance systems (ADAS), together with progressive image sensing capabilities. This is expected to drive demand for image sensors in the region during the forecast period.

Covid-19 Impact Analysis
The use of image sensors in smartphones has been the key driver of market growth. However, demand for smartphones declined severely during the pandemic, which sharply slowed the growth of the global image sensor market.


International Image Sensors Workshop (IISW) 2023 Program and Pre-Registration Open


The 2023 International Image Sensors Workshop announces the technical programme and opens the pre-registration to attend the workshop.

Technical Programme is announced: The Workshop programme is from May 22nd to 25th with attendees arriving on May 21st. The programme features 54 regular presentations and 44 posters with presenters from industry and academia. There are 10 engaging sessions across 4 days in a single track format. On one afternoon, there are social trips to Stirling Castle or the Glenturret Whisky Distillery. Click here to see the technical programme.

Pre-Registration is Open: Pre-registration is now open until Monday 6th Feb. Click here to pre-register and express your interest in attending.


PhotonicsSpectra article on quantum dots-based SWIR Imagers


The full article is available here:

Some excerpts below:

Cameras that sense wavelengths between 1000 and 2500 nm can often pick up details that would otherwise be hidden in images captured by conventional CMOS image sensors (CIS) that operate in the visible range. SWIR cameras can not only view details obscured by plastic sunglasses (a) and packaging (b), they can also peer through silicon wafers to spot voids after the bonding process (c). QD: quantum dot. Courtesy of imec.

A SWIR imaging forecast shows emerging sensor materials taking a larger share of the market, while incumbent InGaAs sees little gain, and the use of other materials grows at a faster rate. OPD: organic photodetector. Courtesy of IDTechEx.

Quantum dots act as a SWIR photodetector if they are sized correctly. When placed on a readout circuit, they form a SWIR imaging sensor.

The price for SWIR cameras today can run in the tens of thousands of dollars, which is too expensive for many applications and has inhibited wider use of the technology.

Silicon, the dominant sensor material for visible imaging, does not absorb SWIR photons without surface modification — and even then, it performs poorly. As a result, most SWIR cameras today use sensors based on indium gallium arsenide (InGaAs), ...

... sensors based on colloidal quantum dots (QDs) are gaining interest. The technology uses nanocrystals made of semiconductor materials, such as lead sulfide (PbS), that absorb in the SWIR. By adjusting the size of the nanocrystals used, sensor fabricators can create photodetectors that are sensitive from the visible to 2000 nm or even longer wavelengths.
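The size-tuning principle can be illustrated with a crude effective-mass (Brus-type) estimate. The material parameters below are approximate literature values, and this simple model deviates substantially from real PbS dots, so treat it purely as a sketch of the trend:

```python
import math

H = 6.62607015e-34        # Planck constant, J*s
HBAR = H / (2 * math.pi)
C_LIGHT = 2.99792458e8    # speed of light, m/s
EV = 1.602176634e-19      # joules per eV
M0 = 9.1093837015e-31     # electron rest mass, kg

def qd_absorption_edge_nm(radius_nm, eg_bulk_ev=0.41, me=0.085, mh=0.085):
    """Confinement-only Brus estimate of a PbS quantum dot's absorption
    edge: bulk bandgap plus the particle-in-a-sphere confinement energy
    (Coulomb term neglected). Returns the edge wavelength in nm."""
    r = radius_nm * 1e-9
    confinement = (HBAR**2 * math.pi**2) / (2 * r**2) * (1 / (me * M0) + 1 / (mh * M0))
    e_gap = eg_bulk_ev * EV + confinement
    return H * C_LIGHT / e_gap * 1e9
```

Smaller dots have a larger confinement energy and thus a shorter absorption edge; as the radius grows, the edge approaches the bulk PbS value near 3000 nm, which is the qualitative tunability the article describes.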

... performance has steadily improved with the underlying materials and processing science, according to Pawel Malinowski, program manager of pixel innovations at imec. The organization’s third-generation QD-based image sensor debuted a couple of years ago with an efficiency of 45%. Newer sensors have delivered above 60% efficiency.

Fabricating QD photodiodes and sensors is also inexpensive because the sensor stack consists of a QD layer a few hundred nanometers thick, along with conducting, structural, and protective layers, Klem said. The stack goes atop a CMOS readout circuit in a pixel array. The technique can accommodate high-volume manufacturing processes and produce either large or small pixel arrays. Compared to InGaAs technology, QD sensors offer higher resolution and lower noise levels, along with fast response times.

Emberion, a startup spun out of Nokia, also makes QD-based SWIR cameras ... The quantum efficiency of these sensors is only 20% at 1800 nm... [but] ... at about half the price of InGaAs-based systems... .

[Another company TriEye is secretive about whether they use QD detectors but...] Academic papers co-authored by one of the company’s founders around the time that TriEye came into existence discuss pyramid-shaped silicon nanostructures that detect SWIR photons via plasmonic enhancement of internal photoemission.


Canon’s activities lead to the removal of 1,202 listings from Amazon in Germany, Italy, Spain, the United Kingdom, France and The Netherlands

Newsroom | Canon Global        Go to the original article...


106 listings removed from Amazon in Canada, Mexico and the United States of America after Canon files infringement reports


Registrations Open for Harvest Imaging Forum (Apr 5-6, 2023)


When: April 5 and 6, 2023
Where: Delft, the Netherlands
Forum Topic: Imaging Beyond the Visible
Speaker: Prof. dr. Pierre Magnan (ISAE-SUPAERO, France)
Registration link:

More information can be found here:

Following the Harvest Imaging forums of the past decade, the ninth edition will be organized on April 5 & 6, 2023 in Delft, the Netherlands. The basic intention of the Harvest Imaging forum is to have a scientific and technical in-depth discussion on one particular topic that is of great importance and value to digital imaging. The 2023 forum will again be organized in a hybrid form:

  • You can attend in person and benefit optimally from live interaction with the speaker and audience,
  • There will also be a live broadcast of the forum, with interaction with the speaker possible through a chat box,
  • Finally, the forum can also be watched online at a later date.

The 2023 Harvest Imaging forum will deal with a single topic from the field of solid-state imaging and will have only one world-level expert as the speaker.

Register here:


"Imaging Beyond the Visible"
Prof. dr. Pierre MAGNAN (ISAE-SUPAERO, Fr)

Two decades of intensive and tremendous efforts have pushed imaging capabilities in the visible domain closer to physical limits, but have also extended attention to new areas beyond visible-light intensity imaging. Examples can be found at higher photon energies, with the appearance of CMOS ultraviolet imaging capabilities, or in other dimensions of light, with polarization imaging, both in monolithic form suitable for common camera architectures.

But one of the most active and impressive fields is the extension of interest to the spectral range significantly beyond the visible, in the infrared domain. Special focus is put on the Short Wave Infrared (SWIR), used in reflective imaging mode, but also on the thermal infrared spectral ranges used in self-emissive ‘thermal’ imaging mode: the Medium Wave Infrared (MWIR) and Long Wave Infrared (LWIR). Initially motivated mostly by military and scientific applications, these spectral domains have now met new, higher-volume application needs.

This has been made possible thanks to new technical approaches enabling cost reduction, stimulated by the efficient collective manufacturing processes offered by the microelectronics industry. CMOS, even though no longer sufficient on its own to address the non-visible imaging spectral range, is still a key part of the solution.

The goal of this Harvest Imaging forum is to go through the various aspects of imaging concepts, device principles, used materials and imager characteristics to address the beyond-visible imaging and especially focus on the infrared spectral bands imaging.

Emphasis will be put on the materials used for detection:

  • Germanium, quantum dot devices and InGaAs for SWIR,
  • III-V and II-VI semiconductors for MWIR and LWIR,
  • Microbolometers and thermopiles for thermal imagers.

Besides the material aspects, attention will also be given to the associated CMOS circuit architectures that enable the implementation of imaging arrays, at both the pixel and imager level.
A status report on current and new trends will be provided.

Pierre Magnan graduated in E.E. from the University of Paris in 1980. After working as a research scientist in analog and digital CMOS design at French research labs until 1994, he moved in 1995 to CMOS image sensor research at SUPAERO (now ISAE-SUPAERO) in Toulouse, France, an educational and research institute funded by the French Ministry of Defense. There he was involved in setting up and growing the CMOS active-pixel sensor research and development activities. From 2002 to 2021, as a Full Professor and Head of the Image Sensor Research Group, he was involved in CMOS image sensor research. His team worked in cooperation with European companies (including STMicroelectronics, Airbus Defence & Space and Thales Alenia Space, as well as the European and French space agencies) and developed custom image sensors dedicated to space instruments, in recent years extending the scope of the group to CMOS design for infrared imagers.
In 2021, Pierre was appointed Emeritus Professor of the ISAE-SUPAERO Institute, where he now focuses on research within PhD work, mostly with STMicroelectronics.

Pierre has supervised more than 20 PhD candidates in the field of image sensors and co-authored more than 80 scientific papers. He has been involved in various expert missions for French agencies, companies and the European Commission. His research interests include solid-state image sensor design for visible and non-visible imaging, modelling, technologies, hardening techniques and circuit design for imaging applications.

He has served in the IEEE IEDM Display and Sensors subcommittee in 2011-2012 and in the International Image Sensor Workshop (IISW) Technical Program Committee, being the General Technical Chair of 2015 IISW. He is currently a member of the 2022 IEDM ODI sub-committee and the IISW2023 Technical Program Committee.


Samsung Tech Blog about ISOCELL Color, HDR and ToF Imaging



Some excerpts below.

The science of creating pixels has made substantial progress in recent years. As a rule, high-resolution image sensors need small, light-sensitive pixels. To capture as much light as possible, the pixel structure has evolved from front-side illumination (FSI) to back-side illumination (BSI), which places the photodiode layer above the metal lines rather than below them. By locating the photodiode closer to the light source, each pixel is able to capture more light. The downside of this structure is higher crosstalk between pixels, leading to color contamination.

“To remedy such a drawback, Samsung introduced ISOCELL, its first technology that isolates pixels from each other by adding barriers. The name ISOCELL is a compound of the words ‘isolate’ and ‘cell,’” Kim explained. “By isolating each pixel, ISOCELL can increase a pixel’s full well capacity to hold more light and reduce crosstalk from one pixel to another.”

ISOCELL image sensors therefore have a very high full-well capacity. Pixels in the newest ISOCELL image sensor hold up to 70,000 electrons, allowing the sensor to reach a huge signal range. ... “To reduce noise, we perform two readouts: one with high gain to capture the dark details and another with low gain to capture the bright details. The two readouts are then merged in the sensor. Each readout has 10 bits; with the high-conversion-gain readout at 4x, this adds an additional 2 bits, producing 12-bit HDR image output. This technology is called Smart-ISO Pro, also known as iDCG (intra-scene Dual Conversion Gain).”
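The merge described in the quote can be sketched as follows. This is an illustrative reconstruction of the dual-conversion-gain principle, not Samsung's actual pipeline; the saturation-threshold fallback is an assumption:

```python
import numpy as np

def merge_dual_gain(high_gain, low_gain, gain_ratio=4, bits=10):
    """Merge a high-conversion-gain and a low-conversion-gain 10-bit
    readout into one 12-bit HDR code, in the spirit of iDCG.
    high_gain, low_gain: integer arrays of raw 10-bit pixel codes."""
    max_code = (1 << bits) - 1                      # 1023 for 10-bit
    # Bright pixels saturate the high-gain readout; scaling the low-gain
    # readout by the 4x gain ratio adds the extra 2 bits of range.
    low_scaled = low_gain.astype(np.int64) * gain_ratio
    merged = np.where(high_gain < max_code,
                      high_gain.astype(np.int64),   # dark detail: high gain
                      low_scaled)                   # bright detail: low gain
    return np.clip(merged, 0, (1 << (bits + 2)) - 1)  # 12-bit output
```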

Samsung plans to release a new generation of iToF sensor with an integrated image signal processor (ISP). All depth processing is done on the ISP within the sensor, rather than delegated to the SoC, so the overall operation consumes less power. In addition, the new solution offers improved depth quality even in scenarios such as low-light environments, narrow objects or repetitive patterns. For future applications, Samsung's ISP-integrated ToF will help provide high-quality depth images with little to no motion blur or lag, at a high frame rate.
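For context, the core depth computation an indirect ToF sensor performs is compact. A generic textbook 4-phase sketch follows (sample ordering and sign conventions vary by vendor; this is not Samsung's implementation):

```python
import math

C_LIGHT = 2.99792458e8  # speed of light, m/s

def itof_depth_m(a0, a1, a2, a3, f_mod_hz):
    """Generic 4-phase indirect ToF: recover the phase shift between the
    emitted and received modulated light from four samples taken 90
    degrees apart, then convert phase to depth. The unambiguous range
    is c / (2 * f_mod)."""
    phase = math.atan2(a3 - a1, a0 - a2) % (2 * math.pi)
    return C_LIGHT * phase / (4 * math.pi * f_mod_hz)
```

At a typical 100 MHz modulation frequency the unambiguous range is about 1.5 m; lower frequencies extend the range at the cost of depth resolution.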


SD Optics releases MEMS-based system "WiseTopo" for 3D microscopy


SD Optics has released WiseTopo, a MEMS-based microarray lens system that transforms 2D microscopes into 3D. 
Attendees at Photonics West can see a demonstration at booth #4128 between Jan 31 and Feb 2, 2023 at the Moscone Center in San Francisco, California.

SD Optics introduces WiseTopo, built on its patented core technology MALS, a MEMS-based microarray lens system. WiseTopo transforms a 2D microscope into a 3D microscope with a simple plug-in installation, and it fits all microscopes. A conventional system has a limited depth of field, so the user has to adjust focus manually by moving the z-axis, making it difficult to identify the exact shape of an object instantly. These manual movements can cause deviations in observation, missing information, incomplete inspection, and an increased workload for the user.

WiseTopo converts a 2D microscope into a 3D microscope by replacing the image sensor. With this simple installation, WiseTopo resolves the depth-of-field issue without z-axis movement. MALS is an optical MEMS-based, ultra-fast variable-focus lens that implements curvature changes with the motion of individual micro-mirrors. It refocuses at a speed of 12 kHz without mechanical z-axis movement, and is a semi-permanent digital lens technology that operates at any temperature and has no life-cycle limit.

Combined with SD Optics' software, WiseTopo provides features that give users a better understanding of an object in real time: an all-in-focus function where everything is in focus; an auto-focus function that automatically focuses on a region of interest; focus lock, which maintains focus when multiple focus ROIs are set along the z-axis; multi-focus lock, which stays in focus even when moving along the x- and y-axes; and auto-focus lock, which retains auto-focus during z-axis movement. These functions maximize user convenience, and the resulting 3D images reveal information that is hidden when using a 2D microscope.

WiseTopo obtains in-focus images with its fast focus-varying technology and instantly processes many 3D attributes, such as shape matching and point clouds. It supports various 3D data formats for analysis; for example, reference 3D data can easily be compared with real-time 3D data. On a microscope, objective lenses with different magnifications are mounted on the turret; WiseTopo provides all functions even when the magnification is changed, and works with any microscope regardless of brand.

3D images created in WiseTopo can be viewed in AR/VR, letting users observe 3D data in the metaverse space.
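The all-in-focus function described above can be sketched with generic focus stacking (not SD Optics' actual algorithm): pick, per pixel, the sharpest frame of the focal stack.

```python
import numpy as np

def all_in_focus(stack):
    """Pick, per pixel, the frame of a focal stack with the highest local
    sharpness, measured by the absolute discrete Laplacian.
    stack: float array of shape (n_frames, H, W), grayscale."""
    # Sharpness map for every frame (4-neighbour Laplacian, wrapped edges)
    lap = np.abs(
        -4.0 * stack
        + np.roll(stack, 1, axis=1) + np.roll(stack, -1, axis=1)
        + np.roll(stack, 1, axis=2) + np.roll(stack, -1, axis=2)
    )
    best = np.argmax(lap, axis=0)  # index of the sharpest frame per pixel
    h, w = best.shape
    fused = stack[best, np.arange(h)[:, None], np.arange(w)[None, :]]
    return fused, best
```

Production systems smooth the sharpness maps and blend across frames to avoid seams; the `best` index map doubles as a coarse depth map, which is how a focal stack yields 3D shape information.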


poLight’s paper on passive athermalisation of compact camera/lens using its TLens® tunable lens


Image defocus over a wide temperature range is a challenge in many applications. poLight's TLens technology behaves opposite to plastic lenses over temperature, so simply adding it to the optics stack addresses this issue.

A whitepaper is available here: [link]

Abstract: poLight ASA is the owner of and has developed the TLens product family as well as other patented micro-opto-electro-mechanical systems (MOEMS) technologies. TLens is a focusable tunable optics device based on lead zirconium titanate (PZT) microelectromechanical systems (MEMS) technology and a novel optical polymer material. The advantages of the TLens have already been demonstrated in multiple products launched on the market since 2020. Compactness, low power consumption, and fast speed are clear differentiators in comparison with incumbent voice coil motor (VCM) technology, thanks to the patented MEMS architecture. In addition, using the TLens simply by adding it onto a fixed-focus lens camera, or by inserting it inside the lens stack, enables stable focusing over an extended operating range. It has been demonstrated that the TLens passively compensates the thermal defocus of the plastic lens stack/camera structure. Fixed-focus plastic lens stack cameras, usually used in consumer devices, typically exhibit a thermal defocus of a few diopters over the operating temperature range. Results of simulations as well as experimental data are presented, together with a principal athermal lens design using the TLens in a purely passive manner (without the use of its electro-tunability), while the electro-tunability can additionally be used to secure an extended depth of focus with further enhanced image quality.
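The passive-compensation idea reduces to a one-line model: the plastic stack and the TLens have thermo-optic responses of opposite sign, so their sum stays near zero. The coefficients below are hypothetical illustration values, not poLight data:

```python
def thermal_defocus_diopters(delta_t_c, plastic_coeff=-0.04, tlens_coeff=0.04):
    """Toy athermalization model: the plastic lens stack defocuses by
    plastic_coeff diopters per degree C, while the TLens responds with
    the opposite sign, so the total defocus stays near zero.
    Both coefficients are invented for illustration."""
    return (plastic_coeff + tlens_coeff) * delta_t_c
```

Without the compensating element (`tlens_coeff=0`), a 50 C swing in this toy model gives 2 diopters of defocus, the "few diopters" scale the abstract mentions; with it, the residual is near zero.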



Towards a Colorimetric Camera – Talk from EI 2023 Symposium


Tripurari Singh and Mritunjay Singh of Image Algorithmics presented a talk titled "Towards a Colorimetric Camera" at the recent Electronic Imaging 2023 symposium. They show that for low-light color imaging it is better to use a long/medium/short (LMS) filter that more closely mimics human color vision as opposed to the traditional RGB Bayer pattern.


Canon requests removal of toner cartridges from, including cartridges sold by IMAGE SHOP as Lemero brand


Jabil Inc. collaboration with ams OSRAM and Artilux



ST. PETERSBURG, FL – January 18, 2023 – Jabil Inc. (NYSE: JBL), a leading manufacturing solutions provider, today announced that its renowned optical design center in Jena, Germany, is currently demonstrating a prototype of a next-generation 3D camera with the ability to seamlessly operate in both indoor and outdoor environments up to a range of 20 meters. Jabil, ams OSRAM and Artilux combined their proprietary technologies in 3D sensing architecture design, semiconductor lasers and germanium-silicon (GeSi) sensor arrays based on a scalable complementary metal-oxide-semiconductor (CMOS) technology platform, respectively, to demonstrate a 3D camera that operates in the short-wavelength infrared (SWIR), at 1130 nanometers.

Steep growth in automation is driving performance improvements for robotic and mobile automation platforms in industrial environments. The industrial robot market is forecast to grow at over 11% compound annual growth rate to over $35 billion by 2029. The 3D sensor data from these innovative depth cameras will improve automated functions such as obstacle identification, collision avoidance, localization and route planning — key applications necessary for autonomous platforms. 

“For too long, industry has accepted 3D sensing solutions limiting the operation of their material handling platforms to environments not impacted by the sun. The new SWIR camera provides a glimpse of the unbounded future of 3D sensing where sunlight no longer impinges on the utility of autonomous platforms,” said Ian Blasch, senior director of business development for Jabil’s Optics division. “This new generation of 3D cameras will not only change the expected industry standard for mid-range ambient light tolerance but will usher in a new paradigm of sensors capable of working across all lighting environments.”

“1130nm is the first of its kind SWIR VCSEL technology from ams OSRAM, offering enhanced eye safety, outstanding performance in high sunlight environments, and skin detection capability, which is of critical importance for collision avoidance when, for example humans and industrial robots interact,” says Dr. Joerg Strauss, senior vice president and general manager at ams OSRAM for business line visualization and sensing. “We are excited to partner with Jabil to make the next-generation 3D sensing and machine vision solutions a reality.”

Dr. Stanley Yeh, vice president of platform at Artilux, concurs, “We are glad to work with Jabil and ams OSRAM to deliver the first mid-range SWIR 3D camera with the use of near infrared (NIR)-like components such as CMOS-based sensor and VCSEL. It's a significant step toward the mass-adoption of SWIR spectrum sensing and being the leader of CMOS SWIR 2D/3D imaging technology.”
For nearly two decades, Jabil’s optical division has been recognized by leading technology companies as the premier service provider for advanced optical design, industrialization and manufacturing. Our Optics division has more than 170 employees across four locations. Jabil’s optics designers, engineers and researchers specialize in solving complex optical problems for its customers in 3D sensing, augmented and virtual reality, action camera, automotive, industrial and healthcare markets. Additionally, Jabil customers leverage expertise in product design, process development, testing, in-house active alignment (from Kasalis, a technology division of Jabil), supply chain management and manufacturing expertise.

More information and the test data can be found at the following website:


CIS market news 2022/2023


A recent Will Semiconductor report includes some news about OmniVision ("Howell" in translated Chinese sources):

It is worth noting that in December 2022, Howell Group, a subsidiary of Weir, issued an internal letter announcing cost control, with the goal of reducing costs by 20% by 2023.

In an internal letter, Howell Group said, "The current market situation is very serious. We are facing great market challenges, and prices, inventory and supply chains are under great pressure. Therefore, we must carry out cost control, with the goal of reducing costs by 20% by 2023."

To achieve this goal, Howell Group also announced: a halt to all recruitment, with no replacement of departing staff; salary cuts for senior managers; a work stoppage during the Spring Festival in all regions of the group; discontinuation of quarterly bonuses and any other form of bonuses; strictly controlled expenditure; and reduced NRE spending on some R&D projects.

Howell Group said, "These measures are temporary, and we believe that business-level improvements will occur in the second half of next year, because we have a new product layout in the consumer market, while automobiles and emerging markets are rising steadily. We will reassess the situation at the end of the first quarter of next year (2023)."

More related news from Counterpoint Research:

Global Smartphone CIS Market Revenues, Shipments Dip in 2022

  • In 2022, global smartphone image sensor shipments were estimated to drop by mid-teens YoY.
  • Global smartphone image sensor revenues were down around 6% YoY during the year.
  • Sony was the only major vendor to achieve YoY revenue growth, thanks to Apple’s camera upgrades.
  • Both Sony and Samsung managed to improve their product mix.

Compare Omnivision sales with its peers in this graphic:


European Defense Fund project for next gen IR sensors


From Wiley industry news:

€19M project set to enable next-generation IR sensors

13.01.2023 - A four-year defense project, led by Lynred, is first to see EU infrared product manufacturers jointly acquire access to advanced CMOS technology to design new infrared sensors.
A ten-member consortium aims to gain European sovereignty in producing high-performance IR sensors for future defense systems. Lynred, a French provider of high-quality infrared detectors for the aerospace, defense and commercial markets, leads HEROIC, a European Defence Fund project aimed at developing highly advanced electronic components for next-generation infrared (IR) sensors, while consolidating the supply chain of these state-of-the-art products in Europe.

High Efficiency Read-Out Integrated Circuit (HEROIC) is a four-year project starting January 2023 with a budget on the order of €19 million, of which the European Defence Fund is contributing €18M.
HEROIC is the first collaboration of its kind to bring together European IR manufacturers, several of whom are competitors, to strategically tackle a common problem. The project’s main objectives are to increase access to, and dexterity in, using a new European-derived advanced CMOS technology that offers key capabilities in developing the next generations of high-performance infrared sensors – these will feature smaller pixels and advanced functions for defense applications. One overall aim is to enable Europe to gain technological sovereignty in producing high-performance IR sensors.
Consortium members include three IR manufacturers: AIM (DE), project leader Lynred (FR), and Xenics (BE); four system integrators: Indra (ES), Miltech Hellas (GR), Kongsberg (NO) and PCO SA (PL); a component provider: Ideas, an IC developer (NO), as well as two research institutions CEA-Leti (FR), and the University of Seville (ES).
“Lynred is proud to collaborate on this game-changing project aimed at securing European industrial sovereignty in the design and supply of IR sensors,” said David Billon-Lanfrey, chief strategy officer at Lynred. “This project represents the first phase for European IR manufacturers to gain access to a superior CMOS technology compatible with various IR detectors and 2D/3D architectures, and equally importantly, make it available within a robust EU supply chain.”

Acquiring the latest advanced CMOS technology with a node that no consortium partner has had an opportunity to access is pivotal to the sustainable design of a next-generation read-out integrated circuit (ROIC). Its commonly specified platform will allow each consortium partner to pursue its respective technological roadmap and more effectively meet the higher performance expectations of post-2030 defense systems.

“The HEROIC project will enable AIM to develop advanced ROICs based on European silicon CMOS technology, as an important building block in its next-generation IR sensors,” said Rainer Breiter, vice-president, IR module programs, at AIM. “We are looking forward to working together with our partners in this common approach to access the latest advanced CMOS technology.”

IR sensors are used to detect, recognize and identify objects or targets during the night and in adverse weather and operational conditions. They are at the center of multiple defense applications: thermal imagers, surveillance systems, targeting systems and observation satellites.

Next-generation IR systems will need to exhibit longer detection, recognition and identification ranges, and offer larger fields of view and faster frame rates. This will require higher resolution formats encompassing further reductions in pixel pitch sizes down from today’s standard 15 μm and 10 μm to 7.5 μm and below. This will need to be obtained without increasing the small footprint of the IR sensor, thus maintaining reasonable system costs and mechanical/electrical interfaces. These requirements make the qualification of a new CMOS technology mandatory to achieving higher performance at the IR sensor level.

"Xenics sees the HEROIC project as a cornerstone for its strategy of SWIR development for defense applications,” said Paul Ryckaert, CEO of Xenics. “Thanks to this project, the consortium partners will shape the future of European CMOS developments and technologies for IR sensors.”


Videos du Jour Jan 17, 2023: Flexible image sensors, Samsung ISOCELL, Hamamatsu

Image Sensors World

Flexible Image Sensor Fabrication Based on NIPIN Phototransistors

Hyun Myung Kim, Gil Ju Lee, Min Seok Kim, Young Min Song
Gwangju Institute of Science and Technology, School of Electrical Engineering and Computer Science;

We present a detailed method to fabricate a deformable lateral NIPIN phototransistor array for curved image sensors. The phototransistor array with an open mesh form, which is composed of thin silicon islands and stretchable metal interconnectors, provides flexibility and stretchability. A parameter analyzer is used to characterize the electrical properties of the fabricated phototransistors.

ISOCELL Image Sensor: Ultra-fine Pixel Technologies | Samsung

ISOCELL has evolved to bring ultra-high resolution to our mobile photography. 
Learn more about Samsung's ultra-fine pixel technologies.

Photon counting imaging using Hamamatsu's scientific imaging cameras - TechBites Series

With our new photon number resolving mode, the ORCA-Quest enables photon counting resolution across a full 9.4 megapixel image. See the camera in action and learn how photon number imaging pushes quantitative imaging to a new frontier.


Sony FE 20-70mm f4 G review

Cameralabs

The FE 20-70mm f4 G is a general-purpose zoom for Sony mirrorless cameras that starts wider than typical 24-70 models, making it more attractive to vloggers or anyone who wants to capture wider views. Find out more in my full review!…


Canon develops terahertz device with compact size, world-highest output and potential use cases in security, 6G transmission and more

Newsroom | Canon Global


Advantages of a one-bit quanta image sensor

Image Sensors World

In an arXiv preprint, Prof. Stanley Chan of Purdue University writes:

The one-bit quanta image sensor (QIS) is a photon-counting device that captures image intensities using binary bits. Assuming that the analog voltage generated at the floating diffusion of the photodiode follows a Poisson-Gaussian distribution, the sensor produces either a “1” if the voltage is above a certain threshold or “0” if it is below the threshold. The concept of this binary sensor has been proposed for more than a decade and physical devices have been built to realize the concept. However, what benefits does a one-bit QIS offer compared to a conventional multi-bit CMOS image sensor? Besides the known empirical results, are there theoretical proofs to support these findings? The goal of this paper is to provide new theoretical support from a signal processing perspective. In particular, it is theoretically found that the sensor can offer three benefits: (1) Low-light: One-bit QIS performs better in low light because it has a low read noise and its one-bit quantization can produce an error-free measurement. However, this requires the exposure time to be appropriately configured. (2) Frame rate: One-bit sensors can operate at a much higher speed because a response is generated as soon as a photon is detected. However, in the presence of read noise, there exists an optimal frame rate beyond which the performance will degrade. A closed-form expression of the optimal frame rate is derived. (3) Dynamic range: One-bit QIS offers a higher dynamic range. The benefit is brought by two complementary characteristics of the sensor: nonlinearity and exposure bracketing. The decoupling of the two factors is theoretically proved, and closed-form expressions are derived.

Pre-print available here:

The paper argues that, if implemented correctly, there are three main benefits:

1. Better SNR in low light

2. Higher speed (frame rate)

3. Better dynamic range

This paper has many interesting technical results and insights. It provides a balanced view of the regimes where single-photon quanta image sensors provide benefits over conventional image sensors.
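The binary sampling model described in the abstract can be sketched numerically. The snippet below is a minimal illustration, not the paper's analysis: it assumes an ideal unit threshold and (by default) no read noise, in which case the photon rate can be recovered from the fraction of zero bits via the Poisson zero-probability, P(bit = 0) = exp(-rate). All function names here are our own.

```python
import numpy as np

rng = np.random.default_rng(0)

def one_bit_qis_frames(photon_rate, n_frames, threshold=1, read_noise=0.0):
    """Simulate binary frames from a one-bit QIS: the photoelectron count
    is Poisson, optional Gaussian read noise is added, and the result is
    compared against a threshold to produce a single bit per frame."""
    photons = rng.poisson(photon_rate, size=n_frames)
    voltage = photons + rng.normal(0.0, read_noise, size=n_frames)
    return (voltage >= threshold).astype(np.uint8)

def estimate_rate(bits):
    """Recover the photon rate from the fraction of zero bits, assuming an
    ideal unit threshold and no read noise: P(bit = 0) = exp(-rate)."""
    return -np.log(np.mean(bits == 0))

bits = one_bit_qis_frames(photon_rate=0.5, n_frames=100_000)
print(estimate_rate(bits))  # close to the true rate of 0.5
```

Raising `read_noise` in this toy model quickly degrades the estimate, which is consistent with the paper's point that the one-bit advantage depends on operating conditions.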


Sigma 60-600mm f4.5-6.3 DG DN review

Cameralabs

The Sigma 60-600mm DG DN takes you from standard to super-telephoto, ideal for sports and wildlife photography. In my review I test the mirrorless DG DN version!…


Canon develops CMOS sensor for monitoring applications with industry-leading dynamic range of 148 dB, automatic exposure optimization function for each sensor area that improves accuracy for recognizing moving subjects

Newsroom | Canon Global



Startup Funding News from Semiconductor Engineering

Image Sensors World


Fortsense received hundreds of millions of yuan (CNY 100.0M is ~$14.3M) in Series C1 financing led by Chengdu Science and Technology Venture Capital, joined by BAIC Capital, Huiyou Investment, Shanghai International Group, Shengzhong Investment, and others. The company develops optical sensing chips, including 3D structured light chips for under-screen fingerprint sensors and time-of-flight (ToF) sensors for facial recognition in mobile devices. Funding will be used for development of single-photon avalanche diode (SPAD) lidar chips for automotive applications. Founded in 2017, it is based in Shenzhen, China.

PolarisIC raised nearly CNY 100.0M (~$14.3M) in pre-Series A financing from Dami Ventures, Innomed Capital, Legend Capital, Nanshan SEI Investment, and Planck Venture Capital. PolarisIC makes single-photon avalanche diode (SPAD) direct time-of-flight (dToF) sensors and photon counting low-light imaging chips for mobile phones, robotic vacuums, drones, industrial sensors, and AGV. Funds will be used for mass production and development of 3D stacking technology and back-illuminated SPAD. Based in Shenzhen, China, it was founded in 2021.

VicoreTek received nearly CNY 100.0M (~$14.3M) in strategic financing led by ASR Microelectronics and joined by Bondshine Capital. The startup develops image processing and sensor fusion chips, AI algorithms, and modules for object avoidance in sweeping robots. It plans to expand to other types of service robots and AR/MR, followed by the automotive market. Funds will be used for R&D and mass production. Founded in 2019, it is based in Nanjing, China.

Greenteg drew CHF 10.0M (~$10.8M) in funding from existing and new investors. The company makes heat flux sensors for applications ranging from photonics, building insulation, and battery characterization to core body temperature measurement in the form factor of wearables. Funds will be used for R&D into medical applications in the wearable market and to scale production capacity. Founded in 2009 as a spin off from ETH Zurich, it is based in Rümlang, Switzerland.

Phlux Technology raised £4.0M (~$4.9M) in seed funding led by Octopus Ventures and joined by Northern Gritstone, Foresight Williams Technology Funds, and QUBIS Innovation Fund. Phlux develops antimony-based infrared sensors for lidar systems. The startup claims its architecture is 10x more sensitive and with 50% more range compared to equivalent sensors. It currently offers a single element sensor that is retrofittable into existing lidar systems and plans to build an integrated subsystem and array modules for a high-performance sensor toolkit. Other applications for the infrared sensors include satellite communications internet, fiber telecoms, autonomous vehicles, gas sensing, and quantum communications. Phlux was also recently awarded an Innovate UK project with QLM Technology to develop a lidar system for monitoring greenhouse gas emissions. A spin out of Sheffield University founded in 2020, it is based in Sheffield, UK.

Microparity raised tens of millions of yuan (CNY 10.0M is ~$1.4M) in pre-Series A+ funding from Summitview Capital. Microparity develops high-performance direct time-of-flight (dToF) single photon detection devices, including single-photon avalanche diodes (SPAD), silicon photomultipliers (SiPM), and SiPM readout ASICs for consumer electronics, lidar, medical imaging, industrial inspection, and other applications. Founded in 2017, it is based in Hangzhou, China.

Yegrand Smart Science & Technology raised pre-Series A financing from Zhejiang Venture Capital. Yegrand Smart develops photon pickup and Doppler lidar equipment for measuring vibration. Founded in 2021, it is based in Hangzhou, China.


Canon places fifth in U.S. patent rankings and first among Japanese companies, places in top five for 37 years running

Newsroom | Canon Global


Electronic Imaging 2023 Symposium (Jan 15-19, 2023)

Image Sensors World

The symposium has many co-located conferences with talks and papers of interest to image sensors community. Short courses on 3D imaging, image sensors and camera calibration, image quality quantification, ML/AI for imaging and computer vision are also being offered.

Please visit the symposium website for the full program. Some interesting papers and talks are listed below.

Evaluation of image quality metrics designed for DRI tasks with automotive cameras, Valentine Klein, Yiqi LI, Claudio Greco, Laurent Chanas, and Frédéric Guichard, DXOMARK (France)

Driving assistance is increasingly used in new car models. Most driving assistance systems are based on automotive cameras and computer vision. Computer Vision, regardless of the underlying algorithms and technology, requires the images to have good image quality, defined according to the task. This notion of good image quality is still to be defined in the case of computer vision as it has very different criteria than human vision: humans have a better contrast detection ability than image chains. The aim of this article is to compare three different metrics designed for detection of objects with computer vision: the Contrast Detection Probability (CDP) [1, 2, 3, 4], the Contrast Signal to Noise Ratio (CSNR) [5] and the Frequency of Correct Resolution (FCR) [6]. For this purpose, the computer vision task of reading the characters on a license plate will be used as a benchmark. The objective is to check the correlation between the objective metric and the ability of a neural network to perform this task. Thus, a protocol to test these metrics and compare them to the output of the neural network has been designed and the pros and cons of each of these three metrics have been noted.
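For intuition, a contrast-to-noise style metric of the kind the abstract compares can be computed in a few lines. This is an illustrative simplification under our own definitions, not the exact CDP, CSNR or FCR formulas from the cited references:

```python
import numpy as np

def contrast_snr(patch_a, patch_b):
    """Illustrative contrast-to-noise style metric: absolute contrast
    between two image patches divided by the pooled noise standard
    deviation (a simplification, not the cited CSNR definition)."""
    contrast = abs(patch_a.mean() - patch_b.mean())
    noise = np.sqrt((patch_a.var() + patch_b.var()) / 2)
    return contrast / noise

# Synthetic plate character vs. background: mean levels 100 and 120,
# both with noise of standard deviation 5
rng = np.random.default_rng(1)
background = rng.normal(100, 5, size=10_000)
character = rng.normal(120, 5, size=10_000)
print(contrast_snr(background, character))  # roughly 20 / 5 = 4
```

The paper's contribution is checking how well metrics of this family correlate with a neural network's actual success at reading plates, rather than with human visibility.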


Designing scenes to quantify the performance of automotive perception systems, Zhenyi Liu1, Devesh Shah2, Alireza Rahimpour2, Joyce Farrell1, and Brian Wandell1; 1Stanford University and 2Ford Motor Company (United States)

We implemented an end-to-end simulation for perception systems, based on cameras, that are used in automotive applications. The open-source software creates complex driving scenes and simulates cameras that acquire images of these scenes. The camera images are then used by a neural network in the perception system to identify the locations of scene objects, providing the results as input to the decision system. In this paper, we design collections of test scenes that can be used to quantify the perception system’s performance under a range of (a) environmental conditions (object distance, occlusion ratio, lighting levels), and (b) camera parameters (pixel size, lens type, color filter array). We are designing scene collections to analyze performance for detecting vehicles, traffic signs and vulnerable road users in a range of environmental conditions and for a range of camera parameters. With experience, such scene collections may serve a role similar to that of standardized test targets that are used to quantify camera image quality (e.g., acuity, color).

 A self-powered asynchronous image sensor with independent in-pixel harvesting and sensing operations, Ruben Gomez-Merchan, Juan Antonio Leñero-Bardallo, and Ángel Rodríguez-Vázquez, University of Seville (Spain)

A new self-powered asynchronous sensor with a novel pixel architecture is presented. Pixels are autonomous and can harvest or sense energy independently. During the image acquisition, pixels toggle to a harvesting operation mode once they have sensed their local illumination level. With the proposed pixel architecture, the most illuminated pixels provide an early contribution to power the sensor, while less illuminated ones spend more time sensing their local illumination. Thus, the equivalent frame rate is higher than that offered by conventional self-powered sensors that harvest and sense illumination in independent phases. The proposed sensor uses a Time-to-First-Spike readout that allows trading between image quality and data and bandwidth consumption. The sensor has HDR operation with a dynamic range of 80 dB. Pixel power consumption is only 70 pW. In the article, we describe the sensor's and pixel's architectures in detail. Experimental results are provided and discussed. Sensor specifications are benchmarked against the state of the art.
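The Time-to-First-Spike readout mentioned above encodes brighter pixels as earlier spikes. A minimal reconstruction sketch, assuming an idealized integrate-to-threshold pixel model (our assumption, not the authors' circuit):

```python
import numpy as np

def ttfs_to_intensity(spike_times, k=1.0):
    """Map time-to-first-spike readout back to relative intensity.

    Under an idealized integrate-to-threshold model, a pixel with
    photocurrent I crosses the threshold at t = k / I, so I = k / t:
    brighter pixels spike earlier. Pixels that never spiked within the
    frame window are passed as np.inf and map to zero intensity."""
    return np.where(np.isinf(spike_times), 0.0, k / spike_times)

times = np.array([0.5, 1.0, 2.0, np.inf])  # seconds until first spike
print(ttfs_to_intensity(times))  # earlier spike -> higher intensity
```

This inverse relationship is also why truncating the readout early trades image quality (dark pixels are lost) for bandwidth, as the abstract notes.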

KEYNOTE: Deep optics: Learning cameras and optical computing systems, Gordon Wetzstein, Stanford University (United States)

Neural networks excel at a wide variety of imaging and perception tasks, but their high performance also comes at a high computational cost and their success on edge devices is often limited. In this talk, we explore hybrid optical-electronic strategies to computational imaging that outsource parts of the algorithm into the optical domain or into emerging in-pixel processing capabilities. Using such a co-design of optics, electronics, and image processing, we can learn application-domain-specific cameras using modern artificial intelligence techniques or compute parts of a convolutional neural network in optics with little to no computational overhead. For the session: Processing at the Edge (joint with ISS).

Computational photography on a smartphone, Michael Polley, Samsung Research America (United States)

Many of the recent advances in smartphone camera quality and features can be attributed to computational photography. However, the increased computational requirements must be balanced with cost, power, and other practical concerns. In this talk, we look at the embedded signal processing currently applied, including new AI-based solutions in the signal chain. By taking advantage of increasing computational performances of traditional processor cores, and additionally tapping into the exponentially increasing capabilities of the new compute engines such as neural processing units, we are able to deliver on-device computational imaging. For the session: Processing at the Edge (joint with ISS).

Analog in-memory computing with multilevel RRAM for edge electronic imaging application, Glenn Ge, TetraMem Inc. (United States)

Conventional digital processors based on the von Neumann architecture have an intrinsic bottleneck in data transfer between processing and memory units. This constraint increasingly limits performance as data sets continue to grow exponentially for the various applications, especially for the Electronic Imaging Applications at the edge, for instance, the AR/VR wearable and automotive applications. TetraMem addresses this issue by delivering state-of-the-art in-memory computing using our proprietary non-volatile computing devices. This talk will discuss how TetraMem’s solution brings several orders of magnitude improvement in computing throughput and energy efficiency, ideal for those AI fusion sensing applications at the edge. For the session: Processing at the Edge (joint with ISS).

Processing of real time, bursty and high compute iToF data on the edge (Invited), Cyrus Bamji, Microsoft Corporation (United States)

In indirect time of flight (iToF), a depth frame is computed from multiple image captures (often 6-9 captures) which are composed together and processed using nonlinear filters. iToF sensor output bandwidth is high, and inside the camera special purpose DSP hardware significantly improves power, cost and the shuffling around of large amounts of data. Usually only a small percentage of depth frames need application-specific processing and highest-quality depth data, both of which are difficult to compute within the limited hardware resources of the camera. Due to the sporadic nature of these compute requirements, hardware utilization is improved by offloading this bursty compute to outside the camera. Many applications in the industrial and commercial space have a real-time requirement and may even use multiple cameras that need to be synchronized. These real-time requirements coupled with the high bandwidth from the sensor make offloading the compute purely into the cloud difficult. Thus, in many cases the compute edge can provide a goldilocks zone for this bursty, high-bandwidth and real-time processing requirement. For the session: Processing at the Edge (joint with ISS).
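For readers unfamiliar with how depth is computed from multiple iToF captures, a textbook four-phase scheme (a generic illustration, not Microsoft's pipeline) recovers depth from the phase of four correlation samples:

```python
import numpy as np

C = 299_792_458.0  # speed of light in m/s

def itof_depth(a0, a90, a180, a270, f_mod):
    """Depth from four phase-shifted correlation samples (textbook 4-tap
    iToF scheme). Each sample a_theta is proportional to cos(phi - theta),
    where phi is the round-trip phase at modulation frequency f_mod."""
    phase = np.arctan2(a90 - a270, a0 - a180)  # recovers phi
    phase = np.mod(phase, 2 * np.pi)           # fold into [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)

# Synthetic check: a target at 2.0 m, 20 MHz modulation
f_mod = 20e6
phi = 4 * np.pi * f_mod * 2.0 / C
samples = [np.cos(phi - t) for t in (0.0, np.pi / 2, np.pi, 3 * np.pi / 2)]
a0, a90, a180, a270 = (np.array([s]) for s in samples)
print(itof_depth(a0, a90, a180, a270, f_mod))
```

Real pipelines add the nonlinear filtering, multi-frequency unwrapping and calibration the talk alludes to, which is where the bursty compute comes from.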

A 2.2um three-wafer stacked back side illuminated voltage domain global shutter CMOS image sensor, Shimpei Fukuoka, OmniVision (Japan)

Due to the emergence of machine vision, augmented reality (AR), virtual reality (VR), and automotive connectivity in recent years, the necessity for chip miniaturization has grown. These emerging, next-generation applications, which are centered on user experience and comfort, require their constituent chips, devices, and parts to be smaller, lighter, and more accessible. AR/VR applications especially demand smaller components due to their primary use in wearable technology, in which the user experience would be negatively impacted by large features and bulk. Therefore, chips and devices intended for next-generation consumer applications must be small and modular, to support module miniaturization and promote user comfort. To enable the chip miniaturization required for technological advancement and innovation, we developed a 2.2μm pixel pitch Back Side Illuminated (BSI) Voltage Domain Global Shutter (VDGS) image sensor with three-wafer stacked technology. Each wafer is connected by Stacked Pixel Level Connection (SPLC), and the middle and logic wafers are connected using a Back side Through Silicon Via (BTSV). The separation of the sensing, charge storage, and logic functions to different wafers allows process optimization in each wafer, improving overall chip performance. The peripheral circuit region is reduced by 75% compared to the previous product without degrading image sensor performance. For the session: Processing at the Edge (joint with COIMG).

A lightweight exposure bracketing strategy for HDR imaging without access to camera raw, Jieyu Li1, Ruiwen Zhen2, and Robert L. Stevenson1; 1University of Notre Dame and 2SenseBrain Technology (United States)
A lightweight learning-based exposure bracketing strategy is proposed in this paper for high dynamic range (HDR) imaging without access to camera RAW. Some low-cost, power-efficient cameras, such as webcams, video surveillance cameras, sport cameras, mid-tier cellphone cameras, and navigation cameras on robots, can only provide access to 8-bit low dynamic range (LDR) images. Exposure fusion is a classical approach to capture HDR scenes by fusing images taken with different exposures into an 8-bit tone-mapped HDR image. A key question is what the optimal set of exposure settings is to cover the scene dynamic range and achieve a desirable tone. The proposed lightweight neural network predicts these exposure settings for a 3-shot exposure bracketing, given the input irradiance information from 1) the histograms of an auto-exposure LDR preview image, and 2) the maximum and minimum levels of the scene irradiance. Without the processing of the preview image streams, and the circuitous route of first estimating the scene HDR irradiance and then tone-mapping to 8-bit images, the proposed method gives a more practical HDR enhancement for real-time and on-device applications. Experiments on a number of challenging images reveal the advantages of our method in comparison with other state-of-the-art methods qualitatively and quantitatively.
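The classical exposure fusion the abstract builds on can be sketched with a simple well-exposedness weighting (a Mertens-style simplification, not the paper's learned bracketing network; the weighting and names are our own):

```python
import numpy as np

def exposure_fusion(images, sigma=0.2):
    """Blend differently exposed LDR frames using a Gaussian
    'well-exposedness' weight centered at mid-gray.
    `images` is a list of float arrays normalized to [0, 1]."""
    stack = np.stack(images)                        # (n_exposures, H, W)
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # per-pixel normalization
    return (weights * stack).sum(axis=0)

# Toy 3-shot bracket: under-, mid-, and over-exposed frames
dark = np.full((2, 2), 0.05)
mid = np.full((2, 2), 0.50)
bright = np.full((2, 2), 0.95)
fused = exposure_fusion([dark, mid, bright])  # dominated by the mid frame
```

Fusion of this kind only works if the bracket actually covers the scene's dynamic range, which is exactly the exposure-selection problem the paper's network addresses.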


Tamron 70-300mm f4.5-6.3 Di III review

Cameralabs

The Tamron 70-300mm f4.5-6.3 is Tamron’s first lens for Nikon Z-mount, and it's also available for Sony mirrorless. Find out how it measures up in my full review!…


Nikon is developing the NIKKOR Z 85mm f/1.2 S, a fast mid-telephoto prime lens, and the NIKKOR Z 26mm f/2.8, a slim wide-angle prime lens for the Nikon Z mount system

Nikon | Imaging Products


Panasonic Lumix S5 II review

Cameralabs

The Panasonic Lumix S5 II is a full-frame mirrorless camera with 24 Megapixels, 6k video, and in a first for a Lumix, PDAF! Find out everything in my in-depth review!…


ESPROS LiDAR Tech Day Jan 30, 2023

Image Sensors World

Information and registration:

The TOF & LiDAR Technology Day, powered by ESPROS, aims to give engineers and designers a valuable hands-on, informative dive into the huge potential of TOF and LiDAR applications and ecosystems. Participants are assured of an eye-opening immersion into the ever-expanding world of Time-of-Flight and LiDAR.

Expert speakers will be on hand to guide and inform everyone taking part: Danny Kent, PhD, Co-Founder & President, Mechaspin; Beat De Coi, CEO & Founder of ESPROS Photonics AG; Len Cech, Executive Director, Safety Innovations at Joyson Safety Systems; and Kurt Brendley, COO & Co-Founder, PreAct.
The TOF & LIDAR Technology Day takes place on January 30, 2023 in San Carlos, California, USA.
