2018 Harvest Imaging Forum Agenda

Image Sensors World

The 6th Harvest Imaging Forum will be held on Dec. 6-7, 2018 in Delft, the Netherlands. The agenda includes two topics, each taking one day:

"Efficient embedded deep learning for vision applications" by Prof. Marian VERHELST (KU Leuven, Belgium)
Abstract:

Deep learning has become popular for smart camera applications, showing unprecedented recognition, tracking and segmentation capabilities. Deep learning, however, comes with significant computational complexity, which until recently made it feasible only on power-hungry server platforms. In recent years, however, we have seen a trend towards embedded processing of deep learning networks. It is crucial to understand that this evolution is not enabled by novel processing architectures or novel deep learning algorithms alone. The breakthroughs clearly come from a close co-optimization between algorithms and implementation architectures.

After an introduction to deep neural network processing and its implementation challenges, this forum will give an overview of recent trends enabling efficient network evaluation on embedded platforms such as smart cameras. This discussion involves a tight interplay between newly emerging hardware architectures and emerging implementation-driven algorithmic innovations. We will review a wide range of recent techniques that make learning algorithms implementation-aware, yielding drastically improved inference efficiency. This forum will give the audience a better understanding of the opportunities and implementation challenges of embedded deep learning, and enable them to follow research on deep learning processors.
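One widely cited example of such an implementation-aware technique (not detailed in the abstract itself) is weight quantization, which trades a small accuracy loss for large savings in memory and arithmetic cost. Below is a minimal NumPy sketch of symmetric 8-bit quantization on illustrative random weights; the array and scale scheme are assumptions for demonstration, not the forum's specific method.

```python
import numpy as np

# Illustrative float32 "network weights" (random stand-in data).
rng = np.random.default_rng(0)
weights = rng.normal(0.0, 0.5, size=1000).astype(np.float32)

# Symmetric linear quantization: map the largest |weight| to the
# int8 range, so each weight is stored in 1 byte instead of 4.
scale = np.max(np.abs(weights)) / 127.0
q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)

# Dequantize to estimate the accuracy cost of the compression.
deq = q.astype(np.float32) * scale
mse = float(np.mean((weights - deq) ** 2))

print(f"storage: {weights.nbytes} B -> {q.nbytes} B")
print(f"mean squared quantization error: {mse:.2e}")
```

The 4x storage reduction is the simplest win; on embedded hardware, integer arithmetic also tends to be cheaper and more energy-efficient than floating point, which is why quantization features prominently in algorithm/architecture co-design.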


"Image and Data Fusion" by Prof. Wilfried PHILIPS (Ghent University, Belgium)
Abstract:

Large-scale video surveillance networks are now commonplace, and smart cameras and advanced video analytics have been introduced to alleviate the resulting problem of information overload. However, the true power of video analytics comes from fusing information from various cameras and sensors, with applications such as people tracking over wide areas or inferring 3D shape from multi-view video. Fusion also helps to overcome the limitations of individual sensors. For instance, thermal imaging helps to detect pedestrians in difficult lighting conditions, while pedestrians are more easily (re)identified in RGB images. Automotive sensing and traffic control applications are another major driver for sensor fusion. Typical examples include lidar, radar and depth imaging to complement optical imaging. In fact, as the spatial resolution of lidar and radar gradually increases, these devices can now produce image-like outputs.

The workshop will introduce the theoretical foundations of sensor fusion and the various options for fusion, ranging from fusion at the pixel level, through decision fusion, to more advanced cooperative and assistive fusion. It will address handling heterogeneous data, e.g., video with different spatial, temporal or spectral resolution and/or representing different physical properties. It will also address fusion frameworks to create scalable systems based on communicating smart cameras and distributed processing. Such cooperative and assistive fusion facilitates the integration of cameras in the Internet-of-Things.
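To make the "fusion at the pixel level" end of that spectrum concrete, here is a toy NumPy sketch of the simplest case: per-pixel weighted averaging of two co-registered frames, such as a visible-light and a thermal image. The small arrays and the weight value are hypothetical stand-ins, not anything prescribed by the workshop.

```python
import numpy as np

# Hypothetical co-registered frames: a grayscale visible image and
# a thermal image, both normalized to [0, 1].
rng = np.random.default_rng(1)
visible = rng.random((4, 4)).astype(np.float32)
thermal = rng.random((4, 4)).astype(np.float32)

def fuse_pixels(a: np.ndarray, b: np.ndarray, w: float = 0.5) -> np.ndarray:
    """Pixel-level fusion by per-pixel weighted averaging."""
    return w * a + (1.0 - w) * b

# Favor the thermal channel, e.g., for a night-time pedestrian scene.
fused = fuse_pixels(visible, thermal, w=0.3)
print(fused.shape)
```

Decision fusion, by contrast, would run a detector on each modality separately and combine only the resulting detections, which is one reason the pixel/decision distinction matters for where computation and bandwidth are spent in a smart camera network.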

