Before providing an overview of the content of this regular issue, we are pleased to mention that two volumes will be published in 2016 (volumes 11 and 12), with a page budget of 800 pages per volume. This arrangement has been made with Springer’s publishing editor in order to address the backlog of online-first published articles by printing a total of 8 regular and special issues in 2016. Issue 4 of volume 10 in 2015 will consist of two special issues incorporating real-time image processing articles related to “Robot Vision” and “Smart Cities”.

We are delighted to state that the recently released JRTIP impact factor in the Thomson Reuters Journal Citation Reports® has increased to 2.02. This is a clear indication of the growing reputation and recognition of JRTIP, in particular within the image processing community.

Here we wish to bring to your attention the upcoming SPIE Photonics Europe Conference on Real-time Image and Video Processing, to be held in Brussels in April 2016. The call for papers for this conference appears in the back matter of this issue and can also be viewed at http://spie.org/EPE/conferencedetails/real-time-image-and-video-processing. We encourage the real-time imaging community to submit manuscripts to this conference.

This regular issue comprises eight papers. The first paper by Plaza et al. analyzes remotely sensed hyperspectral images in three stages: reduction of dimensionality, automatic identification of spectral signatures (endmembers), and estimation of the fractional abundance of each endmember for each pixel. This process allows sub-pixel analysis of hyperspectral images, but is computationally expensive because of the data dimensionality. The discussed GPU implementation offers the computational power needed for real-time performance. It is realized using the Compute Unified Device Architecture (CUDA) on an NVIDIA™ GTX 580 GPU, achieving real-time unmixing performance in two case studies on data provided by NASA’s Airborne Visible/Infrared Imaging Spectrometer (AVIRIS).
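The final abundance-estimation stage described above can be illustrated with a minimal, CPU-only sketch under the linear mixing model. All names and spectra below are hypothetical; the paper's GPU implementation is far more involved:

```python
import numpy as np

def unmix_pixel(endmembers, pixel):
    """Estimate fractional abundances of each endmember for one pixel
    under the linear mixing model (least squares, non-negativity clipped).

    endmembers: (bands, p) matrix of p spectral signatures
    pixel:      (bands,) observed spectrum
    """
    abundances, *_ = np.linalg.lstsq(endmembers, pixel, rcond=None)
    abundances = np.clip(abundances, 0.0, None)      # enforce non-negativity
    s = abundances.sum()
    return abundances / s if s > 0 else abundances   # sum-to-one constraint

# Hypothetical 4-band scene with two endmembers ("soil" and "vegetation")
E = np.array([[0.9, 0.1],
              [0.8, 0.2],
              [0.3, 0.7],
              [0.2, 0.9]])
mixed = 0.25 * E[:, 0] + 0.75 * E[:, 1]   # pixel that is a 25%/75% mixture
print(unmix_pixel(E, mixed))               # ≈ [0.25, 0.75]
```

In a real hyperspectral cube this solve is repeated for every pixel, which is exactly the per-pixel parallelism that maps well onto a GPU.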

The second paper by Cao et al. discusses the high-performance computation and data throughput of graphics processing units (GPUs) for data-parallel implementations of image processing algorithms. The paper focuses on computation-to-core mapping strategies, investigating the efficiency and scalability of a robust facet image modeling algorithm implemented on GPUs and achieving a significant performance gain over the standard pixel-wise mapping scheme. Thorough performance comparisons across two different mapping schemes show the impact of the level of parallelism on GPUs, and two approaches for optimizing future image processing applications on such platforms are offered.

The third paper by Liu et al. addresses the high bandwidth demands of video coding. Any loss during transmission degrades the end user’s real-time quality of experience (QoE), a vulnerability that originates from the prediction structure used in H.264/MVC encoders. This paper proposes an interleaving scheme for the encoder’s prediction structure that prioritizes pictures using a mathematical model and accommodates any group of pictures (GOP) size and any number of views. The proposed scheme is validated and verified with the H.264/MVC reference software JMVC 8.5. The results obtained indicate that the proposed interleaving scheme significantly outperforms the non-interleaved scheme.

The fourth paper by Kasamwattanarote et al. describes how video abstraction supports the reviewing of video surveillance content for security monitoring, a time-consuming and time-critical task. A real-time video surveillance summarization framework, based on compact representations of moving objects in space–time, is introduced to minimize the time required for such tasks. Individual objects locally segmented over time within video sequences are regarded as “tunnels”. To abstract continuous video into shorter video snippets without losing selected targets, the paper investigates three real-time algorithms: direct shift collision detection (DSCD) for fast shifting of tunnels, early trajectory searching applied with the same DSCD technique, and direct distance transform to measure trajectory similarity between tunnels and a user’s query. Background subtraction, an essential step for identifying each individual object, employs dynamic region adaptation (DRA) to select the best foreground for each object before generating a tunnel, thereby increasing accuracy. The proposed framework performs in real time without losing major events of the original video stream.
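As a rough illustration of the background-subtraction step that precedes tunnel generation, the following is a minimal running-average sketch. The DRA technique in the paper is considerably more sophisticated; the learning rate, threshold, and toy frames here are purely illustrative:

```python
import numpy as np

def update_background(background, frame, alpha=0.05):
    """Running-average background model: drift slowly toward the current frame."""
    return (1.0 - alpha) * background + alpha * frame

def foreground_mask(background, frame, threshold=25.0):
    """Mark pixels that differ from the background by more than `threshold`."""
    return np.abs(frame.astype(float) - background) > threshold

# Hypothetical 4x4 grayscale frames: a static scene, then a bright object
bg = np.full((4, 4), 50.0)
frame = bg.copy()
frame[1:3, 1:3] = 200.0                    # the "moving object"
mask = foreground_mask(bg, frame)
print(int(mask.sum()))                     # → 4 foreground pixels
bg = update_background(bg, frame)          # model slowly absorbs the change
```

Stacking such per-frame masks over time yields the space–time volumes from which tunnels are carved.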

The fifth paper by Igual et al. presents investigations of log-polar imaging as a biologically inspired visual representation that offers advantages in computer vision and robotics. Software-based mappers are cheap, flexible, and most commonly used for this purpose, but their conversion costs inhibit their use in, e.g., real-time robotics applications with limited processing power. Parallel solutions on affordable modern multi-core architectures are devised, implemented, and tested to overcome this drawback and make log-polar imaging more generally available. The experimental results reveal achievable speed-up factors in the range of 10 to 20 for generating log-polar images from large gray-level or color Cartesian images using commodity graphics processors and contemporary multi-core processors. Three different approaches are explored and compared with respect to several criteria and constraints.
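The Cartesian-to-log-polar conversion whose cost the paper seeks to reduce can be sketched naively as follows (nearest-neighbor sampling, hypothetical grid sizes); the nested per-pixel loops are precisely the kind of work that parallelizes well:

```python
import numpy as np

def cartesian_to_logpolar(img, n_rings=32, n_wedges=64, r_min=1.0):
    """Map a grayscale image to a log-polar grid by nearest-neighbor sampling.

    Ring radii grow exponentially from r_min to the image half-width,
    mimicking the retina-like foveal sampling of log-polar imaging.
    """
    h, w = img.shape
    cy, cx = h / 2.0, w / 2.0
    r_max = min(cy, cx)
    out = np.zeros((n_rings, n_wedges), dtype=img.dtype)
    for u in range(n_rings):
        # log-spaced radius for ring u
        r = r_min * (r_max / r_min) ** (u / (n_rings - 1))
        for v in range(n_wedges):
            theta = 2.0 * np.pi * v / n_wedges
            y = int(round(cy + r * np.sin(theta)))
            x = int(round(cx + r * np.cos(theta)))
            if 0 <= y < h and 0 <= x < w:
                out[u, v] = img[y, x]
    return out

img = np.arange(64 * 64, dtype=np.float64).reshape(64, 64)
lp = cartesian_to_logpolar(img)
print(lp.shape)  # (32, 64)
```

Every output cell is computed independently of the others, so the double loop can be distributed across cores or GPU threads with no synchronization.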

The sixth paper by Kumar et al. presents an FPGA-based JPEG encryptor involving block and symbol scrambling that meets four requirements: (1) temporal security, (2) preservation of the overall bit rate with negligible overhead, (3) compliance with the JPEG file format, and (4) a fast, low-complexity implementation for real-time applications. Implemented on Stratix II and IV FPGAs, the design requires only 1 % more hardware than a basic JPEG encoder while remaining competitive with unencrypted implementations reported in the literature. The encryption module can be used in image transmission systems for secure image coding and tactical communications.
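A toy sketch of the block-scrambling idea follows: the positions of 8 × 8 blocks are permuted with a key-seeded permutation while block contents are untouched, so a downstream JPEG pipeline still sees valid blocks. The paper's symbol-level scrambling and JPEG-compliance details are omitted, and the key, block size, and helper names here are purely illustrative:

```python
import numpy as np

def scramble_blocks(img, key, block=8):
    """Permute the 8x8 blocks of an image with a key-seeded permutation."""
    h, w = img.shape
    by, bx = h // block, w // block
    perm = np.random.default_rng(key).permutation(by * bx)
    blocks = [img[i*block:(i+1)*block, j*block:(j+1)*block]
              for i in range(by) for j in range(bx)]
    out = np.zeros_like(img)
    for dst, src in enumerate(perm):          # destination dst holds block src
        i, j = divmod(dst, bx)
        out[i*block:(i+1)*block, j*block:(j+1)*block] = blocks[src]
    return out, perm

def unscramble_blocks(img, perm, block=8):
    """Invert the permutation, restoring the original block order."""
    h, w = img.shape
    bx = w // block
    out = np.zeros_like(img)
    for dst, src in enumerate(perm):
        si, sj = divmod(int(src), bx)
        i, j = divmod(dst, bx)
        out[si*block:(si+1)*block, sj*block:(sj+1)*block] = \
            img[i*block:(i+1)*block, j*block:(j+1)*block]
    return out

img = np.arange(32 * 32, dtype=np.uint8).reshape(32, 32)
enc, perm = scramble_blocks(img, key=42)
dec = unscramble_blocks(enc, perm)
print(np.array_equal(dec, img))  # True
```

Because only block positions change, the bit rate of the compressed stream is essentially preserved, which is the point of requirement (2).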

The seventh paper by Hisia et al. proposes a noise removal algorithm that relies on noise detection, motion detection, and an adaptive filter. The experimental results show that this method is effective in reducing large-region TV noise while maintaining the original image information. In other words, it provides better picture quality while demanding less processing time than existing approaches.

The eighth and last paper of this issue, by Sarawadekar et al., presents an FPGA-based architecture for the back-end units of an ultrasound imaging system: digital scan conversion (DSC) processing 132 raw frames per second and speckle reduction imaging (SRI) de-speckling images of size 640 × 480 at 698 fps. Speckle noise, inherent to ultrasonography (USG) systems, and motion blur arising when imaging non-stationary organs (e.g., the heart) demand high-speed VLSI designs of the USG back-end. The design is prototyped on an FPGA platform using parallel-pipelining techniques, achieving sufficient throughput at low power and enabling real-time performance.

Finally, we would like to thank all the associate editors and reviewers, whose contributions were instrumental in the significant increase of the JRTIP impact factor, as well as our guest editors for identifying high-quality papers in new and emerging application areas related to various aspects of real-time image and video processing. In addition, we express our gratitude to the Springer editorial and production offices for their support in various matters related to the operation of JRTIP.