Researchers Introduce New Method Of Image Retargeting
Image retargeting alters the aspect ratio of an image while aiming to preserve its content and limit distortion. Because pictures and displays come in such a wide range of aspect ratios, fast, high-quality retargeting solutions are increasingly important.
Samsung R&D Institute's Daniel Valdez-Balderas, Oleg Muraveynyk, and Timothy Smith proposed a retargeting approach that uses content-aware cropping to quantify and reduce warping distortions. The study was presented at the IEEE International Conference on Image Processing (IEEE ICIP 2021) in Anchorage, Alaska, in September 2021. In the proposed pipeline, a significance map of the source image is first constructed using deep semantic segmentation and saliency detection models. A preliminary warping mesh is then computed using axis-aligned deformations, with a distortion metric ensuring that warping deformations stay small. Finally, a content-aware cropping algorithm produces the retargeted image. The researchers evaluated this approach with a user study based on the RetargetMe benchmark. Experiments show that the method outperforms current approaches while requiring only about a quarter of the processing time.
Retargeting is a technique for changing an image's aspect ratio while preserving crucial content and minimizing visible distortion. Over the last two decades, the use of visual media on mobile devices has skyrocketed. Simultaneously, the diversity of screens continues to expand, now including dynamically changing form factors such as foldable phones. This variety of media and display sizes makes retargeting highly relevant, and in the age of mobile computing, techniques that support real-time, interactive use on handheld devices are especially valuable.
Using deep semantic segmentation and saliency detection models, the researchers construct a significance map. A source image is first fed into a segmentation model. The method then tests how important the model's detections are: if the fraction of pixels belonging to non-background objects exceeds a certain threshold, the algorithm assigns a significance score of 1 to pixels with detections and 0 to all others.
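The thresholding step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation; the names `background_id` and `min_fraction`, and the all-zero fallback, are assumptions (the paper combines segmentation with a saliency model, which is omitted here).

```python
import numpy as np

def significance_map(labels: np.ndarray, background_id: int = 0,
                     min_fraction: float = 0.05) -> np.ndarray:
    """Build a binary significance map from a semantic-segmentation label map.

    `background_id` marks background pixels; `min_fraction` is a hypothetical
    value for the paper's detection-importance threshold.
    """
    # Fraction of pixels belonging to non-background (detected) objects.
    detected = labels != background_id
    if detected.mean() > min_fraction:
        # Detections cover enough of the frame: mark them as significant (1),
        # everything else as insignificant (0).
        return detected.astype(np.float32)
    # Too few detections: return a uniform map. (The actual method would
    # fall back on saliency detection here, which this sketch omits.)
    return np.zeros_like(labels, dtype=np.float32)
```

For example, a label map where a detected object covers more than 5% of the pixels yields a binary mask of that object; otherwise the map is uniformly zero.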
Following the creation of the significance map, an intermediate warping mesh is computed by an algorithm that determines the optimal warp. The optimal warp is defined as the axis-aligned deformation that, when applied to the source image, produces an image as close to the target size as possible without introducing excessive distortions; achieving this requires a metric that quantifies distortion. If the intermediate warp does not reach the target size, content-aware cropping is applied. The cropped regions are distributed between the left and right sides of the intermediately warped image so as to remove as little significant content as possible. This is accomplished by constructing a one-dimensional version of the significance map and distributing the cropping so that the removed area under the curve is minimized. For efficiency, the actual cropping is performed with a final warping mesh that combines the intermediate warping and the cropping into a single operation.
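The crop-distribution step can be illustrated with a short sketch. This is an assumption-laden simplification, not the paper's algorithm: `col_significance` stands in for the one-dimensional significance map (per-column sums of the 2-D map), and the exhaustive search over left/right splits is just the most direct way to minimize the removed area under the curve.

```python
import numpy as np

def distribute_crop(col_significance: np.ndarray, total_crop: int):
    """Split `total_crop` columns between the left and right image edges
    so that the significance removed (area under the 1-D curve) is minimal.
    Returns a (left, right) pair of column counts.
    """
    best = (0, total_crop)
    best_cost = float("inf")
    for left in range(total_crop + 1):
        right = total_crop - left
        # Significance lost by cropping `left` columns on the left edge...
        cost = col_significance[:left].sum()
        # ...plus `right` columns on the right edge.
        if right > 0:
            cost += col_significance[-right:].sum()
        if cost < best_cost:
            best_cost = cost
            best = (left, right)
    return best
```

With a curve like `[5, 0, 0, 0, 1, 5]` and two columns to remove, the search prefers cropping both from the left after the high-significance first column is unavoidable on one side, choosing whichever split removes the least total significance.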
The researchers proposed a fully automated retargeting technique that combines deep neural networks for significance-map construction, the efficiency of a warping algorithm, and a novel way of quantifying and limiting warping distortions. By adjusting a single parameter, a distortion threshold, the system can perform warping-only retargeting, cropping-only retargeting, or a continuous range of hybrid modes in between. With the distortion threshold for hybrid warp-cropping fine-tuned, it delivers state-of-the-art quality, as demonstrated in a user study on the RetargetMe benchmark. Furthermore, the system runs in a fraction of the time of prior retargeting methods and has been tested on mobile devices, where it retargets images in real time after a preprocessing phase.