IRW-MEF: Informative random walk for multi-exposure image fusion

https://doi.org/10.1016/j.eswa.2025.127147

Highlights

  • An IRW model is established for multi-exposure image fusion.
  • Utilize neighbor information to boost boundary and structure preservation.
  • Design three metrics to reveal the scene details and generate fused images.
  • The proposed framework is applied to a variety of visual applications.

Abstract

In multi-exposure image fusion, loss of details and color distortion cause unpleasant visual effects. Aiming to retain the consistency of details and recover faithful color from poorly exposed regions, a novel multi-exposure image fusion approach is presented using the informative random walk (IRW) model. Unlike the previous single-layer structure, the IRW model comprises a double-layer graph topology. First, the double-layer graph topology is designed to include attribute and intensity layers. Second, common neighbor information is introduced when walking across the two layers, which enhances the boundary adherence and structure preservation of the fused image. Finally, a multi-exposure image fusion framework is established based on the IRW model by innovatively increasing the number of quality measures to three. The proposed framework is able to reveal the details in the brightest/darkest regions of the scene under different exposures and to avoid color distortion in the fused image. Experimental results on public datasets, image post-processing, and industrial sites demonstrate the advantages of our method from both qualitative and quantitative perspectives. The code will be available at https://github.com/BOYang-pro/IRW-MEF.

Introduction

Image fusion is a fundamental research topic in vision applications, including industrial imaging (Huang et al., 2020, Pan et al., 2020), multimodal imaging (Guo et al., 2024, Jie et al., 2023) and remote sensing imaging (Wang et al., 2023, Zhang et al., 2024). Image fusion aims to generate a fused image containing important information from multiple images. Due to the limited capture range of ordinary imaging sensors, a single image usually suffers from uneven light conditions, such as backlighting, over-bright light sources, and nighttime scenes. It is desirable to indirectly enhance the visual quality of an image by means of multi-exposure image fusion (MEF). As an important branch of image fusion, MEF is performed to reconstruct an image with high visual fidelity from a stack of low dynamic range (LDR) images in different exposure times. The fused image tends to be more visually observable and suitable for human perception than the source images.
In recent decades, many scholars have extensively studied MEF algorithms, and this paper classifies them into four categories: pixel-based, patch-based, transform domain-based, and data-driven methods. Pixel-based methods fuse input images at the pixel level with relatively low computational complexity; however, they may produce overly dark or bright fused images because local contrast variations can be neglected. Patch-based methods handle information at the patch level, reducing artifacts and maintaining edge sharpness, but they face problems such as loss of color details. Transform domain-based methods transform input images into a specific domain for fusion and then revert to the spatial domain, but the transformation process may increase computation time and cause color distortion. Data-driven methods automatically learn fusion strategies with various network frameworks; however, they involve a trade-off between performance and computational cost. Since most of the above methods focus on multiple images, they often show suboptimal results for image pairs with only low and high exposure. Specifically, most existing MEF algorithms cannot effectively preserve relative brightness when highlight regions in the low-exposure image are darker than shadow regions in the over-exposed image. Therefore, achieving brightness consistency and recovering the actual color in under-exposed and over-exposed areas has become an urgent problem.
Existing MEF methods can achieve satisfactory fusion performance, but some obstacles remain. On the one hand, current methods struggle to handle extreme exposure regions effectively (e.g., camera truncation effects in overexposed images and extremely dark areas in underexposed images). Specifically, the dark regions in under-exposed images can significantly exacerbate visual artifacts, while the camera truncation effect in overexposed regions may lead to noticeable color distortion in the fused result. On the other hand, although existing methods can maintain overall brightness consistency in the fused image, they still have limitations in balancing local contrast. For example, when the fusion result leans towards under-exposed regions, the color saturation may be low, reducing visual quality. Consequently, MEF algorithms should be designed to achieve satisfactory visual effects in both under-exposed and over-exposed regions.
To visually present the performance of state-of-the-art MEF algorithms, comparative results are shown in Fig. 1. As can be observed, Fig. 1(e)–(g) suffer from a certain degree of detail loss and color distortion. It follows that existing MEF algorithms cannot simultaneously deliver detail preservation and high fidelity. In MEF, maintaining detail consistency and fusion fidelity at the same time is a challenging task, and balancing the two is the core concern of this paper.
To solve these issues, this paper proposes a novel double-layer informative random walk model for multi-exposure image fusion (IRW-MEF). The model is designed as a double-layer graph topology with an attribute layer and an intensity layer to achieve high color saturation and texture consistency. Specifically, the source images contain a wealth of potential information, and a simple random walk model may cause information loss. Therefore, an efficient structural similarity function is proposed to capture fine scene textures under different exposures by calculating the informativeness of intra-layer nodes and exploiting potential common neighbor information. Additionally, considering the illumination inconsistencies in under-/over-exposed regions, a local contrast function is designed to improve the overall color contrast and obtain visually comfortable fusion results. Finally, a novel MEF framework is established to optimize the estimation of weighted probability maps. The framework not only restores high-quality scene details but also prevents color and noise artifacts. Extensive experiments show that the proposed method is superior to state-of-the-art fusion methods.
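Whatever form the weight probability maps take, a weight-map-based MEF pipeline such as this one ends with a per-pixel weighted average of the exposure stack. The following sketch shows only that generic final step (the function name and shapes are our own illustration, not the paper's notation); the IRW model's contribution lies in how the weights are estimated, which is not reproduced here.

```python
import numpy as np

def fuse_with_weight_maps(stack, weights, eps=1e-8):
    """Fuse an exposure stack with per-pixel weight maps.

    stack   : (N, H, W, C) float array of N source exposures in [0, 1]
    weights : (N, H, W) non-negative per-pixel weights, e.g. the weight
              probability maps an IRW-style model would estimate
    """
    # normalize the weights so they sum to 1 at every pixel
    w = weights / (weights.sum(axis=0, keepdims=True) + eps)
    # per-pixel weighted average over the N exposures
    return (w[..., None] * stack).sum(axis=0)

# toy example: two constant "exposures" fused with uniform weights
stack = np.stack([np.zeros((4, 4, 3)), np.ones((4, 4, 3))])
weights = np.ones((2, 4, 4))
fused = fuse_with_weight_maps(stack, weights)  # ~0.5 everywhere
```

With uniform weights this reduces to a plain average; non-uniform weight maps shift each pixel toward the better-exposed source.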
In summary, the contributions of this paper are concluded mainly in four aspects:
  • 1.
A novel double-layer IRW model is designed with two walking strategies: an inter-layer transition strategy combined with the previously proposed intra-layer one. The inter-layer strategy selects the next node to visit by considering common neighbor information, i.e., the informativeness of the currently visited node in the intensity layer.
  • 2.
Three quality metrics are designed, namely spatial consistency, structural consistency, and local contrast measure functions. They are applied to reveal the scene details and generate visually comfortable fused images.
  • 3.
A novel IRW-MEF framework is established to model a general optimization problem with the three metrics in (2) to estimate optimal weight probability maps, which provides a convenient way to solve MEF tasks.
  • 4.
The proposed framework consistently outperforms representative MEF approaches. Besides, the MEF framework is applied to different visual applications, such as monocular depth estimation, image segmentation, edge detection, and industrial applications.
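To make the second contribution concrete, the sketch below computes three per-pixel quality maps of the kinds named above. These are common MEF-style stand-ins (well-exposedness for spatial consistency, gradient magnitude for structural consistency, a discrete Laplacian for local contrast) used purely for illustration; the paper's exact measure functions in Section 4 differ.

```python
import numpy as np

def quality_measures(img, sigma=0.2):
    """Toy stand-ins for three per-pixel quality measures.

    img : (H, W) grayscale image in [0, 1]. These proxies are only
    illustrative; they are not the paper's exact definitions.
    """
    # spatial consistency proxy: well-exposedness, i.e. a Gaussian
    # falloff with distance from mid-gray
    spatial = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
    # structural consistency proxy: gradient magnitude
    gy, gx = np.gradient(img)
    structural = np.hypot(gx, gy)
    # local contrast proxy: absolute 4-neighbor Laplacian response
    contrast = np.abs(4 * img - np.roll(img, 1, 0) - np.roll(img, -1, 0)
                      - np.roll(img, 1, 1) - np.roll(img, -1, 1))
    return spatial, structural, contrast

# a flat mid-gray image: perfectly exposed, but no structure or contrast
spatial, structural, contrast = quality_measures(np.full((8, 8), 0.5))
```

Combining such maps multiplicatively or additively per source image yields the raw weights that a fusion rule, or the IRW optimization here, then refines.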
The organization of the remainder of this paper is as follows. Section 2 briefly reviews the existing MEF methods and random walk. In Section 3, the IRW model and its walking strategy are demonstrated. Section 4 introduces a novel IRW model for multi-exposure image fusion. The experiment results and analysis are in Section 5. Section 6 summarizes the paper.



Preliminaries

This section briefly reviews the existing MEF methods and introduces the principles of random walk.
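As background for the random walk principles reviewed here, the sketch below builds the row-stochastic transition matrix of a classic random walk on an image graph, with the Gaussian intensity-difference edge weighting common in random-walk image segmentation (Grady, 2006). The function name and the toy graph are our own illustration.

```python
import numpy as np

def transition_matrix(intensities, edges, beta=90.0):
    """Row-stochastic transition matrix of a random walk on an image graph.

    intensities : (n,) node intensities in [0, 1]
    edges       : list of undirected edges (i, j), e.g. a 4-neighbor grid
    Edge weights use the Gaussian form w_ij = exp(-beta * (g_i - g_j)^2),
    so the walker prefers moving between similar pixels.
    """
    n = len(intensities)
    W = np.zeros((n, n))
    for i, j in edges:
        w = np.exp(-beta * (intensities[i] - intensities[j]) ** 2)
        W[i, j] = W[j, i] = w
    d = W.sum(axis=1, keepdims=True)        # node degrees
    return W / np.where(d == 0, 1, d)       # P[i, j] = w_ij / d_i

# 3-node path graph: nodes 0 and 1 are similar, node 2 is dissimilar
P = transition_matrix(np.array([0.10, 0.12, 0.90]), [(0, 1), (1, 2)])
```

From node 1 the walker almost always steps to the similar node 0 rather than across the large intensity edge to node 2, which is exactly the boundary-respecting behavior that random-walk fusion and segmentation exploit.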

Methodology of cross-layer informative random walk

To overcome the shortcomings of the RW model, this paper proposes a novel cross-layer informative random walk algorithm for the image fusion task. This section demonstrates the construction of the IRW model and the design of the transition probability based on the informativeness of nodes, and gives the related optimization solution. The proposed IRW model is described in detail below. The frequently used symbols are summarized in Table 1.

Informative random walk for multi-exposure image fusion

The proposed informative random walk for multi-exposure image fusion (IRW-MEF) method is described in detail in this section. First, Section 4.1 gives the general framework and specific measurement functions. Second, the energy optimization process of the overall framework is shown in Section 4.2. The details are as follows.
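Since Section 4.2 casts fusion as an energy optimization over weight probability maps, the sketch below shows the standard random-walk-style closed form for a generic quadratic energy of that shape: a graph-Laplacian smoothness term plus data terms that pull the map toward per-pixel guidance from the quality measures. This is a textbook formulation, not the paper's exact Eq. (19); the variable names are assumptions.

```python
import numpy as np

def optimize_weight_map(L, guides, lambdas):
    """Minimize  E(p) = p^T L p + sum_m lambda_m * ||p - q_m||^2.

    L       : (n, n) graph Laplacian (smoothness term)
    guides  : list of (n,) guidance maps q_m from quality measures
    lambdas : matching list of data-term weights lambda_m
    Setting dE/dp = 0 gives the linear system
        (L + sum_m lambda_m * I) p = sum_m lambda_m * q_m.
    """
    n = L.shape[0]
    A = L + sum(lambdas) * np.eye(n)
    b = sum(lam * q for lam, q in zip(lambdas, guides))
    return np.linalg.solve(A, b)

# no smoothness (L = 0): p is the lambda-weighted average of the guides
L = np.zeros((4, 4))
p = optimize_weight_map(L, [np.ones(4), np.zeros(4)], [1.0, 1.0])
```

With a nonzero Laplacian, neighboring pixels additionally pull each other's weights together, propagating reliable estimates into poorly exposed regions.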

Experimental results

This section presents the experimental results and analysis of the proposed method. First, a parameter analysis of the method is given. Then, the proposed method is compared with existing state-of-the-art MEF methods both quantitatively and qualitatively. Finally, an expansion study is performed, and extensive examples of MEF in image post-processing and industrial environments are demonstrated. (More results can be found in the Supporting Material.)
The parameters μ and λ in Eq. (19) are empirically set

Conclusion

This paper presents a novel IRW model for multi-exposure fusion, which achieves detail consistency in overexposed/underexposed areas and restores true colors. Specifically, the IRW model is composed of a two-layer graph topology: attribute and intensity layers. The model walks across different layers through the informativeness of nodes. The IRW-MEF method designs three quality measures for MEF tasks: spatial, structural, and local contrast similarity functions. The three measures are globally

CRediT authorship contribution statement

Zhaohui Jiang: Visualization, Methodology, Supervision. Bo Yang: Writing – original draft preparation, Data curation, Software. Dong Pan: Conceptualization, Methodology, Validation. Haoyang Yu: Investigation, Supervision. Weihua Gui: Supervision.

Declaration of competing interest

The authors declare that they have no conflict of interest.

Acknowledgments

This work was supported by the National Major Scientific Research Equipment of China (Grant No. 61927803), the Young Scientists Fund of the National Natural Science Foundation of China (Grant No. 62303491), the Science and Technology Innovation Program of Hunan Province (Grant No. 2024RC1007), the Major Program of Xiangjiang Laboratory (Grant No. 22XJ01005), and the Central South University Post-Graduate Independent Exploration and Innovation Project (Grant No. 2024ZZTS0303).

References (72)

  • O. Ulucan et al.

    Ghosting-free multi-exposure image fusion for static and dynamic scenes

    Signal Processing

    (2023)
  • Q. Wang et al.

DBCT-Net: A dual branch hybrid CNN-transformer network for remote sensing image fusion

    Expert Systems with Applications

    (2023)
  • G. Yang et al.

    A dual domain multi-exposure image fusion network based on spatial-frequency integration

    Neurocomputing

    (2024)
  • C. Yang et al.

    A novel similarity based quality metric for image fusion

    Information Fusion

    (2008)
  • X. Zhang

    Benchmarking and comparing multi-exposure image fusion algorithms

    Information Fusion

    (2021)
  • Y. Zhang et al.

    IFCNN: A general image fusion framework based on convolutional neural network

    Information Fusion

    (2020)
  • H. Zhang et al.

IID-MEF: A multi-exposure fusion network based on intrinsic image decomposition

    Information Fusion

    (2023)
  • F. Zhang et al.

    Triple disentangled network with dual attention for remote sensing image fusion

    Expert Systems with Applications

    (2024)
  • J. van Aardt et al.

    Assessment of image fusion procedures using entropy, image quality, and multispectral classification

    Journal of Applied Remote Sensing

    (2008)
  • D.P. Bavirisetti et al.

    Multi-scale guided image and video fusion: A fast and efficient approach

    Circuits, Systems, and Signal Processing

    (2019)
  • Burt, P. J., & Kolczynski, R. J. (1993). Enhanced image capture through fusion. In Proc. IEEE int. conf. comput. vis....
  • L.K. Choi et al.

    Referenceless prediction of perceptual fog density and perceptual image defogging

    IEEE Transactions on Image Processing

    (2015)
  • T.M. Cover et al.

    Elements of information theory (2nd ed.)

    Wiley-Interscience

    (2006)
  • N. Cvejic et al.

    A similarity metric for assessment of image fusion algorithms

    International Journal of Signal Processing

    (2005)
  • X. Dong et al.

    Sub-Markov random walk for image segmentation

    IEEE Transactions on Image Processing

    (2016)
  • G.H. Golub et al.
  • L. Grady

    Multilabel random walker image segmentation using prior models

    Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition

    (2005)
  • L. Grady

    Random walks for image segmentation

    IEEE Transactions on Pattern Analysis and Machine Intelligence

    (2006)
  • K. Gu et al.

    No-reference image sharpness assessment in autoregressive parameter space

    IEEE Transactions on Image Processing

    (2015)
  • X. Guo et al.

    LIME: Low-light image enhancement via illumination map estimation

    IEEE Transactions on Image Processing

    (2017)
  • W. Hackbusch

    Iterative solution of large sparse systems of equations

    (1994)
  • K. He et al.

    Single image haze removal using dark channel prior

    IEEE Transactions on Pattern Analysis and Machine Intelligence

    (2011)
  • He, J., Zhang, S., Yang, M., Shan, Y., & Huang, T. (2019). Bi-directional cascade network for perceptual edge...
  • J. Huang et al.

    3D topography measurement and completion method of blast furnace burden surface using high-temperature industrial endoscope

    IEEE Sensors Journal

    (2020)
  • J. Huang

    Depth estimation from a single image of blast furnace burden surface based on edge defocus tracking

    IEEE Transactions on Circuits and Systems

    (2022)
  • Z. Jiang et al.

    Soft sensors using heterogeneous image features for moisture detection of sintering mixture in the sintering process

    IEEE Transactions on Instrumentation and Measurement

    (2023)