

Previously, we presented two color mapping methods for applying daytime colors to fused nighttime (e.g., intensified and longwave infrared (LWIR) or thermal) imagery. These mappings not only impart a natural daylight color appearance to multiband nighttime images but also enhance their contrast and the visibility of otherwise obscured details. It has been shown that these colorizing methods lead to increased ease of interpretation, better discrimination and identification of materials, faster reaction times, and ultimately improved situational awareness. A crucial step in the proposed coloring process is the choice of a suitable color mapping scheme. When both daytime color images and multiband sensor images of the same scene are available, the color mapping can be derived from matching image samples (i.e., by relating color values to sensor output signal intensities in a sample-based approach). When no exact matching reference images are available, the color transformation can instead be derived from the first-order statistical properties of the reference image and the multiband sensor image.
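To make the sample-based route concrete, the sketch below derives a color lookup table from a registered pair of images of the same scene. The two-band input, the 32-level quantization, and the bin-averaging rule are illustrative assumptions, not details fixed by the original method.

```python
import numpy as np

def build_color_lut(sensor_img, daytime_img, levels=32):
    """Derive a lookup table from a registered image pair: an (H, W, 2)
    multiband sensor image and an (H, W, 3) daytime color image of the
    same scene. Each (band1, band2) bin stores the mean daytime color
    of the pixels that fall into it.

    Assumes sensor values are floats in [0, 1]; the quantization level
    is an illustrative choice.
    """
    idx = (sensor_img * (levels - 1)).astype(int)
    lut_sum = np.zeros((levels, levels, 3))
    lut_cnt = np.zeros((levels, levels, 1))
    # Accumulate daytime colors and pixel counts per sensor-value bin.
    np.add.at(lut_sum, (idx[..., 0], idx[..., 1]), daytime_img)
    np.add.at(lut_cnt, (idx[..., 0], idx[..., 1]), 1)
    # Average; empty bins stay zero.
    return np.divide(lut_sum, lut_cnt, out=np.zeros_like(lut_sum),
                     where=lut_cnt > 0)

def colorize(sensor_img, lut, levels=32):
    """Color a new nighttime multiband image with the derived table."""
    idx = (sensor_img * (levels - 1)).astype(int)
    return lut[idx[..., 0], idx[..., 1]]
```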

In the current study, we investigated new color fusion schemes that combine the advantages of both methods: the efficiency and color constancy of the sample-based method, and the ability of the statistical method to use the image of a different but somewhat similar scene as a reference. These schemes apply the correspondence between multiband sensor values and daytime colors (the sample-based method) through a smooth transformation (the statistical method).
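One way to sketch such a combined scheme is to let matched samples supply the correspondence while a smooth parametric fit supplies the transformation. In the minimal example below, the quadratic basis, the two-band input, and the CIELAB target space are illustrative assumptions.

```python
import numpy as np

def fit_smooth_color_mapping(sensor_samples, daytime_lab_samples):
    """Fit a smooth (quadratic) map from multiband sensor values to
    daytime CIELAB colors via least squares.

    sensor_samples      : (N, 2) matched (band1, band2) values
    daytime_lab_samples : (N, 3) corresponding CIELAB colors

    The quadratic basis is an assumption made for illustration; any
    smooth regressor could play this role.
    """
    s1, s2 = sensor_samples[:, 0], sensor_samples[:, 1]
    # Quadratic polynomial basis in the two sensor bands.
    X = np.stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2], axis=1)
    coeffs, *_ = np.linalg.lstsq(X, daytime_lab_samples, rcond=None)
    return coeffs  # (6, 3): one weight column per Lab channel

def apply_color_mapping(coeffs, sensor_image):
    """Apply the fitted map to an (H, W, 2) multiband sensor image."""
    s1, s2 = sensor_image[..., 0], sensor_image[..., 1]
    X = np.stack([np.ones_like(s1), s1, s2, s1 * s2, s1**2, s2**2], axis=-1)
    return X @ coeffs  # (H, W, 3) CIELAB image
```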

In this article, we introduce an image-enhancing approach for transforming dark images into lightened scenes, and we evaluate the method in different perceptual color spaces in order to find the one best suited to this particular task. Specifically, we use a classical color transfer method in which we obtain first-order statistics from a target image and transfer them to a dark input, modifying its hue and brightness. Two aspects are particular to this work: the application of color transfer to dark imagery, and the search for the best color space for the application. In this regard, the tests performed show an accurate transfer of colors when perceptual color spaces are used, with RLAB being the best color space for the procedure. Our results show that the methodology presented in this paper is a good alternative to low-light or night vision processing techniques. Moreover, the proposed method has low computational complexity, a property that is important for real-time applications and low-resource systems. This method can therefore be used as a preprocessing step to improve the recognition and interpretation of dark imagery in a wide range of applications.
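A minimal sketch of this classical first-order-statistics transfer is given below, assuming float RGB inputs in [0, 1]. CIELAB (via scikit-image) stands in for RLAB, which common libraries do not implement.

```python
import numpy as np
from skimage import color

def stats_color_transfer(dark_rgb, reference_rgb):
    """Classical first-order-statistics color transfer: match the
    per-channel mean and standard deviation of the dark input to
    those of the reference, inside a perceptual color space.

    CIELAB is used here as a stand-in for RLAB, the best-performing
    space in the study, which scikit-image does not provide.
    """
    src = color.rgb2lab(dark_rgb)        # float RGB in [0, 1] assumed
    ref = color.rgb2lab(reference_rgb)
    for c in range(3):
        mu_s, sd_s = src[..., c].mean(), src[..., c].std()
        mu_r, sd_r = ref[..., c].mean(), ref[..., c].std()
        # Shift and scale each channel so its statistics match the reference.
        src[..., c] = (src[..., c] - mu_s) * (sd_r / (sd_s + 1e-8)) + mu_r
    return np.clip(color.lab2rgb(src), 0.0, 1.0)
```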
In this paper, a robust infrared and visible image fusion scheme that joins a dual-branch multi-receptive-field neural network and a color vision transfer algorithm is designed to aggregate infrared and visible video sequences. The proposed method enables thermal objects to be effectively recognized in the fused image, which also contains rich texture information and preserves visual perception quality. The fusion network is an integrated encoder-decoder model with a multi-receptive-field attention mechanism, implemented via hybrid dilated convolution (HDC) and a series of convolution layers to form an unsupervised framework. Specifically, the multi-receptive-field attention mechanism extracts comprehensive spatial information, enabling the encoder to focus separately on the substantial thermal radiation of the infrared modality and the environmental characteristics of the visible modality. In addition, to ensure that the fused image has rich color, high fidelity, and steady brightness, a color vision transfer method is proposed to recolor the fused gray results by deriving a map from the visible image, which serves as a reference. Extensive experiments verify the importance and robustness of each step in both subjective and objective evaluations and demonstrate that our work represents a good trade-off among color fidelity, fusion performance, and computational efficiency. Moreover, we will publish our research content, data, and code publicly.
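The multi-receptive-field idea can be sketched as a stack of dilated convolutions whose rates share no common factor, so the enlarged receptive field covers the input densely without gridding artifacts. The dilation rates (1, 2, 5), channel widths, and single-channel inputs below are illustrative assumptions rather than the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HDCBlock(nn.Module):
    """Sketch of a hybrid dilated convolution (HDC) feature extractor:
    stacked 3x3 convolutions with co-prime dilation rates (1, 2, 5)
    enlarge the receptive field while avoiding gridding artifacts.
    Channel widths and rates are illustrative assumptions.
    """
    def __init__(self, in_ch=1, ch=32):
        super().__init__()
        self.layers = nn.Sequential(
            # padding == dilation keeps the spatial size fixed for 3x3 kernels
            nn.Conv2d(in_ch, ch, 3, padding=1, dilation=1), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=2, dilation=2), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=5, dilation=5), nn.ReLU(inplace=True),
        )

    def forward(self, x):
        return self.layers(x)

# Example: one encoder branch per modality (infrared and visible),
# with concatenated features passed on to the decoder.
ir_branch, vis_branch = HDCBlock(), HDCBlock()
ir_feat = ir_branch(torch.randn(1, 1, 128, 128))
vis_feat = vis_branch(torch.randn(1, 1, 128, 128))
fused = torch.cat([ir_feat, vis_feat], dim=1)  # (1, 64, 128, 128)
```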
