GAN-Based Image Enhancement and Restoration: A Multi-Loss Framework for Perceptual Consistency and Structural Preservation
Abstract
This paper proposes an image quality reconstruction method based on generative adversarial networks to address common problems in image enhancement and restoration, including detail loss, structural distortion, and visual inconsistency. The method is built on an encoder-decoder framework that incorporates residual feature learning modules and multi-level structural modeling to strengthen the joint representation of local textures and global structures. To improve perceptual consistency, a multi-objective optimization strategy combines a perceptual loss with an edge-preserving loss, constraining the generation process along multiple dimensions so that reconstructed images align with both subjective perception and objective metrics. A deep convolutional discriminator guides the generator through adversarial training, encouraging outputs that better match the distribution of real images. To assess robustness, input degradation is simulated across a range of scenarios, including varying levels of occlusion, noise interference, and structural damage, and restoration performance is evaluated under these complex conditions. Experiments on multiple public image datasets demonstrate the effectiveness of the proposed approach, which achieves superior performance on PSNR, SSIM, and LPIPS. These results confirm the accuracy and practicality of the method for image enhancement and restoration tasks.
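
For concreteness, the sketch below shows one way such a multi-objective generator loss could be assembled in PyTorch. The loss weights (lambda_perc, lambda_edge, lambda_adv), the VGG-16-based perceptual term, and the Sobel-based edge term are illustrative assumptions; the abstract does not specify the paper's exact formulation.

    import torch
    import torch.nn.functional as F
    from torchvision.models import vgg16

    # Frozen VGG-16 feature extractor for the perceptual term (an assumed
    # choice; inputs are expected to be ImageNet-normalized RGB tensors of
    # shape N x 3 x H x W).
    _vgg = vgg16(weights="IMAGENET1K_V1").features[:16].eval()
    for p in _vgg.parameters():
        p.requires_grad_(False)

    def perceptual_loss(fake, real):
        # Compare deep feature maps rather than raw pixels.
        return F.mse_loss(_vgg(fake), _vgg(real))

    def edge_loss(fake, real):
        # Sobel gradients as a stand-in for the edge-preserving constraint.
        c = fake.shape[1]
        kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                          device=fake.device, dtype=fake.dtype).view(1, 1, 3, 3)
        ky = kx.transpose(2, 3)
        kx, ky = kx.repeat(c, 1, 1, 1), ky.repeat(c, 1, 1, 1)
        def grads(x):
            return (F.conv2d(x, kx, padding=1, groups=c),
                    F.conv2d(x, ky, padding=1, groups=c))
        fx, fy = grads(fake)
        rx, ry = grads(real)
        return F.l1_loss(fx, rx) + F.l1_loss(fy, ry)

    def generator_loss(fake, real, d_fake_logits,
                       lambda_perc=1.0, lambda_edge=0.5, lambda_adv=0.01):
        # Non-saturating adversarial term on the discriminator's logits for
        # generated images, combined with the perceptual and edge constraints.
        adv = F.binary_cross_entropy_with_logits(
            d_fake_logits, torch.ones_like(d_fake_logits))
        return (lambda_perc * perceptual_loss(fake, real)
                + lambda_edge * edge_loss(fake, real)
                + lambda_adv * adv)

In this sketch the discriminator is trained separately with the usual real/fake binary cross-entropy objective, and generator_loss is minimized over the generator's parameters at each step.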