In image steganography, text is embedded in a digital image in a secure, imperceptible, and retrievable way. The three main families of digital image steganography are spatial-domain methods, transform-domain methods, and neural-network-based methods. Spatial-domain methods change the pixel values of an image to embed information, while transform-domain methods embed the information in the frequency domain of the image.
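As a concrete illustration of the spatial-domain approach, the following is a minimal least-significant-bit (LSB) sketch: each message bit replaces the lowest bit of one pixel, so pixel values change by at most 1. This is illustrative only (it omits encryption, capacity checks, and 2-D image handling); the function names are hypothetical.

```python
def embed_lsb(pixels, message_bits):
    """Spatial-domain embedding: replace the LSB of each pixel with one message bit."""
    stego = list(pixels)
    for i, bit in enumerate(message_bits):
        stego[i] = (stego[i] & ~1) | bit  # clear LSB, then set it to the message bit
    return stego

def extract_lsb(pixels, n_bits):
    """Recover the first n_bits hidden bits by reading pixel LSBs."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [120, 121, 122, 123, 124, 125, 126, 127]
bits = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(cover, bits)
assert extract_lsb(stego, len(bits)) == bits
assert max(abs(c - s) for c, s in zip(cover, stego)) <= 1  # imperceptible change
```

Because each pixel changes by at most one intensity level, the distortion is visually imperceptible, which is the property the quality metrics below quantify.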
Neural-network-based hiding is the main focus of this research, which examines the use of LSTM [1] deep neural networks in digital image text steganography. This work extends an existing implementation that uses a two-dimensional LSTM to perform the preparation, hiding, and extraction steps of the steganography process. The proposed method modifies the structure of the LSTM and uses a gain function based on several image similarity measures to maximize the indiscernibility between the cover image and the steganographic image. A genetic algorithm improves the LSTM architecture used to hide textual information in images, optimizing the number of layers, neurons, and evaluations and selecting appropriate features, which increases accuracy, improves image quality, and prevents overfitting. This search finds a near-optimal architecture for the LSTM network and improves the efficiency of the steganography.
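The genetic search over LSTM hyperparameters described above can be sketched as follows. This is a hedged toy version: each chromosome encodes a (layer count, neurons per layer) pair, and the fitness function is a stand-in surrogate, not the paper's image-similarity gain function.

```python
import random

def fitness(chrom):
    """Toy surrogate fitness (assumption): prefer ~2 layers and ~128 neurons.
    The real method would score a trained LSTM with image-similarity measures."""
    layers, neurons = chrom
    return -abs(layers - 2) - abs(neurons - 128) / 32

def evolve(pop_size=20, generations=30, seed=0):
    rng = random.Random(seed)
    # Initial population: random architectures within assumed bounds.
    pop = [(rng.randint(1, 4), rng.choice([32, 64, 128, 256]))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # selection: keep the top half
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            child = (a[0], b[1])                # crossover: mix layer/neuron genes
            if rng.random() < 0.2:              # mutation: perturb the layer count
                child = (rng.randint(1, 4), child[1])
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best_layers, best_neurons = evolve()
```

Because the top half of each generation is carried over unchanged, the best fitness never decreases across generations, a common elitism choice in GA hyperparameter search.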
The proposed method demonstrates superior performance on three evaluation metrics, Peak Signal-to-Noise Ratio (PSNR [2]) in decibels, Mean Squared Error (MSE [3]), and accuracy rate in percent, evaluated on four benchmark images (lena.png, peppers.png, mandril.png, and monkey.png), achieving 93.665275 dB PSNR, 0.6945 MSE, and 97.23% accuracy, respectively.
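For reference, the two image-quality metrics reported above are computed as follows. This minimal sketch operates on flat lists of 8-bit pixel values (a simplification of full 2-D images); PSNR is defined as 10·log10(MAX²/MSE) with MAX = 255 for 8-bit images.

```python
import math

def mse(cover, stego):
    """Mean Squared Error between two equal-length pixel sequences."""
    return sum((c - s) ** 2 for c, s in zip(cover, stego)) / len(cover)

def psnr(cover, stego, max_val=255.0):
    """Peak Signal-to-Noise Ratio in dB; higher means less distortion."""
    e = mse(cover, stego)
    if e == 0:
        return float("inf")  # identical images
    return 10 * math.log10(max_val ** 2 / e)

cover = [120, 121, 122, 123]
stego = [120, 120, 122, 124]
print(mse(cover, stego))              # 0.5
print(round(psnr(cover, stego), 2))   # 51.14
```

A PSNR above roughly 40 dB is generally considered imperceptible distortion, which puts the reported 93.67 dB well into the imperceptible range.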