Computer
Suha Mohammed Saleh; Abdulamir A. Karim
Abstract
From big data analytics to computer vision and human-level control, deep learning has been effectively applied to a wide range of complicated challenges. However, these same deep learning advancements have also been used to develop malicious software that threatens individuals' personal data, democratic processes, and even national security. Applications backed by deep learning have appeared recently, with deepfake being one of the most notable. Deepfake algorithms can create fake images and videos that humans cannot distinguish from authentic ones. Face synthesis and animation generation is one of the fields in which deep learning has achieved major success; on the other hand, it has enabled unethical deepfake software that poses a severe privacy threat, or even a serious security risk, to innocent people. This work introduces the most recent algorithms and methods used in deepfake generation, briefly explains the principles that underpin these technologies, and supports the development of this field by identifying the challenges and open problems that require further investigation.
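A common architecture behind deepfake face swapping is an autoencoder with one shared encoder and one decoder per identity; encoding a face of person A and decoding it with person B's decoder transfers B's identity onto A's pose and expression. The toy sketch below illustrates only this wiring with random linear maps in place of trained convolutional networks; all names and dimensions are illustrative assumptions, not any specific system described in the survey.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear stand-in for a deepfake autoencoder: a shared encoder and
# two identity-specific decoders. Real systems use deep convolutional
# networks trained on face crops of persons A and B; random matrices
# here only demonstrate the data flow, not a working generator.
DIM, LATENT = 64, 8
encoder   = rng.normal(size=(LATENT, DIM))   # shared across identities
decoder_a = rng.normal(size=(DIM, LATENT))   # trained to reconstruct person A
decoder_b = rng.normal(size=(DIM, LATENT))   # trained to reconstruct person B

face_a = rng.normal(size=DIM)                # a flattened face image of person A

# The swap at inference time: encode A's face into the shared latent
# space, then decode with B's decoder, yielding B's identity with A's
# pose and expression.
latent = encoder @ face_a
fake_b = decoder_b @ latent
print(fake_b.shape)  # (64,)
```

During training, each decoder only ever sees its own identity, which forces the shared latent space to capture identity-independent attributes such as pose and lighting; the swap exploits exactly that separation.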
Computer
Wildan J. Jameel; Suhad M. Kadhem; Ayad R. Abbas
Abstract
The term deepfake (from "deep learning" and "fake") emerged from the evolution of artificial intelligence techniques, especially deep learning. Deep learning algorithms, which learn to solve problems when given large sets of data, are used to swap faces in digital media and create fake media with a realistic appearance. To improve the accuracy of distinguishing a real video from a fake one, a new model has been developed based on deep learning and noise residuals. Steganalysis Rich Model (SRM) filters extract a low-level noise map that is used as input to a lightweight convolutional neural network (CNN) to classify a face as real or fake. The results of our work show that the training accuracy of the CNN model can be significantly enhanced by using noise residuals instead of RGB pixels. Compared with alternative methods, the advantages of our method include higher detection accuracy, lower training time, and fewer layers and parameters.
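The pipeline described in this abstract, SRM filtering followed by a CNN classifier, starts by convolving the image with fixed high-pass kernels to suppress content and keep noise. The sketch below shows that first step with one of the standard SRM kernels (a normalized second-order derivative filter); the function name and the choice of a single kernel are illustrative assumptions, not the authors' exact configuration, which may use a larger filter bank.

```python
import numpy as np

# One of the standard SRM high-pass kernels: a 3x3 second-order
# derivative filter normalized by 4. Its coefficients sum to zero,
# so smooth image content is suppressed and noise residuals remain.
SRM_KERNEL = np.array([[-1,  2, -1],
                       [ 2, -4,  2],
                       [-1,  2, -1]], dtype=np.float32) / 4.0

def noise_residual(gray, kernel=SRM_KERNEL):
    """Convolve a 2-D grayscale image with an SRM kernel ('valid'
    padding) to obtain the low-level noise map fed to the CNN."""
    kh, kw = kernel.shape
    h, w = gray.shape
    out = np.empty((h - kh + 1, w - kw + 1), dtype=np.float32)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(gray[i:i + kh, j:j + kw] * kernel)
    return out

# A perfectly flat region yields a zero residual; GAN-generated faces
# leave statistical traces in this map that a small CNN can classify.
flat = np.full((8, 8), 128.0, dtype=np.float32)
print(np.abs(noise_residual(flat)).max())  # 0.0 for a constant image
```

Because the kernels are fixed rather than learned, the residual map can replace RGB input with no extra trainable parameters, which is consistent with the reported reduction in layers and training time.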