Media Forensics: Fake / Forged / Tampered Multimedia Content Detection / Inam Ur Rehman, Malik MatiUllah, Muhammad Musab, Talal Arif Shah. (TCC-31 / BETE-56)

By: Rehman, Inam Ur
Contributor(s): Supervisor: Dr. Haider Abbas
Material type: Text
Publisher: MCS, NUST Rawalpindi, 2023
Description: 84 p.
Subject(s): UG EE Project | TCC-31 / BETE-56
DDC classification: 621.382,REH
Abstract:
Fake or manipulated images propagated through the Web and social media can deceive, cause emotional distress, and influence public opinion. The dependability of visual information on the web and the authenticity of digital media that go viral on social platforms have raised unprecedented concerns. Deepfakes are synthetic human images generated with neural network tools such as GANs or autoencoders. These tools use deep learning to overlay target faces onto source videos, producing results so realistic that distinguishing a deepfake video from a genuine one becomes nearly impossible. This project presents a novel deep learning-based method for effectively differentiating AI-generated fake videos from real videos. Our approach uses a ResNeXt convolutional neural network to extract frame-level features, which are then used to train a recurrent neural network (RNN) with long short-term memory (LSTM) to determine whether a video has been manipulated, i.e., whether it is a deepfake or genuine. To improve the real-world performance of the detector, we trained it on a combination of datasets, using videos from FaceForensics++, the Deepfake Detection Challenge, and Celeb-DF so the model learns varied features from different types of imagery. We also tested the model against YouTube videos to achieve competitive results in real-world scenarios. The proposed method is evaluated on a corpus of digital forgeries comprising 6,000 videos drawn from these datasets. Using 20 frames per video for detection, the model achieves an accuracy of 87.79%.
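To make the described pipeline concrete, the sketch below shows a ResNeXt-plus-LSTM detector of the kind the abstract outlines: a CNN backbone extracts per-frame features and an LSTM classifies the 20-frame sequence as real or deepfake. This is a minimal illustration, not the authors' actual implementation; the backbone choice (torchvision's resnext50_32x4d), the hidden size, and all names are assumptions, and only the overall structure follows the abstract.

```python
# Minimal sketch, assuming PyTorch and torchvision. A ResNeXt CNN
# extracts per-frame features; an LSTM classifies the frame sequence.
# Hyperparameters and names are illustrative, not from the report.
import torch
import torch.nn as nn
from torchvision import models

class DeepfakeDetector(nn.Module):
    def __init__(self, hidden_dim=2048, lstm_layers=1, num_classes=2):
        super().__init__()
        # Pretrained weights are an assumption; use weights=None offline.
        backbone = models.resnext50_32x4d(weights="IMAGENET1K_V1")
        # Drop the classification head; keep the 2048-d pooled features.
        self.feature_extractor = nn.Sequential(*list(backbone.children())[:-1])
        self.lstm = nn.LSTM(input_size=2048, hidden_size=hidden_dim,
                            num_layers=lstm_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):
        # frames: (batch, seq_len=20, 3, H, W)
        b, t, c, h, w = frames.shape
        feats = self.feature_extractor(frames.view(b * t, c, h, w))
        feats = feats.view(b, t, -1)        # (batch, 20, 2048)
        out, _ = self.lstm(feats)
        # Classify from the hidden state at the last time step.
        return self.classifier(out[:, -1, :])

# Usage: 20 frames per video, as in the report's evaluation setup.
model = DeepfakeDetector()
video = torch.randn(1, 20, 3, 224, 224)    # one dummy 20-frame clip
logits = model(video)                      # shape (1, 2): real vs. fake
```

Reading the logits at the final time step is one common design for sequence classification; averaging the LSTM outputs over all 20 frames is an equally plausible alternative the abstract does not settle.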
Holdings:
Item type: Project Report
Current location: Military College of Signals (MCS)
Home library: Military College of Signals (MCS)
Shelving location: Thesis
Call number: 621.382,REH
Status: Available
Barcode: MCSPTC-449
Total holds: 0

