Image taken from Microsoft's VASA-1
The real problem is not the existential threat of AI. Instead, it is in the development of ethical AI systems. ― Rana El Kaliouby
This repository is a proof of concept (PoC) that demonstrates how transfer learning can be used to detect AI-generated faces. By leveraging pre-trained models and fine-tuning them for the specific task of face detection, this project aims to show the potential of transfer learning in distinguishing between real and AI-generated faces.
As AI-generated images become increasingly realistic, the ability to differentiate between real and AI-generated faces is critical, particularly in defending against malicious or unethical uses of AI technology. Tools like this project aim to empower individuals, researchers, and organizations by providing a reliable method to detect AI-generated faces, which can help prevent misinformation, identity fraud, and other unethical applications of deepfake technology. By leveraging transfer learning, a powerful machine learning technique where pre-trained models are adapted for specific tasks, this project showcases how existing AI tools can be repurposed to protect privacy, maintain trust in digital content, and safeguard against AI misuse. The goal is not only to demonstrate the effectiveness of transfer learning for this use case, but also to contribute to responsible AI development and usage.
The repository contains the following key files:
- `train.ipynb`: Jupyter notebook for training the model using transfer learning.
- `inference.ipynb`: Jupyter notebook for running inference on AI-generated face images to test the model's performance.
- `fetch.sh`: a script to fetch the ImageNet pre-trained weights.
- `resizing.sh`: a script to resize videos to a size suitable for the model.
To train the model and run inference, first install the required libraries:
- TensorFlow
- Keras
- OpenCV
- NumPy
Afterwards, download the dataset from Kaggle and place it next to the project directory. Then run `fetch.sh` to download the `DenseNet121` weights, and you should be good to go.
To train the model, run the `train.ipynb` notebook. It includes the following steps:
- Load a pre-trained model.
- Fine-tune the model using a dataset of real and AI-generated faces.
- Save the trained model for inference.
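The steps above can be sketched as follows. This is a minimal illustration of the transfer-learning setup, not the notebook's exact code: the image size, classifier head, dataset layout, and output filename are assumptions.

```python
import tensorflow as tf
from tensorflow.keras import layers, models
from tensorflow.keras.applications import DenseNet121

IMG_SIZE = (224, 224)  # DenseNet121's default input size

# 1. Load the pre-trained model (ImageNet weights) without its classifier head.
base = DenseNet121(weights="imagenet", include_top=False,
                   input_shape=IMG_SIZE + (3,))
base.trainable = False  # freeze the convolutional base; train only the new head

# 2. Attach a small binary-classification head: real vs. AI-generated.
model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dropout(0.2),
    layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy"])

# Fine-tune on a directory of real and AI-generated face images
# (hypothetical layout: dataset/real/..., dataset/fake/...).
# train_ds = tf.keras.utils.image_dataset_from_directory(
#     "dataset", image_size=IMG_SIZE, batch_size=32)
# model.fit(train_ds, epochs=5)

# 3. Save the trained model for inference (hypothetical filename).
model.save("face_detector.keras")
```

Freezing the base and training only the head is what keeps training time low: only the small dense layer's weights are updated, while the pre-trained features are reused as-is.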
To test the model on new data, run the `inference.ipynb` notebook. This notebook allows you to:
- Load the fine-tuned model.
- Run predictions on videos either in real time (e.g. using your webcam) or on recorded videos.
- View the results and performance metrics.
The results of this project demonstrate the power of transfer learning for AI-generated face detection. By using pre-trained models, the training time is reduced, and the model achieves good accuracy in distinguishing between real and AI-generated faces.