Deepfakes As Disinformation & Ethical Considerations

Modern society places a premium on information. Unfortunately, misinformation and disinformation have spread quickly across the internet and digital platforms. Most platform developers have allocated resources to tackle these issues; however, the methods used to produce misinformation and disinformation are becoming increasingly deceptive. Today, deepfakes are perhaps one of the most problematic vehicles for spreading misinformation and disinformation.

The Technology Powering Deepfakes

A "deepfake" (a blend of "deep learning" and "fake") is a piece of hyper-realistic synthetic content (e.g., a video) in which an artificial rendering of an individual says or does things that never happened in reality [Westerlund]. Typically, a deepfake video replaces one individual's face with another's. The primary technology powering deepfakes is the generative adversarial network (GAN) [Westerlund].

A GAN consists of two main deep neural networks: the generator and the discriminator [Westerlund]. Both are trained on the same initial dataset. Based on its training, the generator learns to produce synthetic ("fake") data [Westerlund]. The discriminator's task is to distinguish the real (training) data from the fake but realistic (synthetic) data [Westerlund]. Both the real data and the generator's fake data are fed into the discriminator, which must output a judgment of each input's validity [Westerlund].
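The data flow between the two networks can be pictured with a minimal sketch. This toy example in plain NumPy assumes a 1-D Gaussian as the "real" data and stands in an affine map and a sigmoid for the deep networks; the parameter values are arbitrary choices for illustration, not part of any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)

def generator(z, w=1.0, b=0.0):
    """Maps latent noise to synthetic ("fake") samples.
    A real GAN uses a deep network; an affine map stands in here."""
    return w * z + b

def discriminator(x, a=0.5, c=-1.0):
    """Scores each input with the probability that it is real."""
    return 1.0 / (1.0 + np.exp(-(a * x + c)))   # sigmoid output in (0, 1)

real = rng.normal(4.0, 0.5, size=5)          # samples of the true data
fake = generator(rng.normal(size=5))         # synthetic samples from noise

# The discriminator receives both and must judge each input's validity.
print(discriminator(real))   # scores for real inputs
print(discriminator(fake))   # scores for fake inputs
```
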

Figure 1. GAN model architecture (Image by Author)

The generator's goal is to produce data realistic enough that the discriminator mistakes it for the real data [Wang]. The discriminator's goal is always to detect the synthetic content produced by the generator [Wang]. Through this adversarial (generator vs. discriminator) process, both networks improve: the generator produces increasingly realistic synthetic images, and the discriminator distinguishes realistic synthetic content from real data more effectively [Wang]. In principle, this process could continue forever, but once the generator can produce adequately realistic fake data, it can be put to real-world use. One application of GAN technology is the generation of realistic yet fake images and videos of humans: these are deepfakes.
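The adversarial loop described above can be sketched end-to-end. The following is an illustrative NumPy example under toy assumptions: the "real" data is a 1-D Gaussian centred at 4, the generator is an affine map of noise, the discriminator is a logistic model, and the learning rate and step count are invented for the demo, not drawn from any cited work.

```python
import numpy as np

rng = np.random.default_rng(0)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

def real_batch(n):
    # "Real" data the generator should learn to imitate.
    return rng.normal(4.0, 0.5, size=n)

g_w, g_b = 1.0, 0.0        # generator parameters (noise -> sample)
d_a, d_c = 1.0, 0.0        # discriminator parameters (sample -> P(real))
lr, n = 0.05, 64

for step in range(3000):
    z = rng.normal(size=n)
    fake = g_w * z + g_b                    # generator's synthetic data
    real = real_batch(n)

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0.
    p_real = sigmoid(d_a * real + d_c)
    p_fake = sigmoid(d_a * fake + d_c)
    d_a -= lr * (np.mean(-(1 - p_real) * real) + np.mean(p_fake * fake))
    d_c -= lr * (np.mean(-(1 - p_real)) + np.mean(p_fake))

    # Generator update: push D(fake) toward 1, i.e. fool the discriminator.
    p_fake = sigmoid(d_a * fake + d_c)
    upstream = -(1 - p_fake) * d_a          # d(-log D(fake)) / d(fake sample)
    g_w -= lr * np.mean(upstream * z)
    g_b -= lr * np.mean(upstream)

# The generated samples' mean is approximately g_b, since the noise has mean 0.
print(f"generated samples now centre near {g_b:.2f} (real data centre: 4.0)")
```

Each iteration alternates the two updates, mirroring the rivalry in the text: the discriminator sharpens its real-vs-fake boundary, and the generator shifts its output toward the real distribution to defeat that boundary.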

Ethical Considerations Of Deepfakes

Deepfakes present several ethical issues. To begin, deepfakes have been identified as a severe form of identity theft [Westerlund]. Stealing an individual's personal information and using it for malicious purposes is illegal, and deepfake technology can be abused in much the same way. From a broader perspective, if deepfakes are used to make real individuals appear to say or do things that never occurred, they could erode trust between the general public, organizations, and governments. This could have unforeseen impacts on society.

Whether the development of more advanced GANs, and in turn deepfakes, should continue deserves careful evaluation. Undoubtedly, GANs have proven beneficial in scientific research, artistic creation, and more. However, as deepfakes improve alongside GANs and spread ever more widely, a future may be in sight where no information can be trusted due to the hyper-realism of fake content.

Solutions for Tackling Deepfakes

There are, however, ways to continue developing GANs for the benefit of humanity while tackling the dangerous consequences of deepfakes. One solution turns the discriminator concept within GANs against the problem: deepfake detection. Fundamentally, deepfake detection means training machine learning models to recognize deepfake content, and researchers have demonstrated that it works.
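One way to picture deepfake detection is as a standalone binary real-vs-fake classifier, playing a role much like the GAN discriminator. The sketch below trains a toy logistic-regression "detector" in plain NumPy on made-up two-dimensional features; real detection systems use deep networks on video frames, and the cluster locations, learning rate, and step count here are invented purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
sigmoid = lambda u: 1.0 / (1.0 + np.exp(-u))

# Toy stand-in for features extracted from video frames: "real" and
# "fake" content form two Gaussian clusters (purely illustrative).
X_real = rng.normal(0.0, 1.0, size=(200, 2))
X_fake = rng.normal(3.0, 1.0, size=(200, 2))
X = np.vstack([X_real, X_fake])
y = np.concatenate([np.zeros(200), np.ones(200)])   # label 1 = deepfake

# A logistic-regression "detector" trained by gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(2000):
    p = sigmoid(X @ w + b)
    grad = p - y                          # gradient of cross-entropy loss
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

pred = sigmoid(X @ w + b) > 0.5           # classify each sample
accuracy = (pred == y).mean()
print(f"training accuracy: {accuracy:.2f}")
```

The design mirrors the article's point: the same adversarial pressure that makes deepfakes realistic also yields classifiers that can flag them, provided resources go into training such detectors.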

For example, Facebook and the Partnership on AI, in collaboration with other prominent organizations and institutions, hosted the Deepfake Detection Challenge (DFDC) on Kaggle in 2019 [Pesenti]. The DFDC encouraged Kaggle users to design, research, and develop high-accuracy deepfake detectors, with a $500,000 first-place prize as the incentive [“Deepfake Detection”]. The competition sparked the development of more advanced deepfake detection technology. Furthermore, in June 2021, Facebook researchers published work claiming to detect deepfakes effectively and efficiently, even to the extent of tracing their origins [Asnani]. Such achievements offer hope that deepfake technology may not be so problematic after all, provided resources are spent tackling it.


Thank you for reading this article. If you have any feedback for my writing, or have found a mistake in this article, please message me using the Contact Us page.


References

Asnani, Vishal, et al. “Reverse Engineering of Generative Models: Inferring Model Hyperparameters from Generated Images.” arXiv, 15 June 2021, doi:10.48550/ARXIV.2106.07873.

“Deepfake Detection Challenge - Prize.” Kaggle, www.kaggle.com/competitions/deepfake-detection-challenge/overview/prizes.

Pesenti, Jerome. “Deepfake Detection Challenge Launches with New Dataset and Kaggle Site.” Meta AI, 11 Dec. 2019, ai.facebook.com/blog/deepfake-detection-challenge-launches-with-new-data-set-and-kaggle-site/.

Wang, Kunfeng, et al. “Generative Adversarial Networks: Introduction and Outlook.” IEEE/CAA Journal of Automatica Sinica, vol. 4, no. 4, 2017, pp. 588–598, doi:10.1109/jas.2017.7510583.

Westerlund, Mika. “The Emergence of Deepfake Technology: A Review.” Technology Innovation Management Review, vol. 9, no. 11, 2019, pp. 39–52, doi:10.22215/timreview/1282.

Rdn

Contributor @ Universal Times
