Individual Report
Topic: Digital world
Issue: Artificial intelligence abuse
Research question: Should governments prohibit companies from using artificial intelligence to generate deepfake multimedia content?
Introduction
The modern digital age sees artificial intelligence (AI) transforming the way we produce and distribute information. A disturbing trend is deepfake technology, in which AI creates videos or images that seem real but are fake. Deepfakes can spread misinformation, violate personal rights, and even manipulate society. This raises a critical question: should governments prohibit companies from using artificial intelligence to produce deepfake multimedia content? Some have proposed a ban as a precautionary measure against damage to society, while others argue that deepfake technology has positive applications, such as in cinema or training. The problem is not purely technological; it also concerns ethics, security, and the role companies play in the digital era.
In 2024, 78% of organizations reported using AI, a sharp increase from 55% in 2023 3. AI is also getting cheaper and easier to use: since 2022, the cost of running advanced AI has dropped more than 280-fold 3. At the same time, AI is becoming part of daily life; in 2023, US health regulators approved 223 new AI tools for medical use 3. Because AI is growing so fast, the risk of misuse, such as creating fake videos that look real, is growing with it.
The scale of this problem is already clear. Between 2022 and 2023 alone, about 500,000 deepfake videos and audio clips were shared online 1. Even more worrying, most people cannot protect themselves from this threat: studies show that only 0.1% of viewers, just one person in a thousand, can reliably tell the difference between human-made content and deepfakes 1.
Causes and consequences
To understand how deepfake technology became such a serious global problem, it is essential to examine what triggered its rapid rise. There are two main explanations for why deepfakes have become so prevalent and dangerous.
The first is that AI is becoming cheaper and more accessible to the general population. According to the Stanford University AI Index Report 2025, running advanced AI now costs more than 280 times less than it did in 2022 3. This means that what once required a large film studio can now be accomplished by anyone with a laptop. Many free apps can already generate realistic fake videos within minutes. This accessibility has made the number of deepfakes skyrocket: a study by Sharma and Selwal found that "in 2022-2023 alone, approximately 500,000 videos and audio files containing deepfake footage were published online" 1. The pace of growth is striking. A 2026 survey further indicated that four out of five Americans had encountered deepfake voice calls within the last twelve months 8.
Second, social media platforms make deepfakes spread even faster. Once a deepfake video is posted on a site such as YouTube or TikTok, it can be seen by millions of people within hours. These platforms often do not verify the authenticity of content until it has already become popular. An analysis of more than 91,000 misleading posts revealed that AI-generated misinformation was significantly more likely to go viral than fake news produced by traditional methods 5. In addition, most people trust what they see: research indicates that no more than 0.1% of viewers (one person in a thousand) can consistently distinguish real content from deepfakes 1.
The effects of deepfake abuse are severe, harming individuals as well as society as a whole. The most significant harm is the violation of personal rights. Individuals have an internationally recognized right to privacy, which deepfakes infringe: when a person's face appears in a deepfake without their consent, their privacy is gone. Studies show that 98% of all deepfake posts on the internet are pornographic, and 99% of the victims are women 7. Italian Prime Minister Giorgia Meloni, for example, took legal action after deepfake pornography videos of her were viewed by millions of people online 9. Meta's Oversight Board has recognized this danger, concluding that the only effective response is to remove the content in order to safeguard people's rights to privacy, dignity, and security 7. According to Wang, such videos can never be completely deleted, so victims may suffer for years 4.
Another consequence is the threat to democracy. Deepfakes can misinform voters during elections and shape public opinion. Research shows that synthetic media impersonating real individuals can distort political discussion and compromise the integrity of elections 5. In 2025, the President of the Parliamentary Assembly of the Council of Europe was targeted by an AI scam in which deepfake technology imitated a Ukrainian official in a Zoom call 6. This shows that even major institutions are not safe. As deepfakes become more lifelike, Amerini et al. warn, it will become ever harder for citizens to tell what is true 2.
In conclusion, the causes behind the spread of deepfakes, namely falling technology costs and rapid dissemination through the internet, have produced severe problems such as violations of human rights and the undermining of democracy. Understanding these issues is an essential first step toward deciding whether governments should ban companies from using AI to produce deepfakes.
Different perspectives on the issue
AI face-swapping scams already pose a serious hazard to the ordinary population of Hong Kong. In January 2025, police launched an operation code-named "Dark Curtain" and dismantled two scam hubs in Ngau Tau Kok. The group consisted of 31 people aged between 20 and 34 10. The gang used AI deepfake technology to create fake photos and videos, building fake female profiles on dating sites to deceive people outside Hong Kong into investing in cryptocurrency. Total losses amounted to more than HK$34 million 10. When raiding the premises, police seized 11 computers, 116 mobile phones, more than HK$6.8 million in cash, and even 2 kg of gold bars 10.
Countries around the globe are trying various strategies to address deepfakes. The European Union enacted the EU AI Act in April 2024, which classifies deepfakes as high-risk AI systems and requires that all AI-generated content always be clearly labeled 11. It is the first comprehensive AI legislation in the world, seeking to protect people from harm while still permitting innovation.
A clear distinction emerges between Hong Kong's approach and the EU's: the EU regulates before problems occur, whereas Hong Kong mostly responds to crimes after they happen. Although the Hong Kong police are putting great effort into enforcement, the cases above show that scammers' use of AI is already outpacing both law enforcement and existing security systems. The fact that fake bank verification succeeded in 19 out of 23 cases 12 indicates a serious vulnerability in financial protection.
The Hong Kong example demonstrates that deepfakes are not a concern of the future but a fact of everyday life. From dating scams to breaches of bank security and a growing number of phishing attacks, Hong Kong is at the forefront of the battle against AI-driven crime.
Analysis and evaluation of possible courses of action
In response to the growing challenge of deepfake abuse, governments, international bodies, and technology companies have proposed a wide range of measures. Each method has its advantages and disadvantages, and understanding them is necessary to determine which course of action will be most effective.
One strategy is to implement strict regulations that stop companies from creating deceptive deepfake content with AI technology. The European Union has already taken this direction: under the AI Act adopted in 2024, deepfakes are considered high-risk AI systems and all AI-generated material must be clearly labeled 11.
This approach has clear strengths. First, it provides an unambiguous legal benchmark that businesses must comply with. Second, it protects people by ensuring they know when something is fake. But there are also disadvantages: tough restrictions can limit legitimate applications of deepfake technology in film, education, and art.
Enforcement is another drawback. Even if a state outlaws deepfakes, foreign-created material can still reach the population through the internet.

A different option is to rely on technological solutions rather than government bans. Some experts argue that AI detection programs can help identify deepfakes before they inflict damage. However, although detection systems are improving in quality, they are not yet fully mature. As deepfakes become more lifelike, Amerini et al. warn that they will become increasingly difficult to detect, whether by humans or by machines 2.
A drawback of this approach is that it places the burden of recognizing fakes on individual people and platforms, many of whom fail at the task. Research indicates that fewer than 0.1% of viewers can consistently distinguish real content from deepfakes 1, so detection technology alone cannot protect the population.
Having reviewed these factors, I believe a hybrid solution is the best way forward. Governments should not ban deepfake technology outright, since it also has positive uses. But they should prohibit the most dangerous applications, including the creation of non-consensual pornographic deepfakes and the use of deepfakes to manipulate elections. Studies suggest that 98% of all deepfake material online is pornography and 99% of the victims are women 7; eliminating this particular form of abuse alone would shield many individuals without damaging legitimate businesses.
To sum up, a total prohibition of deepfake technology is not the best option, since it would also eliminate its beneficial applications. Instead, governments should ban the most dangerous uses, including pornographic deepfakes and election manipulation. They should also require clear labeling of AI-generated material, invest in detection technologies, educate the public, and compel financial institutions to strengthen their safety measures. Only by combining all three elements (laws, technology, and education) can the harm of deepfake abuse be minimized while leaving room for innovation.
Evaluation of sources
The sources cited in this report come from credible organizations and authors. The work of Sharma and Selwal 1 was published in a peer-reviewed scholarly journal, meaning their results were reviewed by other scholars before publication; it is therefore highly reliable. Stanford University 3 is a leading research institution with a long-standing record of studying artificial intelligence, and its official AI Index Report is frequently cited by both policymakers and academics. McAfee Corp. 8 is a reputable cybersecurity company, and its survey data provide up-to-date information on how deepfake scams affect ordinary people. All these sources are recent, relevant, and authoritative, which strengthens the arguments presented in this report.
Bibliography
1. Sharma, S., & Selwal, A. (2026). Potential of artificial intelligence in deepfake media: From generation to detection mechanisms, state-of-the-art, and challenges. Computer Science Review, 60, 100866. https://doi.org/10.1016/j.cosrev.2025.100866 (Accessed 6th March 2026)
2. Amerini, I., Barni, M., Battiato, S., Bestagini, P., Boato, G., Bruni, V., Caldelli, R., Natale, D., Nicola, D., Guarnera, L., Mandelli, S., Majid, T., Luca, M. G., Micheletto, M., Montibeller, A., Orrù, G., Ortis, A., Perazzo, P., Puglisi, G., & Purnekar, N. (2025). Deepfake Media Forensics: Status and Future Challenges. Journal of Imaging, 11(3), 73. https://doi.org/10.3390/jimaging11030073 (Accessed 21st February 2026)
3. Stanford University. (2025). Artificial intelligence index report 2025. https://hai.stanford.edu/assets/files/hai_ai_index_report_2025.pdf (Accessed 21st February 2026)
4. Wang, Y. (2024). The Ethical Risks and Regulatory Logic of Deep Fake Video. Open Journal of Legal Science, 12(08), 5444–5449. https://doi.org/10.12677/ojls.2024.12877 (Accessed 10th March 2026)
5. Drolsbach, C., & Pröllochs, N. (2025). Characterizing AI-Generated Misinformation on Social Media. arXiv preprint arXiv:2505.10266. https://arxiv.org/abs/2505.10266 (Accessed 19th March 2026)
6. Parliamentary Assembly of the Council of Europe. (2025, December 19). PACE President warns of AI-driven fraud targeting democratic institutions. https://pace.coe.int/en/news/10154/ (Accessed 19th March 2026)
7. Oversight Board. (2024, July 25). Oversight Board Case of Explicit AI Images of Female Public Figures. Columbia Global Freedom of Expression. https://globalfreedomofexpression.columbia.edu/cases/oversight-board-case-of-explicit-ai-images-of-female-public-figures/ (Accessed 19th March 2026)
8. McAfee Corp. (2026). The Artificial Intelligence Trap: Global Deepfake Survey 2026. https://www.mcafee.com/ai-trap-survey-2026 (Accessed 19th March 2026)
9. Tondo, L. (2025, July 10). Giorgia Meloni seeks €100,000 damages over deepfake porn videos. The Guardian. https://www.theguardian.com/world/2025/jul/10/giorgia-meloni-seeks-damages-over-deepfake-porn-videos (Accessed 19th March 2026)
10. Hong Kong Commercial Daily. (2026). Commercial Crime Bureau busts online romance scam case arresting 31 men and women, losses of HK$34 million with 4 persons to appear in court today. https://hkcd.com/newsTopic_content.php?id=8739971 (Accessed 19th March 2026)
11. Ta Kung Wen Wei News Network. (2026). Hong Kong records over 15,000 cyber security incidents last year, up nearly 30% setting new record: Productivity Council says AI applications increase risks. https://www.tkww.hk/a/202601/28/AP6979dad2e4b0aa6cbcd3c79c.html (Accessed 19th March 2026)
12. Sing Tao Daily News. (2025). AI breaks through online banking verification: Scammers use stolen identity cards with altered photos to steal HK$220,000, police arrest 22 including mastermind. https://singtao.co.uk (Accessed 19th March 2026)