
Artificial intelligence (AI) face-swapping applications have surged in popularity, allowing users to superimpose faces onto different bodies in images and videos. While these tools offer entertainment value, they have also been exploited to create non-consensual explicit content, leading to significant harm for unsuspecting individuals.
The GenNomis Data Breach
In March 2025, a cybersecurity researcher discovered and reported to vpnMentor an unprotected database belonging to GenNomis, a South Korean AI company specializing in face-swapping and “Nudify” services. The 47.8 GB database contained 93,485 images and JSON files, including explicit AI-generated images of individuals who appeared to be minors. The exposure of such sensitive content underscores the potential for misuse inherent in AI-driven image manipulation technologies.
Wider Implications and Risks
The misuse of AI face-swapping apps extends beyond isolated incidents. Educational institutions have reported alarming cases where students use these applications to create explicit images of peers without consent. For instance, in Victoria, Australia, school officials have raised concerns about the psychological distress and reputational damage inflicted on victims of such deepfake images.
Similarly, in Los Angeles, schools have faced incidents where AI-generated deepfake images of students were disseminated online, prompting warnings to parents and students about the ethical and legal ramifications of creating and sharing such content.
Legal and Ethical Challenges
The rapid advancement of AI technologies has outpaced existing legal frameworks, making it challenging to address the creation and distribution of non-consensual deepfake content. While some jurisdictions have introduced legislation to criminalize the sharing of intimate deepfakes without consent, enforcement remains complex. For example, the UK’s Online Safety Act made it illegal to share AI-generated intimate images without consent, with the relevant offences coming into force in January 2024.
In China, the Beijing Internet Court ruled in June 2024 that unauthorized use of individuals’ images in AI face-swapping apps infringes on personal information rights, highlighting the global recognition of the issue.
The Path Forward
Addressing the risks posed by AI face-swapping apps requires a multifaceted approach. Legislative bodies must update and enforce laws to protect individuals from non-consensual image manipulation. Technology companies should implement robust security measures and ethical guidelines to prevent misuse of their platforms. Public awareness campaigns are essential to educate users about the potential harms of these technologies and promote responsible usage.
As AI continues to evolve, it is imperative to balance innovation with the protection of individual rights and privacy, ensuring that technological advancements do not come at the expense of personal dignity and security.