Written by: Adrienne Calzada
Edited by: Kendall Fagan

In 2015, Sam Altman, CEO of OpenAI, predicted that “AI [Artificial Intelligence] will probably […] lead to the end of the world. But in the meantime, there will be great companies created.” An interview conducted in February 2025 reflects a similar sentiment: Altman dismisses concerns about deepfakes and synthetic media by insisting that “real pictures today aren’t real,” arguing that society has already drifted away from a strict threshold of authenticity. As he puts it, “we have accepted some gradual move from the photons in the camera”: TikTok videos are routinely run through editing filters, and some scenes are generated entirely. The boundary of what counts as “real” thus keeps shifting. Ten years later, Altman continues to prioritize profit over safety, ultimately destabilizing epistemic trust.
OpenAI was founded in late 2015 with the mission of ensuring that artificial general intelligence (AGI) “benefits all of humanity.” Altman framed OpenAI as a steward of a humanity-wide future, claiming to avoid enabling uses of AI or AGI that harm humanity or unduly concentrate power. Today, however, it is a private company that is effectively setting the technological terms of human identity, communication, and even epistemic trust. While AI offers monumental advantages such as accelerating research and creative production as well as transforming medicine and accessibility, it also introduces unprecedented security risks. Seemingly harmless AI-generated images and videos circulate widely online, but the same underlying technology powers deepfakes capable of political manipulation, psychological warfare, and mass disinformation. Deepfakes are not simply “content”; they are a tool for destabilizing the shared reality upon which democratic legitimacy depends.
As CEO of the most influential generative AI firm in the world, Altman has a responsibility to foreground these risks. However, when asked where a line between real and synthetic ought to be drawn, he deflects, suggesting that “the threshold of what counts as real keeps moving.” This normalization of unreality is precisely what makes deepfakes dangerous: once authenticity becomes fluid, accountability becomes optional. If realism no longer anchors truth, trust becomes ungoverned and democratic societies become easier to manipulate.
The speed at which AI technology is advancing makes it difficult for policymakers to propose and enact adequate and timely legislation. Experts at the Center on Technology Policy at the University of North Carolina therefore recommend that policymakers focus on rights rather than on regulating the technologies themselves. In this context, governments bear a dual obligation: to shield citizens from AI-enabled misinformation and to safeguard their own institutional decision-making processes from being compromised by it. Denmark is assuming this responsibility, pioneering an unprecedented legal shift. In June, the Danish Minister of Culture, Jakob Engel-Schmidt, proposed amending Denmark’s Copyright Act to grant individuals the right to copyright their face, voice, and body in digitally generated content. In empowering individuals against the misuse of their digital identities, Denmark is challenging traditional legal frameworks, disrupting the legal divide between intellectual property and personal identity. In doing so, Denmark is suggesting that existing privacy and AI governance frameworks are no longer adequate for the realities of synthetic media.
Denmark’s proposal raises crucial questions about what effective AI regulation should look like, while also underscoring how difficult it is for governments to craft workable safeguards in such a rapidly evolving technological landscape. Furthermore, intellectual property lawyer Luca Schirru calls into question the validity of Denmark’s proposal, noting that deepfakes are a matter of personality rights rather than copyright. Confusing these two frameworks risks unintended consequences: “Copyright can be a trap; it can turn our bodies into consumer goods,” remarks Lana, a board member of the Copyright Observatory Institute (IODA) and Creative Commons Brazil.
The social and ethical dimensions are equally urgent. Deepfakes disproportionately target women through non-consensual sexual imagery, turning technology into a weapon of gendered violence. For journalists, activists, and public figures, especially women and marginalized voices, this technology is a new form of silencing. Protecting one’s likeness is, therefore, an issue of dignity, autonomy, and safety for those most vulnerable to technological abuse.
While Denmark centers its legislation on individual rights and digital sovereignty, the European Union’s AI Act, which is set to be implemented gradually from 2024 to 2026, instead focuses on managing technological risk and compliance. It classifies AI by risk level rather than by ownership of identity, prohibiting systems that manipulate behavior or conduct untargeted biometric scraping, and imposing strict transparency rules for high-risk uses such as law enforcement, education, and employment. Generative AI models like those developed by OpenAI fall under the Act’s “general-purpose AI” provisions, which require documentation, copyright compliance, and risk evaluation for models with systemic impact. However, synthetic identity remains legally ambiguous under this framework.
Denmark and the European Union’s efforts are indicative of the nuanced and challenging nature of AI regulation. The potential threat of AI raises the question: how can the protection of citizen rights and technological innovation work in tandem? As AI continues to blur the boundaries between the human and the synthetic, policymakers will need to define a new kind of protection: the right to authenticity. While states try to implement democratic safeguards, Altman continues to consolidate influence over the digital infrastructures that shape public perception, advancing a form of technocracy in which private actors determine the conditions of truth and credibility. Without adequate protections, truth risks becoming negotiable, completely destabilizing the foundations of democracy and peace. Denmark’s precedent therefore serves as a reminder that technology can erode democratic foundations if precautionary measures are not urgently taken.
