Apps that create deepfake nudes should be banned
Internet Matters has called on the Government to strengthen the Online Safety Act so that it bans tools capable of creating deepfake nudes, after estimating that as many as half a million children have encountered such images online. The charity raised particular concerns about AI-powered “nudifying” apps, which can generate non-consensual explicit images of people, including children.
A study by Internet Matters found growing unease among young people about deepfake nudes, with 55% of teenagers saying that having a deepfake nude of them circulated would be more distressing than a real image being shared. The organisation said stricter regulation is needed because current law does not go far enough: possession of such images is a criminal offence, but the AI models used to produce them are not themselves illegal in the UK.
The Internet Watch Foundation (IWF) recently warned of a rise in AI-generated child sexual abuse material appearing on public websites, content that had previously been largely confined to the dark web. Internet Matters said that roughly 99% of deepfake nudes depict women and girls, and warned that such material facilitates further forms of sexual abuse and exploitation.
Carolyn Bunting, co-chief executive of Internet Matters, described the creation of deepfake images as a serious violation of privacy and dignity, one that particularly affects girls. She said children fear becoming victims of deepfake image abuse, and called for collective action from government and industry to prevent the technology being misused in this way.
The survey behind the report, covering 2,000 parents and 1,000 children in the UK, found that teenage boys are more likely than girls to have encountered nude deepfakes and are also more likely to create them, while girls are more often the victims. It also underlined demand for education on the technology: 92% of teenagers and 88% of parents want the risks of deepfakes taught in school.
Jess Phillips, the minister for safeguarding and violence against women and girls, acknowledged Internet Matters’ work in shedding light on the misuse of emerging technologies. She stressed the need to work with organisations such as Internet Matters to tackle AI tools being used to produce child sexual abuse material, and to reduce the disproportionate impact of this abuse on women and girls.
The call to ban apps that create deepfake nudes reflects growing concern about the harm and exploitation such technology enables. Campaigners argue that stronger regulation, alongside education in schools, is essential to protecting individuals, and children in particular, from the damage deepfake content can cause online.