AI-generated NSFW material is a fraught area as governments and regulators around the world grapple with new ethical and legal questions. Most NSFW AI content would likely fall under existing intellectual property and privacy laws in many countries (and even decency laws, if it goes too far), but the technology arrived with so little warning that legislation has struggled to catch up with material that may already exist in digital form.
In the EU, the forthcoming AI Act targets high-risk applications of artificial intelligence, including systems used to produce deepfakes. AI systems that generate or disseminate NSFW material fall under a much stricter regime: operators that fail to comply with the Act's provisions (for example, by not providing a required impact assessment) could be fined up to 6% of global annual turnover. The legislation is also meant to make the industry more transparent, requiring companies to disclose when content is AI-generated and to keep people's privacy in mind (essentially, asking for permission before using someone else's face).
At the state level in the United States, legislation is attempting to deal with deepfakes and AI-generated explicit material. California and Texas have laws making it illegal to create sexually explicit deepfakes, swapping a person's face onto somebody else's body without their permission, if the edited files are posted online. Violations carry fines and potentially prison sentences. A 2020 study found that nearly all (96%) of deepfakes were pornographic and used images of women without their consent, prompting calls for the U.S. government to join other countries in enacting NSFW AI regulations at the federal level.
Privacy remains the central concern. The EU's General Data Protection Regulation (GDPR) protects against the unauthorised exploitation of personal data, which includes images in an AI training dataset. Companies that use personal image data to train NSFW AI models without clear consent could be in violation of GDPR and face fines of up to €20 million or 4% of the previous year's global annual turnover. This underscores the critical role that data consent and privacy play in building AI systems.
Ethical AI development has also drawn close attention from technology experts. AI ethics researcher Timnit Gebru has said that when an engineer is working on a system, "the first formulas they should think of are the loss terms for vulnerable populations due to harm." Her concern extends beyond NSFW AI to any technology being developed in an ethical vacuum.
International regulation may not yet be ready to address NSFW AI in a timely way, but it is certain that the law will increasingly turn its attention to this technology over the coming years. With every new case of misuse, governments and tech giants will face mounting pressure to crack down.