Meta has strongly denied using pornographic material in the training of its artificial intelligence models, after reports suggested that adult content might have been part of internal datasets.
The company clarified that while some inappropriate material may have been unintentionally downloaded by staff, it was never used in AI model development or training.
According to Meta’s official statement, an internal review found that certain employees had downloaded NSFW (Not Safe for Work) content from the internet while collecting public data for AI research.
However, the company emphasised that these files were immediately flagged and excluded from all machine learning datasets.
Meta reassured users that its AI systems, including those behind Facebook, Instagram, and its new generative AI tools, are built on responsibly sourced and filtered data.
The incident has reignited discussions about ethical AI training and the need for stricter oversight in data collection.
Experts have warned that even the accidental inclusion of adult material in training datasets can lead to biases, ethical violations, or misuse of AI-generated outputs.
Meta has responded by reinforcing its policies and tightening its content-filtering systems to prevent similar issues in the future.
A Meta spokesperson stated that the company follows rigorous data governance protocols and that any violations of content-sourcing policies are taken seriously.
The company reaffirmed its commitment to responsible AI development, transparency in the training process, and adherence to international data protection standards.