
The Italian Data Protection Authority has launched an investigation into Sora, OpenAI’s large-scale AI model that can generate videos from text prompts and pre-existing images or videos.
“Given the possible implications that the ‘Sora’ service could have on the processing of users’ personal data in the European Union and in Italy in particular, the Authority has asked OpenAI to provide a number of clarifications,” the Italian watchdog said.
Regulators are giving the company 20 days to respond to the following questions:
- How is the algorithm trained?
- What data is collected and processed to train the algorithm?
- Is it being trained on any kind of personal data?
- Does it collect special categories of data, such as religious or philosophical beliefs, political opinions, genetic data, health, or sexual life?
- Where does it collect the data from?
Unveiled just a month ago, OpenAI’s Sora has already made waves after demonstrating stunning capabilities to generate realistic videos.
This week, WSJ journalist Joanna Stern’s interview with OpenAI’s Chief Technology Officer (CTO) Mira Murati went viral after Murati acknowledged that Sora was trained on publicly available and licensed data.
“Watch this OpenAI thief squirm when questioned about data. They KNOW they're stealing, look at her face and how she dodges the question. 'Publicly available' doesn't mean you have a license, they're nothing more than common criminals. Sue them into oblivion,” read a comment by Reid Southen, a film concept artist and illustrator.