A controversial change to how user data can be analyzed seems designed to tempt regulators to step in.
Elon Musk rarely shies away from controversy. From picking fights with political leaders, as he has done in the last week, to arguing with presidents, prime ministers, and other lawmakers over his company's rights to operate, it's clear that Musk believes his thinking and approach to life and business are a match for anyone's.
But a recent change to how X processes user data, which went largely unnoticed until users began to share information about it on social media, suggests that Musk may either think he’s above the law – or that data protection law isn’t fit for purpose.
In late July, users began realizing that hidden among their X settings was a checkbox allowing their data to be used to train Grok, the company's generative AI chatbot, built to compete with ChatGPT.
Musk has a long history of enmity with OpenAI, having helped fund it at its founding before moving away from the company over concerns that it was not following its founding principle of open-source technology for the benefit of everyone.
What changed?
The change, squirreled away under the Data Sharing tab in settings, allows "posts as well as your interactions, inputs, and results with Grok" to be used for training and fine-tuning by default. That may sound innocuous, but the additional detail that follows outlines the scale of the data collection and why some users would find it troublesome.
The text reads: “To continuously improve your experience, we may utilize your X posts as well as your user interactions, inputs, and results with Grok for training and fine-tuning purposes.”
The setting also notes that interactions, inputs, and results on X may be shared with xAI, the company behind Grok, "for these purposes."
Individuals, as well as organizations including the EFF, came out against the change, which was not proactively communicated to users, leaving them to discover it themselves and work out that they needed to opt out. The fix is a quick one, albeit one that users shouldn't have to make themselves: open the Data Sharing tab in X's settings and untick the box.
Why does it matter?
Data protection experts suggest that automatically opting users into having their posts on the social media platform used to train Grok is potentially in breach of European data protection rules, which require clear communication and an explicit opt-in, rather than an opt-out, for such data use.
But more than that, it highlights how Musk is willing to risk the ire of regulators in pursuit of his business goals, seemingly at any cost, and to flout rules put in place to protect users.
It's a pattern many Musk watchers will recognize. In April, Musk picked a fight with Australia's eSafety Commissioner over how his platform handled footage of a stabbing at an Australian church. Citing his belief in free speech, Musk held out against removing the offending posts, which the regulator feared could incite further violence.
But it's far from the only time that Musk has called regulators' bluff. He has routinely rankled officials within the European Union, who see him as treating rules as optional rather than required. And this latest decision looks like an escalation of that approach: on the face of it, it is quite an egregious breach of the rules.
But whether EU regulators choose to step in and censure Musk will be a test of their mettle. Musk appears to be goading them to act. And for them, it's now put up or shut up time.