
Can Facebook clean up its data policy?


A leaked internal document has suggested that the social media giant could not rectify its suspect data practices even if it wanted to, citing swathes of information it can barely keep track of. But not everyone agrees: one digital privacy expert believes that US company Meta can and should take the necessary steps to comply with EU data protection laws.

Ambuj Kumar, CEO and co-founder of information security company Fortanix, is an optimist. Though perturbed by what he sees as Facebook’s breach of trust when it comes to sharing people’s data without their consent, he reckons a combination of democratic state intervention and market forces will eventually bring it to the table. Moreover, he thinks the solution to Big Tech’s privacy problem lies in technological innovation, in the form of a centralized database that all social media users can access and control.

I spoke with Kumar about the issues surrounding Facebook and how it uses our data. In response, he explained why, despite the platform’s shady track record in this area, he doesn’t see it as a cybercriminal organization – and how breaking it up into smaller entities could backfire.

Though it is incorporated in the US, Facebook – or Meta, as it’s now known – is obliged to abide by the General Data Protection Regulation (GDPR) because it has millions of users in the EU – but what is the reality of enforcement against such hugely powerful companies?

That's interesting. I fundamentally believe that in democracies people will do the right thing. And what we need is transparency. You mentioned Facebook is powerful, and certainly, they are, but there was an Austrian privacy activist named Max Schrems: a few years ago he approached Facebook, saying that he wanted it to delete his data. One thing led to another, and the Court of Justice of the EU eventually issued a ruling known as Schrems II. The ramification of that is that any company holding European data in the cloud would have to not only delete it on request but demonstrate that the American or other governments cannot get to it. My observation is that eventually, if people know what is happening, they will have lawmakers enact new regulations with new enforcement agencies. We have dozens of European customers who take our products because they need to comply with Schrems II.

Fair enough, but what do you say to the leaked Facebook document that was recently picked up by the industry press? The gist of it was that things have got to the point where it can’t even regulate its own data internally – it said it would be like emptying a pot of ink into a pool of water and then trying to recollect the contents. You’ve recently suggested that a central database that can be accessed and controlled by all social media users might be a solution, but how do we get there in the real world?

Great question. So the reality of the situation, going by that leaked document, is that they collect users' data, and just put it all on their systems. Facebook has tens of thousands of engineers, and any one [of them] working on some project might be able to access that data. So it is really haphazard: there is no way of figuring out who has access to what data, for what purpose, or for how long.

But the right way of looking at it is, for example, how we control the distribution of [illegal] drugs or black money. Anti-money-laundering laws require financial institutions to know their customers – exactly where the money comes from – and to trace it from the point of collection to its exit. The same thing can be done with data: this is a picture I am collecting from a user, and the only thing I can do with it is show it on my website to their friends. So you need to tag the data in that internal network. And that's how you ensure that it cannot be used by somebody who is not authorized. When a user says “I want to exit your system,” you can actually go and delete that data. And, by the way, that deletion is a very strict requirement of the GDPR, which Facebook needs to comply with.

Technically, it's not complicated – it's just that large companies realize that the more data they collect, the more control they have over users. From a technical perspective, it's very much a solved problem. What we need is for these companies to adopt those solutions.
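To make the tagging idea concrete, here is a minimal sketch in Python – purely illustrative, not anything Facebook or Fortanix actually runs, and with made-up names such as TaggedRecord and DataStore – showing data tagged at the point of collection with an owner and allowed purposes, accessed only for a declared purpose, and erased when the owner exits:

```python
from dataclasses import dataclass, field

@dataclass
class TaggedRecord:
    """A piece of user data tagged with its owner and the purposes it may serve."""
    owner_id: str
    payload: bytes
    allowed_purposes: set = field(default_factory=set)

class DataStore:
    """Toy store that enforces purpose limitation and supports full erasure."""

    def __init__(self):
        self._records = []

    def ingest(self, owner_id, payload, allowed_purposes):
        # Tag the data at the point of collection.
        self._records.append(TaggedRecord(owner_id, payload, set(allowed_purposes)))

    def access(self, owner_id, purpose):
        # Access is only granted for a declared, pre-approved purpose.
        return [r.payload for r in self._records
                if r.owner_id == owner_id and purpose in r.allowed_purposes]

    def erase_user(self, owner_id):
        # GDPR-style erasure: remove everything tagged to this user.
        self._records = [r for r in self._records if r.owner_id != owner_id]

store = DataStore()
store.ingest("alice", b"profile-photo.jpg", {"show_to_friends"})
print(store.access("alice", "ad_targeting"))     # [] - purpose not allowed
print(store.access("alice", "show_to_friends"))  # [b'profile-photo.jpg']
store.erase_user("alice")                        # the user exits the system
print(store.access("alice", "show_to_friends"))  # []
```

In a real system the tags would have to follow the data through every internal pipeline and be enforced at every access point – which is exactly the kind of adoption Kumar argues these companies have so far avoided.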

But what is going to bring these guys to the table and force them to comply? I can't see how fines will do it, because of the vast wealth that these companies command – unless the ceiling gets raised. So I don't doubt you when you say that there is a way to do this, but where is the will to see it through?

You are absolutely right. Legal systems can enact the laws, but enforcement and fines are an uphill battle, even if you raise the ceiling. Let's say a regulation – GDPR, for example – allows you to fine somebody up to 4% of global annual revenue. But when the stakes are high, these companies are also spending millions of dollars in the legal system – to save their tens of millions of dollars in fines. It's not going to be easy to prove in court that they are liable for that kind of money. So I agree that the legal system alone is not sufficient, and we have seen proof of that already.

But what can change is, if you look at it from Facebook's perspective, their stock is not doing well – and that does matter to Mark Zuckerberg. And one of the reasons Facebook is struggling is that their PR image is questionable. Educate people that companies are collecting data and using it for things that you didn't know about. Maybe even technical people like me have a role to play [with] an app that shows your data is going into fifty places [when] you thought that it was going to one place. So [it’s about] raising awareness, because if citizens of the world don't believe that the company is being ethical or moral, sooner or later it will start to show in their earnings.

Do you foresee a time when Facebook and other big tech firms are forcibly broken up by state intervention, as has been called for by some? And if so, do you think that would be helpful to solving the data problem, or a hindrance?

I don't know whether they'll be broken up or not, but if you break [Facebook] up into multiple companies then all of them will need to be concerned about privacy. And the reason we are having this conversation right now is because Facebook, being Facebook, does warrant certain attention. I don't mean to imply at all that small companies are more privacy-concerned, or that they do not misuse data. In fact, quite the contrary – a small company that is flying under the radar might [have] more impunity. If I have five hundred different companies that have access to my data, it's much more likely that one of them is misusing it. So that is an argument that breaking up Facebook might not solve this specific problem.

On the other hand, if you create a data bank [for] all social media companies, they will start to differentiate themselves not on the basis of how much data they have, but on how ethical or private they are about displaying that data, and what kind of controls they give to users. So maybe we can shift the conversation from “Facebook is the biggest social media company because it has 3.5 billion users' data” to “Facebook is the biggest social media company because they are the most privacy-conscious.” That might be one argument for taking user data and putting it in some kind of vault.
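As a rough illustration of what such a data bank might look like – a hypothetical sketch, not a description of any existing system, with invented names like DataVault and ExampleSocial – users would deposit their data once and then grant or revoke per-company access to it:

```python
class DataVault:
    """Toy model of a user-controlled data bank shared by social media companies."""

    def __init__(self):
        self._data = {}     # user_id -> {field_name: value}
        self._grants = {}   # (user_id, company) -> set of readable field names

    def deposit(self, user_id, field_name, value):
        # The user stores data once, in the vault, rather than with each platform.
        self._data.setdefault(user_id, {})[field_name] = value

    def grant(self, user_id, company, field_names):
        # The user decides which company may see which fields.
        self._grants.setdefault((user_id, company), set()).update(field_names)

    def revoke(self, user_id, company):
        # Revocation cuts off the company's access without deleting the data itself.
        self._grants.pop((user_id, company), None)

    def read(self, user_id, company, field_name):
        # A platform can only read fields it has been explicitly granted.
        if field_name not in self._grants.get((user_id, company), set()):
            raise PermissionError(f"{company} is not allowed to read {field_name}")
        return self._data[user_id][field_name]

vault = DataVault()
vault.deposit("bob", "photos", ["beach.jpg"])
vault.grant("bob", "ExampleSocial", {"photos"})
print(vault.read("bob", "ExampleSocial", "photos"))  # ['beach.jpg']
vault.revoke("bob", "ExampleSocial")
# vault.read("bob", "ExampleSocial", "photos") would now raise PermissionError
```

Under a model like this, the data stays in one user-controlled place, and platforms differentiate themselves by the access they are granted and how carefully they use it – the shift in incentives Kumar describes.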

What would you say to someone who says Facebook is in fact a huge cybercriminal organization – because it is misusing data in the same way as, say, initial access brokers who sell harvested credentials on the dark web? Is that an unfair comparison, or is there some truth in it?

I think it's an unfair comparison. When my father signed up for Facebook, he was certainly not aware that somebody might misuse his data, but if one of my friends who works for Facebook meets my father, my father will not react badly to him. He will think that he is working for a legitimate company. But on the other hand, if it’s a black-hat hacker... of course there is stigma, they do bad things. I think it is the breach of trust I find most troubling, but [though] one can argue about how messed up their data controls are, certainly all the data they are collecting is for internal use. I definitely believe it is good for their business – they can have you click on more ads and things like that – but Facebook is already a $500-600 billion company, so they will try to optimize things for the long term. On the other hand, if there is a cybercriminal that gets their hands on some of this data, they are not necessarily trying to build a billion-dollar enterprise – they will sell 100 million users’ data for $100. Very different intentions.

You say that, but Facebook is still monetizing that data without its customers’ full realization. That seems to be a moral gray area, from where I'm sitting...

That is true. What will separate the two is how Facebook behaves. So, for example, if they apologize for not having better controls, invest in technology that works, and are receptive to ideas, then it's one thing. But on the other hand, if they try to suppress the news, to sweep everything under the rug...

Which they have some power to do...

They have! Their own platform is maybe the most [read] news platform. Many times we hear about what is happening around the world from Facebook, so how they behave will tell us which camp they are in. Do they fundamentally believe that their business is compatible with people's privacy, or do they believe that those two things are at odds and they will always have to collect and misuse data? They can choose their path, and I think we need to watch it.

And further to that, in your best-educated guess, where do you see Facebook on these issues in the next few years?

Facebook recently joined the Confidential Computing Consortium, which promotes technology to prevent exactly this kind of thing – [so that] a company cannot use data it isn't supposed to. Facebook certainly needs to invest in technology to make these things better. I'm an optimist. I believe that Facebook is competing with TikTok for user attention, and I will definitely choose – bad as it may sound – Facebook over TikTok.

Why, is that a case of the devil you know?

Yes. If push comes to shove, US citizens can force Facebook to behave a certain way. Facebook is headquartered 20 miles from my home, I know a bunch of people who work there, and they are an American company bound by Western culture. TikTok, on the other hand, is controlled by a Chinese company... Even if, let's say, Europe and the US came together and said “you must do this or that, or there is a $100 million fine”, do you think that we could impose those things on TikTok? I believe not. So it is definitely the devil I know – if it's the devil that we can control.


