Twitter's bot problem is greater than the 5% the company claims, a hacker who was banned from the social media platform for toying with bots told Cybernews.
It's hard to convince well-known ethical hacker Joshua Crumbaugh that bots account for only 5% of Twitter's daily active users. After deep-diving into the marketplaces selling Twitter accounts, he believes the social media giant could do a better job of weeding out bots designed to misinform and scam people.
Here's what the numbers tell us. Twitter estimates that false accounts represent fewer than 5% of its daily active users. That translates to approximately 11.5 million accounts, given that Twitter has close to 230 million daily active users.
Crumbaugh's analysis of dark marketplaces paints quite a different picture. He found one seller offering 55 million Twitter accounts for sale.
“That right there draws some serious questions on their 5% number,” he told Cybernews.
Crumbaugh himself was banned from Twitter after openly demonstrating how relatively easy it is to create a spam bot. I sat down with him to learn more about his experiment and the Twitter bot problem.
You’ve experimented with bots on Twitter. Why? What was your intention?
This was towards the end of the 2020 elections. Bots were a really big topic at the time. One of the local reporters contacted me, and we got together and talked about this problem. She mentioned that it would be cool if we could do a demo. I said that it was not a problem and it would be fun to build a Twitter bot.
We built a Republican Twitter bot and a Democrat Twitter bot. What they did was very simple – they looked for specific hashtags that were strongly pro-Republican or pro-Democrat, anti-Republican or anti-Democrat. They grabbed all those tweets, retweeted them, liked them, and posted a series of random comments. The idea was to see how much interaction we'd get. Both got about the same amount of interaction, and it was a really interesting experiment.
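The behavior Crumbaugh describes can be sketched in a few lines. This is a toy illustration, not his actual code: the function names, the sample tweets, and the canned replies are all invented for the example, and a real bot would pass the planned actions to the Twitter API rather than just returning them.

```python
import random

# Hypothetical canned replies, purely illustrative placeholders.
CANNED_REPLIES = ["Exactly right.", "Couldn't agree more.", "Well said!"]

def plan_actions(tweets, target_hashtags, rng=None):
    """For each tweet containing a watched hashtag, queue the three
    actions the experimental bot performed: retweet, like, and a
    randomly chosen canned reply."""
    rng = rng or random.Random()
    watched = {h.lower() for h in target_hashtags}
    actions = []
    for tweet in tweets:
        words = tweet["text"].lower().split()
        if any(w.strip("#,.!?") in watched for w in words if w.startswith("#")):
            actions.append(("retweet", tweet["id"]))
            actions.append(("like", tweet["id"]))
            actions.append(("reply", tweet["id"], rng.choice(CANNED_REPLIES)))
    return actions

sample = [
    {"id": 1, "text": "Get out and vote! #Election2020"},
    {"id": 2, "text": "Nice weather today."},
]
print(plan_actions(sample, ["election2020"]))
```

The point of the sketch is how little logic is needed: match a hashtag, amplify the tweet, and vary the replies just enough to look organic.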
We talked about that, and not long after, my accounts got shut down – every account associated with my particular phone number. So my company's account [Crumbaugh is the CEO of PhishFirewall] got permanently banned. It was all because we showed just how easy it was to create that Twitter bot.
When I was completely honest with them and said, "Hey, I'm a security researcher doing a story with the news," they flatly refused to give me an API [Application Programming Interface] key. I had to go with less honest ways of doing it.
There are these sites, primarily out of Russia, that sell all kinds of accounts. For example, on one of the largest marketplaces out there, there's one seller, a huge one with thousands of orders and 90 five-star ratings, a certified premium seller on this marketplace, and he claims to have 55 million accounts with over 10,000 followers each. I bring that up because there's a lot of talk in the news about the percentage of bots. Right there is an army of bots representing more than the 5% Twitter is claiming, just waiting for somebody to buy them.
So you don't believe Twitter's claim that 5% of its daily users are fake accounts?
It's bigger than the 5%. To give an estimate is very difficult. They are getting smart with these bots. Half of these bots have more followers than I do, especially now when I have a new Twitter account. I'm looking at these marketplaces, and you can buy verified accounts with the blue checkmark. You can purchase developer accounts. So I said I needed developer accounts. Developer accounts are Twitter accounts with API keys, so it's really easy to program a bot. Just because your Twitter account is not a developer account does not mean you can't do a bot. It just gets slightly more difficult because you have to automate the mouse movements on the screen.
But that's relatively easy these days, and they've figured out how to do that as well. So I went to one supplier, and the price was $150 an account, but I asked, "Hey, can you supply 100,000 developer accounts?" and they said, "Well, we need 30 days, but absolutely."
That tells me that this problem is far bigger than 5%. I would estimate that it's bigger than the 20% I've also heard in the news. But it's really hard to say because the best numbers I can find for this are the marketplaces selling accounts. But when we have one seller with 55 million accounts, that right there draws some serious questions on their 5% number for sure.
Is Twitter doing enough to fight bots? Or maybe it's not that easy to get rid of bots?
In Twitter's defense, it is not that easy to identify bots. There's a lot more that they could do with machine learning and AI to detect these bots better because there are some sorts of predictable behaviors and indicators that they can use.
The bad guys are always trying to stay one step ahead, too, so their bots are getting smarter. It's a lot more random. Many of the bots aren't using APIs anymore. I talked about how it's easy to create a bot when you have API access, but many of the bots are emulating the mouse movements and keyboard movements on the screen to evade detection. All of that makes it a little bit more challenging to detect.
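The evasion technique Crumbaugh mentions, emulating human input rather than calling the API, boils down to avoiding machine-like regularity. Below is a rough, hypothetical sketch of two such ingredients: randomized delays between keystrokes and a mouse path that wobbles around the straight line. The function names and parameter values are invented for illustration; a real bot would feed these values into an input-automation library.

```python
import random

def human_delays(n_events, mean=0.15, spread=0.4, rng=None):
    """Randomized inter-event delays (in seconds). Perfectly constant
    timing is an easy bot tell, so each gap is drawn from a range
    around a human-plausible mean."""
    rng = rng or random.Random()
    return [mean * (1.0 + rng.uniform(-spread, spread)) for _ in range(n_events)]

def jittered_path(start, end, steps=20, jitter=3.0, rng=None):
    """A mouse path from start to end that wanders around the straight
    line instead of tracing it pixel-perfectly."""
    rng = rng or random.Random()
    (x0, y0), (x1, y1) = start, end
    path = []
    for i in range(steps + 1):
        t = i / steps
        path.append((x0 + (x1 - x0) * t + rng.uniform(-jitter, jitter),
                     y0 + (y1 - y0) * t + rng.uniform(-jitter, jitter)))
    path[0], path[-1] = (x0, y0), (x1, y1)  # endpoints stay exact
    return path
```

Detection systems looking for metronome-steady typing or ruler-straight cursor movement will find neither here, which is exactly why this class of bot is harder to spot.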
So when we learned about Musk's intention to buy Twitter, we also learned that he wants to weed out all the bots and maybe even manually authenticate users. Does that sound futuristic, or is it something that could actually be achieved?
This is Elon Musk we are talking about. He constantly achieves things we all thought were impossible. Twitter could do a much better job, because right now they're great at stopping people who are being completely honest, while the bad guys figure out how to get developer accounts en masse, get followers en masse, and do all of these things to build this entire marketplace.
I found three major marketplaces with millions of transactions each, all devoted to selling pretty much nothing but Twitter accounts, Instagram accounts, and things like that. You can buy accounts with thousands of certified followers, and the more followers an account has, the more expensive it becomes.
There's this massive marketplace devoted to this. Unless they can stop that, there's always going to be the ability to do this because they may stop someone who's overly honest with them, but the people who have figured this out as a business are not getting blocked.
They [cybercriminals] tell you the age of each account: you've got accounts that are less than six months old, and you've got millions of accounts that are more than five or even ten years old. This has been a problem for a long time, and it's only now gaining enough notoriety for anyone to try to do anything about it. As for Elon Musk's goals, I'd say anything is better than nothing.
So, if you had done your bots in secret, you'd still be able to operate those experimental bots?
And that's what I've told them. They would have never tied this to me if I'd been trying to hide my identity. I'm a security researcher, and part of trying to be a security researcher is sometimes you do things that might violate their terms of service. The general rule is don't hide your identity, make it clear who you are, that it's for research purposes, and you won't have any fallout from it. I don't regret it.
What harm can those bots cause, besides disinformation campaigns?
There's a lot. We've seen AI-driven phishing on Twitter and Facebook. So it's learning about you. Typically it's taking data that was received through some data breach or a series of data breaches because we all have a bunch of our data just sitting out there in these criminal databases. These bad guys are very well funded because of the ransomware they've done, they've got money, and so they've got resources, and they are building some very complex targeting capabilities. I've seen bots designed to steal your credentials and run malware on your computer.
I've read a story about a kid bullied at school. He built a bunch of bots to turn the tables and bully somebody else in school. They can be used for different purposes, everything from nation-state espionage to basic criminal activity. I've seen them used for securities fraud and things like that.