Hacker’s guide to asymmetrical OSINT warfare


I’ve never experienced a warzone or had to survive an invading army, but I know something about powerlessness.

I grew up in rough neighborhoods near the border of Tijuana, Mexico. Most of my family has been carjacked or abducted. I was kidnapped once, albeit briefly. Powerlessness was familiar, but it never taught me hopelessness.

Someone once said, “To be powerless is to be at the mercy of the powerful, and that is the root of all oppression.”


That hit home. And if you’ve followed my writing, you know this: I have a deep disdain for arbitrary control.

I’ve lived under it. From a fanatical religious cult to 11 years in the US federal prison system – an authoritarian environment where rumor mills and disinformation replace access to real news.

This is the new world now. You don’t have to be in Gaza or Ukraine to feel the effects of narrative warfare. Information is the battlefield, and control is the objective.

But rather than leaving you feeling helpless, this playbook is designed to empower you: to expand your imagination with real-world OSINT techniques and give you tools for running counter-campaigns against the onslaught of wartime propaganda.

As historian Howard Zinn once said, “If you can control information, you can control people.”

While we don’t want to control people the way the enemy wants to, we do want to help shape perception around facts and truthful narratives, which counters the goal of enemy disinformation and misinformation campaigns.

That’s why this guide exists: to help punch through state-run narrative control and turn OSINT into a force multiplier for grassroots resistance.

While state militaries rely on satellites, SIGINT, backdoors, and scraped data, asymmetrical actors use troop selfies, convoy footage posted to Twitter (X), and weather apps to map enemy positions, and that mapping can disprove false narratives designed to sow fear, confusion, or something worse.


Sound interesting? Let’s go down the rabbit hole of asymmetrical warfare for the everyday person like you and me.

Identifying bot behaviors and tracking disinformation campaigns

It is well documented that enemy forces utilize social media platforms to spread propaganda at scale. The tactic is as old as warfare itself, spanning thousands of years; it increased exponentially during WWII and the Cold War and continues to the present day.

Almost every country in the world engages in some form of propaganda, the West very much included. Bots also play a strong role in amplifying it, and even ChatGPT has been shown to repeat perception-shaping false news engineered by Russian propagandists. This is nothing new. So, how do you counter it?

Some will ask whether this is even possible. Consider this: groups like Bellingcat have demonstrated how grassroots analysts, armed with fact-checking and OSINT, can outperform and debunk government narratives.

There’s a little thing they invented back in the pre-common era called counterpropaganda. In the modern context, users can expose disinformation through a variety of methods, such as reverse image searching doctored media, geolocating fake battle footage, and especially archiving posts before they get deleted. Sometimes material is posted by accident and pulled minutes later, which is exactly why archiving matters. And sometimes you just need to get the propaganda accounts shadowbanned.
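
Archiving is the step that’s easiest to automate. Below is a minimal Python sketch that submits a list of post URLs to the Internet Archive’s Save Page Now endpoint; the URLs and delay are placeholders, and heavy use really calls for an authenticated archive.org account rather than anonymous requests.

```python
import time
import requests

# Placeholder URLs of posts worth preserving before they get deleted
POST_URLS = [
    "https://x.com/example_account/status/1234567890",
    "https://t.me/example_channel/5678",
]

def archive(url: str):
    """Ask the Wayback Machine's Save Page Now endpoint to capture a URL."""
    resp = requests.get(f"https://web.archive.org/save/{url}", timeout=120)
    if resp.ok:
        # The snapshot location is typically reflected in the final URL or
        # the Content-Location header.
        return resp.headers.get("Content-Location", resp.url)
    return None

for post in POST_URLS:
    snapshot = archive(post)
    print(f"{post} -> {snapshot or 'capture failed'}")
    time.sleep(15)  # anonymous Save Page Now requests are rate-limited
```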

How do you identify them? The answer lies in simple online behaviors we all recognize, just applied to the theater of war. A rough scoring sketch follows the list.

  • High volume posting: If social media post activity is running around the clock, this suggests the use of bots or shift-based control.
  • No personal content: The absence of selfies, personal photos, food, friends, or family-related content means the account is likely a sock puppet.
  • Only follows or interacts with accounts that share the same political views: This narrow behavior could indicate a coordinated disinformation campaign, especially if the account avoids all opposing opinions and only promotes one side.
  • No replies, only posts/retweets: This is classic behavior associated with bots.
  • Instant commentary on breaking news: Generally pre-scripted narratives or alerts connected to information cells.
  • Repetitive phrases: Military or political slogans copied and pasted identically across many profiles are a common linguistic marker of ideologically driven campaigns, and they help you connect the sock accounts in a propaganda network.
  • Overuse of emojis and hashtag bombing: This is commonly associated with artificial amplification tactics. Foreign scammers using bots clumsily rely on these in excess.
  • Disregard for nuance or evidence: Propaganda doesn't like to debate. Its sole purpose is to assert a narrative, not question it.
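
Several of these signals can be scored mechanically once you’ve exported an account’s posts via a scraper or a platform data export. Here’s a rough sketch assuming a simple CSV with timestamp and text columns; the thresholds are illustrative, not vetted cut-offs.

```python
from collections import Counter

import pandas as pd

# Assumed input: one account's posts with "timestamp" and "text" columns
posts = pd.read_csv("account_posts.csv", parse_dates=["timestamp"])
posts["text"] = posts["text"].fillna("")

signals = {}

# High-volume, around-the-clock posting: activity in nearly every hour of the day
hours_active = posts["timestamp"].dt.hour.nunique()
per_day = len(posts) / max(posts["timestamp"].dt.date.nunique(), 1)
signals["round_the_clock"] = hours_active >= 20 and per_day > 50

# Repetitive phrasing: identical text posted over and over
dupes = Counter(posts["text"].str.strip().str.lower())
top_text, top_count = dupes.most_common(1)[0]
signals["copy_paste_slogans"] = bool(top_text) and top_count > 5

# No replies, only posts/retweets (assumes replies start with "@" and
# retweets with "RT @" in the exported text)
replies = posts["text"].str.startswith("@").sum()
retweets = posts["text"].str.startswith("RT @").sum()
signals["never_replies"] = replies == 0 and retweets > len(posts) * 0.8

# Hashtag bombing: average number of hashtags per post
signals["hashtag_bombing"] = posts["text"].str.count("#").mean() > 5

print(signals)
```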

There are several ways to determine whether an account is legitimate or being used to spread propaganda. I use Twitter scrapers and other tools to export social media posts and chats, then feed them into ChatGPT. That lets me analyze the content quickly with targeted queries, narrow down the criteria I’m looking for, and build comprehensive statistical reports.
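
If you script that workflow, it boils down to export, chunk, and prompt. Here’s a minimal sketch using the OpenAI Python client; the model name, prompt, and input file are illustrative, and for sensitive material a locally hosted model is the safer choice.

```python
import pandas as pd
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

# Posts exported by your scraper of choice into a simple CSV
posts = pd.read_csv("account_posts.csv")["text"].dropna().tolist()
batch = "\n".join(posts[:200])  # keep the prompt within context limits

prompt = (
    "You are assisting an OSINT analyst. For the following social media posts, "
    "summarize recurring slogans or copy-pasted phrasing, dominant themes, signs "
    "of coordination with other accounts, and give an overall assessment of "
    "whether this looks like an authentic user or a propaganda sock account.\n\n"
    + batch
)

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```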

An example of an intel report generated from exported social media posts.

Another way to gauge whether an account is legitimate is Bot Sentinel, which assigns a score estimating the likelihood that an X user profile is a bot.

Another free online resource is Hoaxy. It lets you track and visualize the spread of information across social media, making it invaluable for tracking information campaigns regardless of intent. It displays the usernames of accounts that have shared specific links or claims.


For our own campaign, this lets you track the spread of misinformation and disinformation even when bots are doing the propagating. It also provides real-time monitoring, so you can watch how a claim travels and through whom.

While it may not be able to pinpoint the exact source of where the information began, it can help infer the point of origin or identify early amplifiers by analyzing the network structure and the timeline of how the information spreads.
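
The same inference works on any list of share events you collect yourself, whether exported from Hoaxy’s visualization or gathered with your own scrapers. Here’s a small sketch using networkx, assuming a CSV of edges where source shared the claim and target picked it up from them, with timestamps; the column names and file are placeholders.

```python
import networkx as nx
import pandas as pd

# Assumed input: share events with columns source, target, timestamp
edges = pd.read_csv("share_events.csv", parse_dates=["timestamp"])

G = nx.DiGraph()
for row in edges.itertuples():
    G.add_edge(row.source, row.target, timestamp=row.timestamp)

# Earliest sharers: accounts pushing the claim before almost anyone else
first_seen = edges.groupby("source")["timestamp"].min().sort_values()
print("Earliest accounts to push the claim:")
print(first_seen.head(10))

# Likely amplifiers: accounts whose shares reached the most downstream users
top_amplifiers = sorted(G.out_degree(), key=lambda kv: kv[1], reverse=True)[:10]
print("Top amplifiers by direct reach:", top_amplifiers)

# Accounts bridging otherwise separate clusters are often coordination hubs
centrality = nx.betweenness_centrality(G)
hubs = sorted(centrality.items(), key=lambda kv: kv[1], reverse=True)[:10]
print("Bridging hubs:", hubs)
```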

An example of Hoaxy tracking social media interactions with the keyword “Ukraine”.

Taking back the narrative: sockpuppet warfare

Let’s break down what a sockpuppet, or fake online identity, can be used for in the context of this article:

  • Gather useful intelligence
  • Disrupt propaganda ecosystems
  • Join private or closed communities
  • Silently shift narratives from within
  • Blend into opposition bubbles

These can be created (ethically) for different purposes. For example, a sock account might be curated to blend into the enemy’s sphere of influence, or crafted with diametrically opposing content for counter-messaging purposes.

This is the easy part, although fighting propaganda-spewing accounts isn’t exactly glamorous work. But it is vital if you want to diminish the social impact of enemy bot farms.

The good news is, you can utilize AI-powered bots or manually manage sock accounts to monitor narrative propagation. Once you’ve got your operation rolling, you can use Hoaxy to track how it spreads.

That means it might be a good idea to start your own social media sockpuppet bot farm. But if that’s not your thing, you can still contribute by making posts and amplifying other accounts to help spread truthful narratives.


The difference between enemy bot farms and your operation is how you deliver information, and that difference is a game-changer. It comes from human-centered storytelling that resonates more than state-crafted propaganda ever can.

Memes, irony, and satire are a common international language for ridiculing enemy narratives. We see them all day long, any day of the week, on social media, depicting the president of this country or the prime minister of that country in ironic scenes.

Next, you trendjack hashtags and bait algorithmic recommendations to climb the visibility ladder. This is the act of inserting your content into a conversation or hashtag that’s trending.

This causes strategic disruption of the visibility that an enemy is gaining through trending hashtags, and ultimately can lead to hijacking the false narrative. It’s essentially stealing the enemy’s megaphone and using it against them.

People-powered surveillance

I learned something while in jail for hacking back in 2009. While my jail unit only had two cameras, the inmates often referred to other inmates as walking cameras. In the real-world context, people are proverbial cameras who see and witness events first-hand. And since most of us carry smartphones with cameras, the saying becomes quite literal.

Having people on the ground in conflict zones who can safely document events is a powerful way to expose false military and government propaganda. This has become especially critical during the war between Israel and Gaza, where one side has worked to minimize reports of civilian casualties in mainstream media, while the other has presented concrete, documented evidence of a genocide unfolding behind closed doors.

Analyzing the media artifacts

Whether we are observing the industrious creativity of Ukrainian troops or IDF soldiers mocking the deaths of Palestinian families on TikTok, unauthorized social media uploads can provide invaluable insights, revealing locations, equipment, psychological states, operational security breaches, and broader patterns of behavior that might otherwise remain hidden.

If certain soldiers have a propensity to document aspects of a war, I am going to follow that account and look for artifacts that give away more than viewers were meant to see.
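
One concrete artifact worth checking is embedded metadata. Major platforms strip EXIF data on upload, but files passed around as Telegram documents, in file dumps, or on personal sites often keep it. A minimal sketch using Pillow; the filename is a placeholder.

```python
from PIL import Image
from PIL.ExifTags import GPSTAGS

def gps_from_image(path: str) -> dict:
    """Return any GPS-related EXIF tags embedded in an image file."""
    exif = Image.open(path)._getexif() or {}
    gps_raw = exif.get(34853, {})  # 34853 is the GPSInfo EXIF tag
    return {GPSTAGS.get(tag, tag): value for tag, value in gps_raw.items()}

info = gps_from_image("downloaded_photo.jpg")
if info:
    print("Embedded GPS data:", info)  # latitude/longitude as degree tuples
else:
    print("No GPS metadata (likely stripped by the platform).")
```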


A skilled OSINT-minded individual could also uncover whether that poster uses apps that leak geolocation data, such as Google’s location services, which can aid researchers in following troop movements.


Taking screenshots of specific geographical areas shown in videos, such as street signs, license plates, and location-specific tourist spots, and running them through a reverse image search tool can have surprising results.
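
You can semi-automate the first half of that process: pull stills out of a suspect video and compare them against imagery you’ve already collected, such as known game footage or photos from older conflicts, using perceptual hashes. A sketch using OpenCV and the imagehash library; the file paths and match threshold are illustrative.

```python
import cv2
import imagehash
from PIL import Image

def frame_hashes(video_path: str, every_n_frames: int = 90) -> list:
    """Grab periodic stills from a video and return their perceptual hashes."""
    hashes, idx = [], 0
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % every_n_frames == 0:
            rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
            hashes.append(imagehash.phash(Image.fromarray(rgb)))
        idx += 1
    cap.release()
    return hashes

# Compare a viral clip against a still you suspect it was lifted from
reference = imagehash.phash(Image.open("known_arma3_screenshot.png"))
for h in frame_hashes("viral_combat_clip.mp4"):
    distance = h - reference  # Hamming distance between the two hashes
    if distance <= 8:  # small distance = very likely the same scene
        print("Frame matches the reference image, distance:", distance)
```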

This same method can be used to spot fake combat images and videos, whether they originate from state-backed or user-created disinformation campaigns. A reverse image search can help assess the source and quickly discredit false visuals. While this won’t necessarily stop their spread, it serves as an important first step.

For example, in recent memory, the military simulation games Arma 3 and Squad have been widely used in misinformation campaigns to falsely portray real-life combat scenes during the Russian invasion of Ukraine and the Israel-Hamas war. Although realistic war footage from video games has become a common tactic, it rarely holds up under scrutiny.

Whether you’re a de facto independent journalist on the ground or the one fighting disinformation in cyberspace, or even the one combing through media posted by opposing military forces, you are part of a network, just like the enemy. You are turning their tools against them, and in the end, the truth always prevails.