
As we keep guarding our most precious data with laughably weak passwords, it’s no surprise that companies are looking to replace us with artificial intelligence (AI).
A new study from Cybernews analyzing over 19 billion freshly exposed passwords confirms what we already suspected: we’re still terrible at basic cybersecurity.
Lazy keyboard patterns like “123456” dominate, and a staggering 94% of passwords are reused or duplicated. The data, pulled from leaks between 2024 and 2025, also shows a persistent habit of using names or movie characters as passwords.
It gets worse. The study found that 42% of passwords were just eight to 10 characters long, with nearly a third composed solely of lowercase letters and digits. Despite years of warnings, default passwords like “password” and “admin” remain widespread, as do profanities.
This simplicity makes passwords easy prey for brute-force attacks in what researchers call “a widespread epidemic of weak password reuse.”
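To put “easy prey for brute-force attacks” in perspective, here is a rough back-of-the-envelope sketch. The guess rate below is a hypothetical assumption for illustration, not a figure from the Cybernews study; it compares the search space of an eight-character lowercase-plus-digits password with a longer, mixed-character one.

```python
# Rough brute-force cost estimate for two password policies.
# The guesses-per-second figure is a hypothetical assumption for
# illustration only, not a number taken from the Cybernews study.

GUESSES_PER_SECOND = 1e10  # assumed offline cracking rate

def keyspace(alphabet_size: int, length: int) -> float:
    """Total number of candidate passwords for a given alphabet and length."""
    return alphabet_size ** length

def worst_case_years(alphabet_size: int, length: int) -> float:
    """Time to exhaust the full keyspace at the assumed guess rate, in years."""
    seconds = keyspace(alphabet_size, length) / GUESSES_PER_SECOND
    return seconds / (60 * 60 * 24 * 365)

# 8 characters, lowercase letters + digits (26 + 10 = 36 symbols)
print(f"8 chars, lowercase+digits: {worst_case_years(36, 8):.6f} years")

# 12 characters, full printable ASCII (~95 symbols)
print(f"12 chars, mixed charset:   {worst_case_years(95, 12):,.0f} years")
```

Under these assumptions, the eight-character lowercase-and-digits password can be exhausted in a matter of minutes, while the longer mixed-character one would take on the order of a million years.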
Even those in top government positions aren’t immune. Tulsi Gabbard, now serving as the US Director of National Intelligence, reportedly reused the same weak password across multiple accounts for years.

It included the word “shraddha,” a term from Hinduism referring to a ritual honoring the dead, which appears to hold personal meaning for Gabbard, a reminder that sentiment can also override security. Data reviewed by Cybernews shows that more than 100,000 users today still rely on the same password or a variation of it.
Which brings us to the launch of REAL ID, now required for domestic air travel in the US and for entry into certain federal facilities.
While biometric ID systems are becoming standard globally, including in the EU, the US rollout feels less like a convenience upgrade and more like a control mechanism. Some cybersecurity experts have gone as far as calling REAL ID a “surveillance superweapon” and a “giant bullseye for every hacker in the world.”
All of this – the passwords, the breaches, the federal overreach – points to one thing: human error. If we’re the weakest link in cybersecurity, it’s no wonder machines are being trained to cut us out of the equation entirely.
AI doesn’t forget passwords. It doesn’t click phishing emails. And it doesn’t name its login credentials after childhood pets or spiritual concepts. That’s why companies are handing more responsibility over to machines – not just in security, but in every corner of the workplace. Microsoft recently said that AI now writes up to 30% of its code, while Meta wants half of its development done by AI in the next year.
Fiverr’s CEO, Micha Kaufman, bluntly warned his employees that “AI is coming for your jobs,” adding that those who fail to adapt are “doomed.” The message may sound alarmist, but for a company built on outsourcing low-skill freelance tasks, it’s also realistic.
Cybersecurity giant CrowdStrike is laying off 500 workers to streamline operations through automation. Duolingo said it’s ditching contractors as it becomes an “AI-first” company.

Big tech companies that helped build these systems are treading more carefully with their message. IBM said AI had replaced hundreds of HR roles, but also created new jobs in programming and sales.
Microsoft promises a future where people aren’t replaced, but empowered, managing their own AI agents rather than competing with them. And Google is funding the training of 100,000 electricians to keep the growing AI infrastructure running.
Like any technology, AI is a double-edged sword. If we can learn from our flaws, adapt, and work alongside these systems rather than fear them, the future may not be about replacement after all. It could be safer, too.