
Some still call the artificial intelligence (AI) craze a bubble, but others have decided to use the new technology for work and now say they can’t imagine their life without it. These are called AI super users, and these are their stories.
“I made ChatGPT my personal assistant – and it changed everything. Now I work just one hour a day – and get more done than before.”
We’ve all come across similar posts on our socials, right? They’re, quite frankly, reason enough to quit social media. But, in all seriousness, generative AI really has helped some people get ahead.
The so-called AI super users tap the technology every day to analyze data, polish their writing, learn new skills, and, probably most importantly, reclaim hours of time – or at least swap the boring stuff for something more interesting or creative.
Cybernews has chatted to a bunch of AI enthusiasts, and one thing seems to unite them – taming the chatbots is definitely worth it. Yet, they also say they try to be as careful as possible.
Helps productivity, efficiency, and creativity
“AI isn’t just hype anymore, it’s leverage. If you’re not applying it to your workflow, you’re leaving too much on the table,” said Chris Grippo, owner of “The Shop Tinkerers,” a growth agency for Shopify brands.
It’s safe to say Chris is in awe. He uses AI every single day – for writing ad copy, running simulations, drafting outreach, and cleaning up datasets.
“It’s like having an extremely fast intern with a solid strategy background. I don’t rush into trends, but I move fast when the ROI (return on investment) is clear,” says Grippo.
There are a lot of people like him out there. They’re not a majority, of course – adoption of AI at work is still nascent, although leaders are pushing it – but they see quite a few benefits in productivity, efficiency, and creativity.
And they’re not easily bewitched by the technology’s seemingly magical capabilities, instead seeing AI for what it is – a tool that can help you become better at whatever you do for a living.
Danny Gazit, managing director at Hype, a crypto superagency, knows perfectly well why he and his colleagues need AI: “It helps us move faster by automating a lot of the laborious and mundane tasks.”
To Gazit, the way one prompts the AI model is critical in order to not end up with AI slop. He calls “prompting” a completely new and important skill.
“Being a good AI prompter means thinking more effectively (because sharp context and prompts lead to sharp responses), making better judgments (since AI tends to be average, you need to push for more), and designing workflows (breaking tasks down so AI bots and agents can assist without missing steps),” he says.
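The workflow design Gazit mentions – breaking a task into steps so no step gets missed – can be sketched in a few lines. This is an illustration, not anyone’s actual tooling: `call_model` is a hypothetical stand-in for any chat-completion API, stubbed here so the example runs offline.

```python
# A minimal sketch of the task-decomposition workflow Gazit describes.
# `call_model` is a hypothetical stand-in for a real chat-completion API.

def call_model(prompt):
    # Stub: return a canned marker instead of a real model response.
    return f"[model output for: {prompt.splitlines()[0]}]"

def run_pipeline(task, steps, call=call_model):
    """Chain prompts so each step sees the previous step's output."""
    context = task
    outputs = []
    for step in steps:
        prompt = f"{step}\n\nContext:\n{context}"
        context = call(prompt)
        outputs.append(context)
    return outputs

steps = [
    "Extract the key facts from the context.",
    "Draft a one-paragraph summary from those facts.",
    "Tighten the draft to under 50 words.",
]
results = run_pipeline("Q3 revenue grew 12% on higher ad spend.", steps)
```

Because each prompt carries only one instruction plus the prior output, a weak answer at any step is easy to spot and redo – the judgment Gazit says prompting demands.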
Scott Steinhardt – who detects deepfakes at Reality Defender – sums it up pretty neatly: it’s smart to use AI for the myriad clerical or administrative tasks.
“This means formatting and transferring data, making apps talk to each other where there are no APIs or connections, building spreadsheets, sending recurring emails, and so on,” Steinhardt told Cybernews.
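The kind of glue work Steinhardt lists is often just a short script. As a hedged illustration (the data, recipient, and “weekly invoice” scenario are invented for the example), here is what formatting a dataset and preparing a recurring email can look like with nothing but Python’s standard library:

```python
import csv
import io
from email.message import EmailMessage

def csv_to_report(csv_text):
    """Turn a raw CSV of invoices into a short plain-text summary."""
    rows = list(csv.DictReader(io.StringIO(csv_text)))
    total = sum(float(r["amount"]) for r in rows)
    lines = [f"{r['client']}: ${float(r['amount']):.2f}" for r in rows]
    lines.append(f"TOTAL: ${total:.2f}")
    return "\n".join(lines)

def recurring_email(report, recipient):
    """Wrap the report in a ready-to-send email object."""
    msg = EmailMessage()
    msg["To"] = recipient  # hypothetical recipient for the sketch
    msg["Subject"] = "Weekly invoice summary"
    msg.set_content(report)
    return msg

sample = "client,amount\nAcme,120.50\nGlobex,80.00\n"
report = csv_to_report(sample)
message = recurring_email(report, "team@example.com")
```

This is exactly the category of task the super users hand to chatbots: the model writes the script once, and the script then runs on a schedule without further AI involvement.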

OpenAI’s ChatGPT seems to be the go-to model for most people Cybernews has spoken to. But they play with a lot more toys, of course – Anthropic’s Claude, Google’s Gemini, and other more specialized models.
“ChatGPT can’t do absolutely everything. I typically use Perplexity for facts and sources, Lovable for quick creative work, and Claude for longer, thoughtful drafts. None of the tools do everything, so it’s about effectively choosing them based on the tasks at hand,” says Gazit.
Fact-checking is the discipline you need
Of course, AI models aren’t flawless. Improvement is undeniable, but they still hallucinate and provide inaccurate information – which could lead to costly errors in decision-making.
In February, US personal injury law firm Morgan & Morgan even sent an urgent email to its more than 1,000 lawyers, saying that AI can invent fake case law and get you fired.
That’s why fact-checking is key, the AI super users say – along with a basic understanding of why these errors arise: limitations in training data, misinterpreted patterns, and the models’ inability to verify information against reliable sources.
“If it’s anything critical – legal, financial, or public-facing – I always double-check,” says Grippo, while Hrishikesh Tawade, a robotics software engineer, always requests ChatGPT to provide citations with references so that he can check the model’s claims.
Tawade admits: “For about 70% of the time, the model correctly summarizes information. However, there have been instances of significant hallucinations – fortunately, having direct links allows me to quickly identify and correct misinformation.”
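Tawade’s habit of demanding citations can be partly automated on the reader’s side. Below is a minimal sketch (the sample answer text is invented) that splits a model’s response into sentences and flags any claim that arrived without a checkable link:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def split_claims(answer):
    """Pair each sentence of a model answer with any URLs it cites."""
    sentences = re.split(r"(?<=[.!?])\s+", answer.strip())
    return [
        {"claim": URL_RE.sub("", s).strip(), "sources": URL_RE.findall(s)}
        for s in sentences if s
    ]

def uncited(answer):
    """List the claims that arrived without a checkable link."""
    return [c["claim"] for c in split_claims(answer) if not c["sources"]]

answer = (
    "The model was released in 2024. "
    "Benchmarks are listed at https://example.com/bench."
)
flagged = uncited(answer)
```

A flagged sentence isn’t necessarily wrong – it just has no reference to verify, which, per Tawade, is where the manual fact-check should start.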
Some businesses require additional checks. For instance, the founder of Espresso Translations, Danilo Coviello, enjoys AI-based translation systems but says he needs to be extra vigilant for errors.
“High-stakes translations of legal documents and financial reports require my attention because small mistakes could create significant problems,” says Coviello.
“I verify every AI-generated suggestion, especially when the content requires specialized vocabulary. AI serves as an excellent support tool, yet it still falls short of human abilities.”
Cahyo Subroto, founder of MrScraper, an AI-powered data extraction platform, is fully aware that the AI chatbot can draft anything in 30 seconds, but he still reviews the result line by line before using it: “That’s the discipline you need to have. You can move fast, but you still have to own the final call.”
Hype’s Gazit shares an interesting insight, though. After using AI for so long, he’s noticed that the models can sometimes simply try to make the user happy with their responses, even if the prompt or the idea isn’t that great.
“This can lead to bias in results, so pushing back and questioning the output has become an important part of the process I use. You have to keep challenging it. I’ve even heard of others who play one AI off against another, so that it sense-checks itself and improves each time,” Gazit told Cybernews.
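The “play one AI off against another” trick Gazit mentions is, at its core, a short loop: one model drafts, another critiques, and the draft is revised until the critic is satisfied. A toy sketch, with both models stubbed (the approval rule and sample text are invented so it runs offline):

```python
# A toy sketch of pitting one model against another, per Gazit.
# `generate` and `critic` are hypothetical stand-ins for two model endpoints.

def critique_loop(draft, generate, critic, max_rounds=3):
    """Alternate between a critic model and a generator until approved."""
    for _ in range(max_rounds):
        feedback = critic(draft)
        if feedback == "OK":
            break
        draft = generate(draft, feedback)
    return draft

# Stub critic: rejects any draft that lacks a concrete figure.
def critic(draft):
    return "OK" if any(ch.isdigit() for ch in draft) else "Add a concrete figure."

# Stub generator: appends a revision addressing the feedback.
def generate(draft, feedback):
    return draft + " Revenue rose 12%."

final = critique_loop("Sales improved.", generate, critic)
```

Capping the rounds matters: without `max_rounds`, two agreeable models can ping-pong indefinitely – the same eagerness-to-please bias Gazit warns about, just automated.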
Does the data support the AI hype?
The problem of AI seeming almost eager to please and extrapolating too much from too little is well known – even to OpenAI’s boss, Sam Altman, who admitted in April on X that the latest version of ChatGPT has become “sycophant-y and annoying.”
we missed the mark with last week's GPT-4o update.
Sam Altman (@sama), May 2, 2025
what happened, what we learned, and some things we will do differently in the future: https://t.co/ER1GmRYrIC
In that case, the flattery was a bit much, clearly, and we didn’t like it. Still, to millions, generative AI is a magic key to untold riches.
What actually worries Timothy Plunkett, a New York attorney and certified Artificial Intelligence Governance Professional (AIGP), is users rushing to embrace whatever generative AI offers. He asks: Are programs that write agentic code truly AI, or are they workflow management?
“I abide by the more classic definition that AI is a technical system used to mimic human behavior, not something that just creates a faster or more efficient workflow,” Plunkett told Cybernews.
“AI will give you answers you want, answers you are asking for – so it should be no surprise that the models might cut corners. Remember this: these systems are human creations and are flawed, full of bias and agendas that do not fit the needs of everyone.”
Is AI just a cool name for a technology that manages processes? According to Plunkett, that indeed is the case, and that’s why, in his opinion, people shouldn’t rush into adoption without consideration. The race to try out a Chinese model, DeepSeek-R1, is a good example.
“All data is accessible to the Chinese government. Yet, DeepSeek was downloaded almost three million times in a week! People exposed themselves to a world-class surveillance tool without much thought,” he said.

Finally, as much excitement as AI arouses, a new study from Denmark just found that generative AI chatbots, such as ChatGPT or Gemini, have had almost no impact on salaries and jobs thus far.
This raises the question: are huge investments in AI models really worth it? The data simply doesn’t support the hype.
“AI chatbots have had no significant impact on earnings or recorded hours in any occupation. <...> Our findings challenge narratives of imminent labor market transformation due to generative AI,” the researchers said.
Well, if you’re Ric Nelson, you won’t care. Nelson is an executive with cerebral palsy – and an AI super user. AI helps him produce project bids, reports, and memos, and he even has a digital clone on Delphi.
“AI is a powerful tool for individuals aspiring to executive positions who, like me, may face challenges in conventional communication styles, as it is difficult for me to respond verbally in a way that others can easily understand,” Nelson told Cybernews.
“Now I give keynote presentations and have authored a book. It has opened many more professional doors.”