Is it real or just a fantasy? Cybernews' take on the ChatGPT hype


Winning arguments was way more fun and much easier before Google. However, the popular search engine is not always correct, and the risk of misinformation has only increased as generative AI tools like ChatGPT go mainstream.


Conversing with ChatGPT, Freddie Mercury would have no way of telling whether "this is the real life" or "this is just fantasy," since generative bots like this one do not know what a fact is.

OpenAI's ChatGPT, Google's Bard, or any other artificial intelligence-based service can inadvertently fool users with digital hallucinations, explains our senior journalist Vilius Petkauskas in his editorial piece "ChatGPT's answers could be nothing but a hallucination."

Many enthusiasts and scholars experimenting with ChatGPT and other generative AI tools have encountered a flaw: these systems produce output that looks very convincing but has no basis in the real world.

"The AI world's hallucinations may be a small blip in the grand scheme of things, but it's a reminder of the importance of staying vigilant and ensuring our technology is serving us, not leading us astray," Greg Kostello, CTO and Co-Founder of AI-based healthcare company Huma.AI, told Cybernews.

Cybersecurity experts are also sounding alarm bells. This week, I interviewed Daniel Spicer, chief security officer at the software company Ivanti, and learned just how worried he is about the direction this is taking.

First, generative AI might aggravate the misinformation problem: "people have to acknowledge that the AI and the search are just as likely to be incorrect or misstate facts as any other person who's just put something up on a page on the Internet."

Second, while generative AI is of little use to defenders, cybercriminals might abuse it extensively, making it even more challenging to fend off cyberattacks.

"Imagine if they [cybercrime gangs] had AI as their Copilot, helping them generate and change up their tactics so that they're a little bit harder to identify throughout the network as they're trying to carry out their attack," Spicer said.


Neil C. Hughes went even deeper this week with his editorial, emphasizing that OpenAI's chatbot can be leveraged to create stealthier malware and help threat actors draft more believable phishing emails.

When he asked ChatGPT who was to blame for the inappropriate use of the platform, it replied: "As an AI language model, I am not capable of having intentions, desires, or emotions, so I cannot be held responsible for any actions. The responsibility for the content generated by me lies solely with the person or organization using me, as they have control over the context in which I am used and the manner in which my outputs are utilized."

But, of course, it's not all doom and gloom. We've had our share of fun trying to make ChatGPT flirt. First, we asked two AI bots, Jax and Mina, to flirt in robotic language; the results were satisfactory, and we wrote down the dialogue. For the video version, we asked them to flirt more naturally, and the conversation was indeed more human-like and, well, excuse me, but way more boring.

Since AI is not yet intelligent per se but rather reflects our way of thinking and our biases, talking to ChatGPT can feel like looking into a mirror, which is not always a pleasant experience.


More on AI from Cybernews:

AI can see things we can't – but does that include the future?

NIST to launch AI guidelines amid ChatGPT fears

Pigeons puzzle experts with AI-matching intelligence

Will ChatGPT sink Google?

AI lawyer retires before it even has its first case in court

