ChatGPT and the slow decay of critical thinking

Some people still believe everything they see on television or read in the newspapers, and assume that if something appears on page one of their Google search results, it must be true. With critical thinking skills in decline, could the inaccuracy of tools like ChatGPT make us even more vulnerable to manipulation and deception?

Peek inside almost any meeting and you will find one person who prides themselves on being the loudest, strongest voice in the room. They talk with such conviction that, to a non-expert, it sounds as if they know exactly what they are talking about. Scratch beneath the bravado, however, and you quickly learn that many of their claims are largely inaccurate. Unfortunately, ChatGPT already shows disturbing signs of becoming a virtual caricature of that person.

With access to a vast dataset of conversational text and regular updates, ChatGPT is an ever-evolving oracle of information. However, like any artificially intelligent system, it has its limits: its knowledge currently ends in 2021, which means it cannot tell you about current events. For example, ChatGPT has struggled to accurately state how many times Argentina has won the FIFA World Cup.

ChatGPT vs. search engines

While search engines and ChatGPT both help users find information, their goals differ significantly. Search engines aim to guide users toward accurate and trustworthy resources, whereas ChatGPT's primary goal is to generate coherent, natural-sounding responses to the questions it is asked, using its extensive natural-language-processing capabilities.

It's essential to note that accuracy is not ChatGPT's primary goal. Providing accurate responses is, at best, a secondary goal, and it is never guaranteed. Search engines, by contrast, prioritize accuracy, using algorithms to assess the relevance and reliability of results. Both have unique strengths and limitations, but it's crucial to understand that they serve different purposes.

Although their goals differ, their capabilities overlap. ChatGPT's main objective is to produce human-like responses, drawing on various data sources to generate a helpful answer. When it delivers an accurate response, it simplifies a search, allowing for a more conversational and interactive experience. But we must never assume it's accurate.

The erosion of our ability to think for ourselves

ChatGPT's ability to create seemingly plausible dialogue is impressive because it draws on patterns learned from vast amounts of human conversation. However, it's crucial to remember that ChatGPT doesn't know anything, not even what a fact is. It merely regurgitates text based on what it has learned.

Because it relies on patterns rather than hard facts and data, ChatGPT's responses are not always accurate. For this reason alone, it's essential to avoid using it as a credible source of information for academic writing or research. Its human-like responses may sound believable, but they are not necessarily based on verifiable evidence.

The rise of ChatGPT also raises questions about accountability for spreading misinformation or promoting confirmation bias. As with any technology, there is a risk that it may be used to disseminate inaccurate information, intentionally or unintentionally, and it's up to individuals and organizations to use it responsibly.

We must use such tools responsibly, promote accuracy, and avoid unwittingly spreading misinformation. That means getting into the habit of fact-checking and daring to question what we are told, rather than assuming something is true simply because it is written down.

Without strong critical thinking skills, we risk becoming unwitting pawns in the hands of those who seek to control us through the tools we rely on for information. Moreover, the erosion of our ability to think for ourselves leaves us vulnerable to manipulation and deception.

Where will ChatGPT take us?

ChatGPT's current limited knowledge has made it easy to spot untruths in the answers it produces. Unfortunately, when the next version is released, it will become much harder to distinguish between misunderstandings, misinformation, knowledge gaps, and digital hallucinations.

It has never been more important to have a curious mind and ask questions about any information you find online. For example, pick an area where you have vast experience or consider yourself an expert and spend a little time discussing it with ChatGPT. To begin with, you might be blown away by the creative answers it produces. But as you dig a little deeper, I suspect you might find the points it raises are naive and often completely inaccurate.

There are big questions about how this technology will continue to serve us, and whether it will lead lazy users chasing easy riches astray into uncharted digital waters. For now, I believe that humans can avoid being duped by technology and sleepwalking into a fate similar to the ending of Ex Machina. But if we continue to move fast and break things, I remain cautious about where future versions of ChatGPT could take us.
