
Cybernews podcast unpacks 2023's AI odyssey


From the buzz around the OpenAI drama to the lurking dangers in this AI-driven era, Cybernews’ podcast "Through a Glass Darkly" is taking a look back at the most significant developments in AI in 2023.

But wait, there's more!

In the second part of our show, we’re switching gears and delving into some insightful books that we've explored. We believe they can be your guiding light in understanding this ever-evolving AI era.

OpenAI drama


Jurgita: Alright, let's dive into the soap opera vibes of the recent OpenAI drama! Gintarai, as the lead writer on this, could you give us a quick refresher on what went down and why it's such a big deal?

Gintaras: It really was epic in terms of tension and the fast-changing dynamics. So here's what happened. First, OpenAI's board suddenly fired Sam Altman, its CEO. Details are lacking, but the statement said Altman was sacked over a failure to be “candid in his communications.”

Needless to say, Silicon Valley was shocked, as in, why? Everything was going so well for OpenAI, and Altman was touring the world with his Oppenheimer-like message of protecting us from the rise of the machines. Board members, first and foremost Ilya Sutskever, OpenAI's chief scientist, defended Altman's ouster and claimed it was necessary to protect the firm's mission of "making AI beneficial to humanity." Staff researchers also sent the board a letter warning of a powerful AI discovery.

But then Microsoft pushed, OpenAI employees pushed, other investors pushed, and then OpenAI announced that Altman was returning as the CEO. The board changed, too – it's now led by Bret Taylor, the former co-CEO of software firm Salesforce, and Larry Summers, the former Treasury Secretary, is also included. Sutskever is still at the company but not on the board anymore, by the way.

It's interesting to see which way OpenAI will now turn. But we can remember that OpenAI started out in 2015 as a nonprofit with the goal of making sure that AI doesn't wipe out humans, and this only changed when ChatGPT – and presumably, lots of money – showed up on the horizon. This pressure to commercialize nearly imploded the firm, as we’ve now seen.

Interestingly, I recently watched iHuman, a BBC documentary made before the whole ChatGPT bonanza, in which Sutskever was extensively interviewed. He seemed worried about AI's rise, so I actually wouldn't be surprised if he saw the for-profit arm of the company, led by Altman, as a threat to the safety of humanity.

Now, OpenAI has created a new "safety advisory group" that will make recommendations to leadership and updated its "Preparedness Framework," which appears to show a clear path for identifying, analyzing, and deciding what to do about "catastrophic" risks inherent to the models they're developing.

But I think it's clear this is more or less for show and that the commercialization camp has won this round. It’s still a bit shady to me – Altman seems to be a polarizing figure, and Bret Taylor and Larry Summers are certainly not AI safety experts.

AI and the job market

Gintaras: Jurgita, with all the chatter about AI taking jobs this year, what's the word from analysts? Who's at the highest risk of job loss? And when we talk about worst-case scenarios for the near future, how likely are they to become a reality?

Jurgita: AI has become a buzzword in the job market. LinkedIn data shows an uptick in members listing AI skills and in job postings that mention artificial intelligence. Even if you’re not a developer applying to fill a newly created position like “Head of AI” or “AI prompt engineer,” you’ll probably be in a better position if you know how to harness AI to enhance your skills.

As for the bigger picture, this April, Goldman Sachs said that as many as 300 million jobs could be replaced by automation. Now, IT research company Forrester is giving a far more conservative number, saying only 2.4 million positions are in peril.

Those with salaries over $90,000 stand the greatest chance of having their job replaced altogether, with legal, scientific, and administrative professions being identified as the highest risk.

However, many experts envision AI not as a job killer but as a job redefiner. A few months back, GitHub, a code hosting platform, surveyed 500 developers to better understand the typical developer experience. It revealed that a staggering number of developers – 92% – were already using AI tools either at work or in their leisure time.

Employees are already seeing the huge potential in AI – they’re delegating the most mundane jobs to machines, freeing their own hands for more creative tasks and innovation. And that's the one important trait machines won't be able to mimic – at least in the near future: human creativity. I'm saying that with caution, because…

AI and Hollywood


Jurgita: AI has been stealing the show in Hollywood this year, causing writers and actors to strike against its extensive use. How is this lightning-fast tech adoption reshaping Tinseltown, and what impact does it have on the industry?

Gintaras: TV shows have become shit – end of story. Well, no. There’s a lot of content out there, and a big part of it is this wishy-washy nonsense, but what the writers and actors were worried about – among arguably more important wage demands – was that the studios would use new generative AI tools and deepfakes to avoid paying unionized workers.

First, the Writers Guild of America won a provision in its new contract stating that literary material cannot be written by AI and that AI-generated material will not be considered source material.

Then, the Screen Actors Guild also said it had achieved “unprecedented provisions for consent and compensation that will protect members from the threat of AI.” Actors had long demanded control over studios' ability to replicate their voices and likenesses via generative AI models.

This was indeed worrying because back then, the Hollywood Reporter cited sources saying that the studios wanted to make AI scans of Schedule F performers – union members earning around $32,000 per TV episode or $60,000 per film – which they could then keep reusing without having to pay them again. The report even said that actors’ likenesses could be used after they passed away.

All in all, for now, the threat of AI seems to have been fended off. But I wouldn’t count on the new technology not being implemented much more widely in the future – just look how prevalent CGI is now in the movies. It’s the new normal.

AI and data leaks


Gintaras: Our reliance on AI brings huge consequences, especially for privacy. Jurgita, what's keeping security experts up at night regarding AI's impact? Let's dive into the concerns they're wrestling with.

Jurgita: AI is being trained on immense amounts of data, including the data we put into chatbots. That alone poses a huge risk to individual privacy and also exposes corporate secrets.

One widely discussed case came back in April, when several Samsung employees allegedly leaked sensitive company data on three separate occasions. After the incident, Samsung banned the use of generative artificial intelligence (AI) tools on its premises, threatening disobedient staff with the termination of their contracts.

Another concern is how malicious hackers exploit generative AI tools for their evil deeds. Not long ago, poor grammar was the first tell-tale sign of a scam email. With that eliminated, it has become much harder to spot phishing emails.

That’s, of course, not the only way hackers exploit ChatGPT and similar tools. They’re jailbreaking AI models and tricking them into writing malicious code, giving step-by-step guidance on how to hack websites, and so on. Unfortunately, the opportunities for both the good and the bad guys are limitless.

AI and misinformation


Jurgita: But there's a pressing issue that's close to my heart – how AI is amplifying misinformation. From deepfake jokes to AI-driven influence campaigns and media outlets leaning heavily on AI, the stories are endless. Gintarai, is this getting out of hand?

Gintaras: Yes and no. Since May 2023, the number of websites hosting fake and false articles created with artificial intelligence (AI) has increased by more than 1,000 percent, NewsGuard, an organization tracking misinformation, said just recently.

The rollout of generative AI tools has been a boon to content farms and misinformation purveyors alike. In other words, it’s now easier than ever to spread pure propaganda or at least false narratives about things like elections, wars, and natural disasters.

I like what Claire Wardle, co-director of the Information Futures Lab at Brown University, recently told The Washington Post: “We have to get smarter about what this technology can and cannot do, because we live in adversarial times where information, unfortunately, is being weaponized.”

True, and yet, one could also argue that information has been weaponized for ages, AI or no AI. The impact remains to be seen, of course – some say the current boom of generative AI can be likened to the Gutenberg press revolution.

What is quite obvious is that media literacy classes should include AI literacy as soon as possible. Sadly, at least in democracies where everyone has a vote, it’s been easy to sway people anyway – most people are not especially sophisticated news consumers who follow professional advice on how to differentiate fake or false content from real news.

Social media is a closely related problem, but a distinct one. The owners and chief executives of platforms like X, Instagram, or Facebook should undoubtedly do more to fight the epidemic of false content on them – unfortunately, they probably won’t, because engagement of whatever kind means cash. This is where state regulators could have a say.

Still, at the moment, the flood of new information and conjecture around AI is an informational risk in itself. Companies may overstate what their AI models can do and be used for. Proponents may push science-fiction storylines that draw attention away from more immediate threats.

AI intimacy


Gintaras: Remember the iconic movie Her from 2013, where Joaquin Phoenix's character falls for his AI assistant? What was once science fiction is now our reality. Jurgita, how is AI reshaping the landscape of dating and intimacy?

Jurgita: So, this year, a Belgian man in his 30s took his own life after talking to a chatbot named Eliza for around six weeks. His grieving widow and his psychiatrist both felt that the chatbot was at least partly responsible. The man had felt "isolated in his anxiety" and saw the chatbot as a breath of fresh air – Eliza became his confidante, a sort of drug.

This is one of the areas, I believe, that we should be particularly concerned about – AI chatbots replacing human interaction. We need to understand that AI models, even when built with many ethical considerations in mind, are designed to give us the answers we want. It’s just like the social media bubble we tend to create for ourselves – online, we often seek support and rely on like-minded people, almost completely shutting down the window to different, usually uncomfortable ideas.

Another area where I see a red flag is what AI does to dating. Here at Cybernews, we’ve been researching a variety of romance apps, and some of them seem like something out of a science fiction novel. For example, our colleague Paulina Okunytė recently experimented with an app somewhat similar to Tinder – only, instead of matching with real people, users are paired with AI avatars. While this could be a fun way to spend an hour or two, you know, chatting with a machine, it leads nowhere.

Sure, for some introverted people, all this digital innovation is a perfect escape, but what will we have in the end? To quote the Beatles:

All the lonely people

Where do they all belong?

To be fair, I am an introvert, too, but what happened to some quality time with a book when you want to escape people?

Boy, do we like books! Don’t we, Gintaras?

Book recommendations

Jurgita: We've barely scratched the surface of AI's evolving impact. Keeping up with all the chatter around it can be a real challenge, and sometimes, it's hard to make sense of it all. That's why we turn to books, hoping they'll give us a broader perspective. Gintarai, could you kick us off by sharing a book that's been particularly insightful for you?

Gintaras: Yes, absolutely. I loved – and reviewed on Cybernews – Your Face Belongs to Us, a book by New York Times journalist Kashmir Hill.

Essentially, the book is a haunting portrait of sci-fi darkness in the real world, and the ultimate villain here is Clearview AI, a secretive facial recognition start-up that has built a huge, searchable database of people’s faces.

I called my review “Where do you go when there’s nowhere to hide?” It’s fair to say the book scared the shit out of me. Various law enforcement agencies around the world, authoritarian governments included, have contracted the firm and started identifying everything about a person’s life based on a photograph. Clearview AI made this possible, so why not?

If you’re a police officer trying to catch a criminal, it’s all very exciting – especially in, say, non-democratic countries such as Russia or China, where facial recognition technology has made identifying annoying protesters so much easier.

But if you’re even a tiny bit conscious of your own privacy, you should worry about the possibility of a future where being anonymous in public places will be virtually impossible.

Throughout the book, Hill almost begs people to pay more attention to their online data and warns us that drooling over augmented reality devices is really, really not smart. Faceprints are real, people.

Yeah, you can hide your name online, but if you expose your face, you’re not anonymous at all. And yeah, you can live in a democracy now, but what if there’s a coup, and you’re suddenly an enemy of the new order?

Jurgita: Sentience by Nicholas Humphrey, an Oxford professor, is the book of the year for me.

This book serves as an illuminating starting point for those intrigued by the question of whether machines can attain sentience.

What, at least for now, separates us from machines? There’s something that a robot just wouldn’t care for. A part of us wants to be nourished, and for each one of us, that nourishment might be very different in a sensory way – the smell of a meadow, the sound of the sea, a vision of a sunset, or all of those sensations combined.

Looking at it from an evolutionary perspective, it seems impossible for a robot to become sentient in the near future – the book convinced me of that.

However, does it matter? Robots mimic humanity really well these days, and that’s scary enough. What truly matters is whether we attribute sentience to a machine. Do we believe we are talking to a person when we’re talking to a chatbot?

This aspect carries undeniable social implications, as human interaction with AI assistants can escalate to scenarios involving suicide (remember the Belgian man who took his own life after talking to an AI therapist) or even murder plots (the man who tried to assassinate Queen Elizabeth II in late 2021 had been communicating with an AI girlfriend that reinforced his plan).

Gintaras: Another book I enjoyed – and, of course, was creeped out by – was The Battle for Your Brain, written by Nita Farahany, a professor of philosophy and law at Duke University.

Farahany writes about what she calls our naivety in presuming that the companies and governments surveilling us through our laptops and smartphones at least cannot intercept our thoughts.

For now, yes, but neurotechnology companies that are developing ever more sophisticated brain-hacking devices (headsets, electrode-enabled earbuds, hats, and other gadgets that connect our brains to computers) are not necessarily full of goodwill.

Sure, some innovations seem useful. Concussions might be diagnosed with the help of smart helmets, and devices can also track the slowdown of activity in brain regions associated with conditions like Alzheimer’s disease, schizophrenia, and dementia.

But the same neuroscience that gives us intimate access to ourselves can allow companies, governments, and all kinds of actors – who don’t necessarily have our best interests in mind – access too.

The journey towards a world of total brain transparency, where folks peer into our brains and minds at will, has already begun. For instance, in China, workers in government-controlled factories are required to wear EEG sensors to monitor their productivity and their emotional states, and they can be sent home based on what their brains reveal.

What if, one day, these scanners can show your political attitudes, too? Arrests for thought crimes aren’t impossible.

What will it mean if our thoughts and emotions are up for grabs, just like the rest of our data? Does freedom of thought protect us from governments tracking our brains and mental processes? The answers are not there yet, but even waiting for them is quite disturbing.

Jurgita: In Data Baby: My Life in a Psychological Experiment, Susannah Breslin narrates her life's journey, encompassing experiences from battling breast cancer to establishing herself as a journalist covering the adult entertainment industry.

The focal point revolves around the extensive experiment she was a subject of for decades. When she was just a toddler, Breslin's parents enrolled her in a laboratory preschool at the University of California, Berkeley. She was among hundreds of children studied as part of research aiming to predict their future paths.

We’re all data babies, she writes. At least the experiment she was a part of was for scientific enlightenment. What do we sacrifice our privacy for?

The life details we put out there willingly – seemingly without the self-control to keep our holiday and children’s pictures to ourselves – combined with what hackers have stolen from us or companies have failed to protect, are enough to build our digital twin, a doppelganger if you will.

Breslin accurately captures a very important aspect of being surveilled – even if just by Big Tech aiming to exploit your data for social engineering and convince you to buy more stuff.

The one who watches you has power over you. Have you purchased a shiny new outfit for your work party? Was it intentional, or did you follow a “random” ad boasting a huge discount just when you needed it?

