
Signal boss: there’s nothing open about AI


AI development is far from open or democratic, and the very concept is being leveraged by large corporations, which manipulate such terms to shore up their grip on markets and maintain a stranglehold on power. That appears to be the central thrust of an academic paper co-written by Signal president Meredith Whittaker.

OpenAI founder Sam Altman, take note: your company is named after an illusory idea that could potentially be dangerous if humanity is fooled into believing it. Popular AI programs such as DALL-E 2 and ChatGPT offer only “gated to public” access, and are therefore not open to public scrutiny in the way some would have you believe.

“We find that the terms ‘open’ and ‘open source’ are used in confusing and diverse ways, often constituting more aspiration or marketing than technical descriptor,” say Whittaker and her co-authors David Widder of Carnegie Mellon University and Sarah West of the AI Now Institute.

“This complicates an already complex landscape, in which there is currently no agreed on definition of ‘open’ in the context of AI, and as such the term is being applied to widely divergent offerings,” they add.

And divergent they are. For instance, while AI company BigScience makes its BLOOM large language model “fully open” to scrutiny, allowing for “community research” and “high auditability”, Google does just the opposite: PaLM and Imagen are subject to “internal research only” and therefore “limited perspectives.”

“Given the immense importance of scale to the current trajectory of artificial intelligence, this means ‘open’ AI cannot, alone, meaningfully ‘democratize’ AI, nor does it pose a significant challenge to the concentration of power in the tech industry,” say Whittaker and her co-researchers.

Open-source development frameworks might help to expedite AI development and deployment — but they also benefit the firms developing them. This might seem benign in the case of, say, a small tech startup, but when used by megalithic corporations, these structural paradigms become, in essence, a tool to consolidate power.

Playing Lego with the data world

“Most significantly, they allow Meta, Google, and those steering framework development to standardize AI construction so it’s compatible with their own company platforms — ensuring that their framework leads developers to create AI systems that, Lego-like, snap into place with their own company systems,” say Whittaker & co.

This allows giants like Meta and Google, through their respective frameworks PyTorch and TensorFlow, to commercialize AI models originally developed in the academic sphere. This inevitably ends in the placement of profit above progress, or, as Whittaker and her associates put it: “Open source AI development frameworks allow those bankrolling and directing them to create onramps to profitable compute offerings.”

This in turn gives the tech giants free rein to dictate the “work practices of researchers and developers such that new AI models can be easily integrated and commercialized.” In other words, it’s a vicious cycle: large firms control the workers, the field’s true innovators, via the profit model, allowing them to generate even more profit.

Or, again, as Whittaker and her co-authors put it, tech giants such as Meta and Google, through their control of AI development frameworks, enjoy “significant indirect power within the ecosystem: training developers, researchers, and students interacting with these tools in the norms of the company’s preferred framework, and thus helping define — and in some ways capture — the AI field.”

Unfortunately, they continue, this stranglehold exists largely because of the sheer cost involved in running an AI model, which can completely outstrip that of developing it.

“These significant computational requirements do not necessarily wane after the preliminary development stage, during which an AI model is initially trained and calibrated,” say Whittaker & co. “Indeed, these upfront compute requirements can be dwarfed by the compute needed to use large AI models in the real world to provide answers or generate images.”

Because Big Tech closely guards its data secrets, precise information to back up this assertion is hard to come by, they add, but estimates suggest that the weekly cost of running GPT-4 eclipses the total expense of its initial training.

Meanwhile, Microsoft reportedly needed to sink $4 billion into its infrastructure so it could incorporate OpenAI’s breakout AI software into its Bing search engine, and Altman himself told the US Congress that he hoped for fewer rather than more users to avert excess costs.

“We try to design systems that do not maximize for engagement,” Whittaker and her co-authors quote him as having said, while lamenting a chronic shortage of graphics processing units (GPUs) caused by constant demand for ChatGPT’s services.

Money equals tools equals power

All in all, it doesn’t add up to a rosy picture of an AI-driven future, precisely because Whittaker and her fellow experts seem to believe that the huge costs the technology entails mean it will remain driven by large corporations with oodles of money.

“This requirement for more, and more, and more compute does not appear to be subsiding,” they say. “A recently published (and then deleted) profile of OpenAI shows the company scrambling to secure more computational resources, viewing limited GPUs — specialized processors used to train AI — as the primary check on their aspiration toward bigger more powerful models.”

And if history teaches us anything, it’s that entities that have the means to acquire or produce tools that enable power end up being very powerful themselves.

“The computational resources needed to build new AI models and use existing ones at scale, outside of privatized enterprise contexts and individual tinkering, are scarce, extremely expensive, and concentrated in the hands of a handful of corporations,” say Whittaker & co.

These corporations “themselves benefit from economies of scale, the capacity to control the software that optimizes compute, and the ability to sell costly access to computational resources.”

This leads to what they describe as a “significant resource asymmetry” that “undermines any claims to democratization that the availability of ‘open’ AI models might be used to support.”

Human exploitation underpins AI

And if you had any notions that AI development is somehow elitist, the special and exclusive preserve of the highly educated and skilled who flick digital switches and make things happen, think again — what really makes ChatGPT et al tick is good, old-fashioned hard work.

“Large-scale AI systems’ insatiable need for curated, labeled, carefully organized data means that building AI at scale requires significant human labor,” say Whittaker & co. “This labor creates the ‘intelligence’ that artificial intelligence systems are marketed as automating and making computational.”

Said labor consists principally of data labelling and classification, assessing AI models based on human feedback, content moderation, and feats of computer engineering that include product development and maintenance. Presumably, this is not something one could do with no prior experience, but make no mistake, it’s grunt work too, and it’s being exploited by massive corporations, if Whittaker’s thesis is anything to go by.

“Generative AI systems, the large-scale AI systems currently receiving the most attention, are trained and evaluated on a broad range of human-generated text, speech and/or imagery,” she and her colleagues say. “The process of shaping a model such that it can mimic human-like output without replicating offensive or dangerous material requires intensive human involvement in order to ensure the model’s outputs stay within the bounds of ‘acceptable’ — and thus enable it to be marketed, sold, and applied in the real world by corporations and other institutions intent on maintaining customers and their reputations.”

Politically correct ChatGPT? That isn’t about morality, folks, it’s about money. But then again, arguably, that’s what political correctness was always about — maintaining the status quo while aggressively virtue signalling to create an illusion of justifiability that ultimately benefits those in power, or those who wish to be. And if AI is being programmed and deployed as a weapon in this age-old war of perception control, then it likely spells bad tidings for humanity.

PC AI is a myth

So, let’s take a closer look at political correctness in AI. Whittaker and her colleagues certainly did, and one has to wonder if they didn’t find themselves rapidly investing in noseplugs to shield themselves from the stink of hypocrisy.

In order to teach ChatGPT-style systems how to filter out content that was “toxic, offensive, or dangerous,” Google employed outsourcing firms Accenture and Appen to drum up a labor force of workers tasked “with making consequential decisions about the boundaries of ‘acceptable’ expression” with “minimal training [...] under frenzied deadlines.”

Never one to court bad publicity (or, as it might be put in more antiquated circles, take responsibility), Google was more than happy to outsource the dirty work involved in feeding its AI models, and OpenAI appears to be learning a thing or two from the old guy on the block, drafting in workers from Kenya to do its virtual toilet cleaning, at cost to the people unfortunate enough to have to do it.

“This work is often outsourced, providing distance between the company developing and marketing the model and the detrimental working conditions involved in the training process,” say Whittaker & co. “OpenAI accomplished this for their GPT models by hiring workers in Kenya through the outsourcing firm Sama. This work resulted in harmful consequences, as workers were forced to view and read horrific ideas and images repeatedly for low wages with no meaningful support.”

There is some good news, they add: said workers have since unionized, and filed a petition with the Kenyan National Assembly “to investigate the welfare and working conditions of Kenyans performing such services and whether they are compliant with protections from exploitation and the right to fair remuneration and reasonable working conditions.”

As she and her comrades put it, “this extensive, rarely heralded labor” is obfuscated by the big corporations that command it, and poorly paid to boot. If you are under any illusions about AI being some kind of silicon happy-valley departure from all the other exploitative industries that have historically and simultaneously benefited and hurt humankind, think again: yet again, workers in post-colonial countries are getting screwed.

But don’t take my word for it. Take those of Whittaker and her associates: “We cannot accept the term democratic for a structure that relies on low-paid, precarious workers who receive little benefit while enduring harm, and are themselves excluded from such imagined democracy.”

Whittaker the tech socialist? Watch this space.

