Alphabet’s Google has proven it’s back in the AI game and ready to take on Microsoft-backed ChatGPT and Bing with a more powerful version of Bard and a slew of new generative AI features, including for Search and Workspace.
Google held its annual I/O keynote Wednesday in Mountain View, California, where CEO Sundar Pichai and other executives showed off the company’s latest AI innovations and reimagined core products.
The tech behemoth also stressed its responsible approach to product development and presented solutions to what executives labeled the biggest issues plaguing AI today – misinformation and trustworthiness.
Pichai told the crowd that every AI innovation on the table is thoroughly assessed against Google’s own AI Principles, established in 2018.
After announcing that Bard is rolling out for free in over 180 countries – and soon in over 40 languages – Pichai and Google execs seemed to sigh with relief as the event went off without a hitch before a captivated crowd.
The day’s presentation was a far cry – in a really good way – from Google’s disastrous attempt to introduce Bard to audiences worldwide during a live event streamed on YouTube back in February.
An unprepared Google abruptly cut off cameras mid-stream when the newborn Bard returned inaccurate answers during its live demo, causing an uproar among investors, the public, and media around the world.
By contrast, today's event was chock full of tasty videos and demonstrations, as well as a chicken mascot running across the stage to rev up the crowd.
Cybernews has the highlights.
Google’s mission statement: organize the world’s information and make it universally accessible and useful.
According to Pichai, this involves better language translation, improved search experiences across images and video, and safer computing.
Part of accessibility is being able to easily communicate with others around the world, Pichai explained.
Google announced the addition of 24 new languages to Google Translate, and another 16 languages to its speech recognition platform, including Japanese and Korean.
Google’s speech recognition, eventually expected to support 40 languages, is said to be built with input from native speakers to capture the quality and nuance of each language.
"While AI holds immense potential, it’s still an emerging technology. That’s why we believe it’s imperative to take a responsible approach while we pursue bold innovations to benefit people and society." - Google
Google’s Jen Fitzpatrick said the company plans to invest $10 billion over the next five years to strengthen cybersecurity, modernize vulnerable systems and infrastructure, and secure software supply chains.
The company also plans to train 100,000 Americans in digital skills through the Google Certificate program.
Coining the term Protected Computing, Fitzpatrick said the company has used AI to build advanced security into all of its products through a layered approach, even scaling protections into Google Docs, Sheets, and Slides.
Fitzpatrick also introduced new authentication methods, such as “passwordless” Google passkeys and the recently launched two-step verification, which the company said will now be the default for all accounts.
Protected Computing covers the where, when, and how of information processing, Fitzpatrick said.
Google plans to protect a user’s data in three ways.
The first is minimizing a person’s data footprint, essentially shrinking the amount of personally identifiable information (PII) available to compromise.
Next is to de-identify a person’s data, which strips identifiable information away so it is no longer linked to the person.
And finally, using end-to-end encryption in all products, making it almost impossible for anyone else to access a person’s data.
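The de-identification step above can be pictured as stripping direct identifiers from a record before it is processed. The sketch below is purely illustrative – the field names and function are invented for this example, not taken from any Google API.

```python
# Illustrative sketch of de-identification: remove fields that directly
# identify a person, leaving only non-identifying data behind.
# PII_FIELDS and de_identify are hypothetical names for this example.

PII_FIELDS = {"name", "email", "phone"}

def de_identify(record: dict) -> dict:
    """Return a copy of the record with direct identifiers removed."""
    return {k: v for k, v in record.items() if k not in PII_FIELDS}

user = {"name": "Ada", "email": "ada@example.com", "city": "Vilnius"}
print(de_identify(user))  # only the non-PII field remains
```

Real de-identification pipelines go further (generalizing birth dates, suppressing rare values), but the principle is the same: the less PII retained, the less there is to compromise.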
Bard moves to PaLM 2
Google’s AI chatbot Bard has finally been upgraded to a multimodal version, similar to OpenAI’s latest model, GPT-4, allowing customers to prompt Bard with images, not just text.
The company also announced that Bard will now be accessible to people in more than 180 countries and territories worldwide.
And Bard will now have a choice of dark mode, which brought distinct cheers from the crowd.
But even more exciting is that Bard has made the move to PaLM 2, Google’s most advanced natural language processing model to date.
Short for Pathways Language Model, PaLM 2 has 540 billion parameters, providing breakthrough performance on many natural language tasks, said Pichai – such as generating code from text, solving a math problem, or explaining a joke.
PaLM 2 lets customers tackle complex problems, and its lightest version is small enough to run on a smartphone – it has been incorporated into Google’s new Pixel 7a mobile phone, now available for preorder.
Let’s not forget Bard’s big brother, LaMDA 2, which Pichai said is undergoing testing in the Google AI Test Kitchen, a platform developers use across all emerging areas of AI.
Also a powerful natural language model said to have incredible conversational capabilities, LaMDA was trained on dialogue – unlike Bard and PaLM, which were trained on broader text data – allowing it to talk about virtually anything.
Pichai also spoke about a concept called “Chain of Thought Processing,” which lets users describe multi-step problems that the chatbot not only answers, but also explains how it arrived at the answer.
It increases accuracy by a large margin, and the answers can also be translated into other languages, Pichai said.
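The technique Pichai described is widely known in the research community as chain-of-thought prompting: instead of asking for a bare answer, the prompt asks the model to reason step by step. The sketch below only builds such a prompt string – no real model or API is called, and the function name is invented for this example.

```python
# Illustrative sketch of chain-of-thought prompting: the prompt
# explicitly asks the model to show its reasoning before answering.
# chain_of_thought_prompt is a hypothetical helper, not a real API.

def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in an instruction to reason step by step."""
    return (
        f"Question: {question}\n"
        "Think through the problem step by step, "
        "then state the final answer."
    )

prompt = chain_of_thought_prompt(
    "A train travels 120 km in 2 hours. What is its average speed?"
)
print(prompt)
```

On multi-step arithmetic and reasoning tasks, prompting a large model this way has been shown to markedly improve accuracy compared with asking for the answer directly.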
Capitalizing on large language model technology, Pichai revealed that Google plans to invest $9.5 billion in data centers across the US, including the world’s largest custom-made machine learning hub in Mayes County, Oklahoma.
The hub will house eight cloud pods – the same infrastructure that powers Google’s large neural models – and will be available to the general public for complex problem-solving, fueling innovation, Pichai said.
The Oklahoma data center will operate on 90% carbon-free energy. Pichai said Google plans to have all its US data centers operate completely on carbon-free energy by 2030.
Search and Workspace get snazzier
Hoping to lure consumers away from Microsoft’s rival Bing search engine, Google speakers demonstrated a plethora of examples of how AI has been incorporated into all of its Workspace products.
From Help Me Write and parsing highlights in Google Docs, to getting more organized and creating tables in Google Sheets, to visual storytelling in Slides, it's all in there.
Starting next month, Google said it will test six new generative features for Workspace, including Duet AI for businesses.
So far, Alphabet has rallied some heavy hitters willing to test out its newest AI technology, including Deutsche Bank, Uber, Victoria’s Secret, and fast-food giant Wendy’s, which will let Google’s AI chatbot take drive-thru orders beginning next month at one of its Ohio locations.
Meantime, the new Google Search – the core product of Google’s mission – looks the same but now returns lengthy responses generated by AI.
In fact, if you’re unsure of what to ask the chatbot, AI can now proactively offer contextual prompts that change based on what you are working on.
Google said search will keep evolving in any format to answer your questions.
Another new AI search tool is List It, which helps the user take a goal and break it down into sub-topics.
AI will generate ideas and provide the pathway to get to a goal, such as creating a garden or moving to a new city, by continuously breaking down the goal into smaller and smaller steps.
A new Look and Talk search feature will eliminate the need to say “Hey Google” before asking a question.
Now all a person has to do is make eye contact with their device and it will respond.
The feature can process over 100 signals in real time, such as proximity and head movement of the user. It will also allow the user to speak more naturally.
Last worth mentioning is Google’s dedication to the principle of inclusion.
Combating racial inequality in AI, Google introduced a more advanced version of Real Tone, which helps edit images to more accurately represent a subject’s skin tone.
The new Google skin tone feature lets the user search and get results back in a range of skin tones based on the Monk Skin Tone Scale.
For example, if a person searched for bridal makeup looks, the search would return image examples of subjects with a similar skin tone.
AI technology gets guardrails
Pichai stressed that even with cutting-edge innovation, developers face several significant challenges.
Even though safety has improved, AI models can still generate inaccurate, inappropriate, or offensive responses, Pichai said.
But the biggest issue with AI is misinformation, as all this new content raises additional questions about trustworthiness.
Google has developed new tools to evaluate information, and two new ways to evaluate images.
First, Google Search will now show where and when an image first appeared and was seen online, giving the user helpful context to judge whether it is reliable.
Next, Google will provide metadata for every AI-generated image through Google and any site using the Chrome browser. This metadata will stay with the image even if you come across it on an outside platform.
And last, Google will create a content ecosystem where creators and publishers can add that metadata directly to images and videos themselves.
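The idea behind such provenance metadata can be sketched simply: fingerprint the image bytes and record where and when the content first appeared, plus whether it was AI-generated. The record format below is invented for this illustration – Google has not published a schema – and the function name is hypothetical.

```python
# Hypothetical sketch of image provenance metadata: a publisher
# fingerprints the image bytes and records origin details. The
# record layout is invented for this example.

import hashlib
from datetime import datetime, timezone

def provenance_record(image_bytes: bytes, source: str) -> dict:
    """Build a provenance record for a published image."""
    return {
        "sha256": hashlib.sha256(image_bytes).hexdigest(),  # content fingerprint
        "source": source,                                   # where it first appeared
        "first_seen": datetime.now(timezone.utc).isoformat(),
        "ai_generated": True,
    }

rec = provenance_record(b"\x89PNG...", "images.example.com")
print(rec["sha256"][:8], rec["source"])
```

Hashing the content means the record can be matched back to the image wherever it later turns up, which is what lets the metadata “stay with” an image across platforms.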
To combat deepfakes, Google said it plans to incorporate watermarking to help uncover and prevent misinformation.
Building AI responsibly must be a collective effort, Pichai said.
In accordance with its AI Principles, Google will open up an iterative process over the coming months to invite feedback from a broad range of stakeholders, including researchers, scientists, and human rights groups, the CEO said.