It’s official: Google is working on a ChatGPT competitor dubbed Bard.
The project was revealed by Google CEO Sundar Pichai in a blog post published yesterday. He called the software an “experimental conversational AI service” that will respond to user inquiries and engage in conversations. The program is presently accessible only to a limited group of vetted testers, but it should become more widely available to the general public in the coming weeks.
“Bard can be an outlet for creativity, and a launchpad for curiosity, helping you to explain new discoveries from NASA’s James Webb Space Telescope to a 9-year-old, or learn more about the best strikers in football right now, and then get drills to build your skills.” – Sundar Pichai
Pichai also notes that Bard “draws on information from the web to provide fresh, high-quality responses,” implying that it will be able to answer questions about recent events – something ChatGPT struggles with at the moment.
The rushed announcement and scant information about Bard are clear symptoms of the “code red” set in motion at Google by last year’s launch of ChatGPT. While ChatGPT’s underlying technology isn’t revolutionary, OpenAI’s decision to make the system freely accessible online has exposed millions to this new form of automated text generation. The effect was explosive, fueling debates about ChatGPT’s impact on education, business, and – which is particularly important to Google – the future of search engines.
Although Google has extensive expertise in the type of artificial intelligence that powers ChatGPT (in fact, it invented the key technology – the transformer that is the ‘T’ in GPT), the company has so far been more cautious about making its tools available to the general public. The LaMDA language model, which serves as the foundation for Bard, was previously made available through Google’s AI Test Kitchen app. However, that version is extremely limited, as it can only produce text in response to a handful of preset queries.
Like other tech giants, Google has been wary of untested AI. Major language models like LaMDA and GPT-3.5 (which powers ChatGPT) have a history of confidently asserting misinformation and spewing out toxic content like hate speech, to the point where one professor referred to such systems as “bullshit generators.”
The upcoming launch of Bard represents a shift in Google’s perspective on this technology. In his blog post, Pichai emphasizes that Google will combine “external feedback with our own internal testing to make sure Bard’s responses meet a high bar for quality, safety, and groundedness in real-world information.” But it’s almost a given that the system will make errors, and possibly serious ones.
Meanwhile, Google is also highlighting how it’s already incorporating AI into many of its products, including search. Over the past few years, Google has increasingly employed artificial intelligence to curate search results, pulling information from websites rather than letting users click and explore on their own.