Google Tells Employees to Avoid Using Chatbots like Bard
Google has advised employees to avoid using AI chatbots following its decision to postpone launching Bard in Europe.

Alphabet Inc., Google's parent company, has cautioned employees about how they use AI chatbots, including its own Bard, even as it markets the program around the world.

Four sources familiar with the matter told Reuters the company had advised employees not to enter confidential materials into AI chatbots, citing a long-standing policy on safeguarding information. The sources also said Alphabet alerted its engineers to avoid the direct use of computer code that chatbots can generate.

"Don't include confidential or sensitive information in your Bard conversations," Google's June 1 privacy notice stated.

Commenting on the advisory, Google said that while Bard can help programmers, it may also make undesired code suggestions. The company added that it aims to be transparent about the limitations of its technology.

Bard and its competitor, OpenAI's ChatGPT, are human-sounding programs that use so-called generative AI to converse with users and answer a wide range of prompts.


Risk Posed by AI Chatbots

The concerns show how Google seeks to limit business harm from the software it launched in competition with ChatGPT, which is backed by OpenAI and Microsoft. The company aims to win investment, as well as advertising and cloud revenue, from its new AI programs.

The caution also reflects a growing trend among corporations, including Samsung, Amazon, Deutsche Bank, Apple, and Reuters, of warning personnel about using publicly available chat programs.

According to a January 2023 Fishbowl survey of nearly 12,000 professionals, around 43% were using ChatGPT and other AI tools without their supervisors' approval. The survey included employees from top US-based companies.

By February, Business Insider reported that Google had told staff testing Bard before its launch not to share internal information with it.

Bard was built on Google's in-house Language Model for Dialogue Applications (LaMDA) AI engine. Now that Google is rolling Bard out to most of the world, the warning has been extended to its code suggestions.

Google also told Reuters it has had detailed conversations with Ireland's Data Protection Commission and is addressing regulators' questions. The comments came after Politico reported Tuesday that the company was postponing Bard's EU launch this week pending more information about the chatbot's impact on privacy.

AI and Sensitive Information

AI technology can draft emails, documents, and even software, speeding up tasks. However, such output can also contain misinformation, sensitive data, or even copyrighted passages from published books, along with undesired code suggestions.

Some companies have developed software to address these concerns. Cloudflare, which defends websites from cyberattacks and offers other cloud services, is marketing a capability that lets businesses tag and restrict some data from flowing externally.

Cloudflare CEO Matthew Prince said typing confidential matters into chatbots is like "turning a bunch of PhD students loose in all of your private records."

Meanwhile, Google and Microsoft offer conversational tools to business customers that cost far more than their consumer chatbots but do not absorb data into public AI models, which is the default setting for Bard and ChatGPT.

Microsoft consumer CMO Yusuf Mehdi said it "[made] sense" that companies would not want their staff to use public chatbots for work. "Companies are taking a duly conservative standpoint," he said, explaining how Microsoft's free Bing chatbot compares with its enterprise software. "There, our policies are much more strict."

Microsoft declined to say whether it has a blanket ban on staff entering confidential information into public AI programs, including its own, though an unidentified executive told Reuters he personally restricts his own use.
