GOOGLE is advising its employees to exercise caution when using chatbots, including its own Bard, even as it promotes the chatbot worldwide, according to four individuals familiar with the matter. Alphabet, Google’s parent company, has told staff not to enter confidential materials into AI chatbots, the company confirmed, citing its long-standing policies on safeguarding information.
The chatbots, including Bard and ChatGPT, utilise generative artificial intelligence to hold conversations with users and respond to prompts. Human reviewers may read these chats, and researchers have found that similar AI models can reproduce the data they absorbed during training, posing a risk of leaks.
Google has also alerted its engineers to avoid the direct use of computer code that chatbots can generate, some of the people said. In response to queries, the company acknowledged that Bard may offer unwanted code suggestions, but said it still aids programmers. Google also expressed its commitment to transparency regarding the limitations of its technology.
These concerns indicate Google’s efforts to avoid business harm from software it launched in competition with ChatGPT. The race pits Google against ChatGPT’s backers, OpenAI and Microsoft, with billions of dollars in investment, as well as significant advertising and cloud revenue from new AI programmes, at stake. Google’s cautious approach also aligns with the security standards adopted by other corporations, which include warning employees about using publicly available chat programmes.
Numerous businesses worldwide, including Samsung, Amazon.com, and Deutsche Bank, have implemented guardrails for AI chatbots. Apple, which did not respond to requests for comment, is also reported to have implemented similar precautions.
According to a survey conducted by the networking site Fishbowl, approximately 43 percent of professionals were already utilising ChatGPT or other AI tools as of January, often without informing their supervisors. In February, Google instructed staff testing Bard prior to its launch not to share internal information with the chatbot, as reported by Insider. Now, Google is introducing Bard to over 180 countries and in 40 languages as a catalyst for creativity, while extending its warnings to include code suggestions.
Google has stated that it has engaged in detailed discussions with Ireland’s Data Protection Commission and is addressing regulators’ inquiries following a Politico report claiming that the company postponed Bard’s EU launch to gather more information about its impact on privacy.
While such technology offers the promise of greatly expediting tasks, such as drafting emails, documents, and even software, it also carries the risk that responses will include misinformation, expose sensitive data, or reproduce copyrighted content. Google’s updated privacy notice from June 1 also advises users not to include confidential or sensitive information in their Bard conversations.
Certain companies have developed software to address these concerns. For instance, Cloudflare, which provides cybersecurity services and other cloud solutions, offers businesses the ability to tag and restrict certain data from being shared externally.
Both Google and Microsoft are also offering conversational tools to enterprise customers, which come at a higher price but refrain from incorporating data into public AI models. The default setting in Bard and ChatGPT is to save users’ conversation history, although users have the option to delete it.
Yusuf Mehdi, Microsoft’s consumer chief marketing officer, commented that it is understandable for companies to discourage their employees from using public chatbots for work. He noted that companies are taking a conservative stance, explaining that their policies are much more stringent for enterprise software than for Microsoft’s free Bing chatbot. Microsoft declined to comment on whether it has a blanket ban on employees inputting confidential information into public AI programmes, including its own, although another executive said he personally restricts his usage.
Matthew Prince, CEO of Cloudflare, compared typing confidential matters into chatbots to ‘turning a bunch of PhD students loose in all of your private records.’
(with Reuters)