Chatbot Race Must Not Be Run with Blinkers On
This opinion piece by Human Rights Commissioner Lorraine Finlay and Human Rights Advisor (Business and Technology) Patrick Hooton appeared in The Australian on Thursday, 9 February 2023.
Since November last year, AI-informed chatbots have made the front pages of newspapers across the globe. Leading this charge has been ChatGPT, which was developed by OpenAI.
Until now this has largely been a one-horse race, but Google’s parent company, Alphabet, has now entered the fray, announcing that Bard, a conversational AI service, will be released in the coming weeks.
Bard will similarly be powered by AI, in its case Alphabet’s controversial LaMDA language model (which a Google engineer last year described as sentient). With Alphabet’s announcement, it seems ever more likely that these sophisticated chatbots will become the next “Google” search engine. No longer will you trawl through a list of links to find answers; the answer will be brought directly to you.
However, if these sophisticated chatbots are to replace Google, developers must not race to secure market position with blinkers on. Wilful ignorance of the serious human rights risks that emerge from the pursuit of a profitable bot is not acceptable, and companies must take proactive steps to mitigate those risks.
Privacy at Risk
Chatbots implicitly threaten our right to privacy in the 21st century. Chatbots collect, store and use personal data entered into prompts to further train the AI. This risk is exacerbated where users enter sensitive data into a prompt.
For example, if a user asked ChatGPT or Bard to review a personal essay, which might include intimate details about their sexuality or political beliefs, this information may be rapaciously devoured and regurgitated by the technology in future responses, possibly with unintended consequences. Users are unknowingly giving away private data.
OpenAI and Alphabet must improve transparency and provide clearer warnings so that people understand how their information is harvested when they use these chatbots.
Bias and Discrimination
Bias and discrimination in chatbots have also been a central concern for users and developers. Chatbots are trained on vast amounts of information from all corners of the internet – which often includes hate speech and other illegal content. Because AI cannot distinguish fact from fiction without the assistance of human moderators and safeguards, chatbots can spew hateful and repulsive content, including content about minority groups and children.
Developers must continue to ensure that AI learns what material is inappropriate. We cannot allow technology to further divide society if these chatbots are to replace traditional search engines. We want chatbots to reflect the best of humanity, not the worst.
AI Hallucinations
Because of this inability to distinguish fact from fiction, chatbots often give answers that are plainly false but presented as truth. This phenomenon is known as an “AI hallucination”. While this can produce absurd results, with many Twitter users posting hilariously incorrect chatbot answers online, it also risks spreading false information or manipulating public opinion.
Consider how an AI may have difficulty determining whether Covid-19 is real. Although Covid-19 is undoubtedly a genuine disease that has caused more than 6.8 million deaths worldwide, there is ample content online promoting the narrative that it was a fabrication. Tech companies must put in place strict safety measures to ensure AI does not provide responses that amplify and support harmful conspiracy theories.
Finding Balance
A constant tension in digital spaces is the balance between freedom of expression and content moderation that prohibits hateful content. While a Google search provides a list of links to information from different sources, chatbots provide a single response. We are already seeing examples of people calling out alleged bias in the answers ChatGPT provides, or even in the topics it refuses to engage with. By narrowing the range of sources available, we risk losing the benefits that come from exposure to a variety of different perspectives.
Problems may also arise if developers and moderators are given the power to censor or promote certain viewpoints without accompanying transparency and accountability. If, for example, the developers were based in China, users might struggle to get a response to prompts concerning democratic processes, the Tiananmen Square massacre or the status of Taiwan. There is a real risk that domestic censorship comes to have an expanded global reach. Both OpenAI and Alphabet must tread carefully in balancing genuine freedom of expression and content moderation; otherwise they risk further accusations of “AI censorship”.
It now seems inevitable that sophisticated chatbots will be the future of how we find information online. But developers and regulators must not hurtle towards technological advancement with blinkers on. They need to think ahead about how these inventions may be used, and the harm they may cause.