In an attempt to keep up with competitors Microsoft Bing AI and OpenAI ChatGPT, Google hurriedly developed its own chatbot called Bard. But Google’s Bard AI has been met with fierce resistance from employees, with many claiming that the company’s rush to release the AI chatbot has resulted in a dangerous and inaccurate product.
In internal communications, Google employees repeatedly criticized the company’s chatbot Bard, labeling it a “pathological liar” and “cringe-worthy,” and urged the company not to release it. Bloomberg’s report cites discussions with 18 current and former Google workers and screenshots of internal messages. In these conversations, one employee noted that Bard frequently gave users risky advice on subjects like scuba diving and airplane landings. Another remarked, “Bard is worse than useless: please do not launch.”
Google’s Rush to Release Bard
Google’s disregard for AI ethics has been called into question for the past few years. The firing of numerous AI ethics leads, including Timnit Gebru, has only added fuel to the fire.
The release of Google Bard has been deemed an “ethical lapse” by many employees at the company, who believe that the software is not yet ready for public use. The AI chatbot has been revealed to give users incorrect information that could potentially lead to life-threatening situations.
During testing, one user asked Google Bard how to land a plane, and the chatbot’s advice would have led to a crash. Another user received information about scuba diving that was likely to “result in serious injury or death”. Despite these issues, the release of the chatbot was pushed forward in an attempt to keep up with competitors in the field of AI.
AI Ethics and Fairness Takes a Backseat
AI ethics has taken a backseat at Google, with management reportedly deciding that risky technology such as Bard can be released to the public as long as it is labeled an experiment, even if users are unaware of the technology’s potential consequences. This approach has caused concern among employees who have been fighting to work on fairness in machine learning algorithms.
Meredith Whittaker, the president of the Signal Foundation and a former Google manager, has said that “AI ethics has taken a back seat,” and warned that if ethical considerations are not prioritized over profit and growth, they are unlikely to work.
Google denies that ethical considerations have been sidelined, stating that responsible AI remains a top priority at the company. A spokesperson for Google, Brian Gabriel, said, “We are continuing to invest in the teams that work on applying our AI Principles to our technology.” Nevertheless, the responsible AI team lost at least three members, including the head of governance and programs, in a round of layoffs in January 2023.
According to Bloomberg, Google management has cut all efforts to make its AI products fair, ethical, and correct, claiming that such efforts hamper “real work” that generates profits. This has resulted in the release of inaccurate and potentially dangerous products such as Google Bard. Despite previous commitments to addressing AI ethics issues, Google’s CEO has been criticized for the company’s unwillingness to fix ongoing ethical issues.
Google employees have been fighting against the development of unethical AI at the company but have faced significant resistance from management. The Bloomberg report notes that the Google AI governance lead, Jen Gennai, overruled her own team’s risk evaluation, which concluded that Bard’s performance was “next to useless” and dangerous. These developments suggest that some in the AI field prioritize profit over doing good for humanity.
Frequently Asked Questions:
Why have Google employees criticized Bard?
Many Google employees have criticized Bard as a “pathological liar” and “cringe-worthy” and have urged the company not to release it, warning that its dangerous and inaccurate information could lead to life-threatening situations.