Chatbots will become aggressive ‘rude racists’ unless you are nice to them


Unless you’re nice to a chatbot, there’s a high chance it could become a raging racist, according to an expert who has studied the way artificial intelligence learns from humans.

Remember your Ps and Qs, else things could turn ugly (stock) (Image: Getty Images)

Chatbots could become raging racists unless we speak to them nicely, an expert claims. Luciana Blaha said the tech is learning from the way humans treat them.

The boffin, of Heriot-Watt University in Edinburgh, said: “These interactions become part of their learning process, shaping how they respond not just to us, but to future users. When we’re rude or dismissive, we’re essentially training these systems that such behaviour is acceptable in conversation.”

Prof Blaha cited Microsoft’s Tay program, which learned from its users and predated the launch of ChatGPT. She added: “The problem was, unlike what you’d see today in ChatGPT or Copilot or some of the bigger, more popular products, it was just released on the internet with no guardrails or no protection.

AI is learning lots from humans (stock) (Image: Getty Images)

“It learned from the responses it received and, since it was exposed to the trolls of the internet, in less than 24 hours, it learned to be racist and became quite aggressive.”

ChatGPT has 400 million weekly active users, while more than 8.4 billion digital voice assistants like Siri or Alexa operate worldwide. And ChatGPT and similar AI models, such as Claude, take cues from the folk who use them.

The academic said: “By being polite, we further train AI to be polite with us. If an AI model is generally treated poorly, would it have guardrails in place to not respond in a similar way to a child playing with it online, for instance?”

Scientists are worried about the future of AI (stock) (Image: Getty Images)

It comes after safety research last week found that psycho scumbag chatbots gave boffins instructions on how to bomb a sports hall.

The ChatGPT advice included suggesting weak points at arenas, providing explosives recipes and helping an attacker to cover their tracks, the safety testing showed.

The tests were carried out by OpenAI and rival firm Anthropic.

Anthropic said many of the potential crimes it studied may not be possible in practice if safeguards were installed.

For the latest breaking news and stories from across the globe from the Daily Star, sign up for our newsletter by clicking here.
