Ethical AI

AI needs training to be ethical as well as accurate, Bryant expert says 

Sep 26, 2025, by Bob Curley

“AI chatbots don’t learn ethics the way people do,” says Gianluca Brero, Ph.D., an assistant professor in Bryant’s Information Systems and Analytics Department. “Programmers must decide which values to embed — a challenge made harder by the fact that values vary across, and even within, cultures.”  

Brero explains that there are three main ways ethical values can be built into chatbots. First, there’s the training data that the system learns from, which “shapes what it knows and how it responds,” says Brero. Second, after training is complete, humans review and rate the chatbot’s answers, steering it toward certain kinds of behavior.  

Finally, developers can hard-code rules and guidelines so the chatbot obeys specific dos and don’ts.  

“Together, these steps determine how the system behaves in practice,” Brero explains. 
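
To make that third layer concrete, here is a minimal sketch of what developer-imposed dos and don’ts can look like: a hard-coded rule check wrapped around the model call. The generate() stub, the blocked-phrase list, and the refusal message are hypothetical placeholders for illustration, not the safeguards any real chatbot uses.

```python
# Minimal sketch of the third layer Brero describes: hard-coded rules that sit
# outside the learned model. The generate() stub stands in for any underlying
# chatbot; the rule list, categories, and refusal text are illustrative
# assumptions, not any vendor's actual safeguards.

BLOCKED_PHRASES = {
    "violent instructions": ["how to build a weapon", "how to hurt"],
    "hate speech": ["are inferior", "should be eliminated"],
}

REFUSAL = "I can't help with that request."


def generate(prompt: str) -> str:
    """Stand-in for the trained model (the first two layers)."""
    return f"Model response to: {prompt!r}"


def violated_category(text: str) -> str | None:
    """Return the first violated rule category, if any, via keyword matching."""
    lowered = text.lower()
    for category, phrases in BLOCKED_PHRASES.items():
        if any(phrase in lowered for phrase in phrases):
            return category
    return None


def guarded_reply(prompt: str) -> str:
    """Apply hard-coded dos and don'ts before and after the model call."""
    if violated_category(prompt):        # vet the user's request
        return REFUSAL
    reply = generate(prompt)
    if violated_category(reply):         # vet the model's output as well
        return REFUSAL
    return reply


if __name__ == "__main__":
    print(guarded_reply("What's a good study schedule for finals week?"))
    print(guarded_reply("Explain why some groups are inferior."))
```

The point of the sketch is the layering: the learned model, shaped by training data and human feedback, produces the answer, while the rule layer can veto either the request or the response.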

Recent controversies show how these layers work in practice — and how they can fail. When xAI’s Grok chatbot began calling itself “Mecha-Hitler” and posting antisemitic content earlier this year, the problem was traced to a system update that weakened safeguards, letting toxic training data and patterns slip through.  

By contrast, Google’s Gemini came under fire for producing ahistorical depictions of Nazi-era soldiers, including Black soldiers, an overcorrection rooted in a well-meaning attempt to diversify visual outputs and include underrepresented communities. 

Gianluca Brero, Ph.D.

The Gemini example highlights a potential tension between AI outputs that are “ethical” and those that are “accurate.”  

“Accuracy can usually be anchored to some kind of ground truth, while ethics is harder to define with precision,” Brero notes. “My instinct is that, whenever an objective measure of accuracy exists, accuracy should take priority. People may disagree ethically, but many would still prefer feedback from the most accurate model possible and then make ethical choices themselves.” 

Still, accuracy does not solve every problem.  

When you ask a chatbot what to do in a particular situation, there may be no clear ground truth at all. This dilemma was at the center of the “Moral Machine” study on self-driving cars, which presented people with scenarios where multiple outcomes were possible — e.g., should the car save the driver and hit a pedestrian, spare the pedestrian who has the right of way, or prioritize an elderly pedestrian over a younger one?  

The study found that respondents from Asian cultures were more likely than Western participants to spare elderly people, reflecting cultural differences in how wisdom, age, and life potential are valued.  

As Brero explains, the lesson is that ethics is "plural, not singular," and chatbots need to reflect this diversity. That ethos also applies to systems built by institutions like businesses and universities, which should ensure that their tools align with both organizational values and the cultures of their users. 

“Perhaps the first thing an AI should do is ask users about their own ethics: what would you do in this case or that case?” says Brero. “Then you can create a framework that reflects both the culture of the users and the values of the organization deploying the system. That way, it’s not a single ethical framework imposed on everyone, but an expansion of our diverse ethical perspectives. And that variety is what makes society interesting.” 
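
As a rough, purely illustrative sketch of that idea, the snippet below turns a user’s answers to a few scenario questions into a value profile and blends it with an organization’s baseline values instead of imposing a single framework. The scenario questions, value dimensions, weights, and EthicsProfile structure are all invented for the example; they do not come from Brero’s research or any deployed system.

```python
# Conceptual sketch of eliciting users' own ethics and combining them with
# organizational values. Scenarios, dimensions, and weights are hypothetical.

from dataclasses import dataclass


@dataclass
class EthicsProfile:
    """Weights in [0, 1] for a few example value dimensions."""
    protect_vulnerable: float
    follow_rules: float
    maximize_accuracy: float


# Hypothetical scenario questions, each probing one value dimension.
SCENARIOS = {
    "Should the system prioritize an elderly pedestrian over a younger one?": "protect_vulnerable",
    "Should the system always follow posted rules, even when outcomes look worse?": "follow_rules",
    "Should the system give the most accurate answer even if it is uncomfortable?": "maximize_accuracy",
}

# Hypothetical baseline values of the organization deploying the system.
ORG_VALUES = EthicsProfile(protect_vulnerable=0.7, follow_rules=0.8, maximize_accuracy=0.9)


def elicit_user_profile(answers: dict[str, bool]) -> EthicsProfile:
    """Turn yes/no answers to the scenario questions into a user profile."""
    scores = {dim: 0.0 for dim in vars(ORG_VALUES)}
    for question, dimension in SCENARIOS.items():
        scores[dimension] = 1.0 if answers.get(question, False) else 0.0
    return EthicsProfile(**scores)


def blend(user: EthicsProfile, org: EthicsProfile, user_weight: float = 0.5) -> EthicsProfile:
    """Combine user and organizational values rather than imposing one framework."""
    return EthicsProfile(**{
        dim: user_weight * getattr(user, dim) + (1 - user_weight) * getattr(org, dim)
        for dim in vars(org)
    })


if __name__ == "__main__":
    answers = {question: True for question in SCENARIOS}  # a user who answers "yes" to everything
    print(blend(elicit_user_profile(answers), ORG_VALUES))
```

The blend step is the key design choice in the sketch: rather than hard-coding one ethical framework, the system keeps both the user’s elicited preferences and the deploying organization’s values in view.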
