Oregon State Professor Houssam Abbas explaining engineering principles on a whiteboard
Johanna Carson

'Moral' machines: Building ethical behavior into autonomous AI systems

Key Takeaways

Modeling the consequences of AI systems' actions helps developers integrate ethical behavior.
Important applications of AI ethics include robotics and multi-agent interactions.
Collaborative research is helping to develop safer AI systems.

Introduction

As autonomous systems become more prevalent in daily life, engineers face an enormous challenge. It's not enough for these human-scale systems to complete tasks safely; they also need to abide by the social considerations we humans rely upon to guide and regulate our interactions.

Houssam Abbas, assistant professor of electrical and computer engineering at Oregon State, studies how to integrate ethical norms into artificial intelligence systems. Abbas came to Oregon State in 2019, drawn in part by groundbreaking work in AI and security conducted at the university. Prior to that, he was a design automation engineer with Intel for eight years.

"My background, and my abiding interest, is in developing formal methods for verification of engineered systems," Abbas said. "It's not just about testing them a certain number of times and being able to say, 'It seems to work.' It's about establishing actual proofs of correctness. That is necessary for safety-critical systems."

Unveiling the consequences of actions taken by AI systems

As engineers, Abbas and his colleagues do not independently determine or assert what is or is not an ethical choice within the context of AI. Rather, in conversation with developers, they enable modeling of the consequences of these systems鈥 actions.

"We are providing engineering tools so that both the developers and the whole of society know a little bit better what it is that they're getting through the deployment of a particular AI," Abbas said.

Take the example of an unpiloted aerial vehicle tasked with delivering biohazardous material to a hospital. That UAV must balance several ethical imperatives, such as delivering the material swiftly to administer lifesaving care while also minimizing risks to people on the ground. Using a formal methods approach, Abbas works to translate these ethical considerations from English into mathematical formulas to compute a control policy for the UAV.

"When you push that button, our algorithm produces the controller that is guaranteed, mathematically, to satisfy the requirements," he said.
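The flavor of guarantee Abbas describes can be illustrated with a toy example. The sketch below is not his actual tooling (which works with temporal-logic requirements and reinforcement learning); it shows the same idea at miniature scale: a reach-avoid requirement such as "reach the hospital cell, never enter a no-fly cell" is solved exactly by backward search on a grid, so every action the resulting policy issues satisfies the requirement by construction rather than by testing.

```python
from collections import deque

def synthesize_policy(grid, goal):
    """Return {cell: action} steering every cell that can reach `goal`
    toward it, while never entering a cell marked 'X' (no-fly zone).
    Cells absent from the result cannot satisfy the requirement."""
    rows, cols = len(grid), len(grid[0])
    moves = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1)}
    policy = {goal: "STAY"}
    frontier = deque([goal])
    while frontier:
        cell = frontier.popleft()
        for action, (dr, dc) in moves.items():
            # Predecessor that reaches `cell` by taking `action`.
            prev = (cell[0] - dr, cell[1] - dc)
            r, c = prev
            if (0 <= r < rows and 0 <= c < cols
                    and grid[r][c] != "X" and prev not in policy):
                policy[prev] = action
                frontier.append(prev)
    return policy

# A column of no-fly cells forces the route around the hazard.
grid = [
    ".X.",
    ".X.",
    "...",
]
policy = synthesize_policy(grid, goal=(0, 2))
```

Because the policy is built backward from the goal through safe cells only, no execution starting from a covered cell can ever enter a no-fly cell; that "correct by construction" property, rather than empirical testing, is what distinguishes formal synthesis.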

While the logic Abbas works with is particularly well suited for reinforcement learning in robotics, it has applications in other areas as well, including what are called multi-agent problems in AI, where interactions occur between AIs and humans or between AIs and other AIs.

Safer AI systems through collaboration

Abbas is part of several collaborative projects that aim to improve the safety of AI systems. One large project, sponsored by the Federal Aviation Administration, brings together 11 principal investigators from five universities to study failures of automation in multi-UAV aerial systems, as part of the FAA's larger ASSURE (Alliance for System Safety of UAS through Research Excellence) consortium. Partner universities include Drexel University, University of North Dakota, Ohio State University, and Kansas University.

Another project involves collaboration with HiddenLayer, an AI security startup working on the security of large language models. Ph.D. student Amelia Kawasaki, who is also a researcher at HiddenLayer, has developed software that runs in tandem with an LLM to catch "jailbreak" prompts intended to bypass the LLM's safeguards.
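To make the "runs in tandem" idea concrete, here is a deliberately simplified sketch of such a guard. It is hypothetical and not Kawasaki's or HiddenLayer's actual software (which would use learned classifiers, not a keyword list): a filter screens each prompt before the model sees it, and flagged prompts are refused without ever invoking the LLM.

```python
import re

# Illustrative patterns for common jailbreak styles; a real system
# would use a trained classifier rather than a fixed list.
JAILBREAK_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"\bDAN\b",  # "Do Anything Now" persona prompts
    r"pretend (you have|there are) no (rules|restrictions)",
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt should be blocked."""
    return any(re.search(p, prompt, re.IGNORECASE)
               for p in JAILBREAK_PATTERNS)

def guarded_llm(prompt: str, llm=lambda p: f"LLM answer to: {p}") -> str:
    """Run the safety filter in tandem with a (stubbed) LLM call."""
    if screen_prompt(prompt):
        return "Request refused by safety filter."
    return llm(prompt)
```

The guard sits outside the model, so it provides a second, inspectable line of defense even when the LLM's own alignment training is bypassed.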

Other industry partners have included Toyota Research Institute of North America, working on control and monitoring of UAVs, and Intel, working on formal methods to reinforce security of control systems.

AI ethics versus system safety

When working with industry, Abbas says, he often encounters questions about how ethical norms in AI differ from safety considerations. The two are inextricably linked, Abbas explains.

"Safety is always interpreted, even if implicitly, in the context of an ethical code. When we determine that hitting the brakes in a car is the right thing to do, we are saying it is morally right, because it prevents injuries for the passengers, for example, and that's the right thing to do in this context," he said. "So, even defining safety for a fully autonomous system requires reasoning about the ethical content of a situation, and implementing it requires the robot itself to perform some of that reasoning."

Ultimately, Abbas says, trust is essential to the safety and the overall success of autonomous systems.

"You can have a robot that has the right safety guards programmed into it. But if you as a person don't perceive that it does, then the interaction is still not going to be successful," he said. "If I don't trust that it is going to behave with me in an ethical manner, I'm not going to be safe around it, and it's not going to be safe around me."

Contact Houssam Abbas with ideas for collaborative research at houssam.abbas@oregonstate.edu.


Jan. 12, 2025
