This dialogue explores the primary ethical concerns in AI robotics, examining how advances in technology shape decision-making, privacy, bias, accountability, and employment, as well as their future implications.
Read the full context here: AI Robotics Journey Intro
Expert: Konstantin
Interviewer: Robi
What are the primary ethical concerns that arise in the field of AI robotics?
Ethical concerns in AI robotics vary based on societal interests. While some regions prioritize fears of job displacement, others are more concerned about privacy issues, learned biases, or safety. Of course, all of them should be addressed, and the individual risks understood by anyone providing AI-enabled robotics.
These are just the civilian issues; even more concerning topics revolve around the potential of autonomous weapon systems. In these systems, life-and-death decisions are made in split seconds and without human control, raising complex ethical dilemmas. While such systems aim to minimize the risk of human casualties on one side, they also lower the threshold for initiating real conflicts on the other. We draw insights from fields like rescue robotics, where decision-making complexities and flaws have been observed.
How have ethical considerations in AI robotics evolved over time with the advancement of technology?
If I recall my early lessons in robotics back at university, ethical considerations were introduced from the very beginning. However, they have definitely evolved and shifted due to technological advancements. The first time I seriously started thinking about these issues was after reading Isaac Asimov's books, which caught my attention following the release of the blockbuster movie I, Robot in 2004. It became clear to me that there was much more to consider than I initially thought. From a public viewpoint, the ethical challenges were not considered important enough back in the day to receive any attention at all.
During the last hype cycle of autonomous cars, around 2018-2020, I was literally asked the same question hundreds of times: 'What should the car do—risk the passenger’s life or the cyclist’s?' This particular dilemma made it really challenging to discuss other technical risks or ethical considerations in the field. Other topics, such as black box decisions or people detection systems that completely fail to recognize people of color, were simply ignored.
With the new wave of AI, particularly Large Language Models, public focus has shifted once again to questions like what a robot should be allowed to do and what it should not. This is a very interesting topic, one I could discuss all day.
Can you discuss the challenges related to the decision-making processes of AI in robots and the ethical implications of these decisions?
Decisions in today's robots are typically built from a combination of rules, which can be further broken down into smaller decisions along the way. The evolution of this decision-making process is well illustrated by the transformation from classical classification algorithms to AI-based solutions. In the automotive industry, for instance, lane-keeping assist systems once relied on detecting lanes and searching for the preferable center. In a classical approach, detection was achieved through human bootstrapping, which involved searching for specific colors, positions within the image, lane consistency, and more. While this method was easy for humans to understand, it was difficult to configure and required frequent adjustments for different roads, lane markings, camera positions, and so on.
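To make that concrete, here is a minimal sketch of such a hand-configured pipeline, assuming OpenCV and hypothetical color thresholds; real systems needed exactly this kind of tuning redone for every new road, marking style, and camera position:

```python
import cv2
import numpy as np

# Hypothetical thresholds: in practice these required constant re-tuning
# for different roads, lane markings, lighting, and camera positions.
WHITE_LOW = np.array([0, 0, 180])      # HSV lower bound for white paint
WHITE_HIGH = np.array([180, 40, 255])  # HSV upper bound

def lane_center_offset(frame: np.ndarray) -> float:
    """Estimate the lateral offset of the lane center, in pixels.

    Classical approach: threshold for lane-marking colors, restrict to a
    region of interest near the bottom of the image, and compare the
    centroid of the detected markings to the image center.
    """
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
    mask = cv2.inRange(hsv, WHITE_LOW, WHITE_HIGH)

    # Only consider the lower third of the image, where the lane lies.
    h, w = mask.shape
    roi = mask[2 * h // 3 :, :]

    xs = np.where(roi > 0)[1]  # column indices of detected marking pixels
    if xs.size == 0:
        raise ValueError("no lane markings detected")

    # Positive offset: lane center is to the right of the image center.
    return float(xs.mean() - w / 2)
```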
With AI, these features no longer need to be manually configured; they are learned by the system through extensive examples. This approach is more reliable and can handle a wide range of scenarios and changes, but the decision-making process is no longer easy to understand. From an engineering standpoint, it also becomes challenging to incorporate safety patterns, as decisions can change in a split second without any obvious reason. Missing training data is a clear cause of this issue; the question remains whether it can be addressed with today's technology. The answer is yes, provided that engineers carefully consider training processes and their potential consequences, but it is a lot of work.
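One widely used safety pattern, sketched here with hypothetical limits and interfaces rather than any specific production design, is to wrap the learned component in a deterministic guard that bounds how quickly its output may change:

```python
from dataclasses import dataclass

@dataclass
class SteeringGuard:
    """Deterministic envelope around a learned steering model.

    The limits here are hypothetical; real values depend on vehicle
    dynamics and the certification requirements of the platform.
    """
    max_step: float = 0.05   # max allowed change per control cycle (radians)
    last_command: float = 0.0

    def filter(self, model_output: float) -> float:
        # Clamp the learned model's output so a split-second jump in its
        # decision cannot translate into an abrupt actuator command.
        delta = model_output - self.last_command
        delta = max(-self.max_step, min(self.max_step, delta))
        self.last_command += delta
        return self.last_command
```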
Can you provide more details on the decision-making processes you're developing at ARTI?
At ARTI, we are focused on improving autonomous mobility in a multimodal way. Robots need to be able to navigate freely and effectively in a complex world. The key questions driving us are: How can a robot use elevators, doors, buses, and trains? How can it understand traffic conditions, construction sites, and much more? I want to highlight a recent example of multimodal mobility in a video created by the Technical University of Graz. The student team used our navigation system to provide a variety of skills that extended the robot’s range.
What practical steps could make AI’s decision-making more ethical?
One option could be "shadow mode" testing, where AI models are evaluated on ethical decision-making without directly influencing actions. Reinforcement learning, combined with embedded ethical principles, can train AI on what constitutes good or intended outcomes. However, understanding and controlling AI's decision-making is still an ongoing research challenge.
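As a rough sketch of the idea, with hypothetical policy interfaces: in shadow mode the candidate model runs alongside the active policy and divergences are logged for later review, but only the active policy ever drives the robot.

```python
import logging

logger = logging.getLogger("shadow_eval")

def control_step(observation, active_policy, shadow_policy, tolerance=0.1):
    """Execute only the active policy; evaluate the shadow policy passively.

    `active_policy` and `shadow_policy` are assumed to be callables that
    map an observation to a scalar control command (hypothetical interface).
    """
    command = active_policy(observation)
    shadow_command = shadow_policy(observation)  # computed but never actuated

    # Record divergences for later ethical and safety review.
    if abs(command - shadow_command) > tolerance:
        logger.info(
            "shadow divergence: active=%.3f shadow=%.3f", command, shadow_command
        )
    return command  # the robot only ever acts on the active policy
```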
You mentioned that AI systems can inherit biases from their human creators. How significant is this problem in robotics, and what measures can be taken to ensure fairness and neutrality?
AI systems have biases due to either incorrect or insufficient training data. Just imagine a safety system in self-driving cars looking out for humans and struggling because Black individuals are underrepresented in its training data. Large Language Models (LLMs), a more recent example, are mainly created by American companies and therefore inherit a Western—or more precisely, an American—ethical view of the world. One solution currently being explored by companies in other industries is to switch between foundation models tailored to different cultural or regional standards. However, in robotics, managing multiple models within a cohesive decision pipeline presents a significant technical and operational challenge.
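A simplified sketch of that model-switching idea, with a hypothetical registry and hypothetical model names, could look like this:

```python
# Hypothetical registry mapping regions to foundation models tuned for
# local norms. The hard part in robotics is not the routing itself but
# keeping the downstream decision pipeline consistent when the upstream
# model, and therefore its decision style, changes.
MODEL_REGISTRY = {
    "eu": "assistant-eu-v2",
    "us": "assistant-us-v2",
    "jp": "assistant-jp-v1",
}
DEFAULT_MODEL = "assistant-us-v2"

def select_model(region_code: str) -> str:
    """Pick a region-appropriate foundation model, falling back to a default."""
    return MODEL_REGISTRY.get(region_code.lower(), DEFAULT_MODEL)
```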
What role do you see for internal ethics frameworks versus external regulations?
External regulations, like the European AI Act, are a starting point, but they can’t address the vast scope and diversity of AI applications. Internal frameworks are necessary to proactively address specific ethical challenges, especially in regions with distinct values and standards.
What are the ethical implications of AI robotics on employment and the workforce? How should society address potential job displacement?
Robots become coworkers, and coworkers capture a lot of information about your daily routine. How quickly are you working? How motivated or lazy are you? In addition, intelligent robots might work better with an 'average' pool of people due to their internally trained skill set, which means some people can more easily work side-by-side with robots than others.
Job displacement due to robots is not yet a real concern in Europe; for many jobs, it is simply hard to find people at all. Industry and agriculture are struggling to keep businesses running as they are today. Even restaurants have to close because not enough people apply for such work. Robots are essential for society to preserve many jobs, stabilize the economy, and sustain global competitiveness.
Looking ahead, what new ethical considerations do you think will emerge as AI in robotics continues to advance? How should the field prepare for these challenges?
Higher-level decision-making is becoming even more intriguing with the use of LLMs. Over the past year, several companies have demonstrated that multimodal artificial intelligence (text, audio, image, etc.) is more effective or useful in solving robotic tasks. But why? That’s still a question for today’s scientists to answer.
Similarly, in the ethics of AI, we need to discuss the potential future impacts and decisions made by such systems. AI is pushing robots into a completely new range of tasks and applications. As robots become increasingly intelligent, they may start making decisions we might not want to delegate. Given the conflicts around the world, AI in military robots is a significant concern. As Schallenberg, the Federal Minister for European and International Affairs of Austria, pointed out: 'The biggest revolution on the battlefield since gunpowder.'
Further Readings
- Conference on robots and the permission to kill: https://orf.at/stories/3355502/
- Addressing concerns about robots with too much autonomy in the military: https://www.stopkillerrobots.org/
- Highly recommended book: Wired for War by P.W. Singer: https://en.wikipedia.org/wiki/Wired_for_War