The Limits of Meaningful Human Control of AI in the Maritime Domain

Authors

  • Lukas Albrecht — German Aerospace Center, Institute for the Protection of Maritime Infrastructures, Bremerhaven, Germany
  • Hagen Braun — German Aerospace Center, Institute for the Protection of Maritime Infrastructures, Bremerhaven, Germany
  • Tim Robin Kosack — German Aerospace Center, Institute for AI Safety and Security, Sankt Augustin, Germany
  • Thomas Krüger — Aerodata AG, Braunschweig, Germany

DOI:

https://doi.org/10.7225/toms.v14.n03.w02

Keywords:

Artificial intelligence, Autonomous systems, Maritime vessels, Meaningful Human Control

Abstract

This paper analyses the viability of Meaningful Human Control as a mechanism to ensure the ethical and safe use of autonomous systems, focusing on the maritime context. With future maritime systems increasingly relying on Artificial Intelligence components as a main driver of autonomous operation, vehicles such as Maritime Autonomous Surface Ships promise substantial benefits in terms of efficiency and safety. Particularly in maritime settings, where hazardous environments and dangerous working conditions put humans at risk, the deployment of autonomous systems is appealing from both an efficiency and a safety point of view, removing humans both as a source of and a subject to risk. This is especially true for sophisticated AI-driven autonomous systems that can be deployed in unknown environments and can deal with unforeseen problems, as they can operate independently of human input across a wide variety of applications. However, truly autonomous AI also introduces characteristic risks, such as the occurrence of Responsibility Gaps, where the ascription of responsibility for the behaviour of autonomous systems is obscured because humans are prima facie not sufficiently in control of such systems. Simply put, sophisticated AI agents are considered too autonomous to hold human agents morally responsible for them. If, due to special ethical concerns or safety engineering reasons, a human operator needs to be involved in AI decision making, then human oversight and human control in a meaningful way are indispensable. To address this need for oversight and control, the concept of Meaningful Human Control (MHC) has been introduced, primarily to guarantee the ascription of responsibility in case of harmful events. Yet, reintroducing the human element into an autonomous AI-driven system not only limits its potential but also faces conceptual and material barriers.
This paper starts by looking at autonomous systems in relation to risk, before exploring the call for Meaningful Human Control and the barriers to its implementation. It concludes that there are technical and conceptual barriers that make Meaningful Human Control non-viable in some maritime applications.

Published

2025-07-25

How to Cite

Albrecht, L., Braun, H., Kosack, T. R. and Krüger, T. (2025) “The Limits of Meaningful Human Control of AI in the Maritime Domain”, Transactions on Maritime Science. Split, Croatia, 14(3). doi: 10.7225/toms.v14.n03.w02.
