Steering through the challenges of maritime autonomy
Professor John McDermid, director of the Centre for Assuring Autonomy, a partnership between the Lloyd’s Register Foundation and the University of York, kicks off 2025’s Contributions section.
Autonomous systems can improve the maritime industry in a number of ways. For instance, they can contribute to the industry’s decarbonisation efforts by reducing fuel consumption, both by optimising routes and by determining when decision makers should switch fuels. In turn, this could lower operating expenditure and the price of sea-transported goods, benefiting consumers. Yet challenges around assurance, and the large costs associated with this new technology, mean that the economic benefits will take time to materialise.
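To give a flavour of what route optimisation for fuel savings can look like computationally, the sketch below treats candidate legs between waypoints as a weighted graph and finds the cheapest route by estimated fuel burn. It is a minimal illustration only: the waypoints, fuel figures, and the reduction of the problem to a shortest-path search are assumptions for this example, not methods described in the article.

```python
# Minimal sketch of fuel-aware route optimisation: model candidate legs
# between waypoints as a graph whose edge weights are estimated fuel burn
# (tonnes), then take the cheapest path. All names and figures below are
# hypothetical illustrations, not data from the article.
import heapq

# Hypothetical leg estimates, (from, to): fuel burn in tonnes. In practice
# these would come from weather, current, and hull-performance models.
fuel_burn = {
    ("Rotterdam", "Channel"): 42.0,
    ("Rotterdam", "NorthAbout"): 55.0,
    ("Channel", "Gibraltar"): 118.0,
    ("NorthAbout", "Gibraltar"): 121.0,
    ("Gibraltar", "Singapore"): 610.0,
}

def cheapest_route(start, goal):
    """Dijkstra over the leg graph; returns (total fuel, waypoint list)."""
    graph = {}
    for (a, b), cost in fuel_burn.items():
        graph.setdefault(a, []).append((b, cost))
    queue = [(0.0, start, [start])]
    best = {}
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == goal:
            return cost, path
        if best.get(node, float("inf")) <= cost:
            continue
        best[node] = cost
        for nxt, leg in graph.get(node, []):
            heapq.heappush(queue, (cost + leg, nxt, path + [nxt]))
    return None

print(cheapest_route("Rotterdam", "Singapore"))
# (770.0, ['Rotterdam', 'Channel', 'Gibraltar', 'Singapore'])
```

Real weather-routing systems optimise over time-varying conditions and multiple objectives (fuel, schedule, safety), but the core idea of searching over costed alternatives is the same.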
The sector’s long-standing staffing and recruitment issues could also be addressed through autonomous systems. Smaller crews on vessels mean fewer personnel are at risk, but ensuring the safety of autonomous capabilities remains a challenge. Indeed, the Global Maritime Trends report by Lloyd’s Register and Lloyd’s Register Foundation emphasises that even when automation is integrated on board, crews will still be required for safety reasons.
The report predicts that while automation technology will initially slow the growth in the number of seafarers needed, global collaboration will ensure that trade volumes increase sufficiently to prevent job losses. While the technology continues to advance, stakeholders should remember that autonomy was ultimately developed to make ships safer for employees. With more crew available, on-board personnel can focus on maintaining ships as safely as possible.
Charting safe paths forward
Debates around artificial intelligence (AI) have lately focused on its effects on individuals, such as data breaches and bias, but physical safety and safety assurance in AI-enabled autonomous systems should also be explored. If vessels with autonomous functions still carry crew or passengers, safety and environmental protection objectives will remain largely unchanged. Yet when AI – particularly machine learning (ML), which uses data and algorithms to mimic human learning – provides autonomous functions, assurance methods must adapt. Regulations and standards around ML are still lacking, but these gaps are being bridged.
The Centre for Assuring Autonomy (CfAA), a partnership between Lloyd’s Register Foundation and the University of York, and its predecessor, the Assuring Autonomy International Programme (AAIP), have pioneered work in AI, ML, and autonomy assurance. The CfAA has developed systematic approaches, such as SACE for whole systems and AMLAS for ML components, for use in the maritime sector. These tools help safety engineers assess and demonstrate the safety of AI or ML components and systems, linking this information into a coherent system safety case.
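As a rough illustration of what linking assurance information into a coherent safety case can mean in practice, the sketch below represents a fragment of a goal-structured argument as a simple data structure, with a top-level claim decomposed into sub-claims backed by evidence. The Claim class, claims, and evidence names are invented for illustration; they are not drawn from SACE or AMLAS.

```python
# Toy representation of a goal-structured safety argument: each claim is
# decomposed into sub-claims until it can be supported directly by
# evidence. The claims and evidence below are invented for illustration.
from dataclasses import dataclass, field

@dataclass
class Claim:
    statement: str
    evidence: list[str] = field(default_factory=list)   # supporting artefacts
    subclaims: list["Claim"] = field(default_factory=list)

    def supported(self) -> bool:
        """A claim holds if it has direct evidence or all sub-claims hold."""
        if self.evidence:
            return True
        return bool(self.subclaims) and all(c.supported() for c in self.subclaims)

top = Claim(
    "Autonomous collision avoidance is acceptably safe in the defined operating domain",
    subclaims=[
        Claim("The ML object detector meets its safety requirements",
              evidence=["test-campaign-report", "data-coverage-analysis"]),
        Claim("System-level hazards are mitigated",
              evidence=["hazard-log", "FMEA-results"]),
    ],
)
print(top.supported())  # True: every leaf claim has supporting evidence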
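```

In a real safety case, the argument would follow an established notation such as Goal Structuring Notation, and "supported" would rest on review and independent assessment rather than a boolean check; the sketch only shows how ML-component evidence can slot into a system-level argument.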
Navigating diverse regulations
Autonomous system regulations vary across sectors, and bodies such as the International Organization for Standardization are developing numerous standards covering AI verification and validation in autonomous vehicles. In maritime, the International Maritime Organization (IMO) has long been working on regulations for autonomous ships, but its large membership makes progress slow.
Individual nations are also creating their own rules to expedite the introduction of maritime autonomy within their waters. While regulations are typically led by governments and international bodies, organisations such as Lloyd’s Register Group in the UK and Det Norske Veritas in Norway have issued guidance on software and autonomous function assurance.
Ensuring ethical deployment
Debates on the responsible and ethical deployment of AI and autonomous systems in maritime tend to revolve around potential harms, including loss of life or environmental damage. For instance, if a vessel switches from high-sulphur fuels to cleaner alternatives too late upon entering national waters, the pollution this causes could result in fines for the shipowner.
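To make the fuel-switching example concrete, the sketch below checks whether a vessel must begin its changeover now in order to be compliant when it crosses into an emission control area. The function, figures, and the simple time-to-boundary model are illustrative assumptions, not regulatory values.

```python
# Toy compliance check for fuel switching: the changeover must finish
# before the vessel crosses into the emission control area (ECA).
# All numbers below are illustrative assumptions, not regulatory values.

def must_switch_now(distance_to_eca_nm: float,
                    speed_knots: float,
                    changeover_hours: float,
                    margin_hours: float = 1.0) -> bool:
    """True if the fuel changeover should start immediately.

    The changeover (flushing high-sulphur fuel from the system) takes
    `changeover_hours`; `margin_hours` adds a safety buffer.
    """
    hours_to_boundary = distance_to_eca_nm / speed_knots
    return hours_to_boundary <= changeover_hours + margin_hours

# 60 nm from the ECA at 15 knots = 4 h to the boundary; a 3 h changeover
# plus a 1 h margin means the switch must begin now.
print(must_switch_now(distance_to_eca_nm=60, speed_knots=15, changeover_hours=3.0))  # True
```

In practice the decision would account for route geometry, the regulatory definition of the control area, and engine-manufacturer changeover procedures; the point is only that the timing of the switch is a computable, checkable decision an autonomous function could support.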
However, the scope of these debates should be broadened further to consider the entire lifecycle of maritime infrastructure, including robotics and cognitive systems. Workers involved in AI development, including those who label training images, often face poor working conditions that are harmful to their health. Key questions must also be answered on managing incidents without risking rescue crews, on defining robotics and remote operations so that remote operators are not blamed unfairly, and on ensuring the safe maintenance of autonomous vessels.
These issues should be addressed in the design and development of autonomous systems if operational risks are to be minimised. As new ships and technologies emerge, questions around responsible and ethical innovation will be under constant review. To that end, the CfAA is collaborating with industry and regulators to provide impartial advice to all stakeholders.