AVS #7: Legal proof the AV is safe enough

For a senior manager of an ASDE (Authorised Self-Driving Entity), failure to demonstrate leadership and a structural commitment to AV safety, classified as connivance, will soon be treated as a criminal offence, in accordance with the Law Commission Report on Self-Driving (chapter 11.6, p. 210).

This is serious: only a documented Due Diligence Defence, prepared in advance, guarantees that we can focus on excellence. But in articles AVS #3 and AVS #4 we outlined how mathematically challenging it is to attach confidence to the required probability estimates.

In this seemingly difficult situation, how do you provide the ultimate proof that your self-driving car is indeed safe enough for public roads?

In short: Fail-safe System Architecture.

A system, be it a self-driving car, an aircraft or a plant, that fails more often but fails gracefully is much safer than a seemingly unsinkable super-system. Graceful failure lets us understand the failure modes and manage challenging epistemic risks, and, crucially, it prevents the user or operator from becoming complacent.
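To make that concrete, here is a back-of-the-envelope comparison in Python. All the failure rates and severity figures below are illustrative assumptions, not measured data; the point is only how the expected-harm arithmetic plays out.

```python
# Back-of-the-envelope comparison: frequent graceful failures vs. rare
# catastrophic ones. All numbers below are illustrative assumptions.

# System A: fails often, but every failure is a controlled, low-severity
# event (e.g. a safe stop on the hard shoulder).
failures_per_year_a = 50.0
severity_a = 0.1          # arbitrary harm units per failure

# System B: an "unsinkable" design that almost never fails, but when it
# does, the failure is uncontrolled and severe.
failures_per_year_b = 0.5
severity_b = 100.0

expected_harm_a = failures_per_year_a * severity_a   # 5.0
expected_harm_b = failures_per_year_b * severity_b   # 50.0

print(f"Graceful system, expected harm/year:     {expected_harm_a}")
print(f"'Unsinkable' system, expected harm/year: {expected_harm_b}")
# The graceful system also yields 100x more failure observations per
# year, so its failure modes are understood far sooner.
```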

The regulations do not call for raw probabilities, but for demonstrable understanding of how those probabilities balance across the system. While each and every subsystem can boast stunning safety metrics, the probability of failure grows as the subsystems are integrated into the full architecture: the smallest failure of any in-series component induces a system-critical failure.
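A minimal sketch of that arithmetic, assuming a chain of in-series components with illustrative (not real) per-mission failure probabilities:

```python
# Why integration erodes subsystem-level safety metrics: for components
# in series, the system fails if ANY one component fails.
from math import prod

# Each component looks excellent in isolation (99.9% reliable).
component_failure_probs = [1e-3] * 20   # 20 components in series

p_system_fails = 1.0 - prod(1.0 - p for p in component_failure_probs)
print(f"P(system failure) = {p_system_fails:.4f}")   # ~0.0198

# Twenty components, each 99.9% reliable, already give a system that
# fails roughly 1 mission in 50.
```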

With the right architecture, it becomes much easier: a hierarchical structure, visualised in Fig. 1, that gets the priorities right, safety first, with high-level optimisation at the top. The high-level controls can fail. They can fail often! But they fail gracefully: delivering safety insights, driving everyday safety improvements, and building a healthy attitude towards operational safety.
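Below is a minimal Python sketch of what such a hierarchy can look like, assuming a two-layer split between a complex planner that is allowed to fail and a simple safety arbiter that always has the final word. All function and field names here are hypothetical, not a real AV API.

```python
def log_safety_event(event):
    """Record the failure: it feeds the everyday safety-improvement loop."""
    print(f"safety event: {event!r}")

def high_level_planner(observation):
    """Complex optimisation layer: allowed to raise, time out, or return junk."""
    ...  # stands in for the real planner; returns None in this sketch

def minimal_risk_manoeuvre():
    """Simple, heavily verified fallback, e.g. a controlled stop in lane."""
    return {"action": "controlled_stop"}

def is_plausible(command):
    """Cheap sanity checks the safety layer can actually verify."""
    return (command is not None
            and command.get("action") in {"follow_lane", "controlled_stop"})

def arbiter(observation):
    """Safety first: any high-level failure degrades gracefully."""
    try:
        command = high_level_planner(observation)
    except Exception as exc:
        log_safety_event(exc)
        return minimal_risk_manoeuvre()
    if not is_plausible(command):
        log_safety_event("implausible command")
        return minimal_risk_manoeuvre()
    return command

print(arbiter(observation={}))  # stub planner returns None -> graceful fallback
```

Note that the fallback path is deliberately simple: the less the safety layer does, the more thoroughly it can be verified.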

Fig. 1: Sensor specialisation & diversification as a driver for fail-safe architecture. Narrow-band sensors can be tuned to detect specific objects, such as vulnerable road users, drivable surface (tarmac), etc.

Fail often? How is that a good thing? In AVS #5 we spoke about the challenges of not knowing what can go wrong: the epistemic risk.
Self-driving cars will be a novelty in traffic. Fig. 2 visualises a simple example of an AI failing. We can apply our human imagination to anticipate such failures, but we will always be surprised by the way the system actually fails.
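One practical response, sketched below under illustrative assumptions, is not to try to enumerate the failure modes at all, but to monitor invariants that must hold however the AI fails: physical limits and frame-to-frame consistency. The thresholds and track fields here are hypothetical.

```python
# Runtime envelope monitor: instead of predicting HOW the perception
# stack fails, check invariants that must hold no matter what.
# Thresholds are illustrative assumptions, not validated figures.

MAX_SPEED_MPS = 70.0   # nothing on a public road moves faster than this
MAX_JUMP_M = 5.0       # tracked objects cannot teleport between frames

def violates_envelope(track, prev_track):
    """True if a perception output breaks a domain invariant."""
    if abs(track["speed"]) > MAX_SPEED_MPS:
        return True
    if prev_track is not None:
        dx = track["x"] - prev_track["x"]
        dy = track["y"] - prev_track["y"]
        if (dx * dx + dy * dy) ** 0.5 > MAX_JUMP_M:
            return True
    return False

# Usage: a track whose position jumps 30 m in one frame is rejected,
# whatever exotic failure inside the network produced it.
prev = {"x": 0.0, "y": 0.0, "speed": 10.0}
curr = {"x": 30.0, "y": 0.0, "speed": 10.0}
print(violates_envelope(curr, prev))   # True -> trigger graceful fallback
```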

Fig. 2: What can go wrong? The failure modes of safety-critical AI systems are unforeseeable. (Source: openai.com)

That’s why, with the correct architecture, we can afford the space to fail gracefully and learn from it, while protecting the public and showing, on the balance of probabilities, that we did, do, and will continue to demonstrate our commitment to public safety.

Written by: Dr Marcin Stryszowski – Head of AV Safety

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion
