# AVS #4: Measuring Safety – the ‘99% reliable’ fallacy

“99% reliable” sounds good. An autonomous vehicle advertised as ‘99% reliable’ conveys trustworthiness while acknowledging that, naturally, it cannot be perfect.

But this is not enough. It has to be 99.9999% reliable, or more! Why?

“An autonomous car must be as safe as a human driver” – this statement is a key AV performance criterion. Every government publishes road safety statistics. In the UK, the Department for Transport publishes detailed reports yearly. We easily find that in 2019 (the last full year before the lockdowns), 1752 people lost their lives on British roads. Is this a lot? Here is some background data: there are around 35 million cars registered in the UK, driving a total of 340 billion miles a year! For reference, an average driver will do 750 thousand miles in their entire life.

So, how safe is an average human driver? Crunching the numbers above gives a surprisingly precise answer: a single day of driving is 99.9999897% reliable.
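The calculation behind that figure can be sketched in a few lines. This is a back-of-envelope version using the article's inputs; the exact result depends on the exposure model assumed (cars versus licensed drivers, miles per day), so the final digits differ slightly from the figure quoted above, but the order of magnitude – a daily risk around 1e-7 – comes out the same:

```python
# Back-of-envelope daily reliability of an average UK driver,
# using the figures quoted in this article.
deaths_per_year = 1752        # UK road deaths, 2019
cars = 35e6                   # registered cars in the UK
miles_per_year = 340e9        # total miles driven per year

fatality_rate_per_mile = deaths_per_year / miles_per_year  # ~5.2e-9
miles_per_car_per_day = miles_per_year / cars / 365        # ~27 miles

daily_risk = fatality_rate_per_mile * miles_per_car_per_day  # order 1e-7
daily_reliability = 1 - daily_risk

print(f"fatality rate per mile : {fatality_rate_per_mile:.2e}")
print(f"risk per day of driving: {daily_risk:.2e}")
print(f"daily reliability      : {daily_reliability:.7%}")
```

The point is not the last decimal place but the shape of the number: human-driver reliability per day sits around one minus one-ten-millionth, far beyond a naive “99%”.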

Counterintuitively, the main challenge in delivering safety is grasping the meaning behind the numbers. Promises of high reliability do not mean much without numbers, but the numbers alone do not mean much without background knowledge. In the above example, the reliability was written in traditional notation – very hard to read. A seasoned safety engineer would note it down as (1 – 1.03e-7) – much easier to parse. This peculiar situation highlights the sheer scale of work that needs to be done just to comprehend what ‘1 in a billion’ means in a practical sense.

There is a lot of work ahead to reach safe AVs. Much of it is not related to software engineering and simulation, but to raising the issue publicly through education, dissemination of knowledge and fostering a daily discussion about safety within engineering circles. Only then does autonomous vehicle safety begin to take the form it is meant to assume. Below are the steps that take our engineering expertise onto the desks of the AI engineers who can make it happen:

## Training

The example above illustrates the specialised nature of safety engineering. Just as a racing driver in a daily car will be faster than an average driver in a race car, we may provide the best tools, but only with the right background knowledge will they be used to their full potential.
Training in the statistics, reviews of safety practice in other industries, focus groups for safety modelling and rare-event scenario simulation, workshops on safety architecture, fail-safe design and its nemesis, common cause failures – all these concepts must be in place before the work can begin.

For instance, the ISO 26262 ASIL scheme suggests that for a component to receive the highest safety rating, the failure rate has to be less than 10⁻⁹. The question is: how do we prove that number? It takes well over 10 billion tests to be confident in the estimate (here is why). Given that an automobile is among the most complex machines on Earth, with hundreds of components, even more interactions between them and numerous common cause failures – if ASIL is to be at all meaningful as a safety estimator, a robust implementation guideline is required.

## Awareness

What makes AV safety distinctive is the way it interacts with the public. Aeroplanes fly far above, nuclear plants are located in remote places, and the public has no access to the pilots’ cabin or the control room – but autonomous cars are meant to interact with the public directly. Safe operation of AVs is a very distinct topic and deserves extensive public attention and debate. Engaging the public and raising awareness is key, as ultimately it is the public, not safety engineers, who will interact with these vehicles.

Figure 1: A control room for a power plant (Wikimedia Commons). Without it there is no electricity, yet very few people have ever seen one.
AVs are different: everyone will get to control one, but they are no less complex.

When automobiles were first introduced, the law required a pedestrian carrying a warning flag to walk ahead and warn the public. It sounds silly now, but everyone understands what to watch out for with a car nearby. The transition to AVs is no different.
The public needs to understand the technology’s limitations, what to watch out for, what is normal and what is risky. Thankfully, the modes of communication have progressed far beyond flags.

## Safety culture

Let us consider the following quotes:
“The most important indication of a good safety foundation in an organisation is the extent to which employees are actively involved in safety on a daily basis”.
And:
“A group is seldom aware of its shared basic assumptions, because they are not directly visible. Although basic assumptions within an organisation’s culture can take several years to develop, we usually don’t think about what basic assumptions we hold”.
These come from the International Atomic Energy Agency’s guidance documentation on the subject, where the objectives, responsibilities and management techniques are explained. The leaders of nuclear installations are encouraged, but not obliged, to adhere to and enforce them. Yet everyone does, simply because safety is naturally in the hearts and minds of each and every engineer.

Autonomous vehicles, on the other hand, are a field where safety has so far not been the priority – the focus was still on the enabling technologies and proofs of concept. Now is the time to remind everyone: safety matters, and it requires only 5 minutes of your attention every day.

## Software deployment

The final piece of the puzzle. With one of the most comprehensive multi-physics simulation tools available, used to simulate some of the most complex systems ever built, we cannot simply send over the files and a licence.
Software deployment involves careful engineering and leadership, stakeholder management, handling of sensitive intellectual property and the manipulation of vast, unstructured datasets within a foreign organisation. It is an art in itself. It is why Claytex is an Engineering Consultancy first, and a Software Developer second.

## Summary

Safety is something that cannot be seen or measured until it is gone and an incident happens.
The road from 99% reliable to human-grade 99.9999897% reliable is long, longer than it seems. We are getting there. But there is no progress without integrity, a striving for excellence, an understanding of rare-event statistics and a deep-rooted commitment to safety manifesting itself in everyday actions.

Written by: Dr Marcin Stryszowski – Lead Engineer

Please get in touch if you have any questions or have got a topic in mind that you would like us to write about. You can submit your questions / topics via: Tech Blog Questions / Topic Suggestion