This is an article discussing criteria for acceptable risk, with descriptions of the ALARP, GAMAB and MEM principles.


When is risk acceptable?




Odd Nordland, SINTEF Telecom and Informatics, NO-7465 Trondheim, Norway

Abstract: Some of the currently used risk tolerance principles are discussed and an equation for determining risk acceptability is derived. The equation contains measurable and empirical factors. For the empirical factors, some possible ways of obtaining at least relative values are proposed.

Keywords: risk acceptance

Introduction

Modern technological systems are usually introduced because they provide some benefit to society. But they also pose risks. These risks are usually accepted as the price we have to pay for the benefit the technology offers us, provided the risk is less than the benefit. If the risk is too high, the technology will be rejected.

Measures can be taken to reduce the risk that a technology poses. This will usually decrease the positive effects somewhat, for example by making the whole technology more expensive. But if the risk can be reduced far enough, people may be willing to accept it as the price they have to pay for the benefit the technology provides.

This can be illustrated with a very simple example. We all know that motor traffic can and does kill people. It also pollutes the environment quite considerably. But we all drive our cars around, because we think the chances of getting killed in a road accident, or of killing somebody ourselves, are so small that we're willing to take the risk. What we need is some kind of mechanism for performing the trade-off between the benefit a technology provides and the risk it poses. Ultimately, this means trying to find out when a risk is acceptable, and when it isn't.

The idea of risk acceptance (or risk tolerance as it is often called) is also a fundamental factor in the concept of safety integrity levels as used in IEC 61508 (ref. 1) and other derived standards. This will be explained in the next section. There will then be a brief discussion of risk tolerance principles, and an equation for expressing an acceptable risk level will be derived. Finally, the factors in the equation will be discussed and possible ways of obtaining numerical values for the empirical factors will be proposed.

Safety Integrity Levels

The concept of safety integrity levels is introduced in the international standard IEC 61508 (ref. 1). This concept applies to safety functions and involves two basic elements: the "size" of the risk that the safety function is to mitigate, and the degree of acceptability of that risk. Let us look at an example.

A motor car has several safety functions implemented in it. One of them is the braking function, which is intended to reduce the risk of losing control over the vehicle and crashing into something, or simply flying off the road. This function is implemented with hydraulic brakes, a mechanical hand brake and - perhaps less conspicuously - by the gearbox!

Now if the braking system fails, the risk of getting killed in an accident is quite large. So we want the braking system to continue to function as long as possible. We have dual circuit hydraulic brakes, so that if one circuit fails we still have the other one. If both circuits fail, we have the hand brake, and if that fails we can force the car into low gear or even reverse, which will bring it to a grinding halt. So the functional integrity of the braking system is very high.

Another safety function is the "containment" function. It is intended to reduce the risk of falling out of the vehicle while it is moving and is implemented by having doors. The safety belts can also help keep you in, provided you use them.

For the containment function we're less demanding. We don't usually get flung against the doors when a car drives round a bend, so we are quite willing to accept the risk of driving without doors. In fact, some cars and all motorbikes are sold without them! Because we think the risk is so small, we're willing to accept it, and because we're willing to accept the risk, we accept a much lower degree of functional integrity for the containment function.

So we see that the integrity level we demand for a particular safety function depends, amongst other things, on how willing we are to tolerate failure of that function.

Risk tolerance principles

ALARP: The ALARP principle (the residual risk shall be As Low As Reasonably Practicable) is the only one described in IEC 61508 (ref. 1). It appears in an informative part of the standard (Annex B to Part 5), so it should not be construed as being the only principle that is conformant with the standard. It assumes that one "knows" a level of risk that is acceptable to the general public and requires that the risk posed by any new system shall at least be below that level.

How far below is where the term "reasonably practicable" comes in: theoretically, an infinite amount of effort could reduce the risk to an infinitely low level, but an infinite amount of effort will be infinitely expensive to implement. So we have to identify a level of risk that is so low that the public will accept that "it’s not worth the cost" to reduce it further.

Associating risk reduction with cost tends to be misunderstood. Obviously, if achieving a safe system is prohibitively expensive, the system won't be built. But cost is not just a question of money. We can, for example, reduce the risk of human error in an industrial control system by introducing an automatic, mechanical control system. But this entails the risk that the automatic system might not be able to cope with an unexpected situation that a human operator would recognize as having to be handled in a special way. So the cost of reducing the risk of human error is an increase in the risk of not being able to handle exceptional situations. If the rise in the latter risk is higher than the reduction of the former risk, the cost will be deemed too high.

The term "practicable" is also important here. Science and technology are continually advancing, and what was state of the art yesterday may be surpassed by a new technology tomorrow. It is neither reasonable nor practicable to demand that only the very latest safety technologies shall be implemented. For a system that was developed and built several years ago, continuously modifying it in order to introduce the newest safety technology could indeed be more dangerous than retaining the original one.

GAMAB: The French GAMAB principle ("Globalement Au Moins Aussi Bon": globally at least as good) is defined in reference 2 and briefly explained in references 3 and 4. Like the ALARP principle, it too assumes that there is already an "acceptable" solution and requires that any new solution shall in total be at least equally good.

The expression "in total" is important here, because it gives room for trade-offs: an individual aspect of the safety system may indeed be worsened if that worsening is overcompensated by an improvement elsewhere.

In our example above, the automatic control system would be acceptable if the increased risk of losing flexibility is compensated by the decreased risk of human blunders. So the GAMAB principle is effectively equivalent to ALARP!

MEM: The German MEM (Minimum Endogenous Mortality) principle is described in reference 5. Brief descriptions can also be found in references 3 and 4. It is based on the fact that there are various age-dependent death rates in society and that a portion of each death rate is caused by technological systems. The requirement is then that a new or modified system shall not "significantly" increase the technologically caused death rate for any age group.

An increase of more than 5% is considered to be "significant"; an increase below that threshold lies roughly within the limits of normal statistical variation.

Ultimately, this means that the age group with the lowest technologically caused death rate, the group of 5 to 15 year olds, is the reference level. For this group, the technologically caused mortality rate (in Germany) is currently given as 2×10⁻⁴ fatalities per person and year, so that the limit for augmenting this rate becomes 10⁻⁵ fatalities per person and year.
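As a quick check on these figures, here is a minimal arithmetic sketch in Python (the variable names are mine; the values are the ones quoted above):

    # MEM limit arithmetic using the figures quoted above.
    reference_rate = 2e-4  # technologically caused fatalities per person and year (5 to 15 year olds)
    significance = 0.05    # increases of more than 5% count as "significant"

    limit = significance * reference_rate
    print(limit)           # 1e-05 fatalities per person and year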

Differential Risk Aversion

When considering risk acceptance, we must also consider emotional, irrational factors. People are usually quite willing to accept a risk if they think that they can directly influence how strongly it affects them. They are willing to accept a horrendous death toll on the roads, because they directly control the cars they drive. But for public transport, they are much more demanding: if they’re going to put their lives into the hands of somebody else, then every precaution must be taken to protect them!

People also tend to view accidents individually. If a single accident causes a catastrophe, it will be taken very seriously. They are more willing to tolerate a horde of small accidents that each has apparently minor effects, even if the total effect is much worse than that of the single, big accident.

This effect is taken into consideration in most countries by introducing "Differential Risk Aversion" (DRA). Basically, it is assumed that accidents up to a certain severity can be regarded as being equally serious, severity being interpreted in terms of the death toll. Above a certain threshold, people will react increasingly negatively, so their willingness to tolerate the associated risk will decrease accordingly.

In Britain and Germany, for example, a linear relationship is applied to the DRA, i.e. the decrease in risk acceptability is directly proportional to the increase in the potential death toll. Not all countries express this explicitly, and some are even more demanding: the Dutch, for example, use a DRA that is proportional to the square of the potential death toll.
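Purely as an illustration of the difference between these two attitudes, the sketch below scales the "effective" severity of an accident with a linear and a quadratic aversion weighting; the threshold and the function itself are assumptions of mine, not taken from any of the national rules mentioned above:

    # Hypothetical differential risk aversion weighting (illustrative only).
    def dra_weight(fatalities, exponent=1, threshold=10):
        """Accidents up to the threshold count as equally serious; above it,
        aversion grows with the death toll raised to the given exponent
        (1 = linear, 2 = quadratic)."""
        if fatalities <= threshold:
            return 1.0
        return (fatalities / threshold) ** exponent

    for n in (5, 10, 50, 100):
        print(n, dra_weight(n, exponent=1), dra_weight(n, exponent=2))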

Risk acceptability

From the above we can see that there is a risk level that is so high that people will categorically refuse to accept it. Such risks are intolerable. But there is also a level that is so low that people regard the risk as being negligible. The region between these two levels is where the tolerable risks lie, those non-negligible risks that people are willing to accept.

When considering risk acceptability, we should first look at why we would NOT want to use a particular technological system.

From the point of view of the "man in the street", the first thing that counts is the accident rate in terms of accidents per hour or day or any other unit of real time. He's not too concerned about down times, planned shutdowns or the like, so he won't really differentiate between operational and non-operational time.

Operational time will be of significance when considering how serious the accidents are. If a system has a lot of minor incidents with few or no casualties when it is operational, it will be considered safer than one with few incidents, but a heavy casualty toll when something does happen. So the number of casualties per operational hour, not the number of incidents, will influence acceptability.

We must also consider Differential Risk Aversion. It was pointed out earlier that this varies between countries, and this also applies to social groups within a country.

The differences in attitude are not only present between countries; there are also differences between the regions of a single country, and indeed there will even be differences within a big city! In a suburb where at least three people get gunned down every day, a system that only kills two will be an improvement and quite acceptable. On the other side of town, where shootings are rare, the same system would be completely unacceptable!

And finally, the DRA will be influenced by public opinion, which is very dynamic. If there have been accidents in the recent past, people will be more aware of safety matters, even if completely different technologies were involved. A society that is conscious of risks will pose higher requirements than one that is not. A look at the public attitude to traffic safety in underdeveloped countries shows that there is virtually no awareness of the risks involved!

The public attitude will change over time, reflecting changes in the standard of education, the exposure to risks, the standard of living etc. So the differential risk aversion will be some function of place, society and time.

Now, having considered why we do not want a technological system, we must also look at why we do want it! If a technology is considered to be of vital importance, people will be more willing to accept the risks it entails. So the benefit provided by the system will influence people's willingness to accept the risk.

Thus we see that the tolerable risk level for a technological system must be some function of:

the distribution of accidents over real time, dA/dt_r;
the average number of casualties per operational time, dC/dt_o;
a differential risk aversion factor f_DRA; and
a factor b describing the benefit provided by the system.
The willingness to accept the risk will decrease when any of the above factors except b increases. It will increase with b, so, assuming equal weighting of each of the factors, we can define the tolerability T as

T = b / ( (dA/dt_r) · (dC/dt_o) · f_DRA )     (1)
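As a minimal sketch, equation (1) can be transcribed directly into code; the parameter names mirror the symbols above, and the function name is mine:

    # Tolerability T according to equation (1).
    def tolerability(b, dA_dt_r, dC_dt_o, f_dra):
        """T = b / (dA/dt_r * dC/dt_o * f_DRA)."""
        return b / (dA_dt_r * dC_dt_o * f_dra)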

The term "casualties" is usually taken to mean a case where a human being requires medical attention. This is conformant with the concept of risk as used in reference 6. However, the term can be extended to include environmental and financial damages without influencing the underlying relationship.

A high value of T means that people will in general be willing to tolerate the risk; it corresponds to a low risk level. A low value of T means something has to be done to reduce the risk; it corresponds to a high risk level.

Since we usually talk about risk reduction rather than tolerability increase, it is more practical to refer to high or low risk levels than to low or high tolerability levels. So we simply invert the expression to get an expression for the acceptable risk level r:

r = ( (dA/dt_r) · (dC/dt_o) · f_DRA ) / b     (2)

Now dA/dt_r and dC/dt_o are measurable quantities, whereas f_DRA and b are more empirical. So we need some way of quantifying the latter two if equation 2 is to be of any use.
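Again purely as a sketch, equation 2 is simply the inverse of equation 1; the numbers below are invented placeholders, since f_DRA and b still have to be determined as described in the following sections:

    # Acceptable risk level r according to equation (2): the inverse of T.
    def acceptable_risk(dA_dt_r, dC_dt_o, f_dra, b):
        return (dA_dt_r * dC_dt_o * f_dra) / b

    # Entirely made-up example values:
    print(acceptable_risk(dA_dt_r=1e-3, dC_dt_o=0.5, f_dra=1.0, b=2.0))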

The Differential Risk Aversion Factor f_DRA

It was pointed out earlier that the DRA depends on place, society and time. Within certain - perhaps quite narrow - boundaries, the variation will be negligible, so we simply select a suitably small geopolitical entity and set its DRA to 1. We can then determine relative values for other places or societies by comparing how much stronger or weaker their DRA is, i.e. we express relative DRAs as a percentage of the reference DRA.

In order to compare DRAs, we still need some measurable quantities. One possibility is the percentage of the affected population that is or is not willing to accept a technological system, given that accidents with that system will cause a certain death toll.

By determining the relationship between these percentages and the stipulated death tolls, we get a curve describing the local DRA function. We can then use the average relationship between two such curves to determine the relative DRA factor, given that one of the curves is our reference value.
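One possible way of turning that comparison into a number is sketched below. The acceptance figures are invented, and treating aversion as inversely proportional to the acceptance percentage is an assumption of mine rather than something prescribed above:

    # Relative DRA factor as the average relationship between two acceptance curves.
    death_tolls = [1, 10, 50, 100]                    # stipulated death tolls (common x-axis)
    reference_acceptance = [0.90, 0.60, 0.30, 0.10]   # reference entity (f_DRA = 1)
    local_acceptance = [0.85, 0.45, 0.15, 0.05]       # entity being compared

    # Assumption: lower acceptance means proportionally stronger aversion.
    ratios = [ref / loc for ref, loc in zip(reference_acceptance, local_acceptance)]
    relative_f_dra = sum(ratios) / len(ratios)
    print(relative_f_dra)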

It should be noted that the DRA is time dependent, so determination of local DRA functions will have to be repeated at certain intervals. How long such intervals should be depends on how strongly attitudes in a society change. Certainly once per generation is too seldom, and once a year is probably too often, so social scientists will have to come up with sensible figures!

Measuring benefit

In reference 6, Wolfgang Ehrenberger has proposed a "benefit vector" to determine the pros and cons of a system. His model is focussed on medical systems, so factors like duration of sickness are present. However, the idea can be generalized.

Exactly which factors should be included needs clarification. Certainly things like increased safety, more efficient services, improved living conditions etc. will be part of a benefit vector. Some of the coordinates will depend on the technology, so we will end up with technology dependent benefit vectors.

The benefit of a technology is also dependent on "local factors", just as the DRA is. The benefit of a water supply is much greater in the desert than it is in arctic regions. So as with the DRA, we must select a suitably small geopolitical entity as our point of reference and determine relative benefit factors by comparison.

This leads us to the same problem we had with the DRA function. We have to quantify the elements of the benefit vector suitably, so that we can compare them. Here too it will be a task for social scientists to determine which factors should be included in the measurements and how often the measured values must be updated.
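Purely for illustration, the sketch below builds two such benefit vectors and takes the relative benefit factor b as a ratio against the reference entity; the coordinates, values and equal weighting are all assumptions of mine:

    # Hypothetical technology-dependent benefit vectors (illustrative values only).
    reference_benefits = {"increased safety": 1.0,
                          "more efficient services": 1.0,
                          "improved living conditions": 1.0}
    local_benefits = {"increased safety": 1.2,
                      "more efficient services": 0.9,
                      "improved living conditions": 1.5}

    # Relative benefit factor b, with all coordinates weighted equally.
    b = sum(local_benefits.values()) / sum(reference_benefits.values())
    print(b)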

Conclusion

The risk tolerance principles described above were developed in order to determine when a risk has been sufficiently reduced. The ALARP and GAMAB principles both assume that one already knows an acceptable risk level. They don't enable you to determine one from scratch. The usual philosophy has been to assume that what we have today can be regarded as acceptable, so that's where we start.

At least in Europe, the discussion around nuclear power has shown that this is not the best way to go. Numerically, the residual risk of a nuclear power station may be much lower than the risk of motor traffic, but people are still more skeptical towards nuclear power. So if the starting point for ALARP or GAMAB is controversial, then the results will be equally controversial.

The MEM principle has the same basic shortcoming, although it attempts to be more objective. The reference mortality rate is certainly a rational, measurable quantity. Whether or not it's acceptable is another question. 2×10⁻⁴ fatalities per person and year sounds quite low, but converting that figure to absolute numbers makes it look different:

The German population is about 80 million. Average life expectancy is about 75 years, so with an even distribution of ages throughout the population the group of 5 to 15 year olds would correspond to 10/75, or about 13%, of the population. Allowing for a higher proportion of older people in the population, we get a conservative guess of 10%, which gives us 8 million 5 to 15 year olds.

The de facto 2×10⁻⁴ fatalities per person and year then means 1600 children being killed annually by technological systems. And the acceptable 5% increase corresponds to 80 more children being killed per year (that's about one every 4½ days). Now these figures may be fact, but that doesn't make them acceptable.
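For completeness, the arithmetic behind these figures, using the rounded values from the text:

    # Arithmetic behind the figures quoted above.
    population = 80_000_000        # approximate German population
    share_5_to_15 = 0.10           # conservative share of 5 to 15 year olds
    rate = 2e-4                    # technologically caused fatalities per person and year

    children = population * share_5_to_15       # 8,000,000
    annual_deaths = children * rate             # 1600
    tolerated_increase = 0.05 * annual_deaths   # 80
    print(annual_deaths, tolerated_increase, 365 / tolerated_increase)  # about 4.6 days apart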

With equation 2 an expression has been derived that describes an acceptable risk level without relying on assumptions that the current status quo is acceptable.

The equation can certainly be refined. It is not certain that the various factors are equally weighted, as was assumed, nor are they necessarily independent of each other. But as a starting point, equation 2 certainly allows us to determine a technology-specific risk level that can be broadly accepted. And from there we can return to ALARP and co.

References

  1. International Electrotechnical Commission. Functional safety of electrical / electronic / programmable electronic safety-related systems. IEC 61508 First edition. Geneva: 1998
  2. Ministère de l'Équipement, des Transports et du Tourisme. Projet de loi relatif à la sécurité des transports publics guidés. PD/CM (STPG1). Paris: 1994
  3. M. El Koursi et al. Generalised Assessment Method. ESPRIT P 9032 CASCADE Part 2. INRETS Villeneuve (France): 1997
  4. CENELEC (European Committee for Electrotechnical Standardization). Railway applications - The specification and demonstration of Reliability, Availability, Maintainability and Safety (RAMS). EN 50126. Brussels: 1998
  5. Albert Kuhlmann. Einführung in die Sicherheitswissenschaft. Friedr. Vieweg & Sohn Verlag TÜV Rheinland.
  6. Wolfgang Ehrenberger. Proposal for a definition of the term "risk" for medical devices. EWICS Working Paper 8043. EWICS: 1999

Biography

Odd Nordland, SINTEF Telecom and Informatics, Systems Engineering and Telematics, NO-7465 Trondheim, Norway, telephone: +47 - 73 59 29 58, fax: +47 - 73 59 29 77, E-mail: odd.nordland@informatics.sintef.no, URL: ~nordland

Odd Nordland studied nuclear physics and computing science at the University of Hamburg. He has worked as assessor for process control computers in nuclear power stations and as configuration manager for an international space program. Since 1997 he has been working for SINTEF in Trondheim, primarily concerned with safety assessments of railway signaling and interlocking systems throughout Scandinavia.

He is a member of the European Workshop on Industrial Computer Systems (EWICS), the Safety-Critical Systems Club, the Software Reliability & Metrics Club and the British Computer Society's Configuration Management Specialist Group (CMSG). He is also an external examiner for Programming methods and System development at the Norwegian Technical and Scientific University (NTNU) in Trondheim.
