Tuesday, August 28, 2007

FAIR is defended

Alex at http://www.riskanalys.is/ defends FAIR here and here.

In defence of FAIR, I think it should be possible to show that by making more fine-grained decisions and then combining them, you get fewer errors than by making a single monolithic decision. However, I cannot come up with a good model that shows this. Maybe it has already been done? Does anybody know?
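
As a very rough sketch of what such a model might look like, here is a small Monte Carlo experiment. The additive toy model and all the numbers are assumptions of mine, not anything taken from FAIR; the point is only to show how independent errors in component estimates can partially cancel:

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy model (my assumption, not part of FAIR): the quantity of interest
# is a SUM of n components, and both strategies estimate with the same
# *relative* accuracy.
n_components = 5
true_parts = rng.uniform(10.0, 100.0, n_components)
true_total = true_parts.sum()

rel_error = 0.30      # assumed relative error per individual estimate
n_trials = 100_000

# Strategy 1: one monolithic estimate of the total.
holistic = true_total * (1 + rel_error * rng.standard_normal(n_trials))

# Strategy 2: estimate each component independently, then combine.
noise = 1 + rel_error * rng.standard_normal((n_trials, n_components))
decomposed = (true_parts * noise).sum(axis=1)

print("monolithic RMSE:", np.sqrt(np.mean((holistic - true_total) ** 2)))
print("decomposed RMSE:", np.sqrt(np.mean((decomposed - true_total) ** 2)))
# Independent errors partially cancel, so the decomposed estimate's RMSE
# comes out smaller -- by roughly a factor of sqrt(n) when the parts
# are of equal size.
```

Of course, this assumes the per-estimate relative error is the same in both strategies, which is exactly the kind of assumption that would need to be argued for.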



5 comments:

Anonymous said...

Tomas,

In a Bayesian network, this is called creating a prior from a posterior. It's perfectly valid in the context of probability theory.

In fact, Jaynes is said to have cheekily remarked, "One man's posterior is another man's prior".
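
A minimal sketch of what that looks like with a conjugate Beta-Binomial model (toy numbers of my own choosing):

```python
# Sketch: the posterior from one batch of evidence becomes the prior for
# the next batch. Beta-Binomial conjugacy keeps the arithmetic exact.

alpha, beta = 1.0, 1.0        # flat Beta(1, 1) prior on an event probability

batches = [(3, 7), (1, 9)]    # (events, non-events) observed per batch

for events, non_events in batches:
    # Bayesian update: posterior is Beta(alpha + events, beta + non_events),
    # and that posterior is exactly the prior carried into the next batch.
    alpha += events
    beta += non_events
    print(f"posterior mean after batch: {alpha / (alpha + beta):.3f}")

# The final posterior is identical to what pooling all the data under the
# original prior would give -- "one man's posterior is another man's prior".
```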

Tomas said...

Yes, I have actually (partially) read Jaynes' book. And yes, I think Bayesian statistics in the sense of being an extension to logic is a valid approach.

But I think that Richard's problem is that he only sees "garbage in, garbage out". He assumes that the result of FAIR is no more valid than the input, and if he cannot trust the input, he cannot trust the output.

First of all, there is the problem of "guessing". We must convincingly argue that an expert makes "guesses" that are not arbitrary and more accurate than randomly drawing a value from a uniform distribution.

Second, we have to show that by breaking the risk estimation down into many steps, as you do in FAIR, we get a more accurate estimate than if we had just estimated a single value.

But this is just an intuitive thought. It must be shown mathematically somehow.

The estimates made in all steps can be seen as independently drawn samples, and the total error of the combined estimate is smaller than the error of each sample in isolation. This assumes that each estimate lies at a random error distance from the correct value.
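
Written out (my own notation, still only a sketch, and again assuming the quantity decomposes as a sum):

```latex
% Each component estimate carries an independent zero-mean error:
%   x_i = t_i + e_i,  E[e_i] = 0,  Var(e_i) = sigma_i^2.
\hat{T} = \sum_{i=1}^{n} x_i = T + \sum_{i=1}^{n} e_i,
\qquad
\operatorname{Var}\!\left(\hat{T} - T\right) = \sum_{i=1}^{n} \sigma_i^2 .
% With n roughly equal components, the RMS error grows like sqrt(n)*sigma
% while T itself grows like n, so the RELATIVE error of the combined
% estimate shrinks like 1/sqrt(n) compared to a single estimate made
% with the same per-estimate accuracy.
```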

So far these are just some speculative thoughts.

Anonymous said...

Tomas,

These are particularly good points. My thoughts are that he, like me when I first started using FAIR, was swimming in priors and couldn't accurately match them to how they might contribute to the factors in a FAIR analysis. I've been giving this some thought, but the only answer I can come up with at the moment is that "it comes with practice".

Second, yes, using posteriors as priors seems to be the issue being raised, but as you probably know, it is perfectly valid when your framework is logical.

The way to overcome that is to practice supplying your inputs as priors.

I really do appreciate your thoughts on the subject.

Anonymous said...

Hi Tomas,

You said: "First of all, there is the problem of 'guessing'. We must convincingly argue that an expert makes 'guesses' that are not arbitrary and more accurate than randomly drawing a value from a uniform distribution."

You should read the book that Alex recently recommended to me: How to Measure Anything. It explains, with proof, that experts can learn to "guess" more reliably. And don't think of the guess as a random draw from a uniform distribution: the experts create the distribution, or, if they are using a standard one, they can accurately estimate its relevant attributes, such as the standard deviation.
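
To make that concrete, here is a tiny calibration-check sketch. The data is hypothetical, not from the book; the idea is that a calibrated expert's 90% confidence intervals should contain the true value about 90% of the time:

```python
# Calibration check sketch (hypothetical data of my own invention).
# An expert gives 90% confidence intervals (low, high) for quantities
# whose true values are later revealed; being calibrated means roughly
# 90% of the intervals contain the truth.

answers = [          # (low, high, actual)
    (10, 50, 42),
    (100, 300, 280),
    (1, 5, 7),       # a miss: the interval was too narrow
    (0.2, 0.8, 0.5),
    (1000, 5000, 2500),
]

hits = sum(low <= actual <= high for low, high, actual in answers)
print(f"hit rate: {hits}/{len(answers)} = {hits / len(answers):.0%}")
# A hit rate well below 90% over many questions indicates overconfidence;
# the book's claim is that training narrows exactly that gap.
```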

Tomas said...

Hi Jon,

Yes, this book is on my list of "to read" books.

Sorry, I can now see what the problem is. What I meant to write was:

"First of all is the problem of "guessing". We must convincingly argue that an expert makes "guesses" that are (1) not arbitrary and that are (2) more accurate than randomly draw a value from a uniform distribution."