Notes of discussion
Generated from ASCE 1.5 on 19/12/02 at 19:07:08
Author: Luke Emmet
Version: 0.1a
Description: Discussion at Bieleschweig
Plan for workshop on comparing methods
Introduction by Jens
Possibly plan a follow-up workshop to compare methods against benchmarks, to find common ground. Jul-2003?
- look at system safety, across disciplines and sectors
- has to remain as a club: self-financing, eases registration
- linked from the RVS web page
The methods could be analysed against common criteria, to identify their pros and cons.
Examples         | Contact person   | Doc'n availability              | Priority | SOL (Fahlbruch) | STAMP (Leveson) | WBA (Ladkin) | KIN (Slovak) | ...
-----------------|------------------|---------------------------------|----------|-----------------|-----------------|--------------|--------------|----
*Ladbroke Grove  | Lemke            | available on web                | 1        | (x)             | (x)             | x            | (x)          |
Brühl (Germany)  | Lemke (IfEV)     | available                       | 2        | (x)             | (x)             | (x)          |              |
*Royal Majesty   | Emmet (Adelard)  | available on web                | 1        | (x?)            |                 | (x)          |              |
*Überlingen      | Miller (FHG)?    | available from standard sources | 1        | (x)             |                 | (x)          |              |
Y2K Berlin       | ?                | ?                               | 1        |                 |                 |              |              |
Tokaimura        | Fahlbruch (TU B) | ?                               | 1        |                 |                 |              |              |
Chinook          | Ladkin (RVS)?    | one report on web               | 2        |                 |                 |              |              |
*KLIKBCA         | Wiryana (RVS)    | yes, but in Indonesian          | 1        |                 |                 | x            |              |
Friendly Fire    | Ladkin (RVS)     | ? - needs books                 | 1        |                 | x               | (x?)         |              |
KIN: Karnal Instance
Need to use only the reference documentation
Need to have experts/caretakers for the methods/notations.
Proposed dates:
- initial documentation availability - End of Jan-2003
- consolidated analyses - End of May-2003
Discussion on criteria for evaluation
Criteria for evaluation
Chris Johnson (Glasgow Uni) - comparative example for workshop in 2001
- WBA
- Johnson's Temporal Logic
- STEP
Johnson & Michael Holloway (NASA Langley)
- evaluation of STAMP, but they couldn't get enough information on it
- not available yet - contact authors for details
List of Criteria
An initial list:
- expertise required - perhaps two points of view - JB:
  - the "average engineer", or even a manager
  - the "sophisticated user" and domain experts
- tool support - LOE
- scalability - can you apply a subset to small problems and the full monty to large problems? - JB
- graphical representation - should display the semantics clearly, be cognitively surveyable, and have physical medium support - Claire B
- modularity - related to organisational division of labour and domain expertise - PBL & IMW
- reproducibility - different people get similar results for the same tasks - FT
- plausibility checks - independent of tool - what is the "correctness" - LOE
- rigour ~ related to semantics and logic - PBL
- guidance on identifying additional causes - OL
- improvement factor over what could be done before - quality, expressiveness - PBL
- evolutionary compatibility - does the method support doing new things with the data you have collected? - PBL
- adaptability - can you make it do what you want, e.g. simplification of a WBG - PBL
- coverage - does it do all the things you want (sufficiency)? complementarity - are other techniques also required? - JTG
- support for different viewpoints - e.g. identifying design faults, identifying human error, etc. - Timm Grams
- documentation availability - fairly obvious - JB
Putting it all together - Ladkin
Priorities for these may be different for different industries/domains.
Proposal to have a handbook, with one page per evaluation criterion describing what is expected, in two parts:
- what is meant by the criterion
- how to judge methods against each other - with examples
Fahlbruch et al. may already have covered some of this
German and English versions to be made available