To build dependable information infrastructures for AmI, a rigorous modelling and design process must be followed, including verification and validation (V&V) activities aimed at fault removal. V&V activities address the analysis and testing of developed artefacts to ascertain that the resulting (intermediate and final) products constitute:
A consistent construction: when going from more abstract to more concrete models (refinement); when considering differing views at the same level of a system (projection); and when combining interacting pieces that were developed separately (composition);
A useful infrastructure: with respect to the services delivered and the ambient requirements, considering functional as well as dependability aspects, such as reliability, timeliness, and robustness.
The primary challenges posed to existing V&V methods and techniques in the context of the openness, mobility, adaptability, and workability aspects of AmI result from:
Scale: the pervasiveness and interdependency of the information infrastructures needed to realize a trustworthy AmI lead to continual growth in the size and complexity of the systems that have to be analyzed. This factor affects all four aspects.
Dynamicity: the common denominator of adaptability, openness and mobility is that they make a priori, off-line V&V activities inadequate, because we can no longer rely on a stable model or design of the system being built. To address this, one possible approach is to seek methods for reasoning about infrastructures undergoing dynamic changes, together with meta-models (e.g., component-based models of the architecture) that can characterize evolving infrastructures; a minimal sketch of such a meta-model appears after this list. Another approach is to entrust basic dependability properties (e.g., safety, security) to kernel subsystems that are simple enough to be amenable to established V&V methods.
Data intensiveness: this is an issue especially for workability and mobility. While methods and techniques have in the past focused mostly on checking the functional behaviour of a system, in data-intensive systems the consistency of the data also needs to be verified, and the links between data faults and threats to dependability identified.
Malicious faults: in the context of security protocols for open systems, faults can be injected during communication by malicious agents in order to forge authentication or to steal secret information; a replay-attack sketch also appears after this list.
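As a minimal illustration of the meta-model approach mentioned under Dynamicity, the following Python sketch maintains a component-based model of an evolving infrastructure and re-verifies a simple structural invariant after every reconfiguration, rejecting changes that would violate it. The class names, the "kernel" component, and the invariant itself are invented for illustration; a real meta-model would track far richer architectural information.

```python
# Minimal sketch (hypothetical names throughout): a component-based
# meta-model of an evolving infrastructure.  A structural invariant is
# re-verified at run time after each reconfiguration, since off-line
# V&V cannot anticipate the final configuration.

class Infrastructure:
    def __init__(self):
        self.components = set()
        self.links = set()          # undirected (a, b) connections

    def _invariant(self):
        # Illustrative safety property: every non-kernel component must
        # be linked directly to the trusted kernel component.
        return all(
            c == "kernel"
            or ("kernel", c) in self.links or (c, "kernel") in self.links
            for c in self.components
        )

    def reconfigure(self, add=(), remove=(), link=(), unlink=()):
        snapshot = (set(self.components), set(self.links))
        self.components |= set(add)
        self.components -= set(remove)
        self.links |= set(link)
        self.links -= set(unlink)
        # Drop links whose endpoints no longer exist.
        self.links = {l for l in self.links
                      if l[0] in self.components and l[1] in self.components}
        if not self._invariant():                     # dynamic V&V step
            self.components, self.links = snapshot    # roll back
            raise RuntimeError("reconfiguration rejected: invariant violated")

infra = Infrastructure()
infra.reconfigure(add=["kernel"])
infra.reconfigure(add=["sensor"], link=[("kernel", "sensor")])  # accepted
try:
    infra.reconfigure(add=["rogue"])                            # rejected
except RuntimeError as e:
    print(e)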
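To make the malicious-fault challenge concrete, the next sketch models a deliberately naive authentication exchange that lacks freshness (no nonces or timestamps): a passive attacker who records one legitimate message can replay it later to forge authentication, without ever learning the shared key. The protocol is invented for illustration and is not a real standard.

```python
# Sketch of a malicious communication fault: a replay attack against a
# naive MAC-based authentication protocol with no freshness guarantee.
import hmac, hashlib

KEY = b"shared-secret"              # known to client and server only

def client_message(user: str) -> str:
    # Naive protocol: message = user "|" MAC(key, user).  No nonce and
    # no timestamp, so the message carries no proof of freshness.
    tag = hmac.new(KEY, user.encode(), hashlib.sha256).hexdigest()
    return user + "|" + tag

def server_accepts(msg: str) -> bool:
    user, tag = msg.rsplit("|", 1)
    expected = hmac.new(KEY, user.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(tag, expected)

# A legitimate run, observed by a passive attacker on the open network.
recorded = client_message("alice")
assert server_accepts(recorded)

# The attacker never learns KEY, yet replaying the recorded message is
# accepted again: authentication is forged by a communication fault.
assert server_accepts(recorded)
print("replayed message accepted -> the protocol lacks freshness")
```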
The primary means by which the above challenges might be tackled are:
Abstraction: effective abstraction mechanisms need to be adopted in methods and techniques used for analyzing and testing a large complex information infrastructure, so that dependability checks can be carried out on models describing the system at the relevant level of abstraction.
Composition: methods and techniques are required to verify the dependable evolution/adaptation of the infrastructure, resulting from the intentional/accidental change of the interconnected component structure. Hence, we need to investigate verification techniques aimed at checking properties of the dynamic composition of system components, and test approaches allowing for component-based testing as well as test asset reuse when systems are reconfigured by component insertion/removal/substitution.
Complementarity of approaches: we need to consider mutual influence and integration between approaches addressing dependability from different backgrounds and perspectives, e.g., formal verification with model-based testing, or conformance testing with fault injection techniques. We also need to extend testing and formal verification to deal with non-functional aspects, most notably hard and soft real-time properties. (A small model-based testing sketch appears after this list.)
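As one concrete instance of the complementarity theme, the sketch below uses a single formal model twice: a small specification finite-state machine generates conformance tests (an exhaustive suite of short input sequences covering every transition), which are then executed against an implementation. The vending-machine model and the seeded bug are invented for illustration.

```python
# Model-based testing sketch: derive test sequences from a
# specification FSM and run them against an implementation under test.
# The vending-machine model is purely illustrative.
from itertools import product

# Specification: (state, input) -> (next state, output)
SPEC = {
    ("idle", "coin"):   ("paid", "ok"),
    ("paid", "button"): ("idle", "dispense"),
    ("paid", "coin"):   ("paid", "refund"),
    ("idle", "button"): ("idle", "nothing"),
}

def spec_run(seq):
    state, outs = "idle", []
    for inp in seq:
        state, out = SPEC[(state, inp)]
        outs.append(out)
    return outs

class BuggyImpl:                    # implementation under test
    def __init__(self):
        self.state = "idle"
    def step(self, inp):
        if self.state == "paid" and inp == "coin":
            return "ok"             # seeded bug: should output "refund"
        nxt, out = SPEC[(self.state, inp)]
        self.state = nxt
        return out

# Exhaustive suite up to length 3: covers every spec transition.
for seq in (s for n in (1, 2, 3)
            for s in product(["coin", "button"], repeat=n)):
    impl = BuggyImpl()
    got = [impl.step(i) for i in seq]
    if got != spec_run(seq):
        print(f"FAIL on {seq}: expected {spec_run(seq)}, got {got}")
        break
```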
Fault forecasting
System evaluation is a key activity of fault forecasting, aimed at providing statistically well-founded quantitative measures of how much we can rely on a system. In particular, system evaluation achieved through modelling supports the prediction of how much we will be able to rely on a system before incurring the costs of building it, while system evaluation achieved through measurement and statistical inference supports the assessment of how much we can rely on an operational system.
These two basic techniques are by no means mutually exclusive, although they have traditionally been applied at different stages of the system life cycle. In our opinion, research is needed into both techniques, as well as into their synergistic combination throughout the system life cycle.
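A deliberately simple worked example of the two techniques, assuming an exponential failure law: prediction computes mission reliability from a failure rate assumed at design time, while assessment infers the rate (by maximum likelihood) from times to failure observed on the operational system. All numbers are invented.

```python
# Sketch: model-based prediction vs measurement-based assessment of
# reliability, on the simplest (exponential) failure model.
import math

# --- Prediction (before building): assume a failure rate, e.g. from
# component handbooks, and predict mission reliability R(t) = exp(-l*t).
assumed_rate = 1e-4                 # failures per hour (design assumption)
mission = 1000.0                    # mission duration, hours
print(f"predicted R({mission:.0f}h) = "
      f"{math.exp(-assumed_rate * mission):.4f}")

# --- Assessment (after deployment): infer the rate from observed times
# to failure of the operational system (maximum-likelihood estimate).
observed_ttf = [8200.0, 11500.0, 9400.0, 15200.0, 7100.0]   # hours
mle_rate = len(observed_ttf) / sum(observed_ttf)
print(f"measured rate = {mle_rate:.2e} /h, "
      f"assessed R({mission:.0f}h) = {math.exp(-mle_rate * mission):.4f}")
```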
A number of new issues are raised by the openness, mobility, adaptability, and workability aspects of AmI:
Ambient Intelligence, in particular the openness aspect, raises the need for dependability evaluation to encompass not only unexpected accidental faults but also intentional faults perpetrated by malicious entities. Clearly, there is a need to develop modelling paradigms that will enable the evaluation of comprehensive dependability measures taking into account both types of faults in an integrated way. Such modelling paradigms must be based on sound assumptions concerning the vulnerabilities of the target systems and the attackers' behaviour, and such assumptions should be derived from direct measurements and observations. (A minimal integrated model is sketched after this list.)
In many systems of the future, humans will form a vital link in the complete dependability chain. In this respect, the adaptive nature of human behaviour is an important phenomenon [Besnard and Greathead, 2003]. Hence, it is of crucial importance to model and evaluate the impact of human/operator operational behaviour on overall system dependability. Current (technically-focused) system evaluation approaches to a large extent neglect this issue.
Future systems will adapt themselves to new and evolving circumstances. To assess the dependability of such systems, system evaluation techniques may have to be enhanced so as to cope with increasingly dynamic systems.
Future systems will rely heavily on mobile technologies. Fault-tolerance techniques will therefore be devised to maintain an adequate level of service even in the presence of disconnections. System evaluation needs to provide adequate support for the development and evaluation of these techniques.
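As a minimal instance of the integrated modelling called for in the first item above, the sketch below solves a three-state continuous-time Markov model in which the system can be brought down either by an accidental fault or by a successful attack, yielding a single steady-state availability measure that accounts for both fault types. All rates are invented, and real attacker behaviour is unlikely to be this memoryless; that is exactly the kind of assumption that must be grounded in measurement.

```python
# Sketch: a 3-state continuous-time Markov model integrating accidental
# and malicious faults into one availability measure.  States:
# 0 = OK, 1 = failed (accidental), 2 = compromised (malicious).
# All rates (per hour) are invented for illustration.
import numpy as np

lam_acc, lam_mal = 1e-3, 4e-4      # failure and successful-attack rates
mu_acc,  mu_mal  = 0.1,  0.02      # repair and recovery rates

Q = np.array([
    [-(lam_acc + lam_mal), lam_acc, lam_mal],
    [mu_acc,              -mu_acc,  0.0    ],
    [mu_mal,               0.0,    -mu_mal ],
])

# Steady state: pi @ Q = 0 with sum(pi) = 1.  Replace one balance
# equation by the normalization condition and solve the linear system.
A = np.vstack([Q.T[:-1], np.ones(3)])
b = np.array([0.0, 0.0, 1.0])
pi = np.linalg.solve(A, b)
print(f"availability = {pi[0]:.5f} "
      f"(down: accidental {pi[1]:.5f}, compromised {pi[2]:.5f})")
```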
As a consequence, the following research themes can be identified:
An expanded role for system evaluation activities to further integrate them into the system life cycle (there is a need for a stronger interaction with the other dependability methods) and to better understand the limits of system evaluation (how much we can rely on modelling and measurement given the uncertainties in the real system being modelled or measured).
Better assessment – and better integration of this assessment into system evaluation – of the dependability effects of human activities (development, maintenance, use of, and attacks against, computer-based systems), currently often limited to generic assessments of “quality” and/or conformance checks about the application of prescribed precautions.
Attacking the complexity of system descriptions. There is a definite need for composition, supported by property-driven abstractions in the definition of models and measurements, and also a need to provide macro-structure to models, for example using layered approaches to reflect the layered structure of real systems. Since different formalisms are suited to the description of different system characteristics, forcing the use of a single formalism may result in an inadequate and unfaithful representation of some relevant aspects. We therefore advocate a multiformalism system evaluation environment in which formalisms can work together in a synergistic manner, allowing different modelling styles (with different models of time, e.g., continuous, discrete, deterministic, or stochastic) to be applied at different stages of the evaluation.
Attacking the complexity of solutions. By solution, we mean computation of both classical dependability properties and temporal logic properties, possibly in a multiformalism and heterogeneous context. We envision three main sources of complexity: large state spaces, the presence of rare events, and new classes of stochastic models and their analysis. (A rare-event estimation sketch appears after this list.)
Measurements in complex and open systems. Measurement-based assessment of large-scale deployed networked systems raises several challenging problems, encompassing controllability and observability issues, as well as the definition of suitable dependability measures. Such problems are exacerbated when mobility is taken into account. Several research issues are also raised by the collection and processing of the real data needed to understand and characterize malicious attacks, and to model the impact of these attacks on computer system security. We need to develop a methodology and suitable means for collecting and analysing such data. Particular attention needs to be paid to the characterization of attacker behaviour based on the collected data, and to the definition of stochastic models that will enable system administrators and designers to assess the ability of their systems to resist potential attacks.
Benchmarking for dependability, to provide a uniform, repeatable, and cost-effective way of performing the evaluation of dependability and security attributes, either as stand-alone assessment or, more often, for comparative evaluation across systems and components. The shift from system evaluation techniques based on measurements to the standardized approaches required by benchmarking touches all the fundamental problems of current measurement approaches (representativeness, portability, intrusion, scalability, cost, etc.) and needs a comprehensive research approach, with special attention to COTS and middleware aspects.
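For the rare-event source of complexity noted above, a standard remedy is importance sampling. The sketch below estimates the probability that an M/M/1 queue, starting with one customer, reaches a high level before emptying: naive simulation would essentially never observe the event, so the simulation runs under a change of measure that swaps the arrival and service probabilities, and the likelihood ratio corrects the estimate. All parameters are invented; the gambler's-ruin formula gives the exact value for comparison.

```python
# Sketch: importance-sampling estimation of a rare event -- an M/M/1
# queue, starting with one customer, reaching level N before emptying.
# Simulating with swapped arrival/service probabilities (a standard
# change of measure for this model) makes the event common; the
# likelihood ratio corrects the estimate.  Parameters are invented.
import random
random.seed(42)

lam, mu, N = 1.0, 10.0, 20          # arrival/service rates, rare level
p = lam / (lam + mu)                # up-step probability (embedded chain)
q = 1.0 - p                         # down-step probability

def is_trial():
    level, lr = 1, 1.0
    while 0 < level < N:
        if random.random() < q:     # sample with swapped probabilities
            level += 1
            lr *= p / q             # true up-prob / sampling up-prob
        else:
            level -= 1
            lr *= q / p             # true down-prob / sampling down-prob
    return lr if level == N else 0.0

trials = 100_000
est = sum(is_trial() for _ in range(trials)) / trials

# Exact value from gambler's-ruin analysis, for comparison.
r = q / p
exact = (1 - r) / (1 - r**N)
print(f"IS estimate = {est:.3e}, exact = {exact:.3e}")
```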