Autonomic Computing in Large-Scale Distributed Systems (includes Clouds)
Autonomic Computing has been suggested as a new paradigm for building systems that can adapt their operation to maintain a required level of performance when external conditions change. Such changing conditions are particularly common in large-scale distributed systems (e.g., Cloud Computing systems). Applications and middleware running on these systems have to incorporate some degree of adaptation to cope with this increasing volatility.
Existing approaches have applied adaptation to specific use cases, with little effort to create generic solutions. For example, early work has considered adaptivity in the context of scheduling [1,2] or block size selection in data transfers [3]. This work has used techniques based on control theory, heuristics, or utility functions.
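To illustrate the flavour of such techniques, the sketch below shows a minimal control-theoretic adaptation loop: a proportional controller that adjusts the block size of a data transfer so that each block takes roughly a target amount of time. All names and parameters (TARGET_TIME, K_P, next_block_size) are hypothetical and for illustration only; they are not taken from the cited systems.

```python
# Illustrative sketch of proportional (P-type) feedback control for
# adapting the block size of a data transfer. Hypothetical names/values.

TARGET_TIME = 1.0   # desired seconds per transferred block (setpoint)
K_P = 0.5           # proportional gain: how aggressively to react

def next_block_size(current_size: int, measured_time: float) -> int:
    """Grow the block size when blocks complete faster than the target,
    shrink it when they complete more slowly."""
    error = TARGET_TIME - measured_time          # positive => too fast
    # Scale the size in proportion to the relative error.
    new_size = current_size * (1 + K_P * error / TARGET_TIME)
    return max(1, int(new_size))

# Example: a 100-unit block that took 2 s (twice the target) is halved.
print(next_block_size(100, 2.0))   # -> 50
print(next_block_size(100, 0.5))   # -> 125 (faster than target, grow)
print(next_block_size(100, 1.0))   # -> 100 (on target, unchanged)
```

A heuristic or utility-based approach would replace the proportional rule with, respectively, ad hoc adjustment rules or the maximisation of a utility function over candidate configurations; part of this project is to understand when each choice is preferable.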
The objective of this project is to move the current state-of-the-art one step forward by identifying the environmental characteristics that determine which adaptation technique is appropriate in each case, and by using this knowledge to incorporate different adaptation techniques into a generic autonomic computing framework.
[1] Sakellariou et al. A low-cost rescheduling policy for efficient mapping of workflows on grid systems. Scientific Programming, 2004.
[2] Lee et al. Utility driven adaptive workflow execution. CCGrid 2009.
[3] Gounaris et al. A control theoretical approach to self-optimizing block transfer in web service grids. ACM TAAS, 2008.