Dynamic resource allocation has become an increasingly attractive research direction as commodity computing gains momentum, bringing new requirements for elasticity and flexibility, and as cloud computing grows in importance. The aim is to deliver high-performance computation while also accounting for other aspects, such as energy consumption. Admittedly, heterogeneity comes at the cost of greater design and management complexity. A promising approach to address the difficulties posed by this scenario is to exploit specialized computing resources integrated into a heterogeneous system architecture, such as asymmetric multicore CPUs, specialized graphics co-processors (GPUs), or reconfigurable hardware such as FPGAs. Exploiting these heterogeneous architectures means taking advantage of their individual characteristics to optimize the performance/energy trade-off of the overall system. In this work, we focus on the definition of a runtime mechanism, called the Orchestrator, that enables heterogeneous systems to manage accelerators easily and efficiently in order to meet application and system goals, such as high performance for a broad range of computationally intensive applications.
The early computer science literature is filled with static optimization approaches, the most prominent example being the compiler. A compiler applies a set of common transformations to an application so as to speed up its execution on an entire family of microprocessors (e.g., x86).
Whenever a compiler is given additional information, it can apply more aggressive transformations to obtain further performance improvements on a subset of that family of microprocessors. The driving factor of static optimization is the availability of information, which may be scarce depending on the environment. On the one hand, embedded computing systems usually perform the same task over and over; they represent the ideal scenario for applying the highest level of static optimization so as to maximize the benefits for users (e.g., maximizing performance while minimizing power consumption). On the other hand, clusters of computing systems may execute multiple tasks simultaneously, with no way to anticipate when one task will start and another will finish.
Furthermore, the increasing availability of different kinds of processing resources in Heterogeneous System Architectures (HSAs), combined with today’s fast-changing, unpredictable workloads (e.g., in mobile or cloud-computing contexts), has fueled interest in self-adaptive systems that dynamically reorganize the usage of system resources to optimize for a given goal (e.g., performance, energy, reliability, resource utilization). This scenario calls for dynamic optimization. The SAVE (Self-Adaptive Virtualisation-Aware High-Performance/Low-Energy Heterogeneous System Architectures) project will develop a stack of hardware, software, and OS components that allow deciding at run time which type of resource a task should execute on, based on the current system status, environment, and application requirements.