A Performance Brokerage for Heterogeneous Clouds




3.6 Chapter Summary

In section 3.1 we provided some hypothetical examples of how performance variation can lead to cost variation for Cloud users, and in section 3.2 we noted that providers make no guarantees regarding performance in their SLAs. As such, performance measurement becomes vital for Cloud users.


In section 3.3 we reviewed virtualisation technologies, and in particular the Popek and Goldberg (1974) hypervisor properties. From these we can infer that CPU bound workloads should cause minimal trapping to the hypervisor, and so will likely have performance similar to that of running directly on the underlying physical machine. However, I/O bound workloads will have significant hypervisor intervention, likely affecting performance.
In section 3.4 we addressed performance metrics, and found that the most useful and informative ones for users are defined in terms of progression towards completing a task, such as execution time and throughput. As such, when modelling the broker in chapter 6, the advertised performance of instances is expressed in terms of execution times.
Task progression metrics require a task to be chosen in order to measure 'progression' towards it, and a benchmark is a workload chosen for this purpose. In section 3.5 we discussed the numerous pitfalls that exist with respect to good benchmarks, which have led to a preference for using real-world workloads. In particular, this approach ensures that benchmarks cannot fit into low-level caches, so the memory hierarchy is stressed when the workload is executed. Further, it also ensures there are sufficiently complex instruction dependencies, which generate various hazards in CPU pipelines. The SPEC CPU benchmark suite is developed by an industry-led consortium with the specific aim of addressing these issues. The suite is widely used, and we make use of a number of its benchmarks when measuring Cloud performance in chapter 5.
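To make the link between measured execution times and a single advertised performance score concrete, the following minimal sketch summarises per-benchmark execution times in the style of SPEC's ratio-and-geometric-mean methodology; the reference and measured times are hypothetical values chosen purely for illustration.

    from math import prod

    def spec_style_score(reference_times, measured_times):
        # Each benchmark contributes the ratio reference/measured
        # (higher is better); the overall score is the geometric mean
        # of the ratios, as in the SPEC CPU methodology.
        ratios = [ref / t for ref, t in zip(reference_times, measured_times)]
        return prod(ratios) ** (1.0 / len(ratios))

    # Hypothetical execution times in seconds for three benchmarks:
    reference = [100.0, 250.0, 80.0]  # times on a fixed reference machine
    measured = [40.0, 100.0, 32.0]    # times on the instance under test
    print(spec_style_score(reference, measured))  # 2.5x the reference

A geometric mean is used rather than an arithmetic mean so that the summary score is insensitive to which machine is chosen as the reference.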
In summary, the literature review in section 3.5 reveals a preference for the use of real-world workloads as benchmarks, but also notes the large number of benchmarks available in the SPEC suite: 31. By including a large number of benchmarks the suite is useful to a wide audience. This is an acknowledgement that benchmarks are only useful to a particular user to the degree to which they correlate with the workloads the user intends to run, and that no single benchmark can predict all workloads. From the discussion here, it is clear that a performance broker needs to offer performance with respect to a multitude of benchmarks in order to address the performance needs of a wide audience. Looking ahead to section 6.4, we choose 31 as the number of benchmarks the broker offers performance with respect to. In the next chapter we further review the literature to understand, amongst other things, the degree of performance variation possible, how brokers operate, and pricing and markets, so as to allow a model of the broker to be developed.


4 Cloud Performance, Brokers, Markets and Pricing

In chapter 2 we reviewed the characteristics of Cloud services, as well as the various service and deployment models. We noted that elasticity is perhaps the key characteristic of Cloud services that differentiates them from previous forms of distributed systems. Elasticity, together with standardised offerings, leads to consideration of utility and commodity, which are linked by the notion of performance. In the former, there is potential for using performance to define the computational equivalent of the kilowatt-hour, whilst in the latter, it may be used to determine when instances are equivalent. In chapter 3 we considered how the performance properties of an Infrastructure Cloud are derived from the system of physical components and software tools from which it is made. In particular, we note that by the efficiency property of hypervisors, which we defined in section 3.3, we would expect the performance of CPU bound workloads to be strongly determined by the underlying CPU. A literature review relating to defining and measuring performance revealed a preference for task progression metrics and real-world benchmarks.


Based on our understanding of Cloud systems and performance, we review work related to Cloud performance in section 4.1 with the aim of identifying opportunities for the broker. We find that performance variation across supposedly identical instances is widely reported on, evidence that the problem is of wide interest to the Cloud community. Heterogeneity is identified as a major cause of performance variation, as is resource contention amongst instances co-located on the same host. We note that extant empirical work is typically limited to cross-sectional studies, and there is a lack of longitudinal work. As a consequence, the performance characteristics of instances over time are not well known. We also consider whether the results presented are sufficient to allow for a realistic model of performance to be constructed.
As far as we are aware, the only work that directly addresses performance variation at the same price is focused on improvement strategies, and we review such work in section 4.2. From these strategies we can estimate an expected cost for obtaining a particular performance level. This is useful as it serves to limit the markups a broker can apply when re-selling, and looking ahead to section 6.6, we make use of this during client and broker interactions. However, as we shall see, assumptions regarding the independence of the performance of different instances are typically made, and this leads to an underestimation of risk, as illustrated by the sketch below. This work is also interesting as we find examples of different models of instance performance, affording the opportunity to analyse them and determine whether they are appropriate for our needs.
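To see why the independence assumption matters, consider a simple 'launch and discard' improvement strategy: keep launching instances until one lands on fast hardware. The sketch below is not drawn from the reviewed work; it uses hypothetical numbers (a 25% chance of a fast allocation, unit cost per launch) and a crude correlation model in which a launch may simply repeat the previous outcome, as when successive requests are served from the same hardware pool. The per-launch probability of a fast instance is unchanged, yet both the expected cost and the tail risk rise.

    import random

    P_FAST = 0.25         # assumed chance a launch lands on fast hardware
    PRICE_PER_LAUNCH = 1  # cost units consumed per instance launched
    TRIALS = 100_000

    def launches_needed(correlation=0.0):
        # Launches until a fast instance is obtained. With probability
        # `correlation`, a launch repeats the previous outcome rather
        # than drawing afresh, modelling positively correlated allocations.
        n, prev_fast = 0, None
        while True:
            n += 1
            if prev_fast is not None and random.random() < correlation:
                fast = prev_fast
            else:
                fast = random.random() < P_FAST
            if fast:
                return n
            prev_fast = fast

    for rho in (0.0, 0.6):
        costs = sorted(launches_needed(rho) * PRICE_PER_LAUNCH
                       for _ in range(TRIALS))
        mean = sum(costs) / TRIALS
        p95 = costs[int(0.95 * TRIALS)]
        print(f"correlation={rho}: mean cost={mean:.2f}, "
              f"95th percentile cost={p95}")

Under independence the expected cost is 1/0.25 = 4 launches; with correlated allocations the simulation reports a mean of roughly twice that and a substantially heavier tail, which is precisely the risk that the independence assumption hides.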
We review brokers in section 4.3, and in particular those offering a performance-based service. We find that the typical focus is on differences across providers at different prices, rather than differences in performance at the same price. The lack of discussion in the vast majority of the work reviewed regarding how the various proposed brokers would be sustainable is a curious omission; indeed, Rogers and Cliff (2012) is the only example we can find that demonstrates a profitable broker.
When describing the broker in section 1.2 we noted that we have two marketplaces: a primary marketplace consisting of large Cloud providers offering mass-produced standardised resources, and a secondary marketplace created when performance broker(s) and users with specific performance needs come together. For reasons discussed in section 2.7, we will model the primary marketplace as a commodity exchange, and so in section 4.4 we review work regarding such exchanges. We also note the prevalence of the Zero Intelligence (ZI) trading agent, as first described by Gode and Sunder (1993), and we choose to model the broker in this manner when interacting with clients in the secondary marketplace; a minimal sketch of such an agent is given below. We further justify our choice of a ZI agent after a review of extant Cloud pricing strategies in section 4.5.
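For concreteness, the following sketch shows a budget-constrained ZI trader (the ZI-C variant) in the spirit of Gode and Sunder (1993), quoting as a seller; the cost floor and price ceiling here are illustrative values, not parameters from the model developed in chapter 6.

    import random

    PRICE_CEILING = 10.0  # hypothetical maximum admissible quote

    def zi_ask(unit_cost):
        # A ZI-C seller quotes uniformly at random between its unit
        # cost and the price ceiling: it never sells at a loss, but
        # applies no other intelligence to its pricing.
        return random.uniform(unit_cost, PRICE_CEILING)

    # A broker re-selling an instance acquired at a cost of 2.5 units:
    print(zi_ask(2.5))  # any value in [2.5, 10.0] is equally likely

The appeal of such agents is that the budget constraint alone suffices to produce plausible market behaviour, so no assumptions about broker strategy beyond not selling at a loss need to be built into the model.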
In section 4.6 we review Cloud simulators, including CloudSim, its predecessor GridSim, and CReST, and we conclude that, for different reasons, all are insufficient for our needs. As such, we choose to develop our own simulation code.
