A performance Brokerage for Heterogeneous Clouds



6.11 Summary

In this chapter we developed a model of the performance broker, of an exchange where multiple suppliers can sell appropriately equivalent instances, of the performance of those instances, and of a set of clients with performance needs. We chose to model a commodity marketplace (CeX) to reflect the view of Cloud resources as commodities. The majority of the world’s exchanges operate a continuous double auction (CDA), which allows the continuous posting of bids and asks. In section 6.2, in line with the extant work described in section 4.4, we implement a simplified CDA to determine instance prices at the CeX and to match the broker with different suppliers. As the CeX does not mandate hardware, the broker’s pool of instances will be heterogeneous whenever the parameter NUM_CPUS > 1. In section 6.3 we model sellers on the CeX, giving each one a particular set of CPU models which they allocate to instances whenever they are chosen to trade with the broker. We allocate these in an irregular and somewhat ‘lumpy’ manner, reflecting the findings of section 5.5.
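To make the matching mechanism concrete, the following is a minimal sketch of a simplified CDA order book in Python. The order attributes, the price-time priority and the rule of trading at the resting order's price are illustrative assumptions, not necessarily the exact rules implemented at the CeX.

```python
import heapq

class SimpleCDA:
    """Minimal continuous double auction: a new order trades against the best
    resting orders on the opposite side while prices cross, otherwise it rests."""

    def __init__(self):
        self.bids = []  # max-heap via negated price: (-price, seq, qty, trader)
        self.asks = []  # min-heap: (price, seq, qty, trader)
        self.seq = 0    # arrival counter giving time priority

    def submit(self, side, price, qty, trader):
        self.seq += 1
        trades = []
        if side == 'bid':
            while qty > 0 and self.asks and self.asks[0][0] <= price:
                ask_price, ask_seq, ask_qty, seller = heapq.heappop(self.asks)
                traded = min(qty, ask_qty)
                trades.append((ask_price, traded, trader, seller))  # trade at resting price
                qty -= traded
                if ask_qty > traded:  # partially filled ask keeps its priority
                    heapq.heappush(self.asks, (ask_price, ask_seq, ask_qty - traded, seller))
            if qty > 0:
                heapq.heappush(self.bids, (-price, self.seq, qty, trader))
        else:  # ask
            while qty > 0 and self.bids and -self.bids[0][0] >= price:
                neg_bid, bid_seq, bid_qty, buyer = heapq.heappop(self.bids)
                traded = min(qty, bid_qty)
                trades.append((-neg_bid, traded, buyer, trader))
                qty -= traded
                if bid_qty > traded:
                    heapq.heappush(self.bids, (neg_bid, bid_seq, bid_qty - traded, buyer))
            if qty > 0:
                heapq.heappush(self.asks, (price, self.seq, qty, trader))
        return trades
```

Under this sketch, a broker bid for several instances would immediately trade against any seller asks resting at or below the bid price, with any unfilled remainder left on the book.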


Once instances have been obtained the broker measures their performance. In section 6.4 we developed a model of performance designed to reflect the findings summarized in section 5.10. In particular: (1) for heterogeneous instance types, different workloads run better or worse on different CPU models, with the differences between CPUs controlled by a parameter; (2) the per-CPU distribution is highly peaked with a long tail, with the degree of peakedness and the length of the tail controlled by parameters; and (3) instance performance is stationary.
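As an illustration of points (1) and (2), the sketch below draws workload completion times for an instance on a given CPU model from a clipped lognormal, in which a per-CPU speed factor controls differences between CPU models and a shape parameter controls the length of the tail. The distribution family and parameter values are assumptions made for illustration, not the exact model of section 6.4.

```python
import numpy as np

def sample_runtime(cpu_factor, base_time=100.0, tail=0.15, rng=None):
    """Draw one workload completion time for an instance.

    cpu_factor : relative speed of the instance's CPU model for this workload
                 (e.g. 1.0 vs 1.1 for a spread of 1.1 between two models)
    base_time  : best-case completion time on the reference CPU
    tail       : lognormal shape; larger values lengthen the right tail
    """
    rng = rng or np.random.default_rng()
    # Clipping at 1.0 keeps the distribution highly peaked near the best
    # case, with a long right tail of degraded runs.
    degradation = max(rng.lognormal(mean=0.0, sigma=tail), 1.0)
    return base_time * cpu_factor * degradation

# Example: two CPU models with a spread of 1.1 for this workload.
rng = np.random.default_rng(42)
fast = [sample_runtime(1.0, rng=rng) for _ in range(1000)]
slow = [sample_runtime(1.1, rng=rng) for _ in range(1000)]
```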
Finally, the broker operates a secondary performance market: its clients place orders requesting some number of instances in a particular performance tranche with respect to one of the workloads offered, to which the broker responds with a quote. In section 6.6 we construct our population of clients using data from a Google workload trace (Reiss et al., 2011), as this provides the only example of a data set in which users specify a performance requirement. Further, the scale and nature of the workloads in the trace make it a suitable proxy for Cloud resources. We make use of estimates of instance-seeking costs so that clients can compare their expected effective cost with that of the broker’s offer. Sections 6.7 to 6.9 provide additional technical requirements for the model.
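The client-side decision can be summarised as: accept the broker’s quote when it undercuts the expected effective cost of seeking a suitable instance directly. The sketch below expresses that comparison; the expected-cost formula, in which on average 1/p instances must be acquired and benchmarked before one falls in the requested tranche, is an illustrative assumption rather than the exact costing used in section 6.6.

```python
def expected_seeking_cost(on_demand_price, hours_needed, p_suitable,
                          measure_hours=0.25):
    """Expected effective cost of obtaining one suitable instance directly.

    On average 1/p_suitable instances must be acquired and benchmarked
    (each costing measure_hours of billed time) before one meets the
    requested performance tranche.
    """
    wasted = (1.0 / p_suitable - 1.0) * measure_hours * on_demand_price
    keeper_measurement = measure_hours * on_demand_price
    return wasted + keeper_measurement + on_demand_price * hours_needed

def accept_quote(broker_quote, on_demand_price, hours_needed, p_suitable):
    """Client accepts the broker's quote if it undercuts expected direct cost."""
    return broker_quote <= expected_seeking_cost(on_demand_price,
                                                 hours_needed, p_suitable)

# Example: a tranche that only 20% of instances satisfy.
print(accept_quote(broker_quote=11.0, on_demand_price=1.0,
                   hours_needed=10, p_suitable=0.2))  # True (11.0 <= 11.25)
```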
In section 6.10 we find that the broker can make a profit; however, doing so requires heterogeneity, albeit NUM_CPUS = 2 will suffice under certain conditions. Future marketplaces may specify particular hardware, including a CPU model, for a given instance type in order to limit variation due to heterogeneity. Doing so removes the opportunity for a broker to re-price. However, as noted, this is likely to lead to a decrease in the number of sellers and, as a consequence, less liquidity on the exchange. Further decreases in liquidity will be seen over time as sellers refresh hardware, and it seems hard to avoid the conclusion that homogeneous instance types invariably become heterogeneous.
We also note that the broker always makes a loss whenever the minimum charge on the CeX is set to 10 minutes. In this case the effective cost of instance seeking is too low for the broker to make a profit. This is potentially problematic: of the 3 major providers, as of 18/10/2017 both EC2 and GCE have per-second billing with a 1-minute minimum charge, whilst Azure also has a 1-minute minimum charge but rounds to the nearest minute. If future exchanges use this level of price granularity then a broker operating under the assumptions considered in this chapter will not be profitable.
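To illustrate why billing granularity matters, the sketch below computes the billed cost of a short benchmarking run under different minimum charges (the price and runtime are assumed for illustration): the finer the granularity, the cheaper it is for a client to acquire, benchmark and discard instances itself, which erodes the broker’s margin.

```python
import math

def billed_cost(runtime_seconds, price_per_hour, minimum_seconds,
                granularity_seconds=1):
    """Cost of a run given a minimum charge and a billing granularity."""
    billable = max(runtime_seconds, minimum_seconds)
    billable = math.ceil(billable / granularity_seconds) * granularity_seconds
    return billable / 3600.0 * price_per_hour

benchmark = 120  # a 2-minute benchmarking run, priced at 1.0 per hour
print(billed_cost(benchmark, 1.0, minimum_seconds=3600))  # 1.000  per-hour billing
print(billed_cost(benchmark, 1.0, minimum_seconds=600))   # 0.167  10-minute minimum
print(billed_cost(benchmark, 1.0, minimum_seconds=60))    # 0.033  1-minute minimum
```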
However, it is notable that the broker does make a profit under conditions currently seen on Clouds, and indeed does so when NUM_CPUS = 2 with a spread of 1.1 between them, and with workloads whose degradation to the median is up to 1.1 and up to 1.25 in the tail.
The simulation results presented in this chapter assume instances have stationary performance over the whole trading period. However, we know from the empirical results in section 5.7 that not all instances have stationary performance. Arguably, given sufficient time, most if not all instances are likely to be locally stationary, or indeed non-stationary, as the resource-consuming actions of their neighbours vary. The price the broker sets for an instance is fixed and based on the mean of past performance. Suppose, however, that mean performance changes, resulting in the broker over- or under-charging for the delivered performance. Arguably, in this case, pricing should change. But what if the new level of performance falls below the requested tranche level?
In the next chapter we consider this problem further, and in particular propose pricing that varies with performance, as well as SLAs that provide performance guarantees.

7 Critique

Reports of performance variation amongst instances of the same type on Public Clouds are long-standing, and Armbrust et al. (2009) identified performance unpredictability as the number 5 obstacle to the adoption of Cloud Computing. To address this problem, in section 1.2 we proposed a performance broker offering performance-assured instances, that is, instances that are acquired and have their performance measured before being offered for resale at a price according to their performance. To be viable the broker must make a profit; however, somewhat curiously, a review of the extant literature regarding performance brokers reveals this is not typically considered. The research question this thesis addresses is:


To what extent can a performance broker profitably address performance variation in commodity Infrastructure Cloud marketplaces through offering performance-assured instances?
In section 6.10 we find conditions under which the broker is profitable. However, this, and other conclusions, rest on various choices and assumptions made during the investigation. How can we be sure that we have demonstrated what we think we have?
The objective of this chapter is to provide a critique of the various choices and assumptions made; the methodology employed; the conclusions reached; and the choice of problem itself. In section 7.1 we critique our choice of research problem, and we note the prevalence of the problem. In section 7.2 we consider our empirical focus on EC2 and our choice of benchmarks. In section 7.3 we provide a system overview and discuss the modelling assumptions. We then discuss the simulation methodology and our approach to verification and validation, as well as the broker’s use of public benchmarks as opposed to a private benchmark scheme.

