
6.2 The Cloud Exchange (CeX)

In this section we discuss a hypothetical Cloud Exchange (CeX), organized along the lines of extant financial exchanges as described in section 3.4, from which the broker, and others, can rent instances. On such an exchange, orders are not placed directly with providers and can be fulfilled by any provider selling suitably equivalent resources on the CeX. To obtain new instances the broker must place a bid on the CeX. Whilst extant financial exchanges support a multitude of different order types, we recall from section 3.4 that when modelling markets it is common to make various simplifying assumptions regarding order types and how the continuous double auction (CDA) operates. We also make a number of simplifying assumptions with regard to: (1) the instance types offered on the CeX; (2) the CDA; (3) how sellers generate asks; (4) the types of order the broker places; and (5) the number of sellers on the CeX.


In common with current Cloud offerings, we would expect a CeX to offer multiple instance families, such as General-Purpose, High-CPU and High-Memory, and for these to be available in multiple sizes such as small, medium and large. For simplicity, however, we assume that the instance types are sufficiently different that the broker cannot interchange instances of one type with those of another. As such, we can consider the broker pool to consist of multiple independent sub-pools defined by instance type. The profit generated by the pool is the sum of the profit generated by each sub-pool, so to investigate pool profitability it suffices to consider one instance type only, which we refer to as type R1.
Following current practice, the CeX specifies the R1 instance type in terms of vCPUs, RAM, storage and a CeX Compute Unit (CeXCU). However, the CeX does not mandate particular hardware, making it possible for different sellers to offer instances of the same type without having to run identical (or near identical) physical servers. We do not specify particular types of hardware, such as CPU model, in instance type definitions for the following reasons: (1) the CeX considers instances which conform to the specification as equivalent and trades them on that basis only; (2) sellers may be unwilling to advertise low-level hardware details; (3) hypervisors such as KVM can obfuscate the CPU model, so verifying that an instance is running on the requested CPU may be difficult; (4) such a level of specificity is likely to reduce the number of sellers able to satisfy an order, and so reduce liquidity and its concomitant benefits; and (5) this follows current Cloud practice. The only requirement, then, is for instances to conform to the same specification.
We set the number of sellers on the CeX to 12, the same number of sellers as in the trading experiments run by Smith (1962), work which led to a Nobel Prize in Economics. We assume that 4 of these sellers are market makers (MMs), and so continuously post bids and asks, whilst the remaining standard sellers are either active, in which case they post asks, or inactive, in which case they are not currently selling. At any given moment k > 0 of the standard sellers are active, where k is drawn from int[1, 8].
Whenever the broker requires additional instances it places a market order on the CeX, which specifies a quantity of instances at the market price. In response, all market makers submit an ask specifying the same number of instances as well as an ask price. Active standard sellers also submit an ask specifying the same number of instances as in the broker’s bid. Asks are then sorted in ascending order of price and the best ask, i.e. the ask with the lowest price, is matched with the broker. As a simplification we assume that all orders are executed immediately, and so we do not maintain the state of the order book over time.
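This matching step can be sketched as follows. It is a minimal illustration rather than the simulation code itself: the names auction_round, N_MARKET_MAKERS and N_STANDARD_SELLERS are hypothetical, and ask prices are drawn from the U[0.012, 0.014] per-minute distribution specified later in this section.

```python
import random

N_MARKET_MAKERS = 4     # MMs always submit an ask
N_STANDARD_SELLERS = 8  # of which k, drawn from int[1, 8], are active

def auction_round(quantity, rng=random):
    """One simplified CDA round: the broker's market order for `quantity`
    instances is matched with the lowest-priced ask."""
    # Every market maker submits an ask for the full quantity.
    asks = [("MM%d" % i, rng.uniform(0.012, 0.014)) for i in range(N_MARKET_MAKERS)]
    # A random subset of k standard sellers is currently active and also asks.
    k = rng.randint(1, N_STANDARD_SELLERS)
    for i in rng.sample(range(N_STANDARD_SELLERS), k):
        asks.append(("S%d" % i, rng.uniform(0.012, 0.014)))
    # Best ask = lowest price; it is matched with the broker's order.
    seller, price_per_minute = min(asks, key=lambda ask: ask[1])
    return seller, price_per_minute, quantity
```

The lowest-priced ask fills the whole order, reflecting the simplification that orders execute immediately and no order book state is carried over time.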
A key consideration is the duration of the billing unit. As noted, EC2 rounded up to the nearest hour, and so the minimum charge was one hour until 02/10/17, when billing changed to per-second with a 1 minute minimum fee33. This style of billing has been commonplace amongst providers; however, others offer finer-grained billing. Azure prices to the nearest minute, whilst Google, somewhat in-between, used to price to the nearest minute but with a 10 minute minimum charge. However, in response to the changes to the EC2 billing model, Google now bills to the nearest second with a 1 minute minimum fee.
Arguably, minimum billing of some duration is required as it allows for the recovery of initial costs incurred in satisfying the request, without which the service may be open to abuse. Such costs include, for example, the initial energy consumed when starting a new instance, as well as the cost of participating in the CeX. We introduce a parameter called min_charge, which is the minimum charge for an instance rented on the CeX irrespective of its actual duration. For durations in excess of min_charge, billing is rounded to the nearest minute. For example, if min_charge = 10 an instance rented for 6 minutes incurs a charge of 10 instance minutes, whilst a duration of 12.5 minutes incurs a charge of 13 instance minutes.
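The charging rule can be sketched as below. This is a minimal sketch under the assumptions just stated: the function name charge_minutes is hypothetical, and halves are rounded up so that 12.5 minutes maps to 13.

```python
import math

def charge_minutes(duration_minutes, min_charge=10):
    """Charge, in instance minutes, for an instance rented for
    `duration_minutes`: at least `min_charge`, otherwise the duration
    rounded to the nearest minute (halves rounded up)."""
    rounded = math.floor(duration_minutes + 0.5)  # round half up
    return max(min_charge, rounded)

# Worked examples from the text:
assert charge_minutes(6) == 10     # below the minimum charge
assert charge_minutes(12.5) == 13  # rounded to the nearest minute
```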
We employ a per-minute billing model, with various minimum charges, and so ask and bid prices are per minute. As a simplification we assume that all sellers generate ask prices from U[0.012, 0.014]; this is a per-minute price and we take dollars as the unit currency. This gives per-hour pricing of between $0.72 and $0.84; for comparison, the per-hour price34 of a c4.4xlarge in the US-East Region is $0.796. We discuss next how sellers allocate CPU models to instances when satisfying requests.

6.3 Sellers on the CeX

As noted above, we consider 12 sellers on the CeX, 4 of which are MMs. The CeX does not mandate particular hardware and so we would expect a degree of heterogeneity amongst instances. However, we would also expect some degree of commonality amongst the CPU models offered by different providers. By the degree of heterogeneity (NUM_CPUS) we mean the total number of different CPU models across all sellers; NUM_CPUS is a specifiable parameter in our model. If NUM_CPUS = k we denote the available CPU models by CPU1, CPU2, …, CPUk.


Arguably, MMs will have large infrastructures. Indeed, it is likely that current providers such as EC2, GCE and Azure will act as MMs in future markets. As a consequence, they will likely have greater heterogeneity than standard sellers. For simplicity, we assume that the number of different CPU models on which a MM sells instances is equal to NUM_CPUS. For example, if NUM_CPUS = 3 then we have CPU models CPU1, CPU2 and CPU3, and each MM offers instances on all three. Further, we assume that each standard seller only sells instances on one CPU model, chosen at random from the available set.
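A minimal sketch of this assignment, assuming the hypothetical helper assign_cpu_sets and the seller counts given above:

```python
import random

def assign_cpu_sets(num_cpus, n_market_makers=4, n_standard=8, rng=random):
    """Each MM offers all NUM_CPUS CPU models; each standard seller offers a
    single model chosen at random from the available set CPU1..CPUk."""
    cpu_models = ["CPU%d" % (i + 1) for i in range(num_cpus)]
    mm_sets = [list(cpu_models) for _ in range(n_market_makers)]
    standard_sets = [[rng.choice(cpu_models)] for _ in range(n_standard)]
    return mm_sets, standard_sets
```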
Suppose a MM has been matched with a bid for 20 instances, and that NUM_CPUS = 3. How should we set the CPU model for each of the 20 instances? Following Ou et al. (2013) and Farley et al. (2012) we would assign each CPU model a probability and assume independence between the CPU models that each instance can obtain. However, as discussed in section 4.5, such an assumption leads to a distribution of CPU models within a request that is not typically observed. Indeed, empirical results indicate a far more unpredictable process underlying CPU-to-instance allocation, or rather, instance-to-host allocation. To simulate this we proceed as follows. Suppose the seller must deliver n > 0 instances. We choose a CPU model at random from the available set CPU1, CPU2, …, CPUk. Next we choose m from int[1, n] and allocate the chosen CPU model to m instances. We then repeat this step for the remaining n – m instances, continuing until all n instances have been delivered. By simulating this more irregular allocation of CPU models to instances, we keep our structural assumptions about how providers/sellers on the marketplace operate consistent with empirical observation.
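The procedure can be sketched as follows; this is a minimal illustration (the function name allocate_cpus is hypothetical), not the exact simulation code.

```python
import random

def allocate_cpus(n, cpu_models, rng=random):
    """Allocate CPU models to n instances: repeatedly choose a random CPU
    model from the seller's set and assign it to a randomly sized block of
    the remaining instances, until all n have been delivered."""
    allocation = []
    remaining = n
    while remaining > 0:
        cpu = rng.choice(cpu_models)   # random model from the available set
        m = rng.randint(1, remaining)  # m drawn from int[1, remaining]
        allocation.extend([cpu] * m)
        remaining -= m
    return allocation

# Example: a MM delivering 20 instances with NUM_CPUS = 3.
print(allocate_cpus(20, ["CPU1", "CPU2", "CPU3"]))
```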
In the next section we model the performance of instances with respect to benchmark workloads.
