A Performance Brokerage for Heterogeneous Clouds



6.7 Demand and Pool Management


The broker advertises 31 workloads with which performance can be measured, and for each workload clients can specify one of 3 tranches: A, B, and C. How should demand, i.e. the number of required instances for each workload/tranche pair in every ‘time slice’, be modelled? As Rogers and Cliff (2012) note, there is limited public data concerning demand for Cloud services. They make use of normalized sales data from 4 market segments obtained from the UK National Statistics Office as a proxy, allowing them to investigate the effect of demand-side variation on their broker. However, as discussed in section 3.3, using fixed demand patterns is not without issue: different runs of the simulation using the same demand pattern have no variation in demand. Cartlidge and Phelps (2011) model Cloud demand as a homogeneous Poisson process, commonly used in the modelling of computer systems and networks (Lavenberg, 1983), and we follow this approach.
We model the number of orders arriving from clients in a given time slice as a homogeneous Poisson process. The arrival rate λ is a parameter of the model, allowing us to simulate different levels of demand. Given a quantity of demand n ~ Po(λ) in a given time slice, we randomly choose n clients (when n > 0), and each chosen client randomly selects one of its orders. This ensures that per workload, per tranche demand is in accordance with the distribution of orders found across all clients. That is, if 1% of all possible client orders are for workload 1 tranche A then, averaged over many runs of a simulation, we would expect 1% of orders actually submitted to be of this type.
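To make the sampling procedure concrete, the following minimal Python sketch draws per-slice demand from a homogeneous Poisson process and maps each order to a randomly chosen client. The client/order structure and function names are assumptions made for illustration; they are not the implementation used in the thesis.

```python
import numpy as np

rng = np.random.default_rng()

def sample_slice_demand(clients, lam):
    """One time slice of demand: n ~ Po(lam) orders, each submitted by a
    randomly chosen client picking one of its own orders at random.

    `clients` is assumed to be a list of objects exposing a non-empty
    list attribute `orders` (hypothetical structure).
    """
    n = rng.poisson(lam)                 # number of orders arriving this slice
    submitted = []
    for _ in range(n):
        client = clients[rng.integers(len(clients))]              # choose a client at random
        order = client.orders[rng.integers(len(client.orders))]   # client picks one of its orders
        submitted.append((client, order))
    return submitted
```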
As the broker orders instances in expectation of demand, a major consideration is when the pool needs to be expanded by ordering new instances and when it needs to be reduced by terminating extant instances. The broker assumes that future demand can be estimated from past demand. In particular, the broker operates in 15 minute intervals, and at the end of each interval it estimates demand for the following interval, by which we mean the number of instances required irrespective of workload or performance requirements. The estimate, which we denote by DEMAND, is obtained by calculating the moving average of per-minute demand over the previous 60 minutes and multiplying it by 15.
In a liquid market, buyers and sellers rapidly execute trades at prices they are comfortable with. In order to provide performance on-demand, the broker must have sufficient numbers of instances at the required performance levels, and these must be offered at prices clients are prepared to pay. By LIQUIDITY we mean the number of instances in the pool available for rent. If LIQUIDITY >= DEMAND then no more instances are required, whereas if LIQUIDITY < DEMAND then there are insufficient instances in the pool to cover expected demand. However, simply ordering DEMAND – LIQUIDITY instances would likely lead to an oversupply, as some instances currently sub-let to clients may become available again during the following 15 minutes. Consequently, the broker keeps track of the number of returned instances per minute and takes a moving average over the previous 60 minutes to calculate the per-minute rate of instance return. This is then multiplied by 15 to give an estimate of the number of instances returned in the following 15 minutes, which we denote by RETURNED. If DEMAND > LIQUIDITY + RETURNED the broker orders the difference.
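A minimal sketch of this scaling rule, assuming per-minute bookkeeping of order arrivals and instance returns (the class and method names are illustrative, not the thesis implementation):

```python
from collections import deque

class PoolScaler:
    """Estimate DEMAND and RETURNED from 60-minute moving averages and
    order the shortfall against current LIQUIDITY every 15 minutes."""

    def __init__(self, window=60, interval=15):
        self.interval = interval
        self.arrivals = deque(maxlen=window)   # orders received per minute
        self.returns = deque(maxlen=window)    # instances returned per minute

    def record_minute(self, orders_received, instances_returned):
        self.arrivals.append(orders_received)
        self.returns.append(instances_returned)

    def instances_to_order(self, liquidity):
        """Number of new instances to order at the end of a 15 minute interval."""
        if not self.arrivals:
            return 0
        demand = self.interval * sum(self.arrivals) / len(self.arrivals)   # DEMAND
        returned = self.interval * sum(self.returns) / len(self.returns)   # RETURNED
        shortfall = demand - (liquidity + returned)
        return max(0, round(shortfall))
```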

In addition to expanding the pool, the broker will also need to reduce it, either in response to a drop in demand or to over-ordering. For each available instance the broker determines how long it has been available for rent; if this exceeds the larger of 20 minutes and min_lease then the instance is terminated. Finally, the broker processes orders arriving simultaneously at time ti by tranche priority and then by quantity; orders for the same tranche and quantity are processed in a random order.
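The termination check and the ordering of simultaneously arriving orders might look as follows. The field names, the pool interface, and the direction of the quantity sort are assumptions made purely for illustration.

```python
import random

def terminate_idle_instances(pool, now, min_lease):
    """Terminate instances that have sat available for rent for longer
    than max(20, min_lease) minutes."""
    threshold = max(20, min_lease)
    for inst in list(pool.available):            # copy: the pool is mutated below
        if now - inst.available_since > threshold:
            pool.terminate(inst)

def order_for_processing(orders):
    """Sort orders arriving in the same time slice by tranche priority
    (A before B before C) and then by quantity; ties are broken at random."""
    random.shuffle(orders)                       # random tie-breaking (sort is stable)
    tranche_rank = {"A": 0, "B": 1, "C": 2}
    # Processing larger orders first within a tranche is an assumption; the
    # text does not specify the direction of the quantity ordering.
    orders.sort(key=lambda o: (tranche_rank[o.tranche], -o.quantity))
    return orders
```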


6.8 Model Parameters

Our model has 6 parameters that allow us to control the degree of heterogeneity amongst instances, the rate of orders arriving at the broker, and both inter- and intra-CPU performance differences (a configuration sketch in code follows the list). Specifically:



  • NUM_CPUS: The number of different CPU models amongst sellers on the CeX.

  • Order_rate (λ): The average per-minute rate at which orders arrive at the broker.

  • Min_charge: The minimum charging period applied at the CeX.

  • Spread: Spread determines (in part) how ‘close’ the performance of instances running on different CPU models is.

  • Peak: Peak determines the distance from min to median.

  • Tail: Tail is such that tail > peak and determines the distance from median to max.
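
For illustration, the six parameters could be gathered into a single configuration object along the following lines; this is a sketch only, and the names and types are assumptions rather than the thesis implementation.

```python
from dataclasses import dataclass

@dataclass
class ModelParameters:
    num_cpus: int      # NUM_CPUS: distinct CPU models amongst sellers on the CeX
    order_rate: float  # λ: average per-minute rate of order arrivals at the broker
    min_charge: int    # minimum charging period applied at the CeX (minutes)
    spread: float      # how 'close' performance on different CPU models is
    peak: float        # distance from min to median performance
    tail: float        # distance from median to max performance

    def __post_init__(self):
        # The model requires tail > peak.
        assert self.tail > self.peak, "tail must exceed peak"
```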

In order to determine profitability for the broker we implement the model as a discrete event simulation, which we discuss next.

6.9 Simulation Events and Collected Statistics

To investigate broker profitability, we simulate the interactions between the broker, its clients, and the marketplace over a period of time. Over any time period we would not expect these interactions to be deterministic; that is, we would not expect to know exactly which clients will want instances, in what quantity, at what performance level, or for what period of time. Similarly, we would not expect to know exactly the performance of the instances we obtain from the marketplace. As such we use a stochastic simulation in which inputs, such as demand or instance performance, are drawn from probability distributions.


In practice, the broker will receive orders and acquire and terminate instances at any point in time. However, it is commonplace to divide a continuous time segment into a number of contiguous discrete time slices: for example, Cliff (2009) divides time into discrete slices during which one trading event takes place. In a discrete event simulation (DES) one or more (stochastic) events occur at discrete points in time and the simulation clock moves forward by a discrete amount. It is commonplace to interpret some events as occurring instantaneously at these points; another interpretation is that exact timings are ignored but the event is assumed to have finished by the time the clock has moved forward. Some events can start at one point in time and finish at a later point, and some events can cause new events to occur.
In our simulation we will assume certain events occur instantaneously (or complete within the specified time slice). In particular, we assume the broker can instantaneously assign instances in response to a client order. However, other events will take a number of time slices to complete; for example, we do not assume that an instance will boot and be accessible instantaneously.
We consider our broker to operate in discrete time slices of one minute duration over n > 0 minutes:
t1 < t2 < … < tn, where tk denotes the k-th one-minute slice for k = 1, 2, …, n.
In each time slice the following events occur:

  • User_Order_Event: NUM_CLIENTS ~ Po(λ) clients are chosen at random from the population. Each chosen client randomly selects an order from its list of orders and submits it to the broker.

  • Lease_Termination_Event: A client may terminate a lease on an instance making it available for sub-letting again.

  • Pool_Scaling_Event: Every 15 minutes the broker determines if additional instances are required and if so an order is placed at the CeX.

  • Instance_Removal_Event: The CSB may terminate any instances in the pool that meet the termination criteria.

  • Performance_Update_Event: All available instances have their current performance updated and the broker keeps a per workload, per instance history. For each workload m, global_perf(m) is updated.

We consider trading periods of 48 hours, but allow the broker to observe demand for one hour before trading begins so that it can estimate an initial requirement for instances. The broker then trades for 48 hours, and so the simulation runs for 2940 simulated minutes in total.
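Putting the events together, a per-minute driver loop might look like the following sketch. The event functions correspond to the events listed above, but their signatures are hypothetical and the handling of the warm-up hour is elided.

```python
WARM_UP = 60                          # broker only observes demand during this hour
TRADING = 48 * 60                     # 48 hour trading period
TOTAL_MINUTES = WARM_UP + TRADING     # 2940 simulated minutes

def run_simulation(broker, clients, market):
    for t in range(1, TOTAL_MINUTES + 1):
        user_order_event(broker, clients, t)       # NUM_CLIENTS ~ Po(λ) orders arrive
        lease_termination_event(broker, t)         # clients return instances to the pool
        if t % 15 == 0:
            pool_scaling_event(broker, market, t)  # order new instances if required
        instance_removal_event(broker, t)          # terminate instances meeting the criteria
        performance_update_event(broker, t)        # refresh per-instance performance history
    return broker.profit()
```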


The events in each time slice do not depend on each other and so could occur in parallel. Instances ordered at time ti will be available in the pool at a future time ti+k, where k is a random integer drawn from [3, 5]. This provides a stochastic time for acquiring instances from the CeX, with a best case of 3 minutes and a worst case of 5, reflecting the time for an order to be fulfilled plus a range of virtual machine boot times, as well as initial performance measurements. There is also a 1 minute turn-around time for instances whose leases have been terminated, allowing them to be re-measured before being made available again.
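The stochastic acquisition delay can be sketched in a couple of lines; the function name is hypothetical.

```python
import random

def instance_ready_time(order_time):
    """An instance ordered at time t becomes available at t + k, with k drawn
    uniformly from {3, 4, 5} minutes (order fulfilment, boot and initial
    performance measurement)."""
    return order_time + random.randint(3, 5)   # randint is inclusive of both ends
```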
The CSB incurs various costs when running the pool, including staff, premises and other general business costs, as well as the costs incurred in renting instances from the CeX. However, to simplify matters we only consider the costs incurred in renting instances. Suppose that inst ∈ I was rented from the CeX at time ti for a price of pr(inst), and terminated at time tj. We denote by Cinst the cost of the instance, calculated as:
Cinst := pr(inst) × (tj – ti). 6.10

The total cost the broker incurs across all instances in I is:

Σ_{inst ∈ I} Cinst. 6.11

Next, suppose that during its lifetime inst was sub-let k > 0 times for durations (in minutes) of d1, d2, …, dk at prices of pr1, pr2, …, prk respectively. Then the revenue derived from the instance, REVinst, is:


REVinst := Σ_{i=1}^{k} di × pri. 6.12
An instance’s profit/loss over its lifetime is:
PLinst := REVinst – Cinst. 6.13
Total profit/loss for the broker is given by:
PROFIT := Σ_{inst ∈ I} PLinst. 6.14
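
Equations 6.10 to 6.14 translate directly into code. The sketch below assumes each instance records its per-minute rental price, rental period and a list of (duration, price) sub-lets; this structure is an assumption made for illustration.

```python
def instance_cost(price, start, end):
    """Equation 6.10: cost of an instance rented from `start` to `end`
    at a per-minute price."""
    return price * (end - start)

def instance_revenue(sublets):
    """Equation 6.12: revenue from sub-lets given as (duration, price) pairs."""
    return sum(duration * price for duration, price in sublets)

def broker_profit(instances):
    """Equations 6.11, 6.13 and 6.14: total profit/loss across the pool."""
    return sum(
        instance_revenue(inst.sublets) - instance_cost(inst.price, inst.start, inst.end)
        for inst in instances
    )
```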

Allen (2011) states that the purpose of a DES is to estimate the expected value of some statistic, and in our case we wish to estimate expected profit. We let PROFIT denote the profit reported by a run of the simulation; as the simulation is stochastic, each run will produce a (potentially) different value, and so PROFIT is a random variable. We wish to estimate E[PROFIT], and to do so we make use of the Central Limit Theorem: let X be a random variable with mean µ and standard deviation σ, and let X̅ denote the random variable defined by taking the sample mean of samples of size n > 0. For n ≥ 30, X̅ is well approximated by the normal distribution N(µ, σ²/n), and so, up to the approximation, a 95% confidence interval (CI) for µ is given by x̅ ± 1.96σ/√n, where x̅ is the mean of a sample of size n. Applying this result, we obtain a 95% CI for E[PROFIT] by performing 30 runs of a simulation and calculating the sample mean and sample standard deviation. Note that we perform 30 runs for each combination of parameters.
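The confidence interval calculation is straightforward; a minimal sketch over the profits reported by 30 (or more) runs:

```python
import statistics
from math import sqrt

def profit_confidence_interval(profits):
    """95% CI for E[PROFIT] from n >= 30 simulation runs, using the normal
    approximation given by the Central Limit Theorem."""
    n = len(profits)
    mean = statistics.mean(profits)
    sd = statistics.stdev(profits)      # sample standard deviation
    half_width = 1.96 * sd / sqrt(n)
    return mean - half_width, mean + half_width
```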


