A Performance Brokerage for Heterogeneous Clouds




2.2 Elasticity

The elasticity of Cloud systems serves to highlight a number of issues found with on-premise systems, and in part helps to explain the widespread adoption of Cloud services. On-premise systems incur up-front costs for a fixed quantity of resource with unlimited use, for so long as the resource remains in working condition. Typically, on-premise systems are provisioned with sufficient resources to meet a known peak performance requirement, such as maintaining response times for a service under peak demand or the periodic execution of batch jobs which must finish by a given deadline. As a consequence, during troughs in requirements, such as when demand on services drops or batch jobs are not being executed, resources may be under-utilised or indeed idle. Further, on-premise provisioning is risky for uncertain future requirements, as resources may be provisioned, and costs incurred, for a need that fails to materialise.


In addition, on-premise systems cannot be rapidly expanded, and indeed expansion may incur large marginal costs. With regards to the latter, the addition of an extra unit of compute resource may require new racks or cooling equipment, or indeed new data centres to be built. For the former, at a minimum, the time required to expand an on-premise system includes the elapsed time between ordering new resources and their arrival, the installation and configuration of software, as well as any burn-in time. Due to these various scaling issues, on-premise provisioning runs the risk of missed opportunities: for example, it may not be possible to scale quickly enough to meet a rapid increase in demand for a service, with a potential loss of customers and revenue.
Elasticity is also notable for its ability to improve performance. The amount of work any on-premise system can deliver within the next hour is fixed, and determined by its size, whilst the promise of seemingly infinite resources available on-demand from the Cloud removes this limit, arguably making Clouds the highest performing systems available for certain types of work. Further, users can benefit from so-called cost associativity: renting n instances for one unit of billable time costs the same as renting one instance for n units of billable time, as illustrated below.
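A worked example makes cost associativity concrete. The following Python sketch assumes a hypothetical rate of $0.10 per instance-hour; the point is that cost depends only on the total instance-hours consumed, not on how they are spread across instances:

    RATE_PER_INSTANCE_HOUR = 0.10  # hypothetical price, $ per instance-hour

    def cost(instances, hours):
        # Total cost is linear in instance-hours, so cost(n, 1) == cost(1, n).
        return instances * hours * RATE_PER_INSTANCE_HOUR

    print(cost(100, 1))  # 100 instances for 1 hour  -> 10.0
    print(cost(1, 100))  # 1 instance for 100 hours  -> 10.0

The work completes 100 times faster in the first case, at no additional cost.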
Unsurprisingly, and due in part to elasticity, the use of Cloud for scientific problems which require large scale compute resources has gained much attention (Voss et al., 2013; Yang et al., 2017). Indeed, Dalpe and Joly (2014) discuss how elasticity may speed up drug discovery, and there have been some notable achievements in this direction. Darrow (2012) reports that Cycle Computing deployed a 50,000 core cluster on EC2 for 3 hours, enabling Schrödinger to simulate 21 compounds for use in cancer research. Similarly, the University of Arizona made use of Google Compute Engine (GCE) to simulate the molecular docking of over a million different compounds against a target protein, and was able to complete this work for less than $200 (Cycle Computing, 2017).
With regards to Cloud services offering low level resources such as instances and storage, elasticity is arguably simply the appearance of unlimited resources. However, as elasticity is an essential characteristic of Cloud services, higher level services built atop lower level ones must also be elastic. In the next section we discuss the types of resources that can be provisioned through Cloud services.

2.3 Service Models

Clouds are typically classified by the types of services offered, by who owns the physical infrastructure on which the services run, and by what is provided upon it. We begin by looking at service models, which are distinguished by the degree to which control of resources is afforded to users.


Infrastructure as a Service (IaaS): Infrastructure Clouds offer access to resources such as bare metal hardware, virtual machines, networks and storage, and as such are considered the ‘lowest level’ of Cloud service. The provider owns the physical resources, such as servers and storage, and operates various resource managers to partition and manage them. A hypervisor, for example, can partition a physical server into one or more virtual machines, each of which can be rented to a different user. Users typically have full administrative rights, and so responsibilities, over resources. For example, users have root/administrator access to instances, though not usually to the underlying server.
Instances are created from a catalogue of machine images. In addition to provider supplied images, most Infrastructure Clouds also allow users to build and store their own images. On EC2, machine images are known as Amazon Machine Images (AMIs), and users can choose to keep their AMIs private or make them publicly available within the same Region. Storage is typically offered in multiple forms: object storage, network block volumes and local block devices. The flexibility in choice of OS, software applications and configuration, including firewall settings, provides for a highly customisable environment. However, flexibility comes at a cost, as on-going system administration of instances is the responsibility of the users.
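As a sketch of how such resources are provisioned programmatically, the following Python snippet launches a single EC2 instance from a machine image using the boto3 library; the AMI ID, instance type and key pair name are placeholders for illustration, not working values:

    import boto3

    # Connect to EC2 in a chosen region and launch one instance from an AMI.
    ec2 = boto3.resource("ec2", region_name="us-east-1")
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI ID
        InstanceType="t2.micro",
        KeyName="my-key-pair",            # placeholder key pair name
        MinCount=1,
        MaxCount=1,
    )
    print(instances[0].id)  # identifier of the newly created instance

Once the instance is running, the user has root access to it, and with that the responsibility for its ongoing administration.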
Some users may not want, or indeed need, to manage infrastructure just to write and run their applications, and for these users Platform as a Service (PaaS) Clouds are likely more suitable.
Platform as a Service (PaaS): PaaS Clouds are essentially online programming and execution platforms offering a range of programming languages, together with a platform specific API used for provisioning resources such as storage and databases. Many platforms also provide integration with code repositories such as GitHub, allowing code to be pulled, built and deployed. Well known examples of Platform Clouds include Google App Engine (GAE) (Google Cloud Platform, 2017) and Heroku (Heroku, 2017).

Users do not have administrative rights on PaaS Clouds and are typically limited in their ability to configure or tune their environment to their requirements, or to install additional libraries or languages. Further, as the resource abstractions offered through the API are platform specific, the code developed on them is also platform specific, which raises concerns over lock-in and service availability. In part, the recent development of application containers, such as Docker (Docker, 2017), alleviates these issues, as they allow users to package all components of an application into a container which can be deployed and run on any host with a compatible kernel running the Docker daemon. Users have full control over library and language versions but, as noted, there is a dependency on the underlying kernel. This prevents, for example, Windows application containers running on Linux Docker hosts.
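By way of illustration, the sketch below uses the Docker SDK for Python to run a command inside a container on the local daemon; the image name is an arbitrary public Python image chosen purely for the example:

    import docker

    # Connect to the local Docker daemon.
    client = docker.from_env()

    # Run a short-lived container; its libraries and language version come
    # from the image, while the kernel is supplied by the host.
    output = client.containers.run(
        "python:3.11-slim",
        ["python", "-c", "print('hello from a container')"],
        remove=True,  # clean up the container once it exits
    )
    print(output.decode())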


The emergence of Container as a Service (CaaS) platforms such as EC2 Container Service (ECS) (Amazon Web Services, 2017) and Google Container Engine (Google Cloud Platform, 2017) is likely to change the current PaaS landscape, and indeed Heroku, one of the largest PaaS Clouds, is now transitioning to a managed container service. Container services have also enabled a further granularity in compute service, namely Functions as a Service (FaaS), where a specified piece of code is executed in response to some set of events. There are some limitations; on AWS, for example, Lambda only allows a maximum function execution time of 300 seconds. Further, for the purposes of billing, execution times are rounded up to the nearest 100ms (0.1 seconds). The execution of large quantities of short lived jobs therefore has the potential to incur high costs due to rounding, as the following example shows.
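The sketch below quantifies the rounding effect; the per-100ms price is hypothetical, and only the ratio between the two costs matters:

    import math

    PRICE_PER_100MS = 0.000000208  # hypothetical price per 100ms of execution

    def billed_cost(duration_ms, invocations):
        # Each invocation is billed in whole 100ms units, rounded up.
        billed_units = math.ceil(duration_ms / 100)
        return billed_units * PRICE_PER_100MS * invocations

    # One million 5ms jobs are each billed as 100ms: a 20x overcharge
    # relative to the compute actually consumed.
    print(billed_cost(5, 1_000_000))              # cost with rounding
    print(5 / 100 * PRICE_PER_100MS * 1_000_000)  # cost without rounding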
Due to concerns over their security model, container services based on Docker tend to make use of instances provisioned from IaaS Clouds. For example, an ECS container will run inside an instance provisioned from EC2, with the hypervisor providing a guarantee of isolation. Further, Lambda functions execute inside containers, and so will also be executing on an EC2 instance.
This illustrates how IaaS Clouds serve as the building blocks for many higher level Cloud abstractions; as such, the performance of the latter is determined by the former. A priori, we would expect the performance of container and function services to vary as the performance of the underlying instances varies. The adoption of a micro-service architecture for a workload frequently sees loosely coupled tasks deployed as either containers or functions. As such, we would also expect to see potentially complex performance interactions between the various micro-services, likely leading to degraded performance for the workload as a whole.
Many users, particularly non-technical ones, will not have a need to develop their own applications, and will simply want to consume software developed by others. This brings us to Software as a Service Clouds, which allow for this:
Software as a Service (SaaS): Software Clouds allow users to provision and manage their own instance of a software application, generally accessed via a web browser. A well-known example of a SaaS offering is salesforce.com, which offers customer relationship management (CRM) and business analytics software.
The SaaS Cloud is typically considered to sit above the PaaS layer, as users have even less control over the environment. In a PaaS Cloud, users may write and configure an application to suit their requirements. In SaaS, they are using applications written and published by others, and can only configure them within the scope the application allows.
In this section we have considered the types of resources that different service models provide; in the next section we consider the ownership and location of Cloud services.
