Agenda of the second half


Page 1: Agenda of the second half

Agenda of the second half

•Thursday Week 6: Dr. Ligang He

•Friday Week 6: Guest lecture

• Dr. Kang Jing, University of Cambridge

•Thursday Week 7: Dr. Ligang He

•Friday Week 7 – Thursday Week 9: Group presentations

•Friday Week 9: Guest Lecture

• Dr. Matt Ismail, Centre of Scientific Computing, University of Warwick

•Thursday Week 10: Guest Lecture

• Mystery guest

•Friday Week 10: Ending of the module


Page 2: Agenda of the second half

Cloud Computing

Dr. Ligang He

Page 3: Agenda of the second half

Outline

Background of Cloud computing

Key technology in Cloud computing

Popular Cloud systems in the world


Page 4: Agenda of the second half

Cloud Computing in Wikipedia

Cloud computing involves the provision of dynamically scalable and often virtualized resources as a service over the Internet.

Page 5: Agenda of the second half

Background of Cloud computing

Cloud computing is the further evolution and commercialisation of the following technologies:

parallel computing

distributed computing

Grid computing

Cloud computing is the combination of the following techniques:

Virtualization

Utility computing

IaaS

PaaS

SaaS


Page 6: Agenda of the second half

Comparisons between Cloud Computing and Grid Computing

Grid Computing | Cloud Computing
Different organisations | Single organisation
Heterogeneous resources | Homogeneous resources
Virtual organisation | Virtualised resources
Focus on scientific computing | Focus on data processing
No clear boundary between Client and Server | Client-server model
Free | Pay as you use
Standardized | No standard yet
Academia | Industry


Page 7: Agenda of the second half

Service categories in Cloud Computing

IaaS: Infrastructure as a Service

PaaS: Platform as a Service

SaaS: Software as a Service

[Diagram: the service stack, with SaaS layered on top of PaaS, which sits on top of IaaS]


Page 8: Agenda of the second half


Page 9: Agenda of the second half

Virtualization technology

A key technology in Cloud Computing

Two well-known products

Xen, developed at the University of Cambridge

VMWare

Allow multiple guest operating systems to share a computer

A running instance of a guest operating system is called a Virtual Machine

Can host up to a few hundred Virtual Machines


Page 10: Agenda of the second half

Xen Architecture

[Diagram: Xen architecture. Domain 0 (Domain Management & Control, running XenoLinux) and the Guest Domains (running XenoLinux or XenoWindows), each hosting applications, sit on top of the Xen Hypervisor, which runs directly on the Hardware.]


Page 11: Agenda of the second half

Xen components

•Hypervisor

• sits between the hardware and any operating systems

• responsible for CPU scheduling and memory partitioning of VMs

• controls the execution of VMs

• no knowledge of networking and I/O functions

•Domain 0:

• a modified Linux kernel

• Domain Management and Control

• Contains two drivers to access I/O and networking resources: the Network Backend Driver and the Block Backend Driver

• interact with the other VMs

Page 12: Agenda of the second half

Xen components

•Domain U

• Has no direct access to hardware

• Share resources with other domains (resources are virtualized)

• Modified operating systems (Linux, Windows, Solaris, UNIX)

•Contains two drivers for networking and I/O

• Network driver and Block driver

• Just sends the networking and I/O requests to Domain 0


Page 13: Agenda of the second half

Xen components

Two mechanisms exist to control interactions between the hypervisor and a domain

Hypercall

• Synchronous

• From a domain to the hypervisor

• The domain performs a synchronous trap into the hypervisor to perform a privileged operation

Event channel

• Asynchronous

• From the hypervisor to a domain

• Delivers notifications
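To make the two directions concrete, here is a minimal toy model in Python (all class and method names are invented for illustration; real Xen implements hypercalls as traps into the hypervisor and event channels as shared bitmaps plus notifications, not Python objects):

```python
# Toy model of the two hypervisor/domain interaction mechanisms.
# All names here are illustrative, not the real Xen interfaces.

from collections import deque

class Hypervisor:
    def hypercall(self, domain, operation):
        # Synchronous: the domain traps into the hypervisor and waits
        # for the privileged operation to complete.
        print(f"[hypervisor] performing '{operation}' for {domain.name}")
        return f"result of {operation}"

    def notify(self, domain, event):
        # Asynchronous: the hypervisor queues a notification on the
        # domain's event channel and returns immediately.
        domain.pending_events.append(event)

class Domain:
    def __init__(self, name, hypervisor):
        self.name = name
        self.hv = hypervisor
        self.pending_events = deque()   # stands in for the event channel

    def do_privileged_work(self):
        # e.g. updating page tables would require a hypercall
        return self.hv.hypercall(self, "update_page_table")

    def handle_events(self):
        # The domain drains its notifications when it next runs.
        while self.pending_events:
            print(f"[{self.name}] handling event: {self.pending_events.popleft()}")

hv = Hypervisor()
dom_u = Domain("DomU-1", hv)
dom_u.do_privileged_work()          # synchronous: domain -> hypervisor
hv.notify(dom_u, "I/O completed")   # asynchronous: hypervisor -> domain
dom_u.handle_events()
```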


Page 14: Agenda of the second half

•Full virtualisation

• No need to modify operating systems

• A domain is not aware of sharing the physical machine with other Domain Us, or of their existence

• VMWare

•ParaVirtualisation

• Modify operating systems

• Is aware it does not have direct access to the hardware

• Recognizes that other VMs are running on the same machine

• Xen


Page 15: Agenda of the second half

CPU virtualization

• Full virtualization

  • Dynamic binary translation

• Paravirtualization

  • Change the privilege level of the OS (Xen on x86)

    - Originally, the OS runs in ring 0; applications run in ring 3; rings 1 and 2 are unused

    - Change the OS to run in ring 1, and the hypervisor runs in ring 0

  • Modify some system calls in the OS into hypercalls (calling the functionality provided by the hypervisor)

  • Scheduling algorithms are used to share the CPU among VMs


Page 16: Agenda of the second half

Scheduling algorithms in Hypervisor

•SEDF: Simple Earliest Deadline First

• Each domain specifies its CPU requirement with a tuple (si, pi, xi), meaning that Domi requests to receive at least si units of CPU time in each period of length pi. The Boolean flag xi represents whether Domi is eligible to receive extra CPU time.

• SEDF gives the CPU to the domain that 1) has not received its requested share of CPU and 2) has the earliest deadline
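A minimal sketch of the SEDF selection rule, assuming each domain tracks how much of its slice si it has received in the current period pi (the data structure and field names are illustrative, not Xen's actual code):

```python
# Illustrative SEDF (Simple Earliest Deadline First) selection.
# Each domain requests (s, p, x): at least s units of CPU in every
# period of length p; x marks eligibility for extra (slack) time.

from dataclasses import dataclass

@dataclass
class Domain:
    name: str
    s: float          # requested CPU time per period
    p: float          # period length
    x: bool           # eligible for extra CPU time
    received: float   # CPU time received in the current period
    deadline: float   # end of the current period (absolute time)

def pick_next(domains):
    # 1) consider only domains that have not yet received their share
    needy = [d for d in domains if d.received < d.s]
    if needy:
        # 2) among those, run the one with the earliest deadline
        return min(needy, key=lambda d: d.deadline)
    # otherwise, optionally hand slack time to domains with x set
    extra = [d for d in domains if d.x]
    return min(extra, key=lambda d: d.deadline) if extra else None

doms = [Domain("Dom1", s=20, p=100, x=True,  received=5,  deadline=100),
        Domain("Dom2", s=30, p=100, x=False, received=30, deadline=80)]
print(pick_next(doms).name)   # Dom1: still owed CPU, earliest needy deadline
```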


Page 17: Agenda of the second half

Scheduling algorithms in Hypervisor

•Credit

• Each domain is assigned a weight and a cap

• A domain with a higher weight will get a larger share of CPU

• The cap fixes the maximum amount of CPU a domain will be able to consume. It is expressed as a percentage of one physical CPU: 100 is 1 physical CPU, 50 is half a CPU, 400 is 4 CPUs, etc.

• Each CPU manages a local run queue of VCPUs, sorted by VCPU priority

• A VCPU's priority can be UNDER or OVER, representing whether this VCPU has exceeded its fair share of CPU during a period

• When a VCPU is running, its credit is deducted by 100 every 10 ms

• If its credit is less than 0, the VCPU’s priority is set to OVER, otherwise it is UNDER


Page 18: Agenda of the second half

• All VCPUs waiting in the run-queue have their credits topped up once every 30 ms.

• The higher weight a domain has, the more credits are topped up for its VCPUs.

• The Credit scheduler can automatically load-balance the VCPUs across physical CPUs.

• When a CPU doesn't find a VCPU of priority UNDER on its local run queue, it will "steal" one from another physical CPU.

• This guarantees that no CPU idles when there is runnable work in the system
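The credit bookkeeping above can be summarised in a small illustrative sketch (it assumes, per the slides, a debit of 100 credits per 10 ms tick and a top-up every 30 ms in proportion to weight; the names, the top-up amount, and the omission of the cap are simplifications for illustration, not Xen's actual code):

```python
# Toy model of the Credit scheduler's bookkeeping (illustrative only).

UNDER, OVER = 0, 1        # UNDER = within fair share, OVER = exceeded it

class VCPU:
    def __init__(self, name, weight):
        self.name = name
        self.weight = weight
        self.credit = 0
        self.priority = UNDER

    def burn(self):
        # A running VCPU is debited 100 credits every 10 ms tick.
        self.credit -= 100
        self.priority = OVER if self.credit < 0 else UNDER

def top_up(vcpus, credits_per_round=300):   # illustrative top-up amount
    # Every 30 ms all VCPUs are topped up; a higher weight earns a
    # proportionally larger share of the new credits.
    total_weight = sum(v.weight for v in vcpus)
    for v in vcpus:
        v.credit += credits_per_round * v.weight / total_weight
        v.priority = OVER if v.credit < 0 else UNDER

def steal(idle_cpu_queue, other_queues):
    # Work stealing: a CPU with no local UNDER VCPU takes one from
    # another physical CPU's run queue, so no CPU idles needlessly.
    for q in other_queues:
        for v in q:
            if v.priority == UNDER:
                q.remove(v)
                idle_cpu_queue.append(v)
                return v
    return None

a, b = VCPU("vcpu-a", weight=256), VCPU("vcpu-b", weight=512)
for _ in range(3):
    a.burn()                  # a runs for 30 ms and goes OVER
top_up([a, b])                # b gets twice a's share of new credits
print(a.credit, a.priority, b.credit, b.priority)

run_q_cpu0, run_q_cpu1 = [], [b]          # CPU 0 has no UNDER VCPU locally
print("CPU 0 stole", steal(run_q_cpu0, [run_q_cpu1]).name)
```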


Page 19: Agenda of the second half

Memory virtualization

•MMU: Memory Management Unit

•Two-level memory addressing in a traditional system: virtual address → physical address

•Three-level memory addressing in a VM system: guest virtual address → guest physical address → machine address


•Hypervisor translates the guest physical memory address to machine memory address
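A minimal sketch of the resulting two-stage translation, with dictionaries standing in for page tables (all names are illustrative; real hypervisors do this with shadow page tables or hardware-assisted nested paging):

```python
# Toy two-stage address translation in a virtualised system.

PAGE_SIZE = 4096

guest_page_table = {0x1: 0x7}     # guest virtual frame -> guest physical frame
p2m_table        = {0x7: 0x42}    # guest physical frame -> machine frame

def translate(guest_virtual_addr):
    frame, offset = divmod(guest_virtual_addr, PAGE_SIZE)
    guest_physical_frame = guest_page_table[frame]    # level 1: guest OS
    machine_frame = p2m_table[guest_physical_frame]   # level 2: hypervisor
    return machine_frame * PAGE_SIZE + offset

addr = 0x1 * PAGE_SIZE + 0x10
print(hex(translate(addr)))       # machine address backing the guest page
```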

Page 20: Agenda of the second half

I/O virtualization

•Full virtualization

• Hypervisor can directly operate on the hardware devices

• When a Guest OS issues an I/O operation, the Hypervisor intercepts it, performs the actual I/O, and returns the results to the Guest OS

• Shortcoming: the hypervisor has to be developed to manage all hardware devices

•Paravirtualization

• Modify the Guest OS, so that when a Guest OS issues an I/O operation, it sends an I/O operation request to Domain 0

• Domain 0 performs the I/O and returns the results to the Guest OS
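A toy sketch of this split-driver I/O path (class and method names are invented for illustration; the real mechanism uses shared-memory rings and event channels between the guest's frontend driver and Domain 0's backend driver):

```python
# Toy model of paravirtualised I/O: the guest's frontend driver only
# forwards requests; Domain 0's backend driver touches the real device.

class Domain0Backend:
    def handle(self, request):
        # Domain 0 has the real device drivers and performs the actual I/O.
        print(f"[dom0] performing real I/O: {request}")
        return b"block data"          # result returned to the guest

class GuestFrontend:
    def __init__(self, backend):
        self.backend = backend        # stands in for the shared ring to dom0

    def read_block(self, block_no):
        # The guest OS never touches the hardware directly; it just
        # sends the request to Domain 0 and waits for the reply.
        return self.backend.handle(f"read block {block_no}")

frontend = GuestFrontend(Domain0Backend())
data = frontend.read_block(17)
print(f"[guest] received {len(data)} bytes")
```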
