CSL732 - Group 16
Project Report
Load Balancing Module for Baadal
Shantanu Chaudhary (2010CS50295)
Lovejeet Singh (2010CS50285)
Utkarsh Singh (2010CS50299)
Introduction:
The main motive of the project is to collect VM load data from each server and then create and
implement a migration plan to consolidate server utilization.
We added a new item to the Admin Menu called 'Consolidate Utilization'; clicking it
triggers the load balancing module.
Implementation: The project has the following sub-systems. The implementation of each is explained as we go:
1. Current Load - Collection of information from the Baadal Database/Hypervisor about all the
active VMs and their configurations.
Implementation – In the Baadal GitHub repository, inside the models folder, there is a Python
file named 'admin_model.py'. This file contains a function named 'get_all_vm_list()', which
returns the list of all VMs along with their host and RAM information. We can also get the
status of each of these VMs, which is one of RUNNING, SUSPENDED, or SHUTDOWN.
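As an illustration of how such a listing can be consumed, here is a minimal sketch (not Baadal's actual code; the dict keys and the choice to count only RUNNING VMs toward load are assumptions) that aggregates the used RAM per host:

```python
# Minimal sketch, not Baadal's actual code: aggregate used RAM per host
# from a VM listing shaped like the one get_all_vm_list() is described
# to return. The dict keys ("host", "ram_mb", "status") and the rule of
# counting only RUNNING VMs are assumptions for illustration.

def used_ram_per_host(vm_list):
    """Sum the RAM of RUNNING VMs on each host."""
    usage = {}
    for vm in vm_list:
        if vm["status"] == "RUNNING":
            usage[vm["host"]] = usage.get(vm["host"], 0) + vm["ram_mb"]
    return usage

vms = [
    {"name": "vm1", "host": "host-a", "ram_mb": 256, "status": "RUNNING"},
    {"name": "vm2", "host": "host-a", "ram_mb": 256, "status": "SHUTDOWN"},
    {"name": "vm3", "host": "host-b", "ram_mb": 512, "status": "RUNNING"},
]
print(used_ram_per_host(vms))  # {'host-a': 256, 'host-b': 512}
```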
2. Create Migration Plan – Processing of the load information to create a VM migration plan
so that each active server is more than 80% loaded.
Implementation – This task first requires the list of all hosts with their current load
information. In the same 'admin_model.py' there is a function named 'get_all_hosts()', which
returns the list of all hosts with their RAM and CPU information. After getting this
information we sort the hosts by RAM usage in descending order and move downwards from
the maximally utilized host until we find a host with less than 80% RAM usage. Once we have
such a host, we remove the best-fitting VM from the least utilized host and migrate it to this
host. This repeats until no such host is left: some hosts end up with no VMs (free servers,
i.e. zero load) and the remaining hosts carry more than 80% load, with the possible exception
of one host that still has less than 80% load in some cases.
We return the list of hosts with their usage and all the VM statistics, before and after the
migration. The free servers are shut down after the migration plan completes.
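The greedy strategy above can be sketched as a self-contained illustration (this is not the module's actual implementation; hosts are modelled as a dict of VM RAM sizes, and a uniform host capacity is assumed):

```python
# Sketch of the greedy consolidation plan described above, not the
# module's real code. `hosts` maps host name -> list of VM RAM sizes;
# `capacity` is the (assumed uniform) total RAM of each host.

def plan_consolidation(hosts, capacity, threshold=0.8):
    """Return a list of (vm_ram, source, target) migration moves."""
    moves = []
    while True:
        # Order hosts by used RAM, most loaded first.
        order = sorted(hosts, key=lambda h: sum(hosts[h]), reverse=True)
        # Walk down to the first host below the threshold: the target.
        target = next((h for h in order
                       if sum(hosts[h]) < threshold * capacity), None)
        src = order[-1]  # the least utilised host donates a VM
        if target is None or target == src or not hosts[src]:
            break
        free = capacity - sum(hosts[target])
        # Best fit: the largest VM on src that still fits on the target.
        fitting = [ram for ram in hosts[src] if ram <= free]
        if not fitting:
            break
        vm = max(fitting)
        hosts[src].remove(vm)
        hosts[target].append(vm)
        moves.append((vm, src, target))
    return moves

print(plan_consolidation({"a": [600], "b": [300]}, capacity=1000))
# [(300, 'b', 'a')]  (host b is freed and can be shut down)
```

Each move shifts a VM from a lower-loaded host to a higher-loaded one, so host loads only polarise and the loop terminates.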
Here is a small demonstration of the migration plan in action:
[Screenshot: the state of each host before the migration, in sorted order]
[Screenshot: the state of each host after the migration]
Tasks Performed: In this section, we describe the methods we wrote in our load balancer module.
For Backend:
get_used_ram_of_host(host): Returns the used RAM for the current host from all of its VMs.
get_used_vcpus_of_host(host): Returns the number of CPUs utilised for the current host.
find_new_host_consolidate(RAM, vCPU, hostList, usedRAMList, usedvCPUList): Given a VM's RAM and
vCPU requirement as input, along with the list of hosts and their respective used RAM and CPUs, this function
returns the host ID and IP address of a host whose RAM utilisation is less than 80% and which can
accommodate the given VM configuration (RAM, vCPU). Returns None if no such host is found.
consolidate_vms(): First of all it creates a list of the hosts sorted by their RAM utilisation (least
utilised first); then, for each host in this ordered list, it iterates over all of that host's VMs and calls
find_new_host_consolidate() on them. After these iterations, it creates a list of tuples mapping the
consolidated VMs to their new hosts; a migration task for each of these VMs is then added to
the queue by calling add_vm_task_to_queue() with appropriate parameters.
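The selection rule of find_new_host_consolidate() can be pictured with a small stand-in. The parallel-list layout mirrors the signature above, but the uniform capacities (total_ram, total_vcpus) are hypothetical, and the real function is described as returning a host ID and IP address rather than just a name:

```python
# Illustrative stand-in for the selection rule described above; not the
# real Baadal function. total_ram / total_vcpus are assumed to be
# uniform per-host capacities.

def find_new_host_consolidate(ram, vcpu, host_list, used_ram_list,
                              used_vcpu_list, total_ram, total_vcpus):
    for host, used_ram, used_vcpu in zip(host_list, used_ram_list,
                                         used_vcpu_list):
        below_threshold = used_ram < 0.8 * total_ram
        fits = (used_ram + ram <= total_ram
                and used_vcpu + vcpu <= total_vcpus)
        if below_threshold and fits:
            return host  # the real function returns (host_id, ip_address)
    return None  # no host can take the VM

# A 256 MB / 1 vCPU VM fits on h2 but not on the nearly full h1:
print(find_new_host_consolidate(256, 1, ["h1", "h2"],
                                [900, 200], [4, 1], 1024, 5))  # h2
```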
For Controller:
consolidate_utilization(): This method is part of the front end. It takes the call from the admin console and
passes the command to the backend. Broadly, it does the following:
1. calls consolidate_vms(),
2. gets the number of VMs migrated in return from that call,
3. alerts the user and forwards the page to show the task list with the migrate commands for the shifted VMs.
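The three steps above can be pictured as the following glue code (a hedged sketch: the stubbed backend and the returned dict are illustrative, not the actual web2py controller):

```python
# Hypothetical controller glue mirroring the three steps above.
# consolidate_vms() is stubbed out here; in the module it builds the
# plan and queues the migration tasks, returning how many VMs were
# scheduled to move.

def consolidate_vms():
    return 2  # stand-in: pretend two VMs were queued for migration

def consolidate_utilization():
    migrated = consolidate_vms()                        # steps 1 and 2
    alert = "%d VM(s) queued for migration" % migrated
    return {"alert": alert, "redirect": "task_list"}    # step 3

print(consolidate_utilization()["alert"])
```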
Source Code:
[Code listing not reproduced in this extract]
Tests Conducted: We tested our load balancing module in two parts, increasing the number of hosts we
had on the sandbox in each part. For the first part of the test we took 2 hosts (1 GB RAM, 5 CPUs each)
and created 4 test VMs, each configured with 0.25 GB RAM and 1 CPU.
Test Case 1):
Initially each host had 2 VMs.
Next, we click on 'Consolidate Utilization'.
[Screenshot: the migration appears in the pending tasks list]
[Screenshot: the completed migration of the VMs onto 1 host; they were initially on 2 hosts]
Test Case 2):
Test Case 3):
Test Case 4):