OpenNebulaConf 2016 - Hypervisors and Containers Hands-on Workshop by Jaime Melis, OpenNebula
TRANSCRIPT
Jaime Melis, OpenNebula Engineer // @j_melis
Hypervisors & Containers
OpenNebulaConf 2016, 4th edition
Agenda
Introduction
KVM
Virtual Infra Management
• Capacity management
• Multi-VM management
• Resource optimization
• HA and business continuity
OpenNebula
Cloud Management
• VDC multi-tenancy
• Simple cloud GUI and interfaces
• Service elasticity/provisioning
• Federation/hybrid
vCenter
VMware
OpenNebula
Reference Architecture
Basic and Advanced Implementations

Operating System (both): Supported OS (Ubuntu or CentOS/RHEL) on all machines, with the specific OpenNebula packages installed
Hypervisor (both): KVM
Networking: Basic = VLAN 802.1Q; Advanced = VXLAN
Storage: Basic = shared file system (NFS/GlusterFS) using the qcow2 format for the Image and System Datastores; Advanced = Ceph cluster for the Image Datastores, and a separate shared FS for the System Datastore
Authentication (both): native authentication or Active Directory
Reference Architecture
Front-end Hardware Recommendations

             Basic            Advanced
Memory       2 GB             4 GB
CPU          1 CPU (2 cores)  2 CPU (4 cores)
Disk size    100 GB           500 GB
Network      2 NICs           2 NICs
Reference Architecture
Network Implementations
Private Network
For communication between VMs.

Public Network
To serve VMs that need internet access.

Service Network
For front-end and virtualization node communication (including inter-node communication for live migration), as well as for storage traffic.

Storage Network
To serve the shared filesystem or the Ceph pools to the virtualization nodes.
Configuring Drivers
VM_MAD = [
NAME = "kvm",
SUNSTONE_NAME = "KVM",
EXECUTABLE = "one_vmm_exec",
ARGUMENTS = "-t 15 -r 0 kvm",
DEFAULT = "vmm_exec/vmm_exec_kvm.conf",
TYPE = "kvm",
KEEP_SNAPSHOTS = "no",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold,
release, suspend, resume, delete, reboot, reboot-hard, resched,
unresched, disk-attach, disk-detach, nic-attach, nic-detach,
snap-create, snap-delete" ]
Monitoring Hosts
Wed Oct 19 14:43:20 2016 [Z0][InM][D]: Monitoring host host01 (0)
Wed Oct 19 14:43:21 2016 [Z0][InM][D]: Host host01 (0)
successfully monitored.
Wed Oct 19 14:43:31 2016 [Z0][InM][D]: Host host01 (0)
successfully monitored.
Wed Oct 19 14:43:51 2016 [Z0][InM][D]: Host host01 (0)
successfully monitored.
...
Capacity
Attributes
● MEMORY
● CPU
● VCPU
Overcommitment
● RESERVED_CPU
● RESERVED_MEMORY
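The reserved capacity is subtracted from what the monitoring probes report before the scheduler places VMs. A minimal sketch of the arithmetic, with made-up numbers (OpenNebula expresses CPU in percent, where 100 = one physical core):

```shell
# Made-up numbers: an 8-core host, reserving one core for the hypervisor/OS.
TOTAL_CPU=800        # as reported by the monitoring probes (100 = 1 core)
RESERVED_CPU=100     # host attribute, e.g. set via `onehost update`
USABLE_CPU=$((TOTAL_CPU - RESERVED_CPU))
echo "$USABLE_CPU"   # capacity the scheduler can allocate: 700
```

RESERVED_MEMORY works the same way against the monitored memory total.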
Cgroups
What is it?
● Enforces the CPU assigned to a VM
● A VM with CPU=0.5 gets half the CPU of a VM with CPU=1.0
● You can limit the total memory used by the VMs
How?
● Check your distro
● Configuration goes on the hosts (not on the front-end)
● There is a cgroups service
● Enable it in /etc/libvirt/qemu.conf
● Add libvirt to /etc/cgrules.conf
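A sketch of the host-side configuration described above. The file locations are the usual defaults, and the cgrules line is only one common pattern; check your distro's libcgroup documentation (cgrules.conf(5)) before copying:

```
# /etc/libvirt/qemu.conf -- let libvirt use the cgroup controllers
cgroup_controllers = [ "cpu", "memory" ]

# /etc/cgrules.conf -- route libvirt-spawned VM processes into a cgroup
# <user>:<process>   <controllers>   <destination>
*:libvirtd           cpu,memory      virt/%p
```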
Fast VM Deployments
● Libvirt listens by default on a unix socket
● No concurrent operations
/etc/one/sched.conf
# MAX_HOST: Maximum number of Virtual Machines dispatched
# to a given host in each scheduling action
MAX_HOST = 1
● Enable TCP socket in libvirtd.conf
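A sketch of the relevant /etc/libvirt/libvirtd.conf settings (the port shown is libvirt's default; auth_tcp = "none" is only acceptable on an isolated, trusted service network, and libvirtd must also be started with --listen):

```
# /etc/libvirt/libvirtd.conf
listen_tcp = 1          # accept plain TCP connections (allows concurrency)
listen_tls = 0          # skip TLS if the network is trusted
tcp_port   = "16509"    # libvirt's default TCP port
auth_tcp   = "none"     # no authentication; isolated networks only
```

With concurrent libvirt operations possible, raising MAX_HOST in /etc/one/sched.conf becomes practical.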
RAW
If it's supported by Libvirt… it's supported by OpenNebula
RAW = [ type = "kvm",
data = "<devices>
<serial type=\"pty\"><source path=\"/dev/pts/5\"/><target
port=\"0\"/></serial>
<console type=\"pty\" tty=\"/dev/pts/5\"><source
path=\"/dev/pts/5\"/><target port=\"0\"/></console>
</devices>"
]
Libvirt Deployment File (XML)
Improve Performance
● Paravirtualized drivers
● Network
● Storage
Enable it by default:
/etc/one/vmm_exec/vmm_exec_kvm.conf
NIC = [ MODEL = "virtio" ]
/etc/one/oned.conf
DEFAULT_DEVICE_PREFIX = "vd"
virtio
Further Tips
KSM
● Kernel Samepage Merging
● Merges identical memory pages across VMs
● Increases VM density
● Enabled by default in CentOS
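KSM is controlled through sysfs on the KVM hosts; a quick way to check and enable it (standard Linux kernel paths, run as root):

```shell
cat /sys/kernel/mm/ksm/run            # 1 = merging enabled, 0 = disabled
echo 1 > /sys/kernel/mm/ksm/run       # enable page merging
cat /sys/kernel/mm/ksm/pages_sharing  # how many pages are currently shared
```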
SPICE
● Native in OpenNebula >= 4.12 (qxl display driver)
● Redirects printers, USB (mass storage), audio
Further Tips
Virsh Capabilities
/usr/share/libvirt/cpu_map.xml
OS = [ MACHINE = "..." ]
Cache
● Writethrough
  ○ host page cache on, disk write cache off
● Writeback
  ○ Good overall I/O performance
  ○ host page cache on, disk write cache on
● None
  ○ Good write performance
  ○ host page cache off, disk write cache on
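The cache mode can be chosen per disk in the VM template, or set as a KVM driver default; a sketch (the image name is made up):

```
# Per disk, in the VM template:
DISK = [ IMAGE = "centos7-base", CACHE = "writeback" ]

# Default for all KVM VMs, in /etc/one/vmm_exec/vmm_exec_kvm.conf:
DISK = [ driver = "qcow2", cache = "none" ]
```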
vCenter Approach
KVM
Virtual Infra Management
• Capacity management
• Multi-VM management
• Resource optimization
• HA and business continuity
OpenNebula
Cloud Management
• VDC multi-tenancy
• Simple cloud GUI and interfaces
• Service elasticity/provisioning
• Federation/hybrid
vCenter
VMware
OpenNebula
Reference Architecture
Description
Front-end: Supported OS (Ubuntu or CentOS/RHEL), with the specific OpenNebula packages installed
Hypervisor: VMware vSphere (managed through vCenter)
Networking: Standard and Distributed Switches (managed through vCenter)
Storage: Local and Networked (FC, iSCSI, SAS) (managed through vCenter)
Authentication: Native authentication or Active Directory
Summary of the implementation
VM_MAD = [
NAME = "vcenter",
SUNSTONE_NAME = "VMWare vCenter",
EXECUTABLE = "one_vmm_sh",
ARGUMENTS = "-p -t 15 -r 0 vcenter -s sh",
DEFAULT = "vmm_exec/vmm_exec_vcenter.conf",
TYPE = "xml",
KEEP_SNAPSHOTS = "yes",
IMPORTED_VMS_ACTIONS = "terminate, terminate-hard, hold,
release, suspend, resume, delete, reboot, reboot-hard, resched,
unresched, poweroff, poweroff-hard, disk-attach, disk-detach,
nic-attach, nic-detach, snap-create, snap-delete"
]
Configuring Drivers (Virtualization)
Configuring Drivers (Monitoring)
IM_MAD = [
NAME = "vcenter",
SUNSTONE_NAME = "VMWare vCenter",
EXECUTABLE = "one_im_sh",
ARGUMENTS = "-c -t 15 -r 0 vcenter" ]
vCenter Delegation
VMs
Templates
Networks
Overview
Key Points
● VMware workflows
● Leverages vMotion, HA, DRS
● Templates and Networks must exist
● Each vCenter cluster is a Host
  ○ OpenNebula chooses the Host (vCenter cluster)
  ○ VMware DRS chooses the ESX host
● VMware tools in guest OS
Limitations
● Security Groups
● Files passed in the Context
Connectivity
[Diagram: the OpenNebula front-end's VMM driver talks to vCenter over the VI API; vCenter manages the ESX hosts; VNC connections go to the ESX hosts]
Importing Clusters
● Sunstone can import vCenter Clusters
● The CLI tool also provides that functionality
● Manages subsequent import actions
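A hedged sketch of the CLI import (hostname and credentials are placeholders; the exact flags may differ between versions, so check `onevcenter --help`):

```shell
onevcenter hosts --vcenter vcenter.example.org \
                 --vuser 'Administrator@vsphere.local' \
                 --vpass 'changeme'
```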
Importing Templates
● A Template must already be defined in OpenNebula
● It must contain all the basic information needed for deployment
● During instantiation we can add an extra network, but not remove one
Importing Templates
● The Template includes the vCenter UUID
● "Keep VM Disks" is optional
Importing Templates
● The user can be asked about the Resource Pool and Datastore
Importing Networks
● The Network must exist in OpenNebula
● When importing, we can assign an IP range to the Network
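An IP range is expressed as an Address Range in the OpenNebula Virtual Network template; a minimal sketch with made-up addresses:

```
AR = [
  TYPE = "IP4",
  IP   = "192.168.100.10",   # first address of the range (example value)
  SIZE = "50"                # number of consecutive addresses
]
```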
Importing VMs
● Wild VMs can be imported
● After importing, the VMs can be managed by OpenNebula
● The following operations cannot be performed:
  ○ delete --recreate
  ○ undeploy
  ○ migrate
  ○ stop
Importing Datastores and VMDKs
● Available through the CLI and Sunstone
● Same mechanism as with VMs, Networks and Templates
Importing Datastores and VMDKs
vCenter datastores supported in OpenNebula
● Monitoring of Datastores and VMDKs
● VMDK creation
● VMDK upload
● VMDK cloning
● VMDK deletion
Persistent VMDK
VMDK Hotplug supported
● Attach disk
Contextualization
● Two supported contextualization methods:
  ○ vCenter Customizations
  ○ OpenNebula
● OpenNebula Contextualization works both for Windows and Linux.
● START_SCRIPT is supported
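A minimal CONTEXT section as it could appear in the VM template (the START_SCRIPT command is just an example):

```
CONTEXT = [
  NETWORK        = "YES",
  SSH_PUBLIC_KEY = "$USER[SSH_PUBLIC_KEY]",
  START_SCRIPT   = "yum -y install epel-release"
]
```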
Scheduling
● OpenNebula chooses a Host (vCenter Cluster)
● The specific ESX is selected by vCenter (DRS)
● The specific Cluster can be forced:
SCHED_REQUIREMENTS = "NAME=\"<vcenter_cluster>\""
Docker
Docker Machine
Docker-Machine
● Official Docker project
● Deploys your Docker hosts transparently
● Supports multiple backends
● Switch between your Docker hosts
Boot2Docker
Lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers.
http://boot2docker.io
Requirements
● OpenNebula Cloud
● Image for Docker Engine (Boot2Docker) & Network
● Docker Client Tools & Docker Machine
● Docker Machine OpenNebula Plugin
○ github.com/OpenNebula/docker-machine-opennebula
Docker Machine OpenNebula Plugin
docker-machine create \
--driver opennebula \
--opennebula-network-name private \
--opennebula-image-name boot2docker \
--opennebula-b2d-size 18192 \
my_docker_host
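Once the machine is created, the standard docker-machine workflow points the local client at it:

```shell
eval "$(docker-machine env my_docker_host)"   # export DOCKER_HOST etc.
docker info                                   # now talks to the new host
docker run -d -p 80:80 nginx                  # containers run on the OpenNebula VM
```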
Docker Swarm
● Native clustering for Docker
● Pools Docker hosts into a single, virtual Docker host
● Scales to multiple hosts
Rancher
● Complete platform for running containers
● Entire software stack
● Supports Docker Machine provisioning
OpenNebulaConf 2016, 4th edition
Platinum
Gold
Silver
Community
THANKS!