Fuel for OpenStack v3.1
User Guide

Contents

• Introducing Fuel for OpenStack
• About Fuel
• How Fuel Works
• Deployment Configurations Provided By Fuel
• Supported Software Components
• Download Fuel
• Release Notes
• New Features in Fuel 3.1
• Resolved Issues in Fuel 3.1
• Known Issues in Fuel 3.1
• Reference Architectures
• Overview
• Simple (non-HA) Deployment
• Multi-node (HA) Deployment (Compact)
• Multi-node (HA) Deployment (Full)
• Details of HA Compact Deployment
• Red Hat OpenStack Architectures
• HA Logical Setup
• Cluster Sizing
• Network Architecture
• Technical Considerations
• Production Considerations
• Sizing Hardware for Production Deployment
• Redeploying An Environment
• Large Scale Deployments
• Create an OpenStack cluster using Fuel UI
• Installing Fuel Master Node
• Understanding and Configuring the Network
• Fuel Deployment Schema
• Network Issues
• Red Hat OpenStack Deployment Notes
• Overview
• Post-Deployment Check
• Deploy an OpenStack cluster using Fuel CLI
• Understanding the CLI Deployment Workflow
• Deploying OpenStack Cluster Using CLI
• Configuring Nodes for Provisioning
• Configuring Nodes for Deployment
• Configure Deployment Scenario
• Finally Triggering the Deployment
• Testing OpenStack Cluster
• FAQ (Frequently Asked Questions) and HowTos
• Common Technical Issues
• How HA with Pacemaker and Corosync Works
• HowTo Notes
• Other Questions
• Fuel License
• Index

Introducing Fuel for OpenStack

OpenStack is an extensible, versatile, and flexible cloud management platform. By exposing its portfolio of cloud infrastructure services (compute, storage, networking, and other core resources) through REST APIs, OpenStack enables a wide range of control over these services, both from the perspective of an integrated Infrastructure as a Service (IaaS) controlled by applications and through automated manipulation of the infrastructure itself.

This architectural flexibility doesn't set itself up magically. It asks you, the user and cloud administrator, to organize and manage an extensive array of configuration options. Consequently, getting the most out of your OpenStack cloud over time (in terms of flexibility, scalability, and manageability) requires a thoughtful combination of complex configuration choices. This can be very time consuming and requires that you become familiar with a lot of documentation from a number of different projects.

Mirantis Fuel for OpenStack was created to eliminate exactly these problems. This step-by-step guide takes you through the process of:

• Configuring OpenStack and its supporting components into a robust cloud architecture

• Deploying that architecture through an effective, well-integrated automation package that sets up and maintains the components and their configurations

• Providing access to a well-integrated, up-to-date set of components known to work together

Fuel for OpenStack can be used to create virtually any OpenStack configuration. To make things easier, the installation includes several pre-defined architectures. For the sake of simplicity, this guide emphasizes a single, common reference architecture: the multi-node, high-availability configuration. We begin with an explanation of this architecture, then move on to the details of creating the configuration in a test environment using VirtualBox. Finally, we give you the information you need to create this and other OpenStack architectures in a production environment.

This guide assumes that you are familiar with general Linux commands and administration concepts, as well as general networking concepts. You should have some familiarity with grid or virtualization systems such as Amazon Web Services or VMware, as well as OpenStack itself, but you don't need to be an expert.

    The Fuel User Guide is organized as follows:

About Fuel, gives you an overview of Fuel and a general idea of how it works.

    Reference Architectures, provides a general look at the components that make up OpenStack.

Create an OpenStack cluster using Fuel UI, takes you step-by-step through the process of creating a high-availability OpenStack cluster using Fuel UI.

Deploy an OpenStack cluster using Fuel CLI, takes you step-by-step through the more advanced process of creating a high-availability OpenStack cluster using the command line and Puppet manifests.

Production Considerations, looks at the real-world questions and problems involved in creating an OpenStack cluster for production use. We discuss issues such as network layout and hardware requirements, and provide tips and tricks for creating a cluster of up to 100 nodes.

With the current (3.1) release, Fuel UI (aka FuelWeb) and Fuel CLI (aka Fuel Library) are integrated. We encourage all users to use the Fuel UI for installation and configuration. However, the standard Fuel CLI installation process is still available for those who prefer a more detailed approach to deployment. Even with a utility as powerful as Fuel, creating an OpenStack cluster can be complex, and the FAQ (Frequently Asked Questions) and HowTos section covers many of the issues that tend to arise during the process.


Let's start off by taking a closer look at Fuel itself. We'll begin by explaining How Fuel Works and then move on to the installation process itself.


About Fuel

• How Fuel Works
• Deployment Configurations Provided By Fuel
• Supported Software Components
• Download Fuel

How Fuel Works

Fuel is a ready-to-install collection of the packages and scripts you need to create a robust, configurable, vendor-independent OpenStack cloud in your own environment. As of Fuel 3.1, Fuel Library and Fuel Web have been merged into a single toolbox with options to use the UI or CLI for management.

A single OpenStack cloud consists of packages from many different open source projects, each with its own requirements, installation procedures, and configuration management. Fuel brings all of these projects together into a single open source distribution, with components that have been tested and are guaranteed to work together, and all wrapped up using scripts to help you work through a single installation.

Simply put, Fuel is a way for you to easily configure and install an OpenStack-based infrastructure in your own environment.


Fuel works on a simple premise. Rather than installing each of the components that make up OpenStack directly, you instead use a configuration management system like Puppet to create scripts that can provide a configurable, reproducible, sharable installation process.

    In practice, Fuel works as follows:

    1. First, set up Fuel Master Node using the ISO. This process only needs to be completed once per installation.

    2. Next, discover your virtual or physical nodes and configure your OpenStack cluster using the Fuel UI.

3. Finally, deploy your OpenStack cluster on the discovered nodes. Fuel will perform all deployment magic for you by applying pre-configured and pre-integrated Puppet manifests via the Astute orchestration engine.

Fuel is designed to enable you to maintain your cluster while giving you the flexibility to adapt it to your own configuration.


Fuel comes with several pre-defined deployment configurations, some of which include additional configuration options that allow you to adapt the OpenStack deployment to your environment.

Fuel UI integrates all of the deployment scripts into a unified, Web-based Graphical User Interface that walks administrators through the process of installing and configuring a fully functional OpenStack environment.


Deployment Configurations Provided By Fuel

One of the advantages of Fuel is that it comes with a number of pre-built deployment configurations that you can use to quickly build your own OpenStack cloud infrastructure. These are well-specified configurations of OpenStack and its constituent components that are expertly tailored to one or more common cloud use cases. Fuel provides the ability to create the following cluster types without requiring extensive customization:

Simple (non-HA): The Simple (non-HA) installation provides an easy way to install an entire OpenStack cluster without requiring the degree of increased hardware involved in ensuring high availability.

Multi-node (HA): When you're ready to begin your move to production, the Multi-node (HA) configuration is a straightforward way to create an OpenStack cluster that provides high availability. With three controller nodes and the ability to individually specify services such as Cinder, Neutron (formerly Quantum), and Swift, Fuel provides the following variations of the Multi-node (HA) configuration:

Compact HA: When you choose this option, Swift will be installed on your controllers, reducing your hardware requirements by eliminating the need for additional Swift servers while still addressing high availability requirements.

Full HA: This option enables you to install independent Swift and Proxy nodes, so that you can separate their operation from your controller nodes.

In addition to these configurations, Fuel is designed to be completely customizable. For deeper customization options based on the included configurations, you can contact Mirantis for further assistance.


Supported Software Components

Fuel has been tested and is guaranteed to work with the following software components:

• Operating Systems
  • CentOS 6.4 (x86_64 architecture only)
  • RHEL 6.4 (x86_64 architecture only)
• Puppet (IT automation tool) 2.7.19
• MCollective 2.3.1
• Cobbler (bare-metal provisioning tool) 2.2.3
• OpenStack Grizzly release 2013.1.2
• Hypervisor
  • KVM
  • QEMU
• Open vSwitch 1.10.0
• HA Proxy 1.4.19
• Galera 23.2.2
• RabbitMQ 2.8.7
• Pacemaker 1.1.8
• Corosync 1.4.3


Download Fuel

The first step in installing Fuel is to download the version appropriate to your environment.

Fuel is available for Essex, Folsom, and Grizzly OpenStack installations, and will be available for Havana shortly after Havana's release.

    The Fuel ISO and IMG, along with other Fuel releases, are available in the Downloads section of the Fuel portal.


Release Notes

New Features in Fuel 3.1

• Fuel 3.1 with Integrated Graphical and Command Line controls
• Option to deploy Red Hat Enterprise Linux OpenStack Platform
• Mirantis OpenStack Health Check
• Ability to deploy properly in networks that are not utilizing VLAN tagging
• Improved High Availability resiliency
• Horizon password entry can be hidden
• Full support of Neutron (Quantum) networking engine

Fuel 3.1 with Integrated Graphical and Command Line controls

In earlier releases, Fuel was distributed as two packages: Fuel Web for the graphical workflow, and Fuel Library for command-line based manipulation. Starting with this 3.1 release, we've integrated these two capabilities into a single offering, referred to simply as Fuel. If you used Fuel Web, you'll see that capability, along with its latest improvements, in the Fuel User Interface (UI), providing a streamlined, graphical console that enables a point-and-click experience for the most commonly deployed configurations. Advanced users with more complex environmental needs can still get command-line access to the underlying deployment engine (aka Fuel Library).

Option to deploy Red Hat Enterprise Linux OpenStack Platform

Mirantis Fuel now includes the ability to deploy the Red Hat Enterprise Linux OpenStack Platform (a solution that includes both Red Hat Enterprise Linux and the Red Hat OpenStack distribution). During the definition of a new environment, the user will be presented with the option of either installing the Mirantis-provided OpenStack distribution onto CentOS-powered nodes or installing the Red Hat-provided OpenStack distribution onto Red Hat Enterprise Linux-powered nodes.

    Note

A Red Hat subscription is required to download and deploy Red Hat Enterprise Linux OpenStack Platform.

Mirantis OpenStack Health Check

New in this release is the Mirantis OpenStack Health Check, which can be accessed through a tab in the Fuel UI. The OpenStack Health Check is a battery of tests that can be run against an OpenStack deployment to ensure that it is installed properly and operating correctly. The suite of tests exercises not only the core components within OpenStack, but also the added packages included in the Mirantis OpenStack distribution. Tests can be run individually or in groups. A full list of available tests can be found in the documentation.


Ability to deploy properly in networks that are not utilizing VLAN tagging

In some environments, it may not be possible or desirable to utilize VLANs to segregate network traffic. In these networks, Fuel can now be configured through the Fuel UI to deploy without VLAN tagging. This configuration option is available through the Network Settings tab.

Improved High Availability resiliency

To improve the resiliency of the Mirantis OpenStack High Availability reference architecture, Fuel now deploys services (including the Neutron (Quantum) agents, HAProxy, and Galera or MySQL native master/slave replication) under Pacemaker from ClusterLabs. Neutron (Quantum) agents now support seamless failover with metadata proxy and agent support, allowing minimal downtime for Neutron (Quantum)-enabled cluster networking. The Galera/MySQL replication engine now supports automatic cluster reassembly after the entire cluster is rebooted.

Horizon password entry can be hidden

In the OpenStack settings tab, the input of the password used for Horizon access can now be hidden by clicking the eye icon to the left of the field. This icon acts as a toggle between hidden and visible input modes.

Full support of Neutron (Quantum) networking engine

All the features of the Neutron (Quantum) OpenStack virtual networking implementation, including the network namespaces feature that allows virtual networks to overlap, are now supported by Fuel. This improvement also enables Neutron (Quantum) to work properly with Open vSwitch GRE tunnels. This capability is currently supported only with the Mirantis OpenStack distribution and the CentOS kernel (2.6.32-358) included with Fuel.

Resolved Issues in Fuel 3.1

• Disk Configuration now displays proper size and validates input
• Improved behavior for allocating space
• Eliminated the need to specify a bootable disk
• Floating IP allocation speed increased
• Ability to map logical networks to physical interfaces
• Separate Logical Volume Manager (LVM) now used for Glance storage
• Memory leaks in nailgun service
• Network Verification failures
• Installing Fuel Master node onto a system with em# network interfaces
• Provisioning failure on large hard drives
• Access to OpenStack API or VNC console in Horizon when running in VirtualBox
• Other resolved issues


Disk Configuration now displays proper size and validates input

Previously, the Total Space displayed in the Disk Configuration screen was slightly larger than what was actually available. This has now been corrected to be accurate. In addition, user input validation has been improved when making changes to ensure that space is not incorrectly allocated. And finally, the unit of measure in the Disk Configuration screen has been changed from GB to MB.

Improved behavior for allocating space

In Fuel 3.0.x, users were forced to manually zero out fields in the Disk Configuration screen if the total allocated space exceeded the total disk size before the "USE ALL UNALLOCATED SPACE" option could be utilized. Now you can enter a value above the maximum available for a volume group (as long as it does not exceed total disk size), select "USE ALL UNALLOCATED SPACE" for a second volume group, and that group will be assigned the available space up to the maximum disk size. In addition, the current allocated sizes are reflected graphically in the bars above the volume group.

Eliminated the need to specify a bootable disk

Previously, the Disk Configuration screen had a special Make bootable option. This release of Fuel makes the option unnecessary because Fuel now installs a Master Boot Record (MBR) and boot partition on all hard drives. The BIOS can now be configured to load from any disk and the node will boot the operating system. Because of this, the Make bootable option has been removed.

Floating IP allocation speed increased

In Fuel 3.0.x, the step of floating IP allocation was taking significant time. During cluster provisioning, it was taking up to 8 minutes to create a pool of 250 floating IP addresses. This has now been reduced to seconds.

Ability to map logical networks to physical interfaces

With the introduction of the ability to deploy properly in networks that are not utilizing VLAN tagging, it is now possible to map logical OpenStack networks to physical interfaces without using VLANs.

Separate Logical Volume Manager (LVM) now used for Glance storage

Glance storage was previously configured to use a root partition on a controller node. Because of this, in HA mode, Swift was configured to use only 5 GB of storage. A user was unable to load large images into Glance in HA mode and could receive an out-of-space error message if a small root partition were used. This situation has been corrected by creating a dedicated LVM volume for Glance storage. You can modify the size of this partition in the Disk Configuration screen.

Memory leaks in nailgun service

Nailgun is the RESTful API backend service used in Fuel. In 3.0.1, an increase in memory consumption could occur over time. This has now been fixed.


Network Verification failures

In some cases, the "Verify Networks" option in the Network configuration tab reported a connectivity problem even though manual checks confirmed that the connection was fine. The problem was identified as a loss of packets when a particular Python library was used. That library has been replaced and verification now functions properly.

Installing Fuel Master node onto a system with em# network interfaces

In Fuel 3.0.1, a fix was included to recognize network interfaces that start with em (meaning "embedded") instead of eth. However, the fix only applied to the Slave nodes used to deploy OpenStack components; the Fuel Master node was still affected. This has now been corrected, and Fuel can be deployed on machines where the operating system uses the em prefix instead of eth.

Provisioning failure on large hard drives

In previous releases, when ext4 was used as the file system for a partition, provisioning would fail in some cases for large volumes (larger than 16 TB). Ext4 has been replaced with the XFS file system, which works well on large volumes.

Access to OpenStack API or VNC console in Horizon when running in VirtualBox

Previously, it was impossible to access the OpenStack API or the VNC console in Horizon when running an OpenStack environment created in VirtualBox by the Mirantis VirtualBox demo scripts. This was caused by an inability to create a route to the OpenStack public network from the host system due to a lack of VLAN tags. With the introduction of the ability to deploy properly in networks that are not utilizing VLAN tagging, it is now possible to create the route. Information on how to create this route is documented in the user guide.

Other resolved issues

If CPU speed could not be determined through an operating-system-level query on a slave node, that node would not register properly with the Fuel Master node. This issue has been corrected so that the node registers even if some information about it is unavailable.


Known Issues in Fuel 3.1

• Limited Support for OpenStack Grizzly
• Nagios deployment is disabled
• Ability to deploy Swift and Neutron (Quantum) is limited to Fuel CLI
• Ability to add new nodes without redeployment
• Ability to deploy properly in networks that are not utilizing VLAN tagging
• Time synchronization failures in a VirtualBox environment
• If a controller's root partition runs out of space, the controller fails to operate
• The "Create instance volume" test in the Mirantis OpenStack Healthcheck tab has a wrong result for attachment volumes
• Other Limitations

Limited Support for OpenStack Grizzly

The following improvements in Grizzly are not currently supported directly by Fuel:

• Nova Compute
  • Cells
  • Availability zones
  • Host aggregates
• Neutron (formerly Quantum)
  • LBaaS (Load Balancer as a Service)
  • Multiple L3 and DHCP agents per cloud
• Keystone
  • Multi-factor authentication
  • PKI authentication
• Swift
  • Regions
  • Adjustable replica count
  • Cross-project ACLs
• Cinder
  • Support for FCoE
  • Support for LIO as an iSCSI backend
  • Support for multiple backends on the same manager
• Ceilometer
• Heat

It is expected that these capabilities will be supported in future releases of Fuel.

In addition, support for High Availability of Neutron (Quantum) on Red Hat Enterprise Linux (RHEL) is not available due to a limitation within the Red Hat kernel. It is expected that this issue will be addressed by a patch to RHEL in late 2013.

Nagios deployment is disabled

Due to instability of the PuppetDB and Nagios manifests, we decided to temporarily disable the Nagios deployment feature. We plan to re-enable this feature in the next release with improved and much more stable manifests.

Ability to deploy Swift and Neutron (Quantum) is limited to Fuel CLI

At this time, customers wishing to deploy Swift or Neutron (Quantum) will need to do so through the Fuel CLI. An option to deploy these components as standalone nodes is not currently present in the Fuel UI. It is expected that a near-future release will enable this capability.

Ability to add new nodes without redeployment

It's possible to add new compute and Cinder nodes to an existing OpenStack environment. However, this capability cannot yet be used to deploy additional controller nodes in HA mode.

Ability to deploy properly in networks that are not utilizing VLAN tagging

While this feature is included in Fuel and fully supported, network environments can be complex and Mirantis has not exhaustively identified all of the configurations where it works properly. Fuel does not prevent the user from creating an environment that may not work properly, although the Verify Networks function will confirm necessary connectivity. As Mirantis discovers environments where a lack of VLAN tagging causes issues, they will be further documented. Currently, a known limitation is that untagged networks should not be mapped to the physical network interface that is used for PXE provisioning. Another known situation occurs when the user separates the public and floating networks onto different physical interfaces without VLAN tagging, which will cause deployment to fail.

Time synchronization failures in a VirtualBox environment

If the ntpd service fails on the Fuel Master node, the nodes in the environment will fall out of sync. OpenStack identifies services as broken if time synchronization is broken, which will cause the "Services list availability" test in the Mirantis OpenStack Health Check to fail. In addition, instances may fail to boot. This issue appears to be limited to VirtualBox environments, as it could not be replicated on KVM or physical hardware deployments.

If a controller's root partition runs out of space, the controller fails to operate

Logging is configured to send most messages over rsyslog, and disk-space-consuming services (such as Cinder and Compute) use their own logical volumes. However, if processes write to the root partition and the root partition runs out of disk space, the controller will fail.


  • The "Create instance volume" test in the Mirantis OpenStack Healthcheck tab has a wrongresult for attachment volumesThe "Create instance volume" test is designed to confirm that a volume can be created. However, even ifOpenStack fails to attach the volume to the VM, the test still passes.

    Other Limitations:

When using the Fuel UI, IP addresses for Slave nodes (but not the Master node) are assigned via DHCP during PXE booting from the Master node. Because of this, even after installation, the Fuel Master node must remain available and continue to act as a DHCP server.

When using the Fuel UI, the floating VLAN and public networks must use the same L2 network. In the UI, these two networks are locked together, and can only run via the same physical interface on the server.

Deployments done through the Fuel UI create all networks on all servers, even if they are not required by a specific role (e.g., a Cinder node will have VLANs created and addresses obtained from the public network).

Some OpenStack services listen on all interfaces, which may be detected and reported by security audits or scans. Please discuss this issue with your security administrator if it is of concern in your organization.

The provided scripts that enable Fuel to be automatically installed on VirtualBox will create separate host interfaces. If a user associates logical networks with different physical interfaces on different nodes, it will lead to network connectivity issues between OpenStack components. Please check whether this has happened prior to deployment by clicking the Verify Networks button on the networking tab.

The networks tab was redesigned to allow the user to provide IP ranges instead of CIDRs; however, not all user input is properly verified. Entering a wrong value may cause failures in deployment.

Fuel UI may not reflect changes in NICs or disks after initial discovery, which can lead to failures in deployment. In other words, if a user powers on a node, it gets discovered, and then some disks are replaced or network cards are added or removed, the changed hardware may not be rediscovered correctly. For example, the Total Space displayed in the Disk Configuration screen may differ from the actual size of the disk.

Neutron (Quantum) Metadata API agents in High Availability mode are only supported for the Compact and Full scenarios if network namespaces (netns) are not used.

    The Neutron (Quantum) namespace metadata proxy is not supported unless netns is used.

Neutron (Quantum) multi-node balancing conflicts with Pacemaker, so the two should not be used together in the same environment.

When deploying Neutron (Quantum) with the Fuel CLI, and when virtual machines need access to the Internet and/or external networks, you need to set the floating network prefix and public_address so that they do not intersect with the network external interface to which it belongs. This is due to specifics of how Neutron (Quantum) sets Network Address Translation (NAT) rules and a lack of namespaces support in CentOS 6.4.

In environments with a large number of tenant networks (e.g., over 300), network verification may stop responding. In these cases, the networks themselves are unaffected and it is only the test that ceases to function correctly.


Reference Architectures

• Overview
• Simple (non-HA) Deployment
• Multi-node (HA) Deployment (Compact)
• Multi-node (HA) Deployment (Full)
• Details of HA Compact Deployment
• Red Hat OpenStack Architectures
  • Simple (non-HA) Red Hat OpenStack Deployment
  • Multi-node (HA) Red Hat OpenStack Deployment (Compact)
• HA Logical Setup
  • Controller Nodes
  • Compute Nodes
  • Storage Nodes
• Cluster Sizing
• Network Architecture
  • Public Network
  • Internal (Management) Network
  • Private Network
• Technical Considerations
  • Neutron vs. nova-network
  • Cinder vs. nova-volume
  • Object Storage Deployment

Overview

Before you install any hardware or software, you must know what you're trying to achieve. This section looks at the basic components of an OpenStack infrastructure and organizes them into one of the more common reference architectures. You'll then use that architecture as a basis for installing OpenStack in the next section.

    As you know, OpenStack provides the following basic services:

Compute: Compute servers are the workhorses of your installation; they're the servers on which your users' virtual machines are created. nova-scheduler controls the life-cycle of these VMs.

Networking: Because an OpenStack cluster (virtually) always includes multiple servers, the ability for them to communicate with each other and with the outside world is crucial. Networking was originally handled by the nova-network service, but it has given way to the newer Neutron (formerly Quantum) networking service. Authentication and authorization for these transactions are handled by keystone.

Storage: OpenStack provides for two different types of storage: block storage and object storage. Block storage is traditional data storage, with small, fixed-size blocks that are mapped to locations on storage media. At its simplest level, OpenStack provides block storage using nova-volume, but it is common to use cinder.

Object storage, on the other hand, consists of single variable-size objects that are described by system-level metadata, and you can access this capability using swift.

OpenStack storage is used for your users' objects, but it is also used for storing the images used to create new VMs. This capability is handled by glance.

These services can be combined in many different ways. Out of the box, Fuel supports the following deployment configurations:

• Non-HA Simple
• HA Compact
• HA Full
• RHOS Non-HA Simple
• RHOS HA Compact


Simple (non-HA) Deployment

In a production environment, you will never have a Simple non-HA deployment of OpenStack, partly because it forces you to make a number of compromises as to the number and types of services that you can deploy. It is, however, extremely useful if you just want to see how OpenStack works from a user's point of view.

More commonly, your OpenStack installation will consist of multiple servers. Exactly how many is up to you, of course, but the main idea is that your controller(s) are separate from your compute servers, on which your users' VMs will actually run. One arrangement that will enable you to achieve this separation while still keeping your hardware investment relatively modest is to house your storage on your controller nodes.


Multi-node (HA) Deployment (Compact)

Production environments typically require high availability, which involves several architectural requirements. Specifically, you will need at least three controllers, and certain components will be deployed in multiple locations to prevent single points of failure. That's not to say, however, that you can't reduce hardware requirements by combining your storage, network, and controller nodes:

We'll take a closer look at the details of this deployment configuration in the Details of HA Compact Deployment section.


Multi-node (HA) Deployment (Full)

For large production deployments, it's more common to provide dedicated hardware for storage. This architecture gives you the advantages of high availability, and the clean separation of storage and controller functionality makes your cluster more maintainable:

Where Fuel really shines is in the creation of more complex architectures, so in this document you'll learn how to use Fuel to easily create a multi-node HA OpenStack cluster. To reduce the amount of hardware you'll need to follow the installation, however, the guide focuses on the Multi-node HA Compact architecture.


Details of HA Compact Deployment

In this section, you'll learn more about the Multi-node (HA) Compact deployment configuration and how it achieves high availability. As you may recall, this configuration looks something like this:

OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. So redundancy for stateless OpenStack API services is implemented through the combination of Virtual IP (VIP) management using Pacemaker and load balancing using HAProxy. Stateful OpenStack components, such as the state database and messaging server, rely on their respective active/active and active/passive modes for high availability. For example, RabbitMQ uses built-in clustering capabilities, while the database uses MySQL/Galera replication.


Let's take a closer look at what an OpenStack deployment looks like, and what it takes to achieve high availability for such a deployment.


Red Hat OpenStack Architectures

Red Hat has partnered with Mirantis to offer an end-to-end supported distribution of OpenStack powered by Fuel. Because Red Hat offers support for a subset of all available open source packages, the reference architecture has been slightly modified to meet Red Hat's support requirements and still provide a highly available OpenStack cluster.

    Below is the list of modifications:

Database backend: MySQL with Galera has been replaced with native replication in a Master/Slave configuration. The MySQL master is elected via Corosync, and master and slave status is managed via Pacemaker.

Messaging backend: RabbitMQ has been replaced with QPID. QPID is an AMQP provider that Red Hat offers, but it cannot be clustered in Red Hat's offering. As a result, Fuel configures three non-clustered, independent QPID brokers. Fuel still offers HA for the messaging backend via virtual IP management provided by Corosync.

Nova networking: Quantum is not available for Red Hat OpenStack because the Red Hat kernel lacks GRE tunneling support for Open vSwitch. This issue should be fixed in a future release. As a result, Fuel for the Red Hat OpenStack Platform will only support Nova networking.

Simple (non-HA) Red Hat OpenStack Deployment

In a production environment, you will never have a Simple non-HA deployment of OpenStack, partly because it forces you to make a number of compromises as to the number and types of services that you can deploy. It is, however, extremely useful if you just want to see how OpenStack works from a user's point of view.

More commonly, your OpenStack installation will consist of multiple servers. Exactly how many is up to you, of course, but the main idea is that your controller(s) are separate from your compute servers, on which your users' VMs will actually run. One arrangement that will enable you to achieve this separation while still keeping your hardware investment relatively modest is to house your storage on your controller nodes.


Multi-node (HA) Red Hat OpenStack Deployment (Compact)

Production environments typically require high availability, which involves several architectural requirements. Specifically, you will need at least three controllers, and certain components will be deployed in multiple locations to prevent single points of failure. That's not to say, however, that you can't reduce hardware requirements by combining your storage, network, and controller nodes:

OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. So redundancy for stateless OpenStack API services is implemented through the combination of Virtual IP (VIP) management using Corosync and load balancing using HAProxy. Stateful OpenStack components, such as the state database and messaging server, rely on their respective active/passive modes for high availability. For example, MySQL uses built-in replication capabilities (plus the help of Pacemaker), while QPID is offered in three independent brokers with virtual IP management to provide high availability.


HA Logical Setup

An OpenStack HA cluster involves, at a minimum, three types of nodes: controller nodes, compute nodes, and storage nodes.

Controller Nodes

The first order of business in achieving high availability (HA) is redundancy, so the first step is to provide multiple controller nodes. You must keep in mind, however, that the database uses Galera to achieve HA, and Galera is a quorum-based system. That means that you must provide at least 3 controller nodes.


Every OpenStack controller runs HAProxy, which manages a single External Virtual IP (VIP) for all controller nodes and provides HTTP and TCP load balancing of requests going to OpenStack API services, RabbitMQ, and MySQL.

When an end user accesses the OpenStack cloud using Horizon or makes a request to the REST API for services such as nova-api, glance-api, keystone-api, quantum-api, nova-scheduler, MySQL, or RabbitMQ, the request goes to the live controller node currently holding the External VIP, and the connection gets terminated by HAProxy. When the next request comes in, HAProxy handles it, and may send it to the original controller or another in the cluster, depending on load conditions.

    Each of the services housed on the controller nodes has its own mechanism for achieving HA:

• nova-api, glance-api, keystone-api, quantum-api, and nova-scheduler are stateless services that do not require any special attention besides load balancing.
• Horizon, as a typical web application, requires sticky sessions to be enabled at the load balancer.
• RabbitMQ provides active/active high availability using mirrored queues.
• MySQL high availability is achieved through Galera active/active multi-master deployment and Pacemaker.
• Quantum agents are managed by Pacemaker.

Compute Nodes

OpenStack compute nodes are, in many ways, the foundation of your cluster; they are the servers on which your users will create their Virtual Machines (VMs) and host their applications. Compute nodes need to talk to controller nodes and reach out to essential services such as RabbitMQ and MySQL. They use the same approach that provides redundancy to the end users of Horizon and the REST APIs, reaching out to controller nodes using the VIP and going through HAProxy.


Storage Nodes

In this OpenStack cluster reference architecture, shared storage acts as a backend for Glance, so that multiple Glance instances running on controller nodes can store and retrieve images from it. To achieve this, you are going to deploy Swift. This enables you to use it not only for storing VM images, but also for any other objects such as user files.


Cluster Sizing

This reference architecture is well suited for production-grade OpenStack deployments on a medium and large scale, where you can afford to allocate several servers for your OpenStack controller nodes in order to build a fully redundant and highly available environment.

    The absolute minimum requirement for a highly-available OpenStack deployment is to allocate 4 nodes:

• 3 controller nodes, combined with storage
• 1 compute node

    If you want to run storage separately from the controllers, you can do that as well by raising the bar to 9 nodes:

• 3 Controller nodes
• 3 Storage nodes
• 2 Swift Proxy nodes
• 1 Compute node


Of course, you are free to choose how to deploy OpenStack based on the amount of available hardware and on your goals (such as whether you want a compute-oriented or storage-oriented cluster).

For a typical OpenStack compute deployment, you can use this table as high-level guidance to determine the number of controller, compute, and storage nodes you should have:

# of Nodes   Controllers   Computes   Storages
4-10         3             1-7        3 (on controllers)
11-40        3             3-32       3+ (Swift) + 2 (proxy)
41-100       4             29-88      6+ (Swift) + 2 (proxy)
>100         5             >84        9+ (Swift) + 2 (proxy)
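To make the table's guidance concrete, here is a minimal Python sketch that encodes its breakpoints. The suggest_layout helper is a hypothetical illustration of ours, not part of Fuel or its tooling; the node counts themselves come from the table above.

    # Hypothetical helper encoding the sizing table above; the breakpoints and
    # node counts come from the table, but suggest_layout() itself is not part
    # of Fuel.
    def suggest_layout(total_nodes):
        """Return a rough controller/compute/storage split for a node count."""
        if total_nodes < 4:
            raise ValueError("an HA deployment needs at least 4 nodes")
        if total_nodes <= 10:
            return {"controllers": 3, "computes": total_nodes - 3,
                    "storage": "3 (on controllers)"}
        if total_nodes <= 40:      # 3 Swift + 2 proxy nodes reserved
            return {"controllers": 3, "computes": total_nodes - 3 - 5,
                    "storage": "3+ Swift + 2 proxy"}
        if total_nodes <= 100:     # 6 Swift + 2 proxy nodes reserved
            return {"controllers": 4, "computes": total_nodes - 4 - 8,
                    "storage": "6+ Swift + 2 proxy"}
        return {"controllers": 5, "computes": total_nodes - 5 - 11,
                "storage": "9+ Swift + 2 proxy"}

    print(suggest_layout(40))   # {'controllers': 3, 'computes': 32, 'storage': '3+ Swift + 2 proxy'}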


Network Architecture

The current architecture assumes the presence of 3 NICs, but it can be customized for two or 4+ network interfaces. Most servers are built with at least two network interfaces. In this case, let's consider a typical example of three NIC cards. They're utilized as follows:

eth0: The internal management network, used for communication with Puppet & Cobbler

eth1: The public network, and floating IPs assigned to VMs

eth2: The private network, for communication between OpenStack VMs, and the bridge interface (VLANs)

In the multi-host networking mode, you can choose between the FlatDHCPManager and VlanManager network managers in OpenStack. The figure below illustrates the relevant nodes and networks.


Let's take a closer look at each network and how it's used within the cluster.

Public Network

This network allows inbound connections to VMs from the outside world (allowing users to connect to VMs from the Internet). It also allows outbound connections from VMs to the outside world.

For security reasons, the public network is usually isolated from the private network and the internal (management) network. Typically, it's a single Class C network from your globally routed or private network range.

To enable Internet access to VMs, the public network provides the address space for the floating IPs assigned to individual VM instances by the project administrator. Nova-network or Neutron (formerly Quantum) services can then configure this address on the public network interface of the Network controller node. Clusters based on nova-network use iptables to create a Destination NAT from this address to the fixed IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node.

In the other direction, the public network provides connectivity to the globally routed address space for VMs. The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to the Internet.

The public network also provides VIPs for Endpoint nodes, which are used to connect to OpenStack service APIs.

Internal (Management) Network

The internal network connects all OpenStack nodes in the cluster. All components of an OpenStack cluster communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons.

The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes.

This network is usually a single Class C network from your private, non-globally routed IP address range.

Private Network

The private network facilitates communication between each tenant's VMs. Private network address spaces are part of the enterprise network address space. Fixed IPs of virtual instances are directly accessible from the rest of the enterprise network.

The private network can be segmented into separate isolated VLANs, which are managed by nova-network or Neutron (formerly Quantum) services.


Technical Considerations

Before performing any installations, you'll need to make a number of decisions about which services to deploy, but from a general architectural perspective, it's important to think about how you want to handle both networking and block storage.

Neutron vs. nova-network

Neutron (formerly Quantum) is a service which provides Networking-as-a-Service functionality in OpenStack. It has a rich tenant-facing API for defining network connectivity and addressing in the cloud, and gives operators the ability to leverage different networking technologies to power their cloud networking.

There are various deployment use cases for Neutron. Fuel supports the most common of them, called Provider Router with Private Networks. It provides each tenant with one or more private networks, which can communicate with the outside world via a Neutron router.

Neutron is not, however, required in order to run an OpenStack cluster. If you don't need (or want) this added functionality, it's perfectly acceptable to continue using nova-network.

In order to deploy Neutron, you need to enable it in the Fuel configuration. Fuel will then set up an additional node in the OpenStack installation to act as an L3 router, or, depending on the configuration options you've chosen, install Neutron on the controllers.

Cinder vs. nova-volume

Cinder is a persistent storage management service, also known as block-storage-as-a-service. It was created to replace nova-volume, and provides persistent storage for VMs.

If you want to use Cinder for persistent storage, you will need to both enable Cinder and create the block devices on which it will store data. You will then provide information about those block devices during the Fuel install.

    Cinder block devices can be:

    created by Cobbler during the initial node installation, or

attached manually (e.g., as additional virtual disks if you are using VirtualBox, or as additional physical RAID or SAN volumes)

Object Storage Deployment

Fuel currently supports several scenarios for deploying object storage:

Glance + filesystem: By default, Glance uses the file system backend to store virtual machine images. In this case, you can use any of the shared file systems supported by Glance.

Swift on controllers: In this mode, the swift-storage and swift-proxy roles are combined with a nova-controller. Use it only for testing in order to save nodes; it's not suitable for production environments.

Swift on dedicated nodes: In this case, the Proxy service and the Storage (account/container/object) services reside on separate nodes, with two proxy nodes and a minimum of three storage nodes.


Production Considerations

Fuel simplifies the setup of an OpenStack cluster, affording you the ability to dig in and fully understand how OpenStack works. You can deploy on test hardware or in a virtualized environment and root around all you like, but when it comes time to deploy to production there are a few things to take into consideration.

In this section we will talk about such things, including how to size your hardware and how to handle large-scale deployments.

• Sizing Hardware for Production Deployment
  • Processing
  • Memory
  • Storage Space
  • Networking
  • Summary
• Redeploying An Environment
  • Environments
  • Deployment pipeline
• Large Scale Deployments
  • Certificate signing requests and Puppet Master/Cobbler capacity
  • Downloading of operating systems and other software


Sizing Hardware for Production Deployment

One of the first questions people ask when planning an OpenStack deployment is "what kind of hardware do I need?" There is no such thing as a one-size-fits-all answer, but there are straightforward rules for selecting appropriate hardware that will suit your needs. The Golden Rule, however, is to always accommodate for growth. With the potential for growth accounted for, you can move on to the actual hardware needs.

Many factors contribute to selecting hardware for an OpenStack cluster -- contact Mirantis for information on your specific requirements -- but in general, you will want to consider the following factors:

• Processing
• Memory
• Storage
• Networking

Your needs in each of these areas are going to determine your overall hardware requirements.

Processing

In order to calculate how much processing power you need to acquire, you will need to determine the number of VMs your cloud will support. You must also consider the average and maximum processor resources you will allocate to each VM. In the vast majority of deployments, the allocated resources will be the same for all of your VMs. However, if you are planning to create groups of VMs that have different requirements, you will need to calculate for all of them in aggregate. Consider this example:

• 100 VMs
• 2 EC2 compute units (2 GHz) average
• 16 EC2 compute units (16 GHz) max

To make it possible to provide the maximum CPU in this example, you will need at least 5 CPU cores (16 GHz / (2.4 GHz per core * 1.3 to adjust for hyper-threading)) per machine, and at least 84 CPU cores ((100 VMs * 2 GHz per VM) / 2.4 GHz per core) in total. If you were to select an Intel E5-2650 8-core CPU, that means you need 11 sockets (84 cores / 8 cores per socket). This breaks down to six dual-socket servers (12 sockets / 2 sockets per server), for a "packing density" of 17 VMs per server (102 VMs / 6 servers).
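The same arithmetic can be written out programmatically. The following Python sketch simply reproduces the worked example above; the figures and rounding conventions come from this guide, not from any Fuel tooling.

    import math

    # A sketch of the CPU sizing arithmetic above. All figures come from the
    # worked example (100 VMs, 2 GHz average, 16 GHz max, 2.4 GHz cores,
    # 8-core CPUs, two sockets per server); only the code itself is ours.
    vms, avg_ghz, max_ghz = 100, 2.0, 16.0
    core_ghz, ht_factor = 2.4, 1.3
    cores_per_cpu, sockets_per_server = 8, 2

    effective_core_ghz = core_ghz * ht_factor           # 3.12 GHz per core with hyper-threading
    cores_for_max_vm = max_ghz / effective_core_ghz     # ~5.1; the guide rounds this to 5
    total_cores = math.ceil(vms * avg_ghz / core_ghz)   # 84 cores in total
    sockets = math.ceil(total_cores / cores_per_cpu)    # 11 sockets
    servers = math.ceil(sockets / sockets_per_server)   # 6 dual-socket servers
    density = math.ceil(vms / servers)                  # 17 VMs per server

    print(cores_for_max_vm, total_cores, sockets, servers, density)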

This process also accommodates growth, since you now know what a single server using this CPU configuration can support. You can add new servers accounting for 17 VMs each as needed without having to recalculate.

    You will also need to take into account the following:

• This model assumes you are not oversubscribing your CPU.
• If you are considering hyper-threading, count each core as 1.3, not 2.
• Choose a good value CPU that supports the technologies you require.

Memory

Continuing to use the example from the previous section, we need to determine how much RAM will be required to support 17 VMs per server. Let's assume that you need an average of 4 GB of RAM per VM, with dynamic allocation of up to 12 GB for each VM. Calculating that all VMs will be using 12 GB of RAM requires that each server have 204 GB of available RAM.


You must also consider that the node itself needs sufficient RAM to accommodate core OS operations as well as RAM for each VM container (not the RAM allocated to each VM, but the memory the core OS uses to run the VM). The node's OS must run its own operations, schedule processes, allocate dynamic resources, and handle network operations, so giving the node itself at least 16 GB of RAM is not unreasonable.

Considering that server RAM typically comes in 4 GB, 8 GB, 16 GB, and 32 GB sticks, we would need a total of 256 GB of RAM installed per server. An average 2-CPU-socket server board has 16-24 RAM slots. To have 256 GB installed, you would need sixteen 16 GB sticks of RAM to satisfy the RAM needs of up to 17 VMs requiring dynamic allocation of up to 12 GB each, plus all core OS requirements.

    You can adjust this calculation based on your needs.
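As a quick sanity check, the RAM arithmetic above can be reproduced in a few lines of Python. The rounding up to a 256 GB configuration is our assumption based on the common DIMM sizes named above; the per-VM and host figures come from the guide's example.

    import math

    # RAM check for the example above: 17 VMs per server at up to 12 GB each,
    # plus ~16 GB reserved for the host OS. Rounding up to 256 GB installed
    # (sixteen 16 GB sticks) is our assumption based on common DIMM sizes.
    vms_per_server, max_gb_per_vm, host_os_gb = 17, 12, 16

    needed = vms_per_server * max_gb_per_vm + host_os_gb   # 204 + 16 = 220 GB
    installed = 2 ** math.ceil(math.log2(needed))          # next power of two: 256 GB
    sticks = installed // 16                               # sixteen 16 GB sticks

    print(needed, installed, sticks)                       # 220 256 16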

Storage Space

When it comes to disk space, there are several types that you need to consider:

• Ephemeral (the local drive space for a VM)
• Persistent (the remote volumes that can be attached to a VM)
• Object Storage (such as images or other objects)

As far as local drive space that must reside on the compute nodes, in our example of 100 VMs we make the following assumptions:

• 150 GB local storage per VM
• 15 TB total of local storage (100 VMs * 150 GB per VM)
• 500 GB of persistent volume storage per VM
• 50 TB total persistent storage

Returning to our already established example, we need to figure out how much storage to install per server. This storage will service the 17 VMs per server. If we are assuming 150 GB of storage for each VM's drive container, then we would need to install 2.5 TB of storage on the server. Since most servers have anywhere from 4 to 32 2.5" drive slots or 2 to 12 3.5" drive slots, depending on server form factor (i.e., 2U vs. 4U), you will need to consider how the storage will be impacted by the intended use.

If storage impact is not expected to be significant, then you may consider using unified storage. For this example, a single 3 TB drive would provide more than enough storage for seventeen 150 GB VMs. If speed is really not an issue, you might even consider installing two or three 3 TB drives and configuring a RAID-1 or RAID-5 for redundancy. If speed is critical, however, you will likely want to have a single hardware drive for each VM. In this case you would likely look at a 3U form factor with 24 slots.

Don't forget that you will also need drive space for the node itself, and don't forget to order the correct backplane that supports the drive configuration that meets your needs. Using our example specifications and assuming that speed is critical, a single server would need 18 drives, most likely 2.5" 15,000 RPM 146 GB SAS drives.

    Throughput

As far as throughput, that's going to depend on what kind of storage you choose. In general, you calculate IOPS based on the packing density (drive IOPS * drives in the server / VMs per server), but the actual drive IOPS will depend on the drive technology you choose. For example:


  • 3.5" slow and cheap (100 IOPS per drive, with 2 mirrored drives):

    100 IOPS * 2 drives / 17 VMs per server = 12 Read IOPS, 6 Write IOPS

  • 2.5" 15K (200 IOPS, four 600 GB drives, RAID-10):

    200 IOPS * 4 drives / 17 VMs per server = 48 Read IOPS, 24 Write IOPS

  • SSD (40K IOPS, eight 300 GB drives, RAID-10):

    40K IOPS * 8 drives / 17 VMs per server = 19K Read IOPS, 9.5K Write IOPS

    Clearly, SSD gives you the best performance, but the difference in cost between SSDs and the less costly platter-based solutions is going to be significant, to say the least. The acceptable cost burden is determined by the balance between your budget and your performance and redundancy needs. It is also important to note that the rules for redundancy in a cloud environment are different than in a traditional server installation, in that entire servers provide redundancy as opposed to making a single server instance redundant.
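    To apply the same packing-density formula to your own drive choices, a small shell sketch like the following may help; the drive numbers are the example's assumptions, and the write penalty of 2 for mirrored/RAID-10 layouts is a simplification:

    # packing density: drive IOPS * drives in the server / VMs per server
    drive_iops=200   # per-drive IOPS (2.5" 15K SAS in this example)
    drives=4         # data drives per server
    vms=17           # VMs per server
    read_iops=$(( drive_iops * drives / vms ))   # integer division rounds down
    write_iops=$(( read_iops / 2 ))              # mirrored writes hit two drives
    echo "per VM: ${read_iops} read IOPS, ${write_iops} write IOPS"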

    In other words, the weight of redundant components shifts from the individual OS installation to server redundancy. It is far more critical to have redundant power supplies and hot-swappable CPUs and RAM than to have redundant compute node storage. If, for example, you have 18 drives installed on a server, with 17 drives directly allocated to VMs, and one fails, you simply replace the drive and push a new node copy. The remaining VMs carry whatever additional load is present due to the temporary loss of one node.

    Remote storage

    IOPS will also be a factor in determining how you plan to handle persistent storage. For example, consider these options for laying out your 50 TB of remote volume space:

    12-drive storage frame using 3 TB 3.5" drives, mirrored:

    36 TB raw, or 18 TB usable space per 2U frame

    3 frames (50 TB / 18 TB per frame)

    12 slots x 100 IOPS per drive = 1200 Read IOPS, 600 Write IOPS per frame

    3 frames x 1200 IOPS per frame / 100 VMs = 36 Read IOPS, 18 Write IOPS per VM

    24-drive storage frame using 1 TB 7200 RPM 2.5" drives:

    24 TB raw, or 12 TB usable space per 2U frame

    5 frames (50 TB / 12 TB per frame)

    24 slots x 100 IOPS per drive = 2400 Read IOPS, 1200 Write IOPS per frame

    5 frames x 2400 IOPS per frame / 100 VMs = 120 Read IOPS, 60 Write IOPS per VM

    You can accomplish the same thing with a single 36-drive frame using 3 TB drives, but this becomes a single point of failure in your cluster.

    Object storage

    When it comes to object storage, you will find that you need more space than you think. Our example specifies 50 TB of object storage.

    Easy right? Not really.


    Object storage uses a default of 3 times the required space for replication, which means you will need 150 TB. However, to accommodate two hand-off zones, you will need 5 times the required space, which actually means 250 TB. The calculations don't end there. You don't ever want to run out of space, so "full" should really be more like 75% of capacity, which means you will need a total of 333 TB, or a multiplication factor of 6.66.

    Of course, that might be a bit much to start with; you might want to start with a happy medium of a multiplier of 4, then acquire more hardware as your drives begin to fill up. That calculates to 200 TB in our example. So how do you put that together? If you were to use 3 TB 3.5" drives, you could use a 12-drive storage frame, with 6 servers hosting 36 TB each (for a total of 216 TB). You could also use a 36-drive storage frame, with just 2 servers hosting 108 TB each, but this is not recommended due to the high cost of a failure in terms of replication and capacity.
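    The multiplication factor is easy to recompute for your own replica count and fill target; for example, a one-liner with bc:

    # required TB * zone factor / target fill = raw capacity to plan for
    echo "scale=1; 50 * 5 / 0.75" | bc    # -> 333.3 TB, i.e. a 6.66x multiplier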

    Networking

    Perhaps the most complex part of designing an OpenStack cluster is the networking.

    An OpenStack cluster can involve multiple networks even beyond the Public, Private, and Internal networks. Your cluster may involve tenant networks, storage networks, multiple tenant private networks, and so on. Many of these will be VLANs, and all of them will need to be planned out in advance to avoid configuration issues.

    In terms of the example network, consider these assumptions:

    100 Mbits/second per VM

    HA architecture

    Network Storage is not latency sensitive

    In order to achieve this, you can use two 1 Gb links per server (2 x 1000 Mbits/second / 17 VMs = 118 Mbits/second).

    Using two links also helps with HA. You can also increase throughput and decrease latency by using two 10 Gb links, bringing the bandwidth per VM to 1 Gb/second, but if you're going to do that, you've got one more factor to consider.

    Scalability and oversubscription

    It is one of the ironies of networking that 1 Gb Ethernet generally scales better than 10 Gb Ethernet -- at least until 100 Gb switches are more commonly available. It's possible to aggregate the 1 Gb links in a 48 port switch, so that you have 48 x 1 Gb links down, but 4 x 10 Gb links up. Do the same thing with a 10 Gb switch, however, and you have 48 x 10 Gb links down and only 4 x 40 Gb links up, resulting in oversubscription.

    Like many other issues in OpenStack, you can avoid this problem to a great extent with careful planning. Problems only arise when you are moving between racks, so plan to create "pods", each of which includes both storage and compute nodes. Generally, a pod is the size of a non-oversubscribed L2 domain.

    Hardware for this example

    In this example, you are looking at:

    2 data switches (for HA), each with a minimum of 12 ports for data (2 x 1 Gb links per server x 6 servers)

    1 x 1 Gb switch for IPMI (1 port per server x 6 servers)

    Optional Cluster Management switch, plus a second for HA


    Because your network will in all likelihood grow, it's best to choose 48 port switches. Also, as your network grows, you will need to consider uplinks and aggregation switches.

    Summary

    In general, your best bet is to choose a 2-socket server with a balance in I/O, CPU, memory, and disk that meets your project requirements. Look for a 1U R-class or 2U high-density C-class server. Some good options from Dell for compute nodes include:

    Dell PowerEdge R620

    Dell PowerEdge C6220 Rack Server

    Dell PowerEdge R720XD (for high disk or IOPS requirements)

    You may also want to consider systems from HP (http://www.hp.com/servers) or from a smaller systems builder like Aberdeen, a manufacturer that specializes in powerful, low-cost systems and storage servers (http://www.aberdeeninc.com).


    Redeploying An Environment

    Because Puppet is additive only, there is no ability to revert changes as you would in a typical application deployment. If a change needs to be backed out, you must explicitly add a configuration to reverse it, check the configuration in, and promote it to production using the pipeline. This means that if a breaking change does get deployed into production, typically a manual fix is applied, with the proper fix subsequently checked into version control.

    Fuel offers the ability to isolate code changes while developing a deployment and minimizes the headaches associated with maintaining multiple configurations through a single Puppet Master by creating what are called environments.

    Environments

    Puppet supports assigning nodes 'environments'. These environments can be mapped directly to your development, QA and production life cycles, so it's a way to distribute code to nodes that are assigned to those environments.

    On the Master node:

    The Puppet Master tries to find modules using its modulepath setting, which by default is /etc/puppet/modules. It is common practice to set this value once in your /etc/puppet/puppet.conf. Environments expand on this idea and give you the ability to use different settings for different configurations.

    For example, you can specify several search paths. The following example dynamically sets the modulepath so Puppet will check a per-environment folder for a module before serving it from the main set:

    [master]
        modulepath = $confdir/$environment/modules:$confdir/modules

    [production]
        manifest = $confdir/manifests/site.pp

    [development]
        manifest = $confdir/$environment/manifests/site.pp

    On the Slave Node:

    Once the slave node makes a request, the Puppet Master gets informed of its environment. If you don't specify an environment, the agent uses the default production environment.

    To set a slave-side environment, just specify the environment setting in the [agent] block of puppet.conf:

    [agent]
        environment = development
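    The environment can also be selected for a single run from the command line, which is handy for testing; this uses a standard puppet agent option:

    puppet agent --test --environment development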

    Deployment pipeline


  1. Deploy

     In order to deploy multiple environments that don't interfere with each other, you should specify the deployment_id option in the YAML file (see the sketch after this list). It should be an even integer value in the range of 2-254.

     This value is used in dynamic environment-based tag generation. Fuel applies that tag globally to all resources and some services on each node.

  2. Clean/Revert

     At this stage you just need to make sure the environment is in its original/virgin state.

  3. Puppet node deactivate

     This will ensure that any resources exported by that node will stop appearing in the catalogs served to the slave nodes:

     puppet node deactivate <certname>

     where <certname> is the fully qualified domain name as seen in puppet cert list --all.

     You can deactivate nodes manually one by one, or execute the following command to automatically deactivate all nodes:

     puppet cert list --all | awk '! /DNS:puppet/ { gsub(/"/, "", $2); print $2}' | xargs puppet node deactivate

  4. Redeploy

     Start the puppet agent again to apply the desired node configuration.
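    As referenced in step 1, here is a minimal sketch of the deployment_id setting; where exactly it sits in your deployment YAML may vary between Fuel versions, so treat the surrounding structure as illustrative:

    # fragment of the cluster's deployment YAML (illustrative placement)
    deployment_id: 2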

    See also

    http://puppetlabs.com/blog/a-deployment-pipeline-for-infrastructure/

    http://docs.puppetlabs.com/guides/environment.html


    Large Scale Deployments

    When deploying large clusters (of 100 nodes or more) there are two basic bottlenecks:

    Certificate signing requests and Puppet Master/Cobbler capacity

    Downloading of operating systems and other software

    Careful planning is key to eliminating these potential problem areas, but there's another way.

    Fuel takes care of these problems through caching and orchestration. We feel, however, that it's always good to have a sense of how to solve these problems should they appear.

    Certificate signing requests and Puppet Master/Cobbler capacity

    When deploying a large cluster, you may find that Puppet Master begins to have difficulty when you start exceeding 20 or more simultaneous requests. Part of this problem is that the initial process of requesting and signing certificates involves *.tmp files that can create conflicts. To solve this problem, you have two options:

    reduce the number of simultaneous requests, or

    increase the number of Puppet Master/Cobbler servers.

    The number of simultaneous certificate requests that are active can be controlled by staggering the Puppet agent run schedule. This can be accomplished through orchestration. You don't need extreme staggering (1 to 5 seconds will do), but if this method isn't practical, you can increase the number of Puppet Master/Cobbler servers.
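    If orchestration is not available, Puppet's own splay settings offer one way to stagger agent runs. A sketch for /etc/puppet/puppet.conf on the managed nodes follows; the values are illustrative:

    [agent]
        splay = true        # wait a random interval before each run
        splaylimit = 30     # upper bound for the random delay, in seconds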

    If you're simply overwhelming the Puppet Master process and not running into file conflicts, one way to get around this problem is to use Puppet Master with Thin as the backend component and nginx as a frontend component. This configuration dynamically scales the number of Puppet Master processes to better accommodate changing load.

    You can also increase the number of servers by creating a cluster that utilizes a round robin DNS configuration through a service like HAProxy. You will need to ensure that these nodes are kept in sync. For Cobbler, that means a combination of the --replicate switch, XMLRPC for metadata, and rsync for profiles and distributions. Similarly, Puppet Master can be kept in sync with a combination of rsync (for modules, manifests, and SSL data) and database replication.
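    For the Cobbler side of such a cluster, the --replicate switch mentioned above might be invoked along these lines; the host name is a placeholder, and you should check cobbler replicate --help for the exact switches your version supports:

    cobbler replicate --master=cobbler-primary.example.com --sync-all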

    Downloading of operating systems and other software

    Large deployments can also suffer from a bottleneck in terms of the additional traffic created by downloading software from external sources. One way to avoid this problem is by increasing LAN bandwidth through bonding multiple gigabit interfaces. You might also want to consider 10 Gb Ethernet trunking between infrastructure switches using CAT-6a or fiber cables to improve backend speeds, reduce latency, and provide more overall pipe.
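    A minimal CentOS bonding sketch for the approach above; the interface names, IP address, and bonding mode are assumptions to adapt to your own network:

    # /etc/sysconfig/network-scripts/ifcfg-bond0
    DEVICE=bond0
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=10.20.0.94
    NETMASK=255.255.255.0
    BONDING_OPTS="mode=802.3ad miimon=100"

    # /etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise for eth1)
    DEVICE=eth0
    ONBOOT=yes
    MASTER=bond0
    SLAVE=yes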

    See also

    Sizing Hardware for Production Deployment for more information on choosing networking equipment.


    Create an OpenStack cluster using Fuel UI

    Now let's look at performing an actual OpenStack deployment using Fuel. This part of the guide covers:

  • Installing Fuel Master Node
      • On Bare-Metal Environment
      • On VirtualBox
      • Changing Network Parameters Before the Installation
      • Changing Network Parameters After Installation
      • Name Resolution (DNS)
      • PXE Booting Settings
      • When Master Node Installation is Done
  • Understanding and Configuring the Network
      • FlatDHCPManager (multi-host scheme)
      • FlatDHCPManager (single-interface scheme)
      • VLANManager
  • Fuel Deployment Schema
      • Configuring the network
  • Network Issues
      • On VirtualBox
      • Timeout In Connection to OpenStack API From Client Applications
  • Red Hat OpenStack Deployment Notes
      • Overview
      • Deployment Requirements
      • Red Hat Subscription Management (RHSM)
      • Red Hat RHN Satellite
      • Troubleshooting Red Hat OpenStack Deployment
  • Post-Deployment Check
      • Benefits
      • Running Post-Deployment Checks
      • What To Do When a Test Fails
      • Sanity Tests Description
      • Smoke Tests Description


    Installing Fuel Master Node

    Fuel is distributed as both ISO and IMG images, each of which contains an installer for the Fuel Master node. The ISO image is used for CD media devices, iLO or similar remote access systems. The IMG file is used for USB memory sticks.

    Once installed, Fuel can be used to deploy and manage OpenStack clusters. It will assign IP addresses to the nodes, perform PXE booting and initial configuration, and provision OpenStack nodes according to their roles in the cluster.

    On Bare-Metal Environment

    To install Fuel on bare-metal hardware, you need to burn the provided ISO to a CD/DVD or create a bootable USB stick. You would then begin the installation process by booting from that media, very much like any other OS.

    Burning an ISO to optical media is a deeply supported function on all OSes. For Linux there are several interfaces available, such as Brasero or Xfburn, two of the more commonly pre-installed desktop applications. There are also a number for Windows, such as ImgBurn and the open source InfraRecorder.

    Burning an ISO in Mac OS X is deceptively simple. Open Disk Utility from Applications > Utilities, drag the ISO into the disk list on the left side of the window and select it, insert blank media with enough room, and click Burn. If you prefer a utility, check out the open source Burn.

    Installing the ISO to a bootable USB stick, however, is an entirely different matter. Canonical suggests PenDriveLinux, which is a GUI tool for Windows.

    On Windows, you can write the installation image with a number of different utilities. The following list links to some of the more popular ones, all available at no cost:

    Win32 Disk Imager.

    ISOtoUSB.

    After the installation is complete, you will need to allocate bare-metal nodes for your OpenStack cluster, put them on the same L2 network as the Master node, and PXE boot them. The UI will discover them and make them available for installing OpenStack.

    On VirtualBox

    If you are going to evaluate Fuel on VirtualBox, you should know that we provide a set of scripts that create and configure all of the required VMs for you, including the Master node and Slave nodes for OpenStack itself. It's a very simple, single-click installation.

    Note

    These scripts are not supported on Windows, but you can still test on VirtualBox by creating the VMs by yourself. See Manual Mode for more details.

    The requirements for running Fuel on VirtualBox are:

    A host machine with Linux or Mac OS.


    The scripts have been tested on Mac OS 10.7.5, Mac OS 10.8.3, Ubuntu 12.04 and Ubuntu 12.10.

    VirtualBox 4.2.12 (or later) must be installed with the extension pack. Both can be downloaded from http://www.virtualbox.org/.

    8 GB+ of RAM, to handle 4 VMs for a non-HA OpenStack installation (1 Master node, 1 Controller node, 1 Compute node, 1 Cinder node) or 5 VMs for an HA OpenStack installation (1 Master node, 3 Controller nodes, 1 Compute node)

    Automatic Mode

    When you unpack the scripts, you will see the following important files and folders:

    iso
        This folder needs to contain a single ISO image for Fuel. Once you have downloaded the ISO from the portal, copy or move it into this directory.

    config.sh
        This file contains the configuration, which can be fine-tuned. For example, you can select how many virtual nodes to launch, as well as how much memory to give them.

    launch.sh
        Once executed, this script will pick up an image from the iso directory, create a VM, mount the image to this VM, and automatically install the Fuel Master node. After installation of the Master node, the script will create Slave nodes for OpenStack and boot them via PXE from the Master node. Finally, the script will give you the link to access the Web-based UI for the Master node so you can start installation of an OpenStack cluster.

    Manual Mode

    Note

    These manual steps will allow you to set up an evaluation environment for the vanilla OpenStack release only; RHOS installation is not possible.

    To download and deploy Red Hat OpenStack you need to use the automated VirtualBox helper scripts or install Fuel On Bare-Metal Environment.

    If you cannot or would rather not run our helper scripts, you can still run Fuel on VirtualBox by following these steps.

    Master Node Deployment

    First, create the Master node VM.

    1. Configure the host-only interface vboxnet0 in VirtualBox.

    IP address: 10.20.0.1


    Interface mask: 255.255.255.0

    DHCP disabled

    2. Create a VM for the Master node with the following parameters:

    OS Type: Linux, Version: Red Hat (64bit)

    RAM: 1024 MB

    HDD: 20 GB, with dynamic disk expansion

    CDROM: mount Fuel ISO

    Network 1: host-only interface vboxnet0

    3. Power on the VM in order to start the installation.

    4. Wait for the Welcome message with all information needed to log in to the Fuel UI.
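    If you prefer the command line to the VirtualBox GUI, the same Master node VM can be created with VBoxManage. The following is a sketch only; the VM name, controller name, and file paths are examples:

    VBoxManage hostonlyif create                       # typically creates vboxnet0
    VBoxManage hostonlyif ipconfig vboxnet0 --ip 10.20.0.1 --netmask 255.255.255.0
    VBoxManage createvm --name fuel-master --ostype RedHat_64 --register
    VBoxManage modifyvm fuel-master --memory 1024 --nic1 hostonly --hostonlyadapter1 vboxnet0
    VBoxManage createhd --filename fuel-master.vdi --size 20480    # 20 GB, dynamically allocated
    VBoxManage storagectl fuel-master --name SATA --add sata
    VBoxManage storageattach fuel-master --storagectl SATA --port 0 --device 0 --type hdd --medium fuel-master.vdi
    VBoxManage storageattach fuel-master --storagectl SATA --port 1 --device 0 --type dvddrive --medium /path/to/fuel.iso
    VBoxManage startvm fuel-master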

    Adding Slave Nodes

    Next, create Slave nodes where OpenStack needs to be installed.

    1. Create 3 or 4 additional VMs, as desired, with the following parameters:

    OS Type: Linux, Version: Red Hat (64bit)

    RAM: 1024 MB

    HDD: 30 GB, with dynamic disk expansion

    Network 1: host-only interface vboxnet0, PCnet-FAST III device

    2. Set priority for the network boot:


    3. Configure the network adapter on each VM:


    Changing Network Parameters Before the Installation

    You can change the network settings for the Fuel (PXE booting) network, which is 10.20.0.2/24 with gateway 10.20.0.1 by default.

    In order to do so, press the <Tab> key on the very first installation screen, which says "Welcome to Fuel Installer!", and update the kernel options. For example, to use the 192.168.1.10/24 IP address for the Master node and 192.168.1.1 as the gateway and DNS server, you should change the boot parameters accordingly.
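    The edited line might end up looking roughly like the sketch below. The parameter names shown are typical Anaconda/kickstart boot options and are an assumption here; verify them against the actual boot prompt of your ISO version:

    vmlinuz initrd=initrd.img ks=cdrom:/ks.cfg ip=192.168.1.10 netmask=255.255.255.0 gw=192.168.1.1 dns1=192.168.1.1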


    When you're finished making changes, press the <Enter> key and wait for the installation to complete.

    Changing Network Parameters After Installation

    It is still possible to configure other interfaces, or add 802.1Q sub-interfaces, on the Master node to be able to access it from your network if required. It is easy to do via standard network configuration scripts for CentOS. When the installation is complete, you can modify the /etc/sysconfig/network-scripts/ifcfg-eth* scripts. For example, if the eth1 interface is on the L2 network which is planned for PXE booting, eth2 is the interface connected to your office network switch, and eth0 is not in use, then the settings can be the following:

    /etc/sysconfig/network-scripts/ifcfg-eth0:

    DEVICE=eth0
    ONBOOT=no

    /etc/sysconfig/network-scripts/ifcfg-eth1:

    DEVICE=eth1
    ONBOOT=yes
    HWADDR=
    ..... (other settings in your config) .....
    PEERDNS=no
    BOOTPROTO=static
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0

    /etc/sysconfig/network-scripts/ifcfg-eth2:

    DEVICE=eth2
    ONBOOT=yes
    HWADDR=
    ..... (other settings in your config) .....
    PEERDNS=no
    IPADDR=172.18.0.5
    NETMASK=255.255.255.0

    Warning

    Once IP settings are set at boot time for the Fuel Master node, they should not be changed during the whole lifecycle of Fuel.

    After modifying the network configuration files, apply the new configuration:

    service network restart

    Now you should be able to connect to Fuel UI from your network at http://172.18.0.5:8000/

    Name Resolution (DNS)

    During Master node installation, it is assumed that there is a recursive DNS service on 10.20.0.1.

    If you want Slave nodes to be able to resolve public names, you need to change this default value to point to an actual DNS service. To make the change, run the following command on the Fuel Master node (replace the IP with that of your actual DNS server):

    echo "nameserver 172.0.0.1" > /etc/dnsmasq.upstream

    PXE Booting Settings

    By default, eth0 on the Fuel Master node serves PXE requests. If you are planning to use another interface, you must modify the dnsmasq settings (dnsmasq acts as the DHCP server). Edit the file /etc/cobbler/dnsmasq.template, find the line interface=eth0 and replace the interface name with the one you want to use.
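    For example, assuming the default template contents, switching the PXE interface to eth1 can be done with a one-liner:

    sed -i 's/^interface=eth0$/interface=eth1/' /etc/cobbler/dnsmasq.template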


    Then run the following command to synchronize the cobbler service:

    cobbler sync

    During synchronization, cobbler builds the actual dnsmasq configuration file /etc/dnsmasq.conf from the template /etc/cobbler/dnsmasq.template. That is why you should not edit /etc/dnsmasq.conf directly: cobbler rewrites it each time it is synchronized.

    If you want to use virtual machines to launch Fuel, then you have to be sure that dnsmasq on the Master node is configured to support the PXE client used by your virtual machines. We enabled the dhcp-no-override option because, without it, dnsmasq tries to move the PXE filename and servername special fields into DHCP options. Not all PXE implementations can recognize those options, and therefore they would not be able to boot. For example, CentOS 6.4 uses the gPXE implementation by default instead of the more advanced iPXE.

    When Master Node Installation is Done

    Once the Master node is installed, power on all other nodes and log in to the Fuel UI.

    Slave nodes will be booted in bootstrap mode (a CentOS-based Linux running in memory) via PXE, and you will see notifications in the user interface about discovered nodes. This is the point when you can create an environment, add nodes to it, and start configuration.

    Networking configuration is the most complicated part, so please read the networking section of this documentation carefully.


    Understanding and Configuring the Network

    OpenStack clusters use several types of network managers: FlatDHCPManager, VLANManager and Neutron (formerly Quantum). The current version of the Fuel UI supports only the first two (FlatDHCPManager and VLANManager), but the Fuel CLI supports all three. For more information about how the first two network managers work, you can read these two resources:

    OpenStack Networking FlatManager and FlatDHCPManager

    Openstack Networking for Scalability and Multi-tenancy with VLANManager

    FlatDHCPManager (multi-host scheme)

    The main idea behind the flat network manager is to configure a bridge (i.e., br100) on every Compute node and have one of the machine's host interfaces connect to it. Once the virtual machine is launched, its virtual interface connects to that bridge as well. The same L2 segment is used for all OpenStack projects, which means that there is no L2 isolation between virtual hosts, even if they are owned by separate projects, and there is only one flat IP pool defined for the cluster. For this reason it is called the Flat manager.

    The simplest case here is shown in the following diagram. Here the eth1 interface is used to give network access to virtual machines, while the eth0 interface is the management network interface.

    Fuel deploys OpenStack in FlatDHCP mode with the so-called multi-host feature enabled. Without this feature, network traffic from each VM would go through a single gateway host, which basically becomes a single point of failure. With the feature enabled, each Compute node becomes a gateway for all the VMs running on that host, providing a balanced networking solution. In this case, if one of the Compute nodes goes down, the rest of the environment remains operational.

    The current version of Fuel uses VLANs, even for the FlatDHCP network manager. On the Linux host, it is implemented in such a way that it is not the physical network interface that is connected to the bridge, but the VLAN interface (i.e., eth0.102).


    FlatDHCPManager (single-interface scheme)

    Therefore, all switch ports where Compute nodes are connected must be configured as tagged (trunk) ports with the required VLANs allowed (enabled, tagged). Virtual machines will communicate with each other on L2 even if they are on different Compute nodes. If a virtual machine sends IP packets to a different network, they will be routed on the host machine according to the routing table. The default route will point to the gateway specified on the networks tab in the UI as the gateway for the Public network.

    VLANManager

    VLANManager mode is more suitable for large scale clouds. The idea behind this mode is to separate groups of virtual machines owned by different projects onto different L2 layers. In VLANManager this is done by tagging Ethernet frames, or simply speaking, by VLANs. It allows virtual machines inside a given project to communicate with each other while not seeing any traffic from VMs of other projects. Switch ports must be configured as tagged (trunk) ports to allow this scheme to work.


    Fuel Deployment Schema

    One of the physical interfaces on each host has to be chosen to carry VM-to-VM traffic (the fixed network), and switch ports must be configured to allow tagged traffic to pass through. OpenStack Compute nodes will untag the IP packets and send them to the appropriate VMs. Apart from simplifying the configuration of the VLAN Manager, Fuel adds no known limitations in this particular networking mode.

    Configuring the network

    Once you choose a networking mode (FlatDHCP/VLAN), you must configure the equipment accordingly. The diagram below shows an example configuration.

    Fuel operates with the following logical networks:

    Fuel network
        Used for internal Fuel communications only and PXE booting (untagged on the scheme);

    Public network
        Used to get access from virtual machines to the outside, Internet or office network (VLAN 101 on the scheme);

    Floating network
        Used to get access to virtual machines from the outside (shared L2-interface with the Public network; in this case it's VLAN 101);

    Management network
        Used for internal OpenStack communications (VLAN 102 on the scheme);

    Storage network
        Used for storage traffic (VLAN 103 on the scheme);

    Fixed network
        One (for flat mode) or more (for VLAN mode) virtual machine networks (VLAN 104 on the scheme).

    Mapping logical networks to physical interfaces on servers

    Fuel allows you to use different physical interfaces to handle different types of traffic. When a node is added to the environment, click at the bottom line of the node icon. In the detailed information window, click the "Network Configuration" button to open the physical interfaces configuration screen.

    On this screen you can drag-and-drop logical networks to physical interfaces according to your network setup.

    All networks are presented on the screen, except Fuel. It runs on the physical interface from which the node was initially PXE booted, and in the current version it is not possible to map it onto any other physical interface. Also, once the network is configured and OpenStack is deployed, you may not modify network settings, even to move a logical network to another physical interface or VLAN number.

    Switch

    Fuel can configure hosts; however, switch configuration is still manual work. Unfortunately, the set of configuration steps, and even the terminology used, differs between vendors, so we will try to provide vendor-agnostic information on how traffic should flow and leave the vendor-specific details to you. We will provide an example for a Cisco switch.

    First of all, you should configure access ports to allow non-tagged PXE booting connections from all Slave nodes to the Fuel node. We refer to this network as the Fuel network. By default, the Fuel Master node uses the eth0 interface to serve PXE requests on this network. So if that's left unchanged, you have to set the switch port for eth0 of the Fuel Master node to access mode. We recommend that you use the eth0 interfaces of all other nodes for PXE booting as well. Corresponding ports must also be in access mode.

    Taking into account that this is the network for PXE booting, do not mix this L2 segment with any other network segments. Fuel runs a DHCP server, and if there is another DHCP server on the same L2 network segment, both the company's infrastructure and Fuel will be unable to function properly. You also need to configure each of the switch's ports connected to nodes as an "STP Edge port" (or a "spanning-tree port fast trunk", in Cisco terminology). If you don't do that, DHCP timeout issues may occur.

    As long as the Fuel network is configured, Fuel can operate. Other networks are required for OpenStack environments, and currently all of these networks live in VLANs over one or multiple physical interfaces on a node. This means that the switch should pass tagged traffic, and untagging is done on the Linux hosts.
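    As a rough illustration of these two port roles on a Cisco switch (the port names and the access VLAN number are examples; the trunk VLAN IDs match the scheme above):

    ! access port facing a node's eth0 (PXE / Fuel network)
    interface GigabitEthernet0/1
     switchport mode access
     switchport access vlan 2
     spanning-tree portfast
    !
    ! trunk port carrying the tagged OpenStack networks
    interface GigabitEthernet0/10
     switchport mode trunk
     switchport trunk allowed vlan 101-104
     spanning-tree portfast trunk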

    Note

    For the sake of simplicity, all t