{"id":310,"date":"2016-09-23T23:34:18","date_gmt":"2016-09-23T16:34:18","guid":{"rendered":"http:\/\/www.rickyadams.com\/wp\/?p=310"},"modified":"2016-09-23T23:34:41","modified_gmt":"2016-09-23T16:34:41","slug":"310","status":"publish","type":"post","link":"https:\/\/www.rickyadams.com\/wp\/310\/","title":{"rendered":"OpenStack Environment Architecture"},"content":{"rendered":"<div class=\"article-content entry-content\">\n<div id=\"openstack-environment-architecture\" class=\"section\">\n<h2>OpenStack Environment Architecture<\/h2>\n<div>Fuel deploys an OpenStack Environment with nodes that provide a specific set of functionality. Beginning with Fuel 5.0, a single architecture model can support HA (High Availability) and non-HA deployments; you can deploy a non-HA environment and then add additional nodes to implement HA rather than needing to redeploy the environment from scratch.<\/div>\n<div>The OpenStack environment consists of multiple physical server nodes (or an equivalent VM), each of which is one of the following node types:<\/div>\n<dl class=\"docutils\">\n<dt>Controller:<\/dt>\n<dd>\n<div class=\"first\">The\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#controller-node-term\"><em>Controller<\/em><\/a>\u00a0manages all activities in the environment. 
The\u00a0<cite>nova-controller<\/cite>\u00a0maintains the life cycle of the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#openstack-term\"><em>OpenStack<\/em><\/a>\u00a0controller.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">An\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ha-term\"><em>HA<\/em><\/a>\u00a0environment must contain at least three controllers to achieve HA for the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#galera-cluster-term\"><em>MySQL\/Galera<\/em><\/a>\u00a0cluster. While two controllers are enough for most cases, such as highly available OpenStack API services, reliable\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#rabbitmq-term\"><em>RabbitMQ AMQP<\/em><\/a>\u00a0messaging, or resilient virtual IP addresses and load balancing, a third controller is required for quorum-based clusters such as MySQL\/Galera or\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#pacemaker-term\"><em>Corosync\/Pacemaker<\/em><\/a>. The configuration of stateless and stateful services in HA differs significantly, and HA environments contain both active\/active and active\/passive components; see the\u00a0<a class=\"reference external\" href=\"http:\/\/docs.openstack.org\/high-availability-guide\/content\/ch-intro.html\">HA guide<\/a>\u00a0for more details. Fuel configures all stateless OpenStack API services and the RabbitMQ HA cluster as active\/active. The MySQL\/Galera cluster is configured as active\/passive. For database clusters, active\/active is sometimes referred to as multi-master environments. 
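The three-controller requirement above is simple majority arithmetic; a minimal illustrative sketch (not Fuel code) of why a two-node cluster cannot provide quorum-based HA:

```python
def quorum(cluster_size):
    """Votes needed for a majority in a quorum-based cluster
    such as MySQL/Galera or Corosync/Pacemaker."""
    return cluster_size // 2 + 1

def survives(cluster_size, failed):
    """True if the surviving nodes still form a majority."""
    return cluster_size - failed >= quorum(cluster_size)

# Two controllers lose quorum as soon as one fails;
# three controllers tolerate a single failure.
assert not survives(2, 1)
assert survives(3, 1)
```

Note that an even-sized cluster raises the majority threshold without increasing the number of tolerable failures, which is why odd cluster sizes are generally preferred.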
Such environments must be able to handle multi-node write conflicts. However, OpenStack support for multi-node writes to MySQL\/Galera nodes is\u00a0<a class=\"reference external\" href=\"http:\/\/lists.openstack.org\/pipermail\/openstack-operators\/2014-September\/005166.html\">not production ready yet<\/a>: &#8220;The simplest way to overcome this issue from the operator\u2019s point of view is to use only one writer node for these types of transactions&#8221;. That is why Fuel configures the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#haproxy-term\"><em>HAProxy<\/em><\/a>\u00a0frontend for MySQL\/Galera to use only one active node, while the other nodes in the cluster are kept in standby (passive) state. The\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#mongodb-term\"><em>Mongo<\/em><\/a>\u00a0DB backend for\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ceilometer-term\"><em>Ceilometer<\/em><\/a>\u00a0is also configured as active\/passive. 
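The single-writer behavior can be modeled as "the first healthy backend takes all writes, the rest stay passive". A hypothetical simulation (node names are illustrative; this is not the generated HAProxy configuration):

```python
def pick_writer(backends):
    """Return the single active MySQL/Galera backend, mimicking a
    frontend where all but the first healthy server act as backups."""
    for name, healthy in backends:
        if healthy:
            return name
    raise RuntimeError("no healthy Galera node available")

# node-1 is down, so writes fail over to node-2; node-3 remains passive.
cluster = [("node-1", False), ("node-2", True), ("node-3", True)]
assert pick_writer(cluster) == "node-2"
```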
Note that it is possible to configure MySQL\/Galera HA with two controller nodes and a lightweight arbitrator service running on another node, but this deployment layout is not supported in Fuel.<\/div>\n<\/div>\n<div class=\"last\">For more information about how Fuel deploys HA controllers, see\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#multi-node-ha\"><em>Multi-node with HA Deployment<\/em><\/a>.<\/div>\n<\/dd>\n<dt>Controller:<\/dt> omitted<dt>Compute:<\/dt>\n<dd>\n<div class=\"first\">Compute servers are the workhorses of your installation; they are the servers on which your users&#8217; virtual machines are created.\u00a0<cite>nova-compute<\/cite>\u00a0controls the life cycle of these VMs; the Neutron Agent and Ceilometer Compute Agent may also run on Compute nodes.<\/div>\n<div class=\"last admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">In environments that Fuel deploys using vCenter as the hypervisor, the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#nova-term\"><em>Nova-compute<\/em><\/a>\u00a0service can run only on Controller nodes. Because of this, Fuel does not allow you to\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#assign-roles-vcenter-ug\"><em>assign<\/em><\/a>\u00a0the &#8220;Compute&#8221; role to any node when using vCenter.<\/div>\n<\/div>\n<\/dd>\n<dt>Storage:<\/dt>\n<dd>\n<div class=\"first\">OpenStack requires block and object storage to be provisioned. These can be provisioned as Storage nodes or as roles that run on Compute nodes. Fuel provides the following storage options out of the box:<\/div>\n<ul class=\"last simple\">\n<li>Cinder LVM provides persistent block storage to virtual machines over the iSCSI protocol. 
The Cinder Storage node runs a Cinder Volume service.<\/li>\n<li>The Swift object store can be used by Glance to store VM images and snapshots; it may also be used directly by applications. Swift is the default storage provider, provisioned if no other storage option is chosen when the environment is deployed.<\/li>\n<li>Ceph combines object and block storage and can replace either one or both of the above. The Ceph Storage node runs Ceph OSD.<\/li>\n<\/ul>\n<\/dd>\n<\/dl>\n<div>The key principle is that your controller(s) are separate from the compute servers on which your users&#8217; VMs run.<\/div>\n<\/div>\n<div id=\"multi-node-with-ha-deployment\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Multi-node with HA Deployment<\/h2>\n<div>High availability is recommended for production environments. It provides replicated servers to prevent single points of failure. An HA deployment must have at least three controllers as well as replicas of other servers. You can combine compute, storage, and network nodes to reduce the hardware requirements for the environment, although this may degrade its performance and robustness.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/deployment-ha-compact.svg\" width=\"80%\" \/><\/div>\n<\/div>\n<div id=\"details-of-multi-node-with-ha-deployment\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Details of Multi-node with HA Deployment<\/h2>\n<div>OpenStack services are interconnected by RESTful HTTP-based APIs and AMQP-based RPC messages. Redundancy for stateless OpenStack API services is therefore implemented through a combination of Virtual IP (VIP) management using Pacemaker and load balancing using HAProxy. Stateful OpenStack components, such as the state database and messaging server, rely on their respective active\/active and active\/passive modes for high availability. 
For example, RabbitMQ uses built-in clustering capabilities, while the database uses MySQL\/Galera replication.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/ha-overview.svg\" width=\"100%\" \/><\/div>\n<div>Let&#8217;s take a closer look at what an OpenStack deployment looks like and what it takes to make such a deployment highly available.<\/div>\n<\/div>\n<div id=\"ha-logical-setup\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>HA Logical Setup<\/h2>\n<div>An OpenStack Multi-node HA environment involves three types of nodes: controller nodes, compute nodes, and storage nodes.<\/div>\n<div id=\"controller-nodes\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Controller Nodes<\/h3>\n<div>The first order of business in achieving high availability (HA) is redundancy, so the first step is to provide multiple controller nodes.<\/div>\n<div>The MySQL database uses Galera to achieve HA, and Galera is a quorum-based system. 
That means you need at least three controller nodes.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/logical-diagram-controller.svg\" width=\"80%\" \/><\/div>\n<div>Every OpenStack controller runs HAProxy, which manages a single External Virtual IP (VIP) for all controller nodes and provides HTTP and TCP load balancing of requests going to OpenStack API services, RabbitMQ, and MySQL.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">OpenStack services use\u00a0<a class=\"reference external\" href=\"https:\/\/wiki.openstack.org\/wiki\/Oslo\/Messaging\">Oslo messaging<\/a>\u00a0and connect directly to the RabbitMQ nodes, so they do not need HAProxy for messaging.<\/div>\n<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">Fuel deploys HAProxy inside its own dedicated network namespace. To achieve this, custom resource agent scripts for Pacemaker are used instead of the classic heartbeat provider for VIP addresses.<\/div>\n<\/div>\n<div>When an end user accesses the OpenStack cloud using Horizon or makes a request to the REST API for services such as nova-api, glance-api, keystone-api, neutron-api, nova-scheduler, or MySQL, the request goes to the live controller node currently holding the External VIP, and the connection gets terminated by HAProxy. 
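HAProxy's dispatch of subsequent requests behind the external VIP can be sketched as a load-based backend choice. This is an illustrative model only; the actual balancing algorithm is whatever Fuel configures for each backend:

```python
def pick_backend(connections):
    """Choose the controller with the fewest active connections,
    roughly how a least-connections balancer spreads API requests."""
    return min(connections, key=connections.get)

active = {"controller-1": 12, "controller-2": 7, "controller-3": 9}
assert pick_backend(active) == "controller-2"
```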
When the next request comes in, HAProxy handles it and may send it to the original controller or to another one in the environment, depending on load conditions.<\/div>\n<div>Each of the services housed on the controller nodes has its own mechanism for achieving HA:<\/div>\n<ul class=\"simple\">\n<li>OpenStack services such as nova-api, glance-api, keystone-api, neutron-api, nova-scheduler, and cinder-api are stateless services that do not require any special attention besides load balancing.<\/li>\n<li>Horizon, as a typical web application, requires sticky sessions to be enabled at the load balancer.<\/li>\n<li>RabbitMQ provides active\/active high availability using mirrored queues and is deployed with custom resource agent scripts for Pacemaker.<\/li>\n<li>MySQL high availability is achieved through Galera deployment and custom resource agent scripts for Pacemaker. Note that HAProxy configures MySQL backends as active\/passive because OpenStack support for multi-node writes to Galera nodes is not production ready yet.<\/li>\n<li>Neutron agents are active\/passive and are managed by custom resource agent scripts for Pacemaker.<\/li>\n<li>Ceph monitors implement their own quorum-based HA mechanism and require time synchronization between all nodes. Clock drift higher than 50ms may break the quorum or even crash the Ceph service.<\/li>\n<\/ul>\n<\/div>\n<div id=\"compute-nodes\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Compute Nodes<\/h3>\n<div>OpenStack compute nodes are, in many ways, the foundation of your environment; they are the servers on which your users will create their Virtual Machines (VMs) and host their applications. Compute nodes need to talk to controller nodes and reach out to essential services such as RabbitMQ and MySQL. 
They use the same approach that provides redundancy to the end-users of Horizon and REST APIs, reaching out to controller nodes using the VIP and going through HAProxy.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/logical-diagram-compute.svg\" width=\"40%\" \/><\/div>\n<\/div>\n<div id=\"storage-nodes\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Storage Nodes<\/h3>\n<div>Depending on the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#storage-plan\"><em>storage options<\/em><\/a>\u00a0you select for your environment, you may have\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ceph-term\"><em>Ceph<\/em><\/a>,\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#cinder-term\"><em>Cinder<\/em><\/a>, and\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#swift-object-storage-term\"><em>Swift<\/em><\/a>\u00a0services running on your storage nodes.<\/div>\n<div>Ceph implements its own HA; all you need is enough controller nodes running the Ceph Monitor service to\u00a0<a class=\"reference external\" href=\"http:\/\/ceph.com\/docs\/master\/rados\/troubleshooting\/troubleshooting-mon\/\">form a quorum<\/a>, and enough Ceph OSD nodes to satisfy the\u00a0<a class=\"reference external\" href=\"http:\/\/ceph.com\/docs\/master\/rados\/operations\/pools\/\">object replication factor<\/a>.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/ceph_nodes.svg\" width=\"80%\" \/><\/div>\n<div>Swift API relies on the same HAProxy setup with VIP on controller nodes as the other REST APIs. 
If you don&#8217;t expect much data traffic in Swift, you can also deploy the Swift Storage and Proxy services on controller nodes. For a larger production environment you&#8217;ll need dedicated nodes: two for Swift Proxy and at least three for Swift Storage.<\/div>\n<div>Whether or not you&#8217;d want separate Swift nodes depends primarily on how much data you expect to keep there. A simple test is to fully populate your Swift object store with data and then fail one controller node. If replication of the degraded Swift objects between the remaining controller nodes generates enough network traffic, CPU load, or disk I\/O to impact the performance of other OpenStack services running on the same nodes, you should separate Swift from the controllers.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/logical-diagram-storage.svg\" width=\"40%\" \/><\/div>\n<div>If you select Cinder LVM as the block storage backend for Cinder volumes, you should have at least one Cinder LVM node. Unlike Swift and Ceph, Cinder LVM doesn&#8217;t implement data redundancy across nodes: if a Cinder node is lost, volumes stored on that node cannot be recovered from the data stored on other Cinder nodes. If you need your block storage to be resilient, use Ceph for volumes.<\/div>\n<\/div>\n<\/div>\n<div id=\"how-ha-with-pacemaker-and-corosync-works\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>How HA with Pacemaker and Corosync Works<\/h2>\n<div id=\"corosync-settings\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Corosync Settings<\/h3>\n<div>Corosync uses the Totem protocol, an implementation of the Virtual Synchrony protocol. 
It uses Totem to provide connectivity between cluster nodes, to decide whether the cluster is quorate and can provide services, and to provide a data layer for services that use the features of Virtual Synchrony.<\/div>\n<div>Corosync functions in Fuel as the communication and quorum service via the Pacemaker cluster resource manager (<cite>crm<\/cite>). Its main configuration file is located in\u00a0<code><span class=\"pre\">\/etc\/corosync\/corosync.conf<\/span><\/code>.<\/div>\n<div>The main Corosync section is the\u00a0<code><span class=\"pre\">totem<\/span><\/code>\u00a0section, which describes how cluster nodes should communicate:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>totem {\r\n  version:                             2\r\n  token:                               3000\r\n  token_retransmits_before_loss_const: 10\r\n  join:                                60\r\n  consensus:                           3600\r\n  vsftype:                             none\r\n  max_messages:                        20\r\n  clear_node_high_bit:                 yes\r\n  rrp_mode:                            none\r\n  secauth:                             off\r\n  threads:                             0\r\n  interface {\r\n    ringnumber:  0\r\n    bindnetaddr: 10.107.0.8\r\n    mcastaddr:   239.1.1.2\r\n    mcastport:   5405\r\n  }\r\n}\r\n<\/pre>\n<\/div>\n<\/div>\n<div>Corosync usually uses multicast UDP transport and sets up a &#8220;redundant ring&#8221; for communication. Currently Fuel deploys controllers with one redundant ring. Each ring has its own multicast address and bind net address, which specifies the interface on which Corosync should join the corresponding multicast group. 
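The flat `key: value` layout of the `totem` section shown above is easy to inspect programmatically; a small illustrative parser (not a Fuel tool):

```python
def parse_corosync(text):
    """Collect 'key: value' pairs from a corosync.conf-style block.
    Section markers like 'totem {' and '}' carry no colon and are skipped."""
    settings = {}
    for line in text.splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            settings[key.strip()] = value.strip()
    return settings

sample = """totem {
  token:       3000
  consensus:   3600
  interface {
    ringnumber:  0
    mcastaddr:   239.1.1.2
    mcastport:   5405
  }
}"""
cfg = parse_corosync(sample)
assert cfg["token"] == "3000" and cfg["mcastport"] == "5405"
```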
Fuel uses the default Corosync configuration, which can be altered in the Fuel manifests.<\/div>\n<div class=\"admonition seealso alert alert-info\">\n<div class=\"first admonition-title\">See also<\/div>\n<div class=\"last\"><code><span class=\"pre\">man<\/span>\u00a0<span class=\"pre\">corosync.conf<\/span><\/code>\u00a0or the Corosync documentation at\u00a0<a class=\"reference external\" href=\"http:\/\/clusterlabs.org\/doc\/\">http:\/\/clusterlabs.org\/doc\/<\/a>\u00a0for details on tuning the installation.<\/div>\n<\/div>\n<\/div>\n<div id=\"pacemaker-settings\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Pacemaker Settings<\/h3>\n<div>Pacemaker is the cluster resource manager used by Fuel to manage Neutron resources, HAProxy, virtual IP addresses, and the MySQL\/Galera cluster. This is done using Open Cluster Framework (see\u00a0<a class=\"reference external\" href=\"http:\/\/linux-ha.org\/wiki\/OCF_Resource_Agents\">http:\/\/linux-ha.org\/wiki\/OCF_Resource_Agents<\/a>) agent scripts, which are deployed to start, stop, and monitor the Neutron services and to manage HAProxy, virtual IP addresses, and MySQL replication. These are located at\u00a0<code><span class=\"pre\">\/usr\/lib\/ocf\/resource.d\/mirantis\/ocf-neutron-[metadata|ovs|dhcp|l3]-agent<\/span><\/code>,\u00a0<code><span class=\"pre\">\/usr\/lib\/ocf\/resource.d\/fuel\/mysql<\/span><\/code>, and\u00a0<code><span class=\"pre\">\/usr\/lib\/ocf\/resource.d\/ocf\/haproxy<\/span><\/code>. First, the MySQL agent is started, and HAProxy and the virtual IP addresses are set up. 
Then the Open vSwitch, metadata, L3, and DHCP agents are started as Pacemaker clones on all the nodes.<\/div>\n<div class=\"admonition seealso alert alert-info\">\n<div class=\"first admonition-title\">See also<\/div>\n<div class=\"last\"><a class=\"reference external\" href=\"http:\/\/clusterlabs.org\/doc\/en-US\/Pacemaker\/1.1\/html\/Pacemaker_Explained\/_using_rules_to_determine_resource_location.html\">Using Rules to Determine Resource Location<\/a><\/div>\n<\/div>\n<div>The MySQL HA script primarily targets cluster rebuild after a power failure or a similar disaster: it requires a working Corosync cluster, in which it forms a quorum of replication epochs and then elects as master the node with the newest epoch. Be aware of the default five-minute interval within which every cluster member should be booted in order to participate in this election. Every node is self-aware: if no other node pushes an epoch higher than the one it retrieved from Corosync, it simply elects itself as master.<\/div>\n<\/div>\n<div id=\"how-fuel-deploys-ha\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>How Fuel Deploys HA<\/h3>\n<div>Fuel installs the Corosync service, configures\u00a0<code><span class=\"pre\">corosync.conf<\/span><\/code>, and includes the Pacemaker service plugin into\u00a0<code><span class=\"pre\">\/etc\/corosync\/service.d<\/span><\/code>. The Corosync service then starts and spawns the corresponding Pacemaker processes. Fuel configures the cluster properties of Pacemaker and then injects resource configurations for the virtual IPs, HAProxy, MySQL, and Neutron agent resources.<\/div>\n<div>The running configuration can be retrieved from an OpenStack controller node by running:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre><span class=\"c\"># crm configure show<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"mysql-and-galera\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>MySQL and Galera<\/h2>\n<div>MySQL with Galera implements true active\/active HA. 
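The epoch-based master election described under Pacemaker Settings comes down to "the newest replication epoch wins". An illustrative model (not the resource agent's actual code):

```python
def elect_master(epochs):
    """Pick the node reporting the newest replication epoch, as the
    Pacemaker MySQL agent does when rebuilding the cluster after an outage.
    On a tie, the first node seen with the highest epoch wins."""
    return max(epochs, key=epochs.get)

# node-2 and node-3 tie at 1042, so node-2 (seen first) becomes master.
reported = {"node-1": 1040, "node-2": 1042, "node-3": 1042}
assert elect_master(reported) == "node-2"
```

A node that hears no higher epoch than its own from Corosync behaves the same way: its own epoch is the maximum, so it elects itself.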
Fuel configures MySQL\/Galera to have a single active node that receives write operations and serves read operations. You can add one or two Galera slave nodes; this is recommended for environments that have six or more nodes:<\/div>\n<ul class=\"simple\">\n<li>Only one MySQL\/Galera node is considered active at a time; the remaining cluster nodes are standby masters.<\/li>\n<li>The standby masters do not have the &#8220;slave lag&#8221; that is typical for MySQL master\/slave topologies because Galera employs synchronous replication and ensures that each cluster node is identical.<\/li>\n<li>Mirantis OpenStack uses Pacemaker and HAProxy to manage MySQL\/Galera:\n<ul>\n<li>Pacemaker manages the individual MySQL+Galera nodes, HAProxy, and the Virtual IP Address (VIP).<\/li>\n<li>HAProxy runs in the dedicated network namespace and manages connections between the MySQL\/Galera active master, the backup masters, and the MySQL clients connecting to the VIP.<\/li>\n<\/ul>\n<\/li>\n<li>Only one MySQL\/Galera master is active behind the VIP; this single-direction synchronous replication usually provides better performance than other implementations.<\/li>\n<\/ul>\n<div>The workflow is:<\/div>\n<ul class=\"simple\">\n<li>The node that is tied to the VIP serves new data updates and increases its global transaction ID number (GTID).<\/li>\n<li>Each other node in the Galera cluster then synchronizes its data with the node whose GTID is greater than its own.<\/li>\n<li>If any node falls too far behind the Galera cache, an entire replica is distributed to that node. 
This causes a master to switch to the Donor role, allowing an out-of-sync node to catch up.<\/li>\n<\/ul>\n<\/div>\n<div id=\"vmware-vsphere-integration\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>VMware vSphere Integration<\/h2>\n<div>This section provides technical details about how vCenter support is implemented in Mirantis OpenStack.<\/div>\n<ul class=\"simple\">\n<li>See\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#vcenter-plan\"><em>Preparing for vSphere Integration<\/em><\/a>\u00a0for information about planning the deployment;<\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#vcenter-deploy\"><em>Deploying vCenter<\/em><\/a>\u00a0gives instructions for creating and deploying a Mirantis OpenStack environment that is integrated with VMware vSphere.<\/li>\n<\/ul>\n<div>VMware provides a vCenter driver for OpenStack. This driver enables the Nova-compute service to communicate with a VMware vCenter server that manages one or more ESXi host clusters. The vCenter driver makes management convenient from both the OpenStack Dashboard (<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#horizon-term\"><em>Horizon<\/em><\/a>) and from vCenter, where advanced vSphere features can be accessed.<\/div>\n<div>This enables Nova-compute to deploy workloads on vSphere and allows vSphere features such as vMotion workload migration, vSphere High Availability, and Dynamic Resource Scheduling (DRS). DRS is enabled by architecting the driver to aggregate ESXi hosts in each cluster to present one large hypervisor entity to the Nova scheduler. This enables OpenStack to schedule to the granularity of clusters, then call vSphere DRS to schedule the individual ESXi host within the cluster. 
The vCenter driver also interacts with the OpenStack Image Service (Glance) to copy\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#vmdk-term\"><em>VMDK<\/em><\/a>\u00a0(VMware virtual machine) images from the back-end image store to a database cache from which they can be quickly retrieved after they are loaded.<\/div>\n<div>The vCenter driver requires the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#nova-network-term\"><em>Nova Network<\/em><\/a>\u00a0topology, which means that\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ovs-term\"><em>OVS (Open vSwitch)<\/em><\/a>\u00a0does not work with vCenter.<\/div>\n<div>The Nova-compute service runs on a Controller node, not on a separate Compute node. This means that, in the Multi-node Deployment mode, a user has a single Controller node with both compute and network services running.<\/div>\n<div>Unlike other hypervisor drivers that require the Nova-compute service to be running on the same node as the hypervisor itself, the vCenter driver enables the Nova-compute service to manage ESXi hypervisors remotely. 
This means that you do not need a dedicated Compute node to use the vCenter hypervisor; instead, Fuel puts the Nova-compute service on a Controller node.<\/div>\n<div id=\"dual-hypervisor-support\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Dual hypervisor support<\/h3>\n<div>Beginning with Fuel 6.1, you can deploy an environment with two hypervisors, vCenter and KVM\/QEMU, using availability zones.<\/div>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/dual-hyperv-arch.png\" alt=\"_images\/dual-hyperv-arch.png\" \/><\/p>\n<\/div>\n<div id=\"multi-node-ha-deployment-with-vsphere-integration\" class=\"section\">\n<h3>Multi-node HA Deployment with vSphere integration<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/vcenter-ha-architecture.png\" alt=\"_images\/vcenter-ha-architecture.png\" \/><\/p>\n<div>On a highly available Controller cluster (meaning that three or more Controller nodes are configured), the Nova-compute and Nova-network services can run either on the same or on different Controller nodes. 
If a service fails, it is restarted by\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#pacemaker-term\"><em>Pacemaker<\/em><\/a>\u00a0several times; if the service still fails to start, or the whole Controller node fails, the service is started on one of the available Controllers.<\/div>\n<\/div>\n<div id=\"example-of-network-topology\" class=\"section\">\n<h3>Example of network topology<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/vcenter-network-topology.png\" alt=\"_images\/vcenter-network-topology.png\" \/><\/p>\n<div>This is an example of the default Fuel OpenStack network configuration that a user should have if the target nodes have at least two NICs and are connected to a Fuel Admin (PXE) network with\u00a0<cite>eth0<\/cite>\u00a0interfaces.<\/div>\n<div>The Nova-network service must serve DHCP requests and NAT translations of the VMs&#8217; traffic, so the VMs on the ESXi nodes must be connected directly to the Fixed (Private) network. By default, this network uses VLAN 103 for the Nova-Network Flat DHCP topology. 
So, a user can create a tagged Port Group on the ESXi servers with VLAN 103 and connect the corresponding\u00a0<cite>vmnic<\/cite>\u00a0NIC to the same switch as the OpenStack Controller nodes.<\/div>\n<div>The Nova Compute service must be able to reach the vCenter management IP from the OpenStack Public network in order to connect to the vSphere API.<\/div>\n<\/div>\n<div id=\"fuel-running-under-vsphere\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Fuel running under vSphere<\/h3>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/Fuel_in_vCenter_networking.png\" alt=\"_images\/Fuel_in_vCenter_networking.png\" \/><\/p>\n<div>For information about configuring your vSphere environment so that you can install Fuel in it, see\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#fuel-on-vsphere-plan\"><em>Preparing to run Fuel on vSphere<\/em><\/a>.<\/div>\n<\/div>\n<\/div>\n<div id=\"ceph-monitors\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Ceph Monitors<\/h2>\n<div>Ceph monitors (MON) manage various maps like MON map, CRUSH map, and others. The CRUSH map is used by clients to deterministically select the storage devices (OSDs) to receive copies of the data. Ceph monitor nodes manage where the data should be stored and maintain data consistency with the Ceph OSD nodes that store the actual data.<\/div>\n<div>Ceph monitors implement HA using a master-master model:<\/div>\n<ul class=\"simple\">\n<li>One Ceph monitor node is designated the &#8220;leader.&#8221; This is the node that first received the most recent cluster map replica.<\/li>\n<li>Each other monitor node must sync its cluster map with the current leader.<\/li>\n<li>Each monitor node that is already sync&#8217;ed with the leader becomes a provider; the leader knows which nodes are currently providers. 
The leader tells the other nodes which provider they should use to sync their data.<\/li>\n<\/ul>\n<div>Ceph Monitors use the Paxos algorithm to determine all updates to the data they manage. All monitors that are in quorum have consistent up-to-date data because of this.<\/div>\n<div>You can read more in\u00a0<a class=\"reference external\" href=\"http:\/\/ceph.com\/docs\/master\/architecture\">Ceph documentation<\/a>.<\/div>\n<\/div>\n<div id=\"network-architecture\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Network Architecture<\/h2>\n<div id=\"logical-networks\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Logical Networks<\/h3>\n<div>For better network performance and manageability, Fuel places different types of traffic into separate logical networks. This section describes how to distribute the network traffic in an OpenStack environment.<\/div>\n<div id=\"index-9\">Admin (PXE) network (&#8220;Fuel network&#8221;)<\/div>\n<blockquote>\n<div>The Fuel Master node uses this network to provision and orchestrate the OpenStack environment. It is used during installation to provide DNS, DHCP, and gateway services to a node before that node is provisioned. Nodes retrieve their network configuration from the Fuel Master node using DHCP, which is why this network must be isolated from the rest of your network and must not have a DHCP server other than the Fuel Master running on it.<\/div>\n<\/blockquote>\n<div id=\"index-10\">Public network<\/div>\n<blockquote>\n<div>\n<div>The word &#8220;Public&#8221; means that these addresses can be used to communicate with the cluster and its VMs from outside of the cluster (the Internet, corporate network, end users).<\/div>\n<div>The public network provides connectivity to the globally routed address space for VMs. 
The IP address from the public network that has been assigned to a compute node is used as the source for the Source NAT performed for traffic going from VM instances on the compute node to the Internet.<\/div>\n<div>The public network also provides Virtual IPs for public endpoints, which are used to connect to OpenStack services APIs.<\/div>\n<div>Finally, the public network provides a contiguous address range for the floating IPs that are assigned to individual VM instances by the project administrator. Nova Network or Neutron services can then configure this address on the public network interface of the Network controller node. Environments based on Nova Network use iptables to create a Destination NAT from this address to the private IP of the corresponding VM instance through the appropriate virtual bridge interface on the Network controller node.<\/div>\n<div>For security reasons, the public network is usually isolated from other networks in the cluster.<\/div>\n<div>If you use tagged networks for your configuration and combine multiple networks onto one NIC, you should leave the Public network untagged on that NIC. This is not a requirement, but it simplifies external access to OpenStack Dashboard and public OpenStack API endpoints.<\/div>\n<\/div>\n<\/blockquote>\n<div>Storage network (Storage Replication)<\/div>\n<blockquote>\n<div>Part of a cluster&#8217;s internal network. It carries replication traffic from Ceph or Swift. Ceph public traffic is dispatched through br-mgmt bridge (Management network).<\/div>\n<\/blockquote>\n<div>Management network<\/div>\n<blockquote>\n<div>Part of the cluster&#8217;s internal network. It is used to put tagged VLAN traffic from private tenant networks on physical NIC interfaces. This network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes. 
The Management network also serves all other internal communications, including database queries, AMQP messaging, and high availability services.<\/div>\n<\/blockquote>\n<div>Private network (Fixed network)<\/div>\n<blockquote>\n<div>\n<div>The private network facilitates communication between each tenant&#8217;s VMs. Private network address spaces are not a part of the enterprise network address space; fixed IPs of virtual instances cannot be accessed directly from the rest of the Enterprise network.<\/div>\n<div>Just like the public network, the private network should be isolated from other networks in the cluster for security reasons.<\/div>\n<\/div>\n<\/blockquote>\n<div>Internal Network<\/div>\n<blockquote>\n<div>The internal network connects all OpenStack nodes in the environment. All components of an OpenStack environment communicate with each other using this network. This network must be isolated from both the private and public networks for security reasons. The internal network can also be used for serving iSCSI protocol exchanges between Compute and Storage nodes. The\u00a0<em>Internal Network<\/em>\u00a0is a general term: any network except the Public network can be regarded as Internal, for example, Storage or Management. Do not confuse\u00a0<em>Internal<\/em>\u00a0with\u00a0<em>Private<\/em>, as the latter refers only to networks within a tenant, which provide communication between the VMs of that specific tenant.<\/div>\n<\/blockquote>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">If you want to combine another network with the Admin network on the same network interface, you must leave the Admin network untagged. 
This is the default configuration and cannot be changed in the Fuel UI, although you can modify it by manually editing configuration files.<\/div>\n<\/div>\n<\/div>\n<div id=\"ha-deployment-for-networking\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>HA deployment for Networking<\/h3>\n<div>Fuel leverages\u00a0<a class=\"reference external\" href=\"http:\/\/www.linux-ha.org\/wiki\/Resource_agents\">Pacemaker resource agents<\/a>\u00a0in order to deploy highly available networking for OpenStack environments.<\/div>\n<div id=\"virtual-ip-addresses-deployment-details\" class=\"section\">\n<h4>Virtual IP addresses deployment details<\/h4>\n<div>Starting with the Fuel 5.0 release, the HAProxy service and the network interfaces running virtual IP addresses reside in a separate\u00a0<cite>haproxy<\/cite>\u00a0network namespace. Using a separate namespace forces the Linux kernel to treat connections from OpenStack services to HAProxy as remote, which ensures reliable failover of established connections when the management IP address migrates to another node. To achieve this, the resource agent scripts for\u00a0<cite>ocf:fuel:ns_haproxy<\/cite>\u00a0and\u00a0<cite>ocf:fuel:ns_IPaddr2<\/cite>\u00a0were hardened with network namespace support.<\/div>\n<div>Successful failover of the public VIP address requires controller nodes to perform active checking of the public gateway. Fuel configures the Pacemaker resource\u00a0<cite>clone_ping_vip__public<\/cite>, which migrates the public VIP if the controller cannot ping its public gateway.<\/div>\n<\/div>\n<div id=\"tcp-keepalive-configuration-details\" class=\"section\">\n<h4>TCP keepalive configuration details<\/h4>\n<div>Failover sometimes leaves dead connections behind. Detecting such connections requires additional assistance from the Linux kernel. 
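As a rough sketch of that assistance (the values below are illustrative assumptions, not the exact values Fuel applies), the TCP keepalive sysctls bound how long a dead peer can go unnoticed:

```python
# Worst-case dead-peer detection time with TCP keepalive:
# idle time before the first probe, plus one interval per unanswered probe.
def detection_seconds(keepalive_time, keepalive_intvl, keepalive_probes):
    return keepalive_time + keepalive_intvl * keepalive_probes

# Stock kernel defaults (net.ipv4.tcp_keepalive_* = 7200, 75, 9): about 2 hours.
default = detection_seconds(7200, 75, 9)
# Illustrative tuned values that give a 3-minute detection window.
tuned = detection_seconds(30, 30, 5)
print(default, tuned)  # 7875 180
```

The same arithmetic shows why tightening these sysctls shrinks the detection window from hours to minutes.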
To speed up the detection process from the default of two hours to a more acceptable 3 minutes, Fuel adjusts kernel parameters for\u00a0<cite>net.ipv4.tcp_keepalive_time<\/cite>,\u00a0<cite>net.ipv4.tcp_keepalive_intvl<\/cite>,\u00a0<cite>net.ipv4.tcp_keepalive_probes<\/cite>,\u00a0and\u00a0<cite>net.ipv4.tcp_retries2<\/cite>.<\/div>\n<\/div>\n<\/div>\n<div id=\"implementing-multiple-cluster-networks\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Implementing Multiple Cluster Networks<\/h3>\n<div>Mirantis OpenStack supports configuring multiple network domains per single OpenStack environment. This feature is used for environments that deploy a large number of target\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#node-term\"><em>nodes<\/em><\/a>, to avoid the broadcast storms that can occur when all nodes share a single L2 domain. Multiple Cluster Networks can be configured for OpenStack environments that use an encapsulation protocol such as\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#neutron-gre-ovs-arch\"><em>Neutron GRE<\/em><\/a>\u00a0and are deployed using Fuel 6.0 and later.<\/div>\n<div>This section discusses how support for multiple cluster networks is implemented.\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/operations.html#mcn-ops\"><em>Configuring Multiple Cluster Networks<\/em><\/a>\u00a0explains how to configure this feature for your Fuel environments.<\/div>\n<div>The Multiple Cluster Network feature is based on\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#node-group-term\"><em>Node Groups<\/em><\/a>, which are groupings of nodes in the current cluster:<\/div>\n<ul class=\"simple\">\n<li>Each of the major\u00a0<a class=\"reference internal\" 
href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>logical networks<\/em><\/a>\u00a0(public, management, storage, and fuelweb_admin) is associated with a Node Group rather than a cluster.<\/li>\n<li>Each Node Group belongs to a cluster.<\/li>\n<li>A default Node Group is created for each cluster. The default values are derived from Fuel Menu (for the Fuel Admin (PXE) network) and release metadata.<\/li>\n<li>Each cluster can support multiple Node Groups.<\/li>\n<\/ul>\n<div><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#nailgun-term\"><em>Nailgun<\/em><\/a>\u00a0manages multiple cluster networks:<\/div>\n<ul class=\"simple\">\n<li>A\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#node-term\"><em>node<\/em><\/a>\u00a0serializes its network information based on its relationship to networks in its Node Group.<\/li>\n<li>Each node must have a Node Group; if it is not explicitly assigned to one, it is assumed to be a member of the default Node Group. If a node is not configured properly, the cluster fails to deploy.<\/li>\n<li>A set of default networks is generated when a Node Group is created. 
These networks are deleted when the Node Group is deleted.<\/li>\n<li>Each\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>logical network<\/em><\/a>\u00a0is associated with a Node Group rather than with a cluster.<\/li>\n<li>Each fuelweb_admin network must have a DHCP network configured in the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/file-ref.html#dnsmasq-template-ref\"><em>dnsmasq.template<\/em><\/a>\u00a0file.<\/li>\n<li>DHCP requests can be forwarded to the Fuel Master node using either of the following methods:\n<ul>\n<li>configuring switches to relay DHCP<\/li>\n<li>using a relay client such as\u00a0dhcp-helper<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<div>The\u00a0<cite>nodegroups<\/cite>\u00a0table stores information about all configured Node Groups. To view the contents of this table, issue the\u00a0fuel nodegroup\u00a0command:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>[root@nailgun ~]# fuel nodegroup\r\n\r\nid | cluster | name\r\n---|---------|---------------\r\n1  | 1       | default\r\n2  | 1       | alpha\r\n<\/pre>\n<\/div>\n<\/div>\n<div>The fields displayed are:<\/div>\n<table class=\"field-list table\" border=\"0\" frame=\"void\" rules=\"none\">\n<colgroup>\n<col class=\"field-name\" \/>\n<col class=\"field-body\" \/><\/colgroup>\n<tbody valign=\"top\">\n<tr class=\"field-odd field\">\n<th class=\"field-name\">id:<\/th>\n<td class=\"field-body\">Sequential ID number assigned when the Node Group is created and used as the primary key for the Node Group.<\/td>\n<\/tr>\n<tr class=\"field-even field\">\n<th class=\"field-name\">cluster:<\/th>\n<td class=\"field-body\">Cluster with which the Node Group is associated.<\/td>\n<\/tr>\n<tr class=\"field-odd field\">\n<th class=\"field-name\">name:<\/th>\n<td class=\"field-body\">Display name for the Node Group, assigned by 
the operator.<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div>The\u00a0<cite>network_groups<\/cite>\u00a0table can be viewed in the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/file-ref.html#network-1-yaml-ref\"><em>network_1.yaml<\/em><\/a>\u00a0file.<\/div>\n<\/div>\n<div id=\"public-and-floating-ip-address-requirements\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Public and Floating IP address requirements<\/h3>\n<div>This section describes the Public and Floating IP address requirements for an OpenStack environment. Each network type (Nova-Network and Neutron) has distinct requirements.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">Public and Floating IP ranges must not intersect!<\/div>\n<\/div>\n<div id=\"nova-network-requirements\" class=\"section\">\n<h4>Nova-Network requirements<\/h4>\n<div>Both Public and Floating IP ranges should be defined within the same network segment (CIDR). If this is not possible, additional routing settings between these ranges are required on your hardware router to connect the two ranges.<\/div>\n<div>Public range with Nova-Network requirements:<\/div>\n<ul class=\"simple\">\n<li>Each deployed node requires one IP address from the Public IP range. In addition, two extra IP addresses for the environment&#8217;s Virtual IPs and one for the default gateway are required.<\/li>\n<\/ul>\n<div>Floating range with Nova-Network requirements:<\/div>\n<ul class=\"simple\">\n<li>Every VM instance connected to the external network requires one IP address from the Floating IP range. 
These IP addresses are assigned on demand and may be released from the VM and returned to the pool of non-assigned Floating IP addresses.<\/li>\n<\/ul>\n<\/div>\n<div id=\"neutron-requirements\" class=\"section\">\n<h4>Neutron requirements<\/h4>\n<div>Both Public and Floating IP ranges must be defined inside the same network segment (CIDR)! Fuel cannot configure Neutron with external workarounds at this time.<\/div>\n<div>Public range with Neutron requirements:<\/div>\n<ul class=\"simple\">\n<li>Each deployed Controller node and each deployed Zabbix node requires one IP address from the Public IP range. This IP address goes to the node&#8217;s bridge to the external network (&#8220;br-ex&#8221;).<\/li>\n<li>Two additional IP addresses for the environment&#8217;s Virtual IPs and one for the default gateway are required.<\/li>\n<\/ul>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<ul class=\"last simple\">\n<li>For 5.1 and later Neutron environments, Public IP addresses can be allocated either to all nodes or just to Controllers and Zabbix servers. By default, IP addresses are allocated to Controllers and Zabbix servers only. To get them allocated to all nodes,\u00a0Public network assignment -&gt; Assign public network to all nodes\u00a0should be selected on the\u00a0<cite>Settings<\/cite>\u00a0tab.<\/li>\n<li>When using Fuel 6.1 to manage 5.0.x environments, the environment must conform to the 5.0.x practice, so each target node must have a Public IP assigned to it, even when using Neutron.<\/li>\n<li>In Fuel 6.1, nodes that do not have Public IP addresses use Controllers to reach external networks. There is a virtual router running on Controller nodes (controlled by Corosync), which utilizes a pair of Public and Management Virtual IPs to NAT traffic from the Management to the Public network. 
Nodes with no Public IPs assigned use that Virtual IP on the Management network as their default gateway.<\/li>\n<\/ul>\n<\/div>\n<div>Floating range with Neutron requirements:<\/div>\n<ul class=\"simple\">\n<li>Each defined tenant, including the Admin tenant, requires one IP address from the Floating range.<\/li>\n<li>This IP address goes to the virtual interface of the tenant&#8217;s virtual router. Therefore, one Floating IP is assigned to the Admin tenant automatically as part of the OpenStack deployment process.<\/li>\n<li>Each VM instance connected to the external network requires one IP address from the Floating IP range. These IP addresses are assigned on demand and may be released from the VM and returned to the pool of non-assigned Floating IP addresses.<\/li>\n<\/ul>\n<\/div>\n<div id=\"example\" class=\"section\">\n<h4>Example<\/h4>\n<div>Calculate the number of required Public and Floating IP addresses using these formulas:<\/div>\n<div>Neutron<\/div>\n<ul class=\"simple\">\n<li>for the Public IP range: [(X+Y) + N];<\/li>\n<li>for the Floating range: [K+M].<\/li>\n<\/ul>\n<div>Nova-Network<\/div>\n<ul class=\"simple\">\n<li>for the Public IP range: [(X+Y+Z) + N];<\/li>\n<li>for the Floating IP range: [M].<\/li>\n<\/ul>\n<div><cite>Where:<\/cite><\/div>\n<ul class=\"simple\">\n<li>Number of nodes:\n<ul>\n<li>X\u00a0= controller nodes<\/li>\n<li>Y\u00a0= Zabbix nodes<\/li>\n<li>Z\u00a0= other nodes (Compute, Storage, and MongoDB)<\/li>\n<\/ul>\n<\/li>\n<li>K\u00a0= the number of virtual routers for all the tenants (provided all of them are connected to the external network)<\/li>\n<li>M\u00a0= the number of virtual instances you want to provide direct external access to<\/li>\n<li>N\u00a0= the number of extra IP addresses. 
It is 3 in total for the following:\n<ul>\n<li>2 for the environment&#8217;s virtual IPs:\n<ul>\n<li>virtual IP address for a virtual router<\/li>\n<li>public virtual IP address<\/li>\n<\/ul>\n<\/li>\n<li>1 for the default gateway<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<hr class=\"docutils\" \/>\n<div>Let&#8217;s consider the following environment:<\/div>\n<ul class=\"simple\">\n<li>X = 3 controller nodes<\/li>\n<li>Y = 1 Zabbix node<\/li>\n<li>Z = 10 compute + 5 Ceph OSD + 3 MongoDB nodes<\/li>\n<li>K = 10 tenants with one router for each tenant connected to the external network<\/li>\n<li>M = 100 VM instances with direct external access required<\/li>\n<li>N = 3 extra IP addresses<\/li>\n<\/ul>\n<div>Your calculations will result in the following number of required IP addresses:<\/div>\n<table class=\"table\" border=\"0\">\n<colgroup>\n<col width=\"28%\" \/>\n<col width=\"18%\" \/>\n<col width=\"18%\" \/>\n<col width=\"0%\" \/>\n<col width=\"16%\" \/>\n<col width=\"20%\" \/><\/colgroup>\n<tbody valign=\"top\">\n<tr class=\"row-odd\">\n<td rowspan=\"2\">\n<div class=\"first last line-block\">\n<div class=\"line\">Environment<\/div>\n<div class=\"line\">details<\/div>\n<\/div>\n<\/td>\n<td colspan=\"5\">\n<div class=\"first last line-block\">\n<div class=\"line\">Neutron\u00a0|\u00a0Nova-Network<\/div>\n<div class=\"line\">requirements for | requirements for<\/div>\n<\/div>\n<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>Public IPs<\/td>\n<td colspan=\"2\">Floating IPs<\/td>\n<td>Public IPs<\/td>\n<td>Floating IPs<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>X = 3<\/td>\n<td>\u2713<\/td>\n<td colspan=\"2\"><\/td>\n<td>\u2713<\/td>\n<td><\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>Y = 1<\/td>\n<td>\u2713<\/td>\n<td colspan=\"2\"><\/td>\n<td>\u2713<\/td>\n<td><\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Z = 18<\/td>\n<td>\u2713*<\/td>\n<td colspan=\"2\"><\/td>\n<td>\u2713<\/td>\n<td><\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>K = 10<\/td>\n<td><\/td>\n<td 
colspan=\"2\">\u2713<\/td>\n<td>n\/a<\/td>\n<td>n\/a<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>M = 100<\/td>\n<td><\/td>\n<td colspan=\"2\">\u2713<\/td>\n<td><\/td>\n<td>\u2713<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>N = 3<\/td>\n<td>\u2713<\/td>\n<td colspan=\"2\"><\/td>\n<td>\u2713<\/td>\n<td><\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Total:<\/td>\n<td>7\/25*<\/td>\n<td colspan=\"2\">110<\/td>\n<td>25<\/td>\n<td>100<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"admonition tip alert alert-info\">\n<div class=\"first admonition-title\">Tip<\/div>\n<div>\u2713*\u00a0&#8211; the additional Public IP range requirement for a 6.1 Neutron environment with\u00a0Public network assignment -&gt; Assign public network to all nodes\u00a0selected. In the example, it is [(X+Y+Z) + N] =\u00a025.<\/div>\n<div class=\"last\">n\/a\u00a0&#8211; this value is not applicable to Nova-Network environments.<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"router\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Router<\/h3>\n<div>Your network must have an IP address from the Public\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>Logical Networks<\/em><\/a>\u00a0configured on a router port as an &#8220;External Gateway&#8221;. Without this, your VMs are unable to access the outside world. 
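The Public and Floating IP arithmetic from the example above can be replayed in a short script (a sketch using the sample environment's numbers and the formulas from the Example section):

```python
# Sample environment from the Example section.
X, Y, Z = 3, 1, 18    # controller, Zabbix, and other (Compute/Storage/MongoDB) nodes
K, M, N = 10, 100, 3  # tenant routers, externally accessible VMs, extra IPs

# Neutron: by default only Controllers and Zabbix nodes get Public IPs.
neutron_public = (X + Y) + N      # 7
neutron_floating = K + M          # 110

# Nova-Network: every deployed node gets a Public IP.
nova_public = (X + Y + Z) + N     # 25
nova_floating = M                 # 100

print(neutron_public, neutron_floating, nova_public, nova_floating)  # 7 110 25 100
```

The results match the totals row of the table: 7/25* Public and 110 Floating for Neutron, 25 Public and 100 Floating for Nova-Network.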
In many of the examples provided in these documents, that IP is\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#conf-netw\"><em>12.0.0.1 in VLAN 101<\/em><\/a>.<\/div>\n<div>If you add a new router, be sure to set its gateway IP:<\/div>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/new_router.png\" alt=\"_images\/new_router.png\" \/>\u00a0<img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/set_gateway.png\" alt=\"_images\/set_gateway.png\" \/><\/p>\n<div>The Fuel UI includes a field on the networking tab for the gateway address. When OpenStack deployment starts, the network on each node is reconfigured to use this gateway IP address as the default gateway.<\/div>\n<div>If Floating addresses are from another L3 network, then you must configure the IP address (or multiple IPs if Floating addresses are from more than one L3 network) for them on the router as well. Otherwise, Floating IPs on nodes will be inaccessible.<\/div>\n<div>Consider the following routing recommendations when you configure your network:<\/div>\n<ul class=\"simple\">\n<li>Use the default routing via a router in the Public network<\/li>\n<li>Use the management network to access your management infrastructure (L3 connectivity if necessary)<\/li>\n<li>The Storage and VM networks should be configured without access to other networks (no L3 connectivity)<\/li>\n<\/ul>\n<\/div>\n<div id=\"switches\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Switches<\/h3>\n<div>You must manually configure your switches before deploying your OpenStack environment. Unfortunately, the set of configuration steps, and even the terminology used, differs from vendor to vendor; this section provides some vendor-agnostic information about how traffic should flow. 
We also provide sample switch configurations:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#cisco-2960g-neutron\"><em>Neutron Switch configuration (Cisco Catalyst 2960G)<\/em><\/a><\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#juniper-ex4200-neutron\"><em>Neutron Switch configuration (Juniper EX4200)<\/em><\/a><\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#cisco-2960g-nova\"><em>Nova-network Switch configuration (Cisco Catalyst 2960G)<\/em><\/a><\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/planning-guide.html#juniper-ex4200-nova\"><em>Nova-network Switch configuration (Juniper EX4200)<\/em><\/a><\/li>\n<\/ul>\n<div>To configure your switches:<\/div>\n<ul class=\"simple\">\n<li>Configure all access ports to allow non-tagged PXE booting connections from each slave node to the Fuel Master node. This network is referred to as the Fuel network.<\/li>\n<li>By default, the Fuel Master node uses the\u00a0<cite>eth0<\/cite>\u00a0interface to serve PXE requests on this network, but this can be changed\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#network-install\"><em>during installation<\/em><\/a>\u00a0of the Fuel Master node.<\/li>\n<li>If you use the\u00a0<cite>eth0<\/cite>\u00a0interface for PXE requests, you must set the switch port for\u00a0<cite>eth0<\/cite>\u00a0on the Fuel Master node to access mode.<\/li>\n<li>We recommend that you use the\u00a0<cite>eth0<\/cite>\u00a0interfaces of all other nodes for PXE booting as well. 
Corresponding ports must also be in access mode.<\/li>\n<li>Taking into account that this is the network for PXE booting, do not mix this L2 segment with any other network segments. Fuel runs a DHCP server, and, if there is another DHCP server on the same L2 network segment, both the company&#8217;s infrastructure and Fuel&#8217;s are unable to function properly.<\/li>\n<li>You must also configure each of the switch&#8217;s ports connected to nodes as an &#8220;STP Edge port&#8221; (or a &#8220;spanning-tree port fast trunk&#8221;, according to Cisco terminology). If you do not do that, DHCP timeout issues may occur.<\/li>\n<\/ul>\n<div>As soon as the Fuel network is configured, Fuel can operate. Other networks are required for OpenStack environments, and currently all of these networks live in VLANs over one or more physical interfaces on a node. This means that the switch should pass tagged traffic, and untagging is done on the Linux hosts.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">For the sake of simplicity, all the VLANs specified on the networks tab of the Fuel UI should be configured on switch ports, pointing to Slave nodes, as tagged.<\/div>\n<\/div>\n<div>Of course, it is possible to specify as tagged only certain ports for certain nodes. However, in the current version, all existing networks are automatically allocated for each node, with any role. The network check also verifies that tagged traffic can pass, even if some nodes do not require this check (for example, Cinder nodes do not need fixed network traffic).<\/div>\n<div>This is enough to deploy the OpenStack environment. However, from a practical standpoint, it is still not really usable because there is no connection to other corporate networks yet. To make that possible, you must configure uplink port(s).<\/div>\n<div>One of the VLANs may carry the office network. 
To provide access to the Fuel Master node from your network, any other free physical network interface on the Fuel Master node can be used and configured according to your network rules (static IP or DHCP). The same network segment can be used for Public and Floating ranges. In this case, you must provide the corresponding VLAN ID and IP ranges in the UI. One Public IP per node is used to SNAT traffic out of the VMs network, and one or more floating addresses per VM instance are used to get access to the VM from your network, or even the global Internet. To have a VM visible from the Internet is similar to having it visible from corporate network &#8211; corresponding IP ranges and VLAN IDs must be specified for the Floating and Public networks. One current limitation of Fuel is that the user must use the same L2 segment for both Public and Floating networks.<\/div>\n<div>Example configuration for one of the ports on a Cisco switch:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>interface GigabitEthernet0\/6               # switch port\r\ndescription s0_eth0 jv                     # description\r\nswitchport trunk encapsulation dot1q       # enables VLANs\r\nswitchport trunk native vlan 262           # access port, untags VLAN 262\r\nswitchport trunk allowed vlan 100,102,104  # 100,102,104 VLANs are passed with tags\r\nswitchport mode trunk                      # To allow more than 1 VLAN on the port\r\nspanning-tree portfast trunk               # STP Edge port to skip network loop\r\n                                           # checks (to prevent DHCP timeout issues)\r\nvlan 262,100,102,104                       # Might be needed for enabling VLANs\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"neutron-network-topologies\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Neutron Network Topologies<\/h2>\n<div>Neutron (formerly Quantum) is a service which provides Networking-as-a-Service functionality in OpenStack. 
It has a rich tenant-facing API for defining network connectivity and addressing in the cloud, and gives operators the ability to leverage different networking technologies to power their cloud networking.<\/div>\n<div>There are various deployment use cases for Neutron. Fuel supports the most common of them, called Per-tenant Routers with Private Networks. Each tenant has a virtual Neutron router with one or more private networks, which can communicate with the outside world. This allows full routing isolation for each tenant private network.<\/div>\n<div>Neutron is not, however, required in order to run an OpenStack environment. If you don&#8217;t need (or want) this added functionality, it&#8217;s perfectly acceptable to continue using nova-network.<\/div>\n<div>In order to deploy Neutron, you need to enable it in the Fuel configuration. Fuel sets up Neutron components on each of the controllers to act as a virtual Neutron router in HA (if deploying in HA mode).<\/div>\n<div id=\"neutron-versus-nova-network\" class=\"section\">\n<h3>Neutron versus Nova-Network<\/h3>\n<div>OpenStack networking with Neutron has some differences from Nova-network. Neutron is able to virtualize and manage both layer 2 (data link) and layer 3 (network) of the OSI network model, as compared to the simple layer 3 virtualization provided by nova-network. This is the main difference between the two networking models for OpenStack. Virtual networks (one or more) can be created for a single tenant, forming an isolated L2 network called a &#8220;private network&#8221;. Each private network can support one or many IP subnets. Private networks can be segmented using one of two different topologies:<\/div>\n<ul class=\"simple\">\n<li>VLAN segmentation\u00a0Ideally, &#8220;Private network&#8221; traffic is located on a dedicated network adapter that is attached to an untagged network port. It is, however, possible for this network to share a network adapter with other networks. 
In this case, you should use non-intersecting VLAN-ID ranges for &#8220;Private network&#8221; and other networks.<\/li>\n<li>GRE segmentation\u00a0In this mode of operation, Neutron does not require a dedicated network adapter. Neutron builds a mesh of GRE tunnels from each compute and controller node to every other node. Private networks for each tenant make use of this mesh for isolated traffic.<\/li>\n<\/ul>\n<div>Both Neutron topologies are based on\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ovs-term\"><em>OVS (Open vSwitch)<\/em><\/a>.<\/div>\n<\/div>\n<div id=\"neutron-with-vlan-segmentation-and-ovs\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Neutron with VLAN segmentation and OVS<\/h3>\n<div>The following diagram shows the network isolation using\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#ovs-term\"><em>OVS (Open vSwitch)<\/em><\/a>\u00a0and VLANs:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/Neutron_32_vlan_v2.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/Neutron_32_vlan_v2.png\" alt=\"_images\/Neutron_32_vlan_v2.png\" \/><\/a><\/p>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">You must have at least three network interfaces for this configuration.<\/div>\n<\/div>\n<\/div>\n<div id=\"neutron-with-gre-segmentation-and-ovs\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Neutron with GRE segmentation and OVS<\/h3>\n<div>A typical network configuration for Neutron with GRE segmentation might look like this:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/Neutron_32_gre_v2.png\"><img decoding=\"async\" 
src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/Neutron_32_gre_v2.png\" alt=\"_images\/Neutron_32_gre_v2.png\" \/><\/a><\/p>\n<div>Open vSwitch (OVS) GRE tunnels are provided through the Management network.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">This setup does not include a physical Private network.<\/div>\n<\/div>\n<div id=\"neutron-vlan-segmentation-planning\" class=\"section\">\n<p>&nbsp;<\/p>\n<h4>Neutron VLAN Segmentation Planning<\/h4>\n<div>Depending on the number of NICs you have in your node servers, you can use the following examples to plan your NIC assignment to the OpenStack\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>Logical Networks<\/em><\/a>. Note that you must have at least three NICs configured to use the Neutron VLAN topology.<\/div>\n<div>3 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 (br-eth1) &#8211; port for networks: Public\/Floating, Management, Storage<\/li>\n<li>eth2 (br-eth2) &#8211; port for Private network (where the number of VLANs depends on the number of tenant networks with a continuous range)<\/li>\n<\/ul>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan_3nics.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan_3nics.png\" alt=\"_images\/preinstall_d_vlan_3nics.png\" \/><\/a><\/p>\n<div>4 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; port for Administrative network<\/li>\n<li>eth1 (br-eth1) &#8211; port for networks: Public\/Floating, Management<\/li>\n<li>eth2 (br-eth2) &#8211; port for Private network, with defined VLAN range IDs<\/li>\n<li>eth3 (br-eth3) &#8211; 
port for Storage network<\/li>\n<\/ul>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan_4nics.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan_4nics.png\" alt=\"_images\/preinstall_d_vlan_4nics.png\" \/><\/a><\/p>\n<div>Routing recommendations<\/div>\n<ul class=\"simple\">\n<li>Use the default routing via a router in the Public network<\/li>\n<li>Use the management network to access your management infrastructure (L3 connectivity if necessary)<\/li>\n<li>Either the whole Administrative network, or only the Fuel server (via a dedicated NIC), should have Internet access<\/li>\n<li>The Storage and Private networks (VLANs) should be configured without access to other networks (no L3 connectivity)<\/li>\n<\/ul>\n<\/div>\n<div id=\"neutron-gre-segmentation-planning\" class=\"section\">\n<p>&nbsp;<\/p>\n<h4>Neutron GRE Segmentation Planning<\/h4>\n<div>Depending on the number of NICs you have in your node servers, you can use the following examples to plan your NIC assignment:<\/div>\n<div>2 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 (br-eth1) &#8211; port for networks: Public\/Floating, Management, Storage<\/li>\n<\/ul>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_2nics.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_2nics.png\" alt=\"_images\/preinstall_d_gre_2nics.png\" \/><\/a><\/p>\n<div>3 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 (br-eth1) &#8211; port for networks: Public\/Floating, Management<\/li>\n<li>eth2 (br-eth2) &#8211; port for Storage network<\/li>\n<\/ul>\n<p><a class=\"reference 
internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_3nics.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_3nics.png\" alt=\"_images\/preinstall_d_gre_3nics.png\" \/><\/a><\/p>\n<div>4 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 (br-eth1) &#8211; port for Management network<\/li>\n<li>eth2 (br-eth2) &#8211; port for Public\/Floating network<\/li>\n<li>eth3 (br-eth3) &#8211; port for Storage network<\/li>\n<\/ul>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_4nics.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_gre_4nics.png\" alt=\"_images\/preinstall_d_gre_4nics.png\" \/><\/a><\/p>\n<div>Routing recommendations<\/div>\n<ul class=\"simple\">\n<li>Use the default routing via a router in the Public network<\/li>\n<li>Use the management network to access your management infrastructure (L3 connectivity if necessary)<\/li>\n<li>Either the whole Administrative network, or only the Fuel server (via a dedicated NIC), should have Internet access<\/li>\n<li>The Storage and Private networks (VLANs) should be configured without access to other networks (no L3 connectivity)<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"known-limitations\" class=\"section\">\n<h3>Known limitations<\/h3>\n<ul>\n<li>\n<div class=\"first\">Neutron will not allocate a floating IP range for your tenants. After each tenant is created, a floating IP range must be created. Note that this does not prevent Internet connectivity for a tenant&#8217;s instances, but it would prevent them from receiving incoming connections. You, the administrator, should assign floating IP addresses for the tenant. 
Below are steps you can follow to do this:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre># get admin credentials:\r\nsource \/root\/openrc\r\n# get admin tenant-ID:\r\nkeystone tenant-list\r\n<\/pre>\n<\/div>\n<\/div>\n<table class=\"table\" border=\"0\">\n<colgroup>\n<col width=\"64%\" \/>\n<col width=\"19%\" \/>\n<col width=\"17%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr class=\"row-odd\">\n<th class=\"head\">id<\/th>\n<th class=\"head\">name<\/th>\n<th class=\"head\">enabled<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr class=\"row-even\">\n<td>b796f91df6b84860a7cd474148fb2229<\/td>\n<td>admin<\/td>\n<td>True<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>cba7b0ff68ee4985816ac3585c8e23a9<\/td>\n<td>services<\/td>\n<td>True<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre># create one floating-ip address for admin tenant:\r\nneutron floatingip-create --tenant-id=b796f91df6b84860a7cd474148fb2229 net04_ext\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">You can&#8217;t combine Private or Admin network with any other networks on one NIC.<\/div>\n<\/li>\n<li>\n<div class=\"first\">To deploy OpenStack using Neutron with GRE segmentation, each node requires at least 2 NICs.<\/div>\n<\/li>\n<li>\n<div class=\"first\">To deploy OpenStack using Neutron with VLAN segmentation, each node requires at least 3 NICs.<\/div>\n<\/li>\n<\/ul>\n<\/div>\n<div id=\"nic-assignment-example-neutron-vlan\" class=\"section\">\n<h3>NIC Assignment Example (Neutron VLAN)<\/h3>\n<div>The current architecture assumes the presence of 3 NICs, but it can be customized for two or 4+ network interfaces. Most servers are built with at least two network interfaces. In this case, let&#8217;s consider a typical example of three NIC cards. 
They are utilized as follows:<\/div>\n<dl class=\"docutils\">\n<dt>eth0:<\/dt>\n<dd>The Admin (PXE) network, used for communication with the Fuel Master node during deployment.<\/dd>\n<dt>eth1:<\/dt>\n<dd>The public network and floating IPs assigned to VMs.<\/dd>\n<dt>eth2:<\/dt>\n<dd>The private network, used for communication between OpenStack VMs, and the bridged VLAN interfaces.<\/dd>\n<\/dl>\n<div>The figure below illustrates the relevant nodes and networks in Neutron VLAN mode.<\/div>\n<div class=\"align-center\" align=\"center\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/080-networking-diagram.svg\" width=\"75%\" \/><\/div>\n<\/div>\n<\/div>\n<div id=\"nova-network-topologies\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Nova Network Topologies<\/h2>\n<div>Nova-network offers two options for deploying private networks for tenants:<\/div>\n<ul class=\"simple\">\n<li>FlatDHCP Manager<\/li>\n<li>VLAN Manager<\/li>\n<\/ul>\n<div>This section describes the Nova-network topologies. For more information about how the network managers work, you can read these blogs:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference external\" href=\"http:\/\/www.mirantis.com\/blog\/openstack-networking-flatmanager-and-flatdhcpmanager\/\">OpenStack Networking \u2013 FlatManager and FlatDHCPManager<\/a><\/li>\n<li><a class=\"reference external\" href=\"http:\/\/www.mirantis.com\/blog\/openstack-networking-vlanmanager\/\">OpenStack Networking for Scalability and Multi-tenancy with VLANManager<\/a><\/li>\n<\/ul>\n<div id=\"nova-network-flatdhcp-manager\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Nova-network FlatDHCP Manager<\/h3>\n<div>In this topology, a bridge (e.g.\u00a0br100) is configured on every Compute node and one of the machine&#8217;s physical interfaces is connected to it. Once the virtual machine is launched, its virtual interface connects to that bridge as well. 
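<\/div>
<div>As a minimal sketch (assuming illustrative names: bridge br100, physical NIC eth1, and a hypothetical VM tap device), the wiring described above could be reproduced with standard Linux bridge tools:<\/div>

```shell
# Create the shared bridge and attach the physical interface to it
# (all names here are illustrative, not taken from a live deployment)
brctl addbr br100
brctl addif br100 eth1
ip link set br100 up

# When a VM launches, nova-network plugs its tap device into the same bridge
brctl addif br100 tap0a1b2c3d   # hypothetical tap device name
```

<div>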
The same L2 segment is used for all OpenStack projects, which means that there is no L2 isolation between virtual hosts, even if they are owned by separate projects. Additionally, only one flat IP pool is defined for the entire environment. For this reason, it is called the\u00a0<em>Flat<\/em>\u00a0manager.<\/div>\n<div>The simplest case here is as shown on the following diagram of the FlatDHCPManager used with the multi-host scheme. Here the\u00a0<em>eth1<\/em>\u00a0interface is used to give network access to virtual machines, while the\u00a0<em>eth0<\/em>\u00a0interface is the management network interface.<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/flatdhcpmanager-mh_scheme.jpg\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/flatdhcpmanager-mh_scheme.jpg\" alt=\"_images\/flatdhcpmanager-mh_scheme.jpg\" \/><\/a><\/p>\n<div>Fuel deploys OpenStack in FlatDHCP mode with the\u00a0multi-host\u00a0feature enabled. Without this feature enabled, network traffic from each VM would go through the single gateway host, which creates a single point of failure. In\u00a0multi-host\u00a0mode, each Compute node becomes a gateway for all the VMs running on the host, providing a balanced networking solution: if one of the Compute nodes goes down, the rest of the environment remains operational.<\/div>\n<div>The current version of Fuel uses VLANs, even for the FlatDHCP network manager. 
On the Linux host, this is implemented in such a way that it is not the physical network interface that connects to the bridge, but rather a VLAN interface (e.g.\u00a0<em>eth0.102<\/em>).<\/div>\n<div>The following diagram illustrates FlatDHCPManager used with the single-interface scheme:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/flatdhcpmanager-sh_scheme.jpg\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/flatdhcpmanager-sh_scheme.jpg\" alt=\"_images\/flatdhcpmanager-sh_scheme.jpg\" \/><\/a><\/p>\n<div>For FlatDHCPManager to work, the designated switch port to which each Compute node is connected must be configured as a tagged (trunk) port with the required VLANs allowed (enabled, tagged). Virtual machines communicate with each other on L2, even if they are on different Compute nodes. If a virtual machine sends IP packets to a different network, they are routed on the host machine according to the routing table. 
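<\/div>
<div>A minimal sketch of how such a VLAN interface could be created and bridged on the host, assuming the VLAN ID 102 from the example above (the bridge name br100 is also illustrative):<\/div>

```shell
# Create a VLAN subinterface on top of the physical NIC
ip link add link eth0 name eth0.102 type vlan id 102
ip link set eth0.102 up

# Attach the VLAN interface (rather than eth0 itself) to the bridge
brctl addif br100 eth0.102
```

<div>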
The default route points to the gateway specified on the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#network-settings-ug\"><em>Network settings<\/em><\/a>\u00a0tab in the UI as the gateway for the Public network.<\/div>\n<div>The following diagram describes network configuration when you use Nova-network with FlatDHCP Manager:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_flat_dhcp.jpg\"><img decoding=\"async\" class=\"align-center\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_flat_dhcp.jpg\" alt=\"_images\/preinstall_d_flat_dhcp.jpg\" \/><\/a><\/p>\n<\/div>\n<div id=\"nova-network-vlan-manager\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Nova-network VLAN Manager<\/h3>\n<div>The Nova-network VLANManager topology is more suitable for large-scale clouds. The idea behind this topology is to separate groups of virtual machines owned by different projects into separate L2 networks. In VLANManager, this is done by tagging Ethernet frames with the VLAN ID assigned to a given project. This allows virtual machines inside a specific project to communicate with each other while seeing no traffic from VMs of other projects. 
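<\/div>
<div>For illustration only (the VLAN IDs, interface, and bridge names below are assumptions, not values from a real deployment), two projects could be isolated by giving each one its own VLAN subinterface and bridge:<\/div>

```shell
# Project A traffic is carried in VLAN 103, project B in VLAN 104
ip link add link eth2 name eth2.103 type vlan id 103
ip link add link eth2 name eth2.104 type vlan id 104

# Each project gets its own bridge, so the projects share no L2 segment
brctl addbr br103 && brctl addif br103 eth2.103
brctl addbr br104 && brctl addif br104 eth2.104
```

<div>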
Again, as with FlatDHCPManager, switch ports must be configured as tagged (trunk) ports to allow this scheme to work.<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/vlanmanager_scheme.jpg\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/vlanmanager_scheme.jpg\" alt=\"_images\/vlanmanager_scheme.jpg\" \/><\/a><\/p>\n<div>The following diagram describes network configuration when you use Nova-network with VLAN Manager:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan.jpg\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/preinstall_d_vlan.jpg\" alt=\"_images\/preinstall_d_vlan.jpg\" \/><\/a><\/p>\n<\/div>\n<div id=\"fuel-deployment-schema\" class=\"section\">\n<h3>Fuel Deployment Schema<\/h3>\n<div>OpenStack Compute nodes untag the packets that arrive VLAN-tagged on the physical interface and send them to the appropriate VMs. Apart from simplifying the configuration of VLAN Manager, Fuel adds no known limitations in this particular networking mode.<\/div>\n<\/div>\n<div id=\"configuring-the-network\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Configuring the network<\/h3>\n<div>Once you choose a networking topology (Nova-network FlatDHCP or VLAN), you must configure the equipment accordingly. 
The diagram below shows an example configuration (with a router network IP 12.0.0.1\/24).<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/physical-network.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/physical-network.png\" alt=\"_images\/physical-network.png\" \/><\/a><\/p>\n<div>Fuel operates with a set of\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>logical networks<\/em><\/a>. In this scheme, these logical networks are mapped as follows:<\/div>\n<ul class=\"simple\">\n<li>Admin (Fuel)\u00a0network: untagged on the scheme<\/li>\n<li>Public\u00a0network: VLAN 101<\/li>\n<li>Floating\u00a0network: VLAN 101<\/li>\n<li>Management\u00a0network: VLAN 100<\/li>\n<li>Storage\u00a0network: VLAN 102<\/li>\n<li>Fixed\u00a0network: VLANs 103-200<\/li>\n<\/ul>\n<\/div>\n<div id=\"nova-network-planning-examples\" class=\"section\">\n<h3>Nova-network Planning Examples<\/h3>\n<\/div>\n<div id=\"nova-network-flatdhcp\" class=\"section\">\n<h3>Nova-network FlatDHCP<\/h3>\n<div>Depending on the number of NICs you have in your node servers, you can use the following examples to plan your NIC assignment:<\/div>\n<div>1 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; VLAN tagged port for networks: Storage, Public\/Floating, Private, Management and Administrative (untagged)<\/li>\n<\/ul>\n<div>2 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; Management network (tagged), Storage network (tagged) and Administrative network \u00a0(untagged)<\/li>\n<li>eth1\u00a0&#8211; VLAN tagged port with VLANs for networks: Public\/Floating, Private<\/li>\n<\/ul>\n<div>3 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1\u00a0&#8211; VLAN tagged port with 
VLANs for networks: Public\/Floating, Private, Management<\/li>\n<li>eth2 &#8211; untagged port for Storage network<\/li>\n<\/ul>\n<div>4 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 &#8211; tagged port for networks: Public\/Floating, Management<\/li>\n<li>eth2 &#8211; untagged port for Private network<\/li>\n<li>eth3\u00a0&#8211; untagged port for Storage network<\/li>\n<\/ul>\n<div>Routing recommendations<\/div>\n<ul class=\"simple\">\n<li>Use the default routing via a router in the Public network<\/li>\n<li>Use the management network to access your management infrastructure (L3 connectivity if necessary)<\/li>\n<li>Either the whole Administrative network, or only the Fuel server (via a dedicated NIC), should have Internet access<\/li>\n<li>The Storage and Private networks (VLANs) should be configured without access to other networks (no L3 connectivity)<\/li>\n<\/ul>\n<\/div>\n<div id=\"nova-config-vlan\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Nova-network VLAN Manager<\/h3>\n<div>Depending on the number of NICs you have in your node servers, you can use the following examples to plan your NIC assignment to the OpenStack\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>Logical Networks<\/em><\/a>:<\/div>\n<div>1 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; VLAN tagged port for networks: Storage, Public\/Floating, Private (where the number of VLANs depends on the number of tenant networks with a continuous range), Management and Administrative network (untagged)<\/li>\n<\/ul>\n<div>2 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; Management network (tagged), Storage network (tagged) and Administrative network (untagged)<\/li>\n<li>eth1 &#8211; VLAN tagged port with a minimum of two VLANs for networks: Public\/Floating, Private (where the number of VLANs 
depends on the number of tenant networks with a continuous range)<\/li>\n<\/ul>\n<div>3 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 &#8211; VLAN tagged port for networks: Public\/Floating, Management, Private (where the number of VLANs depends on the number of tenant networks with a continuous range)<\/li>\n<li>eth2 &#8211; untagged port for Storage network<\/li>\n<\/ul>\n<div>4 NIC deployment<\/div>\n<ul class=\"simple\">\n<li>eth0 &#8211; untagged port for Administrative network<\/li>\n<li>eth1 &#8211; tagged port for networks: Public\/Floating, Management<\/li>\n<li>eth2 &#8211; VLAN tagged port for Private network, with defined VLAN range IDs (continuous range)<\/li>\n<li>eth3 &#8211; untagged port for Storage network<\/li>\n<\/ul>\n<div>Routing recommendations<\/div>\n<ul class=\"simple\">\n<li>Use the default routing via a router in the Public network<\/li>\n<li>Use the management network to access your management infrastructure (L3 connectivity if necessary)<\/li>\n<li>Either the whole Administrative network, or only the Fuel server (via a dedicated NIC), should have Internet access<\/li>\n<li>The Storage and Private networks (VLANs) should be configured without access to other networks (no L3 connectivity)<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"advanced-network-configuration-using-open-vswitch\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Advanced Network Configuration using Open vSwitch<\/h2>\n<div>The Neutron networking model uses Open vSwitch (OVS) bridges and Linux namespaces to create a flexible network setup and to isolate tenants from each other at the L2 and L3 layers. Mirantis OpenStack also provides a flexible network setup model based on Open vSwitch primitives, which you can use to customize your nodes. Its most popular feature is link aggregation. 
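<\/div>
<div>As an example of what link aggregation looks like at the Open vSwitch level, a bond of two NICs can be created with ovs-vsctl; the bridge and NIC names below are illustrative, and LACP requires matching configuration on the switch side:<\/div>

```shell
# Aggregate eth3 and eth4 into one bonded port on bridge br-bond0,
# using LACP with TCP/UDP-aware load balancing
ovs-vsctl add-bond br-bond0 bond0 eth3 eth4 \
    lacp=active bond_mode=balance-tcp
```

<div>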
While the FuelWeb UI uses a hardcoded per-node network model, the Fuel CLI tool allows you to modify it in your own way.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">When using encapsulation protocols for network segmentation, take header overhead into account to avoid guest network slowdowns from packet fragmentation or packet rejection. With a physical host MTU of 1500 the maximum instance (guest) MTU is 1430 for GRE and 1392 for VXLAN. When possible, increase MTU on the network infrastructure using jumbo frames. The default OpenVSwitch behavior in Mirantis OpenStack 6.0 and newer is to fragment packets larger than the MTU. In prior versions OpenVSwitch discards packets exceeding MTU. See\u00a0<a class=\"reference external\" href=\"http:\/\/docs.openstack.org\/icehouse\/install-guide\/install\/yum\/content\/neutron-ml2-network-node.html\">the Official OpenStack documentation<\/a>\u00a0for more information.<\/div>\n<\/div>\n<div id=\"reference-network-model-in-neutron\" class=\"section\">\n<h3>Reference Network Model in Neutron<\/h3>\n<div>The FuelWeb UI uses the following per-node network model:<\/div>\n<ul class=\"simple\">\n<li>Create an OVS bridge for each NIC except for the NIC with Admin network (for example,\u00a0br-eth0\u00a0bridge for\u00a0eth0\u00a0NIC) and put NICs into their bridges<\/li>\n<li>Create a separate bridge for each OpenStack network:\n<ul>\n<li>br-ex\u00a0for the Public network<\/li>\n<li>br-prv\u00a0for the Private network<\/li>\n<li>br-mgmt\u00a0for the Management network<\/li>\n<li>br-storage\u00a0for the Storage network<\/li>\n<\/ul>\n<\/li>\n<li>Connect each network&#8217;s bridge with an appropriate NIC bridge using an OVS patch with an appropriate VLAN tag.<\/li>\n<li>Assign network IP addresses to the corresponding bridges.<\/li>\n<\/ul>\n<div>Note that the Admin network IP address is assigned to its NIC directly.<\/div>\n<div>This network model 
allows the cluster administrator to manipulate cluster network entities and NICs separately, easily, and on the fly during the cluster life-cycle.<\/div>\n<\/div>\n<div id=\"adjust-the-network-configuration-via-cli\" class=\"section\">\n<h3>Adjust the Network Configuration via CLI<\/h3>\n<div>On a basic level, this network configuration is part of a data structure that provides instructions to the Puppet modules to set up a network on the current node. You can examine and modify this data using the Fuel CLI tool. Just download (then modify and upload if needed) the environment&#8217;s &#8216;deployment default&#8217; configuration:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>[root@fuel ~]# fuel --env 1 deployment default\r\ndirectory \/root\/deployment_1 was created\r\nCreated \/root\/deployment_1\/compute_1.yaml\r\nCreated \/root\/deployment_1\/controller_2.yaml\r\n[root@fuel ~]# vi .\/deployment_1\/compute_1.yaml\r\n[root@fuel ~]# fuel --env 1 deployment --upload\r\n<\/pre>\n<\/div>\n<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">Please, make sure you read\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#cli-usage\"><em>the Fuel CLI documentation<\/em><\/a>\u00a0carefully.<\/div>\n<\/div>\n<div>The part of this data structure that describes how to apply the network configuration is the &#8216;network_scheme&#8217; key in the top-level hash of the YAML file. Let&#8217;s take a closer look at this substructure. The value of the &#8216;network_scheme&#8217; key is a hash with the following keys:<\/div>\n<ul class=\"simple\">\n<li>interfaces\u00a0&#8211; A hash of NICs and their low-level\/physical parameters. 
You can set the MTU here.<\/li>\n<li>provider\u00a0&#8211; Set to &#8216;ovs&#8217; for Neutron.<\/li>\n<li>endpoints\u00a0&#8211; A hash of network ports (OVS ports or NICs) and their IP settings.<\/li>\n<li>roles\u00a0&#8211; A hash that specifies the mappings between the endpoints and internally-used roles in Puppet manifests (&#8216;management&#8217;, &#8216;storage&#8217;, and so on).<\/li>\n<li>transformations\u00a0&#8211; An ordered list of OVS network primitives.<\/li>\n<\/ul>\n<div>Here is an example of a &#8220;network_scheme&#8221; section in a node&#8217;s configuration, showing how to change MTU parameters:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>network_scheme:\r\n endpoints:\r\n   br-ex:\r\n     IP:\r\n     - 172.16.0.7\/24\r\n     gateway: 172.16.0.1\r\n   br-fw-admin:\r\n     IP:\r\n     - 10.20.0.7\/24\r\n   br-mgmt:\r\n     IP:\r\n     - 192.168.0.7\/24\r\n   br-prv:\r\n     IP: none\r\n   br-storage:\r\n     IP:\r\n     - 192.168.1.6\/24\r\n interfaces:\r\n   eth0:\r\n     mtu: 1234\r\n     L2:\r\n       vlan_splinters: 'off'\r\n   eth1:\r\n     mtu: 4321\r\n     L2:\r\n       vlan_splinters: 'off'\r\n   eth2:\r\n     L2:\r\n       vlan_splinters: 'off'\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"the-transformations-section\" class=\"section\">\n<h3>The &#8220;Transformations&#8221; Section<\/h3>\n<div>You can use four OVS primitives:<\/div>\n<ul class=\"simple\">\n<li>add-br\u00a0&#8211; To add an OVS bridge to the system<\/li>\n<li>add-port\u00a0&#8211; To add a port to an existing OVS bridge<\/li>\n<li>add-bond\u00a0&#8211; To create a bonded port in an OVS bridge and add aggregated NICs to it<\/li>\n<li>add-patch\u00a0&#8211; To create an OVS patch between two existing OVS bridges<\/li>\n<\/ul>\n<div>The primitives will be applied in the order they are listed.<\/div>\n<div>Here are the available options:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre><span 
class=\"p\">{<\/span>\r\n  <span class=\"s\">\"action\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"add-br\"<\/span><span class=\"p\">,<\/span>         <span class=\"c\"># type of primitive<\/span>\r\n  <span class=\"s\">\"name\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"xxx\"<\/span>               <span class=\"c\"># unique name of the new bridge<\/span>\r\n<span class=\"p\">},<\/span>\r\n<span class=\"p\">{<\/span>\r\n  <span class=\"s\">\"action\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"add-port\"<\/span><span class=\"p\">,<\/span>       <span class=\"c\"># type of primitive<\/span>\r\n  <span class=\"s\">\"name\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"xxx-port\"<\/span><span class=\"p\">,<\/span>         <span class=\"c\"># unique name of the new port<\/span>\r\n  <span class=\"s\">\"bridge\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"xxx\"<\/span><span class=\"p\">,<\/span>            <span class=\"c\"># name of the bridge where the port should be created<\/span>\r\n  <span class=\"s\">\"type\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"internal\"<\/span><span class=\"p\">,<\/span>         <span class=\"c\"># [optional; default: \"internal\"] a type of OVS<\/span>\r\n                              <span class=\"c\"># interface # for the port (see OVS documentation);<\/span>\r\n                              <span class=\"c\"># possible values:<\/span>\r\n                              <span class=\"c\"># \"system\", \"internal\", \"tap\", \"gre\", \"null\"<\/span>\r\n  <span class=\"s\">\"tag\"<\/span><span class=\"p\">:<\/span> <span class=\"mi\">0<\/span><span class=\"p\">,<\/span>                   <span class=\"c\"># [optional; default: 0] a 802.1q tag of traffic that<\/span>\r\n                              <span class=\"c\"># should be captured from an OVS bridge;<\/span>\r\n                              <span class=\"c\"># possible values: 0 (means port is a trunk),<\/span>\r\n  
                            <span class=\"c\"># 1-4094 (means port is an access)<\/span>\r\n  <span class=\"s\">\"trunks\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span>               <span class=\"c\"># [optional; default: []] a set of 802.1q tags<\/span>\r\n                              <span class=\"c\"># (integers from 0 to 4095) that are allowed to<\/span>\r\n                              <span class=\"c\"># pass through if \"tag\" option equals 0;<\/span>\r\n                              <span class=\"c\"># possible values: an empty list (all traffic passes),<\/span>\r\n                              <span class=\"c\"># 0 (untagged traffic only), 1 (strange behaviour;<\/span>\r\n                              <span class=\"c\"># shouldn't be used), 2-4095 (traffic with this<\/span>\r\n                              <span class=\"c\"># tag passes); e.g. [0,10,20]<\/span>\r\n  <span class=\"s\">\"port_properties\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span>      <span class=\"c\"># [optional; default: []] a list of additional<\/span>\r\n                              <span class=\"c\"># OVS port properties to modify them in OVS DB<\/span>\r\n  <span class=\"s\">\"interface_properties\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span> <span class=\"c\"># [optional; default: []] a list of additional<\/span>\r\n                              <span class=\"c\"># OVS interface properties to modify them in OVS DB<\/span>\r\n<span class=\"p\">},<\/span>\r\n<span class=\"p\">{<\/span>\r\n  <span class=\"s\">\"action\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"add-bond\"<\/span><span class=\"p\">,<\/span>       <span class=\"c\"># type of primitive<\/span>\r\n  <span class=\"s\">\"name\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"xxx-port\"<\/span><span class=\"p\">,<\/span>         <span class=\"c\"># unique name of the new bond<\/span>\r\n  <span class=\"s\">\"interfaces\"<\/span><span 
class=\"p\">:<\/span> <span class=\"p\">[],<\/span>           <span class=\"c\"># a set of two or more bonded interfaces' names;<\/span>\r\n                              <span class=\"c\"># e.g. ['eth1','eth2']<\/span>\r\n  <span class=\"s\">\"bridge\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"xxx\"<\/span><span class=\"p\">,<\/span>            <span class=\"c\"># name of the bridge where the bond should be created<\/span>\r\n  <span class=\"s\">\"tag\"<\/span><span class=\"p\">:<\/span> <span class=\"mi\">0<\/span><span class=\"p\">,<\/span>                   <span class=\"c\"># [optional; default: 0] an 802.1q tag of traffic that<\/span>\r\n                              <span class=\"c\"># should be captured from an OVS bridge;<\/span>\r\n                              <span class=\"c\"># possible values: 0 (means port is a trunk),<\/span>\r\n                              <span class=\"c\"># 1-4094 (means port is an access)<\/span>\r\n  <span class=\"s\">\"trunks\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span>               <span class=\"c\"># [optional; default: []] a set of 802.1q tags<\/span>\r\n                              <span class=\"c\"># (integers from 0 to 4095) which are allowed to<\/span>\r\n                              <span class=\"c\"># pass through if \"tag\" option equals 0;<\/span>\r\n                              <span class=\"c\"># possible values: an empty list (all traffic passes),<\/span>\r\n                              <span class=\"c\"># 0 (untagged traffic only), 1 (strange behavior;<\/span>\r\n                              <span class=\"c\"># shouldn't be used), 2-4095 (traffic with this<\/span>\r\n                              <span class=\"c\"># tag passes); e.g. 
[0,10,20]<\/span>\r\n  <span class=\"s\">\"properties\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span>           <span class=\"c\"># [optional; default: []] a list of additional<\/span>\r\n                              <span class=\"c\"># OVS bonded port properties to modify them in OVS DB;<\/span>\r\n                              <span class=\"c\"># you can use it to set the aggregation mode and<\/span>\r\n                              <span class=\"c\"># balancing # strategy, to configure LACP, and so on<\/span>\r\n                              <span class=\"c\"># (see the OVS documentation)<\/span>\r\n<span class=\"p\">},<\/span>\r\n<span class=\"p\">{<\/span>\r\n  <span class=\"s\">\"action\"<\/span><span class=\"p\">:<\/span> <span class=\"s\">\"add-patch\"<\/span><span class=\"p\">,<\/span>      <span class=\"c\"># type of primitive<\/span>\r\n  <span class=\"s\">\"bridges\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[<\/span><span class=\"s\">\"br0\"<\/span><span class=\"p\">,<\/span> <span class=\"s\">\"br1\"<\/span><span class=\"p\">],<\/span>  <span class=\"c\"># a pair of different bridges that will be connected<\/span>\r\n  <span class=\"s\">\"peers\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[<\/span><span class=\"s\">\"p1\"<\/span><span class=\"p\">,<\/span> <span class=\"s\">\"p2\"<\/span><span class=\"p\">],<\/span>      <span class=\"c\"># [optional] abstract names for each end of the patch<\/span>\r\n  <span class=\"s\">\"tags\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[<\/span><span class=\"mi\">0<\/span><span class=\"p\">,<\/span> <span class=\"mi\">0<\/span><span class=\"p\">]<\/span> <span class=\"p\">,<\/span>            <span class=\"c\"># [optional; default: [0,0]] a pair of integers that<\/span>\r\n                              <span class=\"c\"># represent an 802.1q tag of traffic that is<\/span>\r\n                              <span class=\"c\"># captured from an appropriate OVS bridge; 
possible<\/span>\r\n                              <span class=\"c\"># values: 0 (means port is a trunk), 1-4094 (means<\/span>\r\n                              <span class=\"c\"># port is an access)<\/span>\r\n  <span class=\"s\">\"trunks\"<\/span><span class=\"p\">:<\/span> <span class=\"p\">[],<\/span>               <span class=\"c\"># [optional; default: []] a set of 802.1q tags<\/span>\r\n                              <span class=\"c\"># (integers from 0 to 4095) which are allowed to<\/span>\r\n                              <span class=\"c\"># pass through each bridge if \"tag\" option equals 0;<\/span>\r\n                              <span class=\"c\"># possible values: an empty list (all traffic passes),<\/span>\r\n                              <span class=\"c\"># 0 (untagged traffic only), 1 (strange behavior;<\/span>\r\n                              <span class=\"c\"># shouldn't be used), 2-4095 (traffic with this<\/span>\r\n                              <span class=\"c\"># tag passes); e.g., [0,10,20]<\/span>\r\n<span class=\"p\">}<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<div>A combination of these primitives allows you to make custom and complex network configurations.<\/div>\n<\/div>\n<\/div>\n<div id=\"nics-aggregation\" class=\"section\">\n<h2>NICs Aggregation<\/h2>\n<div>The NIC bonding allows you to aggregate multiple physical links to one link to increase speed and provide fault tolerance.<\/div>\n<div>Documentation<\/div>\n<ul class=\"simple\">\n<li>The Linux kernel documentation about bonding can be found in Linux Ethernet Bonding Driver\u00a0<a class=\"reference external\" href=\"https:\/\/www.kernel.org\/doc\/Documentation\/networking\/bonding.txt\">HOWTO<\/a><\/li>\n<li>You can find shorter\u00a0<a class=\"reference external\" href=\"http:\/\/wiki.mikrotik.com\/wiki\/Manual:Interface\/Bonding\">introduction<\/a>\u00a0to bonding and tips on link monitoring here<\/li>\n<li>Cisco switches configuration\u00a0<a class=\"reference external\" 
href=\"http:\/\/www.cisco.com\/c\/en\/us\/td\/docs\/switches\/datacenter\/nexus3000\/sw\/layer2\/503_U2_1\/b_Cisco_n3k_layer2_config_guide_503_U2_1\/b_Cisco_n3k_layer2_config_gd_503_U2_1_chapter_01000.html\">guide<\/a><\/li>\n<li>Switches configuration tips for Fuel can be found\u00a0<a class=\"reference external\" href=\"https:\/\/etherpad.openstack.org\/p\/LACP_FUEL_bonding\">here<\/a><\/li>\n<\/ul>\n<div id=\"types-of-bonding\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Types of Bonding<\/h3>\n<div>Open vSwitch supports the same bonding features as the Linux kernel. Fuel supports bonding either via Open vSwitch or via Linux native bonding interfaces. Open vSwitch mode is supported in the Fuel UI and should be used by default. You may want to fall back to Linux native interfaces if Open vSwitch bonding does not work for you or is not compatible with your hardware.<\/div>\n<div>Linux supports two types of bonding:<\/div>\n<ul class=\"simple\">\n<li>IEEE 802.1AX (formerly known as 802.3ad) Link Aggregation Control Protocol (LACP). Devices on both sides of the links must communicate using LACP to set up an aggregated link, so both devices must support LACP and have it enabled and configured on these links.<\/li>\n<li>One-side bonding, which does not require any special feature support from the switch side. Linux handles it using a set of traffic balancing algorithms.<\/li>\n<\/ul>\n<div>One Side Bonding Policies:<\/div>\n<ul class=\"simple\">\n<li>Balance-rr\u00a0&#8211; Round-robin policy. This mode provides load balancing and fault tolerance.<\/li>\n<li>Active-backup\u00a0&#8211; Active-backup policy: Only one slave in the bond is active. This mode provides fault tolerance.<\/li>\n<li>Balance-xor\u00a0&#8211; XOR policy: Transmit based on the selected transmit hash policy. This mode provides load balancing and fault tolerance.<\/li>\n<li>Broadcast\u00a0&#8211; Broadcast policy: transmits everything on all slave interfaces. 
This mode provides fault tolerance.<\/li>\n<li>balance-tlb\u00a0&#8211; Adaptive transmit load balancing based on the current link utilization. This mode provides load balancing and fault tolerance.<\/li>\n<li>balance-alb\u00a0&#8211; Adaptive transmit and receive load balancing based on the current link utilization. This mode provides load balancing and fault tolerance.<\/li>\n<li>balance-slb\u00a0&#8211; Modification of balance-alb mode. SLB bonding allows a limited form of load balancing without the remote switch&#8217;s knowledge or cooperation. SLB assigns each source MAC+VLAN pair to a link and transmits all packets from that MAC+VLAN through that link. Learning in the remote switch causes it to send packets to that MAC+VLAN through the same link.<\/li>\n<li>balance-tcp\u00a0&#8211; Adaptive transmit load balancing among interfaces.<\/li>\n<\/ul>\n<div>LACP Policies:<\/div>\n<ul class=\"simple\">\n<li>Layer2\u00a0&#8211; Uses XOR of hardware MAC addresses to generate the hash.<\/li>\n<li>Layer2+3\u00a0&#8211; Uses a combination of layer2 and layer3 protocol information to generate the hash.<\/li>\n<li>Layer3+4\u00a0&#8211; Uses upper layer protocol information, when available, to generate the hash.<\/li>\n<li>Encap2+3\u00a0&#8211; Uses the same formula as Layer2+3 but relies on skb_flow_dissect to obtain the header fields, which may result in the use of inner headers if an encapsulation protocol is used. For example, this improves performance for tunnel users because packets are distributed according to the encapsulated flows.<\/li>\n<li>Encap3+4\u00a0&#8211; Similar to Encap2+3 but uses Layer3+4.<\/li>\n<\/ul>\n<div>Policies Supported by Fuel<\/div>\n<div>Fuel supports the following policies: Active Backup, Balance SLB, and LACP Balance TCP. 
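To make the hash policies concrete, here is a minimal sketch (illustrative Python, not kernel or Fuel code) of how a layer2 transmit hash maps a traffic flow onto one of the bonded slave links:

```python
# Illustrative sketch of a "layer2" transmit-hash policy: XOR the bytes of
# the source and destination MAC addresses, then take the result modulo the
# number of slave links. Function names are hypothetical.

def layer2_hash(src_mac: str, dst_mac: str) -> int:
    """XOR all bytes of both MAC addresses into a single hash value."""
    h = 0
    for mac in (src_mac, dst_mac):
        for byte in mac.split(":"):
            h ^= int(byte, 16)
    return h

def pick_slave(src_mac: str, dst_mac: str, n_slaves: int) -> int:
    """Map a MAC pair onto the index of one bonded slave interface."""
    return layer2_hash(src_mac, dst_mac) % n_slaves

# The same MAC pair always maps to the same slave, so packets of a single
# flow are never reordered across links.
slave = pick_slave("52:54:00:aa:01:02", "52:54:00:bb:03:04", 2)
```

A layer3+4 policy works the same way but feeds IP addresses and TCP/UDP ports into the hash, so different connections between the same two hosts can land on different links.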
These interfaces can be configured in the Fuel UI when nodes are being added to the environment, or by using the Fuel CLI and editing the YAML configuration manually.<\/div>\n<div>Network Verification in Fuel<\/div>\n<div>Fuel has limited network verification capabilities when working with bonds. Network connectivity can be checked only for a new cluster (not for a deployed one), so the check is done while nodes are in bootstrap and no bonds are up. Connectivity between slave interfaces can be checked, but not the bonds themselves.<\/div>\n<\/div>\n<div id=\"an-example-of-nic-aggregation-using-fuel-cli-tools\" class=\"section\">\n<h3>An Example of NIC Aggregation using Fuel CLI tools<\/h3>\n<div>Suppose you have a node with 4 NICs and you want to bond two of them with LACP enabled (&#8220;eth2&#8221; and &#8220;eth3&#8221; here) and then assign Private and Storage networks to them. The Admin network uses a dedicated NIC (&#8220;eth0&#8221;). The Management and Public networks use the last NIC (&#8220;eth1&#8221;).<\/div>\n<div>To create a bonding interface using Open vSwitch, do the following:<\/div>\n<ul class=\"simple\">\n<li>Create a separate OVS bridge &#8220;br-bond0&#8221; instead of &#8220;br-eth2&#8221; and &#8220;br-eth3&#8221;.<\/li>\n<li>Connect &#8220;eth2&#8221; and &#8220;eth3&#8221; to &#8220;br-bond0&#8221; as a bonded port with the property &#8220;lacp=active&#8221;.<\/li>\n<li>Connect the &#8220;br-prv&#8221; and &#8220;br-storage&#8221; bridges to &#8220;br-bond0&#8221; by OVS patches.<\/li>\n<li>Leave everything else unchanged.<\/li>\n<\/ul>\n<div>Here is an example of the &#8220;network_scheme&#8221; section in the node configuration:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>'network_scheme':\r\n  'provider': 'ovs'\r\n  'version': '1.0'\r\n  'interfaces':\r\n    'eth0': {}\r\n    'eth1': {}\r\n    'eth2': {}\r\n    'eth3': {}\r\n  'endpoints':\r\n    'br-ex':\r\n      'IP': ['172.16.0.2\/24']\r\n      'gateway': '172.16.0.1'\r\n    
'br-mgmt':\r\n      'IP': ['192.168.0.2\/24']\r\n    'br-prv': {'IP': 'none'}\r\n    'br-storage':\r\n      'IP': ['192.168.1.2\/24']\r\n    'eth0':\r\n      'IP': ['10.20.0.4\/24']\r\n  'roles':\r\n    'ex': 'br-ex'\r\n    'fw-admin': 'eth0'\r\n    'management': 'br-mgmt'\r\n    'private': 'br-prv'\r\n    'storage': 'br-storage'\r\n  'transformations':\r\n  - 'action': 'add-br'\r\n    'name': 'br-ex'\r\n  - 'action': 'add-br'\r\n    'name': 'br-mgmt'\r\n  - 'action': 'add-br'\r\n    'name': 'br-storage'\r\n  - 'action': 'add-br'\r\n    'name': 'br-prv'\r\n  - 'action': 'add-br'\r\n    'name': 'br-bond0'\r\n  - 'action': 'add-br'\r\n    'name': 'br-eth1'\r\n  - 'action': 'add-bond'\r\n    'bridge': 'br-bond0'\r\n    'interfaces': ['eth2', 'eth3']\r\n    'properties': ['lacp=active']\r\n    'name': 'bond0'\r\n  - 'action': 'add-port'\r\n    'bridge': 'br-eth1'\r\n    'name': 'eth1'\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-bond0', 'br-storage']\r\n    'tags': [103, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-eth1', 'br-ex']\r\n    'tags': [101, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-eth1', 'br-mgmt']\r\n    'tags': [102, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-bond0', 'br-prv']\r\n<\/pre>\n<\/div>\n<\/div>\n<div>If you are going to use Linux native bonding, follow these steps:<\/div>\n<ul>\n<li>\n<div class=\"first\">Create a new interface &#8220;bond0&#8221; instead of &#8220;br-eth2&#8221; and &#8220;br-eth3&#8221;.<\/div>\n<\/li>\n<li>\n<div class=\"first\">Connect &#8220;eth2&#8221; and &#8220;eth3&#8221; to &#8220;bond0&#8221; as a bonded port.<\/div>\n<\/li>\n<li>\n<div class=\"first\">Add &#8216;provider&#8217;: &#8216;lnx&#8217; to choose Linux native mode.<\/div>\n<\/li>\n<li>\n<div class=\"first\">Add properties as a hash instead of an array used in ovs mode. Properties are same as options used during the bonding kernel modules loading. You should provide which mode this bonding interface should use. 
Any other options are not mandatory. You can find all these options in the Linux Kernel Documentation.<\/div>\n<dl class=\"docutils\">\n<dt>&#8216;properties&#8217;:<\/dt>\n<dd>\n<div class=\"first last\">&#8216;mode&#8217;: 1<\/div>\n<\/dd>\n<\/dl>\n<\/li>\n<li>\n<div class=\"first\">Connect &#8220;br-prv&#8221; and &#8220;br-storage&#8221; bridges to &#8220;br-bond0&#8221; by OVS patches.<\/div>\n<\/li>\n<li>\n<div class=\"first\">Leave all of the other things unchanged.<\/div>\n<\/li>\n<\/ul>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>'network_scheme':\r\n  'provider': 'ovs'\r\n  'version': '1.0'\r\n  'interfaces':\r\n    'eth0': {}\r\n    'eth1': {}\r\n    'eth2': {}\r\n    'eth3': {}\r\n  'endpoints':\r\n    'br-ex':\r\n      'IP': ['172.16.0.2\/24']\r\n      'gateway': '172.16.0.1'\r\n    'br-mgmt':\r\n      'IP': ['192.168.0.2\/24']\r\n    'br-prv': {'IP': 'none'}\r\n    'br-storage':\r\n      'IP': ['192.168.1.2\/24']\r\n    'eth0':\r\n      'IP': ['10.20.0.4\/24']\r\n  'roles':\r\n    'ex': 'br-ex'\r\n    'fw-admin': 'eth0'\r\n    'management': 'br-mgmt'\r\n    'private': 'br-prv'\r\n    'storage': 'br-storage'\r\n  'transformations':\r\n  - 'action': 'add-br'\r\n    'name': 'br-ex'\r\n  - 'action': 'add-br'\r\n    'name': 'br-mgmt'\r\n  - 'action': 'add-br'\r\n    'name': 'br-storage'\r\n  - 'action': 'add-br'\r\n    'name': 'br-prv'\r\n  - 'action': 'add-br'\r\n    'name': 'br-bond0'\r\n  - 'action': 'add-br'\r\n    'name': 'br-eth1'\r\n  - 'action': 'add-bond'\r\n    'bridge': 'br-bond0'\r\n    'interfaces': ['eth2', 'eth3']\r\n    'provider': 'lnx'\r\n    'properties':\r\n      'mode': '1'\r\n    'name': 'bond0'\r\n  - 'action': 'add-port'\r\n    'bridge': 'br-eth1'\r\n    'name': 'eth1'\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-bond0', 'br-storage']\r\n    'tags': [103, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-eth1', 'br-ex']\r\n    'tags': [101, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': 
['br-eth1', 'br-mgmt']\r\n    'tags': [102, 0]\r\n  - 'action': 'add-patch'\r\n    'bridges': ['br-bond0', 'br-prv']\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"how-fuel-upgrade-works\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>How Fuel upgrade works<\/h2>\n<div>Users running Fuel 6.0 can upgrade the Fuel Master Node to the latest release. See\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#upgrade-patch-top-ug\"><em>Upgrading and Updating from Earlier Releases<\/em><\/a>\u00a0for instructions. This section discusses the processing flow for the Fuel upgrade.<\/div>\n<div>The upgrade is implemented with three upgrade engines (also called upgraders or upgrade stages). The engines are Python modules that are located in a\u00a0<a class=\"reference external\" href=\"https:\/\/github.com\/stackforge\/fuel-web\/tree\/master\/fuel_upgrade_system\/fuel_upgrade\/fuel_upgrade\/engines\">separate directory<\/a>:<\/div>\n<ul class=\"simple\">\n<li>Host system engine\u00a0&#8212; Copies new repositories to the Fuel Master node and installs the\u00a0<code><span class=\"pre\">fuel-6.1.0.rpm<\/span><\/code>\u00a0package and all its dependencies, such as Puppet manifests, bootstrap images, provisioning images, and so on.<\/li>\n<li>Docker engine:\n<ol class=\"arabic\">\n<li>Point the supervisor to a new directory with the configuration files. 
Since it is empty, no containers will be started by the supervisor.<\/li>\n<li>Stop old containers.<\/li>\n<li>Upload new Docker images.<\/li>\n<li>Run containers one by one, in the proper order.<\/li>\n<li>Generate new supervisor configs.<\/li>\n<li>Verify the services running in the containers.<\/li>\n<\/ol>\n<\/li>\n<li>OpenStack engine\u00a0&#8212; Installs all data that is required for the OpenStack patching feature.\n<ol class=\"arabic\">\n<li>Adds new releases using the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#nailgun-term\"><em>Nailgun<\/em><\/a>\u00a0REST API. This allows the full list of OpenStack releases to be displayed in the Fuel UI.<\/li>\n<\/ol>\n<\/li>\n<\/ul>\n<div>Design considerations:<\/div>\n<ul class=\"simple\">\n<li>The Docker engine does not use\u00a0supervisord\u00a0to run the services during upgrade because it can cause race conditions, especially if the iptables clean-up script runs at the same time. In addition,\u00a0supervisord\u00a0may not always be able to start all containers, which can result in NAT rules that have the same port number but different IP addresses.<\/li>\n<li>Stopping containers during the upgrade process may interrupt non-atomic actions such as database migration in the Keystone container.<\/li>\n<li>Running containers one by one prevents IP duplication problems that could otherwise occur during the upgrade because of a Docker IP allocation bug.<\/li>\n<li>A set of\u00a0<a class=\"reference external\" href=\"https:\/\/github.com\/stackforge\/fuel-web\/tree\/master\/fuel_upgrade_system\/fuel_upgrade\/fuel_upgrade\/pre_upgrade_hooks\">pre-upgrade hooks<\/a>\u00a0are run before the upgrade engines to perform some necessary preliminary steps for upgrade. This is not the optimal implementation, but is required for Fuel to manage environments that were deployed with earlier versions that had a different design. 
For example, one of these hooks adds default login credentials to the configuration file before the upgrade process runs; this is required because earlier versions of Fuel did not have the authentication feature.<\/li>\n<\/ul>\n<\/div>\n<div id=\"how-the-operating-system-role-is-provisioned\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>How the Operating System Role is provisioned<\/h2>\n<div>Fuel provisions the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#operating-system-role-term\"><em>Operating System Role<\/em><\/a>\u00a0with either the CentOS or Ubuntu operating system that was selected for the environment but\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#puppet-term\"><em>Puppet<\/em><\/a>\u00a0does not deploy other packages on this node or provision the node in any way.<\/div>\n<div>The Operating System role is defined in the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/file-ref.html#openstack-yaml-ref\"><em>openstack.yaml<\/em><\/a>\u00a0file; the internal name is\u00a0base-os. Fuel installs a standard set of operating system packages similar to what it installs on other roles; use the\u00a0dpkg -l\u00a0command on Ubuntu or the\u00a0rpm -qa\u00a0command on CentOS to see the exact list of packages that are installed.<\/div>\n<div>A few configurations are applied to an Operating System role. For environments provisioned with the traditional tools, these configurations are applied by\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#cobbler-term\"><em>Cobbler<\/em><\/a>\u00a0snippets that run during the provisioning phase. When using image-based provisioning,\u00a0cloud init\u00a0applies these configurations. 
These include:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#customize-partitions-ug\"><em>Disk partitioning<\/em><\/a>. The default partitioning allocates a small partition (about 15GB) on the first disk for the\u00a0<cite>root<\/cite>\u00a0partition and leaves the rest of the space unallocated; users can manually allocate the remaining space.<\/li>\n<li>The\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#public-key-ug\"><em>public key<\/em><\/a>\u00a0that is assigned to all target nodes in the environment<\/li>\n<li>The\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#kernel-parameters-ug\"><em>Kernel parameters<\/em><\/a>\u00a0that are applied to all target nodes<\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#network-settings-ug\"><em>Network settings<\/em><\/a>\u00a0configure the Admin\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>logical networks<\/em><\/a>\u00a0with a static IP address. No other networking is configured.<\/li>\n<\/ul>\n<div>The following configurations that are set in the Fuel Web UI have no effect on the Operating System role:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#map-logical-to-physical\"><em>Mapping of logical networks to physical interfaces<\/em><\/a>. 
All connections for the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html#logical-networks-arch\"><em>logical networks<\/em><\/a>\u00a0that connect this node to the rest of the environment need to be defined.<\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#debug-level-ug\"><em>Debug logging<\/em><\/a><\/li>\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#syslog-ug\"><em>Syslog<\/em><\/a><\/li>\n<\/ul>\n<div>See\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/operations.html#operating-system-role-ops\"><em>Configuring an Operating System node<\/em><\/a>\u00a0for information about configuring a provisioned Operating System role.<\/div>\n<\/div>\n<div id=\"two-provisioning-methods\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Two provisioning methods<\/h2>\n<div>There are two methods of provisioning an operating system on a node:<\/div>\n<ol class=\"arabic simple\">\n<li>Classic method &#8212; Anaconda or Debian-installer is used to build the operating system from scratch on each node, using online or local repositories.<\/li>\n<li>Image based method &#8212; A base image is created and copied to each node, where it is used to deploy the operating system on the local disks.<\/li>\n<\/ol>\n<div>Starting with Mirantis OpenStack 6.1, the image based method is used by default. It significantly reduces provisioning time, and copying the same image to all nodes is more reliable than building an operating system from scratch on each node.<\/div>\n<\/div>\n<div id=\"image-based-provisioning\" class=\"section\">\n<h2>Image Based Provisioning<\/h2>\n<div>Image based provisioning is implemented using the Fuel Agent. 
The image based provisioning process consists of two independent steps:<\/div>\n<ol class=\"arabic simple\">\n<li>Operating system image building.<\/li>\n<\/ol>\n<div>In this step, an operating system is installed from a set of repositories into a directory, and that directory is then packed into an operating system image. The build script is run once, no matter how many nodes are going to be deployed.<\/div>\n<div>Currently, the CentOS image is built at the development stage; this image is put into the Mirantis OpenStack ISO and used for all CentOS based environments.<\/div>\n<div>Ubuntu images are built on the master node, one operating system image per environment. A different image is needed for each environment because each environment has its own set of repositories, and the package sets may differ between them. When the user clicks the &#8220;Deploy changes&#8221; button, we check whether an operating system image is already available for the particular environment, and if it is not, we build a new one just before starting the actual provisioning.<\/div>\n<ol class=\"arabic simple\" start=\"2\">\n<li>Copying the operating system image to nodes.<\/li>\n<\/ol>\n<div>Operating system images that have been built can be downloaded via HTTP from the Fuel Master node. So, when a node is booted into the so-called Bootstrap operating system, we can run an executable script to download the necessary operating system image and put it on a hard drive. We do not need to reboot the node into an installer OS as we do when using Anaconda or Debian-installer; our executable script plays the same role, and we just need it to be installed into the Bootstrap operating system.<\/div>\n<div>Both of these steps are handled by a dedicated program component called Fuel Agent. Fuel Agent is essentially a set of data driven executable scripts. 
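The copy step boils down to streaming the prepared image onto each node's disk. Here is a minimal sketch, with a hypothetical image URL and target device (illustrative only, not the actual Fuel Agent code):

```python
# Illustrative sketch of the "copy image to node" step: stream an OS image
# over HTTP and write it to a block device in fixed-size chunks.
# The URL, device path, and function names are hypothetical.
import urllib.request

def write_image(source, target_path: str, chunk_size: int = 1 << 20) -> int:
    """Copy a readable binary stream to target_path; return bytes written."""
    written = 0
    with open(target_path, "wb") as target:
        while True:
            chunk = source.read(chunk_size)
            if not chunk:
                break
            target.write(chunk)
            written += len(chunk)
    return written

def provision_disk(image_url: str, device: str) -> int:
    # e.g. image_url = "http://10.20.0.2:8080/targetimages/env_1_ubuntu.img"
    #      device    = "/dev/sda3"  (a partition prepared beforehand)
    with urllib.request.urlopen(image_url) as response:
        return write_image(response, device)
```

Because every node receives a bit-for-bit copy of the same image, there is no per-node package installation that can fail halfway through.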
One of these scripts is used for building operating system images; we run it on the master node, passing it a set of repository URIs and a set of package names. Another script is used for the actual provisioning; we run it on each node and pass provisioning data to it. These data contain information about disk partitions, the initial node configuration, the operating system image location, etc. When run on a node, this script prepares disk partitions, downloads the operating system images, and puts these images on the partitions. Note that when we say operating system image, we actually mean a set of images, one per file system. If, for example, we want\u00a0<code><span class=\"pre\">\/<\/span><\/code>\u00a0and\u00a0<code><span class=\"pre\">\/boot<\/span><\/code>\u00a0to be two separate file systems, we need two separate operating system images, one for\u00a0<code><span class=\"pre\">\/<\/span><\/code>\u00a0and another for\u00a0<code><span class=\"pre\">\/boot<\/span><\/code>. Images in this case are binary copies of the corresponding file systems.<\/div>\n<\/div>\n<div id=\"fuel-agent\" class=\"section\">\n<h2>Fuel Agent<\/h2>\n<div>Fuel Agent is a set of data driven executable scripts written in Python. Its high-level architecture is depicted below:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-architecture.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-architecture.png\" alt=\"_images\/fuel-agent-architecture.png\" \/><\/a><\/p>\n<div>When we run one of its executable entry points, we pass it input data that describe what needs to be done and how, and we specify which data driver it should use to parse these input data. 
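The split between raw input data, a data driver that parses them, and logic that acts only on parsed objects can be sketched as follows (class and field names here are illustrative, not the real Fuel Agent code):

```python
# Illustrative sketch of the data-driven pattern described above: an entry
# point receives raw input data plus a data driver; the driver parses the
# raw data into plain objects that the high-level manager understands.
# All class and field names are hypothetical.
import json
from dataclasses import dataclass

@dataclass
class Partition:
    device: str
    size_mb: int
    mount: str

class NailgunLikeDriver:
    """Converts raw JSON provision data into objects."""
    def __init__(self, raw: str):
        self.data = json.loads(raw)

    def partitions(self):
        return [Partition(p["device"], p["size_mb"], p["mount"])
                for p in self.data["partitioning"]]

class Manager:
    """High-level logic: works only with parsed objects, never raw data."""
    def __init__(self, driver):
        self.driver = driver

    def plan(self):
        # A real manager would call low-level utilities (parted, mkfs) here.
        return [f"mkpart {p.device} {p.size_mb}MB -> {p.mount}"
                for p in self.driver.partitions()]

raw = '{"partitioning": [{"device": "/dev/sda1", "size_mb": 200, "mount": "/boot"}]}'
plan = Manager(NailgunLikeDriver(raw)).plan()
```

Swapping the input format then only requires a new driver; the manager and the utility layer stay unchanged.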
For example:<\/div>\n<div class=\"highlight-sh\">\n<div class=\"highlight\">\n<pre>\/usr\/bin\/provision --input_data_file \/tmp\/provision.json --data_driver nailgun\r\n<\/pre>\n<\/div>\n<\/div>\n<div>The heart of Fuel Agent is the manager\u00a0<code><span class=\"pre\">fuel_agent\/manager.py<\/span><\/code>, which does not directly understand the input data, but does understand the sets of Python objects defined in\u00a0<code><span class=\"pre\">fuel_agent\/objects<\/span><\/code>. The data driver is where raw input data are converted into a set of objects. Using this set of objects, the manager then does something useful, like creating partitions or building operating system images. The manager implements only the high-level logic for these cases and uses a low-level utility layer, defined in\u00a0<code><span class=\"pre\">fuel_agent\/utils<\/span><\/code>, to perform real actions like launching the parted or mkfs commands.<\/div>\n<div>The Fuel Agent config file is located in\u00a0<code><span class=\"pre\">\/etc\/fuel-agent\/fuel-agent.conf<\/span><\/code>. Many configuration parameters can be set; all of them have default values defined in the source code, and all are well commented.<\/div>\n<div>The Fuel Agent leverages cloud-init for the image based deployment process. It also creates a\u00a0<a class=\"reference external\" href=\"https:\/\/cloudinit.readthedocs.org\/en\/latest\/\">cloud-init drive<\/a>\u00a0which allows for post-provisioning configuration. The config drive uses jinja2 templates which can be found in\u00a0<code><span class=\"pre\">\/usr\/share\/fuel-agent\/cloud-init-templates<\/span><\/code>. 
These templates are filled with values taken from the input data.<\/div>\n<\/div>\n<div id=\"image-building\" class=\"section\">\n<h2>Image building<\/h2>\n<div>When an Ubuntu based environment is being provisioned, a pre-provisioning task runs the\u00a0<code><span class=\"pre\">\/usr\/bin\/fa_build_image<\/span><\/code>\u00a0script. This script, one of the executable Fuel Agent entry points, is installed in the &#8216;mcollective&#8217; docker container on the Fuel master node. As input data, we pass a list of Ubuntu repositories from which the operating system image is built, plus some other metadata. When launched, Fuel Agent checks whether an Ubuntu image is already available for this environment; if it is not, it builds an operating system image and puts it in a directory defined in the input data, so as to make it available via HTTP. See the sequence diagram below:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-build-image-sequence.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-build-image-sequence.png\" alt=\"_images\/fuel-agent-build-image-sequence.png\" \/><\/a><\/p>\n<\/div>\n<div id=\"operating-system-provisioning\" class=\"section\">\n<h2>Operating system provisioning<\/h2>\n<div>The Fuel Agent is installed into a bootstrap ramdisk, so an operating system can easily be installed on a node that has been booted with this ramdisk. We can simply run the\u00a0<code><span class=\"pre\">\/usr\/bin\/provision<\/span><\/code>\u00a0executable with the required input data to start provisioning. This allows provisioning to occur without a reboot, unlike the classic provisioning method that uses Anaconda or Debian-installer.<\/div>\n<div>The input data need to contain at least the following information:<\/div>\n<ul class=\"simple\">\n<li>Partitioning scheme for the node. 
This scheme must describe the necessary partitions and the disks on which to create them, the necessary LVM groups and volumes, and any software RAID devices. It also specifies the disk on which the bootloader is to be installed, as well as the necessary file systems and their mount points. Operating system images are put on some block devices (one image per file system), while file systems are created on the other block devices using the\u00a0<code><span class=\"pre\">mkfs<\/span><\/code>\u00a0command.<\/li>\n<li>Operating system image URIs. Fuel Agent needs to know where to download the images and which protocol to use for this (by default, HTTP is used).<\/li>\n<li>Data for initial node configuration. Currently, we use cloud-init for the initial configuration; Fuel Agent prepares the cloud-init config drive, which is put on a small partition at the end of the first hard drive. The config drive is created from jinja2 templates that are filled with values taken from the input data. After the first reboot, cloud-init is run by upstart or a similar init system. It then finds this config drive and configures services like NTP, MCollective, etc. 
It also performs an initial network configuration to make it possible for Fuel to access this particular node via SSH or MCollective and run Puppet to perform the final deployment.<\/li>\n<\/ul>\n<div>The sequence diagram is below:<\/div>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-sequence.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-agent-sequence.png\" alt=\"_images\/fuel-agent-sequence.png\" \/><\/a><\/p>\n<div id=\"viewing-the-control-files-on-the-fuel-master-node\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Viewing the control files on the Fuel Master node<\/h3>\n<div>To view the contents of the bootstrap ramdisk, run the following commands on the Fuel Master node:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>cd \/var\/www\/nailgun\/bootstrap\r\nmkdir initramfs\r\ncd initramfs\r\ngunzip -c ..\/initramfs.img | cpio -idv\r\n<\/pre>\n<\/div>\n<\/div>\n<div>You are now in the root file system of the ramdisk and can view the files that are included in the bootstrap node. 
For example:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre><span class=\"n\">cat<\/span> <span class=\"o\">\/<\/span><span class=\"n\">etc<\/span><span class=\"o\">\/<\/span><span class=\"n\">fuel<\/span><span class=\"o\">-<\/span><span class=\"n\">agent<\/span><span class=\"o\">\/<\/span><span class=\"n\">fuel<\/span><span class=\"o\">-<\/span><span class=\"n\">agent<\/span><span class=\"o\">.<\/span><span class=\"n\">conf<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"troubleshooting-image-based-provisioning\" class=\"section\">\n<h3>Troubleshooting image-based provisioning<\/h3>\n<div>The following files provide information for analyzing problems with the Fuel Agent provisioning.<\/div>\n<ul class=\"simple\">\n<li>Bootstrap\n<ul>\n<li><em>etc\/fuel-agent\/fuel-agent.conf<\/em>\u00a0&#8212; main configuration file for the Fuel Agent, defines the location of the provision data file, data format and log output, whether debugging is on or off, and so forth.<\/li>\n<li><em>tmp\/provision.json<\/em>\u00a0&#8212; Astute puts this file on a node (on the in-memory file system) just before running the\u00a0provision\u00a0script.<\/li>\n<li><em>usr\/bin\/provision<\/em>\u00a0&#8212; executable entry point for provisioning. 
Astute runs this; it can also be run manually.<\/li>\n<\/ul>\n<\/li>\n<li>Master\n<ul>\n<li><em>var\/log\/remote\/node-N.domain.tld\/bootstrap\/fuel-agent.log<\/em>\u00a0&#8212; this is where Fuel Agent log messages are recorded when the\u00a0provision\u00a0script is run; &lt;N&gt; is the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/terminology.html#node-term\"><em>node<\/em><\/a>\u00a0ID of the provisioned node.<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"task-based-deployment\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Task-based deployment<\/h2>\n<div id=\"task-schema\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Task schema<\/h3>\n<div>Tasks that are used to build a deployment graph can be grouped according to the common types:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>- id: graph_node_id\r\n  type: one of [stage, group, skipped, puppet, shell, etc.]\r\n  role: [match where this task should be executed]\r\n  requires: [requirements for a specific node]\r\n  required_for: [specify which nodes depend on this task]\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"stages\" class=\"section\">\n<h3>Stages<\/h3>\n<div>Stages are used to build a graph skeleton. 
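The requires/required_for fields in the task schema above effectively define a dependency graph, so working out an execution order amounts to a topological sort. A minimal sketch (not Fuel's actual scheduler), using a chain of stage-like task ids:

```python
# Minimal sketch of ordering tasks by their "requires" edges, as in the
# task schema above. Illustration only, not Fuel's actual scheduler.
from graphlib import TopologicalSorter  # Python 3.9+

# Each task maps to the list of tasks it requires (its predecessors).
tasks = {
    "pre_deployment_start": [],
    "pre_deployment_end":   ["pre_deployment_start"],
    "deploy_start":         ["pre_deployment_end"],
    "deploy_end":           ["deploy_start"],
}

# static_order() yields every task after all of its requirements.
order = list(TopologicalSorter(tasks).static_order())
```

A cycle in the requires edges would raise `graphlib.CycleError`, which is exactly the kind of graph mistake a deployment tool must reject.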
The skeleton is then extended with additional functionality like provisioning, etc.<\/div>\n<div>The deployment graph of Fuel 6.1 has the following stages:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre><span class=\"o\">-<\/span> <span class=\"n\">pre_deployment_start<\/span>\r\n<span class=\"o\">-<\/span> <span class=\"n\">pre_deployment_end<\/span>\r\n<span class=\"o\">-<\/span> <span class=\"n\">deploy_start<\/span>\r\n<span class=\"o\">-<\/span> <span class=\"n\">deploy_end<\/span>\r\n<span class=\"o\">-<\/span> <span class=\"n\">post_deployment_start<\/span>\r\n<span class=\"o\">-<\/span> <span class=\"n\">post_deployment_end<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<div>Here is a stage example:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">deploy_end<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">stage<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">deploy_start<\/span><span class=\"p-Indicator\">]<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"groups\" class=\"section\">\n<h3>Groups<\/h3>\n<div>In Fuel 6.1, groups are a representation of roles in the main deployment graph:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>- id: controller\r\n  type: group\r\n  role: [controller]\r\n  requires: [primary-controller]\r\n  required_for: [deploy_end]\r\n  parameters:\r\n    strategy:\r\n      type: parallel\r\n      amount: 6\r\n<\/pre>\n<\/div>\n<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">The primary-controller group must already be deployed when the controller group starts its own 
execution. This group must finish before\u00a0<code><span class=\"pre\">deploy_end<\/span><\/code>\u00a0is considered done.<\/div>\n<\/div>\n<div>Here is the full graph of groups available in 6.1:<\/div>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/groups.png\" alt=\"_images\/groups.png\" \/><\/p>\n<div id=\"strategy\" class=\"section\">\n<h4>Strategy<\/h4>\n<div>You can also specify a strategy for groups in the\u00a0<code><span class=\"pre\">parameters<\/span><\/code>\u00a0section. Fuel 6.1 supports the following strategies:<\/div>\n<ul class=\"simple\">\n<li>parallel &#8211; all nodes in this group will be executed in parallel. If there are other groups that do not depend on each other, they will be executed in parallel as well. For example, Cinder and Compute groups.<\/li>\n<li>parallel by amount &#8211; run in parallel by a specified number. For example,\u00a0<code><span class=\"pre\">amount:<\/span>\u00a0<span class=\"pre\">6<\/span><\/code>.<\/li>\n<li>one_by_one &#8211; deploy all nodes in this group in a strict one-by-one succession.<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"skipped\" class=\"section\">\n<h3>Skipped<\/h3>\n<div>Making a task\u00a0<code><span class=\"pre\">skipped<\/span><\/code>\u00a0will guarantee that this task will not be executed, but all the task&#8217;s dependencies will be preserved:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">netconfig<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">skipped<\/span>\r\n  <span class=\"l-Scalar-Plain\">groups<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">primary-controller<\/span><span class=\"p-Indicator\">,<\/span> <span 
class=\"nv\">controller<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">cinder<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">compute<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">ceph-osd<\/span><span class=\"p-Indicator\">,<\/span>\r\n           <span class=\"nv\">zabbix-server<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">primary-mongo<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">mongo<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">deploy_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">logging<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n    <span class=\"l-Scalar-Plain\">puppet_manifest<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules\/osnailyfacter\/other_path\/netconfig.pp<\/span>\r\n    <span class=\"l-Scalar-Plain\">puppet_modules<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules<\/span>\r\n    <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">3600<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"puppet\" class=\"section\">\n<h3>Puppet<\/h3>\n<div>Task of\u00a0<code><span class=\"pre\">type:<\/span>\u00a0<span class=\"pre\">puppet<\/span><\/code>\u00a0is the preferable way to execute the deployment code on nodes. 
Only the mcollective agent is capable of executing code in the background.<\/div>\n<div>In Fuel 6.1, this is the only task type that can be used in the main deployment stages, between\u00a0<code><span class=\"pre\">deploy_start<\/span><\/code>\u00a0and\u00a0<code><span class=\"pre\">deploy_end<\/span><\/code>.<\/div>\n<div>Example:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">netconfig<\/span>\r\n    <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">puppet<\/span>\r\n    <span class=\"l-Scalar-Plain\">groups<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">primary-controller<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">controller<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">cinder<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">compute<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">ceph-osd<\/span><span class=\"p-Indicator\">,<\/span>\r\n             <span class=\"nv\">zabbix-server<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">primary-mongo<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">mongo<\/span><span class=\"p-Indicator\">]<\/span>\r\n    <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">deploy_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n    <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">logging<\/span><span class=\"p-Indicator\">]<\/span>\r\n    <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n      <span 
class=\"l-Scalar-Plain\">puppet_manifest<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules\/osnailyfacter\/other_path\/netconfig.pp<\/span>\r\n      <span class=\"l-Scalar-Plain\">puppet_modules<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules<\/span>\r\n      <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">3600<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"shell\" class=\"section\">\n<h3>Shell<\/h3>\n<div>Shell tasks should be used outside of the main deployment procedure. Basically, shell tasks will just execute the blocking command on specified roles.<\/div>\n<div>Example:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">enable_quorum<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">shell<\/span>\r\n  <span class=\"l-Scalar-Plain\">role<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">primary-controller<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">post_deployment_start<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">post_deployment_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n    <span class=\"l-Scalar-Plain\">cmd<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">ruby 
\/etc\/puppet\/modules\/osnailyfacter\/modular\/astute\/enable_quorum.rb<\/span>\r\n    <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">180<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"upload-file\" class=\"section\">\n<h3>Upload file<\/h3>\n<div>This task will upload the data specified in the\u00a0<code><span class=\"pre\">data<\/span><\/code>\u00a0parameter to the\u00a0<code><span class=\"pre\">path<\/span><\/code>\u00a0destination:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">upload_data_to_file<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">upload_file<\/span>\r\n  <span class=\"l-Scalar-Plain\">role<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'*'<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">pre_deployment_start<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n    <span class=\"l-Scalar-Plain\">path<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/file_name<\/span>\r\n    <span class=\"l-Scalar-Plain\">data<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'arbitrary<\/span> <span class=\"s\">info'<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"sync\" class=\"section\">\n<h3>Sync<\/h3>\n<div>The sync task will distribute files from the\u00a0<code><span class=\"pre\">src<\/span><\/code>\u00a0directory on the Fuel Master node to the\u00a0<code><span class=\"pre\">dst<\/span><\/code>\u00a0directory on target hosts that will be matched by role:<\/div>\n<div class=\"highlight-yaml\">\n<div 
class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">rsync_core_puppet<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">sync<\/span>\r\n  <span class=\"l-Scalar-Plain\">role<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'*'<\/span>\r\n  <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">pre_deployment_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">upload_core_repos<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n    <span class=\"l-Scalar-Plain\">src<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">rsync:\/\/&lt;FUEL_MASTER_IP&gt;:\/puppet\/<\/span>\r\n    <span class=\"l-Scalar-Plain\">dst<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet<\/span>\r\n    <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"copy-files\" class=\"section\">\n<h3>Copy files<\/h3>\n<div>Task with\u00a0<code><span class=\"pre\">copy_files<\/span><\/code>\u00a0type will read data from\u00a0<code><span class=\"pre\">src<\/span><\/code>\u00a0and save it in the file specified in\u00a0<code><span class=\"pre\">dst<\/span><\/code>\u00a0argument. 
Permissions can be specified for a group of files, as provided in example:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">copy_keys<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">copy_files<\/span>\r\n  <span class=\"l-Scalar-Plain\">role<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'*'<\/span>\r\n  <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">pre_deployment_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">generate_keys<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n    <span class=\"l-Scalar-Plain\">files<\/span><span class=\"p-Indicator\">:<\/span>\r\n      <span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">src<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/var\/lib\/fuel\/keys\/{CLUSTER_ID}\/neutron\/neutron.pub<\/span>\r\n        <span class=\"l-Scalar-Plain\">dst<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/var\/lib\/astute\/neutron\/neutron.pub<\/span>\r\n    <span class=\"l-Scalar-Plain\">permissions<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'0600'<\/span>\r\n    <span class=\"l-Scalar-Plain\">dir_permissions<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"s\">'0700'<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"api\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>API<\/h3>\n<div>If you want to change or add some tasks right on the Fuel Master 
node, just add the\u00a0<code><span class=\"pre\">tasks.yaml<\/span><\/code>\u00a0file and respective manifests in the folder for the release that you are interested in. Then run the following command:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel rel --sync-deployment-tasks --dir \/etc\/puppet\r\n<\/pre>\n<\/div>\n<\/div>\n<div>If you want to overwrite the deployment tasks for any specific release\/cluster, use the following commands:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel rel --rel &lt;id&gt; --deployment-tasks --download\r\nfuel rel --rel &lt;id&gt; --deployment-tasks --upload\r\n\r\nfuel env --env &lt;id&gt; --deployment-tasks --download\r\nfuel env --env &lt;id&gt; --deployment-tasks --upload\r\n<\/pre>\n<\/div>\n<\/div>\n<div>After this is done, you will be able to run a customized graph of tasks. To do that, use a basic command:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --tasks upload_repos netconfig\r\n<\/pre>\n<\/div>\n<\/div>\n<div>The developer will need to specify nodes that should be used in deployment and task IDs. The order in which these are provided does not matter. 
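The dependency-driven ordering can be sketched as follows (hypothetical task data; not the actual Nailgun implementation):

```python
# Hypothetical sketch: the task ids passed on the command line are treated
# as a set; their run order is recomputed from stored "requires" edges.
# Not the actual Nailgun implementation.
requires = {
    "upload_repos": [],
    "logging": ["upload_repos"],
    "netconfig": ["logging"],
}

def run_order(requested, requires):
    """Return the requested tasks sorted by dependency depth."""
    def depth(task):
        deps = requires.get(task, [])
        return 0 if not deps else 1 + max(depth(d) for d in deps)
    return sorted(requested, key=depth)

# The order given by the caller does not matter:
print(run_order(["netconfig", "upload_repos"], requires))
# ['upload_repos', 'netconfig']
```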
It will be computed from the dependencies specified in the database.<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">A task will not be executed on a node if the task is mapped to the Controller role but the node where you want to apply the task does not have this role.<\/div>\n<\/div>\n<\/div>\n<div id=\"skipping-tasks\" class=\"section\">\n<h3>Skipping tasks<\/h3>\n<div>Use the\u00a0<code><span class=\"pre\">skip<\/span><\/code>\u00a0parameter to skip tasks:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --skip netconfig hiera\r\n<\/pre>\n<\/div>\n<\/div>\n<div>The list of tasks specified with the\u00a0<code><span class=\"pre\">skip<\/span><\/code>\u00a0parameter will be skipped during graph traversal in Nailgun.<\/div>\n<div>If there are task dependencies, you may want to make use of a &#8220;smarter&#8221; traversal &#8211; you will need to specify the start and end nodes in the graph:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --end netconfig\r\n<\/pre>\n<\/div>\n<\/div>\n<div>This will deploy everything up to the netconfig task, including it. 
This means that this command will deploy all tasks that are a part of\u00a0<code><span class=\"pre\">pre_deployment<\/span><\/code>: keys generation, rsync manifests, time sync, and repo upload, including such tasks as hiera setup, globals computation, and other basic preparatory tasks:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --start netconfig\r\n<\/pre>\n<\/div>\n<\/div>\n<div>This starts from the\u00a0<code><span class=\"pre\">netconfig<\/span><\/code>\u00a0task (including it) and deploys all the tasks that are a part of\u00a0<code><span class=\"pre\">post_deployment<\/span><\/code>.<\/div>\n<div>For example, if you want to execute only the netconfig successors, use:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --start netconfig --skip netconfig\r\n<\/pre>\n<\/div>\n<\/div>\n<div>You will also be able to use\u00a0<code><span class=\"pre\">start<\/span><\/code>\u00a0and\u00a0<code><span class=\"pre\">end<\/span><\/code>\u00a0at the same time:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --start netconfig --end upload_cirros\r\n<\/pre>\n<\/div>\n<\/div>\n<div>Nailgun will build a path that includes only the tasks necessary to join these two points.<\/div>\n<\/div>\n<div id=\"graph-representation\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Graph representation<\/h3>\n<div>Beginning with Fuel 6.1, in addition to the commands above, there is also a helper that allows you to download the deployment graph in\u00a0<a class=\"reference external\" href=\"http:\/\/www.graphviz.org\/doc\/info\/lang.html\">DOT<\/a>\u00a0format and later render it.<\/div>\n<div id=\"commands-for-downloading-graphs\" class=\"section\">\n<h4>Commands for downloading graphs<\/h4>\n<div>Use the following commands to download graphs:<\/div>\n<ul>\n<li>\n<div class=\"first\">To 
download a full graph for environment with id 1 and print it on the screen, use the command below. Note, that it will print its output to the stdout.<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To download graph and save it to the\u00a0<code><span class=\"pre\">graph.gv<\/span><\/code>\u00a0file:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">It is also possible to specify the same options as for the deployment command. Point out start and end nodes in graph:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download --start netconfig &gt; graph.gv\r\n\r\nfuel graph --env &lt;1&gt; --download --end netconfig &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">You can also specify both:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download --start netconfig --end upload_cirros &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To skip the tasks (they will be grayed out in the graph visualization), use:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download --skip netconfig hiera  &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To completely remove skipped tasks from graph visualization, use\u00a0<code><span class=\"pre\">--remove<\/span><\/code>\u00a0parameter:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download --start netconfig --end upload_cirros --remove skipped &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To see only parents of a particular 
task:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env 1 --download --parents-for hiera  &gt; graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<\/ul>\n<\/div>\n<div id=\"commands-for-rendering-graphs\" class=\"section\">\n<h4>Commands for rendering graphs<\/h4>\n<ul>\n<li>\n<div class=\"first\">A downloaded graph in DOT format can be rendered. It requires additional packages to be installed:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference external\" href=\"http:\/\/www.graphviz.org\/\">Graphviz<\/a>\u00a0using\u00a0<code><span class=\"pre\">apt-get<\/span>\u00a0<span class=\"pre\">install<\/span>\u00a0<span class=\"pre\">graphviz<\/span><\/code>\u00a0or\u00a0<code><span class=\"pre\">yum<\/span>\u00a0<span class=\"pre\">install<\/span>\u00a0<span class=\"pre\">graphviz<\/span><\/code>\u00a0commands.<\/li>\n<li><a class=\"reference external\" href=\"https:\/\/pypi.python.org\/pypi\/pydot-ng\/\">pydot-ng<\/a>\u00a0using\u00a0<code><span class=\"pre\">pip<\/span>\u00a0<span class=\"pre\">install<\/span>\u00a0<span class=\"pre\">pydot-ng<\/span><\/code>\u00a0command or\u00a0<a class=\"reference external\" href=\"https:\/\/pypi.python.org\/pypi\/pygraphviz\">pygraphviz<\/a>\u00a0using\u00a0<code><span class=\"pre\">pip<\/span>\u00a0<span class=\"pre\">install<\/span>\u00a0<span class=\"pre\">pygraphviz<\/span><\/code>\u00a0command.<\/li>\n<\/ul>\n<\/li>\n<li>\n<div class=\"first\">After installing the packages, you can render the graph using the command below. 
It will take the contents of the\u00a0<code><span class=\"pre\">graph.gv<\/span><\/code>\u00a0file, render it as a PNG image, and save it as\u00a0<code><span class=\"pre\">graph.gv.png<\/span><\/code>.<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --render graph.gv\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To read the graph representation from stdin, use:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --render -\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">To avoid creating an intermediate file when downloading and rendering a graph, you can combine both commands:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel graph --env &lt;1&gt; --download | fuel graph --render -\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<div id=\"faq\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>FAQ<\/h3>\n<div id=\"what-can-i-use-for-deployment-with-groups\" class=\"section\">\n<h4>What can I use for deployment with groups?<\/h4>\n<div>In Fuel 6.1, it is possible to use only Puppet for the main deployment.<\/div>\n<div>All agents, except for Puppet, work in a blocking way. 
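The blocking/background distinction can be illustrated with generic subprocess calls (an analogy only; Fuel's task drivers are mcollective agents, not subprocess calls, and the commands below are hypothetical):

```python
# Generic illustration of blocking vs. background execution; an analogy
# only -- Fuel's task drivers are mcollective agents, not subprocess calls.
import subprocess

# A blocking driver waits for the command to finish before returning.
blocking = subprocess.run(["echo", "shell task done"], capture_output=True, text=True)

# A background-capable driver starts the work and then polls for completion,
# which is how a long-running puppet apply can be supervised.
proc = subprocess.Popen(["echo", "puppet task done"], stdout=subprocess.PIPE, text=True)
background_out, _ = proc.communicate()

print(blocking.stdout.strip())
print(background_out.strip())
```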
The current deployment model cannot mix blocking and non-blocking task execution.<\/div>\n<div>In the\u00a0<code><span class=\"pre\">pre_deployment<\/span><\/code>\u00a0and\u00a0<code><span class=\"pre\">post_deployment<\/span><\/code>\u00a0stages, any of the supported task drivers can be used.<\/div>\n<\/div>\n<div id=\"is-it-possible-to-specify-cross-dependencies-between-groups\" class=\"section\">\n<h4>Is it possible to specify cross-dependencies between groups?<\/h4>\n<div>In Fuel 6.0 or earlier, there is no model that allows running tasks on the primary Controller, then on another Controller, and then returning to the primary Controller.<\/div>\n<div>In Fuel 6.1, cross-dependencies are resolved by the\u00a0<code><span class=\"pre\">post_deployment<\/span><\/code>\u00a0stage.<\/div>\n<\/div>\n<div id=\"how-i-can-end-at-the-provision-state\" class=\"section\">\n<h4>How can I end at the provision state?<\/h4>\n<div>Provision is not a part of task-based deployment in Fuel 6.1.<\/div>\n<\/div>\n<div id=\"how-to-stop-deployment-at-the-network-configuration-state\" class=\"section\">\n<h4>How to stop deployment at the network configuration state?<\/h4>\n<div>A\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#cli-usage\"><em>Fuel CLI<\/em><\/a>\u00a0call can be used; it will execute the deployment up to the network configuration state:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --end netconfig\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"additional-task-for-an-existing-role\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Additional task for an existing role<\/h3>\n<div>If you would like to add an extra task for an existing role, follow these steps:<\/div>\n<ol class=\"arabic\">\n<li>\n<div class=\"first\">Add the task description to\u00a0<code><span 
class=\"pre\">\/etc\/puppet\/2014.2.2-6.1\/modules\/my_tasks.yaml<\/span><\/code>\u00a0file.<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">my_task<\/span>\r\n<span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">puppet<\/span>\r\n<span class=\"l-Scalar-Plain\">groups<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">compute<\/span><span class=\"p-Indicator\">]<\/span>\r\n<span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">deploy_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n<span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">netconfig<\/span><span class=\"p-Indicator\">]<\/span>\r\n<span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n   <span class=\"l-Scalar-Plain\">puppet_manifest<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules\/my_task.pp<\/span>\r\n   <span class=\"l-Scalar-Plain\">puppet_modules<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules<\/span>\r\n   <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">3600<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Run the following command:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel rel --sync-deployment-tasks --dir \/etc\/puppet\/2014.2.2-6.1\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<\/ol>\n<div>After syncing the task to nailgun database, you will be able to deploy it on the selected 
groups.<\/div>\n<\/div>\n<div id=\"skipping-task-by-api-or-by-configuration\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Skipping task by API or by configuration<\/h3>\n<div>There are several mechanisms to skip a certain task.<\/div>\n<div>To skip a task, you can use one of the following:<\/div>\n<ul>\n<li>\n<div class=\"first\">Change the task&#8217;s type to\u00a0<code><span class=\"pre\">skipped<\/span><\/code>:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>- id: horizon\r\n  type: skipped\r\n  role: [primary-controller]\r\n  requires: [post_deployment_start]\r\n  required_for: [post_deployment_end]\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Add a condition that is always false:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>- id: horizon\r\n  type: puppet\r\n  role: [primary-controller]\r\n  requires: [post_deployment_start]\r\n  required_for: [post_deployment_end]\r\n  condition: 'true == false'\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Do an API request:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt;,&lt;2&gt;,&lt;3&gt; --skip horizon\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<\/ul>\n<\/div>\n<div id=\"creating-a-separate-role-and-attaching-a-task-to-it\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Creating a separate role and attaching a task to it<\/h3>\n<div>To create a separate role and attach a task to it, follow these steps:<\/div>\n<ol class=\"arabic\">\n<li>\n<div class=\"first\">Create a file named\u00a0<code><span class=\"pre\">redis.yaml<\/span><\/code>\u00a0with the following content:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"l-Scalar-Plain\">meta<\/span><span class=\"p-Indicator\">:<\/span>\r\n  <span class=\"l-Scalar-Plain\">description<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">Simple redis server<\/span>\r\n  <span 
class=\"l-Scalar-Plain\">name<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">Controller<\/span>\r\n<span class=\"l-Scalar-Plain\">name<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">redis<\/span>\r\n<span class=\"l-Scalar-Plain\">volumes_roles_mapping<\/span><span class=\"p-Indicator\">:<\/span>\r\n  <span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">allocate_size<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">min<\/span>\r\n    <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">os<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Create a role:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel role --rel 1 --create --file redis.yaml\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">After this is done, you can go to the Fuel web UI and check if a role\u00a0<em>redis<\/em>\u00a0is created.<\/div>\n<\/li>\n<li>\n<div class=\"first\">You can now attach tasks to the role. 
First, install the redis puppet module:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>puppet module install thomasvandoren-redis\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Write a simple manifest to\u00a0<code><span class=\"pre\">\/etc\/puppet\/modules\/redis\/example\/simple_redis.pp<\/span><\/code>\u00a0and include\u00a0<em>redis<\/em>.<\/div>\n<\/li>\n<li>\n<div class=\"first\">Create a configuration for Fuel in\u00a0<code><span class=\"pre\">\/etc\/puppet\/modules\/redis\/example\/redis_tasks.yaml<\/span><\/code>:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre># redis group\r\n  - id: redis\r\n    type: group\r\n    role: [redis]\r\n    required_for: [deploy_end]\r\n    tasks: [globals, hiera, netconfig, install_redis]\r\n    parameters:\r\n      strategy:\r\n        type: parallel\r\n\r\n# Install simple redis server\r\n  - id: install_redis\r\n    type: puppet\r\n    requires: [netconfig]\r\n    required_for: [deploy_end]\r\n    parameters:\r\n      puppet_manifest: \/etc\/puppet\/modules\/redis\/example\/simple_redis.pp\r\n      puppet_modules: \/etc\/puppet\/modules\r\n      timeout: 180\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Run the following command:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel rel --sync-deployment-tasks --dir \/etc\/puppet\/2014.2.2-6.1\/\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\"><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#create-env-ug\"><em>Create an environment<\/em><\/a>. 
Note the following:<\/div>\n<ul class=\"simple\">\n<li><a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#network-settings-ug\"><em>configure the public network<\/em><\/a>\u00a0properly, since\u00a0<em>redis<\/em>\u00a0packages are fetched from upstream.<\/li>\n<li>enable the\u00a0<em>Assign public network to all nodes<\/em>\u00a0option on the\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/user-guide.html#settings-ug\"><em>Settings<\/em><\/a>\u00a0tab of the Fuel web UI.<\/li>\n<\/ul>\n<\/li>\n<li>\n<div class=\"first\">Provision the\u00a0<em>redis<\/em>\u00a0node:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt; --env &lt;1&gt; --provision\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<li>\n<div class=\"first\">Finish the installation on\u00a0<code><span class=\"pre\">install_redis<\/span><\/code>\u00a0(there is no need to execute all tasks from the\u00a0<code><span class=\"pre\">post_deployment<\/span><\/code>\u00a0stage):<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel node --node &lt;1&gt; --end install_redis\r\n<\/pre>\n<\/div>\n<\/div>\n<\/li>\n<\/ol>\n<\/div>\n<div id=\"swapping-a-task-with-a-custom-task\" class=\"section\">\n<p>&nbsp;<\/p>\n<h3>Swapping a task with a custom task<\/h3>\n<div>To swap a task with a custom one, change the path to the executable file:<\/div>\n<div class=\"highlight-yaml\">\n<div class=\"highlight\">\n<pre><span class=\"p-Indicator\">-<\/span> <span class=\"l-Scalar-Plain\">id<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">netconfig<\/span>\r\n  <span class=\"l-Scalar-Plain\">type<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">puppet<\/span>\r\n  <span class=\"l-Scalar-Plain\">groups<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span
class=\"nv\">primary-controller<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">controller<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">cinder<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">compute<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">ceph-osd<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">zabbix-server<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">primary-mongo<\/span><span class=\"p-Indicator\">,<\/span> <span class=\"nv\">mongo<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">required_for<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">deploy_end<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">requires<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"p-Indicator\">[<\/span><span class=\"nv\">logging<\/span><span class=\"p-Indicator\">]<\/span>\r\n  <span class=\"l-Scalar-Plain\">parameters<\/span><span class=\"p-Indicator\">:<\/span>\r\n      <span class=\"c1\"># old puppet manifest<\/span>\r\n      <span class=\"c1\"># puppet_manifest: \/etc\/puppet\/modules\/osnailyfacter\/netconfig.pp<\/span>\r\n\r\n      <span class=\"l-Scalar-Plain\">puppet_manifest<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules\/osnailyfacter\/custom_network_configuration.pp<\/span>\r\n      <span class=\"l-Scalar-Plain\">puppet_modules<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">\/etc\/puppet\/modules<\/span>\r\n      <span class=\"l-Scalar-Plain\">timeout<\/span><span class=\"p-Indicator\">:<\/span> <span class=\"l-Scalar-Plain\">3600<\/span>\r\n<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"the-fuel-master-node-containers-structure\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>The Fuel Master node containers structure<\/h2>\n<div>Most services hosted on the Fuel
Master node require connectivity to the PXE network. Services used only for internal Fuel processes (such as Nailgun and Postgres) accept local connections only.<\/div>\n<div id=\"containers-structure\" class=\"section\">\n<h3>Containers structure<\/h3>\n<p><a class=\"reference internal image-reference\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-master-node-containers.png\"><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/fuel-master-node-containers.png\" alt=\"_images\/fuel-master-node-containers.png\" \/><\/a><\/p>\n<table class=\"table\" border=\"0\">\n<colgroup>\n<col width=\"22%\" \/>\n<col width=\"29%\" \/>\n<col width=\"48%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr class=\"row-odd\">\n<th class=\"head\">Container<\/th>\n<th class=\"head\">Ports<\/th>\n<th class=\"head\">Allow connections from<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr class=\"row-even\">\n<td>Cobbler<\/td>\n<td>TCP 80, 443; UDP 53, 69<\/td>\n<td>PXE network only<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Postgres<\/td>\n<td>TCP 5432<\/td>\n<td>the Fuel Master node only<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>RabbitMQ<\/td>\n<td>TCP 5672, 4369, 15672, 61613<\/td>\n<td>PXE network only<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Rsync<\/td>\n<td>TCP 873<\/td>\n<td>PXE network only<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>Astute<\/td>\n<td>none<\/td>\n<td>N\/A<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Nailgun<\/td>\n<td>TCP 8001<\/td>\n<td>the Fuel Master node only<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>OSTF<\/td>\n<td>TCP 8777<\/td>\n<td>the Fuel Master node only<\/td>\n<\/tr>\n<tr class=\"row-odd\">\n<td>Nginx<\/td>\n<td>TCP 8000, 8080<\/td>\n<td>the Fuel Master node only<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>Rsyslog<\/td>\n<td>TCP 8777, 25150; UDP 514<\/td>\n<td>PXE network only<\/td>\n<\/tr>\n<tr
class=\"row-odd\">\n<td>MCollective<\/td>\n<td>none<\/td>\n<td>N\/A<\/td>\n<\/tr>\n<tr class=\"row-even\">\n<td>Keystone<\/td>\n<td>TCP 5000, 35357<\/td>\n<td>PXE network only<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<\/div>\n<div id=\"fuel-repository-mirroring\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Fuel Repository Mirroring<\/h2>\n<div>Starting with Mirantis OpenStack 6.1, repositories are no longer hosted only on the Fuel Master node; it is assumed that users have Internet access and can download content from Mirantis and upstream mirrors. Users with limited Internet access or unreliable connections can replicate these mirrors locally instead.<\/div>\n<div>Internet-based mirrors can be broken down into three categories:<\/div>\n<ul class=\"simple\">\n<li>Ubuntu<\/li>\n<li>MOS DEBs<\/li>\n<li>MOS RPMs<\/li>\n<\/ul>\n<div>There are two command-line utilities,\u00a0<code><span class=\"pre\">fuel-createmirror<\/span><\/code>\u00a0and\u00a0<code><span class=\"pre\">fuel-package-updates<\/span><\/code>, which can replicate the mirrors.<\/div>\n<div>Use\u00a0<code><span class=\"pre\">fuel-createmirror<\/span><\/code>\u00a0for Ubuntu and MOS DEBs packages.<\/div>\n<div>Use\u00a0<code><span class=\"pre\">fuel-package-updates<\/span><\/code>\u00a0for MOS RPMs packages.<\/div>\n<div><code><span class=\"pre\">fuel-createmirror<\/span><\/code>\u00a0is a utility that can replicate part or all of an APT repository, covering the Ubuntu and MOS DEBs repositories. It uses rsync as a backend. See\u00a0<a class=\"reference internal\" href=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/operations.html#external-ubuntu-ops\"><em>Downloading Ubuntu system packages<\/em><\/a>.<\/div>\n<div><code><span class=\"pre\">fuel-package-updates<\/span><\/code>\u00a0is a utility written in Python that can pull entire APT and YUM repositories via recursive wget or rsync.
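Under the hood this amounts to a recursive copy of a repository tree; as a rough sketch, an APT mirror could be pulled with plain rsync (both the mirror URL and the local path below are illustrative assumptions, not Fuel defaults):<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre># pull a repository tree from an upstream rsync mirror (illustrative paths)\r\nrsync -avz --delete rsync:\/\/mirror.example.com\/mos-repos\/ \/var\/www\/nailgun\/mirror\/mos-repos\/\r\n<\/pre>\n<\/div>\n<\/div>\n<div>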
Additionally, it can update Fuel environment configurations to use a given repository configuration.<\/div>\n<div>Issue the following command to check the\u00a0<code><span class=\"pre\">fuel-package-updates<\/span><\/code>\u00a0options:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre>fuel-package-updates -h\r\n<\/pre>\n<\/div>\n<\/div>\n<div class=\"admonition note alert alert-info\">\n<div class=\"first admonition-title\">Note<\/div>\n<div class=\"last\">If you change the default password (admin) in the Fuel web UI, you will need to run the utility with the\u00a0<code><span class=\"pre\">--password<\/span><\/code>\u00a0switch, or it will fail.<\/div>\n<\/div>\n<div class=\"admonition seealso alert alert-info\">\n<div class=\"first admonition-title\">See also<\/div>\n<div class=\"last\">Documentation on\u00a0<a class=\"reference external\" href=\"http:\/\/docs.fuel-infra.org\/fuel-dev\/develop\/separateMOS.html\">MOS RPMs mirror structure<\/a>.<\/div>\n<\/div>\n<\/div>\n<div id=\"mirantis-openstack-6-1-network-performance-changes-and-results\" class=\"section\">\n<p>&nbsp;<\/p>\n<h2>Mirantis OpenStack 6.1 Network Performance Changes and Results<\/h2>\n<div id=\"architecture-in-6-1-as-compared-to-6-0\" class=\"section\">\n<h3>Architecture in 6.1 as compared to 6.0<\/h3>\n<div>The network architecture in Mirantis OpenStack 6.1 has undergone considerable changes compared to Mirantis OpenStack 6.0 and older releases.<\/div>\n<div>In Mirantis OpenStack 6.0 (MOS 6.0), bridging, bonding, and VLAN segmentation were provided by Open vSwitch.<\/div>\n<div>In Mirantis OpenStack 6.1 (MOS 6.1), Open vSwitch provides only the infrastructure required for Neutron.
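The remaining bridges and VLAN interfaces are plain Linux network devices; as a hedged illustration (the interface and bridge names below are examples, not taken from an actual deployment), such devices can be created with standard iproute2 commands instead of Open vSwitch:<\/div>\n<div class=\"highlight-python\">\n<div class=\"highlight\">\n<pre># create a VLAN sub-interface (tag 101) on top of a physical NIC\r\nip link add link eth0 name eth0.101 type vlan id 101\r\n# create a Linux bridge and attach the VLAN sub-interface to it\r\nip link add name br-mgmt type bridge\r\nip link set eth0.101 master br-mgmt\r\n# bring both interfaces up\r\nip link set eth0.101 up\r\nip link set br-mgmt up\r\n<\/pre>\n<\/div>\n<\/div>\n<div>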
All other networks, bridges, and bonds are provided by native Linux means.<\/div>\n<p><img decoding=\"async\" src=\"https:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/_images\/6061network.png\" alt=\"_images\/6061network.png\" \/><\/p>\n<\/div>\n<div id=\"mirantis-openstack-6-1-network-performance-hardware\" class=\"section\">\n<h3>Mirantis OpenStack 6.1 Network Performance Hardware<\/h3>\n<div>The following hardware was used to run the network performance tests:<\/div>\n<ul class=\"simple\">\n<li>Compute nodes:\n<ul>\n<li>Each node is part of a Dell microcloud with 10 Gbps NICs.<\/li>\n<li>Each node has 4x Intel(R) Xeon(R) CPU E3-1230 v3 @ 3.30GHz and 32 GB RAM.<\/li>\n<\/ul>\n<\/li>\n<li>Network infrastructure:\n<ul>\n<li>A 10 GbE network built with one Dell PowerConnect 8132 10GE switch (PCT8132).<\/li>\n<\/ul>\n<\/li>\n<\/ul>\n<\/div>\n<div id=\"storage-network-performance\" class=\"section\">\n<h3>Storage network performance<\/h3>\n<div>The storage network runs on the 10 GbE interface.<\/div>\n<div>The following results were achieved for default MTU and no NIC tuning:<\/div>\n<ul class=\"simple\">\n<li>CentOS\/MOS-6.0 &#8212; 9.4 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.0 &#8212; 8.3 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.1 &#8212; 9.4 Gbit\/s<\/li>\n<\/ul>\n<div>The following results were achieved for MTU=9000 and NIC with offloading enabled:<\/div>\n<ul class=\"simple\">\n<li>CentOS\/MOS-6.0 &#8212; 9.9 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.0 &#8212; 9.4 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.1 &#8212; 9.9 Gbit\/s<\/li>\n<\/ul>\n<\/div>\n<div id=\"virtual-network-vm-to-vm-performance-vlan-segmentation\" class=\"section\">\n<h3>Virtual network (VM to VM) performance (VLAN segmentation)<\/h3>\n<div>The private network runs on the 10 GbE interface.<\/div>\n<div>The following results were achieved for default MTU and no NIC tuning:<\/div>\n<ul class=\"simple\">\n<li>CentOS\/MOS-6.0 &#8212; 2.8 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.0 &#8212; 3.8 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.1 &#8212; 3.3 Gbit\/s<\/li>\n<\/ul>\n<div>The following
results were achieved for MTU=9000 and NIC with offloading enabled:<\/div>\n<ul class=\"simple\">\n<li>CentOS\/MOS-6.0 &#8212; 7.4 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.0 &#8212; 9.9 Gbit\/s<\/li>\n<li>Ubuntu\/MOS-6.1 &#8212; 9.9 Gbit\/s<\/li>\n<\/ul>\n<\/div>\n<div id=\"virtual-network-vm-to-vm-performance-gre-segmentation\" class=\"section\">\n<h3>Virtual network (VM to VM) performance (GRE segmentation)<\/h3>\n<div>The following results were achieved for Mirantis OpenStack 6.1 Ubuntu based environments:<\/div>\n<ul class=\"simple\">\n<li>Non-optimized network &#8212; 3.5 Gbit\/s<\/li>\n<li>Optimized network &#8212; 9.7 Gbit\/s<\/li>\n<\/ul>\n<\/div>\n<\/div>\n<p>source:<br \/>\nhttps:\/\/docs.mirantis.com\/openstack\/fuel\/fuel-6.1\/reference-architecture.html<\/p>\n<\/div>\n<div class=\"article-content entry-content\">http:\/\/sdnfv.blogspot.com\/2015\/10\/openstack-environment-architecture.html<\/div>\n","protected":false},"excerpt":{"rendered":"<p>OpenStack Environment Architecture Fuel deploys an OpenStack Environment with nodes that provide a specific set of functionality. Beginning with Fuel 5.0, a single architecture model can support HA (High Availability) and non-HA deployments; you can deploy a non-HA environment and then add additional nodes to implement HA rather than needing to redeploy the environment from scratch. 
The OpenStack environment consists of multiple physical server nodes (or an equivalent VM), each of which is one of the following node types: Controller:&#8230;<\/p>\n<p class=\"read-more\"><a class=\"btn btn-default\" href=\"https:\/\/www.rickyadams.com\/wp\/310\/\"> Read More<span class=\"screen-reader-text\">  Read More<\/span><\/a><\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,3],"tags":[],"class_list":["post-310","post","type-post","status-publish","format-standard","hentry","category-openstack","category-virtualization"],"_links":{"self":[{"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/posts\/310","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/comments?post=310"}],"version-history":[{"count":2,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/posts\/310\/revisions"}],"predecessor-version":[{"id":312,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/posts\/310\/revisions\/312"}],"wp:attachment":[{"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/media?parent=310"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/categories?post=310"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/www.rickyadams.com\/wp\/wp-json\/wp\/v2\/tags?post=310"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}