The virsh provisioner is explicitly designed for setting up virtual environments. Such environments are used to emulate a production environment, like tripleo-undercloud
instances, on one baremetal machine. It requires one prepared baremetal host,
designated to be reachable through SSH initially.
First, Libvirt and KVM are installed and configured to provide a virtualized environment. Then, virtual machines are created for all requested nodes.
The first thing you need to decide before you deploy your environment is the topology:
the number and type of VMs in your desired deployment environment.
If we use OpenStack as an example, a topology may look something like:
- 1 VM called undercloud
- 1 VM called controller
- 1 VM called compute
To control how each VM is created, we use a YAML file that describes the specification of each VM. For more information about the structure of the topology files and how to create your own, please refer to Topology.
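As a rough sketch, a node specification in such a YAML file might look like the following. The field names and values here are illustrative assumptions, not the exact infrared schema; consult the Topology documentation for real files:

```yaml
# Hypothetical node specification -- keys and values are a sketch only.
controller:
    cpu: 4
    memory: 8192
    disks:
        disk1:
            size: 40G
    interfaces:
        nic1:
            network: data
```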
Please see the Bootstrap guide, where usage is demonstrated.
The baremetal machine used as a host for such a setup is called the hypervisor. The whole deployment is designed to work within the boundaries of this machine and (except for public/NATed traffic) shouldn't reach beyond it. The following layout is part of the default setup defined in the plugin's defaults:
```
      hypervisor
          |
          +--------+ nic0 - public IP
          |
          +--------+ nic1 - not managed
          |
         ...
                                                              Libvirt VM's
                                                                   |
   -----+--------+ data bridge (ctlplane, 192.0.2/24)          +------+ data (nic0)
        |
libvirt +--------+ management bridge (nat, dhcp, 172.16.0/24)  +------+ management (nic1)
        |
   -----+--------+ external bridge (nat, dhcp, 10.0.0/24)      +------+ external (nic2)
```
On the hypervisor, three new bridges are created with libvirt: data, management and external.
The most important is the data network, which has DHCP and NAT disabled.
This network can later be used as the
ctlplane for OSP director deployments (tripleo-undercloud).
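As a sketch of what such an isolated network looks like in libvirt terms, the definition below has no `<forward>` element (so no NAT) and no `<dhcp>` element under `<ip>` (so no DHCP). The names and addresses follow the layout above, but the exact XML that infrared generates may differ:

```xml
<!-- Hypothetical libvirt network definition for the data bridge.
     Omitting <forward> keeps the network isolated (no NAT);
     omitting <dhcp> inside <ip> disables DHCP. -->
<network>
  <name>data</name>
  <bridge name="data"/>
  <ip address="192.0.2.1" netmask="255.255.255.0"/>
</network>
```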
Other (usually physical) interfaces (nic0, nic1, ...) are not used, except for public/NATed traffic.
The external network is used for SSH forwarding so that the client (or Ansible) can access dynamically created nodes.
By default, all the networks above are NATed, meaning that they are private networks only reachable via the hypervisor node. infrared configures the nodes' SSH connection to use the hypervisor host as a proxy.
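Conceptually, this proxying is equivalent to the following SSH client configuration. infrared generates its own SSH configuration for Ansible automatically; the hostnames and addresses below are purely illustrative:

```
# Hypothetical ~/.ssh/config entries -- hostnames and IPs are examples only.
Host hypervisor
    HostName 10.10.10.10
    User root

Host controller-0
    HostName 172.16.0.15
    User root
    ProxyJump hypervisor
```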
Some use cases call for direct access to some of the nodes.
This is achieved by adding a network with
forward: bridge in its attributes to the
network-topology file, and marking this network as the external network on the relevant node.
This creates a virtual bridge on the hypervisor connected to the main NIC by default. VMs attached to this bridge will be served by the same LAN as the hypervisor.
To specify a secondary NIC for the bridge, add the
nic property to the network
file under the bridge network:
```
net4:
    name: br1
    forward: bridge
    nic: eth1
```
Be careful when using this feature. For example, a node connected
in this manner can disrupt the LAN by serving as an unauthorized DHCP server.
```
infrared virsh [...] --topology-nodes ironic:1,[...] --topology-network 3_net_1_bridge [...]
```
- Set up the libvirt and KVM environment
- Set up libvirt networks
- Download the base image for the undercloud
- Create the desired amount of images and integrate them into libvirt
- Define virtual machines with the requested parameters
- Start the virtual machines
Environments prepared in such a way are usually used as basic virtual infrastructure for tripleo-undercloud.
The virsh provisioner has idempotency issues, so
infrared virsh ... --cleanup must be run before reprovisioning every time.