- 1. Installation Overview
- 2. Requirements
- 3. Preparing Storage for oVirt
- 4. Installing the Self-hosted Engine Deployment Host
- 5. Installing the oVirt Engine
- 6. Installing Hosts for oVirt
- 7. Adding Storage for oVirt
- Appendix A: Troubleshooting a Self-hosted Engine Deployment
- Appendix B: Migrating Databases and Services to a Remote Server
- Appendix C: Setting up Cinderlib
- Appendix D: Configuring a Host for PCI Passthrough
- Appendix E: Removing the standalone oVirt Engine
- Appendix F: Legal notice
Installing oVirt as a self-hosted engine using the Cockpit web interface
Self-hosted engine installation is automated using Ansible. The Cockpit web interface’s installation wizard runs on an initial deployment host, and the oVirt Engine (or "engine") is installed and configured on a virtual machine that is created on the deployment host. The Engine and Data Warehouse databases are installed on the Engine virtual machine, but can be migrated to a separate server post-installation if required.
Cockpit is available by default on oVirt Nodes, and can be installed on Enterprise Linux hosts.
Hosts that can run the Engine virtual machine are referred to as self-hosted engine nodes. At least two self-hosted engine nodes are required to support the high availability feature.
A storage domain dedicated to the Engine virtual machine is referred to as the self-hosted engine storage domain. This storage domain is created by the installation script, so the underlying storage must be prepared before beginning the installation.
See the Planning and Prerequisites Guide for information on environment options and recommended configuration. See Self-Hosted Engine Recommendations for configuration specific to a self-hosted engine environment.
Component Name | Description
---|---
oVirt Engine | A service that provides a graphical user interface and a REST API to manage the resources in the environment. The Engine is installed on a physical or virtual machine running Enterprise Linux.
Hosts | Enterprise Linux hosts and oVirt Nodes (image-based hypervisors) are the two supported types of host. Hosts use Kernel-based Virtual Machine (KVM) technology and provide resources used to run virtual machines.
Shared Storage | A storage service is used to store the data associated with virtual machines.
Data Warehouse | A service that collects configuration information and statistical data from the Engine.
Self-Hosted Engine Architecture
The oVirt Engine runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Engine is highly available without external HA management.
The minimum setup of a self-hosted engine environment includes:
-
One oVirt Engine virtual machine that is hosted on the self-hosted engine nodes. The Engine Appliance is used to automate the installation of an Enterprise Linux 8 virtual machine, and the Engine on that virtual machine.
-
A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Enterprise Linux hosts or oVirt Nodes. VDSM (the host agent) runs on all hosts to facilitate communication with the oVirt Engine. The HA services run on all self-hosted engine nodes to manage the high availability of the Engine virtual machine.
-
One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts.

1. Installation Overview
The self-hosted engine installation uses Ansible and the Engine Appliance (a pre-configured Engine virtual machine image) to automate the following tasks:
-
Configuring the first self-hosted engine node
-
Installing an Enterprise Linux virtual machine on that node
-
Installing and configuring the oVirt Engine on that virtual machine
-
Configuring the self-hosted engine storage domain
The Engine Appliance is only used during installation. It is not used to upgrade the Engine.
Installing a self-hosted engine environment involves the following steps:
-
Prepare storage to use for the self-hosted engine storage domain and for standard storage domains. You can use one of the following storage types: NFS, iSCSI, Fibre Channel (FCP), or Gluster Storage.
-
Install a deployment host to run the installation on. This host will become the first self-hosted engine node. You can use either host type: oVirt Node or Enterprise Linux host. Cockpit, which runs the deployment wizard, is available by default on oVirt Nodes and can be installed on Enterprise Linux hosts.
-
Add more self-hosted engine nodes and standard hosts to the Engine. Self-hosted engine nodes can run the Engine virtual machine and other virtual machines. Standard hosts can run all other virtual machines, but not the Engine virtual machine. You can use either host type, or both.
-
Add more storage domains to the Engine. The self-hosted engine storage domain is not recommended for use by anything other than the Engine virtual machine.
-
If you want to host any databases or services on a server separate from the Engine, you can migrate them after the installation is complete.
Keep the environment up to date. Since bug fixes for known issues are frequently released, use scheduled tasks to update the hosts and the Engine.
2. Requirements
2.1. oVirt Engine Requirements
2.1.1. Hardware Requirements
The minimum and recommended hardware requirements outlined here are based on a typical small to medium-sized installation. The exact requirements vary between deployments based on sizing and load.
The oVirt Engine runs on Enterprise Linux operating systems like CentOS Linux or Red Hat Enterprise Linux.
Resource | Minimum | Recommended
---|---|---
CPU | A dual core x86_64 CPU. | A quad core x86_64 CPU or multiple dual core x86_64 CPUs.
Memory | 4 GB of available system RAM if Data Warehouse is not installed and if memory is not being consumed by existing processes. | 16 GB of system RAM.
Hard Disk | 25 GB of locally accessible, writable disk space. | 50 GB of locally accessible, writable disk space. You can use the RHV Engine History Database Size Calculator to calculate the appropriate disk space for the Engine history database size.
Network Interface | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps. | 1 Network Interface Card (NIC) with bandwidth of at least 1 Gbps.
2.1.2. Browser Requirements
The following browser versions and operating systems can be used to access the Administration Portal and the VM Portal.
Browser testing is divided into tiers:
-
Tier 1: Browser and operating system combinations that are fully tested.
-
Tier 2: Browser and operating system combinations that are partially tested, and are likely to work.
-
Tier 3: Browser and operating system combinations that are not tested, but may work.
Support Tier | Operating System Family | Browser
---|---|---
Tier 1 | Enterprise Linux | Mozilla Firefox Extended Support Release (ESR) version
Tier 1 | Any | Most recent version of Google Chrome, Mozilla Firefox, or Microsoft Edge
Tier 2 | |
Tier 3 | Any | Earlier versions of Google Chrome or Mozilla Firefox
Tier 3 | Any | Other browsers
2.1.3. Client Requirements
Virtual machine consoles can only be accessed using supported Remote Viewer (virt-viewer
) clients on Enterprise Linux and Windows. To install virt-viewer
, see Installing Supporting Components on Client Machines in the Virtual Machine Management Guide. Installing virt-viewer
requires Administrator privileges.
Virtual machine consoles are accessed through the SPICE, VNC, or RDP (Windows only) protocols. The QXL graphical driver can be installed in the guest operating system for improved SPICE functionality. SPICE currently supports a maximum resolution of 2560x1600 pixels.
Supported QXL drivers are available on Enterprise Linux, Windows XP, and Windows 7.
SPICE support is divided into tiers:
-
Tier 1: Operating systems on which Remote Viewer has been fully tested and is supported.
-
Tier 2: Operating systems on which Remote Viewer is partially tested and is likely to work. Limited support is provided for this tier. Red Hat Engineering will attempt to fix issues with remote-viewer on this tier.
Support Tier | Operating System
---|---
Tier 1 | Enterprise Linux 7.2 and later
Tier 1 | Microsoft Windows 7
Tier 2 | Microsoft Windows 8
Tier 2 | Microsoft Windows 10
2.1.4. Operating System Requirements
The oVirt Engine must be installed on a base installation of Enterprise Linux 8 that has been updated to the latest minor release.
Do not install any additional packages after the base installation, as they may cause dependency issues when attempting to install the packages required by the Engine.
Do not enable additional repositories other than those required for the Engine installation.
2.2. Host Requirements
2.2.1. CPU Requirements
All CPUs must have support for the Intel® 64 or AMD64 CPU extensions, and the AMD-V™ or Intel VT® hardware virtualization extensions enabled. Support for the No eXecute flag (NX) is also required.
The following CPU models are supported:
-
AMD
-
Opteron G4
-
Opteron G5
-
EPYC
-
-
Intel
-
Nehalem
-
Westmere
-
SandyBridge
-
IvyBridge
-
Haswell
-
Broadwell
-
Skylake Client
-
Skylake Server
-
Cascadelake Server
-
-
IBM
-
POWER8
-
POWER9
-
For each CPU model with security updates, the CPU Type lists a basic type and a secure type. For example:
-
Intel Cascadelake Server Family
-
Secure Intel Cascadelake Server Family
The Secure CPU type contains the latest updates. For details, see BZ#1731395
Checking if a Processor Supports the Required Flags
You must enable virtualization in the BIOS. Power off and reboot the host after this change to ensure that the change is applied.
-
At the Enterprise Linux or oVirt Node boot screen, press any key and select the Boot or Boot with serial console entry from the list.
-
Press Tab to edit the kernel parameters for the selected option.
-
Ensure there is a space after the last kernel parameter listed, and append the parameter rescue.
-
Press Enter to boot into rescue mode.
-
At the prompt, determine that your processor has the required extensions and that they are enabled by running this command:
# grep -E 'svm|vmx' /proc/cpuinfo | grep nx
If any output is shown, the processor is hardware virtualization capable. If no output is shown, your processor may still support hardware virtualization; in some circumstances manufacturers disable the virtualization extensions in the BIOS. If you believe this to be the case, consult the system’s BIOS and the motherboard manual provided by the manufacturer.
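As an optional cross-check that is not part of the procedure above, you can count how many CPU threads report the virtualization flags and confirm that the KVM kernel module (kvm_intel on Intel, kvm_amd on AMD) is loaded once the host is running normally:
# grep -c -E 'svm|vmx' /proc/cpuinfo
# lsmod | grep kvm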
2.2.2. Memory Requirements
The minimum required RAM is 2 GB. The maximum supported RAM per VM in oVirt Node is 4 TB.
However, the amount of RAM required varies depending on guest operating system requirements, guest application requirements, and guest memory activity and usage. KVM can also overcommit physical RAM for virtualized guests, allowing you to provision guests with RAM requirements greater than what is physically present, on the assumption that the guests are not all working concurrently at peak load. KVM does this by only allocating RAM for guests as required and shifting underutilized guests into swap.
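Before deciding how far to overcommit, it can help to see how much physical memory and swap a host actually has. Standard commands such as the following show this and are listed here only as a convenience, not as a requirement:
# free -h
# swapon --show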
2.2.3. Storage Requirements
Hosts require storage to store configuration, logs, kernel dumps, and for use as swap space. Storage can be local or network-based. oVirt Node (oVirt Node) can boot with one, some, or all of its default allocations in network storage. Booting from network storage can result in a freeze if there is a network disconnect. Adding a drop-in multipath configuration file can help address losses in network connectivity. If oVirt Node boots from SAN storage and loses connectivity, the files become read-only until network connectivity restores. Using network storage might result in a performance downgrade.
The minimum storage requirements of oVirt Node are documented in this section. The storage requirements for Enterprise Linux hosts vary based on the amount of disk space used by their existing configuration but are expected to be greater than those of oVirt Node.
The minimum storage requirements for host installation are listed below. However, use the default allocations, which use more storage space.
-
/ (root) - 6 GB
-
/home - 1 GB
-
/tmp - 1 GB
-
/boot - 1 GB
-
/var - 15 GB
-
/var/crash - 10 GB
-
/var/log - 8 GB
-
/var/log/audit - 2 GB
-
swap - 1 GB (for the recommended swap size, see https://access.redhat.com/solutions/15244)
-
Anaconda reserves 20% of the thin pool size within the volume group for future metadata expansion. This is to prevent an out-of-the-box configuration from running out of space under normal usage conditions. Overprovisioning of thin pools during installation is also not supported.
-
Minimum Total - 55 GB
If you are also installing the Engine Appliance for self-hosted engine installation, /var/tmp
must be at least 5 GB.
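For example, you can confirm that the file system holding /var/tmp has at least 5 GB free before installing the appliance:
# df -h /var/tmp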
If you plan to use memory overcommitment, add enough swap space to provide virtual memory for all of the virtual machines. See Memory Optimization.
2.2.4. PCI Device Requirements
Hosts must have at least one network interface with a minimum bandwidth of 1 Gbps. Each host should have two network interfaces, with one dedicated to supporting network-intensive activities, such as virtual machine migration. The performance of such operations is limited by the bandwidth available.
For information about how to use PCI Express and conventional PCI devices with Intel Q35-based virtual machines, see Using PCI Express and Conventional PCI Devices with the Q35 Virtual Machine.
2.2.5. Device Assignment Requirements
If you plan to implement device assignment and PCI passthrough so that a virtual machine can use a specific PCIe device from a host, ensure the following requirements are met:
-
CPU must support IOMMU (for example, VT-d or AMD-Vi). IBM POWER8 supports IOMMU by default.
-
Firmware must support IOMMU.
-
CPU root ports used must support ACS or ACS-equivalent capability.
-
PCIe devices must support ACS or ACS-equivalent capability.
-
All PCIe switches and bridges between the PCIe device and the root port should support ACS. For example, if a switch does not support ACS, all devices behind that switch share the same IOMMU group, and can only be assigned to the same virtual machine.
-
For GPU support, Enterprise Linux 7 supports PCI device assignment of PCIe-based NVIDIA K-Series Quadro (model 2000 series or higher), GRID, and Tesla as non-VGA graphics devices. Currently up to two GPUs may be attached to a virtual machine in addition to one of the standard, emulated VGA interfaces. The emulated VGA is used for pre-boot and installation and the NVIDIA GPU takes over when the NVIDIA graphics drivers are loaded. Note that the NVIDIA Quadro 2000 is not supported, nor is the Quadro K420 card.
Check vendor specification and datasheets to confirm that your hardware meets these requirements. The lspci -v
command can be used to print information for PCI devices already installed on a system.
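For example, to confirm that the kernel has IOMMU support active and to inspect a candidate device together with the kernel driver bound to it, commands such as the following can be used (the PCI address 01:00.0 is a placeholder):
# dmesg | grep -i -e DMAR -e IOMMU
# lspci -nnk -s 01:00.0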
2.2.6. vGPU Requirements
A host must meet the following requirements in order for virtual machines on that host to use a vGPU:
-
vGPU-compatible GPU
-
GPU-enabled host kernel
-
Installed GPU with correct drivers
-
Predefined mdev_type set to correspond with one of the mdev types supported by the device
-
vGPU-capable drivers installed on each host in the cluster
-
vGPU-supported virtual machine operating system with vGPU drivers installed
2.3. Networking Requirements
2.3.1. General Requirements
oVirt requires IPv6 to remain enabled on the computer or virtual machine where you are running the Engine (also called "the Engine machine"). Do not disable IPv6 on the Engine machine, even if your systems do not use it.
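To confirm that IPv6 is still enabled on the Engine machine, you can check the corresponding kernel setting; a value of 0 means IPv6 is enabled:
# sysctl net.ipv6.conf.all.disable_ipv6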
2.3.2. Firewall Requirements for DNS, NTP, IPMI Fencing, and Metrics Store
The firewall requirements for all of the following topics are special cases that require individual consideration.
oVirt does not create a DNS or NTP server, so the firewall does not need to have open ports for incoming traffic.
By default, Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, define exceptions for requests that are sent to DNS and NTP servers.
For IPMI (Intelligent Platform Management Interface) and other fencing mechanisms, the firewall does not need to have open ports for incoming traffic.
By default, Enterprise Linux allows outbound IPMI traffic to ports on any destination address. If you disable outgoing traffic, make exceptions for requests being sent to your IPMI or fencing servers.
Each oVirt Node and Enterprise Linux host in the cluster must be able to connect to the fencing devices of all other hosts in the cluster. If the cluster hosts are experiencing an error (network error, storage error…) and cannot function as hosts, they must be able to connect to other hosts in the data center.
The specific port number depends on the type of the fence agent you are using and how it is configured.
The firewall requirement tables in the following sections do not represent this option.
2.3.3. oVirt Engine Firewall Requirements
The oVirt Engine requires that a number of ports be opened to allow network traffic through the system’s firewall.
The engine-setup
script can configure the firewall automatically.
The firewall configuration documented here assumes a default configuration.
A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211. You can use the IDs in the table to look up connections in the diagram.
ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
---|---|---|---|---|---|---
M1 | - | ICMP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | Optional. May help in diagnosis. | No
M2 | 22 | TCP | System(s) used for maintenance of the Engine, including backend configuration and software upgrades. | oVirt Engine | Secure Shell (SSH) access. Optional. | Yes
M3 | 2222 | TCP | Clients accessing virtual machine serial consoles. | oVirt Engine | Secure Shell (SSH) access to enable connection to virtual machine serial consoles. | Yes
M4 | 80, 443 | TCP | Administration Portal clients, VM Portal clients, oVirt Nodes, Enterprise Linux hosts, REST API clients | oVirt Engine | Provides HTTP (port 80, not encrypted) and HTTPS (port 443, encrypted) access to the Engine. HTTP redirects connections to HTTPS. | Yes
M5 | 6100 | TCP | Administration Portal clients, VM Portal clients | oVirt Engine | Provides websocket proxy access for a web-based console client. | No
M6 | 7410 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | If Kdump is enabled on the hosts, open this port for the fence_kdump listener on the Engine. See fence_kdump Advanced Configuration. | No
M7 | 54323 | TCP | Administration Portal clients | oVirt Engine (ImageIO Proxy server) | Required for communication with the ImageIO Proxy. | Yes
M8 | 6442 | TCP | oVirt Nodes, Enterprise Linux hosts | Open Virtual Network (OVN) southbound database | Connect to the Open Virtual Network (OVN) database. | Yes
M9 | 9696 | TCP | Clients of external network provider for OVN | External network provider for OVN | OpenStack Networking API | Yes, with configuration generated by engine-setup.
M10 | 35357 | TCP | Clients of external network provider for OVN | External network provider for OVN | OpenStack Identity API | Yes, with configuration generated by engine-setup.
M11 | 53 | TCP, UDP | oVirt Engine | DNS Server | DNS lookup requests from ports above 1023 to port 53, and responses. Open by default. | No
M12 | 123 | UDP | oVirt Engine | NTP Server | NTP requests from ports above 1023 to port 123, and responses. Open by default. | No
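engine-setup normally opens these ports for you. If you need to verify or adjust the firewall manually, standard firewalld commands can be used; the ports below are taken from the table above and are only an example:
# firewall-cmd --list-ports
# firewall-cmd --permanent --add-port=80/tcp --add-port=443/tcp
# firewall-cmd --reload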
2.3.4. Host Firewall Requirements
Enterprise Linux hosts and oVirt Nodes (oVirt Node) require a number of ports to be opened to allow network traffic through the system’s firewall. The firewall rules are automatically configured by default when adding a new host to the Engine, overwriting any pre-existing firewall configuration.
To disable automatic firewall configuration when adding a new host, clear the Automatically configure host firewall check box under Advanced Parameters.
To customize the host firewall rules, see https://access.redhat.com/solutions/2772331.
A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211. You can use the IDs in the table to look up connections in the diagram.
ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
---|---|---|---|---|---|---
H1 | 22 | TCP | oVirt Engine | oVirt Nodes, Enterprise Linux hosts | Secure Shell (SSH) access. Optional. | Yes
H2 | 2223 | TCP | oVirt Engine | oVirt Nodes, Enterprise Linux hosts | Secure Shell (SSH) access to enable connection to virtual machine serial consoles. | Yes
H3 | 161 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Engine | Simple Network Management Protocol (SNMP). Only required if you want SNMP traps sent from the host to one or more external SNMP managers. Optional. | No
H4 | 111 | TCP | NFS storage server | oVirt Nodes, Enterprise Linux hosts | NFS connections. Optional. | No
H5 | 5900 - 6923 | TCP | Administration Portal clients, VM Portal clients | oVirt Nodes, Enterprise Linux hosts | Remote guest console access via VNC and SPICE. These ports must be open to facilitate client access to virtual machines. | Yes (optional)
H6 | 5989 | TCP, UDP | Common Information Model Object Manager (CIMOM) | oVirt Nodes, Enterprise Linux hosts | Used by Common Information Model Object Managers (CIMOM) to monitor virtual machines running on the host. Only required if you want to use a CIMOM to monitor the virtual machines in your virtualization environment. Optional. | No
H7 | 9090 | TCP | oVirt Engine, client machines | oVirt Nodes, Enterprise Linux hosts | Required to access the Cockpit web interface, if installed. | Yes
H8 | 16514 | TCP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Virtual machine migration using libvirt. | Yes
H9 | 49152 - 49215 | TCP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Virtual machine migration and fencing using VDSM. These ports must be open to facilitate both automated and manual migration of virtual machines. | Yes. Depending on the fence agent, migration is done through libvirt.
H10 | 54321 | TCP | oVirt Engine, oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | VDSM communications with the Engine and other virtualization hosts. | Yes
H11 | 54322 | TCP | oVirt Engine (ImageIO Proxy server) | oVirt Nodes, Enterprise Linux hosts | Required for communication with the ImageIO daemon (ovirt-imageio-daemon). | Yes
H12 | 6081 | UDP | oVirt Nodes, Enterprise Linux hosts | oVirt Nodes, Enterprise Linux hosts | Required, when Open Virtual Network (OVN) is used as a network provider, to allow OVN to create tunnels between hosts. | No
H13 | 53 | TCP, UDP | oVirt Nodes, Enterprise Linux hosts | DNS Server | DNS lookup requests from ports above 1023 to port 53, and responses. This port is required and open by default. | No
By default, Enterprise Linux allows outbound traffic to DNS and NTP on any destination address. If you disable outgoing traffic, make exceptions for the oVirt Nodes and Enterprise Linux hosts to send requests to DNS and NTP servers. Other nodes may also require DNS and NTP. In that case, consult the requirements for those nodes and configure the firewall accordingly.
2.3.5. Database Server Firewall Requirements
oVirt supports the use of a remote database server for the Engine database (engine
) and the Data Warehouse database (ovirt-engine-history
). If you plan to use a remote database server, it must allow connections from the Engine and the Data Warehouse service (which can be separate from the Engine).
Similarly, if you plan to access a local or remote Data Warehouse database from an external system, such as Red Hat CloudForms, the database must allow connections from that system.
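As a rough sketch only (the ovirt_engine_history database and user names, the 192.0.2.0/24 client network, and file locations are placeholders to adapt to your PostgreSQL installation), allowing such remote connections typically means setting listen_addresses in postgresql.conf, adding a pg_hba.conf rule, and opening the PostgreSQL port in the database server's firewall:
listen_addresses = '*'
host  ovirt_engine_history  ovirt_engine_history  192.0.2.0/24  md5
# firewall-cmd --permanent --add-port=5432/tcp
# firewall-cmd --reload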
Accessing the Engine database from external systems is not supported.
A diagram of these firewall requirements is available at https://access.redhat.com/articles/3932211. You can use the IDs in the table to look up connections in the diagram.
ID | Port(s) | Protocol | Source | Destination | Purpose | Encrypted by default
---|---|---|---|---|---|---
D1 | 5432 | TCP, UDP | oVirt Engine, Data Warehouse service | Engine (engine) database server, Data Warehouse (ovirt-engine-history) database server | Default port for PostgreSQL database connections. | No, but can be enabled.
D2 | 5432 | TCP, UDP | External systems | Data Warehouse (ovirt-engine-history) database server | Default port for PostgreSQL database connections. | Disabled by default. No, but can be enabled.
3. Preparing Storage for oVirt
Prepare storage to be used for storage domains in the new environment. An oVirt environment must have at least one data storage domain, but adding more is recommended.
A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains.
You can use one of the following storage types: NFS, iSCSI, Fibre Channel (FCP), or Gluster Storage.
-
Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Engine virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation.
-
When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
-
If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target.
-
It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine.
3.1. Preparing NFS Storage
Set up NFS shares on your file storage or remote server to serve as storage domains for oVirt hosts. After exporting the shares on the remote storage and configuring them in the oVirt Engine, the shares will be automatically imported on the oVirt hosts.
For information on setting up and configuring NFS, see Network File System (NFS) in the Enterprise Linux 7 Storage Administration Guide.
For information on how to export an 'NFS' share, see How to export 'NFS' share from NetApp Storage / EMC SAN in Red Hat Virtualization
Specific system user accounts and system user groups are required by oVirt so the Engine can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown
and chmod
steps for all of the directories you intend to use as storage domains in oVirt.
-
Create the group kvm:
# groupadd kvm -g 36
-
Create the user vdsm in the group kvm:
# useradd vdsm -u 36 -g 36
-
Set the ownership of your exported directory to 36:36, which gives vdsm:kvm ownership:
# chown -R 36:36 /exports/data
-
Change the mode of the directory so that read and write access is granted to the owner, and so that read and execute access is granted to the group and other users:
# chmod 0755 /exports/data
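The directory must also be exported over NFS for hosts to mount it. As a minimal sketch (the export path, client wildcard, and options are examples to adapt to your security policy), an /etc/exports entry followed by re-exporting might look like this:
/exports/data *(rw,sync,no_subtree_check)
# exportfs -rv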
3.2. Preparing iSCSI Storage
oVirt supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.
For information on setting up and configuring iSCSI storage, see Online Storage Management in the Enterprise Linux 7 Storage Administration Guide.
If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details.
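One way to create such a filter on a host that already has VDSM installed is the vdsm-tool helper, which proposes an LVM filter matching the host's local devices. This is offered as a suggestion rather than as part of the documented procedure; review the proposed filter before accepting it:
# vdsm-tool config-lvm-filter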
oVirt currently does not support storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}
3.3. Preparing FCP Storage
oVirt supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
oVirt system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information on setting up and configuring FCP or multipathing on Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
If you are using block storage and you intend to deploy virtual machines on raw devices or direct LUNs and to manage them with the Logical Volume Manager, you must create a filter to hide the guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. See https://access.redhat.com/solutions/2662261 for details.
oVirt currently does not support storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection:
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}
3.4. Preparing Gluster Storage
For information on setting up and configuring Gluster Storage, see the Gluster Storage Installation Guide.
3.5. Customizing Multipath Configurations for SAN Vendors
To customize the multipath configuration settings, do not modify /etc/multipath.conf
. Instead, create a new configuration file that overrides /etc/multipath.conf
.
Upgrading Virtual Desktop and Server Manager (VDSM) overwrites the /etc/multipath.conf file, discarding any customizations made directly to it.
-
This topic only applies to systems that have been configured to use multipath connections for storage domains, and therefore have a /etc/multipath.conf file.
-
Do not override the user_friendly_names and find_multipaths settings. For more information, see Recommended Settings for Multipath.conf.
-
Avoid overriding no_path_retry and polling_interval unless required by the storage vendor. For more information, see Recommended Settings for Multipath.conf.
-
To override the values of settings in /etc/multipath.conf, create a new configuration file in the /etc/multipath/conf.d/ directory.
The files in /etc/multipath/conf.d/ execute in alphabetical order. Follow the convention of naming the file with a number at the beginning of its name. For example, /etc/multipath/conf.d/90-myfile.conf.
-
Copy the settings you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/. Edit the setting values and save your changes (a minimal example override file is shown after this procedure).
-
Apply the new configuration settings by entering the systemctl reload multipathd command.
Avoid restarting the multipathd service. Doing so generates errors in the VDSM logs.
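For illustration only, here is a hypothetical override file that changes a single setting because a storage vendor requires it (the value 8 is a placeholder, not a recommendation), followed by the reload command from the step above:
# cat /etc/multipath/conf.d/90-myfile.conf
defaults {
    no_path_retry   8
}
# systemctl reload multipathd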
If you override the VDSM-generated settings in /etc/multipath.conf
, verify that the new configuration performs as expected in a variety of failure scenarios.
For example, disable all of the storage connections. Then enable one connection at a time and verify that doing so makes the storage domain reachable.
If an oVirt Node has trouble accessing shared storage, check /etc/multipath.conf
and files under /etc/multipath/conf.d/
for values that are incompatible with the SAN.
-
Enterprise Linux DM Multipath in the RHEL documentation.
-
Configuring iSCSI Multipathing in the Administration Guide.
-
How do I customize /etc/multipath.conf on my RHVH hypervisors? What values must not change and why? on the Red Hat Customer Portal, which shows an example
multipath.conf
file and was the basis for this topic.
3.6. Recommended Settings for Multipath.conf
When overriding /etc/multipath.conf
, do not override the following settings:
user_friendly_names no
-
This setting controls whether user-friendly names are assigned to devices in addition to the actual device names. Multiple hosts must use the same name to access devices. Disabling this setting prevents user-friendly names from interfering with this requirement.
find_multipaths no
-
This setting controls whether oVirt Node tries to access all devices through multipath, even if only one path is available. Keeping it disabled ensures that oVirt can access devices through multipath even when only one path exists, rather than relying on multipathd to decide which devices to claim.
Avoid overriding the following settings unless required by the storage system vendor:
no_path_retry 4
-
This setting controls the number of polling attempts to retry when no paths are available. Before oVirt version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. oVirt version 4.2 changed this value to 4, so that when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting.
polling_interval 5
-
This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
4. Installing the Self-hosted Engine Deployment Host
A self-hosted engine can be deployed from an oVirt Node or an Enterprise Linux host.
If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them on the host before beginning the self-hosted engine deployment. See Networking Recommendations in the Planning and Prerequisites Guide.
4.1. Installing oVirt Nodes
oVirt Node (oVirt Node) is a minimal operating system based on Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in an oVirt environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements.
oVirt Node supports NIST 800-53 partitioning requirements to improve security. oVirt Node uses a NIST 800-53 partition layout by default.
The host must meet the minimum host requirements.
-
Visit the oVirt Node Download page.
-
Choose the version of oVirt Node to download and click its Installation ISO link.
-
Write the oVirt Node Installation ISO disk image to a USB, CD, or DVD.
-
Start the machine on which you are installing oVirt Node, booting from the prepared installation media.
-
From the boot menu, select Install oVirt Node 4.4 and press Enter.
You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.
-
Select a language, and click Continue.
-
Select a keyboard layout from the Keyboard Layout screen and click Done.
-
Select the device on which to install oVirt Node from the Installation Destination screen. Optionally, enable encryption. Click Done.
Use the Automatically configure partitioning option.
-
Select a time zone from the Time & Date screen and click Done.
-
Select a network from the Network & Host Name screen and click Configure… to configure the connection details.
To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Enterprise Linux 8 Installation Guide.
Enter a host name in the Host Name field, and click Done.
-
Optionally configure Language Support, Security Policy, and Kdump. See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen.
-
Click Begin Installation.
-
Set a root password and, optionally, create an additional user while oVirt Node installs.
Do not create untrusted users on oVirt Node, as this can lead to exploitation of local security vulnerabilities.
-
Click Reboot to complete the installation.
When oVirt Node restarts,
nodectl check
performs a health check on the host and displays the result when you log in on the command line. The messagenode status: OK
ornode status: DEGRADED
indicates the health status. Runnodectl check
to get more information. The service is enabled by default.
4.2. Installing Enterprise Linux hosts
A Enterprise Linux host is based on a standard basic installation of Enterprise Linux 8 on a physical server, with the Enterprise Linux Server
and oVirt
repositories enabled.
For detailed installation instructions, see the Performing a standard EL installation.
The host must meet the minimum host requirements.
Virtualization must be enabled in your host’s BIOS settings. For information on changing your host’s BIOS settings, refer to your host’s hardware documentation.
Do not install third-party watchdogs on Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM.
4.2.1. Installing Cockpit on Enterprise Linux hosts
You can install Cockpit for monitoring the host’s resources and performing administrative tasks.
-
Install the dashboard packages:
# yum install cockpit-ovirt-dashboard
-
Enable and start the cockpit.socket service:
# systemctl enable cockpit.socket
# systemctl start cockpit.socket
-
Check if Cockpit is an active service in the firewall:
# firewall-cmd --list-services
You should see cockpit listed. If it is not, enter the following with root permissions to add cockpit as a service to your firewall:
# firewall-cmd --permanent --add-service=cockpit
The --permanent option keeps the cockpit service active after rebooting.
You can log in to the Cockpit web interface at https://HostFQDNorIP:9090
.
5. Installing the oVirt Engine
5.1. Manually installing the Engine Appliance
When you deploy the self-hosted engine, the following sequence of events takes place:
-
The installer installs the Engine Appliance to the deployment host.
-
The appliance installs the Engine virtual machine.
-
The appliance installs the Engine on the Engine virtual machine.
However, you can install the appliance manually on the deployment host beforehand if you need to. The appliance is large and network connectivity issues might cause the appliance installation to take a long time, or possibly fail.
To install the Engine Appliance to the host manually, enter the following command:
# yum install ovirt-engine-appliance
Now, when you deploy the self-hosted engine, the installer detects that the appliance is already installed.
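You can confirm that the appliance package is installed on the deployment host before you start the deployment, for example:
# rpm -q ovirt-engine-appliance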
5.2. Deploying the Self-hosted Engine using Cockpit
Deploy a self-hosted engine, using Cockpit to collect the details of your environment. This is the recommended method. Cockpit is enabled by default on oVirt Nodes, and can be installed on Enterprise Linux hosts.
-
FQDNs prepared for your Engine and the deployment host. Forward and reverse lookup records must both be set in the DNS.
-
When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
-
Log in to Cockpit at https://HostIPorFQDN:9090 using the root user and open the hosted engine deployment page.
-
Click Start under the Hosted Engine option.
-
Enter the details for the Engine virtual machine:
-
Enter the Engine VM FQDN. This is the FQDN for the Engine virtual machine, not the base host.
-
Enter a MAC Address for the Engine virtual machine, or accept a randomly generated one.
-
Choose either DHCP or Static from the Network Configuration drop-down list.
For IPv6, oVirt supports only static addressing.
-
If you choose DHCP, you must have a DHCP reservation for the Engine virtual machine so that its host name resolves to the address received from DHCP. Specify its MAC address in the MAC Address field.
-
If you choose Static, enter the following details:
-
VM IP Address - The IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
-
Gateway Address
-
DNS Servers
-
-
-
Select the Bridge Interface from the drop-down list.
-
Enter and confirm the virtual machine’s Root Password.
-
Specify whether to allow Root SSH Access.
-
Enter the Number of Virtual CPUs for the virtual machine.
-
Enter the Memory Size (MiB). The available memory is displayed next to the input field.
-
-
Optionally expand the Advanced fields:
-
Enter a Root SSH Public Key to use for root access to the Engine virtual machine.
-
Select or clear the Edit Hosts File check box to specify whether to add entries for the Engine virtual machine and the base host to the virtual machine’s
/etc/hosts
file. You must ensure that the host names are resolvable. -
Change the management Bridge Name, or accept the default
ovirtmgmt
. -
Enter the Gateway Address for the management bridge.
-
Enter the Host FQDN of the first host to add to the Engine. This is the FQDN of the base host you are running the deployment on.
-
-
Click Next.
-
Enter and confirm the Admin Portal Password for the
admin@internal
user. -
Configure event notifications:
-
Enter the Server Name and Server Port Number of the SMTP server.
-
Enter the Sender E-Mail Address.
-
Enter the Recipient E-Mail Addresses.
-
-
Click Next.
-
Review the configuration of the Engine and its virtual machine. If the details are correct, click Prepare VM.
-
When the virtual machine installation is complete, click Next.
-
Select the Storage Type from the drop-down list, and enter the details for the self-hosted engine storage domain:
-
For NFS:
-
Enter the full address and path to the storage in the Storage Connection field.
-
If required, enter any Mount Options.
-
Enter the Disk Size (GiB).
-
Select the NFS Version from the drop-down list.
-
Enter the Storage Domain Name.
-
-
For iSCSI:
-
Enter the Portal IP Address, Portal Port, Portal Username, and Portal Password.
-
Click Retrieve Target List and select a target. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
-
Enter the Disk Size (GiB).
-
Enter the Discovery Username and Discovery Password.
-
-
For Fibre Channel:
-
Enter the LUN ID. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
-
Enter the Disk Size (GiB).
-
-
For Gluster Storage:
-
Enter the full address and path to the storage in the Storage Connection field.
-
If required, enter any Mount Options.
-
Enter the Disk Size (GiB).
-
-
-
Click Next.
-
Review the storage configuration. If the details are correct, click Finish Deployment.
-
When the deployment is complete, click Close.
One data center, cluster, host, storage domain, and the Engine virtual machine are already running. You can log in to the Administration Portal to add further resources.
-
Optionally, add a directory server using the
ovirt-engine-extension-aaa-ldap-setup
interactive setup script so you can add additional users to the environment. For more information, see Configuring an External LDAP Provider in the Administration Guide.
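You can also check the deployment from the command line on the deployment host. The hosted-engine helper reports the state of the Engine virtual machine and of each self-hosted engine node:
# hosted-engine --vm-status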
The self-hosted engine’s status is displayed in the Cockpit web interface. The Engine virtual machine, the host running it, and the self-hosted engine storage domain are flagged with a gold crown in the Administration Portal.
Enabling the oVirt Engine repositories is not part of the automated installation. Log in to the Engine virtual machine to enable the repositories:
5.3. Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled:
You can check which repositories are currently enabled by running yum repolist.
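On oVirt, the Engine repositories are normally provided by the oVirt release package. As a hedged example for oVirt 4.4 (adjust the version and URL to your release), you can install it on the Engine virtual machine before enabling the modules below:
# yum install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm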
-
Enable the javapackages-tools module:
# yum module -y enable javapackages-tools
-
Enable the pki-deps module:
# yum module -y enable pki-deps
-
Enable version 12 of the postgresql module:
# yum module -y enable postgresql:12
Log in to the Administration Portal, where you can add hosts and storage to the environment:
5.4. Connecting to the Administration Portal
Access the Administration Portal using a web browser.
-
In a web browser, navigate to
https://manager-fqdn/ovirt-engine
, replacing manager-fqdn with the FQDN that you provided during installation.
You can access the Administration Portal using alternate host names or IP addresses. To do so, you need to add a configuration file under /etc/ovirt-engine/engine.conf.d/. For example:
# vi /etc/ovirt-engine/engine.conf.d/99-custom-sso-setup.conf
SSO_ALTERNATE_ENGINE_FQDNS="alias1.example.com alias2.example.com"
The list of alternate host names needs to be separated by spaces. You can also add the IP address of the Engine to the list, but using IP addresses instead of DNS-resolvable host names is not recommended.
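Configuration files under /etc/ovirt-engine/engine.conf.d/ are read when the engine starts, so a restart of the ovirt-engine service is typically needed for the new alternate host names to take effect:
# systemctl restart ovirt-engine.service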
-
Click Administration Portal. An SSO login page displays. SSO login enables you to log in to the Administration and VM Portal at the same time.
-
Enter your User Name and Password. If you are logging in for the first time, use the user name admin along with the password that you specified during installation.
-
Select the Domain to authenticate against. If you are logging in using the internal admin user name, select the internal domain.
-
Click Log In.
-
You can view the Administration Portal in multiple languages. The default selection is chosen based on the locale settings of your web browser. If you want to view the Administration Portal in a language other than the default, select your preferred language from the drop-down list on the welcome page.
To log out of the oVirt Administration Portal, click your user name in the header bar and click Sign Out. You are logged out of all portals and the Engine welcome screen displays.
6. Installing Hosts for oVirt
oVirt supports two types of hosts: oVirt Nodes (oVirt Node) and Enterprise Linux hosts. Depending on your environment, you may want to use one type only, or both. At least two hosts are required for features such as migration and high availability.
See Recommended Practices for Configuring Host Networks for networking information.
SELinux is in enforcing mode upon installation. To verify, run getenforce.
Host Type | Other Names | Description
---|---|---
oVirt Node | oVirt Node, thin host | This is a minimal operating system based on Enterprise Linux. It is distributed as an ISO file from the Customer Portal and contains only the packages required for the machine to act as a host.
Enterprise Linux host | Enterprise Linux host, thick host | Enterprise Linux systems with the appropriate repositories enabled can be used as hosts.
When you create a new data center, you can set the compatibility version. Select the compatibility version that suits all the hosts in the data center. Once set, version regression is not allowed. For a fresh oVirt installation, the latest compatibility version is set in the default data center and default cluster; to use an earlier compatibility version, you must create additional data centers and clusters. For more information about compatibility versions see oVirt Engine Compatibility in oVirt Life Cycle.
6.1. oVirt Nodes
6.1.1. Installing oVirt Nodes
oVirt Node (oVirt Node) is a minimal operating system based on Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in an oVirt environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See http://cockpit-project.org/running.html for the minimum browser requirements.
oVirt Node supports NIST 800-53 partitioning requirements to improve security. oVirt Node uses a NIST 800-53 partition layout by default.
The host must meet the minimum host requirements.
-
Visit the oVirt Node Download page.
-
Choose the version of oVirt Node to download and click its Installation ISO link.
-
Write the oVirt Node Installation ISO disk image to a USB, CD, or DVD.
-
Start the machine on which you are installing oVirt Node, booting from the prepared installation media.
-
From the boot menu, select Install oVirt Node 4.4 and press Enter.
You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.
-
Select a language, and click Continue.
-
Select a keyboard layout from the Keyboard Layout screen and click Done.
-
Select the device on which to install oVirt Node from the Installation Destination screen. Optionally, enable encryption. Click Done.
Use the Automatically configure partitioning option.
-
Select a time zone from the Time & Date screen and click Done.
-
Select a network from the Network & Host Name screen and click Configure… to configure the connection details.
To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Enterprise Linux 8 Installation Guide.
Enter a host name in the Host Name field, and click Done.
-
Optionally configure Language Support, Security Policy, and Kdump. See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen.
-
Click Begin Installation.
-
Set a root password and, optionally, create an additional user while oVirt Node installs.
Do not create untrusted users on oVirt Node, as this can lead to exploitation of local security vulnerabilities.
-
Click Reboot to complete the installation.
When oVirt Node restarts,
nodectl check
performs a health check on the host and displays the result when you log in on the command line. The messagenode status: OK
ornode status: DEGRADED
indicates the health status. Runnodectl check
to get more information. The service is enabled by default.
6.1.2. Advanced Installation
Custom Partitioning
Custom partitioning on oVirt Node (oVirt Node) is not recommended. Use the Automatically configure partitioning option in the Installation Destination window.
If your installation requires custom partitioning, select the I will configure partitioning
option during the installation, and note that the following restrictions apply:
-
Ensure the default LVM Thin Provisioning option is selected in the Manual Partitioning window.
-
The following directories are required and must be on thin provisioned logical volumes:
-
root (/)
-
/home
-
/tmp
-
/var
-
/var/crash
-
/var/log
-
/var/log/audit
Do not create a separate partition for /usr. Doing so will cause the installation to fail. /usr must be on a logical volume that is able to change versions along with oVirt Node, and therefore should be left on root (/).
For information about the required storage sizes for each partition, see Storage Requirements.
-
The /boot directory should be defined as a standard partition.
-
The /var directory must be on a separate volume or disk.
-
Only XFS or Ext4 file systems are supported.
Configuring Manual Partitioning in a Kickstart File
The following example demonstrates how to configure manual partitioning in a Kickstart file.
clearpart --all
part /boot --fstype xfs --size=1000 --ondisk=sda
part pv.01 --size=42000 --grow
volgroup HostVG pv.01 --reserved-percent=20
logvol swap --vgname=HostVG --name=swap --fstype=swap --recommended
logvol none --vgname=HostVG --name=HostPool --thinpool --size=40000 --grow
logvol / --vgname=HostVG --name=root --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=6000 --grow
logvol /var --vgname=HostVG --name=var --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=15000
logvol /var/crash --vgname=HostVG --name=var_crash --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=10000
logvol /var/log --vgname=HostVG --name=var_log --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=8000
logvol /var/log/audit --vgname=HostVG --name=var_audit --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=2000
logvol /home --vgname=HostVG --name=home --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1000
logvol /tmp --vgname=HostVG --name=tmp --thin --fstype=ext4 --poolname=HostPool --fsoptions="defaults,discard" --size=1000
If you use logvol --thinpool --grow, you must also include volgroup --reserved-space or volgroup --reserved-percent to reserve space in the volume group for the thin pool to grow.
Automating oVirt Node Deployment
You can install oVirt Node (oVirt Node) without a physical media device by booting from a PXE server over the network with a Kickstart file that contains the answers to the installation questions.
General instructions for installing from a PXE server with a Kickstart file are available in the Enterprise Linux Installation Guide, as oVirt Node is installed in much the same way as Enterprise Linux. oVirt Node-specific instructions, with examples for deploying oVirt Node with Red Hat Satellite, are described below.
The automated oVirt Node deployment has 3 stages:
Preparing the Installation Environment
-
Visit the oVirt Node Download page.
-
Choose the version of oVirt Node to download and click its Installation ISO link.
-
Make the oVirt Node ISO image available over the network. See Installation Source on a Network in the Enterprise Linux Installation Guide.
-
Extract the squashfs.img hypervisor image file from the oVirt Node ISO:
# mount -o loop /path/to/oVirt Node-ISO /mnt/rhvh
# cp /mnt/rhvh/Packages/redhat-virtualization-host-image-update* /tmp
# cd /tmp
# rpm2cpio redhat-virtualization-host-image-update* | cpio -idmv
This squashfs.img file, located in the
/tmp/usr/share/redhat-virtualization-host/image/
directory, is called redhat-virtualization-host-version_number_version.squashfs.img. It contains the hypervisor image for installation on the physical machine. It should not be confused with the /LiveOS/squashfs.img file, which is used by the Anacondainst.stage2
option.
Configuring the PXE Server and the Boot Loader
-
Configure the PXE server. See Preparing for a Network Installation in the Enterprise Linux Installation Guide.
-
Copy the oVirt Node boot images to the
/tftpboot
directory:# cp mnt/rhvh/images/pxeboot/{vmlinuz,initrd.img} /var/lib/tftpboot/pxelinux/
-
Create a
rhvh
label specifying the oVirt Node boot images in the boot loader configuration:
LABEL rhvh
MENU LABEL Install oVirt Node
KERNEL /var/lib/tftpboot/pxelinux/vmlinuz
APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2=URL/to/oVirt Node-ISO
oVirt Node Boot Loader Configuration Example for Red Hat Satellite
If you are using information from Red Hat Satellite to provision the host, you must create a global or host group level parameter called
rhvh_image
and populate it with the directory URL where the ISO is mounted or extracted:
<%#
kind: PXELinux
name: oVirt Node PXELinux
%>
# Created for booting new hosts
#
DEFAULT rhvh
LABEL rhvh
KERNEL <%= @kernel %>
APPEND initrd=<%= @initrd %> inst.ks=<%= foreman_url("provision") %> inst.stage2=<%= @host.params["rhvh_image"] %> intel_iommu=on console=tty0 console=ttyS1,115200n8 ssh_pwauth=1 local_boot_trigger=<%= foreman_url("built") %>
IPAPPEND 2
-
Make the content of the oVirt Node ISO locally available and export it to the network, for example, using an HTTPD server:
# cp -a /mnt/rhvh/ /var/www/html/rhvh-install
# curl URL/to/oVirt Node-ISO/rhvh-install
Creating and Running a Kickstart File
-
Create a Kickstart file and make it available over the network. See Kickstart Installations in the Enterprise Linux Installation Guide.
-
Ensure that the Kickstart file meets the following oVirt-specific requirements:
-
The
%packages
section is not required for oVirt Node. Instead, use theliveimg
option and specify the redhat-virtualization-host-version_number_version.squashfs.img file from the oVirt Node ISO image:liveimg --url=example.com/tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-version_number_version.squashfs.img
-
Autopartitioning is highly recommended:
autopart --type=thinp
Thin provisioning must be used with autopartitioning.
The
--no-home
option does not work in oVirt Node because/home
is a required directory.If your installation requires manual partitioning, see Custom Partitioning for a list of limitations that apply to partitions and an example of manual partitioning in a Kickstart file.
-
A
%post
section that calls thenodectl init
command is required:
%post
nodectl init
%end
Kickstart Example for Deploying oVirt Node on Its Own
This Kickstart example shows you how to deploy oVirt Node. You can include additional commands and options as required.
liveimg --url=http://FQDN/tmp/usr/share/redhat-virtualization-host/image/redhat-virtualization-host-version_number_version.squashfs.img
clearpart --all
autopart --type=thinp
rootpw --plaintext ovirt
timezone --utc America/Phoenix
zerombr
text

reboot

%post --erroronfail
nodectl init
%end
-
-
Add the Kickstart file location to the boot loader configuration file on the PXE server:
APPEND initrd=/var/lib/tftpboot/pxelinux/initrd.img inst.stage2=URL/to/oVirt Node-ISO inst.ks=URL/to/oVirt Node-ks.cfg
-
Install oVirt Node following the instructions in Booting from the Network Using PXE in the Enterprise Linux Installation Guide.
6.2. Enterprise Linux hosts
6.2.1. Installing Enterprise Linux hosts
An Enterprise Linux host is based on a standard basic installation of Enterprise Linux 8 on a physical server, with the Enterprise Linux Server
and oVirt
repositories enabled.
For detailed installation instructions, see Performing a standard EL installation.
The host must meet the minimum host requirements.
Virtualization must be enabled in your host’s BIOS settings. For information on changing your host’s BIOS settings, refer to your host’s hardware documentation. |
Do not install third-party watchdogs on Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM. |
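As a minimal sketch of enabling the Enterprise Linux Server and oVirt repositories mentioned above, assuming the upstream oVirt 4.4 release package on an Enterprise Linux 8 host (adjust the release package to match your oVirt version):
# dnf install https://resources.ovirt.org/pub/yum-repo/ovirt-release44.rpm
# dnf repolist
The second command lets you confirm that the oVirt repositories are now listed as enabled.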
6.2.2. Installing Cockpit on Enterprise Linux hosts
You can install Cockpit for monitoring the host’s resources and performing administrative tasks.
-
Install the dashboard packages:
# yum install cockpit-ovirt-dashboard
-
Enable and start the
cockpit.socket
service:
# systemctl enable cockpit.socket
# systemctl start cockpit.socket
-
Check if Cockpit is an active service in the firewall:
# firewall-cmd --list-services
You should see
cockpit
listed. If it is not, enter the following with root permissions to addcockpit
as a service to your firewall:
# firewall-cmd --permanent --add-service=cockpit
The
--permanent
option keeps thecockpit
service active after rebooting.
You can log in to the Cockpit web interface at https://HostFQDNorIP:9090
.
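If the web interface is not reachable, a quick check from the host itself can confirm that Cockpit is listening on port 9090. This is a hedged example; the FQDN is a placeholder for your host:
# ss -tlnp | grep 9090
# curl -k https://host.example.com:9090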
6.3. Recommended Practices for Configuring Host Networks
If your network environment is complex, you may need to configure a host network manually before adding the host to the oVirt Engine.
Consider the following practices for configuring a host network:
-
Configure the network with Cockpit. Alternatively, you can use
nmtui
ornmcli
. -
If a network is not required for a self-hosted engine deployment or for adding a host to the Engine, configure the network in the Administration Portal after adding the host to the Engine. See Creating a New Logical Network in a Data Center or Cluster.
-
Use the following naming conventions:
-
VLAN devices:
VLAN_NAME_TYPE_RAW_PLUS_VID_NO_PAD
-
VLAN interfaces:
physical_device.VLAN_ID
(for example,eth0.23
,eth1.128
,enp3s0.50
) -
Bond interfaces:
bondnumber
(for example,bond0
,bond1
) -
VLANs on bond interfaces:
bondnumber.VLAN_ID
(for example,bond0.50
,bond1.128
)
-
-
Use network bonding. Network teaming is not supported in oVirt and will cause errors if the host is used to deploy a self-hosted engine or added to the Engine.
-
Use recommended bonding modes:
-
If the
ovirtmgmt
network is not used by virtual machines, the network may use any supported bonding mode. -
If the
ovirtmgmt
network is used by virtual machines, see Which bonding modes work when used with a bridge that virtual machine guests or containers connect to?. -
oVirt’s default bonding mode is
(Mode 4) Dynamic Link Aggregation
. If your switch does not support Link Aggregation Control Protocol (LACP), use(Mode 1) Active-Backup
. See Bonding Modes for details.
-
-
Configure a VLAN on a physical NIC as in the following example (although
nmcli
is used, you can use any tool):
# nmcli connection add type vlan con-name vlan50 ifname eth0.50 dev eth0 id 50
# nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254
-
Configure a VLAN on a bond as in the following example (although
nmcli
is used, you can use any tool):
# nmcli connection add type bond con-name bond0 ifname bond0 bond.options "mode=active-backup,miimon=100" ipv4.method disabled ipv6.method ignore
# nmcli connection add type ethernet con-name eth0 ifname eth0 master bond0 slave-type bond
# nmcli connection add type ethernet con-name eth1 ifname eth1 master bond0 slave-type bond
# nmcli connection add type vlan con-name vlan50 ifname bond0.50 dev bond0 id 50
# nmcli con mod vlan50 +ipv4.dns 8.8.8.8 +ipv4.addresses 123.123.0.1/24 +ipv4.gateway 123.123.0.254
-
Do not disable
firewalld
. -
Customize the firewall rules in the Administration Portal after adding the host to the Engine. See Configuring Host Firewall Rules.
6.4. Adding Self-Hosted Engine Nodes to the oVirt Engine
Self-hosted engine nodes are added in the same way as a standard host, with an additional step to deploy the host as a self-hosted engine node. The shared storage domain is automatically detected and the node can be used as a failover host to host the Engine virtual machine when required. You can also attach standard hosts to a self-hosted engine environment, but they cannot host the Engine virtual machine. Have at least two self-hosted engine nodes to ensure the Engine virtual machine is highly available. Additional hosts can also be added using the REST API. See Hosts in the REST API Guide.
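For example, a self-hosted engine node can also be added with a single REST call. This is a hedged sketch only; the host name, credentials, and engine FQDN are placeholders, and you should confirm the exact request body and parameters (such as deploy_hosted_engine) against the REST API Guide for your version:
# curl -k -u admin@internal:password \
    -H "Content-Type: application/xml" -H "Accept: application/xml" \
    -d '<host><name>host2.example.com</name><address>host2.example.com</address><root_password>secret</root_password></host>' \
    "https://engine.example.com/ovirt-engine/api/hosts?deploy_hosted_engine=true"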
-
If you are reusing a self-hosted engine node, remove its existing self-hosted engine configuration. See Removing a Host from a Self-Hosted Engine Environment.
-
In the Administration Portal, click Compute → Hosts. -
Click New.
For information on additional host settings, see Explanation of Settings and Controls in the New Host and Edit Host Windows in the Administration Guide.
-
Use the drop-down list to select the Data Center and Host Cluster for the new host.
-
Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
-
Select an authentication method to use for the Engine to access the host.
-
Enter the root user’s password to use password authentication.
-
Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
-
-
Optionally, configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
-
Click the Hosted Engine tab.
-
Select Deploy.
-
Click OK.
6.5. Adding Standard Hosts to the oVirt Engine
Adding a host to your oVirt environment can take some time, as the following steps are completed by the platform: virtualization checks, installation of packages, and creation of a bridge.
-
From the Administration Portal, click Compute → Hosts. -
Click New.
-
Use the drop-down list to select the Data Center and Host Cluster for the new host.
-
Enter the Name and the Address of the new host. The standard SSH port, port 22, is auto-filled in the SSH Port field.
-
Select an authentication method to use for the Engine to access the host.
-
Enter the root user’s password to use password authentication.
-
Alternatively, copy the key displayed in the SSH PublicKey field to /root/.ssh/authorized_keys on the host to use public key authentication.
-
-
Optionally, click the Advanced Parameters button to change the following advanced host settings:
-
Disable automatic firewall configuration.
-
Add a host SSH fingerprint to increase security. You can add it manually, or fetch it automatically.
-
-
Optionally configure power management, where the host has a supported power management card. For information on power management configuration, see Host Power Management Settings Explained in the Administration Guide.
-
Click OK.
The new host displays in the list of hosts with a status of Installing
, and you can view the progress of the installation in the Events section of the Notification Drawer. After a brief delay the host status changes to
Up
.
7. Adding Storage for oVirt
Add storage as data domains in the new environment. An oVirt environment must have at least one data domain, but adding more is recommended.
Add the storage you prepared earlier:
If you are using iSCSI storage, new data domains must not use the same iSCSI target as the self-hosted engine storage domain. |
Creating additional data domains in the same data center as the self-hosted engine storage domain is highly recommended. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you will not be able to add new storage domains or remove the corrupted storage domain; you will have to redeploy the self-hosted engine. |
7.1. Adding NFS Storage
This procedure shows you how to attach existing NFS storage to your oVirt environment as a data domain.
If you require an ISO or export domain, use this procedure, but select ISO or Export from the Domain Function list.
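Before adding the domain, it can help to verify from one of the hosts that the export is mountable and writable. This is an optional sanity check; storage.example.com:/data is a placeholder for your export path:
# mkdir -p /tmp/nfs-check
# mount -t nfs storage.example.com:/data /tmp/nfs-check
# touch /tmp/nfs-check/write-test && rm /tmp/nfs-check/write-test
# umount /tmp/nfs-check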
-
In the Administration Portal, click Storage → Domains. -
Click New Domain.
-
Enter a Name for the storage domain.
-
Accept the default values for the Data Center, Domain Function, Storage Type, Format, and Host lists.
-
Enter the Export Path to be used for the storage domain. The export path should be in the format of 123.123.0.10:/data (for IPv4), [2001:0:0:0:0:0:0:5db1]:/data (for IPv6), or domain.example.com:/data.
-
Optionally, you can configure the advanced parameters:
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
-
Click OK.
The new NFS data domain has a status of Locked
until the disk is prepared. The data domain is then automatically attached to the data center.
7.2. Adding iSCSI Storage
This procedure shows you how to attach existing iSCSI storage to your oVirt environment as a data domain.
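Optionally, before working in the Administration Portal, you can confirm from one of the hosts that the iSCSI portal is reachable and exports targets. This is a hedged example; 10.64.14.5 is a placeholder portal address:
# iscsiadm -m discovery -t sendtargets -p 10.64.14.5:3260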
-
Click Storage → Domains. -
Click New Domain.
-
Enter the Name of the new storage domain.
-
Select a Data Center from the drop-down list.
-
Select Data as the Domain Function and iSCSI as the Storage Type.
-
Select an active host as the Host.
Communication to the storage domain is from the selected host and not directly from the Engine. Therefore, all hosts must have access to the storage device before the storage domain can be configured.
-
The Engine can map iSCSI targets to LUNs or LUNs to iSCSI targets. The New Domain window automatically displays known targets with unused LUNs when the iSCSI storage type is selected. If the target that you are using to add storage does not appear, you can use target discovery to find it; otherwise proceed to the next step.
-
Click Discover Targets to enable target discovery options. When targets have been discovered and logged in to, the New Domain window automatically displays targets with LUNs unused by the environment.
LUNs used externally to the environment are also displayed.
You can use the Discover Targets options to add LUNs on many targets or multiple paths to the same LUNs.
-
Enter the FQDN or IP address of the iSCSI host in the Address field.
-
Enter the port with which to connect to the host when browsing for targets in the Port field. The default is
3260
. -
If CHAP is used to secure the storage, select the User Authentication check box. Enter the CHAP user name and CHAP password.
You can define credentials for an iSCSI target for a specific host with the REST API. See StorageServerConnectionExtensions: add in the REST API Guide for more information.
-
Click Discover.
-
Select one or more targets from the discovery results and click Login for one target or Login All for multiple targets.
If more than one path access is required, you must discover and log in to the target through all the required paths. Modifying a storage domain to add additional paths is currently not supported.
-
-
Click the + button next to the desired target. This expands the entry and displays all unused LUNs attached to the target.
-
Select the check box for each LUN that you are using to create the storage domain.
-
Optionally, you can configure the advanced parameters:
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
-
-
Click OK.
If you have configured multiple storage connection paths to the same target, follow the procedure in Configuring iSCSI Multipathing to complete iSCSI bonding.
If you want to migrate your current storage network to an iSCSI bond, see Migrating a Logical Network to an iSCSI Bond.
7.3. Adding FCP Storage
This procedure shows you how to attach existing FCP storage to your oVirt environment as a data domain.
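Optionally, you can first confirm on the selected host that the Fibre Channel LUNs are visible; for example (output varies with your hardware and zoning):
# multipath -ll
# lsblk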
-
Click Storage → Domains. -
Click New Domain.
-
Enter the Name of the storage domain.
-
Select an FCP Data Center from the drop-down list.
If you do not yet have an appropriate FCP data center, select
(none)
. -
Select the Domain Function and the Storage Type from the drop-down lists. The storage domain types that are not compatible with the chosen data center are not available.
-
Select an active host in the Host field. If this is not the first data domain in a data center, you must select the data center’s SPM host.
All communication to the storage domain is through the selected host and not directly from the oVirt Engine. At least one active host must exist in the system and be attached to the chosen data center. All hosts must have access to the storage device before the storage domain can be configured.
-
The New Domain window automatically displays known targets with unused LUNs when Fibre Channel is selected as the storage type. Select the LUN ID check box to select all of the available LUNs.
-
Optionally, you can configure the advanced parameters.
-
Click Advanced Parameters.
-
Enter a percentage value into the Warning Low Space Indicator field. If the free space available on the storage domain is below this percentage, warning messages are displayed to the user and logged.
-
Enter a GB value into the Critical Space Action Blocker field. If the free space available on the storage domain is below this value, error messages are displayed to the user and logged, and any new action that consumes space, even temporarily, will be blocked.
-
Select the Wipe After Delete check box to enable the wipe after delete option. This option can be edited after the domain is created, but doing so will not change the wipe after delete property of disks that already exist.
-
Select the Discard After Delete check box to enable the discard after delete option. This option can be edited after the domain is created. This option is only available to block storage domains.
-
-
Click OK.
The new FCP data domain remains in a Locked
status while it is being prepared for use. When ready, it is automatically attached to the data center.
7.4. Adding Gluster Storage
To use Gluster Storage with oVirt, see Configuring oVirt with Gluster Storage.
For the Gluster Storage versions that are supported with oVirt, see https://access.redhat.com/articles/2356261.
Appendix A: Troubleshooting a Self-hosted Engine Deployment
To confirm whether the self-hosted engine has already been deployed, run hosted-engine --check-deployed
. An error will only be displayed if the self-hosted engine has not been deployed.
Troubleshooting the Engine Virtual Machine
Check the status of the Engine virtual machine by running hosted-engine --vm-status
.
Any changes made to the Engine virtual machine will take about 20 seconds before they are reflected in the status command output. |
Depending on the Engine status
in the output, see the following suggestions to find or fix the issue.
Engine status: "health": "good", "vm": "up" "detail": "up"
-
If the Engine virtual machine is up and running as normal, you will see the following output:
--== Host 1 status ==--

Status up-to-date              : True
Hostname                       : hypervisor.example.com
Host ID                        : 1
Engine status                  : {"health": "good", "vm": "up", "detail": "up"}
Score                          : 3400
stopped                        : False
Local maintenance              : False
crc32                          : 99e57eba
Host timestamp                 : 248542
-
If the output is normal but you cannot connect to the Engine, check the network connection.
Engine status: "reason": "failed liveliness check", "health": "bad", "vm": "up", "detail": "up"
-
If the
health
isbad
and thevm
isup
, the HA services will try to restart the Engine virtual machine to get the Engine back. If it does not succeed within a few minutes, enable the global maintenance mode from the command line so that the hosts are no longer managed by the HA services.
# hosted-engine --set-maintenance --mode=global
-
Connect to the console. When prompted, enter the operating system’s root password. For more console options, see https://access.redhat.com/solutions/2221461.
# hosted-engine --console
-
Ensure that the Engine virtual machine’s operating system is running by logging in.
-
Check the status of the
ovirt-engine
service:
# systemctl status -l ovirt-engine
# journalctl -u ovirt-engine
-
Check the following logs: /var/log/messages, /var/log/ovirt-engine/engine.log, and /var/log/ovirt-engine/server.log.
-
After fixing the issue, reboot the Engine virtual machine manually from one of the self-hosted engine nodes:
# hosted-engine --vm-shutdown
# hosted-engine --vm-start
When the self-hosted engine nodes are in global maintenance mode, the Engine virtual machine must be rebooted manually. If you try to reboot the Engine virtual machine by sending a
reboot
command from the command line, the Engine virtual machine will remain powered off. This is by design. -
On the Engine virtual machine, verify that the
ovirt-engine
service is up and running:
# systemctl status ovirt-engine.service
-
After ensuring the Engine virtual machine is up and running, close the console session and disable the maintenance mode to enable the HA services again:
# hosted-engine --set-maintenance --mode=none
Engine status: "vm": "down", "health": "bad", "detail": "unknown", "reason": "vm not running on this host"
-
If you have more than one host in your environment, ensure that another host is not currently trying to restart the Engine virtual machine.
-
Ensure that you are not in global maintenance mode.
-
Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log.
-
Try to reboot the Engine virtual machine manually from one of the self-hosted engine nodes:
# hosted-engine --vm-shutdown
# hosted-engine --vm-start
Engine status: "vm": "unknown", "health": "unknown", "detail": "unknown", "reason": "failed to getVmStats"
This status means that ovirt-ha-agent
failed to get the virtual machine’s details from VDSM.
-
Check the VDSM logs in /var/log/vdsm/vdsm.log.
-
Check the ovirt-ha-agent logs in /var/log/ovirt-hosted-engine-ha/agent.log.
Engine status: The self-hosted engine’s configuration has not been retrieved from shared storage
If you receive the status The hosted engine configuration has not been retrieved from shared storage. Please ensure that ovirt-ha-agent is running and the storage server is reachable
there is an issue with the ovirt-ha-agent
service, or with the storage, or both.
-
Check the status of
ovirt-ha-agent
on the host:
# systemctl status -l ovirt-ha-agent
# journalctl -u ovirt-ha-agent
-
If the
ovirt-ha-agent
is down, restart it:
# systemctl start ovirt-ha-agent
-
Check the
ovirt-ha-agent
logs in /var/log/ovirt-hosted-engine-ha/agent.log. -
Check that you can ping the shared storage.
-
Check whether the shared storage is mounted.
Additional Troubleshooting Commands
-
hosted-engine --reinitialize-lockspace
: This command is used when the sanlock lockspace is broken. Ensure that the global maintenance mode is enabled and that the Engine virtual machine is stopped before reinitializing the sanlock lockspaces. -
hosted-engine --clean-metadata
: Remove the metadata for a host’s agent from the global status database. This makes all other hosts forget about this host. Ensure that the target host is down and that the global maintenance mode is enabled. -
hosted-engine --check-liveliness
: This command checks the liveliness page of the ovirt-engine service. You can also check by connecting tohttps://engine-fqdn/ovirt-engine/services/health/
in a web browser. -
hosted-engine --connect-storage
: This command instructs VDSM to prepare all storage connections needed for the host and the Engine virtual machine. This is normally run in the back-end during the self-hosted engine deployment. Ensure that the global maintenance mode is enabled if you need to run this command to troubleshoot storage issues.
Cleaning Up a Failed Self-hosted Engine Deployment
If a self-hosted engine deployment was interrupted, subsequent deployments will fail with an error message. The error will differ depending on the stage in which the deployment failed.
If you receive an error message, you can run the cleanup script on the deployment host to clean up the failed deployment. However, it’s best to reinstall your base operating system and start the deployment from the beginning.
The cleanup script has the following limitations:
|
-
Run
/usr/sbin/ovirt-hosted-engine-cleanup
and selecty
to remove anything left over from the failed self-hosted engine deployment.
# /usr/sbin/ovirt-hosted-engine-cleanup
This will de-configure the host to run ovirt-hosted-engine-setup from scratch.
Caution, this operation should be used with care.
Are you sure you want to proceed? [y/n]
-
Define whether to reinstall on the same shared storage device or select a different shared storage device.
-
To deploy the installation on the same storage domain, clean up the storage domain by running the following command in the appropriate directory on the server for NFS, Gluster, PosixFS or local storage domains:
# rm -rf storage_location/*
-
For iSCSI or Fibre Channel Protocol (FCP) storage, see https://access.redhat.com/solutions/2121581 for information on how to clean up the storage.
-
Alternatively, select a different shared storage device.
-
-
Redeploy the self-hosted engine.
Appendix B: Migrating Databases and Services to a Remote Server
Although you cannot configure remote databases and services during the automated installation, you can migrate them to a remote server post-installation.
Migrating the Self-Hosted Engine Database to a Remote Server
You can migrate the engine
database of a self-hosted engine to a remote database server after the oVirt Engine has been initially configured. Use engine-backup
to create a database backup and restore it on the new database server.
The new database server must have Enterprise Linux 8 installed and the required repositories enabled:
Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled:
You can check which repositories are currently enabled by running yum repolist
.
-
Enable the
javapackages-tools
module.
# yum module -y enable javapackages-tools
-
Enable version 12 of the
postgresql
module.
# yum module -y enable postgresql:12
Migrating the Self-Hosted Engine Database to a Remote Server
-
Log in to a self-hosted engine node and place the environment into
global
maintenance mode. This disables the High Availability agents and prevents the Engine virtual machine from being migrated during the procedure:
# hosted-engine --set-maintenance --mode=global
-
Log in to the oVirt Engine machine and stop the
ovirt-engine
service so that it does not interfere with the engine backup:
# systemctl stop ovirt-engine.service
-
Create the
engine
database backup:
# engine-backup --scope=files --scope=db --mode=backup --file=file_name --log=backup_log_name
-
Copy the backup file to the new database server:
# scp /tmp/engine.dump root@new.database.server.com:/tmp
-
Log in to the new database server and install
engine-backup
:
# yum install ovirt-engine-tools-backup
-
Restore the database on the new database server. file_name is the backup file copied from the Engine.
# engine-backup --mode=restore --scope=files --scope=db --file=file_name --log=restore_log_name --provision-db --no-restore-permissions
-
Now that the database has been migrated, start the
ovirt-engine
service:
# systemctl start ovirt-engine.service
-
Log in to a self-hosted engine node and turn off maintenance mode, enabling the High Availability agents:
# hosted-engine --set-maintenance --mode=none
Migrating Data Warehouse to a Separate Machine
This section describes how to migrate the Data Warehouse database and service from the oVirt Engine to a separate machine. Hosting the Data Warehouse service on a separate machine reduces the load on each individual machine, and allows each service to avoid potential conflicts caused by sharing CPU and memory resources with other processes.
You can migrate the Data Warehouse service and connect it with the existing Data Warehouse database (ovirt_engine_history
), or you can migrate the Data Warehouse database to the separate machine before migrating the Data Warehouse service. If the Data Warehouse database is hosted on the Engine, migrating the database in addition to the Data Warehouse service further reduces the competition for resources on the Engine machine. You can migrate the database to the same machine onto which you will migrate the Data Warehouse service, or to a machine that is separate from both the Engine machine and the new Data Warehouse service machine.
Migrating the Data Warehouse Database to a Separate Machine
Migrate the Data Warehouse database (ovirt_engine_history
) before you migrate the Data Warehouse service. Use engine-backup
to create a database backup and restore it on the new database machine. For more information on engine-backup
, run engine-backup --help
.
To migrate the Data Warehouse service only, see Migrating the Data Warehouse Service to a Separate Machine.
The new database server must have Enterprise Linux 8 installed. Enable the required repositories on the new database server.
Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled:
You can check which repositories are currently enabled by running yum repolist
.
-
Enable the
javapackages-tools
module.
# yum module -y enable javapackages-tools
-
Enable version 12 of the
postgresql
module.
# yum module -y enable postgresql:12
Migrating the Data Warehouse Database to a Separate Machine
-
Create a backup of the Data Warehouse database and configuration files on the Engine:
# engine-backup --mode=backup --scope=dwhdb --scope=files --file=file_name --log=log_file_name
-
Copy the backup file from the Engine to the new machine:
# scp /tmp/file_name root@new.dwh.server.com:/tmp
-
Install
engine-backup
on the new machine:
# yum install ovirt-engine-tools-backup
-
Install the PostgreSQL server package:
# yum install postgresql-server postgresql-contrib
-
Initialize the PostgreSQL database, start the
postgresql
service, and ensure that this service starts on boot:
# systemctl enable postgresql-12
# systemctl start postgresql-12
-
Restore the Data Warehouse database on the new machine. file_name is the backup file copied from the Engine.
# engine-backup --mode=restore --scope=files --scope=dwhdb --file=file_name --log=log_file_name --provision-dwh-db --no-restore-permissions
The Data Warehouse database is now hosted on a separate machine from that on which the Engine is hosted. After successfully restoring the Data Warehouse database, a prompt instructs you to run the engine-setup
command. Before running this command, migrate the Data Warehouse service.
Migrating the Data Warehouse Service to a Separate Machine
You can migrate the Data Warehouse service installed and configured on the oVirt Engine to a separate machine. Hosting the Data Warehouse service on a separate machine helps to reduce the load on the Engine machine.
Notice that this procedure migrates the Data Warehouse service only.
To migrate the Data Warehouse database (ovirt_engine_history
) prior to migrating the Data Warehouse service, see Migrating the Data Warehouse Database to a Separate Machine.
-
You must have installed and configured the Engine and Data Warehouse on the same machine.
-
To set up the new Data Warehouse machine, you must have the following:
-
The password from the Engine’s /etc/ovirt-engine/engine.conf.d/10-setup-database.conf file.
-
Allowed access from the Data Warehouse machine to the Engine database machine’s TCP port 5432.
-
The username and password for the Data Warehouse database from the Engine’s /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf file. If you migrated the
ovirt_engine_history
database using Migrating the Data Warehouse Database to a Separate Machine, the backup includes these credentials, which you defined during the database setup on that machine.
-
Setting up this scenario requires four steps:
Setting up the New Data Warehouse Machine
Enable the oVirt repositories and install the Data Warehouse setup package on an Enterprise Linux 8 machine:
-
Ensure that all packages currently installed are up to date:
# yum update
-
Install the
ovirt-engine-dwh-setup
package:
# yum install ovirt-engine-dwh-setup
Stopping the Data Warehouse Service on the Engine Machine
-
Stop the Data Warehouse service:
# systemctl stop ovirt-engine-dwhd.service
-
If the database is hosted on a remote machine, you must manually grant access by editing the postgresql.conf file. Edit the
/var/lib/pgsql/12/data/postgresql.conf
file and modify the listen_addresses line so that it matches the following:
listen_addresses = '*'
If the line does not exist or has been commented out, add it manually.
If the database is hosted on the Engine machine and was configured during a clean setup of the oVirt Engine, access is granted by default.
See Migrating the Data Warehouse Database to a Separate Machine for more information on how to configure and migrate the Data Warehouse database.
-
Restart the postgresql service:
# systemctl restart postgresql-12
Configuring the New Data Warehouse Machine
The order of the options or settings shown in this section may differ depending on your environment.
-
If you are migrating both the
ovirt_engine_history
database and the Data Warehouse service to the same machine, run the following commands; otherwise, proceed to the next step.
# sed -i '/^ENGINE_DB_/d' \
    /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/10-setup-database.conf
# sed -i \
    -e 's;^\(OVESETUP_ENGINE_CORE/enable=bool\):True;\1:False;' \
    -e '/^OVESETUP_CONFIG\/fqdn/d' \
    /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
-
Run the
engine-setup
command to begin configuration of Data Warehouse on the machine:
# engine-setup
-
Press
Enter
to configure Data Warehouse:
Configure Data Warehouse on this host (Yes, No) [Yes]:
-
Press Enter to accept the automatically detected host name, or enter an alternative host name and press Enter:
Host fully qualified DNS name of this server [autodetected host name]:
-
Press
Enter
to automatically configure the firewall, or typeNo
and pressEnter
to maintain existing settings:
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
If you choose to automatically configure the firewall, and no firewall managers are active, you are prompted to select your chosen firewall manager from a list of supported options. Type the name of the firewall manager and press
Enter
. This applies even in cases where only one option is listed. -
Enter the fully qualified domain name and password for the Engine. Press Enter to accept the default values in the other fields:
Host fully qualified DNS name of the engine server []: engine-fqdn
Setup needs to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action.
Please choose one of the following:
1 - Access remote engine server using ssh as root
2 - Perform each action manually, use files to copy content around
(1, 2) [1]:
ssh port on remote engine server [22]:
root password on remote engine server engine-fqdn: password
-
Enter the FQDN and password for the Engine database machine. Press
Enter
to accept the default values in the other fields:
Engine database host []: manager-db-fqdn
Engine database port [5432]:
Engine database secured connection (Yes, No) [No]:
Engine database name [engine]:
Engine database user [engine]:
Engine database password: password
-
Confirm your installation settings:
Please confirm installation settings (OK, Cancel) [OK]:
The Data Warehouse service is now configured on the remote machine. Proceed to disable the Data Warehouse service on the Engine machine.
Disabling the Data Warehouse Service on the Engine Machine
-
On the Engine machine, restart the Engine:
# service ovirt-engine restart
-
Run the following command to modify the file /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf and set the options to
False
:
# sed -i \
    -e 's;^\(OVESETUP_DWH_CORE/enable=bool\):True;\1:False;' \
    -e 's;^\(OVESETUP_DWH_CONFIG/remoteEngineConfigured=bool\):True;\1:False;' \
    /etc/ovirt-engine-setup.conf.d/20-setup-ovirt-post.conf
-
Disable the Data Warehouse service:
# systemctl disable ovirt-engine-dwhd.service
-
Remove the Data Warehouse files:
# rm -f /etc/ovirt-engine-dwh/ovirt-engine-dwhd.conf.d/*.conf /var/lib/ovirt-engine-dwh/backups/*
The Data Warehouse service is now hosted on a separate machine from the Engine.
Migrating the Websocket Proxy to a Separate Machine
For security or performance reasons the websocket proxy can run on a separate machine that does not run the oVirt Engine. The procedure to migrate the websocket proxy from the Engine machine to a separate machine involves removing the websocket proxy configuration from the Engine machine, then installing the proxy on the separate machine.
The engine-cleanup
command can be used to remove the websocket proxy from the Engine machine:
Removing the Websocket Proxy from the Engine machine
-
On the Engine machine, run
engine-cleanup
to remove the required configuration.
# engine-cleanup
-
Type
No
when asked to remove all components and pressEnter
.Do you want to remove all components? (Yes, No) [Yes]: No
-
Type
No
when asked to remove the engine and pressEnter
.Do you want to remove the engine? (Yes, No) [Yes]: No
-
Type
Yes
when asked to remove the websocket proxy and pressEnter
.Do you want to remove the WebSocket proxy? (Yes, No) [No]: Yes
Select
No
if asked to remove any other components.
Installing a Websocket Proxy on a Separate Machine
The websocket proxy allows users to connect to virtual machines through a noVNC console. The noVNC client uses websockets to pass VNC data. However, the VNC server in QEMU does not provide websocket support, so a websocket proxy must be placed between the client and the VNC server. The proxy can run on any machine that has access to the network, including the Engine machine.
For security and performance reasons, users may want to configure the websocket proxy on a separate machine.
-
Install the websocket proxy:
# yum install ovirt-engine-websocket-proxy
-
Run the
engine-setup
command to configure the websocket proxy.
# engine-setup
If the
ovirt-engine
package has also been installed, chooseNo
when asked to configure the Engine (Engine
) on this host. -
Press
Enter
to allowengine-setup
to configure a websocket proxy server on the machine.Configure WebSocket Proxy on this machine? (Yes, No) [Yes]:
-
Press
Enter
to accept the automatically detected host name, or enter an alternative host name and pressEnter
. Note that the automatically detected host name may be incorrect if you are using virtual hosts:Host fully qualified DNS name of this server [host.example.com]:
-
Press
Enter
to allowengine-setup
to configure the firewall and open the ports required for external communication. If you do not allowengine-setup
to modify your firewall configuration, then you must manually open the required ports.
Setup can automatically configure the firewall on this system.
Note: automatic configuration of the firewall may overwrite current settings.
Do you want Setup to configure the firewall? (Yes, No) [Yes]:
-
Enter the FQDN of the Engine machine and press
Enter
.Host fully qualified DNS name of the engine server []: manager.example.com
-
Press
Enter
to allowengine-setup
to perform actions on the Engine machine, or press2
to manually perform the actions.
Setup will need to do some actions on the remote engine server. Either automatically, using ssh as root to access it, or you will be prompted to manually perform each such action.
Please choose one of the following:
1 - Access remote engine server using ssh as root
2 - Perform each action manually, use files to copy content around
(1, 2) [1]:
-
Press
Enter
to accept the default SSH port number, or enter the port number of the Engine machine.ssh port on remote engine server [22]:
-
Enter the root password to log in to the Engine machine and press
Enter
.root password on remote engine server engine_host.example.com:
-
-
Press
Enter
to confirm the configuration settings.
--== CONFIGURATION PREVIEW ==--

Firewall manager          : firewalld
Update Firewall           : True
Host FQDN                 : host.example.com
Configure WebSocket Proxy : True
Engine Host FQDN          : engine_host.example.com

Please confirm installation settings (OK, Cancel) [OK]:
Instructions are provided to configure the Engine machine to use the configured websocket proxy.
Manual actions are required on the engine host in order to enroll certs for this host and configure the engine about it.

Please execute this command on the engine host:
engine-config -s WebSocketProxy=host.example.com:6100
and then restart the engine service to make it effective
-
Log in to the Engine machine and execute the provided instructions.
# engine-config -s WebSocketProxy=host.example.com:6100
# systemctl restart ovirt-engine.service
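To verify the result, you can check that the proxy service is running and that the Engine now points at it. This is a hedged example; run the first command on the new proxy machine and the second on the Engine machine:
# systemctl status ovirt-websocket-proxy.service
# engine-config -g WebSocketProxy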
Appendix C: Setting up Cinderlib
-
Enable the
openstack-cinderlib
repositories on Red Hat Enterprise Linux:
# subscription-manager repos --enable=openstack-16.1-cinderlib-for-rhel-8-x86_64-rpms
-
Enable the
centos-release-openstack-ussuri
repositories on CentOS Stream:
# yum install centos-release-openstack-ussuri.noarch
-
Install the
OpenStack Cinderlib
package:
# yum install -y python3-cinderlib
-
In the oVirt Engine, enable managed block domain support:
# engine-config -s ManagedBlockDomainSupported=true
-
Restart the Engine to save the new configuration:
# systemctl restart ovirt-engine
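Optionally, confirm that the setting took effect after the restart (a simple check using engine-config):
# engine-config -g ManagedBlockDomainSupported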
-
Enable the
openstack-cinderlib
repositories on Red Hat Enterprise Linux:
# subscription-manager repos --enable=openstack-16.1-cinderlib-for-rhel-8-x86_64-rpms
-
Enable the
centos-release-openstack-ussuri
repositories on CentOS Stream:
# yum install -y centos-release-openstack-ussuri.noarch
-
Install the Python
brick
package:
# yum install -y python3-os-brick
-
Restart the VDSM on the host to save the new configuration:
# systemctl restart vdsmd
If you are using Cinderlib together with a ceph driver:
-
Enable the following repositories on Red Hat Enterprise Linux:
# subscription-manager repos --enable=rhceph-4-osd-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-mon-for-rhel-8-x86_64-rpms \
    --enable=rhceph-4-tools-for-rhel-8-x86_64-rpms
-
Enable the
centos-release-ceph-nautilus
repositories on CentOS Stream:
# yum install -y centos-release-ceph-nautilus.noarch
-
Run the following command on the Engine and on the hosts:
# yum install -y ceph-common
Appendix D: Configuring a Host for PCI Passthrough
This is one in a series of topics that show how to set up and configure SR-IOV on oVirt. For more information, see Setting Up and Configuring SR-IOV |
Enabling PCI passthrough allows a virtual machine to use a host device as if the device were directly attached to the virtual machine. To enable the PCI passthrough function, you must enable virtualization extensions and the IOMMU function. The following procedure requires you to reboot the host. If the host is attached to the Engine already, ensure you place the host into maintenance mode first.
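Before changing any settings, you can check from the host whether the CPU exposes the virtualization extensions (vmx for Intel, svm for AMD) and whether the kernel already reports an IOMMU; for example:
# grep -E 'svm|vmx' /proc/cpuinfo
# dmesg | grep -e DMAR -e IOMMU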
-
Ensure that the host hardware meets the requirements for PCI device passthrough and assignment. See PCI Device Requirements for more information.
-
Enable the virtualization extension and IOMMU extension in the BIOS. See Enabling Intel VT-x and AMD-V virtualization hardware extensions in BIOS in the Enterprise Linux Virtualization Deployment and Administration Guide for more information.
-
Enable the IOMMU flag in the kernel by selecting the Hostdev Passthrough & SR-IOV check box when adding the host to the Engine or by editing the grub configuration file manually.
-
To enable the IOMMU flag from the Administration Portal, see Adding Standard Hosts to the oVirt Engine and Kernel Settings Explained.
-
To edit the grub configuration file manually, see Enabling IOMMU Manually.
-
-
For GPU passthrough, you need to run additional configuration steps on both the host and the guest system. See GPU device passthrough: Assigning a host GPU to a single virtual machine in Setting up an NVIDIA GPU for a virtual machine in Red Hat Virtualization for more information.
-
Enable IOMMU by editing the grub configuration file.
If you are using IBM POWER8 hardware, skip this step as IOMMU is enabled by default.
-
For Intel, boot the machine, and append
intel_iommu=on
to the end of theGRUB_CMDLINE_LINUX
line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... intel_iommu=on
...
-
For AMD, boot the machine, and append
amd_iommu=on
to the end of theGRUB_CMDLINE_LINUX
line in the grub configuration file.
# vi /etc/default/grub
...
GRUB_CMDLINE_LINUX="nofb splash=quiet console=tty0 ... amd_iommu=on
...
If
intel_iommu=on
oramd_iommu=on
works, you can try addingiommu=pt
oramd_iommu=pt
. Thept
option only enables IOMMU for devices used in passthrough and provides better host performance. However, the option might not be supported on all hardware. Revert to previous option if thept
option doesn’t work for your host.If the passthrough fails because the hardware does not support interrupt remapping, you can consider enabling the
allow_unsafe_interrupts
option if the virtual machines are trusted. Theallow_unsafe_interrupts
is not enabled by default because enabling it potentially exposes the host to MSI attacks from virtual machines. To enable the option:
# vi /etc/modprobe.d
options vfio_iommu_type1 allow_unsafe_interrupts=1
-
-
Refresh the grub.cfg file and reboot the host for these changes to take effect:
# grub2-mkconfig -o /boot/grub2/grub.cfg
# reboot
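After the reboot, you can confirm that the kernel was booted with the IOMMU flag and that the IOMMU was initialized; for example:
# cat /proc/cmdline
# dmesg | grep -e DMAR -e IOMMU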
To enable SR-IOV and assign dedicated virtual NICs to virtual machines, see https://access.redhat.com/articles/2335291.
Appendix E: Removing the standalone oVirt Engine
The engine-cleanup
command removes all components of the oVirt Engine and automatically backs up the following:
-
the Grafana database, in
/var/lib/grafana/
-
the Engine database in
/var/lib/ovirt-engine/backups/
-
a compressed archive of the PKI keys and configuration in
/var/lib/ovirt-engine/backups/
Backup file names include the date and time.
You should use this procedure only on a standalone installation of the oVirt Engine. |
-
Run the following command on the Engine machine:
# engine-cleanup
-
The Engine service must be stopped before proceeding. You are prompted to confirm. Enter
OK
to proceed:
During execution engine service will be stopped (OK, Cancel) [OK]:
-
You are prompted to confirm that you want to remove all Engine components. Enter
OK
to remove all components, orCancel
to exitengine-cleanup
:
All the installed ovirt components are about to be removed, data will be lost (OK, Cancel) [Cancel]: OK
engine-cleanup
details the components that are removed, and the location of backup files. -
Remove the oVirt packages:
# dnf remove ovirt-engine* vdsm-bootstrap
Appendix F: Legal notice
Certain portions of this text first appeared in Red Hat Virtualization 4.4 Installing Red Hat Virtualization as a self-hosted engine using the Cockpit web interface. Copyright © 2020 Red Hat, Inc. Licensed under a Creative Commons Attribution-ShareAlike 3.0 Unported License.