- 1. Migration Overview
- 2. Installing the Self-hosted Engine Deployment Host
- 3. Preparing Storage for oVirt
- 4. Updating the oVirt Engine
- 5. Backing up the Original Engine
- 6. Restoring the Backup on a New Self-Hosted Engine
- 7. Enabling the oVirt Engine Repositories
- 8. Reinstalling an Existing Host as a Self-Hosted Engine Node
- Appendix A: Preventing kernel modules from loading automatically
- Appendix B: Legal notice
Migrating from a standalone Engine to a self-hosted engine
You can convert a standalone oVirt Engine to a self-hosted engine by backing up the standalone Engine and restoring it in a new self-hosted environment.
The difference between the two environment types is explained below:
Standalone Engine Architecture
The oVirt Engine runs on a physical server, or a virtual machine hosted in a separate virtualization environment. A standalone Engine is easier to deploy and manage, but requires an additional physical server. The Engine is only highly available when managed externally with a product such as Red Hat’s High Availability Add-On.
The minimum setup for a standalone Engine environment includes:
-
One oVirt Engine machine. The Engine is typically deployed on a physical server. However, it can also be deployed on a virtual machine, as long as that virtual machine is hosted in a separate environment. The Engine must run on Enterprise Linux 8.
-
A minimum of two hosts for virtual machine high availability. You can use Enterprise Linux hosts or oVirt Nodes. VDSM (the host agent) runs on all hosts to facilitate communication with the oVirt Engine.
-
One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts.
Self-Hosted Engine Architecture
The oVirt Engine runs as a virtual machine on self-hosted engine nodes (specialized hosts) in the same environment it manages. A self-hosted engine environment requires one less physical server, but requires more administrative overhead to deploy and manage. The Engine is highly available without external HA management.
The minimum setup of a self-hosted engine environment includes:
-
One oVirt Engine virtual machine that is hosted on the self-hosted engine nodes. The Engine Appliance is used to automate the installation of an Enterprise Linux 8 virtual machine, and the Engine on that virtual machine.
-
A minimum of two self-hosted engine nodes for virtual machine high availability. You can use Enterprise Linux hosts or oVirt Nodes. VDSM (the host agent) runs on all hosts to facilitate communication with the oVirt Engine. The HA services run on all self-hosted engine nodes to manage the high availability of the Engine virtual machine.
-
One storage service, which can be hosted locally or on a remote server, depending on the storage type used. The storage service must be accessible to all hosts.
1. Migration Overview
When you specify a backup file during self-hosted engine deployment, the Engine backup is restored on a new virtual machine, with a dedicated self-hosted engine storage domain. Deploying on a fresh host is highly recommended; if the host used for deployment existed in the backed up environment, it will be removed from the restored database to avoid conflicts in the new environment. If you deploy on a new host, you must assign a unique name to the host. Reusing the name of an existing host included in the backup can cause conflicts in the new environment.
At least two self-hosted engine nodes are required for the Engine virtual machine to be highly available. You can add new nodes, or convert existing hosts.
The migration involves the following key steps:
-
Install a new host to deploy the self-hosted engine on. You can use either host type:
-
Prepare storage for the self-hosted engine storage domain. You can use one of the following storage types:
-
Update the original Engine to the latest minor version before you back it up.
-
Enable the Engine repositories on the new Engine virtual machine.
-
Convert regular hosts to self-hosted engine nodes that can host the new Engine.
This procedure assumes that you have access and can make changes to the original Engine.
-
FQDNs prepared for your Engine and the deployment host. Forward and reverse lookup records must both be set in the DNS. The new Engine must have the same FQDN as the original Engine.
-
The management network (ovirtmgmt by default) must be configured as a VM network, so that it can manage the Engine virtual machine.
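As a quick sanity check of the DNS prerequisite, you can verify both lookup directions before starting; a minimal sketch, where engine.example.com and 10.1.1.10 are placeholders for your Engine FQDN and its IP address:

```
# dig +short engine.example.com     (forward lookup: should print the Engine's IP)
# dig +short -x 10.1.1.10           (reverse lookup: should print engine.example.com)
```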
2. Installing the Self-hosted Engine Deployment Host
A self-hosted engine can be deployed from an oVirt Node or an Enterprise Linux host.
If you plan to use bonded interfaces for high availability or VLANs to separate different types of traffic (for example, for storage or management connections), you should configure them on the host before beginning the self-hosted engine deployment. See Networking Recommendations in the Planning and Prerequisites Guide.
2.1. Installing oVirt Nodes
oVirt Node is a minimal operating system based on Enterprise Linux that is designed to provide a simple method for setting up a physical machine to act as a hypervisor in an oVirt environment. The minimal operating system contains only the packages required for the machine to act as a hypervisor, and features a Cockpit web interface for monitoring the host and performing administrative tasks. See Running Cockpit for the minimum browser requirements.
oVirt Node supports NIST 800-53 partitioning requirements to improve security. oVirt Node uses a NIST 800-53 partition layout by default.
The host must meet the minimum host requirements.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
-
Visit the oVirt Node Download page.
-
Choose the version of oVirt Node to download and click its Installation ISO link.
-
Write the oVirt Node Installation ISO disk image to a USB, CD, or DVD.
-
Start the machine on which you are installing oVirt Node, booting from the prepared installation media.
-
From the boot menu, select Install oVirt Node 4.5 and press Enter.
You can also press the Tab key to edit the kernel parameters. Kernel parameters must be separated by a space, and you can boot the system using the specified kernel parameters by pressing the Enter key. Press the Esc key to clear any changes to the kernel parameters and return to the boot menu.
-
Select a language, and click Continue.
-
Select a keyboard layout from the Keyboard Layout screen and click Done.
-
Select the device on which to install oVirt Node from the Installation Destination screen. Optionally, enable encryption. Click Done.
Use the Automatically configure partitioning option.
-
Select a time zone from the Time & Date screen and click Done.
-
Select a network from the Network & Host Name screen and click Configure… to configure the connection details.
To use the connection every time the system boots, select the Connect automatically with priority check box. For more information, see Configuring network and host name options in the Enterprise Linux 8 Installation Guide.
Enter a host name in the Host Name field, and click Done.
-
Optional: Configure Security Policy and Kdump. See Customizing your RHEL installation using the GUI in Performing a standard RHEL installation for Enterprise Linux 8 for more information on each of the sections in the Installation Summary screen.
-
Click Begin Installation.
-
Set a root password and, optionally, create an additional user while oVirt Node installs.
Do not create untrusted users on oVirt Node, as this can lead to exploitation of local security vulnerabilities.
-
Click Reboot to complete the installation.
When oVirt Node restarts, nodectl check performs a health check on the host and displays the result when you log in on the command line. The message node status: OK or node status: DEGRADED indicates the health status. Run nodectl check to get more information.
If necessary, you can prevent kernel modules from loading automatically.
2.2. Installing Enterprise Linux hosts
An Enterprise Linux host is based on a standard basic installation of Enterprise Linux 8.7 or later on a physical server, with the Enterprise Linux Server and oVirt repositories enabled.
The oVirt project also provides packages for Enterprise Linux 9.
For detailed installation instructions, see Performing a standard EL installation.
The host must meet the minimum host requirements.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
Virtualization must be enabled in your host’s BIOS settings. For information on changing your host’s BIOS settings, refer to your host’s hardware documentation.
Do not install third-party watchdogs on Enterprise Linux hosts. They can interfere with the watchdog daemon provided by VDSM.
Although the existing storage domains will be migrated from the standalone Engine, you must prepare additional storage for a self-hosted engine storage domain that is dedicated to the Engine virtual machine.
3. Preparing Storage for oVirt
You need to prepare storage to be used for storage domains in the new environment. A oVirt environment must have at least one data storage domain, but adding more is recommended.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
A data domain holds the virtual hard disks and OVF files of all the virtual machines and templates in a data center, and cannot be shared across data centers while active (but can be migrated between data centers). Data domains of multiple storage types can be added to the same data center, provided they are all shared, rather than local, domains.
You can use one of the following storage types:
-
Self-hosted engines must have an additional data domain with at least 74 GiB dedicated to the Engine virtual machine. The self-hosted engine installer creates this domain. Prepare the storage for this domain before installation.
Extending or otherwise changing the self-hosted engine storage domain after deployment of the self-hosted engine is not supported. Any such change might prevent the self-hosted engine from booting.
-
When using a block storage domain, either FCP or iSCSI, a single target LUN is the only supported setup for a self-hosted engine.
-
If you use iSCSI storage, the self-hosted engine storage domain must use a dedicated iSCSI target. Any additional storage domains must use a different iSCSI target.
-
It is strongly recommended to create additional data storage domains in the same data center as the self-hosted engine storage domain. If you deploy the self-hosted engine in a data center with only one active data storage domain, and that storage domain is corrupted, you cannot add new storage domains or remove the corrupted storage domain. You must redeploy the self-hosted engine.
3.1. Preparing NFS Storage
Set up NFS shares on your file storage or remote server to serve as storage domains on oVirt host systems. After exporting the shares on the remote storage and configuring them in the oVirt Engine, the shares will be automatically imported on the oVirt hosts.
For information on setting up, configuring, mounting and exporting NFS, see Managing file systems for Red Hat Enterprise Linux 8.
Specific system user accounts and system user groups are required by oVirt so the Engine can store data in the storage domains represented by the exported directories. The following procedure sets the permissions for one directory. You must repeat the chown
and chmod
steps for all of the directories you intend to use as storage domains in oVirt.
-
Install the nfs-utils package.
# dnf install nfs-utils -y
-
To check the enabled versions:
# cat /proc/fs/nfsd/versions
-
Enable the following services:
# systemctl enable nfs-server
# systemctl enable rpcbind
-
Create the group kvm:
# groupadd kvm -g 36
-
Create the user vdsm in the group kvm:
# useradd vdsm -u 36 -g kvm
-
Create the storage directory and modify the access rights.
# mkdir /storage
# chmod 0755 /storage
# chown 36:36 /storage/
-
Add the storage directory to /etc/exports with the relevant permissions.
# vi /etc/exports
# cat /etc/exports
/storage *(rw)
-
Restart the following services:
# systemctl restart rpcbind
# systemctl restart nfs-server
-
To see which exports are available for a specific IP address:
# exportfs
/nfs_server/srv    10.46.11.3/24
/nfs_server        <world>
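If you edit /etc/exports while the NFS services are already running, you can re-export the shares without restarting the services:

```
# exportfs -ra
```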
3.2. Preparing iSCSI Storage
oVirt supports iSCSI storage, which is a storage domain created from a volume group made up of LUNs. Volume groups and LUNs cannot be attached to more than one storage domain at a time.
For information on setting up and configuring iSCSI storage, see Configuring an iSCSI target in Managing storage devices for Red Hat Enterprise Linux 8.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create the filter.
oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection.
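A minimal sketch of such a drop-in file, where boot_LUN_wwid is a placeholder for the WWID of your boot LUN (you can query it with multipath -ll):

```
# cat /etc/multipath/conf.d/host.conf
multipaths {
    multipath {
        wwid boot_LUN_wwid
        no_path_retry queue
    }
}
```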
3.3. Preparing FCP Storage
oVirt supports SAN storage by creating a storage domain from a volume group made of pre-existing LUNs. Neither volume groups nor LUNs can be attached to more than one storage domain at a time.
oVirt system administrators need a working knowledge of Storage Area Networks (SAN) concepts. SAN usually uses Fibre Channel Protocol (FCP) for traffic between hosts and shared external storage. For this reason, SAN may occasionally be referred to as FCP storage.
For information on setting up and configuring FCP or multipathing on Enterprise Linux, see the Storage Administration Guide and DM Multipath Guide.
If you are using block storage and intend to deploy virtual machines on raw devices or direct LUNs and manage them with the Logical Volume Manager (LVM), you must create a filter to hide guest logical volumes. This will prevent guest logical volumes from being activated when the host is booted, a situation that could lead to stale logical volumes and cause data corruption. Use the vdsm-tool config-lvm-filter command to create the filter.
oVirt currently does not support block storage with a block size of 4K. You must configure block storage in legacy (512b block) mode.
If your host is booting from SAN storage and loses connectivity to the storage, the storage file systems become read-only and remain in this state after connectivity is restored. To prevent this situation, add a drop-in multipath configuration file on the root file system of the SAN for the boot LUN to ensure that it is queued when there is a connection.
3.4. Preparing Gluster Storage
For information on setting up and configuring Gluster Storage, see the Gluster Storage Installation Guide.
3.5. Customizing Multipath Configurations for SAN Vendors
If your oVirt environment is configured to use multipath connections with SANs, you can customize the multipath configuration settings to meet requirements specified by your storage vendor. These customizations can override both the default settings and settings that are specified in /etc/multipath.conf.
To override the multipath settings, do not customize /etc/multipath.conf. Because VDSM owns /etc/multipath.conf, installing or upgrading VDSM or oVirt can overwrite this file, including any customizations it contains. This overwriting can cause severe storage failures.
Instead, create a file in the /etc/multipath/conf.d directory that contains the settings you want to customize or override.
VDSM executes the files in /etc/multipath/conf.d in alphabetical order. So, to control the order of execution, begin the file name with a number that makes it come last. For example, /etc/multipath/conf.d/90-myfile.conf.
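For example, a vendor override file might look like the following; the vendor, product, and setting values here are illustrative assumptions, not recommendations:

```
# cat /etc/multipath/conf.d/90-myvendor.conf
devices {
    device {
        vendor  "MYVENDOR"
        product "MYARRAY"
        no_path_retry 8
    }
}
```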
To avoid causing severe storage failures, follow these guidelines:
-
Do not modify /etc/multipath.conf. If the file contains user modifications, and the file is overwritten, it can cause unexpected storage problems.
Not following these guidelines can cause catastrophic storage errors.
-
VDSM is configured to use the multipath module. To verify this, enter:
# vdsm-tool is-configured --module multipath
-
Create a new configuration file in the /etc/multipath/conf.d directory.
-
Copy the individual setting you want to override from /etc/multipath.conf to the new configuration file in /etc/multipath/conf.d/<my_device>.conf. Remove any comment marks, edit the setting values, and save your changes.
-
Apply the new configuration settings by entering:
# systemctl reload multipathd
Do not restart the multipathd service. Doing so generates errors in the VDSM logs.
-
Test that the new configuration performs as expected on a non-production cluster in a variety of failure scenarios. For example, disable all of the storage connections.
-
Enable one connection at a time and verify that doing so makes the storage domain reachable.
3.6. Recommended Settings for Multipath.conf
Do not override the following settings:
- user_friendly_names no
Device names must be consistent across all hypervisors. For example, /dev/mapper/{WWID}. The default value of this setting, no, prevents the assignment of arbitrary and inconsistent device names such as /dev/mapper/mpath{N} on various hypervisors, which can lead to unpredictable system behavior.
Do not change this setting to user_friendly_names yes. User-friendly names are likely to cause unpredictable system behavior or failures, and are not supported.
- find_multipaths no
This setting controls whether oVirt Node tries to access devices through multipath only if more than one path is available. The current value, no, allows oVirt to access devices through multipath even if only one path is available.
Do not override this setting.
Avoid overriding the following settings unless required by the storage system vendor:
no_path_retry 4
This setting controls the number of polling attempts to retry when no paths are available. Before oVirt version 4.2, the value of no_path_retry was fail because QEMU had trouble with the I/O queuing when no paths were available. The fail value made it fail quickly and paused the virtual machine. oVirt version 4.2 changed this value to 4 so when multipathd detects the last path has failed, it checks all of the paths four more times. Assuming the default 5-second polling interval, checking the paths takes 20 seconds. If no path is up, multipathd tells the kernel to stop queuing and fails all outstanding and future I/O until a path is restored. When a path is restored, the 20-second delay is reset for the next time all paths fail. For more details, see the commit that changed this setting.
polling_interval 5
This setting determines the number of seconds between polling attempts to detect whether a path is open or has failed. Unless the vendor provides a clear reason for increasing the value, keep the VDSM-generated default so the system responds to path failures sooner.
Before backing up the Engine, ensure it is updated to the latest minor version. The Engine version in the backup file must match the version of the new Engine.
4. Updating the oVirt Engine
-
The data center compatibility level must be set to the latest version to ensure compatibility with the updated storage version.
-
On the Engine machine, check if updated packages are available:
# engine-upgrade-check
-
Update the setup packages:
# dnf update ovirt\*setup\*
-
Update the oVirt Engine with the engine-setup script. The engine-setup script prompts you with some configuration questions, then stops the ovirt-engine service, downloads and installs the updated packages, backs up and updates the database, performs post-installation configuration, and starts the ovirt-engine service.
# engine-setup
When the script completes successfully, the following message appears:
Execution of setup completed successfully
The engine-setup script is also used during the oVirt Engine installation process, and it stores the configuration values supplied. During an update, the stored values are displayed when previewing the configuration, and might not be up to date if engine-config was used to update configuration after installation. For example, if engine-config was used to update SANWipeAfterDelete to true after installation, engine-setup will output "Default SAN wipe after delete: False" in the configuration preview. However, the updated values will not be overwritten by engine-setup.
The update process might take some time. Do not stop the process before it completes.
-
Update the base operating system and any optional packages installed on the Engine:
# yum update --nobest
If you encounter a required Ansible package conflict during the update, see Cannot perform yum update on my RHV manager (ansible conflict).
If any kernel packages were updated, reboot the machine to complete the update.
5. Backing up the Original Engine
Back up the original Engine using the engine-backup
command, and copy the backup file to a separate location so that it can be accessed at any point during the process.
For more information about engine-backup --mode=backup
options, see Backing Up and Restoring the oVirt Engine in the Administration Guide.
-
Log in to the original Engine and stop the ovirt-engine service:
# systemctl stop ovirt-engine
# systemctl disable ovirt-engine
Though stopping the original Engine from running is not obligatory, it is recommended as it ensures no changes are made to the environment after the backup is created. Additionally, it prevents the original Engine and the new Engine from simultaneously managing existing resources.
-
Run the engine-backup command, specifying the name of the backup file to create, and the name of the log file to create to store the backup log:
# engine-backup --mode=backup --file=file_name --log=log_file_name
-
Copy the files to an external server. In the following example, storage.example.com is the fully qualified domain name of a network storage server that will store the backup until it is needed, and /backup/ is any designated folder or path.
# scp -p file_name log_file_name storage.example.com:/backup/
After backing up the Engine, deploy a new self-hosted engine and restore the backup on the new virtual machine.
6. Restoring the Backup on a New Self-Hosted Engine
Run the hosted-engine
script on a new host, and use the --restore-from-file=path/to/file_name
option to restore the Engine backup during the deployment.
If you are using iSCSI storage, and your iSCSI target filters connections according to the initiator’s ACL, the deployment may fail because the new host’s initiator IQN is not yet authorized on the target.
Note that the IQN can be updated on the host side (iSCSI initiator), or on the storage side (iSCSI target).
-
Copy the backup file to the new host. In the following example, host.example.com is the FQDN for the host, and /backup/ is any designated folder or path.
# scp -p file_name host.example.com:/backup/
-
Log in to the new host.
-
If you are deploying on oVirt Node, ovirt-hosted-engine-setup is already installed, so skip this step. If you are deploying on Enterprise Linux, install the ovirt-hosted-engine-setup package:
# dnf install ovirt-hosted-engine-setup
-
Use the tmux window manager to run the script to avoid losing the session in case of network or terminal disruption.
Install and run tmux:
# dnf -y install tmux
# tmux
-
Run the hosted-engine script, specifying the path to the backup file:
# hosted-engine --deploy --restore-from-file=backup/file_name
To escape the script at any time, use CTRL+D to abort deployment.
-
Select Yes to begin the deployment.
-
Configure the network. The script detects possible NICs to use as a management bridge for the environment.
-
If you want to use a custom appliance for the virtual machine installation, enter the path to the OVA archive. Otherwise, leave this field empty to use the Engine Appliance.
-
Enter the root password for the Engine.
-
Enter an SSH public key that will allow you to log in to the Engine as the root user, and specify whether to enable SSH access for the root user.
-
Enter the virtual machine’s CPU and memory configuration.
The virtual machine must have the same amount of RAM as the physical machine from which the Engine is being migrated. If you must migrate to a virtual machine that has less RAM than the physical machine from which the Engine is migrated, see Configuring the amount of RAM in Red Hat Virtualization Hosted Engine.
-
Enter a MAC address for the Engine virtual machine, or accept a randomly generated one. If you want to provide the Engine virtual machine with an IP address via DHCP, ensure that you have a valid DHCP reservation for this MAC address. The deployment script will not configure the DHCP server for you.
-
Enter the virtual machine’s networking details. If you specify Static, enter the IP address of the Engine.
The static IP address must belong to the same subnet as the host. For example, if the host is in 10.1.1.0/24, the Engine virtual machine’s IP must be in the same subnet range (10.1.1.1-254/24).
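The subnet rule above can be checked mechanically. A minimal sketch for /24 networks only; the two addresses are illustrative assumptions, not values from the procedure:

```shell
# Compare the first three octets (the /24 network prefix) of two IPv4 addresses.
host_ip=10.1.1.5
engine_ip=10.1.1.10
prefix() { echo "$1" | cut -d. -f1-3; }
if [ "$(prefix "$host_ip")" = "$(prefix "$engine_ip")" ]; then
    echo "same /24 subnet"
else
    echo "different subnets"
fi
```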
-
Specify whether to add entries for the Engine virtual machine and the base host to the virtual machine’s /etc/hosts file. You must ensure that the host names are resolvable.
-
Provide the name and TCP port number of the SMTP server, the email address used to send email notifications, and a comma-separated list of email addresses to receive these notifications:
-
Enter a password for the admin@internal user to access the Administration Portal.
The script creates the virtual machine. This can take some time if the Engine Appliance needs to be installed.
If the host becomes non operational, due to a missing required network or a similar problem, the deployment pauses and a message such as the following is displayed:
[ INFO ] You can now connect to https://<host name>:6900/ovirt-engine/ and check the status of this host and eventually remediate it, please continue only when the host is listed as 'up'
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : include_tasks]
[ INFO ] ok: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Create temporary lock file]
[ INFO ] changed: [localhost]
[ INFO ] TASK [ovirt.ovirt.hosted_engine_setup : Pause execution until /tmp/ansible.<random>_he_setup_lock is removed, delete it once ready to proceed]
Pausing the process allows you to:
-
Connect to the Administration Portal using the provided URL.
-
Assess the situation, find out why the host is non operational, and fix whatever is needed. For example, if this deployment was restored from a backup, and the backup included required networks for the host cluster, configure the networks, attaching the relevant host NICs to these networks.
-
Once everything looks OK, and the host status is Up, remove the lock file presented in the message above. The deployment continues.
-
-
Select the type of storage to use:
-
For NFS, enter the version, full address and path to the storage, and any mount options.
-
For iSCSI, enter the portal details and select a target and LUN from the auto-detected lists. You can only select one iSCSI target during the deployment, but multipathing is supported to connect all portals of the same portal group.
To specify more than one iSCSI target, you must enable multipathing before deploying the self-hosted engine. See Enterprise Linux DM Multipath for details. There is also a Multipath Helper tool that generates a script to install and configure multipath with different options.
-
For Gluster storage, enter the full address and path to the storage, and any mount options.
Only replica 1 and replica 3 Gluster storage are supported. Ensure you configure the volume as follows:
gluster volume set VOLUME_NAME group virt
gluster volume set VOLUME_NAME performance.strict-o-direct on
gluster volume set VOLUME_NAME network.remote-dio off
gluster volume set VOLUME_NAME storage.owner-uid 36
gluster volume set VOLUME_NAME storage.owner-gid 36
gluster volume set VOLUME_NAME network.ping-timeout 30
-
For Fibre Channel, select a LUN from the auto-detected list. The host bus adapters must be configured and connected, and the LUN must not contain any existing data. To reuse an existing LUN, see Reusing LUNs in the Administration Guide.
-
-
Enter the Engine disk size.
The script continues until the deployment is complete.
-
The deployment process changes the Engine’s SSH keys. To allow client machines to access the new Engine without SSH errors, remove the original Engine’s entry from the
.ssh/known_hosts
file on any client machines that accessed the original Engine.
When the deployment is complete, log in to the new Engine virtual machine and enable the required repositories.
7. Enabling the oVirt Engine Repositories
Ensure the correct repositories are enabled.
For oVirt 4.5: If you are going to install on RHEL or derivatives please follow Installing on RHEL or derivatives first.
# dnf install -y centos-release-ovirt45
As discussed on the oVirt users mailing list, we suggest that the user community use the oVirt master snapshot repositories so that the latest fixes for platform regressions are promptly available.
For oVirt 4.4:
Common procedure valid for both 4.4 and 4.5 on Enterprise Linux 8 only:
You can check which repositories are currently enabled by running dnf repolist.
-
Enable the javapackages-tools module.
# dnf module -y enable javapackages-tools
-
Enable the pki-deps module.
# dnf module -y enable pki-deps
-
Enable version 12 of the postgresql module.
# dnf module -y enable postgresql:12
-
Enable version 2.3 of the mod_auth_openidc module.
# dnf module -y enable mod_auth_openidc:2.3
-
Enable version 14 of the nodejs module:
# dnf module -y enable nodejs:14
-
Synchronize installed packages to update them to the latest available versions.
# dnf distro-sync --nobest
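You can confirm which module streams ended up enabled before proceeding, for example:

```
# dnf module list --enabled postgresql mod_auth_openidc nodejs
```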
For information on modules and module streams, see the following sections in Installing, managing, and removing user-space components.
The oVirt Engine has been migrated to a self-hosted engine setup. The Engine is now operating on a virtual machine on the new self-hosted engine node.
The hosts will be running in the new environment, but cannot host the Engine virtual machine. You can convert some or all of these hosts to self-hosted engine nodes.
8. Reinstalling an Existing Host as a Self-Hosted Engine Node
You can convert an existing, standard host in a self-hosted engine environment to a self-hosted engine node capable of hosting the Engine virtual machine.
When installing or reinstalling the host’s operating system, oVirt strongly recommends that you first detach any existing non-OS storage that is attached to the host to avoid accidental initialization of these disks, and with that, potential data loss.
-
Click
and select the host. -
Click
and OK. -
Click
. -
Click the Hosted Engine tab and select DEPLOY from the drop-down list.
-
Click OK.
The host is reinstalled with self-hosted engine configuration, and is flagged with a crown icon in the Administration Portal.
After reinstalling the hosts as self-hosted engine nodes, you can check the status of the new environment by running the following command on one of the nodes:
# hosted-engine --vm-status
If the new environment is running without issue, you can decommission the original Engine machine.
Appendix A: Preventing kernel modules from loading automatically
You can prevent a kernel module from being loaded automatically, whether the module is loaded directly, loaded as a dependency from another module, or during the boot process.
-
The module name must be added to a configuration file for the modprobe utility. This file must reside in the configuration directory /etc/modprobe.d.
For more information on this configuration directory, see the man page modprobe.d.
-
Ensure the module is not configured to get loaded in any of the following:
-
/etc/modprobe.conf
-
/etc/modprobe.d/*
-
/etc/rc.modules
-
/etc/sysconfig/modules/*
# modprobe --showconfig <_configuration_file_name_>
-
-
If the module appears in the output, ensure it is ignored and not loaded:
# modprobe --ignore-install <_module_name_>
-
Unload the module from the running system, if it is loaded:
# modprobe -r <_module_name_>
-
Prevent the module from being loaded directly by adding the blacklist line to a configuration file specific to the system - for example /etc/modprobe.d/local-dontload.conf:
# echo "blacklist <_module_name_>" >> /etc/modprobe.d/local-dontload.conf
This step does not prevent a module from loading if it is a required or an optional dependency of another module.
-
Prevent optional modules from loading on demand:
# echo "install <_module_name_> /bin/false" >> /etc/modprobe.d/local-dontload.conf
If the excluded module is required for other hardware, excluding it might cause unexpected side effects.
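After the two echo commands above, the drop-in file contains both directives (with <_module_name_> replaced by the actual module name):

```
# cat /etc/modprobe.d/local-dontload.conf
blacklist <_module_name_>
install <_module_name_> /bin/false
```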
-
Make a backup copy of your initramfs:
# cp /boot/initramfs-$(uname -r).img /boot/initramfs-$(uname -r).img.$(date +%m-%d-%H%M%S).bak
-
If the kernel module is part of the initramfs, rebuild your initial ramdisk image, omitting the module:
# dracut --omit-drivers <_module_name_> -f
-
Get the current kernel command line parameters:
# grub2-editenv - list | grep kernelopts
-
Append <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_> to the generated output:
# grub2-editenv - set kernelopts="<existing kernel options> <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>"
For example:
# grub2-editenv - set kernelopts="root=/dev/mapper/rhel_example-root ro crashkernel=auto resume=/dev/mapper/rhel_example-swap rd.lvm.lv=rhel_example/root rd.lvm.lv=rhel_example/swap <_module_name_>.blacklist=1 rd.driver.blacklist=<_module_name_>"
-
Make a backup copy of the kdump initramfs:
# cp /boot/initramfs-$(uname -r)kdump.img /boot/initramfs-$(uname -r)kdump.img.$(date +%m-%d-%H%M%S).bak
-
Append rd.driver.blacklist=<_module_name_> to the KDUMP_COMMANDLINE_APPEND setting in /etc/sysconfig/kdump to omit it from the kdump initramfs:
# sed -i '/^KDUMP_COMMANDLINE_APPEND=/s/"$/ rd.driver.blacklist=module_name"/' /etc/sysconfig/kdump
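The sed command appends the option just inside the closing quote; afterwards the line in /etc/sysconfig/kdump looks similar to the following (existing options elided, module_name is a placeholder):

```
KDUMP_COMMANDLINE_APPEND="... rd.driver.blacklist=module_name"
```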
-
Restart the kdump service to pick up the changes to the kdump initrd:
# kdumpctl restart
-
Rebuild the kdump initial ramdisk image:
# mkdumprd -f /boot/initramfs-$(uname -r)kdump.img
-
Reboot the system.
A.1. Removing a module temporarily
You can remove a module temporarily.
-
Run modprobe to remove any currently-loaded module:
# modprobe -r <module name>
-
If the module cannot be unloaded, a process or another module might still be using it. If so, terminate the process and run the modprobe command above again to unload the module.
Appendix B: Legal notice
Certain portions of this text first appeared in Red Hat Virtualization 4.4 Migrating from a standalone Manager to a self-hosted engine. Copyright © 2022 Red Hat, Inc. Licensed under a Creative Commons Attribution-ShareAlike 4.0 Unported License.