"Fossies" - the Fresh Open Source Software Archive

Member "salt-3002.2/salt/modules/virt.py" (18 Nov 2020, 239848 Bytes) of package /linux/misc/salt-3002.2.tar.gz:



    1 """
    2 Work with virtual machines managed by libvirt
    3 
    4 :depends:
    5     * libvirt Python module
    6     * libvirt client
    7     * qemu-img
    8     * grep
    9 
   10 Connection
   11 ==========
   12 
    13 The connection to the virtualization host can either be set up in the minion configuration
    14 or pillar data, or overridden for each individual call.
   15 
   16 By default, the libvirt connection URL will be guessed: the first available libvirt
   17 hypervisor driver will be used. This can be overridden like this:
   18 
   19 .. code-block:: yaml
   20 
   21     virt:
   22       connection:
   23         uri: lxc:///
   24 
    25 If the connection requires authentication, as for ESXi, this can be defined in the
    26 minion pillar data like this:
   27 
   28 .. code-block:: yaml
   29 
   30     virt:
   31       connection:
   32         uri: esx://10.1.1.101/?no_verify=1&auto_answer=1
   33         auth:
   34           username: user
   35           password: secret
   36 
   37 Connecting with SSH protocol
   38 ----------------------------
   39 
    40 Libvirt can connect to remote hosts over SSH using one of the ``ssh``, ``libssh`` or
    41 ``libssh2`` transports. Note that ``libssh2`` is likely to fail as it doesn't read the
    42 ``known_hosts`` file. Libvirt may also have been built without ``libssh`` or ``libssh2``
    43 support.
   44 
    45 To use the SSH transport, set up an SSH agent on the minion with a key authorized on
    46 the remote libvirt machine.
   47 
   48 Per call connection setup
   49 -------------------------
   50 
   51 .. versionadded:: 2019.2.0
   52 
   53 All the calls requiring the libvirt connection configuration as mentioned above can
   54 override this configuration using ``connection``, ``username`` and ``password`` parameters.
   55 
   56 This means that the following will list the domains on the local LXC libvirt driver,
   57 whatever the ``virt:connection`` is.
   58 
   59 .. code-block:: bash
   60 
   61     salt 'hypervisor' virt.list_domains connection=lxc:///
   62 
   63 The calls not using the libvirt connection setup are:
   64 
   65 - ``seed_non_shared_migrate``
   66 - ``virt_type``
   67 - ``is_*hyper``
   68 - all migration functions
   69 
   70 - `libvirt ESX URI format <http://libvirt.org/drvesx.html#uriformat>`_
   71 - `libvirt URI format <http://libvirt.org/uri.html#URI_config>`_
   72 - `libvirt authentication configuration <http://libvirt.org/auth.html#Auth_client_config>`_
   73 
   74 Units
   75 ==========
   76 .. _virt-units:
   77 .. rubric:: Units specification
   78 .. versionadded:: 3002
   79 
    80 The string should contain a number optionally followed
    81 by a unit. The number may have a decimal fraction. If
    82 no unit is given, MiB is assumed by default.
    83 Units can optionally be given in IEC style (such as MiB),
    84 although the standard single-letter style (such as M) is
    85 more convenient.
   86 
   87 Valid units include:
   88 
   89 ========== =====    ==========  ==========  ======
   90 Standard   IEC      Standard    IEC
   91   Unit     Unit     Name        Name        Factor
   92 ========== =====    ==========  ==========  ======
   93     B               Bytes                   1
   94     K       KiB     Kilobytes   Kibibytes   2**10
   95     M       MiB     Megabytes   Mebibytes   2**20
   96     G       GiB     Gigabytes   Gibibytes   2**30
   97     T       TiB     Terabytes   Tebibytes   2**40
   98     P       PiB     Petabytes   Pebibytes   2**50
   99     E       EiB     Exabytes    Exbibytes   2**60
  100     Z       ZiB     Zettabytes  Zebibytes   2**70
  101     Y       YiB     Yottabytes  Yobibytes   2**80
  102 ========== =====    ==========  ==========  ======
  103 
  104 Additional decimal based units:
  105 
  106 ======  =======
  107 Unit     Factor
  108 ======  =======
  109 KB      10**3
  110 MB      10**6
  111 GB      10**9
  112 TB      10**12
  113 PB      10**15
  114 EB      10**18
  115 ZB      10**21
  116 YB      10**24
  117 ======  =======
  118 """
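As an aside (not part of this module), the unit rules described above can be sketched in a few lines. The helper name ``to_bytes`` is hypothetical; the factors come straight from the two tables in the docstring (binary for single-letter and IEC styles, decimal for ``KB``/``MB``-style units, MiB when no unit is given):

```python
import re

# Binary factors for the single-letter and IEC styles from the table above.
BINARY = {"B": 1, "K": 2**10, "M": 2**20, "G": 2**30, "T": 2**40,
          "P": 2**50, "E": 2**60, "Z": 2**70, "Y": 2**80}

def to_bytes(value):
    """Parse strings like '2.5G', '512MiB', '100KB' or a bare number (MiB)."""
    match = re.match(r"^\s*([\d.]+)\s*([A-Za-z]*)\s*$", str(value))
    if not match:
        raise ValueError("invalid size: {}".format(value))
    number, unit = float(match.group(1)), match.group(2)
    if not unit:
        return int(number * 2**20)  # no unit given: MiB by default
    if len(unit) == 2 and unit.endswith("B"):
        # decimal style: KB, MB, GB, ...
        return int(number * 10 ** (3 * ("KMGTPEZY".index(unit[0]) + 1)))
    # single-letter (M) and IEC (MiB) styles are both binary
    return int(number * BINARY[unit[0]])
```

This is only an illustration of the documented rules, not the parser the module actually uses internally.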
   119 # Special thanks to Michael DeHaan: many of the concepts, and a few structures,
   120 # from his virt func module have been used here
  121 
  122 
  123 import base64
  124 import collections
  125 import copy
  126 import datetime
  127 import logging
  128 import os
  129 import re
  130 import shutil
  131 import string  # pylint: disable=deprecated-module
  132 import subprocess
  133 import sys
  134 import time
  135 from xml.etree import ElementTree
  136 from xml.sax import saxutils
  137 
  138 import jinja2.exceptions
  139 import salt.utils.data
  140 import salt.utils.files
  141 import salt.utils.json
  142 import salt.utils.path
  143 import salt.utils.stringutils
  144 import salt.utils.templates
  145 import salt.utils.virt
  146 import salt.utils.xmlutil as xmlutil
  147 import salt.utils.yaml
  148 from salt._compat import ipaddress
  149 from salt.exceptions import CommandExecutionError, SaltInvocationError
  150 from salt.ext.six.moves import range  # pylint: disable=import-error,redefined-builtin
  151 from salt.ext.six.moves.urllib.parse import urlparse, urlunparse
  152 
  153 try:
  154     import libvirt  # pylint: disable=import-error
  155 
  156     # pylint: disable=no-name-in-module
  157     from libvirt import libvirtError
  158 
  159     # pylint: enable=no-name-in-module
  160 
  161     HAS_LIBVIRT = True
  162 except ImportError:
  163     HAS_LIBVIRT = False
  164 
  165 
  166 log = logging.getLogger(__name__)
  167 
  168 # Set up template environment
  169 JINJA = jinja2.Environment(
  170     loader=jinja2.FileSystemLoader(
  171         os.path.join(salt.utils.templates.TEMPLATE_DIRNAME, "virt")
  172     )
  173 )
  174 
  175 CACHE_DIR = "/var/lib/libvirt/saltinst"
  176 
  177 VIRT_STATE_NAME_MAP = {
  178     0: "running",
  179     1: "running",
  180     2: "running",
  181     3: "paused",
  182     4: "shutdown",
  183     5: "shutdown",
  184     6: "crashed",
  185 }
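The map above collapses libvirt's numeric domain state codes (as returned by ``dom.info()[0]``) into a few coarse names. A small sketch of how such a lookup behaves, with ``state_name`` being a hypothetical helper (the module itself does the equivalent ``dict.get`` inline with an ``"unknown"`` fallback):

```python
# A copy of the mapping above; the comments name the libvirt constants.
VIRT_STATE_NAME_MAP = {
    0: "running",   # VIR_DOMAIN_NOSTATE
    1: "running",   # VIR_DOMAIN_RUNNING
    2: "running",   # VIR_DOMAIN_BLOCKED
    3: "paused",    # VIR_DOMAIN_PAUSED
    4: "shutdown",  # VIR_DOMAIN_SHUTDOWN
    5: "shutdown",  # VIR_DOMAIN_SHUTOFF
    6: "crashed",   # VIR_DOMAIN_CRASHED
}

def state_name(code):
    # Codes not in the map (e.g. 7, VIR_DOMAIN_PMSUSPENDED) fall back to "unknown".
    return VIRT_STATE_NAME_MAP.get(code, "unknown")
```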
  186 
  187 
  188 def __virtual__():
  189     if not HAS_LIBVIRT:
  190         return (False, "Unable to locate or import python libvirt library.")
  191     return "virt"
  192 
  193 
  194 def __get_request_auth(username, password):
  195     """
  196     Get libvirt.openAuth callback with username, password values overriding
  197     the configuration ones.
  198     """
  199 
  200     # pylint: disable=unused-argument
  201     def __request_auth(credentials, user_data):
  202         """Callback method passed to libvirt.openAuth().
  203 
  204         The credentials argument is a list of credentials that libvirt
  205         would like to request. An element of this list is a list containing
  206         5 items (4 inputs, 1 output):
  207           - the credential type, e.g. libvirt.VIR_CRED_AUTHNAME
  208           - a prompt to be displayed to the user
  209           - a challenge
  210           - a default result for the request
  211           - a place to store the actual result for the request
  212 
  213         The user_data argument is currently not set in the openAuth call.
  214         """
  215         for credential in credentials:
  216             if credential[0] == libvirt.VIR_CRED_AUTHNAME:
  217                 credential[4] = (
  218                     username
  219                     if username
  220                     else __salt__["config.get"](
  221                         "virt:connection:auth:username", credential[3]
  222                     )
  223                 )
  224             elif credential[0] == libvirt.VIR_CRED_NOECHOPROMPT:
  225                 credential[4] = (
  226                     password
  227                     if password
  228                     else __salt__["config.get"](
  229                         "virt:connection:auth:password", credential[3]
  230                     )
  231                 )
  232             else:
  233                 log.info("Unhandled credential type: %s", credential[0])
  234         return 0
   235     return __request_auth
  236 
  237 def __get_conn(**kwargs):
  238     """
  239     Detects what type of dom this node is and attempts to connect to the
  240     correct hypervisor via libvirt.
  241 
  242     :param connection: libvirt connection URI, overriding defaults
  243     :param username: username to connect with, overriding defaults
  244     :param password: password to connect with, overriding defaults
  245 
  246     """
  247     # This has only been tested on kvm and xen, it needs to be expanded to
  248     # support all vm layers supported by libvirt
  249     # Connection string works on bhyve, but auth is not tested.
  250 
  251     username = kwargs.get("username", None)
  252     password = kwargs.get("password", None)
  253     conn_str = kwargs.get("connection", None)
  254     if not conn_str:
  255         conn_str = __salt__["config.get"]("virt:connection:uri", conn_str)
  256 
  257     try:
  258         auth_types = [
  259             libvirt.VIR_CRED_AUTHNAME,
  260             libvirt.VIR_CRED_NOECHOPROMPT,
  261             libvirt.VIR_CRED_ECHOPROMPT,
  262             libvirt.VIR_CRED_PASSPHRASE,
  263             libvirt.VIR_CRED_EXTERNAL,
  264         ]
  265         conn = libvirt.openAuth(
  266             conn_str, [auth_types, __get_request_auth(username, password), None], 0
  267         )
  268     except Exception:  # pylint: disable=broad-except
  269         raise CommandExecutionError(
  270             "Sorry, {} failed to open a connection to the hypervisor "
  271             "software at {}".format(__grains__["fqdn"], conn_str)
  272         )
  273     return conn
  274 
  275 
  276 def _get_domain(conn, *vms, **kwargs):
  277     """
  278     Return a domain object for the named VM or return domain object for all VMs.
  279 
   280     :param conn: libvirt connection object
  281     :param vms: list of domain names to look for
  282     :param iterable: True to return an array in all cases
  283     """
  284     ret = list()
  285     lookup_vms = list()
  286 
  287     all_vms = []
  288     if kwargs.get("active", True):
  289         for id_ in conn.listDomainsID():
  290             all_vms.append(conn.lookupByID(id_).name())
  291 
  292     if kwargs.get("inactive", True):
  293         for id_ in conn.listDefinedDomains():
  294             all_vms.append(id_)
  295 
  296     if vms and not all_vms:
  297         raise CommandExecutionError("No virtual machines found.")
  298 
  299     if vms:
  300         for name in vms:
  301             if name not in all_vms:
  302                 raise CommandExecutionError(
  303                     'The VM "{name}" is not present'.format(name=name)
  304                 )
  305             else:
  306                 lookup_vms.append(name)
  307     else:
  308         lookup_vms = list(all_vms)
  309 
  310     for name in lookup_vms:
  311         ret.append(conn.lookupByName(name))
  312 
   313     return ret[0] if len(ret) == 1 and not kwargs.get("iterable") else ret
  314 
  315 
  316 def _parse_qemu_img_info(info):
  317     """
  318     Parse qemu-img info JSON output into disk infos dictionary
  319     """
  320     raw_infos = salt.utils.json.loads(info)
  321     disks = []
  322     for disk_infos in raw_infos:
  323         disk = {
  324             "file": disk_infos["filename"],
  325             "file format": disk_infos["format"],
  326             "disk size": disk_infos["actual-size"],
  327             "virtual size": disk_infos["virtual-size"],
  328             "cluster size": disk_infos["cluster-size"]
  329             if "cluster-size" in disk_infos
  330             else None,
  331         }
  332 
  333         if "full-backing-filename" in disk_infos.keys():
   334             disk["backing file"] = disk_infos["full-backing-filename"]
  335 
  336         if "snapshots" in disk_infos.keys():
  337             disk["snapshots"] = [
  338                 {
  339                     "id": snapshot["id"],
  340                     "tag": snapshot["name"],
  341                     "vmsize": snapshot["vm-state-size"],
  342                     "date": datetime.datetime.fromtimestamp(
  343                         float(
  344                             "{}.{}".format(snapshot["date-sec"], snapshot["date-nsec"])
  345                         )
  346                     ).isoformat(),
  347                     "vmclock": datetime.datetime.utcfromtimestamp(
  348                         float(
  349                             "{}.{}".format(
  350                                 snapshot["vm-clock-sec"], snapshot["vm-clock-nsec"]
  351                             )
  352                         )
  353                     )
  354                     .time()
  355                     .isoformat(),
  356                 }
  357                 for snapshot in disk_infos["snapshots"]
  358             ]
  359         disks.append(disk)
  360 
  361     for disk in disks:
  362         if "backing file" in disk.keys():
  363             candidates = [
  364                 info
  365                 for info in disks
  366                 if "file" in info.keys() and info["file"] == disk["backing file"]
  367             ]
  368             if candidates:
  369                 disk["backing file"] = candidates[0]
  370 
  371     return disks[0]
  372 
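To make the shape of the input concrete: ``qemu-img info --output json --backing-chain`` emits a JSON array with one object per image in the backing chain, and the parser above nests each entry under its child's ``"backing file"`` key. A self-contained miniature of that transformation, using stdlib ``json`` and made-up sample values:

```python
import json

# Made-up sample of `qemu-img info -U --output json --backing-chain` output:
# a JSON array with one object per image in the backing chain.
sample = json.loads('''
[
  {"filename": "/srv/vm.qcow2", "format": "qcow2",
   "actual-size": 1048576, "virtual-size": 10737418240,
   "cluster-size": 65536, "full-backing-filename": "/srv/base.qcow2"},
  {"filename": "/srv/base.qcow2", "format": "qcow2",
   "actual-size": 524288, "virtual-size": 10737418240}
]
''')

# Minimal version of the nesting performed above: the first entry is the
# top image; its "backing file" key points at the next image's record.
disks = [
    {"file": d["filename"], "file format": d["format"],
     "disk size": d["actual-size"], "virtual size": d["virtual-size"]}
    for d in sample
]
if "full-backing-filename" in sample[0]:
    disks[0]["backing file"] = disks[1]
```

This is a simplified two-image sketch; the real function also resolves snapshots and arbitrarily long chains.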
  373 
  374 def _get_uuid(dom):
  375     """
  376     Return a uuid from the named vm
  377 
  378     CLI Example:
  379 
  380     .. code-block:: bash
  381 
  382         salt '*' virt.get_uuid <domain>
  383     """
  384     return ElementTree.fromstring(get_xml(dom)).find("uuid").text
  385 
  386 
  387 def _get_on_poweroff(dom):
  388     """
  389     Return `on_poweroff` setting from the named vm
  390 
  391     CLI Example:
  392 
  393     .. code-block:: bash
  394 
   395         salt '*' virt.get_on_poweroff <domain>
  396     """
  397     node = ElementTree.fromstring(get_xml(dom)).find("on_poweroff")
  398     return node.text if node is not None else ""
  399 
  400 
  401 def _get_on_reboot(dom):
  402     """
  403     Return `on_reboot` setting from the named vm
  404 
  405     CLI Example:
  406 
  407     .. code-block:: bash
  408 
  409         salt '*' virt.get_on_reboot <domain>
  410     """
  411     node = ElementTree.fromstring(get_xml(dom)).find("on_reboot")
  412     return node.text if node is not None else ""
  413 
  414 
  415 def _get_on_crash(dom):
  416     """
  417     Return `on_crash` setting from the named vm
  418 
  419     CLI Example:
  420 
  421     .. code-block:: bash
  422 
  423         salt '*' virt.get_on_crash <domain>
  424     """
  425     node = ElementTree.fromstring(get_xml(dom)).find("on_crash")
  426     return node.text if node is not None else ""
  427 
  428 
  429 def _get_nics(dom):
  430     """
  431     Get domain network interfaces from a libvirt domain object.
  432     """
  433     nics = {}
  434     doc = ElementTree.fromstring(dom.XMLDesc(0))
  435     for iface_node in doc.findall("devices/interface"):
  436         nic = {}
  437         nic["type"] = iface_node.get("type")
  438         for v_node in iface_node:
  439             if v_node.tag == "mac":
  440                 nic["mac"] = v_node.get("address")
  441             if v_node.tag == "model":
  442                 nic["model"] = v_node.get("type")
  443             if v_node.tag == "target":
  444                 nic["target"] = v_node.get("dev")
   445             # driver, source, and address can all have optional attributes
  446             if re.match("(driver|source|address)", v_node.tag):
  447                 temp = {}
  448                 for key, value in v_node.attrib.items():
  449                     temp[key] = value
  450                 nic[v_node.tag] = temp
  451             # virtualport needs to be handled separately, to pick up the
  452             # type attribute of the virtualport itself
  453             if v_node.tag == "virtualport":
  454                 temp = {}
  455                 temp["type"] = v_node.get("type")
  456                 for key, value in v_node.attrib.items():
  457                     temp[key] = value
  458                 nic["virtualport"] = temp
  459         if "mac" not in nic:
  460             continue
  461         nics[nic["mac"]] = nic
  462     return nics
  463 
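The parsing in ``_get_nics`` can be seen in isolation on a small, made-up domain XML snippet (hypothetical MAC and bridge values). This sketch keeps only the core logic: interfaces without a ``<mac>`` element are skipped, and the result is keyed by MAC address:

```python
from xml.etree import ElementTree

# Hypothetical domain XML fragment with a single bridged interface.
xml = """
<domain>
  <devices>
    <interface type='bridge'>
      <mac address='52:54:00:aa:bb:cc'/>
      <model type='virtio'/>
      <source bridge='br0'/>
    </interface>
  </devices>
</domain>
"""
nics = {}
doc = ElementTree.fromstring(xml)
for iface in doc.findall("devices/interface"):
    nic = {"type": iface.get("type")}
    mac = iface.find("mac")
    if mac is None:
        continue  # interfaces without a MAC are skipped, as above
    nic["mac"] = mac.get("address")
    model = iface.find("model")
    if model is not None:
        nic["model"] = model.get("type")
    # source attributes are collected wholesale, mirroring the regex branch
    nic["source"] = dict(iface.find("source").attrib)
    nics[nic["mac"]] = nic
```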
  464 
  465 def _get_graphics(dom):
  466     """
  467     Get domain graphics from a libvirt domain object.
  468     """
  469     out = {
  470         "autoport": "None",
  471         "keymap": "None",
  472         "listen": "None",
  473         "port": "None",
  474         "type": "None",
  475     }
  476     doc = ElementTree.fromstring(dom.XMLDesc(0))
  477     for g_node in doc.findall("devices/graphics"):
  478         for key, value in g_node.attrib.items():
  479             out[key] = value
  480     return out
  481 
  482 
  483 def _get_loader(dom):
  484     """
  485     Get domain loader from a libvirt domain object.
  486     """
  487     out = {"path": "None"}
  488     doc = ElementTree.fromstring(dom.XMLDesc(0))
  489     for g_node in doc.findall("os/loader"):
  490         out["path"] = g_node.text
  491         for key, value in g_node.attrib.items():
  492             out[key] = value
  493     return out
  494 
  495 
  496 def _get_disks(conn, dom):
  497     """
  498     Get domain disks from a libvirt domain object.
  499     """
  500     disks = {}
  501     doc = ElementTree.fromstring(dom.XMLDesc(0))
  502     # Get the path, pool, volume name of each volume we can
  503     all_volumes = _get_all_volumes_paths(conn)
  504     for elem in doc.findall("devices/disk"):
  505         source = elem.find("source")
  506         if source is None:
  507             continue
  508         target = elem.find("target")
  509         driver = elem.find("driver")
  510         if target is None:
  511             continue
  512         qemu_target = None
  513         extra_properties = None
  514         if "dev" in target.attrib:
  515             disk_type = elem.get("type")
  516 
  517             def _get_disk_volume_data(pool_name, volume_name):
  518                 qemu_target = "{}/{}".format(pool_name, volume_name)
  519                 pool = conn.storagePoolLookupByName(pool_name)
  520                 vol = pool.storageVolLookupByName(volume_name)
  521                 vol_info = vol.info()
  522                 extra_properties = {
  523                     "virtual size": vol_info[1],
  524                     "disk size": vol_info[2],
  525                 }
  526 
  527                 backing_files = [
  528                     {
  529                         "file": node.find("source").get("file"),
  530                         "file format": node.find("format").get("type"),
  531                     }
  532                     for node in elem.findall(".//backingStore[source]")
  533                 ]
  534 
  535                 if backing_files:
  536                     # We had the backing files in a flat list, nest them again.
  537                     extra_properties["backing file"] = backing_files[0]
  538                     parent = extra_properties["backing file"]
  539                     for sub_backing_file in backing_files[1:]:
  540                         parent["backing file"] = sub_backing_file
  541                         parent = sub_backing_file
  542 
  543                 else:
  544                     # In some cases the backing chain is not displayed by the domain definition
  545                     # Try to see if we have some of it in the volume definition.
  546                     vol_desc = ElementTree.fromstring(vol.XMLDesc())
  547                     backing_path = vol_desc.find("./backingStore/path")
  548                     backing_format = vol_desc.find("./backingStore/format")
  549                     if backing_path is not None:
  550                         extra_properties["backing file"] = {"file": backing_path.text}
  551                         if backing_format is not None:
  552                             extra_properties["backing file"][
  553                                 "file format"
  554                             ] = backing_format.get("type")
  555                 return (qemu_target, extra_properties)
  556 
  557             if disk_type == "file":
  558                 qemu_target = source.get("file", "")
  559                 if qemu_target.startswith("/dev/zvol/"):
  560                     disks[target.get("dev")] = {"file": qemu_target, "zfs": True}
  561                     continue
  562 
  563                 if qemu_target in all_volumes.keys():
  564                     # If the qemu_target is a known path, output a volume
  565                     volume = all_volumes[qemu_target]
  566                     qemu_target, extra_properties = _get_disk_volume_data(
  567                         volume["pool"], volume["name"]
  568                     )
  569                 elif elem.get("device", "disk") != "cdrom":
  570                     # Extract disk sizes, snapshots, backing files
  571                     try:
  572                         stdout = subprocess.Popen(
  573                             [
  574                                 "qemu-img",
  575                                 "info",
  576                                 "-U",
  577                                 "--output",
  578                                 "json",
  579                                 "--backing-chain",
  580                                 qemu_target,
  581                             ],
  582                             shell=False,
  583                             stdout=subprocess.PIPE,
  584                         ).communicate()[0]
  585                         qemu_output = salt.utils.stringutils.to_str(stdout)
  586                         output = _parse_qemu_img_info(qemu_output)
  587                         extra_properties = output
   588                     except TypeError:
   589                         extra_properties = {"file": "Does not exist"}
  590             elif disk_type == "block":
  591                 qemu_target = source.get("dev", "")
  592                 # If the qemu_target is a known path, output a volume
  593                 if qemu_target in all_volumes.keys():
  594                     volume = all_volumes[qemu_target]
  595                     qemu_target, extra_properties = _get_disk_volume_data(
  596                         volume["pool"], volume["name"]
  597                     )
  598             elif disk_type == "network":
  599                 qemu_target = source.get("protocol")
  600                 source_name = source.get("name")
  601                 if source_name:
  602                     qemu_target = "{}:{}".format(qemu_target, source_name)
  603 
  604                 # Reverse the magic for the rbd and gluster pools
  605                 if source.get("protocol") in ["rbd", "gluster"]:
  606                     for pool_i in conn.listAllStoragePools():
  607                         pool_i_xml = ElementTree.fromstring(pool_i.XMLDesc())
  608                         name_node = pool_i_xml.find("source/name")
  609                         if name_node is not None and source_name.startswith(
  610                             "{}/".format(name_node.text)
  611                         ):
  612                             qemu_target = "{}{}".format(
  613                                 pool_i.name(), source_name[len(name_node.text) :]
  614                             )
  615                             break
  616 
  617                 # Reverse the magic for cdroms with remote URLs
  618                 if elem.get("device", "disk") == "cdrom":
  619                     host_node = source.find("host")
  620                     if host_node is not None:
  621                         hostname = host_node.get("name")
  622                         port = host_node.get("port")
  623                         qemu_target = urlunparse(
  624                             (
  625                                 source.get("protocol"),
  626                                 "{}:{}".format(hostname, port) if port else hostname,
  627                                 source_name,
  628                                 "",
  629                                 saxutils.unescape(source.get("query", "")),
  630                                 "",
  631                             )
  632                         )
  633             elif disk_type == "volume":
  634                 pool_name = source.get("pool")
  635                 volume_name = source.get("volume")
  636                 qemu_target, extra_properties = _get_disk_volume_data(
  637                     pool_name, volume_name
  638                 )
  639 
  640             if not qemu_target:
  641                 continue
  642 
  643             disk = {
  644                 "file": qemu_target,
  645                 "type": elem.get("device"),
  646             }
  647             if driver is not None and "type" in driver.attrib:
  648                 disk["file format"] = driver.get("type")
  649             if extra_properties:
  650                 disk.update(extra_properties)
  651 
  652             disks[target.get("dev")] = disk
  653     return disks
  654 
  655 
  656 def _libvirt_creds():
  657     """
  658     Returns the user and group that the disk images should be owned by
  659     """
  660     g_cmd = "grep ^\\s*group /etc/libvirt/qemu.conf"
  661     u_cmd = "grep ^\\s*user /etc/libvirt/qemu.conf"
  662     try:
  663         stdout = subprocess.Popen(
  664             g_cmd, shell=True, stdout=subprocess.PIPE
  665         ).communicate()[0]
  666         group = salt.utils.stringutils.to_str(stdout).split('"')[1]
  667     except IndexError:
  668         group = "root"
  669     try:
  670         stdout = subprocess.Popen(
  671             u_cmd, shell=True, stdout=subprocess.PIPE
  672         ).communicate()[0]
  673         user = salt.utils.stringutils.to_str(stdout).split('"')[1]
  674     except IndexError:
  675         user = "root"
  676     return {"user": user, "group": group}
  677 
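The two ``grep`` calls above just extract the first quoted value from uncommented ``user``/``group`` lines in ``qemu.conf``, defaulting to ``root``. A pure-Python sketch of the same extraction (``conf_value`` and the sample text are illustrative, not part of the module):

```python
import re

# Made-up qemu.conf-style sample: a commented-out line plus two live settings.
conf = '''
#user = "root"
user = "qemu"
group = "kvm"
'''

def conf_value(text, key, default="root"):
    # Match only uncommented `key = "value"` lines, like grep '^\\s*key'.
    match = re.search(r'^\s*{} *= *"([^"]*)"'.format(key), text, re.MULTILINE)
    return match.group(1) if match else default
```

Unlike the shell pipeline, this never raises on missing keys; the ``IndexError`` fallback above becomes an explicit default.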
  678 
  679 def _migrate(dom, dst_uri, **kwargs):
  680     """
  681     Migrate the domain object from its current host to the destination
  682     host given by URI.
  683 
  684     :param dom: domain object to migrate
  685     :param dst_uri: destination URI
  686     :param kwargs:
  687         - live:            Use live migration. Default value is True.
  688         - persistent:      Leave the domain persistent on destination host.
  689                            Default value is True.
  690         - undefinesource:  Undefine the domain on the source host.
  691                            Default value is True.
  692         - offline:         If set to True it will migrate the domain definition
  693                            without starting the domain on destination and without
  694                            stopping it on source host. Default value is False.
  695         - max_bandwidth:   The maximum bandwidth (in MiB/s) that will be used.
  696         - max_downtime:    Set maximum tolerable downtime for live-migration.
  697                            The value represents a number of milliseconds the guest
  698                            is allowed to be down at the end of live migration.
  699         - parallel_connections: Specify a number of parallel network connections
  700                            to be used to send memory pages to the destination host.
  701         - compressed:      Activate compression.
   702         - comp_methods:    A comma-separated list of compression methods. Supported
   703                            methods are "mt" and "xbzrle" and can be used in any
   704                            combination. QEMU defaults to "xbzrle".
   705         - comp_mt_level:   Set compression level. Values are in range from 0 to 9,
   706                            where 1 is maximum speed and 9 is maximum compression.
  707         - comp_mt_threads: Set number of compress threads on source host.
  708         - comp_mt_dthreads: Set number of decompress threads on target host.
  709         - comp_xbzrle_cache: Set the size of page cache for xbzrle compression in bytes.
  710         - copy_storage:    Migrate non-shared storage. It must be one of the following
  711                            values: all (full disk copy) or incremental (Incremental copy)
  712         - postcopy:        Enable the use of post-copy migration.
  713         - postcopy_bandwidth: The maximum bandwidth allowed in post-copy phase. (MiB/s)
  714         - username:        Username to connect with target host
  715         - password:        Password to connect with target host
  716     """
  717     flags = 0
  718     params = {}
  719     migrated_state = libvirt.VIR_DOMAIN_RUNNING_MIGRATED
  720 
  721     if kwargs.get("live", True):
  722         flags |= libvirt.VIR_MIGRATE_LIVE
  723 
  724     if kwargs.get("persistent", True):
        flags |= libvirt.VIR_MIGRATE_PERSIST_DEST

    if kwargs.get("undefinesource", True):
        flags |= libvirt.VIR_MIGRATE_UNDEFINE_SOURCE

    max_bandwidth = kwargs.get("max_bandwidth")
    if max_bandwidth:
        try:
            bandwidth_value = int(max_bandwidth)
        except ValueError:
            raise SaltInvocationError(
                "Invalid max_bandwidth value: {}".format(max_bandwidth)
            )
        dom.migrateSetMaxSpeed(bandwidth_value)

    max_downtime = kwargs.get("max_downtime")
    if max_downtime:
        try:
            downtime_value = int(max_downtime)
        except ValueError:
            raise SaltInvocationError(
                "Invalid max_downtime value: {}".format(max_downtime)
            )
        dom.migrateSetMaxDowntime(downtime_value)

    if kwargs.get("offline") is True:
        flags |= libvirt.VIR_MIGRATE_OFFLINE
        migrated_state = libvirt.VIR_DOMAIN_RUNNING_UNPAUSED

    if kwargs.get("compressed") is True:
        flags |= libvirt.VIR_MIGRATE_COMPRESSED

    comp_methods = kwargs.get("comp_methods")
    if comp_methods:
        params[libvirt.VIR_MIGRATE_PARAM_COMPRESSION] = comp_methods.split(",")

    comp_options = {
        "comp_mt_level": libvirt.VIR_MIGRATE_PARAM_COMPRESSION_MT_LEVEL,
        "comp_mt_threads": libvirt.VIR_MIGRATE_PARAM_COMPRESSION_MT_THREADS,
        "comp_mt_dthreads": libvirt.VIR_MIGRATE_PARAM_COMPRESSION_MT_DTHREADS,
        "comp_xbzrle_cache": libvirt.VIR_MIGRATE_PARAM_COMPRESSION_XBZRLE_CACHE,
    }

    for comp_option, param_key in comp_options.items():
        comp_option_value = kwargs.get(comp_option)
        if comp_option_value:
            try:
                params[param_key] = int(comp_option_value)
            except ValueError:
                raise SaltInvocationError("Invalid {} value".format(comp_option))

    parallel_connections = kwargs.get("parallel_connections")
    if parallel_connections:
        try:
            params[libvirt.VIR_MIGRATE_PARAM_PARALLEL_CONNECTIONS] = int(
                parallel_connections
            )
        except ValueError:
            raise SaltInvocationError("Invalid parallel_connections value")
        flags |= libvirt.VIR_MIGRATE_PARALLEL

    if __salt__["config.get"]("virt:tunnel"):
        if parallel_connections:
            raise SaltInvocationError(
                "Parallel migration isn't compatible with tunneled migration"
            )
        flags |= libvirt.VIR_MIGRATE_PEER2PEER
        flags |= libvirt.VIR_MIGRATE_TUNNELLED

    if kwargs.get("postcopy") is True:
        flags |= libvirt.VIR_MIGRATE_POSTCOPY

    postcopy_bandwidth = kwargs.get("postcopy_bandwidth")
    if postcopy_bandwidth:
        try:
            postcopy_bandwidth_value = int(postcopy_bandwidth)
        except ValueError:
            raise SaltInvocationError("Invalid postcopy_bandwidth value")
        dom.migrateSetMaxSpeed(
            postcopy_bandwidth_value,
            flags=libvirt.VIR_DOMAIN_MIGRATE_MAX_SPEED_POSTCOPY,
        )

    copy_storage = kwargs.get("copy_storage")
    if copy_storage:
        if copy_storage == "all":
            flags |= libvirt.VIR_MIGRATE_NON_SHARED_DISK
        elif copy_storage in ["inc", "incremental"]:
            flags |= libvirt.VIR_MIGRATE_NON_SHARED_INC
        else:
            raise SaltInvocationError("invalid copy_storage value")
    dst_conn = None
    try:
        state = False
        dst_conn = __get_conn(
            connection=dst_uri,
            username=kwargs.get("username"),
            password=kwargs.get("password"),
        )
        new_dom = dom.migrate3(dconn=dst_conn, params=params, flags=flags)
        if new_dom:
            state = new_dom.state()
        dst_conn.close()
        return state and migrated_state in state
    except libvirt.libvirtError as err:
        # dst_conn may not have been opened yet if __get_conn itself failed
        if dst_conn:
            dst_conn.close()
        raise CommandExecutionError(err.get_error_message())


def _get_volume_path(pool, volume_name):
    """
    Get the path to a volume. If the volume doesn't exist, compute its path from the pool one.
    """
    if volume_name in pool.listVolumes():
        volume = pool.storageVolLookupByName(volume_name)
        volume_xml = ElementTree.fromstring(volume.XMLDesc())
        return volume_xml.find("./target/path").text

    # Get the path from the pool if the volume doesn't exist yet
    pool_xml = ElementTree.fromstring(pool.XMLDesc())
    pool_path = pool_xml.find("./target/path").text
    return pool_path + "/" + volume_name


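A minimal, libvirt-free sketch of the fallback branch above: given a pool's XML description, derive a volume path from the pool's target path. The sample XML and the helper name are illustrative assumptions, not part of this module.

```python
from xml.etree import ElementTree

# Hypothetical pool XML, shaped like the dumps libvirt's pool.XMLDesc() returns.
pool_xml_str = """
<pool type='dir'>
  <name>default</name>
  <target>
    <path>/var/lib/libvirt/images</path>
  </target>
</pool>
"""


def volume_path_from_pool_xml(pool_xml_str, volume_name):
    # Same lookup as _get_volume_path when the volume doesn't exist yet:
    # read the pool's target path and append the volume name.
    pool_xml = ElementTree.fromstring(pool_xml_str)
    pool_path = pool_xml.find("./target/path").text
    return pool_path + "/" + volume_name


print(volume_path_from_pool_xml(pool_xml_str, "vm1_system.qcow2"))
# /var/lib/libvirt/images/vm1_system.qcow2
```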
def _disk_from_pool(conn, pool, pool_xml, volume_name):
    """
    Create a disk definition out of the pool XML and volume name.
    The aim of this function is to replace the volume-based definition when not handled by libvirt.
    It returns the disk Jinja context to be used when creating the VM
    """
    pool_type = pool_xml.get("type")
    disk_context = {}

    # handle dir, fs and netfs
    if pool_type in ["dir", "netfs", "fs"]:
        disk_context["type"] = "file"
        disk_context["source_file"] = _get_volume_path(pool, volume_name)

    elif pool_type in ["logical", "disk", "iscsi", "scsi"]:
        disk_context["type"] = "block"
        disk_context["format"] = "raw"
        disk_context["source_file"] = _get_volume_path(pool, volume_name)

    elif pool_type in ["rbd", "gluster", "sheepdog"]:
        # libvirt can't handle rbd, gluster and sheepdog as volumes
        disk_context["type"] = "network"
        disk_context["protocol"] = pool_type
        # Copy the hosts from the pool definition
        disk_context["hosts"] = [
            {"name": host.get("name"), "port": host.get("port")}
            for host in pool_xml.findall(".//host")
        ]
        dir_node = pool_xml.find("./source/dir")
        # Gluster and RBD need pool/volume name
        name_node = pool_xml.find("./source/name")
        if name_node is not None:
            disk_context["volume"] = "{}/{}".format(name_node.text, volume_name)
        # Copy the authentication if any for RBD
        auth_node = pool_xml.find("./source/auth")
        if auth_node is not None:
            username = auth_node.get("username")
            secret_node = auth_node.find("./secret")
            usage = secret_node.get("usage")
            if not usage:
                # Get the usage from the UUID
                uuid = secret_node.get("uuid")
                usage = conn.secretLookupByUUIDString(uuid).usageID()
            disk_context["auth"] = {
                "type": "ceph",
                "username": username,
                "usage": usage,
            }

    return disk_context


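A condensed sketch of the pool-type dispatch above: dir/fs-like pools yield "file" disks, device-backed pools yield "block" disks, and the network pool types (rbd, gluster, sheepdog) yield "network" disks. The helper name is ours, for illustration only.

```python
def disk_type_for_pool(pool_type):
    # Mirrors the branch structure of _disk_from_pool, reduced to the
    # disk "type" each family of libvirt pools maps to.
    if pool_type in ["dir", "netfs", "fs"]:
        return "file"
    if pool_type in ["logical", "disk", "iscsi", "scsi"]:
        return "block"
    if pool_type in ["rbd", "gluster", "sheepdog"]:
        return "network"
    return None


print(disk_type_for_pool("netfs"))  # file
print(disk_type_for_pool("iscsi"))  # block
print(disk_type_for_pool("rbd"))    # network
```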
def _handle_unit(s, def_unit="m"):
    """
    Handle the unit conversion, return the value in bytes
    """
    m = re.match(r"(?P<value>[0-9.]*)\s*(?P<unit>.*)$", str(s).strip())
    value = m.group("value")
    # default unit
    unit = m.group("unit").lower() or def_unit
    try:
        value = int(value)
    except ValueError:
        try:
            value = float(value)
        except ValueError:
            raise SaltInvocationError("invalid number")
    # flag for base ten
    dec = False
    if re.match(r"[kmgtpezy]b$", unit):
        dec = True
    elif not re.match(r"(b|[kmgtpezy](ib)?)$", unit):
        raise SaltInvocationError("invalid units")
    p = "bkmgtpezy".index(unit[0])
    value *= 10 ** (p * 3) if dec else 2 ** (p * 10)
    return int(value)


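A standalone sketch of the conversion rules `_handle_unit` implements: bare unit letters and `*ib` suffixes are binary (powers of 1024), while `*b` suffixes are decimal (powers of 1000); the default unit is MiB. This copy raises plain `ValueError` instead of `SaltInvocationError` so it runs outside Salt.

```python
import re


def handle_unit(s, def_unit="m"):
    """Parse a size like '1m', '512MiB' or '2GB' into bytes."""
    m = re.match(r"(?P<value>[0-9.]*)\s*(?P<unit>.*)$", str(s).strip())
    value = float(m.group("value"))
    unit = m.group("unit").lower() or def_unit
    # '<letter>b' means base ten; bare letters or '<letter>ib' mean base two
    dec = bool(re.match(r"[kmgtpezy]b$", unit))
    if not dec and not re.match(r"(b|[kmgtpezy](ib)?)$", unit):
        raise ValueError("invalid units")
    p = "bkmgtpezy".index(unit[0])
    value *= 10 ** (p * 3) if dec else 2 ** (p * 10)
    return int(value)


print(handle_unit("1"))     # 1048576  (default unit is MiB)
print(handle_unit("2k"))    # 2048
print(handle_unit("1kb"))   # 1000
print(handle_unit("1GiB"))  # 1073741824
```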
def nesthash():
    """
    create default dict that allows arbitrary level of nesting
    """
    return collections.defaultdict(nesthash)


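`nesthash()` returns a defaultdict whose missing keys materialize as further nested defaultdicts, so callers such as `_gen_xml` can assign `context["mem"]["boot"]` without pre-creating the intermediate dicts. A quick standalone demonstration:

```python
import collections


def nesthash():
    # Self-referential default factory: every missing key becomes
    # another nesthash, allowing arbitrary nesting depth.
    return collections.defaultdict(nesthash)


ctx = nesthash()
ctx["mem"]["boot"] = "1024"           # no KeyError at any depth
ctx["os"]["loader"]["path"] = "/x"

print(ctx["mem"]["boot"])  # 1024
```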
def _gen_xml(
    conn,
    name,
    cpu,
    mem,
    diskp,
    nicp,
    hypervisor,
    os_type,
    arch,
    graphics=None,
    boot=None,
    boot_dev=None,
    **kwargs
):
    """
    Generate the XML string to define a libvirt VM
    """
    context = {
        "hypervisor": hypervisor,
        "name": name,
        "cpu": str(cpu),
    }

    context["mem"] = nesthash()
    if isinstance(mem, int):
        mem = int(mem) * 1024  # convert MiB to KiB
        context["mem"]["boot"] = str(mem)
        context["mem"]["current"] = str(mem)
    elif isinstance(mem, dict):
        for tag, val in mem.items():
            if val:
                if tag == "slots":
                    context["mem"]["slots"] = "{}='{}'".format(tag, val)
                else:
                    context["mem"][tag] = str(int(_handle_unit(val) / 1024))

    if hypervisor in ["qemu", "kvm"]:
        context["controller_model"] = False
    elif hypervisor == "vmware":
        # TODO: make bus and model parameterized, this works for 64-bit Linux
        context["controller_model"] = "lsilogic"

    # By default, set the graphics to listen to all addresses
    if graphics:
        if "listen" not in graphics:
            graphics["listen"] = {"type": "address", "address": "0.0.0.0"}
        elif (
            "address" not in graphics["listen"]
            and graphics["listen"]["type"] == "address"
        ):
            graphics["listen"]["address"] = "0.0.0.0"

        # Graphics of type 'none' means no graphics device at all
        if graphics.get("type", "none") == "none":
            graphics = None
    context["graphics"] = graphics

    context["boot_dev"] = boot_dev.split() if boot_dev is not None else ["hd"]

    context["boot"] = boot if boot else {}

    # if efi parameter is specified, prepare os_attrib
    efi_value = context["boot"].get("efi", None) if boot else None
    if efi_value is True:
        context["boot"]["os_attrib"] = "firmware='efi'"
    elif efi_value is not None and not isinstance(efi_value, bool):
        raise SaltInvocationError("Invalid efi value")

    if os_type == "xen":
        # Compute the Xen PV boot method
        if __grains__["os_family"] == "Suse":
            if not boot or not boot.get("kernel", None):
                context["boot"]["kernel"] = "/usr/lib/grub2/x86_64-xen/grub.xen"
                context["boot_dev"] = []

    if "serial_type" in kwargs:
        context["serial_type"] = kwargs["serial_type"]
    if "serial_type" in context and context["serial_type"] == "tcp":
        if "telnet_port" in kwargs:
            context["telnet_port"] = kwargs["telnet_port"]
        else:
            context["telnet_port"] = 23023  # FIXME: use random unused port
    if "serial_type" in context:
        if "console" in kwargs:
            context["console"] = kwargs["console"]
        else:
            context["console"] = True

    context["disks"] = []
    disk_bus_map = {"virtio": "vd", "xen": "xvd", "fdc": "fd", "ide": "hd"}
    targets = []
    for i, disk in enumerate(diskp):
        prefix = disk_bus_map.get(disk["model"], "sd")
        disk_context = {
            "device": disk.get("device", "disk"),
            "target_dev": _get_disk_target(targets, len(diskp), prefix),
            "disk_bus": disk["model"],
            "format": disk.get("format", "raw"),
            "index": str(i),
        }
        targets.append(disk_context["target_dev"])
        if disk.get("source_file"):
            url = urlparse(disk["source_file"])
            if not url.scheme or not url.hostname:
                disk_context["source_file"] = disk["source_file"]
                disk_context["type"] = "file"
            elif url.scheme in ["http", "https", "ftp", "ftps", "tftp"]:
                disk_context["type"] = "network"
                disk_context["protocol"] = url.scheme
                disk_context["volume"] = url.path
                disk_context["query"] = saxutils.escape(url.query)
                disk_context["hosts"] = [{"name": url.hostname, "port": url.port}]

        elif disk.get("pool"):
            disk_context["volume"] = disk["filename"]
            # If we had no source_file, then we want a volume
            pool = conn.storagePoolLookupByName(disk["pool"])
            pool_xml = ElementTree.fromstring(pool.XMLDesc())
            pool_type = pool_xml.get("type")

            # For Xen VMs convert all pool types (issue #58333)
            if hypervisor == "xen" or pool_type in ["rbd", "gluster", "sheepdog"]:
                disk_context.update(
                    _disk_from_pool(conn, pool, pool_xml, disk_context["volume"])
                )

            else:
                if pool_type in ["disk", "logical"]:
                    # The volume format for these types doesn't match the driver format in the VM
                    disk_context["format"] = "raw"
                disk_context["type"] = "volume"
                disk_context["pool"] = disk["pool"]

        else:
            # No source and no pool is a removable device, use file type
            disk_context["type"] = "file"

        if hypervisor in ["qemu", "kvm", "bhyve", "xen"]:
            disk_context["address"] = False
            disk_context["driver"] = True
        elif hypervisor in ["esxi", "vmware"]:
            disk_context["address"] = True
            disk_context["driver"] = False
        context["disks"].append(disk_context)
    context["nics"] = nicp

    context["os_type"] = os_type
    context["arch"] = arch

    fn_ = "libvirt_domain.jinja"
    try:
        template = JINJA.get_template(fn_)
    except jinja2.exceptions.TemplateNotFound:
        log.error("Could not load template %s", fn_)
        return ""
    return template.render(**context)


def _gen_vol_xml(
    name,
    size,
    format=None,
    allocation=0,
    type=None,
    permissions=None,
    backing_store=None,
    nocow=False,
):
    """
    Generate the XML string to define a libvirt storage volume
    """
    size = int(size) * 1024  # convert MiB to KiB
    context = {
        "type": type,
        "name": name,
        "target": {"permissions": permissions, "nocow": nocow},
        "format": format,
        "size": str(size),
        "allocation": str(int(allocation) * 1024),
        "backingStore": backing_store,
    }
    fn_ = "libvirt_volume.jinja"
    try:
        template = JINJA.get_template(fn_)
    except jinja2.exceptions.TemplateNotFound:
        log.error("Could not load template %s", fn_)
        return ""
    return template.render(**context)


def _gen_net_xml(name, bridge, forward, vport, tag=None, ip_configs=None):
    """
    Generate the XML string to define a libvirt network
    """
    context = {
        "name": name,
        "bridge": bridge,
        "forward": forward,
        "vport": vport,
        "tag": tag,
        "ip_configs": [
            {
                "address": ipaddress.ip_network(config["cidr"]),
                "dhcp_ranges": config.get("dhcp_ranges", []),
            }
            for config in ip_configs or []
        ],
    }
    fn_ = "libvirt_network.jinja"
    try:
        template = JINJA.get_template(fn_)
    except jinja2.exceptions.TemplateNotFound:
        log.error("Could not load template %s", fn_)
        return ""
    return template.render(**context)


def _gen_pool_xml(
    name,
    ptype,
    target=None,
    permissions=None,
    source_devices=None,
    source_dir=None,
    source_adapter=None,
    source_hosts=None,
    source_auth=None,
    source_name=None,
    source_format=None,
    source_initiator=None,
):
    """
    Generate the XML string to define a libvirt storage pool
    """
    hosts = [host.split(":") for host in source_hosts or []]
    source = None
    if any(
        [
            source_devices,
            source_dir,
            source_adapter,
            hosts,
            source_auth,
            source_name,
            source_format,
            source_initiator,
        ]
    ):
        source = {
            "devices": source_devices or [],
            "dir": source_dir
            if source_format != "cifs" or not source_dir
            else source_dir.lstrip("/"),
            "adapter": source_adapter,
            "hosts": [
                {"name": host[0], "port": host[1] if len(host) > 1 else None}
                for host in hosts
            ],
            "auth": source_auth,
            "name": source_name,
            "format": source_format,
            "initiator": source_initiator,
        }

    context = {
        "name": name,
        "ptype": ptype,
        "target": {"path": target, "permissions": permissions},
        "source": source,
    }
    fn_ = "libvirt_pool.jinja"
    try:
        template = JINJA.get_template(fn_)
    except jinja2.exceptions.TemplateNotFound:
        log.error("Could not load template %s", fn_)
        return ""
    return template.render(**context)


def _gen_secret_xml(auth_type, usage, description):
    """
    Generate a libvirt secret definition XML
    """
    context = {
        "type": auth_type,
        "usage": usage,
        "description": description,
    }
    fn_ = "libvirt_secret.jinja"
    try:
        template = JINJA.get_template(fn_)
    except jinja2.exceptions.TemplateNotFound:
        log.error("Could not load template %s", fn_)
        return ""
    return template.render(**context)


def _get_images_dir():
    """
    Extract the images directory from the ``virt:images`` configuration option.
    """
    img_dir = __salt__["config.get"]("virt:images")
    log.debug("Image directory from config option `virt:images` is %s", img_dir)
    return img_dir


def _zfs_image_create(
    vm_name,
    pool,
    disk_name,
    hostname_property_name,
    sparse_volume,
    disk_size,
    disk_image_name,
):
    """
    Clones an existing image, or creates a new one.

    When cloning an image, disk_image_name refers to the source
    of the clone. If not specified, disk_size is used for creating
    a new zvol, and sparse_volume determines whether to create
    a thin provisioned volume.

    The cloned or new volume can have a ZFS property set containing
    the vm_name. Use hostname_property_name for specifying the key
    of this ZFS property.
    """
    if not disk_image_name and not disk_size:
        raise CommandExecutionError(
            "Unable to create new disk {}, please specify"
            " the disk image name or disk size argument".format(disk_name)
        )

    if not pool:
        raise CommandExecutionError(
            "Unable to create new disk {}, please specify"
            " the disk pool name".format(disk_name)
        )

    destination_fs = os.path.join(pool, "{}.{}".format(vm_name, disk_name))
    log.debug("Image destination will be %s", destination_fs)

    existing_disk = __salt__["zfs.list"](name=pool)
    if "error" in existing_disk:
        raise CommandExecutionError(
            "Unable to create new disk {}. {}".format(
                destination_fs, existing_disk["error"]
            )
        )
    elif destination_fs in existing_disk:
        log.info("ZFS filesystem %s already exists. Skipping creation", destination_fs)
        blockdevice_path = os.path.join("/dev/zvol", destination_fs)
        return blockdevice_path

    properties = {}
    if hostname_property_name:
        properties[hostname_property_name] = vm_name

    if disk_image_name:
        __salt__["zfs.clone"](
            name_a=disk_image_name, name_b=destination_fs, properties=properties
        )

    elif disk_size:
        __salt__["zfs.create"](
            name=destination_fs,
            properties=properties,
            volume_size=disk_size,
            sparse=sparse_volume,
        )

    blockdevice_path = os.path.join(
        "/dev/zvol", pool, "{}.{}".format(vm_name, disk_name)
    )
    log.debug("Image path will be %s", blockdevice_path)
    return blockdevice_path


def _qemu_image_create(disk, create_overlay=False, saltenv="base"):
    """
    Create the image file using specified disk_size or/and disk_image

    Return path to the created image file
    """
    disk_size = disk.get("size", None)
    disk_image = disk.get("image", None)

    if not disk_size and not disk_image:
        raise CommandExecutionError(
            "Unable to create new disk {}, please specify"
            " disk size and/or disk image argument".format(disk["filename"])
        )

    img_dest = disk["source_file"]
    log.debug("Image destination will be %s", img_dest)
    img_dir = os.path.dirname(img_dest)
    log.debug("Image destination directory is %s", img_dir)
    if not os.path.exists(img_dir):
        os.makedirs(img_dir)

    if disk_image:
        log.debug("Create disk from specified image %s", disk_image)
        sfn = __salt__["cp.cache_file"](disk_image, saltenv)

        qcow2 = False
        if salt.utils.path.which("qemu-img"):
            res = __salt__["cmd.run"]('qemu-img info "{}"'.format(sfn))
            imageinfo = salt.utils.yaml.safe_load(res)
            qcow2 = imageinfo["file format"] == "qcow2"
        try:
            if create_overlay and qcow2:
                log.info("Cloning qcow2 image %s using copy on write", sfn)
                __salt__["cmd.run"](
                    'qemu-img create -f qcow2 -o backing_file="{}" "{}"'.format(
                        sfn, img_dest
                    ).split()
                )
            else:
                log.debug("Copying %s to %s", sfn, img_dest)
                salt.utils.files.copyfile(sfn, img_dest)

            mask = salt.utils.files.get_umask()

            if disk_size and qcow2:
                log.debug("Resize qcow2 image to %sM", disk_size)
                __salt__["cmd.run"](
                    'qemu-img resize "{}" {}M'.format(img_dest, disk_size)
                )

            log.debug("Apply umask and remove exec bit")
            mode = (0o0777 ^ mask) & 0o0666
            os.chmod(img_dest, mode)

        except OSError as err:
            raise CommandExecutionError(
                "Problem while copying image. {} - {}".format(disk_image, err)
            )

    else:
        # Create empty disk
        try:
            mask = salt.utils.files.get_umask()

            if disk_size:
                log.debug("Create empty image with size %sM", disk_size)
                __salt__["cmd.run"](
                    'qemu-img create -f {} "{}" {}M'.format(
                        disk.get("format", "qcow2"), img_dest, disk_size
                    )
                )
            else:
                raise CommandExecutionError(
                    "Unable to create new disk {},"
                    " please specify <size> argument".format(img_dest)
                )

            log.debug("Apply umask and remove exec bit")
            mode = (0o0777 ^ mask) & 0o0666
            os.chmod(img_dest, mode)

        except OSError as err:
            raise CommandExecutionError(
                "Problem while creating volume {} - {}".format(img_dest, err)
            )

    return img_dest


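The permission computation used twice above, in isolation: start from rw-for-everyone, never executable, then clear the bits the process umask masks out. `(0o0777 ^ mask)` keeps exactly the bits the umask allows, and `& 0o0666` strips the execute bits. The helper name is ours, for illustration.

```python
def image_mode(mask):
    # Same expression as in _qemu_image_create: apply the umask to 0o777,
    # then drop the execute bits so images are never executable.
    return (0o0777 ^ mask) & 0o0666


print(oct(image_mode(0o022)))  # 0o644
print(oct(image_mode(0o077)))  # 0o600
print(oct(image_mode(0o002)))  # 0o664
```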
def _seed_image(seed_cmd, img_path, name, config, install, pub_key, priv_key):
    """
    Helper function to seed an existing image. Note that this doesn't
    handle volumes.
    """
    log.debug("Seeding image")
    __salt__[seed_cmd](
        img_path,
        id_=name,
        config=config,
        install=install,
        pub_key=pub_key,
        priv_key=priv_key,
    )


def _disk_volume_create(conn, disk, seeder=None, saltenv="base"):
    """
    Create a disk volume for use in a VM
    """
    if disk.get("overlay_image"):
        raise SaltInvocationError(
            "Disk overlay_image property is not supported when creating volumes, "
            "use backing_store_path and backing_store_format instead."
        )

    pool = conn.storagePoolLookupByName(disk["pool"])

    # Use existing volume if possible
    if disk["filename"] in pool.listVolumes():
        return

    pool_type = ElementTree.fromstring(pool.XMLDesc()).get("type")

    backing_path = disk.get("backing_store_path")
    backing_format = disk.get("backing_store_format")
    backing_store = None
    if (
        backing_path
        and backing_format
        and (disk.get("format") == "qcow2" or pool_type == "logical")
    ):
        backing_store = {"path": backing_path, "format": backing_format}

    if backing_store and disk.get("image"):
        raise SaltInvocationError(
            "Using a template image with a backing store is not possible, "
            "choose either of them."
        )

    vol_xml = _gen_vol_xml(
        disk["filename"],
        disk.get("size", 0),
        format=disk.get("format"),
        backing_store=backing_store,
    )
    _define_vol_xml_str(conn, vol_xml, disk.get("pool"))

    if disk.get("image"):
        log.debug("Caching disk template image: %s", disk.get("image"))
        cached_path = __salt__["cp.cache_file"](disk.get("image"), saltenv)

        if seeder:
            seeder(cached_path)
        _volume_upload(
            conn,
            disk["pool"],
            disk["filename"],
            cached_path,
            sparse=disk.get("format") == "qcow2",
        )


def _disk_profile(conn, profile, hypervisor, disks, vm_name):
    """
    Gather the disk profile from the config or apply the default based
    on the active hypervisor

    This is the ``default`` profile for KVM/QEMU, which can be
    overridden in the configuration:

    .. code-block:: yaml

        virt:
          disk:
            default:
              - system:
                  size: 8192
                  format: qcow2
                  model: virtio

    Example profile for KVM/QEMU with two disks: the first is created
    from the specified image, the second is empty:

    .. code-block:: yaml

        virt:
          disk:
            two_disks:
              - system:
                  size: 8192
                  format: qcow2
                  model: virtio
                  image: http://path/to/image.qcow2
              - lvm:
                  size: 32768
                  format: qcow2
                  model: virtio

    The ``format`` and ``model`` parameters are optional, and will
    default to whatever is best suited for the active hypervisor.
    """
    default = [{"system": {"size": 8192}}]
    if hypervisor == "vmware":
        overlay = {"format": "vmdk", "model": "scsi", "device": "disk"}
    elif hypervisor in ["qemu", "kvm"]:
        overlay = {"device": "disk", "model": "virtio"}
    elif hypervisor == "xen":
        overlay = {"device": "disk", "model": "xen"}
    elif hypervisor == "bhyve":
        overlay = {"format": "raw", "model": "virtio", "sparse_volume": False}
    else:
        overlay = {}

    # Get the disks from the profile
    disklist = []
    if profile:
        disklist = copy.deepcopy(
            __salt__["config.get"]("virt:disk", {}).get(profile, default)
        )

        # Transform the list to remove one level of dictionary and add the name as a property
        disklist = [dict(d, name=name) for disk in disklist for name, d in disk.items()]

    # Merge with the user-provided disks definitions
    if disks:
        for udisk in disks:
            if "name" in udisk:
                found = [disk for disk in disklist if udisk["name"] == disk["name"]]
                if found:
                    found[0].update(udisk)
                else:
                    disklist.append(udisk)

    # Get pool capabilities once to get default format later
    pool_caps = _pool_capabilities(conn)

    for disk in disklist:
        # Set default model for cdrom devices before the overlay sets the wrong one
        if disk.get("device", "disk") == "cdrom" and "model" not in disk:
            disk["model"] = "ide"

        # Add the missing properties that have defaults
        for key, val in overlay.items():
            if key not in disk:
                disk[key] = val

        # We may have an already computed source_file (i.e. image not created by our module)
        if disk.get("source_file") and os.path.exists(disk["source_file"]):
            disk["filename"] = os.path.basename(disk["source_file"])
            if not disk.get("format"):
                disk["format"] = (
                    "qcow2" if disk.get("device", "disk") != "cdrom" else "raw"
                )
        elif vm_name and disk.get("device", "disk") == "disk":
            _fill_disk_filename(conn, vm_name, disk, hypervisor, pool_caps)

    return disklist


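A standalone sketch of the flatten-and-merge performed above: profile entries shaped like `[{"system": {...}}]` are flattened into dicts carrying a `name` key, then user-supplied disks either override the matching profile disk or are appended. The helper name is ours, for illustration only.

```python
def merge_disks(profile_disks, user_disks):
    # Flatten one level of dictionary and carry the key as a "name" property,
    # exactly like the list comprehension in _disk_profile.
    disklist = [
        dict(d, name=name) for disk in profile_disks for name, d in disk.items()
    ]
    # User disks with a matching name update the profile entry; others append.
    for udisk in user_disks or []:
        if "name" in udisk:
            found = [disk for disk in disklist if udisk["name"] == disk["name"]]
            if found:
                found[0].update(udisk)
            else:
                disklist.append(udisk)
    return disklist


merged = merge_disks(
    [{"system": {"size": 8192}}],
    [{"name": "system", "size": 16384}, {"name": "data", "size": 1024}],
)
print(merged)
# [{'size': 16384, 'name': 'system'}, {'name': 'data', 'size': 1024}]
```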
 1575 def _fill_disk_filename(conn, vm_name, disk, hypervisor, pool_caps):
 1576     """
 1577     Compute the disk file name and update it in the disk value.
 1578     """
 1579     # Compute the filename without extension since it may not make sense for some pool types
 1580     disk["filename"] = "{}_{}".format(vm_name, disk["name"])
 1581 
 1582     # Compute the source file path
 1583     base_dir = disk.get("pool", None)
 1584     if hypervisor in ["qemu", "kvm", "xen"]:
 1585         # Compute the base directory from the pool property. We may have either a path
 1586         # or a libvirt pool name there.
 1587         if not base_dir:
 1588             base_dir = _get_images_dir()
 1589 
 1590         # If the pool is a known libvirt one, skip the filename since a libvirt volume will be created later
 1591         if base_dir not in conn.listStoragePools():
 1592             # For path-based disks, keep the qcow2 default format
 1593             if not disk.get("format"):
 1594                 disk["format"] = "qcow2"
 1595             disk["filename"] = "{}.{}".format(disk["filename"], disk["format"])
 1596             disk["source_file"] = os.path.join(base_dir, disk["filename"])
 1597         else:
 1598             if "pool" not in disk:
 1599                 disk["pool"] = base_dir
 1600             pool_obj = conn.storagePoolLookupByName(base_dir)
 1601             pool_xml = ElementTree.fromstring(pool_obj.XMLDesc())
 1602             pool_type = pool_xml.get("type")
 1603 
 1604             # Disk pool volume names are partition names; they need to be named after the pool's device
 1605             if pool_type == "disk":
 1606                 device = pool_xml.find("./source/device").get("path")
 1607                 all_volumes = pool_obj.listVolumes()
 1608                 if disk.get("source_file") not in all_volumes:
 1609                     indexes = [
 1610                         int(re.sub("[a-z]+", "", vol_name)) for vol_name in all_volumes
 1611                     ] or [0]
 1612                     index = min(
 1613                         [
 1614                             idx
 1615                             for idx in range(1, max(indexes) + 2)
 1616                             if idx not in indexes
 1617                         ]
 1618                     )
 1619                     disk["filename"] = "{}{}".format(os.path.basename(device), index)
 1620 
 1621             # Is the user wanting to reuse an existing volume?
 1622             if disk.get("source_file"):
 1623                 if disk.get("source_file") not in pool_obj.listVolumes():
 1624                     raise SaltInvocationError(
 1625                         "{} volume doesn't exist in pool {}".format(
 1626                             disk.get("source_file"), base_dir
 1627                         )
 1628                     )
 1629                 disk["filename"] = disk["source_file"]
 1630                 del disk["source_file"]
 1631 
 1632             # Get the default format from the pool capabilities
 1633             if not disk.get("format"):
 1634                 volume_options = (
 1635                     [
 1636                         type_caps.get("options", {}).get("volume", {})
 1637                         for type_caps in pool_caps.get("pool_types")
 1638                         if type_caps["name"] == pool_type
 1639                     ]
 1640                     or [{}]
 1641                 )[0]
 1642                 # Still prefer qcow2 if possible
 1643                 if "qcow2" in volume_options.get("targetFormatType", []):
 1644                     disk["format"] = "qcow2"
 1645                 else:
 1646                     disk["format"] = volume_options.get("default_format", None)
 1647 
 1648     elif hypervisor == "bhyve" and vm_name:
 1649         disk["filename"] = "{}.{}".format(vm_name, disk["name"])
 1650         disk["source_file"] = os.path.join(
 1651             "/dev/zvol", base_dir or "", disk["filename"]
 1652         )
 1653 
 1654     elif hypervisor in ["esxi", "vmware"]:
 1655         if not base_dir:
 1656             base_dir = __salt__["config.get"]("virt:storagepool", "[0] ")
 1657         disk["filename"] = "{}.{}".format(disk["filename"], disk["format"])
 1658         disk["source_file"] = "{}{}".format(base_dir, disk["filename"])
 1659 
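For "disk" type pools, the index selection above picks the smallest positive partition index not already in use. A standalone sketch of that logic:

```python
# Sketch of the free-partition-index selection used for "disk" type pools:
# existing volume names like "sdb1", "sdb3" yield indexes {1, 3}; the new
# volume takes the smallest positive index not in use (here 2).
import re


def next_partition_index(volume_names):
    indexes = [int(re.sub("[a-z]+", "", name)) for name in volume_names] or [0]
    return min(idx for idx in range(1, max(indexes) + 2) if idx not in indexes)


next_partition_index(["sdb1", "sdb3"])  # → 2
next_partition_index([])                # → 1
```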
 1660 
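The default-format lookup against the pool capabilities can also be sketched in isolation; the `caps` structure below is a hand-built approximation of what `_pool_capabilities` returns, not real output:

```python
# Sketch of default-format selection from pool capabilities: prefer qcow2
# when the pool's volume target formats allow it, else the pool's default.
# The caps dict below is an illustrative approximation, not real API output.
def pick_format(pool_caps, pool_type):
    volume_options = (
        [
            type_caps.get("options", {}).get("volume", {})
            for type_caps in pool_caps.get("pool_types", [])
            if type_caps["name"] == pool_type
        ]
        or [{}]
    )[0]
    if "qcow2" in volume_options.get("targetFormatType", []):
        return "qcow2"
    return volume_options.get("default_format")


caps = {
    "pool_types": [
        {
            "name": "dir",
            "options": {
                "volume": {
                    "targetFormatType": ["raw", "qcow2"],
                    "default_format": "raw",
                }
            },
        }
    ]
}
pick_format(caps, "dir")  # → 'qcow2'
```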
 1661 def _complete_nics(interfaces, hypervisor):
 1662     """
 1663     Complete missing data for network interfaces.
 1664     """
 1665 
 1666     vmware_overlay = {"type": "bridge", "source": "DEFAULT", "model": "e1000"}
 1667     kvm_overlay = {"type": "bridge", "source": "br0", "model": "virtio"}
 1668     xen_overlay = {"type": "bridge", "source": "br0", "model": None}
 1669     bhyve_overlay = {"type": "bridge", "source": "bridge0", "model": "virtio"}
 1670     overlays = {
 1671         "xen": xen_overlay,
 1672         "kvm": kvm_overlay,
 1673         "qemu": kvm_overlay,
 1674         "vmware": vmware_overlay,
 1675         "bhyve": bhyve_overlay,
 1676     }
 1677 
 1678     def _normalize_net_types(attributes):
 1679         """
 1680         Guess which style of definition:
 1681 
 1682             bridge: br0
 1683 
 1684              or
 1685 
 1686             network: net0
 1687 
 1688              or
 1689 
 1690             type: network
 1691             source: net0
 1692         """
 1693         for type_ in ["bridge", "network"]:
 1694             if type_ in attributes:
 1695                 attributes["type"] = type_
 1696                 # we want to discard the original key
 1697                 attributes["source"] = attributes.pop(type_)
 1698 
 1699         attributes["type"] = attributes.get("type", None)
 1700         attributes["source"] = attributes.get("source", None)
 1701 
 1702     def _apply_default_overlay(attributes):
 1703         """
 1704         Apply the default overlay to attributes
 1705         """
 1706         for key, value in overlays[hypervisor].items():
 1707             if key not in attributes or not attributes[key]:
 1708                 attributes[key] = value
 1709 
 1710     for interface in interfaces:
 1711         _normalize_net_types(interface)
 1712         if hypervisor in overlays:
 1713             _apply_default_overlay(interface)
 1714 
 1715     return interfaces
 1716 
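The shorthand normalization described in `_normalize_net_types` can be exercised on its own; a minimal standalone sketch of that rewrite:

```python
# Standalone sketch of the normalization performed by _normalize_net_types:
# shorthand keys ("bridge: br0" or "network: net0") are rewritten into the
# canonical "type"/"source" pair used by the rest of the module.
def normalize_net_types(attributes):
    for type_ in ["bridge", "network"]:
        if type_ in attributes:
            attributes["type"] = type_
            # discard the original shorthand key
            attributes["source"] = attributes.pop(type_)
    attributes.setdefault("type", None)
    attributes.setdefault("source", None)
    return attributes


nic = {"name": "eth0", "bridge": "br0"}
normalize_net_types(nic)
# nic is now {"name": "eth0", "type": "bridge", "source": "br0"}
```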
 1717 
 1718 def _nic_profile(profile_name, hypervisor):
 1719     """
 1720     Compute NIC data based on profile
 1721     """
 1722     config_data = __salt__["config.get"]("virt:nic", {}).get(
 1723         profile_name, [{"eth0": {}}]
 1724     )
 1725 
 1726     interfaces = []
 1727 
 1728     # pylint: disable=invalid-name
 1729     def append_dict_profile_to_interface_list(profile_dict):
 1730         """
 1731         Append dictionary profile data to interfaces list
 1732         """
 1733         for interface_name, attributes in profile_dict.items():
 1734             attributes["name"] = interface_name
 1735             interfaces.append(attributes)
 1736 
 1737     # old style dicts (top-level dicts)
 1738     #
 1739     # virt:
 1740     #    nic:
 1741     #        eth0:
 1742     #            bridge: br0
 1743     #        eth1:
 1744     #            network: test_net
 1745     if isinstance(config_data, dict):
 1746         append_dict_profile_to_interface_list(config_data)
 1747 
 1748     # new style lists (may contain dicts)
 1749     #
 1750     # virt:
 1751     #   nic:
 1752     #     - eth0:
 1753     #         bridge: br0
 1754     #     - eth1:
 1755     #         network: test_net
 1756     #
 1757     # virt:
 1758     #   nic:
 1759     #     - name: eth0
 1760     #       bridge: br0
 1761     #     - name: eth1
 1762     #       network: test_net
 1763     elif isinstance(config_data, list):
 1764         for interface in config_data:
 1765             if isinstance(interface, dict):
 1766                 if len(interface) == 1:
 1767                     append_dict_profile_to_interface_list(interface)
 1768                 else:
 1769                     interfaces.append(interface)
 1770 
 1771     return _complete_nics(interfaces, hypervisor)
 1772 
 1773 
 1774 def _get_merged_nics(hypervisor, profile, interfaces=None):
 1775     """
 1776     Get network devices from the profile and merge user-defined ones with them.
 1777     """
 1778     nicp = _nic_profile(profile, hypervisor) if profile else []
 1779     log.debug("NIC profile is %s", nicp)
 1780     if interfaces:
 1781         users_nics = _complete_nics(interfaces, hypervisor)
 1782         for unic in users_nics:
 1783             found = [nic for nic in nicp if nic["name"] == unic["name"]]
 1784             if found:
 1785                 found[0].update(unic)
 1786             else:
 1787                 nicp.append(unic)
 1788         log.debug("Merged NICs: %s", nicp)
 1789     return nicp
 1790 
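The merge strategy in `_get_merged_nics` matches NICs by name: a user NIC updates the profile NIC of the same name, otherwise it is appended. A self-contained sketch:

```python
# Sketch of the merge in _get_merged_nics: profile NICs are updated in place
# by user NICs with the same "name"; unmatched user NICs are appended.
def merge_nics(profile_nics, user_nics):
    merged = [dict(nic) for nic in profile_nics]  # copy so the profile is untouched
    for unic in user_nics:
        found = [nic for nic in merged if nic["name"] == unic["name"]]
        if found:
            found[0].update(unic)
        else:
            merged.append(unic)
    return merged


profile = [{"name": "eth0", "source": "br0", "model": "virtio"}]
user = [{"name": "eth0", "source": "br1"}, {"name": "eth1", "source": "br0"}]
merged = merge_nics(profile, user)
# → [{'name': 'eth0', 'source': 'br1', 'model': 'virtio'},
#    {'name': 'eth1', 'source': 'br0'}]
```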
 1791 
 1792 def _handle_remote_boot_params(orig_boot):
 1793     """
 1794     Checks if the boot parameters contain a remote path. If so, it will copy
 1795     the parameters, download the files specified in the remote path, and return
 1796     a new dictionary with updated paths containing the canonical path to the
 1797     kernel and/or initrd
 1798 
 1799     :param orig_boot: The original boot parameters passed to the init or update
 1800     functions.
 1801     """
 1802     saltinst_dir = None
 1803     new_boot = orig_boot.copy()
 1804     keys = orig_boot.keys()
 1805     cases = [
 1806         {"efi"},
 1807         {"kernel", "initrd", "efi"},
 1808         {"kernel", "initrd", "cmdline", "efi"},
 1809         {"loader", "nvram"},
 1810         {"kernel", "initrd"},
 1811         {"kernel", "initrd", "cmdline"},
 1812         {"kernel", "initrd", "loader", "nvram"},
 1813         {"kernel", "initrd", "cmdline", "loader", "nvram"},
 1814     ]
 1815 
 1816     try:
 1817         if keys in cases:
 1818             for key in keys:
 1819                 if key == "efi" and isinstance(orig_boot.get(key), bool):
 1820                     new_boot[key] = orig_boot.get(key)
 1821                 elif orig_boot.get(key) is not None and salt.utils.virt.check_remote(
 1822                     orig_boot.get(key)
 1823                 ):
 1824                     if saltinst_dir is None:
 1825                         os.makedirs(CACHE_DIR)
 1826                         saltinst_dir = CACHE_DIR
 1827                     new_boot[key] = salt.utils.virt.download_remote(
 1828                         orig_boot.get(key), saltinst_dir
 1829                     )
 1830             return new_boot
 1831         else:
 1832             raise SaltInvocationError(
 1833                 "Invalid boot parameters. They must follow one of these combinations: [(kernel, initrd) and/or cmdline] and/or [(loader, nvram) or efi]"
 1834             )
 1835     except Exception as err:  # pylint: disable=broad-except
 1836         raise err
 1837 
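The key-set validation above relies on the fact that a dict's key view compares equal to a set with the same members. A standalone sketch of the check:

```python
# Sketch of the boot-parameter validation in _handle_remote_boot_params:
# the set of provided keys must match one of the allowed combinations.
VALID_CASES = [
    {"efi"},
    {"kernel", "initrd", "efi"},
    {"kernel", "initrd", "cmdline", "efi"},
    {"loader", "nvram"},
    {"kernel", "initrd"},
    {"kernel", "initrd", "cmdline"},
    {"kernel", "initrd", "loader", "nvram"},
    {"kernel", "initrd", "cmdline", "loader", "nvram"},
]


def is_valid_boot(boot):
    return set(boot) in VALID_CASES


is_valid_boot({"kernel": "/boot/vmlinuz", "initrd": "/boot/initrd"})  # → True
is_valid_boot({"cmdline": "console=ttyS0"})                           # → False
```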
 1838 
 1839 def _handle_efi_param(boot, desc):
 1840     """
 1841     Checks if boot parameter contains efi boolean value, if so, handles the firmware attribute.
 1842     :param boot: The boot parameters passed to the init or update functions.
 1843     :param desc: The XML description of that domain.
 1844     :return: A boolean value.
 1845     """
 1846     efi_value = boot.get("efi", None) if boot else None
 1847     parent_tag = desc.find("os")
 1848     os_attrib = parent_tag.attrib
 1849 
 1850     # newly defined VM that has not run yet: the loader tag might not be filled in
 1851     if efi_value is False and os_attrib != {}:
 1852         parent_tag.attrib.pop("firmware", None)
 1853         return True
 1854 
 1855     # handle the case where the loader tag is present, which happens after the VM has run
 1856     elif isinstance(efi_value, bool) and os_attrib == {}:
 1857         if efi_value is True and parent_tag.find("loader") is None:
 1858             parent_tag.set("firmware", "efi")
 1859         if efi_value is False and parent_tag.find("loader") is not None:
 1860             parent_tag.remove(parent_tag.find("loader"))
 1861             parent_tag.remove(parent_tag.find("nvram"))
 1862         return True
 1863     elif not isinstance(efi_value, bool):
 1864         raise SaltInvocationError("Invalid efi value")
 1865     return False
 1866 
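The `efi: True` branch boils down to stamping a `firmware="efi"` attribute on the `<os>` element when no explicit `<loader>` is present. A minimal sketch with a toy domain XML (the XML here is illustrative, not a full libvirt definition):

```python
# Minimal sketch of the "efi: True" handling: when no <loader> is present,
# firmware="efi" is set on <os> so libvirt auto-selects a UEFI firmware.
# The domain XML below is a toy fragment, not a complete libvirt definition.
from xml.etree import ElementTree

desc = ElementTree.fromstring("<domain><os><type>hvm</type></os></domain>")
os_tag = desc.find("os")
if os_tag.find("loader") is None:
    os_tag.set("firmware", "efi")

ElementTree.tostring(desc).decode()
# '<domain><os firmware="efi"><type>hvm</type></os></domain>'
```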
 1867 
 1868 def init(
 1869     name,
 1870     cpu,
 1871     mem,
 1872     nic="default",
 1873     interfaces=None,
 1874     hypervisor=None,
 1875     start=True,  # pylint: disable=redefined-outer-name
 1876     disk="default",
 1877     disks=None,
 1878     saltenv="base",
 1879     seed=True,
 1880     install=True,
 1881     pub_key=None,
 1882     priv_key=None,
 1883     seed_cmd="seed.apply",
 1884     graphics=None,
 1885     os_type=None,
 1886     arch=None,
 1887     boot=None,
 1888     boot_dev=None,
 1889     **kwargs
 1890 ):
 1891     """
 1892     Initialize a new vm
 1893 
 1894     :param name: name of the virtual machine to create
 1895     :param cpu: Number of virtual CPUs to assign to the virtual machine
 1896     :param mem: Amount of memory to allocate to the virtual machine in MiB. Since 3002, a dictionary can be used to
 1897         contain detailed configuration which supports memory allocation or tuning. Supported parameters are ``boot``,
 1898         ``current``, ``max``, ``slots``, ``hard_limit``, ``soft_limit``, ``swap_hard_limit`` and ``min_guarantee``. The
 1899         structure of the dictionary is documented in :ref:`init-mem-def`. Both decimal and binary base are supported.
 1900         Detailed unit specification is documented in :ref:`virt-units`. Please note that the value for ``slots`` must be
 1901         an integer.
 1902 
 1903         .. code-block:: python
 1904 
 1905             {
 1906                 'boot': '1g',
 1907                 'current': '1g',
 1908                 'max': '1g',
 1909                 'slots': 10,
 1910                 'hard_limit': '1024',
 1911                 'soft_limit': '512m',
 1912                 'swap_hard_limit': '1g',
 1913                 'min_guarantee': '512mib'
 1914             }
 1915 
 1916         .. versionchanged:: 3002
 1917 
 1918     :param nic: NIC profile to use (Default: ``'default'``).
 1919                 The profile interfaces can be customized / extended with the interfaces parameter.
 1920                 If set to ``None``, no profile will be used.
 1921     :param interfaces:
 1922         List of dictionaries providing details on the network interfaces to create.
 1923         These data are merged with the ones from the nic profile. The structure of
 1924         each dictionary is documented in :ref:`init-nic-def`.
 1925 
 1926         .. versionadded:: 2019.2.0
 1927     :param hypervisor: the virtual machine type. By default the value will be computed according
 1928                        to the virtual host capabilities.
 1929     :param start: ``True`` to start the virtual machine after having defined it (Default: ``True``)
 1930     :param disk: Disk profile to use (Default: ``'default'``). If set to ``None``, no profile will be used.
 1931     :param disks: List of dictionaries providing details on the disk devices to create.
 1932                   These data are merged with the ones from the disk profile. The structure of
 1933                   each dictionary is documented in :ref:`init-disk-def`.
 1934 
 1935                   .. versionadded:: 2019.2.0
 1936     :param saltenv: Fileserver environment (Default: ``'base'``).
 1937                     See :mod:`cp module for more details <salt.modules.cp>`
 1938     :param seed: ``True`` to seed the disk image. Only used when the ``image`` parameter is provided.
 1939                  (Default: ``True``)
 1940     :param install: install salt minion if absent (Default: ``True``)
 1941     :param pub_key: public key to seed with (Default: ``None``)
 1942     :param priv_key: private key to seed with (Default: ``None``)
 1943     :param seed_cmd: Salt command to execute to seed the image. (Default: ``'seed.apply'``)
 1944     :param graphics:
 1945         Dictionary providing details on the graphics device to create. (Default: ``None``)
 1946         See :ref:`init-graphics-def` for more details on the possible values.
 1947 
 1948         .. versionadded:: 2019.2.0
 1949     :param os_type:
 1950         type of virtualization as found in the ``//os/type`` element of the libvirt definition.
 1951         The default value is taken from the host capabilities, with a preference for ``hvm``.
 1952 
 1953         .. versionadded:: 2019.2.0
 1954     :param arch:
 1955         architecture of the virtual machine. The default value is taken from the host capabilities,
 1956         but ``x86_64`` is preferred over ``i686``.
 1957 
 1958         .. versionadded:: 2019.2.0
 1959     :param config: minion configuration to use when seeding.
 1960                    See :mod:`seed module for more details <salt.modules.seed>`
 1962     :param serial_type: Serial device type. One of ``'pty'``, ``'tcp'`` (Default: ``None``)
 1963     :param telnet_port: Telnet port to use for serial device of type ``tcp``.
 1964     :param console: ``True`` to add a console device along with serial one (Default: ``True``)
 1965     :param connection: libvirt connection URI, overriding defaults
 1966 
 1967                        .. versionadded:: 2019.2.0
 1968     :param username: username to connect with, overriding defaults
 1969 
 1970                      .. versionadded:: 2019.2.0
 1971     :param password: password to connect with, overriding defaults
 1972 
 1973                      .. versionadded:: 2019.2.0
 1974     :param boot:
 1975         Specifies kernel, initial ramdisk and kernel command line parameters for the virtual machine.
 1976         This is an optional parameter, all of the keys are optional within the dictionary. The structure of
 1977         the dictionary is documented in :ref:`init-boot-def`. If a remote path is provided to kernel or initrd,
 1978         salt will handle the downloading of the specified remote file and modify the XML accordingly.
 1979         To boot VM with UEFI, specify loader and nvram path or specify 'efi': ``True`` if your libvirtd version
 1980         is >= 5.2.0 and QEMU >= 3.0.0.
 1981 
 1982         .. versionadded:: 3000
 1983 
 1984         .. code-block:: python
 1985 
 1986             {
 1987                 'kernel': '/root/f8-i386-vmlinuz',
 1988                 'initrd': '/root/f8-i386-initrd',
 1989                 'cmdline': 'console=ttyS0 ks=http://example.com/f8-i386/os/',
 1990                 'loader': '/usr/share/OVMF/OVMF_CODE.fd',
 1991                 'nvram': '/usr/share/OVMF/OVMF_VARS.ms.fd'
 1992             }
 1993 
 1994     :param boot_dev:
 1995         Space separated list of devices to boot from sorted by decreasing priority.
 1996         Values can be ``hd``, ``fd``, ``cdrom`` or ``network``.
 1997 
 1998         By default, the value will be ``hd``.
 1999 
 2000     .. _init-boot-def:
 2001 
 2002     .. rubric:: Boot parameters definition
 2003 
 2004     The boot parameters dictionary can contain the following properties:
 2005 
 2006     kernel
 2007         The URL or path to the kernel to run the virtual machine with.
 2008 
 2009     initrd
 2010         The URL or path to the initrd file to run the virtual machine with.
 2011 
 2012     cmdline
 2013         The parameters to pass to the kernel provided in the `kernel` property.
 2014 
 2015     loader
 2016         The path to the UEFI binary loader to use.
 2017 
 2018         .. versionadded:: 3001
 2019 
 2020     nvram
 2021         The path to the UEFI data template. The file will be copied when creating the virtual machine.
 2022 
 2023         .. versionadded:: 3001
 2024 
 2025     efi
 2026        A boolean value. When ``True``, libvirt automatically selects a matching UEFI firmware
 2027        (requires libvirt >= 5.2.0 and QEMU >= 3.0.0).
 2028 
 2029        .. versionadded:: 3001
 2029 
 2030     .. _init-mem-def:
 2031 
 2032     .. rubric:: Memory parameter definition
 2033 
 2034     Memory parameter can contain the following properties:
 2035 
 2036     boot
 2037         The maximum allocation of memory for the guest at boot time
 2038 
 2039     current
 2040         The actual allocation of memory for the guest
 2041 
 2042     max
 2043         The run time maximum memory allocation of the guest
 2044 
 2045     slots
 2046          specifies the number of slots available for adding memory to the guest
 2047 
 2048     hard_limit
 2049         the maximum memory the guest can use
 2050 
 2051     soft_limit
 2052         memory limit to enforce during memory contention
 2053 
 2054     swap_hard_limit
 2055         the maximum memory plus swap the guest can use
 2056 
 2057     min_guarantee
 2058         the guaranteed minimum memory allocation for the guest
 2059 
 2060     .. _init-nic-def:
 2061 
 2062     .. rubric:: Network Interfaces Definitions
 2063 
 2064     Network interfaces dictionaries can contain the following properties:
 2065 
 2066     name
 2067         Name of the network interface. This is only used as a key to merge with the profile data
 2068 
 2069     type
 2070         Network type. One of ``'bridge'``, ``'network'``
 2071 
 2072     source
 2073         The network source, typically the bridge or network name
 2074 
 2075     mac
 2076         The desired mac address, computed if ``None`` (Default: ``None``).
 2077 
 2078     model
 2079         The network card model (Default: depends on the hypervisor)
 2080 
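As a hedged illustration of these properties together (interface names, bridge, network and MAC address below are placeholders), an ``interfaces`` argument might look like:

```python
# Example ``interfaces`` argument; names, sources and MAC are placeholders.
interfaces = [
    {"name": "eth0", "type": "bridge", "source": "br0", "model": "virtio"},
    {"name": "eth1", "type": "network", "source": "admin", "mac": "52:54:00:11:22:33"},
]
```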
 2081     .. _init-disk-def:
 2082 
 2083     .. rubric:: Disks Definitions
 2084 
 2085     Disk dictionaries can contain the following properties:
 2086 
 2087     name
 2088         Name of the disk. This is mostly used in the name of the disk image and as a key to merge
 2089         with the profile data.
 2090 
 2091     format
 2092         Format of the disk image, like ``'qcow2'``, ``'raw'``, ``'vmdk'``.
 2093         (Default: depends on the hypervisor)
 2094 
 2095     size
 2096         Disk size in MiB
 2097 
 2098     pool
 2099         Path to the folder or name of the pool where disks should be created.
 2100         (Default: depends on hypervisor and the virt:storagepool configuration)
 2101 
 2102         .. versionchanged:: 3001
 2103 
 2104         If the value contains no '/', it is considered a pool name in which to create a volume.
 2105         Using volumes is mandatory for some pool types like rbd, iscsi, etc.
 2106 
 2107     model
 2108         One of the disk buses allowed by libvirt (Default: depends on hypervisor)
 2109 
 2110         See the libvirt `disk element`_ documentation for the allowed bus types.
 2111 
 2112     image
 2113         Path to the image to use for the disk. If no image is provided, an empty disk will be created
 2114         (Default: ``None``)
 2115 
 2116         Note that some pool types do not support uploading an image. This list can evolve with libvirt
 2117         versions.
 2118 
 2119     overlay_image
 2120         ``True`` to create a QCOW2 disk image with ``image`` as backing file. If ``False``
 2121         the file pointed to by the ``image`` property will simply be copied. (Default: ``False``)
 2122 
 2123         .. versionchanged:: 3001
 2124 
 2125         This property is only valid on path-based disks, not on volumes. To create a volume with a
 2126         backing store, set the ``backing_store_path`` and ``backing_store_format`` properties.
 2127 
 2128     backing_store_path
 2129         Path to the backing store image to use. This can also be the name of a volume to use as
 2130         backing store within the same pool.
 2131 
 2132         .. versionadded:: 3001
 2133 
 2134     backing_store_format
 2135         Image format of the disk or volume to use as backing store. This property is mandatory when
 2136         using ``backing_store_path`` to avoid `problems <https://libvirt.org/kbase/backing_chains.html#troubleshooting>`_
 2137 
 2138         .. versionadded:: 3001
 2139 
 2140     source_file
 2141         Absolute path to the disk image to use. Not to be confused with ``image`` parameter. This
 2142         parameter is useful to use disk images that are created outside of this module. Can also
 2143         be ``None`` for devices that have no associated image like cdroms.
 2144 
 2145         .. versionchanged:: 3001
 2146 
 2147         For volume disks, this can be the name of a volume already existing in the storage pool.
 2148 
 2149     device
 2150         Type of device of the disk. Can be one of 'disk', 'cdrom', 'floppy' or 'lun'.
 2151         (Default: ``'disk'``)
 2152 
 2153     hostname_property
 2154         When using ZFS volumes, setting this value to a ZFS property ID will make Salt store the name of the
 2155         virtual machine inside this property. (Default: ``None``)
 2156 
 2157     sparse_volume
 2158         Boolean to specify whether to use a thin provisioned ZFS volume.
 2159 
 2160         Example profile for a bhyve VM with two ZFS disks. The first is
 2161         cloned from the specified image. The second disk is a thin
 2162         provisioned volume.
 2163 
 2164         .. code-block:: yaml
 2165 
 2166             virt:
 2167               disk:
 2168                 two_zvols:
 2169                   - system:
 2170                       image: zroot/bhyve/CentOS-7-x86_64-v1@v1.0.5
 2171                       hostname_property: virt:hostname
 2172                       pool: zroot/bhyve/guests
 2173                   - data:
 2174                       pool: tank/disks
 2175                       size: 20G
 2176                       hostname_property: virt:hostname
 2177                       sparse_volume: True
 2178 
 2179     .. _init-graphics-def:
 2180 
 2181     .. rubric:: Graphics Definition
 2182 
 2183     The graphics dictionary can have the following properties:
 2184 
 2185     type
 2186         Graphics type. The possible values are ``none``, ``'spice'``, ``'vnc'`` and other values
 2187         allowed as a libvirt graphics type (Default: ``None``)
 2188 
 2189         See the libvirt `graphics element`_ documentation for more details on the possible types.
 2190 
 2191     port
 2192         Port to export the graphics on for ``vnc``, ``spice`` and ``rdp`` types.
 2193 
 2194     tls_port
 2195         Port to export the graphics over a secured connection for ``spice`` type.
 2196 
 2197     listen
 2198         Dictionary defining on what address to listen on for ``vnc``, ``spice`` and ``rdp``.
 2199         It has a ``type`` property with ``address`` and ``None`` as possible values, and an
 2200         ``address`` property holding the IP or hostname to listen on.
 2201 
 2202         By default, not setting the ``listen`` part of the dictionary will default to
 2203         listen on all addresses.
 2204 
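Combining these properties, a hedged example of a ``graphics`` argument (the port and address are placeholders):

```python
# Example ``graphics`` argument: a VNC server on port 5900 listening on all
# addresses. Values are placeholders following the properties above.
graphics = {
    "type": "vnc",
    "port": 5900,
    "listen": {"type": "address", "address": "0.0.0.0"},
}
```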
 2205     .. rubric:: CLI Example
 2206 
 2207     .. code-block:: bash
 2208 
 2209         salt 'hypervisor' virt.init vm_name 4 512 salt://path/to/image.raw
 2210         salt 'hypervisor' virt.init vm_name 4 512 /var/lib/libvirt/images/img.raw
 2211         salt 'hypervisor' virt.init vm_name 4 512 nic=profile disk=profile
 2212 
 2213     The disk images will be created in an image folder within the directory
 2214     defined by the ``virt:images`` option. Its default value is
 2215     ``/srv/salt-images/`` but this can be changed with a configuration such as:
 2216 
 2217     .. code-block:: yaml
 2218 
 2219         virt:
 2220             images: /data/my/vm/images/
 2221 
 2222     .. _disk element: https://libvirt.org/formatdomain.html#elementsDisks
 2223     .. _graphics element: https://libvirt.org/formatdomain.html#elementsGraphics
 2224     """
 2225     try:
 2226         conn = __get_conn(**kwargs)
 2227         caps = _capabilities(conn)
 2228         os_types = sorted({guest["os_type"] for guest in caps["guests"]})
 2229         arches = sorted({guest["arch"]["name"] for guest in caps["guests"]})
 2230 
 2231         virt_hypervisor = hypervisor
 2232         if not virt_hypervisor:
 2233             # Use the machine types as possible values
 2234             # Prefer 'kvm' over the others if available
 2235             hypervisors = sorted(
 2236                 {
 2237                     x
 2238                     for y in [
 2239                         guest["arch"]["domains"].keys() for guest in caps["guests"]
 2240                     ]
 2241                     for x in y
 2242                 }
 2243             )
 2244             if len(hypervisors) == 0:
 2245                 raise SaltInvocationError("No supported hypervisors were found")
 2246             virt_hypervisor = "kvm" if "kvm" in hypervisors else hypervisors[0]
 2247 
 2248         # esxi used to be a possible value for the hypervisor: map it to vmware since it's the same
 2249         virt_hypervisor = "vmware" if virt_hypervisor == "esxi" else virt_hypervisor
 2250 
 2251         log.debug("Using hypervisor %s", virt_hypervisor)
 2252 
 2253         nicp = _get_merged_nics(virt_hypervisor, nic, interfaces)
 2254 
 2255         # the disks are computed as follows:
 2256         # 1 - get the disks defined in the profile
 2257         # 2 - update the disks from the profile with the ones from the user. The matching key is the name.
 2258         diskp = _disk_profile(conn, disk, virt_hypervisor, disks, name)
 2259 
 2260         # Create multiple disks, empty or from specified images.
 2261         for _disk in diskp:
 2262             # No need to create an image for cdrom devices
 2263             if _disk.get("device", "disk") == "cdrom":
 2264                 continue
 2265 
 2266             log.debug("Creating disk for VM [ %s ]: %s", name, _disk)
 2267 
 2268             if virt_hypervisor == "vmware":
 2269                 if "image" in _disk:
 2270                     # TODO: we should be copying the image file onto the ESX host
 2271                     raise SaltInvocationError(
 2272                         "virt.init does not support image "
 2273                         "template in conjunction with esxi hypervisor"
 2274                     )
 2275                 else:
 2276                     # assume libvirt manages disks for us
 2277                     log.debug("Generating libvirt XML for %s", _disk)
 2278                     volume_name = "{}/{}".format(name, _disk["name"])
 2279                     filename = "{}.{}".format(volume_name, _disk["format"])
 2280                     vol_xml = _gen_vol_xml(
 2281                         filename, _disk["size"], format=_disk["format"]
 2282                     )
 2283                     _define_vol_xml_str(conn, vol_xml, pool=_disk.get("pool"))
 2284 
 2285             elif virt_hypervisor in ["qemu", "kvm", "xen"]:
 2286 
 2287                 def seeder(path):
 2288                     _seed_image(
 2289                         seed_cmd,
 2290                         path,
 2291                         name,
 2292                         kwargs.get("config"),
 2293                         install,
 2294                         pub_key,
 2295                         priv_key,
 2296                     )
 2297 
 2298                 create_overlay = _disk.get("overlay_image", False)
 2299                 format = _disk.get("format")
 2300                 if _disk.get("source_file"):
 2301                     if os.path.exists(_disk["source_file"]):
 2302                         img_dest = _disk["source_file"]
 2303                     else:
 2304                         img_dest = _qemu_image_create(_disk, create_overlay, saltenv)
 2305                 else:
 2306                     _disk_volume_create(conn, _disk, seeder if seed else None, saltenv)
 2307                     img_dest = None
 2308 
 2309                 # Seed only if there is an image specified
 2310                 if seed and img_dest and _disk.get("image", None):
 2311                     seeder(img_dest)
 2312 
 2313             elif virt_hypervisor in ["bhyve"]:
 2314                 img_dest = _zfs_image_create(
 2315                     vm_name=name,
 2316                     pool=_disk.get("pool"),
 2317                     disk_name=_disk.get("name"),
 2318                     disk_size=_disk.get("size"),
 2319                     disk_image_name=_disk.get("image"),
 2320                     hostname_property_name=_disk.get("hostname_property"),
 2321                     sparse_volume=_disk.get("sparse_volume"),
 2322                 )
 2323 
 2324             else:
 2325                 # Unknown hypervisor
 2326                 raise SaltInvocationError(
 2327                     "Unsupported hypervisor when handling disk image: {}".format(
 2328                         virt_hypervisor
 2329                     )
 2330                 )
 2331 
 2332         log.debug("Generating VM XML")
 2333         if os_type is None:
 2334             os_type = "hvm" if "hvm" in os_types else os_types[0]
 2335         if arch is None:
 2336             arch = "x86_64" if "x86_64" in arches else arches[0]
 2337 
 2338         if boot is not None:
 2339             boot = _handle_remote_boot_params(boot)
 2340 
 2341         vm_xml = _gen_xml(
 2342             conn,
 2343             name,
 2344             cpu,
 2345             mem,
 2346             diskp,
 2347             nicp,
 2348             virt_hypervisor,
 2349             os_type,
 2350             arch,
 2351             graphics,
 2352             boot,
 2353             boot_dev,
 2354             **kwargs
 2355         )
 2356         log.debug("New virtual machine definition: %s", vm_xml)
 2357         conn.defineXML(vm_xml)
 2358     except libvirt.libvirtError as err:
 2359         conn.close()
 2360         raise CommandExecutionError(err.get_error_message())
 2361 
 2362     if start:
 2363         log.debug("Starting VM %s", name)
 2364         _get_domain(conn, name).create()
 2365     conn.close()
 2366 
 2367     return True
 2368 
 2369 
def _disks_equal(disk1, disk2):
    """
    Test if two disk elements should be considered the same device
    """
    target1 = disk1.find("target")
    target2 = disk2.find("target")
    source1 = (
        disk1.find("source")
        if disk1.find("source") is not None
        else ElementTree.Element("source")
    )
    source2 = (
        disk2.find("source")
        if disk2.find("source") is not None
        else ElementTree.Element("source")
    )

    source1_dict = xmlutil.to_dict(source1, True)
    source2_dict = xmlutil.to_dict(source2, True)

    # Remove the index added by libvirt in the source for backing chain
    if source1_dict:
        source1_dict.pop("index", None)
    if source2_dict:
        source2_dict.pop("index", None)

    return (
        source1_dict == source2_dict
        and target1 is not None
        and target2 is not None
        and target1.get("bus") == target2.get("bus")
        and disk1.get("device", "disk") == disk2.get("device", "disk")
        and target1.get("dev") == target2.get("dev")
    )

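# The comparison above boils down to: equal ``source`` content (ignoring the
# ``index`` attribute libvirt adds for backing chains), plus matching bus,
# device kind and target name. A minimal self-contained sketch of that rule,
# using plain ElementTree and a simplified attribute dump standing in for
# ``xmlutil.to_dict``:

```python
import xml.etree.ElementTree as ElementTree


def _source_attrs(disk):
    # Simplified stand-in for xmlutil.to_dict: keep only the source attributes
    source = disk.find("source")
    attrs = dict(source.attrib) if source is not None else {}
    attrs.pop("index", None)  # index is added by libvirt for backing chains
    return attrs


def disks_equal(disk1, disk2):
    t1, t2 = disk1.find("target"), disk2.find("target")
    return (
        _source_attrs(disk1) == _source_attrs(disk2)
        and t1 is not None
        and t2 is not None
        and t1.get("bus") == t2.get("bus")
        and disk1.get("device", "disk") == disk2.get("device", "disk")
        and t1.get("dev") == t2.get("dev")
    )


old = ElementTree.fromstring(
    '<disk device="disk"><source file="/img/a.qcow2" index="1"/>'
    '<target dev="vda" bus="virtio"/></disk>'
)
new = ElementTree.fromstring(
    '<disk device="disk"><source file="/img/a.qcow2"/>'
    '<target dev="vda" bus="virtio"/></disk>'
)
```

# With this data, ``disks_equal(old, new)`` matches despite the extra ``index``,
# while changing the target ``dev`` breaks the match.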
def _nics_equal(nic1, nic2):
    """
    Test if two interface elements should be considered the same device
    """

    def _filter_nic(nic):
        """
        Filter out elements to ignore when comparing nics
        """
        return {
            "type": nic.attrib["type"],
            "source": nic.find("source").attrib[nic.attrib["type"]]
            if nic.find("source") is not None
            else None,
            "model": nic.find("model").attrib["type"]
            if nic.find("model") is not None
            else None,
        }

    def _get_mac(nic):
        return (
            nic.find("mac").attrib["address"].lower()
            if nic.find("mac") is not None
            else None
        )

    mac1 = _get_mac(nic1)
    mac2 = _get_mac(nic2)
    macs_equal = not mac1 or not mac2 or mac1 == mac2
    return _filter_nic(nic1) == _filter_nic(nic2) and macs_equal

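# Note the deliberately lenient MAC rule above: a missing MAC address on either
# side never blocks a match, so a user-supplied interface definition without a
# ``mac`` element can still pair with the live definition that has one. The rule
# in isolation (lowercasing folded in for the sketch):

```python
def macs_equal(mac1, mac2):
    # A missing MAC (None or empty string) on either side acts as a wildcard
    return not mac1 or not mac2 or mac1.lower() == mac2.lower()
```

# So ``macs_equal(None, "52:54:00:aa:bb:cc")`` matches, and comparison is
# case-insensitive, but two different concrete addresses do not match.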
def _graphics_equal(gfx1, gfx2):
    """
    Test if two graphics devices should be considered the same device
    """

    def _filter_graphics(gfx):
        """
        When the domain is running, the graphics element may contain additional
        properties with default values. This function strips those default values.
        """
        gfx_copy = copy.deepcopy(gfx)

        defaults = [
            {"node": ".", "attrib": "port", "values": ["5900", "-1"]},
            {"node": ".", "attrib": "address", "values": ["127.0.0.1"]},
            {"node": "listen", "attrib": "address", "values": ["127.0.0.1"]},
        ]

        for default in defaults:
            node = gfx_copy.find(default["node"])
            attrib = default["attrib"]
            if node is not None and (
                attrib in node.attrib and node.attrib[attrib] in default["values"]
            ):
                node.attrib.pop(attrib)
        return gfx_copy

    return xmlutil.to_dict(_filter_graphics(gfx1), True) == xmlutil.to_dict(
        _filter_graphics(gfx2), True
    )

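# The default-stripping step is what lets a running domain's expanded graphics
# element compare equal to the terse user-supplied one. A self-contained sketch
# of that normalization (toy data, plain ElementTree):

```python
import copy
import xml.etree.ElementTree as ElementTree

# (node path, attribute, default values to drop) -- mirrors the defaults above
DEFAULTS = [
    (".", "port", {"5900", "-1"}),
    (".", "address", {"127.0.0.1"}),
    ("listen", "address", {"127.0.0.1"}),
]


def strip_defaults(gfx):
    gfx = copy.deepcopy(gfx)
    for path, attrib, values in DEFAULTS:
        node = gfx.find(path)
        if node is not None and node.attrib.get(attrib) in values:
            node.attrib.pop(attrib)
    return gfx


# libvirt filled in port="5900" on the running domain; the wanted config omits it
running = ElementTree.fromstring('<graphics type="vnc" port="5900" autoport="yes"/>')
wanted = ElementTree.fromstring('<graphics type="vnc" autoport="yes"/>')
```

# After stripping, both elements carry the same attributes and compare equal.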
def _diff_lists(old, new, comparator):
    """
    Compare lists to extract the changes

    :param old: old list
    :param new: new list
    :param comparator: function taking one element from each list and returning
        ``True`` if they should be considered the same device
    :return: a dictionary with ``unchanged``, ``new``, ``deleted`` and ``sorted`` keys

    The sorted list is the union of the unchanged and new lists, keeping the
    original order from the new list.
    """

    def _remove_indent(node):
        """
        Remove the XML indentation to compare XML trees more easily
        """
        node_copy = copy.deepcopy(node)
        node_copy.text = None
        for item in node_copy.iter():
            item.tail = None
        return node_copy

    diff = {"unchanged": [], "new": [], "deleted": [], "sorted": []}
    # We don't want to alter old since it may be used later by the caller
    old_devices = copy.deepcopy(old)
    for new_item in new:
        found = [
            item
            for item in old_devices
            if comparator(_remove_indent(item), _remove_indent(new_item))
        ]
        if found:
            old_devices.remove(found[0])
            diff["unchanged"].append(found[0])
            diff["sorted"].append(found[0])
        else:
            diff["new"].append(new_item)
            diff["sorted"].append(new_item)
    diff["deleted"] = old_devices
    return diff

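# The same matching strategy works on any list, not just device XML. A toy run
# with plain strings and a case-insensitive comparator (hypothetical data) shows
# how the four buckets are filled:

```python
import copy


def diff_lists(old, new, comparator):
    diff = {"unchanged": [], "new": [], "deleted": [], "sorted": []}
    old_items = copy.deepcopy(old)  # don't mutate the caller's list
    for new_item in new:
        found = [item for item in old_items if comparator(item, new_item)]
        if found:
            # First match is consumed so it can't pair with a later new item
            old_items.remove(found[0])
            diff["unchanged"].append(found[0])
            diff["sorted"].append(found[0])
        else:
            diff["new"].append(new_item)
            diff["sorted"].append(new_item)
    diff["deleted"] = old_items  # whatever was never matched
    return diff


diff = diff_lists(["vda", "VDB"], ["vdb", "vdc"], lambda a, b: a.lower() == b.lower())
```

# "VDB" pairs with "vdb" (unchanged), "vdc" is new, "vda" was never matched and
# ends up deleted; "sorted" follows the order of the new list.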
def _get_disk_target(targets, disks_count, prefix):
    """
    Compute the disk target name for a given prefix.

    :param targets: the list of already computed targets
    :param disks_count: the number of disks
    :param prefix: the prefix of the target name, i.e. "hd"
    """
    for i in range(disks_count):
        ret = "{}{}".format(prefix, string.ascii_lowercase[i])
        if ret not in targets:
            return ret
    return None

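# For instance, with the "vd" prefix and three disks, the first free letter is
# picked each time. A standalone copy of the helper, runnable on its own:

```python
import string


def get_disk_target(targets, disks_count, prefix):
    # Return the first "<prefix><letter>" not already taken, or None if the
    # first disks_count letters are all in use
    for i in range(disks_count):
        candidate = "{}{}".format(prefix, string.ascii_lowercase[i])
        if candidate not in targets:
            return candidate
    return None


targets = []
for _ in range(3):
    targets.append(get_disk_target(targets, 3, "vd"))
```

# targets ends up as vda, vdb, vdc; asking for a single disk when "vda" is
# already taken yields None since only one letter is considered.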
def _diff_disk_lists(old, new):
    """
    Compare disk definitions to extract the changes and fix target devices

    :param old: list of ElementTree nodes representing the old disks
    :param new: list of ElementTree nodes representing the new disks
    """
    # Change the target device to avoid duplicates before diffing: this may lead
    # to additional changes. Think of an unchanged disk 'hda' and another disk
    # listed before it becoming 'hda' too: the unchanged one needs to become 'hdb'.
    targets = []
    prefixes = ["fd", "hd", "vd", "sd", "xvd", "ubd"]
    for disk in new:
        target_node = disk.find("target")
        target = target_node.get("dev")
        prefix = [item for item in prefixes if target.startswith(item)][0]
        new_target = _get_disk_target(targets, len(new), prefix)
        target_node.set("dev", new_target)
        targets.append(new_target)

    return _diff_lists(old, new, _disks_equal)

def _diff_interface_lists(old, new):
    """
    Compare network interface definitions to extract the changes

    :param old: list of ElementTree nodes representing the old interfaces
    :param new: list of ElementTree nodes representing the new interfaces
    """
    return _diff_lists(old, new, _nics_equal)


def _diff_graphics_lists(old, new):
    """
    Compare graphics device definitions to extract the changes

    :param old: list of ElementTree nodes representing the old graphics devices
    :param new: list of ElementTree nodes representing the new graphics devices
    """
    return _diff_lists(old, new, _graphics_equal)

def update(
    name,
    cpu=0,
    mem=0,
    disk_profile=None,
    disks=None,
    nic_profile=None,
    interfaces=None,
    graphics=None,
    live=True,
    boot=None,
    test=False,
    boot_dev=None,
    **kwargs
):
    """
    Update the definition of an existing domain.

    :param name: Name of the domain to update
    :param cpu: Number of virtual CPUs to assign to the virtual machine
    :param mem: Amount of memory to allocate to the virtual machine in MiB. Since 3002, a dictionary can be used to
        contain detailed configuration for memory allocation and tuning. Supported parameters are ``boot``,
        ``current``, ``max``, ``slots``, ``hard_limit``, ``soft_limit``, ``swap_hard_limit`` and ``min_guarantee``. The
        structure of the dictionary is documented in :ref:`init-mem-def`. Both decimal and binary bases are supported.
        Detailed unit specification is documented in :ref:`virt-units`. Please note that the value for ``slots`` must be
        an integer.

        To remove a parameter, pass a ``None`` object, for instance: 'soft_limit': ``None``. Please note that ``None``
        is mapped to ``null`` in YAML, so pass ``null`` in SLS files instead:

        .. code-block:: yaml

            - mem:
                hard_limit: null
                soft_limit: null

        .. versionchanged:: 3002

    :param disk_profile: disk profile to use
    :param disks:
        Disk definitions as documented in the :func:`init` function.
        If neither the profile nor this parameter is defined, the disk devices
        will not be changed. However, to clear all disks, set this parameter to an
        empty list.

    :param nic_profile: network interfaces profile to use
    :param interfaces:
        Network interface definitions as documented in the :func:`init` function.
        If neither the profile nor this parameter is defined, the interface devices
        will not be changed. However, to clear all network interfaces, set this
        parameter to an empty list.

    :param graphics:
        The new graphics definition as defined in :ref:`init-graphics-def`. If not set,
        the graphics will not be changed. To remove a graphics device, set this parameter
        to ``{'type': 'none'}``.

    :param live:
        ``False`` to avoid trying to live update the definition. In such a case, the
        new definition is applied at the next start of the virtual machine. If ``True``,
        not all aspects of the definition can be live updated, but as much as possible
        will be attempted. (Default: ``True``)

    :param connection: libvirt connection URI, overriding defaults
    :param username: username to connect with, overriding defaults
    :param password: password to connect with, overriding defaults

    :param boot:
        Specifies kernel, initial ramdisk and kernel command line parameters for the virtual machine.
        This is an optional parameter, and all of the keys are optional within the dictionary.

        Refer to :ref:`init-boot-def` for the complete boot parameter description.

        To update a boot parameter, specify the new path for it. To remove a boot parameter, pass a ``None``
        object, for instance: 'kernel': ``None``. To switch back to BIOS boot, specify ('loader': ``None`` and
        'nvram': ``None``) or 'efi': ``False``. Please note that ``None`` is mapped to ``null`` in YAML, so pass
        ``null`` in SLS files instead.

        SLS file example:

        .. code-block:: yaml

            - boot:
                loader: null
                nvram: null

        .. versionadded:: 3000

    :param boot_dev:
        Space separated list of devices to boot from, sorted by decreasing priority.
        Values can be ``hd``, ``fd``, ``cdrom`` or ``network``.

        By default, the value will be ``"hd"``.

        .. versionadded:: 3002

    :param test: run in dry-run mode if set to True

        .. versionadded:: 3001

    :return:

        Returns a dictionary indicating the status of what has been done. It is structured in
        the following way:

        .. code-block:: python

            {
              'definition': True,
              'cpu': True,
              'mem': True,
              'disk': {'attached': [list of actually attached disks],
                       'detached': [list of actually detached disks],
                       'updated': [list of actually updated disks]},
              'interface': {'attached': [list of actually attached interfaces],
                            'detached': [list of actually detached interfaces]},
              'errors': ['error messages for failures']
            }

    .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.update domain cpu=2 mem=1024

    """
    status = {
        "definition": False,
        "disk": {"attached": [], "detached": [], "updated": []},
        "interface": {"attached": [], "detached": []},
    }
    conn = __get_conn(**kwargs)
    domain = _get_domain(conn, name)
    desc = ElementTree.fromstring(domain.XMLDesc(0))
    need_update = False

    # Compute the XML to get the disks, interfaces and graphics
    hypervisor = desc.get("type")
    all_disks = _disk_profile(conn, disk_profile, hypervisor, disks, name)

    if boot is not None:
        boot = _handle_remote_boot_params(boot)
        if boot.get("efi", None) is not None:
            need_update = _handle_efi_param(boot, desc)

    new_desc = ElementTree.fromstring(
        _gen_xml(
            conn,
            name,
            cpu or 0,
            mem or 0,
            all_disks,
            _get_merged_nics(hypervisor, nic_profile, interfaces),
            hypervisor,
            domain.OSType(),
            desc.find(".//os/type").get("arch"),
            graphics,
            boot,
            **kwargs
        )
    )

    # Update the cpu
    cpu_node = desc.find("vcpu")
    if cpu and int(cpu_node.text) != cpu:
        cpu_node.text = str(cpu)
        cpu_node.set("current", str(cpu))
        need_update = True

    def _set_loader(node, value):
        salt.utils.xmlutil.set_node_text(node, value)
        if value is not None:
            node.set("readonly", "yes")
            node.set("type", "pflash")

    def _set_nvram(node, value):
        node.set("template", value)

    def _set_with_byte_unit(node, value):
        node.text = str(value)
        node.set("unit", "bytes")

    def _get_with_unit(node):
        unit = node.get("unit", "KiB")
        # _handle_unit treats bytes as invalid unit for the purpose of consistency
        unit = unit if unit != "bytes" else "b"
        value = node.get("memory") or node.text
        return _handle_unit("{}{}".format(value, unit)) if value else None

    old_mem = int(_get_with_unit(desc.find("memory")) / 1024)

    # Update the kernel boot parameters
    params_mapping = [
        {"path": "boot:kernel", "xpath": "os/kernel"},
        {"path": "boot:initrd", "xpath": "os/initrd"},
        {"path": "boot:cmdline", "xpath": "os/cmdline"},
        {"path": "boot:loader", "xpath": "os/loader", "set": _set_loader},
        {"path": "boot:nvram", "xpath": "os/nvram", "set": _set_nvram},
        # Update the memory, note that libvirt outputs all memory sizes in KiB
        {
            "path": "mem",
            "xpath": "memory",
            "convert": _handle_unit,
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem",
            "xpath": "currentMemory",
            "convert": _handle_unit,
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:max",
            "convert": _handle_unit,
            "xpath": "maxMemory",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:boot",
            "convert": _handle_unit,
            "xpath": "memory",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:current",
            "convert": _handle_unit,
            "xpath": "currentMemory",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:slots",
            "xpath": "maxMemory",
            "get": lambda n: n.get("slots"),
            "set": lambda n, v: n.set("slots", str(v)),
            "del": salt.utils.xmlutil.del_attribute("slots", ["unit"]),
        },
        {
            "path": "mem:hard_limit",
            "convert": _handle_unit,
            "xpath": "memtune/hard_limit",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:soft_limit",
            "convert": _handle_unit,
            "xpath": "memtune/soft_limit",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:swap_hard_limit",
            "convert": _handle_unit,
            "xpath": "memtune/swap_hard_limit",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "mem:min_guarantee",
            "convert": _handle_unit,
            "xpath": "memtune/min_guarantee",
            "get": _get_with_unit,
            "set": _set_with_byte_unit,
        },
        {
            "path": "boot_dev:{dev}",
            "xpath": "os/boot[$dev]",
            "get": lambda n: n.get("dev"),
            "set": lambda n, v: n.set("dev", v),
            "del": salt.utils.xmlutil.del_attribute("dev"),
        },
    ]

    data = {k: v for k, v in locals().items() if bool(v)}
    if boot_dev:
        data["boot_dev"] = {i + 1: dev for i, dev in enumerate(boot_dev.split())}
    need_update = (
        salt.utils.xmlutil.change_xml(desc, data, params_mapping) or need_update
    )

    # Update the XML definition with the new disks and diff changes
    devices_node = desc.find("devices")
    parameters = {
        "disk": ["disks", "disk_profile"],
        "interface": ["interfaces", "nic_profile"],
        "graphics": ["graphics"],
    }
    changes = {}
    for dev_type in parameters:
        changes[dev_type] = {}
        func_locals = locals()
        if [
            param
            for param in parameters[dev_type]
            if func_locals.get(param, None) is not None
        ]:
            old = devices_node.findall(dev_type)
            new = new_desc.findall("devices/{}".format(dev_type))
            changes[dev_type] = globals()["_diff_{}_lists".format(dev_type)](old, new)
            if changes[dev_type]["deleted"] or changes[dev_type]["new"]:
                for item in old:
                    devices_node.remove(item)
                devices_node.extend(changes[dev_type]["sorted"])
                need_update = True

    # Set the new definition
    if need_update:
        # Create missing disks if needed
        try:
            if changes["disk"]:
                for idx, item in enumerate(changes["disk"]["sorted"]):
                    source_file = all_disks[idx].get("source_file")
                    # We don't want to create image disks for cdrom devices
                    if all_disks[idx].get("device", "disk") == "cdrom":
                        continue
                    if (
                        item in changes["disk"]["new"]
                        and source_file
                        and not os.path.isfile(source_file)
                    ):
                        _qemu_image_create(all_disks[idx])
                    elif item in changes["disk"]["new"] and not source_file:
                        _disk_volume_create(conn, all_disks[idx])

            if not test:
                xml_desc = ElementTree.tostring(desc)
                log.debug("Update virtual machine definition: %s", xml_desc)
                conn.defineXML(salt.utils.stringutils.to_str(xml_desc))
            status["definition"] = True
        except libvirt.libvirtError as err:
            conn.close()
            raise err

        # Do the live changes now that we know the definition has been properly set.
        # From that point on, failures are not blocking: we try to live update as
        # much as possible.
        commands = []
        removable_changes = []
        if domain.isActive() and live:
            if cpu:
                commands.append(
                    {
                        "device": "cpu",
                        "cmd": "setVcpusFlags",
                        "args": [cpu, libvirt.VIR_DOMAIN_AFFECT_LIVE],
                    }
                )
            if mem:
                # Default to None so that an unsupported mem type results in no
                # live memory change rather than an unbound variable
                new_mem = None
                if isinstance(mem, dict):
                    # setMemoryFlags takes the memory amount in KiB
                    new_mem = (
                        int(_handle_unit(mem.get("current")) / 1024)
                        if "current" in mem
                        else None
                    )
                elif isinstance(mem, int):
                    new_mem = int(mem * 1024)

                if new_mem is not None and old_mem != new_mem:
                    commands.append(
                        {
                            "device": "mem",
                            "cmd": "setMemoryFlags",
                            "args": [new_mem, libvirt.VIR_DOMAIN_AFFECT_LIVE],
                        }
                    )

            # Look for removable device source changes
            new_disks = []
            for new_disk in changes["disk"].get("new", []):
                device = new_disk.get("device", "disk")
                if device not in ["cdrom", "floppy"]:
                    new_disks.append(new_disk)
                    continue

                target_dev = new_disk.find("target").get("dev")
                matching = [
                    old_disk
                    for old_disk in changes["disk"].get("deleted", [])
                    if old_disk.get("device", "disk") == device
                    and old_disk.find("target").get("dev") == target_dev
                ]
                if not matching:
                    new_disks.append(new_disk)
                else:
                    # libvirt needs to keep the XML exactly as it was before
                    updated_disk = matching[0]
                    changes["disk"]["deleted"].remove(updated_disk)
                    removable_changes.append(updated_disk)
                    source_node = updated_disk.find("source")
                    new_source_node = new_disk.find("source")
                    source_file = (
                        new_source_node.get("file")
                        if new_source_node is not None
                        else None
                    )

                    updated_disk.set("type", "file")
                    # Detaching device
                    if source_node is not None:
                        updated_disk.remove(source_node)

                    # Attaching device
                    if source_file:
                        ElementTree.SubElement(
                            updated_disk, "source", attrib={"file": source_file}
                        )

            changes["disk"]["new"] = new_disks

            for dev_type in ["disk", "interface"]:
                for added in changes[dev_type].get("new", []):
                    commands.append(
                        {
                            "device": dev_type,
                            "cmd": "attachDevice",
                            "args": [
                                salt.utils.stringutils.to_str(
                                    ElementTree.tostring(added)
                                )
                            ],
                        }
                    )

                for removed in changes[dev_type].get("deleted", []):
                    commands.append(
                        {
                            "device": dev_type,
                            "cmd": "detachDevice",
                            "args": [
                                salt.utils.stringutils.to_str(
                                    ElementTree.tostring(removed)
                                )
                            ],
                        }
                    )

        for updated_disk in removable_changes:
            commands.append(
                {
                    "device": "disk",
                    "cmd": "updateDeviceFlags",
                    "args": [
                        salt.utils.stringutils.to_str(
                            ElementTree.tostring(updated_disk)
                        )
                    ],
                }
            )

        for cmd in commands:
            try:
                ret = getattr(domain, cmd["cmd"])(*cmd["args"]) if not test else 0
                device_type = cmd["device"]
                if device_type in ["cpu", "mem"]:
                    status[device_type] = not bool(ret)
                else:
                    actions = {
                        "attachDevice": "attached",
                        "detachDevice": "detached",
                        "updateDeviceFlags": "updated",
                    }
                    status[device_type][actions[cmd["cmd"]]].append(cmd["args"][0])

            except libvirt.libvirtError as err:
                if "errors" not in status:
                    status["errors"] = []
                status["errors"].append(str(err))

    conn.close()
    return status

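# The params_mapping table in update() maps flat parameter paths like
# "mem:hard_limit" onto XPath locations in the domain XML, and
# salt.utils.xmlutil.change_xml applies the whole table at once. A much-reduced,
# self-contained sketch of that idea; the toy change_xml and lookup helpers here
# are illustrative stand-ins, not salt's actual API:

```python
import xml.etree.ElementTree as ElementTree


def lookup(data, path):
    # Resolve "mem:boot" style paths in a nested dict, returning None if absent
    node = data
    for part in path.split(":"):
        if not isinstance(node, dict) or part not in node:
            return None
        node = node[part]
    return node


def change_xml(desc, data, mappings):
    changed = False
    for mapping in mappings:
        value = lookup(data, mapping["path"])
        if value is None:
            continue
        node = desc.find(mapping["xpath"])
        if node is None:
            # Toy behavior: create missing leaf nodes directly under the root
            node = ElementTree.SubElement(desc, mapping["xpath"])
        if node.text != str(value):
            node.text = str(value)
            changed = True
    return changed


desc = ElementTree.fromstring("<domain><memory>524288</memory></domain>")
mappings = [
    {"path": "mem:boot", "xpath": "memory"},
    {"path": "mem:current", "xpath": "currentMemory"},
]
changed = change_xml(desc, {"mem": {"boot": 1048576, "current": 1048576}}, mappings)
```

# The existing <memory> node is rewritten and a <currentMemory> node is created;
# the function reports whether anything changed so the caller can decide whether
# to redefine the domain.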
def list_domains(**kwargs):
    """
    Return a list of available domains.

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.list_domains
    """
    vms = []
    conn = __get_conn(**kwargs)
    for dom in _get_domain(conn, iterable=True):
        vms.append(dom.name())
    conn.close()
    return vms


def list_active_vms(**kwargs):
    """
    Return a list of names of the active virtual machines on the minion

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.list_active_vms
    """
    vms = []
    conn = __get_conn(**kwargs)
    for dom in _get_domain(conn, iterable=True, inactive=False):
        vms.append(dom.name())
    conn.close()
    return vms


def list_inactive_vms(**kwargs):
    """
    Return a list of names of the inactive virtual machines on the minion

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.list_inactive_vms
    """
    vms = []
    conn = __get_conn(**kwargs)
    for dom in _get_domain(conn, iterable=True, active=False):
        vms.append(dom.name())
    conn.close()
    return vms

def vm_info(vm_=None, **kwargs):
    """
    Return detailed information about the VMs on this hypervisor as a
    dict of dicts:

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    .. code-block:: python

        {
            'your-vm': {
                'cpu': <int>,
                'maxMem': <int>,
                'mem': <int>,
                'state': '<state>',
                'cputime': <int>
                },
            ...
            }

    If you pass a VM name in as an argument then it will return info
    for just the named VM, otherwise it will return all VMs.

    CLI Example:

    .. code-block:: bash

        salt '*' virt.vm_info
    """

    def _info(conn, dom):
        """
        Compute the info of a domain
        """
        raw = dom.info()
        return {
            "cpu": raw[3],
            "cputime": int(raw[4]),
            "disks": _get_disks(conn, dom),
            "graphics": _get_graphics(dom),
            "nics": _get_nics(dom),
            "uuid": _get_uuid(dom),
            "loader": _get_loader(dom),
            "on_crash": _get_on_crash(dom),
            "on_reboot": _get_on_reboot(dom),
            "on_poweroff": _get_on_poweroff(dom),
            "maxMem": int(raw[1]),
            "mem": int(raw[2]),
            "state": VIRT_STATE_NAME_MAP.get(raw[0], "unknown"),
        }

    info = {}
    conn = __get_conn(**kwargs)
    if vm_:
        info[vm_] = _info(conn, _get_domain(conn, vm_))
    else:
        for domain in _get_domain(conn, iterable=True):
            info[domain.name()] = _info(conn, domain)
    conn.close()
    return info

 3201 
 3202 def vm_state(vm_=None, **kwargs):
 3203     """
 3204     Return list of all the vms and their state.
 3205 
 3206     If you pass a VM name in as an argument then it will return info
 3207     for just the named VM, otherwise it will return all VMs.
 3208 
 3209     :param vm_: name of the domain
 3210     :param connection: libvirt connection URI, overriding defaults
 3211 
 3212         .. versionadded:: 2019.2.0
 3213     :param username: username to connect with, overriding defaults
 3214 
 3215         .. versionadded:: 2019.2.0
 3216     :param password: password to connect with, overriding defaults
 3217 
 3218         .. versionadded:: 2019.2.0
 3219 
 3220     CLI Example:
 3221 
 3222     .. code-block:: bash
 3223 
 3224         salt '*' virt.vm_state <domain>
 3225     """
 3226 
 3227     def _info(dom):
 3228         """
 3229         Compute domain state
 3230         """
 3231         state = ""
 3232         raw = dom.info()
 3233         state = VIRT_STATE_NAME_MAP.get(raw[0], "unknown")
 3234         return state
 3235 
 3236     info = {}
 3237     conn = __get_conn(**kwargs)
 3238     if vm_:
 3239         info[vm_] = _info(_get_domain(conn, vm_))
 3240     else:
 3241         for domain in _get_domain(conn, iterable=True):
 3242             info[domain.name()] = _info(domain)
 3243     conn.close()
 3244     return info
 3245 
 3246 
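The `_info` helper in `vm_state` reduces the tuple returned by `dom.info()` to a state name via `VIRT_STATE_NAME_MAP`, which is defined elsewhere in this module. A minimal, self-contained sketch of that lookup, using an assumed subset of the map's values (they mirror libvirt's `virDomainState` ordering, but are illustrative only):

```python
# Illustrative subset; the real VIRT_STATE_NAME_MAP lives elsewhere in virt.py.
VIRT_STATE_NAME_MAP = {
    0: "running",   # assumed values, following libvirt's virDomainState order
    1: "running",
    3: "paused",
    5: "shutdown",
}


def state_from_info(raw):
    """Map the first element of dom.info() to a state name."""
    return VIRT_STATE_NAME_MAP.get(raw[0], "unknown")


# dom.info() returns (state, maxMem, memory, nrVirtCpu, cpuTime)
print(state_from_info((3, 1048576, 524288, 2, 1234)))  # paused
```

Unknown state codes fall back to the string `"unknown"` rather than raising.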
def _node_info(conn):
    """
    Internal variant of node_info taking a libvirt connection as parameter
    """
    raw = conn.getInfo()
    info = {
        "cpucores": raw[6],
        "cpumhz": raw[3],
        "cpumodel": str(raw[0]),
        "cpus": raw[2],
        "cputhreads": raw[7],
        "numanodes": raw[4],
        "phymemory": raw[1],
        "sockets": raw[5],
    }
    return info


def node_info(**kwargs):
    """
    Return a dict with information about this node

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.node_info
    """
    conn = __get_conn(**kwargs)
    info = _node_info(conn)
    conn.close()
    return info


def get_nics(vm_, **kwargs):
    """
    Return info about the network interfaces of a named vm

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_nics <domain>
    """
    conn = __get_conn(**kwargs)
    nics = _get_nics(_get_domain(conn, vm_))
    conn.close()
    return nics


def get_macs(vm_, **kwargs):
    """
    Return a list of MAC addresses for the named vm

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_macs <domain>
    """
    doc = ElementTree.fromstring(get_xml(vm_, **kwargs))
    return [node.get("address") for node in doc.findall("devices/interface/mac")]


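`get_macs` is a thin XPath query over the domain XML. The same lookup can be run standalone with the standard library `ElementTree` and a sample domain definition (the XML below is illustrative, not from a real host):

```python
import xml.etree.ElementTree as ElementTree

# Sample domain XML; in virt.py the real definition comes from get_xml(),
# i.e. dom.XMLDesc(0) on a live libvirt domain.
DOMAIN_XML = """
<domain type='kvm'>
  <devices>
    <interface type='network'>
      <mac address='52:54:00:aa:bb:cc'/>
    </interface>
    <interface type='bridge'>
      <mac address='52:54:00:dd:ee:ff'/>
    </interface>
  </devices>
</domain>
"""

doc = ElementTree.fromstring(DOMAIN_XML)
# Same relative XPath as get_macs(): every <mac> under <devices>/<interface>
macs = [node.get("address") for node in doc.findall("devices/interface/mac")]
print(macs)  # ['52:54:00:aa:bb:cc', '52:54:00:dd:ee:ff']
```

The relative path matches every interface regardless of its `type` attribute, which is why bridged and NAT interfaces both appear in the result.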
def get_graphics(vm_, **kwargs):
    """
    Returns the VNC information for a given vm

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_graphics <domain>
    """
    conn = __get_conn(**kwargs)
    graphics = _get_graphics(_get_domain(conn, vm_))
    conn.close()
    return graphics


def get_loader(vm_, **kwargs):
    """
    Returns the information on the loader for a given vm

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults
    :param username: username to connect with, overriding defaults
    :param password: password to connect with, overriding defaults

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_loader <domain>

    .. versionadded:: 2019.2.0
    """
    conn = __get_conn(**kwargs)
    try:
        loader = _get_loader(_get_domain(conn, vm_))
        return loader
    finally:
        conn.close()


def get_disks(vm_, **kwargs):
    """
    Return the disks of a named vm

    :param vm_: name of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_disks <domain>
    """
    conn = __get_conn(**kwargs)
    disks = _get_disks(conn, _get_domain(conn, vm_))
    conn.close()
    return disks


def setmem(vm_, memory, config=False, **kwargs):
    """
    Changes the amount of memory allocated to a VM. The VM must be shut down
    for this to work.

    :param vm_: name of the domain
    :param memory: memory amount to set in MB
    :param config: if True then libvirt will be asked to modify the config as well
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.setmem <domain> <size>
        salt '*' virt.setmem my_domain 768
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)

    if VIRT_STATE_NAME_MAP.get(dom.info()[0], "unknown") != "shutdown":
        conn.close()
        return False

    # libvirt has a funny bitwise system for the flags in that the flag
    # to affect the "current" setting is 0, which means that to set the
    # current setting we have to call it a second time with just 0 set
    flags = libvirt.VIR_DOMAIN_MEM_MAXIMUM
    if config:
        flags = flags | libvirt.VIR_DOMAIN_AFFECT_CONFIG

    ret1 = dom.setMemoryFlags(memory * 1024, flags)
    ret2 = dom.setMemoryFlags(memory * 1024, libvirt.VIR_DOMAIN_AFFECT_CURRENT)

    conn.close()

    # return True if both calls succeeded
    return ret1 == ret2 == 0


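The flag handling in `setmem` can be exercised without libvirt. The sketch below uses stand-in flag constants (assumed to mirror libvirt's values) and a fake domain object to show why a second call with `VIR_DOMAIN_AFFECT_CURRENT` (whose value is 0) is needed:

```python
# Stand-in constants; assumed to mirror libvirt's flag values.
VIR_DOMAIN_AFFECT_CURRENT = 0
VIR_DOMAIN_AFFECT_CONFIG = 2
VIR_DOMAIN_MEM_MAXIMUM = 4


class FakeDomain:
    """Records setMemoryFlags calls instead of talking to a hypervisor."""

    def __init__(self):
        self.calls = []

    def setMemoryFlags(self, size_kib, flags):
        self.calls.append((size_kib, flags))
        return 0  # libvirt returns 0 on success


def set_memory(dom, memory_mb, config=False):
    flags = VIR_DOMAIN_MEM_MAXIMUM
    if config:
        flags |= VIR_DOMAIN_AFFECT_CONFIG
    # Because VIR_DOMAIN_AFFECT_CURRENT is 0, it cannot be OR-ed into the
    # first call; a second call with flags=0 updates the "current" setting.
    ret1 = dom.setMemoryFlags(memory_mb * 1024, flags)  # MB -> KiB
    ret2 = dom.setMemoryFlags(memory_mb * 1024, VIR_DOMAIN_AFFECT_CURRENT)
    return ret1 == ret2 == 0


dom = FakeDomain()
print(set_memory(dom, 768, config=True))  # True; two calls recorded
```

With `config=True` the first call carries `MEM_MAXIMUM | AFFECT_CONFIG` (6 here), and the second always carries 0, matching the double-call pattern in `setmem` above.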
def setvcpus(vm_, vcpus, config=False, **kwargs):
    """
    Changes the number of vcpus allocated to a VM. The VM must be shut down
    for this to work.

    If config is True then we ask libvirt to modify the config as well

    :param vm_: name of the domain
    :param vcpus: integer representing the number of CPUs to be assigned
    :param config: if True then libvirt will be asked to modify the config as well
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.setvcpus <domain> <amount>
        salt '*' virt.setvcpus my_domain 4
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)

    if VIRT_STATE_NAME_MAP.get(dom.info()[0], "unknown") != "shutdown":
        conn.close()
        return False

    # see notes in setmem
    flags = libvirt.VIR_DOMAIN_VCPU_MAXIMUM
    if config:
        flags = flags | libvirt.VIR_DOMAIN_AFFECT_CONFIG

    ret1 = dom.setVcpusFlags(vcpus, flags)
    ret2 = dom.setVcpusFlags(vcpus, libvirt.VIR_DOMAIN_AFFECT_CURRENT)

    conn.close()

    return ret1 == ret2 == 0


def _freemem(conn):
    """
    Internal variant of freemem taking a libvirt connection as parameter
    """
    mem = conn.getInfo()[1]
    # Take off just enough to sustain the hypervisor
    mem -= 256
    for dom in _get_domain(conn, iterable=True):
        if dom.ID() > 0:
            mem -= dom.info()[2] / 1024
    return mem


def freemem(**kwargs):
    """
    Return an int representing the amount of memory (in MB) that has not
    been given to virtual machines on this node

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.freemem
    """
    conn = __get_conn(**kwargs)
    mem = _freemem(conn)
    conn.close()
    return mem


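The arithmetic in `_freemem` is simple enough to sketch with sample numbers: the host memory reported by `conn.getInfo()` is in MB, while each running domain's current memory (`dom.info()[2]`) is in KiB, hence the division by 1024. The host figures below are made up:

```python
# Assumed sample values: conn.getInfo()[1] on a 16 GiB host, plus the
# dom.info()[2] figures (in KiB) of two running domains (2 GiB and 4 GiB).
HOST_MEM_MB = 16384
running_domain_mem_kib = [2097152, 4194304]

# Reserve 256 MB for the hypervisor itself, as _freemem() does.
free = HOST_MEM_MB - 256
for mem_kib in running_domain_mem_kib:
    free -= mem_kib / 1024  # KiB -> MB

print(free)  # 16384 - 256 - 2048 - 4096 = 9984.0
```

Note that the true-division makes the result a float even though the docstring of `freemem` advertises an int.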
def _freecpu(conn):
    """
    Internal variant of freecpu taking a libvirt connection as parameter
    """
    cpus = conn.getInfo()[2]
    for dom in _get_domain(conn, iterable=True):
        if dom.ID() > 0:
            cpus -= dom.info()[3]
    return cpus


def freecpu(**kwargs):
    """
    Return an int representing the number of unallocated cpus on this
    hypervisor

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.freecpu
    """
    conn = __get_conn(**kwargs)
    cpus = _freecpu(conn)
    conn.close()
    return cpus


def full_info(**kwargs):
    """
    Return the node_info, vm_info and freemem

    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.full_info
    """
    conn = __get_conn(**kwargs)
    info = {
        "freecpu": _freecpu(conn),
        "freemem": _freemem(conn),
        "node_info": _node_info(conn),
        "vm_info": vm_info(**kwargs),
    }
    conn.close()
    return info


def get_xml(vm_, **kwargs):
    """
    Returns the XML for a given vm

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_xml <domain>
    """
    conn = __get_conn(**kwargs)
    xml_desc = (
        vm_.XMLDesc(0)
        if isinstance(vm_, libvirt.virDomain)
        else _get_domain(conn, vm_).XMLDesc(0)
    )
    conn.close()
    return xml_desc


def get_profiles(hypervisor=None, **kwargs):
    """
    Return the virt profiles for hypervisor.

    Currently there are profiles for:

    - nic
    - disk

    :param hypervisor: override the default machine type.
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.get_profiles
        salt '*' virt.get_profiles hypervisor=vmware
    """
    # Use the machine types as possible values
    # Prefer 'kvm' over the others if available
    conn = __get_conn(**kwargs)
    caps = _capabilities(conn)
    hypervisors = sorted(
        {
            x
            for y in [guest["arch"]["domains"].keys() for guest in caps["guests"]]
            for x in y
        }
    )
    if len(hypervisors) == 0:
        conn.close()
        raise SaltInvocationError("No supported hypervisors were found")

    if not hypervisor:
        hypervisor = "kvm" if "kvm" in hypervisors else hypervisors[0]

    ret = {
        "disk": {"default": _disk_profile(conn, "default", hypervisor, [], None)},
        "nic": {"default": _nic_profile("default", hypervisor)},
    }
    virtconf = __salt__["config.get"]("virt", {})

    for profile in virtconf.get("disk", []):
        ret["disk"][profile] = _disk_profile(conn, profile, hypervisor, [], None)

    for profile in virtconf.get("nic", []):
        ret["nic"][profile] = _nic_profile(profile, hypervisor)

    conn.close()
    return ret


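The set comprehension in `get_profiles` flattens hypervisor names out of the nested capabilities structure. A sketch with a hand-written `caps` dict (the nested shape is assumed from the code above):

```python
# Hand-written capabilities fragment; the shape mirrors what get_profiles()
# reads from _capabilities(conn), with only the keys it actually touches.
caps = {
    "guests": [
        {"arch": {"domains": {"qemu": {}, "kvm": {}}}},
        {"arch": {"domains": {"qemu": {}}}},
    ]
}

# Flatten every guest's domain names into one deduplicated, sorted list.
hypervisors = sorted(
    {
        x
        for y in [guest["arch"]["domains"].keys() for guest in caps["guests"]]
        for x in y
    }
)
# Prefer 'kvm' when available, otherwise fall back to the first entry.
hypervisor = "kvm" if "kvm" in hypervisors else hypervisors[0]
print(hypervisors, hypervisor)  # ['kvm', 'qemu'] kvm
```

The set deduplicates `qemu`, which appears under both guests, before sorting.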
def shutdown(vm_, **kwargs):
    """
    Send a soft shutdown signal to the named vm

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.shutdown <domain>
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)
    ret = dom.shutdown() == 0
    conn.close()
    return ret


def pause(vm_, **kwargs):
    """
    Pause the named vm

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.pause <domain>
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)
    ret = dom.suspend() == 0
    conn.close()
    return ret


def resume(vm_, **kwargs):
    """
    Resume the named vm

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.resume <domain>
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)
    ret = dom.resume() == 0
    conn.close()
    return ret


def start(name, **kwargs):
    """
    Start a defined domain

    :param name: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.start <domain>
    """
    conn = __get_conn(**kwargs)
    ret = _get_domain(conn, name).create() == 0
    conn.close()
    return ret


def stop(name, **kwargs):
    """
    Hard power down the virtual machine; this is equivalent to pulling the power cord.

    :param name: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.stop <domain>
    """
    conn = __get_conn(**kwargs)
    ret = _get_domain(conn, name).destroy() == 0
    conn.close()
    return ret


def reboot(name, **kwargs):
    """
    Reboot a domain via ACPI request

    :param name: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.reboot <domain>
    """
    conn = __get_conn(**kwargs)
    ret = _get_domain(conn, name).reboot(libvirt.VIR_DOMAIN_REBOOT_DEFAULT) == 0
    conn.close()
    return ret


def reset(vm_, **kwargs):
    """
    Reset a VM by emulating the reset button on a physical machine

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.reset <domain>
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)

    # reset takes a flag, like reboot, but it is not yet used
    # so we just pass in 0
    # see: http://libvirt.org/html/libvirt-libvirt.html#virDomainReset
    ret = dom.reset(0) == 0
    conn.close()
    return ret


def ctrl_alt_del(vm_, **kwargs):
    """
    Sends CTRL+ALT+DEL to a VM

    :param vm_: domain name
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.ctrl_alt_del <domain>
    """
    conn = __get_conn(**kwargs)
    dom = _get_domain(conn, vm_)
    # Linux keycodes: 29 = LEFTCTRL, 56 = LEFTALT, 111 = DELETE
    ret = dom.sendKey(0, 0, [29, 56, 111], 3, 0) == 0
    conn.close()
    return ret


def create_xml_str(xml, **kwargs):  # pylint: disable=redefined-outer-name
    """
    Start a transient domain based on the XML passed to the function

    :param xml: libvirt XML definition of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.create_xml_str <XML in string format>
    """
    conn = __get_conn(**kwargs)
    ret = conn.createXML(xml, 0) is not None
    conn.close()
    return ret


def create_xml_path(path, **kwargs):
    """
    Start a transient domain based on the XML-file path passed to the function

    :param path: path to a file containing the libvirt XML definition of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.create_xml_path <path to XML file on the node>
    """
    try:
        with salt.utils.files.fopen(path, "r") as fp_:
            return create_xml_str(
                salt.utils.stringutils.to_unicode(fp_.read()), **kwargs
            )
    except OSError:
        return False


def define_xml_str(xml, **kwargs):  # pylint: disable=redefined-outer-name
    """
    Define a persistent domain based on the XML passed to the function

    :param xml: libvirt XML definition of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.define_xml_str <XML in string format>
    """
    conn = __get_conn(**kwargs)
    ret = conn.defineXML(xml) is not None
    conn.close()
    return ret


def define_xml_path(path, **kwargs):
    """
    Define a persistent domain based on the XML-file path passed to the function

    :param path: path to a file containing the libvirt XML definition of the domain
    :param connection: libvirt connection URI, overriding defaults

        .. versionadded:: 2019.2.0
    :param username: username to connect with, overriding defaults

        .. versionadded:: 2019.2.0
    :param password: password to connect with, overriding defaults

        .. versionadded:: 2019.2.0

    CLI Example:

    .. code-block:: bash

        salt '*' virt.define_xml_path <path to XML file on the node>

    """
    try:
        with salt.utils.files.fopen(path, "r") as fp_:
            return define_xml_str(
                salt.utils.stringutils.to_unicode(fp_.read()), **kwargs
            )
    except OSError:
        return False


def _define_vol_xml_str(conn, xml, pool=None):  # pylint: disable=redefined-outer-name
    """
    Same function as define_vol_xml_str but using an already opened libvirt connection
    """
    default_pool = "default" if conn.getType() != "ESX" else "0"
    poolname = (
        pool if pool else __salt__["config.get"]("virt:storagepool", default_pool)
    )
    pool = conn.storagePoolLookupByName(str(poolname))
    ret = pool.createXML(xml, 0) is not None
    return ret


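The pool-name fallback in `_define_vol_xml_str` can be isolated as a small pure function. A sketch where a plain dict stands in for `__salt__["config.get"]` (the precedence is: explicit `pool` argument, then the `virt:storagepool` config value, then a driver-specific default):

```python
def resolve_pool(pool, conn_type, config):
    """Mirror _define_vol_xml_str's pool-name resolution.

    `config` is a plain dict standing in for __salt__["config.get"];
    `conn_type` stands in for conn.getType() on a live connection.
    """
    # ESX identifies its default pool as "0" rather than "default".
    default_pool = "default" if conn_type != "ESX" else "0"
    return pool if pool else config.get("virt:storagepool", default_pool)


print(resolve_pool(None, "QEMU", {}))                                # default
print(resolve_pool(None, "ESX", {}))                                 # 0
print(resolve_pool("mine", "QEMU", {"virt:storagepool": "images"}))  # mine
```

An explicit `pool` argument always wins, even when `virt:storagepool` is configured.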
 4067 def define_vol_xml_str(
 4068     xml, pool=None, **kwargs
 4069 ):  # pylint: disable=redefined-outer-name
 4070     """
 4071     Define a volume based on the XML passed to the function
 4072 
 4073     :param xml: libvirt XML definition of the storage volume
 4074     :param pool:
 4075         storage pool name to define the volume in.
 4076         If defined, this parameter will override the configuration setting.
 4077 
 4078         .. versionadded:: 3001
 4079     :param connection: libvirt connection URI, overriding defaults
 4080 
 4081         .. versionadded:: 2019.2.0
 4082     :param username: username to connect with, overriding defaults
 4083 
 4084         .. versionadded:: 2019.2.0
 4085     :param password: password to connect with, overriding defaults
 4086 
 4087         .. versionadded:: 2019.2.0
 4088 
 4089     CLI Example:
 4090 
 4091     .. code-block:: bash
 4092 
 4093         salt '*' virt.define_vol_xml_str <XML in string format>
 4094 
 4095     The storage pool where the disk image will be defined is ``default``
 4096     unless changed with the pool parameter or a configuration like this:
 4097 
 4098     .. code-block:: yaml
 4099 
 4100         virt:
 4101             storagepool: mine
 4102     """
 4103     conn = __get_conn(**kwargs)
 4104     ret = False
 4105     try:
 4106         ret = _define_vol_xml_str(conn, xml, pool=pool)
 4107     except libvirtError as err:
 4108         raise CommandExecutionError(err.get_error_message())
 4109     finally:
 4110         conn.close()
 4111     return ret
 4112 
 4113 
 4114 def define_vol_xml_path(path, pool=None, **kwargs):
 4115     """
 4116     Define a volume based on the XML-file path passed to the function
 4117 
 4118     :param path: path to a file containing the libvirt XML definition of the volume
 4119     :param pool:
 4120         storage pool name to define the volume in.
 4121         If defined, this parameter will override the configuration setting.
 4122 
 4123         .. versionadded:: 3001
 4124     :param connection: libvirt connection URI, overriding defaults
 4125 
 4126         .. versionadded:: 2019.2.0
 4127     :param username: username to connect with, overriding defaults
 4128 
 4129         .. versionadded:: 2019.2.0
 4130     :param password: password to connect with, overriding defaults
 4131 
 4132         .. versionadded:: 2019.2.0
 4133 
 4134     CLI Example:
 4135 
 4136     .. code-block:: bash
 4137 
 4138         salt '*' virt.define_vol_xml_path <path to XML file on the node>
 4139 
 4140     """
 4141     try:
 4142         with salt.utils.files.fopen(path, "r") as fp_:
 4143             return define_vol_xml_str(
 4144                 salt.utils.stringutils.to_unicode(fp_.read()), pool=pool, **kwargs
 4145             )
 4146     except OSError:
 4147         return False
 4148 
 4149 
 4150 def migrate_non_shared(vm_, target, ssh=False, **kwargs):
 4151     """
 4152     Attempt to execute non-shared storage "all" migration
 4153 
 4154     :param vm_: domain name
 4155     :param target: target libvirt host name
 4156     :param ssh: True to connect over ssh
 4157 
 4158         .. deprecated:: 3002
 4159 
 4160     :param kwargs:
 4161         - live:           Use live migration. Default value is True.
 4162         - persistent:     Leave the domain persistent on destination host.
 4163                           Default value is True.
 4164         - undefinesource: Undefine the domain on the source host.
 4165                           Default value is True.
 4166         - offline:        If set to True it will migrate the domain definition
 4167                           without starting the domain on destination and without
 4168                           stopping it on source host. Default value is False.
 4169         - max_bandwidth:  The maximum bandwidth (in MiB/s) that will be used.
 4170         - max_downtime:   Set maximum tolerable downtime for live-migration.
 4171                           The value represents a number of milliseconds the guest
 4172                           is allowed to be down at the end of live migration.
 4173         - parallel_connections: Specify a number of parallel network connections
 4174                           to be used to send memory pages to the destination host.
 4175         - compressed:      Activate compression.
 4176         - comp_methods:    A comma-separated list of compression methods. Supported
 4177                            methods are "mt" and "xbzrle" and can be used in any
 4178                            combination. QEMU defaults to "xbzrle".
 4179         - comp_mt_level:   Set compression level. Values are in range from 0 to 9,
 4180                            where 1 is maximum speed and 9 is maximum compression.
 4181         - comp_mt_threads: Set number of compress threads on source host.
 4182         - comp_mt_dthreads: Set number of decompress threads on target host.
 4183         - comp_xbzrle_cache: Set the size of page cache for xbzrle compression in bytes.
 4184         - postcopy:        Enable the use of post-copy migration.
 4185         - postcopy_bandwidth: The maximum bandwidth allowed in post-copy phase. (MiB/s)
 4186         - username:       Username to connect with the target host
 4187         - password:       Password to connect with the target host
 4188 
 4189         .. versionadded:: 3002
 4190 
 4191     CLI Example:
 4192 
 4193     .. code-block:: bash
 4194 
 4195         salt '*' virt.migrate_non_shared <vm name> <target hypervisor>
 4196 
 4197     A tunnel data migration can be performed by setting this in the
 4198     configuration:
 4199 
 4200     .. code-block:: yaml
 4201 
 4202         virt:
 4203             tunnel: True
 4204 
 4205     For more details on tunnelled data migrations, refer to
 4206     https://libvirt.org/migration.html#transporttunnel
 4207     """
 4208     salt.utils.versions.warn_until(
 4209         "Silicon",
 4210         "The 'migrate_non_shared' feature has been deprecated. "
 4211         "Use 'migrate' with copy_storage='all' instead.",
 4212     )
 4213     return migrate(vm_, target, ssh, copy_storage="all", **kwargs)
 4214 
 4215 
 4216 def migrate_non_shared_inc(vm_, target, ssh=False, **kwargs):
 4217     """
 4218     Attempt to execute non-shared storage "inc" migration
 4219 
 4220     :param vm_: domain name
 4221     :param target: target libvirt host name
 4222     :param ssh: True to connect over ssh
 4223 
 4224         .. deprecated:: 3002
 4225 
 4226     :param kwargs:
 4227         - live:           Use live migration. Default value is True.
 4228         - persistent:     Leave the domain persistent on destination host.
 4229                           Default value is True.
 4230         - undefinesource: Undefine the domain on the source host.
 4231                           Default value is True.
 4232         - offline:        If set to True it will migrate the domain definition
 4233                           without starting the domain on destination and without
 4234                           stopping it on source host. Default value is False.
 4235         - max_bandwidth:  The maximum bandwidth (in MiB/s) that will be used.
 4236         - max_downtime:   Set maximum tolerable downtime for live-migration.
 4237                           The value represents a number of milliseconds the guest
 4238                           is allowed to be down at the end of live migration.
 4239         - parallel_connections: Specify a number of parallel network connections
 4240                           to be used to send memory pages to the destination host.
 4241         - compressed:      Activate compression.
 4242         - comp_methods:    A comma-separated list of compression methods. Supported
 4243                            methods are "mt" and "xbzrle" and can be used in any
 4244                            combination. QEMU defaults to "xbzrle".
 4245         - comp_mt_level:   Set compression level. Values are in range from 0 to 9,
 4246                            where 1 is maximum speed and 9 is maximum compression.
 4247         - comp_mt_threads: Set number of compress threads on source host.
 4248         - comp_mt_dthreads: Set number of decompress threads on target host.
 4249         - comp_xbzrle_cache: Set the size of page cache for xbzrle compression in bytes.
 4250         - postcopy:        Enable the use of post-copy migration.
 4251         - postcopy_bandwidth: The maximum bandwidth allowed in post-copy phase. (MiB/s)
 4252         - username:       Username to connect with the target host
 4253         - password:       Password to connect with the target host
 4254 
 4255         .. versionadded:: 3002
 4256 
 4257     CLI Example:
 4258 
 4259     .. code-block:: bash
 4260 
 4261         salt '*' virt.migrate_non_shared_inc <vm name> <target hypervisor>
 4262 
 4263     A tunnel data migration can be performed by setting this in the
 4264     configuration:
 4265 
 4266     .. code-block:: yaml
 4267 
 4268         virt:
 4269             tunnel: True
 4270 
 4271     For more details on tunnelled data migrations, refer to
 4272     https://libvirt.org/migration.html#transporttunnel
 4273     """
 4274     salt.utils.versions.warn_until(
 4275         "Silicon",
 4276         "The 'migrate_non_shared_inc' feature has been deprecated. "
 4277         "Use 'migrate' with copy_storage='inc' instead.",
 4278     )
 4279     return migrate(vm_, target, ssh, copy_storage="inc", **kwargs)
 4280 
 4281 
 4282 def migrate(vm_, target, ssh=False, **kwargs):
 4283     """
 4284     Shared storage migration
 4285 
 4286     :param vm_: domain name
 4287     :param target: target libvirt URI or host name
 4288     :param ssh: True to connect over ssh
 4289 
 4290         .. deprecated:: 3002
 4291 
 4292     :param kwargs:
 4293         - live:            Use live migration. Default value is True.
 4294         - persistent:      Leave the domain persistent on destination host.
 4295                            Default value is True.
 4296         - undefinesource:  Undefine the domain on the source host.
 4297                            Default value is True.
 4298         - offline:         If set to True it will migrate the domain definition
 4299                            without starting the domain on destination and without
 4300                            stopping it on source host. Default value is False.
 4301         - max_bandwidth:   The maximum bandwidth (in MiB/s) that will be used.
 4302         - max_downtime:    Set maximum tolerable downtime for live-migration.
 4303                            The value represents a number of milliseconds the guest
 4304                            is allowed to be down at the end of live migration.
 4305         - parallel_connections: Specify a number of parallel network connections
 4306                            to be used to send memory pages to the destination host.
 4307         - compressed:      Activate compression.
 4308         - comp_methods:    A comma-separated list of compression methods. Supported
 4309                            methods are "mt" and "xbzrle" and can be used in any
 4310                            combination. QEMU defaults to "xbzrle".
 4311         - comp_mt_level:   Set compression level. Values are in range from 0 to 9,
 4312                            where 1 is maximum speed and 9 is maximum compression.
 4313         - comp_mt_threads: Set number of compress threads on source host.
 4314         - comp_mt_dthreads: Set number of decompress threads on target host.
 4315         - comp_xbzrle_cache: Set the size of page cache for xbzrle compression in bytes.
 4316         - copy_storage:    Migrate non-shared storage. It must be one of the following
 4317                            values: ``all`` (full disk copy) or ``incremental`` (incremental copy)
 4318         - postcopy:        Enable the use of post-copy migration.
 4319         - postcopy_bandwidth: The maximum bandwidth allowed in post-copy phase. (MiB/s)
 4320         - username:        Username to connect with the target host
 4321         - password:        Password to connect with the target host
 4322 
 4323         .. versionadded:: 3002
 4324 
 4325     CLI Example:
 4326 
 4327     .. code-block:: bash
 4328 
 4329         salt '*' virt.migrate <domain> <target hypervisor URI>
 4330         salt src virt.migrate guest qemu+ssh://dst/system
 4331         salt src virt.migrate guest qemu+tls://dst/system
 4332         salt src virt.migrate guest qemu+tcp://dst/system
 4333 
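           The non-shared storage of the domain can be copied during the same call
           by passing the ``copy_storage`` parameter; the ``src`` and ``dst`` host
           names below are placeholders:

           .. code-block:: bash

               salt src virt.migrate guest qemu+ssh://dst/system copy_storage=all
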
 4334     A tunnel data migration can be performed by setting this in the
 4335     configuration:
 4336 
 4337     .. code-block:: yaml
 4338 
 4339         virt:
 4340             tunnel: True
 4341 
 4342     For more details on tunnelled data migrations, refer to
 4343     https://libvirt.org/migration.html#transporttunnel
 4344     """
 4345 
 4346     if ssh:
 4347         salt.utils.versions.warn_until(
 4348             "Silicon",
 4349             "The 'ssh' argument has been deprecated and "
 4350             "will be removed in a future release. "
 4351             "Use libvirt URI string 'target' instead.",
 4352         )
 4353 
 4354     conn = __get_conn()
 4355     dom = _get_domain(conn, vm_)
 4356 
 4357     if not urlparse(target).scheme:
 4358         proto = "qemu"
 4359         if ssh:
 4360             proto += "+ssh"
 4361         dst_uri = "{}://{}/system".format(proto, target)
 4362     else:
 4363         dst_uri = target
 4364 
 4365     ret = _migrate(dom, dst_uri, **kwargs)
 4366     conn.close()
 4367     return ret
 4368 
 4369 
 4370 def migrate_start_postcopy(vm_):
 4371     """
 4372     Starts post-copy migration. This function has to be called
 4373     while a live migration initiated with the ``postcopy=True``
 4374     option is in progress.
 4375 
 4376     CLI Example:
 4377 
 4378     .. code-block:: bash
 4379 
 4380         salt '*' virt.migrate_start_postcopy <domain>
 4381     """
 4382     conn = __get_conn()
 4383     dom = _get_domain(conn, vm_)
 4384     try:
 4385         dom.migrateStartPostCopy()
 4386     except libvirt.libvirtError as err:
 4387         conn.close()
 4388         raise CommandExecutionError(err.get_error_message())
 4389     conn.close()
 4390 
 4391 
 4392 def seed_non_shared_migrate(disks, force=False):
 4393     """
 4394     Non shared migration requires that the disks be present on the migration
 4395     destination host. Pass the disk information via this function to the
 4396     destination host before executing the migration.
 4397 
 4398     :param disks: the list of disk data as provided by virt.get_disks
 4399     :param force: skip checking the compatibility of source and target disk
 4400                   images if True. (default: False)
 4401 
 4402     CLI Example:
 4403 
 4404     .. code-block:: bash
 4405 
 4406         salt '*' virt.seed_non_shared_migrate <disks>
 4407     """
 4408     for _, data in disks.items():
 4409         fn_ = data["file"]
 4410         form = data["file format"]
 4411         size = data["virtual size"].split()[1][1:]
 4412         if os.path.isfile(fn_) and not force:
 4413             # the target exists, check to see if it is compatible
 4414             pre = salt.utils.yaml.safe_load(
 4415                 subprocess.Popen(
 4416                     "qemu-img info " + fn_, shell=True, stdout=subprocess.PIPE
 4417                 ).communicate()[0]
 4418             )
 4419             if (
 4420                 pre["file format"] != data["file format"]
 4421                 and pre["virtual size"] != data["virtual size"]
 4422             ):
 4423                 return False
 4424         if not os.path.isdir(os.path.dirname(fn_)):
 4425             os.makedirs(os.path.dirname(fn_))
 4426         if os.path.isfile(fn_):
 4427             os.remove(fn_)
 4428         cmd = ["qemu-img", "create", "-f", form, fn_, size]
 4429         subprocess.call(cmd)
 4430         creds = _libvirt_creds()
 4431         cmd = ["chown", "{}:{}".format(creds["user"], creds["group"]), fn_]
 4432         subprocess.call(cmd)
 4433     return True
 4434 
 4435 
 4436 def set_autostart(vm_, state="on", **kwargs):
 4437     """
 4438     Set the autostart flag on a VM so that the VM will start with the host
 4439     system on reboot.
 4440 
 4441     :param vm_: domain name
 4442     :param state: 'on' to auto start the domain, anything else to mark the
 4443                   domain not to be started when the host boots
 4444     :param connection: libvirt connection URI, overriding defaults
 4445 
 4446         .. versionadded:: 2019.2.0
 4447     :param username: username to connect with, overriding defaults
 4448 
 4449         .. versionadded:: 2019.2.0
 4450     :param password: password to connect with, overriding defaults
 4451 
 4452         .. versionadded:: 2019.2.0
 4453 
 4454     CLI Example:
 4455 
 4456     .. code-block:: bash
 4457 
 4458         salt '*' virt.set_autostart <domain> <on | off>
 4459     """
 4460     conn = __get_conn(**kwargs)
 4461     dom = _get_domain(conn, vm_)
 4462 
 4463     # return False if state is set to something other than 'on' or 'off'
 4464     ret = False
 4465 
 4466     if state == "on":
 4467         ret = dom.setAutostart(1) == 0
 4468 
 4469     elif state == "off":
 4470         ret = dom.setAutostart(0) == 0
 4471 
 4472     conn.close()
 4473     return ret
 4474 
 4475 
 4476 def undefine(vm_, **kwargs):
 4477     """
 4478     Remove a defined vm. This does not purge the virtual machine image and
 4479     only works if the vm is powered down.
 4480 
 4481     :param vm_: domain name
 4482     :param connection: libvirt connection URI, overriding defaults
 4483 
 4484         .. versionadded:: 2019.2.0
 4485     :param username: username to connect with, overriding defaults
 4486 
 4487         .. versionadded:: 2019.2.0
 4488     :param password: password to connect with, overriding defaults
 4489 
 4490         .. versionadded:: 2019.2.0
 4491 
 4492     CLI Example:
 4493 
 4494     .. code-block:: bash
 4495 
 4496         salt '*' virt.undefine <domain>
 4497     """
 4498     conn = __get_conn(**kwargs)
 4499     dom = _get_domain(conn, vm_)
 4500     if getattr(libvirt, "VIR_DOMAIN_UNDEFINE_NVRAM", False):
 4501         # This one is only in 1.2.8+
 4502         ret = dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM) == 0
 4503     else:
 4504         ret = dom.undefine() == 0
 4505     conn.close()
 4506     return ret
 4507 
 4508 
 4509 def purge(vm_, dirs=False, removables=False, **kwargs):
 4510     """
 4511     Recursively destroy and delete a persistent virtual machine. Pass True for
 4512     ``dirs`` to also delete the directories containing the virtual machine disk
 4513     images - USE WITH EXTREME CAUTION!
 4514 
 4515     :param vm_: domain name
 4516     :param dirs: pass True to remove containing directories
 4517     :param removables: pass True to remove removable devices
 4518 
 4519         .. versionadded:: 2019.2.0
 4520     :param connection: libvirt connection URI, overriding defaults
 4521 
 4522         .. versionadded:: 2019.2.0
 4523     :param username: username to connect with, overriding defaults
 4524 
 4525         .. versionadded:: 2019.2.0
 4526     :param password: password to connect with, overriding defaults
 4527 
 4528         .. versionadded:: 2019.2.0
 4529 
 4530     CLI Example:
 4531 
 4532     .. code-block:: bash
 4533 
 4534         salt '*' virt.purge <domain>
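
          Pass ``dirs=True`` to also remove the directories containing the disk
          images; use with extreme caution:

          .. code-block:: bash

              salt '*' virt.purge <domain> dirs=True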
 4535     """
 4536     conn = __get_conn(**kwargs)
 4537     dom = _get_domain(conn, vm_)
 4538     disks = _get_disks(conn, dom)
 4539     if (
 4540         VIRT_STATE_NAME_MAP.get(dom.info()[0], "unknown") != "shutdown"
 4541         and dom.destroy() != 0
 4542     ):
 4543         return False
 4544     directories = set()
 4545     for disk in disks:
 4546         if not removables and disks[disk]["type"] in ["cdrom", "floppy"]:
 4547             continue
 4548         if disks[disk].get("zfs", False):
 4549             # TODO create solution for 'dataset is busy'
 4550             time.sleep(3)
 4551             fs_name = disks[disk]["file"][len("/dev/zvol/") :]
 4552             log.info("Destroying VM ZFS volume %s", fs_name)
 4553             __salt__["zfs.destroy"](name=fs_name, force=True)
 4554         elif os.path.exists(disks[disk]["file"]):
 4555             os.remove(disks[disk]["file"])
 4556             directories.add(os.path.dirname(disks[disk]["file"]))
 4557         else:
 4558             # We may have a volume to delete here
 4559             matcher = re.match("^(?P<pool>[^/]+)/(?P<volume>.*)$", disks[disk]["file"],)
 4560             if matcher:
 4561                 pool_name = matcher.group("pool")
 4562                 pool = None
 4563                 if pool_name in conn.listStoragePools():
 4564                     pool = conn.storagePoolLookupByName(pool_name)
 4565 
 4566                 if pool and matcher.group("volume") in pool.listVolumes():
 4567                     volume = pool.storageVolLookupByName(matcher.group("volume"))
 4568                     volume.delete()
 4569 
 4570     if dirs:
 4571         for dir_ in directories:
 4572             shutil.rmtree(dir_)
 4573     if getattr(libvirt, "VIR_DOMAIN_UNDEFINE_NVRAM", False):
 4574         # This one is only in 1.2.8+
 4575         try:
 4576             dom.undefineFlags(libvirt.VIR_DOMAIN_UNDEFINE_NVRAM)
 4577         except Exception:  # pylint: disable=broad-except
 4578             dom.undefine()
 4579     else:
 4580         dom.undefine()
 4581     conn.close()
 4582     return True
 4583 
 4584 
 4585 def virt_type():
 4586     """
 4587     Returns the virtual machine type as a string
 4588 
 4589     CLI Example:
 4590 
 4591     .. code-block:: bash
 4592 
 4593         salt '*' virt.virt_type
 4594     """
 4595     return __grains__["virtual"]
 4596 
 4597 
 4598 def _is_kvm_hyper():
 4599     """
 4600     Returns a bool whether or not this node is a KVM hypervisor
 4601     """
 4602     try:
 4603         with salt.utils.files.fopen("/proc/modules") as fp_:
 4604             if "kvm_" not in salt.utils.stringutils.to_unicode(fp_.read()):
 4605                 return False
 4606     except OSError:
 4607         # No /proc/modules? Are we on Windows? Or Solaris?
 4608         return False
 4609     return "libvirtd" in __salt__["cmd.run"](__grains__["ps"])
 4610 
 4611 
 4612 def _is_xen_hyper():
 4613     """
 4614     Returns a bool whether or not this node is a Xen hypervisor
 4615     """
 4616     try:
 4617         if __grains__["virtual_subtype"] != "Xen Dom0":
 4618             return False
 4619     except KeyError:
 4620         # virtual_subtype isn't set everywhere.
 4621         return False
 4622     try:
 4623         with salt.utils.files.fopen("/proc/modules") as fp_:
 4624             if "xen_" not in salt.utils.stringutils.to_unicode(fp_.read()):
 4625                 return False
 4626     except OSError:
 4627         # No /proc/modules? Are we on Windows? Or Solaris?
 4628         return False
 4629     return "libvirtd" in __salt__["cmd.run"](__grains__["ps"])
 4630 
 4631 
 4632 def get_hypervisor():
 4633     """
 4634     Returns the name of the hypervisor running on this node or ``None``.
 4635 
 4636     Detected hypervisors:
 4637 
 4638     - kvm
 4639     - xen
 4640     - bhyve
 4641 
 4642     CLI Example:
 4643 
 4644     .. code-block:: bash
 4645 
 4646         salt '*' virt.get_hypervisor
 4647 
 4648     .. versionadded:: 2019.2.0
 4649         the function and the ``kvm``, ``xen`` and ``bhyve`` hypervisors support
 4650     """
 4651     # To add a new 'foo' hypervisor, add the _is_foo_hyper function,
 4652     # add 'foo' to the list below and add it to the docstring with a .. versionadded::
 4653     hypervisors = ["kvm", "xen", "bhyve"]
 4654     result = [
 4655         hyper
 4656         for hyper in hypervisors
 4657         if getattr(sys.modules[__name__], "_is_{}_hyper".format(hyper))()
 4658     ]
 4659     return result[0] if result else None
 4660 
 4661 
 4662 def _is_bhyve_hyper():
 4663     sysctl_cmd = "sysctl hw.vmm.create"
 4664     vmm_enabled = False
 4665     try:
 4666         stdout = subprocess.Popen(
 4667             sysctl_cmd, shell=True, stdout=subprocess.PIPE
 4668         ).communicate()[0]
 4669         vmm_enabled = len(salt.utils.stringutils.to_str(stdout).split('"')[1]) != 0
 4670     except IndexError:
 4671         pass
 4672     return vmm_enabled
 4673 
 4674 
 4675 def is_hyper():
 4676     """
 4677     Returns a bool whether or not this node is a hypervisor of any kind
 4678 
 4679     CLI Example:
 4680 
 4681     .. code-block:: bash
 4682 
 4683         salt '*' virt.is_hyper
 4684     """
 4685     if HAS_LIBVIRT:
 4686         return _is_xen_hyper() or _is_kvm_hyper() or _is_bhyve_hyper()
 4687     return False
 4688 
 4689 
 4690 def vm_cputime(vm_=None, **kwargs):
 4691     """
 4692     Return cputime used by the vms on this hyper as a
 4693     dict of dicts:
 4694 
 4695     :param vm_: domain name
 4696     :param connection: libvirt connection URI, overriding defaults
 4697 
 4698         .. versionadded:: 2019.2.0
 4699     :param username: username to connect with, overriding defaults
 4700 
 4701         .. versionadded:: 2019.2.0
 4702     :param password: password to connect with, overriding defaults
 4703 
 4704         .. versionadded:: 2019.2.0
 4705 
 4706     .. code-block:: python
 4707 
 4708         {
 4709             'your-vm': {
 4710                 'cputime': <int>,
 4711                 'cputime_percent': <int>
 4712                 },
 4713             ...
 4714             }
 4715 
 4716     If you pass a VM name in as an argument then it will return info
 4717     for just the named VM, otherwise it will return all VMs.
 4718 
 4719     CLI Example:
 4720 
 4721     .. code-block:: bash
 4722 
 4723         salt '*' virt.vm_cputime
 4724     """
 4725     conn = __get_conn(**kwargs)
 4726     host_cpus = conn.getInfo()[2]
 4727 
 4728     def _info(dom):
 4729         """
 4730         Compute cputime info of a domain
 4731         """
 4732         raw = dom.info()
 4733         vcpus = int(raw[3])
 4734         cputime = int(raw[4])
 4735         cputime_percent = 0
 4736         if cputime:
 4737             # Divide by vcpus to always return a number between 0 and 100
 4738             cputime_percent = (1.0e-7 * cputime / host_cpus) / vcpus
 4739         return {
 4740             "cputime": int(raw[4]),
 4741             "cputime_percent": int("{:.0f}".format(cputime_percent)),
 4742         }
 4743 
 4744     info = {}
 4745     if vm_:
 4746         info[vm_] = _info(_get_domain(conn, vm_))
 4747     else:
 4748         for domain in _get_domain(conn, iterable=True):
 4749             info[domain.name()] = _info(domain)
 4750     conn.close()
 4751     return info
 4752 
 4753 
 4754 def vm_netstats(vm_=None, **kwargs):
 4755     """
 4756     Return combined network counters used by the vms on this hyper as a
 4757     dict of dicts:
 4758 
 4759     :param vm_: domain name
 4760     :param connection: libvirt connection URI, overriding defaults
 4761 
 4762         .. versionadded:: 2019.2.0
 4763     :param username: username to connect with, overriding defaults
 4764 
 4765         .. versionadded:: 2019.2.0
 4766     :param password: password to connect with, overriding defaults
 4767 
 4768         .. versionadded:: 2019.2.0
 4769 
 4770     .. code-block:: python
 4771 
 4772         {
 4773             'your-vm': {
 4774                 'rx_bytes'   : 0,
 4775                 'rx_packets' : 0,
 4776                 'rx_errs'    : 0,
 4777                 'rx_drop'    : 0,
 4778                 'tx_bytes'   : 0,
 4779                 'tx_packets' : 0,
 4780                 'tx_errs'    : 0,
 4781                 'tx_drop'    : 0
 4782                 },
 4783             ...
 4784             }
 4785 
 4786     If you pass a VM name in as an argument then it will return info
 4787     for just the named VM, otherwise it will return all VMs.
 4788 
 4789     CLI Example:
 4790 
 4791     .. code-block:: bash
 4792 
 4793         salt '*' virt.vm_netstats
 4794     """
 4795 
 4796     def _info(dom):
 4797         """
 4798         Compute network stats of a domain
 4799         """
 4800         nics = _get_nics(dom)
 4801         ret = {
 4802             "rx_bytes": 0,
 4803             "rx_packets": 0,
 4804             "rx_errs": 0,
 4805             "rx_drop": 0,
 4806             "tx_bytes": 0,
 4807             "tx_packets": 0,
 4808             "tx_errs": 0,
 4809             "tx_drop": 0,
 4810         }
 4811         for attrs in nics.values():
 4812             if "target" in attrs:
 4813                 dev = attrs["target"]
 4814                 stats = dom.interfaceStats(dev)
 4815                 ret["rx_bytes"] += stats[0]
 4816                 ret["rx_packets"] += stats[1]
 4817                 ret["rx_errs"] += stats[2]
 4818                 ret["rx_drop"] += stats[3]
 4819                 ret["tx_bytes"] += stats[4]
 4820                 ret["tx_packets"] += stats[5]
 4821                 ret["tx_errs"] += stats[6]
 4822                 ret["tx_drop"] += stats[7]
 4823 
 4824         return ret
 4825 
 4826     info = {}
 4827     conn = __get_conn(**kwargs)
 4828     if vm_:
 4829         info[vm_] = _info(_get_domain(conn, vm_))
 4830     else:
 4831         for domain in _get_domain(conn, iterable=True):
 4832             info[domain.name()] = _info(domain)
 4833     conn.close()
 4834     return info
 4835 
 4836 
 4837 def vm_diskstats(vm_=None, **kwargs):
 4838     """
 4839     Return disk usage counters used by the vms on this hyper as a
 4840     dict of dicts:
 4841 
 4842     :param vm_: domain name
 4843     :param connection: libvirt connection URI, overriding defaults
 4844 
 4845         .. versionadded:: 2019.2.0
 4846     :param username: username to connect with, overriding defaults
 4847 
 4848         .. versionadded:: 2019.2.0
 4849     :param password: password to connect with, overriding defaults
 4850 
 4851         .. versionadded:: 2019.2.0
 4852 
 4853     .. code-block:: python
 4854 
 4855         {
 4856             'your-vm': {
 4857                 'rd_req'   : 0,
 4858                 'rd_bytes' : 0,
 4859                 'wr_req'   : 0,
 4860                 'wr_bytes' : 0,
 4861                 'errs'     : 0
 4862                 },
 4863             ...
 4864             }
 4865 
 4866     If you pass a VM name in as an argument then it will return info
 4867     for just the named VM, otherwise it will return all VMs.
 4868 
 4869     CLI Example:
 4870 
 4871     .. code-block:: bash
 4872 
 4873         salt '*' virt.vm_diskstats
 4874     """
 4875 
 4876     def get_disk_devs(dom):
 4877         """
 4878         Extract the disk devices names from the domain XML definition
 4879         """
 4880         doc = ElementTree.fromstring(get_xml(dom, **kwargs))
 4881         return [target.get("dev") for target in doc.findall("devices/disk/target")]
 4882 
 4883     def _info(dom):
 4884         """
 4885         Compute the disk stats of a domain
 4886         """
 4887         # Do not use get_disks, since it uses qemu-img and is very slow
 4888         # and unsuitable for any sort of real time statistics
 4889         disks = get_disk_devs(dom)
 4890         ret = {"rd_req": 0, "rd_bytes": 0, "wr_req": 0, "wr_bytes": 0, "errs": 0}
 4891         for disk in disks:
 4892             stats = dom.blockStats(disk)
 4893             ret["rd_req"] += stats[0]
 4894             ret["rd_bytes"] += stats[1]
 4895             ret["wr_req"] += stats[2]
 4896             ret["wr_bytes"] += stats[3]
 4897             ret["errs"] += stats[4]
 4898 
 4899         return ret
 4900 
 4901     info = {}
 4902     conn = __get_conn(**kwargs)
 4903     if vm_:
 4904         info[vm_] = _info(_get_domain(conn, vm_))
 4905     else:
 4906         # Can not run function blockStats on inactive VMs
 4907         for domain in _get_domain(conn, iterable=True, inactive=False):
 4908             info[domain.name()] = _info(domain)
 4909     conn.close()
 4910     return info
 4911 
 4912 
 4913 def _parse_snapshot_description(vm_snapshot, unix_time=False):
 4914     """
 4915     Parse XML doc and return a dict with the status values.
 4916 
 4917     :param vm_snapshot: the libvirt snapshot object to parse
 4918     :return: a dict with the snapshot name, creation time, state and current flag
 4919     """
 4920     ret = dict()
 4921     tree = ElementTree.fromstring(vm_snapshot.getXMLDesc())
 4922     for node in tree:
 4923         if node.tag == "name":
 4924             ret["name"] = node.text
 4925         elif node.tag == "creationTime":
 4926             ret["created"] = (
 4927                 datetime.datetime.fromtimestamp(float(node.text)).isoformat(" ")
 4928                 if not unix_time
 4929                 else float(node.text)
 4930             )
 4931         elif node.tag == "state":
 4932             ret["running"] = node.text == "running"
 4933 
 4934     ret["current"] = vm_snapshot.isCurrent() == 1
 4935 
 4936     return ret
 4937 
 4938 
 4939 def list_snapshots(domain=None, **kwargs):
 4940     """
 4941     List available snapshots for a certain vm or for all of them.
 4942 
 4943     :param domain: domain name
 4944     :param connection: libvirt connection URI, overriding defaults
 4945 
 4946         .. versionadded:: 2019.2.0
 4947     :param username: username to connect with, overriding defaults
 4948 
 4949         .. versionadded:: 2019.2.0
 4950     :param password: password to connect with, overriding defaults
 4951 
 4952         .. versionadded:: 2019.2.0
 4953 
 4954     .. versionadded:: 2016.3.0
 4955 
 4956     CLI Example:
 4957 
 4958     .. code-block:: bash
 4959 
 4960         salt '*' virt.list_snapshots
 4961         salt '*' virt.list_snapshots <domain>
 4962     """
 4963     ret = dict()
 4964     conn = __get_conn(**kwargs)
 4965     for vm_domain in _get_domain(conn, *(domain and [domain] or list()), iterable=True):
 4966         ret[vm_domain.name()] = [
 4967             _parse_snapshot_description(snap) for snap in vm_domain.listAllSnapshots()
 4968         ] or "N/A"
 4969 
 4970     conn.close()
 4971     return ret
 4972 
 4973 
 4974 def snapshot(domain, name=None, suffix=None, **kwargs):
 4975     """
 4976     Create a snapshot of a VM.
 4977 
 4978     :param domain: domain name
 4979     :param name: Name of the snapshot. If the name is omitted, the original domain
 4980                  name with an ISO 8601 time suffix will be used.
 4981 
 4982     :param suffix: Add a suffix to the new name. Useful in states, where such snapshots
 4983                    can be distinguished from manually created ones.
 4984     :param connection: libvirt connection URI, overriding defaults
 4985 
 4986         .. versionadded:: 2019.2.0
 4987     :param username: username to connect with, overriding defaults
 4988 
 4989         .. versionadded:: 2019.2.0
 4990     :param password: password to connect with, overriding defaults
 4991 
 4992         .. versionadded:: 2019.2.0
 4993 
 4994     .. versionadded:: 2016.3.0
 4995 
 4996     CLI Example:
 4997 
 4998     .. code-block:: bash
 4999 
 5000         salt '*' virt.snapshot <domain>
 5001     """
 5002     if name and name.lower() == domain.lower():
 5003         raise CommandExecutionError(
 5004             "Virtual Machine {name} is already defined. "
 5005             "Please choose another name for the snapshot".format(name=name)
 5006         )
 5007     if not name:
 5008         name = "{domain}-{tsnap}".format(
 5009             domain=domain, tsnap=time.strftime("%Y%m%d-%H%M%S", time.localtime())
 5010         )
 5011 
 5012     if suffix:
 5013         name = "{name}-{suffix}".format(name=name, suffix=suffix)
 5014 
 5015     doc = ElementTree.Element("domainsnapshot")
 5016     n_name = ElementTree.SubElement(doc, "name")
 5017     n_name.text = name
 5018 
 5019     conn = __get_conn(**kwargs)
 5020     _get_domain(conn, domain).snapshotCreateXML(
 5021         salt.utils.stringutils.to_str(ElementTree.tostring(doc))
 5022     )
 5023     conn.close()
 5024 
 5025     return {"name": name}
 5026 
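The `<domainsnapshot>` document built in `snapshot()` is minimal: only a `<name>` child. A standalone sketch of the same construction (no libvirt connection required; the function name here is illustrative):

```python
import xml.etree.ElementTree as ElementTree

def build_snapshot_xml(name):
    # Build the same minimal <domainsnapshot> document that snapshot()
    # feeds to snapshotCreateXML(); libvirt fills in timestamps and state.
    doc = ElementTree.Element("domainsnapshot")
    n_name = ElementTree.SubElement(doc, "name")
    n_name.text = name
    return ElementTree.tostring(doc, encoding="unicode")

print(build_snapshot_xml("vm1-20201118-120000"))
# → <domainsnapshot><name>vm1-20201118-120000</name></domainsnapshot>
```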
 5027 
 5028 def delete_snapshots(name, *names, **kwargs):
 5029     """
 5030     Delete one or more snapshots of the given VM.
 5031 
 5032     :param name: domain name
 5033     :param names: names of the snapshots to remove
 5034     :param connection: libvirt connection URI, overriding defaults
 5035 
 5036         .. versionadded:: 2019.2.0
 5037     :param username: username to connect with, overriding defaults
 5038 
 5039         .. versionadded:: 2019.2.0
 5040     :param password: password to connect with, overriding defaults
 5041 
 5042         .. versionadded:: 2019.2.0
 5043 
 5044     .. versionadded:: 2016.3.0
 5045 
 5046     CLI Example:
 5047 
 5048     .. code-block:: bash
 5049 
 5050         salt '*' virt.delete_snapshots <domain> all=True
 5051         salt '*' virt.delete_snapshots <domain> <snapshot>
 5052         salt '*' virt.delete_snapshots <domain> <snapshot1> <snapshot2> ...
 5053     """
 5054     deleted = dict()
 5055     conn = __get_conn(**kwargs)
 5056     domain = _get_domain(conn, name)
 5057     for snap in domain.listAllSnapshots():
 5058         if snap.getName() in names or not names:
 5059             deleted[snap.getName()] = _parse_snapshot_description(snap)
 5060             snap.delete()
 5062     available = {
 5063         name: [_parse_snapshot_description(snap) for snap in domain.listAllSnapshots()]
 5064         or "N/A"
 5065     }
 5066     conn.close()
 5067 
 5068     return {"available": available, "deleted": deleted}
 5069 
 5070 
 5071 def revert_snapshot(name, vm_snapshot=None, cleanup=False, **kwargs):
 5072     """
 5073     Revert the domain to the snapshot preceding the current one (if available) or to a specific one.
 5074 
 5075     :param name: domain name
 5076     :param vm_snapshot: name of the snapshot to revert
 5077     :param cleanup: Remove all snapshots newer than the one reverted to. Values: True or False (default False).
 5078     :param connection: libvirt connection URI, overriding defaults
 5079 
 5080         .. versionadded:: 2019.2.0
 5081     :param username: username to connect with, overriding defaults
 5082 
 5083         .. versionadded:: 2019.2.0
 5084     :param password: password to connect with, overriding defaults
 5085 
 5086         .. versionadded:: 2019.2.0
 5087 
 5088     .. versionadded:: 2016.3.0
 5089 
 5090     CLI Example:
 5091 
 5092     .. code-block:: bash
 5093 
 5094         salt '*' virt.revert_snapshot <domain>
 5095         salt '*' virt.revert_snapshot <domain> <snapshot>
 5096     """
 5097     ret = dict()
 5098     conn = __get_conn(**kwargs)
 5099     domain = _get_domain(conn, name)
 5100     snapshots = domain.listAllSnapshots()
 5101 
 5102     _snapshots = list()
 5103     for snap_obj in snapshots:
 5104         _snapshots.append(
 5105             {
 5106                 "idx": _parse_snapshot_description(snap_obj, unix_time=True)["created"],
 5107                 "ptr": snap_obj,
 5108             }
 5109         )
 5110     snapshots = [
 5111         w_ptr["ptr"]
 5112         for w_ptr in sorted(_snapshots, key=lambda item: item["idx"], reverse=True)
 5113     ]
 5114     del _snapshots
 5115 
 5116     if not snapshots:
 5117         conn.close()
 5118         raise CommandExecutionError("No snapshots found")
 5119     elif len(snapshots) == 1:
 5120         conn.close()
 5121         raise CommandExecutionError(
 5122             "Cannot revert to itself: only one snapshot is available."
 5123         )
 5124 
 5125     snap = None
 5126     for p_snap in snapshots:
 5127         if not vm_snapshot:
 5128             if p_snap.isCurrent() and snapshots[snapshots.index(p_snap) + 1 :]:
 5129                 snap = snapshots[snapshots.index(p_snap) + 1 :][0]
 5130                 break
 5131         elif p_snap.getName() == vm_snapshot:
 5132             snap = p_snap
 5133             break
 5134 
 5135     if not snap:
 5136         conn.close()
 5137         raise CommandExecutionError(
 5138             'Snapshot "{}" not found'.format(vm_snapshot)
 5139             if vm_snapshot
 5140             else "No more previous snapshots available"
 5141         )
 5142     elif snap.isCurrent():
 5143         conn.close()
 5144         raise CommandExecutionError("Cannot revert to the currently running snapshot.")
 5145 
 5146     domain.revertToSnapshot(snap)
 5147     ret["reverted"] = snap.getName()
 5148 
 5149     if cleanup:
 5150         delete = list()
 5151         for p_snap in snapshots:
 5152             if p_snap.getName() != snap.getName():
 5153                 delete.append(p_snap.getName())
 5154                 p_snap.delete()
 5155             else:
 5156                 break
 5157         ret["deleted"] = delete
 5158     else:
 5159         ret["deleted"] = "N/A"
 5160 
 5161     conn.close()
 5162 
 5163     return ret
 5164 
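The core of `revert_snapshot` is ordering snapshots newest-first by creation time, so that the entry right after the current one is the revert target. A standalone sketch with plain `(name, created)` tuples standing in for libvirt snapshot objects (the sample names are illustrative):

```python
# Illustrative stand-ins for libvirt snapshot objects: (name, created) pairs.
snaps = [("base", 100.0), ("mid", 200.0), ("current", 300.0)]

# Sort newest first, as revert_snapshot() does with the "created" field
# of _parse_snapshot_description(snap, unix_time=True).
ordered = sorted(snaps, key=lambda item: item[1], reverse=True)
names = [name for name, _ in ordered]
assert names == ["current", "mid", "base"]

# With "current" active, the revert target is the next (older) snapshot.
target = names[names.index("current") + 1]
assert target == "mid"
```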
 5165 
 5166 def _caps_add_machine(machines, node):
 5167     """
 5168     Parse the <machine> element of the host capabilities and add it
 5169     to the machines list.
 5170     """
 5171     maxcpus = node.get("maxCpus")
 5172     canonical = node.get("canonical")
 5173     name = node.text
 5174 
 5175     alternate_name = ""
 5176     if canonical:
 5177         alternate_name = name
 5178         name = canonical
 5179 
 5180     machine = machines.get(name)
 5181     if not machine:
 5182         machine = {"alternate_names": []}
 5183         if maxcpus:
 5184             machine["maxcpus"] = int(maxcpus)
 5185         machines[name] = machine
 5186     if alternate_name:
 5187         machine["alternate_names"].append(alternate_name)
 5188 
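The canonical/alternate-name handling above can be seen on a small sample. This is a close standalone sketch of the same logic (it uses `setdefault` instead of the explicit `get`/assign dance, but produces the same structure for this input):

```python
import xml.etree.ElementTree as ElementTree

def caps_add_machine(machines, node):
    # Canonical name becomes the dict key; the original name is
    # recorded as an alternate, mirroring _caps_add_machine.
    maxcpus = node.get("maxCpus")
    canonical = node.get("canonical")
    name = node.text
    alternate = ""
    if canonical:
        alternate, name = name, canonical
    machine = machines.setdefault(name, {"alternate_names": []})
    if maxcpus and "maxcpus" not in machine:
        machine["maxcpus"] = int(maxcpus)
    if alternate:
        machine["alternate_names"].append(alternate)

machines = {}
xml = '<machine canonical="pc-i440fx-2.6" maxCpus="255">pc</machine>'
caps_add_machine(machines, ElementTree.fromstring(xml))
assert machines == {"pc-i440fx-2.6": {"alternate_names": ["pc"], "maxcpus": 255}}
```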
 5189 
 5190 def _parse_caps_guest(guest):
 5191     """
 5192     Parse the <guest> element of the connection capabilities XML
 5193     """
 5194     arch_node = guest.find("arch")
 5195     result = {
 5196         "os_type": guest.find("os_type").text,
 5197         "arch": {"name": arch_node.get("name"), "machines": {}, "domains": {}},
 5198     }
 5199 
 5200     child = None
 5201     for child in arch_node:
 5202         if child.tag == "wordsize":
 5203             result["arch"]["wordsize"] = int(child.text)
 5204         elif child.tag == "emulator":
 5205             result["arch"]["emulator"] = child.text
 5206         elif child.tag == "machine":
 5207             _caps_add_machine(result["arch"]["machines"], child)
 5208         elif child.tag == "domain":
 5209             domain_type = child.get("type")
 5210             domain = {"emulator": None, "machines": {}}
 5211             emulator_node = child.find("emulator")
 5212             if emulator_node is not None:
 5213                 domain["emulator"] = emulator_node.text
 5214             for machine in child.findall("machine"):
 5215                 _caps_add_machine(domain["machines"], machine)
 5216             result["arch"]["domains"][domain_type] = domain
 5217 
 5218     # Note that some features have no default and toggle attributes.
 5219     # This may not be a perfect match, but represent them as enabled by default
 5220     # and not toggleable.
 5221     # Some guests may also have no feature at all (xen pv for instance)
 5222     features_nodes = guest.find("features")
 5223     if features_nodes is not None and child is not None:
 5224         result["features"] = {
 5225             child.tag: {
 5226                 "toggle": child.get("toggle", "no") == "yes",
 5227                 "default": child.get("default", "on") == "on",
 5228             }
 5229             for child in features_nodes
 5230         }
 5231     return result
 5232 
 5233 
 5234 def _parse_caps_cell(cell):
 5235     """
 5236     Parse the <cell> nodes of the connection capabilities XML output.
 5237     """
 5238     result = {"id": int(cell.get("id"))}
 5239 
 5240     mem_node = cell.find("memory")
 5241     if mem_node is not None:
 5242         unit = mem_node.get("unit", "KiB")
 5243         memory = mem_node.text
 5244         result["memory"] = "{} {}".format(memory, unit)
 5245 
 5246     pages = [
 5247         {
 5248             "size": "{} {}".format(page.get("size"), page.get("unit", "KiB")),
 5249             "available": int(page.text),
 5250         }
 5251         for page in cell.findall("pages")
 5252     ]
 5253     if pages:
 5254         result["pages"] = pages
 5255 
 5256     distances = {
 5257         int(distance.get("id")): int(distance.get("value"))
 5258         for distance in cell.findall("distances/sibling")
 5259     }
 5260     if distances:
 5261         result["distances"] = distances
 5262 
 5263     cpus = []
 5264     for cpu_node in cell.findall("cpus/cpu"):
 5265         cpu = {"id": int(cpu_node.get("id"))}
 5266         socket_id = cpu_node.get("socket_id")
 5267         if socket_id:
 5268             cpu["socket_id"] = int(socket_id)
 5269 
 5270         core_id = cpu_node.get("core_id")
 5271         if core_id:
 5272             cpu["core_id"] = int(core_id)
 5273         siblings = cpu_node.get("siblings")
 5274         if siblings:
 5275             cpu["siblings"] = siblings
 5276         cpus.append(cpu)
 5277     if cpus:
 5278         result["cpus"] = cpus
 5279 
 5280     return result
 5281 
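The NUMA distance table parsed above is a plain dict comprehension over `<distances>/<sibling>` elements. A standalone sketch on a minimal `<cell>` fragment:

```python
import xml.etree.ElementTree as ElementTree

cell = ElementTree.fromstring(
    """
    <cell id="0">
      <memory unit="KiB">4194304</memory>
      <distances>
        <sibling id="0" value="10"/>
        <sibling id="1" value="21"/>
      </distances>
    </cell>
    """
)
# Same shape _parse_caps_cell produces for the distances table:
distances = {
    int(s.get("id")): int(s.get("value"))
    for s in cell.findall("distances/sibling")
}
assert distances == {0: 10, 1: 21}
```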
 5282 
 5283 def _parse_caps_bank(bank):
 5284     """
 5285     Parse the <bank> element of the connection capabilities XML.
 5286     """
 5287     result = {
 5288         "id": int(bank.get("id")),
 5289         "level": int(bank.get("level")),
 5290         "type": bank.get("type"),
 5291         "size": "{} {}".format(bank.get("size"), bank.get("unit")),
 5292         "cpus": bank.get("cpus"),
 5293     }
 5294 
 5295     controls = []
 5296     for control in bank.findall("control"):
 5297         unit = control.get("unit")
 5298         result_control = {
 5299             "granularity": "{} {}".format(control.get("granularity"), unit),
 5300             "type": control.get("type"),
 5301             "maxAllocs": int(control.get("maxAllocs")),
 5302         }
 5303 
 5304         minimum = control.get("min")
 5305         if minimum:
 5306             result_control["min"] = "{} {}".format(minimum, unit)
 5307         controls.append(result_control)
 5308     if controls:
 5309         result["controls"] = controls
 5310 
 5311     return result
 5312 
 5313 
 5314 def _parse_caps_host(host):
 5315     """
 5316     Parse the <host> element of the connection capabilities XML.
 5317     """
 5318     result = {}
 5319     for child in host:
 5320 
 5321         if child.tag == "uuid":
 5322             result["uuid"] = child.text
 5323 
 5324         elif child.tag == "cpu":
 5325             cpu = {
 5326                 "arch": child.find("arch").text
 5327                 if child.find("arch") is not None
 5328                 else None,
 5329                 "model": child.find("model").text
 5330                 if child.find("model") is not None
 5331                 else None,
 5332                 "vendor": child.find("vendor").text
 5333                 if child.find("vendor") is not None
 5334                 else None,
 5335                 "features": [
 5336                     feature.get("name") for feature in child.findall("feature")
 5337                 ],
 5338                 "pages": [
 5339                     {"size": "{} {}".format(page.get("size"), page.get("unit", "KiB"))}
 5340                     for page in child.findall("pages")
 5341                 ],
 5342             }
 5343             # Parse the cpu tag
 5344             microcode = child.find("microcode")
 5345             if microcode is not None:
 5346                 cpu["microcode"] = microcode.get("version")
 5347 
 5348             topology = child.find("topology")
 5349             if topology is not None:
 5350                 cpu["sockets"] = int(topology.get("sockets"))
 5351                 cpu["cores"] = int(topology.get("cores"))
 5352                 cpu["threads"] = int(topology.get("threads"))
 5353             result["cpu"] = cpu
 5354 
 5355         elif child.tag == "power_management":
 5356             result["power_management"] = [node.tag for node in child]
 5357 
 5358         elif child.tag == "migration_features":
 5359             result["migration"] = {
 5360                 "live": child.find("live") is not None,
 5361                 "transports": [
 5362                     node.text for node in child.findall("uri_transports/uri_transport")
 5363                 ],
 5364             }
 5365 
 5366         elif child.tag == "topology":
 5367             result["topology"] = {
 5368                 "cells": [
 5369                     _parse_caps_cell(cell) for cell in child.findall("cells/cell")
 5370                 ]
 5371             }
 5372 
 5373         elif child.tag == "cache":
 5374             result["cache"] = {
 5375                 "banks": [_parse_caps_bank(bank) for bank in child.findall("bank")]
 5376             }
 5377 
 5378     result["security"] = [
 5379         {
 5380             "model": secmodel.find("model").text
 5381             if secmodel.find("model") is not None
 5382             else None,
 5383             "doi": secmodel.find("doi").text
 5384             if secmodel.find("doi") is not None
 5385             else None,
 5386             "baselabels": [
 5387                 {"type": label.get("type"), "label": label.text}
 5388                 for label in secmodel.findall("baselabel")
 5389             ],
 5390         }
 5391         for secmodel in host.findall("secmodel")
 5392     ]
 5393 
 5394     return result
 5395 
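The repeated `node.text if node is not None else None` pattern used for `<cpu>` and `<secmodel>` children above boils down to a small helper. A standalone sketch (the helper name is illustrative, not part of the module):

```python
import xml.etree.ElementTree as ElementTree

def text_or_none(parent, tag):
    # Equivalent of the repeated "node.text if node is not None else None"
    # pattern in _parse_caps_host.
    node = parent.find(tag)
    return node.text if node is not None else None

host = ElementTree.fromstring(
    "<cpu><arch>x86_64</arch><vendor>Intel</vendor></cpu>"
)
assert text_or_none(host, "arch") == "x86_64"
assert text_or_none(host, "model") is None
```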
 5396 
 5397 def _capabilities(conn):
 5398     """
 5399     Return the hypervisor connection capabilities.
 5400 
 5401     :param conn: opened libvirt connection to use
 5402     """
 5403     caps = ElementTree.fromstring(conn.getCapabilities())
 5404 
 5405     return {
 5406         "host": _parse_caps_host(caps.find("host")),
 5407         "guests": [_parse_caps_guest(guest) for guest in caps.findall("guest")],
 5408     }
 5409 
 5410 
 5411 def capabilities(**kwargs):
 5412     """
 5413     Return the hypervisor connection capabilities.
 5414 
 5415     :param connection: libvirt connection URI, overriding defaults
 5416     :param username: username to connect with, overriding defaults
 5417     :param password: password to connect with, overriding defaults
 5418 
 5419     .. versionadded:: 2019.2.0
 5420 
 5421     CLI Example:
 5422 
 5423     .. code-block:: bash
 5424 
 5425         salt '*' virt.capabilities
 5426     """
 5427     conn = __get_conn(**kwargs)
 5428     try:
 5429         caps = _capabilities(conn)
 5430     except libvirt.libvirtError as err:
 5431         raise CommandExecutionError(str(err))
 5432     finally:
 5433         conn.close()
 5434     return caps
 5435 
 5436 
 5437 def _parse_caps_enum(node):
 5438     """
 5439     Return a tuple containing the name of the enum and the possible values
 5440     """
 5441     return (node.get("name"), [value.text for value in node.findall("value")])
 5442 
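The `(name, values)` tuple returned by `_parse_caps_enum` can be seen on a typical domain-capabilities fragment. A standalone sketch:

```python
import xml.etree.ElementTree as ElementTree

node = ElementTree.fromstring(
    '<enum name="diskDevice"><value>disk</value><value>cdrom</value></enum>'
)
# _parse_caps_enum returns (name, [values]):
name, values = node.get("name"), [v.text for v in node.findall("value")]
assert (name, values) == ("diskDevice", ["disk", "cdrom"])
```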
 5443 
 5444 def _parse_caps_cpu(node):
 5445     """
 5446     Parse the <cpu> element of the domain capabilities
 5447     """
 5448     result = {}
 5449     for mode in node.findall("mode"):
 5450         if not mode.get("supported") == "yes":
 5451             continue
 5452 
 5453         name = mode.get("name")
 5454         if name == "host-passthrough":
 5455             result[name] = True
 5456 
 5457         elif name == "host-model":
 5458             host_model = {}
 5459             model_node = mode.find("model")
 5460             if model_node is not None:
 5461                 model = {"name": model_node.text}
 5462 
 5463                 vendor_id = model_node.get("vendor_id")
 5464                 if vendor_id:
 5465                     model["vendor_id"] = vendor_id
 5466 
 5467                 fallback = model_node.get("fallback")
 5468                 if fallback:
 5469                     model["fallback"] = fallback
 5470                 host_model["model"] = model
 5471 
 5472             vendor = (
 5473                 mode.find("vendor").text if mode.find("vendor") is not None else None
 5474             )
 5475             if vendor:
 5476                 host_model["vendor"] = vendor
 5477 
 5478             features = {
 5479                 feature.get("name"): feature.get("policy")
 5480                 for feature in mode.findall("feature")
 5481             }
 5482             if features:
 5483                 host_model["features"] = features
 5484 
 5485             result[name] = host_model
 5486 
 5487         elif name == "custom":
 5488             custom_model = {}
 5489             models = {
 5490                 model.text: model.get("usable") for model in mode.findall("model")
 5491             }
 5492             if models:
 5493                 custom_model["models"] = models
 5494             result[name] = custom_model
 5495 
 5496     return result
 5497 
 5498 
 5499 def _parse_caps_devices_features(node):
 5500     """
 5501     Parse the devices or features list of the domain capabilities.
 5502     """
 5503     result = {}
 5504     for child in node:
 5505         if child.get("supported") == "yes":
 5506             enums = [_parse_caps_enum(node) for node in child.findall("enum")]
 5507             result[child.tag] = {item[0]: item[1] for item in enums if item[0]}
 5508     return result
 5509 
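The filtering done above, where unsupported children are dropped and each supported child's enums become a name-to-values map, can be reproduced standalone on a small `<devices>` fragment:

```python
import xml.etree.ElementTree as ElementTree

devices = ElementTree.fromstring(
    """
    <devices>
      <disk supported="yes">
        <enum name="bus"><value>ide</value><value>virtio</value></enum>
      </disk>
      <rng supported="no"/>
    </devices>
    """
)
# Unsupported children are dropped; enums become name -> values maps,
# mirroring _parse_caps_devices_features.
result = {}
for child in devices:
    if child.get("supported") == "yes":
        result[child.tag] = {
            e.get("name"): [v.text for v in e.findall("value")]
            for e in child.findall("enum")
        }
assert result == {"disk": {"bus": ["ide", "virtio"]}}
```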
 5510 
 5511 def _parse_caps_loader(node):
 5512     """
 5513     Parse the <loader> element of the domain capabilities.
 5514     """
 5515     enums = [_parse_caps_enum(enum) for enum in node.findall("enum")]
 5516     result = {item[0]: item[1] for item in enums if item[0]}
 5517 
 5518     values = [child.text for child in node.findall("value")]
 5519 
 5520     if values:
 5521         result["values"] = values
 5522 
 5523     return result
 5524 
 5525 
 5526 def _parse_domain_caps(caps):
 5527     """
 5528     Parse the XML document of domain capabilities into a structure.
 5529     """
 5530     result = {
 5531         "emulator": caps.find("path").text if caps.find("path") is not None else None,
 5532         "domain": caps.find("domain").text if caps.find("domain") is not None else None,
 5533         "machine": caps.find("machine").text
 5534         if caps.find("machine") is not None
 5535         else None,
 5536         "arch": caps.find("arch").text if caps.find("arch") is not None else None,
 5537     }
 5538 
 5539     for child in caps:
 5540         if child.tag == "vcpu" and child.get("max"):
 5541             result["max_vcpus"] = int(child.get("max"))
 5542 
 5543         elif child.tag == "iothreads":
 5544             result["iothreads"] = child.get("supported") == "yes"
 5545 
 5546         elif child.tag == "os":
 5547             result["os"] = {}
 5548             loader_node = child.find("loader")
 5549             if loader_node is not None and loader_node.get("supported") == "yes":
 5550                 loader = _parse_caps_loader(loader_node)
 5551                 result["os"]["loader"] = loader
 5552 
 5553         elif child.tag == "cpu":
 5554             cpu = _parse_caps_cpu(child)
 5555             if cpu:
 5556                 result["cpu"] = cpu
 5557 
 5558         elif child.tag == "devices":
 5559             devices = _parse_caps_devices_features(child)
 5560             if devices:
 5561                 result["devices"] = devices
 5562 
 5563         elif child.tag == "features":
 5564             features = _parse_caps_devices_features(child)
 5565             if features:
 5566                 result["features"] = features
 5567 
 5568     return result
 5569 
 5570 
 5571 def domain_capabilities(emulator=None, arch=None, machine=None, domain=None, **kwargs):
 5572     """
 5573     Return the domain capabilities given an emulator, architecture, machine or virtualization type.
 5574 
 5575     .. versionadded:: 2019.2.0
 5576 
 5577     :param emulator: return the capabilities for the given emulator binary
 5578     :param arch: return the capabilities for the given CPU architecture
 5579     :param machine: return the capabilities for the given emulated machine type
 5580     :param domain: return the capabilities for the given virtualization type.
 5581     :param connection: libvirt connection URI, overriding defaults
 5582     :param username: username to connect with, overriding defaults
 5583     :param password: password to connect with, overriding defaults
 5584 
 5585     The possible values for emulator, arch, machine and domain can be found in
 5586     the host capabilities output.
 5587 
 5588     If none of the parameters is provided, the libvirt default capabilities are returned.
 5589 
 5590     CLI Example:
 5591 
 5592     .. code-block:: bash
 5593 
 5594         salt '*' virt.domain_capabilities arch='x86_64' domain='kvm'
 5595 
 5596     """
 5597     conn = __get_conn(**kwargs)
 5598     result = []
 5599     try:
 5600         caps = ElementTree.fromstring(
 5601             conn.getDomainCapabilities(emulator, arch, machine, domain, 0)
 5602         )
 5603         result = _parse_domain_caps(caps)
 5604     finally:
 5605         conn.close()
 5606 
 5607     return result
 5608 
 5609 
 5610 def all_capabilities(**kwargs):
 5611     """
 5612     Return the host and domain capabilities in a single call.
 5613 
 5614     .. versionadded:: 3001
 5615 
 5616     :param connection: libvirt connection URI, overriding defaults
 5617     :param username: username to connect with, overriding defaults
 5618     :param password: password to connect with, overriding defaults
 5619 
 5620     CLI Example:
 5621 
 5622     .. code-block:: bash
 5623 
 5624         salt '*' virt.all_capabilities
 5625 
 5626     """
 5627     conn = __get_conn(**kwargs)
 5628     try:
 5629         host_caps = ElementTree.fromstring(conn.getCapabilities())
 5630         domains = [
 5631             [
 5632                 (guest.get("arch", {}).get("name", None), key)
 5633                 for key in guest.get("arch", {}).get("domains", {}).keys()
 5634             ]
 5635             for guest in [
 5636                 _parse_caps_guest(guest) for guest in host_caps.findall("guest")
 5637             ]
 5638         ]
 5639         flattened = [pair for item in domains for pair in item]
 5640         result = {
 5641             "host": {
 5642                 "host": _parse_caps_host(host_caps.find("host")),
 5643                 "guests": [
 5644                     _parse_caps_guest(guest) for guest in host_caps.findall("guest")
 5645                 ],
 5646             },
 5647             "domains": [
 5648                 _parse_domain_caps(
 5649                     ElementTree.fromstring(
 5650                         conn.getDomainCapabilities(None, arch, None, domain)
 5651                     )
 5652                 )
 5653                 for (arch, domain) in flattened
 5654             ],
 5655         }
 5656         return result
 5657     finally:
 5658         conn.close()
 5659 
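The flattening step in `all_capabilities` turns per-guest lists of `(arch, domain)` pairs into one flat list, which is a plain double comprehension. A standalone sketch with sample data:

```python
# Per-guest lists of (arch, domain) pairs, as produced from the parsed
# host capabilities; the sample values are illustrative.
domains = [
    [("x86_64", "qemu"), ("x86_64", "kvm")],
    [("i686", "qemu")],
]
flattened = [pair for item in domains for pair in item]
assert flattened == [("x86_64", "qemu"), ("x86_64", "kvm"), ("i686", "qemu")]
```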
 5660 
 5661 def cpu_baseline(full=False, migratable=False, out="libvirt", **kwargs):
 5662     """
 5663     Return the optimal 'custom' CPU baseline config for VMs on this minion
 5664 
 5665     .. versionadded:: 2016.3.0
 5666 
 5667     :param full: Return all CPU features rather than the ones on top of the closest CPU model
 5668     :param migratable: Exclude CPU features that are unmigratable (libvirt 2.13+)
 5669     :param out: 'libvirt' (default) for usable libvirt XML definition, 'salt' for nice dict
 5670     :param connection: libvirt connection URI, overriding defaults
 5671 
 5672         .. versionadded:: 2019.2.0
 5673     :param username: username to connect with, overriding defaults
 5674 
 5675         .. versionadded:: 2019.2.0
 5676     :param password: password to connect with, overriding defaults
 5677 
 5678         .. versionadded:: 2019.2.0
 5679 
 5680     CLI Example:
 5681 
 5682     .. code-block:: bash
 5683 
 5684         salt '*' virt.cpu_baseline
 5685 
 5686     """
 5687     conn = __get_conn(**kwargs)
 5688     caps = ElementTree.fromstring(conn.getCapabilities())
 5689     cpu = caps.find("host/cpu")
 5690     log.debug(
 5691         "Host CPU model definition: %s",
 5692         salt.utils.stringutils.to_str(ElementTree.tostring(cpu)),
 5693     )
 5694 
 5695     flags = 0
 5696     if migratable:
 5697         # This one is only in 1.2.14+
 5698         if getattr(libvirt, "VIR_CONNECT_BASELINE_CPU_MIGRATABLE", False):
 5699             flags += libvirt.VIR_CONNECT_BASELINE_CPU_MIGRATABLE
 5700         else:
 5701             conn.close()
 5702             raise ValueError("Migratable CPU baseline is not supported by this libvirt")
 5703 
 5704     if full and getattr(libvirt, "VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES", False):
 5705         # This one is only in 1.1.3+
 5706         flags += libvirt.VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES
 5707 
 5708     cpu = ElementTree.fromstring(
 5709         conn.baselineCPU(
 5710             [salt.utils.stringutils.to_str(ElementTree.tostring(cpu))], flags
 5711         )
 5712     )
 5713     conn.close()
 5714 
 5715     if full and not getattr(libvirt, "VIR_CONNECT_BASELINE_CPU_EXPAND_FEATURES", False):
 5716         # Try to do it ourselves
 5717         # Find the models in cpu_map.xml and iterate over them for as long as entries have submodels
 5718         with salt.utils.files.fopen("/usr/share/libvirt/cpu_map.xml", "r") as cpu_map:
 5719             cpu_map = ElementTree.parse(cpu_map)
 5720 
 5721         cpu_model = cpu.find("model").text
 5722         while cpu_model:
 5723             cpu_map_models = cpu_map.findall("arch/model")
 5724             cpu_specs = [
 5725                 el
 5726                 for el in cpu_map_models
 5727                 if el.get("name") == cpu_model and bool(len(el))
 5728             ]
 5729 
 5730             if not cpu_specs:
 5731                 raise ValueError("Model {} not found in CPU map".format(cpu_model))
 5732             elif len(cpu_specs) > 1:
 5733                 raise ValueError(
 5734                     "Multiple models {} found in CPU map".format(cpu_model)
 5735                 )
 5736 
 5737             cpu_specs = cpu_specs[0]
 5738 
 5739             # libvirt's cpu map used to nest model elements to point to the parent model.
 5740             # Keep this code for compatibility with old libvirt versions.
 5741             model_node = cpu_specs.find("model")
 5742             if model_node is None:
 5743                 cpu_model = None
 5744             else:
 5745                 cpu_model = model_node.get("name")
 5746 
 5747             cpu.extend([feature for feature in cpu_specs.findall("feature")])
 5748 
 5749     if out == "salt":
 5750         return {
 5751             "model": cpu.find("model").text,
 5752             "vendor": cpu.find("vendor").text,
 5753             "features": [feature.get("name") for feature in cpu.findall("feature")],
 5754         }
 5755     return salt.utils.stringutils.to_str(ElementTree.tostring(cpu))
 5756 
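The `out='salt'` branch of `cpu_baseline` reshapes the baseline CPU XML into a plain dict. A standalone sketch on a typical `baselineCPU` result fragment (the sample model and features are illustrative):

```python
import xml.etree.ElementTree as ElementTree

cpu = ElementTree.fromstring(
    """
    <cpu mode="custom" match="exact">
      <model>SandyBridge</model>
      <vendor>Intel</vendor>
      <feature name="invtsc"/>
      <feature name="pcid"/>
    </cpu>
    """
)
# Same reshaping as the out='salt' branch:
result = {
    "model": cpu.find("model").text,
    "vendor": cpu.find("vendor").text,
    "features": [f.get("name") for f in cpu.findall("feature")],
}
assert result == {
    "model": "SandyBridge",
    "vendor": "Intel",
    "features": ["invtsc", "pcid"],
}
```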
 5757 
 5758 def network_define(name, bridge, forward, ipv4_config=None, ipv6_config=None, **kwargs):
 5759     """
 5760     Create libvirt network.
 5761 
 5762     :param name: Network name
 5763     :param bridge: Bridge name
 5764     :param forward: Forward mode (bridge, router, nat)
 5765     :param vport: Virtualport type
 5766     :param tag: Vlan tag
 5767     :param autostart: Network autostart (default True)
 5768     :param start: Network start (default True)
 5769     :param ipv4_config: IP v4 configuration
 5770         Dictionary describing the IP v4 setup like IP range and
 5771         a possible DHCP configuration. The structure is documented
 5772         in net-define-ip_.
 5773 
 5774         .. v