"Fossies" - the Fresh Open Source Software Archive

Member "nova-18.2.3/doc/source/contributor/testing/zero-downtime-upgrade.rst" (10 Oct 2019, 6072 Bytes) of package /linux/misc/openstack/nova-18.2.3.tar.gz:


As a special service "Fossies" has tried to format the requested source page into HTML format (assuming markdown format). Alternatively you can here view or download the uninterpreted source code file. A member file download can also be achieved by clicking within a package contents listing on the according byte size field. See also the latest Fossies "Diffs" side-by-side code changes report for "zero-downtime-upgrade.rst": 18.2.2_vs_18.2.3.

Testing Zero Downtime Upgrade Process

Zero Downtime upgrade eliminates any disruption to the nova API service during the upgrade.

Nova API services are upgraded at the end. The basic idea of the zero downtime upgrade process is to drain connections from the old API nodes before those nodes are upgraded. During this process, new connections go to the new API nodes while existing connections slowly drain from the old nodes. This ensures that the user sees the max_supported API version as a monotonically increasing number. There might be some performance degradation during the process due to slow HTTP responses and delayed request handling, but there is no API downtime.
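As an illustration of the draining step, a reasonably recent HAProxy (1.6 or newer) can put a backend server into drain mode through its admin stats socket, so that server stops receiving new connections while existing ones finish. The sketch below assumes the stats socket, backend name and server name (nova-api / controller) from the example HAProxy configuration later on this page, and that socat is installed; adapt the names to your deployment.

# echo "set server nova-api/controller state drain" | socat stdio /var/run/haproxy.sock

Once the old node has been upgraded, the same socket accepts "set server nova-api/controller state ready" to put it back into rotation.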

This page describes how to test the zero downtime upgrade process.

Environment

Instructions to set up HAProxy

Install HAProxy and Keepalived on both nodes.

# apt-get install haproxy keepalived
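To confirm that both packages installed correctly, you can check their versions (a simple sanity check; the exact output depends on your distribution):

# haproxy -v
# keepalived --version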

Let the kernel know that we intend to bind additional IP addresses that won't be defined in the interfaces file. To do this, edit /etc/sysctl.conf and add the following line:

net.ipv4.ip_nonlocal_bind=1

Make this take effect without rebooting:

# sysctl -p
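To verify that the setting is active, query it directly; it should report a value of 1:

# sysctl net.ipv4.ip_nonlocal_bind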

Configure HAProxy to add the backend servers and assign the virtual IP to the frontend. On both nodes, add the following HAProxy configuration:

# cd /etc/haproxy
# cat >> haproxy.cfg <<EOF

   global
      chroot /var/lib/haproxy
      user haproxy
      group haproxy
      daemon
      log 192.168.0.88 local0
      pidfile  /var/run/haproxy.pid
      stats socket /var/run/haproxy.sock mode 600 level admin
      stats timeout 2m
      maxconn 4000

   defaults
      log  global
      maxconn  8000
      mode  http
      option  redispatch
      retries  3
      stats  enable
      timeout  http-request 10s
      timeout  queue 1m
      timeout  connect 10s
      timeout  client 1m
      timeout  server 1m
      timeout  check 10s

   frontend nova-api-vip
      bind 192.168.0.95:8282             # HAProxy virtual IP
      default_backend nova-api

   backend nova-api
      balance  roundrobin
      option  tcplog
      server  controller 192.168.0.88:8774  check
      server  apicomp  192.168.0.89:8774  check

EOF
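Before starting the service, you can optionally validate the resulting configuration; HAProxy's check mode parses the file and reports any errors without starting the daemon:

# haproxy -c -f /etc/haproxy/haproxy.cfg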

Note

The only per-node change is the IP address on the log line in the global section.

On both nodes add keepalived.conf:

# cd /etc/keepalived
# cat >> keepalived.conf <<EOF

   global_defs {
      router_id controller
   }
   vrrp_script haproxy {
      script "killall -0 haproxy"
      interval 2
      weight 2
   }
   vrrp_instance 50 {
      virtual_router_id 50
      advert_int 1
      priority 101
      state MASTER
      interface eth0
      virtual_ipaddress {
         192.168.0.95 dev eth0
      }
      track_script {
         haproxy
      }
   }

EOF

Note

Change the priority on node2 to 100 (or vice versa), and use the HAProxy virtual IP as the virtual_ipaddress.
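For example, on node2 the vrrp_instance block could be identical except for a lower priority; this is only an illustration of the note above, and the rest of the file stays the same:

   vrrp_instance 50 {
      ...
      priority 100
      ...
   }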

Restart the keepalived service:

# service keepalived restart

Add ENABLED=1 in /etc/default/haproxy and then restart the HAProxy service.
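One way to add the setting, assuming the Debian-style /etc/default/haproxy file that the init script reads, is:

# echo "ENABLED=1" >> /etc/default/haproxy

Then restart HAProxy: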

# service haproxy restart
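Once HAProxy is running, you can check that both backend servers are registered and passing their health checks by querying the admin stats socket configured above (this assumes socat is installed):

# echo "show stat" | socat stdio /var/run/haproxy.sock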

When both services have been restarted, the node with the highest keepalived priority claims the virtual IP. You can check which node holds the virtual IP using:

# ip a
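On the node that currently holds the VIP, the output lists the virtual address on eth0 alongside the node's own address. Illustrative output (addresses follow the example configuration above; your interface name and prefixes may differ):

2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP
    inet 192.168.0.88/24 brd 192.168.0.255 scope global eth0
    inet 192.168.0.95/32 scope global eth0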

Zero Downtime upgrade process

For the general rolling upgrade process, see minimal_downtime_upgrade.

Before Upgrade

Before maintenance window

During maintenance window

After maintenance window