Deployment and Administration guide

Manual installation and configuration

Repository

Install the INDIGO repository.

Install the Synergy packages

On CentOS 7:

yum install python-synergy-service python-synergy-scheduler-manager

On Ubuntu:

apt-get install python-synergy-service python-synergy-scheduler-manager

They can be installed on the OpenStack controller node or on another node.

Updating the Synergy packages

The Synergy project makes periodic releases. As a system administrator you can get the latest features and bug fixes by updating Synergy.

This is done using the standard update commands for your OS, as long as you have the INDIGO repository set up.

On Ubuntu:

apt-get update
apt-get upgrade

On CentOS:

yum update

Once the update is complete, remember to restart the service, as explained in the "Configure and start Synergy" section of this guide.

Set up the Synergy database

Use the database access client to connect to the database server as the root user:

$ mysql -u root -p

Create the synergy database:

CREATE DATABASE synergy;

Grant proper access to the synergy database:

GRANT ALL PRIVILEGES ON synergy.* TO 'synergy'@'%' IDENTIFIED BY 'SYNERGY_DBPASS';
FLUSH PRIVILEGES;

Replace SYNERGY_DBPASS with a suitable password.

Exit the database access client.

Add Synergy as an OpenStack endpoint and service

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc

Register the Synergy service and endpoint in the OpenStack service catalog:

openstack service create --name synergy management
openstack endpoint create --region RegionOne management public http://$SYNERGY_HOST_IP:8051 
openstack endpoint create --region RegionOne management admin http://$SYNERGY_HOST_IP:8051
openstack endpoint create --region RegionOne management internal http://$SYNERGY_HOST_IP:8051

Adjust nova notifications

Make sure that nova notifications are enabled on the controller and compute node. Edit the /etc/nova/nova.conf file. In the [DEFAULT] and [oslo_messaging_notifications] sections add the following attributes:

[DEFAULT]
...
notify_on_state_change = vm_state
default_notification_level = INFO

[oslo_messaging_notifications]
...
driver = messaging
topics = notifications

The topics parameter is used by Nova to inform listeners about VM state changes. If some other service (e.g. Ceilometer) is already listening on the default topic notifications, define a new topic specific to Synergy (e.g. topics = notifications,synergy_notifications) to avoid competing for the same notifications.
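For example, to keep Ceilometer on the default topic and give Synergy a dedicated one, the section would become:

```
[oslo_messaging_notifications]
...
driver = messaging
topics = notifications,synergy_notifications
```

Remember to configure the Synergy side accordingly (the notification_topic attribute in synergy_scheduler.conf).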

Then restart the nova services on the controller and compute nodes.

Configure Controller to use Synergy

Perform these steps on the controller node. Create the /etc/nova/nova-api.conf file and add the following to it:

[conductor]
topic=synergy

The topic must have the same value as the synergy_topic defined in the /etc/synergy/synergy_scheduler.conf file.
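For example, with the default topic name the two files stay consistent as follows:

```
# /etc/nova/nova-api.conf
[conductor]
topic=synergy

# /etc/synergy/synergy_scheduler.conf
[NovaManager]
synergy_topic = synergy
```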

Only for Ubuntu 16.04, edit the /etc/init.d/nova-api file and replace

[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --log-file=$LOGFILE"

with

[ "x$USE_LOGFILE" != "xno" ] && DAEMON_ARGS="$DAEMON_ARGS --config-file /etc/nova/nova-api.conf --log-file=$LOGFILE"

Restart the nova-api service to enable your configuration.

On the node where RabbitMQ is installed, run the following command to check whether your configuration is correct:

# rabbitmqctl list_queues | grep synergy
synergy_fanout_1e30d613c19142ec8ce452292042c35c    0
synergy    0
synergy.192.168.60.231    0

The output of the command should look similar to the above.

Configure and start Synergy

Configure the Synergy service, as explained in the following section.

Then start and enable the Synergy service. On CentOS:

systemctl start synergy
systemctl enable synergy

On Ubuntu:

service synergy start

If Synergy complains about an incompatibility with the versions of the installed oslo packages, e.g.:

synergy.service - ERROR - manager 'timer' instantiation error: (oslo.log 
1.10.0 (/usr/lib/python2.7/site-packages), 
Requirement.parse('oslo.log<2.3.0,>=2.0.0')) 

synergy.service - ERROR - manager 'timer' instantiation error: 
(oslo.service 0.9.0 (/usr/lib/python2.7/site-packages), 
Requirement.parse('oslo.service<1.3.0,>=1.0.0')) 

synergy.service - ERROR - manager 'timer' instantiation error: 
(oslo.concurrency 2.6.0 (/usr/lib/python2.7/site-packages), 
Requirement.parse('oslo.concurrency<3.3.0,>=3.0.0')) 

synergy.service - ERROR - manager 'timer' instantiation error: 
(oslo.middleware 2.8.0 (/usr/lib/python2.7/site-packages), 
Requirement.parse('oslo.middleware<3.5.0,>=3.0.0'))

please patch the file /usr/lib/python2.7/site-packages/synergy_service-1.0.0-py2.7.egg-info/requires.txt by removing the version specifiers that follow each dependency name.
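For example, an entry pinned like the first line below would be reduced to the bare package name:

```
- oslo.log<2.3.0,>=2.0.0
+ oslo.log
```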

The Synergy configuration file

Synergy must be properly configured by editing the synergy.conf and synergy_scheduler.conf configuration files in /etc/synergy/. The Synergy service must be restarted to apply any configuration change.

This is an example of the synergy.conf configuration file:

[DEFAULT]


[Logger]
# set the logging file name
filename = /var/log/synergy/synergy.log

# set the logging level. Valid values are: CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET.
level = INFO

# set the format of the logged messages
formatter = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

# set the max file size
maxBytes = 1048576

# set the number of rotated log files to keep
backupCount = 100


[WSGI]
# set the Synergy hostname
host = SYNERGY_HOST

# set the WSGI port (default: 8051)
port = 8051

# set the number of threads
threads = 2

# set the SSL
use_ssl = False
#ssl_ca_file =  
#ssl_cert_file = 
#ssl_key_file = 
max_header_line = 16384
retry_until_window = 30
tcp_keepidle = 600
backlog = 4096

The following describes the meaning of the attributes of the synergy.conf file, for each possible section:

Section [Logger]

Section [WSGI]

This example shows how to configure the synergy_scheduler.conf file:

[DEFAULT]


[SchedulerManager]
autostart = True

# set the manager rate (minutes)
rate = 1

# set the list of projects accessing to the shared quota
# projects = prj_a, prj_b
#projects =

# set the projects share
# shares = prj_a=70, prj_b=30
#shares =

# set the default max time to live (minutes) for VM/Container (default: 2880)
default_TTL = 2880

# set, for the specified projects, the max time to live (minutes) for VM/Container
# TTLs = prj_a=1440, prj_b=2880
#TTLs =

# set the max depth used by the backfilling strategy (default: 100)
# this allows Synergy to not check the whole queue when looking for VMs to start
backfill_depth = 100

# set the notification topic used by Nova for informing listeners about the state
# changes of the VMs. In case some other service (e.g. Ceilometer) is listening
# on the default Nova topic (i.e. "notifications"), please define a new topic
# specific for Synergy (e.g. notification_topics = notifications,synergy_notifications)
notification_topic = notifications


[FairShareManager]
autostart = True

# set the manager rate (minutes)
rate = 2

# set the period size (default: 7 days)
period_length = 7

# set num of periods (default: 3)
periods = 3

# set the default share value (default: 10)
default_share = 10

# set the decay weight, float value [0,1] (default: 0.5)
decay_weight = 0.5

# set the vcpus weight (default: 100)
vcpus_weight = 50

# set the age weight (default: 10)
age_weight = 10

# set the memory weight (default: 70)
memory_weight = 70


[KeystoneManager]
autostart = True

# set the manager rate (minutes)
rate = 5

# set the Keystone url (v3 only)
auth_url = http://CONTROLLER_HOST:5000/v3

# set the name of user with admin role
#username =

# set the password of user with admin role
#password =

# set the project name to request authorization on
#project_name =

# set the project id to request authorization on
#project_id =

# set the http connection timeout (default: 60)
timeout = 60

# set the user domain name (default: default)
user_domain_name = default

# set the project domain name (default: default)
project_domain_name = default

# set the clock skew: the token is renewed a delta time
# before the current token expires (default: 60 sec)
clock_skew = 60

# set the PEM encoded Certificate Authority to use when verifying HTTPs connections
#ssl_ca_file =

# set the SSL client certificate (PEM encoded)
#ssl_cert_file = 


[NovaManager]
autostart = True

# set the manager rate (minutes)
rate = 5

# set the http connection timeout (default: 60)
timeout = 60

# set the AMQP backend type (e.g. rabbit, qpid)
#amqp_backend =

# set the AMQP HA cluster host:port pairs
#amqp_hosts =

# set the AMQP broker address where a single node is used (default: localhost)
amqp_host = localhost

# set the AMQP broker port where a single node is used
amqp_port = 5672

# set the AMQP user
#amqp_user =

# set the AMQP user password
#amqp_password =

# set the AMQP virtual host (default: /)
amqp_virtual_host = /

# set the Nova host (default: localhost)
host = CONTROLLER_HOST

# set the Synergy topic as defined in the nova-api.conf file (default: synergy)
synergy_topic = synergy

# set the Nova conductor topic (default: conductor)
conductor_topic = conductor

# set the Nova compute topic (default: compute)
compute_topic = compute

# set the Nova scheduler topic (default: scheduler)
scheduler_topic = scheduler

# set the Nova database connection
db_connection=DIALECT+DRIVER://USER:PASSWORD@DB_HOST/nova

# set the Nova CPU allocation ratio (default: 16)
cpu_allocation_ratio = 16

# set the Nova RAM allocation ratio (default: 1.5)
ram_allocation_ratio = 1.5

# set the Nova metadata_proxy_shared_secret
metadata_proxy_shared_secret =

# set the PEM encoded Certificate Authority to use when verifying HTTPs connections
#ssl_ca_file =

# set the SSL client certificate (PEM encoded)
#ssl_cert_file = 


[QueueManager]
autostart = True

# set the manager rate (minutes)
rate = 60

# set the Synergy database connection:
db_connection = DIALECT+DRIVER://USER:PASSWORD@DB_HOST/synergy

# set the connection pool size (default: 10)
db_pool_size = 10

# set the number of seconds after which a connection is automatically
# recycled (default: 30)
db_pool_recycle = 30

# set the max overflow (default: 5)
db_max_overflow = 5


[QuotaManager]
autostart = True

# set the manager rate (minutes)
rate = 5

Attributes and their meanings are described in the following tables:

Section [SchedulerManager]

Section [FairShareManager]

Section [KeystoneManager]

Section [NovaManager]

Section [QueueManager]

Section [QuotaManager]

Installation and configuration using Puppet

We provide a Puppet module for Synergy so users can install and configure Synergy with Puppet. The module provides both the synergy-service and synergy-scheduler-manager components.

The module is available on the Puppet Forge: vll/synergy.

Install the Puppet module with:

puppet module install vll-synergy

Usage example:

class { 'synergy':
  synergy_db_url          => 'mysql://synergy:test@localhost/synergy',
  synergy_project_shares  => {'A' => 70, 'B' => 30 },
  keystone_url            => 'https://example.com',
  keystone_admin_user     => 'admin',
  keystone_admin_password => 'the keystone password',
  nova_url                => 'https://example.com',
  nova_db_url             => 'mysql://nova:test@localhost/nova',
  amqp_backend            => 'rabbit',
  amqp_host               => 'localhost',
  amqp_port               => 5672,
  amqp_user               => 'openstack',
  amqp_password           => 'the amqp password',
  amqp_virtual_host       => '/',
}

The Synergy command line interface

The Synergy service provides a command-line client, called synergy, which allows the Cloud administrator to control and monitor the Synergy service.

Before running the Synergy client command, you must create and source the admin-openrc.sh file to set the relevant environment variables. This is the same script used to run the OpenStack command line tools.

Note that the OS_AUTH_URL variable must refer to the v3 version of the Keystone API, e.g.:

export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:35357/v3

synergy usage

usage: synergy [-h] [--version] [--debug] [--os-username <auth-user-name>]
               [--os-password <auth-password>]
               [--os-user-domain-id <auth-user-domain-id>]
               [--os-user-domain-name <auth-user-domain-name>]
               [--os-project-name <auth-project-name>]
               [--os-project-id <auth-project-id>]
               [--os-project-domain-id <auth-project-domain-id>]
               [--os-project-domain-name <auth-project-domain-name>]
               [--os-auth-url <auth-url>] [--os-auth-system <auth-system>]
               [--bypass-url <bypass-url>] [--os-cacert <ca-certificate>]
               {manager,queue,quota,usage} ...

positional arguments:
  {manager,queue,quota,usage}
                        commands

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --debug               print debugging output
  --os-username <auth-user-name>
                        defaults to env[OS_USERNAME]
  --os-password <auth-password>
                        defaults to env[OS_PASSWORD]
  --os-user-domain-id <auth-user-domain-id>
                        defaults to env[OS_USER_DOMAIN_ID]
  --os-user-domain-name <auth-user-domain-name>
                        defaults to env[OS_USER_DOMAIN_NAME]
  --os-project-name <auth-project-name>
                        defaults to env[OS_PROJECT_NAME]
  --os-project-id <auth-project-id>
                        defaults to env[OS_PROJECT_ID]
  --os-project-domain-id <auth-project-domain-id>
                        defaults to env[OS_PROJECT_DOMAIN_ID]
  --os-project-domain-name <auth-project-domain-name>
                        defaults to env[OS_PROJECT_DOMAIN_NAME]
  --os-auth-url <auth-url>
                        defaults to env[OS_AUTH_URL]
  --os-auth-system <auth-system>
                        defaults to env[OS_AUTH_SYSTEM]
  --bypass-url <bypass-url>
                        use this API endpoint instead of the Service Catalog
  --os-cacert <ca-certificate>
                        Specify a CA bundle file to use in verifying a TLS
                        (https) server certificate. Defaults to env[OS_CACERT]

Command-line interface to the OpenStack Synergy API.

The synergy optional arguments:

-h, --help

Show help message and exit

--version

Show program’s version number and exit

--debug

Show debugging information

--os-username <auth-user-name>

Username to login with. Defaults to env[OS_USERNAME]

--os-password <auth-password>

Password to use. Defaults to env[OS_PASSWORD]

--os-project-name <auth-project-name>

Project name to scope to. Defaults to env[OS_PROJECT_NAME]

--os-project-id <auth-project-id>

Id of the project to scope to. Defaults to env[OS_PROJECT_ID]

--os-project-domain-id <auth-project-domain-id>

Specify the project domain id. Defaults to env[OS_PROJECT_DOMAIN_ID]

--os-project-domain-name <auth-project-domain-name>

Specify the project domain name. Defaults to env[OS_PROJECT_DOMAIN_NAME]

--os-user-domain-id <auth-user-domain-id>

Specify the user domain id. Defaults to env[OS_USER_DOMAIN_ID]

--os-user-domain-name <auth-user-domain-name>

Specify the user domain name. Defaults to env[OS_USER_DOMAIN_NAME]

--os-auth-url <auth-url>

The URL of the Identity endpoint. Defaults to env[OS_AUTH_URL]

--os-auth-system <auth-system>

The auth system to be used. Defaults to env[OS_AUTH_SYSTEM]

--bypass-url <bypass-url>

Use this API endpoint instead of the Service Catalog

--os-cacert <ca-bundle-file>

Specify a CA certificate bundle file to use in verifying a TLS
(https) server certificate. Defaults to env[OS_CACERT]

synergy manager

This command allows you to get information about the managers deployed in the Synergy service and to control their execution:

# synergy manager -h
usage: synergy manager [-h] {list,status,start,stop} ...

positional arguments:
  {list,status,start,stop}
    list                list the managers
    status              show the managers status
    start               start the manager
    stop                stop the manager

optional arguments:
  -h, --help            show this help message and exit

The command synergy manager list provides the list of all managers deployed in the Synergy service:

# synergy manager list
╒══════════════════╕
│ manager          │
╞══════════════════╡
│ QuotaManager     │
├──────────────────┤
│ NovaManager      │
├──────────────────┤
│ SchedulerManager │
├──────────────────┤
│ TimerManager     │
├──────────────────┤
│ QueueManager     │
├──────────────────┤
│ KeystoneManager  │
├──────────────────┤
│ FairShareManager │
╘══════════════════╛

To get the status of the managers, use:

# synergy manager status
╒══════════════════╤══════════╤══════════════╕
│ manager          │ status   │   rate (min) │
╞══════════════════╪══════════╪══════════════╡
│ QuotaManager     │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ NovaManager      │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ SchedulerManager │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ TimerManager     │ ACTIVE   │           60 │
├──────────────────┼──────────┼──────────────┤
│ QueueManager     │ RUNNING  │           10 │
├──────────────────┼──────────┼──────────────┤
│ KeystoneManager  │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ FairShareManager │ RUNNING  │            1 │
╘══════════════════╧══════════╧══════════════╛

# synergy manager status TimerManager
╒══════════════╤══════════╤══════════════╕
│ manager      │ status   │   rate (min) │
╞══════════════╪══════════╪══════════════╡
│ TimerManager │ ACTIVE   │           60 │
╘══════════════╧══════════╧══════════════╛

To control the execution of a specific manager, use the start and stop sub-commands:

# synergy manager start TimerManager
╒══════════════╤════════════════════════════════╤══════════════╕
│ manager      │ status                         │   rate (min) │
╞══════════════╪════════════════════════════════╪══════════════╡
│ TimerManager │ RUNNING (started successfully) │           60 │
╘══════════════╧════════════════════════════════╧══════════════╛

# synergy manager stop TimerManager
╒══════════════╤═══════════════════════════════╤══════════════╕
│ manager      │ status                        │   rate (min) │
╞══════════════╪═══════════════════════════════╪══════════════╡
│ TimerManager │ ACTIVE (stopped successfully) │           60 │
╘══════════════╧═══════════════════════════════╧══════════════╛

synergy quota

The overall cloud resources can be grouped into:

  • private quota: composed of resources statically allocated and managed using the 'standard' OpenStack policies

  • shared quota: composed of resources that are not statically allocated and that are fairly distributed among users by Synergy

The size of the shared quota is calculated as the difference between the total amount of cloud resources (also considering the over-commitment ratios) and the total resources allocated to the private quotas. Therefore, for every project it is necessary to specify proper quotas for instances, VCPUs and RAM, so that their total is less than the total amount of cloud resources.
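As an illustration of this sizing rule (a sketch only, not Synergy's actual code; the capacity and quota numbers are made up), the shared VCPU quota could be computed as:

```python
def shared_quota(total, overcommit_ratio, private_quotas):
    """Shared quota size: total physical capacity scaled by the
    over-commitment ratio, minus the sum of all private quotas."""
    capacity = total * overcommit_ratio
    allocated = sum(private_quotas.values())
    if allocated >= capacity:
        raise ValueError("private quotas must total less than the cloud capacity")
    return capacity - allocated

# e.g. 16 physical cores with cpu_allocation_ratio = 16, and two
# projects holding private VCPU quotas of 3 and 1:
vcpus_shared = shared_quota(16, 16.0, {"prj_a": 3, "prj_b": 1})  # 252.0
```

The same reasoning applies to RAM, using ram_allocation_ratio.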

Once Synergy is installed, the private quotas of projects can no longer be managed through the Horizon dashboard, but only from the command line, using the following OpenStack command:

# openstack quota set --cores <num_vcpus> --ram <memory_size> --instances <max_num_instances> --class <project_id>

Synergy picks up the updated private quota after a few minutes, without needing a restart. This example shows how the private quota of the project prj_a (id=a5ccbaf2a9da407484de2af881198eb9) has been modified:

# synergy quota show --project_name prj_a
╒═══════════╤═══════════════════════════════════════════════╤═════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                 │ shared quota                                                                │
╞═══════════╪═══════════════════════════════════════════════╪═════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 3.00 | memory: 0.00 of 1024.00 │ vcpus: 0.00 of 26.00 | memory: 0.00 of 59956.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧═══════════════════════════════════════════════╧═════════════════════════════════════════════════════════════════════════════╛ 

# openstack quota set --cores 2 --ram 2048 --instances 10 --class a5ccbaf2a9da407484de2af881198eb9

# synergy quota show --project_name prj_a
╒═══════════╤═══════════════════════════════════════════════╤═════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                 │ shared quota                                                                │
╞═══════════╪═══════════════════════════════════════════════╪═════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 2.00 | memory: 0.00 of 2048.00 │ vcpus: 0.00 of 27.00 | memory: 0.00 of 58932.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧═══════════════════════════════════════════════╧═════════════════════════════════════════════════════════════════════════════╛

To get information about the private and shared quotas, use the synergy quota command:

# synergy quota -h
usage: synergy quota [-h] {show} ...

positional arguments:
  {show}
    show      shows the quota info

optional arguments:
  -h, --help  show this help message and exit

# synergy quota show -h
usage: synergy quota show [-h] [-i <id> | -n <name> | -a | -s]

optional arguments:
  -h, --help            show this help message and exit
  -i <id>, --project_id <id>
  -n <name>, --project_name <name>
  -a, --all_projects
  -s, --shared

To get the status of the shared quota, use the --shared option:

# synergy quota show --shared
╒════════════╤════════╤════════╕
│ resource   │   used │   size │
╞════════════╪════════╪════════╡
│ vcpus      │      2 │     27 │
├────────────┼────────┼────────┤
│ memory     │   1024 │  60980 │
├────────────┼────────┼────────┤
│ instances  │      1 │     -1 │
╘════════════╧════════╧════════╛

In this example, the total amount of VCPUs allocated to the shared quota is 27, of which just 2 are in use (and similarly for the memory and the number of instances). The value -1 means that the Cloud administrator has not set a limit on the number of instances (i.e. VMs), so in this example the number of VMs is unlimited.

The --all_projects option provides information about the private and shared quotas of all projects:

# synergy quota show --all_projects
╒═══════════╤════════════════════════════════════════════════╤═══════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                  │ shared quota                                                                  │
╞═══════════╪════════════════════════════════════════════════╪═══════════════════════════════════════════════════════════════════════════════╡
│ prj_b     │ vcpus: 1.00 of 3.00 | memory: 512.0 of 1536.00 │ vcpus: 0.00 of 27.00 | memory: 0.00 of 60980.00 | share: 30.00% | TTL: 5.00   │
├───────────┼────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤
│ prj_a     │ vcpus: 0.00 of 1.00 | memory: 0.00 of 512.00   │ vcpus: 2.00 of 27.00 | memory: 1024.0 of 60980.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧════════════════════════════════════════════════╧═══════════════════════════════════════════════════════════════════════════════╛

# synergy quota show --project_name prj_a
╒═══════════╤══════════════════════════════════════════════╤═══════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                │ shared quota                                                                  │
╞═══════════╪══════════════════════════════════════════════╪═══════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 1.00 | memory: 0.00 of 512.00 │ vcpus: 2.00 of 27.00 | memory: 1024.0 of 60980.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧══════════════════════════════════════════════╧═══════════════════════════════════════════════════════════════════════════════╛

In this example the project prj_b is currently consuming only resources of its private quota (1 VCPU and 512MB of memory), while the shared quota is not used. By contrast, prj_a is consuming only the shared quota (2 VCPUs and 1024MB of memory). The share values fixed by the Cloud administrator are 70% for prj_a and 30% for prj_b (the shares attribute in synergy_scheduler.conf), while for both projects the TTL has been set to 5 minutes (the TTLs attribute). Note that in this example the VMs instantiated in the shared quota can live for just 5 minutes, while the ones created in the private quota can live forever.

synergy queue

This command provides information about the amount of user requests stored in the persistent priority queue:

# synergy queue -h
usage: synergy queue [-h] {show} ...

positional arguments:
  {show}
    show      shows the queue info

optional arguments:
  -h, --help  show this help message and exit

# synergy queue show
╒═════════╤════════╤═══════════╕
│ name    │   size │ is open   │
╞═════════╪════════╪═══════════╡
│ DYNAMIC │    544 │ true      │
╘═════════╧════════╧═══════════╛

synergy usage

To get information about the usage of shared resources at project or user level, use:

# synergy usage show -h
usage: synergy usage show [-h] {project,user} ...

positional arguments:
  {project,user}
    project       project help
    user          user help

optional arguments:
  -h, --help      show this help message and exit


# synergy usage show project -h
usage: synergy usage show project [-h] [-d <id> | -m <name> | -a]

optional arguments:
  -h, --help            show this help message and exit
  -d <id>, --project_id <id>
  -m <name>, --project_name <name>
  -a, --all_projects


# synergy usage show user -h
usage: synergy usage show user [-h] (-d <id> | -m <name>)
                               (-i <id> | -n <name> | -a)

optional arguments:
  -h, --help            show this help message and exit
  -d <id>, --project_id <id>
  -m <name>, --project_name <name>
  -i <id>, --user_id <id>
  -n <name>, --user_name <name>
  -a, --all_users

The project sub-command provides resource usage information for the projects.

The following example shows that the projects prj_a (share: 70%) and prj_b (share: 30%) have consumed, in the last three days, 70.40% and 29.60% of the shared resources respectively:

# synergy usage show project --all_projects
╒═══════════╤═══════════════════════════════════════════════════════════════╤═════════╕
│ project   │ shared quota (09 Dec 2016 14:35:43 - 12 Dec 2016 14:35:43)    │ share   │
╞═══════════╪═══════════════════════════════════════════════════════════════╪═════════╡
│ prj_b     │ vcpus: 29.60% | memory: 29.60%                                │ 30.00%  │
├───────────┼───────────────────────────────────────────────────────────────┼─────────┤
│ prj_a     │ vcpus: 70.40% | memory: 70.40%                                │ 70.00%  │
╘═══════════╧═══════════════════════════════════════════════════════════════╧═════════╛

# synergy usage show project --project_name prj_a
╒═══════════╤══════════════════════════════════════════════════════════════╤═════════╕
│ project   │ shared quota (09 Dec 2016 15:01:44 - 12 Dec 2016 15:01:44)   │ share   │
╞═══════════╪══════════════════════════════════════════════════════════════╪═════════╡
│ prj_a     │ vcpus: 70.40% | memory: 70.40%                               │ 70.00%  │
╘═══════════╧══════════════════════════════════════════════════════════════╧═════════╛

The time window is defined by the Cloud administrator by setting the periods and period_length attributes in the [FairShareManager] section of synergy_scheduler.conf.

It may happen that prj_a (or prj_b) does not need to consume shared resources for a while: in this scenario the other projects (i.e. prj_b) can take advantage of this and consume more resources than their fixed share (i.e. 30%):

# synergy usage show project --all_projects
╒═══════════╤═══════════════════════════════════════════════════════════════╤═════════╕
│ project   │ shared quota (09 Dec 2016 14:35:43 - 12 Dec 2016 14:35:43)    │ share   │
╞═══════════╪═══════════════════════════════════════════════════════════════╪═════════╡
│ prj_b     │ vcpus: 98.40% | memory: 98.40%                                │ 30.00%  │
├───────────┼───────────────────────────────────────────────────────────────┼─────────┤
│ prj_a     │ vcpus: 1.60% | memory: 1.60%                                  │ 70.00%  │
╘═══════════╧═══════════════════════════════════════════════════════════════╧═════════╛

However, as soon as prj_a requires more shared resources, it will gain a higher priority than prj_b, in order to balance their usage.
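The balancing can be pictured with a toy decayed-usage priority. This is an illustrative sketch only, not Synergy's real algorithm: the actual FairShareManager also weights VCPUs, memory and request age via the vcpus_weight, memory_weight and age_weight attributes.

```python
def decayed_usage(per_period_usage, decay_weight):
    """Combine per-period usage fractions (index 0 = most recent
    period) with exponential decay, so recent periods weigh more."""
    weights = [decay_weight ** i for i in range(len(per_period_usage))]
    return sum(w * u for w, u in zip(weights, per_period_usage)) / sum(weights)

def priority(share, per_period_usage, decay_weight=0.5):
    """Toy priority: the further a project's decayed usage falls
    below its target share, the higher its priority."""
    return share - decayed_usage(per_period_usage, decay_weight)

# prj_a (share 70%) has barely touched the shared quota lately, while
# prj_b (share 30%) has consumed almost all of it, so prj_a's pending
# requests now get the higher priority:
assert priority(0.70, [0.016, 0.02, 0.01]) > priority(0.30, [0.984, 0.98, 0.99])
```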

The user sub-command provides resource usage information for the users of a project.

The following example shows the usage report of the users belonging to the project prj_a. They have the same share value (35%) but different priorities (user_a1=80, user_a2=100), because user_a1 has consumed more than user_a2 (51.90% vs 48.10%).

# synergy usage show user --project_name prj_a --all_users
╒═════════╤══════════════════════════════════════════════════════════════╤═════════╤════════════╕
│ user    │ shared quota (09 Dec 2016 14:58:44 - 12 Dec 2016 14:58:44)   │ share   │   priority │
╞═════════╪══════════════════════════════════════════════════════════════╪═════════╪════════════╡
│ user_a2 │ vcpus: 48.10% | memory: 48.10%                               │ 35.00%  │        100 │
├─────────┼──────────────────────────────────────────────────────────────┼─────────┼────────────┤
│ user_a1 │ vcpus: 51.90% | memory: 51.90%                               │ 35.00%  │         80 │
╘═════════╧══════════════════════════════════════════════════════════════╧═════════╧════════════╛

# synergy usage show user --project_name prj_a --user_name user_a1
╒═════════╤══════════════════════════════════════════════════════════════╤═════════╤════════════╕
│ user    │ shared quota (09 Dec 2016 14:58:44 - 12 Dec 2016 14:58:44)   │ share   │   priority │
╞═════════╪══════════════════════════════════════════════════════════════╪═════════╪════════════╡
│ user_a1 │ vcpus: 51.90% | memory: 51.90%                               │ 35.00%  │         80 │
╘═════════╧══════════════════════════════════════════════════════════════╧═════════╧════════════╛

Open Ports

To interact with Synergy using the client tool, only one port needs to be open: the port defined in the Synergy configuration file (the port attribute in the [WSGI] section). The default value is 8051.
