Synergy Doc
Deployment and Administration guide
⚠ This is the documentation for an old version of Synergy on INDIGO 1 ⚠
The Synergy package versions corresponding to this documentation are:
    synergy-service v1.4.0
    synergy-scheduler-manager v2.3.0

Manual installation and configuration

Repository

Install the INDIGO repository.

Install the Synergy packages

On CentOS7:

yum install python-synergy-service python-synergy-scheduler-manager

On Ubuntu:

apt-get install python-synergy-service python-synergy-scheduler-manager
The packages can be installed on the OpenStack controller node or on another node.

Updating the Synergy packages

The Synergy project makes periodic releases. As a system administrator you can get the latest features and bug fixes by updating Synergy.
This is done using the standard update commands for your OS, as long as you have the INDIGO repository set up.
On Ubuntu:

apt-get update
apt-get upgrade

On CentOS:

yum update
Once the update is complete, remember to restart the service, following the instructions in the "Configure and start Synergy" section of this guide.

Setup the Synergy database

Use the database access client to connect to the database server as the root user:

$ mysql -u root -p
Create the synergy database:

CREATE DATABASE synergy;
Grant proper access to the synergy database:

GRANT ALL PRIVILEGES ON synergy.* TO 'synergy'@'localhost' IDENTIFIED BY 'SYNERGY_DBPASS';
GRANT ALL PRIVILEGES ON synergy.* TO 'synergy'@'%' IDENTIFIED BY 'SYNERGY_DBPASS';
FLUSH PRIVILEGES;
Replace SYNERGY_DBPASS with a suitable password.
Exit the database access client.

Add Synergy as an OpenStack endpoint and service

Source the admin credentials to gain access to admin-only CLI commands:

$ . admin-openrc
Register the Synergy service and endpoints in the OpenStack service catalog:

openstack service create --name synergy management
openstack endpoint create --region RegionOne management public http://$SYNERGY_HOST_IP:8051
openstack endpoint create --region RegionOne management admin http://$SYNERGY_HOST_IP:8051
openstack endpoint create --region RegionOne management internal http://$SYNERGY_HOST_IP:8051

Adjust nova notifications

Make sure that nova notifications are enabled on the compute node. Edit the /etc/nova/nova.conf file. In the [DEFAULT] and [oslo_messaging_notifications] sections add the following attributes:
[DEFAULT]
...
notify_on_state_change = vm_and_task_state
default_notification_level = INFO

[oslo_messaging_notifications]
...
driver = messagingv2
topics = notifications
The topics parameter is used by Nova for informing listeners about the state changes of the VMs. If some other service (e.g. Ceilometer) is listening on the default topic notifications, define a new topic specific to Synergy to avoid competing for the same notifications (e.g. topics = notifications,synergy_notifications).
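If, for example, Ceilometer already consumes the default notifications topic, a dedicated topic can be added alongside it (the topic name synergy_notifications is only a convention; any name works, as long as the SchedulerManager section of synergy.conf points at the same topic):

```ini
[oslo_messaging_notifications]
driver = messagingv2
# keep the default topic for other consumers, add a dedicated one for Synergy
topics = notifications,synergy_notifications
```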

Configure Controller to use Synergy

On the controller node, create the file /etc/nova/nova-conductor.conf and add the following to it:

[conductor]
topic=conductor_synergy

Restart nova

Then restart the nova services on the Controller and Compute nodes.

Verify operation

Run this command on the controller node to check whether your configuration is correct:
# rabbitmqctl list_queues | grep synergy

conductor_synergy_fanout_55408359225b4d1f8a825b472de99fd3  0
conductor_synergy_fanout_7892360ce9c14fb4bb6df70ca6829984  0
conductor_synergy_fanout_c012b565a7b0414ebb75011eda1d18e8  0
conductor_synergy_fanout_17fdc55529fd4e51ae56208a3bac0735  0
conductor_synergy  0
conductor_synergy.cld-corso-21.cloud.pd.infn.it  0
conductor_synergy_fanout_5e6e8f8f09bf4ec4be9405499dc4b921  0
conductor_synergy_fanout_7e6adf936eb24cdaab41af661740649b  0
conductor_synergy_fanout_bb35ba53dd914138a2bd0e205c57e315  0
conductor_synergy_fanout_9a94fef80955478e96dddca32ac4eeb6  0
conductor_synergy_fanout_ad1fdd78801148e6a1a29ea58e929b76  0

The output of the command should look similar to this.

Configure and start Synergy

Configure the Synergy service, as explained in the following section.
Then start and enable the Synergy service. On CentOS:

systemctl start synergy
systemctl enable synergy

On Ubuntu:

service synergy start
If Synergy complains about incompatibility with the installed versions of the oslo packages, e.g.:

synergy.service - ERROR - manager 'timer' instantiation error: (oslo.log
1.10.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.log<2.3.0,>=2.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.service 0.9.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.service<1.3.0,>=1.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.concurrency 2.6.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.concurrency<3.3.0,>=3.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.middleware 2.8.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.middleware<3.5.0,>=3.0.0'))

please patch the file /usr/lib/python2.7/site-packages/synergy_service-1.0.0-py2.7.egg-info/requires.txt by removing the version constraints from the dependencies.

The Synergy configuration file

Synergy must be configured by properly filling in the /etc/synergy/synergy.conf configuration file. For any change to a configuration parameter to take effect, the Synergy service must be restarted.
This is an example of the synergy.conf configuration file:
[DEFAULT]


[Logger]
# set the logging file name
filename = /var/log/synergy/synergy.log

# set the logging level. Valid values are: CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET.
level = INFO

# set the format of the logged messages
formatter = "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

# set the max file size
maxBytes = 1048576

# set the logging rotation threshold
backupCount = 100


[WSGI]
# set the Synergy hostname
host = SYNERGY_HOST

# set the WSGI port (default: 8051)
port = 8051

# set the number of threads
threads = 2

# set the SSL
use_ssl = False
#ssl_ca_file =
#ssl_cert_file =
#ssl_key_file =
max_header_line = 16384
retry_until_window = 30
tcp_keepidle = 600
backlog = 4096


[SchedulerManager]
autostart = True

# set the manager rate (minutes)
rate = 1

# set the list of projects accessing to the shared quota
# projects = prj_a, prj_b
#projects =

# set the projects share
# shares = prj_a=70, prj_b=30
#shares =

# set the default max time to live (minutes) for VM/Container (default: 2880)
default_TTL = 2880

# set, for the specified projects, the max time to live (minutes) for VM/Container
# TTLs = prj_a=1440, prj_b=2880
#TTLs =

# set the max depth used by the backfilling strategy (default: 100)
# this allows Synergy to not check the whole queue when looking for VMs to start
backfill_depth = 100

# set the notification topic used by Nova for informing listeners about the state
# changes of the VMs. In case some other service (e.g. Ceilometer) is listening
# on the default Nova topic (i.e. "notifications"), please define a new topic
# specific for Synergy (e.g. notification_topics = notifications,synergy_notifications)
notification_topic = notifications


[FairShareManager]
autostart = True

# set the manager rate (minutes)
rate = 2

# set the period size (default: 7 days)
period_length = 7

# set num of periods (default: 3)
periods = 3

# set the default share value (default: 10)
default_share = 10

# set the decay weight, float value [0,1] (default: 0.5)
decay_weight = 0.5

# set the vcpus weight (default: 100)
vcpus_weight = 50

# set the age weight (default: 10)
age_weight = 10

# set the memory weight (default: 70)
memory_weight = 70


[KeystoneManager]
autostart = True

# set the manager rate (minutes)
rate = 5

# set the Keystone url (v3 only)
auth_url = http://CONTROLLER_HOST:5000/v3

# set the name of user with admin role
#username =

# set the password of user with admin role
#password =

# set the project name to request authorization on
#project_name =

# set the project id to request authorization on
#project_id =

# set the http connection timeout (default: 60)
timeout = 60

# set the user domain name (default: default)
user_domain_name = default

# set the project domain name (default: default)
project_domain_name = default

# set the clock skew. This forces the request for token, a
# delta time before the token expiration (default: 60 sec)
clock_skew = 60

# set the PEM encoded Certificate Authority to use when verifying HTTPs connections
#ssl_ca_file =

# set the SSL client certificate (PEM encoded)
#ssl_cert_file =


[NovaManager]
autostart = True

# set the manager rate (minutes)
rate = 5

# set the http connection timeout (default: 60)
timeout = 60

# set the AMQP backend type (e.g. rabbit, qpid)
#amqp_backend =

# set the AMQP HA cluster host:port pairs
#amqp_hosts =

# set the AMQP broker address where a single node is used (default: localhost)
amqp_host = localhost

# set the AMQP broker port where a single node is used
amqp_port = 5672

# set the AMQP user
#amqp_user =

# set the AMQP user password
#amqp_password =

# set the AMQP virtual host (default: /)
amqp_virtual_host = /

# set the Nova host (default: localhost)
host = CONTROLLER_HOST

# set the Nova conductor topic (default: conductor)
conductor_topic = conductor

# set the Nova compute topic (default: compute)
compute_topic = compute

# set the Nova scheduler topic (default: scheduler)
scheduler_topic = scheduler

# set the Nova database connection
db_connection = DIALECT+DRIVER://USER:PASSWORD@DB_HOST/nova

# set the Nova CPU allocation ratio (default: 16)
cpu_allocation_ratio = 16

# set the Nova RAM allocation ratio (default: 1.5)
ram_allocation_ratio = 1.5

# set the Nova metadata_proxy_shared_secret
metadata_proxy_shared_secret =

# set the PEM encoded Certificate Authority to use when verifying HTTPs connections
#ssl_ca_file =

# set the SSL client certificate (PEM encoded)
#ssl_cert_file =


[QueueManager]
autostart = True

# set the manager rate (minutes)
rate = 60

# set the Synergy database connection:
db_connection = DIALECT+DRIVER://USER:PASSWORD@DB_HOST/synergy

# set the connection pool size (default: 10)
db_pool_size = 10

# set the number of seconds after which a connection is automatically
# recycled (default: 30)
db_pool_recycle = 30

# set the max overflow (default: 5)
db_max_overflow = 5


[QuotaManager]
autostart = True

# set the manager rate (minutes)
rate = 5
The following describes the meaning of the attributes of the Synergy configuration file, for each possible section:
Section [Logger]
Attribute
Description
filename
The name of the log file
level
The logging level. Valid values are: CRITICAL, ERROR, WARNING, INFO, DEBUG, NOTSET
formatter
The format of the logged messages
maxBytes
The maximum size of a log file. When this size is reached, the log file is rotated
backupCount
The number of log files to be kept
Section [WSGI]
Attribute
Description
host
The hostname where the Synergy service is deployed
port
The port used by the Synergy service
threads
The number of threads used by the Synergy service
use_ssl
Specify if the service is secured through SSL
ssl_ca_file
The CA certificate file to use to verify connecting clients
ssl_cert_file
The Identifying certificate PEM file to present to clients
ssl_key_file
The Private key PEM file used to sign cert_file certificate
max_header_line
The maximum size of message headers to be accepted (default: 16384)
retry_until_window
The number of seconds to keep retrying for listening (default: 30sec)
tcp_keepidle
The value of TCP_KEEPIDLE in seconds for each server socket
backlog
The number of backlog requests to configure the socket with (default: 4096). The listen backlog is a socket setting telling the kernel how to limit the number of outstanding (i.e. not yet accepted) connections in the listen queue of a listening socket. If the number of pending connections exceeds the specified size, new ones are automatically rejected
Section [SchedulerManager]
Attribute
Description
autostart
Specifies if the SchedulerManager manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager
projects
Defines the list of OpenStack projects entitled to access the dynamic resources
shares
Defines, for each project entitled to access the dynamic resources, the relevant share for the usage of such resources. If for a project the value is not specified, the value set for the attribute default_share in the FairShareManager section is used
default_TTL
Specifies the default maximum Time to Live for a Virtual Machine/container, in minutes (default: 2880)
TTLs
For each project, specifies the maximum Time to Live for a Virtual Machine/container, in minutes. VMs and containers running for more than this value will be killed by Synergy. If for a certain project the value is not specified, the value specified by the default_TTL attribute will be used
backfill_depth
The integer value expresses the max depth used by the backfilling strategy: this allows Synergy to not check the whole queue when looking for VMs to start (default: 100)
notification_topic
The notification topic used by Nova for informing listeners about the state changes of the VMs. In case some other service (e.g. Ceilometer) is listening on the default Nova topic (i.e. "notifications"), please define a new topic specific for Synergy (e.g. notification_topics = notifications,synergy_notifications)
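The TTL enforcement described by default_TTL and TTLs boils down to an elapsed-time check. A minimal Python sketch follows; the function name and structure are illustrative, not Synergy's actual code:

```python
from datetime import datetime, timedelta

def is_expired(started_at, ttl_minutes):
    """Return True if a VM started at `started_at` has exceeded its TTL (minutes)."""
    return datetime.utcnow() - started_at > timedelta(minutes=ttl_minutes)

# a VM started 3000 minutes ago exceeds the default TTL of 2880 minutes
vm_start = datetime.utcnow() - timedelta(minutes=3000)
print(is_expired(vm_start, 2880))  # True
```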
Section [FairShareManager]
Attribute
Description
autostart
Specifies if the FairShare manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager
period_length
The time window considered for resource usage by the fair-share algorithm used by Synergy is split in periods having all the same length, and the most recent periods are given a higher weight. This attribute specifies the length, in days, of a single period (default: 7)
periods
The time window considered for resource usage by the fair-share algorithm used by Synergy is split in periods having all the same length, and the most recent periods are given a higher weight. This attribute specifies the number of periods to be considered (default: 3)
default_share
Specifies the default share to be used for a project, if not specified in the shares attribute of the SchedulerManager section (default: 10)
decay_weight
Value between 0 and 1, used by the fair-share scheduler to define how older periods are given less weight with respect to resource usage (default: 0.5)
vcpus_weight
The weight to be used for the attribute concerning vcpus usage in the fairshare algorithm used by Synergy (default: 100)
age_weight
This attribute defines how the oldest requests (which therefore have low priority) have their priority increased, so that they can eventually be served (default: 10)
memory_weight
The weight to be used for the attribute concerning memory usage in the fairshare algorithm used by Synergy (default: 70)
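Taken together, these attributes suggest how a request's priority is computed: recent usage is aggregated per period with exponentially decaying weights, mixed according to vcpus_weight and memory_weight, and compared against the project's share, while age_weight slowly raises the priority of waiting requests. The following Python sketch is an illustrative approximation under those assumptions, not Synergy's actual fair-share implementation:

```python
def fairshare_priority(usage_per_period, share, total_share,
                       vcpus_weight=100, memory_weight=70,
                       age_weight=10, decay_weight=0.5, age=0):
    """Illustrative fair-share priority: low recent usage and a high
    share raise priority; waiting requests gain priority with age.

    usage_per_period: (vcpus_used, memory_used) tuples, most recent
    period first; older periods count less via decay_weight.
    """
    weighted_usage = 0.0
    for i, (vcpus, memory) in enumerate(usage_per_period):
        decay = decay_weight ** i  # older period -> smaller weight
        weighted_usage += decay * (vcpus_weight * vcpus + memory_weight * memory)
    target = share / float(total_share)  # normalized fair share
    # usage beyond the target share depresses priority; age restores it
    return target / (1.0 + weighted_usage) + age_weight * age

# a project with no recent usage outranks one that consumed heavily
idle = fairshare_priority([(0, 0), (0, 0)], share=70, total_share=100)
busy = fairshare_priority([(10, 2048), (5, 1024)], share=70, total_share=100)
print(idle > busy)  # True
```

With shares of 70 and 30, the 70% project regains the head of the queue as soon as its decayed usage drops, while older requests from either project slowly climb via age_weight.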
Section [KeystoneManager]
Attribute
Description
autostart
Specifies if the Keystone manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager
auth_url
The URL of the OpenStack identity service. Please note that the v3 API endpoint must be used
username
The name of the user with admin role
password
The password of the specified user with admin role
project_id
The project id to request authorization on
project_name
The project name to request authorization on
user_domain_name
The user domain name (default: "default")
project_domain_name
The project domain name (default: "default")
timeout
The http connection timeout (default: 60)
clock_skew
Forces the token to be requested a delta time before the token expiration (default: 60 sec)
Section [NovaManager]
Attribute
Description
autostart
Specifies if the nova manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager
host
The hostname where the nova-conductor service runs (default: localhost)
timeout
The http connection timeout (default: 60)
amqp_backend
The AMQP backend type (rabbit or qpid)
amqp_hosts
The AMQP HA cluster host:port pairs
amqp_host
The server where the AMQP service runs (default: localhost)
amqp_port
The port used by the AMQP service
amqp_user
The AMQP userid
amqp_password
The password of the AMQP user
amqp_virtual_host
The AMQP virtual host
conductor_topic
The topic on which conductor nodes listen (default: conductor)
compute_topic
The topic on which compute nodes listen (default: compute)
scheduler_topic
The topic on which scheduler nodes listen (default: scheduler)
cpu_allocation_ratio
The Nova CPU allocation ratio (default: 16)
ram_allocation_ratio
The Nova RAM allocation ratio (default: 1.5)
metadata_proxy_shared_secret
The Nova metadata_proxy_shared_secret
db_connection
The SQLAlchemy connection string to use to connect to the Nova database
Section [QueueManager]
Attribute
Description
autostart
Specifies if the Queue manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager
db_connection
The SQLAlchemy connection string to use to connect to the Synergy database
db_pool_size
The number of SQL connections to be kept open (default: 10)
db_pool_recycle
The number of seconds after which a connection is automatically recycled (default: 30)
db_max_overflow
The max overflow with SQLAlchemy (default: 5)
Section [QuotaManager]
Attribute
Description
autostart
Specifies if the Quota manager should be started when Synergy starts
rate
The time (in minutes) between two executions of the task implementing this manager

Installation and configuration using puppet

We provide a Puppet module for Synergy, so users can install and configure Synergy with Puppet. The module provides both the synergy-service and synergy-scheduler-manager components.
The module is available on the Puppet Forge: vll/synergy.
Install the puppet module with:

puppet module install vll-synergy
Usage example (replace the database URLs and passwords with your own values):

class { 'synergy':
  synergy_db_url          => 'mysql://synergy:SYNERGY_DBPASS@DB_HOST/synergy',
  synergy_project_shares  => {'A' => 70, 'B' => 30 },
  keystone_url            => 'https://example.com',
  keystone_admin_user     => 'admin',
  keystone_admin_password => 'the keystone password',
  nova_url                => 'https://example.com',
  nova_db_url             => 'mysql://nova:NOVA_DBPASS@DB_HOST/nova',
  amqp_backend            => 'rabbit',
  amqp_host               => 'localhost',
  amqp_port               => 5672,
  amqp_user               => 'openstack',
  amqp_password           => 'the amqp password',
  amqp_virtual_host       => '/',
}

The Synergy command line interface

The Synergy service provides a command-line client, called synergy, which allows the Cloud administrator to control and monitor the Synergy service.
Before running the Synergy client command, you must create and source the admin-openrc.sh file to set the relevant environment variables. This is the same script used to run the OpenStack command line tools.
Note that the OS_AUTH_URL variable must refer to the v3 version of the keystone API, e.g.:
export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:35357/v3

synergy usage

usage: synergy [-h] [--version] [--debug] [--os-username <auth-user-name>]
               [--os-password <auth-password>]
               [--os-user-domain-id <auth-user-domain-id>]
               [--os-user-domain-name <auth-user-domain-name>]
               [--os-project-name <auth-project-name>]
               [--os-project-id <auth-project-id>]
               [--os-project-domain-id <auth-project-domain-id>]
               [--os-project-domain-name <auth-project-domain-name>]
               [--os-auth-token <auth-token>] [--os-auth-token-cache]
               [--os-auth-url <auth-url>] [--os-auth-system <auth-system>]
               [--bypass-url <bypass-url>] [--os-cacert <ca-certificate>]
               {manager,queue,quota,usage} ...

positional arguments:
  {manager,queue,quota,usage}
                        commands

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --debug               print debugging output
  --os-username <auth-user-name>
                        defaults to env[OS_USERNAME]
  --os-password <auth-password>
                        defaults to env[OS_PASSWORD]
  --os-user-domain-id <auth-user-domain-id>
                        defaults to env[OS_USER_DOMAIN_ID]
  --os-user-domain-name <auth-user-domain-name>
                        defaults to env[OS_USER_DOMAIN_NAME]
  --os-project-name <auth-project-name>
                        defaults to env[OS_PROJECT_NAME]
  --os-project-id <auth-project-id>
                        defaults to env[OS_PROJECT_ID]
  --os-project-domain-id <auth-project-domain-id>
                        defaults to env[OS_PROJECT_DOMAIN_ID]
  --os-project-domain-name <auth-project-domain-name>
                        defaults to env[OS_PROJECT_DOMAIN_NAME]
  --os-auth-token <auth-token>
                        defaults to env[OS_AUTH_TOKEN]
  --os-auth-token-cache
                        Use the auth token cache. Defaults to False if
                        env[OS_AUTH_TOKEN_CACHE] is not set
  --os-auth-url <auth-url>
                        defaults to env[OS_AUTH_URL]
  --os-auth-system <auth-system>
                        defaults to env[OS_AUTH_SYSTEM]
  --bypass-url <bypass-url>
                        use this API endpoint instead of the Service Catalog
  --os-cacert <ca-certificate>
                        Specify a CA bundle file to use in verifying a TLS
                        (https) server certificate. Defaults to env[OS_CACERT]

Command-line interface to the OpenStack Synergy API.
The synergy optional arguments:

-h, --help
    Show help message and exit

--version
    Show program's version number and exit

--debug
    Show debugging information

--os-username <auth-user-name>
    Username to login with. Defaults to env[OS_USERNAME]

--os-password <auth-password>
    Password to use. Defaults to env[OS_PASSWORD]

--os-project-name <auth-project-name>
    Project name to scope to. Defaults to env[OS_PROJECT_NAME]

--os-project-id <auth-project-id>
    Id of the project to scope to. Defaults to env[OS_PROJECT_ID]

--os-project-domain-id <auth-project-domain-id>
    Specify the project domain id. Defaults to env[OS_PROJECT_DOMAIN_ID]

--os-project-domain-name <auth-project-domain-name>
    Specify the project domain name. Defaults to env[OS_PROJECT_DOMAIN_NAME]

--os-user-domain-id <auth-user-domain-id>
    Specify the user domain id. Defaults to env[OS_USER_DOMAIN_ID]

--os-user-domain-name <auth-user-domain-name>
    Specify the user domain name. Defaults to env[OS_USER_DOMAIN_NAME]

--os-auth-token <auth-token>
    The auth token to be used. Defaults to env[OS_AUTH_TOKEN]

--os-auth-token-cache
    Use the auth token cache. Defaults to False if env[OS_AUTH_TOKEN_CACHE] is not set

--os-auth-url <auth-url>
    The URL of the Identity endpoint. Defaults to env[OS_AUTH_URL]

--os-auth-system <auth-system>
    The auth system to be used. Defaults to env[OS_AUTH_SYSTEM]

--bypass-url <bypass-url>
    Use this API endpoint instead of the Service Catalog

--os-cacert <ca-bundle-file>
    Specify a CA certificate bundle file to use in verifying a TLS (https) server certificate. Defaults to env[OS_CACERT]

synergy manager

This command gets information about the managers deployed in the Synergy service and controls their execution:
# synergy manager -h
usage: synergy manager [-h] {list,status,start,stop} ...

positional arguments:
  {list,status,start,stop}
    list      list the managers
    status    show the managers status
    start     start the manager
    stop      stop the manager

optional arguments:
  -h, --help  show this help message and exit
The command synergy manager list provides the list of all managers deployed in the Synergy service:
# synergy manager list
╒══════════════════╕
│ manager          │
╞══════════════════╡
│ QuotaManager     │
├──────────────────┤
│ NovaManager      │
├──────────────────┤
│ SchedulerManager │
├──────────────────┤
│ TimerManager     │
├──────────────────┤
│ QueueManager     │
├──────────────────┤
│ KeystoneManager  │
├──────────────────┤
│ FairShareManager │
╘══════════════════╛
To get the status of the managers, use:
# synergy manager status
╒══════════════════╤══════════╤══════════════╕
│ manager          │ status   │   rate (min) │
╞══════════════════╪══════════╪══════════════╡
│ QuotaManager     │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ NovaManager      │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ SchedulerManager │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ TimerManager     │ ACTIVE   │           60 │
├──────────────────┼──────────┼──────────────┤
│ QueueManager     │ RUNNING  │           10 │
├──────────────────┼──────────┼──────────────┤
│ KeystoneManager  │ RUNNING  │            1 │
├──────────────────┼──────────┼──────────────┤
│ FairShareManager │ RUNNING  │            1 │
╘══════════════════╧══════════╧══════════════╛

# synergy manager status TimerManager
╒══════════════╤══════════╤══════════════╕
│ manager      │ status   │   rate (min) │
╞══════════════╪══════════╪══════════════╡
│ TimerManager │ ACTIVE   │           60 │
╘══════════════╧══════════╧══════════════╛
To control the execution of a specific manager, use the start and stop sub-commands:
# synergy manager start TimerManager
╒══════════════╤════════════════════════════════╤══════════════╕
│ manager      │ status                         │   rate (min) │
╞══════════════╪════════════════════════════════╪══════════════╡
│ TimerManager │ RUNNING (started successfully) │           60 │
╘══════════════╧════════════════════════════════╧══════════════╛

# synergy manager stop TimerManager
╒══════════════╤═══════════════════════════════╤══════════════╕
│ manager      │ status                        │   rate (min) │
╞══════════════╪═══════════════════════════════╪══════════════╡
│ TimerManager │ ACTIVE (stopped successfully) │           60 │
╘══════════════╧═══════════════════════════════╧══════════════╛

synergy quota

The overall cloud resources can be grouped in:
    private quota: composed of resources statically allocated and managed using the 'standard' OpenStack policies
    shared quota: composed of resources not statically allocated, which are fairly distributed among users by Synergy
The size of the shared quota is calculated as the difference between the total amount of cloud resources (taking into account the over-commitment ratios) and the total resources allocated to the private quotas. Therefore, for all projects it is necessary to specify proper quotas for instances, VCPUs and RAM so that their total is less than the total amount of cloud resources.
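That sizing rule is simple arithmetic; the numbers below are purely illustrative:

```python
def shared_quota_vcpus(total_vcpus, cpu_allocation_ratio, private_quotas):
    """Shared-quota VCPUs: total capacity, including over-commitment,
    minus the VCPUs reserved by all private quotas."""
    effective_total = total_vcpus * cpu_allocation_ratio
    return effective_total - sum(private_quotas)

# 16 physical cores, over-commitment ratio 2, private quotas of 8 and 8
print(shared_quota_vcpus(16, 2, [8, 8]))  # 16 VCPUs left in the shared quota
```

The same computation applies to RAM with the RAM over-commitment ratio. If the private quotas sum past the effective total, the shared quota would be negative, which is why the private quotas must stay below the overall capacity.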
Once Synergy is installed, the private quota of projects can no longer be managed using the Horizon dashboard, but only via the command line, using the following OpenStack command:
# openstack quota set --cores <num_vcpus> --ram <memory_size> --instances <max_num_instances> --class <project_id>
The private quota will be picked up by Synergy after a few minutes, without restarting it. This example shows how the private quota of the project prj_a (id=a5ccbaf2a9da407484de2af881198eb9) has been modified:
# synergy quota show --project_name prj_a
╒═══════════╤═══════════════════════════════════════════════╤═════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                 │ shared quota                                                                │
╞═══════════╪═══════════════════════════════════════════════╪═════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 3.00 | memory: 0.00 of 1024.00 │ vcpus: 0.00 of 26.00 | memory: 0.00 of 59956.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧═══════════════════════════════════════════════╧═════════════════════════════════════════════════════════════════════════════╛

# openstack quota set --cores 2 --ram 2048 --instances 10 --class a5ccbaf2a9da407484de2af881198eb9

# synergy quota show --project_name prj_a
╒═══════════╤═══════════════════════════════════════════════╤═════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                 │ shared quota                                                                │
╞═══════════╪═══════════════════════════════════════════════╪═════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 2.00 | memory: 0.00 of 2048.00 │ vcpus: 0.00 of 27.00 | memory: 0.00 of 58932.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧═══════════════════════════════════════════════╧═════════════════════════════════════════════════════════════════════════════╛
To get information about the private and shared quotas, use the synergy quota command:
# synergy quota -h
usage: synergy quota [-h] {show} ...

positional arguments:
  {show}
    show      shows the quota info

optional arguments:
  -h, --help  show this help message and exit

# synergy quota show -h
usage: synergy quota show [-h] [-i <id> | -n <name> | -a | -s]

optional arguments:
  -h, --help            show this help message and exit
  -i <id>, --project_id <id>
  -n <name>, --project_name <name>
  -a, --all_projects
  -s, --shared
To get the status of the shared quota, use the option --shared:
# synergy quota show --shared
╒════════════╤════════╤════════╕
│ resource   │   used │   size │
╞════════════╪════════╪════════╡
│ vcpus      │      2 │     27 │
├────────────┼────────┼────────┤
│ memory     │   1024 │  60980 │
├────────────┼────────┼────────┤
│ instances  │      1 │     -1 │
╘════════════╧════════╧════════╛
In this example the total amount of VCPUs allocated to the shared quota is 27, of which just 2 are currently used (and similarly for memory and instances). The value -1 means that the Cloud administrator has not set a limit on the number of instances (i.e. VMs), so in this example the number of VMs is unlimited.
The --all_projects option provides information about the private and shared quotas of all projects:
# synergy quota show --all_projects
╒═══════════╤════════════════════════════════════════════════╤═══════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                  │ shared quota                                                                  │
╞═══════════╪════════════════════════════════════════════════╪═══════════════════════════════════════════════════════════════════════════════╡
│ prj_b     │ vcpus: 1.00 of 3.00 | memory: 512.0 of 1536.00 │ vcpus: 0.00 of 27.00 | memory: 0.00 of 60980.00 | share: 30.00% | TTL: 5.00   │
├───────────┼────────────────────────────────────────────────┼───────────────────────────────────────────────────────────────────────────────┤
│ prj_a     │ vcpus: 0.00 of 1.00 | memory: 0.00 of 512.00   │ vcpus: 2.00 of 27.00 | memory: 1024.0 of 60980.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧════════════════════════════════════════════════╧═══════════════════════════════════════════════════════════════════════════════╛

# synergy quota show --project_name prj_a
╒═══════════╤══════════════════════════════════════════════╤═══════════════════════════════════════════════════════════════════════════════╕
│ project   │ private quota                                │ shared quota                                                                  │
╞═══════════╪══════════════════════════════════════════════╪═══════════════════════════════════════════════════════════════════════════════╡
│ prj_a     │ vcpus: 0.00 of 1.00 | memory: 0.00 of 512.00 │ vcpus: 2.00 of 27.00 | memory: 1024.0 of 60980.00 | share: 70.00% | TTL: 5.00 │
╘═══════════╧══════════════════════════════════════════════╧═══════════════════════════════════════════════════════════════════════════════╛
In this example the project prj_b is currently consuming only resources of its private quota (1 VCPU and 512MB of memory), while its shared quota is unused. Conversely, prj_a is consuming only shared resources (2 VCPUs and 1024MB of memory). The share values set by the Cloud administrator are 70% for prj_a and 30% for prj_b (the shares attribute in synergy.conf), while for both projects the TTL has been set to 5 minutes (the TTL attribute). Note that, in this example, VMs instantiated in the shared quota can live for just 5 minutes, while those created in the private quota can live forever.

synergy queue

This command provides information about the amount of user requests stored in the persistent priority queue:
# synergy queue -h
usage: synergy queue [-h] {show} ...

positional arguments:
  {show}
    show        shows the queue info

optional arguments:
  -h, --help    show this help message and exit

# synergy queue show
╒═════════╤════════╤═══════════╕
│ name │ size │ is open │
╞═════════╪════════╪═══════════╡
│ DYNAMIC │ 544 │ true │
╘═════════╧════════╧═══════════╛
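The DYNAMIC queue shown above is a priority queue of user requests. Its behaviour can be sketched with Python's standard heapq module (an illustrative in-memory model only; Synergy's real queue is persistent, and the class name is hypothetical):

```python
import heapq

class RequestQueue:
    """In-memory sketch of a priority queue of user requests."""

    def __init__(self, name):
        self.name = name
        self._heap = []
        self._counter = 0  # tie-breaker: FIFO among equal priorities

    def insert(self, priority, request):
        # heapq is a min-heap, so negate the priority to pop the
        # highest-priority request first
        heapq.heappush(self._heap, (-priority, self._counter, request))
        self._counter += 1

    def pop(self):
        return heapq.heappop(self._heap)[2]

    def size(self):
        return len(self._heap)

queue = RequestQueue("DYNAMIC")
queue.insert(100, "boot VM for user_a2")
queue.insert(80, "boot VM for user_a1")
print(queue.pop())   # the highest-priority request comes out first
print(queue.size())
```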

synergy usage

To get information about the usage of shared resources at project or user level, use:
# synergy usage show -h
usage: synergy usage show [-h] {project,user} ...

positional arguments:
  {project,user}
    project     project help
    user        user help

optional arguments:
  -h, --help    show this help message and exit

# synergy usage show project -h
usage: synergy usage show project [-h] [-d <id> | -m <name> | -a]

optional arguments:
  -h, --help    show this help message and exit
  -d <id>, --project_id <id>
  -m <name>, --project_name <name>
  -a, --all_projects

# synergy usage show user -h
usage: synergy usage show user [-h] (-d <id> | -m <name>)
                               (-i <id> | -n <name> | -a)

optional arguments:
  -h, --help    show this help message and exit
  -d <id>, --project_id <id>
  -m <name>, --project_name <name>
  -i <id>, --user_id <id>
  -n <name>, --user_name <name>
  -a, --all_users
The project sub-command reports resource usage per project.
The following example shows that the projects prj_a (share: 70%) and prj_b (share: 30%) have consumed, over the last three days, 70.40% and 29.60% of the shared resources respectively:
# synergy usage show project --all_projects
╒═══════════╤═══════════════════════════════════════════════════════════════╤═════════╕
│ project │ shared quota (09 Dec 2016 14:35:43 - 12 Dec 2016 14:35:43) │ share │
╞═══════════╪═══════════════════════════════════════════════════════════════╪═════════╡
│ prj_b │ vcpus: 29.60% | memory: 29.60% │ 30.00% │
├───────────┼───────────────────────────────────────────────────────────────┼─────────┤
│ prj_a │ vcpus: 70.40% | memory: 70.40% │ 70.00% │
╘═══════════╧═══════════════════════════════════════════════════════════════╧═════════╛

# synergy usage show project --project_name prj_a
╒═══════════╤══════════════════════════════════════════════════════════════╤═════════╕
│ project │ shared quota (09 Dec 2016 15:01:44 - 12 Dec 2016 15:01:44) │ share │
╞═══════════╪══════════════════════════════════════════════════════════════╪═════════╡
│ prj_a │ vcpus: 70.40% | memory: 70.40% │ 70.00% │
╘═══════════╧══════════════════════════════════════════════════════════════╧═════════╛
The time window is defined by the Cloud administrator by setting the attributes period and period_length in synergy.conf.
It may happen that prj_a (or prj_b) does not need to consume shared resources for a while: in this scenario the other projects (here, prj_b) can take advantage of this and consume more resources than their assigned share (i.e. 30%):
# synergy usage show project --all_projects
╒═══════════╤═══════════════════════════════════════════════════════════════╤═════════╕
│ project │ shared quota (09 Dec 2016 14:35:43 - 12 Dec 2016 14:35:43) │ share │
╞═══════════╪═══════════════════════════════════════════════════════════════╪═════════╡
│ prj_b │ vcpus: 98.40% | memory: 98.40% │ 30.00% │
├───────────┼───────────────────────────────────────────────────────────────┼─────────┤
│ prj_a │ vcpus: 1.60% | memory: 1.60% │ 70.00% │
╘═══════════╧═══════════════════════════════════════════════════════════════╧═════════╛
However, as soon as prj_a requests more shared resources, it will be given a higher priority than prj_b, in order to rebalance their usage.
The user sub-command reports resource usage per user within a project.
The following example shows the usage report for the users of project prj_a. They have the same share value (35%) but different priorities (user_a1=80, user_a2=100), because user_a1 has consumed more than user_a2 (51.90% vs 48.10%).
# synergy usage show user --project_name prj_a --all_users
╒═════════╤══════════════════════════════════════════════════════════════╤═════════╤════════════╕
│ user │ shared quota (09 Dec 2016 14:58:44 - 12 Dec 2016 14:58:44) │ share │ priority │
╞═════════╪══════════════════════════════════════════════════════════════╪═════════╪════════════╡
│ user_a2 │ vcpus: 48.10% | memory: 48.10% │ 35.00% │ 100 │
├─────────┼──────────────────────────────────────────────────────────────┼─────────┼────────────┤
│ user_a1 │ vcpus: 51.90% | memory: 51.90% │ 35.00% │ 80 │
╘═════════╧══════════════════════════════════════════════════════════════╧═════════╧════════════╛

# synergy usage show user --project_name prj_a --user_name user_a1
╒═════════╤══════════════════════════════════════════════════════════════╤═════════╤════════════╕
│ user │ shared quota (09 Dec 2016 14:58:44 - 12 Dec 2016 14:58:44) │ share │ priority │
╞═════════╪══════════════════════════════════════════════════════════════╪═════════╪════════════╡
│ user_a1 │ vcpus: 51.90% | memory: 51.90% │ 35.00% │ 80 │
╘═════════╧══════════════════════════════════════════════════════════════╧═════════╧════════════╛
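The relation between usage, share and priority can be illustrated with a toy formula (this is only a sketch of the idea; Synergy's actual fair-share algorithm is more elaborate, and the function name is hypothetical):

```python
def fairshare_priority(share_pct, usage_pct, base=100):
    # Toy model: a user whose recent usage exceeds the assigned share
    # gets a proportionally lower priority, so that under-served users
    # are scheduled first.
    if usage_pct <= share_pct:
        return base
    return int(base * share_pct / usage_pct)

# Both users have a 35% share; user_a1 consumed more than user_a2,
# so user_a1 ends up with the lower priority:
print(fairshare_priority(35.0, 51.9))  # user_a1: lower priority
print(fairshare_priority(35.0, 48.1))  # user_a2: higher priority
```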

Open Ports

To interact with Synergy using the client tool, just one port needs to be open. This is the port defined in the Synergy configuration file (attribute port in the [WSGI] section). The default value is 8051.
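If a firewall is running on the Synergy host, the same port must be opened there too. For example, with firewalld on CentOS 7 (assuming the default port 8051; adjust it if you changed the port attribute in the [WSGI] section):

```shell
firewall-cmd --permanent --add-port=8051/tcp
firewall-cmd --reload
```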