Synergy Doc
Deployment and Administration guide
⚠ This is the documentation for an old version of Synergy on INDIGO 1 ⚠
The Synergy package versions corresponding to this documentation are:
    synergy-service v1.2.0
    synergy-scheduler-manager v2.1.0

Manual installation and configuration

Quota setting

The overall resources can be grouped into two classes:
    Static resources
    Dynamic resources
Static resources are managed using the 'standard' OpenStack policies. Therefore, for each project referring to static resources, it is necessary to specify the relevant quotas for instances, VCPUs and RAM.
The overall amount of dynamic resources is calculated as the difference between the total amount of resources (taking the overcommitment ratios into account) and the resources allocated to static projects.
For projects referring to dynamic resources, the quota values for VCPUs, instances and RAM are not meaningful and can therefore be set to any arbitrary value.
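For example, the quota of a static project can be set with the standard OpenStack client; a minimal sketch, where the project name prj_static and the values are purely illustrative:
# prj_static and the values below are placeholders: adapt them to your deployment
openstack quota set --instances 10 --cores 20 --ram 51200 prj_static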

Installation

Install the relevant INDIGO repository.

Install the synergy packages

On CentOS7:
yum install python-synergy-service python-synergy-scheduler-manager
On Ubuntu:
apt-get install python-synergy-service python-synergy-scheduler-manager
They can be installed on the OpenStack controller node or on another node.

Set up the Synergy database

Use the database access client to connect to the database server as the root user:
$ mysql -u root -p
Create the synergy database:
CREATE DATABASE synergy;
Grant proper access to the synergy database:
GRANT ALL PRIVILEGES ON synergy.* TO 'synergy'@'localhost' \
  IDENTIFIED BY 'SYNERGY_DBPASS';
GRANT ALL PRIVILEGES ON synergy.* TO 'synergy'@'%' \
  IDENTIFIED BY 'SYNERGY_DBPASS';
flush privileges;
Replace SYNERGY_DBPASS with a suitable password.
Exit the database access client.

Add Synergy as an OpenStack endpoint and service

Source the admin credentials to gain access to admin-only CLI commands:
$ . admin-openrc
Register the synergy service and endpoint in the OpenStack service catalog:
openstack service create --name synergy management

openstack endpoint create --region RegionOne management public http://$SYNERGY_HOST_IP:8051
openstack endpoint create --region RegionOne management admin http://$SYNERGY_HOST_IP:8051
openstack endpoint create --region RegionOne management internal http://$SYNERGY_HOST_IP:8051
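To double-check the registration you can list the service and its endpoints; a quick verification, whose output will reflect your own deployment:
# both commands only read the service catalog
openstack service list
openstack endpoint list --service management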

Adjust nova notifications

Make sure that nova notifications are enabled. On the controller node, add the following attributes to the nova.conf file and then restart the nova services:
notify_on_state_change = vm_state
default_notification_level = INFO
notification_driver = messaging
notification_topics = notifications

Edit the source files for proper messaging

Two changes are then needed on the controller node.
The first one is to edit /usr/lib/python2.7/site-packages/oslo_messaging/localcontext.py (for CentOS) or /usr/lib/python2.7/dist-packages/oslo_messaging/localcontext.py (for Ubuntu), replacing:
def _clear_local_context():
    """Clear the request context for the current thread."""
    delattr(_STORE, _KEY)
with:
def _clear_local_context():
    """Clear the request context for the current thread."""
    if hasattr(_STORE, _KEY):
        delattr(_STORE, _KEY)
The second one is to edit /usr/lib/python2.7/site-packages/nova/cmd/conductor.py (for CentOS) or /usr/lib/python2.7/dist-packages/nova/cmd/conductor.py (for Ubuntu), replacing:
topic=CONF.conductor.topic,
with:
topic=CONF.conductor.topic + "_synergy",

Restart nova

Then restart the nova services on the Controller node.
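For example, on CentOS7 with systemd this can be done as follows; the exact set of nova services to restart depends on your deployment:
# adapt the list to the nova services actually running on the node
systemctl restart openstack-nova-api openstack-nova-scheduler openstack-nova-conductor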

Configure and start Synergy

Configure the synergy service, as explained in the following section.
Then start and enable the synergy service. On CentOS:
systemctl start synergy
systemctl enable synergy
On Ubuntu:
service synergy start
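Once started, a quick sanity check is to look at the service status and at the log file configured in the [Logger] section (by default /var/log/synergy/synergy.log); for example, on CentOS:
systemctl status synergy
# the log should show the managers being instantiated without errors
tail /var/log/synergy/synergy.log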
If synergy complains about incompatibilities with the versions of the installed oslo packages, e.g.:
synergy.service - ERROR - manager 'timer' instantiation error: (oslo.log
1.10.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.log<2.3.0,>=2.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.service 0.9.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.service<1.3.0,>=1.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.concurrency 2.6.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.concurrency<3.3.0,>=3.0.0'))

synergy.service - ERROR - manager 'timer' instantiation error:
(oslo.middleware 2.8.0 (/usr/lib/python2.7/site-packages),
Requirement.parse('oslo.middleware<3.5.0,>=3.0.0'))
please patch the file /usr/lib/python2.7/site-packages/synergy_service-1.0.0-py2.7.egg-info/requires.txt by removing the version constraints after the dependencies.
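A possible way to do that is sketched below; it simply strips everything after each package name, so back up the file first:
cd /usr/lib/python2.7/site-packages/synergy_service-1.0.0-py2.7.egg-info/
cp requires.txt requires.txt.orig
# remove the version constraints, keeping only the package names
sed -i 's/[<>=!].*$//' requires.txt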

The synergy configuration file

Synergy must be configured by properly filling in the /etc/synergy/synergy.conf configuration file.
This is an example of the synergy.conf configuration file:
[DEFAULT]


[Logger]
filename=/var/log/synergy/synergy.log
level=INFO
formatter="%(asctime)s - %(name)s - %(levelname)s - %(message)s"
maxBytes=1048576
backupCount=100


[WSGI]
host=localhost
port=8051
threads=2
use_ssl=False
#ssl_ca_file=
#ssl_cert_file=
#ssl_key_file=
max_header_line=16384
retry_until_window=30
tcp_keepidle=600
backlog=4096


[SchedulerManager]
autostart=True
# rate (minutes)
rate=1

# the list of projects accessing the dynamic quota
projects=prj_a, prj_b

# the integer value expresses the share
shares=prj_a=70, prj_b=30

# the integer value expresses the default max time to live (minutes) for VM/Container
default_TTL=2880

# the integer value expresses the max time to live (minutes) for VM/Container
TTLs=prj_a=1440, prj_b=2880


[FairShareManager]
autostart=True
# rate (minutes)
rate=2

# period size (default=7 days)
period_length=1
# num of periods (default=3)
periods=3

# default share value (default=10)
default_share = 10

# weights
decay_weight=0.5
vcpus_weight=50
age_weight=0
memory_weight=50


[KeystoneManager]
autostart=True
rate=5

# the Keystone url (v3 only)
auth_url=http://10.64.31.19:5000/v3
# the name of the user with admin role
username=admin
# the password of the user with admin role
password=ADMIN
# the project to request authorization on
project_name=admin
# set the http connection timeout
timeout=60
# set the trust expiration


[NovaManager]
autostart=True
rate=5

# the nova configuration file: if specified, the following attributes are used:
# my_ip, conductor_topic, compute_topic, scheduler_topic, connection, rpc_backend
# in case of RABBIT backend: rabbit_host, rabbit_port, rabbit_virtual_host, rabbit_userid, rabbit_password
# in case of QPID backend: qpid_hostname, qpid_port, qpid_username, qpid_password
nova_conf=/etc/nova/nova.conf

host=10.64.31.19
# set the http connection timeout (default=60)
timeout=60

# the amqp backend type (e.g. rabbit, qpid)
amqp_backend=rabbit
amqp_host=10.64.31.19
amqp_port=5672
amqp_user=openstack
amqp_password=RABBIT_PASS
amqp_virtual_host=/
# the conductor topic
conductor_topic = conductor
# the compute topic
compute_topic = compute
# the scheduler topic
scheduler_topic = scheduler
# the NOVA database connection
db_connection = mysql://nova:[email protected]/nova


[QueueManager]
autostart=True
rate=5
# the Synergy database connection
db_connection=mysql://synergy:[email protected]/synergy
# the connection pool size (default=10)
db_pool_size = 10
# the max overflow (default=5)
db_max_overflow = 5


[QuotaManager]
autostart=True
rate=5
The following describes the meaning of the attributes of the synergy configuration file, for each possible section:
Section [Logger]
    filename: The name of the log file
    level: The log level. Possible values are DEBUG, INFO, WARNING, ERROR, CRITICAL
    formatter: The format of the log file
    maxBytes: The maximum size of a log file. When this size is reached, the log file is rotated
    backupCount: The number of log files to be kept
Section [WSGI]
    host: The hostname where the synergy service is deployed
    port: The port used by the synergy service
    threads: The number of threads used by the synergy service
    use_ssl: Specifies if the service is secured through SSL
    ssl_ca_file: CA certificate file to use to verify connecting clients
    ssl_cert_file: Identifying certificate PEM file to present to clients
    ssl_key_file: Private key PEM file used to sign cert_file certificate
    max_header_line: Maximum size of message headers to be accepted (default=16384)
    retry_until_window: Number of seconds to keep retrying to listen (default=30s)
    tcp_keepidle: Sets the value of TCP_KEEPIDLE in seconds for each server socket
    backlog: Number of backlog requests to configure the socket with (default=4096). The listen backlog is a socket setting telling the kernel how to limit the number of outstanding (i.e. not yet accepted) connections in the listen queue of a listening socket. If the number of pending connections exceeds the specified size, new ones are automatically rejected
Section [SchedulerManager]
    autostart: Specifies if the SchedulerManager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager
    projects: Defines the list of OpenStack projects entitled to access the dynamic resources
    shares: Defines, for each project entitled to access the dynamic resources, the relevant share for the usage of such resources. If the value is not specified for a project, the value set for the default_share attribute in the [FairShareManager] section is used
    default_TTL: Specifies the default maximum Time To Live for a Virtual Machine/container, in minutes
    TTLs: Specifies, for each project, the maximum Time To Live for a Virtual Machine/container, in minutes. VMs and containers running for longer than this value will be killed by synergy. If the value is not specified for a certain project, the value of the default_TTL attribute will be used
Section [FairShareManager]
    autostart: Specifies if the FairShare manager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager
    period_length: The time window considered for resource usage by the fair-share algorithm used by synergy is split into periods of equal length, and the most recent periods are given a higher weight. This attribute specifies the length, in days, of a single period (default=7 days)
    periods: The number of periods making up the time window considered by the fair-share algorithm (default=3)
    default_share: Specifies the default share to be used for a project, if not specified in the shares attribute of the [SchedulerManager] section
    decay_weight: Value between 0 and 1, used by the fair-share scheduler to define how much less weight the oldest periods are given with respect to resource usage
    vcpus_weight: The weight to be used for the attribute concerning VCPU usage in the fair-share algorithm used by synergy
    age_weight: Defines how the oldest requests (which therefore have low priority) should have their priority increased so that they can eventually be served
    memory_weight: The weight to be used for the attribute concerning memory usage in the fair-share algorithm used by synergy
Section [KeystoneManager]
    autostart: Specifies if the Keystone manager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager
    auth_url: The URL of the OpenStack identity service. Please note that the v3 API endpoint must be used
    username: The name of the user with admin role
    password: The password of the specified user with admin role
    project_name: The project to request authorization on
    timeout: The http connection timeout
Section [NovaManager]
    autostart: Specifies if the nova manager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager
    nova_conf: The pathname of the nova configuration file, if synergy is deployed on the OpenStack controller node. Otherwise it is necessary to specify the attributes host, conductor_topic, compute_topic, scheduler_topic, db_connection, and the ones referring to the AMQP system. This file must be readable by the synergy user
    host: The hostname where the nova-conductor service runs
    timeout: The http connection timeout
    amqp_backend: The AMQP backend type (rabbit or qpid)
    amqp_host: The server where the AMQP service runs
    amqp_port: The port used by the AMQP service
    amqp_user: The AMQP userid
    amqp_password: The password of the AMQP user
    amqp_virtual_host: The AMQP virtual host
    conductor_topic: The topic on which conductor nodes listen
    compute_topic: The topic on which compute nodes listen
    scheduler_topic: The topic on which scheduler nodes listen
    db_connection: The SQLAlchemy connection string to use to connect to the Nova database
Section [QueueManager]
    autostart: Specifies if the Queue manager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager
    db_connection: The SQLAlchemy connection string to use to connect to the synergy database
    db_pool_size: The number of SQL connections to be kept open
    db_max_overflow: The max overflow with SQLAlchemy
Section [QuotaManager]
    autostart: Specifies if the Quota manager should be started when synergy starts
    rate: The time (in minutes) between two executions of the task implementing this manager

Installation and configuration using puppet

We provide a Puppet module for Synergy so users can install and configure Synergy with Puppet. The module provides both the synergy-service and synergy-scheduler-manager components.
The module is available on the Puppet Forge: vll/synergy.
Install the puppet module with:
puppet module install vll-synergy
Usage example:
class { 'synergy':
  synergy_db_url           => 'mysql://synergy:[email protected]/synergy',
  synergy_project_shares   => {'A' => 70, 'B' => 30 },
  keystone_url             => 'https://example.com',
  keystone_admin_user      => 'admin',
  keystone_admin_password  => 'the keystone password',
  nova_url                 => 'https://example.com',
  nova_db_url              => 'mysql://nova:[email protected]/nova',
  amqp_backend             => 'rabbit',
  amqp_host                => 'localhost',
  amqp_port                => 5672,
  amqp_user                => 'openstack',
  amqp_password            => 'the amqp password',
  amqp_virtual_host        => '/',
}

The Synergy command line interface

The Synergy service provides a command-line client, called synergy, which allows the Cloud administrator to control and monitor the Synergy service.
Before running the synergy client command, you must create and source the admin-openrc.sh file to set the relevant environment variables. This is the same script used to run the OpenStack command line tools.
Note that the OS_AUTH_URL variable must refer to the v3 version of the keystone API, e.g.:
export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:35357/v3
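A minimal admin-openrc.sh might look like the following sketch, where all the values are placeholders to be replaced with the ones of your deployment:
# placeholder credentials: use your own admin user, password and project
export OS_USERNAME=admin
export OS_PASSWORD=ADMIN_PASS
export OS_PROJECT_NAME=admin
export OS_AUTH_URL=https://cloud-areapd.pd.infn.it:35357/v3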

synergy usage

# synergy --help
usage: synergy [-h] [--version] [--debug] [--os-username <auth-user-name>]
               [--os-password <auth-password>]
               [--os-project-name <auth-project-name>]
               [--os-project-id <auth-project-id>]
               [--os-auth-token <auth-token>] [--os-auth-token-cache]
               [--os-auth-url <auth-url>] [--os-auth-system <auth-system>]
               [--bypass-url <bypass-url>] [--os-cacert <ca-certificate>]

               {get_priority,get_queue,get_quota,get_share,get_usage,list,start,status,stop}
               ...

positional arguments:
  {get_priority,get_queue,get_quota,get_share,get_usage,list,start,status,stop}
                        commands
    get_priority        shows the users priority
    get_queue           shows the queue info
    get_quota           shows the dynamic quota info
    get_share           shows the users share
    get_usage           retrieve the resource usages
    list                list the managers
    start               start the managers
    status              retrieve the manager's status
    stop                stop the managers

optional arguments:
  -h, --help            show this help message and exit
  --version             show program's version number and exit
  --debug               print debugging output
  --os-username <auth-user-name>
                        defaults to env[OS_USERNAME]
  --os-password <auth-password>
                        defaults to env[OS_PASSWORD]
  --os-project-name <auth-project-name>
                        defaults to env[OS_PROJECT_NAME]
  --os-project-id <auth-project-id>
                        defaults to env[OS_PROJECT_ID]
  --os-auth-token <auth-token>
                        defaults to env[OS_AUTH_TOKEN]
  --os-auth-token-cache
                        Use the auth token cache. Defaults to False if
                        env[OS_AUTH_TOKEN_CACHE] is not set
  --os-auth-url <auth-url>
                        defaults to env[OS_AUTH_URL]
  --os-auth-system <auth-system>
                        defaults to env[OS_AUTH_SYSTEM]
  --bypass-url <bypass-url>
                        use this API endpoint instead of the Service Catalog
  --os-cacert <ca-certificate>
                        Specify a CA bundle file to use in verifying a TLS
                        (https) server certificate. Defaults to env[OS_CACERT]

Command-line interface to the OpenStack Synergy API.

synergy optional arguments

-h, --help
    Show help message and exit
--version
    Show program's version number and exit
--debug
    Show debugging information
--os-username <auth-user-name>
    Username to login with. Defaults to env[OS_USERNAME]
--os-password <auth-password>
    Password to use. Defaults to env[OS_PASSWORD]
--os-project-name <auth-project-name>
    Project name to scope to. Defaults to env[OS_PROJECT_NAME]
--os-project-id <auth-project-id>
    Id of the project to scope to. Defaults to env[OS_PROJECT_ID]
--os-auth-token <auth-token>
    The auth token to be used. Defaults to env[OS_AUTH_TOKEN]
--os-auth-token-cache
    Use the auth token cache. Defaults to env[OS_AUTH_TOKEN_CACHE], or to 'false' if the variable is not set
--os-auth-url <auth-url>
    The URL of the Identity endpoint. Defaults to env[OS_AUTH_URL]
--os-auth-system <auth-system>
    The auth system to be used. Defaults to env[OS_AUTH_SYSTEM]
--bypass-url <bypass-url>
    Use this API endpoint instead of the Service Catalog
--os-cacert <ca-bundle-file>
    Specify a CA certificate bundle file to use in verifying a TLS (https) server certificate. Defaults to env[OS_CACERT]

synergy list

This command returns the list of managers that have been deployed in the synergy service.
E.g.:
# synergy list
--------------------
| manager          |
--------------------
| QuotaManager     |
| NovaManager      |
| FairShareManager |
| TimerManager     |
| QueueManager     |
| KeystoneManager  |
| SchedulerManager |
--------------------

synergy start

This command starts a manager deployed in the synergy service.
E.g.:
# synergy start TimerManager
-------------------------------------------------
| manager      | status  | message              |
-------------------------------------------------
| TimerManager | RUNNING | started successfully |
-------------------------------------------------

synergy stop

This command stops a manager deployed in the synergy service.
E.g.:
# synergy stop KeystoneManager
---------------------------------------------------
| manager         | status | message              |
---------------------------------------------------
| KeystoneManager | ACTIVE | stopped successfully |
---------------------------------------------------

synergy status

This command returns the status of the managers deployed in the synergy service.
E.g.:
# synergy status
------------------------------
| manager          | status  |
------------------------------
| QuotaManager     | RUNNING |
| NovaManager      | RUNNING |
| FairShareManager | RUNNING |
| TimerManager     | ACTIVE  |
| QueueManager     | RUNNING |
| KeystoneManager  | RUNNING |
| SchedulerManager | RUNNING |
------------------------------

synergy get_quota

This command shows the dynamic resources currently in use, compared with the total amount of dynamic resources.
E.g.:
# synergy get_quota
-------------------------------
| type     | in use | limit   |
-------------------------------
| ram (MB) | 9728   | 9808.00 |
| cores    | 19     | 28.00   |
-------------------------------
Using the --long option, it is also possible to see the status for each project.
In the following example:
    limit=28.00 for VCPUs for each dynamic project means that the total number of VCPUs available to the dynamic portion of the resources is 28. This is calculated from the total number of resources minus the ones allocated to static projects, taking the overcommitment factor into account (a worked example follows the output below).
    limit=9808.00 for memory for each dynamic project means that the total amount of RAM available to the dynamic portion of the resources is 9808 MB, calculated in the same way.
    prj_a is currently using 9 VCPUs and 4608 MB of RAM
    prj_b is currently using 10 VCPUs and 5120 MB of RAM
    the total number of VCPUs currently used by the dynamic projects is 19 (the value reported in parentheses)
    the total amount of RAM currently used by the dynamic projects is 9728 MB (the value reported in parentheses)
# synergy get_quota --long
-------------------------------------------------------------------------------
| project | cores                        | ram (MB)                           |
-------------------------------------------------------------------------------
| prj_b   | in use=10 (19) | limit=28.00 | in use=5120 (9728) | limit=9808.00 |
| prj_a   | in use= 9 (19) | limit=28.00 | in use=4608 (9728) | limit=9808.00 |
-------------------------------------------------------------------------------
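As an illustration of how such limits come about (purely hypothetical figures, not those of the example above): with 8 physical cores, a CPU overcommitment ratio of 4.0 and 4 VCPUs reserved by static projects, the dynamic VCPU limit shown to every dynamic project would be 8 * 4.0 - 4 = 28.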

synergy get_priority

This command returns the priority currently assigned by Synergy to the users of the dynamic projects, in order to guarantee the fair-share use of the resources (taking into account the policies specified by the Cloud administrator and the past usage of such resources).
E.g. in the following example user_a2 of project prj_a has the highest priority:
# synergy get_priority
--------------------------------
| project | user    | priority |
--------------------------------
| prj_a   | user_a1 | 78.00    |
| prj_a   | user_a2 | 80.00    |
| prj_b   | user_b1 | 5.00     |
| prj_b   | user_b2 | 5.00     |
--------------------------------

synergy get_share

This command reports the shares imposed by the Cloud administrator (attribute shares in the synergy configuration file) on the dynamic projects and on their users.
E.g. in the following example the administrator specified in the synergy configuration file the value 70 as the share of prj_a and 10 as the share of prj_b. The command also reports the corresponding percentage values.
# synergy get_share
----------------------------
| project | share          |
----------------------------
| prj_b   | 12.50% (10.00) |
| prj_a   | 87.50% (70.00) |
----------------------------
With the --long option it is also possible to see the shares of the individual users. The users of each project are given the same share.
Therefore the two users of prj_a each have a share of 43.75% (50% of 87.50%) of the total resources, and the two users of prj_b each have a share of 6.25% (50% of 12.50%).
# synergy get_share --long
-----------------------------------------------
| project | share          | user    | share  |
-----------------------------------------------
| prj_b   | 12.50% (10.00) | user_b1 | 6.25%  |
| prj_b   | 12.50% (10.00) | user_b2 | 6.25%  |
| prj_a   | 87.50% (70.00) | user_a1 | 43.75% |
| prj_a   | 87.50% (70.00) | user_a2 | 43.75% |
-----------------------------------------------

synergy get_usage

This command reports the usage of the resources by the dynamic projects in the time window considered by synergy (defined by the period_length and periods attributes of the synergy configuration file).
In the following example it is reported that, in the considered time window:
    prj_a has used 71.53% of the cores and 71.53% of the RAM
    prj_b has used 28.47% of the cores and 28.47% of the RAM
    user_a1 has used 59.68% of the resources within its project (42.69% of the overall usage)
    user_a2 has used 40.32% of the resources within its project (28.84% of the overall usage)
    user_b1 has used 48.58% of the resources within its project (13.83% of the overall usage)
    user_b2 has used 51.42% of the resources within its project (14.64% of the overall usage)
# synergy get_usage
---------------------------------------------------------------------------
| project | cores  | ram    | user    | cores (abs)     | ram (abs)       |
---------------------------------------------------------------------------
| prj_b   | 28.47% | 28.47% | user_b1 | 48.58% (13.83%) | 48.58% (13.83%) |
| prj_b   | 28.47% | 28.47% | user_b2 | 51.42% (14.64%) | 51.42% (14.64%) |
| prj_a   | 71.53% | 71.53% | user_a1 | 59.68% (42.69%) | 59.68% (42.69%) |
| prj_a   | 71.53% | 71.53% | user_a2 | 40.32% (28.84%) | 40.32% (28.84%) |
---------------------------------------------------------------------------

synergy get_queue

This command returns the total number of queued requests for the dynamic projects.
E.g. in the following example there are 45 queued requests in total for the dynamic projects.
# synergy get_queue
---------------------------
| queue   | status | size |
---------------------------
| DYNAMIC | ON     | 45   |
---------------------------

Open Ports

To interact with Synergy using the client tool, just one port needs to be open. This is the port defined in the synergy configuration file (attribute port in the [WSGI] section). The default value is 8051.
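For example, on a CentOS7 node protected by firewalld, the port could be opened as follows; adapt this to the firewall actually used at your site:
# open the synergy port (8051 by default) and reload the firewall rules
firewall-cmd --permanent --add-port=8051/tcp
firewall-cmd --reload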