Zextras Carbonio 23.6.0
Installation
This section discusses the installation of Carbonio distributed across multiple nodes (i.e., a Multi-Server installation), each of which plays one or more Roles.
Rather than prescribing a fixed installation with specific functionality on every node, we present an installation scenario that can be tailored to the demands of Carbonio customers with varying numbers of nodes. To this end, we introduce the concept of a Role: an atomic Carbonio feature consisting of one or more packages.
Because a Role may be deployed on any node in the cluster, the situation described below can be changed at any time by installing a Role on a different node (or even a dedicated node).
Scenario of Six Nodes
Carbonio Multi-Server is the only supported method of installation in a production environment, particularly for large production systems: it is more scalable in the event of infrastructure growth, and communication across all nodes is set up and secured automatically by Carbonio Mesh, which also adds fault detection and dynamic routing between infrastructure components.
In the proposed scenario, we will create a Carbonio Multi-Server environment comprised of six nodes (designated as SRV1,…, SRV6) as follows:
- Postgres is installed specifically on SRV1.
- Carbonio’s fundamental infrastructure is represented by SRV2, which installs Directory Server, Carbonio Mesh, and a database connection.
- MTA, the mail server, is installed on SRV3.
- The Proxy, which gives online access to all components, is hosted by SRV4, as is the Carbonio VideoServer, which provides video-conference functions and the capacity to record meetings.
- SRV5 is an AppServer that installs Carbonio Files and Carbonio Docs, which allow for document sharing and collaborative editing.
- SRV6 is another AppServer that includes Carbonio Preview, which allows Carbonio to preview snippets or thumbnails of a document, User Management, and several sophisticated service features.
In our scenario, we begin the Carbonio installation on six nodes running Ubuntu 20.04 LTS. The instructions also apply to six RHEL 8 nodes: the only variation is the command used to install packages, while the procedures for configuring the nodes are the same.
We also assume that each node’s IP address is 172.16.0.1X, where X is the number of the node. In other words, the IP addresses range from 172.16.0.11 (SRV1) to 172.16.0.16 (SRV6, the second AppServer). These values will be used in configuration files that must be manually edited during installation or upgrade operations.
In most Multi-Server settings, installing a Replica Directory Server in a Master/Slave configuration improves dependability and load-balancing. In a separate section, we discuss how to install the Replica on a dedicated node, SRV7 (which must be running the same OS as the other six). However, the Replica may be installed on any node other than SRV2 using the same approach.
Requirements
Each node of a Multi-Server must meet the Software Requirements of the Single-Server; the System Requirements also apply.
On Ubuntu, no additional requirement is necessary.
On RHEL 8, the following extra prerequisites are required.
An active subscription (the ability to fetch from BaseOS and the other primary repositories):
# subscription-manager repos --enable=rhel-8-for-x86_64-appstream-rpms
The CodeReady repository enabled:
# subscription-manager repos --enable codeready-builder-for-rhel-8-x86_64-rpms
Some further thoughts:
- It is required to be familiar with the use of CLI. All carbonio commands must be run as the zextras user (a zextras$ prompt will appear), whereas all other commands must be run as the root user, unless otherwise specified.
- Give the nodes meaningful names. For instance, you may call them proxy.example.com, mta.example.com, and so on. Replace example.com with the name of your domain.
- You will need to write down various configuration parameters and their values throughout the installation, as they will be required in the setup of subsequent nodes.
- These details are summarised at the end of each node’s installation: copy them to a secure location and keep them handy until the installation is complete. Examples of values are a node’s IP address (public or private) and a database user’s password.
- Depending on the Roles placed on each node, you must open in your firewall the ports indicated in the firewall ports reference (fw-ports) for all of the services you will provide.
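As an illustration only (the exact port list depends on the Roles on the node and is given in the firewall ports reference), opening a port with firewalld on RHEL 8 looks like this:

```shell
# Open HTTPS (443, used by the Proxy role) and reload the rules;
# repeat --add-port for every port your Roles require
firewall-cmd --permanent --add-port=443/tcp
firewall-cmd --reload
# List the currently open ports to verify
firewall-cmd --list-ports
```

On Ubuntu, the equivalent with ufw would be along the lines of `ufw allow 443/tcp`.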
Preliminary Tasks
Perform the following two procedures on each of the six nodes before beginning the actual installation.
Task 1: Configure repositories
The following is important information concerning the Carbonio packages repository and its content. Please read it carefully: it might save you some time in case of installation or upgrade problems and help you provide more precise bug reports.
The repository hosts simultaneously packages of two channels:
Release Candidate (RC). Packages in this channel are made available as soon as they are built by Zextras development and tested by the QA team. While they are stable, they are not suitable for use in a production environment, because they might still contain bugs, new functionalities that have not yet reached production-level quality, or broken dependencies.
Usually these problems are fixed within days or even hours, so try again before reporting a problem.
Use this channel and its packages for testing (or demo) installations only.
RELEASE. This channel contains only packages that are stable and suitable for a production environment.
Hint
When reporting a problem or opening a ticket, always mention the channel you are using, as this helps us analyse the problem quickly.
Task 2: Setting Host name
Carbonio needs a valid FQDN as hostname and a valid entry in the /etc/hosts file. To configure them, execute these two commands. First, set the hostname
# hostnamectl set-hostname mail.example.com
then update /etc/hosts with the IP address and hostname
# echo "172.16.0.10 mail.example.com mail" >> /etc/hosts
You can also simply get the current IP and hostname and save them:
# echo "$(hostname -I) $(hostname -f)"
Hint
Replace 172.16.0.10 with the actual management IP to be assigned to the server.
It is mandatory to configure the hostname, especially on the Directory-Server node, otherwise the services will not be able to bind to the correct address, leading to a disruption in Carbonio’s functionality.
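A quick way to verify the result of both commands is sketched below (using the example FQDN from above):

```shell
# The fully-qualified hostname should be reported
hostname -f          # expected: mail.example.com
# The /etc/hosts entry should resolve the FQDN to the address you set
getent hosts mail.example.com
```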
Preliminary Tasks for RHEL 8 Only
Carbonio installation on RHEL 8 systems requires a few tasks, including the setting of SELinux and the firewall.
SELinux and Firewall
SELinux
Must be set to disabled or permissive in the file /etc/selinux/config. You can check the current status using the command
# sestatus
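If sestatus reports enforcing mode, a minimal way to switch it (assuming the standard /etc/selinux/config layout) is:

```shell
# Switch to permissive immediately, without a reboot
setenforce 0
# Make the change persistent across reboots
sed -i 's/^SELINUX=.*/SELINUX=permissive/' /etc/selinux/config
# Confirm the persistent setting
grep '^SELINUX=' /etc/selinux/config
```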
Firewall
All the ports needed by Carbonio must be open on the firewall, or the firewall must be disabled. To disable the firewall, issue the commands
# systemctl stop firewalld.service
# systemctl disable firewalld.service
Installation of Nodes
The installation method follows the node sequence provided in the scenario. A few observations:
- It is assumed that the Postgres node is not a “real” component of the infrastructure, in the sense that it might be an existing server that is configured to connect properly with Carbonio (configuration instructions are included with the SRV1 installation).
Note
In our scenario, we install Postgres and configure it from scratch (SRV1).
- The first node to be deployed will be the one with the Directory Server role (SRV2).
- The MTA server (SRV3) will be set up next.
- The additional nodes can be installed in whatever order you choose, and you can skip instructions for any node or role you don’t want to install.
- While the fundamental technique is the same for both Ubuntu and RHEL 8, the particular instructions and file paths on the two operating systems may change, so make sure you perform the proper command on the relevant file.
When the installation procedure is complete, you may use a browser to access Carbonio’s GUI: instructions can be found in the section Access to the Web Interface.
Postgres: SRV1
Carbonio relies on a variety of databases to store and manage all of the items that it must handle. The primary database can be set up in two stages. If you are running Carbonio on RHEL 8, first set up Postgres according to the RHEL-specific instructions below. On Ubuntu, install Postgres with
# apt install postgresql-12
The first step is to create a role with administrative privileges and a password.
# su - postgres -c "psql --command=\"CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER encrypted password 'DB_ADM_PWD';\""
Remember to update the password with a strong password of your choice and keep it somewhere secure (ideally using a password manager), since you will need it for the rest of the procedure and maybe in the future. This password will be represented by the string DB_ADM_PWD.
The next stage is to build the database.
# su - postgres -c "psql --command=\"CREATE DATABASE carbonio_adm owner carbonio_adm;\""
Finally, allow the other nodes to access the databases that will be stored on this node by running these four commands.
# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5433';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET max_connections = 500;\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET shared_buffers = 5000;\""
# echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/12/main/pg_hba.conf
# systemctl restart postgresql
Hint
You may replace the 0.0.0.0/0 network with the network in which the cluster is installed (172.16.0.0) to prevent unwanted access.
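Before moving on, it can be worth checking from any other node that Postgres accepts remote connections on the non-default port 5433 (a sketch; it assumes a psql client is installed on that node and DB_ADM_PWD is the password chosen above):

```shell
# A successful query confirms that listen_addresses, the custom
# port, and the pg_hba.conf rule are all in effect
PGPASSWORD=DB_ADM_PWD psql -h 172.16.0.11 -p 5433 -U carbonio_adm \
    -d carbonio_adm -c 'SELECT 1;'
```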
Values to note for the next steps:
- DB_ADM_PWD: the password of the carbonio_adm database role
- SRV1_IP: the IP address of the node
On RHEL 8, the procedure is as follows.
The first step is to add the Postgresql repository.
# yum -y install https://download.postgresql.org/pub/repos/yum/reporpms/EL-8-x86_64/pgdg-redhat-repo-latest.noarch.rpm
Then, run the following commands to ensure that PostgreSQL 12 is installed.
# dnf -qy module disable postgresql
# dnf -y install postgresql12 postgresql12-server
Finally, initialise, start, and enable PostgreSQL.
# /usr/pgsql-12/bin/postgresql-12-setup initdb
# systemctl enable --now postgresql-12
As on Ubuntu, the primary database may now be set up in two stages.
The first step is to create a role with administrative privileges and a password.
# su - postgres -c "psql --command=\"CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER encrypted password 'DB_ADM_PWD';\""
Remember to update the password with a strong password of your choice and keep it somewhere secure (ideally using a password manager), since you will need it for the rest of the procedure and maybe in the future. This password will be represented by the string DB_ADM_PWD.
The next stage is to build the database.
# su - postgres -c "psql --command=\"CREATE DATABASE carbonio_adm owner carbonio_adm;\""
Finally, execute these four instructions to allow other nodes to access the databases that will be stored on this node.
# su - postgres -c "psql --command=\"ALTER SYSTEM SET listen_addresses TO '*';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET port TO '5433';\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET max_connections = 500;\""
# su - postgres -c "psql --command=\"ALTER SYSTEM SET shared_buffers = 5000;\""
# echo "host all all 0.0.0.0/0 md5" >> /var/lib/pgsql/12/data/pg_hba.conf
# systemctl restart postgresql-12
Hint
You may replace the 0.0.0.0/0 network with the network in which the cluster is installed (172.16.0.0) to prevent unwanted access.
Values to note for the next steps:
- DB_ADM_PWD: the password of the carbonio_adm database role
- SRV1_IP: the IP address of the node
SRV2: Directory Server, Database Connection, and Carbonio Mesh Server
This server’s installation entails several activities, since it will host a number of services critical for the proper operation of Carbonio: the Directory Server, Carbonio Mesh, and the connection with the PostgreSQL node using Pgpool-II.
Note
It is possible to install multiple instances of the service-discover service provided by Carbonio Mesh. Please refer to section Set up Multiple Carbonio Mesh Servers for details.
1. Install the packages listed below from the main repository.
2. Set up Pgpool-II.
3. Using the following command, configure Pgpool-II to work with the PostgreSQL node (SRV1). Replace SRV1_IP with the value saved in the preceding task.
# echo "backend_clustering_mode = 'raw'
port = 5432
backend_hostname0 = 'SRV1_IP' # eg 192.168.1.100
backend_port0 = 5433
backend_flag0 = 'DISALLOW_TO_FAILOVER'
num_init_children = 32
max_pool=8
reserved_connections=1" > /etc/pgpool2/pgpool.conf
On RHEL 8, the configuration file path differs:
# echo "backend_clustering_mode = 'raw'
port = 5432
backend_hostname0 = 'SRV1_IP' # eg 192.168.1.100
backend_port0 = 5433
backend_flag0 = 'DISALLOW_TO_FAILOVER'
num_init_children = 32
max_pool=8
reserved_connections=1" > /etc/pgpool-II/pgpool.conf
4. Restart the Pgpool-II service so the new configuration takes effect.
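To confirm that Pgpool-II is forwarding connections correctly, a quick local check (a sketch, reusing DB_ADM_PWD from SRV1) is to connect to port 5432 on this node, which Pgpool-II forwards to Postgres on SRV1:5433:

```shell
# If this query succeeds, the whole chain SRV2:5432 -> SRV1:5433 works
PGPASSWORD=DB_ADM_PWD psql -h 127.0.0.1 -p 5432 -U carbonio_adm \
    -d carbonio_adm -c 'SELECT version();'
```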
5. Bootstrap Carbonio.
# carbonio-bootstrap
The bootstrap command will perform a variety of operations and configure the node. At the end, you will be presented with a menu; if you have already configured everything, all you need to do is press y to confirm.
6. Install Carbonio Mesh.
Carbonio Mesh is necessary for Carbonio and its components to communicate. The following command generates the setup interactively.
# service-discover setup-wizard
This command will do the following:
- request the IP address and netmask;
- request the Carbonio Mesh secret, which is required for installations, management, and access to the administrative GUI. For further details, see the Carbonio Mesh Administration Interface section. Throughout the documentation, this password will be designated as MESH_SECRET.
Hint
We suggest using a robust password at least 16 characters long, including lowercase and uppercase letters, numbers, and special characters, and storing it in a password manager.
The Carbonio Mesh secret can be regenerated if the password is forgotten or the credential file becomes damaged and unusable.
The configuration is saved in the file /etc/zextras/service-discover/cluster-credentials.tar.gpg
To finish installing Carbonio Mesh, run
# pending-setups -a
Hint
The secret needed to run the above command is stored in the file /var/lib/service-discover/password, which is accessible only by the root user.
7. Bootstrap Carbonio Databases with the Postgres user created on SRV1 and the password provided in the previous step.
Carbonio Advanced
# PGPASSWORD=DB_ADM_PWD carbonio-mailbox-db-bootstrap carbonio_adm 127.0.0.1
Carbonio Files
# PGPASSWORD=DB_ADM_PWD carbonio-files-db-bootstrap carbonio_adm 127.0.0.1
Carbonio Docs
# PGPASSWORD=DB_ADM_PWD carbonio-docs-connector-db-bootstrap carbonio_adm 127.0.0.1
Values to note for the next steps:
- SRV2_hostname: this node’s hostname, which can be retrieved with the command su - zextras -c "carbonio prov gas service-discover"
- MESH_SECRET: the Carbonio Mesh password
- LDAP_PWD: the LDAP bind password for the root user and applications, retrieved with the command
# zmlocalconfig -s zimbra_ldap_password
- AMAVIS_PWD: the password used by Carbonio for the Amavis service, retrieved with the command
# zmlocalconfig -s ldap_amavis_password
- POSTFIX_PWD: the password used by Carbonio for the Postfix service, retrieved with the command
# zmlocalconfig -s ldap_postfix_password
- NGINX_PWD: the password used by Carbonio for the NGINX service, retrieved with the command
# zmlocalconfig -s ldap_nginx_password
Note
By default, all the LDAP_PWD, AMAVIS_PWD, POSTFIX_PWD, and NGINX_PWD bind passwords have the same value.
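Since the four bind passwords are needed again on the other nodes, you can print them all in one go (run as the zextras user on this node):

```shell
# Each zmlocalconfig -s call prints "key = value" in clear text
for key in zimbra_ldap_password ldap_amavis_password \
           ldap_postfix_password ldap_nginx_password; do
    zmlocalconfig -s "$key"
done
```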
SRV3: MTA
The following tasks must be executed to configure the MTA.
Bootstrap Carbonio
# carbonio-bootstrap
- To properly finish the bootstrap, use SRV2_hostname, LDAP_PWD, POSTFIX_PWD, and AMAVIS_PWD in the following items of the bootstrap menu.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for postfix ldap user: POSTFIX_PWD
Bind password for amavis ldap user: AMAVIS_PWD
Use MESH_SECRET to run Carbonio Mesh setup.
# service-discover setup-wizard
Because this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
Carbonio Mesh Configuration
# pending-setups -a
Hint
The secret needed to run the above command is stored in the file /var/lib/service-discover/password, which is accessible only by the root user.
Value to note for the next steps:
- MTA_IP: the IP address of this node
SRV4: Proxy with Carbonio VideoServer
This node hosts the Proxy, the *-ui packages (the front-end packages for Carbonio Chats, Carbonio Admin Panel, and Carbonio Files), and the Carbonio VideoServer packages. Because Proxy and Carbonio VideoServer are distinct Roles, we keep their installation and configuration separate, so that they can easily be deployed on different nodes.
The Proxy functionality requires no special configuration, so we can simply install the packages and configure the node.
1. Install packages
2. Restart the nginx exporter for Carbonio Monitoring
# systemctl restart carbonio-prometheus-nginx-exporter.service
3. Bootstrap Carbonio
# carbonio-bootstrap
In the bootstrap menu, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items to complete the bootstrap successfully.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for nginx ldap user: NGINX_PWD
4. Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be automatically downloaded.
5. Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in the file /var/lib/service-discover/password, which is accessible only by the root user.
Enable Memcached access using the following commands as the zextras user:
zextras$ carbonio prov ms $(zmhostname) zimbraMemcachedBindAddress $(hostname -i)
zextras$ zmmemcachedctl restart
zextras$ zmproxyctl restart
Warning
- Since Memcached does not support authentication, make sure that the Memcached port (11211) is accessible only from internal, trusted networks.
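One way to enforce this, sketched here with firewalld on RHEL 8 (the 172.16.0.0/24 source network is an assumption based on the addressing used in this scenario; adjust it to your actual network), is a rich rule limiting port 11211 to the cluster network:

```shell
# Accept Memcached traffic only from the cluster's own network
firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="172.16.0.0/24" port port="11211" protocol="tcp" accept'
firewall-cmd --reload
```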
It is possible to install the Carbonio VideoServer without the Video Recording feature. If you wish to do so, follow the procedure below, but skip the last step, labelled [Video Recording].
Install the Carbonio VideoServer package. On Ubuntu:
# apt install carbonio-videoserver
On RHEL 8, first enable the EPEL repository, then install the package:
# yum -y install https://dl.fedoraproject.org/pub/epel/epel-release-latest-8.noarch.rpm
# dnf install carbonio-videoserver
- After the installation, make sure that the Carbonio VideoServer public IP address (i.e., the one that will accept incoming connections to the Carbonio VideoServer) is present in the configuration file /etc/janus/janus.jcfg and add it if missing. Enable and start the service with the commands
# systemctl enable videoserver.service
# systemctl start videoserver.service
[Video Recording] To implement this feature, install the package. On Ubuntu:
# apt install carbonio-videoserver-recorder
On RHEL 8:
# dnf install carbonio-videoserver-recorder
The video-recording feature is enabled by default and does not require configuration on this node, but on the next one. Indeed, it requires a node that installs the carbonio-appserver packages. The recorded sessions will be stored on that node, in the directory /var/lib/videorecorder/. Make sure that the directory has sufficient free space, otherwise recorded videos cannot be stored.
Hint
You can mount a dedicated disk or partition on that location and keep it monitored for space usage.
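A minimal space-usage check along these lines could be run from cron (the path and the 90% threshold are examples, not part of Carbonio):

```shell
# Warn when the recordings directory, or any directory passed as $1,
# runs low on space
DIR="${1:-/var/lib/videorecorder}"
THRESHOLD=90
# df --output=pcent prints e.g. " 42%"; strip everything but digits
usage=$(df --output=pcent "$DIR" 2>/dev/null | tail -n 1 | tr -dc '0-9')
if [ -z "$usage" ]; then
    echo "directory $DIR not found" >&2
elif [ "$usage" -ge "$THRESHOLD" ]; then
    echo "WARNING: $DIR is ${usage}% full"
else
    echo "OK: $DIR is ${usage}% full"
fi
```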
Values to note for the next steps:
- VS_IP: the local IP address of this node
- VS_PWD: the password of the Carbonio VideoServer, which can be retrieved by running as the root user the command grep -i -e nat_1_1 -e api_secret /etc/janus/janus.jcfg
SRV5: AppServer with Advanced, Files, and Docs
Install all Carbonio Files-related packages on this node before configuring the various services.
# apt install service-discover-agent carbonio-advanced \
carbonio-user-management carbonio-files \
carbonio-docs-connector carbonio-docs-editor
# yum install service-discover-agent
# yum install carbonio-files
# yum install carbonio-user-management carbonio-advanced
# yum install carbonio-docs-connector carbonio-docs-editor
Complete the following tasks.
Bootstrap Carbonio.
# carbonio-bootstrap
To properly finish the bootstrap, use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items of the bootstrap menu.
Ldap master host: SRV2_hostname
Ldap Admin password: LDAP_PWD
Bind password for nginx ldap user: NGINX_PWD
Use MESH_SECRET to run Carbonio Mesh setup.
# service-discover setup-wizard
Because this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
Complete the Carbonio Mesh setup.
# pending-setups -a
Hint
The secret needed to run the above command is stored in the file /var/lib/service-discover/password, which is accessible only by the root user.
Run the following command as the zextras user to enable Video Recording, using the VS_IP and VS_PWD values noted on SRV4.
zextras$ carbonio chats video-server add VS_IP port 8188 \
servlet_port 8090 secret VS_PWD
Enable Carbonio VideoServer, Video Recording, and the ability for each user to record meetings at the COS level.
zextras$ carbonio config set cos default teamChatEnabled true
zextras$ carbonio config set global teamVideoServerRecordingEnabled true
zextras$ carbonio config set global teamMeetingRecordingEnabled true
Note
In the commands above, the policy allows every user to record a meeting. It is however possible to enforce this policy at user or COS level, to allow only selected users or members of a COS to record meetings.
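For example, to restrict recording instead of allowing it globally, a sketch along the lines of the commands above could disable the global flag and enable it only for one COS (the COS name managers is hypothetical, and we assume the teamMeetingRecordingEnabled key accepts the cos scope just as teamChatEnabled does):

```shell
# Assumption: cos-level scope works for this key as it does for teamChatEnabled
zextras$ carbonio config set global teamMeetingRecordingEnabled false
zextras$ carbonio config set cos managers teamMeetingRecordingEnabled true
```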
(optional) Activate the licence as the zextras user.
zextras$ carbonio core activate-license TOKEN
SRV6: Advanced, AppServer, Preview, and Carbonio Monitoring
We deploy the Preview, User Management, and advanced services on this node.
Hint
We suggest that Preview and the Carbonio Docs-related packages be installed on different physical nodes.
Complete the following tasks:
Bootstrap Carbonio.
# carbonio-bootstrap
To properly finish the bootstrap, use SRV2_hostname and LDAP_PWD in the following items of the bootstrap menu.
SRV2_hostname is the Ldap master host.
LDAP_PWD is the password for the LDAP administrator.
Run Carbonio Mesh setup using MESH_SECRET
# service-discover setup-wizard
Because this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
Complete Carbonio Mesh setup
# pending-setups -a
Hint
The secret needed to run the above command is stored in the file /var/lib/service-discover/password, which is accessible only by the root user.
Allow access to port 9090 on this node if you want Carbonio Monitoring to be accessible from outside the local network.
Allow Memcached to be used by Carbonio Preview. Search for the section # Nginx Lookup server in the file /etc/carbonio/preview/config.ini and set:
1 nginx_lookup_server_full_path_urls = https://172.16.0.16:7072
2 memcached_server_full_path_urls = 172.16.0.14:11211
Make certain that:
In line 1, the protocol is https, and the IP address is the address of one AppServer; for simplicity, we use the present node’s IP address.
Line 1 should also include the port number used by Preview, 7072.
Line 2 contains the value VS_IP (the Proxy node’s IP address), which allows this node to connect to Memcached, installed on the Proxy node.
Restart the Carbonio Preview processes.
# systemctl restart carbonio-preview
# systemctl restart carbonio-preview-sidecar
Finally, as the zextras user, restart the mailbox process.
zextras$ zmcontrol stop
zextras$ zmcontrol start
Installation is finished.
The installation is now complete, and you can begin using Carbonio and accessing its graphic interface as described in the section Access to the Web Interface.
You should also change the password of the system user with administrative access, as described in the section Change Password of System User below.
Centralised Logging Configuration
Carbonio’s log system is rsyslog, which supports a centralised setup: that is, all log files generated by Carbonio can be sent to a unique host server (we call it “Log Server”) that is appropriately configured to receive log files, which is especially useful in a Multi-Server installation.
In the procedures below, we choose SRV6 as the Log Server.
On SRV6, open the file /etc/rsyslog.conf, find the following lines, and uncomment them (i.e., remove the # character at the beginning of each line).
$ModLoad imudp
$UDPServerRun 514
$ModLoad imtcp
$TCPServerRun 514
Then, restart the rsyslog service.
# systemctl restart rsyslog
Finally, specify the host server that will receive logs. Since this is the SRV6 node, we need SRV6_hostname.
zextras$ carbonio prov mcf zimbraLogHostname SRV6_hostname
Note
Since zimbraLogHostname is a global attribute, this command must be run only once on one node.
Once the Log Server node has properly been initialised, on all other nodes, execute
# /opt/zextras/libexec/zmsyslogsetup && service rsyslog restart
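To verify that logs actually reach the Log Server, you can send a test message from any node with the logger utility (a sketch; replace SRV6_hostname as usual) and then look for the message in the Log Server’s syslog:

```shell
# Send a test message over UDP port 514 to the Log Server
logger --udp --server SRV6_hostname --port 514 "carbonio central logging test"
```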
Change the System User’s Password
Log in to a shell terminal and run these two commands to change the password for the zextras@mail.example.com user’s Web access. The first switches to the zextras user, while the second changes the password.
# su - zextras
zextras$ carbonio prov setpassword zextras@mail.example.com newpassword
Ascertain that newpassword fulfils appropriate security requirements.
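If you need inspiration for a compliant value, one possibility (openssl is assumed to be available, as it usually is on these systems) is to generate a random string:

```shell
# 18 random bytes, base64-encoded, yield a 24-character password
newpassword=$(openssl rand -base64 18)
echo "$newpassword"
```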
The zextras and zextras@mail.example.com users
There is a clear separation between these two users, which are designed for distinct tasks:
- zextras: the administrator’s Unix account, to be used for administrative activities via the command line.
- zextras@mail.example.com: the default administrator user for accessing the Admin UI via the web interface and managing Carbonio.
Configure a Directory Server Replica
This section describes how to set up a Directory Server Replica, which is a second instance of a Directory Server in a Master/Slave configuration that comes in handy when the demand on the Directory server is consistently high.
Indeed, in this setup, the Master Directory Server will remain authoritative for storing and managing both user information and server configuration, but will delegate all requests from the other infrastructure nodes to the Replica. As a result, if some data on the master is updated, it is promptly replicated to the slave and made available for queries. In the end, the two servers will share their work, lessening the stress on the main instance.
Preliminaries
Please read the whole method on this page before attempting to install a Directory Server Replica and ensure the following conditions are met.
A Carbonio Multi-Server is already operational.
A new node is available for installation of the Replica, which meets the Multi Server Requirements and has already completed the Preliminary Tasks. This node will be known as SRV7.
Note
In case you plan to install the Replica on an existent node, execute all commands on that node instead of on SRV7.
All commands, with the exception of the one in the Installation section, must be run as the zextras user.
Give the new node a descriptive name/FQDN. We will use ds-replica.example.com throughout. Remember to change it to the name you choose.
You must have CLI access to both the Main and Replica Directory Servers in order to run commands on both.
Installation
This command must be executed on SRV7 during the installation.
# apt install carbonio-directory-server service-discover-agent
Note
As an alternative, you may install the package service-discover-server if you plan to have multiple Carbonio Mesh servers. In this case, however, please refer to the section Set up Multiple Carbonio Mesh Servers for detailed directions.
Configuration
The Replica Directory server must be configured in a few stages.
On SRV2 activate the replica by executing
# /opt/zextras/libexec/zmldapenablereplica
Then, retrieve a few passwords that you will need on the Replica to configure the connection and access to SRV2
zextras$ zmlocalconfig -s zimbra_ldap_password
zextras$ zmlocalconfig -s ldap_replication_password
zextras$ zmlocalconfig -s ldap_postfix_password
zextras$ zmlocalconfig -s ldap_amavis_password
zextras$ zmlocalconfig -s ldap_nginx_password
Note
By default, these passwords are all the same and coincide with zimbra_ldap_password. If you did not change them, use that same password in the next step.
You will be asked to properly configure a couple of options in the Common configuration and Ldap configuration menus. In the first menu, provide these values:
Common configuration
1) Hostname: The hostname of the Directory Server Replica.
2) Ldap master host: The hostname of SRV2
3) Ldap port: 389
4) Ldap Admin password: The zimbra_ldap_password
Exit this menu and go to the second:
Ldap configuration
1) Status: Enabled
2) Create Domain: do not change
3) Domain to create: example.com
4) Ldap root password: the zimbra_ldap_password
5) Ldap replication password: the ldap_replication_password
6) Ldap postfix password: the ldap_postfix_password
7) Ldap amavis password: the ldap_amavis_password
8) Ldap nginx password: the ldap_nginx_password
Hint
Remember to always use the zimbra_ldap_password if you did not change the other passwords.
You can now continue the bootstrap process. After a while, the installation will complete successfully, and immediately afterwards the contents of the Directory Server on SRV2 will be copied over to the Replica on SRV7.
Testing
You can do a brief test to see if the Replica works properly after the installation has been completed successfully.
Log in to the Master (SRV2) and create a test user with the following command:
zextras$ carbonio prov ca john.doe@example.com MySecretPassword
Log in to the Replica and ensure that all accounts from the Master have been copied over:
zextras$ carbonio prov -l gaa
The email address john.doe@example.com must be present among the results.
Hint
You can pipe the previous command to grep to check only the new account (or any given account): carbonio prov -l gaa | grep "john.doe@example.com"
Finally, execute the replication check command on the Replica.
zextras$ /opt/zextras/libexec/zmreplchk
If everything is working correctly, the output of the command will be similar to:
Replica: ldap://ds-replica.example.com:389 Code: 0 Status: In Sync CSNs: 20230607112059.968930Z#000000#000#000000
Set up a Replica to Respond to Queries
It is now time to configure the Replica to respond to queries instead of the Master, which necessitates changing the value of the ldap_url option to link to the Replica. This configuration may be accomplished with a few Master commands.
You need to keep at hand the following data:
- SRV2_hostname: the hostname on which the Directory Server Master is installed
- SRV7_hostname: the hostname on which the Directory Server Replica is installed, e.g., ds-replica.example.com
Hint
To retrieve the hostnames, use the hostname command on the Master and Replica nodes.
Carbonio services must be halted.
zextras$ zmcontrol stop
Change the value of ldap_url.
zextras$ zmlocalconfig -e \
ldap_url="ldap://SRV7_hostname ldap://SRV2_hostname"
If you intend to install multiple Replica Directory Servers, you can do so and then run the above command listing each Replica, making sure that their hostnames come before the Master’s. If, for example, you installed two Replica Directory Servers on SRV4 and SRV5, execute:
zextras$ zmlocalconfig -e \
ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
ldap://SRV5_hostname ldap://SRV2_hostname"
The Replica instance to query first is the first stated in the command.
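After changing ldap_url, you can confirm the resulting order and start the services again (both commands appear earlier in this guide and are run as the zextras user):

```shell
zextras$ zmlocalconfig -s ldap_url    # confirm the new order
zextras$ zmcontrol start              # restart the Carbonio services
```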
Remove a Replica
To delete a Replica, you must complete two tasks:
Execute the following command on each node of the Multi-Server installation, so that only the Master is used for queries:
zextras$ zmlocalconfig -e ldap_url="ldap://SRV2_hostname"
If you have several Replicas set up, the above command will send all requests to the Master. If you only want to remove some of the Replicas, simply remove their hostnames from the list. To remove SRV5, for example, use the command:
zextras$ zmlocalconfig -e \
ldap_url="ldap://SRV7_hostname ldap://SRV4_hostname \
ldap://SRV2_hostname"
Then execute, only on the MTA node, the command
# /opt/zextras/libexec/zmmtainit
This command will update the configuration of Postfix with the new ldap_url.