Multi-Server Installation

This section explains how to set up a Carbonio CE Multi-Server, i.e. a distributed Carbonio installation in which each node takes on one or more Roles.

Rather than giving fixed installation instructions with a given functionality deployed on each node, we propose an installation scenario that can be tailored to the demands of Carbonio CE customers employing varying numbers of nodes. To this end we introduce the concept of a Role: a Carbonio CE functionality, regarded as atomic, that consists of one or more packages.

Because a Role can be deployed on any node of the cluster, the scenario described below can be changed at any time by moving a Role to a different node (or even to a dedicated one).

Scenario with Six Nodes

In the recommended scenario, a Carbonio CE Multi-Server environment made up of six nodes (designated SRV1, SRV2,…, SRV6) will be configured as follows:

  1. SRV1 is dedicated to PostgreSQL.
  2. SRV2 represents Carbonio CE’s core infrastructure: it hosts Carbonio Mesh, the Directory Server, and the DB connection.
  3. SRV3 hosts the MTA, the mail server.
  4. SRV4 hosts the Proxy, through which all components are reachable on the web.
  5. SRV5 is an AppServer on which Carbonio Files and Carbonio Docs, which enable document sharing and collaborative editing, are installed, together with the Carbonio Storages CE instance.
  6. SRV6 is an additional AppServer hosting Carbonio Preview, which lets users view excerpts or thumbnails of documents, and User Management.

Within a Carbonio CE architecture, the Carbonio Storages CE Role must be unique. Because Carbonio Storages CE is a requirement for Carbonio Files, it is installed on the first node on which Carbonio Files is installed. When you install Carbonio Files on a further node, Carbonio will detect through Carbonio Mesh that a Carbonio Storages CE instance already exists and will block its installation in favour of using the existing one.

In this example we install Carbonio CE on six nodes running Ubuntu 20.04 LTS. The steps to set up the nodes are the same on RHEL 8; the only difference is the command used to install packages.

We also assume that each node has a fixed IP address, where X denotes the n-th node; in other words, the addresses range from that of SRV1 to that of the second AppServer. These values will be used when manually modifying configuration files during installation or upgrade procedures.

In most Multi-Server scenarios it is beneficial to install a Replica Directory Server in a Master/Slave configuration, for increased reliability and load-balancing. The steps to install the Replica on SRV7, a dedicated node (which must run the same OS as the other six), are covered in detail in a separate section; the same procedure can be used to install the Replica on any node other than SRV2, however.

Additional Requirements


No additional requirements are necessary.


Some more remarks:

  • Familiarity with the CLI is required. All carbonio commands must be executed as the zextras user (these commands feature a zextras$ prompt), while all other commands must be issued as the root user, unless stated otherwise.
  • Give the nodes meaningful names, for example srv1.example.com, srv2.example.com, and so on, simply substituting your own domain name for example.com.
  • In order to set up the subsequent nodes, you will need to remember a few configuration settings and their values throughout the installation process. These details are summarised at the conclusion of the installation of each node; copy them to a secure location and keep them nearby until the installation is complete. Examples of values are a node’s public or private IP address or a database user’s password.
  • The ports indicated in the fw-ports reference must be opened in your firewall for all the services you will provide, depending on the Roles configured on each node.

Perform the preliminary actions listed below on each of the six nodes before beginning the installation.

Preliminary Tasks for RHEL 8 only

SELinux and the firewall setup are two tasks that must be completed before Carbonio CE can be installed on RHEL 8 computers.
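A common form of these two preliminary tasks is sketched below. This is an assumption about what the tasks involve, not the official procedure: adapt it to your security policy, and prefer opening only the required ports over disabling the firewall outright.

```shell
# Sketch of assumed RHEL 8 preliminaries (run as root).
# 1. Set SELinux to disabled (persistent; takes full effect at reboot)
#    and switch it to permissive mode for the current session.
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config
setenforce 0

# 2. Either stop the firewall entirely, or (preferably) open only the
#    ports required by the Roles deployed on this node.
systemctl disable --now firewalld
```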

Installation of Nodes

Installation follows the order of nodes proposed in the scenario. A few points:

  • Since the Postgres node might alternatively be an existing server already configured to work with Carbonio CE, it is assumed not to be a “real” component of the infrastructure (configuration instructions are included with the SRV1 installation).
  • The node with the Directory Server role (SRV2) will be the first to be installed.
  • The MTA server (SRV3) will be set up next.
  • You can skip the steps for any node or role that you do not want to install and install the other nodes in any order.
  • Although the general procedure is the same for both Ubuntu and RHEL 8, specific commands and file paths may differ between the two operating systems, so be careful to use the right command on the right file.

Following a successful installation, you can use a browser to view Carbonio CE’s GUI by following the instructions in the section Access to the Web Interface.

SRV1: Postgres


All of the databases needed by Carbonio CE will be hosted on the first node, which is devoted to PostgreSQL.

To store and keep track of all the items it needs to handle, Carbonio CE uses several databases. If you are running Carbonio CE on RHEL 8, first configure Postgres according to the instructions, then configure the main database, which is done in two steps.

The first step is to create a role with administrative privileges and an associated password.

You will need it for the rest of the procedure and may need it in the future, so make sure to replace the password with a strong one of your choosing and store it safely (ideally in a password manager). This password will be referred to as DB_ADM_PWD.

The next step is to create the database.

Finally, execute these four commands to give the other nodes access to the databases that will be kept on this node.
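On Ubuntu, the two steps plus the four access commands might look like the sketch below. The carbonio_adm role and database names and the PostgreSQL 12 paths are assumptions taken from a typical setup; verify them against your release, and consider restricting the 0.0.0.0/0 range to your cluster network.

```shell
# Steps 1 and 2, at the psql prompt as the postgres system user:
#   su - postgres -c "psql"
#   CREATE ROLE carbonio_adm WITH LOGIN SUPERUSER PASSWORD 'DB_ADM_PWD';
#   CREATE DATABASE carbonio_adm OWNER carbonio_adm;

# The four commands opening the databases to the other nodes:
su - postgres -c "psql -c \"ALTER SYSTEM SET listen_addresses TO '*';\""
su - postgres -c "psql -c \"ALTER SYSTEM SET max_connections TO '500';\""
echo "host all all 0.0.0.0/0 md5" >> /etc/postgresql/12/main/pg_hba.conf
systemctl restart postgresql
```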

SRV2: Carbonio Mesh Server, Directory Server, and Database Connection

The installation of this server involves a number of tasks, because it hosts several services essential to Carbonio CE’s proper operation: Carbonio Mesh, the Directory Server, and the Pgpool-II connection to the PostgreSQL node.

  1. Install these packages from the main repository.
  2. Set up Pgpool-II.
  3. Use the command below to configure Pgpool-II to work with the SRV1 node, which hosts PostgreSQL. Substitute the value saved in the previous task for SRV1_IP.
  4. Restart the service with this command.
  5. Carbonio Bootstrap. The bootstrap command will set up the node and carry out a number of tasks. When the process finishes, a menu appears; if everything is already correct, simply press Y to confirm.
  6. Carbonio Mesh setup. Carbonio Mesh is required to enable communication between Carbonio CE and its components. Its configuration is generated interactively by a command, which will ask for:
  • the IP address and the netmask;
  • the Carbonio Mesh secret, which is needed to install, administer, and use the administrative GUI. For additional details, see Section Carbonio Mesh Administration Interface. This password will be referred to as MESH_SECRET throughout the manual. You can regenerate the Carbonio Mesh secret in the event that the password is forgotten or the credential file is damaged and rendered unusable.
  7. To finish the installation, bootstrap the Carbonio Files database with the Postgres role you created on SRV1 and the password you set in the previous step.
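The whole SRV2 flow might be sketched as follows. The package set, the setup-wizard and pending-setups commands, and the files-db bootstrap invocation follow the upstream Carbonio CE documentation but should be verified against your release; SRV1_IP and DB_ADM_PWD are the values saved earlier.

```shell
# Sketch of the SRV2 flow (verify package names on your release).
apt install service-discover-server carbonio-directory-server pgpool2

# Point Pgpool-II at the PostgreSQL node (SRV1_IP was saved earlier);
# append or adjust the backend entries in the existing configuration.
cat >> /etc/pgpool2/pgpool.conf <<'EOF'
backend_hostname0 = 'SRV1_IP'
backend_port0 = 5432
EOF
systemctl restart pgpool2        # restart the service

carbonio-bootstrap               # node bootstrap (confirm the menu with Y)
service-discover setup-wizard    # interactive Carbonio Mesh setup
pending-setups                   # apply the pending setup steps

# Bootstrap the Carbonio Files database with the role created on SRV1.
PGPASSWORD='DB_ADM_PWD' carbonio-files-db-bootstrap carbonio_adm 127.0.0.1
```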

SRV3: MTA

On this node, the MTA, the actual program that sends and receives email, is installed.

To configure the MTA, carry out the following steps.
  1. Carbonio Bootstrap. Use SRV2_hostname, LDAP_PWD, POSTFIX_PWD, and AMAVIS_PWD in the following items of the bootstrap menu to complete the bootstrap correctly:
  • Ldap master host: SRV2_hostname
  • LDAP admin password: LDAP_PWD
  • Bind password for the postfix ldap user: POSTFIX_PWD
  • Bind password for the amavis ldap user: AMAVIS_PWD
  2. Launch the Carbonio Mesh setup using MESH_SECRET. Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
  3. Complete the Carbonio Mesh setup.

SRV4: Proxy
This node hosts the Proxy together with all the *-ui files, i.e. the front-end packages for Carbonio Files and the Carbonio Admin Panel.
  1. Install the packages.
  2. Restart Carbonio Monitoring’s nginx exporter.
  3. Carbonio Bootstrap. Use SRV2_hostname and LDAP_PWD in the following items of the bootstrap menu to complete the bootstrap correctly:
  • Ldap master host: SRV2_hostname
  • LDAP admin password: LDAP_PWD
  • Bind password for nginx’s ldap user: NGINX_PWD
  4. Launch the Carbonio Mesh setup using MESH_SECRET. Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
  5. Complete the Carbonio Mesh setup.
  6. As the zextras user, enable Memcached access using the following commands:
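The Memcached-access commands might look like the following sketch. The zimbraMemcachedBindAddress attribute and the zmmemcachedctl/zmproxyctl helpers are inherited from the upstream platform and are assumed to apply here; verify them against your installation.

```shell
# As the zextras user on the Proxy node (assumed attribute and helpers).
# Bind Memcached to this node's address so other nodes can reach it.
carbonio prov ms "$(zmhostname)" zimbraMemcachedBindAddress "$(hostname -i)"
zmmemcachedctl restart    # apply the new bind address
zmproxyctl restart        # restart the proxy so it picks up the change
```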
SRV5: Files, Docs, and AppServer
Install all the Carbonio Files prerequisite packages on this node before configuring the various services.
Perform the following steps.
  1. Carbonio Bootstrap. Use SRV2_hostname, LDAP_PWD, and NGINX_PWD in the following items of the bootstrap menu to complete the bootstrap correctly:
  • Ldap master host: SRV2_hostname
  • LDAP admin password: LDAP_PWD
  • Bind password for nginx’s ldap user: NGINX_PWD
  2. Launch the Carbonio Mesh setup using MESH_SECRET. Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
  3. Complete the Carbonio Mesh setup.

SRV6: AppServer, Preview, and Carbonio Monitoring

We demonstrate how to install Preview and User Management on this node.

Install all required software first, then perform the following steps.
  1. Carbonio Bootstrap. Use SRV2_hostname and LDAP_PWD in the following items of the bootstrap menu to complete the bootstrap correctly:
  • Ldap master host: SRV2_hostname
  • LDAP admin password: LDAP_PWD
  2. Launch the Carbonio Mesh setup using MESH_SECRET. Since this node is not the Carbonio Mesh Server, the cluster-credentials.tar.gpg file will be downloaded automatically.
  3. Complete the Carbonio Mesh setup.
  4. If you want Carbonio Monitoring to be reachable from outside the local network, allow access to port 9090 on this node.
  5. Allow Carbonio Preview to use Memcached: edit the section # Nginx Lookup servers in the file /etc/carbonio/preview/config.ini. Make sure that:
  • in line 1 the protocol is https and the IP address is that of one AppServer (for simplicity, we use the IP address of the current node);
  • line 1 also contains the port that Preview uses, 7072;
  • line 2 contains SRV4_IP, to grant this node access to Memcached, which is installed on the Proxy node.
  6. Restart the Carbonio Preview process.
  7. Finally, as the zextras user, restart the mailbox process to complete the installation.
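The config.ini changes described above might look like the hypothetical fragment below. The key names are taken from a typical carbonio-preview sample configuration and the IP addresses are placeholders; for illustration the fragment is written to a file under /tmp, whereas in practice you edit /etc/carbonio/preview/config.ini in place.

```shell
# Hypothetical "# Nginx Lookup servers" fragment (placeholder IPs/keys),
# written to a scratch copy instead of the real config file.
cat > /tmp/preview-config.ini <<'EOF'
# Nginx Lookup servers
# line 1: https, the IP of this AppServer, and Preview's port 7072
nginx_lookup_server_full_path_urls = https://192.168.1.15:7072
# line 2: SRV4_IP, granting access to Memcached on the Proxy node
memcached_server_full_path_urls = 192.168.1.14:11211
EOF
grep -c '192.168.1' /tmp/preview-config.ini   # both lines carry a placeholder IP
```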
Once the installation is complete, Carbonio CE can be used by following the instructions in the section Access to the Web Interface. You are also strongly encouraged to change the password of the system user with administrative access, as covered in the section Change Password of System User below.

Configuration of centralised logging
The log system used by Carbonio CE is rsyslog, which supports a centralised setup. As a result, all log files generated by Carbonio CE can be sent to a single host server (referred to as the “Log Server”) that has been properly set up to receive log files.
In the procedures that follow, we choose SRV6 as the Log Server.
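A centralised rsyslog setup typically amounts to a forwarding rule on every node plus a TCP listener on the Log Server; the sketch below illustrates this, with SRV6_IP as a placeholder for the Log Server's address and the forwarding rule written to a scratch file (the real location would be a drop-in such as /etc/rsyslog.d/99-forward.conf).

```shell
# On every node: forward all facilities/priorities to the Log Server
# over TCP (the '@@' prefix means TCP in rsyslog; '@' would be UDP).
echo '*.* @@SRV6_IP:514' > /tmp/99-forward.conf   # real target: /etc/rsyslog.d/

# On the Log Server: enable the TCP input in the rsyslog configuration:
#   module(load="imtcp")
#   input(type="imtcp" port="514")
# then restart rsyslog on all nodes: systemctl restart rsyslog
```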
Install a Directory Server Replica
This section explains how to set up a Directory Server Replica, i.e. a second Directory Server running in a Master/Slave configuration, which comes in handy when the load on the primary Directory Server is consistently high.
In fact, under this setup the Master Directory Server remains the authoritative source that keeps and maintains both user data and server configuration, but it forwards requests from the other infrastructure nodes to the Replica: whenever data is updated on the Master, a copy is immediately sent to the Slave and made searchable. The two servers share the workload, which ultimately reduces the load on the primary instance.
Please carefully study the whole installation guide on this page before attempting to install a Directory Server Replica and confirm that the following conditions are met.

The rest of this section demonstrates how to deploy one Replica on a dedicated node, while it can also be installed on an existing cluster node.
  • A Multi-Server Carbonio CE is already up and running.
  • The Replica can be installed on a new node that meets the Multi Server Requirements and on which the Preliminary Tasks have previously been completed. This node will be known as SRV7.
  • All commands must be run as the zextras user, with the exception of the one in the Installation section.
  • Assign a meaningful name or FQDN to the new node and use that name wherever required, replacing any example name with the one you chose.
  • Because you must run commands on both the Master and the Replica Directory Servers, make sure you have CLI access to both systems.
Installation requires executing this command on SRV7.
A few configuration steps are then needed on the Replica Directory Server.
Once the installation has completed successfully, you can run the simple test below to verify that the Replica works properly.

  1. Log in to the Master (SRV2) and create a test user with a password:
  2. Log in to the Replica and check that all the accounts have been replicated from the Master:
     The email address must appear in the results.
  3. Run the following command on the Replica:

If everything is working as it should, the command’s output will look like this:
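The first two checks above can be sketched with the provisioning CLI, assuming the carbonio prov subcommands ca (create account) and -l gaa (get all accounts) inherited from the upstream zmprov tool; the address and password are placeholders.

```shell
# On the Master (SRV2), as the zextras user: create a throwaway account.
carbonio prov ca test@example.com StrongPassword.123

# On the Replica (SRV7): list all accounts straight from the local LDAP;
# the new address must appear in the output.
carbonio prov -l gaa | grep test@example.com
```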
Set up the Replica to Answer Queries
To configure the Replica to answer queries in place of the Master, it is now necessary to change the value of the ldap_url attribute so that it points to the Replica. A few commands on the Master achieve this configuration.
  1. Stop all Carbonio CE services.
  2. Change the ldap_url value.
If you want to set up several Replica Directory Servers, install each one separately, then run the command above once for each Replica, making sure that their hostnames come before the Master’s hostname. For example, assuming you deployed two Replica Directory Servers on SRV4 and SRV5, the Replica listed first in the command is the first instance queried.
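Assuming the localconfig tooling inherited from the upstream platform (zmcontrol, zmlocalconfig), the switch might look like the sketch below; the hostnames are placeholders, and the commands should be verified against your release.

```shell
# On the Master, as the zextras user (sketch).
zmcontrol stop                      # 1. stop all Carbonio CE services

# 2. change ldap_url: Replica(s) first, Master last
zmlocalconfig -e \
  ldap_url="ldap://srv7.example.com:389 ldap://srv2.example.com:389"

zmcontrol start                     # bring the services back up
```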

Remove a Replica
Two steps are needed to remove a Replica:

Run the following command on each node of the Multi-Server installation to direct queries to the Master only.
  1. If you had configured several Replicas, the command above sends all queries back to the Master. To remove only some of the Replicas, simply exclude their hostnames from the list. For example, to remove SRV5, execute the command, but only on the MTA node.
  2. This command updates postfix’s settings using the new ldap_url.
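A hedged sketch of the removal, using the same assumed tooling as above; the hostname is a placeholder, and the zmmtainit helper name is an assumption carried over from the upstream platform, so check your installation before relying on it.

```shell
# On each node: point ldap_url back to the Master only.
zmlocalconfig -e ldap_url="ldap://srv2.example.com:389"

# On the MTA node only: regenerate postfix's configuration from the
# updated ldap_url (helper path/name assumed).
/opt/zextras/libexec/zmmtainit
```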
