RabbitMQ (Split) for Xray

This is a guide for installing Xray and RabbitMQ on separate nodes and connecting multiple Xray nodes to a RabbitMQ cluster (sequentially) to achieve high availability.

The setup supports native (RPM, Debian), Linux Archive, and Docker Compose-based installations. For demonstration purposes, this document uses three nodes each for the application and RabbitMQ, plus one Xray database node.

4.png

Benefits

  • Improved Scalability with Dedicated Resources:

    Separating RabbitMQ allows for dedicated resource allocation (CPU, memory, and I/O), enabling it to scale independently from the application services.

  • Reduced Resource Contention:

    When RabbitMQ and application services share the same resources, they compete for CPU, memory, and network bandwidth. Separating them eliminates this contention, ensuring more consistent and reliable performance.

  • Simplified Troubleshooting:

    A clear separation between RabbitMQ and the application makes identifying and resolving issues easier, whether they stem from RabbitMQ or the application itself.

Prerequisites

JFrog Installer Compatibility

Ensure you use the installer for the latest Xray version, which supports deploying Xray and RabbitMQ on separate nodes. Version support for this deployment model is as follows.

| Xray Installer | Version |
| --- | --- |
| RPM | 3.97.8 and 3.107.18 onwards |
| Debian | 3.111.0 onwards |
| Linux Archive | 3.111.0 onwards |
| Docker Compose | 3.111.0 onwards |

Hardware Requirements

We recommend installing RabbitMQ on at least three nodes in this deployment model. The table below lists the minimum hardware requirements for the servers. For a detailed description of the requirements, please refer to Xray System Requirements and Platform Support.

| Node Type | CPU | RAM (GB) | Disk Size (GB) |
| --- | --- | --- | --- |
| Xray Node | 6 | 12 | 250 |
| RabbitMQ Node | 4 | 8 | 250 |

PostgreSQL

JFrog Xray requires PostgreSQL as its database. You can use a managed solution or a native installation. If you prefer the latter, we recommend using a separate standalone server for the database. For more details, refer to PostgreSQL for Xray.

Artifactory

Ensure Artifactory is operational on a separate, fully configured node.

RabbitMQ (Split) Fresh Installation Guide RPM/DEB

Installation Procedure

This process consists of two main stages: setting up the RabbitMQ cluster and configuring the Xray node cluster.

Stage 1: Setting Up the RabbitMQ Cluster

This stage focuses on configuring the RabbitMQ cluster. Since Xray relies on RabbitMQ for its operation, completing this step is crucial. Before proceeding, ensure that you are logged in as the root user.

  1. Download the Xray RPM/DEB-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-rpm/deb.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-rpm/deb
  3. Run the install.sh script:

    ./install.sh -t rabbitmq
  4. Once you run the command, you will be prompted to confirm whether this node will join an existing cluster. Since this is a fresh installation on the first node, please type "N" to indicate no.

    1.png

📘

Note

If you wish to use a custom password for RabbitMQ, perform the following steps before proceeding to Step 5:

  1. Update the default_pass value in the rabbitmq.conf file with your desired password.

    vi /opt/jfrog/xray/app/bin/rabbitmq/rabbitmq.conf

    2.png

  2. Update the system.yaml file with the same password as in rabbitmq.conf.

    vi /var/opt/jfrog/xray/etc/system.yaml

    3.png

    🚧

    Important

    Ensure this step is repeated on all RabbitMQ nodes, and verify that the custom password remains consistent across the cluster.
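As a quick sanity check, you can confirm the two passwords match before starting the service. The sketch below creates stand-in copies of the two files purely for illustration; on a real node, point the awk commands at the actual rabbitmq.conf and system.yaml paths shown above.

```shell
# Stand-in config fragments for illustration only; on a real node,
# use the actual rabbitmq.conf and system.yaml paths instead.
printf 'default_user = guest\ndefault_pass = s3cret\n' > rabbitmq.conf
printf 'shared:\n  rabbitMq:\n    password: s3cret\n' > system.yaml

# Extract the password from each file and compare.
rmq_pass=$(awk -F' = ' '/^default_pass/ {print $2}' rabbitmq.conf)
yaml_pass=$(awk '/password:/ {print $2}' system.yaml)
[ "$rmq_pass" = "$yaml_pass" ] && echo "passwords match"
```

If the comparison fails on a real node, fix the mismatch before starting RabbitMQ, since Xray will otherwise fail to authenticate.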

  5. Start RabbitMQ service.

    systemctl start xray-rabbitmq.service
  6. RabbitMQ service should start; you can verify it by checking the status.

    systemctl status xray-rabbitmq.service

    4.png

  7. Access the UI through a browser to verify if the management plugin is enabled.

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in /opt/jfrog/xray/app/bin/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

Adding Nodes to Form a RabbitMQ Cluster

Now that RabbitMQ is running on the first node, we need to install RabbitMQ on additional nodes and integrate them into the existing cluster. In this example, we will install RabbitMQ on two more nodes and join them in the cluster.

  1. Repeat steps 1 to 3 from the initial node setup to obtain the Xray files on the additional node.

  2. Run the install.sh file:

    ./install.sh -t rabbitmq
  3. Once you execute the command, you will be prompted to confirm whether this node will join an existing RabbitMQ cluster. Since we already have an active RabbitMQ node, type Y and press Enter. Next, you will be asked to provide the name of the active RabbitMQ node. Here, you should input the hostname of the active RabbitMQ node.

    5.png

  4. Start RabbitMQ service.

    systemctl start xray-rabbitmq.service
  5. RabbitMQ service should start; you can verify it by checking the status.

    systemctl status xray-rabbitmq.service

Repeat steps 1 through 5 to set up the additional RabbitMQ node and integrate it into the existing cluster.

Verify the RabbitMQ Cluster Status

After installing and starting RabbitMQ on all required nodes, confirm that the cluster functions correctly and all nodes are properly joined.

📘

Note

Since the value of $HOME differs for each user, placing a copy of the cookie file for every user who will use the CLI tools, including both non-privileged users and root, is essential. The xray user, automatically created by the installer script, can execute the cluster_status command. To switch to the xray user, run:

su xray

Navigate to the RabbitMQ sbin directory and execute the cluster_status command.

$ cd /opt/jfrog/xray/app/third-party/rabbitmq/sbin
$ ./rabbitmqctl cluster_status
6.png

You can verify the cluster from the Management UI URL http://<RabbitMQ-Node-IP/Hostname>:15672; see the screenshot below.

7.png

Stage 2: Setting Up the Xray Node Cluster

Once the RabbitMQ cluster is successfully configured, set up the Xray node cluster, which will integrate with the previously configured RabbitMQ cluster.

  1. Download the Xray RPM/DEB-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-rpm/deb.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-rpm/deb
  3. Run the install.sh script:

    ./install.sh -t xray
  4. Upon running the command, you will be prompted with a series of interactive installation questions; refer to the screenshot attached for details about the mentioned steps.

    1. Confirm the installation directory by pressing ENTER

    2. Provide the URL of your JFrog Platform (Artifactory) instance in the format: http://artifactory_node_ip:port

    3. Obtain the Join Key by navigating to Administration > Security > General > Connection Details in Artifactory.

      1. Enter the platform password to access the details.
      2. Copy the Join Key and use it when prompted.
    4. Specify the machine's IP address:  You can manually input the current node's IP address or press ENTER to use the default, as the system automatically detects the current node's IP address. For IPv6, ensure the address is enclosed in square brackets: [<ipv6_address>].

    5. Are you adding an additional node to the existing product cluster? For the first node of Xray, select N (No), as this will set up a new cluster. For subsequent nodes, select Y (Yes). The process for adding additional nodes is detailed in later steps.

    6. Do you want to install PostgreSQL? Select N (No) and press ENTER, as we use a PostgreSQL instance installed on a separate node. Provide the following database connection details:

      1. Connection URL: postgres://<IP_ADDRESS>:<PORT>/<database name>?sslmode=disable
      2. Database Username: <YOUR_DATABASE_USERNAME>
      3. Database Password: <YOUR_DATABASE_PASSWORD>

        8.jpg
  5. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate effectively with the RabbitMQ cluster. Save the changes.

    vi /var/opt/jfrog/xray/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section to maintain consistency.
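For reference, the rabbitMq block in system.yaml typically looks like the sketch below. The host names, password, and cookie value are placeholders, and the exact key names can vary between Xray versions, so mirror the structure of the system.full-template.yaml bundled with your installer rather than copying this verbatim:

```yaml
shared:
  rabbitMq:
    # Placeholder values -- replace with your RabbitMQ nodes and credentials.
    # Key names may differ by version; check your bundled system.full-template.yaml.
    url: "amqp://rabbitmq-node-1:5672/,amqp://rabbitmq-node-2:5672/,amqp://rabbitmq-node-3:5672/"
    username: guest
    password: <your-rabbitmq-password>
    erlangCookie:
      value: <cookie-from-.erlang.cookie>
```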

9.png

  6. Start the Xray service:

systemctl start xray.service
  7. Once the Xray service is started successfully, you can access it via the user interface or APIs.
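The PostgreSQL connection URL requested during installation follows a simple pattern; this sketch assembles it from stand-in values (the host, port, and database name are hypothetical):

```shell
# Hypothetical database details; substitute your own.
DB_HOST=10.0.0.12
DB_PORT=5432
DB_NAME=xraydb

# Assemble the connection URL in the format the installer expects.
CONN_URL="postgres://${DB_HOST}:${DB_PORT}/${DB_NAME}?sslmode=disable"
echo "$CONN_URL"
```

Note that sslmode=disable is only appropriate when the database connection does not use TLS; adjust it to match your environment.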

Adding Nodes to Form an Xray Cluster:

  1. Install Xray on a new node by repeating the initial Xray node setup steps. During the installation, you will be prompted: "Are you adding an additional node to an existing product cluster? [y/N]?" At this point, select Y to confirm that this is an additional node for the Xray cluster.

  2. In the next step, you will be prompted to provide the master_key. You can retrieve this key from the first node by executing the following command:

    cat /var/opt/jfrog/xray/etc/security/master.key

The remaining installation steps for the new node should be carried out in the same way as for the first node.

10.png

After installing, modify the system.yaml file to update the RabbitMQ URLs. Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

vi /var/opt/jfrog/xray/etc/system.yaml
11.png
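The URL rotation described above can be sketched as follows; the node URLs are hypothetical, and each Xray node receives the list shifted by its index:

```shell
# Hypothetical RabbitMQ node URLs; replace with your own.
urls="amqp://rmq-1:5672 amqp://rmq-2:5672 amqp://rmq-3:5672"

# rotate N url... : move the first N entries of the list to the end.
rotate() {
  n=$1; shift
  while [ "$n" -gt 0 ]; do
    first=$1; shift
    set -- "$@" "$first"
    n=$((n - 1))
  done
  echo "$@"
}

second_node=$(rotate 1 $urls)   # order 2,3,1 for the second Xray node
third_node=$(rotate 2 $urls)    # order 3,1,2 for the third Xray node
echo "$second_node"
echo "$third_node"
```

Writing the rotated list for each node into its system.yaml keeps the initial connections spread across the RabbitMQ cluster.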
Uninstallation

Read about Uninstalling JFrog Xray.

RabbitMQ (Split) Fresh Installation Guide Linux Archive

Installation Procedure

This process consists of two main stages: setting up the RabbitMQ cluster and configuring the Xray node cluster.

Stage 1: Setting Up the RabbitMQ Cluster

This stage focuses on configuring the RabbitMQ cluster. Since Xray relies on RabbitMQ for its operation, completing this step is crucial. Before proceeding, ensure that you are logged in as the root user.

  1. Download the Xray Linux Archive-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-linux.tar.gz

    OS user permissions for Linux archive

    When running Xray, the installation script creates a user called xray by default, which must have run and execute permissions on the installation directory.

    We recommend that you copy the Xray download file into a directory that gives run and execute permissions to all users such as /opt.

    mkdir -p /opt/jfrog
    cp -r jfrog-xray-<version>-linux /opt/jfrog/
    cd /opt/jfrog
    mv jfrog-xray-<version>-linux xray
    cd xray/app/bin
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the correct directory.

  3. Run the install.sh script.

    Check prerequisites for Xray in Linux Archive before running install script.

    ./install.sh -t rabbitmq
  4. Once you run the command, you will be prompted to confirm whether this node will join an existing cluster. Since this is a fresh installation on the first node, please type "N" to indicate no.

    1.png

  5. Make sure to follow the instructions on the screen after the installation is completed regarding the points below:

    1. The Xray directory should be owned by the xray user, e.g., chown -R xray:xray /opt/jfrog/xray
    2. Switch to the xray user before you start the service, e.g., su xray
📘

Note

If you wish to use a custom password for RabbitMQ, perform the following steps before proceeding to Step 6:

  1. Update the default_pass value in the rabbitmq.conf file with your desired password. Refer to this article for guidance.

    vi xray/app/bin/rabbitmq/rabbitmq.conf

    2.png

  2. Update the system.yaml file with the same password as in rabbitmq.conf.

    vi xray/etc/system.yaml

    3.png

    Important: Ensure this step is repeated on all RabbitMQ nodes, and verify that the custom password remains consistent across the cluster.

  6. Start the RabbitMQ service:

    xray/app/bin/xray.sh start
  7. RabbitMQ service should start; you can verify it by checking the status.

    xray/app/bin/xray.sh status
  8. (Optional) You can also install it as a service.

    Xray is packaged as an archive file and you can use the install script to install it as a service running under a custom user. Currently supported on Linux systems.

📘

Note

OS User Permissions

When running Xray as a service, the installation script creates a user called xray (by default) which must have run and execute permissions on the installation directory.

It is recommended to extract the Xray download file into a directory that gives run and execute permissions to all users such as /opt.

To install Xray as a service, execute the following command as root.

xray/app/bin/installService.sh
 
-u | --user                                       : [optional] (default: xray) user which will be used to run the product; it will be created if unavailable
-g | --group                                      : [optional] (default: xray) group which will be used to run the product; it will be created if unavailable
📘

Note

If you wish to change the user and group, it can be passed through xray/var/etc/system.yaml as shared.user and shared.group. This takes precedence over values passed through command line on install.

The user and group are stored in xray/var/etc/system.yaml at the end of installation. To manage the service, use systemd or init.d commands depending on your system.

Using systemd

 systemctl <start|stop|status> xray.service

Using init.d

service xray <start|stop|status>
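For example, the shared.user and shared.group override described in the note above would look like this in xray/var/etc/system.yaml (the user and group names are placeholders):

```yaml
shared:
  # Placeholder names; the service will run as this user/group
  # instead of the defaults passed to installService.sh.
  user: customuser
  group: customgroup
```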
  9. Access the UI through a browser to verify if the management plugin is enabled.

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in /opt/jfrog/xray/app/bin/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

Adding Nodes to Form a RabbitMQ Cluster

Now that RabbitMQ is running on the first node, we need to install RabbitMQ on additional nodes and integrate them into the existing cluster. In this example, we will install RabbitMQ on two more nodes and join them in the cluster.

  1. Repeat steps 1 and 2 from the fresh install node setup to obtain the Xray files on the additional node.

  2. Run the install.sh file. Check prerequisites for Xray in Linux Archive before running the install script.

    ./install.sh -t rabbitmq
  3. Once you execute the command, you will be prompted to confirm whether this node will join an existing RabbitMQ cluster. Since we already have an active RabbitMQ node, type Y and press Enter. Next, you will be asked to provide the name of the active RabbitMQ node. Here, you should input the hostname of the active RabbitMQ node.

    4.png

  4. Make sure to follow the instructions on the screen after the installation is completed regarding the points below:

    1. The Xray directory should be owned by the xray user, e.g., chown -R xray:xray /opt/jfrog/xray
    2. Start RabbitMQ as the xray user, e.g., su xray
  5. If you are using a custom password, please follow the same steps as on the first node.

  6. Start RabbitMQ service

    xray/app/bin/xray.sh start

    or, if installed as a service:

    systemctl start xray.service
  7. Verify the cluster status from the management UI. Access the UI through a browser.

    5.png

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in xray/app/bin/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

Stage 2: Setting Up the Xray Node Cluster

Once the RabbitMQ cluster is successfully configured, set up the Xray node cluster, which will integrate with the previously configured RabbitMQ cluster.

  1. Download the Xray Linux Archive-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-linux.tar.gz
📘

Note

OS user permissions for Linux archive

When running Xray, the installation script creates a user called xray by default, which must have run and execute permissions on the installation directory.

We recommend that you copy the Xray download file into a directory that gives run and execute permissions to all users such as /opt.

mkdir -p /opt/jfrog
cp -r jfrog-xray-<version>-linux /opt/jfrog/
cd /opt/jfrog
mv jfrog-xray-<version>-linux xray
cd xray/app/bin
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the correct directory.

  3. Run the install.sh script.

    Check prerequisites for Xray in Linux Archive before running install script.

    ./install.sh -t xray
  4. Upon running the command, you will be prompted with a series of interactive installation questions.

    1. Provide the URL of your JFrog Platform (Artifactory) instance in the format: http://artifactory_node_ip:port

    2. Obtain the Join Key by navigating to Administration > Security > General > Connection Details in Artifactory.

      1. Enter the platform password to access the details.
      2. Copy the Join Key and use it when prompted.
    3. Specify the machine's IP address:  You can manually input the current node's IP address or press ENTER to use the default, as the system automatically detects the current node's IP address. For IPv6, ensure the address is enclosed in square brackets: [<ipv6_address>].

    4. Are you adding an additional node to the existing product cluster? For the first node of Xray, select N (No), as this will set up a new cluster. For subsequent nodes, select Y (Yes). The process for adding additional nodes is detailed in later steps.

    5. Provide the database connection details. PostgreSQL should be installed separately on a different node:

      1. Connection URL: postgres://<IP_ADDRESS>:<PORT>/<database name>?sslmode=disable

      2. Database Username: <YOUR_DATABASE_USERNAME>

      3. Database Password: <YOUR_DATABASE_PASSWORD>

        6.png

  5. Make sure to follow the instructions on the screen after the installation is completed regarding the points below:

    1. The Xray directory should be owned by the xray user, e.g., chown -R xray:xray /opt/jfrog/xray
    2. Switch to the xray user before you start the service, e.g., su xray
  6. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate effectively with the RabbitMQ cluster. Save the changes.

    vi xray/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section to maintain consistency.

7.png

  7. Start the Xray service:

xray/app/bin/xray.sh start

or, if installed as a service:

systemctl start xray.service
  8. Once the Xray service is started successfully, you can access it via the user interface or APIs.

Adding Nodes to Form an Xray Cluster:

  1. Install Xray on a new node by repeating the initial Xray node setup steps.

  2. During the installation, you will be prompted: "Are you adding an additional node to an existing product cluster? [y/N]?" At this point, select Y to confirm that this is an additional node for the Xray cluster.

  3. In the next step, you will be prompted to provide the master_key. You can retrieve this key from the first node by executing the following command:

    cat /var/opt/jfrog/xray/etc/security/master.key

    The remaining installation steps for the new node should be carried out in the same way as for the first node.

    8.png

    After installing, modify the system.yaml file to update the RabbitMQ URLs. Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

    vi xray/etc/system.yaml

    9.png

    Uninstallation

    Read about Uninstalling JFrog Xray.

RabbitMQ (Split) Fresh Installation Guide Docker Compose

Installation Procedure

This process consists of two main stages: setting up the RabbitMQ cluster and configuring the Xray node cluster.

Stage 1: Setting Up the RabbitMQ Cluster

This stage focuses on configuring the RabbitMQ cluster. Since Xray relies on RabbitMQ for its operation, completing this step is crucial. Before proceeding, ensure that you are logged in as the root user.

  1. Download the Xray Docker Compose-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-compose.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-compose
  3. Run the config.sh script:

    ./config.sh -t rabbitmq
  4. Once you run the command, you will be prompted to confirm whether this node will join an existing cluster. Since this is a fresh installation on the first node, please type "N" to indicate no.

    1.png

📘

Note

If you wish to use a custom password for RabbitMQ, perform the following steps before proceeding to Step 5:

  1. Update the default_pass value in the rabbitmq.conf file with your desired password. This file is mounted inside the rabbitmq container. If you have changed the ROOT_DATA_DIR in the docker-compose env file, edit the rabbitmq.conf file in the corresponding location.

    vi /root/.jfrog/xray/app/third-party/rabbitmq/rabbitmq.conf

    2.png

    🚧

    Important

    Ensure this step is repeated on all RabbitMQ nodes, and verify that the custom password remains consistent across the cluster.

  5. Start RabbitMQ service.

    docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d
  6. RabbitMQ service should start; you can verify it by checking the logs.

    docker ps
    docker logs -f xray_rabbitmq
  7. Verify the cluster status from the management UI URL http://<RabbitMQ-Node-IP/Hostname>:15672. Access the UI through a browser.

    3.png

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in /root/.jfrog/xray/app/third-party/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

Adding Nodes to Form a RabbitMQ Cluster

Now that RabbitMQ is running on the first node, we need to install RabbitMQ on additional nodes and integrate them into the existing cluster. In this example, we will install RabbitMQ on two more nodes and join them in the cluster.

  1. Repeat steps 1 to 2 from the initial node setup to obtain the Xray files on the additional node.

  2. Run the config.sh file:

    ./config.sh -t rabbitmq
  3. Once you execute the command, you will be prompted to confirm whether this node will join an existing RabbitMQ cluster. Since we already have an active RabbitMQ node, type Y and press Enter. Next, you will be asked to provide the name of the active RabbitMQ node. Here, you should input the hostname of the active RabbitMQ node, followed by its IP address in the next step.

    4.png

  4. If you use a custom password, please follow the instructions mentioned in the initial node setup.

  5. Start RabbitMQ service.

    docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d
  6. RabbitMQ service should start. You can verify it by checking the logs.

    docker ps
    docker logs -f xray_rabbitmq
  7. Verify the cluster status from the management UI to confirm the nodes are connected to each other.

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in /root/.jfrog/xray/app/third-party/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

Repeat steps 1 through 7 to set up the additional RabbitMQ node and integrate it into the existing cluster.

Stage 2: Setting Up the Xray Node Cluster

Once the RabbitMQ cluster is successfully configured, set up the Xray node cluster, which will integrate with the previously configured RabbitMQ cluster.

  1. Download the Xray Docker Compose-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-compose.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-compose
  3. Run the config.sh script:

    ./config.sh -t xray
  4. Upon running the command, you will be prompted with interactive installation questions.

    1. Before performing this upgrade, have you disconnected Artifactory Xray pairings, except one (refer to http://service.jfrog.org/wiki/Xray+and+Artifactory+One+to+One+Pairing for more details)? You can choose y; this step was added for migrations from older 2.x versions to 3.x. If you are already on Xray 3.x, type y.

    2. Installation Directory (Default: /root/.jfrog/xray) - Choose defaults or change if needed.

    3. Provide the URL of your JFrog Platform (Artifactory) instance in the format: http://artifactory_node_ip:port

    4. Obtain the Join Key by navigating to Administration > Security > General > Connection Details in Artifactory.

      1. Enter the platform password to access the details.
      2. Copy the Join Key and use it when prompted.
    5. Specify the machine's IP address:  You can manually input the current node's IP address or press ENTER to use the default, as the system automatically detects the current node's IP address. For IPv6, ensure the address is enclosed in square brackets: [<ipv6_address>].

    6. Are you adding an additional node to the existing product cluster? For the first node of Xray, select N (No), as this will set up a new cluster. For subsequent nodes, select Y (Yes). The process for adding additional nodes is detailed in later steps.

    7. Provide the database connection details. PostgreSQL should be installed separately on a different node:

      1. Connection URL: postgres://<IP_ADDRESS>:<PORT>/<database name>?sslmode=disable

      2. Database Username: <YOUR_DATABASE_USERNAME>

      3. Database Password: <YOUR_DATABASE_PASSWORD>

        5.png

  5. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate effectively with the RabbitMQ cluster. Save the changes.

    vi /root/.jfrog/xray/var/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section to maintain consistency.

6.png

  6. Start Xray service.

    docker-compose -p xray up -d
  7. Xray services should start; you can verify it by checking the logs.

    docker ps
    docker logs -f xray_router ## or any other xray services.

Adding Nodes to Form an Xray Cluster:

  1. Install Xray on a new node by repeating the initial Xray node setup steps. During the installation, you will be prompted: "Are you adding an additional node to an existing product cluster? [y/N]?" At this point, select Y to confirm that this is an additional node for the Xray cluster.

  2. In the next step, you will be prompted to provide the master_key. You can retrieve this key from the first node by executing the following command:

cat /root/.jfrog/xray/var/etc/security/master.key

The remaining installation steps for the new node should be carried out in the same way as for the first node.

7.png

After installing, modify the system.yaml file to update the RabbitMQ URLs. Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

vi /root/.jfrog/xray/var/etc/system.yaml
8.png
Uninstallation

Read about Uninstalling JFrog Xray.

RabbitMQ (Split) Upgrade Guide

Challenges of Default Xray Installer Deployment in Large Clusters

Xray installers, by default, install the application services and RabbitMQ on the same node, where the application connects to the local RabbitMQ instance for message publishing and consumption. We use classic queues with mirroring to all nodes in the cluster. This deployment works well with smaller clusters. However, it can cause issues for large clusters due to synchronised writes, synchronisation latency, resource overhead, etc. Also, with high replication, the chances of network partitions increase. In a partitioned scenario, the cluster might split, leading to inconsistent states across different nodes, further complicating message delivery and queue management.

The diagram below considers an example setup: a configuration with three Xray nodes, RabbitMQ on every node, and an external PostgreSQL DB. We will explore how to tune this setup for better stability and resource utilisation.

5.png

6.png

The recommendation is to re-design it as follows: move to a three-node cluster for Xray and a three-node cluster for RabbitMQ. This re-design offers several benefits, including improved stability, better resource utilisation, and enhanced scalability.

📘

Note

Important: This deployment model uses separate application and RabbitMQ clusters.

Installation Procedure

The upgrade transforms the existing cluster by dedicating some nodes to RabbitMQ and converting the remaining nodes into an Xray-only setup. This involves two procedures:

  1. Upgrade an existing node to a RabbitMQ only node

    1. Stop Xray and RabbitMQ.
    2. Run the installer to deploy only RabbitMQ.
    3. Verify that RabbitMQ is online and part of the cluster.
  2. Upgrade an existing node to an Xray only node

    1. Stop Xray and RabbitMQ.
    2. Disconnect the RabbitMQ from the cluster.
    3. Update the system.yaml (provide the RabbitMQ URLs).
    4. Run the installer to deploy only the Xray services.
📘

Note

Refer to Challenges of Default Xray Installer Deployment in Large Clusters for more details.

RPM/DEB Upgrade

1) Upgrade an existing node to a RabbitMQ only node (RPM/DEB)

This is a scenario where you need to update one of your nodes to a RabbitMQ-only node.

Follow the steps below to upgrade.

  1. Download the newer Xray RPM/DEB-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-rpm/deb.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-rpm/deb
  3. Stop Xray and RabbitMQ.

    Add the autoStop: true flag and the Erlang cookie under the rabbitMq block in the Xray system.yaml. The default cookie can be found at xray/app/third-party/rabbitmq/.erlang.cookie

    shared:
        rabbitMq:
            autoStop: true
            erlangCookie:
                value:

    Then, run the below command.

    systemctl stop xray.service

    Before proceeding, ensure the Xray services have shut down gracefully (this may take a while on a busy system).

  4. Run the installer script.

    ./install.sh -t rabbitmq
  5. Start the RabbitMQ service.

    systemctl start xray-rabbitmq.service
  6. The RabbitMQ service should start; you can verify this by checking its status.

    systemctl status xray-rabbitmq.service

2) Upgrade an existing node to an Xray only node (RPM/DEB)

This is a scenario where you must update one node to an Xray-only node.

Follow the steps below to upgrade.

  1. Download the newer Xray RPM/DEB-based installer (tar.gz) from the JFrog Xray Downloads page.

  2. Extract the tar.gz file:

    tar -xvf jfrog-xray-<version>-rpm/deb.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-rpm/deb
  3. Perform this step only if the node you are upgrading runs both Xray and RabbitMQ: disconnect RabbitMQ from the cluster.

    1. Switch to the Xray user (default: xray) and navigate to the rabbitmq/sbin directory.

      su xray
      cd /opt/jfrog/xray/app/third-party/rabbitmq/sbin/
    2. Stop and remove the RabbitMQ node from the cluster. Replace <current-node> with the name of the node you are working on (use the cluster_status command to get the correct node name), and <active-node> with any other node in the cluster that is still online.

      ./rabbitmqctl cluster_status
      ./rabbitmqctl stop_app -n <current-node>
      ./rabbitmqctl forget_cluster_node <current-node> --node  <active-node>
    3. Verify the node is not part of the cluster and no RabbitMQ services are running.

      ./rabbitmqctl cluster_status --node <active-node>
      ps -aux | grep erl
      ps -aux | grep epmd
  4. Stop the Xray service.

    systemctl stop xray.service

    Before proceeding, ensure the Xray services have shut down gracefully (this may take a while on a busy system).

  5. Run the installer script.

    ./install.sh -t xray
  6. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate with the RabbitMQ cluster. Save the changes.

    Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

    vi /var/opt/jfrog/xray/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section to maintain consistency.

7. Start the Xray service:

systemctl start xray.service
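The round-robin URL ordering described above can be generated mechanically. The sketch below prints the rotated, comma-separated list for a given Xray node index; the URLs shown are placeholders.

```shell
#!/bin/sh
# Print a comma-separated rotation of the given URLs, starting at the
# position matching the (1-based) Xray node index.
rotate_urls() {
    idx="$1"; shift
    total=$#; out=""; i=0
    while [ "$i" -lt "$total" ]; do
        pos=$(( (idx - 1 + i) % total + 1 ))   # wrap around the list
        eval "item=\${$pos}"
        out="${out}${item},"
        i=$((i + 1))
    done
    printf '%s\n' "${out%,}"
}

# Node 2 gets the list starting from the second URL:
rotate_urls 2 amqp://rmq1:5672 amqp://rmq2:5672 amqp://rmq3:5672
# prints amqp://rmq2:5672,amqp://rmq3:5672,amqp://rmq1:5672
```

Each node's output can then be pasted into that node's system.yaml.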
Linux Archive Upgrade

1) Upgrade an existing node to a RabbitMQ only node (Linux Archive)

This is a scenario where you need to update one of your nodes to a RabbitMQ-only node.

Follow the steps below to upgrade.

  1. Stop Xray and RabbitMQ. Add the autoStop: true flag and the Erlang cookie under the rabbitMq block in the Xray system.yaml. The default cookie can be found at xray/app/third-party/rabbitmq/.erlang.cookie

    shared:
        rabbitMq:
            autoStop: true
            erlangCookie:
                value:

    Then, run the below command.

    cd $JFROG_HOME/xray/app/bin
    ./xray.sh stop ## as xray user

    Or via service

    systemctl stop xray.service

    Wait for the services started by the Xray user to stop. Otherwise, manually kill all processes started by the Xray user.

  2. Extract the contents of the compressed archive and go to the extracted folder.

    mv jfrog-xray-<version>-linux.tar.gz /opt/jfrog/
    cd /opt/jfrog
    tar -xf jfrog-xray-<version>-linux.tar.gz
  3. Replace the existing $JFROG_HOME/xray/app with the new app folder. For example:

    # Export variables to simplify commands
    export JFROG_HOME=/opt/jfrog   ## this is your old xray dir
    export JF_NEW_VERSION=/opt/jfrog/jfrog-xray-<version>-linux
    
    # Remove app
    rm -rf $JFROG_HOME/xray/app
    
    # Copy new app
    cp -fr $JF_NEW_VERSION/app $JFROG_HOME/xray/
    
    # Make sure to chown again to set the right permissions for xray user
    chown -R xray:xray /opt/jfrog/xray
    
    # Remove extracted new version
    rm -rf $JF_NEW_VERSION
  4. Start the Xray service.

    $JFROG_HOME/xray/app/bin/xray.sh start  ## as the xray user

    Or via the service:

    systemctl start xray.service
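The steps above ask you to wait for the services started by the Xray user to stop before replacing the app folder. A minimal sketch of such a wait loop follows; the user name and timeout are assumptions (the installer default run user is xray), so adjust them for your install.

```shell
#!/bin/sh
# Sketch: wait until no processes owned by a given user remain, up to a
# timeout in seconds. Returns 0 when the user has no processes left,
# 1 on timeout (in which case the processes must be killed manually).
wait_for_user_procs() {
    user="$1"; timeout="${2:-60}"; waited=0
    while pgrep -u "$user" > /dev/null 2>&1; do
        [ "$waited" -ge "$timeout" ] && return 1
        sleep 5
        waited=$((waited + 5))
    done
    return 0
}

# Usage:
# wait_for_user_procs xray 120 || echo "Xray processes still running; kill them manually"
```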

2) Upgrade an existing node to an Xray only node (Linux Archive)

This is a scenario where you must update one node to an Xray-only node.

Follow the steps below to upgrade.

  1. Perform this step only if the node you are upgrading runs both Xray and RabbitMQ: disconnect RabbitMQ from the cluster.

    1. Switch to the Xray user (default: xray) and navigate to the rabbitmq/sbin directory.

      su xray
      cd xray/app/third-party/rabbitmq/sbin/
    2. Stop and remove the RabbitMQ node from the cluster. Replace <current-node> with the name of the node you are working on (use the cluster_status command to get the correct node name), and <active-node> with any other node in the cluster that is still online.

      ./rabbitmqctl cluster_status
      ./rabbitmqctl stop_app -n <current-node>
      ./rabbitmqctl forget_cluster_node <current-node> --node  <active-node>
    3. Verify the node is not part of the cluster and no RabbitMQ services are running.

      ./rabbitmqctl cluster_status --node <active-node>
      ps -aux | grep erl
      ps -aux | grep epmd
  2. Stop the current server.

    cd $JFROG_HOME/xray/app/bin
    ./xray.sh stop ## as xray user

    Or via service

    systemctl stop xray.service

    Wait for the services started by the Xray user to stop. Otherwise, manually kill all processes started by the Xray user.

  3. Extract the contents of the compressed archive and go to the extracted folder.

    mv jfrog-xray-<version>-linux.tar.gz /opt/jfrog/
    cd /opt/jfrog
    tar -xf jfrog-xray-<version>-linux.tar.gz
  4. Replace the existing $JFROG_HOME/xray/app with the new app folder.

    For example:

    # Export variables to simplify commands
    export JFROG_HOME=/opt/jfrog   ## this is your old xray dir
    export JF_NEW_VERSION=/opt/jfrog/jfrog-xray-<version>-linux
    
    # Remove app
    rm -rf $JFROG_HOME/xray/app
    
    # Copy new app
    cp -fr $JF_NEW_VERSION/app $JFROG_HOME/xray/
    
    # Make sure to chown again to set the right permissions for xray user
    chown -R xray:xray /opt/jfrog/xray
    
    # Remove extracted new version
    rm -rf $JF_NEW_VERSION
  5. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate with the RabbitMQ cluster. Save the changes.

    Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

    vi $JFROG_HOME/xray/var/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section to maintain consistency.

6. Edit the file $JFROG_HOME/xray/var/etc/installerState.yaml and add the line below at the root level if it is not present, or correct it if it is set to a different value. Note: this step is required only for starting the Xray services.

installation_method: xray
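The edit above can also be scripted. The sketch below idempotently sets the key, adding it if missing and rewriting it if present with a different value; the target path comes from the step above, and backing up the file first is advisable.

```shell
#!/bin/sh
# Sketch: idempotently set "installation_method: xray" at the root of a
# YAML file such as installerState.yaml.
set_installation_method() {
    state_file="$1"
    if grep -q '^installation_method:' "$state_file"; then
        # Key exists (possibly with another value): rewrite it in place.
        sed -i 's/^installation_method:.*/installation_method: xray/' "$state_file"
    else
        # Key missing: append it at the root level.
        printf 'installation_method: xray\n' >> "$state_file"
    fi
}

# Usage (path from the step above):
# set_installation_method "$JFROG_HOME/xray/var/etc/installerState.yaml"
```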
  7. Start the Xray service:

    $JFROG_HOME/xray/app/bin/xray.sh start  ## as the xray user

    Or via the service:

    systemctl start xray.service
Docker Compose Upgrade

1) Upgrade an existing node to a RabbitMQ only node (Docker compose)

This is a scenario where you need to update one of your nodes to a RabbitMQ-only node.

Follow the steps below to upgrade.

  1. Stop the service.

    1. For Xray services

      cd jfrog-xray-<version>-compose
      docker-compose -p xray down
    2. For RabbitMQ services

      cd jfrog-xray-<version>-compose
      docker-compose -p xray-rabbitmq down
  2. Download the newer Xray Docker Compose-based installer (tar.gz) from the JFrog Xray Downloads page.

  3. Extract the newer tar.gz file:

    tar -xvf jfrog-xray-<version>-compose.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-compose
📘

Note

For Docker Compose upgrades, merge any customizations you have made in your current docker-compose.yaml file into the newly extracted docker-compose.yaml file.

Copy the contents of the .env file from the previous installation into the newly created .env file in this archive, without copying the version entries, as carrying over old versions will break the upgrade.
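As a sketch of that merge, the script below copies every non-version entry from the old .env over the new one. The assumption that version pins contain the word VERSION in their key names is hypothetical; check the actual keys in your .env files before relying on it.

```shell
#!/bin/sh
# Sketch: copy settings from the previous .env into the new one, skipping
# any *VERSION* keys so the new release's version pins are preserved.
merge_env() {
    old_env="$1"; new_env="$2"
    while IFS= read -r line; do
        case "$line" in
            *VERSION*=*) ;;          # keep the new file's version pins
            *=*)
                key="${line%%=*}"
                # Drop the new file's value for this key, then append the old one.
                sed -i "/^${key}=/d" "$new_env"
                printf '%s\n' "$line" >> "$new_env"
                ;;
        esac
    done < "$old_env"
}

# Usage:
# merge_env /path/to/old-compose/.env ./jfrog-xray-<version>-compose/.env
```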

  4. Run the installer script.

    ./config.sh -t rabbitmq
📘

Note

If you use a custom password for RabbitMQ, ensure the rabbitmq.conf file has the correct password set before starting the service.

  5. Start the RabbitMQ service.

    docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d
  6. The RabbitMQ service should start; you can verify this by checking the logs.

    docker ps
    docker logs -f xray_rabbitmq
  7. (Optional) Access the management UI through a browser to verify that the RabbitMQ nodes are up and running.

📘

Note

The credentials provided below are the default ones. If you have customized the RabbitMQ password in /opt/jfrog/xray/app/bin/rabbitmq/rabbitmq.conf, ensure you use the correct password when accessing the management UI.

http://<your-node-ip>:15672

Username:  guest
Password:  JFXR_RABBITMQ_COOKIE

2) Upgrade an existing node to an Xray only node (Docker compose)

This is a scenario where you must update one node to an Xray-only node.

Follow the steps below to upgrade.

  1. Stop the service.

    1. For Xray services

      cd jfrog-xray-<version>-compose
      docker-compose -p xray down
    2. For RabbitMQ services

      cd jfrog-xray-<version>-compose
      docker-compose -p xray-rabbitmq down
  2. Download the newer Xray Docker Compose-based installer (tar.gz) from the JFrog Xray Downloads page.

  3. Extract the newer tar.gz file:

    tar -xvf jfrog-xray-<version>-compose.tar.gz
📘

Note

From 3.107 onwards, JFrog Xray installers have organized files into designated subfolders. After extracting the tar.gz file, ensure you navigate to the xray directory.

cd jfrog-xray-<version>-compose
📘

Note

For Docker Compose upgrades, merge any customizations you have made in your current docker-compose.yaml file into the newly extracted docker-compose.yaml file.

Copy the contents of the .env file from the previous installation into the newly created .env file in this archive, without copying the version entries, as carrying over old versions will break the upgrade.

  4. Run the installer script.

    ./config.sh -t xray
  5. Upon running the command, you will be prompted with a question.

    1. "Have you disconnected Artifactory Xray pairings, except one, prior to performing this upgrade? (Refer to http://service.jfrog.org/wiki/Xray+and+Artifactory+One+to+One+Pairing for more details.)" This prompt was added for migrations from older 2.x to 3.x versions. If you are already on Xray 3.x, type y.
  6. Before starting the Xray service, update the system.yaml file with the RabbitMQ cluster URLs and credentials for all nodes in the cluster. Ensure that all RabbitMQ node URLs are correctly listed and include the appropriate username and password for authentication. This configuration ensures Xray can communicate with the RabbitMQ cluster. Save the changes.

    Change the order of the URLs to optimize load balancing and prevent all Xray nodes from connecting to the same RabbitMQ node. For example, if the RabbitMQ node URL order in the first Xray node is 1,2,3, then in the second node, it should be 2,3,1, and in the third node, it could be 3,1,2. This ensures that each Xray node connects to different RabbitMQ nodes, improving fault tolerance and distributing the load efficiently.

    vi /root/.jfrog/xray/var/etc/system.yaml
📘

Note

If using a custom RabbitMQ password, ensure the same password is configured in Xray’s system.yaml file under the RabbitMQ section.

7. Start the Xray service.

docker-compose -p xray up -d
  8. The Xray services should start; you can verify this by checking the logs.

    docker ps
    docker logs -f xray_router ## or any other xray services.