Xray Single Node Manual Docker Compose Installation
Before you proceed, see System Requirements for information on supported platforms, supported browsers, and other requirements.
Xray's system requirements are dependent on the scale of your environment.
Sizings
Non-Kubernetes Deployment
Up to 100K indexed artifacts. No High Availability
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray and DB | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ | 1 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
Up to 1M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 2 | 4 | 8 GB | 300 GB |
| DB | 1 | 8 | 32 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 2 | 8 | 24 GB | 300 GB |
Up to 2M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 6 | 12 GB | 300 GB |
| DB | 1 | 16 | 32 GB | 1 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 4 | 8 | 24 GB | 300 GB |
Up to 10M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 8 | 24 GB | 300 GB |
| DB | 1 | 16 | 64 GB | 2.5 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 8 | 8 | 24 GB | 300 GB |
Over 10M indexed artifacts
Contact JFrog Support.
For Xray HA (High Availability) installations (more than one node and over 100K indexed artifacts), we recommend installing RabbitMQ and Xray on separate servers using split mode. For further details, see the Xray High Availability documentation.
RabbitMQ Note
RabbitMQ is a crucial component of the Xray architecture, acting as a message broker for communication between the various application services. Multiple queues provide communication channels between producers and consumers. For more information, see the RabbitMQ documentation.
Kubernetes Deployment
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
Note: Starting with JFrog Xray Helm chart version 103.124, JFrog added support for Quorum Queues in RabbitMQ. Quorum Queues require a minimum of three RabbitMQ nodes and ensure fault tolerance of at least one node. A non-HA setup with a single node is also supported.
Note: We recommend a three-node RabbitMQ cluster for Xray. Larger clusters increase replication overhead and can reduce throughput. Our application has been tuned to work in a three-node setup.
Xray and RabbitMQ Nodes Recommendations
To mitigate performance bottlenecks, avoid port conflicts, and prevent unusual configurations, use a dedicated node for Xray and RabbitMQ with no other software running.
Scalability
Xray
You can create a high-availability cluster by adding multiple nodes (1, 2, 3, ...n) to distribute the workload and increase capacity.
RabbitMQ
RabbitMQ requires an odd number of servers for Quorum Queues to work effectively. We recommend a three-node cluster for RabbitMQ; if additional capacity is needed, it can be met through vertical scaling. Contact us if you see a need for this.
Quorum Queues Notes
Starting with version 3.124.x, Xray supports Quorum Queues. RabbitMQ deprecated Classic Queue mirroring in version 3.x and removed it in version 4.0. Accordingly, JFrog will deprecate Classic Queue support in upcoming releases.
A highly available RabbitMQ cluster with Quorum Queues requires a minimum of three nodes. A two-node cluster with Quorum Queues is not fault-tolerant. Clusters larger than three nodes do not add fault tolerance and can weaken RabbitMQ performance.
| RMQ QQ nodes | HA | Comments |
|---|---|---|
| 1 | No | Simple deployment. Good for POC or small setups. |
| 2 | No | Degraded mode. Not recommended for production. Can occur temporarily if one node of a 3-node cluster is down. |
| 3 | Yes | Can survive the absence of one node. |
| 4+ | Yes | Possible performance degradation. Wastes nodes. |
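The HA column above follows from Raft majority arithmetic: a Quorum Queue cluster of n nodes tolerates floor((n-1)/2) node failures. A minimal sketch:

```shell
# Quorum Queues use Raft: a majority of nodes must stay up,
# so an n-node cluster tolerates floor((n-1)/2) node failures.
for n in 1 2 3 4 5; do
  echo "nodes=$n tolerated_failures=$(( (n - 1) / 2 ))"
done
```

This is why three nodes is the sweet spot: it is the smallest cluster that survives one failure, while four nodes still tolerate only one.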
For more information, refer to the RabbitMQ Quorum Queues documentation.
In most cases, we recommend using an SSD drive for Xray for better performance. We do not recommend an NFS drive: Xray is a disk I/O-intensive service, a slow NFS server can create I/O bottlenecks, and NFS is mostly intended for storage replication.
Xray stores node-specific files, such as configuration and temporary files, to the disk. These files are exclusively used by Xray and not shared with other services. Since the local storage used for Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.
Use the following command to determine the current file handle allocation limit.
```
cat /proc/sys/fs/file-max
```

Then, set the following parameters in your /etc/security/limits.conf file to the lower of 100,000 or the file handle allocation limit determined above.
The example shows how the relevant parameters in the /etc/security/limits.conf file are set to 100000. The actual setting for your installation may differ depending on the file handle allocation limit in your system.
```
root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
```

The following table lists the supported operating systems and their versions:
| Product | Debian | RHEL | Ubuntu | Amazon Linux | Windows Server |
|---|---|---|---|---|---|
| Xray | 10.x, 11.x | 8.x, 9.x | 20.04, 22.04 | Amazon Linux 2023 | |
Note
Debian 12.x and Ubuntu 24.04 are supported from Artifactory 7.104 and Distribution 2.28.
Windows 2022 is supported from Artifactory 7.125.
Supported Platforms
The following table lists the supported platforms:
| Product | x86-64 | ARM64 | Kubernetes | OpenShift |
|---|---|---|---|---|
| Xray | | | 1.27+ | 4.14+ |
Installation on Kubernetes environments is through Helm Charts. The supported Helm version is 3.17+.
Kubernetes Sizing Requirements
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
ARM64 Support for Container-Based Installations
Artifactory, Xray, and Distribution support installation on ARM64 architecture, specifically through Helm and Docker installations. When deploying a product on an ARM64 platform, an external database must be set up, as Artifactory does not support the bundled database for ARM64 installations. The appropriate ARM64 container image is automatically pulled during the Helm or Docker installation process.
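Which image variant gets pulled follows the machine architecture; a quick sketch of detecting it before installation:

```shell
# Detect the machine architecture to know which image variant will be pulled
arch=$(uname -m)
case "$arch" in
  x86_64)        echo "x86-64 image will be pulled" ;;
  aarch64|arm64) echo "ARM64 image will be pulled (external database required)" ;;
  *)             echo "unsupported architecture: $arch" ;;
esac
```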
Database
Every artifact and build indexed by Xray is broken down into multiple components. These components and their interrelationships are represented in a checksum-based components graph. Xray uses PostgreSQL to store and query this graph.
Xray supports the following PostgreSQL versions:
| Minimum PostgreSQL Version | Maximum PostgreSQL Version | Supported from Xray Version |
|---|---|---|
| 13.x | 17.x | 3.121 |
| 13.x | 16.x | 3.107 |
| 13.x | 15.x | 3.78.9 |
| 13.x | 14.x | 3.42 |
| 13.x | 13.x | 3.18 |
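As a sketch, the server's major version can be checked against the supported range for a given Xray release; the version value below is an assumption for illustration (in practice it could come from `psql -tAc "SHOW server_version;"`):

```shell
# Check a PostgreSQL server version against Xray 3.121's supported range (13.x - 17.x)
pg_version="16.3"        # hypothetical value for this sketch
major=${pg_version%%.*}  # strip everything after the first dot
if [ "$major" -ge 13 ] && [ "$major" -le 17 ]; then
  echo "PostgreSQL $pg_version is in the supported range"
else
  echo "PostgreSQL $pg_version is outside the supported range"
fi
```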
RabbitMQ
RabbitMQ is installed as part of the Xray installation for every node. In an HA architecture, Xray utilises queue mirroring and replication between different RabbitMQ nodes. The recommended installation method involves using split mode and setting up a separate 3-node RabbitMQ cluster with Xray HA.
Note: JFrog has added support for RabbitMQ Quorum Queues, available as an optional parameter in the system.yaml, because RabbitMQ deprecated Classic Queue mirroring in version 3.x and removed it in version 4.0. Consequently, JFrog will also deprecate Classic Queue support and transition to Quorum Queues. We recommend enabling Quorum Queues in Xray, as JFrog plans to fully transition to RabbitMQ 4.x and discontinue Classic Queue support in upcoming versions.
| RabbitMQ Version | Quorum Queues | Classic Queues | Erlang Version Compatibility |
|---|---|---|---|
| 3.7.x | Not supported | Must | From 19.3 to 22.x |
| 3.8.0+ | Recommended | Not recommended | From 23.2 to 24.3 |
| 3.13.0+ | Recommended | Not recommended | From 26.0 to 26.2.x |
| 4.x | Must | Not supported | From 26.2.x to 27.x |
Xray encompasses multiple flows, including scanning, impact analysis, and database synchronisation. These flows require processing by various Xray microservices. Flows comprise multiple steps completed by the Xray services. Xray uses RabbitMQ to manage these different flows and track synchronous and asynchronous communication between microservices.
Erlang
Xray incorporates Erlang and DB-Util as third-party dependencies. These packages are bundled with all Xray installers except for the Linux Archive.
Please ensure you are using the correct Erlang version corresponding to your Xray version:
- Xray versions 3.124.x require Erlang 26. For more information on RabbitMQ and Erlang compatibility, refer to the RabbitMQ and Erlang/OTP Compatibility Matrix.
- Xray 3.124 and later versions require Erlang 27 if RabbitMQ 4.x is enabled using properties.
Xray Network Ports
Xray uses port 8082 by default for external communication.
Xray uses the following internal ports by default for communication with JFrog Platform microservices.
| Microservice | Port |
|---|---|
| Xray Server | 8000 |
| Analysis | 7000 |
| Indexer | 7002 |
| Persist | 7003 |
| Router | HTTP: 8082, 8046, 8049; gRPC: 8047 |
| RabbitMQ | 4369, 5671, 5672, 15672, 25672, and 35672 to 35682 |
| PostgreSQL (if you use the bundled PostgreSQL database) | 5432 |
| Observability | HTTP: 8036; gRPC: 8037 |
| Policy Enforcer | 7009 |
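Before installing on a dedicated node, it can help to confirm none of these ports are already bound; a sketch using `ss` from iproute2 (adjust if it is unavailable on your system):

```shell
# Check a few of Xray's default ports for existing listeners
for port in 8000 7000 7002 7003 8082 5432; do
  if ss -ltn 2>/dev/null | grep -q ":$port "; then
    echo "port $port is already in use"
  else
    echo "port $port looks free"
  fi
done
```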
In addition, review the Docker requirements.
For Docker and Docker Compose installations, JFrog services require Docker Engine 25.0 and above, and Docker Compose v2 to be installed on the machine where you want to run them.
For more information, see Docker and Docker Compose.
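A sketch of checking the installed Docker Engine against the 25.0 minimum; the version string below is an assumption (in practice it could come from `docker version --format '{{.Server.Version}}'`):

```shell
# Compare an installed Docker Engine version against the 25.0 minimum
docker_version="26.1.4"          # hypothetical value for this sketch
major=${docker_version%%.*}
if [ "$major" -ge 25 ]; then
  echo "Docker Engine $docker_version meets the 25.0 minimum"
else
  echo "Docker Engine $docker_version is too old; upgrade to 25.0 or later"
fi
```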
Follow these steps to install the product:
- Extract the contents of the compressed archive and go to the extracted folder.

  ```
  tar -xvf jfrog-xray-<version>-compose.tar.gz
  ```
  .env file included within the Docker Compose archive
  The .env file is used by docker-compose and is updated during installations and upgrades.
  Some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.
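A minimal sketch of a dated backup of the hidden .env file before an upgrade; a stand-in file is created here so the sketch is self-contained, but you would run the copy from your extracted folder:

```shell
# Back up the hidden .env file with a date suffix before upgrading
cd "$(mktemp -d)"                  # stand-in for the extracted folder in this sketch
printf 'ROOT_DATA_DIR=\n' > .env   # stand-in for the real .env
cp .env ".env.backup-$(date +%Y%m%d)"
ls -a | grep '\.env\.backup'
```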
- Create the following folder structure under JFROG_HOME.

  ```
  |-- [ ] app
  |   |-- [ ] third-party
  |       |-- [999 999] rabbitmq
  |-- [1035 1035] var
      |-- [1035 1035] data
      |-- [1035 1035] etc
  ```
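A minimal sketch of creating that layout; the JFROG_HOME default used here is an assumption, and the chown commands need root:

```shell
# Create the Xray folder layout; default location is an assumption for this sketch
JFROG_HOME="${JFROG_HOME:-$HOME/.jfrog}"
mkdir -p "$JFROG_HOME/xray/app/third-party/rabbitmq" \
         "$JFROG_HOME/xray/var/data" \
         "$JFROG_HOME/xray/var/etc"
# Ownership expected by the bundled containers; run as root
chown -R 999:999   "$JFROG_HOME/xray/app/third-party/rabbitmq" 2>/dev/null || echo "re-run chown as root"
chown -R 1035:1035 "$JFROG_HOME/xray/var"                      2>/dev/null || echo "re-run chown as root"
```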
- Copy the appropriate docker-compose template from the templates folder to the extracted folder. Rename it as docker-compose.yaml.

  | Requirement | Template |
  |---|---|
  | Xray | |
  | RabbitMQ | |
  | PostgreSQL | |
  Docker for Mac
  When you use Docker Compose on Mac, /etc/localtime might not work as expected, since it might not be a shared location in the docker-for-mac settings. You can remove the following line from the selected docker-compose.yaml file to avoid installation issues.

  ```
  - /etc/localtime:/etc/localtime:ro
  ```
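A sketch of deleting that bind-mount line with sed; a sample file is written first so the sketch is self-contained, but you would point it at your real docker-compose.yaml:

```shell
# Remove the /etc/localtime bind-mount line from a compose file on macOS
f=$(mktemp)                                                 # stand-in for docker-compose.yaml
printf '      - /etc/localtime:/etc/localtime:ro\n' > "$f"  # sample content for this sketch
sed -i.bak '/localtime/d' "$f"                              # -i.bak works on both GNU and BSD sed
```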
- Update the .env file.

  ```
  ## The installation directory for Xray. Default [$HOME/.jfrog/xray]
  ROOT_DATA_DIR=

  # Host ID. Other nodes in the cluster will use this ID to identify this node
  HOST_ID=

  # ID of the active node. Please leave the value as "None" for active nodes. (shared.rabbitMq.active.node.name)
  JF_SHARED_RABBITMQ_ACTIVE_NODE_NAME=None

  # IP of the active node. (shared.rabbitMq.active.node.ip)
  JF_SHARED_RABBITMQ_ACTIVE_NODE_IP=127.0.0.1

  # Bind IP for internal ports of third-party applications
  JF_THIRD_PARTY_BIND_IP=127.0.0.1
  ```
- Customize the product configuration.
  - Set the Artifactory connection details.
  - Customize the PostgreSQL database connection details (optional).
  - Set any additional configurations (for example, ports, node ID) using the Xray system.yaml file.

  Note
  Ensure the host's ID and IP are added to the system.yaml file. This is important to ensure that other products and Platform deployments can reach this instance.
- Enter the RabbitMQ information in system.yaml. If you want to set up a RabbitMQ HA cluster, enter the information on all the secondary nodes.

  ```
  shared:
    rabbitMq:
      active:
        node:
          ip: <IP>
          name: <xray-master-node-id>
  ```

  Enter the value of HOST_ID from the .env file as xray-master-node-id, and the value of JF_THIRD_PARTY_BIND_IP from the .env file as the IP.
- Customize any additional product configuration (optional), including Java Opts and filestore.
- Copy the rabbitmq.conf and setRabbitCluster.sh files to the folder app/third-party/rabbitmq. Ensure both are owned by 999:999 (the rabbitmq uid/gid).
- Edit rabbitmq.conf and enter the following information. If you want to set up a RabbitMQ HA cluster, enter the information on all the secondary nodes.

  ```
  cluster_formation.peer_discovery_backend = rabbit_peer_discovery_classic_config
  cluster_formation.classic_config.nodes.1 = rabbit@<xray-master-node-id>
  ```

  Enter the value of HOST_ID from the .env file as xray-master-node-id.
Start Xray and PostgreSQL using docker-compose commands.
## Start RabbitMQ before starting other services docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d ## From Xray 3.8.x, Start PostgreSQL before starting the other services. docker-compose -p xray-postgres -f docker-compose-postgres.yaml up -d docker-compose -p xray up -d ## Check whether service is up docker-compose -p xray psdocker-compose -p distribution logs docker-compose -p distribution ps docker-compose -p distribution up -d docker-compose -p distribution down -
- Access Artifactory from your browser at http://SERVER_HOSTNAME/ui/. For example, on your local machine: http://localhost/ui/.
Check the Xray log.
docker-compose -p xray logs
Configure log rotation of the console log
The
console.logfile can grow quickly since all services write to it. For more information, see configure the log rotation.
After installing and before running Xray, you may set the following configurations.
You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/xray/var/etc folder. For more information, see Xray System YAML.
If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.
For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.
Xray requires a working Artifactory server and a suitable license. The Xray connection to Artifactory requires the following parameters.
- jfrogUrl
  URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: http://jfrog.acme.com or http://10.20.30.40:8082. Note that the /artifactory context is no longer required.
  Set it in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
join.key
This is the "secret" key required by Artifactory for registering and authenticating the Xray server.
You can fetch the Artifactory
joinKey(join Key) from the JPD UI in the User Management | Settings | Join Key.Set the join.key used by your Artifactory server in the Shared Configurations section of the
$JFROG_HOME/xray/var/etc/system.yamlfile.
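A sketch of what the Shared Configurations section can look like once both values are set; the URL and key are placeholders, and the exact layout should be checked against the system.yaml template shipped with your version:

```shell
# Write an example Shared Configurations snippet; values are placeholders, not real credentials
cat > /tmp/system.yaml.example <<'EOF'
shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <join key fetched from the JPD UI>
EOF
grep -q 'joinKey' /tmp/system.yaml.example && echo "snippet written"
```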
Xray comes bundled with a PostgreSQL database out of the box, which comes pre-configured with default credentials.
To change the default credentials:
```
# Access PostgreSQL as the Xray user, adding the optional -W flag to invoke the password prompt
$ psql -d xraydb -U xray -W

# Securely change the password for user "xray". Enter and then retype the password at the prompt.
\password xray

# Verify the update was successful by logging in with the new credentials
$ psql -d xraydb -U xray -W
```

Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
Xray comes pre-installed with RabbitMQ, with the Erlang cookie value set as the RabbitMQ password for the guest user.
- Set the new password in the <MOUNT_DIR>/app/third-party/rabbitmq/rabbitmq.conf file.

  ```
  default_pass = <new password>
  ```

- Set your RabbitMQ password in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
- Restart all services.

  ```
  cd jfrog-xray-<version>-compose
  docker-compose -p xray restart
  ```
By default, RabbitMQ uses the short hostnames of other nodes in the cluster for communication. However, it can be configured to use fully qualified domain names (FQDN), that is, long hostnames.
To configure RabbitMQ to use FQDN, follow these steps.
- Install Xray, but do not start the services.
- Modify the following files according to the installer type.
  - Docker Compose

    In docker-compose-rabbitmq.yaml:

    ```
    environment:
      - RABBITMQ_USE_LONGNAME=true
    ```

    In .env:

    ```
    HOST_ID=<long hostname>
    ## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
    #JF_SHARED_RABBITMQ_ACTIVE_NODE_NAME=<long hostname of active node>
    ```

  - Common Change in All Installers

    In system.yaml:

    ```
    shared:
      node:
        id: <long hostname>
        name: <long hostname>
    ## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
    # shared:
    #   rabbitMq:
    #     active:
    #       node:
    #         name: <long hostname of active node>
    ```
- Start RabbitMQ and the Xray services.
Xray enables using an external log collector such as Sumologic or Splunk. Adjust the permissions to allow the log collection service to perform read operations on the generated log files:
- Add the log collection service user to the relevant group if needed (the user and group that installed and started Xray).
- Apply the user and group permissions as needed on the $JFROG_HOME/xray/var/log directory using:

  ```
  $ chmod -R 640 $JFROG_HOME/xray/var/log
  ```

- Adjust the group read inheritance permissions (setgid bit) using:

  ```
  $ chmod -R 2755 $JFROG_HOME/xray/var/log
  ```

  This command enables the generated log files to inherit the folder's group permissions.
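The setgid behavior can be seen on a scratch directory; a minimal sketch:

```shell
# Demonstrate the setgid bit (mode 2755) on a throwaway directory:
# files created inside will inherit the directory's group
d=$(mktemp -d)
chmod 2755 "$d"
touch "$d/example.log"
ls -ld "$d" | cut -c1-10   # an 's' in the group position marks the setgid bit
```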