Install Xray Single Node with Interactive Script
The installer script works with all supported installation methods (Linux Archive, RPM, Debian, and Docker Compose). It provides an interactive way to install Xray and its dependencies.
Warning
For a Linux Archive installation, do not run the installer script from a symlinked folder, as this may cause the installation to fail.
Before you proceed with the installation, review the system requirements.
Xray System Requirements and Platform Support
Xray's system requirements depend on the scale of your environment.
Sizings
Non-Kubernetes Deployment
Up to 100K indexed artifacts. No High Availability
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray and DB | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ | 1 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
Up to 1M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 2 | 4 | 8 GB | 300 GB |
| DB | 1 | 8 | 32 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 2 | 8 | 24 GB | 300 GB |
Up to 2M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 6 | 12 GB | 300 GB |
| DB | 1 | 16 | 32 GB | 1 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 4 | 8 | 24 GB | 300 GB |
Up to 10M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 8 | 24 GB | 300 GB |
| DB | 1 | 16 | 64 GB | 2.5 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 8 | 8 | 24 GB | 300 GB |
Over 10M indexed artifacts
Contact JFrog Support.
For Xray HA (High Availability) installations (more than one node and over 100K indexed artifacts), we recommend installing RabbitMQ and Xray on separate servers using split mode. For further details, refer to our page here.
RabbitMQ Note
RabbitMQ is a crucial component of the Xray architecture, acting as a message broker for communication between various application services. Multiple queues facilitate communication channels between producers and consumers. For more information, please refer to our page here.
Kubernetes Deployment
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
Note: Starting with JFrog Xray Helm chart version 103.124, JFrog added support for Quorum Queues in RabbitMQ. Quorum Queues require a minimum of three RabbitMQ nodes and tolerate the failure of one node. A non-HA setup with a single node is also supported.
Note: We recommend a three-node cluster of RabbitMQ for Xray. Large clusters increase the overhead of replication and could affect the throughput. Our application has been tuned to work in a three-node setup.
Xray and RabbitMQ Nodes Recommendations
To mitigate performance bottlenecks, avoid port conflicts, and prevent unusual configurations, use a dedicated node for Xray and RabbitMQ with no other software running.
Scalability
Xray
You can create a high-availability cluster by adding multiple nodes (1, 2, 3, ...n) to distribute the workload and increase capacity.
RabbitMQ
An odd number of servers is required for RabbitMQ Quorum Queues to work effectively. We recommend a three-node RabbitMQ cluster; if additional capacity is needed, it can be met through vertical scaling. Contact us if you see a need for this.
Quorum Queues Notes
Starting with version 3.124.x, Xray supports Quorum Queues. RabbitMQ deprecated Classic Queue mirroring in version 3.x and removed it in version 4.0. Accordingly, JFrog will be deprecating Classic Queue support in upcoming releases.
A highly available RabbitMQ cluster with Quorum Queues requires a minimum of three nodes. A two-node cluster with Quorum Queues is not fault-tolerant. Clusters larger than three nodes do not add fault tolerance and can degrade RabbitMQ performance.
| RMQ QQ nodes | HA | Comments |
|---|---|---|
| 1 | No | Simple deployment. Good for POC or small setups. |
| 2 | No | Degraded mode. Not recommended for production. Can occur temporarily when one node of a 3-node cluster is down. |
| 3 | Yes | Can survive the loss of one node. |
| 4+ | Yes | Possible performance degradation. Wastes nodes. |
For more information, please refer to the RabbitMQ Quorum Queues documentation here.
Xray Storage Recommendations
In most cases, we recommend an SSD drive for Xray for better performance. Xray is a disk I/O-intensive service, so an NFS drive is not recommended: a slow NFS server can suffer from I/O bottlenecks, and NFS is mostly used for storage replication.
Xray stores node-specific files, such as configuration and temporary files, to the disk. These files are exclusively used by Xray and not shared with other services. Since the local storage used for Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.
Xray File Handle Allocation Limit
Use the following command to determine the current file handle allocation limit:

```shell
cat /proc/sys/fs/file-max
```

Then, set the following parameters in your /etc/security/limits.conf file to the lower of 100,000 or the file handle allocation limit determined above.
The example shows how the relevant parameters in the /etc/security/limits.conf file are set to 100000. The actual setting for your installation may differ depending on the file handle allocation limit of your system.
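Before editing limits.conf, you can check the limits currently in effect on the node; a quick sketch:

```shell
# Current per-shell open-file limits and the system-wide handle limit
ulimit -Sn                 # soft limit for the current shell
ulimit -Hn                 # hard limit for the current shell
cat /proc/sys/fs/file-max  # system-wide file handle limit
```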
```
root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
```

The following table lists the supported operating systems and their versions:
| Product | Debian | RHEL | Ubuntu | Amazon Linux | Windows Server |
|---|---|---|---|---|---|
| Xray | 10.x, 11.x | 8.x, 9.x | 20.04, 22.04 | Amazon Linux 2023 | |
Note
Debian 12.x and Ubuntu 24.04 are supported from Artifactory 7.104 and Distribution 2.28.
Windows 2022 is supported from Artifactory 7.125.
Supported Platforms
The following table lists the supported platforms:
| Product | x86-64 | ARM64 | Kubernetes | OpenShift |
|---|---|---|---|---|
| Xray | | | 1.27+ | 4.14+ |
Installation on Kubernetes environments is through Helm Charts. The supported Helm version is 3.17+.
Kubernetes Sizing Requirements
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
ARM64 Support for Container-Based Installations
Artifactory, Xray, and Distribution support installation on ARM64 architecture through Helm and Docker installations. When deploying on an ARM64 platform, an external database must be set up, as Artifactory does not support the bundled database for ARM64 installations. The appropriate ARM64 container image is automatically pulled during the Helm or Docker installation process.
Database and Third-Party Applications in Xray
Database
Every artifact and build indexed by Xray is broken down into multiple components. These components and their interrelationships are represented in a checksum-based components graph. Xray uses PostgreSQL to store and query this components graph.
Xray supports the following PostgreSQL versions:
| Minimum PostgreSQL Version | Maximum PostgreSQL Version | Xray Version |
|---|---|---|
| 13.x | 17.x | 3.121 |
| 13.x | 16.x | 3.107 |
| 13.x | 15.x | 3.78.9 |
| 13.x | 14.x | 3.42 |
| 13.x | 13.x | 3.18 |
RabbitMQ
RabbitMQ is installed as part of the Xray installation for every node. In an HA architecture, Xray utilises queue mirroring and replication between different RabbitMQ nodes. The recommended installation method involves using split mode and setting up a separate 3-node RabbitMQ cluster with Xray HA.
Note: JFrog has added support for RabbitMQ Quorum Queues, available as an optional parameter in the system.yaml, because RabbitMQ removed Classic Queue mirroring in version 4.x. Consequently, JFrog will also deprecate Classic Queue support and transition to Quorum Queues. It is recommended to enable Quorum Queues in Xray, as JFrog plans to fully transition to RabbitMQ 4.x and discontinue Classic Queue support in upcoming versions.
| RabbitMQ Version | Quorum Queues | Classic Queues | Erlang Version Compatibility |
|---|---|---|---|
| 3.7.x | Not supported | Must | From 19.3 to 22.x |
| 3.8.0+ | Recommended | Not recommended | From 23.2 to 24.3 |
| 3.13.0+ | Recommended | Not recommended | From 26.0 to 26.2.x |
| 4.x | Must | Not supported | From 26.2.x to 27.x |
Xray encompasses multiple flows, including scanning, impact analysis, and database synchronisation. These flows require processing by various Xray microservices. Flows comprise multiple steps completed by the Xray services. Xray uses RabbitMQ to manage these different flows and track synchronous and asynchronous communication between microservices.
Erlang
Xray incorporates Erlang and DB-Util as third-party dependencies. These packages are bundled with all Xray installers except for the Linux Archive.
Please ensure you are using the correct Erlang version corresponding to your Xray version:
- Xray version 3.124.x requires Erlang 26. For more information on RabbitMQ and Erlang compatibility, refer to the RabbitMQ and Erlang/OTP Compatibility Matrix.
- Xray 3.124 and later versions require Erlang 27 if RabbitMQ 4.x is enabled via properties.
Xray Network Ports
Xray uses port 8082 by default for external communication.
Xray uses the following internal ports by default for communication with JFrog Platform microservices.
| Microservice | Port |
|---|---|
| Xray Server | 8000 |
| Analysis | 7000 |
| Indexer | 7002 |
| Persist | 7003 |
| Router | HTTP: 8082, 8046, 8049; gRPC: 8047 |
| RabbitMQ | 4369, 5671, 5672, 15672, 25672, and 35672 to 35682 |
| PostgreSQL (if you use the bundled PostgreSQL database) | 5432 |
| Observability | HTTP: 8036; gRPC: 8037 |
| Policy Enforcer | 7009 |
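As a quick preflight check on a node, you can probe whether any of the default ports are already bound. This is a sketch using bash's /dev/tcp redirection; the port list below is a sample taken from the table above:

```shell
#!/usr/bin/env bash
# Report whether a TCP port on localhost is already in use
check_port() {
  if (exec 3<>"/dev/tcp/localhost/$1") 2>/dev/null; then
    echo "port $1: in use"
  else
    echo "port $1: free"
  fi
}

for p in 8000 8082 7000 7002 7003 5432 5672; do
  check_port "$p"
done
```

A port reported as "in use" before installation indicates a conflict with other software on the node.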
Follow these steps to install the product:
- Download the Xray compressed archive from the Download page.
- Extract the contents of the compressed archive and go to the extracted folder.

```shell
tar -xvf jfrog-xray-<version>-<compose|rpm|deb>.tar.gz
cd jfrog-xray-<version>-<compose|rpm|deb>
```
OS user permissions for Linux archive
When running Xray, the installation script creates a user called xray by default, which must have run and execute permissions on the installation directory.
We recommend that you extract the Xray download file into a directory that gives run and execute permissions to all users, such as /opt:

```shell
mv jfrog-xray-<version>-linux.tar.gz /opt/
cd /opt
tar -xf jfrog-xray-<version>-linux.tar.gz
mv jfrog-xray-<version>-linux xray
cd xray
```
.env file included within the Docker-Compose archive
The .env file is used by docker-compose and is updated during installations and upgrades.
Some operating systems do not display dot files by default. If you make any changes to the file, remember to back it up before an upgrade.
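For example, a simple timestamped backup of the file before an upgrade, demonstrated here on a scratch copy (in a real installation, run the cp line inside the jfrog-xray-<version>-compose folder):

```shell
# Demonstrate on a scratch directory; the real .env lives in the compose folder
workdir=$(mktemp -d)
cd "$workdir"
printf 'COMPOSE_PROJECT_NAME=xray\n' > .env   # stand-in for the real file
cp .env ".env.backup.$(date +%Y%m%d)"
ls -A "$workdir"
```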
- Run the installer script.
The script prompts you for a series of mandatory inputs, including the jfrogUrl (custom base URL) and the joinKey.
RPM or Debian

```shell
./install.sh
```

Docker Compose

```shell
./config.sh
```

Linux archive
Check the prerequisites for Xray in Linux Archive before running the install script.

```shell
./install.sh --user <user name> --group <group name>
 -h | --help  : [optional] display usage
 -u | --user  : [optional] (default: xray) user that will be used to run the product; created if unavailable
 -g | --group : [optional] (default: xray) group that will be used to run the product; created if unavailable
```
Note
- The installer script prompts for the JFrog URL and the JoinKey of the Artifactory instance to join the JFrog Platform cluster. Find the join key in the JPD UI under the Administration module | Security | General | Connection Details. Enter your login password in the Current Password field and click Unlock.
- For production instances, JFrog recommends using an external database, so create the database before the Xray installation.
- Xray uses the Erlang and DB-Util third-party applications. These packages are included with all installers. If you encounter issues, install Erlang and DB-Util manually before re-running the script.
- Validate and customize the product configuration (optional), including the third-party dependencies' connection details and ports.
Warning
Verify that a large file handle limit is specified before you start Xray.
- Start and manage the Xray service.
Important
Starting from Xray 3.8.x, the stop and restart actions on Xray are not applied to the RabbitMQ process. On the start action, if RabbitMQ is not running, it is started.
If you want the script to perform stop and restart actions on RabbitMQ, set shared.rabbitMq.autoStop to true in the system.yaml. This flag is not consumed in a Docker Compose installation.
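For example, the flag sits under the shared section of the system.yaml:

```yaml
shared:
  rabbitMq:
    autoStop: true
```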
systemd OS

```shell
systemctl start|stop xray.service
```

systemv OS

```shell
service xray start|stop
```

Docker Compose

```shell
cd jfrog-xray-<version>-compose

# Starting from Xray 3.8.x, RabbitMQ has been moved to a compose file of its own;
# it needs to be started before the other services
docker-compose -p xray-rabbitmq -f docker-compose-rabbitmq.yaml up -d

# Starting from 3.8.x, PostgreSQL needs to be started before the other services
docker-compose -p xray-postgres -f docker-compose-postgres.yaml up -d

docker-compose -p xray up -d
docker-compose -p xray ps
docker-compose -p xray down
```

Linux archive

```shell
xray/app/bin/xray.sh start|stop
```

You can install and manage Xray as a service in a Linux archive installation. For more information, see the start Xray section under Linux Archive Manual Installation.

- Check the Xray log.

```shell
tail -f $JFROG_HOME/xray/var/log/console.log
```

Configure log rotation of the console log
The console.log file can grow quickly since all services write to it. For more information, see configure the log rotation.
- Access Xray from your browser at http://<jfrogUrl>/ui/. Go to the Xray Security & Compliance tab in the Administration module in the UI.
After installing and before running Xray, you may set the following configurations.
You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/xray/var/etc folder. For more information, see Xray System YAML.
If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.
For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.
Xray requires a working Artifactory server and a suitable license. The Xray connection to Artifactory requires the following parameters.
- jfrogUrl
URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs, for example http://jfrog.acme.com or http://10.20.30.40:8082. Note that the /artifactory context is no longer required.
Set it in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
- join.key
This is the "secret" key required by Artifactory for registering and authenticating the Xray server.
You can fetch the Artifactory joinKey (join key) from the JPD UI under User Management | Settings | Join Key.
Set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
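Put together, the two settings land in system.yaml roughly as follows. This is a sketch: the URL and key values are placeholders, and the exact key names should be verified against the system.yaml template shipped with your version:

```yaml
shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <join key>
```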
Xray comes bundled with an out-of-the-box PostgreSQL database, pre-configured with default credentials.
To change the default credentials:
```shell
# Access PostgreSQL as the Xray user, adding the optional -W flag to invoke the password prompt
$ psql -d xraydb -U xray -W

# Inside psql, securely change the password for user "xray". Enter and then retype the password at the prompt.
\password xray

# Verify the update was successful by logging in with the new credentials
$ psql -d xraydb -U xray -W
```

Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
Xray comes pre-installed with RabbitMQ, with the Erlang cookie value set as the RabbitMQ password for the guest user.
Docker Compose
- Set the new password in the <MOUNT_DIR>/app/third-party/rabbitmq/rabbitmq.conf file:

```
default_pass = <new password>
```

- Set your RabbitMQ password in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
- Restart all services.

```shell
cd jfrog-xray-<version>-compose
docker-compose -p xray restart
```

RPM or Debian
- Set the new password in the $JFROG_HOME/app/bin/rabbitmq/rabbitmq.conf file:

```
default_pass = <new password>
```

- Set your RabbitMQ password in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
- Restart all services.

```shell
service xray restart / systemctl restart xray.service
```

Linux archive
- Set the new password in the $JFROG_HOME/app/bin/rabbitmq/rabbitmq.conf file:

```
default_pass = <new password>
```

- Set your RabbitMQ password in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
- Restart all services.

```shell
xray/app/bin/xray.sh restart
```

By default, RabbitMQ uses the short hostnames of other nodes in the cluster for communication. However, it can be configured to use a fully qualified domain name (FQDN), that is, a long hostname.
To configure RabbitMQ to use FQDN, follow these steps.
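Before starting, you can confirm what the short and fully qualified hostnames of the node resolve to:

```shell
hostname -s   # short hostname
hostname -f   # fully qualified domain name (FQDN), the "long" hostname
```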
- Install Xray, but do not start the services.
- Modify the following files according to the installer type.
- Docker Compose
In docker-compose-rabbitmq.yaml:

```yaml
environment:
  - RABBITMQ_USE_LONGNAME=true
```

In .env:

```
HOST_ID=<long hostname>

## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
#JF_SHARED_RABBITMQ_ACTIVE_NODE_NAME=<long hostname of active node>
```
- Linux and Native Installers
In JFROG_HOME/app/bin/xray.default:

```shell
export RABBITMQ_USE_LONGNAME=true
```
- Common Change in All Installers
In system.yaml:

```yaml
shared:
  node:
    id: <long hostname>
    name: <long hostname>
## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
# shared:
#   rabbitMq:
#     active:
#       node:
#         name: <long hostname of active node>
```
- Start RabbitMQ and the Xray services.
Xray enables using an external log collector such as Sumologic or Splunk.
Adjust the permissions to allow the log collection service to perform read operations on the generated log files:
- Add the log collection service user to the relevant group if needed (the user and group that installed and started Xray).
- Apply the user and group permissions as needed on the $JFROG_HOME/xray/var/log directory using:

```shell
chmod -R 640 $JFROG_HOME/xray/var/log
```
- Adjust the group read inheritance permissions (setgid bit) using:

```shell
chmod -R 2755 $JFROG_HOME/xray/var/log
```

This command enables the generated log files to inherit the folder's group permissions.
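The effect of the setgid bit can be seen on a scratch directory; this sketch uses a temporary directory as a stand-in for $JFROG_HOME/xray/var/log:

```shell
# Demonstrate setgid inheritance on a throwaway directory
demo=$(mktemp -d)
chmod 2755 "$demo"                # rwxr-xr-x with the setgid bit set
stat -c '%a' "$demo"              # prints 2755
touch "$demo/example.log"         # new files pick up the directory's group
stat -c '%G' "$demo/example.log"  # group inherited from the directory
```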
Next Steps
- Installing JFrog Catalog with Interactive Script
- Install JFrog Advanced Security on Your Self-Hosted Environment Without Helm