Xray HA OpenShift Installation
Xray on OpenShift is available from version 3.80.9 onwards.
Before you proceed with the installation, review the system requirements.
Xray's system requirements are dependent on the scale of your environment.
Sizings
Non-Kubernetes Deployment
Up to 100K indexed artifacts. No High Availability
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray and DB | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ | 1 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
Up to 1M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 2 | 4 | 8 GB | 300 GB |
| DB | 1 | 8 | 32 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 2 | 8 | 24 GB | 300 GB |
Up to 2M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 6 | 12 GB | 300 GB |
| DB | 1 | 16 | 32 GB | 1 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 4 | 8 | 24 GB | 300 GB |
Up to 10M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 8 | 24 GB | 300 GB |
| DB | 1 | 16 | 64 GB | 2.5 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 8 | 8 | 24 GB | 300 GB |
Over 10M indexed artifacts
Contact JFrog Support.
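Before choosing a tier, you can compare a candidate Linux host against the sizing tables above. A minimal sketch (the 6-core/24 GB figures are taken from the 100K-tier "Xray and DB" row; substitute the numbers for the tier you are targeting):

```shell
# Read this host's CPU and memory and compare to a target sizing row
req_cores=6                 # CPU cores from the sizing table row you are targeting
req_mem_gb=24               # memory (GB) from the same row
cores=$(nproc)
mem_kb=$(grep MemTotal /proc/meminfo | awk '{print $2}')
mem_gb=$(( mem_kb / 1024 / 1024 ))
echo "cores: $cores (need $req_cores), memory: ${mem_gb} GB (need ${req_mem_gb} GB)"
```

Disk throughput (SSD, 3000 IOPS) is equally important and is not covered by this quick check.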
For Xray HA (High Availability) installations (more than one node and more than 100K indexed artifacts), we recommend installing RabbitMQ and Xray on separate servers using split mode. For further details, please refer to our page here.
RabbitMQ Note
RabbitMQ is a crucial component of the Xray architecture, acting as a message broker for communication between various application services. Multiple queues facilitate communication channels between producers and consumers. For more information, please refer to our page here.
Kubernetes Deployment
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
Note: From JFrog Xray Helm chart version 103.124, JFrog added support for Quorum Queues in RabbitMQ. Quorum Queues require a minimum of three RabbitMQ nodes and ensure fault tolerance of at least one node. A non-HA setup with one node is also supported.
Note: We recommend a three-node RabbitMQ cluster for Xray. Larger clusters increase replication overhead and can reduce throughput. Our application has been tuned to work in a three-node setup.
Xray and RabbitMQ Nodes Recommendations

To mitigate performance bottlenecks, avoid port conflicts, and prevent unusual configurations, use dedicated nodes for Xray and RabbitMQ with no other software running.

Scalability
Xray
You can create a high-availability cluster by adding multiple nodes (1, 2, 3, ...n) to distribute the workload and increase capacity.
RabbitMQ
An odd number of servers is required for RabbitMQ Quorum Queues to work effectively. We recommend a three-node RabbitMQ cluster; if additional capacity is needed, it can be met through vertical scaling. Contact JFrog Support if you see a need for this.
Quorum Queues Notes
Starting with version 3.124.x, Xray supports Quorum Queues. RabbitMQ deprecated Classic Queue mirroring in version 3.x and removed it in version 4.0. Accordingly, JFrog will be deprecating Classic Queue support in upcoming releases.
A highly available RabbitMQ cluster with Quorum Queues requires a minimum of three nodes. A two-node cluster with RabbitMQ Quorum Queues is not fault-tolerant. Clusters of more than three nodes do not add fault tolerance and can degrade RabbitMQ performance.
| RMQ QQ nodes | HA | Comments |
|---|---|---|
| 1 | No | Simple deployment. Good for POC or small setups. |
| 2 | No | Degraded mode. Not recommended for production. Can occur temporarily if one node of a 3-node cluster is down. |
| 3 | Yes | Can survive the loss of one node. |
| 4+ | Yes | Possible performance degradation. Wastes nodes. |
For more information, please refer to the RabbitMQ Quorum Queues documentation here.
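The node counts in the table follow directly from the majority quorums used by Raft, the consensus protocol behind Quorum Queues: a cluster of n nodes tolerates floor((n - 1) / 2) failures. A quick sketch:

```shell
# Fault tolerance of a Raft-based quorum for n nodes: floor((n - 1) / 2)
for n in 1 2 3 4 5; do
  echo "$n nodes -> tolerates $(( (n - 1) / 2 )) node failure(s)"
done
```

This is why three nodes survive one failure while a fourth node adds cost without adding tolerance.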
Xray Storage Recommendations

In most cases, we recommend using an SSD drive for Xray for better performance. An NFS drive is not recommended: Xray is a disk I/O-intensive service, a slow NFS server can create I/O bottlenecks, and NFS is mostly intended for storage replication.
Xray stores node-specific files, such as configuration and temporary files, to the disk. These files are exclusively used by Xray and not shared with other services. Since the local storage used for Xray services is temporary, it does not require replication between the different nodes in a multi-node/HA deployment.
Xray File Handle Allocation Limit

Use the following command to determine the current file handle allocation limit.

```
cat /proc/sys/fs/file-max
```

Then, set the following parameters in your /etc/security/limits.conf file to the lower of 100,000 or the file handle allocation limit determined above.

The example shows the relevant parameters in the /etc/security/limits.conf file set to 100000. The actual setting for your installation may differ, depending on the file handle allocation limit in your system.
```
root hard nofile 100000
root soft nofile 100000
xray hard nofile 100000
xray soft nofile 100000
postgres hard nofile 100000
postgres soft nofile 100000
```

The following table lists the supported operating systems and their versions:
| Product | Debian | RHEL | Ubuntu | Amazon Linux | Windows Server |
|---|---|---|---|---|---|
| Xray | 10.x, 11.x | 8.x, 9.x | 20.04, 22.04 | Amazon Linux 2023 | |
Note
Debian 12.x and Ubuntu 24.04 are supported from Artifactory 7.104 and Distribution 2.28.
Windows 2022 is supported from Artifactory 7.125.
Supported Platforms
The following table lists the supported platforms:
| Product | x86-64 | ARM64 | Kubernetes | OpenShift |
|---|---|---|---|---|
| Xray | Supported | Supported | 1.27+ | 4.14+ |
Installation on Kubernetes environments is performed through Helm charts. The supported Helm version is 3.17+.
Kubernetes Sizing Requirements
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
ARM64 Support for Container-Based Installations
Artifactory, Xray, and Distribution support installation on ARM64 architecture through Helm and Docker installations. When deploying a product on an ARM64 platform, an external database must be set up, as Artifactory does not support the bundled database for ARM64 installations. The appropriate ARM64 container image is automatically pulled during the Helm or Docker installation process.
Database
Every artifact and build indexed by Xray is broken down into multiple components. These components and their interrelationships are represented in a checksum-based components graph. Xray uses PostgreSQL to store and query this components graph.
Xray supports the following PostgreSQL versions:
| Minimum PostgreSQL Version | Maximum PostgreSQL Version | Supported from Xray Version |
|---|---|---|
| 13.x | 17.x | 3.121 |
| 13.x | 16.x | 3.107 |
| 13.x | 15.x | 3.78.9 |
| 13.x | 14.x | 3.42 |
| 13.x | 13.x | 3.18 |
RabbitMQ
RabbitMQ is installed as part of the Xray installation for every node. In an HA architecture, Xray utilises queue mirroring and replication between different RabbitMQ nodes. The recommended installation method involves using split mode and setting up a separate 3-node RabbitMQ cluster with Xray HA.
Note: JFrog has added support for RabbitMQ Quorum Queues, available as an optional parameter in the system.yaml, because RabbitMQ has deprecated Classic Queue mirroring in version 4.x. Consequently, JFrog will also deprecate Classic Queue support and transition to Quorum Queues. It is recommended to enable Quorum Queues in Xray, as JFrog plans to fully transition to RabbitMQ 4.x and discontinue Classic Queue support in upcoming versions.
| RabbitMQ Version | Quorum Queues | Classic Queues | Erlang Version Compatibility |
|---|---|---|---|
| 3.7.x | Not supported | Must | From 19.3 to 22.x |
| 3.8.0+ | Recommended | Not recommended | From 23.2 to 24.3 |
| 3.13.0+ | Recommended | Not recommended | From 26.0 to 26.2.x |
| 4.x | Must | Not supported | From 26.2.x to 27.x |
Xray encompasses multiple flows, including scanning, impact analysis, and database synchronisation. These flows require processing by various Xray microservices. Flows comprise multiple steps completed by the Xray services. Xray uses RabbitMQ to manage these different flows and track synchronous and asynchronous communication between microservices.
Erlang
Xray incorporates Erlang and DB-Util as third-party dependencies. These packages are bundled with all Xray installers except for the Linux Archive.
Please ensure you are using the correct Erlang version corresponding to your Xray version:
- Xray version 3.124.x requires Erlang 26. For more information on RabbitMQ and Erlang compatibility, please refer to the RabbitMQ and Erlang/OTP Compatibility Matrix.
- Xray 3.124 and later versions require Erlang 27 if RabbitMQ 4.x is enabled using properties.
Xray Network Ports

Xray uses port 8082 by default for external communication.
Xray uses the following internal ports by default for communication with JFrog Platform microservices.
| Microservice | Port |
|---|---|
| Xray Server | 8000 |
| Analysis | 7000 |
| Indexer | 7002 |
| Persist | 7003 |
| Router | HTTP: 8082, 8046, 8049; gRPC: 8047 |
| RabbitMQ | 4369, 5671, 5672, 15672, 25672, and 35672 to 35682 |
| PostgreSQL (if you use the bundled PostgreSQL database) | 5432 |
| Observability | HTTP: 8036; gRPC: 8037 |
| Policy Enforcer | 7009 |
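On a dedicated node you can quickly check that none of the defaults above are already taken before installing. A minimal bash sketch using the `/dev/tcp` pseudo-device (shown only for the external port; extend the list as needed):

```shell
# Probe localhost for an existing listener on Xray's default external port.
# A failed connection means nothing is listening, so the port is free to use.
port=8082
if (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; then
  exec 3>&- 3<&-
  status="in use"
else
  status="free"
fi
echo "port $port is $status"
```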
Note
Currently, it is not possible to connect a JFrog product inside a Kubernetes cluster with another JFrog product outside the cluster, because the cluster is considered a separate network. Therefore, JFrog products cannot be joined together if one of them is in a cluster.
Note
External RabbitMQ instances are not officially supported; the recommended installation method is to use the bundled RabbitMQ.
Follow these steps to install the product:
Note
In our documentation, we use `oc` commands for code snippets related to OpenShift installation, but `kubectl` commands will also work.
- Add the charts.jfrog.io repository to your Helm client.

  ```
  helm repo add jfrog https://charts.jfrog.io
  ```

- Update the repository.

  ```
  helm repo update
  ```
- Next, create a unique master key. JFrog Xray requires a unique master key to be used by all microservices in the same cluster. By default, the chart has one set in values.yaml (`xray.masterKey`).

  Note: For production-grade installations it is strongly recommended to use a custom master key. If you initially use the default master key, it will be very hard to change it at a later stage. The default key is for demo purposes and should not be used in a production environment.

  Generate a unique key and pass it to the template during installation/upgrade.

  ```
  # Create a key
  export MASTER_KEY=$(openssl rand -hex 32)
  echo ${MASTER_KEY}
  ```

  You can pass this master key to the Helm installation through the Helm command or through the values.yaml file. The following example shows the values.yaml file with the master key.

  ```
  xray:
    masterKey: <master key value>
  ```

  Alternatively, you can create a secret containing the master key manually and pass it to the template during installation/upgrade.

  ```
  # Create a secret containing the key. The key in the secret must be named master-key
  oc create secret generic masterkey-secret --from-literal=master-key=${MASTER_KEY}
  ```

  You can pass this master key secret to the Helm installation through the Helm command (by passing masterkey-secret) or through the values.yaml file. The following example shows the values.yaml file with the master key secret.

  ```
  xray:
    masterKeySecretName: masterkey-secret
  ```

  Note: In either case, make sure to pass the same master key on all future calls to `helm install` and `helm upgrade`. In the first case, this means always passing `--set xray.masterKey=${MASTER_KEY}`; in the second, always passing `--set xray.masterKeySecretName=masterkey-secret` and ensuring the contents of the secret remain unchanged. You can also provide the master key or master key secret inside a values.yaml file and pass it along during the installation.
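Before wiring the key into Helm, it can be worth sanity-checking what openssl produced; `openssl rand -hex 32` yields 32 random bytes hex-encoded as a 64-character string. A small sketch:

```shell
# Generate a candidate master key and verify its length before use
MASTER_KEY=$(openssl rand -hex 32)
# 32 random bytes, hex-encoded, gives exactly 64 characters
if [ "${#MASTER_KEY}" -eq 64 ]; then
  echo "master key OK"
else
  echo "unexpected key length: ${#MASTER_KEY}"
fi
```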
- Installation requires a join key. You can pass the join key along with the Helm install/upgrade command or pass it in a values.yaml file.

  The following sample shows how to provide the join key in the values.yaml file.

  ```
  xray:
    joinKey: <join key value>
  ```

  Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

  ```
  oc create secret generic joinkey-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
  ```

  The following example shows the values.yaml file with the join key secret.

  ```
  xray:
    joinKeySecretName: joinkey-secret
  ```

  Note: In either case, make sure to pass the same join key on all future calls to `helm install` and `helm upgrade`. In the first case, this means always passing `--set xray.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>`; in the second, always passing `--set xray.joinKeySecretName=joinkey-secret` and ensuring that the contents of the secret remain unchanged.
- Enter the JFrog URL. You can either pass the JFrog URL along with the Helm install/upgrade command or pass it in the values.yaml file.

  The following example shows the values.yaml file with the JFrog URL.

  ```
  xray:
    jfrogUrl: <JFrog URL>
  ```
- For an HA Xray installation, set the replicaCount value to greater than 1 (3 is recommended). You can either pass the value along with the Helm install/upgrade command or pass it in the values.yaml file.

  The following example shows the values.yaml file with the replicaCount value.

  ```
  replicaCount: 3
  ```
- When you deploy the Xray Helm chart on an OpenShift cluster, you need to disable podSecurityContext and containerSecurityContext, because OpenShift automatically assigns an arbitrary UID block associated with the project.

  Set the following values in the values.yaml so that you can pass it along with the installation.

  ```
  containerSecurityContext:
    enabled: false
  podSecurityContext:
    enabled: false
  rbac:
    create: true
  serviceAccount:
    create: true
  rabbitmq:
    rbac:
      create: true
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
  ```
- To make PostgreSQL work on OpenShift, disable the securityContext at the pod and container level in the values.yaml file, and set the following values.

  ```
  postgresql:
    postgresqlPassword: password
    securityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    serviceAccount:
      enabled: true
  ```
- Create a values.yaml file with all the required configuration if you want to proceed with an installation that holds all the configurations in a single values.yaml file. You can also use separate configuration files for each configuration and pass them as separate YAML files.

  The following sample shows an example values.yaml file with the join key and JFrog URL.

  ```
  replicaCount: 3
  xray:
    jfrogUrl: http://artifactory.rt:8082
    joinKey: joinkey-secret
    masterKey: masterkey-secret
  containerSecurityContext:
    enabled: false
  podSecurityContext:
    enabled: false
  rbac:
    create: true
  serviceAccount:
    create: true
  rabbitmq:
    rbac:
      create: true
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    replicaCount: 1
  postgresql:
    postgresqlPassword: password
    securityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    serviceAccount:
      enabled: true
  ```

  The following sample shows an example values.yaml file with the join key as a secret and JFrog URL.

  ```
  replicaCount: 3
  xray:
    jfrogUrl: http://artifactory.rt:8082
    joinKeySecretName: joinkey-secret
    masterKeySecretName: masterkey-secret
  containerSecurityContext:
    enabled: false
  podSecurityContext:
    enabled: false
  rbac:
    create: true
  serviceAccount:
    create: true
  rabbitmq:
    rbac:
      create: true
    podSecurityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    replicaCount: 1
  postgresql:
    postgresqlPassword: password
    securityContext:
      enabled: false
    containerSecurityContext:
      enabled: false
    serviceAccount:
      enabled: true
  ```
- Run the Helm install command to proceed with the installation. The following command shows how you can pass the required values through a values.yaml file.

  ```
  helm upgrade --install xray --namespace xray -f values.yaml jfrog/xray
  ```

- To access the logs, find the name of the pod using the following command.

  ```
  oc --namespace <your namespace> get pods
  ```

- To get the container logs, run the following command.

  ```
  oc --namespace <your namespace> logs -f <name of the pod>
  ```
Note

Unlike other installations, Helm chart configurations are made in values.yaml and are then applied to system.yaml.

Follow these steps to apply the configuration changes.

- Make the changes to values.yaml.

- Run the command.

  ```
  helm upgrade --install xray --namespace xray -f values.yaml jfrog/xray
  ```
- Access Xray from your browser at http://<jfrogUrl>/ui/ and go to the Xray Security & Compliance tab in the Administration module in the UI.

- Check the status of your deployed Helm releases.

  ```
  helm status xray
  ```
After installing and before running Xray, you may set the following configurations.
You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/xray/var/etc folder. For more information, see Xray System YAML.
If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.
For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.
Xray requires a working Artifactory server and a suitable license. The Xray connection to Artifactory requires the following parameters.
- jfrogUrl

  URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs, for example, http://jfrog.acme.com or http://10.20.30.40:8082. Note that the /artifactory context is no longer required.

  Set it in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.

- join.key

  This is the "secret" key required by Artifactory for registering and authenticating the Xray server. You can fetch the Artifactory joinKey (join key) from the JPD UI under User Management | Settings | Join Key.

  Set the join.key used by your Artifactory server in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
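Taken together, the two parameters above land in the Shared Configurations section of system.yaml. A minimal sketch with placeholder values (key paths follow the JFrog system.yaml schema; verify against the template shipped with your version):

```
shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <join key value>
```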
Xray comes bundled with a PostgreSQL database out-of-the-box, which comes pre-configured with default credentials.
To change the default credentials:
```
# Access PostgreSQL as the Xray user, adding the optional -W flag to invoke the password prompt
$ psql -d xraydb -U xray -W

# Securely change the password for user "xray". Enter and then retype the password at the prompt.
\password xray

# Verify the update was successful by logging in with the new credentials
$ psql -d xraydb -U xray -W
```

Set your PostgreSQL connection details in the Shared Configurations section of the $JFROG_HOME/xray/var/etc/system.yaml file.
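After changing the password, the matching connection details in the Shared Configurations section of system.yaml would look something like this sketch (host and password are placeholders; key names follow the JFrog system.yaml schema, so check them against your template):

```
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: postgres://<db-host>:5432/xraydb?sslmode=disable
    username: xray
    password: <new password>
```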
Xray comes pre-installed with RabbitMQ, with the Erlang cookie value set as the RabbitMQ password for guest users.
By default, RabbitMQ uses the short hostnames of other nodes in the cluster for communication. However, it can be configured to use fully qualified domain names (FQDN), that is, long hostnames.
To configure RabbitMQ to use FQDN, follow these steps.
- Install Xray, but do not start the services.

- Modify the following files according to the installer type.

  Common change in all installers, in system.yaml:

  ```
  shared:
    node:
      id: <long hostname>
      name: <long hostname>
  ## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
  # shared:
  #   rabbitMq:
  #     active:
  #       node:
  #         name: <long hostname of active node>
  ```

- Start RabbitMQ and the Xray services.
Xray supports using an external log collector such as Sumo Logic or Splunk.
To allow the log collection service to perform read operations on the generated log files, adjust the permissions as follows.
- Add the log collection service user to the relevant group if needed (the user and group that installed and started Xray).

- Apply the user and group permissions as needed on the $JFROG_HOME/xray/var/log directory using:

  ```
  $ chmod -R 640 $JFROG_HOME/xray/var/log
  ```

- Adjust the group read inheritance permissions (setgid bit) using:

  ```
  $ chmod -R 2755 $JFROG_HOME/xray/var/log
  ```

  This command enables the generated log files to inherit the folder's group permissions.
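To see what the setgid bit does before touching the real log directory, you can experiment on a scratch directory first. A minimal sketch (Linux, GNU stat):

```shell
# Demonstrate mode 2755 (setgid + rwxr-xr-x) on a throwaway directory
demo=$(mktemp -d)
chmod 2755 "$demo"
mode=$(stat -c '%a' "$demo")
echo "directory mode: $mode"
# Files created inside inherit the directory's group thanks to the setgid bit
touch "$demo/app.log"
rm -rf "$demo"
```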