Xray HA Helm Installation
Before you proceed with the installation, review the system requirements. Xray's system requirements are dependent on the scale of your environment.
Sizings
Non-Kubernetes Deployment
Up to 100K indexed artifacts. No High Availability
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray and DB | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ | 1 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 1 | 6 | 24 GB | 500 GB (SSD, 3000 IOPS) |
Up to 1M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 2 | 4 | 8 GB | 300 GB |
| DB | 1 | 8 | 32 GB | 500 GB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 2 | 8 | 24 GB | 300 GB |
Up to 2M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 6 | 12 GB | 300 GB |
| DB | 1 | 16 | 32 GB | 1 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 4 | 8 | 24 GB | 300 GB |
Up to 10M indexed artifacts
| Component | Nodes | CPU Cores | Memory | Disk Space |
|---|---|---|---|---|
| Xray | 3 | 8 | 24 GB | 300 GB |
| DB | 1 | 16 | 64 GB | 2.5 TB (SSD, 3000 IOPS) |
| RabbitMQ Split | 3 | 4 | 8 GB | 100 GB (SSD, 3000 IOPS) |
| JFrog Advanced Security | 8 | 8 | 24 GB | 300 GB |
Over 10M indexed artifacts
Contact JFrog Support.
For Xray HA (High Availability) installations (more than one node and more than 100K indexed artifacts), it is recommended to install RabbitMQ and Xray on separate servers using split mode. For further details, please refer to our documentation.
RabbitMQ Note
RabbitMQ is a crucial component of the Xray architecture, acting as a message broker for communication between various application services. Multiple queues facilitate communication channels between producers and consumers. For more information, please refer to our page here.
Kubernetes Deployment
We have included YAML files with different sizing configurations for Artifactory, Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
Note: From JFrog Xray Helm chart version 103.124, JFrog added support for Quorum Queues in RabbitMQ. Quorum queues require a minimum of three RabbitMQ nodes and ensure fault tolerance of at least one node. A non-HA setup with a single node is also supported.
Note: We recommend a three-node RabbitMQ cluster for Xray. Larger clusters increase replication overhead and can reduce throughput; our application has been tuned to work in a three-node setup.
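As a sketch of how the three-node recommendation might be expressed in chart values (the `rabbitmq.replicaCount` key is an assumption based on recent jfrog/xray charts; verify against your chart version):

```yaml
# Hypothetical values.yaml fragment -- the rabbitmq.replicaCount key is an
# assumption; confirm the key name against your jfrog/xray chart version.
rabbitmq:
  replicaCount: 3   # quorum queues need three nodes to tolerate one node failure
```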
Note
Currently, a JFrog product running inside a Kubernetes cluster cannot be connected to a JFrog product running outside the cluster, because the cluster is considered a separate network. JFrog products therefore cannot be joined together if one of them is inside a cluster.
Note
External RabbitMQ instances are not officially supported; the recommended installation method is to use the bundled RabbitMQ.
Follow these steps to install the product:

- Add the charts.jfrog.io repository to your Helm client.

  ```shell
  helm repo add jfrog https://charts.jfrog.io
  ```

- Update the repository.

  ```shell
  helm repo update
  ```

- Next, create a unique master key. JFrog Xray requires a unique master key to be used by all microservices in the same cluster. By default, the chart has one set in `values.yaml` (`xray.masterKey`).
Note

For production-grade installations, it is strongly recommended to use a custom master key. The default key is for demo purposes and should not be used in a production environment; if you initially use the default master key, it will be very hard to change it at a later stage.
  Generate a unique key and pass it to the template during installation/upgrade.

  ```shell
  # Create a key
  export MASTER_KEY=$(openssl rand -hex 32)
  echo ${MASTER_KEY}
  ```

  You can pass this master key to the Helm installation through the Helm command or through the `values.yaml` file.

  The following example shows the `values.yaml` file with the master key.

  ```yaml
  xray:
    masterKey: <master key value>
  ```

  Alternatively, you can create a secret containing the master key manually and pass it to the template during installation/upgrade.

  ```shell
  # Create a secret containing the key. The key in the secret must be named master-key
  kubectl create secret generic masterkey-secret --from-literal=master-key=${MASTER_KEY}
  ```

  You can pass this master key secret to the Helm installation through the Helm command (by passing masterkey-secret) or through the `values.yaml` file.

  The following example shows the `values.yaml` file with the master key secret.

  ```yaml
  xray:
    masterKeySecretName: masterkey-secret
  ```

  Note

  In either case, make sure to pass the same master key on all future calls to `helm install` and `helm upgrade`. In the first case, this means always passing `--set xray.masterKey=${MASTER_KEY}`. In the second, this means always passing `--set xray.masterKeySecretName=masterkey-secret` and ensuring the contents of the secret remain unchanged. You can also provide the master key or master key secret inside a `values.yaml` file and pass it along during the installation.
- Installation requires a join key. You can pass the join key along with the Helm install/upgrade command or in a `values.yaml` file.

  The following sample shows how to provide the join key in the `values.yaml` file.

  ```yaml
  xray:
    joinKey: <join key value>
  ```

  Alternatively, you can manually create a secret containing the join key and then pass it to the template during install/upgrade. The key must be named join-key.

  ```shell
  kubectl create secret generic joinkey-secret --from-literal=join-key=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>
  ```

  The following example shows the `values.yaml` file with the join key secret.

  ```yaml
  xray:
    joinKeySecretName: joinkey-secret
  ```
  Note

  In either case, make sure to pass the same join key on all future calls to `helm install` and `helm upgrade`. In the first case, this means always passing `--set xray.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY>`. In the second, this means always passing `--set xray.joinKeySecretName=joinkey-secret` and ensuring that the contents of the secret remain unchanged.
- Enter the JFrog URL. You can either pass the JFrog URL along with the Helm install/upgrade command or in the `values.yaml` file.

  The following example shows the `values.yaml` file with the JFrog URL.

  ```yaml
  xray:
    jfrogUrl: <JFrog URL>
  ```

- For an HA Xray installation, set the `replicaCount` value to greater than 1 (the recommended value is 3). You can either pass the value along with the Helm install/upgrade command or in the `values.yaml` file.

  The following example shows the `values.yaml` file with the `replicaCount` value.

  ```yaml
  replicaCount: 3
  ```
- Create a `values.yaml` file with all the required configuration if you want to proceed with an installation that holds all the configuration in a single `values.yaml` file. You can also use separate configuration files for each configuration and pass them as separate YAML files.

  The following sample shows an example `values.yaml` file with the join key and master key as secrets, `replicaCount`, and the JFrog URL.

  ```yaml
  replicaCount: 3
  xray:
    jfrogUrl: http://artifactory.rt:8082
    joinKeySecretName: joinkey-secret
    masterKeySecretName: masterkey-secret
  ```

  The following sample shows an example `values.yaml` file with the join key, master key, `replicaCount`, and the JFrog URL.

  ```yaml
  replicaCount: 3
  xray:
    jfrogUrl: http://artifactory.rt:8082
    joinKey: EEEEEEEEEEEEEEEEEEEEEEEEEEEEEEEE
    masterKey: FFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFF
  ```
- Run the Helm install command to proceed with the installation.

  The following command shows how you can pass the required values through a `values.yaml` file.

  ```shell
  helm upgrade --install xray --namespace xray -f values.yaml jfrog/xray
  ```

  The following command shows how you can pass the required values along with the command.

  ```shell
  helm upgrade --install xray --set xray.masterKey=${MASTER_KEY} \
    --set xray.joinKey=<YOUR_PREVIOUSLY_RETRIEVED_JOIN_KEY> \
    --set xray.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> \
    --namespace xray jfrog/xray
  ```

  The following command shows how you can pass the join key and master key as secrets along with the command.

  ```shell
  helm upgrade --install xray --set xray.masterKeySecretName=masterkey-secret \
    --set xray.joinKeySecretName=joinkey-secret \
    --set xray.jfrogUrl=<YOUR_PREVIOUSLY_RETRIEVED_BASE_URL> \
    --namespace xray jfrog/xray
  ```
- To access the logs, find the name of the pod using the following command.

  ```shell
  kubectl --namespace <your namespace> get pods
  ```

- To get the container logs, run the following command.

  ```shell
  kubectl --namespace <your namespace> logs -f <name of the pod>
  ```
Note

Unlike other installations, Helm chart configurations are made to the `values.yaml` file and are then applied to the `system.yaml`. Follow these steps to apply configuration changes.

- Make the changes to `values.yaml`.

- Run the following command.

  ```shell
  helm upgrade --install xray --namespace xray -f values.yaml jfrog/xray
  ```
- Access Xray from your browser at `http://<jfrogUrl>/ui/` and go to the Xray Security & Compliance tab in the Administration module in the UI.

- Check the status of your deployed Helm releases.

  ```shell
  helm status xray
  ```
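The master key command used in the steps above can be sanity-checked locally: `openssl rand -hex 32` emits 32 random bytes encoded as 64 lowercase hexadecimal characters, which is the format the chart examples expect.

```shell
# Generate a candidate master key and confirm its format (64 hex characters)
MASTER_KEY=$(openssl rand -hex 32)
echo "${#MASTER_KEY}"                              # prints 64
echo "${MASTER_KEY}" | grep -cE '^[0-9a-f]{64}$'   # prints 1 for a valid key
```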
Note
For advanced installation options, see Helm Charts Installers for Advanced Users.
After installing and before running Xray, you may set the following configurations.
You can configure all your system settings using the system.yaml file located in the $JFROG_HOME/xray/var/etc folder. For more information, see Xray System YAML.
If you don't have a System YAML file in your folder, copy the template available in the folder and name it system.yaml.
For the Helm charts, the system.yaml file is managed in the chart’s values.yaml.
Xray requires a working Artifactory server and a suitable license. The Xray connection to Artifactory requires the following parameters.
-
jfrogUrl
URL to the machine where JFrog Artifactory is deployed, or the load balancer pointing to it. It is recommended to use DNS names rather than direct IPs. For example: `http://jfrog.acme.com` or `http://10.20.30.40:8082`. Note that the `/artifactory` context is no longer required.

Set it in the Shared Configurations section of the `$JFROG_HOME/xray/var/etc/system.yaml` file.

-
join.key
This is the "secret" key required by Artifactory for registering and authenticating the Xray server.
You can fetch the Artifactory `joinKey` (join key) from the JPD UI under User Management | Settings | Join Key.

Set the join.key used by your Artifactory server in the Shared Configurations section of the `$JFROG_HOME/xray/var/etc/system.yaml` file.
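Taken together, the two parameters above live in the Shared Configurations section of `system.yaml`. A minimal sketch with placeholder values (key layout follows the JFrog system.yaml schema; verify against your version's template):

```yaml
# Sketch of $JFROG_HOME/xray/var/etc/system.yaml -- placeholder values
shared:
  jfrogUrl: http://jfrog.acme.com
  security:
    joinKey: <join key fetched from the JPD UI>
```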
Xray comes bundled with a PostgreSQL database out-of-the-box, which comes pre-configured with default credentials.
To change the default credentials:
```shell
# Access PostgreSQL as the Xray user, adding the optional -W flag to invoke the password prompt
$ psql -d xraydb -U xray -W

# Securely change the password for user "xray". Enter and then retype the password at the prompt.
\password xray

# Verify the update was successful by logging in with the new credentials
$ psql -d xraydb -U xray -W
```

Set your PostgreSQL connection details in the Shared Configurations section of the `$JFROG_HOME/xray/var/etc/system.yaml` file.
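If the database password changes, the connection details in `system.yaml` must be updated to match. A sketch of the relevant block (key layout follows the JFrog system.yaml schema; values are placeholders):

```yaml
# Placeholder values -- adjust host, database name, and password for your environment
shared:
  database:
    type: postgresql
    driver: org.postgresql.Driver
    url: postgres://localhost:5432/xraydb?sslmode=disable
    username: xray
    password: <new password>
```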
Xray comes pre-installed with RabbitMQ, with the Erlang cookie value set as the RabbitMQ password for guest users.
By default, RabbitMQ uses the short hostnames of other nodes in the cluster for communication. However, it can be configured to use fully qualified domain names (FQDNs, or long hostnames).
To configure RabbitMQ to use FQDN, follow these steps.
- Install Xray, but do not start the services.

- Modify the following files according to the installer type.

  Common change in all installers, in `system.yaml`:

  ```yaml
  shared:
    node:
      id: <long hostname>
      name: <long hostname>
  ## For secondary nodes only, provide the hostname of any of the active nodes where the RabbitMQ service is running.
  # shared:
  #   rabbitMq:
  #     active:
  #       node:
  #         name: <long hostname of active node>
  ```

- Start RabbitMQ and the Xray services.
Xray enables using an external log collector such as Sumologic or Splunk.
Adjust the permissions to allow the log collection service to perform read operations on the generated log files.
- Add the log collection service user to the relevant group if needed (the user and group that installed and started Xray).

- Apply the user and group permissions as needed on the `$JFROG_HOME/xray/var/log` directory using:

  ```shell
  $ chmod -R 640 $JFROG_HOME/xray/var/log
  ```

- Adjust the group read inheritance permissions (setgid bit) using:

  ```shell
  $ chmod -R 2755 $JFROG_HOME/xray/var/log
  ```

  This command enables the generated log files to inherit the folder's group permissions.
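The setgid step above can be illustrated on a scratch directory (illustrative only, not the Xray log path): after `chmod 2755`, the group execute position in the mode string shows `s` instead of `x`, and files created inside inherit the directory's group.

```shell
# Illustrative only: how the setgid bit appears in a directory's mode string
mkdir -p /tmp/setgid-demo
chmod 2755 /tmp/setgid-demo
ls -ld /tmp/setgid-demo | cut -c1-10   # prints drwxr-sr-x
rmdir /tmp/setgid-demo
```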
Xray System Requirements and Platform Support
The following table lists the supported operating systems and their versions:
| Product | Debian | RHEL | Ubuntu | Amazon Linux | Windows Server |
|---|---|---|---|---|---|
| Xray | 10.x, 11.x | 8.x, 9.x | 20.04, 22.04 | Amazon Linux 2023 | |
Note
Debian 12.x and Ubuntu 24.04 are supported from Artifactory 7.104 and Distribution 2.28.
Windows 2022 is supported from Artifactory 7.125.
Supported Platforms
The following table lists the supported platforms:
| Product | x86-64 | ARM64 | Kubernetes | OpenShift |
|---|---|---|---|---|
| Xray | ✓ | ✓ | 1.27+ | 4.14+ |
Installation on Kubernetes environments is through Helm Charts. Supported Helm version is Helm 3.17+.
Kubernetes Sizing Requirements
We have included YAML files with different sizing configurations for Artifactory , Xray, and Distribution in our GitHub pages. You can use these YAML files when you set up your cluster.
ARM64 Support for Container-Based Installations
Artifactory, Xray, and Distribution support installation on ARM64 architecture, specifically through Helm and Docker installations. When deploying a product on an ARM64 platform, an external database must be set up, as Artifactory does not support the bundled database for ARM64 installations. The appropriate ARM64 container image is automatically pulled during the Helm or Docker installation process.
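For an ARM64 Helm install, this means disabling the bundled database and pointing the chart at an external one. A sketch, assuming the jfrog/xray chart's `postgresql.enabled` and `database.*` keys (verify the key names against your chart version):

```yaml
# Hypothetical values.yaml fragment -- key names assumed from the jfrog/xray chart
postgresql:
  enabled: false   # do not deploy the bundled PostgreSQL
database:
  url: postgres://db.example.com:5432/xraydb?sslmode=disable
  user: xray
  password: <database password>
```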