Advanced Storage Options
Configure Artifactory Helm filestore: file-system, NFS, S3, GCS, Azure Blob, external databases, and custom binarystore settings.
The filestore is where binaries are physically stored. JFrog Artifactory supports a wide range of storage backends. For more information, see Artifactory Filestore options.
Setting the Artifactory Persistency Storage Type
In the Helm chart, set the storage type with `artifactory.persistence.type` and pass the required configuration settings. The default storage in this chart is file-system replication, where the data is replicated to all nodes.
Important
All storage configurations except Network File System (NFS) include a default `artifactory.persistence.redundancy` parameter that sets how many binary replicas are stored across the cluster's nodes. Once set on initial deployment, this value cannot be updated using Helm. Set it to a number greater than half your cluster's size, and never scale the cluster below that number.
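For example, a values.yaml fragment for a five-node cluster might pin the redundancy at first install (the value 3 here is illustrative; choose a number greater than half your own cluster's size):

```yaml
artifactory:
  persistence:
    type: file-system
    redundancy: 3   # > half of a 5-node cluster; cannot be changed via Helm later
```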
To use your selected bucket as the HA's filestore, pass the filestore's parameters to the Helm installation/upgrade.
Setting up the Network File System (NFS) Storage
To use an NFS server as your cluster's storage, complete the following steps.
1. Set up an NFS server and get its IP as `NFS_IP`.

2. Create `data` and `backup` directories on the NFS export and grant write permissions to all.

3. Pass the NFS parameters to the Helm installation/upgrade as follows.

```yaml
artifactory:
  persistence:
    type: nfs
    nfs:
      ip: ${NFS_IP}
```
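Creating the `data` and `backup` directories on the NFS export can be sketched as follows; the export path used here is a placeholder to replace with your server's actual export directory.

```shell
# Run on the NFS server. EXPORT_DIR is a placeholder; substitute your real export path.
EXPORT_DIR=/tmp/nfs-export
mkdir -p "$EXPORT_DIR/data" "$EXPORT_DIR/backup"
# Grant write permissions to all, as the chart requires
chmod -R 777 "$EXPORT_DIR/data" "$EXPORT_DIR/backup"
```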
Configuring the NFS Persistence Type
In some cases, the Helm chart cannot set up NFS mounts automatically for Artifactory. In these cases (for example, AWS EFS), use artifactory.persistence.type=file-system even though the underlying persistence is a network file system.
The same applies when using slow storage devices (such as standard spinning disks) as the main storage solution. Serving frequently accessed files from slow storage can degrade performance, making a local cache filesystem on fast disks (such as SSD) desirable.
1. Create a `values.yaml` file.

2. Set up your volume mount to your fast storage device as follows.

```yaml
artifactory:
  ## Set up your volume mount to your fast storage device
  customVolumes: |
    - name: my-cache-fast-storage
      persistentVolumeClaim:
        claimName: my-cache-fast-storage-pvc
  customVolumeMounts: |
    - name: my-cache-fast-storage
      mountPath: /my-fast-cache-mount
  ## Enable caching and configure the cache directory
  persistence:
    cacheProviderDir: /my-fast-cache-mount
    fileSystem:
      cache:
        enabled: true
```

3. Install Artifactory with the values file you created.
Artifactory

```shell
helm upgrade --install artifactory jfrog/artifactory --namespace artifactory -f values.yaml
```

Artifactory HA

```shell
helm upgrade --install artifactory-ha jfrog/artifactory-ha --namespace artifactory-ha -f values.yaml
```
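The cache values above reference a PersistentVolumeClaim named `my-cache-fast-storage-pvc`, which you typically create yourself before installing the chart. A minimal sketch of such a claim follows; the `fast-ssd` storage class and the `100Gi` size are assumptions to adapt to your cluster.

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-cache-fast-storage-pvc
  namespace: artifactory
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: fast-ssd   # assumption: substitute your SSD-backed storage class
  resources:
    requests:
      storage: 100Gi           # assumption: size the cache for your workload
```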
Google Storage
Use a Google Cloud Storage bucket as the cluster's filestore by passing the parameters to helm install and helm upgrade. For more information, see Google Storage Binary Provider.
```yaml
artifactory:
  persistence:
    type: google-storage-v2-direct
    googleStorage:
      bucketName: "artifactory-gcp"
```

To use a GCP service account, Artifactory requires a `gcp.credentials.json` file in the same directory as the `binarystore.xml` file.
Generate it by running:

```shell
gcloud iam service-accounts keys create <file_name> --iam-account <service_account_name>
```

Save the output to a file or copy it directly into your values.yaml.
```json
{
  "type": "service_account",
  "project_id": "<project_id>",
  "private_key_id": "?????",
  "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
  "client_email": "???@j<project_id>.iam.gserviceaccount.com",
  "client_id": "???????",
  "auth_uri": "https://accounts.google.com/o/oauth2/auth",
  "token_uri": "https://oauth2.googleapis.com/token",
  "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
  "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
}
```

One option is to create your own secret and pass it to your helm install in a custom values.yaml.
```shell
# Create the Kubernetes secret from the file you created earlier.
# IMPORTANT: The file must be called "gcp.credentials.json" because this is used later as the secret key!
kubectl create secret generic artifactory-gcp-creds --from-file=./gcp.credentials.json
```

Set this secret in your custom values.yaml.
```yaml
artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        customSecretName: artifactory-gcp-creds
```

Alternatively, put the config directly in your values.yaml; the chart creates a secret from it.
```yaml
artifactory:
  persistence:
    googleStorage:
      gcpServiceAccount:
        enabled: true
        config: |
          {
            "type": "service_account",
            "project_id": "<project_id>",
            "private_key_id": "?????",
            "private_key": "-----BEGIN PRIVATE KEY-----\n????????==\n-----END PRIVATE KEY-----\n",
            "client_email": "???@j<project_id>.iam.gserviceaccount.com",
            "client_id": "???????",
            "auth_uri": "https://accounts.google.com/o/oauth2/auth",
            "token_uri": "https://oauth2.googleapis.com/token",
            "auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
            "client_x509_cert_url": "https://www.googleapis.com/robot/v1....."
          }
```

AWS S3 V3
To use an AWS S3 bucket as the cluster's filestore and access it with the official AWS SDK, see the S3 Official SDK Binary Provider. Use this template if you want to create an IAM OIDC provider and assign the IAM role to Kubernetes service accounts.
Pass the AWS S3 V3 parameters and the annotation pointing to the IAM role to your helm install in a custom values.yaml.
```yaml
# Using an existing IAM role
serviceAccount:
  create: true
  annotations:
    eks.amazonaws.com/role-arn: arn:aws:iam::<ACCOUNT_ID>:role/<IAM_ROLE_NAME>
artifactory:
  persistence:
    type: s3-storage-v3-direct
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
      useInstanceCredentials: true
```

```yaml
# Using explicit credentials
artifactory:
  persistence:
    type: s3-storage-v3-direct
    awsS3V3:
      region: ${AWS_REGION}
      bucketName: ${AWS_S3_BUCKET_NAME}
      identity: ${AWS_ACCESS_KEY_ID}
      credential: ${AWS_SECRET_ACCESS_KEY}
      useInstanceCredentials: false
```

To enable Direct Cloud Storage Download, use the following.
```yaml
artifactory:
  persistence:
    awsS3V3:
      enableSignedUrlRedirect: true
```

Microsoft Azure Blob Storage
You can use Azure Blob Storage as the cluster's filestore by passing the Azure Blob Storage parameters to helm install and helm upgrade. For more information, see Azure Blob Storage.
```yaml
artifactory:
  persistence:
    type: azure-blob-storage-v2-direct
    azureBlob:
      accountName: ${AZURE_ACCOUNT_NAME}
      accountKey: ${AZURE_ACCOUNT_KEY}
      endpoint: ${AZURE_ENDPOINT}
      containerName: ${AZURE_CONTAINER_NAME}
```

Custom binarystore.xml
Provide a custom binarystore.xml using one of two methods:

1. Edit it directly in the values.yaml.

```yaml
artifactory:
  persistence:
    binarystoreXml: |
      <!-- The custom XML snippet -->
      <config version="v1">
        <chain template="file-system"/>
      </config>
```

2. Create your own secret and pass it to your helm install command.

```yaml
# Prepare your custom Secret file (custom-binarystore.yaml)
kind: Secret
apiVersion: v1
metadata:
  name: custom-binarystore
  labels:
    app: artifactory
    chart: artifactory
stringData:
  binarystore.xml: |-
    <!-- The custom XML snippet -->
    <config version="v1">
      <chain template="file-system"/>
    </config>
```

Next, create the secret from the file.

```shell
kubectl apply -n artifactory -f ./custom-binarystore.yaml
```

Then pass the secret to your helm install command.

Artifactory

```shell
helm upgrade --install artifactory --namespace artifactory --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory
```

Artifactory HA

```shell
helm upgrade --install artifactory-ha --namespace artifactory-ha --set artifactory.persistence.customBinarystoreXmlSecret=custom-binarystore jfrog/artifactory-ha
```
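Before wrapping a custom chain in a secret, a quick well-formedness check on the XML snippet can catch typos early. A sketch using Python's standard library; the local file name `binarystore.xml` is illustrative, and the contents mirror the file-system example above.

```shell
# Write the custom chain to a local file for inspection
cat > binarystore.xml <<'EOF'
<config version="v1">
  <chain template="file-system"/>
</EOF_PLACEHOLDER>
EOF
# Parse it; a malformed snippet makes this command exit non-zero
python3 -c "import xml.dom.minidom; xml.dom.minidom.parse('binarystore.xml'); print('binarystore.xml is well-formed')"
```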