Load Balancer
Configure a load balancer as the entry point for JFrog Platform HA deployments to distribute requests across Artifactory nodes.
The load balancer is the entry point to your High Availability installation and distributes requests across the server nodes in your system. Set up your load balancer before directing traffic to the platform.
How does it work?
When an Artifactory node receives a request from the load balancer, the JFrog Router on that node forwards the request to the appropriate JFrog service (such as Xray or Distribution).
For more information about the complete system overview, see System Architecture.
Set up Your Load Balancer
JFrog supports using a reverse proxy, which retrieves resources from one or more servers. In HA configurations, use a load balancer instead of a reverse proxy. One load balancer is required per JFrog Platform Deployment (JPD). Configure the load balancer to target the Artifactory instances in your JPD. For more information, see Using a Load Balancer in High Availability Setups.
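As an illustration, a minimal Nginx load balancer for a two-node HA cluster might look like the following sketch (hostnames, server name, and the two-node topology are placeholder assumptions; port 8082 is the default JFrog Router port):

```nginx
# Hypothetical upstream of two Artifactory HA nodes (hostnames are examples)
upstream artifactory_ha {
    server artifactory-node1.example.com:8082;
    server artifactory-node2.example.com:8082;
}

server {
    listen 80;
    server_name artifactory.example.com;

    location / {
        proxy_pass http://artifactory_ha;
        # Preserve the original client and host information for the router
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
```

Equivalent upstream pools and forwarded headers can be configured on any load balancer (HAProxy, cloud load balancers, and so on).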
TLS Enabled
If you have enabled TLS in your JFrog Platform Deployment, the JFrog Router blocks all non-TLS connections. Set up a secure connection between your load balancer and your JFrog Platform Deployment by adding the JPD TLS certificate to your load balancer key store. For more information, see Managing TLS Certificates.
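For example, with an Nginx load balancer you can proxy to the TLS-enabled router over HTTPS and trust the JPD certificate (the certificate path and upstream name here are illustrative assumptions):

```nginx
location / {
    # Forward to the TLS-enabled JFrog Router over HTTPS
    proxy_pass https://artifactory_ha;
    # Trust the JPD TLS certificate exported from the platform (path is an example)
    proxy_ssl_trusted_certificate /etc/nginx/certs/jpd-ca.crt;
    proxy_ssl_verify on;
}
```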
Configure Health Check
Use the following endpoints to configure the health check for your load balancer. For more information, see SYSTEM & CONFIGURATION REST APIs.
System Health Ping
Description: Get a simple status response about the state of Artifactory.
Since: 2.3.0
Security: Requires a valid user (can be anonymous).
Usage: GET /artifactory/api/system/ping
Produces: text/plain
Sample Output:
GET /artifactory/api/system/ping
OK

Response status codes:
200 - Successful request. Returns 'OK' text if Artifactory is working properly; otherwise returns an HTTP error code with a reason.
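A load balancer health check against this endpoint only needs to confirm an HTTP 200 status with the body `OK`. A minimal Python sketch of that logic (the base URL passed in is a placeholder for your own JPD address):

```python
import urllib.request


def ping_ok(status: int, body: str) -> bool:
    """Interpret a response from /artifactory/api/system/ping."""
    return status == 200 and body.strip() == "OK"


def check_artifactory(base_url: str, timeout: float = 5.0) -> bool:
    """Query the ping endpoint; base_url is a placeholder such as http://artifactory.example.com."""
    try:
        with urllib.request.urlopen(
            f"{base_url}/artifactory/api/system/ping", timeout=timeout
        ) as resp:
            return ping_ok(resp.status, resp.read().decode())
    except OSError:
        return False
```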
Readiness Probe
Description: Returns a simple status response about the state of Artifactory using a Kubernetes-style readiness probe. This endpoint replaces the system/ping endpoint above (system/ping remains in place for legacy systems). The probe measures system latency and provides a metric for monitoring.
Since: 7.31.x
Security: Requires a valid user (can be anonymous).
Usage: GET /artifactory/api/v1/system/readiness
Produces: application/json
Sample Output:
{
"code": "OK"
}

Response status codes:
200 - Successful request. Returns 'OK' if Artifactory is working properly; otherwise returns an HTTP error code with a reason.
Liveness Probe
Description: Get a status response indicating whether the Artifactory container is alive; orchestrators such as Kubernetes restart a container whose liveness probe fails.
Since: 7.31.x
Security: Requires a valid user (can be anonymous).
Usage: GET /artifactory/api/v1/system/liveness
Produces: application/json
Sample Output:
{
"code": "OK"
}

Response status codes:
200 - Successful request. Returns 'OK' if working properly; otherwise returns an HTTP error code with a reason.
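The readiness and liveness probes return the same JSON shape, so a single helper can interpret either response when wiring up load balancer or orchestrator checks (a sketch; the endpoint paths are the ones shown in the usage lines above):

```python
import json


def probe_ok(status: int, body: str) -> bool:
    """Interpret a response from /api/v1/system/readiness or /api/v1/system/liveness."""
    if status != 200:
        return False
    try:
        return json.loads(body).get("code") == "OK"
    except json.JSONDecodeError:
        return False
```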
Helm Deployments
For Kubernetes deployments using Helm charts, the load balancer is provisioned as a Kubernetes Service of type LoadBalancer. Configure the service type, static IP, source IP restrictions, and cloud-provider annotations through the nginx.service.* Helm values.
- Configure Nginx Service and Load Balancer for Helm — Full parameter reference for `nginx.service.*`, including service types, static IP, source ranges, and cloud annotations for AWS, GKE, and AKS
- Nginx SSL Termination at the Load Balancer — Offload TLS termination to the load balancer layer using cloud-managed certificates
- Ingress Behind Another Load Balancer — Preserve `X-Forwarded-*` headers when using Nginx Ingress Controller behind an external LB
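For instance, a values snippet that assigns a static IP and restricts access to a known CIDR range might look like the following sketch (the IP addresses and the AWS annotation are placeholder assumptions; consult the Helm parameter reference above for the full set of `nginx.service.*` values):

```yaml
nginx:
  service:
    type: LoadBalancer
    loadBalancerIP: 203.0.113.10          # placeholder static IP
    loadBalancerSourceRanges:
      - 192.0.2.0/24                      # placeholder allowed CIDR
    annotations:
      service.beta.kubernetes.io/aws-load-balancer-type: nlb   # example AWS annotation
```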