kubectl: Connection Refused — The Connection to the Server Was Refused
The connection to the server localhost:8080 was refused — did you specify the right host or port?
This error means kubectl can’t connect to a Kubernetes API server. Either the API server isn’t running, it isn’t reachable from your machine, or kubectl is pointed at the wrong cluster.
What causes this
kubectl reads connection details from your kubeconfig file (default: ~/.kube/config). When it can’t reach the API server at the configured address, you get “connection refused.” The localhost:8080 variant specifically means kubectl has no valid kubeconfig at all — it’s falling back to the default address.
Common triggers:
- The Kubernetes cluster isn’t running (Docker Desktop, minikube, or kind is stopped)
- The KUBECONFIG environment variable points to a missing or invalid file
- ~/.kube/config doesn’t exist or has no valid context
- The cluster was deleted or the API server endpoint changed
- A VPN or network change made the cluster unreachable
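A quick shell check narrows down which of these you’re hitting before you start fixing things. A minimal sketch (diagnose_kubeconfig is a hypothetical helper; the paths are kubectl’s defaults, and only the first entry of a colon-separated KUBECONFIG is checked):

```shell
# Sketch: figure out whether kubectl has any config to read at all.
diagnose_kubeconfig() {
  # KUBECONFIG may be a colon-separated list; check only its first entry here
  if [ -n "${KUBECONFIG:-}" ] && [ ! -f "${KUBECONFIG%%:*}" ]; then
    echo "KUBECONFIG points at a missing file: $KUBECONFIG"
  elif [ -z "${KUBECONFIG:-}" ] && [ ! -f "$HOME/.kube/config" ]; then
    echo "no kubeconfig found at ~/.kube/config"
  else
    echo "kubeconfig present; check its contexts next"
  fi
}
diagnose_kubeconfig
```

If this prints either of the first two messages, you are in the localhost:8080 case; fix the kubeconfig before debugging the cluster itself.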
Fix 1: Start your local cluster
If you’re using a local Kubernetes setup, make sure it’s actually running:
# minikube
minikube status
minikube start
# kind
kind get clusters
kind create cluster # If no cluster exists
# Docker Desktop
# Open Docker Desktop → Settings → Kubernetes → Enable Kubernetes
After starting, verify the connection:
kubectl cluster-info
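Local clusters can take a moment to come up after minikube start or kind create cluster, so a one-off kubectl cluster-info may fail even though the cluster is still booting. A small retry loop avoids racing the API server (a sketch; retry_until is a hypothetical helper):

```shell
# Sketch: retry a command a fixed number of times with a delay between tries.
retry_until() {  # usage: retry_until <attempts> <delay_seconds> <command...>
  local attempts=$1 delay=$2 i
  shift 2
  for i in $(seq 1 "$attempts"); do
    if "$@"; then return 0; fi
    sleep "$delay"
  done
  return 1
}

# e.g. wait up to ~30 seconds for the API server to answer:
# retry_until 10 3 kubectl cluster-info >/dev/null 2>&1
```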
Fix 2: Check your kubeconfig
Make sure kubectl is using the right config file and context:
# See which config file is being used
echo $KUBECONFIG
# List available contexts
kubectl config get-contexts
# See the current context
kubectl config current-context
# Switch to the right context
kubectl config use-context my-cluster
If KUBECONFIG is set to a file that doesn’t exist, unset it so kubectl falls back to the default ~/.kube/config:
unset KUBECONFIG
Fix 3: Set the right kubeconfig file
If your kubeconfig is in a non-default location, point kubectl to it:
export KUBECONFIG=/path/to/my/kubeconfig
# Or merge multiple configs
export KUBECONFIG=~/.kube/config:~/.kube/cluster2-config
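A missing file in that colon-separated list is a common way to land back on this error. One defensive sketch is to build the list only from files that actually exist (build_kubeconfig is a hypothetical helper; the file names are illustrative):

```shell
# Sketch: join only the kubeconfig files that exist into a colon-separated list.
build_kubeconfig() {
  local joined="" f
  for f in "$@"; do
    if [ -f "$f" ]; then
      joined="${joined:+$joined:}$f"  # append with ':' separator after the first
    fi
  done
  printf '%s' "$joined"
}

# export KUBECONFIG="$(build_kubeconfig ~/.kube/config ~/.kube/cluster2-config)"
```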
For cloud-managed clusters, regenerate the kubeconfig:
# AWS EKS
aws eks update-kubeconfig --name my-cluster --region us-east-1
# GCP GKE
gcloud container clusters get-credentials my-cluster --zone us-central1-a
# Azure AKS
az aks get-credentials --resource-group mygroup --name my-cluster
Fix 4: Check network connectivity
If the cluster is remote, verify you can reach the API server:
# Get the server address from kubeconfig
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# Test connectivity
curl -k https://<api-server-address>:6443/healthz
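To combine those two steps, you need the host and port out of the server URL. A minimal sketch in plain shell (parse_host_port is a hypothetical helper; 6443 is assumed as the default API server port, though managed clusters often use 443):

```shell
# Sketch: split "https://host:port/path" into "host port" using parameter expansion.
parse_host_port() {
  local url=${1#*://} host port   # strip the scheme
  host=${url%%:*}; host=${host%%/*}
  port=${url##*:}; port=${port%%/*}
  if [ "$port" = "$host" ]; then port=6443; fi  # no port in URL: assume 6443
  printf '%s %s' "$host" "$port"
}

# server=$(kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}')
# read -r host port <<<"$(parse_host_port "$server")"
# curl -k "https://$host:$port/healthz"
```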
If you’re on a VPN, make sure it’s connected. If the cluster is behind a firewall, check that your IP is allowed.
Fix 5: Fix certificate issues
If the API server is reachable but the connection is rejected, you might have stale certificates:
# Check if the certificate is valid
kubectl config view --minify --raw -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' | base64 -d | openssl x509 -text -noout
For cloud clusters, regenerating the kubeconfig (Fix 3) usually resolves certificate issues.
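If you want a pass/fail answer rather than reading the full certificate dump, openssl’s -checkend flag exits non-zero when the certificate expires within the given number of seconds. A sketch (check_cert_expiry is a hypothetical wrapper around that flag):

```shell
# Sketch: exit 0 if the PEM certificate on stdin is still valid <seconds> from now.
check_cert_expiry() {  # usage: check_cert_expiry <seconds> < cert.pem
  openssl x509 -checkend "$1" -noout
}

# kubectl config view --minify --raw \
#   -o jsonpath='{.clusters[0].cluster.certificate-authority-data}' \
#   | base64 -d | check_cert_expiry 86400 \
#   && echo "CA cert valid for at least 24h"
```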
How to prevent it
- Add kubectl cluster-info to your shell startup, or use a prompt plugin (like kube-ps1) that shows the current context; you’ll immediately notice when you’re not connected.
- Use kubectx and kubens to switch contexts and namespaces easily, reducing the chance of pointing at the wrong cluster.
- For cloud clusters, script the kubeconfig refresh so it runs automatically when you start your workday.