Cloudways gives you a managed server with SSH access — perfect for AI apps that need system-level packages, background workers, or custom Python environments. Here’s how to deploy a FastAPI AI app from scratch.
## Why Cloudways for AI apps
Unlike Railway (PaaS, no SSH) or Vercel (serverless, time limits), Cloudways gives you a full server where you can:
- Install Ollama for local inference
- Run long-running background tasks
- Install any Python package or system dependency
- Keep full SSH access for debugging
- Get managed SSL, backups, and monitoring included
## Step 1: Create a Cloudways server

1. Sign up at Cloudways.
2. Click "Add Server".
3. Choose your cloud provider:
   - DigitalOcean: best value ($14/mo for 1GB)
   - Vultr: good performance
   - AWS: if you need specific regions
   - GCP: if you're in the Google ecosystem
4. Select a server size (2GB+ recommended for AI apps).
5. Choose the region closest to your users.
6. Launch the server (takes 2-3 minutes).
## Step 2: Set up the Python environment

SSH into your server (credentials are in the Cloudways dashboard), then prepare the environment:

```bash
ssh master@your-server-ip

# Install Python 3.11+ (if not already available)
sudo apt update
sudo apt install python3.11 python3.11-venv python3-pip -y

# Create the project directory
mkdir -p /home/master/ai-app
cd /home/master/ai-app

# Create and activate a virtual environment
python3.11 -m venv venv
source venv/bin/activate
```
## Step 3: Deploy your FastAPI app

```python
# main.py
import os

import httpx
from fastapi import FastAPI, HTTPException
from pydantic import BaseModel

app = FastAPI()


class Query(BaseModel):
    prompt: str
    max_tokens: int = 500


@app.post("/chat")
async def chat(query: Query):
    api_key = os.environ.get("ANTHROPIC_API_KEY")
    if not api_key:
        raise HTTPException(500, "API key not configured")
    async with httpx.AsyncClient() as client:
        response = await client.post(
            "https://api.anthropic.com/v1/messages",
            headers={
                "x-api-key": api_key,
                "anthropic-version": "2023-06-01",
                "content-type": "application/json",
            },
            json={
                "model": "claude-sonnet-4-5-20250514",
                "max_tokens": query.max_tokens,
                "messages": [{"role": "user", "content": query.prompt}],
            },
            timeout=30.0,
        )
    # Surface upstream failures instead of crashing on a missing key
    if response.status_code != 200:
        raise HTTPException(502, "Upstream API error")
    data = response.json()
    return {"response": data["content"][0]["text"]}


@app.get("/health")
async def health():
    return {"status": "ok"}
Upload via Git or SFTP:

```bash
# Option 1: Git (recommended)
cd /home/master/ai-app
git clone https://github.com/your-repo/ai-app.git .

# Option 2: SFTP
# Use FileZilla or scp to upload files

# Install dependencies (inside the virtual environment)
pip install fastapi uvicorn httpx pydantic
```
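The deploy hook in Step 7 runs `pip install -r requirements.txt`, so it's worth committing one. A minimal sketch; the version pins are illustrative, use the versions you actually tested against:

```text
# requirements.txt
fastapi>=0.110
uvicorn[standard]>=0.29
httpx>=0.27
pydantic>=2.6
```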
## Step 4: Set environment variables

```bash
# Create a .env file
cat > /home/master/ai-app/.env << 'EOF'
ANTHROPIC_API_KEY=sk-ant-your-key-here
PORT=8000
EOF

# Restrict permissions so only your user can read the key
chmod 600 /home/master/ai-app/.env

# Or export directly (lasts only for the current shell session)
export ANTHROPIC_API_KEY=sk-ant-your-key-here
```
For production, store secrets securely. See our password managers guide for best practices.
## Step 5: Run with systemd (production)

Create a systemd service so your app starts automatically and restarts on crash. Note that `sudo cat > file` does not work (the shell redirection runs as your unprivileged user), so use `sudo tee`:

```bash
sudo tee /etc/systemd/system/ai-app.service > /dev/null << 'EOF'
[Unit]
Description=AI App FastAPI
After=network.target

[Service]
User=master
WorkingDirectory=/home/master/ai-app
Environment="PATH=/home/master/ai-app/venv/bin"
EnvironmentFile=/home/master/ai-app/.env
ExecStart=/home/master/ai-app/venv/bin/uvicorn main:app --host 0.0.0.0 --port 8000
Restart=always
RestartSec=3

[Install]
WantedBy=multi-user.target
EOF

sudo systemctl daemon-reload
sudo systemctl enable ai-app
sudo systemctl start ai-app

# Check status
sudo systemctl status ai-app
```
## Step 6: Configure Nginx reverse proxy

Cloudways includes Nginx. Add a reverse proxy block to route your domain to the FastAPI app:

```nginx
# Add to your Nginx server block
location /api/ {
    proxy_pass http://127.0.0.1:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 60s;
}
```
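If you later stream tokens from the LLM API back to the client, Nginx's default proxy buffering will hold the response until it completes. A hedged sketch of the extra directives for that case:

```nginx
location /api/ {
    proxy_pass http://127.0.0.1:8000/;
    proxy_set_header Host $host;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_read_timeout 300s;   # streaming responses can run long
    proxy_buffering off;       # flush tokens to the client as they arrive
}
```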
Cloudways handles SSL automatically through their dashboard.
## Step 7: Set up auto-deploy with Git

For automatic deploys on `git push`:

```bash
# On the server, create a bare repo
mkdir -p /home/master/ai-app.git
cd /home/master/ai-app.git
git init --bare

# Create a post-receive hook
cat > hooks/post-receive << 'EOF'
#!/bin/bash
GIT_WORK_TREE=/home/master/ai-app git checkout -f
cd /home/master/ai-app
source venv/bin/activate
pip install -r requirements.txt
sudo systemctl restart ai-app
EOF
chmod +x hooks/post-receive
```
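The hook calls `sudo systemctl restart`, which will hang if sudo prompts for a password during a push. One option is a sudoers drop-in allowing only that single command (username and unit name assumed from this guide; always edit with `visudo`):

```text
# /etc/sudoers.d/ai-app  (create with: sudo visudo -f /etc/sudoers.d/ai-app)
master ALL=(root) NOPASSWD: /usr/bin/systemctl restart ai-app
```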
Now push from your local machine:

```bash
git remote add production master@your-server-ip:/home/master/ai-app.git
git push production main
```
## Cost breakdown
| Component | Cost |
|---|---|
| Cloudways DO 2GB server | $28/mo |
| Domain (Cloudflare) | ~$1/mo |
| LLM API (Claude, DeepSeek) | $5-50/mo |
| SSL, backups, monitoring | Included |
| Total | $34-79/mo |
Compare with Railway ($5-30/mo but no SSH) or Hetzner (€4.50/mo but no managed services).
## When to choose Cloudways over Railway
| Need | Cloudways | Railway |
|---|---|---|
| SSH access | ✅ | ❌ |
| Install system packages | ✅ | Limited |
| Background workers | ✅ | ✅ |
| Managed SSL/backups | ✅ | ✅ |
| One-click deploy | ❌ (Git/SFTP) | ✅ |
| Auto-scaling | ❌ (vertical only) | ✅ |
| Multiple apps on one server | ✅ | ❌ (separate services) |
| Predictable pricing | ✅ | ❌ (usage-based) |
Choose Cloudways when you need server access and predictable costs. Choose Railway when you want the simplest possible deploy.
Related: Cloudways vs Railway vs Hetzner · Deploy AI App on Railway · AI App Deployment Checklist · Best Hosting for AI Side Projects · Best Password Managers for Developers