
Claude Code Routines: Automate Dev Workflows on a Schedule (2026)


Claude Code Routines, launched April 14, 2026 as a research preview, turn Claude Code from an interactive coding assistant into an autonomous agent that runs without you. You configure a prompt, connect your repos and tools, pick a trigger, and Claude handles the rest on Anthropic’s cloud infrastructure. No laptop required.

This is different from the existing /loop and /schedule commands that run inside a local Claude Code session. Routines run in the cloud, persist across sessions, and trigger automatically.

How routines work

A routine is a saved configuration:

  • Prompt: What Claude should do (written like an SOP, not a chat message)
  • Repository: Which codebase to work on
  • Connectors: OAuth connections to GitHub, Slack, Notion, Gmail, etc.
  • Trigger: When to run (schedule, API call, or GitHub event)

When triggered, Claude spins up a cloud session, clones your repo, executes the prompt, and reports results through your connected tools.
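Conceptually, a routine bundles those four pieces into one saved object. A hypothetical sketch of what that bundle contains (the actual product configures this through a web form, and every field name below is illustrative, not a documented schema):

```
name: daily-pr-review
prompt: |
  Review all PRs opened in the last 24 hours and post findings
  as review comments. Do not modify code.
repository: github.com/acme/main-app
connectors: [github, slack]
trigger:
  type: schedule
  cadence: "weekdays at 09:00 UTC"
```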

Setting up your first routine

  1. Go to claude.ai/code/routines
  2. Click “New routine”
  3. Name it descriptively (e.g., “Daily security audit”)
  4. Write your prompt
  5. Connect your repository
  6. Add connectors (Slack, GitHub, etc.)
  7. Choose a trigger

Writing effective routine prompts

Routines run unattended, so your prompt needs to be more precise than a normal chat message. Spell out edge cases, output formats, and what “done” looks like:

You are a daily code reviewer for our main repository.

Every morning:
1. Check all PRs opened in the last 24 hours
2. For each PR, review for:
   - Security vulnerabilities (SQL injection, XSS, auth issues)
   - Performance problems (N+1 queries, missing indexes)
   - Code style violations against our .eslintrc
3. Post a review comment on each PR with findings
4. If any critical security issues are found, also post to #security-alerts on Slack
5. Post a daily summary to #engineering with PR count and issues found

If no PRs were opened, post "Quiet day - no new PRs" to #engineering.
Do not create issues or modify code. Review only.

The key differences from interactive prompts:

  • Explicit about what NOT to do (“Do not create issues or modify code”)
  • Handles the empty case (“If no PRs were opened…”)
  • Specifies exact output channels (which Slack channel, where to post)

Trigger types

Schedule (cron)

Run on a fixed cadence:

Every weekday at 9:00 AM UTC
Every Monday at 8:00 AM UTC
Every 6 hours
First day of each month

Best for: daily code reviews, weekly dependency audits, nightly test runs.
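In standard five-field cron syntax (minute, hour, day-of-month, month, day-of-week), those cadences look like this — though whether the routine UI accepts raw cron expressions or only natural language isn't documented:

```
0 9 * * 1-5    # every weekday at 9:00 AM UTC
0 8 * * 1      # every Monday at 8:00 AM UTC
0 */6 * * *    # every 6 hours
0 0 1 * *      # first day of each month (at midnight UTC)
```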

API call

Trigger via HTTP request:

curl -X POST https://api.claude.ai/routines/ROUTINE_ID/trigger \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"context": "Deploy just completed for v2.3.1"}'

Best for: post-deployment checks, on-demand audits, integration with CI/CD pipelines.

GitHub events

Trigger on repository events:

  • Push to main: Run tests, check for breaking changes
  • PR opened: Automated code review
  • Issue created: Triage and label
  • Release published: Generate changelog, update docs

Best for: automated code review, PR triage, release automation.

Connectors

Connectors give routines OAuth access to external services. Set them up in Settings → Connectors:

| Connector | What routines can do |
| --- | --- |
| GitHub | Read/write PRs, issues, code, releases |
| Slack | Post messages, read channels |
| Gmail | Read/send emails |
| Google Calendar | Read/create events |
| Notion | Read/write pages and databases |
| Linear | Create/update issues |

Connectors use OAuth, so your credentials are never stored in the routine prompt.

Practical routine examples

Daily error log analysis

Every morning at 7:00 AM UTC:
1. Pull the last 24 hours of error logs from the /logs directory
2. Group errors by type and frequency
3. Identify any new error patterns not seen in the previous week
4. For errors occurring >10 times, investigate the root cause in the codebase
5. Post a summary to #on-call in Slack with:
   - Total error count vs yesterday
   - Top 5 errors by frequency
   - Any new error patterns with suspected root cause
   - Suggested fixes for the top 3 errors
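The grouping in step 2 is the kind of work Claude would do with a standard shell pipeline. A minimal sketch, assuming plain-text logs with one `LEVEL: message` entry per line (the log format and filename are hypothetical):

```shell
# Create a tiny sample log (hypothetical "LEVEL: ErrorType" format).
cat > app.log <<'EOF'
ERROR: TimeoutError
ERROR: TimeoutError
ERROR: NullPointerException
ERROR: TimeoutError
WARN: SlowQuery
EOF

# Group errors by type and frequency, most common first (steps 2 and 5).
grep '^ERROR' app.log | sort | uniq -c | sort -rn | head -5
# → 3 ERROR: TimeoutError
#   1 ERROR: NullPointerException
```

The pipeline is deterministic; the routine's added value is the judgment on top of it — spotting that a pattern is new and tracing it back to a root cause in the code.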

Weekly dependency audit

Every Monday at 8:00 AM UTC:
1. Run npm audit in the repository
2. Check for outdated dependencies with npm outdated
3. For any critical or high severity vulnerabilities:
   - Check if a patch version exists
   - If yes, create a PR with the update
   - If no, document the vulnerability and workaround
4. Post results to #engineering in Slack
5. Create a GitHub issue for any unresolvable vulnerabilities
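The branching in step 3 can be sketched as shell logic. Real input would come from `npm audit --json`; the three-column summary here (package, severity, patch-available) is a simplified stand-in for that report:

```shell
# Hypothetical audit summary: package, severity, patch-available.
cat > audit.txt <<'EOF'
lodash high yes
left-pad critical no
debug low yes
EOF

# Step 3: act only on high/critical findings, branch on patch availability.
while read -r pkg severity patched; do
  case "$severity" in
    high|critical)
      if [ "$patched" = "yes" ]; then
        echo "PR: bump $pkg to the patched version"
      else
        echo "ISSUE: document $pkg vulnerability and workaround"
      fi
      ;;
  esac
done < audit.txt
# → PR: bump lodash to the patched version
#   ISSUE: document left-pad vulnerability and workaround
```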

Post-deployment smoke test

Trigger: API call from CI/CD pipeline

After deployment:
1. Run the smoke test suite against the staging URL provided in context
2. Check all API endpoints return 200
3. Verify the health check endpoint returns the correct version
4. Run a basic user flow test (login, create item, delete item)
5. If all pass: post ✅ to #deployments in Slack
6. If any fail: post ❌ with details to #deployments AND #on-call
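The reporting logic in steps 5–6 reduces to aggregating pass/fail results and choosing the channels to notify. A sketch with hardcoded results (in the real routine they would come from the smoke tests themselves, and the Slack posts would go through the connector rather than `echo`):

```shell
# Hypothetical smoke-test results as name:status pairs.
RESULTS="login:pass create_item:pass delete_item:fail"

# Collect the names of any failed checks.
FAILED=""
for r in $RESULTS; do
  case "$r" in
    *:fail) FAILED="$FAILED ${r%%:*}" ;;
  esac
done

# Steps 5-6: one channel on success, two on failure.
if [ -z "$FAILED" ]; then
  echo "post to #deployments: all smoke tests passed"
else
  echo "post to #deployments and #on-call: failed:$FAILED"
fi
# → post to #deployments and #on-call: failed: delete_item
```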

Routines vs local scheduling

| Feature | Routines (cloud) | /schedule (local) | /loop (local) |
| --- | --- | --- | --- |
| Runs where | Anthropic cloud | Your machine | Your machine |
| Requires laptop open | No | Yes | Yes |
| Triggers | Schedule, API, GitHub | Cron | Interval |
| Connectors | OAuth (Slack, GitHub, etc.) | Local tools only | Local tools only |
| Persistence | Survives restarts | Session-only | Session-only |
| Cost | Included in Pro/Team plan | Included | Included |

Use routines for anything that should run reliably without you. Use /schedule and /loop for tasks during an active coding session.

Routines vs Zapier/n8n

If you’re already using Zapier or n8n for automation, routines offer something different: the automation logic is written in natural language and executed by an AI that understands code. Zapier connects apps with rigid if/then logic. Routines connect apps with judgment.

Example: “Review this PR” in Zapier means running a linter. In a Claude routine, it means actually reading the code, understanding the architecture, and posting a thoughtful review.

The trade-off: routines are less predictable. An if/then workflow does the same thing every time. An AI routine might handle edge cases differently each run. For critical workflows, consider using routines for analysis and deterministic pipelines for actions.

Limitations (research preview)

  • Max execution time per routine run: not publicly documented yet
  • Limited to repositories connected via GitHub
  • Connector list is growing but not comprehensive
  • No way to chain routines (routine A triggers routine B)
  • Token usage counts against your Claude plan limits

Getting started

If you’re on Claude Pro or Team:

  1. Go to claude.ai/code/routines
  2. Start with a simple daily summary routine
  3. Add connectors as needed
  4. Graduate to more complex routines once you trust the output

Start with read-only routines (analysis, summaries, reviews) before giving routines write access (creating PRs, posting to Slack). Build trust incrementally.

Related: How to Use Claude Code · Claude Code vs Codex CLI vs Gemini CLI · Agent vs Workflow · Zapier Agent SDK Guide · How to Build an AI Agent · AI Agent Security · LLM Observability