Quick Start — Local Development
Get ProjectAchilles running on your machine or a remote server with a single command. The startup script handles everything: it installs system dependencies, walks you through authentication setup, and starts the platform. Add --tunnel to expose it via HTTPS for remote access and agent enrollment.
Prerequisites
The start script automatically installs missing dependencies on supported platforms. You can also install them manually if preferred:
- Node.js 22.x or higher
- npm 10.x or higher
- Git
- Go 1.24+ (for agent development and building test binaries)
Supported Platforms for Auto-Install
| Platform | Package Manager | Notes |
|---|---|---|
| Ubuntu / Debian / WSL | apt | Node.js via NodeSource, Go via official tarball |
| Fedora / RHEL / CentOS | dnf | Falls back to tarball if Go version is too old |
| Arch / Manjaro | pacman | |
| openSUSE / SLES | zypper | Node.js via NodeSource, Go via official tarball |
| macOS | brew | Requires Homebrew |
If your platform isn't listed, install the prerequisites manually before running the start script.
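If you're installing manually, you can sanity-check the installed versions before running the start script. A minimal sketch (the `version_ge` helper is illustrative, not part of the repo; it relies on GNU `sort -V`):

```shell
#!/bin/sh
# version_ge A B — succeeds if dotted version A >= B (sketch; needs GNU sort -V)
version_ge() {
  [ "$(printf '%s\n%s\n' "$2" "$1" | sort -V | head -n1)" = "$2" ]
}

# node --version prints "v22.4.1"-style output; strip the leading "v" first
node_ver=$(node --version 2>/dev/null | sed 's/^v//')
if [ -n "$node_ver" ] && version_ge "$node_ver" "22.0.0"; then
  echo "Node.js $node_ver OK"
else
  echo "Node.js 22.x or higher required"
fi
```

The same comparison works for Go (`go version`) and npm (`npm --version`) output once the version string is extracted.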
Steps
1. Clone the Repository
git clone https://github.com/projectachilles/ProjectAchilles.git
cd ProjectAchilles
2. Start the Development Stack
./scripts/start.sh -k --daemon
This script will:
- Kill any existing ProjectAchilles processes (`-k`)
- Detect your platform and install missing system dependencies (Node.js, npm, Git, Go)
- Guide you through Clerk authentication setup (if not already configured)
- Configure Clerk RBAC (session token custom claim + admin role via Clerk API)
- Configure Elasticsearch (if not already set up — Cloud or skip)
- Check the test library and prompt for a GitHub token if authentication is required
- Install npm dependencies for both frontend and backend
- Find available ports (defaults: frontend 5173, backend 3000)
- Start both services in the background (`--daemon`)
On a fresh machine, the script will install dependencies, then walk you through Clerk setup:
Checking Clerk authentication...
✗ No valid Clerk keys configured
╭──────────────────────────────────────────────────────╮
│ Clerk Setup (free account — takes ~2 minutes) │
│ │
│ 1. Sign up or log in at clerk.com │
│ 2. Create a new application │
│ 3. Go to "API Keys" in the sidebar │
│ 4. Copy both keys below │
╰──────────────────────────────────────────────────────╯
Press Enter to open Clerk in your browser (or S to skip):
The script opens Clerk's dashboard in your browser, prompts you for both keys, validates them against Clerk's API, and writes them to both backend/.env and frontend/.env automatically.
If you've set up Clerk before, just add CLERK_PUBLISHABLE_KEY and CLERK_SECRET_KEY to backend/.env before running the script — it will detect them and skip the interactive setup.
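For example, to pre-seed the keys non-interactively from the repository root (the key values below are placeholders; use your own from the Clerk dashboard):

```shell
# Append placeholder Clerk keys to backend/.env (run from the repo root;
# mkdir is a no-op inside the repo, where backend/ already exists)
mkdir -p backend
cat >> backend/.env <<'EOF'
CLERK_PUBLISHABLE_KEY=pk_test_replace_me
CLERK_SECRET_KEY=sk_test_replace_me
EOF
```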
After key setup, the script guides you through RBAC configuration — a required one-time step for role-based access:
- Session token claim (manual) — you'll be directed to add `"metadata": "{{user.public_metadata}}"` to your Clerk session token template. This lets the backend read user roles from the JWT.
- Create your user — while in the Clerk Dashboard, go to Users → Add user and create your admin account with an email and password.
- Admin role (automated) — the script asks for your email, looks up your Clerk user ID via API, and sets `{"role": "admin"}` on your account automatically.
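The automated role assignment corresponds roughly to a call against Clerk's Backend API metadata endpoint. A hedged sketch of that step (the user ID and the `DRY_RUN` wrapper are illustrative, not taken from the repo):

```shell
#!/bin/sh
# Sketch of the role assignment via Clerk's Backend API.
# DRY_RUN=1 prints the request instead of sending it; unset it (and
# export CLERK_SECRET_KEY) to actually send the PATCH.
DRY_RUN=1
assign_admin_role() {
  user_id=$1
  body='{"public_metadata": {"role": "admin"}}'
  if [ -n "$DRY_RUN" ]; then
    echo "PATCH https://api.clerk.com/v1/users/$user_id/metadata $body"
  else
    curl -s -X PATCH "https://api.clerk.com/v1/users/$user_id/metadata" \
      -H "Authorization: Bearer $CLERK_SECRET_KEY" \
      -H "Content-Type: application/json" \
      -d "$body"
  fi
}
assign_admin_role "user_2abc"   # placeholder user ID
```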
This step runs once and is remembered (.clerk-rbac-configured flag file). If you skip it, the script will re-prompt on the next run.
3. Open the Dashboard
Navigate to http://localhost:5173 in your browser. You'll be redirected to Clerk's sign-in page. After authenticating, you'll see the Test Browser.
Remote Access
If you're running ProjectAchilles on a remote server (cloud VM, VPS, headless machine), you have two options to access the dashboard and enable agent communication.
Option A — SSH Tunnel (quickest, personal use)
Forward the ports to your local machine. No extra tools or accounts needed:
# From your local machine (not the server)
ssh -L 5173:localhost:5173 -L 3000:localhost:3000 user@your-server
Then open http://localhost:5173 in your local browser. Clerk auth works because it sees localhost.
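If you tunnel in regularly, the same forwarding can live in `~/.ssh/config` so a plain `ssh achilles-dev` sets it up. The host alias and names here are illustrative:

```
Host achilles-dev
  HostName your-server
  User your-user
  LocalForward 5173 localhost:5173
  LocalForward 3000 localhost:3000
```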
SSH tunnels are great for verifying the app works, but agents on other machines can't reach the backend through your SSH tunnel. Use Cloudflare Tunnels (Option B) for agent enrollment.
Option B — Cloudflare Tunnels (full setup with agents, ~5 minutes)
Exposes both frontend and backend via free HTTPS tunnel URLs. Agents can enroll from any network. No account or credit card required.
./scripts/start.sh -k --daemon --tunnel
The script will:
- Install `cloudflared` if not present (single binary, ~30 MB)
- Start two HTTPS tunnels (frontend + backend)
- Register the tunnel URL with Clerk's allowed origins automatically
- Set `AGENT_SERVER_URL` so enrollment one-liners use the tunnel address
Starting Cloudflare tunnels...
Waiting for tunnel URLs...
✓ Dashboard: https://random-words.trycloudflare.com
✓ Agent API: https://other-words.trycloudflare.com
Registering tunnel with Clerk allowed_origins...
✓ Clerk allowed_origins updated
╔══════════════════════════════════════════════════════╗
║ Dashboard: https://random-words.trycloudflare.com ║
║ Agent API: https://other-words.trycloudflare.com ║
║ ║
║ Agent enrollment URL (use this in agent config): ║
║ https://other-words.trycloudflare.com ║
╚══════════════════════════════════════════════════════╝
Open the Dashboard URL in your browser to access the platform. Use the Agent API URL when enrolling agents — the one-liner install commands on the Agents page will use it automatically.
Cloudflare quick tunnel URLs change every time the server restarts. For persistent URLs, use a named Cloudflare Tunnel (free, requires a Cloudflare account) or deploy with Docker behind a reverse proxy.
If you prefer ngrok, set your custom domains in backend/.env and the script will auto-detect them:
NGROK_FRONTEND_DOMAIN=your-app.ngrok.app
NGROK_BACKEND_DOMAIN=your-api.ngrok.app
Or force the provider: TUNNEL_PROVIDER=ngrok ./scripts/start.sh -k --daemon --tunnel
Optional: Elasticsearch
The Analytics module requires an Elasticsearch connection. The start script prompts you to configure it during first run:
Checking Elasticsearch...
Not configured (Analytics will prompt for setup in the UI)
Configure Elasticsearch now? [Cloud / Local Docker / Skip (default)] C
Free 14-day trial (no credit card):
https://cloud.elastic.co/registration
Elasticsearch endpoint URL: https://my-project-fa3ad...cloud:443
API Key: VFo5a3...
Testing connection...
✓ Connected to Elasticsearch
✓ Elasticsearch credentials saved to backend/.env
Initialize Elasticsearch indices? [Y/n]
Seed with synthetic demo data? [y/N] y
✓ 3 indices created, 1000 documents seeded
Both values are shown on the Elastic Cloud "Getting Started" page after creating a deployment — two copy-pastes from one page.
Pre-set ELASTICSEARCH_NODE and ELASTICSEARCH_API_KEY in backend/.env to skip the interactive prompt entirely.
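For example (both values are placeholders; copy your own from the Elastic Cloud deployment page):

```
# backend/.env
ELASTICSEARCH_NODE=https://my-deployment.es.us-east-1.aws.elastic.cloud:443
ELASTICSEARCH_API_KEY=your_base64_api_key
```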
Option B — Local Elasticsearch via Docker:
docker compose --profile elasticsearch up -d
This starts Elasticsearch 8.17 (single-node, security disabled) and seeds 1,000 synthetic test results. The local instance is available at http://localhost:9200 — no credentials needed.
Option C — Initialize an external instance manually:
./scripts/init-elasticsearch.sh --cloud-id "deploy:..." --api-key "..."
./scripts/init-elasticsearch.sh --cloud-id "deploy:..." --api-key "..." --seed # with demo data
Optional: Test Library
The Test Browser displays security tests synced from a Git repository. The default library (f0_library) is configured automatically.
If the repository requires authentication (temporary restriction), the start script detects this and prompts you:
Checking test library...
✗ Test library requires authentication
The security test library (f0_library) currently requires a GitHub
token for access. This is a temporary restriction — the repo will
become fully public soon.
GitHub Token (hidden, or S to skip): ****
✓ Token saved to backend/.env
Get a read-only token from the project maintainer and paste it when prompted. The token is saved to backend/.env (gitignored) and never committed.
To use a different test repository, set it in backend/.env:
TESTS_REPO_URL=https://github.com/your-org/your-test-library.git
GITHUB_TOKEN=ghp_your_pat_here # only for private repos
Managing Services
# Stop everything (servers + tunnels)
./scripts/start.sh --stop
# Restart servers only (keep tunnel URLs alive)
./scripts/start.sh --restart-servers
# or: ./scripts/start.sh -r
Changed Clerk keys, Elasticsearch credentials, or .env settings? Use --restart-servers (-r) to reload without generating new tunnel URLs. The backend re-reads backend/.env on startup.
Troubleshooting
Port Conflicts
If ports 5173 or 3000 are in use, the start script will automatically find the next available port. Check the script output for the actual ports used.
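A minimal sketch of that kind of port scan (the actual script's logic may differ; this version uses bash's `/dev/tcp` to probe for listeners):

```shell
#!/usr/bin/env bash
# find_free_port START — print the first port >= START with no local listener.
# A connect via /dev/tcp succeeds only if something is listening on that port.
find_free_port() {
  local port=$1
  while (exec 3<>"/dev/tcp/127.0.0.1/$port") 2>/dev/null; do
    port=$((port + 1))
  done
  echo "$port"
}

find_free_port 5173
```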
Clerk Authentication Errors
- Ensure both `frontend/.env` and `backend/.env` have the correct keys
- The publishable key should start with `pk_test_` (development) or `pk_live_` (production)
- The secret key should start with `sk_test_` or `sk_live_`
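A quick way to sanity-check key prefixes from the shell (a sketch, not part of the repo):

```shell
#!/bin/sh
# check_prefix EXPECTED KEY — report whether KEY starts with EXPECTED test_/live_
check_prefix() {
  case $2 in
    "$1"test_*|"$1"live_*) echo "ok: ${2%%_*}_..." ;;
    *) echo "unexpected prefix on key" ;;
  esac
}

check_prefix pk_ "pk_test_example"   # placeholder keys
check_prefix sk_ "sk_live_example"
```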
Dependency Auto-Install Issues
- `sudo` password prompt: The script uses `sudo` for system-level installs (apt, dnf, pacman). Ensure your user has sudo privileges.
- nvm users: If Node.js was installed via nvm, the script detects and sources `~/.nvm/nvm.sh` automatically. If it still isn't found, ensure nvm is installed correctly and restart your terminal.
- Go PATH after install: Go is installed to `/usr/local/go`. The script adds it to PATH for the current session, but to persist it, add to your shell profile: `export PATH=/usr/local/go/bin:$PATH`
- macOS without Homebrew: Install Homebrew first — the script requires it on macOS.
- Unsupported distro: Install Node.js 22+, npm, Git, and Go 1.24+ manually, then re-run the script.
TypeScript Build Errors
Verify the project compiles cleanly:
cd frontend && npm run build
cd ../backend && npm run build
Next Steps
- Quick Start — Docker — Deploy with Docker Compose
- Features Overview — Explore all platform capabilities
- Development Setup — Full contributor guide