JarvisLabs CLI
The jl CLI is part of the new jarvislabs package, replacing the deprecated jlclient. If you're still using jlclient, see the migration note.
The jl command-line tool lets you manage GPU instances, run training scripts, transfer files, and monitor experiments on JarvisLabs.ai — all from your terminal. It's built to work seamlessly with AI coding agents like Claude Code, Codex, Cursor, and OpenCode, so your agent can spin up GPUs, run experiments, and monitor results autonomously.
Package: jarvislabs | CLI command: jl | Version: 0.2.x (beta)
The jl CLI is in beta. Commands and options may change between releases. Pin your version in CI/automation scripts and check the changelog when upgrading.
Linux and macOS are fully supported. Windows is experimental and not fully tested — if you run into issues, please report them.
Jump to the Examples section for end-to-end workflows covering training runs, agent automation, filesystem management, and more.
Installation
The package is currently in beta. Install with the --pre flag to get the latest prerelease.
As a CLI tool (recommended)
uv tool install --pre jarvislabs
To upgrade:
uv tool upgrade --pre jarvislabs
With pip
pip install --pre jarvislabs
After installation, the jl command is available in your terminal.
What does jl setup do?
Run jl setup once from your terminal after installing. It walks you through:
- Authentication — prompts for your API token (get one from jarvislabs.ai/settings/api-keys) and saves it locally
- Account status — shows your current balance and active instances
- Agent skill installation — asks which AI coding agents you use (Claude Code, Codex, Cursor, OpenCode) and installs skill files for them with your approval, so your agent knows how to use jl out of the box
--help
Every command and subcommand supports --help. It's the quickest way to see what's available, what flags a command takes, and what they do. You can pretty much learn the entire CLI from help alone.
jl --help # top-level commands
jl instance --help # all instance subcommands
jl run --help # run options, targets, lifecycle flags
jl instance create --help # every flag for creating an instance
Quick Start
There are two main ways to use the CLI, depending on how much control you need.
Path 1: Run a script directly on a fresh GPU
The fastest way to get started. This creates a GPU instance, uploads your code, installs dependencies, runs the script, and pauses the instance when done — all in one command.
# One-time setup
jl setup
# Check your balance and make sure you're good to go
jl status
# See which GPUs are currently available and their pricing
jl gpus
# Run a single training script on a fresh RTX5000
# Creates instance, uploads train.py, installs requirements, runs it, pauses when done
jl run train.py --gpu RTX5000 --requirements requirements.txt -- --epochs 50
# Or if you have a project directory, sync the whole thing
# This uploads your directory, creates a venv, installs deps, and runs the entrypoint
jl run . --script train.py --gpu A100 --requirements requirements.txt
# You can also run setup commands or a setup script before your main command
jl run . --script train.py --gpu A100 \
--requirements requirements.txt \
--setup "pip install flash-attn" \
--setup-file setup.sh
# The CLI streams logs by default. Once the run finishes, the instance is auto-paused.
# If you detached (Ctrl+C) or used --no-follow, you can check logs anytime:
jl run logs <run_id> --tail 50
# Check the final status of your run
jl run status <run_id>
Path 2: Manage instances yourself
If you want more control — SSH access, reusing machines across runs, attaching filesystems, or interactive debugging — create and manage instances directly.
# One-time setup
jl setup
# See available GPUs and pricing
jl gpus
# Create a GPU instance with 100 GB storage
jl instance create --gpu A100 --storage 100 --name "my-experiment"
# List your instances to get the machine ID
jl instance list
# SSH into your instance for interactive work
jl instance ssh <machine_id>
# Or upload and run a script on it
jl run train.py --on <machine_id>
# Check logs while the run is going
jl run logs <run_id> --tail 50
# Upload additional files to the instance
jl instance upload <machine_id> ./data /home/data
# Download results when you're done
jl instance download <machine_id> /home/results ./results -r
# Pause when you're done - stops compute billing, keeps your data
jl instance pause <machine_id>
# Later, resume with the same or a different GPU
jl instance resume <machine_id> --gpu RTX5000
# When you're completely done, destroy to stop all billing (including storage)
jl instance destroy <machine_id>
Authentication
Get your API token from jarvislabs.ai/settings/api-keys.
Interactive setup
jl setup
This authenticates, optionally installs agent skills, shows your account status, and displays a getting-started guide.
Non-interactive setup
jl setup --token YOUR_TOKEN --yes
Without --yes, jl setup will still prompt for agent-skill installation even when --token is provided. Use --agents all or --yes to make setup fully non-interactive.
Environment variable
export JL_API_KEY="YOUR_TOKEN"
Token precedence
Both the CLI and SDK use the same resolution chain:
| Priority | Method | Used by |
|---|---|---|
| 1 | Client(api_key="...") argument | SDK only |
| 2 | JL_API_KEY environment variable | CLI + SDK |
| 3 | Config file (saved by jl setup) | CLI + SDK |
See Config file location below for config paths. See the SDK Authentication docs for more details.
Config file location
The config file is stored via platformdirs:
- Linux: ~/.config/jl/config.toml
- macOS: ~/Library/Application Support/jl/config.toml
Removing saved credentials
jl logout
Global Flags
These flags are available on most commands (exceptions noted below):
| Flag | Description |
|---|---|
| --json | Output as machine-readable JSON (to stdout). Human-readable output goes to stderr. |
| --yes / -y | Skip all confirmation prompts. |
| --version | Print version and exit (root-level: jl --version). |
--json and --yes are command-level options, not root-level — so jl instance list --json works correctly. Most commands support --json. --yes is only available on commands that have confirmation prompts (create, pause, resume, destroy, rename, run start, etc.). jl setup supports --yes but not --json. Read-only commands like jl gpus and jl run logs do not accept --yes.
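Because --json keeps stdout as clean JSON while human-readable output goes to stderr, scripted wrappers can parse command output directly. A minimal sketch (assumes jl is installed and authenticated; the payload schema is whatever jl instance list --json actually returns, which is not documented here):

```python
import json
import subprocess

def list_instances():
    """Run `jl instance list --json` and parse stdout.

    Human-readable progress is written to stderr, so stdout
    stays machine-readable. check=True raises on a non-zero exit.
    """
    result = subprocess.run(
        ["jl", "instance", "list", "--json"],
        capture_output=True, text=True, check=True,
    )
    return json.loads(result.stdout)
```

The same pattern works for any command that supports --json; combine it with --yes on mutating commands to keep automation fully non-interactive.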
Account Commands
jl setup
Set up the JarvisLabs CLI: authenticate and install agent skills.
| Option | Short | Description |
|---|---|---|
| --token | -t | API token (skips interactive prompt) |
| --agents | | Comma-separated agent list: claude-code, codex, cursor, opencode, or all |
| --yes | -y | Skip confirmation prompts; auto-selects all agents |
# Interactive setup
jl setup
# Non-interactive with token and all agent skills
jl setup --token YOUR_TOKEN --agents all --yes
# Install skills for specific agents only
jl setup --agents claude-code,cursor
If already authenticated, jl setup will show your current login and ask to re-authenticate. The --agents flag controls which coding agent skill files are installed:
| Agent | Skill file path |
|---|---|
| claude-code | ~/.claude/skills/jarvislabs/SKILL.md |
| codex | ~/.agents/skills/jarvislabs/SKILL.md |
| cursor | ~/.cursor/skills/jarvislabs/SKILL.md |
| opencode | ~/.config/opencode/skills/jarvislabs/SKILL.md |
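To confirm which skill files jl setup actually installed, you can check the paths from the table on disk. A small helper sketch (not part of the CLI):

```python
from pathlib import Path

# Skill file locations from the table above
SKILL_PATHS = {
    "claude-code": Path.home() / ".claude/skills/jarvislabs/SKILL.md",
    "codex": Path.home() / ".agents/skills/jarvislabs/SKILL.md",
    "cursor": Path.home() / ".cursor/skills/jarvislabs/SKILL.md",
    "opencode": Path.home() / ".config/opencode/skills/jarvislabs/SKILL.md",
}

for agent, path in SKILL_PATHS.items():
    status = "installed" if path.is_file() else "missing"
    print(f"{agent:12s} {status:9s} {path}")
```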
jl logout
Remove the saved API token from the config file. Supports --json for scripted usage.
jl logout
jl status
Show account info: name, user ID, balance, grants, and running/paused instance counts.
jl status
jl status --json
JSON output includes additional fields not shown in the human-readable table: running VMs, paused VMs, active deployments, filesystems, and billing currency.
jl gpus
Show GPU types with availability, region, VRAM, RAM, CPUs, and hourly pricing. Available GPUs are marked with a green dot, unavailable with a dim circle.
jl gpus
jl gpus --json
jl templates
List available framework templates that can be used with --template when creating instances (e.g. pytorch, tensorflow, jax, vm).
jl templates
jl templates --json
Regions & GPUs
JarvisLabs has three regions, each with different GPU types available. When creating an instance, the CLI auto-selects the best region based on your chosen GPU — or you can pin a specific region with --region.
| Region | Available GPUs |
|---|---|
| IN1 | RTX5000, A5000Pro, A6000, RTX6000Ada, A100 |
| IN2 | L4, A100, A100-80GB |
| EU1 | H100, H200 |
Run jl gpus to see real-time availability and pricing for each GPU type.
- EU1 region: supports 1 or 8 GPUs per instance only, 100 GB minimum storage (auto-bumped if you specify less)
- VM template: 100 GB minimum storage (auto-bumped if you specify less)
- VM template is only available in the IN2 and EU1 regions, and requires at least one SSH key registered
Instance Commands
All instance commands live under jl instance. Here's how you can manage the full lifecycle of GPU instances — from creation to teardown.
jl instance list
List all your instances with their ID, name, status, GPU type, GPU count, storage, region, cost, and template.
jl instance list
jl instance list --json
jl instance get <machine_id>
Show full details of a specific instance including SSH command, notebook URL, HTTP ports, and endpoint URLs.
jl instance get 12345
jl instance get 12345 --json
jl instance create
Create a new GPU instance. The command blocks until the instance reaches Running status, so when it returns, your instance is ready to use.
| Option | Short | Default | Description |
|---|---|---|---|
| --gpu | -g | (required) | GPU type (run jl gpus to see options) |
| --template | -t | pytorch | Framework template (run jl templates to see options) |
| --storage | -s | 40 | Storage in GB |
| --name | -n | "Name me" | Instance name (max 40 characters) |
| --num-gpus | | 1 | Number of GPUs |
| --region | | | Region pin (e.g. IN1, IN2, EU1) |
| --http-ports | | | Comma-separated HTTP ports to expose (e.g. 7860,8080) |
| --script-id | | | Startup script ID to run on launch |
| --script-args | | | Arguments passed to the startup script |
| --fs-id | | | Filesystem ID to attach |
| --yes | -y | | Skip confirmation |
| --json | | | Output as JSON |
# Basic instance
jl instance create --gpu RTX5000
# H100 with more storage and a name
jl instance create --gpu H100 --storage 200 --name "training-box"
# With a startup script and filesystem
jl instance create --gpu A100 --script-id 42 --fs-id 10
# Pin to a region
jl instance create --gpu A100 --region EU1
# Expose HTTP ports
jl instance create --gpu RTX5000 --http-ports "7860,8080"
# VM instance (requires SSH key - add one first with jl ssh-key add)
jl instance create --gpu H100 --template vm --name "my-vm"
# Non-interactive
jl instance create --gpu RTX5000 --yes --json
Prompts for confirmation unless --yes is passed. See Regions & GPUs for which GPUs are available in each region and storage constraints.
jl instance pause <machine_id>
Pause a running instance. Compute billing stops; a small storage cost continues.
| Option | Short | Description |
|---|---|---|
| --yes | -y | Skip confirmation |
| --json | | Output as JSON |
jl instance pause 12345
jl instance pause 12345 --yes --json
jl instance resume <machine_id>
Resume a paused instance. You can also use this opportunity to change the GPU type, expand storage, rename the instance, or attach a different startup script or filesystem. The command blocks until the instance is running again.
| Option | Short | Description |
|---|---|---|
| --gpu | -g | Resume with a different GPU type |
| --num-gpus | | Change number of GPUs |
| --storage | -s | Expand storage in GB (can only increase, never shrink) |
| --name | -n | Rename instance on resume |
| --script-id | | Startup script ID to run on resume |
| --script-args | | Arguments for the startup script |
| --fs-id | | Filesystem ID to attach |
| --yes | -y | Skip confirmation |
| --json | | Output as JSON |
# Resume with defaults
jl instance resume 12345
# Resume with a bigger GPU
jl instance resume 12345 --gpu H100
# Resume with more storage and a new name
jl instance resume 12345 --storage 200 --name "upgraded"
Resume is region-locked — an instance always resumes in its original region. If you request a GPU type not available in that region, the API returns an error.
Resume may also assign a new machine ID. The CLI warns you when this happens. Always use the returned ID for subsequent operations.
jl instance destroy <machine_id>
Permanently delete an instance and all its data.
This action is irreversible. All data on the instance is lost. If you need to keep data across instances, use a filesystem.
| Option | Short | Description |
|---|---|---|
| --yes | -y | Skip confirmation |
| --json | | Output as JSON |
jl instance destroy 12345
jl instance destroy 12345 --yes --json
jl instance rename <machine_id>
Rename an instance.
| Option | Short | Description |
|---|---|---|
| --name | -n | New instance name (required, max 40 characters) |
| --yes | -y | Skip confirmation |
| --json | | Output as JSON |
jl instance rename 12345 --name "experiment-v2"