JarvisLabs Python SDK
This is the new jarvislabs package, replacing the deprecated jlclient. It includes a modern Python SDK, a full CLI (jl), and built-in support for AI coding agents. If you're still using jlclient, see the migration note.
Python SDK for managing GPU instances on JarvisLabs.ai. Create, pause, resume, and destroy GPU instances programmatically.
Package: jarvislabs | Version: 0.2.x (beta) | Python: 3.11+
The JarvisLabs Python SDK is currently in beta. The API may change between releases, so pin your version and check the changelog when upgrading.
Linux and macOS are fully supported. Windows is experimental and not fully tested — if you run into issues on Windows, please report them.
Jump to the Examples section to see common SDK workflows like creating instances, attaching filesystems, and checking GPU availability.
Installation
Since the package is in beta, you'll need to install it with the --pre flag:
pip install --pre jarvislabs
Or with uv:
uv pip install --pre jarvislabs
Once installed, you can use from jarvislabs import Client in your Python code.
jl setup
Run jl setup once from your terminal after installing. It:
- Saves your API token locally so both the CLI and SDK can use it automatically
- Shows your account status — balance and active instances
- Installs agent skill files — with your approval, it sets up skill files for AI coding agents (Claude Code, Codex, Cursor, OpenCode) so they know how to use jl out of the box
After running jl setup, Client() picks up your token automatically — no need to pass it in code.
Quick Start
Here's the fastest way to get a GPU instance up and running with the SDK — create an instance, grab its connection details, and pause it when you're done:
from jarvislabs import Client
with Client() as client:
    # Create a GPU instance (blocks until running)
    inst = client.instances.create(gpu_type="A100", name="my-instance")
    print(f"SSH: {inst.ssh_command}")
    print(f"Notebook: {inst.url}")

    # Pause to stop billing (data persists)
    client.instances.pause(inst.machine_id)
create() blocks until the instance is fully running, so by the time it returns you can SSH in or open the notebook URL right away. Pausing stops compute billing while keeping your data intact.
Authentication
Get your API token from jarvislabs.ai/settings/api-keys.
The SDK resolves your API token in this order:
| Priority | Method | Description |
|---|---|---|
| 1 | api_key argument | Pass directly to Client(api_key="...") |
| 2 | JL_API_KEY env var | Set in your environment |
| 3 | Config file | Saved by jl setup (~/.config/jl/config.toml on Linux, ~/Library/Application Support/jl/config.toml on macOS) |
If no token is found through any method, Client() raises AuthError.
from jarvislabs import Client
# Option 1: Pass directly
client = Client(api_key="YOUR_TOKEN")
# Option 2: Set JL_API_KEY env var, then:
client = Client()
# Option 3: Run `jl setup` once in the terminal, then:
client = Client()
The easiest setup is to run jl setup once in your terminal. After that, Client() picks up your token automatically — no need to pass it every time.
Client
Client is the entry point for all SDK operations. It supports Python's context manager protocol (with statement) so the underlying HTTP transport is cleaned up automatically when you're done.
from jarvislabs import Client
# With context manager (recommended)
with Client(api_key="...") as client:
    instances = client.instances.list()
# Without context manager
client = Client()
instances = client.instances.list()
client.close()
Constructor
Client(api_key: str | None = None)
| Parameter | Type | Default | Description |
|---|---|---|---|
| api_key | str \| None | None | API token. If None, resolves from env var or config file. |
Raises AuthError if no token is found.
Methods
| Method | Description |
|---|---|
close() | Close the underlying transport. Called automatically when using with. |
Namespaces
All operations are organized into namespaces on the client:
| Attribute | Description |
|---|---|
client.account | Balance, user info, GPU availability, templates |
client.instances | Instance lifecycle: list, create, pause, resume, destroy, rename |
client.ssh_keys | SSH key management |
client.scripts | Startup script management |
client.filesystems | Persistent filesystem management |
Account
The account namespace gives you methods for checking your balance, viewing user details, listing available GPUs and templates, and checking your billing currency.
balance() -> Balance
Returns your current account balance and any available grants (credits).
bal = client.account.balance()
print(f"Balance: {bal.balance}, Grants: {bal.grants}")
Returns: Balance with fields:
| Field | Type | Description |
|---|---|---|
balance | float | Current account balance |
grants | float | Available grants/credits |
user_info() -> UserInfo
Returns your account profile information, including name, address, and contact details.
info = client.account.user_info()
print(f"User: {info.name} ({info.user_id})")
Returns: UserInfo with fields:
| Field | Type | Description |
|---|---|---|
| user_id | str | Unique user identifier |
| name | str \| None | Full name |
| address1 | str \| None | Address line 1 |
| address2 | str \| None | Address line 2 |
| city | str \| None | City |
| country | str \| None | Country |
| phone_number | str \| None | Phone number |
| state | str \| None | State/province |
| zip_code | str \| None | ZIP/postal code |
| tax_id | str \| None | Tax ID |
resource_metrics() -> ResourceMetrics
Returns a summary of your current resources. Instances and VMs are broken down by running and paused status; deployments and filesystems are single counts.
metrics = client.account.resource_metrics()
print(f"Running: {metrics.running_instances}, Paused: {metrics.paused_instances}")
Returns: ResourceMetrics with fields:
| Field | Type | Description |
|---|---|---|
running_instances | int | Number of running instances |
paused_instances | int | Number of paused instances |
running_vms | int | Number of running VMs |
paused_vms | int | Number of paused VMs |
deployments | int | Number of active deployments |
filesystems | int | Number of filesystems |
templates() -> list[Template]
Returns the list of available framework templates you can use when creating an instance (e.g. pytorch, tensorflow, jax, vm).
templates = client.account.templates()
for t in templates:
    print(f"{t.id}: {t.title}")
Returns: list[Template] with fields:
| Field | Type | Description |
|---|---|---|
| id | str | Template identifier (e.g. pytorch, tensorflow, vm) |
| title | str | Display name |
| description | str \| None | Template description |
| category | str \| None | Category grouping |
| versions | str \| None | Available versions |
gpu_availability() -> list[ServerMetaGPU]
Returns all GPU types with their pricing, VRAM, and current availability across regions. This is useful for checking what's available before creating an instance, or for building dashboards that show real-time GPU stock.
gpus = client.account.gpu_availability()
for gpu in gpus:
    region_code = gpu.model_dump()["region"]  # display code: IN1, IN2, EU1
    print(f"{gpu.gpu_type} ({region_code}): {gpu.num_free_devices} free, ${gpu.price_per_hour}/hr")
Returns: list[ServerMetaGPU] with fields:
| Field | Type | Description |
|---|---|---|
| gpu_type | str | GPU type name |
| region | str | Region (raw backend value via attribute access; use .model_dump() for display codes like IN1, IN2, EU1) |
| num_free_devices | int | Number of available GPUs |
| price_per_hour | float \| None | Hourly price |
| vram | str \| None | GPU memory |
| arc | str \| None | GPU architecture |
| cpus_per_gpu | int \| None | CPU cores allocated per GPU |
| ram_per_gpu | int \| None | RAM in GB allocated per GPU |
currency() -> str
Returns "INR" or "USD" based on your account's payment location.
currency = client.account.currency()
print(f"Billing currency: {currency}")
Regions & GPUs
JarvisLabs has three regions, each with different GPU types available:
| Region | Available GPUs |
|---|---|
IN1 | RTX5000, A5000Pro, A6000, RTX6000Ada, A100 |
IN2 | L4, A100, A100-80GB |
EU1 | H100, H200 |
Available GPU types
RTX5000, A5000Pro, A6000, A100, A100-80GB, RTX6000Ada, H100, H200, L4
Region auto-selection
If you don't pass a region to create(), the SDK automatically picks the best region based on current GPU availability. You can also check availability yourself with client.account.gpu_availability() and pass a specific region code.
EU1 GPU counts
EU1 supports 1 or 8 GPUs per instance. Requesting other counts (e.g. 2 or 4) will raise a ValidationError.
Storage minimums
- EU1 region: 100 GB minimum storage (auto-bumped if you specify less)
- vm template: 100 GB minimum storage (auto-bumped if you specify less)
VM template restrictions
The vm template is only available in IN2 and EU1. Creating a VM instance also requires at least one SSH key registered on your account (see SSH Keys).
Instances
The instances namespace handles the full instance lifecycle: listing, creating, pausing, resuming, destroying, and renaming GPU instances.
All create() parameters are keyword-only — you must use named arguments. For resume(), all parameters except machine_id are keyword-only.
list() -> list[Instance]
Returns all your instances, regardless of their current status. Each instance includes its connection details, GPU config, and current state.
instances = client.instances.list()
for inst in instances:
    print(f"{inst.machine_id}: {inst.name} [{inst.status}]")
get(machine_id: int) -> Instance
Fetches a single instance by its ID. This is useful when you already know which instance you want to check on.
inst = client.instances.get(12345)
print(f"{inst.name}: {inst.status}")
| Parameter | Type | Description |
|---|---|---|
machine_id | int | Instance ID |
Raises: NotFoundError if the instance doesn't exist.
create(...) -> Instance
Creates a new GPU instance and blocks until it's fully running. By the time this method returns, the instance is ready to use — you can SSH in or open its notebook URL immediately.
All parameters are keyword-only.
inst = client.instances.create(
    gpu_type="A100",
    num_gpus=1,
    template="pytorch",
    storage=100,
    name="training-box",
)
print(f"Machine ID: {inst.machine_id}")
print(f"SSH: {inst.ssh_command}")
| Parameter | Type | Default | Description |
|---|---|---|---|
| gpu_type | str | required | GPU type (e.g. "A100", "H100"). See Regions & GPUs for the full list. |
| num_gpus | int | 1 | Number of GPUs |
| template | str | "pytorch" | Framework template (see client.account.templates()) |
| storage | int | 40 | Storage in GB |
| name | str | "Name me" | Instance name (max 40 characters) |
| disk_type | str | "ssd" | Disk type |
| http_ports | str | "" | Comma-separated HTTP ports to expose (e.g. "7860,8080") |
| script_id | str \| None | None | Startup script ID to run on launch (note: scripts.list() returns int IDs — pass as str(script_id)) |
| script_args | str | "" | Arguments passed to the startup script |
| fs_id | int \| None | None | Filesystem ID to attach |
| arguments | str | "" | Additional arguments |
| region | str \| None | None | Region code (e.g. "IN1", "IN2", "EU1"). Auto-selected if omitted. |
Returns: Instance — the fully provisioned instance.
Raises:
- ValidationError if gpu_type is missing, the name exceeds 40 characters, the region is invalid, fs_id doesn't match an existing filesystem, or the vm template is used with no SSH keys registered.
- APIError if instance creation fails.
pause(machine_id: int) -> bool
Pauses a running instance. Once paused, compute billing stops — you'll only pay a small storage cost to keep your data. Your instance's state (installed packages, files, etc.) is preserved and will be restored when you resume.
client.instances.pause(inst.machine_id)
| Parameter | Type | Description |
|---|---|---|
machine_id | int | Instance ID |
Returns: True on success.
pause() returns as soon as the API accepts the request, but the instance transitions through Pausing → Paused in the background. If you need to resume the same instance shortly after, wait for its status to reach Paused first.
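That wait can be made explicit with a small polling loop built on the documented instances.get() call. This is a sketch, not part of the SDK; wait_until_paused, the timeout, and the polling interval are all hypothetical choices:

```python
import time

def wait_until_paused(client, machine_id: int,
                      timeout: float = 300.0, interval: float = 5.0) -> None:
    """Poll an instance's status until it reaches Paused, or raise on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = client.instances.get(machine_id).status
        if status == "Paused":
            return
        time.sleep(interval)  # still Pausing; check again shortly
    raise TimeoutError(f"Instance {machine_id} did not reach Paused within {timeout}s")
```

Typical use: call client.instances.pause(machine_id), then wait_until_paused(client, machine_id) before any resume().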
resume(machine_id: int, ...) -> Instance
Resumes a paused instance and blocks until it's running again. You can optionally change the GPU type, storage size, or other settings on resume — this is handy when you want to switch from a training GPU to a cheaper inference GPU, for example.
All parameters except machine_id are keyword-only. Any parameter you omit keeps the instance's current configuration.
# Resume with the same configuration
inst = client.instances.resume(12345)
# Resume with a different GPU and more storage
inst = client.instances.resume(12345, gpu_type="H100", storage=200)
| Parameter | Type | Default | Description |
|---|---|---|---|
| machine_id | int | required | Instance ID (positional) |
| gpu_type | str \| None | None | Change GPU type |
| num_gpus | int \| None | None | Change GPU count |
| storage | int \| None | None | Expand storage in GB |
| name | str \| None | None | Rename the instance on resume |
| script_id | str \| None | None | Startup script ID to run on resume |
| script_args | str \| None | None | Arguments for the startup script |
| fs_id | int \| None | None | Filesystem ID to attach |
Returns: Instance — the resumed instance.
Raises:
- ValidationError if the instance is not in Paused status, the requested GPU isn't available in the instance's region, fs_id doesn't match an existing filesystem, or the vm template has no SSH keys.
- APIError if the resume fails.
An instance always resumes in its original region. If you request a GPU type that isn't available in that region, a ValidationError is raised. Check Regions & GPUs to see which GPUs are available where.
resume() may return an instance with a new machine_id. Always use the returned instance's ID for subsequent operations instead of the old one.
destroy(machine_id: int) -> bool
Permanently deletes an instance and its storage. This action cannot be undone. Any attached filesystems are not affected — they remain available for use with other instances.
client.instances.destroy(inst.machine_id)
| Parameter | Type | Description |
|---|---|---|
machine_id | int | Instance ID |
Returns: True on success.
rename(machine_id: int, name: str) -> bool
Renames an instance. The new name can be up to 40 characters.
client.instances.rename(12345, "fine-tuning-run-3")
| Parameter | Type | Description |
|---|---|---|
machine_id | int | Instance ID |
name | str | New name (1–40 characters) |
Returns: True on success.
Raises: ValidationError if the name is empty or exceeds 40 characters.
SSH Keys
The ssh_keys namespace lets you manage the SSH public keys on your account. SSH keys are required for vm template instances — you'll need at least one registered before you can create a VM.
list() -> list[SSHKey]
Returns all SSH keys registered on your account.
keys = client.ssh_keys.list()
for key in keys:
    print(f"{key.key_id}: {key.key_name}")
Returns: list[SSHKey] with fields:
| Field | Type | Description |
|---|---|---|
| ssh_key | str | Public key content |
| key_name | str | Key name |
| key_id | str | Unique key identifier |
| user_id | str \| None | Owner user ID |
add(ssh_key: str, key_name: str) -> bool
Registers a new SSH public key on your account. Pass the contents of your public key file (e.g. ~/.ssh/id_ed25519.pub) along with a friendly name to identify it.
client.ssh_keys.add(
    ssh_key="ssh-ed25519 AAAAC3Nza... user@host",
    key_name="my-laptop",
)
| Parameter | Type | Description |
|---|---|---|
ssh_key | str | Public key content (e.g. contents of ~/.ssh/id_ed25519.pub) |
key_name | str | A name for this key |
Returns: True on success.
remove(key_id: str) -> bool
Removes an SSH key from your account. Use the key_id from list() to identify which key to remove.
client.ssh_keys.remove("abc123")
| Parameter | Type | Description |
|---|---|---|
key_id | str | Key ID (from list()) |
Returns: True on success.
Startup Scripts
The scripts namespace lets you manage shell scripts that run automatically when an instance launches or resumes. This is useful for installing dependencies, pulling datasets, or any other setup you want to happen automatically.
list() -> list[StartupScript]
Returns all your saved startup scripts.
scripts = client.scripts.list()
for s in scripts:
    print(f"{s.script_id}: {s.script_name}")
Returns: list[StartupScript] with fields:
| Field | Type | Description |
|---|---|---|
| script_id | int | Unique script identifier |
| script_name | str \| None | Script name |
add(script: bytes | bytearray | str, name: str = "") -> bool
Uploads a new startup script. You can pass the script content as a string directly, or read bytes from a file.
# From a string
client.scripts.add(script="#!/bin/bash\npip install wandb", name="install-deps")
# From a file
with open("setup.sh", "rb") as f:
    client.scripts.add(script=f.read(), name="setup")
| Parameter | Type | Default | Description |
|---|---|---|---|
| script | bytes \| bytearray \| str | required | Script content |
| name | str | "" | Script name |
Returns: True on success.
Raises: ValidationError if the script content is empty.
update(script_id: int, script: bytes | bytearray | str) -> bool
Replaces the contents of an existing startup script. The script ID stays the same, so any instances referencing it will use the updated version on their next launch or resume.
client.scripts.update(script_id=42, script="#!/bin/bash\npip install wandb torch")
| Parameter | Type | Description |
|---|---|---|
| script_id | int | Script ID (from list()) |
| script | bytes \| bytearray \| str | New script content |
Returns: True on success.
Raises: ValidationError if the script content is empty.
remove(script_id: int) -> bool
Deletes a startup script from your account.
client.scripts.remove(script_id=42)
| Parameter | Type | Description |
|---|---|---|
script_id | int | Script ID |
Returns: True on success.
Filesystems
The filesystems namespace lets you manage persistent storage volumes. Unlike instance storage, filesystems survive instance pause, resume, and even destroy operations. This makes them ideal for storing datasets, model checkpoints, or anything you want to reuse across multiple instances.
list() -> list[Filesystem]
Returns all your filesystems.
filesystems = client.filesystems.list()
for fs in filesystems:
    print(f"{fs.fs_id}: {fs.fs_name} ({fs.storage} GB)")
Returns: list[Filesystem] with fields:
| Field | Type | Description |
|---|---|---|
| fs_id | int | Unique filesystem identifier |
| fs_name | str \| None | Filesystem name |
| storage | int \| None | Storage size in GB |
create(fs_name: str, storage: int, deployment_id: str | None = None) -> int
Creates a new filesystem with the given name and size. The returned filesystem ID is what you'll pass to instances.create() or instances.resume() to attach it.
fs_id = client.filesystems.create(fs_name="datasets", storage=200)
print(f"Created filesystem: {fs_id}")
| Parameter | Type | Default | Description |
|---|---|---|---|
| fs_name | str | required | Filesystem name (max 30 characters) |
| storage | int | required | Storage in GB (50–2048) |
| deployment_id | str \| None | None | Optional deployment ID to associate |
Returns: int — the new filesystem ID.
Raises: ValidationError if the name is empty, exceeds 30 characters, or storage is outside 50–2048 GB.
edit(fs_id: int, storage: int) -> int
Expands a filesystem to a larger size. Storage can only be increased, never decreased.
new_fs_id = client.filesystems.edit(fs_id=7, storage=500)
| Parameter | Type | Description |
|---|---|---|
fs_id | int | Filesystem ID |
storage | int | New storage size in GB (50–2048) |
Returns: int — the filesystem ID (may be a new ID). Always use the returned value for subsequent operations.
Raises: ValidationError if storage is outside 50–2048 GB.
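Because the returned ID may differ from the one you passed in, it helps to rebind any stored ID to edit()'s return value immediately. A minimal sketch of that pattern (expand_and_track and the registry dict are hypothetical, not part of the SDK):

```python
def expand_and_track(client, registry: dict[str, int], name: str, new_size: int) -> int:
    """Grow a named filesystem and rebind the stored ID to edit()'s return value."""
    old_id = registry[name]
    new_id = client.filesystems.edit(fs_id=old_id, storage=new_size)
    registry[name] = new_id  # the old ID may no longer be valid after a resize
    return new_id
```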
remove(fs_id: int) -> bool
Deletes a filesystem. Make sure no instances are currently using it before deleting.
client.filesystems.remove(fs_id=7)
| Parameter | Type | Description |
|---|---|---|
fs_id | int | Filesystem ID |
Returns: True on success.
Instance Fields
The Instance object is returned by list(), get(), create(), and resume(). Here are all the fields available on it:
| Field | Type | Description |
|---|---|---|
| machine_id | int | Unique instance ID |
| name | str \| None | Instance name |
| status | str | Current status: Running, Paused, Creating, Resuming, Failed, etc. |
| gpu_type | str \| None | GPU type (e.g. A100, H100) |
| num_gpus | int \| None | Number of GPUs |
| template | str | Framework template (e.g. pytorch, vm) |
| storage_gb | int \| None | Storage in GB |
| cost | float | Session cost (or storage cost if paused) |
| runtime | str \| int | Session runtime |
| ram | int \| None | RAM in GB |
| cores | int \| None | Number of CPU cores |
| ssh_command | str \| None | Full SSH command string (available when running) |
| url | str \| None | Notebook/IDE URL (available when running) |
| vs_url | str \| None | VS Code URL |
| public_ip | str \| None | Public IP address |
| region | str \| None | Region (raw backend value via attribute access). Use .model_dump() to get display codes like IN1, IN2, EU1. |
| fs_id | int \| None | Attached filesystem ID |
| http_ports | str \| None | Custom HTTP port mappings |
| disk_type | str \| None | Disk type (e.g. ssd) |
| endpoints | list[str] \| None | Active endpoints |
| framework_id | str \| None | Framework identifier |
| version | str \| None | Framework version |
| paused_image_size | float \| None | Size of paused image in GB |
| is_reserved | bool \| None | Whether the instance is reserved |
| billing_frequency | str \| None | Billing frequency |
| deployment_id | str \| None | Associated deployment ID |
| user_id | str \| None | Owner user ID |
Error Handling
All SDK exceptions inherit from JarvislabsError. You can import them directly from the jarvislabs package.
Exception hierarchy
JarvislabsError
├── AuthError
├── NotFoundError
├── InsufficientBalanceError
├── ValidationError
├── APIError
└── SSHError
├── SSHConnectionError
└── SSHAuthError
| Exception | When it's raised |
|---|---|
JarvislabsError | Base class. Catch this to handle any SDK error. |
AuthError | No API token found, or the token is invalid/expired. |
NotFoundError | Instance, SSH key, filesystem, or other resource not found. |
InsufficientBalanceError | Account balance too low to perform the action. |
ValidationError | Client-side validation failure: missing GPU type, name too long, empty script, storage out of range, etc. |
APIError | Backend or operational error that doesn't fit a specific category. Exposes .status_code (int — 0 for non-HTTP errors like timeouts), .message (str), and .error_code (str or None). |
SSHError | SSH command parsing or execution failure. |
SSHConnectionError | SSH transport/connectivity failure. Subclass of SSHError. |
SSHAuthError | SSH authentication failure. Subclass of SSHError. |
Catching specific errors
If you want to handle different failure modes separately, you can catch specific exception types. This is useful in production scripts where you want to retry on transient errors but fail fast on auth issues:
from jarvislabs import Client, AuthError, NotFoundError, ValidationError, APIError
try:
    with Client() as client:
        inst = client.instances.create(gpu_type="A100")
except AuthError:
    print("Invalid or missing API token. Check your key or run: jl setup")
except ValidationError as e:
    print(f"Invalid input: {e}")
except NotFoundError as e:
    print(f"Resource not found: {e}")
except APIError as e:
    print(f"API error ({e.status_code}): {e.message}")
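One way to implement that retry-vs-fail-fast split is a small generic wrapper. This is a sketch, not SDK functionality: retry_transient is a hypothetical helper and the linear backoff is an arbitrary choice. With this SDK you would pass transient=(APIError,) and fatal=(AuthError, ValidationError):

```python
import time

def retry_transient(fn, *, retries: int = 3, delay: float = 2.0,
                    transient: tuple = (Exception,), fatal: tuple = ()):
    """Call fn(), retrying transient errors with linear backoff; re-raise fatal ones."""
    for attempt in range(1, retries + 1):
        try:
            return fn()
        except fatal:
            raise  # auth/validation problems won't fix themselves; fail fast
        except transient:
            if attempt == retries:
                raise  # out of attempts; surface the last error
            time.sleep(delay * attempt)
```

For example: retry_transient(lambda: client.instances.create(gpu_type="A100"), transient=(APIError,), fatal=(AuthError, ValidationError)).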
Catch-all
For simpler scripts, you can catch JarvislabsError as a catch-all for any SDK error:
from jarvislabs import Client, JarvislabsError
try:
    with Client() as client:
        client.instances.pause(99999)
except JarvislabsError as e:
    print(f"SDK error: {e}")
Examples
Pause all running instances
A quick way to stop all compute billing. This loops through your instances and pauses any that are currently running:
from jarvislabs import Client
with Client() as client:
    for inst in client.instances.list():
        if inst.status == "Running":
            client.instances.pause(inst.machine_id)
            print(f"Paused {inst.machine_id} ({inst.name})")
Create an instance with a startup script and filesystem
This example shows how to set up a complete training environment: upload a startup script that installs your dependencies, create a persistent filesystem for datasets, and launch an instance with both attached:
from jarvislabs import Client
with Client() as client:
    # Upload a startup script that installs your dependencies
    client.scripts.add(
        script="#!/bin/bash\npip install wandb tensorboard",
        name="install-deps",
    )

    # Find the script ID by name
    scripts = client.scripts.list()
    script_id = next(s.script_id for s in scripts if s.script_name == "install-deps")

    # Create a persistent filesystem for your datasets
    fs_id = client.filesystems.create(fs_name="datasets", storage=200)

    # Launch an instance with the startup script and filesystem attached
    # Note: script_id must be passed as a string
    inst = client.instances.create(
        gpu_type="A100",
        storage=100,
        name="training",
        script_id=str(script_id),
        fs_id=fs_id,
    )
    print(f"Instance {inst.machine_id} is running")
    print(f"SSH: {inst.ssh_command}")
scripts.list() returns script_id as int, but instances.create() expects script_id as str | None. Remember to convert it with str(script_id) when passing it.
Multi-GPU instance
If your workload needs multiple GPUs (for distributed training, large model fine-tuning, etc.), set the num_gpus parameter. The available GPU counts depend on the region — check Regions & GPUs for details.
from jarvislabs import Client
with Client() as client:
    # Create an 8x H100 instance in EU1 for large-scale training
    inst = client.instances.create(
        gpu_type="H100",
        num_gpus=8,
        storage=500,
        name="distributed-training",
        region="EU1",
    )
    print(f"Instance: {inst.name}")
    print(f"GPUs: {inst.num_gpus}x {inst.gpu_type}")
    print(f"SSH: {inst.ssh_command}")
Resume with different hardware
A common pattern is to train on a powerful GPU and then switch to something cheaper for inference or debugging. Since resume lets you change the GPU type, you can do this without losing your instance state:
from jarvislabs import Client
with Client() as client:
    # Pause the expensive training instance
    client.instances.pause(12345)

    # Important: wait for the instance to finish pausing before resuming.
    # pause() returns immediately, but the instance transitions through
    # Pausing -> Paused in the background.

    # Resume later with a cheaper GPU for inference
    inst = client.instances.resume(12345, gpu_type="RTX5000")

    # The machine_id may change on resume - always use the returned value
    print(f"Resumed with new machine ID: {inst.machine_id}")
pause() returns immediately on API success, but the instance needs time to transition to Paused status. Calling resume() too quickly may raise ValidationError. In practice, wait a few seconds or poll the instance status before resuming.
Check GPU availability before creating
Before creating an instance, you might want to see what GPUs are available right now and pick the best option. This example finds all available GPUs and creates an instance on the cheapest one:
from jarvislabs import Client
with Client() as client:
    gpus = client.account.gpu_availability()
    available = [g for g in gpus if g.num_free_devices > 0]

    # Print what's available
    for gpu in available:
        region_code = gpu.model_dump()["region"]
        print(f"{gpu.gpu_type} ({region_code}): {gpu.num_free_devices} free, ${gpu.price_per_hour}/hr")

    # Create on the cheapest available GPU
    if available:
        cheapest = min(available, key=lambda g: g.price_per_hour or float("inf"))
        cheapest_region = cheapest.model_dump()["region"]
        inst = client.instances.create(gpu_type=cheapest.gpu_type, region=cheapest_region)
        print(f"Created on {inst.gpu_type}: {inst.ssh_command}")
    else:
        print("No GPUs available right now - try again shortly")
Check balance and resource usage
A quick dashboard of your account status — balance, currency, and how many resources you have running:
from jarvislabs import Client
with Client() as client:
    bal = client.account.balance()
    metrics = client.account.resource_metrics()
    currency = client.account.currency()
    print(f"Balance: {bal.balance} {currency} (Grants: {bal.grants} {currency})")
    print(f"Running: {metrics.running_instances} instances, {metrics.running_vms} VMs")
    print(f"Paused: {metrics.paused_instances} instances, {metrics.paused_vms} VMs")
Persistent datasets with filesystems
Filesystems persist independently of instances. This means you can create a filesystem, load your datasets onto it, destroy the instance, and then attach the same filesystem to a completely different instance later. Here's how that workflow looks:
from jarvislabs import Client
with Client() as client:
    # Create a 500 GB filesystem for your datasets
    fs_id = client.filesystems.create(fs_name="datasets", storage=500)

    # Create an instance and attach the filesystem
    inst = client.instances.create(
        gpu_type="A100",
        fs_id=fs_id,
        name="training",
    )
    print(f"Filesystem mounted - upload your data via SSH: {inst.ssh_command}")

    # ... train your model, save checkpoints to the filesystem ...

    # Destroy the instance - the filesystem and its data persist
    client.instances.destroy(inst.machine_id)

    # Later, create a new instance with the same filesystem
    # Your datasets and checkpoints are still there
    inst = client.instances.create(
        gpu_type="RTX5000",
        fs_id=fs_id,
        name="inference",
    )
    print(f"Same data, different GPU: {inst.ssh_command}")
VM instance with SSH keys
VM instances give you a bare Linux environment without JupyterLab or any pre-installed frameworks. They require at least one SSH key registered on your account. Here's how to set that up:
from pathlib import Path
from jarvislabs import Client
with Client() as client:
    # Register your SSH public key (required for VM instances)
    pubkey = Path("~/.ssh/id_ed25519.pub").expanduser().read_text().strip()
    client.ssh_keys.add(ssh_key=pubkey, key_name="my-laptop")

    # Create a VM instance - only available in IN2 and EU1
    inst = client.instances.create(
        gpu_type="H100",
        template="vm",
        name="my-vm",
        region="EU1",
    )
    print(f"SSH: {inst.ssh_command}")