JarvisLabs Python SDK

New SDK — Replaces JLClient

This is the new jarvislabs package, replacing the deprecated jlclient. It includes a modern Python SDK, a full CLI (jl), and built-in support for AI coding agents. If you're still using jlclient, see the migration note.

Python SDK for managing GPU instances on JarvisLabs.ai. Create, pause, resume, and destroy GPU instances programmatically.

Package: jarvislabs | Version: 0.2.x (beta) | Python: 3.11+

Beta Software

The JarvisLabs Python SDK is currently in beta. The API may change between releases, so pin your version and check the changelog when upgrading.

Platform Support

Linux and macOS are fully supported. Windows is experimental and not fully tested — if you run into issues on Windows, please report them.

See it in action

Jump to the Examples section to see common SDK workflows like creating instances, attaching filesystems, and checking GPU availability.


Installation

Since the package is in beta, you'll need to install it with the --pre flag:

pip install --pre jarvislabs

Or with uv:

uv pip install --pre jarvislabs

Once installed, you can import the client in Python with from jarvislabs import Client.

Setting up authentication with jl setup

Run jl setup from your terminal once after installing. It does three things:

  1. Saves your API token locally so both the CLI and SDK can use it automatically
  2. Shows your account status — balance and active instances
  3. Installs agent skill files — with your approval, it sets up skill files for AI coding agents (Claude Code, Codex, Cursor, OpenCode) so they know how to use jl out of the box

After running jl setup, Client() picks up your token automatically — no need to pass it in code.


Quick Start

Here's the fastest way to get a GPU instance up and running with the SDK — create an instance, grab its connection details, and pause it when you're done:

from jarvislabs import Client

with Client() as client:
    # Create a GPU instance (blocks until running)
    inst = client.instances.create(gpu_type="A100", name="my-instance")
    print(f"SSH: {inst.ssh_command}")
    print(f"Notebook: {inst.url}")

    # Pause to stop billing (data persists)
    client.instances.pause(inst.machine_id)

create() blocks until the instance is fully running, so by the time it returns you can SSH in or open the notebook URL right away. Pausing stops compute billing while keeping your data intact.


Authentication

Get your API token from jarvislabs.ai/settings/api-keys.

The SDK resolves your API token in this order:

| Priority | Method | Description |
| --- | --- | --- |
| 1 | api_key argument | Pass directly to Client(api_key="...") |
| 2 | JL_API_KEY env var | Set in your environment |
| 3 | Config file | Saved by jl setup (~/.config/jl/config.toml on Linux, ~/Library/Application Support/jl/config.toml on macOS) |

If no token is found through any method, Client() raises AuthError.

from jarvislabs import Client

# Option 1: Pass directly
client = Client(api_key="YOUR_TOKEN")

# Option 2: Set JL_API_KEY env var, then:
client = Client()

# Option 3: Run `jl setup` once in the terminal, then:
client = Client()

tip

The easiest setup is to run jl setup once in your terminal. After that, Client() picks up your token automatically — no need to pass it every time.


Client

Client is the entry point for all SDK operations. It supports Python's context manager protocol (with statement) so the underlying HTTP transport is cleaned up automatically when you're done.

from jarvislabs import Client

# With context manager (recommended)
with Client(api_key="...") as client:
    instances = client.instances.list()

# Without context manager
client = Client()
instances = client.instances.list()
client.close()

Constructor

Client(api_key: str | None = None)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| api_key | str \| None | None | API token. If None, resolves from env var or config file. |

Raises AuthError if no token is found.

Methods

| Method | Description |
| --- | --- |
| close() | Close the underlying transport. Called automatically when using with. |

Namespaces

All operations are organized into namespaces on the client:

| Attribute | Description |
| --- | --- |
| client.account | Balance, user info, GPU availability, templates |
| client.instances | Instance lifecycle: list, create, pause, resume, destroy, rename |
| client.ssh_keys | SSH key management |
| client.scripts | Startup script management |
| client.filesystems | Persistent filesystem management |

Account

The account namespace gives you methods for checking your balance, viewing user details, listing available GPUs and templates, and checking your billing currency.

balance() -> Balance

Returns your current account balance and any available grants (credits).

bal = client.account.balance()
print(f"Balance: {bal.balance}, Grants: {bal.grants}")

Returns: Balance with fields:

| Field | Type | Description |
| --- | --- | --- |
| balance | float | Current account balance |
| grants | float | Available grants/credits |

user_info() -> UserInfo

Returns your account profile information, including name, address, and contact details.

info = client.account.user_info()
print(f"User: {info.name} ({info.user_id})")

Returns: UserInfo with fields:

| Field | Type | Description |
| --- | --- | --- |
| user_id | str | Unique user identifier |
| name | str \| None | Full name |
| address1 | str \| None | Address line 1 |
| address2 | str \| None | Address line 2 |
| city | str \| None | City |
| country | str \| None | Country |
| phone_number | str \| None | Phone number |
| state | str \| None | State/province |
| zip_code | str \| None | ZIP/postal code |
| tax_id | str \| None | Tax ID |

resource_metrics() -> ResourceMetrics

Returns a summary of your current resources. Instances and VMs are broken down by running and paused status; deployments and filesystems are single counts.

metrics = client.account.resource_metrics()
print(f"Running: {metrics.running_instances}, Paused: {metrics.paused_instances}")

Returns: ResourceMetrics with fields:

| Field | Type | Description |
| --- | --- | --- |
| running_instances | int | Number of running instances |
| paused_instances | int | Number of paused instances |
| running_vms | int | Number of running VMs |
| paused_vms | int | Number of paused VMs |
| deployments | int | Number of active deployments |
| filesystems | int | Number of filesystems |

templates() -> list[Template]

Returns the list of available framework templates you can use when creating an instance (e.g. pytorch, tensorflow, jax, vm).

templates = client.account.templates()
for t in templates:
    print(f"{t.id}: {t.title}")

Returns: list[Template] with fields:

| Field | Type | Description |
| --- | --- | --- |
| id | str | Template identifier (e.g. pytorch, tensorflow, vm) |
| title | str | Display name |
| description | str \| None | Template description |
| category | str \| None | Category grouping |
| versions | str \| None | Available versions |

gpu_availability() -> list[ServerMetaGPU]

Returns all GPU types with their pricing, VRAM, and current availability across regions. This is useful for checking what's available before creating an instance, or for building dashboards that show real-time GPU stock.

gpus = client.account.gpu_availability()
for gpu in gpus:
    region_code = gpu.model_dump()["region"]  # display code: IN1, IN2, EU1
    print(f"{gpu.gpu_type} ({region_code}): {gpu.num_free_devices} free, ${gpu.price_per_hour}/hr")

Returns: list[ServerMetaGPU] with fields:

| Field | Type | Description |
| --- | --- | --- |
| gpu_type | str | GPU type name |
| region | str | Region (raw backend value via attribute access; use .model_dump() for display codes like IN1, IN2, EU1) |
| num_free_devices | int | Number of available GPUs |
| price_per_hour | float \| None | Hourly price |
| vram | str \| None | GPU memory |
| arc | str \| None | GPU architecture |
| cpus_per_gpu | int \| None | CPU cores allocated per GPU |
| ram_per_gpu | int \| None | RAM in GB allocated per GPU |

currency() -> str

Returns "INR" or "USD" based on your account's payment location.

currency = client.account.currency()
print(f"Billing currency: {currency}")

Regions & GPUs

JarvisLabs has three regions, each with different GPU types available:

| Region | Available GPUs |
| --- | --- |
| IN1 | RTX5000, A5000Pro, A6000, RTX6000Ada, A100 |
| IN2 | L4, A100, A100-80GB |
| EU1 | H100, H200 |

Available GPU types

RTX5000, A5000Pro, A6000, A100, A100-80GB, RTX6000Ada, H100, H200, L4

Region auto-selection

If you don't pass a region to create(), the SDK automatically picks the best region based on current GPU availability. You can also check availability yourself with client.account.gpu_availability() and pass a specific region code.

EU1 GPU counts

EU1 supports 1 or 8 GPUs per instance. Requesting other counts (e.g. 2 or 4) will raise a ValidationError.

Storage minimums

  • EU1 region: 100 GB minimum storage (auto-bumped if you specify less)
  • vm template: 100 GB minimum storage (auto-bumped if you specify less)

VM template restrictions

The vm template is only available in IN2 and EU1. Creating a VM instance also requires at least one SSH key registered on your account (see SSH Keys).
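Taken together, the rules above can be summarized in a small pre-flight check. This is a hypothetical helper mirroring the documented constraints (preflight is not an SDK function; the SDK performs its own validation and raises ValidationError instead of ValueError):

```python
def preflight(num_gpus: int, storage: int, template: str, region: str) -> int:
    """Sketch of the documented create() constraints. Returns the effective
    storage after auto-bumping; raises on invalid combinations."""
    if region == "EU1" and num_gpus not in (1, 8):
        raise ValueError("EU1 supports 1 or 8 GPUs per instance")
    if template == "vm" and region not in ("IN2", "EU1"):
        raise ValueError("the vm template is only available in IN2 and EU1")
    # Storage minimums are auto-bumped rather than rejected
    if region == "EU1" or template == "vm":
        storage = max(storage, 100)  # 100 GB minimum
    return storage
```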


Instances

The instances namespace handles the full instance lifecycle: listing, creating, pausing, resuming, destroying, and renaming GPU instances.

All create() parameters are keyword-only — you must use named arguments. For resume(), all parameters except machine_id are keyword-only.

list() -> list[Instance]

Returns all your instances, regardless of their current status. Each instance includes its connection details, GPU config, and current state.

instances = client.instances.list()
for inst in instances:
    print(f"{inst.machine_id}: {inst.name} [{inst.status}]")

get(machine_id: int) -> Instance

Fetches a single instance by its ID. This is useful when you already know which instance you want to check on.

inst = client.instances.get(12345)
print(f"{inst.name}: {inst.status}")

| Parameter | Type | Description |
| --- | --- | --- |
| machine_id | int | Instance ID |

Raises: NotFoundError if the instance doesn't exist.

create(...) -> Instance

Creates a new GPU instance and blocks until it's fully running. By the time this method returns, the instance is ready to use — you can SSH in or open its notebook URL immediately.

All parameters are keyword-only.

inst = client.instances.create(
    gpu_type="A100",
    num_gpus=1,
    template="pytorch",
    storage=100,
    name="training-box",
)
print(f"Machine ID: {inst.machine_id}")
print(f"SSH: {inst.ssh_command}")

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| gpu_type | str | required | GPU type (e.g. "A100", "H100"). See Regions & GPUs for the full list. |
| num_gpus | int | 1 | Number of GPUs |
| template | str | "pytorch" | Framework template (see client.account.templates()) |
| storage | int | 40 | Storage in GB |
| name | str | "Name me" | Instance name (max 40 characters) |
| disk_type | str | "ssd" | Disk type |
| http_ports | str | "" | Comma-separated HTTP ports to expose (e.g. "7860,8080") |
| script_id | str \| None | None | Startup script ID to run on launch (note: scripts.list() returns int IDs — pass as str(script_id)) |
| script_args | str | "" | Arguments passed to the startup script |
| fs_id | int \| None | None | Filesystem ID to attach |
| arguments | str | "" | Additional arguments |
| region | str \| None | None | Region code (e.g. "IN1", "IN2", "EU1"). Auto-selected if omitted. |

Returns: Instance — the fully provisioned instance.

Raises:

  • ValidationError if gpu_type is missing, name exceeds 40 characters, region is invalid, fs_id doesn't match an existing filesystem, or VM template has no SSH keys registered.
  • APIError if instance creation fails.

pause(machine_id: int) -> bool

Pauses a running instance. Once paused, compute billing stops — you'll only pay a small storage cost to keep your data. Your instance's state (installed packages, files, etc.) is preserved and will be restored when you resume.

client.instances.pause(inst.machine_id)

| Parameter | Type | Description |
| --- | --- | --- |
| machine_id | int | Instance ID |

Returns: True on success.

info

pause() returns as soon as the API accepts the request, but the instance transitions through Pausing → Paused in the background. If you need to resume the same instance shortly after, wait for its status to reach Paused first.

resume(machine_id: int, ...) -> Instance

Resumes a paused instance and blocks until it's running again. You can optionally change the GPU type, storage size, or other settings on resume — this is handy when you want to switch from a training GPU to a cheaper inference GPU, for example.

All parameters except machine_id are keyword-only. Any parameter you omit keeps the instance's current configuration.

# Resume with the same configuration
inst = client.instances.resume(12345)

# Resume with a different GPU and more storage
inst = client.instances.resume(12345, gpu_type="H100", storage=200)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| machine_id | int | required | Instance ID (positional) |
| gpu_type | str \| None | None | Change GPU type |
| num_gpus | int \| None | None | Change GPU count |
| storage | int \| None | None | Expand storage in GB |
| name | str \| None | None | Rename the instance on resume |
| script_id | str \| None | None | Startup script ID to run on resume |
| script_args | str \| None | None | Arguments for the startup script |
| fs_id | int \| None | None | Filesystem ID to attach |

Returns: Instance — the resumed instance.

Raises:

  • ValidationError if the instance is not in Paused status, the requested GPU isn't available in the instance's region, fs_id doesn't match an existing filesystem, or VM template has no SSH keys.
  • APIError if the resume fails.

Resume is region-locked

An instance always resumes in its original region. If you request a GPU type that isn't available in that region, a ValidationError is raised. Check Regions & GPUs to see which GPUs are available where.

Machine ID may change on resume

resume() may return an instance with a new machine_id. Always use the returned instance's ID for subsequent operations instead of the old one.

destroy(machine_id: int) -> bool

Permanently deletes an instance and its storage. This action cannot be undone. Any attached filesystems are not affected — they remain available for use with other instances.

client.instances.destroy(inst.machine_id)

| Parameter | Type | Description |
| --- | --- | --- |
| machine_id | int | Instance ID |

Returns: True on success.

rename(machine_id: int, name: str) -> bool

Renames an instance. The new name can be up to 40 characters.

client.instances.rename(12345, "fine-tuning-run-3")

| Parameter | Type | Description |
| --- | --- | --- |
| machine_id | int | Instance ID |
| name | str | New name (1–40 characters) |

Returns: True on success.

Raises: ValidationError if the name is empty or exceeds 40 characters.


SSH Keys

The ssh_keys namespace lets you manage the SSH public keys on your account. SSH keys are required for vm template instances — you'll need at least one registered before you can create a VM.

list() -> list[SSHKey]

Returns all SSH keys registered on your account.

keys = client.ssh_keys.list()
for key in keys:
    print(f"{key.key_id}: {key.key_name}")

Returns: list[SSHKey] with fields:

| Field | Type | Description |
| --- | --- | --- |
| ssh_key | str | Public key content |
| key_name | str | Key name |
| key_id | str | Unique key identifier |
| user_id | str \| None | Owner user ID |

add(ssh_key: str, key_name: str) -> bool

Registers a new SSH public key on your account. Pass the contents of your public key file (e.g. ~/.ssh/id_ed25519.pub) along with a friendly name to identify it.

client.ssh_keys.add(
    ssh_key="ssh-ed25519 AAAAC3Nza... user@host",
    key_name="my-laptop",
)

| Parameter | Type | Description |
| --- | --- | --- |
| ssh_key | str | Public key content (e.g. contents of ~/.ssh/id_ed25519.pub) |
| key_name | str | A name for this key |

Returns: True on success.

remove(key_id: str) -> bool

Removes an SSH key from your account. Use the key_id from list() to identify which key to remove.

client.ssh_keys.remove("abc123")

| Parameter | Type | Description |
| --- | --- | --- |
| key_id | str | Key ID (from list()) |

Returns: True on success.


Startup Scripts

The scripts namespace lets you manage shell scripts that run automatically when an instance launches or resumes. This is useful for installing dependencies, pulling datasets, or any other setup you want to happen automatically.

list() -> list[StartupScript]

Returns all your saved startup scripts.

scripts = client.scripts.list()
for s in scripts:
    print(f"{s.script_id}: {s.script_name}")

Returns: list[StartupScript] with fields:

| Field | Type | Description |
| --- | --- | --- |
| script_id | int | Unique script identifier |
| script_name | str \| None | Script name |

add(script: bytes | bytearray | str, name: str = "") -> bool

Uploads a new startup script. You can pass the script content as a string directly, or read bytes from a file.

# From a string
client.scripts.add(script="#!/bin/bash\npip install wandb", name="install-deps")

# From a file
with open("setup.sh", "rb") as f:
    client.scripts.add(script=f.read(), name="setup")

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| script | bytes \| bytearray \| str | required | Script content |
| name | str | "" | Script name |

Returns: True on success.

Raises: ValidationError if the script content is empty.

update(script_id: int, script: bytes | bytearray | str) -> bool

Replaces the contents of an existing startup script. The script ID stays the same, so any instances referencing it will use the updated version on their next launch or resume.

client.scripts.update(script_id=42, script="#!/bin/bash\npip install wandb torch")

| Parameter | Type | Description |
| --- | --- | --- |
| script_id | int | Script ID (from list()) |
| script | bytes \| bytearray \| str | New script content |

Returns: True on success.

Raises: ValidationError if the script content is empty.

remove(script_id: int) -> bool

Deletes a startup script from your account.

client.scripts.remove(script_id=42)

| Parameter | Type | Description |
| --- | --- | --- |
| script_id | int | Script ID |

Returns: True on success.


Filesystems

The filesystems namespace lets you manage persistent storage volumes. Unlike instance storage, filesystems survive instance pause, resume, and even destroy operations. This makes them ideal for storing datasets, model checkpoints, or anything you want to reuse across multiple instances.

list() -> list[Filesystem]

Returns all your filesystems.

filesystems = client.filesystems.list()
for fs in filesystems:
    print(f"{fs.fs_id}: {fs.fs_name} ({fs.storage} GB)")

Returns: list[Filesystem] with fields:

| Field | Type | Description |
| --- | --- | --- |
| fs_id | int | Unique filesystem identifier |
| fs_name | str \| None | Filesystem name |
| storage | int \| None | Storage size in GB |

create(fs_name: str, storage: int, deployment_id: str | None = None) -> int

Creates a new filesystem with the given name and size. The returned filesystem ID is what you'll pass to instances.create() or instances.resume() to attach it.

fs_id = client.filesystems.create(fs_name="datasets", storage=200)
print(f"Created filesystem: {fs_id}")

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| fs_name | str | required | Filesystem name (max 30 characters) |
| storage | int | required | Storage in GB (50–2048) |
| deployment_id | str \| None | None | Optional deployment ID to associate |

Returns: int — the new filesystem ID.

Raises: ValidationError if the name is empty, exceeds 30 characters, or storage is outside 50–2048 GB.

edit(fs_id: int, storage: int) -> int

Expands a filesystem to a larger size. Storage can only be increased, never decreased.

new_fs_id = client.filesystems.edit(fs_id=7, storage=500)

| Parameter | Type | Description |
| --- | --- | --- |
| fs_id | int | Filesystem ID |
| storage | int | New storage size in GB (50–2048) |

Returns: int — the filesystem ID (may be a new ID). Always use the returned value for subsequent operations.

Raises: ValidationError if storage is outside 50–2048 GB.

remove(fs_id: int) -> bool

Deletes a filesystem. Make sure no instances are currently using it before deleting.

client.filesystems.remove(fs_id=7)

| Parameter | Type | Description |
| --- | --- | --- |
| fs_id | int | Filesystem ID |

Returns: True on success.


Instance Fields

The Instance object is returned by list(), get(), create(), and resume(). Here are all the fields available on it:

| Field | Type | Description |
| --- | --- | --- |
| machine_id | int | Unique instance ID |
| name | str \| None | Instance name |
| status | str | Current status: Running, Paused, Creating, Resuming, Failed, etc. |
| gpu_type | str \| None | GPU type (e.g. A100, H100) |
| num_gpus | int \| None | Number of GPUs |
| template | str | Framework template (e.g. pytorch, vm) |
| storage_gb | int \| None | Storage in GB |
| cost | float | Session cost (or storage cost if paused) |
| runtime | str \| int | Session runtime |
| ram | int \| None | RAM in GB |
| cores | int \| None | Number of CPU cores |
| ssh_command | str \| None | Full SSH command string (available when running) |
| url | str \| None | Notebook/IDE URL (available when running) |
| vs_url | str \| None | VS Code URL |
| public_ip | str \| None | Public IP address |
| region | str \| None | Region (raw backend value via attribute access). Use .model_dump() to get display codes like IN1, IN2, EU1. |
| fs_id | int \| None | Attached filesystem ID |
| http_ports | str \| None | Custom HTTP port mappings |
| disk_type | str \| None | Disk type (e.g. ssd) |
| endpoints | list[str] \| None | Active endpoints |
| framework_id | str \| None | Framework identifier |
| version | str \| None | Framework version |
| paused_image_size | float \| None | Size of paused image in GB |
| is_reserved | bool \| None | Whether the instance is reserved |
| billing_frequency | str \| None | Billing frequency |
| deployment_id | str \| None | Associated deployment ID |
| user_id | str \| None | Owner user ID |

Error Handling

All SDK exceptions inherit from JarvislabsError. You can import them directly from the jarvislabs package.

Exception hierarchy

JarvislabsError
├── AuthError
├── NotFoundError
├── InsufficientBalanceError
├── ValidationError
├── APIError
└── SSHError
    ├── SSHConnectionError
    └── SSHAuthError

| Exception | When it's raised |
| --- | --- |
| JarvislabsError | Base class. Catch this to handle any SDK error. |
| AuthError | No API token found, or the token is invalid/expired. |
| NotFoundError | Instance, SSH key, filesystem, or other resource not found. |
| InsufficientBalanceError | Account balance too low to perform the action. |
| ValidationError | Client-side validation failure: missing GPU type, name too long, empty script, storage out of range, etc. |
| APIError | Backend or operational error that doesn't fit a specific category. Exposes .status_code (int — 0 for non-HTTP errors like timeouts), .message (str), and .error_code (str or None). |
| SSHError | SSH command parsing or execution failure. |
| SSHConnectionError | SSH transport/connectivity failure. Subclass of SSHError. |
| SSHAuthError | SSH authentication failure. Subclass of SSHError. |

Catching specific errors

If you want to handle different failure modes separately, you can catch specific exception types. This is useful in production scripts where you want to retry on transient errors but fail fast on auth issues:

from jarvislabs import Client, AuthError, NotFoundError, ValidationError, APIError

try:
    with Client() as client:
        inst = client.instances.create(gpu_type="A100")
except AuthError:
    print("Invalid or missing API token. Check your key or run: jl setup")
except ValidationError as e:
    print(f"Invalid input: {e}")
except NotFoundError as e:
    print(f"Resource not found: {e}")
except APIError as e:
    print(f"API error ({e.status_code}): {e.message}")
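To retry transient API failures while failing fast on everything else, a generic retry wrapper is one option. This is a sketch, not part of the SDK — pass it any callable plus the exception type(s) to retry:

```python
import time

def with_retries(fn, retriable, attempts=3, backoff=2.0):
    """Retry fn() on the given exception type(s) with exponential backoff;
    any other exception (e.g. AuthError) propagates immediately."""
    for attempt in range(attempts):
        try:
            return fn()
        except retriable:
            if attempt == attempts - 1:
                raise  # out of attempts
            time.sleep(backoff * (2 ** attempt))
```

For example, with_retries(lambda: client.instances.create(gpu_type="A100"), retriable=APIError) retries creation a few times but lets AuthError or ValidationError surface at once.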

Catch-all

For simpler scripts, you can catch JarvislabsError as a catch-all for any SDK error:

from jarvislabs import Client, JarvislabsError

try:
    with Client() as client:
        client.instances.pause(99999)
except JarvislabsError as e:
    print(f"SDK error: {e}")

Examples

Pause all running instances

A quick way to stop all compute billing. This loops through your instances and pauses any that are currently running:

from jarvislabs import Client

with Client() as client:
    for inst in client.instances.list():
        if inst.status == "Running":
            client.instances.pause(inst.machine_id)
            print(f"Paused {inst.machine_id} ({inst.name})")

Create an instance with a startup script and filesystem

This example shows how to set up a complete training environment: upload a startup script that installs your dependencies, create a persistent filesystem for datasets, and launch an instance with both attached:

from jarvislabs import Client

with Client() as client:
    # Upload a startup script that installs your dependencies
    client.scripts.add(
        script="#!/bin/bash\npip install wandb tensorboard",
        name="install-deps",
    )

    # Find the script ID by name
    scripts = client.scripts.list()
    script_id = next(s.script_id for s in scripts if s.script_name == "install-deps")

    # Create a persistent filesystem for your datasets
    fs_id = client.filesystems.create(fs_name="datasets", storage=200)

    # Launch an instance with the startup script and filesystem attached
    # Note: script_id must be passed as a string
    inst = client.instances.create(
        gpu_type="A100",
        storage=100,
        name="training",
        script_id=str(script_id),
        fs_id=fs_id,
    )
    print(f"Instance {inst.machine_id} is running")
    print(f"SSH: {inst.ssh_command}")

script_id type mismatch

scripts.list() returns script_id as int, but instances.create() expects script_id as str | None. Remember to convert it with str(script_id) when passing it.

Multi-GPU instance

If your workload needs multiple GPUs (for distributed training, large model fine-tuning, etc.), set the num_gpus parameter. The available GPU counts depend on the region — check Regions & GPUs for details.

from jarvislabs import Client

with Client() as client:
    # Create an 8x H100 instance in EU1 for large-scale training
    inst = client.instances.create(
        gpu_type="H100",
        num_gpus=8,
        storage=500,
        name="distributed-training",
        region="EU1",
    )
    print(f"Instance: {inst.name}")
    print(f"GPUs: {inst.num_gpus}x {inst.gpu_type}")
    print(f"SSH: {inst.ssh_command}")

Resume with different hardware

A common pattern is to train on a powerful GPU and then switch to something cheaper for inference or debugging. Since resume lets you change the GPU type, you can do this without losing your instance state:

from jarvislabs import Client

with Client() as client:
    # Pause the expensive training instance
    client.instances.pause(12345)

    # Important: wait for the instance to finish pausing before resuming.
    # pause() returns immediately, but the instance transitions through
    # Pausing -> Paused in the background.

    # Resume later with a cheaper GPU for inference
    inst = client.instances.resume(12345, gpu_type="RTX5000")

    # The machine_id may change on resume - always use the returned value
    print(f"Resumed with new machine ID: {inst.machine_id}")

warning

pause() returns immediately on API success, but the instance needs time to transition to Paused status. Calling resume() too quickly may raise ValidationError. In practice, wait a few seconds or poll the instance status before resuming.
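Polling can be wrapped in a small helper. This is a hedged sketch, not an SDK API — get_status is any zero-argument callable, e.g. lambda: client.instances.get(machine_id).status:

```python
import time

def wait_for_status(get_status, target="Paused", timeout=120, interval=3):
    """Poll get_status() until it returns `target` or the timeout expires.
    Returns True once the target status is observed, False on timeout."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if get_status() == target:
            return True
        time.sleep(interval)
    return False
```

Then wait_for_status(lambda: client.instances.get(12345).status) before calling resume().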

Check GPU availability before creating

Before creating an instance, you might want to see what GPUs are available right now and pick the best option. This example finds all available GPUs and creates an instance on the cheapest one:

from jarvislabs import Client

with Client() as client:
    gpus = client.account.gpu_availability()
    available = [g for g in gpus if g.num_free_devices > 0]

    # Print what's available
    for gpu in available:
        region_code = gpu.model_dump()["region"]
        print(f"{gpu.gpu_type} ({region_code}): {gpu.num_free_devices} free, ${gpu.price_per_hour}/hr")

    # Create on the cheapest available GPU
    if available:
        cheapest = min(available, key=lambda g: g.price_per_hour or float("inf"))
        cheapest_region = cheapest.model_dump()["region"]
        inst = client.instances.create(gpu_type=cheapest.gpu_type, region=cheapest_region)
        print(f"Created on {inst.gpu_type}: {inst.ssh_command}")
    else:
        print("No GPUs available right now - try again shortly")

Check balance and resource usage

A quick dashboard of your account status — balance, currency, and how many resources you have running:

from jarvislabs import Client

with Client() as client:
    bal = client.account.balance()
    metrics = client.account.resource_metrics()
    currency = client.account.currency()

    print(f"Balance: {bal.balance} {currency} (Grants: {bal.grants} {currency})")
    print(f"Running: {metrics.running_instances} instances, {metrics.running_vms} VMs")
    print(f"Paused: {metrics.paused_instances} instances, {metrics.paused_vms} VMs")

Persistent datasets with filesystems

Filesystems persist independently of instances. This means you can create a filesystem, load your datasets onto it, destroy the instance, and then attach the same filesystem to a completely different instance later. Here's how that workflow looks:

from jarvislabs import Client

with Client() as client:
    # Create a 500 GB filesystem for your datasets
    fs_id = client.filesystems.create(fs_name="datasets", storage=500)

    # Create an instance and attach the filesystem
    inst = client.instances.create(
        gpu_type="A100",
        fs_id=fs_id,
        name="training",
    )
    print(f"Filesystem mounted - upload your data via SSH: {inst.ssh_command}")

    # ... train your model, save checkpoints to the filesystem ...

    # Destroy the instance - the filesystem and its data persist
    client.instances.destroy(inst.machine_id)

    # Later, create a new instance with the same filesystem
    # Your datasets and checkpoints are still there
    inst = client.instances.create(
        gpu_type="RTX5000",
        fs_id=fs_id,
        name="inference",
    )
    print(f"Same data, different GPU: {inst.ssh_command}")

VM instance with SSH keys

VM instances give you a bare Linux environment without JupyterLab or any pre-installed frameworks. They require at least one SSH key registered on your account. Here's how to set that up:

from pathlib import Path
from jarvislabs import Client

with Client() as client:
    # Register your SSH public key (required for VM instances)
    pubkey = Path("~/.ssh/id_ed25519.pub").expanduser().read_text().strip()
    client.ssh_keys.add(ssh_key=pubkey, key_name="my-laptop")

    # Create a VM instance - only available in IN2 and EU1
    inst = client.instances.create(
        gpu_type="H100",
        template="vm",
        name="my-vm",
        region="EU1",
    )
    print(f"SSH: {inst.ssh_command}")