Overview
This guide shows how to operate the Linkerweb control plane programmatically: provision compute, networking, storage, and managed databases through a first-class SDK experience similar to that of the major cloud providers.
- Create and manage CPU instances (dedicated and virtual)
- Design networks (VPCs, subnets, firewall rules, public IPs)
- Use object storage (buckets, objects, lifecycle policies)
- Provision managed databases (PostgreSQL/MySQL) with backups
- Read usage and billing information for automation and FinOps workflows
Core concepts
Regions & availability zones
Resources are created in a region. Some resources are zonal (instances, volumes), others are regional (VPCs).
Projects
Projects provide isolation boundaries for access control, quotas, billing attribution, and tagging strategy.
Identifiers
Resources have stable IDs (e.g., srv_..., vpc_...) and human-friendly names.
Tags
Use tags to group resources by environment, cost center, service, or owner.
Quickstart
In under two minutes, you will create a CPU instance, wait until it becomes reachable, and print its public IP.
1. Set credentials

Store API credentials in environment variables so they never land in source control. On Windows, use setx (takes effect in new shells only); on macOS/Linux, use export instead.

setx LINKERWEB_ACCESS_KEY_ID "YOUR_ACCESS_KEY_ID"
setx LINKERWEB_SECRET_ACCESS_KEY "YOUR_SECRET_ACCESS_KEY"
setx LINKERWEB_REGION "us-west-1"

2. Install the SDK

python -m pip install linkerweb-sdk

3. Run the provisioning script

from linkerweb import LinkerClient
from pathlib import Path

client = LinkerClient()
image = client.images.list(family="ubuntu", version="24.04")[0]
network = client.networks.get_default()
key = client.ssh_keys.import_public_key(
    name="workstation",
    # open() does not expand "~", so resolve the home directory explicitly.
    public_key=(Path.home() / ".ssh" / "id_rsa.pub").read_text(encoding="utf-8").strip(),
)
instance = client.compute.instances.create(
    name="api-cpu-1",
    flavor="c2-standard-4",
    image_id=image["id"],
    network_id=network["id"],
    ssh_key_id=key["id"],
    tags={"env": "prod", "service": "api"},
)
client.waiters.instance_running(instance["id"], timeout_seconds=900)
ip = client.compute.instances.get(instance["id"])["public_ipv4"]
print("Instance ready:", instance["id"], ip)
Authentication
The control plane supports API key authentication for SDK and REST access. Credentials map to a principal with scoped permissions enforced by policy.
Credential sources
| Source | How it works | Recommended |
|---|---|---|
| Environment variables | LINKERWEB_ACCESS_KEY_ID and LINKERWEB_SECRET_ACCESS_KEY | Yes |
| Local credentials file | ~/.linkerweb/credentials with named profiles | Yes (developer workstations) |
| Explicit parameters | Pass keys when constructing the client | No (avoid hardcoding) |
- Never embed long-lived secrets in front-end code.
- Rotate keys regularly and revoke unused credentials.
- Prefer least-privilege policies (project scoping + action-level permissions).
SDK Concepts
Clients and services
The SDK exposes a single LinkerClient with service namespaces such as
compute, networks, storage, and databases.
Operations return plain Python dictionaries to keep serialization transparent.
Retries and backoff
The SDK automatically retries transient failures (network timeouts, 5xx responses) with exponential backoff. You can tune retry behavior per client.
from linkerweb import LinkerClient, RetryConfig
client = LinkerClient(
retry=RetryConfig(
max_attempts=6,
base_delay_ms=250,
max_delay_ms=4000,
jitter=True,
)
)
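The retry schedule above can be reproduced in plain Python, independent of the SDK. This is a sketch of capped exponential backoff with full jitter, assuming the common formula delay = min(max_delay, base * 2^attempt):

```python
import random

def backoff_delays(max_attempts, base_delay_ms, max_delay_ms, jitter=True):
    """Yield one delay (in ms) per attempt, doubling until capped."""
    for attempt in range(max_attempts):
        delay = min(max_delay_ms, base_delay_ms * (2 ** attempt))
        # Full jitter spreads retries out to avoid thundering herds.
        yield random.uniform(0, delay) if jitter else delay

# With the RetryConfig values above (jitter disabled for readability):
print(list(backoff_delays(6, 250, 4000, jitter=False)))
# [250, 500, 1000, 2000, 4000, 4000]
```

Jitter trades predictability for smoother load on the API when many clients retry simultaneously.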
Waiters
Waiters poll resource state until a target condition is met (e.g., instance becomes running).
They are designed for automation pipelines and safe orchestration.
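A waiter is essentially a bounded poll loop. Here is a minimal, SDK-independent sketch; the fetch and is_done callables stand in for a real get call and status check:

```python
import time

def wait_until(fetch, is_done, timeout_seconds=900, interval_seconds=5.0):
    """Poll fetch() until is_done(state) is true or the deadline passes."""
    deadline = time.monotonic() + timeout_seconds
    while True:
        state = fetch()
        if is_done(state):
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"condition not met within {timeout_seconds}s")
        time.sleep(interval_seconds)

# Example: a fake resource that becomes "running" on the third poll.
states = iter(["provisioning", "provisioning", "running"])
result = wait_until(lambda: next(states), lambda s: s == "running", interval_seconds=0)
print(result)  # running
```

A production waiter would also back off between polls rather than use a fixed interval.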
Install & Configure (Python)
python -m pip install linkerweb-sdk
Profiles
Create a profile to switch between environments or accounts.
# ~/.linkerweb/credentials
[default]
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-west-1
[production]
access_key_id = YOUR_ACCESS_KEY_ID
secret_access_key = YOUR_SECRET_ACCESS_KEY
region = us-east-1
Client Initialization
The client loads credentials in this order: explicit parameters → environment variables → credentials file.
from linkerweb import LinkerClient
# Uses env vars or ~/.linkerweb/credentials
client = LinkerClient()
# Or select a named profile
client = LinkerClient(profile="production")
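The lookup order above (explicit parameters, then environment variables, then the credentials file) amounts to a first-non-empty chain. A sketch with placeholder loaders; the file_profile dict stands in for a parsed ~/.linkerweb/credentials section:

```python
import os

def resolve_access_key(explicit=None, env=None, file_profile=None):
    """Return (key, source) from the first credential source that yields a value."""
    env = os.environ if env is None else env
    if explicit:
        return explicit, "explicit"
    if env.get("LINKERWEB_ACCESS_KEY_ID"):
        return env["LINKERWEB_ACCESS_KEY_ID"], "environment"
    if file_profile and file_profile.get("access_key_id"):
        return file_profile["access_key_id"], "credentials file"
    raise RuntimeError("no credentials found")

# Environment wins here because no explicit key is passed.
print(resolve_access_key(env={"LINKERWEB_ACCESS_KEY_ID": "AK123"}))
# ('AK123', 'environment')
```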
Custom endpoints
For private connectivity, you can override the API endpoint. TLS is always required.
client = LinkerClient(
endpoint="https://api.linkerweb.com",
region="us-west-1",
)
Create a CPU Instance (Python)
This walkthrough provisions a dedicated CPU instance with a hardened network policy and SSH access.
1) Choose an image and flavor
images = client.images.list(family="ubuntu", version="24.04")
image_id = images[0]["id"]
flavors = client.compute.flavors.list(category="cpu")
flavor = next(f for f in flavors if f["name"] == "c2-standard-4")
2) Create a network and firewall rule
vpc = client.networks.vpcs.create(
name="prod-vpc",
cidr_block="10.42.0.0/16",
)
subnet = client.networks.subnets.create(
name="prod-subnet-a",
vpc_id=vpc["id"],
cidr_block="10.42.1.0/24",
zone="us-west-1a",
)
fw = client.networks.firewalls.create(name="prod-ssh-only")
client.networks.firewalls.add_ingress_rule(
firewall_id=fw["id"],
protocol="tcp",
port_range="22",
source_cidr="203.0.113.0/24",
description="SSH from office network",
)
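Before locking SSH ingress to a CIDR like 203.0.113.0/24, you can sanity-check locally that your workstation's address falls inside it using Python's standard ipaddress module:

```python
import ipaddress

def cidr_allows(source_cidr: str, address: str) -> bool:
    """True if address is inside the firewall rule's source CIDR."""
    return ipaddress.ip_address(address) in ipaddress.ip_network(source_cidr)

print(cidr_allows("203.0.113.0/24", "203.0.113.17"))  # True
print(cidr_allows("203.0.113.0/24", "198.51.100.9"))  # False
```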
3) Import an SSH public key
from pathlib import Path

ssh_key = client.ssh_keys.import_public_key(
    name="workstation",
    # open() does not expand "~", so resolve the home directory explicitly.
    public_key=(Path.home() / ".ssh" / "id_rsa.pub").read_text(encoding="utf-8").strip(),
)
4) Create the instance
instance = client.compute.instances.create(
name="cpu-prod-1",
flavor=flavor["name"],
image_id=image_id,
subnet_id=subnet["id"],
firewall_ids=[fw["id"]],
ssh_key_id=ssh_key["id"],
root_volume_gb=80,
enable_public_ipv4=True,
tags={
"env": "prod",
"owner": "platform",
"service": "core-api",
"cost_center": "cc-042",
},
)
5) Wait for readiness and connect
client.waiters.instance_running(instance["id"], timeout_seconds=900)
details = client.compute.instances.get(instance["id"])
print("Instance:", details["id"])
print("Public IPv4:", details["public_ipv4"])
print("SSH user:", details.get("ssh_user", "ubuntu"))
# ssh ubuntu@PUBLIC_IP
For fleets, model instances as immutable units: bake images, deploy via tags, and roll forward on change. Avoid manual drift.
Manage Instances
List instances
for inst in client.compute.instances.list(status="running", limit=50):
print(inst["id"], inst["name"], inst["public_ipv4"])
Stop / start / reboot
instance_id = "srv_1234567890"
client.compute.instances.stop(instance_id)
client.waiters.instance_stopped(instance_id, timeout_seconds=600)
client.compute.instances.start(instance_id)
client.waiters.instance_running(instance_id, timeout_seconds=900)
client.compute.instances.reboot(instance_id, mode="soft")
Attach a volume
vol = client.storage.volumes.create(
name="data-volume",
zone="us-west-1a",
size_gb=200,
type="ssd",
)
client.compute.instances.attach_volume(
instance_id=instance_id,
volume_id=vol["id"],
device="/dev/sdb",
)
Create a snapshot
snap = client.storage.snapshots.create_from_volume(
volume_id=vol["id"],
name="data-volume-snap-001",
)
client.waiters.snapshot_available(snap["id"], timeout_seconds=1200)
Terminate
client.compute.instances.terminate(instance_id)
Networking
Build private networks with controlled ingress and route traffic via public IPs, NAT gateways, and load balancers.
Create a public IP and associate it
eip = client.networks.public_ips.allocate(name="api-eip")
client.compute.instances.associate_public_ip(
instance_id="srv_1234567890",
public_ip_id=eip["id"],
)
Security groups (firewall policies)
fw = client.networks.firewalls.create(name="web-tier")
client.networks.firewalls.add_ingress_rule(
firewall_id=fw["id"],
protocol="tcp",
port_range="443",
source_cidr="0.0.0.0/0",
description="HTTPS",
)
Object Storage
Object storage is ideal for artifacts, backups, logs, and static content. Bucket names must be unique within your account.
Create a bucket
bucket = client.storage.buckets.create(
name="prod-artifacts",
region="us-west-1",
versioning=True,
)
Upload and download objects
client.storage.put_object(
bucket="prod-artifacts",
key="releases/app-1.9.2.tar.gz",
file_path="./dist/app-1.9.2.tar.gz",
content_type="application/gzip",
)
client.storage.get_object(
bucket="prod-artifacts",
key="releases/app-1.9.2.tar.gz",
file_path="./downloads/app-1.9.2.tar.gz",
)
Managed Databases
Provision managed database instances with automated backups, maintenance windows, and optional high availability.
Create a PostgreSQL instance
db = client.databases.instances.create(
name="orders-db",
engine="postgres",
engine_version="16",
size_gb=200,
instance_class="db-cpu-4",
ha=True,
subnet_id="subnet_1234567890",
backup_retention_days=14,
maintenance_window="sun:02:00-sun:03:00",
tags={"env": "prod", "service": "orders"},
)
client.waiters.database_available(db["id"], timeout_seconds=1800)
endpoint = client.databases.instances.get(db["id"])["endpoint"]
print("DB endpoint:", endpoint)
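Maintenance windows use a day:HH:MM-day:HH:MM form (e.g. sun:02:00-sun:03:00). A small helper to split one apart, assuming that exact format:

```python
def parse_maintenance_window(window: str):
    """Split 'sun:02:00-sun:03:00' into ((day, time), (day, time)) tuples."""
    start, end = window.split("-")

    def split_part(part):
        day, hh, mm = part.split(":")
        return day, f"{hh}:{mm}"

    return split_part(start), split_part(end)

print(parse_maintenance_window("sun:02:00-sun:03:00"))
# (('sun', '02:00'), ('sun', '03:00'))
```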
Create a read replica
replica = client.databases.replicas.create(
source_instance_id=db["id"],
name="orders-db-ro-1",
)
client.waiters.database_available(replica["id"], timeout_seconds=1800)
REST Basics
The REST API is resource-oriented and uses JSON over HTTPS. Requests must include authentication headers.
Responses include a stable request_id for troubleshooting and audit correlation.
Example request
POST /v1/compute/instances HTTP/1.1
Host: api.linkerweb.com
Authorization: Bearer <token>
Content-Type: application/json
Idempotency-Key: 2f3b2c9a-1e2f-4f9b-bad3-3bd22d1a6e8a
{
"name": "cpu-prod-1",
"flavor": "c2-standard-4",
"image_id": "img_abc123",
"subnet_id": "subnet_def456",
"enable_public_ipv4": true,
"tags": { "env": "prod", "service": "core-api" }
}
Example response
{
"request_id": "req_01HV2Q1J6G9G8FQY9P9V8YQG2N",
"data": {
"id": "srv_1234567890",
"status": "provisioning",
"created_at": "2026-02-28T10:12:31Z"
}
}
Pagination
List endpoints support cursor-based pagination for stable iteration.
{
"data": [ ... ],
"next_cursor": "cursor_01HV2Q..."
}
items = []
cursor = None
while True:
page = client.compute.instances.list(limit=100, cursor=cursor)
items.extend(page["data"])
cursor = page.get("next_cursor")
if not cursor:
break
Idempotency
For create operations, provide an idempotency key to make retries safe and avoid duplicate resources. The SDK generates a key automatically and reuses it across retries, but you can supply your own.
client.compute.instances.create(
name="cpu-prod-1",
flavor="c2-standard-4",
image_id="img_abc123",
subnet_id="subnet_def456",
idempotency_key="a9b2f7f2-0c2f-4a10-93b1-7c9d4b2d5c11",
)
Errors
Errors are returned as structured JSON with a machine-readable code and a human-friendly message.
{
"request_id": "req_01HV2Q...",
"error": {
"code": "quota_exceeded",
"message": "Instance quota exceeded for flavor c2-standard-4 in region us-west-1",
"details": { "quota": 20, "used": 20 }
}
}
Python exception handling
from linkerweb import LinkerClient
from linkerweb.errors import (
AuthenticationError,
PermissionDeniedError,
RateLimitError,
ValidationError,
ApiError,
)
client = LinkerClient()
try:
client.compute.instances.create(
name="cpu-prod-1",
flavor="c2-standard-4",
image_id="img_abc123",
subnet_id="subnet_def456",
)
except AuthenticationError:
print("Invalid credentials or expired token.")
except PermissionDeniedError:
print("The principal is not allowed to perform this action.")
except ValidationError as e:
print("Invalid request:", e)
except RateLimitError:
print("Too many requests. Back off and retry.")
except ApiError as e:
print("Unexpected API error:", e)
Rate Limits
To protect platform stability, API calls are rate-limited per account and per token. When a request is throttled,
you will receive HTTP 429 with retry guidance.
| Header | Description |
|---|---|
| Retry-After | Seconds to wait before retrying |
| X-RateLimit-Remaining | Remaining tokens in the current window |
| X-RateLimit-Reset | Unix timestamp when the window resets |
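A client that honors the Retry-After header might look like the following sketch; the headers dict stands in for whatever your HTTP library returns on a 429 response:

```python
def retry_after_seconds(headers, default=1.0):
    """Read Retry-After from a throttled response, falling back to a default."""
    value = headers.get("Retry-After")
    try:
        return max(0.0, float(value))
    except (TypeError, ValueError):
        # Header missing or unparseable: use a conservative default.
        return default

print(retry_after_seconds({"Retry-After": "3"}))  # 3.0
print(retry_after_seconds({}))                    # 1.0
```

Sleep for the returned duration before retrying, and combine it with the backoff behavior described under SDK Concepts.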
Load Balancers
Distribute incoming traffic across multiple instances to improve availability and handle increased load. Load balancers support health checks, SSL termination, and session affinity.
Create a load balancer
lb = client.networks.load_balancers.create(
name="api-lb",
scheme="internet-facing",
listeners=[
{
"protocol": "HTTP",
"port": 80,
"target_group": "api-backend",
},
{
"protocol": "HTTPS",
"port": 443,
"certificate_id": "cert_1234567890",
"target_group": "api-backend",
},
],
health_check={
"protocol": "HTTP",
"port": 8080,
"path": "/health",
"interval_seconds": 30,
"timeout_seconds": 5,
"healthy_threshold": 2,
"unhealthy_threshold": 3,
},
tags={"env": "prod", "service": "api"},
)
client.waiters.load_balancer_available(lb["id"], timeout_seconds=600)
print("Load balancer DNS:", lb["dns_name"])
Attach instances to a target group
client.networks.load_balancers.register_targets(
load_balancer_id=lb["id"],
target_group="api-backend",
instance_ids=["srv_1234567890", "srv_0987654321"],
)
Auto-scaling
Automatically adjust the number of instances based on demand. Configure scaling policies based on CPU, memory, network metrics, or custom application metrics.
Create an auto-scaling group
asg = client.compute.auto_scaling_groups.create(
name="api-asg",
min_size=2,
max_size=10,
desired_capacity=3,
launch_template={
"image_id": "img_abc123",
"flavor": "c2-standard-4",
"subnet_id": "subnet_def456",
"user_data": "#!/bin/bash\napt-get update && apt-get install -y nginx",
},
target_groups=["api-backend"],
health_check_type="ELB",
health_check_grace_period=300,
tags={"env": "prod", "service": "api"},
)
Configure scaling policies
# Scale up when CPU > 70%
client.compute.auto_scaling_groups.put_scaling_policy(
auto_scaling_group_id=asg["id"],
policy_name="scale-up-cpu",
policy_type="TargetTrackingScaling",
target_tracking_config={
"predefined_metric_specification": {
"predefined_metric_type": "ASGAverageCPUUtilization",
},
"target_value": 70.0,
},
estimated_instance_warmup=300,
)
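Target tracking scales capacity roughly in proportion to the ratio of the observed metric to the target. A simplified sketch of that arithmetic, clamped to the group bounds; the real service also applies cooldowns and the instance warmup configured above:

```python
import math

def target_tracking_capacity(current, metric_value, target_value, min_size, max_size):
    """Proportional capacity estimate for a target-tracking policy."""
    desired = math.ceil(current * metric_value / target_value)
    return max(min_size, min(max_size, desired))

# 3 instances at 85% average CPU against a 70% target -> scale to 4.
print(target_tracking_capacity(3, 85.0, 70.0, min_size=2, max_size=10))  # 4
```

Note the ceiling: target tracking rounds up so the group never undershoots the target for long.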
Monitoring & Metrics
Monitor resource health, performance, and costs using built-in metrics and custom alarms. Metrics are available at 1-minute granularity and retained for 15 months.
Query instance metrics
metrics = client.monitoring.get_metric_statistics(
namespace="Compute/Instances",
metric_name="CPUUtilization",
dimensions={"InstanceId": "srv_1234567890"},
start_time="2026-02-28T00:00:00Z",
end_time="2026-02-28T23:59:59Z",
period=300,
statistics=["Average", "Maximum"],
)
for point in metrics["datapoints"]:
print(f"{point['timestamp']}: avg={point['average']:.2f}%, max={point['maximum']:.2f}%")
Create a metric alarm
alarm = client.monitoring.put_metric_alarm(
alarm_name="high-cpu-api-instance",
alarm_description="Alert when CPU exceeds 80%",
metric_name="CPUUtilization",
namespace="Compute/Instances",
statistic="Average",
period=300,
evaluation_periods=2,
threshold=80.0,
comparison_operator="GreaterThanThreshold",
dimensions={"InstanceId": "srv_1234567890"},
alarm_actions=["arn:linkerweb:sns:us-west-1:123456789012:alerts"],
)
Performance Optimization
Best practices for optimizing application performance, reducing latency, and minimizing costs.
Instance placement
Use placement groups to launch instances close together for low-latency networking; this is ideal for high-performance computing and tightly coupled workloads.
pg = client.compute.placement_groups.create(
name="hpc-cluster",
strategy="cluster",
)
instance = client.compute.instances.create(
name="hpc-node-1",
flavor="c2-standard-16",
image_id="img_abc123",
subnet_id="subnet_def456",
placement_group_id=pg["id"],
)
Caching strategies
Use managed Redis or Memcached for application-level caching. For static content, serve through a CDN or from object storage with public read access.
Database optimization
- Enable query performance insights to identify slow queries
- Use read replicas to distribute read traffic
- Configure connection pooling to reduce overhead
- Enable automated backups during off-peak hours
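Connection pooling, mentioned above, amounts to reusing a bounded set of connections instead of opening one per request. A minimal generic sketch using a queue; a real deployment would use a pooler such as pgbouncer or your driver's built-in pool:

```python
import queue

class ConnectionPool:
    """Hand out at most max_size connections, reusing released ones."""

    def __init__(self, factory, max_size=5):
        self._idle = queue.Queue()
        self._factory = factory
        self._created = 0
        self._max_size = max_size

    def acquire(self):
        try:
            return self._idle.get_nowait()  # reuse an idle connection
        except queue.Empty:
            if self._created >= self._max_size:
                return self._idle.get()  # block until one is released
            self._created += 1
            return self._factory()

    def release(self, conn):
        self._idle.put(conn)

# Two sequential borrowers share a single underlying connection.
pool = ConnectionPool(factory=lambda: object(), max_size=2)
a = pool.acquire()
pool.release(a)
b = pool.acquire()
print(a is b)  # True
```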
Billing & Usage
Retrieve usage signals for reporting, cost allocation, and anomaly detection. Usage is emitted at a fine granularity and can be aggregated by tags.
Get current month usage
usage = client.billing.usage.summary(
from_date="2026-02-01",
to_date="2026-02-28",
group_by=["service", "tag:env", "tag:cost_center"],
)
for row in usage["rows"]:
print(row["service"], row["amount_usd"], row["dimensions"])
List invoices
for inv in client.billing.invoices.list(limit=20):
print(inv["invoice_id"], inv["period"], inv["total_usd"], inv["status"])
Webhooks
Webhooks let you subscribe to platform events (instance lifecycle, database backups, billing alerts). Use signature verification to ensure events are authentic.
Create a webhook endpoint
endpoint = client.webhooks.endpoints.create(
name="prod-events",
url="https://example.com/linkerweb/webhooks",
events=[
"compute.instance.running",
"compute.instance.stopped",
"databases.backup.completed",
"billing.invoice.finalized",
],
)
Verify webhook signatures (Flask)
from flask import Flask, request
from linkerweb import Webhook
app = Flask(__name__)
@app.post("/linkerweb/webhooks")
def handle_webhook():
signature = request.headers.get("X-Linkerweb-Signature", "")
timestamp = request.headers.get("X-Linkerweb-Timestamp", "")
payload = request.get_data()
event = Webhook.verify_and_parse(
payload=payload,
signature=signature,
timestamp=timestamp,
secret="YOUR_WEBHOOK_SIGNING_SECRET",
tolerance_seconds=300,
)
# Route events by type
if event["type"] == "compute.instance.running":
instance_id = event["data"]["instance_id"]
print("Instance is running:", instance_id)
return {"ok": True}
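Under the hood, webhook verification of this kind is typically an HMAC-SHA256 over the timestamp and raw payload, compared in constant time. The exact signing scheme below (timestamp.payload, hex digest) is an assumption for illustration; the SDK's Webhook helper implements the authoritative one:

```python
import hashlib
import hmac
import time

def verify_signature(payload: bytes, signature: str, timestamp: str,
                     secret: str, tolerance_seconds: int = 300) -> bool:
    """Recompute the HMAC and reject stale or mismatched events."""
    if abs(time.time() - float(timestamp)) > tolerance_seconds:
        return False  # replayed or badly clock-skewed event
    signed = f"{timestamp}.".encode() + payload
    expected = hmac.new(secret.encode(), signed, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature)

ts = str(int(time.time()))
body = b'{"type": "compute.instance.running"}'
sig = hmac.new(b"whsec_test", f"{ts}.".encode() + body, hashlib.sha256).hexdigest()
print(verify_signature(body, sig, ts, "whsec_test"))  # True
```

Signing over the timestamp as well as the body is what makes the tolerance check meaningful: an attacker cannot replay an old payload with a fresh timestamp.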
SDK Method Reference
Complete reference for all SDK methods organized by service namespace.
Compute Service
| Method | Description |
|---|---|
| client.compute.instances.create() | Create a new compute instance |
| client.compute.instances.get(id) | Get instance details by ID |
| client.compute.instances.list() | List all instances with optional filters |
| client.compute.instances.start(id) | Start a stopped instance |
| client.compute.instances.stop(id) | Stop a running instance |
| client.compute.instances.reboot(id) | Reboot an instance |
| client.compute.instances.terminate(id) | Terminate an instance |
| client.compute.flavors.list() | List available instance flavors |
| client.images.list() | List available OS images |
Networking Service
| Method | Description |
|---|---|
| client.networks.vpcs.create() | Create a VPC |
| client.networks.subnets.create() | Create a subnet |
| client.networks.firewalls.create() | Create a firewall policy |
| client.networks.public_ips.allocate() | Allocate a public IP address |
| client.networks.load_balancers.create() | Create a load balancer |
Storage Service
| Method | Description |
|---|---|
| client.storage.volumes.create() | Create a block storage volume |
| client.storage.buckets.create() | Create an object storage bucket |
| client.storage.put_object() | Upload an object to a bucket |
| client.storage.get_object() | Download an object from a bucket |
Databases Service
| Method | Description |
|---|---|
| client.databases.instances.create() | Create a managed database instance |
| client.databases.replicas.create() | Create a read replica |
| client.databases.backups.create() | Create a manual backup |
REST API Reference
Complete REST API endpoint reference with request/response examples.
Compute Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/compute/instances | Create an instance |
| GET | /v1/compute/instances | List instances |
| GET | /v1/compute/instances/{id} | Get instance details |
| POST | /v1/compute/instances/{id}/start | Start instance |
| POST | /v1/compute/instances/{id}/stop | Stop instance |
| DELETE | /v1/compute/instances/{id} | Terminate instance |
Networking Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/networks/vpcs | Create a VPC |
| POST | /v1/networks/subnets | Create a subnet |
| POST | /v1/networks/firewalls | Create a firewall |
| POST | /v1/networks/public-ips | Allocate public IP |
| POST | /v1/networks/load-balancers | Create load balancer |
Storage Endpoints
| Method | Endpoint | Description |
|---|---|---|
| POST | /v1/storage/volumes | Create a volume |
| POST | /v1/storage/buckets | Create a bucket |
| PUT | /v1/storage/buckets/{name}/objects/{key} | Upload object |
| GET | /v1/storage/buckets/{name}/objects/{key} | Download object |
Authentication
All REST API requests must include authentication headers:
Authorization: Bearer YOUR_ACCESS_TOKEN
Content-Type: application/json
Idempotency-Key: {uuid}   # Recommended for mutating requests (POST/PUT/DELETE)
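Building these headers by hand for a raw HTTP call is straightforward with the standard uuid module; the token value here is a placeholder:

```python
import uuid

def request_headers(access_token: str) -> dict:
    """Headers for a mutating REST call, with a fresh idempotency key."""
    return {
        "Authorization": f"Bearer {access_token}",
        "Content-Type": "application/json",
        "Idempotency-Key": str(uuid.uuid4()),
    }

headers = request_headers("YOUR_ACCESS_TOKEN")
print(sorted(headers))  # ['Authorization', 'Content-Type', 'Idempotency-Key']
```

Reuse the same Idempotency-Key when retrying a failed request, and generate a new one for each distinct logical operation.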
Response Format
Successful responses return HTTP 200-299 with JSON body:
{
"request_id": "req_01HV2Q1J6G9G8FQY9P9V8YQG2N",
"data": { ... }
}
Error responses return HTTP 400+ with error details:
{
"request_id": "req_01HV2Q1J6G9G8FQY9P9V8YQG2N",
"error": {
"code": "validation_error",
"message": "Invalid parameter: flavor",
"details": { "field": "flavor", "reason": "not_found" }
}
}
Troubleshooting
Common issues
Authentication failed
Confirm your access key and secret key, region, and system clock. Rotate and re-test if needed.
Instance stuck in provisioning
Check quota, flavor capacity in the zone, and network requirements. Use the instance events endpoint for details.
SSH connection refused
Validate firewall ingress on port 22, correct SSH user, and that the public IP is attached to the instance.
Rate limited (429)
Back off using Retry-After. Prefer batch APIs and pagination. Avoid tight polling loops.
Enable SDK debug logging
import logging
logging.basicConfig(level=logging.DEBUG)
from linkerweb import LinkerClient
client = LinkerClient()