Agent Skills

vastai-data-handling

@jeremylongshore/vastai-data-handling
jeremylongshore
1,761
231 forks
Updated 3/31/2026

Manage training data and model artifacts securely on Vast.ai GPU instances. Use when transferring data to instances, managing checkpoints, or implementing secure data lifecycle on rented hardware. Trigger with phrases like "vastai data", "vastai upload data", "vastai checkpoints", "vastai data security", "vastai artifacts".

Installation

$ npx agent-skills-cli install @jeremylongshore/vastai-data-handling

  • Claude Code
  • Cursor
  • Copilot
  • Codex
  • Antigravity

Details

Path: plugins/saas-packs/vastai-pack/skills/vastai-data-handling/SKILL.md
Branch: main
Scoped Name: @jeremylongshore/vastai-data-handling

Usage

After installing, this skill will be available to your AI coding assistant.

Verify installation:

npx agent-skills-cli list

Skill Instructions


name: vastai-data-handling
description: |
  Manage training data and model artifacts securely on Vast.ai GPU instances. Use when transferring data to instances, managing checkpoints, or implementing secure data lifecycle on rented hardware. Trigger with phrases like "vastai data", "vastai upload data", "vastai checkpoints", "vastai data security", "vastai artifacts".
allowed-tools: Read, Write, Edit, Bash(vastai:*), Bash(ssh:*), Bash(scp:*)
version: 1.0.0
license: MIT
author: Jeremy Longshore <jeremy@intentsolutions.io>
compatible-with: claude-code, codex, openclaw
tags: [saas, vast-ai, compliance, data]


Vast.ai Data Handling

Overview

Manage training data and model artifacts securely on Vast.ai GPU instances. Covers data transfer, encryption, checkpoint management, and cleanup. Critical consideration: Vast.ai instances run on shared hardware operated by third-party hosts.

Prerequisites

  • Vast.ai instance with SSH access
  • Cloud storage (S3, GCS) for persistent artifacts
  • Understanding of data sensitivity classification

Instructions

Step 1: Data Transfer Patterns

# Small datasets (<5GB): Direct SCP
scp -P $PORT -r ./data/ root@$HOST:/workspace/data/

# Large datasets (5-50GB): Compressed transfer
tar czf - ./data/ | ssh -p $PORT root@$HOST "tar xzf - -C /workspace/"

# Very large datasets (>50GB): Cloud storage staging
# Upload to S3/GCS first, then download on instance
ssh -p $PORT root@$HOST "aws s3 sync s3://bucket/dataset/ /workspace/data/"

Step 2: Encrypted Data Transfer

import subprocess, os

def encrypt_and_upload(local_path, host, port, remote_path, passphrase):
    """Encrypt data before transferring to Vast.ai instance."""
    encrypted = f"{local_path}.enc"
    # Encrypt with AES-256
    subprocess.run([
        "openssl", "enc", "-aes-256-cbc", "-salt", "-pbkdf2",
        "-in", local_path, "-out", encrypted,
        "-pass", f"pass:{passphrase}",
    ], check=True)

    # Transfer encrypted file
    subprocess.run([
        "scp", "-P", str(port), encrypted,
        f"root@{host}:{remote_path}.enc",
    ], check=True)

    # Decrypt on instance
    subprocess.run([
        "ssh", "-p", str(port), f"root@{host}",
        f"openssl enc -aes-256-cbc -d -pbkdf2 "
        f"-in {remote_path}.enc -out {remote_path} "
        f"-pass pass:{passphrase} && rm {remote_path}.enc"
    ], check=True)

    os.remove(encrypted)
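
A possible call site, assuming the passphrase is read from an environment variable rather than hard-coded (note that the function above still passes it on the remote command line, so it can briefly appear in the instance's process list):

import os

# Hypothetical host, port, and paths; take the real values from the instance's SSH connection info.
encrypt_and_upload(
    local_path="./data/train.tar",
    host="<instance-host>",
    port=12345,
    remote_path="/workspace/data/train.tar",
    passphrase=os.environ["DATASET_PASSPHRASE"],
)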

Step 3: Checkpoint to Cloud Storage

import torch, boto3, os

class CloudCheckpointManager:
    def __init__(self, s3_bucket, prefix, save_every=500):
        self.s3 = boto3.client("s3")
        self.bucket = s3_bucket
        self.prefix = prefix
        self.save_every = save_every

    def save(self, model, optimizer, step, loss):
        if step % self.save_every != 0:
            return
        local_path = f"/tmp/ckpt-{step}.pt"
        torch.save({
            "step": step, "loss": loss,
            "model": model.state_dict(),
            "optimizer": optimizer.state_dict(),
        }, local_path)
        self.s3.upload_file(local_path, self.bucket,
                           f"{self.prefix}/ckpt-{step}.pt")
        os.remove(local_path)
        print(f"Checkpoint saved: step {step}, loss {loss:.4f}")

    def load_latest(self):
        resp = self.s3.list_objects_v2(Bucket=self.bucket, Prefix=self.prefix)
        if not resp.get("Contents"):
            return None
        latest = sorted(resp["Contents"], key=lambda o: o["Key"])[-1]
        self.s3.download_file(self.bucket, latest["Key"], "/tmp/latest.pt")
        return torch.load("/tmp/latest.pt")
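
Wiring the manager into a training loop might look like the following; the bucket, prefix, and train_one_step are placeholders, not part of the skill.

# Hypothetical usage; bucket name, prefix, and train_one_step are placeholders.
ckpt = CloudCheckpointManager("my-training-bucket", "runs/exp-42", save_every=500)

for step in range(total_steps):
    loss = train_one_step(model, optimizer)  # returns a float loss
    ckpt.save(model, optimizer, step, loss)  # uploads every 500th step, skips the rest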

Step 4: Secure Cleanup Before Destroy

# ALWAYS clean sensitive data before destroying an instance
ssh -p $PORT root@$HOST << 'CLEANUP'
# Remove training data and checkpoints
rm -rf /workspace/data /workspace/checkpoints /workspace/*.pt

# Clear command history
history -c && rm -f ~/.bash_history

# Overwrite sensitive files (optional, for high-security)
find /workspace -name "*.env" -exec shred -u {} \;

echo "Cleanup complete"
CLEANUP

# Then destroy
vastai destroy instance $INSTANCE_ID
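
Where teardown is automated, the same sequence can be driven from Python so the cleanup step cannot be skipped by accident. This is a minimal sketch that mirrors the script above; the helper name is ours, and paths should be adapted to the job.

import subprocess

CLEANUP_SCRIPT = r"""
rm -rf /workspace/data /workspace/checkpoints /workspace/*.pt
history -c && rm -f ~/.bash_history
find /workspace -name "*.env" -exec shred -u {} \;
echo "Cleanup complete"
"""

def clean_and_destroy(host, port, instance_id):
    """Run the cleanup script over SSH, then destroy the instance.

    check=True means the destroy only runs if the cleanup succeeded.
    """
    subprocess.run(["ssh", "-p", str(port), f"root@{host}", CLEANUP_SCRIPT], check=True)
    subprocess.run(["vastai", "destroy", "instance", str(instance_id)], check=True)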

Step 5: Data Lifecycle Policy

Data Type | On Instance | After Job | Retention
Training data | Decrypt on use | Delete before destroy | Source system only
Checkpoints | Local + cloud sync | Keep in cloud storage | 30 days
Final model | Local | Upload to model registry | Permanent
Logs | Local | Upload to logging service | 90 days
Temp files | /tmp | Auto-deleted on destroy | None
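
The same policy can be kept in code so cleanup and retention automation have one source of truth; the structure below is an illustrative encoding of the table, not a required format.

# Illustrative encoding of the lifecycle table; key and field names are arbitrary.
DATA_LIFECYCLE = {
    "training_data": {"after_job": "delete before destroy", "retention": "source system only"},
    "checkpoints":   {"after_job": "keep in cloud storage", "retention": "30 days"},
    "final_model":   {"after_job": "upload to model registry", "retention": "permanent"},
    "logs":          {"after_job": "upload to logging service", "retention": "90 days"},
    "temp_files":    {"after_job": "auto-deleted on destroy", "retention": "none"},
}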

Output

  • Data transfer patterns (SCP, compressed, cloud-staged)
  • Encrypted transfer for sensitive datasets
  • Cloud checkpoint manager with S3 integration
  • Secure cleanup script before instance destruction
  • Data lifecycle policy

Error Handling

Error | Cause | Solution
SCP timeout | Large file or slow network | Use compressed transfer or cloud staging
Checkpoint upload fails | S3 credentials not on instance | Pass AWS creds via env vars at instance creation
Disk full during training | Insufficient disk allocation | Increase --disk or clean old checkpoints
Data left after destroy | Skipped cleanup | Always run cleanup script before vastai destroy

Resources

Next Steps

For enterprise access control, see vastai-enterprise-rbac.

Examples

Sensitive data workflow: Encrypt dataset locally, SCP encrypted file to instance, decrypt on-instance, train, save checkpoints to S3, clean and destroy.

Resume after preemption: Load latest checkpoint from S3 on new instance, continue training from last saved step.
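
A resume-after-preemption run could reuse CloudCheckpointManager from Step 3 roughly like this (bucket, prefix, and the training-step function are placeholders):

# Hypothetical resume flow on a freshly rented instance.
ckpt = CloudCheckpointManager("my-training-bucket", "runs/exp-42")
state = ckpt.load_latest()

start_step = 0
if state is not None:
    model.load_state_dict(state["model"])
    optimizer.load_state_dict(state["optimizer"])
    start_step = state["step"] + 1  # continue from the step after the last saved one

for step in range(start_step, total_steps):
    loss = train_one_step(model, optimizer)
    ckpt.save(model, optimizer, step, loss)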

More by jeremylongshore

vertex-agent-builder
1,768

Build and deploy production-ready generative AI agents using Vertex AI, Gemini models, and Google Cloud infrastructure with RAG, function calling, and multi-modal capabilities. Use when appropriate context detected. Trigger with relevant phrases based on skill purpose.

gcp-examples-expert
1,768

Generate production-ready Google Cloud code examples from official repositories including ADK samples, Genkit templates, Vertex AI notebooks, and Gemini patterns. Use when asked to "show ADK example" or "provide GCP starter kit". Trigger with relevant phrases based on skill purpose.

genkit-production-expert
1,768

Build production Firebase Genkit applications including RAG systems, multi-step flows, and tool calling for Node.js/Python/Go. Deploy to Firebase Functions or Cloud Run with AI monitoring. Use when asked to "create genkit flow" or "implement RAG". Trigger with relevant phrases based on skill purpose.

validator-expert
1,768

Validate production readiness of Vertex AI Agent Engine deployments across security, monitoring, performance, compliance, and best practices. Generates weighted scores (0-100%) with actionable remediation plans. Use when asked to validate a deployment, run a production readiness check, audit security posture, or verify compliance for Vertex AI agents. Trigger with "validate deployment", "production readiness", "security audit", "compliance check", "is this agent ready for prod", "check my ADK agent", "review before deploy", or "production readiness check". Make sure to use this skill whenever validating ADK agents for Agent Engine.