
Fine-Tuning

What's new

Fine-Tuning SDK

A versatile SDK for fine-tuning language models on GPU cloud. Take full control of model fine-tuning; we take care of the underlying infrastructure.

Major Advantages

Simplicity

  • Just import hpcai and supply an API key to get started
  • Supports standard PyTorch syntax
  • Low learning curve: typically fewer than 10 lines of changes to existing code (see the sketch below)
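
To make the "fewer than 10 lines" claim concrete, the snippet below shows roughly what such a change could look like. The hpcai.login and hpcai.device names are illustrative assumptions, not the SDK's confirmed API; consult the docs for the real entry points.

    # Hypothetical sketch: the only additions to an existing PyTorch script.
    import os
    import hpcai  # installed via `pip install hpcai`

    hpcai.login(api_key=os.environ["HPCAI_API_KEY"])  # assumed call name
    device = hpcai.device()  # assumed stand-in for torch.device("cuda")
    # ...the rest of the training script stays unchanged PyTorch...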

Flexible & Controllable

  • Custom loss functions and hand-written training loops (a loop sketch follows this list)
  • Supports both LoRA and full fine-tuning
  • Meets research-grade fine-tuning needs
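
To make "hand-written training loop" concrete, here is a plain PyTorch loop with a custom (label-smoothed) loss. Nothing in it is hpcai-specific, which is the point: the SDK accepts standard PyTorch syntax, so a loop like this should carry over as-is.

    import torch
    import torch.nn.functional as F

    def custom_loss(logits, labels, smoothing=0.1):
        # Label-smoothed cross-entropy as an example custom objective.
        return F.cross_entropy(logits, labels, label_smoothing=smoothing)

    model = torch.nn.Linear(768, 2)          # stand-in for a language model
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

    for step in range(100):
        x = torch.randn(16, 768)             # stand-in for a real batch
        y = torch.randint(0, 2, (16,))
        loss = custom_loss(model(x), y)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()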

Colossal-AI Inside

  • Configurable data parallelism, tensor parallelism, and pipeline parallelism (a configuration sketch follows this list)
  • Higher throughput and lower memory usage
  • Train larger models at lower cost
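
For reference, this is roughly how hybrid parallelism is configured in open-source Colossal-AI itself (HybridParallelPlugin and Booster are real Colossal-AI APIs). How the Fine-Tuning SDK surfaces these knobs is an assumption here, so treat this as a sketch.

    import colossalai
    from colossalai.booster import Booster
    from colossalai.booster.plugin import HybridParallelPlugin

    # Initialize the distributed environment (recent Colossal-AI versions;
    # older releases required a config dict argument).
    colossalai.launch_from_torch()

    # Combine tensor and pipeline parallelism; remaining ranks run data parallelism.
    plugin = HybridParallelPlugin(tp_size=2, pp_size=2, precision="bf16")
    booster = Booster(plugin=plugin)

    # With model, optimizer, criterion, dataloader defined as usual:
    # model, optimizer, criterion, dataloader, _ = booster.boost(
    #     model, optimizer, criterion, dataloader)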

Reliability

  • Customizable handling of node failures, with checkpoint export
  • Supports resuming training from the latest checkpoint after an interruption (see the sketch below)
  • Model weights belong to you and can be downloaded and deployed at any time
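
A minimal save-and-resume pattern in plain PyTorch, for illustration; the SDK's own checkpoint export and failure-handling hooks may differ.

    import os
    import torch

    CKPT = "checkpoint.pt"
    model = torch.nn.Linear(768, 2)
    optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
    start_step = 0

    # Resume from the latest checkpoint, e.g. after a node failure.
    if os.path.exists(CKPT):
        state = torch.load(CKPT)
        model.load_state_dict(state["model"])
        optimizer.load_state_dict(state["optimizer"])
        start_step = state["step"] + 1

    for step in range(start_step, 1_000):
        ...  # one training step
        if step % 100 == 0:  # periodic checkpoint export
            torch.save({"model": model.state_dict(),
                        "optimizer": optimizer.state_dict(),
                        "step": step}, CKPT)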

Target Scenarios

Scale experiments effortlessly: template-based tuning is too restrictive for research, while custom distributed code is costly and fragile. The Fine-Tuning SDK gives you full experimental freedom, letting you iterate locally and scale to large clusters without the engineering burden.


Fine-Tuning SDK Process

You Control (The Logic)

1. Dataset & Tokenizer Definitions

2. Hyperparameters (Learning Rate, Batch Size, Epochs)

3. Training Loop Construction (Step-by-step control)

4. Custom Algorithms

5. Evaluation Metrics

Install

    pip install hpcai

The bridge between "You Control" and "We Handle": your API_KEY

We Handle (The Infrastructure)

1. Massive GPU Allocation & Orchestration

2. Environment Setup (CUDA, PyTorch, Dependencies)

3. Distributed Parallelism (Colossal-AI Acceleration)

4. Checkpointing & State Management
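
Putting the two halves together, a run could be structured like the sketch below. The dataset, hyperparameters, loop, and metric are ordinary user-side PyTorch; the hpcai.login and hpcai.device calls are the same illustrative assumptions as above, standing in for whatever bridge the SDK actually exposes.

    import os
    import torch
    import hpcai  # installed via `pip install hpcai`

    # The bridge: authenticate with your API key (assumed call names).
    hpcai.login(api_key=os.environ["HPCAI_API_KEY"])
    device = hpcai.device()

    # You control: data, hyperparameters, loop, metrics.
    lr, batch_size, epochs = 2e-5, 16, 3
    data = [(torch.randn(768), torch.tensor(0)) for _ in range(64)]  # stand-in
    loader = torch.utils.data.DataLoader(data, batch_size=batch_size)

    model = torch.nn.Linear(768, 2).to(device)
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)

    for epoch in range(epochs):
        for x, y in loader:
            loss = torch.nn.functional.cross_entropy(
                model(x.to(device)), y.to(device))
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
        print(f"epoch {epoch}: last loss {loss.item():.4f}")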

Supported Models

Model training and inference rates. All prices are in USD per million tokens.

Base Model | Prefill | Sample | Train

No pricing data is currently listed.
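
As a worked example of the per-million-token unit (with made-up rates, since none are listed above):

    # Hypothetical rate in USD per million tokens; the table lists none.
    train_rate = 2.00
    train_tokens = 50_000_000          # tokens processed during fine-tuning
    cost = train_tokens / 1_000_000 * train_rate
    print(f"Training cost: ${cost:.2f}")  # -> Training cost: $100.00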

Frequently Asked Questions

Do new accounts get free credits?

Yes! New accounts automatically receive free credits ($5.00) to get started, so you can run initial experiments and test the SDK workflow without adding a credit card immediately.