Introduction

APUS AI Inference is an OpenAI-compatible API layer designed for Deterministic, Confidential, and Verifiable AI computation. It runs on the Arweave AO decentralized compute network, combining trusted execution with reproducible model outputs.

Core Features

  • Deterministic — Reproducible inference results across trusted nodes
  • Confidential — Secure model execution with data privacy guarantees
  • Verifiable — Transparent computation trace and on-chain attestations
  • OpenAI-Compatible — Works directly with the official openai SDK

Install the SDK

pip install openai
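
The examples in this guide assume the 1.x line of the openai package. If you want a reproducible setup, pin the version range; the bound below is only a suggestion, not a requirement of the service:

pip install "openai>=1.0,<2"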

Initialize the Client

from openai import OpenAI

client = OpenAI(
    api_key="",  # Not required during testing
    base_url="https://hb.apus.network/[email protected]",
)

MODEL = "google/gemma-3-27b-it"
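
If you prefer not to hard-code the endpoint, the same client can be configured from environment variables. The variable names APUS_API_KEY, APUS_BASE_URL, and APUS_MODEL below are illustrative only; they are not defined by the SDK or the APUS service:

import os

from openai import OpenAI

client = OpenAI(
    api_key=os.environ.get("APUS_API_KEY", ""),  # an empty string is fine while no key is required
    base_url=os.environ["APUS_BASE_URL"],        # e.g. the hb.apus.network endpoint shown above
)

MODEL = os.environ.get("APUS_MODEL", "google/gemma-3-27b-it")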

Single-turn Chat Example

resp = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "What is 2 + 2?"}],
)

print(resp.choices[0].message.content)
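
Because the endpoint is still in a test phase, it can be useful to wrap calls in the SDK's standard exception handling. This is a minimal sketch that reuses the client and MODEL defined above and the exception classes shipped with openai 1.x:

import openai

try:
    resp = client.chat.completions.create(
        model=MODEL,
        messages=[{"role": "user", "content": "What is 2 + 2?"}],
    )
    print(resp.choices[0].message.content)
except openai.APIConnectionError as err:
    # The request never reached the server (network, DNS, or TLS problems)
    print("Could not reach the endpoint:", err)
except openai.APIStatusError as err:
    # The server replied with a non-2xx status code
    print("Endpoint returned an error:", err.status_code)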

Multi-turn Conversation Example

messages = [
    {"role": "system", "content": "You are a math assistant."},
    {"role": "user", "content": "What is 10 * 10?"},
]

resp = client.chat.completions.create(model=MODEL, messages=messages)
print("Assistant:", resp.choices[0].message.content)

# Continue the conversation
messages += [
    {"role": "assistant", "content": resp.choices[0].message.content},
    {"role": "user", "content": "And what is 100 / 5?"},
]

resp2 = client.chat.completions.create(model=MODEL, messages=messages)
print("Assistant:", resp2.choices[0].message.content)

Notes

  • No API key required during the test phase
  • Replace base_url with your custom endpoint
  • Fully compatible with the standard client.chat.completions.create() interface (a streaming sketch follows these notes)
  • Designed for Deterministic, Confidential, and Verifiable AI inference
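
Because the API follows the standard chat-completions interface, the SDK's streaming mode should work unchanged, assuming the endpoint supports streamed responses; this is a sketch of the usual openai 1.x pattern rather than a confirmed APUS feature:

stream = client.chat.completions.create(
    model=MODEL,
    messages=[{"role": "user", "content": "Count from 1 to 5."}],
    stream=True,
)

for chunk in stream:
    # Each chunk carries an incremental delta; the final chunk may have no content
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="", flush=True)
print()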