
deAPI’s OpenAI-compatible API gives you access to image generation, speech synthesis, transcription, video generation, and embeddings at lower cost, on decentralized GPU infrastructure. Switch from OpenAI by changing two client parameters, api_key and base_url, plus the model ID; the rest of your code stays unchanged. This is the same approach used by Groq, Together AI, Fireworks AI, and other leading inference providers.

Quick start

from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

# Generate an image — same call as OpenAI
response = client.images.generate(
    model="Flux1schnell",
    prompt="A futuristic city at sunset, cinematic lighting",
    size="1024x1024",
    n=1
)
print(response.data[0].url)
Get your API key at app.deapi.ai/dashboard. New accounts receive a $5 bonus — no credit card required.

What changes vs OpenAI

| Parameter | OpenAI | deAPI |
| --- | --- | --- |
| base_url / baseURL | https://api.openai.com/v1 | https://oai.deapi.ai/v1 |
| api_key / apiKey | sk-... | Your deAPI key (starts with dpn-sk-) |
| model | e.g. dall-e-3 | deAPI model ID (e.g. Flux1schnell) |
Your full API key looks like dpn-sk-2206|ixoAULVrh... — the dpn-sk- prefix is required by the gateway. You can find it in your Dashboard → Settings → API Keys. Everything else — request format, response schema, error envelope — follows the OpenAI specification.

Available models

deAPI runs open-source models, not OpenAI’s proprietary ones. Model IDs are native slugs (e.g. Flux1schnell, Kokoro, WhisperLargeV3) — not OpenAI model names. This is by design. Use GET /v1/models to fetch the current list:
curl "https://oai.deapi.ai/v1/models" \
  -H "Authorization: Bearer dpn-sk-your-token-here"
Response follows the OpenAI format:
{
  "object": "list",
  "data": [
    { "id": "Flux1schnell", "object": "model", "created": 1700000000, "owned_by": "deapi" },
    { "id": "Kokoro", "object": "model", "created": 1700000000, "owned_by": "deapi" }
  ]
}
Models are added and updated regularly. For the full, always-current list with capabilities, limits, and defaults, see the Model Selection endpoint and the Models guide.
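
The same list is available through the SDK; a minimal sketch using the client from the quick start:

from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

# client.models.list() pages through GET /v1/models
for model in client.models.list():
    print(model.id)  # e.g. Flux1schnell, Kokoro, WhisperLargeV3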

Supported endpoints

| Endpoint | Status | Description |
| --- | --- | --- |
| GET /v1/models | ✅ | List available models |
| POST /v1/images/generations | ✅ | Text-to-image |
| POST /v1/images/edits | ✅ | Image editing (img2img) |
| POST /v1/audio/speech | ✅ | Text-to-speech |
| POST /v1/audio/transcriptions | ✅ | Audio & video transcription |
| POST /v1/embeddings | ✅ | Text embeddings |
| POST /v1/videos | ✅ | Video generation |
| POST /v1/chat/completions | ❌ | Not in scope; deAPI does not serve LLMs |
| POST /v1/images/edits with mask | ❌ | Inpainting not supported; returns 400 (see the sketch below) |
| POST /v1/files | 🔜 | Coming soon; files are sent inline (multipart) for now |
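
Because masked edits return HTTP 400, the OpenAI Python SDK surfaces them as a BadRequestError you can catch. A minimal sketch; the model ID and file names here are illustrative assumptions, not a confirmed edit-capable configuration:

from openai import OpenAI, BadRequestError

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

try:
    edited = client.images.edit(
        model="Flux1schnell",            # assumption: substitute an img2img-capable model ID
        image=open("photo.png", "rb"),
        mask=open("mask.png", "rb"),     # masks trigger the unsupported-inpainting 400
        prompt="Replace the sky with a sunset",
    )
except BadRequestError as err:
    print("Inpainting is not supported:", err)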

Migration examples

Image generation (DALL-E → deAPI)

Before (OpenAI):
from openai import OpenAI

client = OpenAI(api_key="sk-...")

response = client.images.generate(
    model="dall-e-3",
    prompt="A cozy cabin in the woods",
    size="1024x1024",
    n=1
)
After (deAPI) — change api_key, base_url, and model:
from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",    # ← changed
    base_url="https://oai.deapi.ai/v1",  # ← changed
)

response = client.images.generate(
    model="Flux1schnell",                 # ← deAPI model ID
    prompt="A cozy cabin in the woods",
    size="1024x1024",
    n=1
)
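
If you would rather receive the image bytes inline than fetch a URL, OpenAI’s images API also accepts response_format="b64_json"; assuming deAPI mirrors that parameter, the call looks like this:

import base64

response = client.images.generate(
    model="Flux1schnell",
    prompt="A cozy cabin in the woods",
    size="1024x1024",
    response_format="b64_json",  # assumption: mirrors the OpenAI parameter
)

# Decode the base64 payload and write the image to disk
with open("cabin.png", "wb") as f:
    f.write(base64.b64decode(response.data[0].b64_json))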

Text-to-speech (OpenAI TTS → deAPI)

Before (OpenAI):
audio = client.audio.speech.create(
    model="tts-1",
    voice="alloy",
    input="Hello world"
)
After (deAPI):
from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

audio = client.audio.speech.create(
    model="Kokoro",
    voice="alloy",   # OpenAI voice aliases supported: alloy, echo, fable, onyx, nova, shimmer
    input="Hello world"
)
Supported output formats: mp3, wav, flac, opus.
Kokoro supports the same six OpenAI voice aliases (alloy, echo, fable, onyx, nova, shimmer). Voice language is determined by the voice prefix, not the input text: af_/am_ → US English, bf_/bm_ → British English. See the TTS endpoint docs for the full voice list.
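
Putting the format and voice options together; a minimal sketch, assuming response_format behaves as in OpenAI’s TTS API (mp3 is the usual default):

from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

audio = client.audio.speech.create(
    model="Kokoro",
    voice="alloy",          # or a prefixed Kokoro voice ID from the TTS docs
    input="Hello world",
    response_format="wav",  # one of: mp3, wav, flac, opus
)
audio.write_to_file("hello.wav")  # SDK helper for binary responses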

Transcription (Whisper → deAPI)

Before (OpenAI):
with open("audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="whisper-1",
        file=f
    )
print(transcript.text)
After (deAPI):
from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

with open("audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="WhisperLargeV3",
        file=f
    )
print(transcript.text)
Supported response_format values: "json" (default), "text", "verbose_json". The maximum file size is 80 MB, more than three times OpenAI's 25 MB limit.
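
verbose_json adds metadata on top of the plain transcript. A minimal sketch, assuming the response mirrors OpenAI's verbose_json schema (language, duration, timestamped segments):

with open("audio.mp3", "rb") as f:
    transcript = client.audio.transcriptions.create(
        model="WhisperLargeV3",
        file=f,
        response_format="verbose_json",
    )

print(transcript.language)  # detected language
print(transcript.duration)  # audio length in seconds
for seg in transcript.segments:
    print(f"[{seg.start:.1f}s - {seg.end:.1f}s] {seg.text}")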

Embeddings

Before (OpenAI):
response = client.embeddings.create(
    model="text-embedding-3-small",
    input="The quick brown fox"
)
After (deAPI):
from openai import OpenAI

client = OpenAI(
    api_key="dpn-sk-your-token-here",
    base_url="https://oai.deapi.ai/v1"
)

response = client.embeddings.create(
    model="Bge_M3_FP16",
    input="The quick brown fox"
)

# Vector dimension: 1024
print(len(response.data[0].embedding))  # → 1024
Both single-string and array inputs are supported, as is encoding_format: "base64".
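
Array input and base64 output combine as you would expect; a minimal sketch, assuming deAPI packs base64 embeddings the way OpenAI does (little-endian float32):

import base64
import struct

response = client.embeddings.create(
    model="Bge_M3_FP16",
    input=["The quick brown fox", "jumps over the lazy dog"],
    encoding_format="base64",
)

for item in response.data:
    raw = base64.b64decode(item.embedding)             # 4 bytes per float
    vector = struct.unpack(f"<{len(raw) // 4}f", raw)  # little-endian float32
    print(item.index, len(vector))                     # 0 1024, then 1 1024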

Framework integrations

Because deAPI uses the OpenAI API format, it works with any framework that accepts a base_url / api_base parameter.

LangChain

from langchain_openai import OpenAIEmbeddings

embeddings = OpenAIEmbeddings(
    openai_api_key="dpn-sk-your-token-here",
    openai_api_base="https://oai.deapi.ai/v1",
    model="Bge_M3_FP16"
)
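
Usage is standard LangChain from here. If your langchain-openai version pre-tokenizes inputs with tiktoken, you may also need check_embedding_ctx_length=False so raw strings reach the gateway (an assumption worth verifying against your version):

# embed_query returns the raw vector; embed_documents batches an array input
vector = embeddings.embed_query("The quick brown fox")
documents = embeddings.embed_documents(["doc one", "doc two"])
print(len(vector), len(documents))  # 1024 2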

LlamaIndex

from llama_index.embeddings.openai import OpenAIEmbedding

embed_model = OpenAIEmbedding(
    api_key="dpn-sk-your-token-here",
    api_base="https://oai.deapi.ai/v1",
    model="Bge_M3_FP16"
)

Vercel AI SDK

import { createOpenAI } from "@ai-sdk/openai";

const deapi = createOpenAI({
  apiKey: "dpn-sk-your-token-here",
  baseURL: "https://oai.deapi.ai/v1",
});

Environment variables

If your codebase already uses the standard OpenAI environment variables, override them at the process level — zero code changes required:
export OPENAI_API_KEY="dpn-sk-your-token-here"
export OPENAI_BASE_URL="https://oai.deapi.ai/v1"
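With those variables set, a default-constructed client talks to deAPI; the OpenAI Python SDK reads OPENAI_API_KEY and OPENAI_BASE_URL automatically:

from openai import OpenAI

client = OpenAI()  # picks up OPENAI_API_KEY and OPENAI_BASE_URL from the environment

response = client.images.generate(
    model="Flux1schnell",
    prompt="A futuristic city at sunset",
)
print(response.data[0].url)
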
For multi-provider setups (e.g. OpenAI for chat, deAPI for images and audio), instantiate separate clients:
import os
from openai import OpenAI

openai_client = OpenAI(api_key=os.environ["OPENAI_KEY"])

deapi_client = OpenAI(
    api_key=os.environ["DEAPI_KEY"],
    base_url="https://oai.deapi.ai/v1",
)

# openai_client → GPT models
# deapi_client  → images, audio, video, embeddings

Known differences

| Feature | OpenAI | deAPI |
| --- | --- | --- |
| Chat completions | ✅ | ❌ Out of scope |
| Inpainting (mask in /v1/images/edits) | ✅ | ❌ Returns 400 |
| Model IDs | dall-e-3, tts-1, whisper-1 | Native slugs (e.g. Flux1schnell, Kokoro) |
| Image size values | OpenAI fixed set | Model-specific; check Models |
| Max n (images) | 10 | 4 |
| Audio file size limit | 25 MB | 80 MB |
| Embedding dimensions | 1536 (ada-002) | Model-specific (e.g. 1024 for Bge_M3_FP16) |
| style: "vivid" | Controls image style | Accepted but ignored |
| Video generation | ❌ | ✅ POST /v1/videos |
| /v1/files | ✅ | 🔜 Coming soon |
For the full list of models, supported parameters, and limits, see the Model Selection endpoint.

Next steps

- Models: Discover all available models and their capabilities
- Quickstart: Get your API key and make your first request
- API v2 Reference: Full native API reference with all endpoints
- Pricing: Pay-as-you-go rates per task and model