POST /api/v1/client/txt2embedding
cURL
curl --request POST \
  --url https://api.deapi.ai/api/v1/client/txt2embedding \
  --header 'Accept: <accept>' \
  --header 'Authorization: Bearer <token>' \
  --header 'Content-Type: application/json' \
  --data '
{
  "input": "This is a sample text for embedding generation.",
  "model": "Bge_M3_FP16",
  "return_result_in_response": false
}
'
{
  "data": {
    "request_id": "c08a339c-73e5-4d67-a4d5-231302fbff9a"
  }
}
Prerequisite: Before sending a request, consult the Model Selection endpoint to identify a valid model slug and to check each model's limits, features, and LoRA availability.
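As a sketch of that prerequisite step in Python (the `GET /api/v1/client/models` endpoint is referenced below under `model`; the shape of its response is an assumption here — each entry is presumed to carry a `slug` field, which the Model Selection documentation itself defines):

```python
import json
import urllib.request

API_BASE = "https://api.deapi.ai/api/v1"


def fetch_models(token: str) -> list[dict]:
    """GET /api/v1/client/models -- the Model Selection endpoint
    referenced by this page. Returns the parsed JSON body."""
    req = urllib.request.Request(
        f"{API_BASE}/client/models",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())


def list_model_slugs(models: list[dict]) -> list[str]:
    """Extract model slugs from a parsed models response.

    ASSUMPTION: each model entry has a "slug" field; verify the
    actual field name against the Model Selection endpoint docs.
    """
    return [m["slug"] for m in models if "slug" in m]
```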

Authorizations

Authorization
string
header
required

Bearer authentication header of the form Bearer <token>, where <token> is your auth token.

Headers

Accept
enum<string>
default:application/json
required
Available options:
application/json

Body

application/json

Text-to-embedding conversion parameters

input
string | string[]
required

Input text(s) to generate embeddings for: either a single string or an array of up to 2048 strings. Each input is limited to 8192 tokens, and the total request is limited to 300,000 tokens.

Example:

"This is a sample text for embedding generation."

model
string
required

The embedding model to use. Available models can be retrieved via the GET /api/v1/client/models endpoint.

Example:

"Bge_M3_FP16"

return_result_in_response
boolean | null
default:false

If true, the embedding result is returned directly in the response instead of only a download URL. Optional.

Example:

false
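Putting the body parameters together, the cURL call above maps to the following Python sketch. It uses only the URL, headers, and fields shown on this page; nothing else is assumed:

```python
import json
import urllib.request

ENDPOINT = "https://api.deapi.ai/api/v1/client/txt2embedding"


def build_txt2embedding_body(input_value, model,
                             return_result_in_response=False):
    """Assemble the documented JSON body for the endpoint."""
    return {
        "input": input_value,
        "model": model,
        "return_result_in_response": return_result_in_response,
    }


def txt2embedding(token, input_value, model,
                  return_result_in_response=False):
    """POST /api/v1/client/txt2embedding; returns the parsed JSON,
    e.g. {"data": {"request_id": "..."}} on success."""
    body = build_txt2embedding_body(input_value, model,
                                    return_result_in_response)
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode(),
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/json",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```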

Response

data
object

Information returned on a successful request.

data.request_id
string

ID of the inference request.