Endpoint for requesting img2img inference. Accepts either a single image via “image” or multiple images via “images[]” (mutually exclusive).
Use the model slug to check model-specific limits and features and to verify LoRA availability. Omit the LoRA parameter by default during initial testing. Authentication requires a Bearer header of the form "Bearer <token>", where <token> is your auth token.
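Since authentication is a Bearer token in the Authorization header, the request headers can be assembled as below. This is a minimal Python sketch: the helper name is ours, and YOUR_AUTH_TOKEN is a placeholder for your actual token.

```python
def build_headers(token: str) -> dict:
    """Return headers for a JSON img2img request.

    Authorization uses the Bearer scheme, as the API requires.
    """
    return {
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    }

headers = build_headers("YOUR_AUTH_TOKEN")
```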
Content-Type: application/json. Image generation parameters. Either "image" or "images[]" must be provided, but not both.
The main prompt for image generation
"A beautiful sunset over mountains"
The model to use for image editing. Available models can be retrieved via the GET /api/v1/client/models endpoint.
"QwenImageEdit_Plus_NF4"
Number of inference steps
20
Random seed for generation
42
Elements to avoid in the generated image
"blur, darkness, noise"
Single source image to edit. Required if "images[]" is not provided. Mutually exclusive with "images[]". Supported formats: JPG, JPEG, PNG, GIF, BMP, WebP. Maximum file size: 10 MB.
Multiple source images for editing. Maximum count is model-dependent (see model specs max_input_images, defaults to 1). Required if "image" is not provided. Mutually exclusive with "image". Supported formats: JPG, JPEG, PNG, GIF, BMP, WebP. Maximum file size per image: 10 MB.
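The mutual-exclusivity rule above can be checked client-side before sending a request. A sketch, assuming the payload keys match the documented parameter names and that max_input_images comes from the model's specs (defaulting to 1, as documented):

```python
def validate_image_params(payload: dict, max_input_images: int = 1) -> None:
    """Enforce the 'image' XOR 'images[]' rule.

    max_input_images is model-dependent (see the model specs);
    it defaults to 1, matching the documented default.
    """
    has_single = "image" in payload
    has_multi = "images[]" in payload
    if has_single and has_multi:
        raise ValueError('"image" and "images[]" are mutually exclusive')
    if not has_single and not has_multi:
        raise ValueError('Either "image" or "images[]" must be provided')
    if has_multi and len(payload["images[]"]) > max_input_images:
        raise ValueError(
            f"This model accepts at most {max_input_images} input image(s)")
```

Validating locally gives a clearer error than waiting for the API to reject the request.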
Output image width in pixels. Optional - defaults to first input image width if not specified. Subject to model-specific min/max limits.
512
Output image height in pixels. Optional - defaults to first input image height if not specified. Subject to model-specific min/max limits.
512
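The width/height defaulting described above can be sketched as follows. The min_side and max_side bounds here are placeholders; the real limits are model-specific and come from the model's spec.

```python
def resolve_output_size(first_image_size, width=None, height=None,
                        min_side=64, max_side=2048):
    """Resolve output dimensions, defaulting each omitted value to the
    first input image's corresponding dimension.

    min_side / max_side stand in for the model-specific limits.
    """
    w = width if width is not None else first_image_size[0]
    h = height if height is not None else first_image_size[1]
    for name, value in (("width", w), ("height", h)):
        if not (min_side <= value <= max_side):
            raise ValueError(
                f"{name}={value} is outside model limits [{min_side}, {max_side}]")
    return w, h
```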
Array of LoRA models to apply
Guidance scale for the generation
7.5
Optional HTTPS URL to receive webhook notifications for job status changes (processing, completed, failed). Must be HTTPS. Max 2048 characters. See Webhook Documentation for payload structure and authentication details.
"https://your-server.com/webhooks/deapi"
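The documented webhook constraints (HTTPS only, at most 2048 characters) are easy to verify before submitting. A sketch, assuming the JSON keys "prompt", "model", and "webhook_url"; only "image" and "images[]" are named explicitly above, so treat the other keys as illustrative:

```python
from urllib.parse import urlparse

def validate_webhook_url(url: str) -> str:
    """Check the documented webhook constraints: HTTPS, <= 2048 chars."""
    if len(url) > 2048:
        raise ValueError("webhook URL exceeds 2048 characters")
    if urlparse(url).scheme != "https":
        raise ValueError("webhook URL must use HTTPS")
    return url

# Hypothetical request body; field names other than image/images[] are assumed.
payload = {
    "prompt": "A beautiful sunset over mountains",
    "model": "QwenImageEdit_Plus_NF4",
    "webhook_url": validate_webhook_url("https://your-server.com/webhooks/deapi"),
}
```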
ID of the inference request.
Information returned from the success endpoint.