FLUX.1 [dev]
Overview
FLUX.1 [dev] is a state-of-the-art 12 billion parameter text-to-image model developed by Black Forest Labs (BFL). It is the open-weight, guidance-distilled version of BFL’s flagship FLUX.1 [pro] model, designed to deliver similar high-quality outputs and strong prompt adherence while being more efficient.
As an open model, FLUX.1 [dev] has quickly become a go-to choice for AI artists and developers due to its exceptional detail, style diversity, and complex scene generation capabilities. Notably, it excels at following complex prompts and producing anatomically accurate details (even notoriously tricky elements like hands and faces).
FLUX.1 [dev] supports both text-to-image and image-to-image generation, enabling users to create images from scratch or transform existing images based on a text prompt.
Getting Started
Before generating images, make sure your account is ready to use the Inference API. Follow the Getting Started guide to create an account and top up your balance.
Authorization
To access and use these API endpoints, authorization is required. Please visit our Authorization page for detailed instructions on obtaining and using a bearer token for secure API access.
Generating images
Parameters
prompt (string)
The text description to generate an image from. This is the core input that drives the output.

size (string)
The resolution of the output image in "width*height" format.
Default: "1024*1024"

num_inference_steps (integer)
How many denoising steps to run during generation. More steps may improve quality, at the cost of speed.
Default: 28

seed (integer)
A seed value for reproducibility. The same seed + prompt + model version = same image. Use -1 to randomize.
Default: -1

guidance_scale (float)
CFG (Classifier-Free Guidance) controls how closely the image follows the prompt. Higher values = stronger prompt adherence, but may reduce creativity.
Default: 3.5

num_images (integer)
How many images to generate per request.
Default: 1

enable_safety_checker (boolean)
If enabled, content will be checked for safety violations.
Default: true

output_format (string)
The file format of the generated image. Possible values: "jpeg", "png", "webp".
Default: "jpeg"

output_quality (integer)
Compression quality for the "jpeg" and "webp" formats. Range: 1–100.
Default: 95

enable_base64_output (boolean)
If true, the API will return the image as a base64-encoded string in the response.
Default: false
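These parameters can be combined freely in a single request body. As a sketch, the payload below fixes the seed for reproducibility and asks for two PNG images; the prompt and values are purely illustrative:

```shell
# Build an example payload that exercises several of the parameters above
# (values are illustrative, not recommendations).
cat > payload.json <<'EOF'
{
  "input": {
    "prompt": "a lighthouse on a cliff at sunset",
    "size": "1024*1024",
    "num_inference_steps": 28,
    "seed": 42,
    "num_images": 2,
    "output_format": "png"
  }
}
EOF

# Sanity-check the payload before sending it
jq -e '.input.seed == 42 and .input.num_images == 2' payload.json
```

Validating the payload with jq before sending catches malformed JSON locally instead of as an opaque API error.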
Text to image
curl --request POST "https://inference.datacrunch.io/flux-dev/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data '{
    "input": {
      "prompt": "a scientist racoon eating icecream in a datacenter",
      "num_inference_steps": 50
    }
  }'
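When enable_base64_output is true, the response carries the image as a base64 string rather than binary data. The field name used below (.output) is an assumption, not a documented contract; inspect your actual response and adjust the jq filter accordingly. A minimal helper to extract and decode the image:

```shell
# Decode a base64-encoded image out of a JSON response.
# NOTE: the ".output" field name is an assumption -- inspect your
# actual API response and adjust the jq filter to match.
decode_image() {
  jq -r '.output' | base64 -d
}

# Usage with a live request:
#   curl -s ... | decode_image > result.jpeg

# Demonstration on a stand-in payload:
echo '{"output":"aGVsbG8="}' | decode_image   # decodes to "hello"
```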
Image to image
In addition to the common parameters, image-to-image requests take an image field (the input image, base64-encoded) and a strength value (float) controlling how far the output may depart from the input image.
# Encode the image
# (on GNU/Linux, use `base64 -w 0 cats.png` to avoid line wrapping)
INPUT_BASE64=$(base64 -i cats.png)

# Write JSON payload to a file
cat > payload.json <<EOF
{
  "input": {
    "prompt": "Three cats wearing detailed astronaut suits inside a space shuttle",
    "num_inference_steps": 50,
    "guidance_scale": 7.5,
    "strength": 0.7,
    "image": "${INPUT_BASE64}"
  }
}
EOF

# Send the request
curl --request POST "https://inference.datacrunch.io/flux-dev/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data @payload.json
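If the server rejects the payload, a common culprit is line-wrapped base64 (GNU base64 wraps output at 76 columns by default, which breaks the JSON string). A quick sketch for verifying, before sending, that the encoded string decodes back to the original bytes:

```shell
# Verify that base64 encoding round-trips cleanly (catches line-wrap
# and truncation issues before the payload hits the API).
check_b64_roundtrip() {
  local file="$1"
  local encoded
  encoded=$(base64 "$file" | tr -d '\n')   # strip any GNU line wraps
  printf '%s' "$encoded" | base64 -d | cmp -s - "$file"
}

# Usage:
#   check_b64_roundtrip cats.png && echo "safe to embed in payload.json"
```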
LoRA support
FLUX.1 [dev] also supports LoRA (Low-Rank Adaptation) extensions for fine-tuning model behavior on specific styles, characters, or domains — without retraining the full model.
Endpoint: Use https://inference.datacrunch.io/flux-dev-lora/predict instead of the standard flux-dev path.
Parameters
In addition to the common parameters, you can provide one or more LoRA modules to influence the model’s behavior during image generation. Multiple LoRAs will be merged together before inference.
loras (array)
A list of LoRA weights to apply. Each entry describes a single LoRA file and how strongly it should influence the generation.
Example:
"loras": [
  {
    "path": "https://huggingface.co/your/lora1.safetensors",
    "scale": 0.8
  },
  {
    "path": "https://huggingface.co/your/lora2.safetensors",
    "scale": 1.2
  }
]
Each item in the list is a LoraWeight object with the following fields:

path (string)
The full URL or a local path to the LoRA weights file. This file must be in .safetensors format.

scale (float)
A scaling factor that adjusts the influence of the LoRA on the final image. Higher values mean stronger stylistic or content impact from the LoRA.
Default: 1.0
Text to image
curl --request POST "https://inference.datacrunch.io/flux-dev-lora/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data '{
    "input": {
      "prompt": "a cool anteater wearing sunglasses chilling on the beach",
      "loras": [
        {
          "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
          "scale": 1.0
        }
      ]
    }
  }'
Image to image
# Encode the image
# (on GNU/Linux, use `base64 -w 0 palm.png` to avoid line wrapping)
INPUT_BASE64=$(base64 -i palm.png)

# Write JSON payload to a file
cat > payload.json <<EOF
{
  "input": {
    "prompt": "a cool anteater wearing sunglasses chilling on the beach",
    "strength": 0.7,
    "image": "${INPUT_BASE64}",
    "loras": [
      {
        "path": "https://huggingface.co/gradjitta/anteater_lora/resolve/main/anteater_lora.safetensors?download=true",
        "scale": 1.0
      }
    ]
  }
}
EOF

# Send the request
curl --request POST "https://inference.datacrunch.io/flux-dev-lora/predict" \
  --header "Content-Type: application/json" \
  --header "Authorization: Bearer <your_api_key>" \
  --data @payload.json