Join our Discord server

Discord.Rocks API Documentation

Want a free proxy? https://scraper.api.airforce

Total requests: 646865

Total models: 74

Playground: https://llmplayground.net

Our maps: https://api.airforce/maps

Endpoints

There are eight main endpoints:

OpenAI Compatible

openai.base_url = 'https://api.airforce'

LLM Models

['claude-3-haiku-20240307', 'claude-3-sonnet-20240229', 'claude-3-5-sonnet-20240620', 'claude-3-opus-20240229', 'chatgpt-4o-latest', 'gpt-4', 'gpt-4-0613', 'gpt-4-turbo', 'gpt-4o-mini-2024-07-18', 'gpt-4o-mini', 'gpt-3.5-turbo', 'gpt-3.5-turbo-0125', 'gpt-3.5-turbo-1106', 'gpt-3.5-turbo-16k', 'gpt-3.5-turbo-0613', 'gpt-3.5-turbo-16k-0613', 'gpt-4o', 'llama-3-70b-chat', 'llama-3-70b-chat-turbo', 'llama-3-8b-chat', 'llama-3-8b-chat-turbo', 'llama-3-70b-chat-lite', 'llama-3-8b-chat-lite', 'llama-2-70b-chat', 'llama-2-13b-chat', 'llama-2-7b-chat', 'llama-3.1-405b-turbo', 'llama-3.1-70b-turbo', 'llama-3.1-8b-turbo', 'LlamaGuard-2-8b', 'Yi-34B-Chat', 'Yi-34B', 'Yi-6B', 'Mixtral-8x7B-v0.1', 'Mixtral-8x22B', 'Mixtral-8x7B-Instruct-v0.1', 'Mixtral-8x22B-Instruct-v0.1', 'Mistral-7B-Instruct-v0.1', 'Mistral-7B-Instruct-v0.2', 'Mistral-7B-Instruct-v0.3', 'openchat-3.5', 'WizardLM-13B-V1.2', 'WizardCoder-Python-34B-V1.0', 'Qwen1.5-0.5B-Chat', 'Qwen1.5-1.8B-Chat', 'Qwen1.5-4B-Chat', 'Qwen1.5-7B-Chat', 'Qwen1.5-14B-Chat', 'Qwen1.5-72B-Chat', 'Qwen1.5-110B-Chat', 'Qwen2-72B-Instruct', 'gemma-2b-it', 'gemma-7b-it', 'gemma-2b', 'gemma-7b', 'dbrx-instruct', 'vicuna-7b-v1.5', 'vicuna-13b-v1.5', 'dolphin-2.5-mixtral-8x7b', 'deepseek-coder-33b-instruct', 'deepseek-coder-67b-instruct', 'deepseek-llm-67b-chat', 'Nous-Capybara-7B-V1p9', 'Nous-Hermes-2-Mixtral-8x7B-DPO', 'Nous-Hermes-2-Mixtral-8x7B-SFT', 'Nous-Hermes-llama-2-7b', 'Nous-Hermes-Llama2-13b', 'Nous-Hermes-2-Yi-34B', 'Mistral-7B-OpenOrca', 'alpaca-7b', 'OpenHermes-2-Mistral-7B', 'OpenHermes-2.5-Mistral-7B', 'WizardLM-2-8x22B', 'NexusRaven-V2-13B', 'Phind-CodeLlama-34B-v2', 'CodeLlama-7b-Python-hf', 'CodeLlama-7b-Python', 'CodeLlama-13b-Python-hf', 'CodeLlama-34b-Python-hf', 'CodeLlama-70b-Python-hf', 'snowflake-arctic-instruct', 'SOLAR-10.7B-Instruct-v1.0', 'StripedHyena-Hessian-7B', 'StripedHyena-Nous-7B', 'Llama-2-7B-32K-Instruct', 'CodeLlama-13b-Instruct', 'evo-1-131k-base', 'OLMo-7B-Instruct', 'Platypus2-70B-instruct', 
'Snorkel-Mistral-PairRM-DPO', 'ReMM-SLERP-L2-13B', 'MythoMax-L2-13b', 'chronos-hermes-13b', 'Llama-Guard-7b', 'gemma-2-9b-it', 'gemma-2-27b-it', 'Toppy-M-7B', 'gemini-1.5-flash', 'gemini-1.5-pro', 'sparkdesk', 'cosmosrp']

API Key Usage

You can obtain a premium API key. Include it in the Authorization header as follows:

Authorization: Bearer YOUR_API_KEY

You can check the usage of your key via the following:

/check?key=<KEY>

Get an API key at https://discord.gg/pMeMK4FwXB

Chat Completions Endpoint

The /v1/chat/completions and /chat/completions endpoints are used to send a prompt to the specified LLM model and receive a response.

Usage

Send a POST request to /v1/chat/completions or /chat/completions with the following JSON payload:

{
    "model": "claude-3-opus",
    "messages": [
        {"role": "system", "content": "System prompt (only the first message, once)"},
        {"role": "user", "content": "Message content"},
        {"role": "assistant", "content": "Assistant response"}
    ],
    "max_tokens": 2048,
    "stream": false,
    "temperature": 0.7,
    "top_p": 0.5,
    "top_k": 0
}

Response Format

The response will be in the following format:

{
    "id": "chatcmpl-123",
    "object": "chat.completion",
    "created": 1677652288,
    "model": "claude-3-opus",
    "system_fingerprint": "fp_44709d6fcb",
    "choices": [{
        "index": 0,
        "message": {
            "role": "assistant",
            "content": "Response content"
        },
        "logprobs": null,
        "finish_reason": "stop"
    }],
    "usage": {
        "prompt_tokens": 9,
        "completion_tokens": 12,
        "total_tokens": 21
    }
}
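The request and response above can be sketched with only the Python standard library. This is a minimal example, not an official client; the model name, key, and prompt are placeholders:

```python
import json
import urllib.request

API_BASE = "https://api.airforce"

def build_chat_request(model, messages, api_key=None, **params):
    """Build a urllib Request for the chat completions endpoint."""
    payload = {"model": model, "messages": messages, **params}
    headers = {"Content-Type": "application/json"}
    if api_key:
        # Premium keys go in the Authorization header as documented above.
        headers["Authorization"] = f"Bearer {api_key}"
    return urllib.request.Request(
        f"{API_BASE}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers=headers,
        method="POST",
    )

if __name__ == "__main__":
    req = build_chat_request(
        "gpt-4o-mini",
        [{"role": "user", "content": "Say hello in one word."}],
        max_tokens=64,
        temperature=0.7,
    )
    with urllib.request.urlopen(req) as resp:
        data = json.loads(resp.read())
    print(data["choices"][0]["message"]["content"])
```

Since the API is OpenAI-compatible, the official `openai` Python SDK should also work by setting `base_url="https://api.airforce"` and passing the key as `api_key`.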

Models Endpoint

The /v1/models and /models endpoints are used to list all available models.

Usage

Send a GET request to /v1/models or /models to retrieve a list of available models.

Response Format

The response will be in the following format:

{
    "object": "list",
    "data": [
        {
            "id": "model-id",
            "object": "model",
            "created": 1686935002,
            "owned_by": "provider"
        },
        ...
    ]
}
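Fetching and flattening that list can be sketched with the standard library (the `data`/`id` field names follow the response format above):

```python
import json
import urllib.request

def extract_model_ids(payload):
    """Pull the model IDs out of a /v1/models response body."""
    return [m["id"] for m in payload["data"]]

def list_models(base_url="https://api.airforce"):
    """GET /v1/models and return the available model IDs."""
    with urllib.request.urlopen(f"{base_url}/v1/models") as resp:
        return extract_model_ids(json.loads(resp.read()))

if __name__ == "__main__":
    print(list_models())
```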

Imagine Endpoint

The /v1/imagine and /imagine endpoints are used to generate images.

List all models: https://api.airforce/imagine/models

Playground: https://flux.api.airforce

Usage

Send a GET request to /v1/imagine or /imagine with the following query parameters:

{
    "prompt": "A beautiful landscape",
    "size": "1:1",  // one of 1:1, 16:9, 9:16, 21:9, 9:21, 1:2, 2:1
    "seed": 123456,
    "model": "flux" // one of "flux", "flux-realism", "flux-anime", "flux-3d"
}

Response Format

The response will be a PNG image:

Content-Type: image/png
(binary image data)

Example

Example GET request:

GET /v1/imagine?prompt=A+beautiful+landscape&size=1:1&seed=123456&model=flux-realism
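The same request can be built and saved to disk in Python; a sketch using the standard library (the output filename is arbitrary):

```python
import urllib.parse
import urllib.request

def build_imagine_url(prompt, size="1:1", seed=None, model="flux",
                      base_url="https://api.airforce"):
    """Build a /v1/imagine URL with URL-encoded query parameters."""
    params = {"prompt": prompt, "size": size, "model": model}
    if seed is not None:
        params["seed"] = seed
    return f"{base_url}/v1/imagine?" + urllib.parse.urlencode(params)

if __name__ == "__main__":
    url = build_imagine_url("A beautiful landscape", seed=123456,
                            model="flux-realism")
    with urllib.request.urlopen(url) as resp:
        with open("landscape.png", "wb") as f:
            f.write(resp.read())  # raw PNG bytes
```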

Imagine2 Endpoint

The /v1/imagine2 and /imagine2 endpoints are used to generate images quickly and reliably.

Playground: https://flux.api.airforce

This endpoint is a duplicate of the Imagine endpoint.

GlobalAsk Endpoint

The /v1/ask and /ask endpoints use the GlobalAsk API, which sends the prompt to ChatGPT with web search and image generation enabled.

Usage

Send a GET request to /v1/ask or /ask with the following query parameter:

{
    'prompt': 'What is ChatGPT'
}

Response Format

The response is a text/event-stream consisting of multiple JSON chunks.

Each line is sent separately and contains a JSON object in the following format:

{
    "message": STRING,
    "url": STRING
}
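Consuming that stream can be sketched with the standard library. Each line is parsed as JSON with the `message`/`url` fields above; a defensive `data:` prefix strip is included as an assumption, since some event streams prefix payloads that way:

```python
import json
import urllib.parse
import urllib.request

def parse_ask_line(line):
    """Parse one event-stream line into a dict, or None if it isn't JSON."""
    line = line.strip()
    if not line:
        return None
    # Assumption: tolerate an optional "data:" SSE prefix.
    if line.startswith(b"data:"):
        line = line[len(b"data:"):].strip()
    try:
        return json.loads(line)
    except json.JSONDecodeError:
        return None

if __name__ == "__main__":
    query = urllib.parse.urlencode({"prompt": "What is ChatGPT"})
    with urllib.request.urlopen(f"https://api.airforce/v1/ask?{query}") as resp:
        for raw in resp:  # the stream yields one line per chunk
            chunk = parse_ask_line(raw)
            if chunk:
                print(chunk.get("message", ""), end="")
```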


Vision Endpoint

The /v1/images/vision and /images/vision endpoints are used to describe an image using specified vision models.

Usage

Send a GET request to /v1/images/vision or /images/vision with the following query parameters:

{
    "url": "URL of the image to be described",
    "model": "gemini-pro-vision" // optional, default is "gemini-pro-vision"
}

Valid models: 'gpt-4o', 'gpt-4o-mini', 'gemini-pro-vision'

If no model is specified, the default model 'gemini-pro-vision' will be used.

Response Format

The response will be a plain-text string describing the image.

Example

Example GET request:

GET /v1/images/vision?url=https://example.com/image.png&model=gemini-pro-vision
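The same call, sketched in Python with the standard library (the image URL is a placeholder):

```python
import urllib.parse
import urllib.request

def build_vision_url(image_url, model="gemini-pro-vision",
                     base_url="https://api.airforce"):
    """Build a /v1/images/vision URL; model defaults to gemini-pro-vision."""
    params = urllib.parse.urlencode({"url": image_url, "model": model})
    return f"{base_url}/v1/images/vision?{params}"

if __name__ == "__main__":
    url = build_vision_url("https://example.com/image.png")
    with urllib.request.urlopen(url) as resp:
        # The endpoint returns a plain-text description of the image.
        print(resp.read().decode("utf-8"))
```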

Suno Endpoint

The /v1/suno and /suno endpoints are used to generate a song using Suno's AI models.

Try here: https://api.airforce/suno

Usage

There are two ways to interact with this endpoint:

POST Request

To generate a song, send a POST request to /v1/suno or /suno with the following JSON payload:

{
            "model": "chirp-v3-5", // Optional, can be one of "chirp-v2-xxl-alpha", "chirp-v3-5", "chirp-v3-0". Default is "chirp-v3-5".
            "prompt": "Describe the type of song you want to generate.", // Required, between 1 and 200 characters.
            "instrumental": "false", // Optional, if set to "true", generates an instrumental version.
            "style": "genre or style", // Optional, between 1 and 120 characters. Only required for custom generation.
            "custom": "false" // Optional, if set to "true", allows for longer and more detailed prompts (up to 1250 characters for most models, up to 3000 characters for "chirp-v3-5").
        }

GET Request

To retrieve the status and URLs for an existing song, send a GET request to /v1/suno/<uuid> or /suno/<uuid> with the song's UUID.

Response Format

For POST Request:

A successful response will include:

{
    "status": "success",
    "clips": ["uuid1", "uuid2"]
}

Each UUID can be used to retrieve the song's content via a subsequent GET request.

For GET Request:

The response will include:

{
            "status": "generating" | "finished",
            "image_url": "https://link_to_image",
            "audio_url": "https://link_to_audio",
            "video_url": "https://link_to_video"
        }

Example POST Request

POST /v1/suno
Content-Type: application/json

{
    "model": "chirp-v3-5",
    "prompt": "A catchy pop song with upbeat rhythm",
    "instrumental": "false",
    "custom": "false"
}

Example GET Request

GET /v1/suno/uuid1
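The two-step flow above (POST to start generation, then poll each clip UUID until its status is "finished") can be sketched with the standard library. The polling interval is an arbitrary choice, not part of the API:

```python
import json
import time
import urllib.request

API_BASE = "https://api.airforce"

def build_suno_payload(prompt, model="chirp-v3-5",
                       instrumental=False, custom=False):
    """Build the JSON payload for a song-generation POST request."""
    return {
        "model": model,
        "prompt": prompt,
        # The API documents these flags as the strings "true"/"false".
        "instrumental": str(instrumental).lower(),
        "custom": str(custom).lower(),
    }

def start_song(prompt, **kwargs):
    """POST to /v1/suno and return the list of clip UUIDs."""
    req = urllib.request.Request(
        f"{API_BASE}/v1/suno",
        data=json.dumps(build_suno_payload(prompt, **kwargs)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["clips"]

def poll_clip(uuid, interval=10):
    """Poll GET /v1/suno/<uuid> until the clip status is 'finished'."""
    while True:
        with urllib.request.urlopen(f"{API_BASE}/v1/suno/{uuid}") as resp:
            clip = json.loads(resp.read())
        if clip["status"] == "finished":
            return clip
        time.sleep(interval)

if __name__ == "__main__":
    for uuid in start_song("A catchy pop song with upbeat rhythm"):
        print(poll_clip(uuid)["audio_url"])
```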

Get Audio Endpoint

The /get-audio endpoint is used to convert text to speech and return the audio as an MP3 file. The text is provided via a URL query parameter.

Usage

Send a GET request to /get-audio with the following query parameter:

GET /get-audio?text=Your+text+here&voice=alex

For example, to convert "Hello world" to speech:

GET /get-audio?text=Hello+world&voice=sophia

Available voices are alex and sophia; more will be added soon.

Response Format

The response will be an MP3 audio file with the following format:

Content-Type: audio/mpeg
(binary audio data)

Example

Example GET request:

GET /get-audio?text=Hello+world&voice=alex

A successful request will return an MP3 file containing the spoken text.
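Downloading the audio can be sketched with the standard library (the output filename is arbitrary; the voice names follow the list above):

```python
import urllib.parse
import urllib.request

def build_audio_url(text, voice="alex", base_url="https://api.airforce"):
    """Build the /get-audio URL; the text is URL-encoded automatically."""
    params = urllib.parse.urlencode({"text": text, "voice": voice})
    return f"{base_url}/get-audio?{params}"

if __name__ == "__main__":
    url = build_audio_url("Hello world", voice="sophia")
    with urllib.request.urlopen(url) as resp:
        with open("hello.mp3", "wb") as f:
            f.write(resp.read())  # raw MP3 bytes
```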