module: dllm
class AsyncClient(dllm_base_url: Optional[Union[str, list[str]]] = None,
api_key_or_token: Optional[str] = None)
Main client class for interacting with DLLM services. dllm_base_url accepts either a base URL string for a remote DLLM API endpoint, or a list of local JSON prompt configuration file paths; if omitted, it defaults to "https://api.dllmstack.com". In remote mode, the API key can be passed explicitly via api_key_or_token or read from the DLLM_API_KEY environment variable.
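A minimal construction sketch based on the signature above. The endpoint URL and environment variable are the documented defaults; the local prompt file path is a hypothetical example:

```python
import os
from dllm import AsyncClient

# Remote mode: explicit base URL and key. Both are optional; the client
# falls back to "https://api.dllmstack.com" and the DLLM_API_KEY env var.
remote = AsyncClient(
    dllm_base_url="https://api.dllmstack.com",
    api_key_or_token=os.environ.get("DLLM_API_KEY"),
)

# Local mode: a list of JSON prompt configuration files.
# "prompts/chat.json" is a hypothetical path.
local = AsyncClient(dllm_base_url=["prompts/chat.json"])
```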
single(value: str) -> ValueModel
Creates a ValueModel containing a single text value. The text is stored in the ValueModel's texts array.
dict(value: dict) -> ValueModel
Creates a ValueModel from a dictionary via a shallow copy: the 'texts' and 'images' keys are handled specially, and all other keys become custom attributes accessible via model_fields.
async image(value: str) -> ValueModel
Creates a ValueModel containing an image. Accepts URL (http/https), local file path, or base64 encoded image data.
async file(name: str) -> ValueModel
Reads a text file asynchronously and returns its contents in a ValueModel. Raises IOError if the file cannot be read.
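The four factory methods above can be sketched together as follows. This is an illustrative sketch only: the image URL and text file name are hypothetical, and note that image() and file() are coroutines while single() and dict() are not:

```python
import asyncio
from dllm import AsyncClient

async def main():
    client = AsyncClient()

    v1 = client.single("Hello")                 # stored in the texts array
    v2 = client.dict({"texts": ["Hi"], "lang": "en"})  # 'lang' becomes a custom attribute
    v3 = await client.image("https://example.com/cat.png")  # URL, local path, or base64
    v4 = await client.file("notes.txt")         # raises IOError if unreadable

asyncio.run(main())
```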
openai(project_id: str, spec_id: str) -> Union[OpenAIClient, OpenAIClient2]
Returns an OpenAI chat completions client for the specified project and spec. Uses remote API if dllm_base_url is a URL, otherwise uses local prompt configuration.
openai_chat(project_id: str, spec_id: str) -> Union[OpenAIClient, OpenAIClient2]
Alias for openai() method. Returns an OpenAI chat completions client.
openai_responses(project_id: str, spec_id: str) -> Union[OpenAIResponses, OpenAIResponses2]
Returns an OpenAI Responses API client for the specified project and spec. The Responses API supports structured outputs and function calling with input/instructions format.
openai_image(project_id: str, spec_id: str) -> Union[OpenAIImageClient, OpenAIImageClient2]
Returns an OpenAI image generation client (DALL-E) for the specified project and spec.
gemini(project_id: str, spec_id: str) -> Union[GeminiClient, GeminiClient2]
Returns a Google Gemini chat completions client for the specified project and spec.
gemini_imagen(project_id: str, spec_id: str) -> Union[GeminiImagenClient, GeminiImagenClient2]
Returns a Google Imagen image generation client for the specified project and spec.
grok_chat(project_id: str, spec_id: str) -> Union[GrokClient, GrokClient2]
Returns a Grok (x.ai) chat completions client for the specified project and spec.
grok_image(project_id: str, spec_id: str) -> Union[GrokImageClient, GrokImageClient2]
Returns a Grok (x.ai) image generation client for the specified project and spec.
anthropic(project_id: str, spec_id: str) -> Union[AnthropicClient, AnthropicClient2]
Returns an Anthropic Claude messages client for the specified project and spec.
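The provider factory methods all share the (project_id, spec_id) signature; which concrete client class is returned depends on whether the AsyncClient was created in remote or local mode. A sketch, using hypothetical project and spec identifiers:

```python
from dllm import AsyncClient

client = AsyncClient()

# All identifiers below are hypothetical placeholders.
chat = client.openai("my-project", "chat-v1")        # OpenAI chat completions
images = client.openai_image("my-project", "img-v1") # OpenAI image generation
gemini = client.gemini("my-project", "gemini-v1")    # Google Gemini chat
claude = client.anthropic("my-project", "claude-v1") # Anthropic Claude messages
```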
module: dllm.asyncdllm
class ValueModel(value: any = None)
Base class for all request and response models. You typically don't instantiate this class directly; use AsyncClient.single(), AsyncClient.dict(), or AsyncClient.image() instead. Stores texts and images arrays, plus optional custom attributes accessible via model_fields.
is_value_empty() -> bool
Returns True if the ValueModel contains no texts, images, or custom fields.
add_param(name: str, value: any)
Adds a custom attribute to the model. Cannot use reserved names 'texts' or 'images'.
text -> str
Property that returns the first text value, or empty string if no texts exist.
image -> any
Property that returns the first image value, or None if no images exist.
add_text(text: str)
Appends a text string to the texts array.
async add_image(anything: str, b64: bool = False, format: Optional[str] = None, width: Optional[int] = None, height: Optional[int] = None)
Adds an image from a URL (http/https), local file path, or base64 encoded data. Automatically detects image dimensions and format using PIL.
async __or__(other: T) -> T
Pipeline operator for chaining operations: the pipe (|) operator passes this ValueModel's values into an LlmClientModel for processing. Because __or__ is a coroutine, the piped expression must be awaited.
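A sketch of the pipeline operator, assuming a client obtained as documented above (project and spec identifiers are hypothetical). Since __or__ is async, the whole piped expression is awaited:

```python
import asyncio
from dllm import AsyncClient

async def main():
    client = AsyncClient()
    chat = client.openai("my-project", "chat-v1")  # hypothetical ids

    # Pipe a ValueModel into an LlmClientModel; the async __or__
    # means the expression itself must be awaited.
    result = await (client.single("Translate to French: hello") | chat)
    print(result.text)  # first text of the response ValueModel

asyncio.run(main())
```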
class LlmClientModel(ValueModel)
Base class for all LLM provider clients (OpenAI, Gemini, Grok, Anthropic). Extends ValueModel to add LLM-specific functionality including response handling, token tracking, and timing metrics. Stores cumulative elapsed_time, input_tokens, output_tokens, and price across multiple calls.
async create_response(extra_body: dict) -> Self
Abstract method that must be implemented by subclasses to create LLM API responses. Takes extra_body parameters from prompt configuration and returns self for chaining.
async get_query_params() -> list
Retrieves query parameter definitions from the prompt specification metadata. Returns a list of parameter specifications with name, type, and other properties.
async __call__(*args, **kwargs) -> Self
Executes the LLM API call. Accepts ValueModel arguments, extracts parameters according to query param definitions, and passes them to create_response(). Supports simple single-parameter or complex multi-parameter calls.
async aclose()
Closes and releases resources used by the LLM client, including underlying HTTP connections. Should be called when the client is no longer needed to properly clean up resources.
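A lifecycle sketch tying __call__, the cumulative metrics, and aclose() together. The identifiers are hypothetical, and the try/finally pattern is a suggested convention rather than a documented requirement:

```python
import asyncio
from dllm import AsyncClient

async def main():
    client = AsyncClient()
    chat = client.openai("my-project", "chat-v1")  # hypothetical ids
    try:
        await chat(client.single("First call"))
        await chat(client.single("Second call"))
        # elapsed_time, input_tokens, output_tokens, and price
        # accumulate across calls on the same client instance.
        print(chat.input_tokens, chat.output_tokens,
              chat.elapsed_time, chat.price)
    finally:
        await chat.aclose()  # release underlying HTTP connections

asyncio.run(main())
```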
class OpenAIClient(LlmClientModel)
Client for OpenAI Chat Completions API. Handles chat-based text generation, including support for tool calls. Automatically retries on empty responses and tracks token usage. API key can be provided or loaded from OPENAI_API_KEY environment variable.
class OpenAIImageClient(LlmClientModel)
Client for OpenAI Image Generation API (DALL-E). Generates images from text prompts and returns them as URLs or base64 encoded data. Handles image metadata including dimensions and format.
class GeminiClient(LlmClientModel)
Client for Google Gemini Chat Completions API. Supports text and multimodal generation with automatic retry logic. API key can be provided or loaded from GOOGLE_API_KEY or GEMINI_API_KEY environment variables.
class GeminiImagenClient(GeminiClient)
Client for Google Imagen image generation. Generates images from text prompts using Gemini's imaging capabilities. Returns base64 encoded image data.
class GrokClient(OpenAIClient)
Client for Grok (x.ai) Chat Completions API. Extends OpenAIClient with Grok-specific configurations. API key loaded from GROK_API_KEY environment variable.
class GrokImageClient(OpenAIImageClient)
Client for Grok (x.ai) Image Generation API. Extends OpenAIImageClient for Grok's image generation capabilities, using b64_json response format.
class AnthropicClient(LlmClientModel)
Client for Anthropic Claude Messages API. Handles Claude chat completions with automatic retry logic and token tracking. API key can be provided or loaded from ANTHROPIC_API_KEY environment variable.
class OpenAIResponses(LlmClientModel)
Client for OpenAI Responses API. Supports structured outputs and function calling using the /v1/responses endpoint. Handles both text output and function calls, with automatic retry logic and token tracking. API key can be provided or loaded from OPENAI_API_KEY environment variable.