Development

Function Calling

A provider-specific API feature where the model returns a structured tool-call request (function name + JSON arguments) that your runtime executes and feeds back.

First published April 14, 2026

Function calling is the concrete API-level mechanism for tool use. OpenAI, Anthropic, and Google each expose slightly different surface syntax, but the flow is identical: you pass tool definitions (name, description, JSON Schema for the arguments) in the request; the model decides whether to call one; your runtime executes the call and returns the result; generation continues.
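That request-execute-return loop can be sketched provider-agnostically. Everything here is a hypothetical stand-in: `call_model` stubs out a chat-completion endpoint (a real implementation would hit the provider's API), and `get_weather` returns a canned result.

```python
import json

def call_model(messages, tools):
    # Stub for a provider chat endpoint: requests one get_weather call,
    # then produces a final answer once it sees a tool result.
    if any(m["role"] == "tool" for m in messages):
        return {"role": "assistant", "content": "It is 18 degrees C in Paris."}
    return {
        "role": "assistant",
        "tool_calls": [
            {"id": "call_1", "name": "get_weather",
             "arguments": json.dumps({"city": "Paris", "units": "celsius"})}
        ],
    }

def get_weather(city, units="celsius"):
    return {"city": city, "temp": 18, "units": units}  # canned result

TOOLS = {"get_weather": get_weather}

def run(messages, tools):
    while True:
        reply = call_model(messages, tools)
        messages.append(reply)
        if "tool_calls" not in reply:
            return reply["content"]  # no call requested: generation is done
        for call in reply["tool_calls"]:
            # Execute the requested tool and feed the result back to the model.
            result = TOOLS[call["name"]](**json.loads(call["arguments"]))
            messages.append({"role": "tool", "tool_call_id": call["id"],
                             "content": json.dumps(result)})

answer = run([{"role": "user", "content": "Weather in Paris?"}], tools=None)
```

Real SDKs differ in message shapes and field names, but every provider's loop reduces to this: append the assistant's tool calls and your tool results to the conversation and call the model again.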

Key implementation gotchas:

  • Parallel tool calls: the model can request multiple calls in one turn, so handle concurrency
  • Tool validation: hallucinated arguments are common on smaller models, so validate before executing
  • Loop termination: track iterations and fail fast on infinite tool-call loops
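The validation gotcha can be sketched with stdlib-only checks (a real system would more likely run the full schema through a library such as jsonschema); `validate_args` is a hypothetical helper, not a provider API:

```python
def validate_args(args, schema):
    """Minimal checks: required keys present, enum values respected."""
    for key in schema.get("required", []):
        if key not in args:
            raise ValueError(f"missing required argument: {key}")
    for key, spec in schema.get("properties", {}).items():
        if key in args and "enum" in spec and args[key] not in spec["enum"]:
            raise ValueError(f"{key}={args[key]!r} not in {spec['enum']}")

schema = {
    "type": "object",
    "properties": {
        "city": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]},
    },
    "required": ["city"],
}

validate_args({"city": "Paris"}, schema)        # passes silently
try:
    validate_args({"units": "kelvin"}, schema)  # hallucinated args
except ValueError as e:
    print(e)  # missing required argument: city
```

Rejecting a bad call with a clear error message is useful beyond safety: feeding that error back as the tool result often lets the model correct its arguments on the next turn.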

Example Prompt

# OpenAI-style function definition
{
  "type": "function",
  "function": {
    "name": "get_weather",
    "description": "Get current weather for a city.",
    "parameters": {
      "type": "object",
      "properties": {
        "city": {"type": "string"},
        "units": {"type": "string", "enum": ["celsius", "fahrenheit"]}
      },
      "required": ["city"]
    }
  }
}
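One detail the definition above doesn't show: providers return the model's arguments as a JSON string, not a parsed object, so your runtime must decode before dispatching. A minimal sketch, with `get_weather` as a stand-in for a real lookup and `raw_call` as an assumed response shape:

```python
import json

def get_weather(city, units="celsius"):
    return f"{city}: 18 {units}"  # stand-in for a real weather lookup

# Assumed shape of a model-returned tool call; `arguments` is a JSON string.
raw_call = {"name": "get_weather",
            "arguments": '{"city": "Paris", "units": "celsius"}'}

args = json.loads(raw_call["arguments"])  # decode before dispatching
result = get_weather(**args)
print(result)  # Paris: 18 celsius
```

Because that string comes from model output, `json.loads` can fail on malformed JSON; wrap it in error handling rather than assuming well-formedness.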

When to use it

  • Integrating LLMs with existing APIs and databases
  • Building agents that need consistent, validatable tool calls
  • Using the strongly-typed schema as a safety layer against hallucinated args

When NOT to use it

  • Single-shot text generation with no side effects -- plain prompting is simpler
  • Your runtime can't validate or sandbox the called functions
  • You're targeting open-weight models with weak function-calling fidelity (consider a specialist like Hermes or constrained decoding instead)