Using Ollama Agents in Greentic

This guide explains how to use the built-in OllamaAgent node type to interact with local Ollama models in your Greentic flows. Agents support generation, embeddings, and structured assistant prompts with tool calling and state updates.


🧠 Overview

The OllamaAgent wraps a call to your local Ollama instance using the ollama-rs client. It supports the following modes:

- chat (default): structured prompting with tool calling, state updates, and routing
- generate: plain text completion
- embed: vector embeddings

🧩 Configuration

Example YAML:

generate_reply:
  ollama:
    task: "Summarise the payload"
    model: "llama3"
    mode: generate
    ollama_url: "http://localhost:11434"

Fields:

- task: the instruction passed to the model
- model: the Ollama model to use (for example llama3)
- mode: one of chat (default), generate, or embed
- ollama_url: base URL of the Ollama server (Ollama's own default is http://localhost:11434)
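Since chat is the default, a chat-mode agent can simply omit the mode field. For example (the plan_step node name and task are illustrative):

plan_step:
  ollama:
    task: "Decide the next step and call tools if needed"
    model: "llama3"
    ollama_url: "http://localhost:11434"
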

🤖 Chat Mode (Structured Prompting)

This is the default and most powerful mode. The agent receives:

- the configured task
- the incoming payload
- the current flow state
- the set of tools available for tool calls
The model must return structured JSON with:

{
  "payload": { ... },
  "state": {
    "add": [...],
    "update": [...],
    "delete": [...]
  },
  "tool_call": {
    "name": "weather_api",
    "action": "forecast",
    "input": { "q": "London" }
  },
  "connections": ["next_node"],
  "reply_to_origin": false
}

Empty fields should be omitted. Structural errors in the response are logged.
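For example, a reply that only returns a payload and routes to the next node, with every optional field omitted, might look like this (values are illustrative):

{
  "payload": { "summary": "Rain expected in London tomorrow." },
  "connections": ["next_node"]
}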


✍️ Generate Mode

For simple completion tasks:

generate_summary:
  ollama:
    mode: generate
    task: "Create summary"
    model: "llama3"

Payload:

{ "prompt": "Explain why the sky is blue" }

Returns:

{ "generated_text": "..." }

🔎 Embed Mode

To compute vector embeddings:

vectorise:
  ollama:
    mode: embed
    task: "Embed this"
    model: "llama3-embed"

Payload:

{ "text": "The quick brown fox." }

Returns:

{ "embeddings": [0.123, -0.456, ...] }

You’re now ready to use AI agents inside Greentic to power dynamic, tool-calling, state-aware flows!