OpenFang Setup Tutorial

AI Expert

Difficulty: Beginner | Duration: 15 Minutes | Reward: Master OpenFang deployment and model configuration

Preface

If you are looking for a lightweight yet powerful AI Agent framework, OpenFang is worth your attention. It is an open-source Agent Operating System written in Rust. The entire system compiles to only about 32MB, yet it features built-in support for 20+ mainstream LLM providers, including Anthropic Claude, Google Gemini, OpenAI GPT, DeepSeek, Groq, and more.

Unlike traditional frameworks, OpenFang is not just a thin Chatbot wrapper but a genuinely autonomous Agent system that works for you: it can run on schedules, build knowledge graphs, monitor targets, generate leads, manage social media, and automatically report results to your Dashboard.

Today, let's build it together.


Target Audience

  • Developers with 1-5 years of experience
  • Technical personnel interested in AI Agents
  • Tech enthusiasts looking to quickly deploy local AI capabilities

Core Dependencies and Environment

Before starting, ensure your machine meets the following conditions:

  • OS: Linux, macOS, Windows (WSL2 or PowerShell)
  • Rust: 1.75+ (recommended to manage via rustup)
  • Memory: At least 4GB RAM
  • Disk: At least 500MB available space

Install Rust (if not already installed)

# Linux/macOS
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh

# Windows (using PowerShell)
irm https://sh.rustup.rs | iex

After installation, run rustc --version to confirm.


Project Structure

openfang/
├── agents/                 # Agent templates directory
│   ├── hello-world/
│   ├── coder/
│   ├── researcher/
│   └── ...
├── crates/                # Rust crates (14 in total)
│   ├── openfang-cli/      # CLI command line
│   ├── openfang-runtime/  # Runtime core
│   ├── openfang-api/      # REST API
│   └── ...
├── docs/                  # Official documentation
├── sdk/                   # Multi-language SDKs
│   ├── python/
│   └── javascript/
└── config.toml.example    # Configuration example

Quick Installation

Method 1: Official One-Click Install (Recommended)

Linux/macOS:

curl -fsSL https://openfang.sh/install | sh

Windows PowerShell:

irm https://openfang.sh/install.ps1 | iex

Once installed, you will see the following prompt:

✅ OpenFang installed successfully!
Run 'openfang init' to get started.

Method 2: Manual Compilation

If you want to build from source or are in an environment that doesn't support the one-click installer:

# Clone the repository
git clone https://github.com/RightNow-AI/openfang.git
cd openfang

# Build release version
cargo build --release

# Or build only the CLI
cargo build --release -p openfang-cli

After compilation, the binary will be located at target/release/openfang (or target/release/openfang.exe).


Initial Configuration

Create Configuration File

Run the initialization command:

openfang init

This will create a configuration folder in your home directory:

~/.openfang/
└── config.toml    # Main configuration file

Configuration Structure

Open ~/.openfang/config.toml. The basic configuration looks like this:

# API Service Config
host = "127.0.0.1"
port = 4200

# API Key (Optional, recommended for production)
api_key = ""

# Default Model Config
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"

# Agent Default Config
[agents.defaults]

[!TIP]
All fields in the config file are optional; missing fields will use default values.

Setting API Keys

OpenFang supports setting API Keys via environment variables. We recommend starting with Groq (free tier) or Defapi (half price) for the best experience:

# Method 1: Groq (Free with rate limits)
export GROQ_API_KEY="gsk_your_groq_key"

# Method 2: Defapi (Half price, recommended)
export DEFAPI_API_KEY="your_defapi_key"

[!WARNING]
Do not write API Keys directly into the configuration file; use environment variable references instead.


Starting the Service

Start OpenFang

openfang start

You will see output similar to:

🚀 Starting OpenFang daemon...
📡 Server listening on http://127.0.0.1:4200
🌐 Dashboard: http://127.0.0.1:4200
✅ OpenFang is ready!

Verify the Service

# Health check
curl http://127.0.0.1:4200/api/health

# View available models
curl http://127.0.0.1:4200/api/models

# View provider status
curl http://127.0.0.1:4200/api/providers

Access the Dashboard

Open your browser and go to http://127.0.0.1:4200 to see the OpenFang Web Dashboard.


Your First Agent

Create Using Built-in Templates

OpenFang provides several built-in Agent templates located in the agents/ directory:

# List available templates
openfang template list

Common templates:

  • hello-world: Simplest chat Agent
  • coder: Code writing assistant
  • researcher: Research assistant
  • assistant: General purpose assistant

Create the Agent

# Create using the hello-world template
openfang agent create hello-world my-first-agent

Test Messaging

# Send message via CLI
openfang agent message my-first-agent "Hello, say hi in 3 words"

Or via REST API:

curl -X POST http://127.0.0.1:4200/api/agents/{agent-id}/message \
  -H "Content-Type: application/json" \
  -d '{"message": "Hello, say hi in 3 words"}'

If you receive a proper response, your Agent is working!
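
The same call can be made from Python with the standard library alone. This sketch is built from the curl example above; the shape of the JSON reply is not specified here, so the function simply returns whatever the server sends back:

```python
import json
import urllib.request

def send_message(base_url: str, agent_id: str, text: str) -> dict:
    """POST a message to the agent endpoint and return the parsed JSON reply."""
    url = f"{base_url}/api/agents/{agent_id}/message"
    body = json.dumps({"message": text}).encode("utf-8")
    req = urllib.request.Request(
        url, data=body, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return json.loads(resp.read().decode("utf-8"))

# Usage (requires a running OpenFang daemon):
# reply = send_message("http://127.0.0.1:4200", "my-first-agent",
#                      "Hello, say hi in 3 words")
```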


Model Configuration Practice

Configuring Groq (Free Tier)

Groq offers a free tier with extremely fast speeds, ideal for development and testing:

# ~/.openfang/config.toml
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"

Environment variable:

export GROQ_API_KEY="gsk_your_key"

[!TIP]
Groq's free tier has rate limits. For production, consider other providers.

Configuring Defapi (Half-Price Recommendation)

If you want to use high-quality models at a lower price, Defapi is an excellent choice. Prices are only 50% of the official rates:

  • Gemini 2.5 Pro: Official $1.25/M → Defapi only $0.625/M
  • Claude Sonnet 4: Official $3.00/M → Defapi only $1.50/M

# ~/.openfang/config.toml

# Add custom provider
[[providers]]
name = "defapi"
base_url = "https://api.defapi.org/v1"
api_key_env = "DEFAPI_API_KEY"

# Set default model
[default_model]
provider = "defapi"
model = "claude-sonnet-4-20250514"

Environment variable:

export DEFAPI_API_KEY="your_defapi_key"

[!TIP]
Defapi supports multiple protocols: v1/chat/completions, v1/messages, v1beta/models/*, making it perfectly compatible with OpenFang.

Configuring Anthropic Claude

[default_model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"

Environment variable:

export ANTHROPIC_API_KEY="sk-ant-your_key"

Configuring Google Gemini

[default_model]
provider = "gemini"
model = "gemini-2.5-flash"

Environment variable (either GEMINI_API_KEY or GOOGLE_API_KEY works):

export GEMINI_API_KEY="AIza_your_key"

Configuring OpenAI

[default_model]
provider = "openai"
model = "gpt-4o-mini"

Environment variable:

export OPENAI_API_KEY="sk-your_key"

Troubleshooting

Q1: API Key doesn't seem to work

Checklist:

  1. Confirm the environment variable is exported correctly: echo $GROQ_API_KEY
  2. Restart the OpenFang service: openfang restart
  3. View provider status: curl http://127.0.0.1:4200/api/providers

Q2: Model shows as unavailable

Possible causes:

  • API Key not configured or formatted incorrectly
  • The model is not in the provider's supported list
  • Network connection issues

Solution:

# View detailed model list
curl http://127.0.0.1:4200/api/models | jq

# Test specific provider connection
curl -X POST http://127.0.0.1:4200/api/providers/groq/test

Q3: Port 4200 is already in use

# Check the process occupying the port
lsof -i :4200   # Linux/macOS
netstat -ano | findstr :4200  # Windows

# Modify the config file to use a different port
# ~/.openfang/config.toml
port = 4201
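
If you'd rather check from a script, a quick way to test whether a port is free is simply to try binding it. This is a generic Python standard-library helper, not part of OpenFang:

```python
import socket

def port_is_free(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; success means nothing is listening on it."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return True
        except OSError:
            return False

# Example: pick a free port before editing config.toml
# if not port_is_free(4200):
#     print("4200 is taken, try 4201")
```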

Q4: Compilation error

Ensure your Rust version is up to date:

rustup update
rustc --version  # should be >= 1.75

If you encounter dependency issues, try:

cargo clean
cargo build --release

Q5: Command not found on Windows

Ensure the OpenFang binary is in your PATH:

# Add binary directory to PATH (current session only)
$env:PATH += ";$env:USERPROFILE\.openfang\bin"

# Or use the full path directly
~\.openfang\bin\openfang.exe start

Q6: How to view logs

# View logs in real-time
openfang logs

# Or view the log file
cat ~/.openfang/logs/openfang.log

Advanced Directions

Hands: 7 Built-in Capability Packs

The standout feature of OpenFang is its 7 powerful built-in Hands:

| Hand | Functionality |
| --- | --- |
| Clip | YouTube video downloading, clipping, dubbing, and auto-posting to social media |
| Lead | Daily lead generation, customer prospecting, scoring, and deduplication |
| Collector | OSINT intelligence gathering, change detection, and knowledge graph construction |
| Predictor | Prediction engine with confidence intervals and accuracy tracking |
| Researcher | Deep research, multi-source cross-verification, and APA format reporting |
| Twitter | Auto-tweeting, interaction replies, and posting schedules |
| Browser | Web automation, form filling, and multi-step workflows |

# Activate Researcher Hand
openfang hand activate researcher

# Check status
openfang hand status researcher

# Pause
openfang hand pause researcher

Custom Agent Development

Create a custom Agent:

# my-agent.toml
name = "my-agent"
version = "0.1.0"
description = "My custom agent"
author = "you"

[model]
provider = "groq"
model = "llama-3.3-70b-versatile"

[capabilities]
tools = ["file_read", "web_fetch", "bash"]
memory_read = ["*"]
memory_write = ["self.*"]

# Deploy custom Agent
openfang agent deploy my-agent.toml

MCP Integration

OpenFang supports MCP (Model Context Protocol), which allows connecting to various external services:

[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]

Summary

Today we completed the full setup process for OpenFang:

  • ✅ Installed OpenFang (official script or manual build)
  • ✅ Initialized configuration (config.toml + environment variables)
  • ✅ Started and verified the service
  • ✅ Created and tested the first Agent
  • ✅ Configured various LLM providers (Groq, Defapi, Claude, Gemini)
  • ✅ Learned to troubleshoot common issues

Now you have a complete AI Agent execution platform! Next steps you can try:

  1. Activate a Hand to experience automated workflows
  2. Create a custom Agent with your own tools and capabilities
  3. Explore MCP integration to connect more external services

Have fun!

[!TIP]
For production, it is recommended to use Defapi (half price) or configure multiple Providers for failover to ensure service stability.


Further Reading