OpenFang Setup Tutorial
Difficulty: Beginner | Duration: 15 Minutes | Reward: Master OpenFang deployment and model configuration
Preface
If you are looking for a lightweight yet powerful AI Agent framework, OpenFang is worth your attention. It is an open-source Agent Operating System written in Rust. The entire system compiles to only about 32MB, yet it features built-in support for 20+ mainstream LLM providers, including Anthropic Claude, Google Gemini, OpenAI GPT, DeepSeek, Groq, and more.
Unlike traditional frameworks, OpenFang is not just a thin Chatbot wrapper but a true autonomous Agent system that works for you: it can run on schedules, build knowledge graphs, monitor targets, generate leads, manage social media, and automatically report results to your Dashboard.
Today, let's build it together.
Target Audience
- Developers with 1-5 years of experience
- Technical personnel interested in AI Agents
- Tech enthusiasts looking to quickly deploy local AI capabilities
Core Dependencies and Environment
Before starting, ensure your machine meets the following conditions:
- OS: Linux, macOS, Windows (WSL2 or PowerShell)
- Rust: 1.75+ (recommended to manage via rustup)
- Memory: At least 4GB RAM
- Disk: At least 500MB available space
Install Rust (if not already installed)
# Linux/macOS
curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh
# Windows (using PowerShell)
winget install Rustlang.Rustup
# Or download and run rustup-init.exe from https://rustup.rs
# (the shell script above is for POSIX shells only)
After installation, run rustc --version to confirm.
Project Structure
openfang/
├── agents/                # Agent templates directory
│   ├── hello-world/
│   ├── coder/
│   ├── researcher/
│   └── ...
├── crates/                # Rust crates (14 in total)
│   ├── openfang-cli/      # CLI command line
│   ├── openfang-runtime/  # Runtime core
│   ├── openfang-api/      # REST API
│   └── ...
├── docs/                  # Official documentation
├── sdk/                   # Multi-language SDKs
│   ├── python/
│   └── javascript/
└── config.toml.example    # Configuration example
Quick Installation
Method 1: Official One-Click Install (Recommended)
Linux/macOS:
curl -fsSL https://openfang.sh/install | sh
Windows PowerShell:
irm https://openfang.sh/install.ps1 | iex
Once installed, you will see the following prompt:
OpenFang installed successfully!
Run 'openfang init' to get started.
Method 2: Manual Compilation
If you want to build from source or are in an environment that doesn't support the one-click installer:
# Clone the repository
git clone https://github.com/RightNow-AI/openfang.git
cd openfang
# Build release version
cargo build --release
# Or build only the CLI
cargo build --release -p openfang-cli
After compilation, the binary will be located at target/release/openfang (or target/release/openfang.exe).
Initial Configuration
Create Configuration File
Run the initialization command:
openfang init
This will create a configuration folder in your home directory:
~/.openfang/
└── config.toml # Main configuration file
Configuration Structure
Open ~/.openfang/config.toml. The basic configuration looks like this:
# API Service Config
host = "127.0.0.1"
port = 4200
# API Key (Optional, recommended for production)
api_key = ""
# Default Model Config
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"
# Agent Default Config
[agents.defaults]
[!TIP]
All fields in the config file are optional; missing fields will use default values.
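To illustrate how "missing fields use default values" typically behaves, here is a small sketch of a recursive config merge. The field names mirror the config.toml shown above, but this is illustrative Python, not OpenFang's actual Rust implementation:

```python
# Illustrative sketch only: user-supplied values are overlaid on defaults,
# so a config.toml that sets a single field still yields a full config.
DEFAULTS = {
    "host": "127.0.0.1",
    "port": 4200,
    "api_key": "",
    "default_model": {"provider": "groq", "model": "llama-3.3-70b-versatile"},
}

def merge_config(user_cfg: dict, defaults: dict = DEFAULTS) -> dict:
    """Recursively overlay user-supplied values on top of the defaults."""
    merged = dict(defaults)
    for key, value in user_cfg.items():
        if isinstance(value, dict) and isinstance(merged.get(key), dict):
            merged[key] = merge_config(value, merged[key])
        else:
            merged[key] = value
    return merged

# A config that only changes the port still gets host, api_key, and model:
cfg = merge_config({"port": 4201})
```

This is why you can safely delete any line from config.toml that you don't want to override.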
Setting API Keys
OpenFang supports setting API Keys via environment variables. We recommend starting with Groq (free tier) or Defapi (half price) for the best experience:
# Method 1: Groq (Free with rate limits)
export GROQ_API_KEY="gsk_your_groq_key"
# Method 2: Defapi (Half price, recommended)
export DEFAPI_API_KEY="your_defapi_key"
[!WARNING]
Do not write API Keys directly into the configuration file; use environment variable references instead.
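The env-var pattern the warning recommends can be sketched like this. The variable names match the examples above; the helper function itself is hypothetical, not part of the OpenFang CLI:

```python
import os

# Hypothetical helper: resolve a provider's API key from the environment,
# never from config.toml. Env var names match the tutorial's examples.
PROVIDER_ENV = {
    "groq": "GROQ_API_KEY",
    "defapi": "DEFAPI_API_KEY",
    "anthropic": "ANTHROPIC_API_KEY",
    "openai": "OPENAI_API_KEY",
}

def resolve_api_key(provider: str) -> str:
    """Return the provider's key from the environment, failing loudly if unset."""
    env_var = PROVIDER_ENV.get(provider)
    if env_var is None:
        raise ValueError(f"unknown provider: {provider}")
    key = os.environ.get(env_var, "")
    if not key:
        raise RuntimeError(f"{env_var} is not set; export it before starting")
    return key
```

Failing loudly at startup beats a silent empty key that only surfaces as a 401 at request time.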
Starting the Service
Start OpenFang
openfang start
You will see output similar to:
Starting OpenFang daemon...
Server listening on http://127.0.0.1:4200
Dashboard: http://127.0.0.1:4200
OpenFang is ready!
Verify the Service
# Health check
curl http://127.0.0.1:4200/api/health
# View available models
curl http://127.0.0.1:4200/api/models
# View provider status
curl http://127.0.0.1:4200/api/providers
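If you want the health check in a script (for example, to wait for the daemon in CI), a minimal Python poller looks like this. The /api/health path comes from the curl example above; treating any HTTP 200 as healthy is an assumption about the endpoint:

```python
import urllib.request

def check_health(base_url: str = "http://127.0.0.1:4200",
                 timeout: float = 2.0) -> bool:
    """Return True if the health endpoint answers with HTTP 200."""
    try:
        with urllib.request.urlopen(f"{base_url}/api/health",
                                    timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused, DNS failure, timeout: daemon not reachable.
        return False
```

Call it in a retry loop with a short sleep if you need to block until `openfang start` has finished booting.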
Access the Dashboard
Open your browser and go to http://127.0.0.1:4200 to see the OpenFang Web Dashboard.
Your First Agent
Create Using Built-in Templates
OpenFang provides several built-in Agent templates located in the agents/ directory:
# List available templates
openfang template list
Common templates:
- hello-world: Simplest chat Agent
- coder: Code writing assistant
- researcher: Research assistant
- assistant: General purpose assistant
Create the Agent
# Create using the hello-world template
openfang agent create hello-world my-first-agent
Test Messaging
# Send message via CLI
openfang agent message my-first-agent "Hello, say hi in 3 words"
Or via REST API:
curl -X POST http://127.0.0.1:4200/api/agents/{agent-id}/message \
-H "Content-Type: application/json" \
-d '{"message": "Hello, say hi in 3 words"}'
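The same call from Python, using only the standard library. The endpoint path and JSON body are taken from the curl example above; the shape of the response is an assumption, so it is returned as-is for the caller to inspect:

```python
import json
import urllib.request

def build_message_request(base_url: str, agent_id: str,
                          message: str) -> urllib.request.Request:
    """Build the POST /api/agents/{agent-id}/message request."""
    body = json.dumps({"message": message}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/api/agents/{agent_id}/message",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def send_message(agent_id: str, message: str,
                 base_url: str = "http://127.0.0.1:4200") -> dict:
    """Send a message to a running agent and return the parsed JSON reply."""
    req = build_message_request(base_url, agent_id, message)
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())
```

Usage: `send_message("my-first-agent", "Hello, say hi in 3 words")` against a running daemon.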
If you receive a proper response, your Agent is working!
Model Configuration Practice
Configuring Groq (Free Tier)
Groq offers a free tier with extremely fast speeds, ideal for development and testing:
# ~/.openfang/config.toml
[default_model]
provider = "groq"
model = "llama-3.3-70b-versatile"
Environment variable:
export GROQ_API_KEY="gsk_your_key"
[!TIP]
Groq's free tier has rate limits. For production, consider other providers.
Configuring Defapi (Half-Price Recommendation)
If you want to use high-quality models at a lower price, Defapi is an excellent choice. Prices are only 50% of the official rates:
- Gemini 2.5 Pro: Official $1.25/M → Defapi $0.625/M
- Claude Sonnet 4: Official $3.00/M → Defapi $1.50/M
# ~/.openfang/config.toml
# Add custom provider
[[providers]]
name = "defapi"
base_url = "https://api.defapi.org/v1"
api_key_env = "DEFAPI_API_KEY"
# Set default model
[default_model]
provider = "defapi"
model = "claude-sonnet-4-20250514"
Environment variable:
export DEFAPI_API_KEY="your_defapi_key"
[!TIP]
Defapi supports multiple protocols: v1/chat/completions, v1/messages, v1beta/models/*, making it perfectly compatible with OpenFang.
Configuring Anthropic Claude
[default_model]
provider = "anthropic"
model = "claude-sonnet-4-20250514"
export ANTHROPIC_API_KEY="sk-ant-your_key"
Configuring Google Gemini
[default_model]
provider = "gemini"
model = "gemini-2.5-flash"
# Either GEMINI_API_KEY or GOOGLE_API_KEY works
export GEMINI_API_KEY="AIza_your_key"
Configuring OpenAI
[default_model]
provider = "openai"
model = "gpt-4o-mini"
export OPENAI_API_KEY="sk-your_key"
Troubleshooting
Q1: API Key doesn't seem to work
Checklist:
- Confirm the environment variable is exported correctly: echo $GROQ_API_KEY
- Restart the OpenFang service: openfang restart
- View provider status: curl http://127.0.0.1:4200/api/providers
Q2: Model shows as unavailable
Possible causes:
- API Key not configured or formatted incorrectly
- The model is not in the provider's supported list
- Network connection issues
Solution:
# View detailed model list
curl http://127.0.0.1:4200/api/models | jq
# Test specific provider connection
curl -X POST http://127.0.0.1:4200/api/providers/groq/test
Q3: Port 4200 is already in use
# Check the process occupying the port
lsof -i :4200 # Linux/macOS
netstat -ano | findstr :4200 # Windows
# Modify the config file to use a different port
# ~/.openfang/config.toml
port = 4201
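Before editing the config, you can check whether a candidate port is free with a quick bind test. This is a cross-platform alternative to the lsof/netstat commands above (a hypothetical helper, not part of OpenFang):

```python
import socket

def port_in_use(port: int, host: str = "127.0.0.1") -> bool:
    """Try to bind the port; if bind fails, something is already listening."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True
```

For example, `port_in_use(4201)` tells you whether the fallback port in the snippet above is available before you restart the daemon.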
Q4: Compilation error
Ensure your Rust version is up to date:
rustup update
rustc --version # should be >= 1.75
If you encounter dependency issues, try:
cargo clean
cargo build --release
Q5: Command not found on Windows
Ensure the OpenFang binary is in your PATH:
# Add binary directory to PATH (permanent)
$env:PATH += ";$env:USERPROFILE\.openfang\bin"
# Or use the full path directly
~\.openfang\bin\openfang.exe start
Q6: How to view logs
# View logs in real-time
openfang logs
# Or view the log file
cat ~/.openfang/logs/openfang.log
Advanced Directions
Hands: 7 Built-in Capability Packs
The biggest feature of OpenFang is its 7 powerful built-in Hands:
| Hand | Functionality |
|---|---|
| Clip | YouTube video downloading, clipping, dubbing, and auto-posting to social media |
| Lead | Daily lead generation, customer prospecting, scoring, and deduplication |
| Collector | OSINT intelligence gathering, change detection, and knowledge graph construction |
| Predictor | Prediction engine with confidence intervals and accuracy tracking |
| Researcher | Deep research, multi-source cross-verification, and APA format reporting |
| | Auto-tweeting, interaction replies, and posting schedules |
| Browser | Web automation, form filling, and multi-step workflows |
# Activate Researcher Hand
openfang hand activate researcher
# Check status
openfang hand status researcher
# Pause
openfang hand pause researcher
Custom Agent Development
Create a custom Agent:
# my-agent.toml
name = "my-agent"
version = "0.1.0"
description = "My custom agent"
author = "you"
[model]
provider = "groq"
model = "llama-3.3-70b-versatile"
[capabilities]
tools = ["file_read", "web_fetch", "bash"]
memory_read = ["*"]
memory_write = ["self.*"]
# Deploy custom Agent
openfang agent deploy my-agent.toml
MCP Integration
OpenFang supports MCP (Model Context Protocol), which allows connecting to various external services:
[[mcp_servers]]
name = "filesystem"
command = "npx"
args = ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/dir"]
Summary
Today we completed the full setup process for OpenFang:
- Installed OpenFang (official script or manual build)
- Initialized configuration (config.toml + environment variables)
- Started and verified the service
- Created and tested the first Agent
- Configured multiple LLM providers (Groq, Defapi, Claude, Gemini)
- Troubleshot common issues
Now you have a complete AI Agent execution platform! Next steps you can try:
- Activate a Hand to experience automated workflows
- Create a custom Agent with your own tools and capabilities
- Explore MCP integration to connect more external services
Have fun!
[!TIP]
For production, it is recommended to use Defapi (half price) or configure multiple Providers for failover to ensure service stability.