BeeAI Framework: Your Guide from Zero to Hero

12 minute read

Welcome to the BeeAI Framework tutorial! This comprehensive guide is meticulously crafted to take you from a complete beginner to a proficient developer, leveraging the powerful capabilities of the BeeAI Framework. Throughout this guide, you’ll master key concepts and practical applications that will enable you to build intelligent, context-aware AI applications.

Initially, you will learn foundational concepts like creating and rendering Prompt Templates to dynamically generate prompts tailored for specific interactions. Following that, you’ll explore ChatModel Interaction, discovering effective ways to interact with language models through message-based communications. You’ll then delve into advanced techniques of Memory Handling, essential for managing conversation histories and maintaining contextual coherence in AI interactions.

Further, you’ll gain expertise in enforcing Structured Outputs using robust Pydantic schemas, ensuring your AI’s responses adhere to predefined formats, thus enhancing reliability and predictability. You will also understand how to utilize System Prompts to strategically guide the behavior of language models, optimizing their responses for your specific use cases.

The tutorial advances into sophisticated areas such as developing ReAct Agents and Tools, which empowers your AI agents with reasoning and actionable capabilities through seamless integration of external tools. Finally, you will master Workflows, effectively orchestrating multiple steps and complex agent interactions into streamlined, dynamic processes, including the sophisticated management of multi-agent systems.

Below is a comprehensive table of contents for easy navigation through your journey with the BeeAI Framework.

Table of Contents

  • BeeAI Framework Basics
    • Setup Environment
    • Prompt Templates
    • The ChatModel
    • Memory Handling
    • Structured Outputs
    • System Prompts
  • BeeAI ReAct Agents
  • BeeAI Workflows
  • Backend
  • Conclusion

BeeAI Framework Basics

Dive into the foundational concepts of the BeeAI Framework, progressively building your knowledge and practical skills to confidently create intelligent, context-aware applications.

I will present some examples to demonstrate the fundamental usage patterns of BeeAI in Python. They progressively increase in complexity, providing a well-rounded overview of the framework.


Setup Environment

This section outlines the steps to set up your environment for running BeeAI Framework Python code examples on Windows and Ubuntu 22.04.

Prerequisites

  • Python 3.12+: Required for BeeAI Framework.
  • Anaconda or Miniconda (Recommended): For easier environment management.

Step-by-step Setup

Follow the instructions for your operating system.

Windows

  1. Install Python 3.12+:

    • Download from python.org.
    • Important: Check “Add Python 3.12 to PATH” during installation.
  2. Install Anaconda/Miniconda:

    • Download the Windows installer from Anaconda or Miniconda.
    • Run the installer and follow the prompts.

  3. Open Anaconda Prompt: Search in Start Menu.

  4. Create Virtual Environment:

    python -m venv venv
    
  5. Activate Virtual Environment:

    venv\Scripts\activate
    
  6. Install BeeAI Framework & Dependencies:

    pip install beeai-framework
    # Install additional dependencies if needed by examples (e.g., visualization libraries)
    # pip install pandas networkx matplotlib plotly scikit-learn
    
  7. Install Ollama:

    • Download the Windows installer from ollama.com.
    • Run the installer.
  8. Start Ollama Server: Open a new Anaconda Prompt, run the following, and keep the window open:

    ollama serve
    
  9. Download Ollama Model:

    ollama pull granite3.1-dense:8b
    
  10. Watsonx.ai Credentials (If using Watsonx):

    • Obtain Project ID, API Key, and API Endpoint URL from your Watsonx.ai service.

    • Set environment variables in Anaconda Prompt (or system-wide):

      set WATSONX_PROJECT_ID=YOUR_WATSONX_PROJECT_ID
      set WATSONX_API_KEY=YOUR_WATSONX_API_KEY
      set WATSONX_API_URL=YOUR_WATSONX_API_ENDPOINT_URL
      

Ubuntu 22.04

  1. Install Python 3.12+:

    sudo apt update
    sudo apt install python3.12 python3.12-venv
    
  2. Install Anaconda/Miniconda:

    • Download the Linux installer from Anaconda or Miniconda.
    • Run the .sh installer in your terminal.
  3. Activate Anaconda: Close and reopen terminal or source ~/.bashrc / source ~/.zshrc.

  4. Create Virtual Environment:

    python3.12 -m venv venv
    
  5. Activate Virtual Environment:

    source venv/bin/activate
    
  6. Install BeeAI Framework & Dependencies:

    pip install beeai-framework
    # Install additional dependencies if needed by examples
    # pip install pandas networkx matplotlib plotly scikit-learn
    
  7. Install Ollama:

    curl -fsSL https://ollama.com/install.sh | sh
    
  8. Start Ollama Server: In a new terminal, run:

    ollama serve &
    
  9. Download Ollama Model:

    ollama pull granite3.1-dense:8b
    
  10. Watsonx.ai Credentials (If using Watsonx):

    • Obtain Watsonx.ai credentials.

    • Set environment variables in your terminal (or shell config file):

      export WATSONX_PROJECT_ID=YOUR_WATSONX_PROJECT_ID
      export WATSONX_API_KEY=YOUR_WATSONX_API_KEY
      export WATSONX_API_URL=YOUR_WATSONX_API_ENDPOINT_URL
      

Notes:

  • Virtual Environments: Always activate your virtual environment before running the examples.
  • Ollama Server: Keep the Ollama server running in the background while working through the examples.
  • Watsonx Credentials: Securely manage your Watsonx API keys using environment variables.
  • Troubleshooting: Double-check each step if you encounter issues. Refer to BeeAI documentation for further assistance.

Your environment is now configured to run BeeAI Framework examples.

1. Prompt Templates

One of the core constructs in the BeeAI framework is the PromptTemplate. It allows you to dynamically insert data into a prompt before sending it to a language model. BeeAI uses the Mustache templating language for prompt formatting.

Example: RAG Prompt Template
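
Here’s a minimal sketch of such a template, assuming the PromptTemplate and PromptTemplateInput classes from beeai_framework.template (exact import paths may vary between framework versions):

    from pydantic import BaseModel

    from beeai_framework.template import PromptTemplate, PromptTemplateInput

    # Define the variables the template expects
    class RAGTemplateInput(BaseModel):
        question: str
        context: str

    # Mustache placeholders ({{...}}) are filled in at render time
    rag_template = PromptTemplate(
        PromptTemplateInput(
            schema=RAGTemplateInput,
            template="""
    Context: {{context}}
    Question: {{question}}

    Provide a concise answer based on the context.""",
        )
    )

    prompt = rag_template.render(
        RAGTemplateInput(
            question="What is the capital of France?",
            context="France is a country in Europe. Its capital city is Paris.",
        )
    )
    print(prompt)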

2. More Complex Templates

The PromptTemplate class also supports more complex structures. For example, you can iterate over a list of search results to build a prompt.

Example: Template with a List of Search Results
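
The same mechanism handles lists. A sketch using a Mustache section ({{#results}}…{{/results}}) to iterate over search results:

    from pydantic import BaseModel

    from beeai_framework.template import PromptTemplate, PromptTemplateInput

    class SearchResult(BaseModel):
        title: str
        url: str
        content: str

    class SearchTemplateInput(BaseModel):
        question: str
        results: list[SearchResult]

    # The {{#results}}...{{/results}} section repeats once per list item
    search_template = PromptTemplate(
        PromptTemplateInput(
            schema=SearchTemplateInput,
            template="""
    Search results:
    {{#results}}
    Title: {{title}}
    Url: {{url}}
    Content: {{content}}
    {{/results}}

    Question: {{question}}
    Provide a concise answer based on the search results provided.""",
        )
    )

    prompt = search_template.render(
        SearchTemplateInput(
            question="What is the capital of France?",
            results=[
                SearchResult(
                    title="France",
                    url="https://en.wikipedia.org/wiki/France",
                    content="France is a country in Europe. Its capital city is Paris.",
                )
            ],
        )
    )
    print(prompt)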

3. The ChatModel

Once you have your prompt templates set up, you can begin interacting with a language model. BeeAI supports various LLMs through the ChatModel interface.

Example: Creating a User Message
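
A user message is simply a message carrying the user role; a minimal sketch:

    from beeai_framework.backend.message import UserMessage

    # A message with the "user" role, as it appears in a chat transcript
    user_message = UserMessage(content="Hello! Can you tell me what is the capital of France?")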

Example: Sending a Message to the ChatModel
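
A sketch of a full round trip, assuming the keyword-style create API (older releases wrapped the arguments in a ChatModelInput object):

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    async def main() -> None:
        # granite3.1-dense:8b is the model pulled during setup
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        user_message = UserMessage(content="Hello! Can you tell me what is the capital of France?")
        output = await model.create(messages=[user_message])
        print(output.get_text_content())

    asyncio.run(main())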

4. Memory Handling

Memory is a convenient way to store the conversation history (a series of messages) that the model uses for context.

Example: Storing and Retrieving Conversation History
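
A sketch using UnconstrainedMemory (a store with no size limit), assuming messages expose role and text attributes:

    import asyncio

    from beeai_framework.backend.message import AssistantMessage, UserMessage
    from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory

    async def main() -> None:
        memory = UnconstrainedMemory()
        await memory.add_many(
            [
                UserMessage(content="What is the capital of France?"),
                AssistantMessage(content="The capital of France is Paris."),
            ]
        )
        # Retrieve the stored conversation history
        for message in memory.messages:
            print(f"{message.role}: {message.text}")

    asyncio.run(main())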

5. Combining Templates and Messages

You can render a prompt from a template and then send it as a message to the ChatModel.

Example: Rendering a Template and Sending as a Message
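
A sketch reusing rag_template and RAGTemplateInput from the Prompt Templates example, with the same ChatModel API as above:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    async def main() -> None:
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        # rag_template and RAGTemplateInput are defined in the Prompt Templates example
        prompt = rag_template.render(
            RAGTemplateInput(
                question="What is the capital of France?",
                context="France is a country in Europe. Its capital city is Paris.",
            )
        )
        output = await model.create(messages=[UserMessage(content=prompt)])
        print(output.get_text_content())

    asyncio.run(main())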

6. Structured Outputs

Sometimes you want the LLM to generate output in a specific format. You can enforce this using structured outputs with a Pydantic schema.

Example: Enforcing a Specific Output Format
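
A sketch assuming a create_structure method that takes a schema keyword; the response shape (here, an object attribute holding the parsed result) may vary between versions:

    import asyncio

    from pydantic import BaseModel, Field

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    # The schema the model's output must conform to
    class CharacterSchema(BaseModel):
        name: str = Field(description="The name of the character.")
        occupation: str = Field(description="The occupation of the character.")
        fun_fact: str = Field(description="One fun fact about the character.")

    async def main() -> None:
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        response = await model.create_structure(
            schema=CharacterSchema,
            messages=[UserMessage(content="Generate a fantasy character.")],
        )
        print(response.object)  # a dict validated against CharacterSchema

    asyncio.run(main())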

7. System Prompts

System messages can guide the overall behavior of the language model.

Example: Using a System Message
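
A sketch that prepends a SystemMessage to steer the model’s tone:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import SystemMessage, UserMessage

    async def main() -> None:
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        # The system message shapes behavior for the whole conversation
        system_message = SystemMessage(content="You are a helpful assistant who always answers like a pirate.")
        user_message = UserMessage(content="What is the capital of France?")
        output = await model.create(messages=[system_message, user_message])
        print(output.get_text_content())

    asyncio.run(main())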

BeeAI ReAct Agents

The BeeAI ReAct agent implements the “Reasoning and Acting” pattern, separating the process into distinct steps. This section shows how to build an agent that uses its own memory for reasoning and even integrates tools for added functionality.

1. Basic ReAct Agent

Example: Setting Up a Basic ReAct Agent
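
A sketch assuming the ReActAgent class and its run/result shape (earlier releases used different names, so check your installed version):

    import asyncio

    from beeai_framework.agents.react import ReActAgent
    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory

    async def main() -> None:
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        # No tools yet: the agent reasons purely from the model and its memory
        agent = ReActAgent(llm=model, tools=[], memory=UnconstrainedMemory())
        result = await agent.run("What chemical elements make up a water molecule?")
        print(result.result.text)

    asyncio.run(main())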

2. Using Tools with the Agent

Agents can be extended with tools so that they can perform external actions, like fetching weather data.

Example: Using a Built-In Weather Tool
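
A sketch with OpenMeteoTool, which ships with the framework and needs no API key:

    import asyncio

    from beeai_framework.agents.react import ReActAgent
    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
    from beeai_framework.tools.weather.openmeteo import OpenMeteoTool

    async def main() -> None:
        model = ChatModel.from_name("ollama:granite3.1-dense:8b")
        # The agent can now decide to call the weather tool during its reasoning
        agent = ReActAgent(llm=model, tools=[OpenMeteoTool()], memory=UnconstrainedMemory())
        result = await agent.run("What's the current weather in London?")
        print(result.result.text)

    asyncio.run(main())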

3. Imported Tools

You can also import tools from other libraries. Below are two examples that show how to integrate Wikipedia search via LangChain.

Example: Long-Form Integration with Wikipedia
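
A sketch of the long form, wrapping LangChain’s Wikipedia utility in a custom tool class. The exact Tool base-class contract (method names, output types) differs between framework versions, so treat this as illustrative rather than definitive:

    from langchain_community.tools import WikipediaQueryRun
    from langchain_community.utilities import WikipediaAPIWrapper
    from pydantic import BaseModel, Field

    from beeai_framework.tools import StringToolOutput, Tool

    class LangChainWikipediaToolInput(BaseModel):
        query: str = Field(description="The topic or question to search for on Wikipedia.")

    class LangChainWikipediaTool(Tool):
        """Adapter exposing LangChain's Wikipedia search as a BeeAI tool."""

        name = "Wikipedia"
        description = "Search factual and historical information from Wikipedia."
        input_schema = LangChainWikipediaToolInput

        def __init__(self) -> None:
            super().__init__()
            self._wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())

        def _run(self, input: LangChainWikipediaToolInput, options=None) -> StringToolOutput:
            # Delegate the actual search to LangChain's wrapper
            return StringToolOutput(result=self._wikipedia.run(input.query))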

Example: Shorter Form Using the @tool Decorator
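
The @tool decorator builds the same adapter from a plain function; the docstring supplies the tool’s description to the agent. A sketch, assuming the decorator is exported from beeai_framework.tools:

    from langchain_community.tools import WikipediaQueryRun
    from langchain_community.utilities import WikipediaAPIWrapper

    from beeai_framework.tools import tool

    @tool
    def wikipedia_search(query: str) -> str:
        """
        Search factual and historical information on Wikipedia.

        Args:
            query: The topic or question to search for on Wikipedia.

        Returns:
            The information found by searching Wikipedia.
        """
        wikipedia = WikipediaQueryRun(api_wrapper=WikipediaAPIWrapper())
        return wikipedia.run(query)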

BeeAI Workflows

Workflows allow you to combine what you’ve learned into a coherent multi-step process. A workflow is defined by a state (a Pydantic model) and steps (Python functions) that update the state and determine the next step. They are especially useful for orchestrating complex agent behaviors and multi-agent systems.

Overview

Workflows provide a flexible and extensible component for managing and executing structured sequences of tasks. They are particularly useful for:

  • Dynamic Execution: Steps can direct the flow based on state or results
  • Validation: Define schemas for data consistency and type safety
  • Modularity: Steps can be standalone or invoke nested workflows
  • Observability: Emit events during execution to track progress or handle errors

Core Concepts

State

State is the central data structure in a workflow. It’s a Pydantic model that:

  • Holds the data passed between steps
  • Provides type validation and safety
  • Persists throughout the workflow execution

Steps

Steps are the building blocks of a workflow. Each step is a function that:

  • Takes the current state as input
  • Can modify the state
  • Returns the name of the next step to execute or a special reserved value

Transitions

Transitions determine the flow of execution between steps. Each step returns either:

  • The name of the next step to execute
  • Workflow.NEXT - proceed to the next step in order
  • Workflow.SELF - repeat the current step
  • Workflow.END - end the workflow execution

Basic Usage

Simple Workflow

The example below demonstrates a minimal workflow that processes steps in sequence. This pattern is useful for straightforward, linear processes where each step builds on the previous one.
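
A sketch of a two-step linear workflow, assuming the Workflow class from beeai_framework.workflows.workflow:

    import asyncio

    from pydantic import BaseModel

    from beeai_framework.workflows.workflow import Workflow

    class MessageState(BaseModel):
        message: str

    async def first_step(state: MessageState) -> str:
        state.message += " World"
        return Workflow.NEXT  # continue with the next step in order

    async def second_step(state: MessageState) -> str:
        state.message += "!"
        return Workflow.END  # finish the workflow

    async def main() -> None:
        workflow = Workflow(schema=MessageState)
        workflow.add_step("first_step", first_step)
        workflow.add_step("second_step", second_step)

        response = await workflow.run(MessageState(message="Hello"))
        print(response.state.message)  # "Hello World!"

    asyncio.run(main())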

Multi-Step Workflow

This advanced example showcases a workflow that implements multiplication through repeated addition—demonstrating control flow, state manipulation, nesting, and conditional logic.
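
A simplified sketch of the idea (nesting itself is covered in the next subsection):

    import asyncio
    from typing import Literal

    from pydantic import BaseModel

    from beeai_framework.workflows.workflow import Workflow

    class MultiplyState(BaseModel):
        x: int
        y: int
        result: int = 0

    async def normalize_sign(state: MultiplyState) -> Literal["add_loop"]:
        # Flip both signs so the loop counter y is never negative
        if state.y < 0:
            state.x, state.y = -state.x, -state.y
        return "add_loop"

    async def add_loop(state: MultiplyState) -> str:
        if state.y > 0:
            state.result += state.x  # accumulate x, y times
            state.y -= 1
            return Workflow.SELF  # loop: repeat this step
        return Workflow.END

    async def main() -> None:
        workflow = Workflow(schema=MultiplyState)
        workflow.add_step("normalize_sign", normalize_sign)
        workflow.add_step("add_loop", add_loop)

        response = await workflow.run(MultiplyState(x=7, y=-3))
        print(response.state.result)  # -21

    asyncio.run(main())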

This workflow demonstrates several powerful concepts:

  • Implementing loops by returning Workflow.SELF
  • Conditional transitions between steps
  • Progressive state modification to accumulate results
  • Sign handling through state transformation
  • Type-safe step transitions using Literal types

Advanced Features

Workflow Nesting

Workflow nesting allows complex behaviors to be encapsulated as reusable components, enabling hierarchical composition of workflows. This promotes modularity, reusability, and better organization of complex agent logic.
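
A sketch of the pattern: an outer step awaits an inner workflow and copies its result back into the outer state:

    import asyncio

    from pydantic import BaseModel

    from beeai_framework.workflows.workflow import Workflow

    class CounterState(BaseModel):
        value: int = 0

    # Inner workflow: keeps incrementing until the value is even
    inner = Workflow(schema=CounterState)

    async def increment(state: CounterState) -> str:
        state.value += 1
        return Workflow.END if state.value % 2 == 0 else Workflow.SELF

    inner.add_step("increment", increment)

    # Outer workflow: delegates one step to the inner workflow
    outer = Workflow(schema=CounterState)

    async def delegate(state: CounterState) -> str:
        response = await inner.run(CounterState(value=state.value))
        state.value = response.state.value  # copy the nested result back
        return Workflow.END

    outer.add_step("delegate", delegate)

    async def main() -> None:
        response = await outer.run(CounterState(value=0))
        print(response.state.value)  # 2

    asyncio.run(main())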

Multi-Agent Workflows: Orchestration with BeeAI

The multi-agent workflow pattern enables the orchestration of specialized agents that collaborate to solve complex problems. Each agent focuses on a specific domain or capability, with results combined by a coordinator agent. BeeAI Framework’s workflow engine is perfectly suited for creating sophisticated multi-agent systems.

The following example demonstrates how to orchestrate a multi-agent system using BeeAI workflows with Ollama backend. We will create a “Smart assistant” workflow composed of three specialized agents: WeatherForecaster, Researcher, and Solver.
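
A sketch of that workflow, assuming the AgentWorkflow helper and its add_agent method; argument names and the run API differ between framework versions:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.tools.search.duckduckgo import DuckDuckGoSearchTool
    from beeai_framework.tools.weather.openmeteo import OpenMeteoTool
    from beeai_framework.workflows.agent import AgentWorkflow, AgentWorkflowInput

    async def main() -> None:
        llm = ChatModel.from_name("ollama:granite3.1-dense:8b")
        workflow = AgentWorkflow(name="Smart assistant")

        # Each specialist gets only the tools relevant to its role
        workflow.add_agent(
            name="WeatherForecaster",
            instructions="You provide detailed weather reports for a given location.",
            tools=[OpenMeteoTool()],
            llm=llm,
        )
        workflow.add_agent(
            name="Researcher",
            instructions="You look up and summarize general information.",
            tools=[DuckDuckGoSearchTool()],
            llm=llm,
        )
        # The coordinator synthesizes the specialists' outputs
        workflow.add_agent(
            name="Solver",
            instructions="Combine the other assistants' answers into one final response.",
            llm=llm,
        )

        response = await workflow.run(
            inputs=[AgentWorkflowInput(prompt="What is the weather in Paris, and what is the city known for?")]
        )
        print(response.result.final_answer)

    asyncio.run(main())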

This pattern demonstrates:

  • Role specialization through focused agent configuration. WeatherForecaster is designed specifically for weather-related queries, while Researcher is for general information retrieval.
  • Efficient tool distribution to relevant specialists. The WeatherForecaster agent is equipped with the OpenMeteoTool, and Researcher with DuckDuckGoSearchTool, ensuring each agent has the right tools for its job.
  • Parallel processing of different aspects of a query. Although not explicitly parallel in this example, the workflow structure is designed to easily support parallel execution of agents if needed.
  • Synthesis of multiple expert perspectives into a cohesive response. The Solver agent acts as a coordinator, taking responses from other agents and synthesizing them into a final answer.
  • Declarative agent configuration using the AgentWorkflow and add_agent methods, which simplifies the setup and management of complex agent systems.

Orchestration with Watsonx.ai Backend

To demonstrate the versatility of BeeAI workflows, let’s adapt the multi-agent workflow example to use Watsonx.ai as the backend LLM provider. First, ensure you have configured the Watsonx provider as described in the Backend section. Then, modify the ChatModel.from_name call to use a Watsonx model:
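
The only change is the model identifier (assuming the Watsonx environment variables from the setup section are set):

    # Same workflow as before; only the backend string changes
    llm = ChatModel.from_name("watsonx:ibm/granite-3-8b-instruct")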

In this modified example, we simply changed the ChatModel.from_name call to watsonx:ibm/granite-3-8b-instruct. Assuming you have correctly set up your Watsonx environment variables, this code will now orchestrate the same multi-agent workflow but powered by Watsonx.ai. This highlights the provider-agnostic nature of BeeAI workflows, allowing you to easily switch between different LLM backends without significant code changes.

Memory in Workflows

Integrating memory into workflows allows agents to maintain context across interactions, enabling conversational interfaces and stateful processing. This example demonstrates a simple conversational echo workflow with persistent memory.
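
A sketch of the echo workflow; the memory object rides along inside the Pydantic state (exact Workflow and memory APIs may vary between versions):

    import asyncio

    from pydantic import BaseModel, ConfigDict

    from beeai_framework.backend.message import AssistantMessage, UserMessage
    from beeai_framework.memory.unconstrained_memory import UnconstrainedMemory
    from beeai_framework.workflows.workflow import Workflow

    class ChatState(BaseModel):
        # Memory is a regular (non-Pydantic) object, so allow arbitrary types
        model_config = ConfigDict(arbitrary_types_allowed=True)
        memory: UnconstrainedMemory
        output: str = ""

    async def echo(state: ChatState) -> str:
        last_message = state.memory.messages[-1]  # read the latest user input
        state.output = last_message.text[::-1]    # "echo" it back reversed
        return Workflow.END

    async def main() -> None:
        memory = UnconstrainedMemory()
        workflow = Workflow(schema=ChatState)
        workflow.add_step("echo", echo)

        for user_input in ["Hello!", "How are you?"]:
            await memory.add(UserMessage(content=user_input))  # store the turn
            response = await workflow.run(ChatState(memory=memory))
            await memory.add(AssistantMessage(content=response.state.output))  # store the reply
            print(f"User: {user_input}\nAssistant: {response.state.output}")

    asyncio.run(main())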

This pattern demonstrates:

  • Integration of memory as a first-class citizen in workflow state
  • Conversation loops that preserve context across interactions
  • Bidirectional memory updating (reading recent messages, storing responses)
  • Clean separation between the persistent memory and workflow-specific state

Backend

Backend is an umbrella module that encapsulates a unified way to work with the following functionalities:

  • Chat Models (via the ChatModel class)
  • Embedding Models (coming soon)
  • Audio Models (coming soon)
  • Image Models (coming soon)

BeeAI framework’s backend is designed with a provider-based architecture, allowing you to switch between different AI service providers while maintaining a consistent API.

Supported providers

The table below lists supported providers, their dependencies, and required environment variables. Ensure these variables are properly configured before using each provider.

| Provider | Chat Support | Dependency | Required Environment Variables |
| --- | --- | --- | --- |
| Ollama | Yes | ollama-ai-provider | OLLAMA_CHAT_MODEL, OLLAMA_BASE_URL |
| OpenAI | Yes | openai | OPENAI_CHAT_MODEL, OPENAI_API_BASE, OPENAI_API_KEY, OPENAI_ORGANIZATION |
| Watsonx | Yes | @ibm-cloud/watsonx-ai | WATSONX_CHAT_MODEL, WATSONX_API_KEY, WATSONX_PROJECT_ID, WATSONX_SPACE_ID, WATSONX_VERSION, WATSONX_REGION |
| Groq | Yes | — | GROQ_CHAT_MODEL, GROQ_API_KEY |
| Amazon Bedrock | Yes | boto3 | AWS_CHAT_MODEL, AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, AWS_REGION_NAME |
| Google Vertex AI | Yes | — | VERTEXAI_CHAT_MODEL, VERTEXAI_PROJECT, GOOGLE_APPLICATION_CREDENTIALS, GOOGLE_APPLICATION_CREDENTIALS_JSON, GOOGLE_CREDENTIALS |
| Azure OpenAI | No | Coming soon! | AZURE_OPENAI_CHAT_MODEL, AZURE_OPENAI_API_KEY, AZURE_OPENAI_API_ENDPOINT, AZURE_OPENAI_API_RESOURCE, AZURE_OPENAI_API_VERSION |
| Anthropic | Yes | — | ANTHROPIC_CHAT_MODEL, ANTHROPIC_API_KEY |
| xAI | Yes | — | XAI_CHAT_MODEL, XAI_API_KEY |

Backend initialization

The Backend class serves as a central entry point to access models from your chosen provider.

Watsonx Initialization

To use Watsonx with BeeAI framework, you need to install the Watsonx adapter and set up your environment variables.

Installation:

pip install "beeai-framework[watsonx]"

Environment Variables:

Set the following environment variables. You can obtain these from your IBM Cloud account and Watsonx service instance.

  • WATSONX_API_KEY: Your Watsonx API key.
  • WATSONX_PROJECT_ID: Your Watsonx project ID.
  • WATSONX_REGION: The region where your Watsonx service is deployed (e.g., us-south).
  • WATSONX_CHAT_MODEL: The specific Watsonx chat model you want to use (e.g., ibm/granite-3-8b-instruct).

Example Code:

Here’s how to initialize and use Watsonx ChatModel:
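
A sketch, assuming the environment variables above are set:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    async def main() -> None:
        # Reads WATSONX_API_KEY, WATSONX_PROJECT_ID, and WATSONX_REGION from the environment
        watsonx_chat_model = ChatModel.from_name("watsonx:ibm/granite-3-8b-instruct")
        response = await watsonx_chat_model.create(
            messages=[UserMessage(content="What is the capital of France?")]
        )
        print(response.get_text_content())

    asyncio.run(main())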

Chat model

The ChatModel class represents a Chat Large Language Model and provides methods for text generation, streaming responses, and more. You can initialize a chat model in multiple ways:

Method 1: Using the generic factory method

from beeai_framework.backend.chat import ChatModel

ollama_chat_model = ChatModel.from_name("ollama:llama3.1")

Method 2: Creating a specific provider model directly

from beeai_framework.adapters.ollama.backend.chat import OllamaChatModel

ollama_chat_model = OllamaChatModel("llama3.1")

Text generation

The most basic usage is to generate text responses:
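
A minimal sketch, using the keyword-style create API as elsewhere in this guide:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    async def main() -> None:
        model = ChatModel.from_name("ollama:llama3.1")
        response = await model.create(messages=[UserMessage(content="Write a haiku about bees.")])
        print(response.get_text_content())

    asyncio.run(main())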

[!NOTE]

Execution parameters (those passed to model.create(...)) take precedence over those defined via config.

Streaming responses

For applications requiring real-time responses:
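
A sketch of token streaming; the stream flag, the "newToken" event name, and the payload shape are assumptions that may differ between framework versions:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    async def main() -> None:
        model = ChatModel.from_name("ollama:llama3.1")
        await model.create(
            messages=[UserMessage(content="Tell me a short story about a bee.")],
            stream=True,
        ).observe(
            # Print each token as it arrives (event name is version-dependent)
            lambda emitter: emitter.on(
                "newToken", lambda data, event: print(data.value.get_text_content(), end="")
            )
        )
        print()  # newline once the stream completes

    asyncio.run(main())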

Structured generation

Generate structured data according to a schema:
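
A sketch using create_structure with a Pydantic schema, as in the Structured Outputs section earlier:

    import asyncio

    from pydantic import BaseModel, Field

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage

    class ProfileSchema(BaseModel):
        first_name: str = Field(description="First name of the person")
        last_name: str = Field(description="Last name of the person")
        hobby: str = Field(description="A favorite hobby")

    async def main() -> None:
        model = ChatModel.from_name("ollama:llama3.1")
        response = await model.create_structure(
            schema=ProfileSchema,
            messages=[UserMessage(content="Generate a profile of a fictional beekeeper.")],
        )
        print(response.object)  # parsed result matching ProfileSchema

    asyncio.run(main())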

Tool calling

Integrate external tools with your AI model:
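
A sketch passing tools to create, so the model can respond with tool-call messages instead of plain text; the tools parameter is an assumption for your installed version:

    import asyncio

    from beeai_framework.backend.chat import ChatModel
    from beeai_framework.backend.message import UserMessage
    from beeai_framework.tools.weather.openmeteo import OpenMeteoTool

    async def main() -> None:
        model = ChatModel.from_name("ollama:llama3.1")
        response = await model.create(
            messages=[UserMessage(content="What is the current weather in Berlin?")],
            tools=[OpenMeteoTool()],  # the model may emit tool-call requests
        )
        for message in response.messages:
            print(message)

    asyncio.run(main())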

Embedding model

The EmbeddingModel class provides functionality for generating vector embeddings from text.

Embedding model initialization

You can initialize an embedding model in multiple ways:

Method 1: Using the generic factory method

The most straightforward way to initialize an embedding model is using the EmbeddingModel.from_name() factory method. This method automatically handles the creation of the appropriate provider-specific model based on the name you provide. BeeAI Framework supports various providers out of the box, and this method simplifies their instantiation.
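
A sketch; the import path and model id are assumptions, and the factory mirrors ChatModel.from_name:

    from beeai_framework.backend.embedding import EmbeddingModel

    # "provider:model" id, resolved to the matching provider adapter
    embedding_model = EmbeddingModel.from_name("ollama:nomic-embed-text")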

Method 2: Creating a specific provider model directly

For more granular control or when you need to configure provider-specific parameters, you can directly instantiate the embedding model class for your chosen provider. This method allows you to pass in specific configurations as needed.
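
A sketch by analogy with OllamaChatModel above; the concrete class name and path are assumptions:

    from beeai_framework.adapters.ollama.backend.embedding import OllamaEmbeddingModel

    # Instantiate the provider's embedding model directly
    embedding_model = OllamaEmbeddingModel("nomic-embed-text")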

Embedding model usage

Generate embeddings for one or more text strings using the create method. This method accepts a list of text strings in the values parameter and returns an EmbeddingResponse object containing the generated embeddings.
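
A sketch, assuming the EmbeddingModel factory shown above:

    import asyncio

    from beeai_framework.backend.embedding import EmbeddingModel

    async def main() -> None:
        embedding_model = EmbeddingModel.from_name("ollama:nomic-embed-text")
        response = await embedding_model.create(values=["King", "Queen", "Bee"])
        print(response.values)      # the input strings
        print(response.embeddings)  # one vector per input string

    asyncio.run(main())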

Advanced usage

If your preferred provider isn’t directly supported, you can use the LangChain adapter as a bridge. This allows you to leverage any provider that has LangChain compatibility, extending BeeAI Framework’s reach significantly.

To run this example, the optional packages langchain-core and langchain-community need to be installed.
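
A sketch of the bridge; the LangChainEmbeddingModel adapter name is hypothetical, so check the framework docs for the actual class (FakeEmbeddings stands in for any LangChain embedding model):

    from langchain_community.embeddings import FakeEmbeddings

    from beeai_framework.adapters.langchain.backend.embedding import LangChainEmbeddingModel

    # Wrap a LangChain-compatible embedding model for use with BeeAI
    embedding_model = LangChainEmbeddingModel(FakeEmbeddings(size=256))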

Troubleshooting

Common issues and their solutions:

  • Authentication errors: Ensure all required environment variables are set correctly, especially API keys and provider-specific credentials.
  • Model not found: Verify that the model ID is correct and available for the selected provider. Double-check the model name and provider compatibility.
  • Package dependencies: For LangChain integration, make sure you have installed the necessary LangChain packages (langchain-core, langchain-community, and any provider-specific LangChain integrations like langchain-openai).


Conclusion

Congratulations! You’ve learned how to turn text into powerful numerical representations, enabling AI to understand context, meaning, and relationships with accuracy. You’re now capable of building intelligent applications that go beyond simple keyword matching and embrace semantic relevance.

Throughout this BeeAI journey, you’ve developed critical skills:

  • Prompt Templates: Guiding language models precisely.
  • ChatModel Interaction: Creating dynamic conversations.
  • Memory Handling: Building context-aware interactions.
  • Structured Outputs: Delivering clear, structured information.
  • ReAct Agents and Tools: Developing reasoning agents that interact with the real world.
  • Workflows: Coordinating multi-agent systems for complex tasks.
  • Backend Flexibility: Deploying AI solutions across diverse platforms.
  • Embedding Models: Enhancing applications with semantic understanding.

You’re now equipped to architect advanced, intelligent systems that deeply understand and interact with the world. BeeAI Framework empowers you to turn your AI visions into reality.

Connect:

Email: [email protected]

Special thanks to the contributors, researchers, supporters, and the open-source community!
