Merge pull request #277 from PrefectHQ/docs
Giant docs update
jlowin authored Sep 4, 2024
2 parents 0f84994 + 286610d commit 470f24b
Showing 32 changed files with 1,685 additions and 2,176 deletions.
59 changes: 0 additions & 59 deletions docs/concepts.mdx

This file was deleted.

91 changes: 0 additions & 91 deletions docs/concepts/agents.mdx

This file was deleted.

24 changes: 24 additions & 0 deletions docs/concepts/agents/agents.mdx
@@ -0,0 +1,24 @@
---
title: What are Agents?
sidebarTitle: Introduction
---

<Tip>Agents are the intelligent, autonomous entities that power your AI workflows.</Tip>

Agents represent AI models capable of understanding instructions, making decisions, and completing tasks. Think of agents as your AI workforce, each potentially specialized for different types of work.


Each agent in ControlFlow is a configurable entity with its own identity, capabilities, and even personality. Agents can also be backed by different LLM models, letting you use the most appropriate model for each task in your workflow.
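
Concretely, this configuration is just a few lines of code. Here is a minimal sketch of a specialized agent; the name, instructions, and model choice are illustrative rather than prescriptive:

```python
import controlflow as cf
from langchain_openai import ChatOpenAI

# A specialized agent with its own identity, instructions, and model.
# The specific name, instructions, and model here are illustrative choices.
editor = cf.Agent(
    name="Technical Editor",
    instructions="Review text for clarity and correctness. Be concise.",
    model=ChatOpenAI(model="gpt-4o-mini"),
)
```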

## Why Agents Matter

Agents are fundamental to ControlFlow's approach to AI workflows for three critical reasons:

1. **Portable Configuration**: Agents encapsulate how we interact with LLMs, providing a consistent and portable way to configure AI behavior. This abstraction allows you to define specialized AI entities that can be reused across different tasks and workflows, ensuring consistency and reducing complexity in your AI applications.

2. **Specialization and Expertise**: Agents can be tailored for specific domains or tasks, allowing you to create AI entities with deep, focused knowledge. This specialization leads to more accurate and efficient task completion, mimicking how human experts collaborate in complex projects. By combining multiple specialized agents, you can tackle complex, multi-faceted problems that a single, general-purpose AI might struggle with.

3. **Structured Collaboration**: When combined with ControlFlow's flow management, agents provide a powerful framework for organizing the flow of information and context between AI entities. This structured approach to agent collaboration enables more sophisticated problem-solving, allowing you to break down complex tasks into manageable steps and leverage the strengths of different agents at each stage of the process.

By leveraging these key aspects of agents in ControlFlow, you can create more powerful, flexible, and manageable AI workflows that can adapt to a wide range of challenges and use cases.

111 changes: 111 additions & 0 deletions docs/concepts/agents/assigning-agents.mdx
@@ -0,0 +1,111 @@
---
title: Assigning Agents to Tasks
sidebarTitle: Task Assignment
---

To assign an agent to a task, use the `agents` parameter when creating a task. Each task requires at least one assigned agent, and will use a default agent if none are provided. Agents can be assigned to multiple tasks, and tasks can have multiple agents.

### Tasks with one agent

To assign a single agent to a task, create the task and pass the agent to the `agents` parameter:

```python
import controlflow as cf

poet = cf.Agent(name="Poet")

poem = cf.run("Write a short poem about AI", agents=[poet])
```

Alternatively, you can use the agent's own `run` method:

```python
import controlflow as cf

poet = cf.Agent(name="Poet")

poem = poet.run("Write a short poem about AI")
```

These two approaches are functionally equivalent.
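
Because agents are ordinary, reusable objects, the same agent can also be assigned to multiple tasks. A minimal sketch, with illustrative prompts:

```python
import controlflow as cf

poet = cf.Agent(name="Poet")

# The same agent can be reused across as many tasks as you like.
haiku = poet.run("Write a haiku about the ocean")
limerick = poet.run("Write a limerick about the mountains")
```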


### Tasks with multiple agents

Assign multiple agents to a task by passing them to the task's `agents` parameter as a list.

Here, we create two agents and assign them to a task that has them debate each other.

<CodeGroup>
```python Code
import controlflow as cf

optimist = cf.Agent(
    name="Optimist",
    instructions="Always find the best in every situation.",
)

pessimist = cf.Agent(
    name="Pessimist",
    instructions="Always find the worst in every situation.",
)

cf.run(
    "Debate world peace",
    agents=[optimist, pessimist],
    instructions=(
        "Mark the task successful once both agents have "
        "found something to agree on."
    ),
)
```
```text Result
Optimist: I see where you're coming from, Pessimist. Human nature and the
disparities among nations do present significant challenges to achieving world
peace. However, it's important to focus on the positive aspects and the
potential for improvement.
Pessimist: While it's true that efforts towards peace can lead to some positive
outcomes, the reality is that these efforts are often met with setbacks,
failures, and unintended consequences. The end of apartheid and the fall of the
Berlin Wall were monumental achievements, but they didn't come without immense
struggle, loss, and suffering. Moreover, the aftermath of such events often
leaves lingering issues that take decades to resolve, if they ever are.
Optimist: For instance, while human nature has its flaws, it also has incredible
capacity for compassion, cooperation, and progress. These positive traits have
led to remarkable achievements in history, such as the end of apartheid, the
fall of the Berlin Wall, and advancements in human rights.
Pessimist: International cooperation through organizations like the United
Nations is often hampered by bureaucracy, political agendas, and lack of
enforcement power. Peace treaties can be fragile and easily broken, leading to
renewed conflicts that sometimes are even worse than before.
Optimist: Additionally, efforts like international cooperation through
organizations such as the United Nations and various peace treaties show that
despite differences, nations can come together for a common good. While world
peace may be difficult to achieve, the journey towards it can foster greater
understanding, reduce conflicts, and improve the quality of life for many
people.
Pessimist: So, while there might be some value in striving for peace, the harsh
truth is that the road is fraught with difficulties that may never be fully
overcome. In essence, the pursuit of world peace often feels like an endless,
Sisyphean task.
Optimist: Can we agree that, even though world peace is challenging, the efforts
and progress made towards it are valuable and can lead to significant positive
outcomes?
Pessimist: I suppose we can reluctantly agree that efforts towards peace might
lead to some temporary positive outcomes, but the overall picture remains bleak
and discouraging.
---
Result: Both agents agreed that efforts towards world peace can lead to some
temporary positive outcomes, despite the overall bleak and discouraging reality.
```
</CodeGroup>
69 changes: 69 additions & 0 deletions docs/concepts/agents/collaborating-agents.mdx
@@ -0,0 +1,69 @@
---
title: Collaboration
---

Sometimes, agents need to collaborate to accomplish their tasks. In this case, agents take turns working until the task is complete.

A single turn may involve multiple calls to the agent's LLM. For example, an agent might use a tool (one LLM call), examine the result of that tool (a second LLM call), post a message to update another agent (a third LLM call), and finally mark the task as successful (a fourth LLM call).

Because the number of LLM calls per turn can vary, ControlFlow needs a way to determine when an agent's turn is over, and how to select the next agent to act. These are referred to as **turn strategies**.

<Info>
It's tempting to say that a single LLM call is equivalent to a single turn. However, this approach breaks down quickly. If an agent uses a tool (one LLM call), it should almost always be invoked a second time to examine the result. Otherwise, tool calls could potentially be evaluated by an LLM that wasn't designed to handle the tool's output. Naively ending a turn after a tool call would prevent "thinking out loud" and other emergent behaviors.
</Info>

## Turn strategies

ControlFlow has a few built-in turn strategies for selecting which agent should take the next turn. The default strategy is `Popcorn`, which works well in most cases.

| `TurnStrategy` | Description | Ideal when... | Keep in mind... |
|---------------|-------------|--------------------|-----------|
| `Popcorn` | Each agent takes a turn, then picks which agent should go next. | All agents are generally capable of making decisions and have visibility into all tasks. | Requires one extra tool call per turn, to pick the next agent. |
| `Moderated` | A moderator agent always decides which agent should act next. | You want a dedicated agent to orchestrate the others, who may not be powerful enough to make decisions themselves. | Requires up to two extra tool calls per turn: one for the agent to end its turn (which could happen in parallel with other work if your LLM supports it) and another for the moderator to pick the next agent. |
| `RoundRobin` | Agents take turns in a round-robin fashion. | You want agents to work in a specific sequence. | May be less efficient than other strategies, especially if agents have varying workloads. |
| `MostBusy` | The agent assigned to the most active tasks goes next. | You want to prioritize agents who have the most work to do. | May lead to task starvation for less busy agents. |
| `Random` | Invokes a random agent. | You want to distribute the load evenly across agents. | Can be inefficient; may select agents without relevant tasks. |
| `Single` | Only one agent is given the opportunity to act. | You want to control the sequence of agents yourself. | Requires manual management; may not adapt well to dynamic scenarios. |


### Using a strategy

To use a turn strategy, provide it as an argument to the `run()` call. Here, we use a round-robin strategy to ensure that each agent gets a turn in order:

```python Round Robin
import controlflow as cf

agent1 = cf.Agent(name="Agent 1")
agent2 = cf.Agent(name="Agent 2")
agent3 = cf.Agent(name="Agent 3")

cf.run(
    "Say hello to each other",
    instructions=(
        "Mark the task successful only when every "
        "agent has posted a message to the thread."
    ),
    agents=[agent1, agent2, agent3],
    turn_strategy=cf.orchestration.turn_strategies.RoundRobin(),
)
```

We can also use the `Moderated` strategy to have a more powerful model orchestrate smaller ones. In this example, we invite an "optimist" and a "pessimist", both powered by `gpt-4o-mini`, to debate the meaning of life. A moderator agent is tasked with picking the next agent to speak. Note that the moderator is also the only agent listed in `completion_agents`, meaning it is responsible for marking the task as successful.

```python Moderated
import controlflow as cf
from langchain_openai import ChatOpenAI

optimist = cf.Agent(name="Optimist", model=ChatOpenAI(model="gpt-4o-mini"))
pessimist = cf.Agent(name="Pessimist", model=ChatOpenAI(model="gpt-4o-mini"))
moderator = cf.Agent(name="Moderator")

cf.run(
    "Debate the meaning of life",
    instructions="Give each agent at least three chances to speak.",
    agents=[moderator, optimist, pessimist],
    completion_agents=[moderator],
    turn_strategy=cf.orchestration.turn_strategies.Moderated(moderator=moderator),
)
```
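
For full manual control, the `Single` strategy restricts activity to one designated agent. Here is a minimal sketch; note that passing the designated agent via an `agent` parameter is an assumption based on the pattern above, not something documented here:

```python Single
import controlflow as cf

writer = cf.Agent(name="Writer")
editor = cf.Agent(name="Editor")

cf.run(
    "Draft a one-paragraph product description",
    agents=[writer, editor],
    # Assumption: Single takes the designated agent via an `agent` parameter.
    # Only the writer acts; any hand-off to the editor is managed manually.
    turn_strategy=cf.orchestration.turn_strategies.Single(agent=writer),
)
```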

