Merge pull request #290 from PrefectHQ/context-deps
Huge orchestration overhaul
jlowin authored Sep 9, 2024
2 parents 1493135 + f3fd37d commit 8985435
Showing 27 changed files with 841 additions and 767 deletions.
File renamed without changes.
File renamed without changes.
2 changes: 1 addition & 1 deletion docs/concepts/agents.mdx
@@ -196,7 +196,7 @@ temporary positive outcomes, despite the overall bleak and discouraging reality.
</CodeGroup>
-When tasks have multiple agents, it's important to understand how they collaborate (and to provide them with clear instructions to guide that behavior). To learn more, see the [collaboration](/patterns/collaborating-agents) doc.
+When tasks have multiple agents, it's important to understand how they collaborate (and to provide them with clear instructions to guide that behavior). To learn more, see the [collaboration](/patterns/collaboration) doc.
#### Assigning completion agents
4 changes: 2 additions & 2 deletions docs/guides/llms.mdx
@@ -1,7 +1,7 @@
---
-title: Configuring LLMs
+title: Configuring LLM models
description: ControlFlow supports a variety of LLMs and model providers.
-icon: gear
+icon: sliders
---

ControlFlow is optimized for workflows that are composed of multiple tasks, each of which can be completed by a different agent. One benefit of this approach is that you can use a different LLM for each task, or even for each agent assigned to a task.
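
As a rough illustration of that flexibility, the sketch below assigns a different model to each of two agents. This is a minimal sketch, assuming `cf.Agent` accepts a LangChain chat model instance via a `model` argument; consult the rest of this guide for the exact API.

```python
import controlflow as cf
from langchain_openai import ChatOpenAI

# Minimal sketch: two agents backed by different OpenAI models.
# Assumes cf.Agent accepts a LangChain chat model via `model`.
writer = cf.Agent(name="Writer", model=ChatOpenAI(model="gpt-4o"))
editor = cf.Agent(name="Editor", model=ChatOpenAI(model="gpt-4o-mini"))
```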
45 changes: 0 additions & 45 deletions docs/guides/orchestration.mdx

This file was deleted.

92 changes: 92 additions & 0 deletions docs/guides/settings.mdx
@@ -0,0 +1,92 @@
---
title: Settings
icon: gear
---

ControlFlow provides a variety of settings that control its behavior. They can be set via environment variables or programmatically at runtime.


## Environment variables
All settings can be set via environment variables using the format `CONTROLFLOW_<setting name>`.

For example, to set the default LLM model to `gpt-4o-mini` and the log level to `DEBUG`, you could set the following environment variables:
```shell
export CONTROLFLOW_LLM_MODEL=openai/gpt-4o-mini
export CONTROLFLOW_LOG_LEVEL=DEBUG
```

You can also set these values in a `.env` file. By default, ControlFlow will look for a `.env` file at `~/.controlflow/.env`, but you can change this behavior by setting the `CONTROLFLOW_ENV_FILE` environment variable.

```shell
export CONTROLFLOW_ENV_FILE="~/path/to/.env"
```

## Runtime settings
You can examine and modify ControlFlow's settings at runtime by inspecting or updating the `controlflow.settings` object. Most, though not all, changes take effect immediately. Here is the example above, set programmatically instead:

```python
import controlflow as cf

cf.settings.llm_model = 'openai/gpt-4o-mini'
cf.settings.log_level = 'DEBUG'
```
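
Because `controlflow.settings` is a live object, you can also read values back to confirm what is currently in effect. A minimal sketch:

```python
import controlflow as cf

# Inspect the currently active values
print(cf.settings.llm_model)   # e.g. 'openai/gpt-4o-mini'
print(cf.settings.log_level)   # e.g. 'DEBUG'
```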

## Available settings

### Home settings

- `home_path`: The path to the ControlFlow home directory. Default: `~/.controlflow`

### Display and logging settings

- `log_level`: The log level for ControlFlow. Options: `DEBUG`, `INFO`, `WARNING`,
`ERROR`, `CRITICAL`. Default: `INFO`
- `log_prints`: Whether to log workflow prints to the Prefect logger by default.
Default: `False`
- `log_all_messages`: If True, all LLM messages will be logged at the debug level.
Default: `False`
- `pretty_print_agent_events`: If True, a PrintHandler will be enabled and
automatically pretty-print agent events. Note that this may interfere with logging.
Default: `True`

### Orchestration settings

- `orchestrator_max_agent_turns`: The default maximum number of agent turns per
orchestration session. If None, orchestration may run indefinitely. This setting can
be overridden on a per-call basis. Default: `100`
- `orchestrator_max_llm_calls`: The default maximum number of LLM calls per
orchestration session. If None, orchestration may run indefinitely. This setting can
be overridden on a per-call basis. Default: `1000`
- `task_max_llm_calls`: The default maximum number of LLM calls over a task's
lifetime. If None, the task may run indefinitely. This setting can be overridden on
a per-task basis. Default: `None`
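
For example, building on the runtime pattern above, these limits could be tightened globally. Per-call and per-task overrides are supplied where the orchestrator or task is invoked and are not shown in this minimal sketch:

```python
import controlflow as cf

# Globally cap how much work a single orchestration session may do
cf.settings.orchestrator_max_agent_turns = 10
cf.settings.orchestrator_max_llm_calls = 50

# Cap the total LLM calls any single task may consume over its lifetime
cf.settings.task_max_llm_calls = 25
```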

### LLM settings

- `llm_model`: The default LLM model for agents. Default: `openai/gpt-4o`
- `llm_temperature`: The temperature for LLM sampling. Default: `0.7`
- `max_input_tokens`: The maximum number of tokens to send to an LLM. Default:
`100000`

### Debug settings

- `debug_messages`: If True, all messages will be logged at the debug level. Default:
`False`
- `tools_raise_on_error`: If True, an error in a tool call will raise an exception.
Default: `False`
- `tools_verbose`: If True, tools will log additional information. Default: `True`

### Experimental settings

- `enable_experimental_tui`: If True, the experimental TUI will be enabled. If False,
the TUI will be disabled. Default: `False`
- `run_tui_headless`: If True, the experimental TUI will run in headless mode, which
is useful for debugging. Default: `False`

### Prefect settings

These are default settings for Prefect when used with ControlFlow. They can be
overridden by setting standard Prefect environment variables.

- `prefect_log_level`: The log level for Prefect. Options: `DEBUG`, `INFO`,
`WARNING`, `ERROR`, `CRITICAL`. Default: `WARNING`
12 changes: 4 additions & 8 deletions docs/mint.json
@@ -59,19 +59,15 @@
"patterns/instructions",
"patterns/planning",
"patterns/dependencies",
"patterns/subtasks",
"patterns/collaborating-agents",
"patterns/stopping-tasks-early"
"patterns/subtasks"
]
},
{
"group": "Guides",
"group": "Configuration",
"pages": [
"guides/tasks-and-agents",
"guides/settings",
"guides/llms",
"guides/default-agent",
"guides/agentic-loop",
"guides/orchestration"
"guides/default-agent"
]
},
{
71 changes: 0 additions & 71 deletions docs/patterns/collaborating-agents.mdx

This file was deleted.

11 changes: 4 additions & 7 deletions docs/patterns/planning.mdx
@@ -4,7 +4,7 @@ description: Use AI to generate new tasks.
icon: compass
---

-The `plan()` function in ControlFlow extends the capabilities of AI workflows by allowing dynamic generation of tasks. This feature allows you to leverage AI for creating structured, goal-oriented task sequences programmatically.
+The `plan` function in ControlFlow extends the capabilities of AI workflows by allowing dynamic generation of tasks. This feature allows you to leverage AI for creating structured, goal-oriented task sequences programmatically.

## Purpose of AI planning

@@ -15,9 +15,9 @@ While ControlFlow allows manual creation of tasks for AI workflows, there are sc
3. **Adaptive Workflows**: In processes that need to adjust based on changing conditions or new information.


-## The `plan()` function
+## The `plan` function

-The `plan()` function takes a high-level objective and generates a structured sequence of tasks to achieve that goal. Here's a basic example:
+The `plan` function takes a high-level objective and generates a structured sequence of tasks to achieve that goal. Here's a basic example:

```python
import controlflow as cf
@@ -31,11 +31,8 @@ tasks = cf.plan(
cf.run_tasks(tasks)
```

-In this example, `plan()` will generate a list of 3 tasks that, when completed, should result in an analysis of customer feedback data. These tasks might include steps like "Load data", "Preprocess text", "Perform sentiment analysis", etc.
+In this example, `plan` will generate a list of 3 tasks that, when completed, should result in an analysis of customer feedback data. These tasks might include steps like "Load data", "Preprocess text", "Perform sentiment analysis", etc.

-## Advanced usage

-The `plan` function can do more than just generate tasks that achieve an objective.

### Dependencies
