Commit
Update all settings
jlowin committed Sep 9, 2024
1 parent 7a9e3cb commit f400d59
Showing 8 changed files with 138 additions and 20 deletions.
4 changes: 2 additions & 2 deletions docs/guides/llms.mdx
Original file line number Diff line number Diff line change
@@ -1,7 +1,7 @@
---
-title: Configuring LLMs
+title: Configuring LLM models
description: ControlFlow supports a variety of LLMs and model providers.
-icon: gear
+icon: sliders
---

ControlFlow is optimized for workflows that are composed of multiple tasks, each of which can be completed by a different agent. One benefit of this approach is that you can use a different LLM for each task, or even for each agent assigned to a task.
92 changes: 92 additions & 0 deletions docs/guides/settings.mdx
@@ -0,0 +1,92 @@
---
title: Settings
icon: gear
---

ControlFlow provides a variety of settings to configure its behavior. These can be configured via environment variables or programmatically.


## Environment variables
All settings can be set via environment variables using the format `CONTROLFLOW_<setting name>`.

For example, to set the default LLM model to `gpt-4o-mini` and the log level to `DEBUG`, you could set the following environment variables:
```shell
export CONTROLFLOW_LLM_MODEL=openai/gpt-4o-mini
export CONTROLFLOW_LOG_LEVEL=DEBUG
```
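As a sketch, the mapping from setting name to environment variable is just the `CONTROLFLOW_` prefix plus the upper-cased setting name. The helper below is hypothetical, for illustration only, and is not part of ControlFlow's API:

```python
import os

def env_var_name(setting: str, prefix: str = "CONTROLFLOW_") -> str:
    # Hypothetical helper illustrating the CONTROLFLOW_<SETTING NAME> convention
    return prefix + setting.upper()

# e.g. the `llm_model` setting maps to CONTROLFLOW_LLM_MODEL
os.environ[env_var_name("llm_model")] = "openai/gpt-4o-mini"
os.environ[env_var_name("log_level")] = "DEBUG"
```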

You can also set these values in a `.env` file. By default, ControlFlow will look for a `.env` file at `~/.controlflow/.env`, but you can change this behavior by setting the `CONTROLFLOW_ENV_FILE` environment variable.

```shell
export CONTROLFLOW_ENV_FILE="~/path/to/.env"
```
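A `.env` file uses the usual dotenv format: one `KEY=VALUE` per line, with blank lines and `#` comments ignored. As a rough sketch of how such a file is interpreted (a simplified stand-in, not ControlFlow's actual loader, which is provided by pydantic-settings):

```python
from pathlib import Path

def parse_env_file(path: str) -> dict:
    # Minimal dotenv-style parser: KEY=VALUE lines, skipping
    # blank lines and '#' comments; quotes around values are stripped.
    settings = {}
    for line in Path(path).expanduser().read_text().splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        settings[key.strip()] = value.strip().strip("'\"")
    return settings
```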

## Runtime settings
You can examine and modify ControlFlow's settings at runtime by inspecting or updating the `controlflow.settings` object. Most -- though not all -- changes to settings will take effect immediately. Here is the above example, but set programmatically:

```python
import controlflow as cf

cf.settings.llm_model = 'openai/gpt-4o-mini'
cf.settings.log_level = 'DEBUG'
```
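When a change should only apply to a limited scope (ControlFlow's own test suite uses a fixture of this shape), a context manager that applies overrides and restores the originals on exit is a handy pattern. This is a generic sketch, not ControlFlow's API:

```python
from contextlib import contextmanager

@contextmanager
def temporary_settings(settings, **overrides):
    # Apply overrides to a settings object, restoring the
    # original values when the block exits (even on error).
    originals = {name: getattr(settings, name) for name in overrides}
    try:
        for name, value in overrides.items():
            setattr(settings, name, value)
        yield settings
    finally:
        for name, value in originals.items():
            setattr(settings, name, value)
```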

## Available settings

### Home settings

- `home_path`: The path to the ControlFlow home directory. Default: `~/.controlflow`

### Display and logging settings

- `log_level`: The log level for ControlFlow. Options: `DEBUG`, `INFO`, `WARNING`,
`ERROR`, `CRITICAL`. Default: `INFO`
- `log_prints`: Whether to log workflow prints to the Prefect logger by default.
Default: `False`
- `log_all_messages`: If True, all LLM messages will be logged at the debug level.
Default: `False`
- `pretty_print_agent_events`: If True, a PrintHandler will be enabled that
  automatically pretty-prints agent events. Note that this may interfere with logging.
  Default: `True`

### Orchestration settings

- `orchestrator_max_agent_turns`: The default maximum number of agent turns per
orchestration session. If None, orchestration may run indefinitely. This setting can
be overridden on a per-call basis. Default: `100`
- `orchestrator_max_llm_calls`: The default maximum number of LLM calls per
  orchestration session. If None, orchestration may run indefinitely. This setting can
  be overridden on a per-call basis. Default: `1000`
- `task_max_llm_calls`: The default maximum number of LLM calls over a task's
lifetime. If None, the task may run indefinitely. This setting can be overridden on
a per-task basis. Default: `None`
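The interaction of the two session-level limits can be sketched as a simplified loop (hypothetical, mirroring the shape of the orchestrator's checks rather than its actual implementation): the session ends as soon as either budget is exhausted.

```python
def run_session(llm_calls_per_turn, max_agent_turns=100, max_llm_calls=1000):
    # Simplified sketch: each element of llm_calls_per_turn is the number
    # of LLM calls one agent turn would make. Either limit ends the session.
    turn_count = call_count = 0
    for calls in llm_calls_per_turn:
        if max_agent_turns is not None and turn_count >= max_agent_turns:
            break
        if max_llm_calls is not None and call_count >= max_llm_calls:
            break
        turn_count += 1
        call_count += calls
    return turn_count, call_count
```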

### LLM settings

- `llm_model`: The default LLM model for agents. Default: `openai/gpt-4o`
- `llm_temperature`: The temperature for LLM sampling. Default: `0.7`
- `max_input_tokens`: The maximum number of tokens to send to an LLM. Default:
`100000`
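A cap like `max_input_tokens` is typically enforced by trimming history before a call. As a rough, hypothetical sketch of that idea (not ControlFlow's actual trimming logic): drop the oldest messages until the total fits the budget.

```python
def trim_to_token_budget(messages, max_input_tokens=100_000, count_tokens=len):
    # Hypothetical sketch: drop the oldest messages until the total
    # "token" count (here just string length) fits within the budget.
    messages = list(messages)
    total = sum(count_tokens(m) for m in messages)
    while messages and total > max_input_tokens:
        total -= count_tokens(messages.pop(0))
    return messages
```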

### Debug settings

- `debug_messages`: If True, all messages will be logged at the debug level. Default:
`False`
- `tools_raise_on_error`: If True, an error in a tool call will raise an exception.
Default: `False`
- `tools_verbose`: If True, tools will log additional information. Default: `True`

### Experimental settings

- `enable_experimental_tui`: If True, the experimental TUI will be enabled. If False,
the TUI will be disabled. Default: `False`
- `run_tui_headless`: If True, the experimental TUI will run in headless mode, which
is useful for debugging. Default: `False`

### Prefect settings

These are default settings for Prefect when used with ControlFlow. They can be
overridden by setting standard Prefect environment variables.

- `prefect_log_level`: The log level for Prefect. Options: `DEBUG`, `INFO`,
`WARNING`, `ERROR`, `CRITICAL`. Default: `WARNING`
1 change: 1 addition & 0 deletions docs/mint.json
@@ -65,6 +65,7 @@
{
"group": "Configuration",
"pages": [
+"guides/settings",
"guides/llms",
"guides/default-agent"
]
7 changes: 7 additions & 0 deletions docs/patterns/running-tasks.mdx
@@ -310,11 +310,15 @@ orchestrator.run()

The `max_agent_turns` argument limits the number of agentic turns that can be taken in a single orchestration session. This limit is enforced by the orchestrator, which will end the turn early if the limit is reached.

+A global default can be set with ControlFlow's `orchestrator_max_agent_turns` setting.


#### Limiting LLM calls

The `max_llm_calls` argument limits the number of LLM calls that can be made during a single orchestration session. This limit is enforced by the orchestrator, which will end the turn early if the limit is reached. Note that this is enforced independently of the `max_agent_turns` limit.

+A global default can be set with ControlFlow's `orchestrator_max_llm_calls` setting.


#### Limiting LLM calls over the lifetime of a task

@@ -343,6 +347,9 @@ task.run()
```text Error
Task 5ba273e6 ("Roll 3 dice and report the results") failed: Max LLM calls reached for this task.
```

+A global default can be set with ControlFlow's `task_max_llm_calls` setting.

</CodeGroup>
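The per-task behavior described above can be sketched as a small budget object (a hypothetical simplification, not ControlFlow's actual `Task` class): calls are counted over the task's whole lifetime, and exceeding the budget marks the task failed.

```python
class TaskBudget:
    # Hypothetical sketch of a per-task LLM call budget: the task fails
    # once calls exceed max_llm_calls; None means no limit.
    def __init__(self, max_llm_calls=None):
        self.max_llm_calls = max_llm_calls
        self.calls = 0
        self.failed = False

    def record_llm_call(self):
        self.calls += 1
        if self.max_llm_calls is not None and self.calls > self.max_llm_calls:
            self.failed = True
            raise RuntimeError("Max LLM calls reached for this task.")
```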

<Tip>
20 changes: 16 additions & 4 deletions src/controlflow/orchestration/orchestrator.py
@@ -146,9 +146,10 @@ def run(
None, self.get_available_agents()
)

# Use default max_agent_turns if not provided
if max_agent_turns is None:
-max_agent_turns = controlflow.settings.orchestrator_max_turns
+max_agent_turns = controlflow.settings.orchestrator_max_agent_turns
+if max_llm_calls is None:
+max_llm_calls = controlflow.settings.orchestrator_max_llm_calls

# Signal the start of orchestration
self.handle_event(
@@ -159,7 +160,10 @@
while any(t.is_incomplete() for t in self.tasks):
# Check if we've reached the turn or call limit
if max_agent_turns is not None and turn_count >= max_agent_turns:
+logger.debug(f"Max agent turns reached: {max_agent_turns}")
break

+# this check seems redundant to the check below, but this one exits the outer loop
if max_llm_calls is not None and call_count >= max_llm_calls:
break

@@ -190,6 +194,7 @@ def run(

# Check if there are any ready tasks left
if not any(t.is_ready() for t in assigned_tasks):
+logger.debug("No `ready` tasks to run")
break

call_count += 1
@@ -201,6 +206,7 @@

# Check if we've reached the call limit within a turn
if max_llm_calls is not None and call_count >= max_llm_calls:
+logger.debug(f"Max LLM calls reached: {max_llm_calls}")
break

# Select the next agent for the following turn
@@ -247,9 +253,10 @@ async def run_async(
None, self.get_available_agents()
)

# Use default max_agent_turns if not provided
if max_agent_turns is None:
-max_agent_turns = controlflow.settings.orchestrator_max_turns
+max_agent_turns = controlflow.settings.orchestrator_max_agent_turns
+if max_llm_calls is None:
+max_llm_calls = controlflow.settings.orchestrator_max_llm_calls

# Signal the start of orchestration
self.handle_event(
@@ -260,7 +267,10 @@
while any(t.is_incomplete() for t in self.tasks):
# Check if we've reached the turn or call limit
if max_agent_turns is not None and turn_count >= max_agent_turns:
+logger.debug(f"Max agent turns reached: {max_agent_turns}")
break

+# this check seems redundant to the check below, but this one exits the outer loop
if max_llm_calls is not None and call_count >= max_llm_calls:
break

@@ -291,6 +301,7 @@

# Check if there are any ready tasks left
if not any(t.is_ready() for t in assigned_tasks):
+logger.debug("No `ready` tasks to run")
break

call_count += 1
@@ -304,6 +315,7 @@

# Check if we've reached the call limit within a turn
if max_llm_calls is not None and call_count >= max_llm_calls:
+logger.debug(f"Max LLM calls reached: {max_llm_calls}")
break

# Select the next agent for the following turn
27 changes: 16 additions & 11 deletions src/controlflow/settings.py
@@ -9,14 +9,14 @@
from pydantic import Field, field_validator, model_validator
from pydantic_settings import BaseSettings, SettingsConfigDict

+CONTROLFLOW_ENV_FILE = os.getenv("CONTROLFLOW_ENV_FILE", "~/.controlflow/.env")


class ControlFlowSettings(BaseSettings):
model_config = SettingsConfigDict(
env_prefix="CONTROLFLOW_",
env_file=(
-""
-if os.getenv("CONTROLFLOW_TEST_MODE")
-else ("~/.controlflow/.env", ".env")
+"" if os.getenv("CONTROLFLOW_TEST_MODE") else (".env", CONTROLFLOW_ENV_FILE)
),
extra="ignore",
arbitrary_types_allowed=True,
@@ -54,15 +54,20 @@ class Settings(ControlFlowSettings):
)

# ------------ orchestration settings ------------
-orchestrator_max_turns: Optional[int] = Field(
-default=100,
-description="The maximum number of agent turns allowed when orchestrating tasks. "
-"Turns are counted within a single orchestrator session. If None, orchestration may run indefinitely.",
-)
-orchestrator_max_calls: Optional[int] = Field(
+orchestrator_max_agent_turns: Optional[int] = Field(
default=100,
-description="The maximum number of LLM calls allowed per agent turn when orchestrating tasks. "
-"If None, orchestration may run indefinitely.",
+description="The default maximum number of agent turns per orchestration session. "
+"If None, orchestration may run indefinitely. This setting can be overridden on a per-call basis.",
+)
+orchestrator_max_llm_calls: Optional[int] = Field(
+default=1000,
+description="The default maximum number of LLM calls per orchestration session. "
+"If None, orchestration may run indefinitely. This setting can be overridden on a per-call basis.",
+)
+task_max_llm_calls: Optional[int] = Field(
+default=None,
+description="The default maximum number of LLM calls over a task's lifetime. "
+"If None, the task may run indefinitely. This setting can be overridden on a per-task basis.",
)

# ------------ LLM settings ------------
3 changes: 2 additions & 1 deletion src/controlflow/tasks/task.py
@@ -114,9 +114,10 @@ class Task(ControlFlowModel):
)
interactive: bool = False
max_llm_calls: Optional[int] = Field(
+default_factory=lambda: controlflow.settings.task_max_llm_calls,
description="Maximum number of LLM calls to make before the task should be marked as failed. "
"The total calls are measured over the life of the task, and include any LLM call for "
-"which this task is considered `assigned`."
+"which this task is considered `assigned`.",
)
created_at: datetime.datetime = Field(default_factory=datetime.datetime.now)
_subtasks: set["Task"] = set()
4 changes: 2 additions & 2 deletions tests/conftest.py
@@ -12,8 +12,8 @@ def temp_controlflow_settings():
pretty_print_agent_events=False,
log_all_messages=True,
log_level="DEBUG",
-orchestrator_max_turns=10,
-orchestrator_max_calls=10,
+orchestrator_max_agent_turns=10,
+orchestrator_max_llm_calls=10,
):
yield

