2 changes: 1 addition & 1 deletion docs/v1/examples/camel.mdx
@@ -58,7 +58,7 @@ Now we will initialize our AgentOps client.


```python
-agentops.init(tags=["camel", "multi-agent", "example"])
+agentops.init(default_tags=["camel", "multi-agent", "example"])
```
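The change above is part of a rename of `agentops.init`'s `tags` parameter to `default_tags`. A hypothetical migration shim (not part of the AgentOps SDK) illustrates the mapping:

```python
def migrate_init_kwargs(**kwargs):
    # Map the deprecated `tags` keyword to `default_tags`.
    # Hypothetical shim for illustration, not an AgentOps API.
    if "tags" in kwargs and "default_tags" not in kwargs:
        kwargs["default_tags"] = kwargs.pop("tags")
    return kwargs

# Old-style call sites can be routed through the shim:
print(migrate_init_kwargs(tags=["camel", "multi-agent", "example"]))
```

Call sites already passing `default_tags` pass through unchanged.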

Let's start with setting our task prompt and setting our tools.
2 changes: 1 addition & 1 deletion docs/v1/examples/langchain.mdx
@@ -65,7 +65,7 @@ Pass in your API key, and optionally any tags to describe this session for easie

```python
agentops_handler = AgentOpsLangchainCallbackHandler(
-    api_key=AGENTOPS_API_KEY, tags=["Langchain Example"]
+    api_key=AGENTOPS_API_KEY, default_tags=["Langchain Example"]
)

llm = ChatOpenAI(
20 changes: 6 additions & 14 deletions docs/v1/examples/multi_agent.mdx
@@ -9,7 +9,7 @@ _View Notebook on <a href={'https://github.com/AgentOps-AI/agentops/blob/main/ex
{/* SOURCE_FILE: examples/multi_agent_example.ipynb */}

# Multi-Agent Support
-This is an example implementation of tracking operations from two separate agents
+This is an example implementation of tracking events from two separate agents

First let's install the required packages

@@ -25,7 +25,7 @@ Then import them

```python
import agentops
-from agentops.sdk.decorators import agent, operation
+from agentops import track_agent
from openai import OpenAI
import os
from dotenv import load_dotenv
@@ -53,20 +53,16 @@ logging.basicConfig(


```python
-agentops.init(AGENTOPS_API_KEY, tags=["multi-agent-notebook"])
+agentops.init(AGENTOPS_API_KEY, default_tags=["multi-agent-notebook"])
openai_client = OpenAI(api_key=OPENAI_API_KEY)
```

Now let's create a few agents!


```python
-@agent(name="qa")
+@track_agent(name="qa")
class QaAgent:
-    def __init__(self):
-        pass
-
-    @operation
    def completion(self, prompt: str):
        res = openai_client.chat.completions.create(
            model="gpt-3.5-turbo",
@@ -82,12 +78,8 @@ class QaAgent:
return res.choices[0].message.content


-@agent(name="engineer")
+@track_agent(name="engineer")
class EngineerAgent:
-    def __init__(self):
-        pass
-
-    @operation
    def completion(self, prompt: str):
        res = openai_client.chat.completions.create(
            model="gpt-3.5-turbo",
@@ -109,7 +101,7 @@ qa = QaAgent()
engineer = EngineerAgent()
```

-Now we have our agents and we tagged them with the `@agent` decorator. Any LLM calls that go through this class will now be tagged as agent calls in AgentOps.
+Now we have our agents and we tagged them with the `@track_agent` decorator. Any LLM calls that go through this class will now be tagged as agent calls in AgentOps.
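Conceptually, `@track_agent` is a class decorator that attaches an agent name so instrumentation can attribute LLM calls to the right agent. A simplified pure-Python stand-in (the attribute name below is illustrative, not the SDK's internals):

```python
def track_agent(name: str):
    # Simplified stand-in: stamp the decorated class with an agent name.
    def decorator(cls):
        cls.agent_name = name  # illustrative attribute, not the real SDK's
        return cls
    return decorator


@track_agent(name="qa")
class QaAgent:
    def completion(self, prompt: str) -> str:
        # A real agent would call the LLM here; instrumentation would read
        # `agent_name` to tag the call as belonging to this agent.
        return f"[{self.agent_name}] {prompt}"
```

Every method call on the class can then be attributed to `"qa"` without threading the name through each call site.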

Let's use these agents!

2 changes: 1 addition & 1 deletion docs/v1/examples/multion.mdx
@@ -92,7 +92,7 @@ When running `agentops.init()`, be sure to set `auto_start_session=False`. Multi

```python
agentops.init(
-    AGENTOPS_API_KEY, auto_start_session=False, tags=["MultiOn browse example"]
+    AGENTOPS_API_KEY, auto_start_session=False, default_tags=["MultiOn browse example"]
)
```

2 changes: 1 addition & 1 deletion docs/v1/examples/ollama.mdx
@@ -49,7 +49,7 @@ AGENTOPS_API_KEY = os.getenv("AGENTOPS_API_KEY") or "<your_agentops_key>"

```python
# Initialize AgentOps with some default tags
-agentops.init(AGENTOPS_API_KEY, tags=["ollama-example"])
+agentops.init(AGENTOPS_API_KEY, default_tags=["ollama-example"])
```

Now let's make some basic calls to Ollama. Make sure you have pulled the model first; use the following, or replace it with whichever model you want to use.
56 changes: 17 additions & 39 deletions docs/v1/examples/openai_assistants.mdx
@@ -49,6 +49,7 @@ We'll take a look at how these can be used to create powerful, stateful experien
```python
import json


def show_json(obj):
    display(json.loads(obj.model_dump_json()))
```
@@ -105,7 +106,7 @@ OPENAI_API_KEY = os.getenv("OPENAI_API_KEY") or "<your_openai_key>"


```python
-agentops.init(api_key=AGENTOPS_API_KEY, tags=["openai", "beta-assistants"])
+agentops.init(api_key=AGENTOPS_API_KEY, default_tags=["openai", "beta-assistants"])
client = OpenAI(api_key=OPENAI_API_KEY)
```

@@ -186,6 +187,7 @@ To know when the Assistant has completed processing, we can poll the Run in a lo
```python
import time


def wait_on_run(run, thread):
    while run.status == "queued" or run.status == "in_progress":
        run = client.beta.threads.runs.retrieve(
@@ -223,9 +225,7 @@ Let's ask our Assistant to explain the result a bit further!

```python
# Create a message to append to our thread
-message = client.beta.threads.messages.create(
-    thread_id=thread.id, role="user", content="Could you explain this to me?"
-)
+message = client.beta.threads.messages.create(thread_id=thread.id, role="user", content="Could you explain this to me?")

# Execute our run
run = client.beta.threads.runs.create(
@@ -237,9 +237,7 @@ run = client.beta.threads.runs.create(
wait_on_run(run, thread)

# Retrieve all the messages added after our last user message
-messages = client.beta.threads.messages.list(
-    thread_id=thread.id, order="asc", after=message.id
-)
+messages = client.beta.threads.messages.list(thread_id=thread.id, order="asc", after=message.id)
show_json(messages)
```

@@ -265,10 +263,9 @@ MATH_ASSISTANT_ID = assistant.id  # or a hard-coded ID like "asst-..."

client = OpenAI(api_key=os.environ.get("OPENAI_API_KEY", "<your OpenAI API key if not set as env var>"))


def submit_message(assistant_id, thread, user_message):
-    client.beta.threads.messages.create(
-        thread_id=thread.id, role="user", content=user_message
-    )
+    client.beta.threads.messages.create(thread_id=thread.id, role="user", content=user_message)
    return client.beta.threads.runs.create(
        thread_id=thread.id,
        assistant_id=assistant_id,
@@ -293,9 +290,7 @@ def create_thread_and_run(user_input):


# Emulating concurrent user requests
-thread1, run1 = create_thread_and_run(
-    "I need to solve the equation `3x + 11 = 14`. Can you help me?"
-)
+thread1, run1 = create_thread_and_run("I need to solve the equation `3x + 11 = 14`. Can you help me?")
thread2, run2 = create_thread_and_run("Could you explain linear algebra to me?")
thread3, run3 = create_thread_and_run("I don't like math. What can I do?")

@@ -307,8 +302,6 @@ Once all Runs are going, we can wait on each and get the responses.


```python
-import time
-
# Pretty printing helper
def pretty_print(messages):
    print("# Messages")
@@ -380,9 +373,7 @@ Now, let's ask the Assistant to use its new tool.


```python
-thread, run = create_thread_and_run(
-    "Generate the first 20 fibbonaci numbers with code."
-)
+thread, run = create_thread_and_run("Generate the first 20 Fibonacci numbers with code.")
run = wait_on_run(run, thread)
pretty_print(get_response(thread))
```
@@ -399,9 +390,7 @@ A Run is composed of one or more Steps. Like a Run, each Step has a `status` tha


```python
-run_steps = client.beta.threads.runs.steps.list(
-    thread_id=thread.id, run_id=run.id, order="asc"
-)
+run_steps = client.beta.threads.runs.steps.list(thread_id=thread.id, run_id=run.id, order="asc")
```

Let's take a look at each Step's `step_details`.
@@ -653,19 +642,17 @@ tool_calls = run.required_action.submit_tool_outputs.tool_calls
for tool_call in tool_calls:
    arguments = json.loads(tool_call.function.arguments)
    responses = display_quiz(arguments["title"], arguments["questions"])
-    tool_outputs.append({
-        "tool_call_id": tool_call.id,
-        "output": json.dumps(responses),
-    })
+    tool_outputs.append(
+        {
+            "tool_call_id": tool_call.id,
+            "output": json.dumps(responses),
+        }
+    )
```
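The loop above builds one output entry per tool call. The pattern can be exercised with stub objects standing in for the OpenAI SDK types (the stub names and the `handler` callable are illustrative, not part of the OpenAI SDK):

```python
import json
from types import SimpleNamespace


def collect_tool_outputs(tool_calls, handler):
    # Build the submit_tool_outputs payload: one entry per tool call,
    # with the handler's response serialized as JSON.
    tool_outputs = []
    for tool_call in tool_calls:
        arguments = json.loads(tool_call.function.arguments)
        responses = handler(arguments)
        tool_outputs.append(
            {
                "tool_call_id": tool_call.id,
                "output": json.dumps(responses),
            }
        )
    return tool_outputs


# Stub standing in for run.required_action.submit_tool_outputs.tool_calls
stub_call = SimpleNamespace(
    id="call_1",
    function=SimpleNamespace(arguments=json.dumps({"title": "Quiz"})),
)
outputs = collect_tool_outputs([stub_call], lambda args: {"ok": True})
```

The resulting `outputs` list has the exact shape `submit_tool_outputs` expects.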


```python
-run = client.beta.threads.runs.submit_tool_outputs(
-    thread_id=thread.id,
-    run_id=run.id,
-    tool_outputs=tool_outputs
-)
+run = client.beta.threads.runs.submit_tool_outputs(thread_id=thread.id, run_id=run.id, tool_outputs=tool_outputs)
show_json(run)
```

@@ -678,15 +665,6 @@ run = wait_on_run(run, thread)
pretty_print(get_response(thread))
```

Now let's end the AgentOps session. By default, AgentOps will end the session in the "Indeterminate" state. You can also end the session in the "Success" or "Failure" state.

We will end the session in the "Success" state.


```python
agentops.end_session(end_state="Success")
```

Woohoo 🎉

## Conclusion
7 changes: 1 addition & 6 deletions docs/v1/examples/recording_events.mdx
@@ -11,7 +11,7 @@ _View Notebook on <a href={'https://github.com/AgentOps-AI/agentops/blob/main/ex
# Recording Operations with Spans
AgentOps v0.4 uses spans to track different types of operations in your agent workflows.

-We automatically instrument your LLM Calls from OpenAI, LiteLLM, Cohere, and more. Just make sure their SDKs are imported before initializing AgentOps like we see below.
+We automatically instrument your LLM calls from OpenAI, Anthropic, OpenAI Agents, and more. Just make sure their SDKs are imported before initializing AgentOps, as shown below.

First let's install the required packages

@@ -80,9 +80,7 @@ my_session()
```

Click the AgentOps link above to see your session!

## Operations

AgentOps allows you to record operations using the `@operation` decorator:


@@ -95,7 +93,6 @@ def add(x, y):
```
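Conceptually, a span decorator wraps the function, runs it, and records the call. A simplified stand-in (not the real AgentOps implementation) makes the mechanics concrete:

```python
import functools

recorded_spans = []  # stand-in for the AgentOps span exporter


def operation(func):
    # Simplified stand-in for @operation: run the function and record
    # each call as a span with its name and result.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        result = func(*args, **kwargs)
        recorded_spans.append({"name": func.__name__, "result": result})
        return result
    return wrapper


@operation
def add(x, y):
    return x + y


add(2, 3)
```

The decorated function behaves exactly as before; the span recording happens as a side effect.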

## Agents

You can create agent spans that contain operations using the `@agent` decorator:


@@ -111,7 +108,6 @@ class MyAgent:
```

## Error Handling

Errors are automatically captured by the spans. When an exception occurs within a decorated function, it's recorded in the span:


@@ -140,7 +136,6 @@ error_session()
```
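The capture-and-re-raise behavior can be sketched in plain Python (a stand-in, not the real SDK): the wrapper marks the span as failed, records the exception, and propagates it unchanged.

```python
import functools

spans = []  # stand-in for exported spans


def operation(func):
    # Sketch: record success or failure on the span, then re-raise errors
    # so the caller still sees the original exception.
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        span = {"name": func.__name__, "status": "ok", "error": None}
        spans.append(span)
        try:
            return func(*args, **kwargs)
        except Exception as exc:
            span["status"] = "error"
            span["error"] = repr(exc)
            raise
    return wrapper


@operation
def fails():
    raise ValueError("boom")


try:
    fails()
except ValueError:
    pass  # the exception still propagates to the caller
```

Because the error is re-raised, instrumentation never swallows exceptions; it only annotates them.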

## Custom Span Attributes

You can add custom attributes to spans for additional context:


2 changes: 1 addition & 1 deletion docs/v1/examples/simple_agent.mdx
@@ -50,7 +50,7 @@ The AgentOps library is designed to be a plug-and-play replacement for the OpenA

```python
openai = OpenAI(api_key=OPENAI_API_KEY)
-agentops.init(AGENTOPS_API_KEY, tags=["openai-gpt-notebook"])
+agentops.init(AGENTOPS_API_KEY, default_tags=["openai-gpt-notebook"])
```

Now just use OpenAI as you would normally!