feat(llamabot/cli/docs)📝: Add a new CLI tool for managing Markdown documentation #90

Merged · 14 commits · Aug 30, 2024
Changes from 13 commits
6 changes: 3 additions & 3 deletions .devcontainer/Dockerfile
@@ -11,10 +11,10 @@ ARG USERNAME=vscode
ARG USER_UID=1000
ARG USER_GID=$USER_UID

# Copy environment.yml (if found) to a temp location so we can update the environment. Also
# copy "noop.txt" so the COPY instruction does not fail if no environment.yml exists.
# Copy lockfile and config file for pixi to install env
COPY pixi.lock .
COPY pyproject.toml .
# Copy docs source, llamabot source, and software tests to get started with development
COPY docs docs
COPY llamabot llamabot
COPY tests tests
@@ -26,7 +26,7 @@ RUN apt-get update && apt-get install -y curl build-essential
# Configure apt and install packages
RUN /usr/local/bin/pixi install --manifest-path pyproject.toml

# Install Ollama within Docker container
# Install Ollama within Docker container to run large language models locally
RUN curl -fsSL https://ollama.com/install.sh | sh

# Always the final command
7 changes: 7 additions & 0 deletions .github/workflows/code-style.yaml
@@ -18,6 +18,13 @@ jobs:
# Install latest uv version using the installer
run: curl -LsSf https://astral.sh/uv/install.sh | sh

- name: Setup Pixi Environment
uses: prefix-dev/setup-pixi@v0.8.1
with:
pixi-version: v0.25.0
cache: true
cache-write: ${{ github.event_name == 'push' && github.ref_name == 'main' }}

- name: Set up Python
run: uv python install

9 changes: 9 additions & 0 deletions .pre-commit-config.yaml
@@ -43,3 +43,12 @@ repos:
rev: v0.0.9
hooks:
- id: convert-to-webp
- repo: local
hooks:
- id: pixi-install
name: pixi-install
entry: pixi install
language: system
always_run: true
require_serial: true
pass_filenames: false
78 changes: 78 additions & 0 deletions docs/devcontainer.md
@@ -0,0 +1,78 @@
---
intents:
- Provide reader with an overview on how the dev container is built, and what software
is installed in dev container.
- Explain the build steps with examples of possible common failure modes in the build
and how to fix them, such that the reader knows how to fix them.
- Understand how the devcontainer.json file influences the development container.
linked_files:
- .github/workflows/build-devcontainer.yaml
- .devcontainer/Dockerfile
- .devcontainer/devcontainer.json
---
# Development Container Overview

The development container for Llamabot is built using a Dockerfile and is influenced by the `devcontainer.json` file. This document provides an overview of how the dev container is built, the software installed within it, and the build steps with examples of possible common failure modes and how to fix them.

## Building the Development Container

The development container is built using the Dockerfile located at `.devcontainer/Dockerfile`. The Dockerfile starts with a base image `ghcr.io/prefix-dev/pixi:latest` and sets up the environment by installing necessary software and dependencies. The Dockerfile also adds a non-root user with sudo access and sets up the environment for the development of Llamabot.

### Dockerfile Contents

The Dockerfile includes the following key steps:

1. Copies necessary files and directories into the container, including the `tests` directory and the `llamabot` directory.
2. Installs `curl` and `build-essential` (a C++ toolchain is needed for ChromaDB).
3. Installs the pixi-managed environment based on the `pyproject.toml` file.
4. Installs Ollama within the Docker container.
5. Sets the final command and switches back to dialog for any ad-hoc use of `apt-get`.
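The steps above can be pictured as a condensed Dockerfile outline. The base image and copied paths come from the diff in this PR; the exact ordering and any omitted `ARG`/`USER` lines are simplified here, so treat this as a sketch rather than the literal file:

```dockerfile
# Base image providing the pixi package manager.
FROM ghcr.io/prefix-dev/pixi:latest

# Copy lockfile and project config so pixi can solve the environment.
COPY pixi.lock .
COPY pyproject.toml .

# Copy docs source, llamabot source, and software tests for development.
COPY docs docs
COPY llamabot llamabot
COPY tests tests

# curl is needed for installer scripts; build-essential provides the
# C++ toolchain required by ChromaDB.
RUN apt-get update && apt-get install -y curl build-essential

# Install the environment with pixi.
RUN /usr/local/bin/pixi install --manifest-path pyproject.toml

# Install Ollama to run large language models locally.
RUN curl -fsSL https://ollama.com/install.sh | sh
```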

### Ollama Software

Ollama is used to run large language models locally within the development container. It is installed with the command `RUN curl -fsSL https://ollama.com/install.sh | sh`.

### Tests Directory

The `tests` directory contains the software tests to get started with development.

### Llamabot Directory

The `llamabot` directory contains the source code and documentation for the Llamabot project itself.

## Devcontainer.json Influence

The `devcontainer.json` file located at `.devcontainer/devcontainer.json` influences the development container by specifying the build context, customizations for Visual Studio Code, forwarded ports, and the post-create and post-start commands.

### Devcontainer.json Contents

- Specifies the Dockerfile and build context.
- Customizes Visual Studio Code settings and extensions for the development environment.
- Forwards port 8888 for the development environment.
- Specifies post-create and post-start commands for setting up the environment and running the Llamabot server.
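Put together, the file looks roughly like the sketch below. Only the keys listed above are grounded in this document; the extension name and the exact command strings are illustrative assumptions:

```json
{
    "build": {
        "dockerfile": "Dockerfile",
        "context": ".."
    },
    "customizations": {
        "vscode": {
            "extensions": ["ms-python.python"]
        }
    },
    "forwardPorts": [8888],
    "postCreateCommand": "pre-commit install && pixi install",
    "postStartCommand": "ollama serve"
}
```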

### Devcontainer.json Commands

The `postCreateCommand` runs once after the development container is created; it installs pre-commit and sets up the Python environment. The `postStartCommand` runs each time the container starts and launches the `ollama` server.

## Build Process

The build process for the development container is automated using GitHub Actions. The workflow is defined in the `.github/workflows/build-devcontainer.yaml` file. The workflow is triggered on a schedule and on pushes to the main branch. It sets up QEMU, Docker Buildx, and logs in to Docker Hub. It then builds and pushes the development container to Docker Hub.

### Build Process Workflow

1. Sets up QEMU and Docker Buildx.
2. Logs in to Docker Hub using secrets.
3. Builds and pushes the development container to Docker Hub with appropriate tags and caching configurations.

## Common Failure Modes

Common failure modes in the build include Dockerfile syntax errors, files missing from a `COPY` instruction, and failed package installations — for example, a `pixi install` that cannot solve the environment, or the Ollama install script failing because the network is unavailable during the build. These issues can usually be resolved by reviewing the Dockerfile step by step, confirming that all required files are present in the build context, and re-running the failing `RUN` command in an interactive container to inspect its output.

## Conclusion

This documentation provides an overview of the development container for Llamabot, including the build process, the influence of the `devcontainer.json` file, and common failure modes. Developers can use it to understand how the development container is built and how to troubleshoot issues during the build.
85 changes: 30 additions & 55 deletions docs/tutorials/chatbot.md
@@ -1,80 +1,55 @@
# ChatBot Tutorial
---
intents:
- How do we use the llamabot ChatBot class in a Jupyter notebook?
- How to serve up a Panel app based on that ChatBot class.
- Specific details on how the ChatBot retrieval works when composing an API call,
such as which messages are retrieved from history.
linked_files:
- llamabot/bot/chatbot.py
- llamabot/__init__.py
---

!!! note
This tutorial was written by GPT4 and edited by a human.
# Using the llamabot ChatBot Class in a Jupyter Notebook

In this tutorial, we will learn how to use the `ChatBot` class to create a simple chatbot that can interact with users. The chatbot is built using the OpenAI GPT-4 model and can be used in a Panel app.
To use the `ChatBot` class from llamabot in a Jupyter notebook, you can follow these steps:

## Getting Started

First, let's import the `ChatBot` class:
1. Import the `ChatBot` class from the `llamabot.bot.chatbot` module:

```python
from llamabot import ChatBot
from llamabot.bot.chatbot import ChatBot
```

Now, let's create a new instance of the `ChatBot` class. We need to provide a system prompt, which will be used to prime the chatbot. Optionally, we can also set the temperature and model name:
2. Create an instance of the `ChatBot` class by providing the required parameters such as the system prompt, session name, and any additional configuration options:

```python
system_prompt = "Hello, I am a chatbot. How can I help you today?"
chatbot = ChatBot(system_prompt, temperature=0.0, model_name="gpt-4")
system_prompt = "Your system prompt here"
session_name = "Your session name here"
chatbot = ChatBot(system_prompt, session_name)
```

## Interacting with the ChatBot

To interact with the chatbot, we can simply call the chatbot instance with a human message:
3. Interact with the `ChatBot` instance by calling it with a human message:

```python
human_message = "What is the capital of France?"
human_message = "Hello, how are you?"
response = chatbot(human_message)
print(response.content)
print(response)
```

The chatbot will return an `AIMessage` object containing the response to the human message, primed by the system prompt.

## Chat History
# Serving a Panel App Based on the ChatBot Class

The chatbot automatically manages the chat history. To view the chat history, we can use the `__repr__` method:
To serve a Panel app based on the `ChatBot` class, you can use the `stream_panel` method of the `ChatBot` class. Here's an example of how to do this:

```python
print(chatbot)
# `messages` is assumed to be the list of chat messages to seed the app with.
panel_app = chatbot.stream_panel(messages)
panel_app.servable()
```

This will return a string representation of the chat history, with each message prefixed by its type (System, Human, or AI).

## Creating a Panel App

The `ChatBot` class also provides a `panel` method to create a Panel app that wraps the chatbot. This allows users to interact with the chatbot through a web interface.

To create a Panel app, simply call the `panel` method on the chatbot instance:

```python
app = chatbot.panel(show=False)
```

By default, the app will be shown in a new browser window. If you want to return the app directly, set the `show` parameter to `False`.

You can customize the appearance of the app by providing additional parameters, such as `site`, `title`, and `width`:
# ChatBot Retrieval and API Composition

```python
app = chatbot.panel(show=False, site="My ChatBot", title="My ChatBot", width=768)
```

To run the app, you can either call the `show` method on the app or use the Panel `serve` function:

```python
app.show()
```

or

```python
import panel as pn
pn.serve(app)
```
When the `ChatBot` composes an API call, retrieval of messages from history is handled internally. The `retrieve` method of the `ChatBot` class selects messages from the chat history based on the incoming human message and a response budget. The retrieved messages include the system prompt, relevant historical messages, and the human message itself.

Now you have a fully functional chatbot that can interact with users through a web interface!
For example, when making an API call to the `ChatBot` instance, the retrieval process ensures that the historical context is considered when generating the response.
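The retrieval step can be pictured with a minimal sketch. This is illustrative only — the function name, the character-based budget, and the newest-first selection are assumptions for exposition, not llamabot's actual API:

```python
def retrieve(system_prompt, history, human_message, budget_chars=200):
    """Assemble messages for an API call under a size budget.

    Walks history from newest to oldest, keeping messages until the
    budget is spent, then returns system prompt + kept history + new message.
    """
    selected = []
    used = len(human_message)
    for msg in reversed(history):
        if used + len(msg) > budget_chars:
            break  # budget exhausted; older messages are dropped
        selected.append(msg)
        used += len(msg)
    selected.reverse()  # restore chronological order
    return [system_prompt] + selected + [human_message]


messages = retrieve("sys", ["a" * 50, "b" * 50, "c" * 50], "hi", budget_chars=110)
# Only the two most recent history messages fit the budget.
```

The key design point this sketch illustrates is that the system prompt and the new human message are always included, while historical messages compete for the remaining budget.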

## Conclusion
This covers the specific details on how the `ChatBot` retrieval works when composing an API call.

In this tutorial, we learned how to use the `ChatBot` class to create a simple chatbot that can interact with users. We also learned how to create a Panel app to provide a web interface for the chatbot. With this knowledge, you can now create your own chatbots and customize them to suit your needs. Happy chatting!
26 changes: 17 additions & 9 deletions docs/tutorials/recording_prompts.md
@@ -1,7 +1,15 @@
# Automatically Record QueryBot Calls with PromptRecorder
---
intents:
- This should be a how-to guide that shows a user how the PromptRecorder in LlamaBot
works in tandem with QueryBot, specifically using it as a context manager.
- Specifically, how one sets it up, views recorded prompts and responses, and how
to display them in a Panel app.
linked_files:
- llamabot/recorder.py
- llamabot/bot/querybot.py
---

!!! note
This tutorial was written by GPT4 and edited by a human.
# Automatically Record QueryBot Calls with PromptRecorder

In this tutorial, we will learn how to use the `PromptRecorder` class to automatically record calls made to the `QueryBot`. The `PromptRecorder` class is designed to record prompts and responses, making it a perfect fit for logging interactions with the `QueryBot`.

@@ -23,8 +31,8 @@ pip install pandas panel
First, we need to import the `PromptRecorder` and `QueryBot` classes from their respective source files. You can do this by adding the following lines at the beginning of your script:

```python
from prompt_recorder import PromptRecorder, autorecord
from query_bot import QueryBot
from llamabot.recorder import PromptRecorder, autorecord
from llamabot.bot.querybot import QueryBot
```

## Step 2: Initialize the QueryBot
@@ -36,7 +44,7 @@ system_message = "You are a helpful assistant that can answer questions based on
model_name = "gpt-4"
doc_paths = ["document1.txt", "document2.txt"]

query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)
query_bot = QueryBot(system_message, model_name=model_name, document_paths=doc_paths)
```

## Step 3: Use the PromptRecorder context manager
@@ -84,14 +92,14 @@ recorder.panel().show()
Here's the complete example that demonstrates how to use the `PromptRecorder` to automatically record `QueryBot` calls:

```python
from prompt_recorder import PromptRecorder, autorecord
from query_bot import QueryBot
from llamabot.recorder import PromptRecorder, autorecord
from llamabot.bot.querybot import QueryBot

system_message = "You are a helpful assistant that can answer questions based on the provided documents."
model_name = "gpt-4"
doc_paths = ["document1.txt", "document2.txt"]

query_bot = QueryBot(system_message, model_name=model_name, doc_paths=doc_paths)
query_bot = QueryBot(system_message, model_name=model_name, document_paths=doc_paths)

with PromptRecorder() as recorder:
query = "What is the main idea of document1?"
39 changes: 35 additions & 4 deletions docs/tutorials/simplebot.md
@@ -1,7 +1,20 @@
---
intents:
- Diataxis framework-style tutorial on the usage of SimpleBot.
- Emphasis on the returned data structures from SimpleBot.__call__(), that these are
not strings, but AIMessages.
- Explanation on what an AIMessage is, showing its class structure.
- Creating a Panel app to talk with SimpleBot.
linked_files:
- llamabot/__init__.py
- llamabot/bot/simplebot.py
- llamabot/components/messages.py
---

# SimpleBot Tutorial

!!! note
This tutorial was written by GPT4 and edited by a human.
This tutorial was written by GPT4 and edited by a human.

In this tutorial, we will learn how to use the `SimpleBot` class, a Python implementation of a chatbot that interacts with OpenAI's GPT-4 model. The `SimpleBot` class is designed to be simple and easy to use, allowing you to create a chatbot that can respond to human messages based on a given system prompt.

@@ -10,7 +23,7 @@ In this tutorial, we will learn how to use the `SimpleBot` class, a Python imple
First, let's import the `SimpleBot` class:

```python
from llamabot import SimpleBot
from llamabot.bot.simplebot import SimpleBot
```

### Initializing the SimpleBot
@@ -32,7 +45,21 @@ response = bot(human_message)
print(response.content)
```

### Using the Panel App
## AIMessage

When interacting with the `SimpleBot`, it's important to note that the response returned is not a simple string, but an `AIMessage` object. This object contains the generated response and additional metadata. The structure of an `AIMessage` is as follows:

```python
from llamabot.components.messages import AIMessage

# An AIMessage holds the generated text plus role metadata, roughly:
message = AIMessage(content="Generated response content")
message.content  # "Generated response content"
message.role     # "assistant"
```

## Using the Panel App

`SimpleBot` also comes with a built-in Panel app that provides a graphical user interface for interacting with the chatbot. To create the app, call the `panel()` method on your `SimpleBot` instance:

@@ -53,7 +80,7 @@ app.show()
Here's a complete example of how to create and interact with a `SimpleBot`:

```python
from simple_bot import SimpleBot
from llamabot.bot.simplebot import SimpleBot

# Initialize the SimpleBot
system_prompt = "You are an AI assistant that helps users with their questions."
@@ -72,3 +99,7 @@ app.show()
## Conclusion

In this tutorial, we learned how to use the `SimpleBot` class to create a simple chatbot that interacts with OpenAI's GPT-4 model. We also learned how to create a Panel app for a more user-friendly interface. With this knowledge, you can now create your own chatbots and experiment with different system prompts and settings.

## Additional Information

For more detailed information on the `SimpleBot` class and its methods, please refer to the source code and documentation provided in the `llamabot` package.
1 change: 1 addition & 0 deletions llamabot/bot/ollama_model_names.txt
@@ -1,3 +1,4 @@
hermes3
phi3.5
smollm
bge-large
5 changes: 4 additions & 1 deletion llamabot/cli/__init__.py
@@ -8,7 +8,7 @@

from llamabot import ChatBot, PromptRecorder

from . import blog, configure, doc, git, python, tutorial, zotero, repo, serve
from . import blog, configure, doc, git, python, tutorial, zotero, repo, serve, docs
from .utils import exit_if_asked, uniform_prompt

app = typer.Typer()
@@ -42,6 +42,9 @@
app.add_typer(
serve.cli, name="serve", help="Serve up a LlamaBot as a FastAPI endpoint."
)
app.add_typer(
docs.app, name="docs", help="Create Markdown documentation from source files."
)


@app.command()