13 changes: 13 additions & 0 deletions docs/index.md
@@ -33,19 +33,27 @@ First, install LLM using `pip` or Homebrew or `pipx`:
```bash
pip install llm
```

Or with Homebrew (see {ref}`warning note <homebrew-warning>`):

```bash
brew install llm
```

Or with [pipx](https://pypa.github.io/pipx/):

```bash
pipx install llm
```

Or with [uv](https://docs.astral.sh/uv/guides/tools/):

```bash
uv tool install llm
```

If you have an [OpenAI API key](https://platform.openai.com/api-keys) you can run this:

```bash
# Paste your OpenAI API key into this
llm keys set openai
```

@@ -59,18 +67,23 @@
```bash
llm "extract text" -a scanned-document.jpg
# Use a system prompt against a file
cat myfile.py | llm -s "Explain this code"
```

Or you can {ref}`install a plugin <installing-plugins>` and use models that can run on your local device:

```bash
# Install the plugin
llm install llm-gpt4all

# Download and run a prompt against the Orca Mini 7B model
llm -m orca-mini-3b-gguf2-q4_0 'What is the capital of France?'
```

To start {ref}`an interactive chat <usage-chat>` with a model, use `llm chat`:

```bash
llm chat -m gpt-4o
```

```
Chatting with gpt-4o
Type 'exit' or 'quit' to exit
```
80 changes: 79 additions & 1 deletion docs/setup.md
@@ -3,46 +3,63 @@
## Installation

Install this tool using `pip`:

```bash
pip install llm
```

Or using [pipx](https://pypa.github.io/pipx/):

```bash
pipx install llm
```

Or using [uv](https://docs.astral.sh/uv/guides/tools/) ({ref}`more tips below <setup-uvx>`):

```bash
uv tool install llm
```

Or using [Homebrew](https://brew.sh/) (see {ref}`warning note <homebrew-warning>`):

```bash
brew install llm
```

## Upgrading to the latest version

If you installed using `pip`:

```bash
pip install -U llm
```

For `pipx`:

```bash
pipx upgrade llm
```

For `uv`:

```bash
uv tool upgrade llm
```

For Homebrew:

```bash
brew upgrade llm
```

If the latest version is not yet available on Homebrew you can upgrade like this instead:

```bash
llm install -U llm
```

(setup-uvx)=

## Using uvx

If you have [uv](https://docs.astral.sh/uv/) installed you can also use the `uvx` command to try LLM without first installing it like this:
@@ -51,20 +68,25 @@
```bash
export OPENAI_API_KEY='sk-...'
uvx llm 'fun facts about skunks'
```

This will install and run LLM using a temporary virtual environment.
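
If you want to repeat runs against a specific release, `uvx` also accepts a `package@version` specifier (a sketch - the version number here is just an example):

```bash
# Run a pinned version of LLM in its own temporary environment
uvx llm@0.19 --version
```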

You can use the `--with` option to add extra plugins. To use Anthropic's models, for example:

```bash
export ANTHROPIC_API_KEY='...'
uvx --with llm-anthropic llm -m claude-3.5-haiku 'fun facts about skunks'
```

All of the usual LLM commands will work with `uvx llm`. For example, here's how to set your OpenAI key without needing an environment variable:

```bash
uvx llm keys set openai
# Paste key here
```

(homebrew-warning)=

## A note about Homebrew and PyTorch

The version of LLM packaged for Homebrew currently uses Python 3.12. The PyTorch project does not yet have a stable release of PyTorch for that version of Python.
@@ -80,18 +102,21 @@
```bash
llm python -m pip install \
--index-url https://download.pytorch.org/whl/nightly/cpu
llm install llm-sentence-transformers
```

This should produce a working installation of that plugin.
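
One way to double-check the result, assuming the `llm python` command used above is available, is to import `torch` from inside LLM's own environment:

```bash
# Should print the nightly PyTorch version without raising ImportError
llm python -c 'import torch; print(torch.__version__)'
```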

## Installing plugins

{ref}`plugins` can be used to add support for other language models, including models that can run on your own device.

For example, the [llm-gpt4all](https://github.com/simonw/llm-gpt4all) plugin adds support for 17 new models that can be installed on your own machine. You can install that like so:

```bash
llm install llm-gpt4all
```
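
You can confirm the installation worked by listing your plugins and models:

```bash
# Show installed plugins
llm plugins

# The gpt4all models should now appear in this list
llm models
```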

(api-keys)=

## API key management

Many LLM models require an API key. These API keys can be provided to this tool using several different mechanisms.
@@ -105,11 +130,14 @@ The easiest way to store an API key is to use the `llm keys set` command:
```bash
llm keys set openai
```

You will be prompted to enter the key like this:

```
% llm keys set openai
Enter key:
```

Once stored, this key will be automatically used for subsequent calls to the API:

```bash
llm "Five names for pet weasels"
```

@@ -137,10 +165,13 @@ Keys can be passed directly using the `--key` option, like this:
```bash
llm "Five names for pet weasels" --key sk-my-key-goes-here
```

You can also pass the alias of a key stored in the `keys.json` file. For example, if you want to maintain a personal API key you could add that like this:

```bash
llm keys set personal
```

And then use it for prompts like so:

```bash
llm "Five names for pet weasels" --key personal
```

@@ -156,6 +187,7 @@ For OpenAI models the key will be read from the `OPENAI_API_KEY` environment variable.
The environment variable will be used if no `--key` option is passed to the command and there is no key configured in `keys.json`.

To use an environment variable in place of the `keys.json` key run the prompt like this:

```bash
llm 'my prompt' --key $OPENAI_API_KEY
```
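
Alternatively, export the variable once for your shell session and omit `--key` entirely - it will be picked up as long as no key is configured in `keys.json`:

```bash
export OPENAI_API_KEY='sk-...'
llm 'my prompt'
```
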
@@ -165,6 +197,7 @@
You can configure LLM in a number of different ways.

(setup-default-model)=

### Setting a custom default model

The model used when calling `llm` without the `-m/--model` option defaults to `gpt-4o-mini` - the fastest and least expensive OpenAI model.
@@ -174,10 +207,13 @@ You can use the `llm models default` command to set a different default model. For example:
```bash
llm models default gpt-4o
```

You can view the current model by running this:

```bash
llm models default
```

Any of the supported aliases for a model can be passed to this command.
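
For example, assuming the `4o` alias is registered for `gpt-4o`:

```bash
llm models default 4o
```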

### Setting a custom directory location
@@ -193,16 +229,58 @@ You can set a custom location for this directory by setting the `LLM_USER_PATH` environment variable:
```bash
export LLM_USER_PATH=/path/to/my/custom/directory
```
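
One way to confirm the custom directory is being picked up is to ask LLM where it keeps its log database:

```bash
# Should print a logs.db path inside your custom directory
llm logs path
```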

(ssl-certificate-configuration)=

### SSL Certificate Configuration

When using LLM behind a corporate proxy or firewall (like Zscaler), you may encounter SSL certificate validation issues. You can configure SSL handling using environment variables:

```bash
# Use your system's native certificate store (similar to uv's --native-tls option)
export LLM_SSL_CONFIG=native_tls

# Or use a specific certificate bundle
export LLM_CA_BUNDLE=/path/to/certificate.pem
```

<details>
<summary>More SSL configuration options and details</summary>

#### Environment Variables

- `LLM_SSL_CONFIG`: Controls SSL verification behavior
  - `native_tls`: Uses your system's native certificate store
  - `no_verify`: Disables SSL verification entirely (not recommended for production)
- `LLM_CA_BUNDLE`: Path to a custom CA certificate bundle file

#### Finding Your Corporate Certificate

If you're behind a corporate proxy, you may need to export the certificate from your browser or obtain it from your IT department.

Common certificate locations:

- macOS: `~/Library/Application Support/Certificate Authority/`
- Linux: `/etc/ssl/certs/`
- Windows: The Windows Certificate Store
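
On macOS, one way to export the system's trusted roots into a bundle file is the `security` tool (a sketch - keychain paths can vary between machines):

```bash
# Dump the system root certificates to a PEM bundle
security find-certificate -a -p \
  /System/Library/Keychains/SystemRootCertificates.keychain > corp-bundle.pem
export LLM_CA_BUNDLE="$PWD/corp-bundle.pem"
```
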
</details>

### Turning SQLite logging on and off

By default, LLM will log every prompt and response you make to a SQLite database - see {ref}`logging` for more details.

You can turn this behavior off by running:

```bash
llm logs off
```

Or turn it back on again with:

```bash
llm logs on
```
Run `llm logs status` to see the current state of this setting.
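
Logging can also be overridden for a single prompt - a sketch, assuming the `--log` and `--no-log` flags of `llm prompt`:

```bash
# Log this prompt even if logging is turned off
llm 'my prompt' --log

# Skip logging for this prompt even if logging is turned on
llm 'my prompt' --no-log
```
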
62 changes: 61 additions & 1 deletion llm/default_plugins/openai_models.py
```diff
@@ -12,6 +12,7 @@
 import httpx
 import openai
 import os
+import warnings

 from pydantic import field_validator, Field
@@ -535,8 +536,16 @@ def get_client(self, key, *, async_=False):
             kwargs["api_key"] = "DUMMY_KEY"
         if self.headers:
             kwargs["default_headers"] = self.headers
+
+        # Configure SSL certificate handling from environment variables
+        ssl_client = _configure_ssl_client(self.model_id)
+
+        if ssl_client:
+            kwargs["http_client"] = ssl_client
+
         if os.environ.get("LLM_OPENAI_SHOW_RESPONSES"):
-            kwargs["http_client"] = logging_client()
+            if "http_client" not in kwargs:
+                kwargs["http_client"] = logging_client()
```
> **Review comment on lines +547 to +548:** Doesn't this mean that `LLM_OPENAI_SHOW_RESPONSES` won't work if a custom SSL configuration is in use?

```diff
         if async_:
             return openai.AsyncOpenAI(**kwargs)
         else:
@@ -798,3 +807,54 @@ def redact_data(input_dict):
         for item in input_dict:
             redact_data(item)
     return input_dict
+
+
+def _configure_ssl_client(model_id):
```
> **Review comment:** This function doesn't seem to be using the model_id argument for anything...

"""Configure SSL certificate handling based on environment variables."""
# Check for SSL config in environment variables
ssl_config = os.environ.get("LLM_SSL_CONFIG")
ca_bundle = os.environ.get("LLM_CA_BUNDLE")

if not ssl_config and not ca_bundle:
return None

# Import here to handle potential import errors
try:
from openai import DefaultHttpxClient
import httpx
except ImportError:
warnings.warn(
"Unable to import DefaultHttpxClient from openai - SSL configuration not available."
)
return None

# Validate ssl_config value
valid_ssl_configs = ["native_tls", "no_verify"]
if ssl_config and ssl_config not in valid_ssl_configs:
warnings.warn(
f"Invalid ssl_config value: {ssl_config}. Valid values are: {', '.join(valid_ssl_configs)}"
)
return None

try:
if ssl_config == "native_tls":
# Use the system's native certificate store
return DefaultHttpxClient(transport=httpx.HTTPTransport(verify=True))
> **Review comment** (@bissonex, Apr 15, 2025): I don't think this is enough to use the system's native certificate store. Have a look at encode/httpx#2490. The problem with the proposed solution, using Truststore, is that it requires Python 3.10 or later.

```diff
+        elif ssl_config == "no_verify":
+            # Disable SSL verification entirely (less secure)
+            return DefaultHttpxClient(transport=httpx.HTTPTransport(verify=False))
+        elif ca_bundle:
+            # Check if certificate file exists
+            if not os.path.exists(ca_bundle):
+                warnings.warn(f"Certificate file not found: {ca_bundle}")
+                return None
+            else:
+                # Use a specific CA bundle file
+                return DefaultHttpxClient(
+                    transport=httpx.HTTPTransport(verify=ca_bundle)
+                )
+    except Exception as e:
+        warnings.warn(f"Error configuring SSL client: {str(e)}")
+        return None
+
+    return None
```
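
Taken together, the branches above can be exercised from the shell like this (a sketch - the bundle path is a placeholder):

```bash
# Use the system certificate store
LLM_SSL_CONFIG=native_tls llm 'test prompt'

# Disable verification entirely (not recommended outside debugging)
LLM_SSL_CONFIG=no_verify llm 'test prompt'

# Point at a corporate CA bundle instead
LLM_CA_BUNDLE=/path/to/certificate.pem llm 'test prompt'
```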