vertexai: Add Intro to the Vertex AI library #660

libs/vertexai/langchain_google_vertexai/__init__.py
"""
## langchain-google-vertexai

This module contains the LangChain integrations for Google Cloud generative models.

## Installation

```bash
pip install -U langchain-google-vertexai
```

## Supported Models (MaaS: Model-as-a-Service)
These third-party models are available on Google Cloud Vertex AI
Model-as-a-Service (MaaS), alongside the first-party integrations (chat
models, LLMs, embeddings, and vector stores) described in the sections below:

1. Llama
2. Mistral

For more information, see:
https://cloud.google.com/blog/products/ai-machine-learning/llama-3-1-on-vertex-ai
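After completing the setup described below, usage might look like the
following sketch. Note the assumptions: the `VertexModelGardenLlama` class is
taken from the package's `model_garden_maas` module in recent versions, and
the model ID is a placeholder; verify both against your installed version and
the Vertex AI Model Garden.

```python
from langchain_google_vertexai.model_garden_maas.llama import (
    VertexModelGardenLlama,  # assumed import location; check your version's exports
)

# The model ID is a placeholder; use the ID shown in Model Garden for the
# MaaS model you enabled.
llm = VertexModelGardenLlama(model="meta/llama-3.1-405b-instruct-maas")
llm.invoke("Tell me a joke about LangChain.")
```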

#### Setup

You need to enable the corresponding MaaS model (Google Cloud console ->
Vertex AI -> Model Garden -> search for the model you need and click
"Enable").

You must have the langchain-google-vertexai Python package installed:

```bash
pip install -U langchain-google-vertexai
```

And either:

- Have credentials configured for your environment
  (gcloud, workload identity, etc.)
- Store the path to a service account JSON file in the
  GOOGLE_APPLICATION_CREDENTIALS environment variable

This codebase uses the google.auth library, which first looks for the
application credentials variable mentioned above and then looks for
system-level auth.

For more information, see:
https://cloud.google.com/docs/authentication/application-default-credentials#GAC
and
https://googleapis.dev/python/google-auth/latest/reference/google.auth.html#module-google.auth.
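For example, a typical local setup (the key-file path is a placeholder):

```bash
# Option 1: use your own user credentials as application default credentials.
gcloud auth application-default login

# Option 2: point to a service account key file instead.
export GOOGLE_APPLICATION_CREDENTIALS="/path/to/service-account.json"
```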

## Chat Models

The `ChatVertexAI` class exposes chat models such as `gemini-pro` and `chat-bison`.

To use it, you should have a Google Cloud project with the Vertex AI API
enabled and credentials configured. Initialize the model as:

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")
llm.invoke("Sing a ballad of LangChain.")
```

You can use other models, e.g. `chat-bison`:

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="chat-bison", temperature=0.3)
llm.invoke("Sing a ballad of LangChain.")
```
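Chat models implement LangChain's standard Runnable interface, so you can also
stream responses; a minimal sketch:

```python
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro")

# Stream the response chunk by chunk instead of waiting for the full reply.
for chunk in llm.stream("Sing a ballad of LangChain."):
    print(chunk.content, end="", flush=True)
```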

#### Multimodal inputs

The Gemini vision model supports image inputs provided in a single chat message. Example:

```python
from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")
message = HumanMessage(
    content=[
        # You can optionally provide text parts.
        {
            "type": "text",
            "text": "What's in this image?",
        },
        {
            "type": "image_url",
            "image_url": {"url": "https://picsum.photos/seed/picsum/200/300"},
        },
    ]
)
llm.invoke([message])
```

The value of `image_url` can be any of the following:

- A public image URL
- An accessible Google Cloud Storage file (e.g., "gcs://path/to/file.png")
- A base64-encoded image (e.g., `data:image/png;base64,abcd124`)
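For example, a local image can be sent as a base64 data URL (a sketch;
`image.png` is a placeholder path):

```python
import base64

from langchain_core.messages import HumanMessage
from langchain_google_vertexai import ChatVertexAI

llm = ChatVertexAI(model_name="gemini-pro-vision")

# Read a local image (placeholder path) and wrap it in a data URL.
with open("image.png", "rb") as f:
    encoded = base64.b64encode(f.read()).decode("utf-8")

message = HumanMessage(
    content=[
        {"type": "text", "text": "What's in this image?"},
        {"type": "image_url", "image_url": {"url": f"data:image/png;base64,{encoded}"}},
    ]
)
llm.invoke([message])
```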

## Embeddings

You can use Google Cloud's embedding models as follows:

```python
from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()
embeddings.embed_query("hello, world!")
```
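`embed_documents`, part of LangChain's standard Embeddings interface, batches
several texts in one call, e.g. for indexing into a vector store:

```python
from langchain_google_vertexai import VertexAIEmbeddings

embeddings = VertexAIEmbeddings()

# Embed a batch of documents at once.
vectors = embeddings.embed_documents(["LangChain", "Vertex AI", "Gemini"])
print(len(vectors), len(vectors[0]))  # number of texts, embedding dimensionality
```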

## LLMs

You can use Google Cloud's generative AI models as LangChain LLMs:

```python
from langchain_core.prompts import PromptTemplate
from langchain_google_vertexai import ChatVertexAI

template = \"""Question: {question}

Answer: Let's think step by step.\"""
prompt = PromptTemplate.from_template(template)

llm = ChatVertexAI(model_name="gemini-pro")
chain = prompt | llm

question = "Who was the president of the USA in 1994?"
print(chain.invoke({"question": question}))
```

You can use Gemini and PaLM models, including code-generation ones:

```python
from langchain_google_vertexai import VertexAI

llm = VertexAI(model_name="code-bison", max_output_tokens=1000, temperature=0.3)

question = "Write a python function that checks if a string is a valid email address"

output = llm.invoke(question)
```

## Vector Stores

#### Vector Search Vector Store GCS

A Vertex AI vector store that handles search and indexing with Vertex AI
Vector Search and stores the documents in Google Cloud Storage.

#### Vector Search Vector Store Datastore

Vector Search with Datastore document storage.
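A minimal sketch of the GCS-backed variant via
`VectorSearchVectorStore.from_components`, assuming an already-created Vector
Search index and endpoint (all IDs below are placeholders):

```python
from langchain_google_vertexai import VectorSearchVectorStore, VertexAIEmbeddings

# All resource names below are placeholders for pre-existing resources.
vector_store = VectorSearchVectorStore.from_components(
    project_id="my-project",
    region="us-central1",
    gcs_bucket_name="my-bucket",
    index_id="my-index-id",
    endpoint_id="my-endpoint-id",
    embedding=VertexAIEmbeddings(),
)

vector_store.add_texts(["LangChain is a framework for LLM applications."])
docs = vector_store.similarity_search("What is LangChain?", k=1)
```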
"""

from google.cloud.aiplatform_v1beta1.types import (
    FunctionCallingConfig,
    FunctionDeclaration,