Original file line number Diff line number Diff line change
@@ -651,7 +651,7 @@
"### Create the agent\n",
"\n",
"Finally, we create our agent with:\n",
"- **Model**: Gemini 2.0 Flash for fast, intelligent responses\n",
"- **Model**: Gemini 3 Flash for fast, intelligent responses\n",
"- **Name**: A descriptive identifier\n",
"- **Instruction**: A clear directive that shapes the agent's behavior\n",
"- **Tools**: The BigQuery toolset we just retrieved\n",
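The four components listed above can be sketched with a minimal stand-in; this plain dataclass is illustrative only (the field names mirror the bullets, not any particular agent framework's API, and the BigQuery tool is a hypothetical placeholder):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class SketchAgent:
    """Illustrative stand-in for the agent described above, not a real framework class."""
    model: str                 # e.g. "gemini-3-flash-preview" for fast responses
    name: str                  # a descriptive identifier
    instruction: str           # directive that shapes the agent's behavior
    tools: list[Callable] = field(default_factory=list)  # e.g. a BigQuery toolset

def fake_bigquery_tool(sql: str) -> str:
    """Hypothetical tool; a real toolset would actually run the query."""
    return f"ran: {sql}"

agent = SketchAgent(
    model="gemini-3-flash-preview",
    name="bigquery-agent",
    instruction="Answer questions by querying BigQuery.",
    tools=[fake_bigquery_tool],
)
print(agent.name, len(agent.tools))
```

The point is only to show how the four bullets map onto constructor arguments; the real notebook cell passes the retrieved BigQuery toolset in place of the placeholder tool.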
2 changes: 1 addition & 1 deletion audio/speech/sample-apps/live-translator/app.py
@@ -17,7 +17,7 @@
if PROJECT_ID and not LOCATION:
LOCATION = "us-central1"

MODEL_ID = "gemini-2.0-flash-lite"
MODEL_ID = "gemini-3-flash-preview"

LANGUAGE_MAP = {
"Spanish (Español)": {
2 changes: 1 addition & 1 deletion embeddings/intro_embeddings_tuning.ipynb
@@ -204,7 +204,7 @@
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 207 in embeddings/intro_embeddings_tuning.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -526,7 +526,7 @@
") -> langchain_core.documents.base.Document:\n",
" \"\"\"A function to generate contextual queries based on preprocessed chunk\"\"\"\n",
"\n",
" model = GenerativeModel(\"gemini-2.0-flash\")\n",
" model = GenerativeModel(\"gemini-3-flash-preview\")\n",
"\n",
" generation_config = GenerationConfig(\n",
" max_output_tokens=2048, temperature=0.9, top_p=1\n",
@@ -108,13 +108,13 @@
"\n",
"As part of this notebook you will learn how to:\n",
"1. Use SQL for preprocessing raw logs at scale, interact with LLMs and do vector search analysis\n",
"1. Use Gemini 2.0 to translate a log sequence into simple natural language summary directly from BigQuery\n",
"1. Use Gemini 3 to translate a log sequence into a simple natural language summary directly from BigQuery\n",
"1. Use a text embedding model to generate a vector embedding for each log summary directly from BigQuery, and store all embeddings in BigQuery vector index for fast lookup\n",
"1. Tune and run vector search-based anomaly detection using BigQuery vector search\n",
"1. Evaluate performance and compare against popular unsupervised and semi-supervised outlier detection ML algorithms\n",
"\n",
"\n",
"This notebook demonstrates how using off-the-shelf Gemini 2.0 + Text Embeddings + BigQuery vector search yields comparable results to custom pre-trained language models including DeepLog, an [LSTM deep neural network](https://dl.acm.org/doi/10.1145/3133956.3134015).\n",
"This notebook demonstrates how using off-the-shelf Gemini 3 + Text Embeddings + BigQuery vector search yields results comparable to custom pre-trained language models, including DeepLog, an [LSTM deep neural network](https://dl.acm.org/doi/10.1145/3133956.3134015).\n",
"\n",
"We also compare vector search-based outlier detection with common [scikit outlier detection](https://scikit-learn.org/stable/modules/outlier_detection.html#) algorithms (OneClassSVM, LocalOutlierFactor) using the same generated embeddings, and we found custom SQL logic using BigQuery vector search to be more accurate and flexible for this use case and dataset.\n",
"\n",
@@ -158,7 +158,7 @@
"source": [
"Outlier detection, specifically novelty detection, is about detecting all anomalies, including 'unknown' types of anomalies (e.g. system failures, new cyberattacks, etc.). Supervised approaches, where you train the model on both normal and abnormal logs, are therefore simply not applicable, so the approach described in this notebook is compared against leading and popular unsupervised and semi-supervised techniques. **Recall** is our primary objective for outlier detection (minimize false negatives) while also keeping reasonably high **precision** (minimize false positives).\n",
"\n",
"We use the open-source labelled HDFS dataset from [Loghub]((https://github.com/logpai/loghub) which is freely accessible and commonly used in log analysis evaluations, including anomaly detection. HDFS dataset provides 11M+ system log lines (\\~575k log sessions) collected from a real Hadoop cluster on 200 nodes, and labelled by Hadoop domain experts to identify the runtime anomalies. For the purpose of this notebook, we use a subset of the HDFS dataset (70k log sessions or ~12%) taking into consideration default BigQuery ML quota limits (e.g. 200 rpm and 72,000 rows per job with `gemini-2.0-flash` at the time of this writing). This way you can run this notebook as is in your own Google Cloud project in a timely fashion without necessarily requesting quota increases."
"We use the open-source labelled HDFS dataset from [Loghub](https://github.com/logpai/loghub), which is freely accessible and commonly used in log analysis evaluations, including anomaly detection. The HDFS dataset provides 11M+ system log lines (\~575k log sessions) collected from a real Hadoop cluster on 200 nodes, and labelled by Hadoop domain experts to identify the runtime anomalies. For the purpose of this notebook, we use a subset of the HDFS dataset (70k log sessions or ~12%) taking into consideration default BigQuery ML quota limits (e.g. 200 rpm and 72,000 rows per job with `gemini-3-flash-preview` at the time of this writing). This way you can run this notebook as is in your own Google Cloud project in a timely fashion without necessarily requesting quota increases."
]
},
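Since recall is the primary objective and precision the secondary one, it can help to see both metrics computed explicitly. A minimal sketch with made-up labels (1 = anomaly), not data from the notebook:

```python
def precision_recall(y_true, y_pred):
    """Compute precision and recall for binary anomaly labels (1 = anomaly)."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return precision, recall

# Hypothetical labels for 10 log sessions
y_true = [0, 0, 1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [0, 1, 1, 0, 1, 1, 0, 0, 0, 0]
p, r = precision_recall(y_true, y_pred)
print(f"precision={p:.2f} recall={r:.2f}")  # prints precision=0.75 recall=0.75
```

Here the one missed anomaly (a false negative) lowers recall, which is exactly the failure mode the notebook tries hardest to avoid.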
{
@@ -217,7 +217,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 220 in embeddings/use-cases/outlier-detection/bq-vector-search-outlier-detection-infra-logs.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -800,7 +800,7 @@
},
"source": [
"### Explain log sequences\n",
"Prompt Gemini 2.0 to translate log sequences into natural language"
"Prompt Gemini 3 to translate log sequences into natural language"
]
},
{
@@ -809,7 +809,7 @@
"id": "l7A220hCh_Ek"
},
"source": [
"#### Create the remote model for Gemini 2.0 in BigQuery"
"#### Create the remote model for Gemini 3 in BigQuery"
]
},
{
@@ -834,7 +834,7 @@
"%%bigquery\n",
"CREATE OR REPLACE MODEL `vs_logs_demo.gemini_1_5_flash`\n",
"REMOTE WITH CONNECTION `us.bq-llm-connection`\n",
"OPTIONS (endpoint = 'gemini-2.0-flash')"
"OPTIONS (endpoint = 'gemini-3-flash-preview')"
]
},
{
@@ -1016,7 +1016,7 @@
"id": "OPRRTIHjNm86"
},
"source": [
"The following **may take 6 hours or less** depending on your Vertex AI quota of requests per minute for `gemini-2.0-flash` base model. The following query goes over a small subset of the dataset: 70k log sequences will be translated into natural language summary. At the time of this writing, the default rate limit for `gemini-2.0-flash` in `us-central1` region is **200 requests/min**. To speed up this step, you can [request quota increase](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas#view-quotas-in-console) for Vertex AI endpoint for `gemini-2.0-flash` for your specific region and project. If you do increase that Vertex AI quota, send an email to bqml-feedback@google.com to adjust and increase your BigQuery ML quota for `ML.GENERATE_TEXT` calls to that base model, since BigQuery ML rate-limits the calls to Vertex AI endpoint accordingly.\n",
"The following **may take 6 hours or less** depending on your Vertex AI quota of requests per minute for the `gemini-3-flash-preview` base model. The following query goes over a small subset of the dataset: 70k log sequences will be translated into natural language summaries. At the time of this writing, the default rate limit for `gemini-3-flash-preview` in the `us-central1` region is **200 requests/min**. To speed up this step, you can [request a quota increase](https://cloud.google.com/vertex-ai/generative-ai/docs/quotas#view-quotas-in-console) for the Vertex AI endpoint for `gemini-3-flash-preview` for your specific region and project. If you do increase that Vertex AI quota, send an email to bqml-feedback@google.com to adjust and increase your BigQuery ML quota for `ML.GENERATE_TEXT` calls to that base model, since BigQuery ML rate-limits the calls to the Vertex AI endpoint accordingly.\n",
"\n",
"The following SQL code first creates the table `hdfs_full_explained`, then creates a procedure that generates content with Gemini and inserts it into the destination table. You call the procedure iteratively to process data in batches and avoid losing processed results to a runtime error such as a query timeout or quota exhaustion, both of which are unlikely with smaller batches."
]
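The timing and batching arithmetic above can be checked directly. The sketch below uses only the numbers quoted in the text (200 requests/min, 72k rows per job, 20k batches over 70k sequences), with a hypothetical stand-in for the stored procedure:

```python
# 70k requests at the quoted default of 200 requests/min:
minutes = 70_000 / 200               # 350 minutes
print(f"~{minutes / 60:.1f} hours")  # ~5.8 hours, consistent with "6 hours or less"

def batch_ranges(total_rows: int, batch_size: int):
    """Split the workload into (offset, size) batches, each within the per-job row quota."""
    return [(start, min(batch_size, total_rows - start))
            for start in range(0, total_rows, batch_size)]

done = set()  # offsets of batches that already succeeded

def process(offset: int, size: int) -> None:
    """Hypothetical stand-in for CALL `vs_logs_demo.explain_hdfs_logs`(offset, size)."""
    done.add(offset)

# 70k rows in batches of 20k stays well within the 72k rows/job limit; on a
# timeout or quota error, only the failing batch needs to be re-run.
for offset, size in batch_ranges(70_000, 20_000):
    if offset not in done:
        process(offset, size)

print(len(done))  # 4 batches: 20k + 20k + 20k + 10k
```

This mirrors the resume-friendly design of the SQL procedure: completed batches persist in the destination table, so a rerun skips them.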
@@ -1120,7 +1120,7 @@
"source": [
"%%bigquery\n",
"-- Process 70k sequences with batches of 20k at a time to stay well within #rows/job quota\n",
"-- for ML.GENERATE_TEXT (72k rows per job with gemini-3-flash-preview)\n",
"-- for ML.GENERATE_TEXT (72k with Gemini 3, 21.6k with Gemini 3)\n",
"-- https://cloud.google.com/bigquery/quotas#cloud_ai_service_functions\n",
"BEGIN\n",
" CALL `vs_logs_demo.explain_hdfs_logs`(1, 20000);\n",
@@ -2264,7 +2264,7 @@
"\n",
"These metrics show how **BigQuery vector search** can perform similarly to, or even better than, common novelty or outlier detection algorithms. This demonstrates the robustness of its underlying native k-means clustering. These results stem from a combination of factors:\n",
"\n",
"- First, Gemini 2.0 effectively summarizes logs sequence including any abnormal activity.\n",
"- First, Gemini 3 effectively summarizes log sequences, including any abnormal activity.\n",
"- Second, the vector embeddings generated by the text embedding model are well suited for clustering similar normal log sequences together, capturing nuances and surfacing most aberrations as outliers.\n",
"- Finally, vector search offers speed and flexibility in fine-tuning distance and neighbor count parameters to optimize the balance between precision and recall.\n"
]
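As a toy illustration of that last point, here is a brute-force k-nearest-neighbor outlier score where both the neighbor count and the distance threshold are tunable to trade precision against recall. The embeddings and threshold are made up for the sketch, not taken from the notebook:

```python
def knn_outlier_scores(vectors, k):
    """Score each vector by its mean distance to its k nearest neighbors."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    scores = []
    for i, v in enumerate(vectors):
        ds = sorted(dist(v, w) for j, w in enumerate(vectors) if j != i)
        scores.append(sum(ds[:k]) / k)
    return scores

# Toy 2-D "embeddings": a tight cluster of normal sequences plus one aberration
embeddings = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1), (5.0, 5.0)]
scores = knn_outlier_scores(embeddings, k=2)

threshold = 1.0  # tunable: lowering it favors recall, raising it favors precision
outliers = [i for i, s in enumerate(scores) if s > threshold]
print(outliers)  # prints [4]: only the distant point is flagged
```

Sweeping `k` and `threshold` against labelled data is the same precision/recall tuning the notebook performs with BigQuery vector search parameters.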
@@ -173,7 +173,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 176 in gemini/agent-engine/evaluating_crewai_agent_engine_customized_template.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -635,7 +635,7 @@
},
"outputs": [],
"source": [
"model = \"vertex_ai/gemini-2.0-flash\""
"model = \"vertex_ai/gemini-3-flash-preview\""
]
},
{
@@ -173,7 +173,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 176 in gemini/agent-engine/evaluating_langchain_agent_engine_prebuilt_template.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -551,7 +551,7 @@
},
"outputs": [],
"source": [
"model = \"gemini-2.0-flash\""
"model = \"gemini-3-flash-preview\""
]
},
{
@@ -175,7 +175,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 178 in gemini/agent-engine/evaluating_langgraph_agent_engine_customized_template.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -634,7 +634,7 @@
},
"outputs": [],
"source": [
"model = \"gemini-2.0-flash\""
"model = \"gemini-3-flash-preview\""
]
},
{
2 changes: 1 addition & 1 deletion gemini/agent-engine/tracing_agents_in_agent_engine.ipynb
@@ -236,7 +236,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 239 in gemini/agent-engine/tracing_agents_in_agent_engine.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -437,7 +437,7 @@
"outputs": [],
"source": [
"agent = LangchainAgent(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-3-flash-preview\",\n",
" model_kwargs={\"temperature\": 0},\n",
" tools=[classify_ticket, search_knowledge_base, escalate_to_human],\n",
" enable_tracing=True,\n",
2 changes: 1 addition & 1 deletion gemini/agent-engine/tutorial_ag2_on_agent_engine.ipynb
@@ -216,7 +216,7 @@
"outputs": [],
"source": [
"# @title Set Configuration Parameters\n",
"MODEL_NAME = \"gemini-2.0-flash-001\" # @param {'type': 'string'}\n",
"MODEL_NAME = \"gemini-3-flash-preview\" # @param {'type': 'string'}\n",
"PROJECT_ID = \"<YOUR_PROJECT_ID>\" # @param {'type': 'string'}\n",
"STAGING_BUCKET = \"gs://<YOUR_GCS_BUCKET>\" # @param {'type': 'string'}\n",
"CACHING_SEED = 42 # @param {'type': 'integer'}\n",
2 changes: 1 addition & 1 deletion gemini/agent-engine/tutorial_alloydb_rag_agent.ipynb
@@ -681,7 +681,7 @@
"\n",
"remote_app = agent_engines.create(\n",
" LangchainAgent(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-3-flash-preview\",\n",
" tools=[similarity_search],\n",
" model_kwargs={\n",
" \"temperature\": 0.1,\n",
@@ -624,7 +624,7 @@
"\n",
"remote_app = agent_engines.create(\n",
" LangchainAgent(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-3-flash-preview\",\n",
" tools=[similarity_search],\n",
" model_kwargs={\n",
" \"temperature\": 0.1,\n",
4 changes: 2 additions & 2 deletions gemini/agent-engine/tutorial_google_maps_agent.ipynb
@@ -347,7 +347,7 @@
"id": "47f07c83a754"
},
"source": [
"The first component of your agent involves the version of the generative model you want to use in your agent. Here you'll use the Gemini 2.0 model:"
"The first component of your agent is the version of the generative model you want to use. Here you'll use the Gemini 3 model:"
]
},
{
},
"outputs": [],
"source": [
"model = \"gemini-2.0-flash\""
"model = \"gemini-3-flash-preview\""
]
},
{
2 changes: 1 addition & 1 deletion gemini/agent-engine/tutorial_langgraph.ipynb
@@ -175,7 +175,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 178 in gemini/agent-engine/tutorial_langgraph.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -378,7 +378,7 @@
"\n",
" # The set_up method is used to define application initialization logic\n",
" def set_up(self) -> None:\n",
" model = ChatVertexAI(model=\"gemini-2.0-flash\")\n",
" model = ChatVertexAI(model=\"gemini-3-flash-preview\")\n",
"\n",
" builder = MessageGraph()\n",
"\n",
2 changes: 1 addition & 1 deletion gemini/agent-engine/tutorial_langgraph_rag_agent.ipynb
@@ -195,7 +195,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 198 in gemini/agent-engine/tutorial_langgraph_rag_agent.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -688,7 +688,7 @@
"\n",
" # The set_up method is used to define application initialization logic\n",
" def set_up(self) -> None:\n",
" model = ChatVertexAI(model=\"gemini-2.0-flash\")\n",
" model = ChatVertexAI(model=\"gemini-3-flash-preview\")\n",
" builder = MessageGraph()\n",
"\n",
" # Checker node\n",
@@ -1147,7 +1147,7 @@
"outputs": [],
"source": [
"agent = HotelBookingAgent(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-3-flash-preview\",\n",
" toolbox_endpoint=TOOLBOX_ENDPOINT,\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
@@ -1338,7 +1338,7 @@
"source": [
"remote_agent = agent_engines.create(\n",
" HotelBookingAgent(\n",
" model=\"gemini-2.0-flash\",\n",
" model=\"gemini-3-flash-preview\",\n",
" toolbox_endpoint=TOOLBOX_ENDPOINT,\n",
" project=PROJECT_ID,\n",
" location=REGION,\n",
@@ -202,7 +202,7 @@
"import IPython\n",
"\n",
"app = IPython.Application.instance()\n",
"app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 205 in gemini/agent-engine/tutorial_vertex_ai_search_rag_agent.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -325,7 +325,7 @@
"id": "47f07c83a754"
},
"source": [
"The first component of your agent involves the version of the generative model you want to use in your agent. Here you'll use the Gemini 2.0 model:"
"The first component of your agent is the version of the generative model you want to use. Here you'll use the Gemini 3 model:"
]
},
{
},
"outputs": [],
"source": [
"model = \"gemini-2.0-flash\""
"model = \"gemini-3-flash-preview\""
]
},
{
@@ -207,7 +207,7 @@
"# import IPython\n",
"\n",
"# app = IPython.Application.instance()\n",
"# app.kernel.do_shutdown(True)"

Check failure (GitHub Actions / Check Spelling) on line 210 in gemini/agents/genai-experience-concierge/agent-design-patterns/function-calling.ipynb: `app.kernel.do_shutdown(True)` matches a line_forbidden.patterns entry: `app\.kernel\.do_shutdown\(True\)`. (forbidden-pattern)
]
},
{
@@ -265,7 +265,7 @@
"\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"CYMBAL_DATASET_LOCATION = \"US\" # @param {type:\"string\"}\n",
"CHAT_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"CHAT_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"CYMBAL_DATASET = \"[project-id].[dataset-id]\" # @param {type:\"string\", placeholder: \"[project-id].[dataset-id]\"}\n",
"\n",
"MAX_STORE_RESULTS = 10 # @param {type:\"integer\"}\n",
@@ -239,9 +239,9 @@
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"CHAT_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"TEST_CASE_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"GUARDRAIL_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}"
"CHAT_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"TEST_CASE_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"GUARDRAIL_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}"
]
},
{
@@ -241,8 +241,8 @@
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"CHAT_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"ROUTER_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"CHAT_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"ROUTER_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"MAX_TURN_HISTORY = 3 # @param {type:\"integer\"}"
]
},
@@ -240,9 +240,9 @@
" PROJECT_ID = str(os.environ.get(\"GOOGLE_CLOUD_PROJECT\"))\n",
"\n",
"REGION = \"us-central1\" # @param {type:\"string\"}\n",
"PLANNER_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"REFLECTOR_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}\n",
"EXECUTOR_MODEL_NAME = \"gemini-2.0-flash-001\" # @param {type:\"string\"}"
"PLANNER_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"REFLECTOR_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}\n",
"EXECUTOR_MODEL_NAME = \"gemini-3-flash-preview\" # @param {type:\"string\"}"
]
},
{
@@ -136,7 +136,7 @@ Each agent's route provides endpoints for invoking the agent, managing conversat
chat_config = chat.ChatConfig(
project="...",
region="us-central1",
chat_model_name="gemini-2.0-flash-001",
chat_model_name="gemini-3-flash-preview",
)

# Run an example streamed query
@@ -84,6 +84,6 @@ class TaskPlannerConfig(pydantic.BaseModel):

project: str
region: str = "us-central1"
planner_model_name: str = "gemini-2.0-flash-001"
executor_model_name: str = "gemini-2.0-flash-001"
reflector_model_name: str = "gemini-2.0-flash-001"
planner_model_name: str = "gemini-3-flash-preview"
executor_model_name: str = "gemini-3-flash-preview"
reflector_model_name: str = "gemini-3-flash-preview"
@@ -30,13 +30,13 @@ class RuntimeSettings(pydantic_settings.BaseSettings):

# sane default values, only configure as needed
region: str = "us-central1"
chat_model_name: str = "gemini-2.0-flash-001"
function_calling_model_name: str = "gemini-2.0-flash-001"
router_model_name: str = "gemini-2.0-flash-001"
guardrail_model_name: str = "gemini-2.0-flash-001"
planner_model_name: str = "gemini-2.0-flash-001"
executor_model_name: str = "gemini-2.0-flash-001"
reflector_model_name: str = "gemini-2.0-flash-001"
chat_model_name: str = "gemini-3-flash-preview"
function_calling_model_name: str = "gemini-3-flash-preview"
router_model_name: str = "gemini-3-flash-preview"
guardrail_model_name: str = "gemini-3-flash-preview"
planner_model_name: str = "gemini-3-flash-preview"
executor_model_name: str = "gemini-3-flash-preview"
reflector_model_name: str = "gemini-3-flash-preview"
max_router_turn_history: int = 3

model_config = pydantic_settings.SettingsConfigDict(
@@ -22,7 +22,7 @@
title="Gemini Chat",
icon="⭐",
description="""
This demo illustrates a simple "agent" which just consists of plain Gemini 2.0 Flash with conversation history.
    This demo illustrates a simple "agent" that just consists of plain Gemini 3 Flash with conversation history.
Response text is streamed using a custom [langgraph.config.get_stream_writer](https://langchain-ai.github.io/langgraph/reference/config/#langgraph.config.get_stream_writer).
""".strip(),
chat_handler=gemini_chat.chat_handler,