Merge branch 'main' into ODSC-63450/oci_odsc_llm
mrDzurb authored Jan 30, 2025
2 parents 44c2231 + d22fdcd commit fc8489a
Showing 189 changed files with 38,468 additions and 2,292 deletions.
2 changes: 1 addition & 1 deletion .github/workflows/publish_release.yml
@@ -38,7 +38,7 @@ jobs:
shell: bash
run: rm -rf llama-index-core/llama_index/core/_static/nltk_cache/corpora/stopwords.zip llama-index-core/llama_index/core/_static/nltk_cache/tokenizers/punkt.zip
- name: Build and publish to pypi
- uses: JRubics/poetry-publish@v2.0
+ uses: JRubics/poetry-publish@v2.1
with:
python_version: ${{ env.PYTHON_VERSION }}
pypi_token: ${{ secrets.LLAMA_INDEX_PYPI_TOKEN }}
117 changes: 117 additions & 0 deletions CHANGELOG.md
@@ -1,5 +1,122 @@
# ChangeLog

## [2025-01-25]

### `llama-index-core` [0.12.14]

- Fix agentworkflow handoffs for non-openai llms (#17631)
- small fixes to the multi-agent workflow demo notebook (#17628)

### `llama-index-embeddings-bedrock` [0.5.0]

- Implement async bedrock embeddings (#17610)

### `llama-index-llms-bedrock-converse` [0.4.4]

- Fix prompt stacking in bedrock converse (#17613)

### `llama-index-llms-deepseek` [0.1.0]

- DeepSeek official API LLM (#17625)

### `llama-index-readers-google` [0.6.0]

- GoogleDriveReader support file extensions (#17620)

## [2025-01-23]

### `llama-index-core` [0.12.13]

- Fixing header_path bug re: markdown level vs. stack depth in MarkdownNodeParser (#17602)
- Advanced text to sql sample rows, adding row retrieval for few-shot prompts (#17479)
- Made the message role of ReAct observation configurable (#17521)
- fix reconstructing a tool in AgentWorkflow (#17596)
- support content blocks in chat templates (#17603)
- Add contextual retrieval support with a new `DocumentContextExtractor` (#17367)

### `llama-index-graph-stores-memgraph` [0.2.1]

- Vector index support for Memgraph's integration (#17570)

### `llama-index-graph-stores-neo4j` [0.4.6]

- Improves connections for neo4j objects and adds some tests (#17562)

### `llama-index-indices-managed-llama-cloud` [0.6.4]

- Add framework integration for composite retrieval (#17536)

### `llama-index-llms-langchain` [0.5.1]

- get valid string when streaming (#17566)

### `llama-index-llms-mistralai` [0.3.2]

- update function calling models in mistral (#17604)

### `llama-index-llms-openai` [0.3.14]

- fix openai.BadRequestError: Invalid value for 'content': expected a string, got null for tool calls (#17556)

### `llama-index-readers-file` [0.4.3]

- Refactor markdown_to_tups method to better handle multi-level headers (#17508)

### `llama-index-readers-web` [0.3.5]

- feat: Agentql Web Loader (#17575)

### `llama-index-tools-linkup-research` [0.3.0]

- add linkup tool (#17541)

### `llama-index-tools-notion` [0.3.1]

- fix: correct the input params of "load_data" in NotionPageReader (#17529)

### `llama-index-vector-stores-pinecone` [0.4.3]

- build: 🆙 replace pinecone-client with pinecone package (#17587)

### `llama-index-vector-stores-postgres` [0.4.2]

- Add support for halfvec vector type (#17534)

## [2025-01-20]

### `llama-index-core` [0.12.12]

- feat: add AgentWorkflow system to support single and multi-agent workflows (#17237)
- Fix image-path validation in ImageNode (#17558)

### `llama-index-indices-managed-vectara` [0.4.0]

- (breaking change) API Migration (#17545)

### `llama-index-llms-anthropic` [0.6.4]

- feat: support direct PDF handling for Anthropic (#17506)

### `llama-index-llms-fireworks` [0.3.1]

- Deepseek-v3 is now supported by fireworks (#17518)

### `llama-index-llms-stepfun` [1.0.0]

- feat: add stepfun integrations (#17514)

### `llama-index-multi-modal-llms-gemini` [0.5.0]

- refact: make GeminiMultiModal a thin wrapper around Gemini (#17501)

### `llama-index-postprocessor-longllmlingua` [0.4.0]

- Add longllmlingua2 integration (#17531)

### `llama-index-readers-web` [0.3.4]

- feat: Hyperbrowser Web Reader (#17489)

## [2025-01-15]

### `llama-index-core` [0.12.11]
2 changes: 1 addition & 1 deletion README.md
@@ -37,7 +37,7 @@ LlamaIndex.TS [(Typescript/Javascript)](https://github.com/run-llama/LlamaIndexT

[Documentation](https://docs.llamaindex.ai/en/stable/)

-[Twitter](https://twitter.com/llama_index)
+[X (formerly Twitter)](https://x.com/llama_index)

[Discord](https://discord.gg/dGcwcsnxhU)

117 changes: 117 additions & 0 deletions docs/docs/CHANGELOG.md
4 changes: 4 additions & 0 deletions docs/docs/api_reference/extractors/documentcontext.md
@@ -0,0 +1,4 @@
::: llama_index.extractors
options:
members:
- DocumentContextExtractor
4 changes: 4 additions & 0 deletions docs/docs/api_reference/llms/deepseek.md
@@ -0,0 +1,4 @@
::: llama_index.llms.deepseek
options:
members:
- DeepSeek
4 changes: 4 additions & 0 deletions docs/docs/api_reference/tools/linkup_research.md
@@ -0,0 +1,4 @@
::: llama_index.tools.linkup_research
options:
members:
- LinkupToolSpec
10 changes: 6 additions & 4 deletions docs/docs/examples/agent/agent_workflow_basic.ipynb
@@ -143,7 +143,7 @@
"source": [
"## Maintaining State\n",
"\n",
"By default, the `AgentWorkflow` will maintain statless between runs. This means that the agent will not have any memory of previous runs.\n",
"By default, the `AgentWorkflow` will maintain stateless between runs. This means that the agent will not have any memory of previous runs.\n",
"\n",
"To maintain state, we need to keep track of the previous state. Since the `AgentWorkflow` is a `Workflow`, the state is stored in the `Context`. This can be passed between runs to maintain state and history."
]
@@ -332,7 +332,9 @@
"\n",
"\n",
"async def set_name(ctx: Context, name: str) -> str:\n",
" await ctx.set(\"name\", name)\n",
" state = await ctx.get(\"state\")\n",
" state[\"name\"] = name\n",
" await ctx.set(\"state\", state)\n",
" return f\"Name set to {name}\"\n",
"\n",
"\n",
@@ -348,8 +350,8 @@
"response = await workflow.run(user_msg=\"My name is Logan\", ctx=ctx)\n",
"print(str(response))\n",
"\n",
"name = await ctx.get(\"name\")\n",
"print(name)"
"state = await ctx.get(\"state\")\n",
"print(state[\"name\"])"
]
},
{
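The state-passing pattern in the notebook diff above can be sketched in plain Python. The `Context` class below is a toy stand-in for `llama_index.core.workflow.Context`, not the real class; only the async get/set shape and the read-mutate-write-back tool pattern mirror the diff, and all names are illustrative.

```python
import asyncio


class Context:
    """Toy stand-in for llama_index.core.workflow.Context (illustrative only)."""

    def __init__(self):
        self._data = {"state": {}}

    async def get(self, key):
        return self._data[key]

    async def set(self, key, value):
        self._data[key] = value


async def set_name(ctx: Context, name: str) -> str:
    # Same shape as the tool in the diff: read state, mutate it, write it back.
    state = await ctx.get("state")
    state["name"] = name
    await ctx.set("state", state)
    return f"Name set to {name}"


async def demo() -> str:
    ctx = Context()
    await set_name(ctx, "Logan")    # first run mutates the shared state
    state = await ctx.get("state")  # a later run, given the same ctx, sees it
    return state["name"]


print(asyncio.run(demo()))  # prints: Logan
```

Because the state dict lives on the context object rather than inside any single run, passing the same `ctx` to successive `workflow.run(...)` calls is what preserves memory between runs.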

0 comments on commit fc8489a
