Commit a7ba436

Authored by barduinor, logan-markewich, ravi03071991, dependabot[bot], and spreeni
merge from llamaindex main (#15)
* v0.11.5 (run-llama#15834)
* Box bug fix 202409 (run-llama#15836)
* fix document metadata for box file
* fix BoxSearchOptions class initialization
* bump versions to 0.2.1
* Add oreilly course cookbooks (run-llama#15845)
* correct course name (run-llama#15859)
* clone llama-deploy docs during docs builds (run-llama#15794)
* chore(deps-dev): bump cryptography from 43.0.0 to 43.0.1 in /llama-index-core (run-llama#15840)
* Refresh Opensearch index after delete operation (run-llama#15854)
* Update ImageReader file loading logic (run-llama#15848)
* Minor bug in SQQE (run-llama#15861)
* v0.11.6 (run-llama#15863)
* remove widgets from recent notebooks (run-llama#15864)
* Fix image document deserialization issue (run-llama#15857)
* Bugfix: Remove warnings during data insertion for Hybrid Search due to duplicated columns in the schema (run-llama#15846)
* Add TablestoreVectorStore. (run-llama#15657)
* Removed unused `llama-index-llms-anthropic` dependency from Bedrock Converse (run-llama#15869) Removed unused `llama-index-llms-anthropic` dependency. Incremented to `0.3.0`.
* Update bedrock.py (run-llama#15879) Fix minor issue in error message
* Add tool calling example in NVIDIA notebook (run-llama#15842)
* NVIDIA: Completion fixes (run-llama#15820)
* Fix PGVectorStore with latest pydantic, update pydantic imports (run-llama#15886)
* add type ignore for streaming agents (run-llama#15887)
* [wip] force openai structured output (run-llama#15706)
* feature: Make SentenceSplitter's secondary_chunking_regex optional (run-llama#15882)
* Bug fix for KuzuPropertyGraphStore: Allow upserting relations even when chunks are absent (run-llama#15889)
* Missing tablestore (run-llama#15890)
* v0.11.7 (run-llama#15891)
* nit: extend workflow tutorial (run-llama#15901)
* Modify generate_qa_embedding_pairs to use LLM from Settings (run-llama#15904)
* docs: document how to disable global timeout along with more docs (run-llama#15912)
* fix: model_name property and pydantic v2 in Azure Inference package (run-llama#15907)
* fix: model_name property and pydantic v2
* fix: tests
* Add InternalServerError to retry decorator (run-llama#15921)
* Add InternalServerError to retry decorator
* Bump version
* docs: fix broken link (run-llama#15924)
* Adding vector store for Azure Cosmos DB NoSql (run-llama#14158)
* Initial changes
* initial changes fixed with example
* Initial commit with all the code and test changes
* fixing test cases
* adding jupyter notebook
* cleaning files
* resolving comments
* fix linting
* more linting
* add a basic readme
---------
Co-authored-by: Aayush Kataria <aayushkataria3011@gmail.com>
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
* update docs for custom embeddings (run-llama#15929)
* Add doc.id to Langchain format conversions (run-llama#15928)
* Update RankLLM with new rerankers (run-llama#15892)
* Feature Request run-llama#15810 :: Add DynamoDBChatStore (run-llama#15917)
* Adding support for `MetadataFilters` to WordLift Vector Store (run-llama#15905)
* Repair and route colab links to official repo (run-llama#15900)
* feat: Add a retry policy to steps (run-llama#15757)
* Fix error handling in sharepoint reader (run-llama#15868)
* feat: add llama-index llms alibabacloud_aisearch integration (run-llama#15850)
* Opensearch Serverless filtered query support using knn_score script (run-llama#15899)
* Mistral AI LLM Integration fixes (run-llama#15906)
* Fix RagCLI (run-llama#15931)
* v0.11.8 (run-llama#15933)
* docs: document retry policy (run-llama#15938)
* docs: update AI21 docs and readme (run-llama#15937)
* docs: update ai21 docs and readme with jamba 1.5 and tool calling
* docs: update ai21 docs and readme with jamba 1.5 and tool calling
* docs: update ai21 docs and readme with jamba 1.5 and tool calling
* Retry retriable errors in neo4j integrations (run-llama#15915)
* Add four alibabacloud-aisearch llama-index integrations: rerank, node_parser, readers, embeddings (run-llama#15934)
* Differentiate sync and async calls in OpenSearchVectorClient (run-llama#15945)
* feat(postgres): add support for engine parameters (run-llama#15951)
* feat(postgres): add support for engine parameters - Introduced engine_params to support passing parameters to create_engine. - Updated create_engine and create_async_engine calls to include engine_params. - Initialized engine_params in the constructor.
* style(lint): reformat for readability
* refactor(postgres): rename engine_params to create_engine_kwargs
* refactor(postgres): rename engine_params to create_engine_kwargs
* chore: bump version to 0.2.3
* fix(postgres): rename engine_params to create_engine_kwargs
* update falkordb client (run-llama#15940)
* update falkordb client
* bump version
* update version
* fix
* fix: Error when parsing output if tool name contains non-English characters (run-llama#15956)
* fix attribute error in PGVectorStore (run-llama#15961)
* Catch nest_asyncio errors (run-llama#15975)
* NUDGE (run-llama#15954)
* Add support for o1 openai models (run-llama#15979)
* force temp to 1.0 for o1 (run-llama#15983)
* Update concepts.md - fix link for Structured Data Extraction page (run-llama#15982)
* Fix the import path example for SimpleMongoReader (run-llama#15988)
* Do not pass system prompt from fn calling runner to fn calling worker (run-llama#15986)
* Add callback manager to retriever query engine from args (run-llama#15990)
* v0.11.9 (run-llama#15992)
* Fix: get all documents from Elasticsearch KVStore (run-llama#16006)
* Fix Pydantic numeric validation in openai/base.py (run-llama#15993)
* feat: add quip reader (run-llama#16000)
* [docs/example] Human in loop workflow example (run-llama#16011)
* start choose own adventure hitl nb
* hitl example
* note on alternative implementation
* add module guides and run prepare_for_build
* fix: removed author info from quip reader (run-llama#16012)
* Fix Pydantic models definition (run-llama#16008)
* chore(deps): bump litellm from 1.43.7 to 1.44.8 in /llama-index-integrations/embeddings/llama-index-embeddings-litellm (run-llama#16013)
* Hotfix: Fix Citations Text (run-llama#16015)
* Jacques/opik integration (run-llama#16007)
* update llamacloud index with image nodes (run-llama#15996)
* chore: add o1 models pinned versions (run-llama#16025)
* add sparse embedding abstraction (run-llama#16018)
* Implement `get_nodes` on `PGVectorStore` (run-llama#16026)
* Implement `get_nodes` on `PGVectorStore`
* Bump version num in pyproject.toml
* Update docstring in base.py in MilvusVectorStore adding COSINE as available similarity metric (run-llama#16031) According to Milvus documentation, **COSINE** as similarity metric is supported (both Milvus and Milvus Lite) but was missing from the Llama-Index docs. [Link to Milvus official docs](https://milvus.io/docs/metric.md?tab=floating#Similarity-Metrics). I've checked in the [code](https://github.com/run-llama/llama_index/blob/723c2533ed4b7b43b7d814c89af1838f0f1994c2/llama-index-integrations/vector_stores/llama-index-vector-stores-milvus/llama_index/vector_stores/milvus/base.py#L256), so indeed COSINE is supported; no further changes are needed.
* Fix: unnecessary warning issue in HuggingFace LLM when tokenizer is provided as argument (run-llama#16035) (run-llama#16037)
* Attempt #3 of context/result refactor (run-llama#16036)
* temporarily limit lancedb version (run-llama#16045)
* fix: new Data Connector adaption for DashVector (run-llama#16028)
* v0.11.10 (run-llama#16046)
* Fix serde issue for huggingface inference API embedding (run-llama#16053)
* wip
* wip
* fix: fix regression in OctoAI llm provider after 0.11 (run-llama#16002)
* fix OctoAI llm provider
* adjust code to the latest client
* resolve conflicts
* fix message format conversion
* do not pass max_tokens if None
* add chat test
* bump version
* update LanceDB integration (run-llama#16057)
* Adding docstring inside preprocess base code. (run-llama#16060)
* Add default to optional args for BedrockEmbedding (run-llama#16067)
* wip
* wip
* Fix optional type for Cohere embedding (run-llama#16068)
* fix vertex pydantic arguments (run-llama#16069)
* Fix result order (run-llama#16078)
* Fix result order
* Bump version
* fix elasticsearch embedding async function (run-llama#16083)
* docs: add decorators to the api reference (run-llama#16081)
* fix incorrect parameters in VertexAIIndex (run-llama#16080)
* feat: update JinaEmbedding for v3 release (run-llama#15971)
* Mistralai enable custom endpoint from env (run-llama#16084)
* Implement async for multi modal ollama (run-llama#16091)
* Remove circular package dep (run-llama#16070)
* [chore] fix incorrect `handler.context` for `handler.ctx` in docs (run-llama#16101)
* fix bug missing import (run-llama#16096)
* Add more workflow references to docs (run-llama#16102)
* fix cohere tests (run-llama#16104)
* Use response synthesizer in context chat engines (run-llama#16017)
* Fix mongodb hybrid search, also pass hybrid_top_k in vector retriever (run-llama#16105)
* bump openai agent deps (run-llama#16112)
* Fixed up test_vectorstore. (run-llama#16113)
* Issue 16071: wordpress requires username, password (run-llama#16072)
* Issue 16071: wordpress requires username, password
* Adding changes suggested in PR template
* Use Optional typing keyword
* feat: add configurable base_url field in rerank (run-llama#16050)
* add base_url
* version bump
* add default
---------
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
Co-authored-by: Logan <logan.markewich@live.com>
* Enhance Pandas Query Engine Output Processor (run-llama#16052)
* Enhance output processor to temporarily adjust display options.
* Make format changes.
* Update pyproject.toml
---------
Co-authored-by: Logan <logan.markewich@live.com>
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
* fix workflow docs (run-llama#16117)
* Update OpenVINO LLM pyproject.toml (run-llama#16130)
* Wordpress: Allow control of whether Pages and/or Posts are retrieved (run-llama#16128)
* Async achat put operation (run-llama#16127)
* [fix] handler.stream_events() doesn't yield `StopEvent` (run-llama#16115)
* Improved TLM Rag cookbook (run-llama#16109)
* Update chat message class for multi-modal (run-llama#15969)
* Add support for Path for SimpleDirectoryReader (run-llama#16108)
* Add TopicNodeParser based on MedGraphRAG paper (run-llama#16131)
* Sql markdown response (run-llama#16103)
* v0.11.11 (run-llama#16134)
* Correct Pydantic warning(s) issued for llama-index-llms-ibm (run-llama#16141) Fix Pydantic warnings in llama-index-llms-ibm
* feat: add drive link to google drive reader (run-llama#16156)
* Introducing new VoyageAI models (run-llama#16150)
* Add `required_exts` option to SharePoint reader (run-llama#16152)
* User-defined schema in MilvusVectorStore (run-llama#16151)
* safe format prompt variables in strings with JSON (run-llama#15734)
* account for tools in prompt helper (run-llama#16157)
* v0.11.12 (run-llama#16159)
---------
Co-authored-by: Logan <logan.markewich@live.com>
Co-authored-by: Ravi Theja <ravi03071991@gmail.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: Yannic Spreen-Ledebur <35889034+spreeni@users.noreply.github.com>
Co-authored-by: Adnan Alkattan <Adnankattan9@gmail.com>
Co-authored-by: Laurie Voss <github@seldo.com>
Co-authored-by: saipjkai <84132316+saipjkai@users.noreply.github.com>
Co-authored-by: ScriptShi <xunjian.sl@alibaba-inc.com>
Co-authored-by: Bryce Freshcorn <26725654+brycecf@users.noreply.github.com>
Co-authored-by: Graham Tibbitts <grahamtt@users.noreply.github.com>
Co-authored-by: Rashmi Pawar <168514198+raspawar@users.noreply.github.com>
Co-authored-by: Jerry Liu <jerryjliu98@gmail.com>
Co-authored-by: Caroline Binley <39920563+carolinebinley@users.noreply.github.com>
Co-authored-by: Prashanth Rao <35005448+prrao87@users.noreply.github.com>
Co-authored-by: Asi Greenholts <88270351+TupleType@users.noreply.github.com>
Co-authored-by: Massimiliano Pippi <mpippi@gmail.com>
Co-authored-by: Facundo Santiago <fasantia@microsoft.com>
Co-authored-by: gsa9989 <117786401+gsa9989@users.noreply.github.com>
Co-authored-by: Aayush Kataria <aayushkataria3011@gmail.com>
Co-authored-by: Ryan Nguyen <96593302+xpbowler@users.noreply.github.com>
Co-authored-by: David Riccitelli <david@wordlift.io>
Co-authored-by: GICodeWarrior <GICodeWarrior@gmail.com>
Co-authored-by: Javier Torres <javierandrestorresreyes@gmail.com>
Co-authored-by: 才胜 <1392197444@qq.com>
Co-authored-by: George Dittmar <georgedittmar@gmail.com>
Co-authored-by: Tarun Jain <jaintarun.abd17@gmail.com>
Co-authored-by: miri-bar <160584887+miri-bar@users.noreply.github.com>
Co-authored-by: Tomaz Bratanic <bratanic.tomaz@gmail.com>
Co-authored-by: kobiche <56874660+kobiche@users.noreply.github.com>
Co-authored-by: Arthur Moura Carvalho <141850116+armoucar-neon@users.noreply.github.com>
Co-authored-by: Avi Avni <avi.avni@gmail.com>
Co-authored-by: Cheese <11363971+cheese-git@users.noreply.github.com>
Co-authored-by: Zac Wellmer <9603276+zacwellmer@users.noreply.github.com>
Co-authored-by: Ragul Kachiappan <ragul.kachiappan@techjays.com>
Co-authored-by: Vinicius <53495782+Vractos@users.noreply.github.com>
Co-authored-by: Bidimpata-Kerim Aramyan-Tshimanga <bk.tshimanga@gmail.com>
Co-authored-by: Laura Ceconi <laura.cec@hotmail.com>
Co-authored-by: N1eo <zzhsaga@gmail.com>
Co-authored-by: Chirag Agrawal <chirag.agrawal93@gmail.com>
Co-authored-by: Andrei Fajardo <92402603+nerdai@users.noreply.github.com>
Co-authored-by: David Oplatka <david.oplatka@vectara.com>
Co-authored-by: Jacques Verré <jverre@gmail.com>
Co-authored-by: Harpinder <harpinderjot36@gmail.com>
Co-authored-by: Richard Liu <richardliu7896@gmail.com>
Co-authored-by: Jorge Barrachina Gutiérrez <1333901+ntkog@users.noreply.github.com>
Co-authored-by: Sourabh <Sourabh72101@gmail.com>
Co-authored-by: OceanPresent <oceanpresent@163.com>
Co-authored-by: Simon Suo <simonsdsuo@gmail.com>
Co-authored-by: Ayush Chaurasia <ayush.chaurarsia@gmail.com>
Co-authored-by: preprocess-co <137915090+preprocess-co@users.noreply.github.com>
Co-authored-by: guodong <songoodong@163.com>
Co-authored-by: polarbear567 <269739606@qq.com>
Co-authored-by: Aaron Ji <127167174+DresAaron@users.noreply.github.com>
Co-authored-by: enrico-stauss <73635664+enrico-stauss@users.noreply.github.com>
Co-authored-by: Selim Çavaş <92586913+selimcavas@users.noreply.github.com>
Co-authored-by: Casey Clements <caseyclements@users.noreply.github.com>
Co-authored-by: Jonathan Springer <jonpspri@gmail.com>
Co-authored-by: Anirudh31415926535 <anirudh@cohere.com>
Co-authored-by: Matin Khajavi <58955268+MatinKhajavi@users.noreply.github.com>
Co-authored-by: Ethan Yang <ethan.yang@intel.com>
Co-authored-by: Matthew Turk <mturk24@users.noreply.github.com>
Co-authored-by: Andrew Kim <59037923+andrewwkimm@users.noreply.github.com>
Co-authored-by: José Henrique Luckmann <133380513+JoseLuckmann@users.noreply.github.com>
Co-authored-by: Emanuel Ferreira <contatoferreirads@gmail.com>
Co-authored-by: fzowl <160063452+fzowl@users.noreply.github.com>
Co-authored-by: João Martins <11438285+jl-martins@users.noreply.github.com>
Co-authored-by: Petros Mitseas <petros_94@icloud.com>
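One of the changes squashed above, `feat(postgres): add support for engine parameters (run-llama#15951)`, exposes SQLAlchemy engine options on the Postgres vector store through a `create_engine_kwargs` argument (initially named `engine_params`). A minimal usage sketch, assuming the usual `PGVectorStore.from_params(...)` connection parameters; the connection values and the specific engine options shown are illustrative only:

```python
# Hypothetical sketch for run-llama#15951: forwarding SQLAlchemy
# create_engine()/create_async_engine() options through PGVectorStore.
from llama_index.vector_stores.postgres import PGVectorStore

vector_store = PGVectorStore.from_params(
    host="localhost",        # assumed local Postgres instance
    port=5432,
    database="vector_db",
    user="postgres",
    password="password",
    table_name="llama_index_vectors",
    embed_dim=1536,          # must match the embedding model's dimension
    # Added in run-llama#15951: passed through to SQLAlchemy's engine factory.
    create_engine_kwargs={"pool_pre_ping": True, "pool_size": 5},
)
```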
1 parent 0b92dd8 commit a7ba436

File tree

688 files changed: +31,806 additions, -3,364 deletions


.readthedocs.yaml

Lines changed: 3 additions & 0 deletions
@@ -9,6 +9,9 @@ build:
   os: ubuntu-22.04
   tools:
     python: "3.12"
+  jobs:
+    pre_build:
+      - python docs/merge_llama_deploy_docs.py
 
 mkdocs:
   configuration: docs/mkdocs.yml

CHANGELOG.md

Lines changed: 299 additions & 0 deletions
@@ -1,5 +1,304 @@

# ChangeLog

## [2024-09-22]

### `llama-index-core` [0.11.12]

- Correct Pydantic warning(s) issued for llm base class (#16141)
- globally safe format prompt variables in strings with JSON (#15734)
- account for tools in prompt helper and response synthesizers (#16157)

### `llama-index-readers-google` [0.4.1]

- feat: add drive link to google drive reader metadata (#16156)

### `llama-index-readers-microsoft-sharepoint` [0.3.2]

- Add required_exts option to SharePoint reader (#16152)

### `llama-index-vector-stores-milvus` [0.2.4]

- Support user-defined schema in MilvusVectorStore (#16151)

## [2024-09-20]

### `llama-index-core` [0.11.11]

- Use response synthesizer in context chat engines (#16017)
- Async chat memory operation (#16127)
- Sql query add option for markdown response (#16103)
- Add support for Path for SimpleDirectoryReader (#16108)
- Update chat message class for multi-modal (#15969)
- fix: `handler.stream_events()` doesn't yield StopEvent (#16115)
- pass `hybrid_top_k` in vector retriever (#16105)

### `llama-index-embeddings-elasticsearch` [0.2.1]

- fix elasticsearch embedding async function (#16083)

### `llama-index-embeddings-jinaai` [0.3.1]

- feat: update JinaEmbedding for v3 release (#15971)

### `llama-index-experimental` [0.3.3]

- Enhance Pandas Query Engine Output Processor (#16052)

### `llama-index-indices-managed-vertexai` [0.1.1]

- fix incorrect parameters in VertexAIIndex client (#16080)

### `llama-index-node-parser-topic` [0.1.0]

- Add TopicNodeParser based on MedGraphRAG paper (#16131)

### `llama-index-multi-modal-llms-ollama` [0.3.2]

- Implement async for multi modal ollama (#16091)

### `llama-index-postprocessor-cohere-rerank` [0.2.1]

- feat: add configurable base_url field in rerank (#16050)

### `llama-index-readers-file` [0.2.2]

- fix bug missing import for bytesio (#16096)

### `llama-index-readers-wordpress` [0.2.2]

- Wordpress: Allow control of whether Pages and/or Posts are retrieved (#16128)
- Fix Issue 16071: wordpress requires username, password (#16072)

### `llama-index-vector-stores-lancedb` [0.2.1]

- fix hybrid search with latest lancedb client (#16057)

### `llama-index-vector-stores-mongodb` [0.3.0]

- Fix mongodb hybrid search top-k specs (#16105)

## [2024-09-16]

### `llama-index-core` [0.11.10]

- context/result refactor for workflows (#16036)
- add sparse embedding abstraction (#16018)
- Fix Pydantic models numeric validation (#16008)
- Human in loop workflow example (#16011)

### `llama-index-callbacks-opik` [0.1.0]

- opik integration (#16007)

### `llama-index-indices-managed-llama-cloud` [0.3.1]

- update llamacloud index with image nodes (#15996)

### `llama-index-indices-managed-vectara` [0.2.2]

- Hotfix: Fix Citations Text (#16015)

### `llama-index-llms-huggingface` [0.3.4]

- Fix: unnecessary warning issue in HuggingFace LLM when tokenizer is provided as argument (#16037)

### `llama-index-readers-dashvector` [0.3.0]

- fix: new Data Connector adaption for DashVector (#16028)

### `llama-index-readers-quip` [0.1.0]

- add quip reader (#16000)

### `llama-index-sparse-embeddings-fastembed` [0.1.0]

- add fastembed sparse embeddings (#16018)

### `llama-index-vector-stores-elasticsearch` [0.2.1]

- Fix: get all documents from Elasticsearch KVStore (#16006)

### `llama-index-vector-stores-lancedb` [0.2.3]

- temporarily limit lancedb version (#16045)

### `llama-index-vector-stores-postgres` [0.2.5]

- Implement `get_nodes()` on PGVectorStore (#16026)

## [2024-09-12]

### `llama-index-core` [0.11.9]

- Add callback manager to retriever query engine from args (#15990)
- Do not pass system prompt from fn calling runner to fn calling worker (#15986)
- fix: Error when parsing react output if tool name contains non-English characters (#15956)

### `llama-index-embeddings-alibabacloud-aisearch` [0.1.0]

- Add four alibabacloud-aisearch llama-index integrations: rerank, node_parser, readers, embeddings (#15934)

### `llama-index-experimental` [0.3.1]

- Add NUDGE Finetuning (#15954)

### `llama-index-graph-stores-falkordb` [0.2.2]

- update falkordb client (#15940)

### `llama-index-llms-openai` [0.2.5]

- Add support for o1 openai models (#15979)
- force temp to 1.0 for o1 (#15983)

### `llama-index-node-parser-alibabacloud-aisearch` [0.1.0]

- Add four alibabacloud-aisearch llama-index integrations: rerank, node_parser, readers, embeddings (#15934)

### `llama-index-postprocessor-alibabacloud-aisearch-rerank` [0.1.0]

- Add four alibabacloud-aisearch llama-index integrations: rerank, node_parser, readers, embeddings (#15934)

### `llama-index-readers-alibabacloud-aisearch` [0.1.0]

- Add four alibabacloud-aisearch llama-index integrations: rerank, node_parser, readers, embeddings (#15934)

### `llama-index-vector-stores-opensearch` [0.3.0]

- Differentiate sync and async calls in OpenSearchVectorClient (#15945)

### `llama-index-vector-stores-postgres` [0.2.4]

- fix attribute error in PGVectorStore (#15961)
- add support for engine parameters (#15951)

### `llama-index-vector-stores-wordlift` [0.4.5]

- Catch nest_asyncio errors (#15975)

## [2024-09-09]

### `llama-index-core` [0.11.8]

- feat: Add a retry policy config to workflow steps (#15757)
- Add doc id to Langchain format conversions (#15928)

### `llama-index-chat-store-dynamodb` [0.1.0]

- Add DynamoDBChatStore (#15917)

### `llama-index-cli` [0.3.1]

- Fix RagCLI pydantic error (#15931)

### `llama-index-llms-alibabacloud-aisearch` [0.1.0]

- add llama-index llms alibabacloud_aisearch integration (#15850)

### `llama-index-llms-mistralai` [0.2.3]

- Make default mistral model support function calling with `large-latest` (#15906)

### `llama-index-llms-vertex` [0.3.4]

- Add InternalServerError to retry decorator (#15921)

### `llama-index-postprocessor-rankllm-rerank` [0.3.0]

- Update RankLLM with new rerankers (#15892)

### `llama-index-vector-stores-azurecosmosnosql` [1.0.0]

- Adding vector store for Azure Cosmos DB NoSql (#14158)

### `llama-index-readers-microsoft-sharepoint` [0.3.1]

- Fix error handling in sharepoint reader, fix error with download file (#15868)

### `llama-index-vector-stores-wordlift` [0.4.4]

- Adding support for MetadataFilters to WordLift Vector Store (#15905)

### `llama-index-vector-stores-opensearch` [0.2.2]

- Opensearch Serverless filtered query support using knn_score script (#15899)

## [2024-09-06]

### `llama-index-core` [0.11.7]

- Make SentenceSplitter's secondary_chunking_regex optional (#15882)
- force openai structured output (#15706)
- fix assert error, add type ignore for streaming agents (#15887)
- Fix image document deserialization issue (#15857)

### `llama-index-graph-stores-kuzu` [0.3.2]

- Bug fix for KuzuPropertyGraphStore: Allow upserting relations even when chunks are absent (#15889)

### `llama-index-llms-bedrock-converse` [0.3.0]

- Removed unused llama-index-llms-anthropic dependency from Bedrock Converse (#15869)

### `llama-index-vector-stores-postgres` [0.2.2]

- Fix PGVectorStore with latest pydantic, update pydantic imports (#15886)

### `llama-index-vector-stores-tablestore` [0.1.0]

- Add TablestoreVectorStore (#15657)

## [2024-09-05]

### `llama-index-core` [0.11.6]

- add llama-deploy docs to docs builds (#15794)
- Add oreilly course cookbooks (#15845)

### `llama-index-readers-box` [0.2.1]

- Various bug fixes (#15836)

### `llama-index-readers-file` [0.2.1]

- Update ImageReader file loading logic (#15848)

### `llama-index-tools-box` [0.2.1]

- Various bug fixes (#15836)

### `llama-index-vector-stores-opensearch` [0.2.1]

- Refresh Opensearch index after delete operation (#15854)

## [2024-09-04]

### `llama-index-core` [0.11.5]

- remove unneeded assert in property graph retriever (#15832)
- make simple property graphs serialize again (#15833)
- fix json schema for fastapi return types on core components (#15816)

### `llama-index-llms-nvidia` [0.2.2]

- NVIDIA llm: Add Completion for starcoder models (#15802)

### `llama-index-llms-ollama` [0.3.1]

- add ollama response usage (#15773)

### `llama-index-readers-dashscope` [0.2.1]

- fix pydantic v2 validation errors (#15800)

### `llama-index-readers-discord` [0.2.1]

- fix: convert Document id from int to string in DiscordReader (#15806)

### `llama-index-vector-stores-mariadb` [0.1.0]

- Add MariaDB vector store integration package (#15564)

## [2024-09-02]

### `llama-index-core` [0.11.4]
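The `llama-index-core` [0.11.8] entry above adds a retry policy config to workflow steps (#15757). A minimal sketch of how a step might opt in, based on the workflow docs added alongside that change; the `ConstantDelayRetryPolicy` name and its `delay`/`maximum_attempts` parameters are assumptions here, and `flaky_call()` is a stand-in for work that can fail transiently:

```python
import random

from llama_index.core.workflow import StartEvent, StopEvent, Workflow, step
from llama_index.core.workflow.retry_policy import ConstantDelayRetryPolicy


async def flaky_call() -> str:
    # Stand-in for an unreliable external call (e.g. a rate-limited API).
    if random.random() < 0.5:
        raise RuntimeError("transient failure")
    return "ok"


class RetryingWorkflow(Workflow):
    # Retry this step up to 3 times, waiting 5 seconds between attempts,
    # instead of failing the whole workflow on the first exception.
    @step(retry_policy=ConstantDelayRetryPolicy(delay=5, maximum_attempts=3))
    async def call_flaky_service(self, ev: StartEvent) -> StopEvent:
        return StopEvent(result=await flaky_call())
```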