@@ -5,6 +5,39 @@ All notable changes to this project will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/),
 and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
 
+## v0.1.3 (2023-11-08)
+
+### New Features
+
+- <csr-id-1019402eeaa6bff176a228b477486105d16d36ef /> more `async` function variants
+- <csr-id-c190df6ebfd02ef5f3e0fd50d82a456ef426e6e6 /> add `LlamaSession.model`
+
+### Other
+
+- <csr-id-0a0d5f3fce1c46f914b5f48802241f200538c4f7 /> typo
+
+### Commit Statistics
+
+<csr-read-only-do-not-edit />
+
+- 5 commits contributed to the release.
+- 3 commits were understood as [conventional](https://www.conventionalcommits.org).
+- 0 issues like '(#ID)' were seen in commit messages
+
+### Commit Details
+
+<csr-read-only-do-not-edit />
+
+<details><summary>view details</summary>
+
+ * **Uncategorized**
+    - Typo ([`0a0d5f3`](https://github.com/binedge/llama_cpp-rs/commit/0a0d5f3fce1c46f914b5f48802241f200538c4f7))
+    - Release llama_cpp v0.1.2 ([`4d0b130`](https://github.com/binedge/llama_cpp-rs/commit/4d0b130be8f250e599908bab042431db8aa2f553))
+    - More `async` function variants ([`1019402`](https://github.com/binedge/llama_cpp-rs/commit/1019402eeaa6bff176a228b477486105d16d36ef))
+    - Add `LlamaSession.model` ([`c190df6`](https://github.com/binedge/llama_cpp-rs/commit/c190df6ebfd02ef5f3e0fd50d82a456ef426e6e6))
+    - Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 ([`a9e5813`](https://github.com/binedge/llama_cpp-rs/commit/a9e58133cb1c1d4d45f99a7746e0af7da1a099e1))
+</details>
+
 ## v0.1.2 (2023-11-08)
 
 ### New Features
@@ -16,7 +49,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 <csr-read-only-do-not-edit />
 
-- 2 commits contributed to the release.
+- 3 commits contributed to the release.
 - 2 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages
 
@@ -27,6 +60,7 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 <details><summary>view details</summary>
 
  * **Uncategorized**
+    - Release llama_cpp v0.1.2 ([`368a5de`](https://github.com/binedge/llama_cpp-rs/commit/368a5dec4379ccdbe7b68c40535f30e13f23d8c2))
     - More `async` function variants ([`dcfccdf`](https://github.com/binedge/llama_cpp-rs/commit/dcfccdf721eb47a364cce5b1c7a54bcf94335ac0))
     - Add `LlamaSession.model` ([`56285a1`](https://github.com/binedge/llama_cpp-rs/commit/56285a119633682951f8748e85c6b8988e514232))
 </details>
@@ -39,24 +73,33 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
 - <csr-id-3eddbab3cc35a59acbe66fa4f5333a9ca0edb326 /> Remove debug binary from Cargo.toml
 
+### Chore
+
+- <csr-id-dbdd9a4a2d813d990e5829a09fc5c8df75d9d54b /> Remove debug binary from Cargo.toml
+
 ### New Features
 
 - <csr-id-3bada658c9139af1c3dcdb32c60c222efb87a9f6 /> add `LlamaModel::load_from_file_async`
+- <csr-id-bbf9f69a2dd068a3a20199ffce44d3c8a25b64d5 /> add `LlamaModel::load_from_file_async`
 
 ### Bug Fixes
 
 - <csr-id-b676baa3c1a6863c7afd7a88b6f7e8ddd2a1b9bd /> require `llama_context` is accessed from behind a mutex
   This solves a race condition when several `get_completions` threads are spawned at the same time
 - <csr-id-4eb0bc9800877e460fe0d1d25398f35976b4d730 /> `start_completing` should not be invoked on a per-iteration basis
   There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
+- <csr-id-81e5de901a3da88a97ba00c6a36e303d8708380d /> require `llama_context` is accessed from behind a mutex
+  This solves a race condition when several `get_completions` threads are spawned at the same time
+- <csr-id-27706de1a471b317e4b7b4fdd4c5bbabfbd95ed6 /> `start_completing` should not be invoked on a per-iteration basis
+  There's still some UB that can be triggered due to llama.cpp's threading model, which needs patching up.
 
 ### Commit Statistics
 
 <csr-read-only-do-not-edit />
 
-- 6 commits contributed to the release.
+- 11 commits contributed to the release.
 - 13 days passed between releases.
-- 4 commits were understood as [conventional](https://www.conventionalcommits.org).
+- 8 commits were understood as [conventional](https://www.conventionalcommits.org).
 - 0 issues like '(#ID)' were seen in commit messages
 
 ### Commit Details
@@ -67,6 +110,11 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0
 
  * **Uncategorized**
     - Release llama_cpp_sys v0.2.1, llama_cpp v0.1.1 ([`ef4e3f7`](https://github.com/binedge/llama_cpp-rs/commit/ef4e3f7a3c868a892f26acfae2a5211de4900d1c))
+    - Add `LlamaModel::load_from_file_async` ([`bbf9f69`](https://github.com/binedge/llama_cpp-rs/commit/bbf9f69a2dd068a3a20199ffce44d3c8a25b64d5))
+    - Remove debug binary from Cargo.toml ([`dbdd9a4`](https://github.com/binedge/llama_cpp-rs/commit/dbdd9a4a2d813d990e5829a09fc5c8df75d9d54b))
+    - Require `llama_context` is accessed from behind a mutex ([`81e5de9`](https://github.com/binedge/llama_cpp-rs/commit/81e5de901a3da88a97ba00c6a36e303d8708380d))
+    - `start_completing` should not be invoked on a per-iteration basis ([`27706de`](https://github.com/binedge/llama_cpp-rs/commit/27706de1a471b317e4b7b4fdd4c5bbabfbd95ed6))
+    - Update to llama.cpp 0a7c980 ([`eb8f627`](https://github.com/binedge/llama_cpp-rs/commit/eb8f62777aa63787004771d86d34a8862b3a4157))
     - Add `LlamaModel::load_from_file_async` ([`3bada65`](https://github.com/binedge/llama_cpp-rs/commit/3bada658c9139af1c3dcdb32c60c222efb87a9f6))
     - Remove debug binary from Cargo.toml ([`3eddbab`](https://github.com/binedge/llama_cpp-rs/commit/3eddbab3cc35a59acbe66fa4f5333a9ca0edb326))
     - Require `llama_context` is accessed from behind a mutex ([`b676baa`](https://github.com/binedge/llama_cpp-rs/commit/b676baa3c1a6863c7afd7a88b6f7e8ddd2a1b9bd))