Releases: mgonzs13/llama_ros

4.2.0

10 Jan 11:45

Changelog from version 4.1.8 to 4.2.0:

212f5f1 new version 4.2.0
40121ba new line removed from llava
46f3f05 new reranking and embeddings
1292f11 llama.cpp updated
c68761a cleaning chat llama ros
7f9d184 minor fixes to goal in execute
43173bb not reset image in llava
490f4a0 minor fix in README
b88d44b minor fixes
d309f5c minor fix in llava
56cd9d0 llama.cpp updated
4f203c5 demos and examples fixed in README
8292411 fixing python imports in demos
566a736 fixing rag demo
8a01839 chatllama_tools_node renamed to chatllama_tools_demo_node
b8ed82a updating langchain versions
4abf2d0 sorting python imports
bfa6e2a updating chroma version
8c191ee moving get_metada service - embedding and rerank models will not have the get_metada service
51fff6c fixing rerank by setting normalization to -1
2eab6d6 LangChain Tools on Chat (#12)
b037ad3 llama.cpp updated
17b8719 phi-4 added
d1a24c4 new embedding models

4.1.8

07 Jan 12:02

Changelog from version 4.1.7 to 4.1.8:

9bf2fce new version 4.1.8
0ee364f Qwen2-VL yaml updated
ed22b00 llama.cpp updated
e005735 llava override eval batch instead of vector
6262e24 Qwen2-VL support added
335250b frieren image for llava demo
e261285 llama.cpp updated
b9e5d18 llama.cpp updated

4.1.7

28 Dec 17:13

Changelog from version 4.1.6 to 4.1.7:

f0a0af9 new version 4.1.7
efd8c97 fixing license comments
8275fe4 ifndef guard names fixed
eff180d new llama logs
60187e4 huggingface-hub upgraded to 0.27.0
44bbb71 Falcon3 example added
9c2ac6d llama.cpp updated
beb4a22 llama.cpp updated
741b01e vendor C++ standard set to 17
39db082 llama.cpp updated + new penalize sampling
8bf89d9 Update README.md
2b9a0df workflow names fixed

4.1.6

13 Dec 10:58

Changelog from version 4.1.5 to 4.1.6:

379e971 new version 4.1.6
b2e0997 close inactive issues workflow
a1db9de updating workflows to use permissions
3679ba1 llama.cpp updated + new kv_cache types
f977a38 llama.cpp updated

4.1.5

03 Dec 10:17

Changelog from version 4.1.4 to 4.1.5:

701dda1 new version 4.1.5
460bfaf license comments fixed
f32182d llama_params ifndef fixed
868c70c llama.cpp updated
efad6bc Mistral model added
59957a0 Hermes model updated
edd93d3 llama.cpp updated
1106ec7 cron added to docker build workflow

4.1.4

01 Dec 17:37

Changelog from version 4.1.3 to 4.1.4:

893ae80 new version 4.1.4
7374334 fixing logs
be3ad64 llama.cpp updated
39f5283 llama.cpp updated + ggml_amx removed

4.1.3

28 Nov 20:32

Changelog from version 4.1.2 to 4.1.3:

5212102 new version 4.1.3
a729e44 new workflow to create releases
d0294ea removing wrong option for CURL
0fe12b0 llama.cpp updated
c86501b debug param removed
4ca5739 llama.cpp updated
3e770cf devices param added
879a1ab llama.cpp updated + new params sampling

4.1.2

24 Nov 22:11
  • Improving includes order
  • Remove bos from llama-3 prompt
  • llama.cpp b4157

4.1.1

23 Nov 21:00

4.1.0

21 Nov 11:03
  • Getting metadata from GGUF model through llama.cpp
  • New metadata msgs (Metadata, GeneralInfo, ModelInfo, AttentionInfo, RoPEInfo, TokenizerInfo)
  • New service to get the metadata of the LLM/VLM
  • llama.cpp b4149