
Releases: av/harbor

v0.1.19

13 Sep 10:31
@av

v0.1.19 - lm-evaluation-harness integration


lm-evaluation-harness provides a unified framework to test generative language models on a large number of different evaluation tasks.

Starting

# [Optional] pre-build the image
harbor build lmeval

Refer to the configuration guide for Harbor services.

# Run evals
harbor lmeval --tasks gsm8k,hellaswag

# Open results folder
harbor lmeval results
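Since lmeval wraps lm-evaluation-harness, extra evaluation flags can presumably be forwarded to the underlying harness. The flags below come from lm-evaluation-harness itself; whether harbor passes them through unchanged is an assumption:

```shell
# Hypothetical quick smoke test: limit each task to 10 samples.
# --limit and --num_fewshot are lm-evaluation-harness flags;
# pass-through via harbor is assumed, not confirmed.
harbor lmeval --tasks gsm8k --limit 10 --num_fewshot 5
```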

Full Changelog: v0.1.18...v0.1.19

v0.1.18

12 Sep 16:54
@av

v0.1.18

This is another maintenance release, mainly focused on the bench functionality.

  • vllm is bumped to v0.6.0 by default; harbor now also uses an image with bitsandbytes pre-installed (run harbor build vllm to pre-build it)
  • bench - judge prompt, eval log, and exponential backoff for LLM calls
  • CheeseBench is out, smells good though

Full Changelog: v0.1.17...v0.1.18

v0.1.17

09 Sep 22:55
@av

v0.1.17

This is a maintenance and bugfixes release without new service integrations.

  • bench service fixes
    • correctly handling interrupts
    • fixing broken API key support for the LLM and the Judge
  • bench now renders a simple HTML report
  • bench now records task completion time
  • Breaking change: harbor bench is now harbor bench run
  • aphrodite - switching to 0.6.0 release images (different docker repo, changed internal port)
  • aphrodite - configurable version
  • #12 fixed - Nvidia detection now checks for nvidia-container-toolkit presence instead of the Docker runtimes check
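The breaking bench rename means existing scripts need a one-word update; a before/after sketch (the run name is illustrative, and the --name flag is the one shown in the v0.1.16 notes):

```shell
# Before v0.1.17
harbor bench --name my-run

# From v0.1.17 onwards, runs are started via the `run` subcommand
harbor bench run --name my-run
```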

Full Changelog: v0.1.16...v0.1.17

v0.1.16

08 Sep 22:52
@av

v0.1.16 bench

Screenshot of Apache Superset with data from bench

Something new this time - not an integration, but rather a custom-built service for Harbor.

bench is a built-in benchmark service for measuring the quality of LLMs. It has a few specific design goals in mind:

  • Work with OpenAI-compatible APIs (not running LLMs on its own)
  • Benchmark tasks and success criteria are defined by you
  • Focused on chat/instruction tasks

# [Optional] pre-build the image
harbor build bench

# Run the benchmark
# --name is required to give this run a meaningful name
harbor bench --name bench

# Open the results (folder)
harbor bench results

harbor doctor

A very lightweight troubleshooting utility

user@os:~/code/harbor$ h doctor
00:52:24 [INFO] Running Harbor Doctor...
00:52:24 [INFO] ✔ Docker is installed and running
00:52:24 [INFO] ✔ Docker Compose is installed
00:52:24 [INFO] ✔ .env file exists and is readable
00:52:24 [INFO] ✔ default.env file exists and is readable
00:52:24 [INFO] ✔ Harbor workspace directory exists
00:52:24 [INFO] ✔ CLI is linked
00:52:24 [INFO] Harbor Doctor checks completed successfully.

Full Changelog: v0.1.15...v0.1.16

v0.1.15

07 Sep 14:23
@av

omnichain integration

Handle: omnichain
URL: http://localhost:34081

Efficient visual programming for AI language models.

omnichain UI screenshot

Starting

# [Optional] pre-build the image
harbor build omnichain

# Start the service
harbor up omnichain

# [Optional] Open the UI
harbor open omnichain

Harbor runs a custom version of omnichain that is compatible with webui. See example workflow (Chat about Harbor CLI) in the service docs.

Misc

  • webui config cleanup
  • Instructions for copilot in Harbor repo
  • Fixing workspace for bionicgpt service: missing gitignore, fixfs routine

Full Changelog: v0.1.14...v0.1.15

v0.1.14

05 Sep 22:00
@av

Lobe Chat integration

Lobe Chat splash image

Lobe Chat - an open-source, modern-design AI chat framework.

Starting

# Will start lobechat alongside
# the default webui
harbor up lobechat

If you want to make LobeChat your default UI, please see the information below:

# Replace the default webui with lobechat
# afterwards, you can just run `harbor up`
harbor defaults rm webui
harbor defaults add lobechat

Note

LobeChat only supports a list of predefined models for Ollama; they can't be pre-configured and have to be selected from the UI at runtime.

Misc

  • half-baked autogpt service, not documented as it's not integrated with any of the harbor services due to its implementation
  • Updating the harbor how prompt to reflect recent releases
  • Harbor User Guide - high-level user documentation

Full Changelog: v0.1.13...v0.1.14

v0.1.13

05 Sep 08:15
@av

v0.1.13

  • It's now possible to set the desired mistralrs version:

# Alias
harbor mistralrs version

# Via config
harbor config get mistralrs.version

# Update
harbor mistralrs version 0.4
harbor config set mistralrs.version 0.4
  • macOS compatibility fixes
    • Log level selection (macOS ships bash v3, which lacks declare -A)
    • The sed signature differs: -i requires an explicit (possibly empty '') suffix argument, affecting harbor config update
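The GNU/BSD sed difference behind that last fix can be seen directly: on Linux, -i takes an optional suffix glued to the flag, while macOS sed requires the suffix as a separate (possibly empty) argument:

```shell
# Create a scratch file to edit
printf 'foo\n' > /tmp/demo.txt

# GNU sed (Linux): -i alone edits in place
sed -i 's/foo/bar/' /tmp/demo.txt

# BSD sed (macOS) would instead need an explicit empty backup suffix:
#   sed -i '' 's/foo/bar/' /tmp/demo.txt

cat /tmp/demo.txt   # bar
```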

Full Changelog: v0.1.12...v0.1.13

v0.1.12

04 Sep 10:40
@av

v0.1.12 - aichat integration


AIChat is an all-in-one AI CLI tool featuring Chat-REPL, Shell Assistant, RAG, AI Tools & Agents, and More.

# [Optional] pre-build the image
harbor build aichat

# Use aichat CLI
harbor aichat -f ./migrations.md 'how to run migrations?'
harbor aichat -e 'install nvm'
harbor aichat -c 'fibonacci in js'
harbor aichat -f https://github.com/av/harbor/wiki/Services 'list supported services'

Misc

Log levels

Harbor now routes all non-parseable logs to stderr by default. You can also adjust the log level for the CLI output (most existing logs are INFO).

# Show current log level, INFO by default
harbor config get log.level

# Set log level
harbor config set log.level ERROR

# Log levels
# DEBUG | INFO | WARN | ERROR
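Because non-parseable logs now go to stderr, stdout stays clean for scripting; a minimal sketch assuming this stream split:

```shell
# Capture only the machine-readable stdout,
# discarding the CLI's own log lines on stderr
level="$(harbor config get log.level 2>/dev/null)"
echo "$level"
```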

Ollama modelfiles

Harbor workspace now has a place for custom modelfiles for Ollama.

# See existing modelfiles
ls $(harbor home)/ollama/modelfiles

# Copy custom modelfiles to the workspace
cp /path/to/Modelfile $(harbor home)/ollama/modelfiles/my.Modelfile

# Create custom model from a modelfile
harbor ollama create -f /modelfiles/my.Modelfile my-model
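A minimal custom Modelfile might look like the following sketch, using Ollama's standard Modelfile directives; the base model name and parameter values are purely illustrative:

```shell
# Write an example Modelfile (model name and settings are illustrative)
cat > /tmp/my.Modelfile <<'EOF'
FROM llama3.1
SYSTEM "You are a concise assistant."
PARAMETER temperature 0.7
EOF
```

It could then be copied into $(harbor home)/ollama/modelfiles and registered with harbor ollama create as shown above.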

Full Changelog: v0.1.11...v0.1.12

v0.1.11

03 Sep 10:39
@av

v0.1.11 - Perplexica integration


Perplexica is an AI-powered search engine. It is an open-source alternative to Perplexity AI.

Starting

# [Optional] Pull the perplexica images
# ahead of starting the service
harbor pull perplexica

# Start the service, it makes
# little sense to run it without searxng
harbor up perplexica searxng

# [Optional] Open the service in browser
harbor open perplexica

Harbor will automatically connect perplexica with ollama and searxng if running together.

Full Changelog: v0.1.10...v0.1.11

v0.1.10

02 Sep 12:37
@av

v0.1.10 - ComfyUI Integration

ComfyUI screenshot

This UI lets you design and execute advanced Stable Diffusion pipelines using a graph/nodes/flowchart-based interface.

Starting

# [Optional] Pre-pull the comfyui images, ~4GB
# otherwise will be pulled on start
harbor pull comfyui

# Start the service
# Note that this will
harbor up comfyui

# [Recommended] see service logs to monitor
# initial downloads and setup
harbor logs comfyui -n 100

# Integrated with Open WebUI by default
harbor open

# [Optional] once started, open the UI
# in your default browser
harbor open comfyui

# [Optional] open output folder in File Manager
# to get access to the output files
harbor comfyui output

The splash image was, of course, generated using this new integration in Harbor.


Full Changelog: v0.1.9...v0.1.10