
Releases: mnemonica-ai/oshepherd

v0.0.12

07 Nov 16:38
52f32e8

Version 0.0.12

Commits

  • [52f32e8] Merge pull request #17 from mnemonica-ai/uvicorn
  • [631d20c] version bumped to 0.0.12
  • [7a93bb9] better logs and workers setting
  • [85b68c4] workers correct default value added
  • [6511dd4] correct i/o types added
  • [d9d38ed] uvicorn server implemented in main function
  • [867ed1e] workers default added
  • [446f693] simplified server using uvicorn instead of gunicorn

v0.0.11

03 Nov 01:17
47019ed

Version 0.0.11

Commits

  • [47019ed] Merge pull request #16 from mnemonica-ai/release_dependencies
  • [0f19012] Merge branch 'main' into release_dependencies
  • [8ac9cd3] version bumped to 0.0.11
  • [df113dd] docs and dependencies updated
  • [65a7434] Merge pull request #15 from mnemonica-ai/fastapi
  • [346c1d4] local oshepherd reference removed
  • [933958e] dependencies updated and tested
  • [6e6ecca] common req headers params abstracted
  • [a84cef9] host and port added as env vars
  • [e388949] routes abstracted to its own modules for all model endpoints
  • [79564fd] health status endpoint test added
  • [924edf2] readme docs updated, dependencies updated
  • [b6f94d9] constants for server data e2e tests
  • [5fab7eb] api server basic implementation using fastapi
  • [cdf7489] gunicorn app added to main server
  • [956e6bd] celery app for fast api
  • [1e78711] custom gunicorn server
  • [5c71292] fast api added as dependency
  • [b75f0f1] requirements updated
  • [f31bfa0] cli and worker updated
  • [1567b33] app updated
  • [49351cb] special gunicorn class
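This release moves the API to FastAPI, abstracts each model endpoint's routes into its own module, and takes host and port from environment variables. A sketch of that layout follows; the env var names, default values, and router shown here are assumptions for illustration, not the project's real code:

```python
import os


def binding_from_env(env: dict[str, str]) -> tuple[str, int]:
    """Resolve host/port from env vars with defaults (illustrative names)."""
    return env.get("OSHEPHERD_HOST", "127.0.0.1"), int(env.get("OSHEPHERD_PORT", "5001"))


def create_app():
    """Assemble the API from per-endpoint routers, one per Ollama endpoint."""
    from fastapi import APIRouter, FastAPI  # lazy import: sketch parses without fastapi

    app = FastAPI(title="oshepherd api (sketch)")

    health = APIRouter()

    @health.get("/health")
    def health_status() -> dict:
        return {"status": "ok"}

    app.include_router(health)
    # In the abstracted layout, each endpoint (generate, chat, embeddings)
    # lives in its own module and exposes a router included here the same way.
    return app
```

Keeping one `APIRouter` per module and composing them with `include_router` is the idiomatic FastAPI equivalent of the per-endpoint route modules the commits describe.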

v0.0.9

17 Jun 02:31
f1a198b

Version 0.0.9

Commits

  • [f1a198b] Merge pull request #12 from mnemonica-ai/development
  • [40f1d04] version bumped to 0.0.9
  • [0ca227f] Merge pull request #11 from mnemonica-ai/only_redis
  • [222a752] Update README.md
  • [97da6b1] better endpoints parity section
  • [21c5ebd] typo fixed
  • [77cfd28] api parity section added
  • [e4d5559] pytest added to requirements
  • [6eb20d1] direct reference to rabbitmq protocol removed
  • [26d7549] rabbit references removed from readme, using redis only
  • [6612b98] broker and backend references changed to be general instead of direct references to rabbit and redis
  • [ecd3d76] Smol typo fixes on README file
  • [5e117fa] Merge pull request #7 from mnemonica-ai/feature_embeddings
  • [15e31d2] basic test added for raw http request
  • [2651fdc] e2e basic test for embeddings endpoint added
  • [6c481de] blueprint available in flask server
  • [0dc9886] worker support for embeddings execution added
  • [3cab250] embeddings endpoint implementation added, including request and response types, along with endpoint blueprint for flask
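The embeddings work above adds request and response types for the new endpoint. A sketch of what such types might look like is below; the field names follow Ollama's /api/embeddings API, but the class names and serialization helper are assumptions:

```python
from dataclasses import asdict, dataclass, field


@dataclass
class EmbeddingsRequest:
    """Parameters mirroring Ollama's embeddings endpoint (illustrative type)."""

    model: str
    prompt: str
    options: dict = field(default_factory=dict)

    def to_payload(self) -> dict:
        """Plain serializable dict, suitable to enqueue as a Celery task argument."""
        return asdict(self)


@dataclass
class EmbeddingsResponse:
    """Worker result relayed back to the HTTP client."""

    embedding: list[float]
```

Keeping the request type trivially serializable matters here because the parameters travel through the message broker to the worker rather than being handled in-process.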

v0.0.6

03 May 21:14
7e8a052

Version 0.0.6

Commits

  • [7e8a052] Merge pull request #6 from mnemonica-ai/development
  • [5785191] version bumped to 0.0.6
  • [7517bbc] Merge pull request #5 from mnemonica-ai/feature_chat
  • [7da4607] linter applied
  • [ad05ac0] better style for readme, formatted
  • [573ce58] comments improved in ollama endpoints definitions
  • [c91e550] readme improved
  • [5184bcc] better default values for redis backend configuration
  • [adcf0a1] better definition of default values for chat and generate request payloads types
  • [4ff05cf] tests added for chat completion basic cases using ollama python package and http requests
  • [4854abb] ollama chat response type added, default values for chat request fixed according to what is expected by server
  • [bbeb978] generalization of completions execution into the same task, ready for generate and chat endpoints for now oshepherd.worker.tasks.exec_completion
  • [0282b9a] basic generalization of generate completion endpoint implemented
  • [8ae0fe3] chat completion endpoint added to main api definitions
  • [15955d9] basic chat completion flask endpoint implemented
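The key change in this release is generalizing generate and chat completions into a single worker task, per the `oshepherd.worker.tasks.exec_completion` commit. One plausible shape for that generalization is a dispatch from endpoint name to Ollama client call; the signature and dispatch table below are illustrative assumptions, not the actual implementation:

```python
def exec_completion(endpoint: str, payload: dict, client=None) -> dict:
    """Route a queued completion request to the matching Ollama client call.

    The optional `client` parameter is a hypothetical injection point that
    makes the dispatch testable without a running Ollama server.
    """
    if client is None:
        import ollama  # a real worker talks to the local Ollama server

        client = ollama.Client()
    handlers = {
        "generate": lambda: client.generate(**payload),
        "chat": lambda: client.chat(**payload),
    }
    if endpoint not in handlers:
        raise ValueError(f"unsupported endpoint: {endpoint}")
    return handlers[endpoint]()
```

A single task keyed by endpoint name keeps the broker-facing surface stable as new completion-style endpoints (like the later embeddings support) are added.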

v0.0.5

29 Apr 13:16
e454f17

Version 0.0.5

Commits

  • [e454f17] Merge pull request #4 from mnemonica-ai/development
  • [a369a09] version bumped to 0.0.5
  • [c21acac] Merge pull request #3 from mnemonica-ai/fix_connections
  • [077255e] better comments for ollama tasks
  • [cb58ca4] ollama celery task abstracted to its own module
  • [202d3ef] Merge branch 'development' into fix_connections

v0.0.4

28 Apr 16:26
407b744

Version 0.0.4

Commits

  • [407b744] Merge pull request #2 from mnemonica-ai/development
  • [37a5f26] version bumped to 0.0.4
  • [6b3b51c] Merge pull request #1 from mnemonica-ai/versioning
  • [ac14f54] bump & release gh action added pointing to main branch

v0.0.3

28 Apr 16:20
407b744

First Release: https://pypi.org/project/oshepherd/0.0.3/

  • Basic API behavior: serve a single HTTP endpoint mirroring the original Ollama generate() endpoint. The server receives the incoming request parameters, queues a message with those parameters in RabbitMQ through Celery, waits for the Celery task to finish, reads the returned response from the Redis backend, and then relays the remote Ollama server's response to the HTTP Ollama client.
  • Basic WORKER behavior: respond to messages queued in RabbitMQ using Celery, fire a generate() request against the local Ollama server within the worker instance (via the Ollama Python client), and return the response to the Redis backend.
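The two bullets above describe a round trip: a Flask endpoint enqueues the request parameters through Celery, a worker runs generate() against its local Ollama server, and the result comes back through the Redis backend. A compact sketch of that flow, under stated assumptions (the task name `exec_generate`, the parameter filtering, and the timeout are illustrative, not the 0.0.3 code):

```python
def make_celery_app(broker_url: str, backend_url: str):
    """Celery app wired to a RabbitMQ broker and a Redis result backend."""
    from celery import Celery  # lazy so the sketch imports without celery installed

    return Celery("oshepherd", broker=broker_url, backend=backend_url)


def parse_generate_params(body: dict) -> dict:
    """Keep only the generate() parameters that get forwarded to the worker."""
    allowed = {"model", "prompt", "options", "stream"}
    params = {k: v for k, v in body.items() if k in allowed}
    if "model" not in params or "prompt" not in params:
        raise ValueError("generate requires 'model' and 'prompt'")
    params.setdefault("stream", False)  # queued execution cannot stream chunks back
    return params


def create_api(celery_app):
    """API side: a Flask app exposing /api/generate as a copy of the Ollama endpoint."""
    from flask import Flask, jsonify, request

    app = Flask(__name__)

    @app.post("/api/generate")
    def generate():
        params = parse_generate_params(request.get_json(force=True))
        async_result = celery_app.send_task("exec_generate", args=[params])
        # Block until the worker stores its response in the Redis backend.
        return jsonify(async_result.get(timeout=300))

    return app


def register_worker_task(celery_app):
    """Worker side: run generate() against the local Ollama server."""

    @celery_app.task(name="exec_generate")
    def exec_generate(params: dict) -> dict:
        import ollama

        return ollama.Client().generate(**params)

    return exec_generate
```

Calling `async_result.get()` inside the request handler is what makes the proxy synchronous from the client's point of view, which is why streaming is disabled when the parameters are queued.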