
ollama create can't access local files #46

Open
lee-b opened this issue Sep 30, 2024 · 1 comment
Labels
enhancement New feature or request

lee-b commented Sep 30, 2024

ollama create -f some_file_path some_name should be able to access some_file_path in order to import the file. However, since it's a client to the ollama docker instance, the path is resolved inside the container's filesystem rather than on the host, so this doesn't work.

This is far from perfect, but I wrote this little script a while ago, before switching to harbor from my own setup. It works by mapping the current folder to its equivalent mount point inside the ollama docker container, and then printing the appropriate ollama command to run:

#!/usr/bin/env python3
"""Map a local gguf path to its mount point inside the ollama container
and emit the matching `ollama create` command."""

import os.path
import re
import sys
from pathlib import Path

def main():
    # The directory this script lives in; it must be the folder that is
    # mounted into the ollama container.
    this_dir = Path(sys.argv[0]).parent.absolute()

    try:
        gguf_path = Path(sys.argv[1]).absolute()
    except IndexError:
        print("No gguf file given to import!", file=sys.stderr)
        return 20

    if not gguf_path.exists():
        print(f"ERROR: expected {gguf_path!r} to be a valid, existing gguf file", file=sys.stderr)
        return 20

    # The gguf must live under this_dir, or it won't be visible in the container.
    if Path(os.path.commonpath((gguf_path, this_dir))) != this_dir:
        print(f"ERROR: expected {gguf_path!r} to be under {this_dir!r}", file=sys.stderr)
        return 20

    # Strip multi-part shard suffixes like "-00001-of-00009" from the model name.
    model_name = re.sub(r'-\d+-of-\d+', '', gguf_path.stem)

    rel_gguf_path = gguf_path.relative_to(this_dir)
    rel_model_path = rel_gguf_path.parent / f"{model_name}.model"

    # Where this_dir is mounted inside the ollama container.
    container_root = Path("/data/llms")

    container_gguf_path = container_root / rel_gguf_path
    container_model_path = container_root / rel_model_path

    # Write the modelfile next to the gguf so the container sees it too
    # (anchored under this_dir so it works regardless of the caller's CWD).
    with open(this_dir / rel_model_path, 'w') as model_fp:
        model_fp.write(f"FROM {container_gguf_path}\n")

    print(f"Model file created/updated. To import, run:\n\nollama create -f '{container_model_path}' '{model_name}'\n")

    return 0

if __name__ == "__main__":
    sys.exit(main())
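
For example, assuming the script is saved as ollama_import.py (the name is arbitrary) at the root of the folder that is mounted at /data/llms in the container, importing a sharded gguf from a models/ subfolder looks like:

./ollama_import.py models/foo-00001-of-00002.gguf
Model file created/updated. To import, run:

ollama create -f '/data/llms/models/foo.model' 'foo'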

I think harbor already does some similar llm-folder mapping to share llm files across as many services as possible. Perhaps not for ollama, since it stores most models in its own custom format, but if that llm folder is mapped in a standard way, then harbor ollama create could at least translate the path of a create file that lives inside that shared folder.

NOTE: being able to create ollama models from modelfiles is very important, e.g. to tune models that ship with large default contexts down to a smaller context size so that they fit into VRAM.
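
For instance, a minimal modelfile for that kind of tuning might look like the following (the FROM path is a placeholder and the num_ctx value is just an example; num_ctx is the ollama parameter that controls the context window size):

FROM /data/llms/models/foo.gguf
PARAMETER num_ctx 8192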

av (Owner) commented Sep 30, 2024

Hi, ollama is one of the older services, so it lacks the feature you're describing; many of the CLI services, however, do have it. I agree that allowing the ollama CLI access to PWD would be a good usability enhancement when working with modelfiles.

av added the enhancement label Sep 30, 2024