LLaMANexus

Popular repositories

  1. llama3 (Public)
     Forked from meta-llama/llama3
     The official Meta Llama 3 GitHub site
     Python · 1 star

  2. llama.cpp (Public)
     Forked from ggml-org/llama.cpp
     LLM inference in C/C++
     C++ · 1 star

  3. PurpleLlama (Public)
     Forked from meta-llama/PurpleLlama
     Set of tools to assess and improve LLM security.
     Python · 1 star

  4. executorch (Public)
     Forked from pytorch/executorch
     On-device AI across mobile, embedded and edge for PyTorch
     C++ · 1 star

  5. llama-cpp-wasm (Public)
     Forked from tangledgroup/llama-cpp-wasm
     WebAssembly (Wasm) build and bindings for llama.cpp
     JavaScript · 1 star

  6. maid (Public)
     Forked from Mobile-Artificial-Intelligence/maid
     Maid is a cross-platform Flutter app for interfacing with GGUF / llama.cpp models locally, and with Ollama and OpenAI models remotely.
     Dart · 1 star

Repositories

Showing 10 of 45 repositories
  • unsloth (Public)
    Forked from unslothai/unsloth
    Finetune Llama 3.1, Mistral, Phi & Gemma LLMs 2-5x faster with 80% less memory
    Python · 0 stars · Apache-2.0 · 2,878 forks · 0 issues · 0 PRs · Updated Feb 7, 2025

  • llama-gguf-optimize (Public)
    Forked from robbiemu/llama-gguf-optimize
    Scripts and tools for optimizing quantizations in llama.cpp with GGUF imatrices.
    Python · 0 stars · LGPL-3.0 · 1 fork · 0 issues · 0 PRs · Updated Feb 2, 2025

  • llama.cpp (Public)
    Forked from ggml-org/llama.cpp
    LLM inference in C/C++
    C++ · 1 star · MIT · 11,596 forks · 0 issues · 0 PRs · Updated Jan 23, 2025

  • LlamaEdge (Public)
    Forked from LlamaEdge/LlamaEdge
    The easiest & fastest way to run customized and fine-tuned LLMs locally or on the edge
    Rust · 0 stars · Apache-2.0 · 113 forks · 0 issues · 0 PRs · Updated Jan 14, 2025

  • executorch (Public)
    Forked from pytorch/executorch
    On-device AI across mobile, embedded and edge for PyTorch
    C++ · 1 star · 511 forks · 0 issues · 0 PRs · Updated Jan 8, 2025

  • LLaMA-Factory (Public)
    Forked from hiyouga/LLaMA-Factory
    Efficiently Fine-Tune 100+ LLMs in WebUI (ACL 2024)
    Python · 0 stars · Apache-2.0 · 6,047 forks · 0 issues · 0 PRs · Updated Jan 4, 2025

  • llama-cpp-python (Public)
    Forked from abetlen/llama-cpp-python
    Python bindings for llama.cpp
    Python · 0 stars · MIT · 1,156 forks · 0 issues · 0 PRs · Updated Jan 1, 2025

  • ik_llama.cpp (Public)
    Forked from ikawrakow/ik_llama.cpp
    llama.cpp fork with additional SOTA quants and improved performance
    C++ · 0 stars · MIT · 17 forks · 0 issues · 0 PRs · Updated Dec 20, 2024

  • wllama (Public)
    Forked from ngxson/wllama
    WebAssembly binding for llama.cpp, enabling in-browser LLM inference
    C++ · 0 stars · MIT · 37 forks · 0 issues · 0 PRs · Updated Dec 19, 2024

  • node-llama-cpp (Public)
    Forked from withcatai/node-llama-cpp
    Run AI models locally on your machine with Node.js bindings for llama.cpp. Enforce a JSON schema on the model output at the generation level.
    TypeScript · 0 stars · MIT · 121 forks · 0 issues · 0 PRs · Updated Dec 9, 2024

People

This organization has no public members.
