Labels
enhancement (New feature or request)
Description
Summary
Set up local LLM inference infrastructure for FunctionGemma experimentation.
Background
FunctionGemma is a 270M-parameter model purpose-built for function calling. It runs locally via Ollama with a ~300MB footprint, making it suitable for laptop deployment.
Reference: https://blog.google/technology/developers/functiongemma/
Tasks
- Install Ollama on dev machine
- Pull FunctionGemma model (`ollama pull functiongemma`)
- Add `autogen-ext[ollama]` to requirements.txt
- Verify Ollama server runs on default port (11434)
- Basic smoke test: call FunctionGemma with a simple prompt (see the sketch after this list)
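A minimal smoke-test sketch against the local Ollama HTTP API on the default port. The model tag `functiongemma` and the `get_weather` tool are placeholder assumptions; adjust the tag to whatever `ollama list` reports after the pull.

```python
# Hypothetical smoke test against a local Ollama server (default port 11434).
# Assumes the pulled model is tagged "functiongemma"; adjust to the actual tag.
import json

import requests

OLLAMA_URL = "http://localhost:11434/api/chat"

# A single toy tool definition in Ollama's OpenAI-style tools format.
tools = [
    {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get the current weather for a city.",
            "parameters": {
                "type": "object",
                "properties": {"city": {"type": "string"}},
                "required": ["city"],
            },
        },
    }
]

payload = {
    "model": "functiongemma",
    "messages": [{"role": "user", "content": "What is the weather in Berlin?"}],
    "tools": tools,
    "stream": False,
}

resp = requests.post(OLLAMA_URL, json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]

# A successful function-calling response should include a "tool_calls" entry.
print(json.dumps(message, indent=2))
```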
Acceptance Criteria
- Ollama running locally
- FunctionGemma responds to basic function-calling prompts
- No changes to existing codebase yet
Notes
This is infrastructure setup only. No integration with existing code.
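For the `autogen-ext[ollama]` requirements.txt entry, a standalone check (no changes to existing code) could look like the sketch below. It assumes autogen-ext's `OllamaChatCompletionClient`; the `functiongemma` model tag and the `model_info` fields are assumptions and may need adjusting to the pulled model and the installed autogen-ext version.

```python
# Standalone check that the autogen-ext Ollama client can reach the model.
# Assumes Ollama is running on the default port and the tag is "functiongemma".
import asyncio

from autogen_core.models import UserMessage
from autogen_ext.models.ollama import OllamaChatCompletionClient


async def main() -> None:
    client = OllamaChatCompletionClient(
        model="functiongemma",
        # Model tags not in autogen-ext's built-in list typically need explicit
        # capability metadata; exact fields may vary by autogen-ext version.
        model_info={
            "vision": False,
            "function_calling": True,
            "json_output": False,
            "family": "unknown",
            "structured_output": False,
        },
    )
    result = await client.create(
        [UserMessage(content="Reply with a short greeting.", source="user")]
    )
    print(result.content)
    await client.close()


asyncio.run(main())
```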