spike: install Ollama + FunctionGemma local inference stack #533

@iAmGiG

Description

Summary

Set up local LLM inference infrastructure for FunctionGemma experimentation.

Background

FunctionGemma is a 270M-parameter model purpose-built for function calling. It runs locally via Ollama with a footprint of roughly 300 MB, making it suitable for laptop deployment.

Reference: https://blog.google/technology/developers/functiongemma/

Tasks

  • Install Ollama on dev machine
  • Pull FunctionGemma model (ollama pull functiongemma)
  • Add autogen-ext[ollama] to requirements.txt
  • Verify the Ollama server is running on the default port (11434)
  • Basic smoke test: call FunctionGemma with a simple prompt (see the sketch after this list)
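
A minimal smoke-test sketch, assuming the Ollama server is on its default port and the model tag is functiongemma (adjust if the published tag differs). It calls Ollama's /api/generate endpoint directly, so it has no dependency on autogen-ext yet:

```python
# Minimal smoke test against a locally running Ollama server.
# Assumes `ollama serve` is up on the default port and that the model tag
# below matches whatever `ollama pull` actually downloaded.
import requests

OLLAMA_URL = "http://localhost:11434/api/generate"
MODEL = "functiongemma"  # assumption: adjust if the published tag differs


def smoke_test() -> None:
    resp = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "prompt": "Reply with the single word: pong",
            "stream": False,
        },
        timeout=120,
    )
    resp.raise_for_status()
    # Ollama returns the generated text in the "response" field.
    print(resp.json()["response"])


if __name__ == "__main__":
    smoke_test()
```

If this prints a sensible completion, the install and model pull are working; the autogen-ext[ollama] dependency only matters for later integration work.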

Acceptance Criteria

  • Ollama running locally
  • FunctionGemma responds to basic function-calling prompts (see the sketch after this list)
  • No changes to existing codebase yet
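
A sketch of how the function-calling criterion could be checked, assuming the pulled FunctionGemma tag supports Ollama's tools field on /api/chat; the get_weather tool schema is a hypothetical example, not part of any existing code:

```python
# Function-calling acceptance check via Ollama's chat endpoint.
# Assumption: the pulled FunctionGemma tag accepts the `tools` field.
# The get_weather schema below is a made-up example for illustration.
import requests

payload = {
    "model": "functiongemma",
    "messages": [{"role": "user", "content": "What's the weather in Berlin?"}],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "description": "Get the current weather for a city",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
    "stream": False,
}

resp = requests.post("http://localhost:11434/api/chat", json=payload, timeout=120)
resp.raise_for_status()
message = resp.json()["message"]
# Pass if the model emitted a structured tool call rather than plain text.
print(message.get("tool_calls") or message.get("content"))
```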

Notes

This is infrastructure setup only. No integration with existing code.

Metadata

Labels

enhancement (New feature or request)
