This is sample code illustrating a methodology for performing Kubernetes (kubectl
) operations with an LLM assistant.
Thanks to the Kubectl-GPT project for the great prompt used as a reference.
Usage:
pip install openai
export OPENAI_API_KEY="$your-key"
python3 kubectl-chat.py "List pods that are Pending and show their pending reasons"
If using an OpenAI proxy, export these environment variables before running the script:
export OPENAI_API_BASE=http://$your-proxy
export OPENAI_API_KEY="$your-key"
export MODEL="$other-model-name" #"gpt-3.5-turbo-16k"
Theory:
- Create a prompt asking the LLM (OpenAI by default) to act as a script generator
- Show the LLM some examples (both good and bad ones)
- Execute the generated shell script locally
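The three steps above can be sketched as follows. This is a minimal illustration, not the original script: the system prompt and its good/bad examples are hypothetical stand-ins for the Kubectl-GPT prompt, and the OpenAI call assumes the openai>=1.0 Python client (the original may use the legacy 0.x interface).

```python
import os
import re
import subprocess

# Hypothetical system prompt with few-shot examples; the real Kubectl-GPT
# prompt is more elaborate.
SYSTEM_PROMPT = (
    "You are a kubectl script generator. Reply with a single shell script "
    "only, no explanation.\n"
    "Good example: kubectl get pods --field-selector=status.phase=Pending\n"
    "Bad example: Sure! Here is a script you can use: ..."
)

def build_messages(question: str) -> list:
    """Step 1: assemble the prompt (system prompt plus the user's request)."""
    return [
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ]

def extract_script(reply: str) -> str:
    """Strip optional markdown code fences from the LLM reply."""
    match = re.search(r"```(?:sh|bash|shell)?\n(.*?)```", reply, re.DOTALL)
    return (match.group(1) if match else reply).strip()

def ask_llm(question: str) -> str:
    """Step 2: call the chat API (honors the OPENAI_API_BASE proxy variable)."""
    from openai import OpenAI  # assumes the openai>=1.0 client
    client = OpenAI(
        api_key=os.environ["OPENAI_API_KEY"],
        base_url=os.environ.get("OPENAI_API_BASE"),
    )
    resp = client.chat.completions.create(
        model=os.environ.get("MODEL", "gpt-3.5-turbo-16k"),
        messages=build_messages(question),
    )
    return resp.choices[0].message.content

def run_script(script: str) -> str:
    """Step 3: execute the generated shell script locally, return its stdout."""
    result = subprocess.run(["sh", "-c", script], capture_output=True, text=True)
    return result.stdout
```

In practice, the entry point would chain these as `run_script(extract_script(ask_llm(sys.argv[1])))`; a real tool should show the generated script to the user for confirmation before executing it.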
Relevant projects: