Flare AI Kit template for Social AI Agents.

- **Secure AI Execution**: Runs within a Trusted Execution Environment (TEE) featuring remote attestation support for robust security.
- **Built-in Chat UI**: Interact with your AI via a TEE-served chat interface.
- **Gemini Fine-Tuning Support**: Fine-tune foundational models with custom datasets.
- **Social media integrations**: X and Telegram integrations with rate limiting and retry mechanisms.
1. **Prepare Environment File**: Rename `.env.example` to `.env` and update these model fine-tuning parameters:

   | Parameter | Description | Default |
   | --- | --- | --- |
   | `tuned_model_name` | Name of the newly tuned model | `pugo-hilion` |
   | `tuning_source_model` | Name of the foundational model to tune | `models/gemini-1.5-flash-001-tuning` |
   | `epoch_count` | Number of tuning epochs to run; an epoch is one pass over the whole dataset | `30` |
   | `batch_size` | Number of examples to use in each training batch | `4` |
   | `learning_rate` | Step-size multiplier for the gradient updates | `0.001` |

   Ensure your Gemini API key is set up.
2. **Install dependencies**:

   ```bash
   uv sync --all-extras
   ```
3. **Prepare a dataset**: An example dataset is provided in `src/data/training_data.json`, which consists of tweets from Hugo Philion's X account. You can use any publicly available dataset for model fine-tuning.
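   If you bring your own data, it should match the schema the tuning script expects. A minimal sanity check, assuming the Gemini tuning convention of `text_input`/`output` pairs (the repo's actual schema may differ):

   ```python
   import json

   # Assumed schema: a list of {"text_input": ..., "output": ...} pairs,
   # the format accepted by the Gemini tuning API.
   with open("src/data/training_data.json") as f:
       dataset = json.load(f)

   assert all({"text_input", "output"} <= ex.keys() for ex in dataset)
   print(f"{len(dataset)} training examples loaded")
   ```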
4. **Tune a new model**: Depending on the size of your dataset, this process can take several minutes:

   ```bash
   uv run start-tuning
   ```
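   Under the hood, `start-tuning` presumably forwards the parameters from step 1 to the Gemini tuning API. A minimal sketch of such a call with the `google-generativeai` SDK (values are the defaults above; the repo's `tune_model.py` is the authoritative implementation):

   ```python
   import google.generativeai as genai

   genai.configure(api_key="YOUR_GEMINI_API_KEY")

   # Kick off a tuning job using the defaults from step 1.
   operation = genai.create_tuned_model(
       id="pugo-hilion",                                   # tuned_model_name
       source_model="models/gemini-1.5-flash-001-tuning",  # tuning_source_model
       training_data=dataset,                              # e.g. loaded as in step 3
       epoch_count=30,
       batch_size=4,
       learning_rate=0.001,
   )
   model = operation.result()  # blocks until tuning completes
   ```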
5. **Observe loss parameters**: After tuning is complete, a training-loss PNG corresponding to the new model is saved in the root folder. Ideally, the loss should fall to near zero after several training epochs.
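   A curve like the saved PNG can also be reproduced from the tuning snapshots. A sketch assuming the `google-generativeai` SDK and the default model name (the repo's plotting code may differ):

   ```python
   import google.generativeai as genai
   import matplotlib
   matplotlib.use("Agg")  # render without a display
   import matplotlib.pyplot as plt
   import pandas as pd

   model = genai.get_tuned_model("tunedModels/pugo-hilion")  # name from .env
   snapshots = pd.DataFrame(model.tuning_task.snapshots)

   plt.plot(snapshots["epoch"], snapshots["mean_loss"])
   plt.xlabel("epoch")
   plt.ylabel("mean loss")
   plt.savefig("mean_loss.png")  # filename is illustrative
   ```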
6. **Test the new model**: Select the newly tuned model and compare it against a set of prompting techniques (zero-shot, few-shot, and chain-of-thought):

   ```bash
   uv run start-compare
   ```
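   The three techniques differ only in how the prompt is framed. An illustrative sketch (the actual templates live in `src/flare_ai_social/prompts/templates.py` and will differ):

   ```python
   QUESTION = "What is the FTSO?"

   # Hypothetical examples of the three prompting strategies being compared.
   PROMPTS = {
       "zero-shot": QUESTION,
       "few-shot": (
           "Q: What is Flare?\n"
           "A: Flare is the blockchain for data.\n\n"
           f"Q: {QUESTION}\nA:"
       ),
       "chain-of-thought": f"{QUESTION} Think step by step, then answer concisely.",
   }
   ```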
7. **Start Social Bots** (optional):

   - Set up Twitter/X API credentials.
   - Configure the Telegram bot token.
   - Enable or disable platforms as needed.

   ```bash
   uv run start-bots
   ```
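   A minimal sketch of how per-platform toggles might look in `settings.py`, using `pydantic-settings` to read the `.env` file (the field names here are hypothetical; check the repo's `settings.py` for the real ones):

   ```python
   from pydantic_settings import BaseSettings, SettingsConfigDict

   class Settings(BaseSettings):
       """Hypothetical bot settings loaded from .env."""

       model_config = SettingsConfigDict(env_file=".env", extra="ignore")

       enable_telegram: bool = False  # toggle the Telegram bot
       enable_twitter: bool = False   # toggle the X/Twitter bot
       telegram_api_token: str = ""
       x_api_key: str = ""

   settings = Settings()
   ```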
The Docker setup mimics a TEE environment and includes an Nginx server for routing, while Supervisor manages both the backend and frontend services in a single container.
1. **Build the Docker image**:

   ```bash
   docker build -t flare-ai-social .
   ```

   **NOTE**: Windows users may encounter issues with `uv` due to incorrect parsing. If so, try converting the `pyproject.toml` and `uv.lock` files to Unix format (e.g. with `dos2unix`).
2. **Run the Docker Container**:

   ```bash
   docker run -p 80:80 -it --env-file .env flare-ai-social
   ```
3. **Access the Frontend**: Open your browser and navigate to http://localhost:80 to interact with the tuned model via the Chat UI.
Flare AI Social is composed of a Python-based backend and a JavaScript frontend. Follow these steps for manual setup:
1. **Install Dependencies**: Use `uv` to install backend dependencies:

   ```bash
   uv sync --all-extras
   ```

2. **Start the Backend**: The backend runs by default on `0.0.0.0:80`:

   ```bash
   uv run start-backend
   ```
3. **Install Dependencies**: In the `chat-ui/` directory, install the required packages using npm:

   ```bash
   cd chat-ui/
   npm install
   ```
4. **Configure the Frontend**: Update the backend URL in `chat-ui/src/App.js` for testing:

   ```js
   const BACKEND_ROUTE = "http://localhost:8080/api/routes/chat/";
   ```

   **Note**: Remember to change `BACKEND_ROUTE` back to `'api/routes/chat/'` after testing.
5. **Start the Frontend**:

   ```bash
   npm start
   ```
```plaintext
src/flare_ai_social/
├── ai/                       # AI Provider implementations
│   ├── base.py               # Base AI provider abstraction
│   ├── gemini.py             # Google Gemini integration
│   └── openrouter.py         # OpenRouter integration
├── api/                      # API layer
│   └── routes/               # API endpoint definitions
├── attestation/              # TEE attestation implementation
│   ├── vtpm_attestation.py   # vTPM client
│   └── vtpm_validation.py    # Token validation
├── prompts/                  # Prompt engineering templates
│   └── templates.py          # Different prompt strategies
├── telegram/                 # Telegram bot implementation
│   └── service.py            # Telegram service logic
├── twitter/                  # Twitter bot implementation
│   └── service.py            # Twitter service logic
├── bot_manager.py            # Bot orchestration
├── main.py                   # FastAPI application
├── settings.py               # Configuration settings
└── tune_model.py             # Model fine-tuning utilities
```
Deploy on Confidential Space using AMD SEV.
- **Google Cloud Platform Account**: Access to the `verifiable-ai-hackathon` project is required.
- **Gemini API Key**: Ensure your Gemini API key is linked to the project.
- **gcloud CLI**: Install and authenticate the gcloud CLI.
1. **Set Environment Variables**: Update your `.env` file with:

   ```bash
   TEE_IMAGE_REFERENCE=ghcr.io/YOUR_REPO_IMAGE:main  # Replace with your repo build image
   INSTANCE_NAME=<PROJECT_NAME-TEAM_NAME>
   ```
2. **Load Environment Variables**:

   ```bash
   source .env
   ```

   **Reminder**: Run the above command in every new shell session or after modifying `.env`. On Windows, we recommend using Git Bash, which provides commands like `source`.
3. **Verify the Setup**:

   ```bash
   echo $TEE_IMAGE_REFERENCE # Expected output: your repo build image
   ```
Run the following command:

```bash
gcloud compute instances create $INSTANCE_NAME \
  --project=verifiable-ai-hackathon \
  --zone=us-central1-c \
  --machine-type=n2d-standard-2 \
  --network-interface=network-tier=PREMIUM,nic-type=GVNIC,stack-type=IPV4_ONLY,subnet=default \
  --metadata=tee-image-reference=$TEE_IMAGE_REFERENCE,\
tee-container-log-redirect=true,\
tee-env-GEMINI_API_KEY=$GEMINI_API_KEY,\
tee-env-TUNED_MODEL_NAME=$TUNED_MODEL_NAME \
  --maintenance-policy=MIGRATE \
  --provisioning-model=STANDARD \
  --service-account=confidential-sa@verifiable-ai-hackathon.iam.gserviceaccount.com \
  --scopes=https://www.googleapis.com/auth/cloud-platform \
  --min-cpu-platform="AMD Milan" \
  --tags=flare-ai,http-server,https-server \
  --create-disk=auto-delete=yes,\
boot=yes,\
device-name=$INSTANCE_NAME,\
image=projects/confidential-space-images/global/images/confidential-space-debug-250100,\
mode=rw,\
size=11,\
type=pd-standard \
  --shielded-secure-boot \
  --shielded-vtpm \
  --shielded-integrity-monitoring \
  --reservation-affinity=any \
  --confidential-compute-type=SEV
```
After deployment, you should see output similar to:

```plaintext
NAME          ZONE           MACHINE_TYPE    PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP    STATUS
social-team1  us-central1-a  n2d-standard-2               10.128.0.18  34.41.127.200  RUNNING
```
It may take a few minutes for Confidential Space to complete startup checks. You can monitor progress via the GCP Console logs: click Compute Engine → VM Instances (in the sidebar) → select your instance → Serial port 1 (console).

When you see a message like:

```plaintext
INFO: Uvicorn running on http://0.0.0.0:8080 (Press CTRL+C to quit)
```

the container is ready. Navigate to the external IP of the instance (visible in the VM Instances page) to access the Chat UI.
If you encounter issues, follow these steps:

1. **Check Logs**:

   ```bash
   gcloud compute instances get-serial-port-output $INSTANCE_NAME --project=verifiable-ai-hackathon
   ```

2. **Verify API Key(s)**: Ensure that all API keys are set correctly (e.g. `GEMINI_API_KEY`).

3. **Check Firewall Settings**: Confirm that your instance is publicly accessible on port `80`.
Below are several project ideas demonstrating how the template can be used to build useful social AI agents:

- **Integrate with flare-ai-rag**: Combine the social AI agent with the flare-ai-rag model trained on the Flare Developer Hub dataset.

  - **Enhanced Developer Interaction**:
    - **Action Steps**:
      - Connect the model to GitHub repositories to fetch live code examples.
      - Fine-tune prompt templates using technical documentation to improve precision in code-related queries.

  - **Simplify Technical Updates**:
    - Convert detailed Flare governance proposals into concise, accessible summaries for community members.

  - **Real-Time Monitoring and Q&A**:
    - Monitor channels like the Flare Telegram for live updates.
    - Automatically answer common community questions regarding platform changes.
    - **Action Steps**:
      - Integrate modules for content summarization and sentiment analysis.
      - Establish a feedback loop to refine responses based on community engagement.

- **Purpose**: Analyze sentiment on platforms like Twitter, Reddit, or Discord to monitor community mood, flag problematic content, and generate real-time moderation reports.

  **Action Steps**:
  - Leverage NLP libraries for sentiment analysis and content filtering.
  - Integrate with social media APIs to capture and process live data.
  - Set up dashboards to monitor trends and flagged content.
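  As a starting point for the first action step, a sketch using NLTK's VADER sentiment analyzer (any NLP library would do; the flagging threshold is illustrative):

  ```python
  import nltk
  from nltk.sentiment import SentimentIntensityAnalyzer

  nltk.download("vader_lexicon")  # one-time lexicon download
  sia = SentimentIntensityAnalyzer()

  def moderate(post: str) -> str:
      """Flag posts whose compound sentiment falls below an arbitrary threshold."""
      score = sia.polarity_scores(post)["compound"]  # -1 (negative) .. +1 (positive)
      return "flagged" if score < -0.5 else "ok"

  print(moderate("This update broke everything, terrible release."))  # likely flagged
  ```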
- **Purpose**: Curate personalized content such as news, blog posts, or tutorials tailored to user interests and engagement history.

  **Action Steps**:
  - Employ user profiling techniques to analyze preferences.
  - Use machine learning algorithms to recommend content based on past interactions.
  - Continuously refine the recommendation engine with user feedback and engagement metrics.
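  A content-based recommender can start as simply as TF-IDF similarity between a user profile and the candidate pool; a sketch with scikit-learn (the corpus and profile are toy placeholders):

  ```python
  from sklearn.feature_extraction.text import TfidfVectorizer
  from sklearn.metrics.pairwise import cosine_similarity

  # Toy candidate pool and a user profile built from past interactions.
  candidates = [
      "Tutorial: fine-tuning Gemini models",
      "Flare governance proposal summary",
      "Guide to FTSO data feeds",
  ]
  user_profile = "enjoys model fine-tuning and Gemini tutorials"

  vectorizer = TfidfVectorizer()
  matrix = vectorizer.fit_transform(candidates + [user_profile])

  # Rank candidates by cosine similarity to the user profile (last row).
  scores = cosine_similarity(matrix[-1], matrix[:-1]).ravel()
  for score, title in sorted(zip(scores, candidates), reverse=True):
      print(f"{score:.2f}  {title}")
  ```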