adrianwedd/lunar_tools_prototypes
# Lunar Tools Project

This repository contains a collection of interactive audiovisual art installations built with Lunar Tools. By combining AI-driven text, audio, and image generation with real-time MIDI and keyboard controls and dynamic visuals, the toolkit enables immersive, reactive experiences.

## Table of Contents

1. Features
2. Prototypes & Demos
3. CLI Entrypoint
4. Shared Core
5. Repository Structure
6. Getting Started
7. Customization
8. Troubleshooting
9. Contributing
10. License

## Features

- **Speech-to-Text Input:** Capture visitor input via microphone using Lunar Tools' Speech2Text.
- **AI Story Generation:** Continue the narrative in real time with GPT-4 models (including gpt-4o-mini).
- **Audio Narration:** Convert generated story segments back to audio using OpenAI's Text-to-Speech.
- **Dynamic Background Music:** Generate ambient background music via Meta's MusicGen API.
- **Audio Prompt Signals:** Generate gentle prompt tones with pydub to cue visitor input.
- **Real-Time Visuals:** Render images generated by DALL·E 3 or SDXL Turbo and display them via Tkinter or Lunar Tools' OpenGL renderer.
- **MIDI & Keyboard Control:** Pause, resume, or stop the experience using a MIDI controller or the keyboard.
- **Session Monitoring:** Track and log interactions using LangSmith sessions and its API.
- **Comprehensive Logging:** Detailed console and file logging of all requests, responses, and events.
- **Shared Core:** Centralized initialization and management of Lunar Tools instances for easier configuration and extension.
- **CLI Entrypoint:** Unified command-line interface to launch any demo.

## Prototypes & Demos

The `prototypes/` directory contains experimental prototypes that demonstrate Lunar Tools' versatility in audiovisual installations:

| Script | Description |
| --- | --- |
| `interactive_storytelling.py` | The core interactive storytelling application. |
| `acoustic-fingerprint-painter.py` | "Paints" unique abstract brushstrokes driven by each visitor's voice fingerprint. |
| `ai-dream-interpreter-prototype.py` | Listens to visitor speech, interprets dream-like phrases, and visualizes them via Stable Diffusion pipelines. |
| `ai-fashion-show-prototype.py` | Generates runway visuals from AI prompts and syncs audio beats with fashion transitions via MIDI triggers. |
| `apocalypse_experience.py` | Encapsulates logic for an apocalypse experience with visual generation. |
| `audio-reactive-fractal-forest.py` | Creates an ever-evolving fractal "forest" whose shape and colors respond in real time to ambient audio. |
| `augmented_audio_tours.py` | Encapsulates logic for augmented audio tours with position detection. |
| `chat-room-narrative-quilt.py` | Hosts a live chat where each message spawns a visual "patch" in a growing narrative quilt. |
| `collaborative_art.py` | Encapsulates logic for collaborative art with server management. |
| `collaborative-canvas.py` | Enables multiple visitors on different machines to paint together on a shared digital canvas, with periodic AI style suggestions. |
| `cosmic-soundscape.py` | Creates an evolving soundscape using 4-channel audio mixing, controlled via OSC and randomized AI prompts. |
| `data-driven-cityscape.py` | Renders a generative skyline whose architecture morphs according to live data feeds (weather, markets, social media). |
| `dynamic_visuals.py` | Encapsulates logic for dynamic visuals with actual visual generation. |
| `emotional-landscape-generator-prototype.py` | Captures ambient sounds, analyzes sentiment via GPT, and renders abstract landscapes reflecting emotions. |
| `escape_room.py` | Encapsulates logic for an escape room game with puzzle logic. |
| `evolving-cosmic-mural-prototype.py` | Generates a continuously morphing mural using SDL rendering, driven by AI descriptions and MIDI controls. |
| `generative-poetry-mosaic.py` | Builds an interactive, growing mosaic of AI-written couplets illustrated by DALL·E backgrounds. |
| `interactive-storytelling-canvas-prototype.py` | Integrates Canvas UI to display story text, images, and controls in a single web-like interface. |
| `neural-transfer-music-visualizer.py` | Synchronizes neural style-transfer effects to live music beats for a dynamic visual experience. |
| `real-time-glitch-art-lab.py` | Streams live camera frames through a glitch "corruption" pipeline for surreal visual art. |
| `sentiment_analysis_display.py` | Encapsulates logic for sentiment analysis display with actual sentiment analysis. |
| `speech_activated_art.py` | Encapsulates logic for speech-activated art with improved error handling. |
| `temporal-art-gallery-prototype.py` | Streams remote images and audio across locations, creating a synchronized, time-based art exhibition. |
| `time-shifted-echo-chamber.py` | Constructs a looping echo chamber that replays what visitors say after programmable delays, layered and pitch-shifted. |
| `virtual_time_travel.py` | Encapsulates logic for virtual time travel with MIDI input handling. |
| `virtual-cloud-chamber.py` | Simulates a 2D particle-track cloud chamber where events spawn drifting "particles." |
| `whispers.py` | (Add description if available) |

## CLI Entrypoint

The `lunar_tools_demo.py` script provides a unified command-line interface for launching any of the installations by name.

**Usage:**

```bash
python lunar_tools_demo.py --demo <demo_name> [--config <config_string>]
```

**Example:**

```bash
python lunar_tools_demo.py --demo fractal_forest --config "{'mic_device': 'default', 'window_size': (800, 600)}"
```
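The `--config` string in the example above is a Python dict literal. As a minimal sketch of how such a CLI could parse it — the function below is illustrative only, not the actual `lunar_tools_demo.py` source:

```python
# Hypothetical sketch of the CLI's argument handling; the flag names match
# the README, but everything else here is an assumption.
import argparse
import ast

def parse_args(argv):
    parser = argparse.ArgumentParser(description="Launch a Lunar Tools demo")
    parser.add_argument("--demo", required=True,
                        help="Demo name, e.g. fractal_forest")
    parser.add_argument("--config", default="{}",
                        help="Python dict literal with per-demo overrides")
    args = parser.parse_args(argv)
    # literal_eval parses "{'mic_device': 'default'}" safely,
    # without executing arbitrary code the way eval() would
    args.config = ast.literal_eval(args.config)
    return args
```

Using `ast.literal_eval` rather than `eval` keeps a shell-supplied config string from executing arbitrary Python.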

## Shared Core

The `lunar_tools_art.py` module centralizes the initialization and management of Lunar Tools instances such as Speech2Text, GPT4, Text2SpeechOpenAI, AudioRecorder, SoundPlayer, Renderer, KeyboardInput, and WebCam. This promotes code reuse and simplifies configuration across all demos.
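A shared core like this is commonly implemented as a lazy registry: each tool is created once, on first use, and every demo then shares the cached instance. The sketch below shows the pattern only; the class and method names are placeholders, not the actual `lunar_tools_art.py` API:

```python
# Illustrative lazy-registry pattern for a shared core; not the real
# lunar_tools API. Factories stand in for constructors like Speech2Text().
class LunarToolsCore:
    def __init__(self, config=None):
        self.config = config or {}
        self._factories = {}
        self._instances = {}

    def register(self, name, factory):
        """Register a zero-argument factory for a tool (e.g. 'speech2text')."""
        self._factories[name] = factory

    def get(self, name):
        """Create the tool on first access, then reuse the cached instance."""
        if name not in self._instances:
            self._instances[name] = self._factories[name]()
        return self._instances[name]
```

Lazy creation matters here because some tools (microphone, OpenGL renderer, webcam) grab hardware resources a given demo may never need.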

## Repository Structure

```
.
├── .gemini/                     # Gemini agent configuration and tasks
├── .github/                     # GitHub Actions workflows (e.g., CI)
├── .output/                     # Generated output files (audio, images, logs)
├── .pytest_cache/               # Pytest cache directory
├── .ruff_cache/                 # Ruff linter cache directory
├── .temp/                       # Temporary files
├── logs/                        # Application logs
├── prototypes/                  # All interactive art installation prototypes
│   ├── acoustic-fingerprint-painter.py
│   ├── ai-dream-interpreter-prototype.py
│   ├── ai-fashion-show-prototype.py
│   ├── apocalypse_experience.py
│   ├── audio-reactive-fractal-forest.py
│   ├── augmented_audio_tours.py
│   ├── chat-room-narrative-quilt.py
│   ├── collaborative_art.py
│   ├── collaborative-canvas.py
│   ├── cosmic-soundscape.py
│   ├── data-driven-cityscape.py
│   ├── dynamic_visuals.py
│   ├── emotional-landscape-generator-prototype.py
│   ├── escape_room.py
│   ├── evolving-cosmic-mural-prototype.py
│   ├── generative-poetry-mosaic.py
│   ├── interactive_storytelling.py
│   ├── interactive-storytelling-canvas-prototype.py
│   ├── neural-transfer-music-visualizer.py
│   ├── real-time-glitch-art-lab.py
│   ├── sentiment_analysis_display.py
│   ├── speech_activated_art.py
│   ├── temporal-art-gallery-prototype.py
│   ├── time-shifted-echo-chamber.py
│   ├── virtual_time_travel.py
│   ├── virtual-cloud-chamber.py
│   └── whispers.py
├── tests/                       # Unit and integration tests
│   ├── conftest.py
│   ├── test_lunar_tools_art.py
│   └── test_utils.py
├── .DS_Store                    # macOS directory metadata
├── .env                         # Environment variables (e.g., API keys)
├── ai_dream_interpreter_20250627_014142.log # Example log file
├── ai_dream_interpreter_20250627_014159.log # Example log file
├── interactive_storytelling_20250627_013530.log # Example log file
├── interactive_storytelling_20250627_013555.log # Example log file
├── interactive_storytelling_20250627_014002.log # Example log file
├── lunar_tools_art.py           # Shared core module for Lunar Tools instances
├── lunar_tools_demo.py          # CLI entrypoint for launching demos
├── README.md                    # Project documentation
├── requirements.txt             # Python dependencies
├── settings.json                # Project settings
├── setup.py                     # Package setup file
├── test_response_<MagicMock name='datetime.datetime.now().strftime()' id='5622620448'>.txt # Example test response
├── test_response_20240101_120000.txt # Example test response
├── utils.py                     # Utility functions
└── __pycache__/                 # Python cache directory
```

## Getting Started

1. **Clone the repository**

   ```bash
   git clone https://github.com/yourusername/lunar_tools.git
   cd lunar_tools
   ```

2. **Create a virtual environment**

   ```bash
   python3 -m venv env
   source env/bin/activate
   ```

3. **Install dependencies**

   ```bash
   pip install -r requirements.txt
   pip install .   # installs the lunar_tools_art package
   ```

4. **Configure API keys**

   Create a `.env` file with your API keys:

   ```
   OPENAI_API_KEY="<your_openai_api_key>"
   REPLICATE_API_TOKEN="<your_replicate_api_token>"
   # Add any other required API keys here
   ```

5. **Run a demo**

   Launch a specific demo with the `lunar_tools_demo.py` script:

   ```bash
   python lunar_tools_demo.py --demo interactive_storytelling
   ```

   To see available demos and options:

   ```bash
   python lunar_tools_demo.py --help
   ```
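A missing API key typically surfaces as a confusing error deep inside a demo. A small helper can fail fast instead; this sketch assumes the `.env` file has already been loaded into the process environment (for example by python-dotenv's `load_dotenv()`), and `require_key` is a hypothetical helper, not part of the repository:

```python
# Hypothetical fail-fast check for required API keys; assumes .env has
# already been loaded into os.environ (e.g. via python-dotenv).
import os

def require_key(name):
    """Return the value of an environment variable, or raise a clear error."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"Missing required environment variable: {name}")
    return value
```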

## Customization

(To be added: details on customizing the demos, e.g. model names, canvas size, and audio durations, by editing `settings.json` or passing `--config` arguments.)
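One natural way to combine `settings.json` defaults with `--config` overrides is a simple layered merge. The helper below is a sketch under that assumption, not the repository's actual implementation, and the key names used in the usage note are illustrative:

```python
# Hypothetical layered-configuration helper: defaults come from
# settings.json, then a --config dict overrides matching keys.
import json

def load_settings(path="settings.json", overrides=None):
    """Load settings from a JSON file and apply per-run overrides on top."""
    try:
        with open(path) as fh:
            settings = json.load(fh)
    except FileNotFoundError:
        settings = {}  # fall back to empty defaults if the file is absent
    settings.update(overrides or {})
    return settings
```

With a `settings.json` of `{"audio_duration": 10, "model": "gpt-4o-mini"}`, calling `load_settings(overrides={"model": "gpt-4"})` would keep `audio_duration` and replace only `model`.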

## Troubleshooting

(To be added: common issues and their solutions.)

## Contributing

(To be added: guidelines for contributing to the project.)

## License

(To be added: project license information.)

## About

Oneiric interfaces and audiovisual installations: Dream Interpreters, Fingerprint Painters, and Cosmic Murals.