Video Exploratorium
An experimental project showcasing how to analyze and explore Kaltura videos using Large Language Models (LLMs). Built with AWS Lambda and Chalice, it offers real-time video analysis and a chat-based interface. By leveraging LLMs, the project provides detailed summaries, key insights, and interactive chat, enhancing the way users engage with videos.
This project uses the AWS Bedrock API to interact with the Claude 3 Sonnet LLM. Thanks to Pydantic_Prompter, switching to a different LLM provider is straightforward.
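As an illustration, a Bedrock call to Claude 3 Sonnet uses Anthropic's Messages request format. Below is a minimal sketch (not this project's actual code — the helper name and prompt layout are assumptions):

```python
# Hypothetical helper: builds a request body in Anthropic's Messages API
# format, which Bedrock's Claude 3 models expect.
def build_claude_request(transcript: str, question: str) -> dict:
    return {
        "anthropic_version": "bedrock-2023-05-31",
        "max_tokens": 1024,
        "messages": [
            {
                "role": "user",
                "content": f"Transcript:\n{transcript}\n\nQuestion: {question}",
            }
        ],
    }

# Sending it requires AWS credentials and Bedrock model access, e.g.:
#   import json, boto3
#   client = boto3.client("bedrock-runtime")
#   resp = client.invoke_model(
#       modelId="anthropic.claude-3-sonnet-20240229-v1:0",
#       body=json.dumps(build_claude_request(transcript, question)),
#   )
#   answer = json.loads(resp["body"].read())["content"][0]["text"]
```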
- Multi-Video Analysis: Analyze and interact across multiple videos to gain shared insights and summaries.
- On-the-fly Video Analysis: Generate comprehensive summaries and insights from video transcripts.
- Interactive Chat: Ask questions and receive detailed answers based on video content.
- Education: Enhance learning with interactive video summaries and insights.
- Market Research: Extract key messages and understand audience engagement from videos.
- Content Management: Manage large video libraries with automatic summaries and insights.
```mermaid
graph LR
    %% End User Interactions
    A1[End User] -->|WebSocket/HTTP Request| B1[API Gateway]

    %% API Gateway Interactions
    B1 -->|Routes Request| C1[AWS Lambda]

    %% Lambda Interactions for Analyzing Videos
    C1 -->|Fetch Videos| D1[Kaltura API]
    D1 -->|Return Video Data| C1
    C1 -->|Analyze Videos| D2[AWS Bedrock]
    D2 -->|Return Analysis Results| C1

    %% Lambda Interactions for Generating Follow-up Questions
    C1 -->|Generate Questions| D3[AWS Bedrock]
    D3 -->|Return Questions| C1

    %% Lambda Interactions for Answering Questions
    C1 -->|Answer Question| D4[AWS Bedrock]
    D4 -->|Return Answer| C1

    %% Lambda Interactions for Static File Deployment
    C1 -->|Upload Files| E1[AWS S3]
    C1 -->|Invalidate Cache| E2[AWS CloudFront]

    %% Responses to End User
    C1 -->|WebSocket/HTTP Response| A1
```
Future plans include integrating interactive video experiences with cue points and layers, and combining LLM insights with real-time chat exploration during video playback.
- Python 3.9.13 (the AWS Lambda runtime used; the code is compatible with Python 3.9+)
- AWS Account with appropriate (lambda/api-gateway/bedrock/s3/cloudfront) permissions
- Kaltura Account (you'll need to provide a PartnerID and a KS (Kaltura Session) via URL params)
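Since the PartnerID and KS arrive as URL parameters, the Lambda side has to validate them before calling the Kaltura API. A hedged sketch of such a check (the helper name and the `pid`/`ks` parameter names are assumptions, not taken from this repo; in Chalice the raw values come from `app.current_request.query_params`):

```python
from typing import Optional, Tuple

# Hypothetical helper: validates the Kaltura PartnerID and Kaltura Session
# passed as URL query parameters ('pid' and 'ks' are assumed names).
def extract_kaltura_params(query_params: Optional[dict]) -> Tuple[int, str]:
    params = query_params or {}
    pid = params.get("pid")
    ks = params.get("ks")
    if not pid or not str(pid).isdigit():
        raise ValueError("a numeric Kaltura PartnerID ('pid') is required")
    if not ks:
        raise ValueError("a Kaltura Session ('ks') is required")
    return int(pid), ks
```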
- Clone the repository:

  ```bash
  git clone https://github.com/zoharbabin/video-exploratorium.git
  cd video-exploratorium
  ```

- Create a virtual environment and activate it:

  ```bash
  python -m venv venv
  source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
  ```

- Install the required dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Configure AWS CLI with your credentials:

  ```bash
  aws configure
  ```

- Deploy the Chalice app to AWS:

  ```bash
  chalice deploy
  ```

- Upload static files to S3 and invalidate the CloudFront cache:

  ```bash
  python deploy_static.py
  ```
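`deploy_static.py` ships with the repo; as a rough sketch of what a script like it does (the bucket name, distribution ID, and helper name below are assumptions for illustration):

```python
import time

# Hypothetical helper: builds the CloudFront invalidation batch for the
# given paths. CallerReference must be unique per invalidation request.
def build_invalidation_batch(paths):
    return {
        "Paths": {"Quantity": len(paths), "Items": list(paths)},
        "CallerReference": str(time.time()),
    }

# The actual upload and invalidation need boto3 and AWS credentials, e.g.:
#   import boto3
#   from pathlib import Path
#   s3 = boto3.client("s3")
#   for f in Path("static").rglob("*"):
#       if f.is_file():
#           s3.upload_file(str(f), "my-static-bucket", str(f))
#   cf = boto3.client("cloudfront")
#   cf.create_invalidation(
#       DistributionId="EXXXXXXXXXXXXX",
#       InvalidationBatch=build_invalidation_batch(["/*"]),
#   )
```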
- Define your route in `app.py`, or create a new file in `chalicelib` and import it in `app.py`:

  ```python
  @app.route('/new-route')
  def new_route():
      return {'hello': 'world'}
  ```
- Define your WebSocket handler in `routes.py`, or create a new file in `chalicelib` and import it in `routes.py`:

  ```python
  @app.on_ws_message()
  def message(event: WebsocketEvent):
      return websocket_handler(event, app)
  ```
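The `websocket_handler` referenced above lives in the repo; as a hedged sketch of the general shape such a dispatcher takes (the function and the "action" field are assumptions, not the project's actual protocol):

```python
import json

# Hypothetical dispatcher: routes an incoming WebSocket message to a
# handler based on an "action" field in the JSON payload.
def dispatch_ws_message(raw_body: str, handlers: dict) -> dict:
    try:
        message = json.loads(raw_body)
    except json.JSONDecodeError:
        return {"error": "invalid JSON payload"}
    action = message.get("action")
    handler = handlers.get(action)
    if handler is None:
        return {"error": f"unknown action: {action}"}
    return handler(message)
```

In the real handler, each action would call into the Bedrock-backed analysis functions and send results back over the socket via Chalice's `app.websocket_api.send()`.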
- Add your HTML template files in `chalicelib/templates`.

- Render your templates using the `render_template` function in `app.py`:

  ```python
  from jinja2 import Environment, FileSystemLoader

  # template_env loads templates from the chalicelib/templates directory
  template_env = Environment(loader=FileSystemLoader('chalicelib/templates'))

  def render_template(template_name, context):
      template = template_env.get_template(template_name)
      return template.render(context)
  ```
- Add your static files in the `static` directory.

- Deploy the static files using `deploy_static.py`.
Contributions are welcome! Please open an issue or submit a pull request for any improvements or bug fixes.
This project is licensed under the MIT License. See the LICENSE file for details.
- GitHub: zoharbabin
- Twitter: @zohar