As digital media use continues to evolve and influence various aspects of life, developing flexible and scalable tools to study complex media experiences is essential. This study introduces the Media Content Atlas (MCA), a novel pipeline designed to help researchers investigate large-scale screen data beyond traditional screen-use metrics. Leveraging state-of-the-art multimodal large language models (MLLMs), MCA enables moment-by-moment content analysis, content-based clustering, topic modeling, image retrieval, and interactive visualizations. Evaluated on 1.12 million smartphone screenshots continuously captured during screen use from 112 adults over an entire month, MCA facilitates open-ended exploration and hypothesis generation as well as hypothesis-driven investigations at an unprecedented scale. Expert evaluators underscored its usability and potential for research and intervention design, with clustering results rated 96% relevant and descriptions 83% accurate. By bridging methodological possibilities with domain-specific needs, MCA accelerates both inductive and deductive inquiry, presenting new opportunities for media and HCI research.
This repo contains the code for the following parts of the MCA pipeline:
- Generate Screenshot Image Embeddings with CLIP
- Generate Screenshot Descriptions with LLaVA-OneVision
- Generate Description Embeddings with GTE-large
- Conduct Clustering and Topic Modeling with BERTopic and Llama 2
- Create Interactive Visualizations with DataMapPlot
- Retrieve Relevant Images with CLIP and with LLaVA-OneVision + GTE-large
This repo will be updated regularly to improve reproducibility; in the meantime, please open an issue with any questions. We are also working on a synthetic dataset to showcase the full pipeline.