{"name":"On Available Corpora for Empirical Methods in Vision & Language","tagline":"","body":"## **1. Introduction**\r\n\r\nIntegrating vision and language has long been a dream in work on artificial intelligence (AI).\r\nIn the past two years, we have witnessed an explosion of work that brings together vision and language from images to videos and beyond.\r\nThe available corpora have played a crucial role in advancing this area of research.\r\nIn this paper we propose a set of quality metrics for evaluating and analyzing the vision-&-language datasets and classify them accordingly.\r\nOur analyses show that the most recent datasets have been using more complex language.\r\n\r\n## **2. Image Captioning**\r\n\r\n### 2-1. User-generated Captions\r\n\r\n* **SBU Captioned Photo Dataset** (Stony Brook University, 2011) [[**Project Page**]](http://tlberg.cs.unc.edu/vicente/sbucaptions/)\r\n\r\n - This dataset contains 1 million images with original user-generated captions, collected in the wild by systematic querying (specific terms such as objects and actions) and then filtering Flickr photos with descriptions longer than certain mean length.\r\n\r\n - Vicente Ordonez, Girish Kulkarni, Tamara L. Berg. s.\r\n *Im2Text: Describing Images Using 1 Million Captioned Photograph.*\r\n Neural Information Processing Systems(NIPS), 2011.\r\n [[PDF]](http://tamaraberg.com/papers/generation_nips2011.pdf)\r\n\r\n* **Yahoo Flickr Creative Commons 100M Dataset (YFCC-100M)** (Yahoo! Lab, 2015) [[**Project Page**]](http://labs.yahoo.com/news/yfcc100m/)\r\n\r\n - YFCC-100M contains 100 million media objects (together with their original metadata), about 99.2 million photos\r\nand 0.8 million videos from Flickr (taken from 2004 until early 2014), all of which are licensed as Creative Commons.\r\n\r\n - Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, Li-Jia Li.\r\n *The New Data and New Challenges in Multimedia Research*.\r\n arXiv:1503.01817 [cs.MM].\r\n [[PDF]](http://arxiv.org/pdf/1503.01817v1.pdf)\r\n [[Arxiv]](http://arxiv.org/abs/1503.01817)\r\n\r\n* **Déjà Images Dataset** (Stony Brook University & UW, 2015) [[**Project Page**]](http://nlclient83.cs.stonybrook.edu:8081/static/index.html)\r\n\r\n - Déjà Images Dataset consists of 180K unique user-generated captions associated with about 4M Flickr images, where one caption is enforced to be associated with multiple images. They query Flickr for 693 of high frequency nouns and further filter captions for containing at least one verb and be \"good\" captions as judged by Turkers.\r\n\r\n - Jianfu Chen, Polina Kuznetsova, David Warren, Yejin Choi.\r\n *Déjà Image-Captions: A Corpus of Expressive Image Descriptions in Repetition.*\r\n North American Chapter of the Association for Computational Linguistics (NAACL), 2015.\r\n [[PDF]](http://www3.cs.stonybrook.edu/~jianchen/papers/naacl2015.pdf)\r\n\r\n### 2-2. Crowd-sourced Captions\r\n\r\n* **PASCAL Dataset (1K)** (UIUC, 2010) [[**Project Page**]](http://vision.cs.uiuc.edu/pascal-sentences/)\r\n\r\n - PASCAL is probably one of the first datasets aligning images with captions. 
## **2. Image Captioning**

### 2-1. User-generated Captions

* **SBU Captioned Photo Dataset** (Stony Brook University, 2011) [[**Project Page**]](http://tlberg.cs.unc.edu/vicente/sbucaptions/)

  - This dataset contains 1 million images with original user-generated captions, collected in the wild by systematically querying Flickr (with specific terms such as objects and actions) and then filtering for photos whose descriptions are longer than a certain mean length.

  - Vicente Ordonez, Girish Kulkarni, Tamara L. Berg.
    *Im2Text: Describing Images Using 1 Million Captioned Photographs.*
    Neural Information Processing Systems (NIPS), 2011.
    [[PDF]](http://tamaraberg.com/papers/generation_nips2011.pdf)

* **Yahoo Flickr Creative Commons 100M Dataset (YFCC-100M)** (Yahoo! Labs, 2015) [[**Project Page**]](http://labs.yahoo.com/news/yfcc100m/)

  - YFCC-100M contains 100 million media objects (together with their original metadata): about 99.2 million photos and 0.8 million videos from Flickr (taken from 2004 until early 2014), all licensed under Creative Commons.

  - Bart Thomee, David A. Shamma, Gerald Friedland, Benjamin Elizalde, Karl Ni, Douglas Poland, Damian Borth, Li-Jia Li.
    *The New Data and New Challenges in Multimedia Research.*
    arXiv:1503.01817 [cs.MM].
    [[PDF]](http://arxiv.org/pdf/1503.01817v1.pdf)
    [[Arxiv]](http://arxiv.org/abs/1503.01817)

* **Déjà Images Dataset** (Stony Brook University & UW, 2015) [[**Project Page**]](http://nlclient83.cs.stonybrook.edu:8081/static/index.html)

  - The Déjà Images Dataset consists of 180K unique user-generated captions associated with about 4M Flickr images, where each caption is required to be associated with multiple images. The authors query Flickr for 693 high-frequency nouns and further filter the captions to those that contain at least one verb and are judged "good" captions by Turkers.

  - Jianfu Chen, Polina Kuznetsova, David Warren, Yejin Choi.
    *Déjà Image-Captions: A Corpus of Expressive Image Descriptions in Repetition.*
    North American Chapter of the Association for Computational Linguistics (NAACL), 2015.
    [[PDF]](http://www3.cs.stonybrook.edu/~jianchen/papers/naacl2015.pdf)
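The SBU and Déjà Images corpora above are both built by filtering a large pool of noisy user-generated Flickr captions (minimum length, presence of a verb, repetition across images). Below is a minimal sketch of that style of filtering and grouping; the caption records, the length threshold, and the tiny verb list are hypothetical stand-ins for the real pipelines, which rely on POS taggers and Turker judgments.

```python
from collections import defaultdict

# Hypothetical (image_id, caption) pairs standing in for raw Flickr metadata.
raw = [
    ("img_001", "sunset over the pier"),
    ("img_002", "my dog chasing waves at the beach"),
    ("img_003", "sunset over the pier"),
    ("img_004", "IMG_4032"),
]

# Toy stand-in for a POS tagger's verb detection.
COMMON_VERBS = {"chasing", "running", "jumping", "playing", "walking"}

def keep(caption, min_tokens=4):
    tokens = caption.lower().split()
    long_enough = len(tokens) >= min_tokens            # SBU-style length filter
    has_verb = any(t in COMMON_VERBS for t in tokens)  # Deja-style verb requirement (crude)
    return long_enough and has_verb

filtered = [(img, cap) for img, cap in raw if keep(cap)]

# Deja-style grouping: keep captions that recur across several images.
by_caption = defaultdict(list)
for image_id, caption in raw:
    by_caption[caption.lower()].append(image_id)
repeated = {cap: ids for cap, ids in by_caption.items() if len(ids) > 1}

print(filtered)   # [('img_002', 'my dog chasing waves at the beach')]
print(repeated)   # {'sunset over the pier': ['img_001', 'img_003']}
```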
### 2-2. Crowd-sourced Captions

* **PASCAL Dataset (1K)** (UIUC, 2010) [[**Project Page**]](http://vision.cs.uiuc.edu/pascal-sentences/)

  - PASCAL is probably one of the first datasets aligning images with captions. It contains 1,000 images with 5 sentences per image written by Amazon Turkers.

  - Ali Farhadi, Mohsen Hejrati, Amin Sadeghi, Peter Young, Cyrus Rashtchian, Julia Hockenmaier, David Forsyth.
    *Every Picture Tells a Story: Generating Sentences for Images.*
    In Proceedings of the European Conference on Computer Vision (ECCV), 2010.
    [[PDF]](http://web.engr.illinois.edu/~msadegh2/publications/sentence.pdf)

* **Flickr 8K Images** (UIUC, 2010) [[**Project Page**]](http://nlp.cs.illinois.edu/HockenmaierGroup/8k-pictures.html)

  - This dataset consists of 8,092 Flickr images, each captioned by multiple Amazon Turkers, totalling more than 40,000 image descriptions. The focus of the dataset is on people or animals (mainly dogs) performing some specific action.

  - Cyrus Rashtchian, Peter Young, Micah Hodosh and Julia Hockenmaier.
    *Collecting Image Annotations Using Amazon's Mechanical Turk.*
    Proc. of the NAACL HLT 2010 Workshop on Creating Speech and Language Data with Amazon's Mechanical Turk.
    [[PDF]](http://nlp.cs.illinois.edu/HockenmaierGroup/Papers/AMT2010/W10-0721.pdf)

* **Flickr 30K Images** (UIUC, 2014) [[**Project Page**]](http://shannon.cs.illinois.edu/DenotationGraph/)

  - This dataset is an extension of the Flickr 8K dataset, consisting of 158,915 crowd-sourced captions describing 31,783 images. It mainly focuses on people performing everyday activities and taking part in everyday events.

  - Peter Young, Alice Lai, Micah Hodosh, Julia Hockenmaier.
    *From image descriptions to visual denotations: New similarity metrics for semantic inference over event descriptions.*
    Transactions of the Association for Computational Linguistics 2 (2014): 67-78.
    [[PDF]](http://shannon.cs.illinois.edu/DenotationGraph/TACLDenotationGraph.pdf)

* **Flickr 30K Entities** (UIUC, 2015) [[**Project Page**]](http://web.engr.illinois.edu/~bplumme2/Flickr30kEntities/)

  - This dataset augments the Flickr 30K dataset with additional layers of annotation, such as 244K coreference chains as well as 276K manually annotated bounding boxes for entities.

  - Bryan Plummer, Liwei Wang, Chris Cervantes, Juan Caicedo, Julia Hockenmaier, and Svetlana Lazebnik.
    *Flickr30k Entities: Collecting Region-to-Phrase Correspondences for Richer Image-to-Sentence Models.*
    arXiv:1505.04870, 2015.
    [[PDF]](http://arxiv.org/pdf/1505.04870v1.pdf)
    [[Arxiv]](http://arxiv.org/abs/1505.04870)

* **Microsoft Research Dense Visual Annotation Corpus** (Microsoft Research, 2014) [[**Project Page**]](http://research.microsoft.com/en-us/downloads/b8887ebe-dc2f-4f4b-94d4-65b8432f7df4/)

  - This work provides a set of 500 images selected from the Flickr 8K dataset that are densely labeled with 100,000 textual labels (with bounding boxes and facets annotated for each object) in order to approximate gold-standard visual recognition.

  - Mark Yatskar, Michel Galley, Lucy Vanderwende, and Luke Zettlemoyer.
    *See No Evil, Say No Evil: Description Generation from Densely Labeled Images.*
    In the Third Joint Conference on Lexical and Computational Semantics (\*SEM), 2014.
    [[Code and Data]](http://homes.cs.washington.edu/~my89/)
    [[PDF]](http://homes.cs.washington.edu/~my89/publications/StarSem2014-SeeNoEvil.pdf)

* **Microsoft COCO Dataset (MS COCO)** (Microsoft Research, 2014) [[**Project Page**]](http://mscoco.org/)

  - Lin et al. gather images of complex everyday scenes that contain common objects in naturally occurring contexts, with the goal of enhancing scene understanding. The objects in each scene are labeled using per-instance segmentations. In total, the dataset contains photos of 91 basic object types with 2.5 million labeled instances in 328k images, each paired with 5 captions. This dataset gave rise to the CVPR 2015 image captioning challenge and continues to be a benchmark for comparing various aspects of vision and language research.

  - Tsung-Yi Lin, Michael Maire, Serge Belongie, Lubomir Bourdev, Ross Girshick, James Hays, Pietro Perona, Deva Ramanan, C. Lawrence Zitnick, Piotr Dollár.
    *Microsoft COCO: Common Objects in Context.*
    arXiv:1405.0312 [cs.CV].
    [[arxiv]](http://arxiv.org/abs/1405.0312)
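MS COCO caption annotations are distributed as JSON and are commonly read with the pycocotools COCO API. A minimal loading sketch, assuming the 2014 captions annotation file has already been downloaded (the path below is illustrative):

```python
from pycocotools.coco import COCO  # pip install pycocotools

# Path is illustrative; point it at wherever the annotations were unpacked.
coco_caps = COCO("annotations/captions_train2014.json")

img_ids = coco_caps.getImgIds()
first = img_ids[0]

# Print all reference captions (typically five) for one image.
ann_ids = coco_caps.getAnnIds(imgIds=[first])
for ann in coco_caps.loadAnns(ann_ids):
    print(ann["caption"])
```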
* **Abstract Scene Dataset (Clipart)** (MSR, Virginia Tech, CMU, 2013) [[**Project Page**]](http://research.microsoft.com/en-us/um/people/larryz/clipart/abstract_scenes.html)

  - This dataset was created with the goal of representing real-world scenes with clip art, in order to study semantic scene understanding in isolation from object recognition and segmentation issues in image processing. It contains 10,020 images of children playing outdoors, associated with a total of 60,396 descriptions.

  - C. L. Zitnick and D. Parikh.
    *Bringing Semantics Into Focus Using Visual Abstraction.*
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2013.
    [[PDF]](http://research.microsoft.com/en-us/um/people/larryz/ZitnickParikhAbstractScenes.pdf)

<!--[wrong citation]
  - Luis Gilberto Mateos Ortiz, Clemens Wolff and Mirella Lapata.
    *Learning to Interpret and Describe Abstract Scenes.*
    Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL, pages 1505–1515.
    [[PDF]](http://www.aclweb.org/anthology/N15-1174)
-->

* **Visual and Linguistic Treebank (Visual Dependency Representations, VDR)** (University of Edinburgh, 2013) [[**Project Page**]](http://homepages.inf.ed.ac.uk/s0128959/dataset/)

  - This dataset consists of 2,424 images, each with three one-sentence captions crowd-sourced from Amazon Turkers describing the main action in the photo (drawn from the 10 types of actions in the set), plus one sentence describing the other regions not involved in the action.

  - Desmond Elliott and Frank Keller.
    *Image Description using Visual Dependency Representations.*
    EMNLP 2013.
    [[PDF]](http://aclweb.org/anthology/D/D13/D13-1128.pdf)

* **PASCAL-50S and ABSTRACT-50S** (Virginia Tech, MSR, 2015) [[**Project Page**]](http://ramakrishnavedantam928.github.io/cider/)

  - The ABSTRACT-50S and PASCAL-50S datasets both contain 50 human sentences for each image. The PASCAL-50S dataset is built upon 1,000 images from the UIUC PASCAL Sentence Dataset, while the ABSTRACT-50S dataset is built upon 500 images from the Abstract Scenes Dataset.

  - Ramakrishna Vedantam, C. Lawrence Zitnick, Devi Parikh.
    *CIDEr: Consensus-based Image Description Evaluation.*
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
    [[PDF]](http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Vedantam_CIDEr_Consensus-Based_Image_2015_CVPR_paper.pdf)

## **3. Video Captioning**

* **Montreal Video Annotation Dataset**
  - Dataset: http://www.mila.umontreal.ca/Home/public-datasets/montreal-video-annotation-dataset
  - PDF: http://arxiv.org/pdf/1503.01070v1.pdf

* **Multilingual Corpus of Robocup Soccer Events** (UT Austin, 2010) [[**Project Page**]](http://www.cs.utexas.edu/~ml/clamp/sportscasting/)

  - This dataset is a multilingual corpus of Robocup soccer events (e.g., kicking and passing) aligned with human-generated commentary in Korean and English. It covers a total of four games, with 2,036 English and 1,999 Korean comments, which are very short in length and limited in vocabulary.

  - David L. Chen, Joohyun Kim, Raymond J. Mooney.
    *Training a Multilingual Sportscaster: Using Perceptual Context to Learn Language.*
    In Journal of Artificial Intelligence Research (JAIR), 37, pages 397-435, 2010.
    [[ACM]](http://dl.acm.org/citation.cfm?id=1861761)
    [[PDF]](https://www.jair.org/media/2962/live-2962-4903-jair.pdf)
    [[JAIR link]](http://www.jair.org/papers/paper2962.html)

* **Short Videos Described with Sentences** (Purdue University, 2013) [[**Project Page**]](http://haonanyu.com/research/acl2013/)

  - This work provides a dataset for learning word meanings from short video clips that are manually annotated with one or more sentences. The dataset consists of 61 video clips, each 3-5 seconds long, annotated with sentences that are highly restricted in terms of grammar and vocabulary.

  - H. Yu and J. M. Siskind.
    *Grounded Language Learning from Video Described with Sentences.*
    In Proceedings of the 51st Annual Meeting of the Association for Computational Linguistics, 2013, *best paper award*.
    [[PDF]](http://haonanyu.com/wp-content/uploads/2013/05/yu13.pdf)

<!--[No longer included]
* Story-Driven Summarization for Egocentric Video.
  Zheng Lu and Kristen Grauman.
  In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Portland, OR, June 2013.
  [[**Project Page**]](http://vision.cs.utexas.edu/projects/egocentric/storydriven.html)
  [[PDF]](http://www.cs.utexas.edu/~grauman/papers/lu-grauman-cvpr2013.pdf)
-->

<!-- [This work is on movie "script" summarization and has nothing to do with videos]
* **Movie Script Summarization as Graph-based Scene Extraction**.
  Philip John Gorinski and Mirella Lapata.
  Proc. Human Language Technologies: The 2015 Annual Conference of the North American Chapter of the ACL (NAACL 2015), pages 1066–1076.
  May 31 – June 5, 2015.
  [[PDF]](http://www.aclweb.org/anthology/N/N15/N15-1113.pdf)
-->

* **Microsoft Research Video Description Corpus (MS VDC)** (UT Austin & MSR, 2011) [[**Project Page**]](http://www.cs.utexas.edu/users/ml/clamp/videoDescription/)
  [[Data]](http://research.microsoft.com/en-us/downloads/38cf15fd-b8df-477e-a4e4-a4680caa75af/default.aspx)

  - MS VDC contains parallel descriptions (85,550 in English) of 2,089 short video snippets (10-25 seconds long). Each description is a one-sentence summary of the action or event in the video, written by Amazon Turkers. Both paraphrase and bilingual alternatives are captured, so the dataset is useful for translation, paraphrasing, and video description purposes.

  - David L. Chen and William B. Dolan.
    *Collecting Highly Parallel Data for Paraphrase Evaluation.*
    Annual Meeting of the Association for Computational Linguistics (ACL), 2011.
    [[PDF]](http://www.cs.utexas.edu/users/ml/papers/chen.acl11.pdf)
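Because MS VDC collects many independent one-sentence descriptions per clip, paraphrase pairs can be derived simply by grouping descriptions by clip ID. The sketch below assumes a generic list of (clip_id, language, description) records rather than the corpus's actual release format, which should be checked against the download.

```python
from collections import defaultdict
from itertools import combinations

# Hypothetical records; the released corpus ships its own metadata format.
records = [
    ("vid42", "English", "a man is slicing an onion"),
    ("vid42", "English", "someone chops onions on a cutting board"),
    ("vid42", "English", "a person cuts up vegetables"),
    ("vid77", "English", "a kitten plays with a ball of yarn"),
]

by_clip = defaultdict(list)
for clip_id, lang, sentence in records:
    if lang == "English":
        by_clip[clip_id].append(sentence)

# Every unordered pair of descriptions of the same clip is a paraphrase candidate.
paraphrase_pairs = [
    pair for sentences in by_clip.values() for pair in combinations(sentences, 2)
]
print(len(paraphrase_pairs))  # 3 pairs from vid42, none from vid77
```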
* **MPII Movie Description Dataset**
  - Dataset: www.mpi-inf.mpg.de/movie-description
  - PDF: http://www.cv-foundation.org/openaccess/content_cvpr_2015/papers/Rohrbach_A_Dataset_for_2015_CVPR_paper.pdf

* **MPII Cooking Activities Dataset** (Max Planck Institute for Informatics, 2012) [[**Project Page**]](https://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/human-activity-recognition/mpii-cooking-activities-dataset/)

  - This video corpus annotates 41 different low-level cooking activities (e.g., "separating eggs" or "cutting veggies") in 212 video segments with an average length of 4.5 minutes. The corpus specifically annotates the objects participating in each activity (e.g., the TAKE OUT activity has [HAND, KNIFE, DRAWER] participants).

  - M. Rohrbach, S. Amin, M. Andriluka and B. Schiele.
    *A Database for Fine Grained Activity Detection of Cooking Activities.*
    IEEE Conference on Computer Vision and Pattern Recognition (CVPR), June 2012.
    [[PDF]](https://www.mpi-inf.mpg.de/fileadmin/inf/d2/amin/rohrbach12cvpr.pdf)

* **TACoS Multi-Level Corpus**
  - Dataset: www.mpi-inf.mpg.de/tacos
  - PDF: https://www.d2.mpi-inf.mpg.de/sites/default/files/rohrbach14gcpr_1.pdf

* **Saarbrücken Corpus of Textually Annotated Scenes (TACoS Corpus)** (Saarland University & Max Planck Institute for Informatics, 2013) [[**Project Page**]](http://www.coli.uni-saarland.de/projects/smile/page.php?id=tacos)

  - The TACoS dataset extends the MPII Cooking Activities Dataset by aligning textual descriptions with video segments.

  - Michaela Regneri, Marcus Rohrbach, Dominikus Wetzel, Stefan Thater, Bernt Schiele, and Manfred Pinkal.
    *Grounding Action Descriptions in Videos.*
    TACL 2013.
    [[PDF]](http://www.aclweb.org/anthology/Q13-1003)

* **Instructional Video Captions** (Google Inc. & University of Rochester, 2015)

  - Some recent works have proposed unsupervised learning algorithms for automatically associating sentences in a document with video segments; a toy alignment sketch follows this entry.
    Malmaud et al. focus on the cooking domain, aligning written recipe steps with videos.
    Naim et al. align natural language instructions for biological experiments in "wet laboratories" with recorded videos of people performing these experiments.

  - References

    - What's Cookin'? Interpreting Cooking Videos using Text, Speech and Vision.
      Jonathan Malmaud, Jonathan Huang, Vivek Rathod, Nick Johnston, Andrew Rabinovich, and Kevin Murphy.
      NAACL 2015.
      [[PDF]](http://www.cs.ubc.ca/~murphyk/Papers/naacl15.pdf)

    - Discriminative Unsupervised Alignment of Natural Language Instructions with Corresponding Video Segments.
      I. Naim, Y. Song, Q. Liu, L. Huang, H. Kautz, J. Luo, and D. Gildea.
      Proc. NAACL 2015.
      [[PDF]](http://acl.cs.qc.edu/~lhuang/papers/naim-video.pdf)
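Both of these works align an ordered list of instruction sentences with an ordered list of video segments. The sketch below is not either paper's model (Malmaud et al. use an HMM over text, speech, and vision cues; Naim et al. a discriminative latent-variable aligner); it is only a toy monotone dynamic-programming alignment over a hypothetical sentence-to-segment similarity function, included to make the problem setup concrete.

```python
def monotone_align(sentences, segments, sim):
    """Assign each sentence one segment so that assignments never move backwards
    in time, maximizing total similarity. Toy DP, not the papers' models."""
    n, m = len(sentences), len(segments)
    NEG = float("-inf")
    best = [[NEG] * m for _ in range(n)]
    back = [[0] * m for _ in range(n)]
    for j in range(m):
        best[0][j] = sim(sentences[0], segments[j])
    for i in range(1, n):
        prefix_best, prefix_arg = NEG, 0  # best of best[i-1][0..j]
        for j in range(m):
            if best[i - 1][j] > prefix_best:
                prefix_best, prefix_arg = best[i - 1][j], j
            best[i][j] = sim(sentences[i], segments[j]) + prefix_best
            back[i][j] = prefix_arg
    # Backtrack from the best final cell to recover the assignment.
    j = max(range(m), key=lambda j: best[n - 1][j])
    alignment = [j]
    for i in range(n - 1, 0, -1):
        j = back[i][j]
        alignment.append(j)
    return list(reversed(alignment))

# Toy similarity: word overlap between a sentence and a segment's (hypothetical) labels.
def overlap(sentence, segment_words):
    return len(set(sentence.lower().split()) & set(segment_words))

sentences = ["crack the eggs", "whisk the eggs", "pour into the pan"]
segments = [{"crack", "eggs"}, {"whisk", "bowl", "eggs"}, {"pan", "pour", "stove"}]
print(monotone_align(sentences, segments, overlap))  # [0, 1, 2]
```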
<!-- [This work uses the Microsoft Research Video Description Corpus, so we will not list it among the corpora; however, we can cite it in the intro/elsewhere as the most recent video description paper.]
* **Translating Videos to Natural Language Using Deep Recurrent Neural Networks**.
  Subhashini Venugopalan, Huijun Xu, Jeff Donahue, Marcus Rohrbach, Raymond Mooney, Kate Saenko.
  North American Chapter of the Association for Computational Linguistics, Denver, Colorado, June 2015. (NAACL-HLT 2015)
  [[PDF]](https://www.cs.utexas.edu/~vsub/pdf/Translating_Videos_NAACL15.pdf)
  [[Code]](https://github.com/vsubhashini/caffe/tree/recurrent/examples/youtube)
-->

<!-- [!! I propose not to include this work after reading the paper. Their dataset is not gold standard, i.e., their alignments are error-prone, so I would list it among approaches rather than among the available datasets.]
* **Book2movie Dataset** (Karlsruhe Institute of Technology)

  - One sentence: This dataset captures the alignment of a movie scene with a book chapter and can be viewed as an example of user-generated captioning, though the movies were created as the images for pre-existing "captions", i.e., novel text.

  - Book2movie: Aligning video scenes with book chapters.
    Tapaswi, Makarand, Martin Bäuml, and Rainer Stiefelhagen.
    Proc. of the IEEE Conference on Computer Vision and Pattern Recognition, 2015.
    [[PDF]](https://cvhci.anthropomatik.kit.edu/~mtapaswi/papers/CVPR2015.pdf)
-->

## **4. Beyond Visual Description Datasets**

* **Visual MadLibs (VM)** (UNC, 2015) [[**Project Page**]](http://tamaraberg.com/visualmadlibs/)

  - VM is a subset of 10,783 images from the MS COCO dataset that aims to go beyond describing which objects are in the image. For a given image, three Amazon Turkers are prompted to complete any of the 12 fill-in-the-blank template questions, such as "when I look at this picture, I feel --", selected automatically based on the image content. The dataset contains a total of 360,001 MadLibs questions and answers.

  - Licheng Yu, Eunbyung Park, Alexander C. Berg, Tamara L. Berg.
    *Visual Madlibs: Fill in the blank Image Generation and Question Answering.*
    arXiv:1506.00278 [cs.CV].
    [[Arxiv]](http://arxiv.org/abs/1506.00278)
    [[PDF]](http://arxiv.org/pdf/1506.00278.pdf)

* **ReferIt Dataset** (UNC, 2014) [[**Project Page**]](http://tamaraberg.com/referitgame/)

  - This dataset contains 130,525 expressions referring to 96,654 distinct objects in 19,894 photographs of natural scenes.

  - Sahar Kazemzadeh, Vicente Ordonez, Mark Matten, Tamara L. Berg.
    *ReferItGame: Referring to Objects in Photographs of Natural Scenes.*
    Empirical Methods in Natural Language Processing (EMNLP) 2014. Doha, Qatar. October 2014.
    [[PDF]](http://tamaraberg.com/papers/referit.pdf)

* **Visual Question Answering (VQA) Dataset** (Virginia Tech & MSR, 2015) [[**Project Page**]](http://www.visualqa.org/)

  - The VQA Dataset was created for the task of open-ended visual question answering, where a system is presented with an image and a free-form natural-language question about the image (e.g., "how many people are in the photo?") and must answer it. The dataset contains both real images and abstract scenes. For the real images, 123,285 images were selected from the MS COCO dataset. To remove the burden of low-level vision tasks, the authors also crowd-sourced 10,000 clip-art abstract scenes, made up of 20 "paperdoll" human models with adjustable limbs and over 100 objects and 31 animals. Amazon Turkers were prompted to create "interesting" questions, resulting in 215,150 questions and 430,920 answers.

  - Stanislaw Antol, Aishwarya Agrawal, Jiasen Lu, Margaret Mitchell, Dhruv Batra, C. Lawrence Zitnick, Devi Parikh.
    *VQA: Visual Question Answering.*
    arXiv:1505.00468 [cs.CL].
    [[Arxiv]](http://arxiv.org/abs/1505.00468)
    [[PDF]](http://arxiv.org/pdf/1505.00468v1.pdf)
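The VQA release is split into a questions file and an annotations (answers) file in JSON. The sketch below joins them by question ID; the file names and field names are assumptions about the release layout and should be verified against the files actually downloaded from the project page.

```python
import json
from collections import defaultdict

# Paths and field names are assumptions about the released JSON layout;
# check them against the actual VQA download before relying on this.
with open("OpenEnded_mscoco_train2014_questions.json") as f:
    questions = json.load(f)["questions"]
with open("mscoco_train2014_annotations.json") as f:
    annotations = json.load(f)["annotations"]

answers_by_qid = {a["question_id"]: a for a in annotations}

qa_per_image = defaultdict(list)
for q in questions:
    ann = answers_by_qid.get(q["question_id"], {})
    answers = [a["answer"] for a in ann.get("answers", [])]
    qa_per_image[q["image_id"]].append((q["question"], answers))

some_image = next(iter(qa_per_image))
for question, answers in qa_per_image[some_image]:
    print(question, "->", answers)
```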
<!-- [Not sure if we keep this]
(16) mQA (might not be released yet?) Baidu - 2015 - captions converted to QA http://arxiv.org/pdf/1505.05612.pdf, NIPS 2015
* **Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering.**
  Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu.
  arXiv:1505.05612 [cs.CV].
  [[Arxiv]](http://arxiv.org/abs/1505.05612)
  [[PDF]](http://arxiv.org/pdf/1505.05612v1.pdf)
-->

* **Toronto COCO-QA Dataset** (University of Toronto, 2015) [[**Project Page**]](http://www.cs.toronto.edu/~mren/imageqa/data/cocoqa/)

  - This is a simpler VQA dataset in which the questions are automatically generated from the image captions of the MS COCO Dataset. It covers 123,287 images in total, with 117,684 questions whose one-word answers concern objects, numbers, colors, or locations.

  - Mengye Ren, Ryan Kiros, Richard Zemel.
    *Image Question Answering: A Visual Semantic Embedding Model and a New Dataset.*
    arXiv:1505.02074 [cs.LG].
    [[Arxiv]](http://arxiv.org/abs/1505.02074)
    [[PDF]](http://arxiv.org/pdf/1505.02074v1.pdf)
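COCO-QA questions are generated automatically from captions. The sketch below shows the flavor of such a rule for color questions only; it is a deliberately crude stand-in, since the actual generation relies on syntactic parsing of the captions.

```python
import re

COLORS = {"red", "blue", "green", "yellow", "black", "white", "brown", "orange"}

def color_question(caption):
    """Very rough stand-in for COCO-QA style generation: if a caption says
    '<color> <noun>', ask about the color. Real systems parse the sentence."""
    tokens = re.findall(r"[a-z]+", caption.lower())
    for color, noun in zip(tokens, tokens[1:]):
        if color in COLORS:
            return f"what is the color of the {noun}?", color
    return None

print(color_question("A red truck parked next to a fire hydrant."))
# ('what is the color of the truck?', 'red')
```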
<!--
* dataset from ("Joint Photo Stream and Blog Post Summarization and Exploration" and "Ranking and Retrieval of Image Sequences from Multiple Paragraph Queries"), CVPR 2015

* Disney dataset (check that it is the same as above/hasn't changed)
-->

* **DAQUAR - DAtaset for QUestion Answering on Real-world images**
  - Dataset: http://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/visual-turing-challenge/
  - PDF: http://arxiv.org/pdf/1410.0210v4.pdf

* **Dataset of Structured Queries and Spatial Relations**
  - Dataset: http://www.mpi-inf.mpg.de/departments/computer-vision-and-multimodal-computing/research/vision-and-language/learning-spatial-relations/
  - PDF: http://arxiv.org/pdf/1411.5190v2.pdf

* **Fill-in-the-blank (FITB) & Visual Paraphrasing (VP) Dataset** (Virginia Tech, 2015) [[**Project Page**]](https://filebox.ece.vt.edu/~linxiao/imagine/)

  - This work leverages semantic common-sense knowledge learned from images in two textual tasks: fill-in-the-blank and visual paraphrasing. The authors propose to "imagine" the scene behind the text as a visual abstraction, and to leverage visual cues from the "imagined" scenes, in addition to textual cues, while answering these questions.

  - Xiao Lin, Devi Parikh.
    *Don't Just Listen, Use Your Imagination: Leveraging Visual Common Sense for Non-Visual Tasks.*
    arXiv:1502.06108 [cs.CV].
    [[Arxiv]](http://arxiv.org/abs/1502.06108)
    [[PDF]](http://arxiv.org/pdf/1502.06108v2.pdf)

* **Freestyle Multilingual Image Question Answering (FM-IQA) Dataset** (Baidu Research & UCLA, 2015)

  - This work focuses on the task of visual question answering, in which a method must provide an answer to a freestyle question about the content of an image. The dataset is constructed based on the MS COCO dataset and contains 120,360 images with 250,569 Chinese question-answer pairs and their corresponding English translations.

  - Haoyuan Gao, Junhua Mao, Jie Zhou, Zhiheng Huang, Lei Wang, Wei Xu.
    *Are You Talking to a Machine? Dataset and Methods for Multilingual Image Question Answering.*
    arXiv:1505.05612 [cs.CV].
    [[Arxiv]](http://arxiv.org/abs/1505.05612)
    [[PDF]](http://arxiv.org/pdf/1505.05612v1.pdf)

* **Disneyland Dataset for Blogs and Photo Streams** (2015)

  - This dataset consists of two resources:
    1. Photo stream data: The authors queried Flickr with keywords related to Disneyland, retrieving photo streams taken by one photographer in a single day, and then manually filtered out streams that were not about Disneyland or contained fewer than 30 images. Overall, they collected 542,217 unique images in 6,026 valid photo streams.
    2. Blog data: They crawled 53,091 unique blog posts and 128,563 pictures from blogspot, wordpress, and typepad by querying Google. Park experts then manually classified the blog posts into three groups: Travelogue, Disney, and Junk. The Travelogue category, which describes stories and events in Disneyland with multiple images, is the focus of this work.

    The dataset can be used for joint alignment of photo streams and blogs, where each side can help the other for summarization and exploration.

  - Gunhee Kim, Seungwhan Moon, Leonid Sigal.
    *Joint Photo Stream and Blog Post Summarization and Exploration.*
    28th IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2015.
    [[PDF]](http://cs.brown.edu/~ls/Publications/cvpr2015_blogstory.pdf)

<!-- Do later
## **5. More Possibilities**
* What, Where, Who? (ICCV 2007) \cite{fei2010whathwherewho}
* Ramanath event-centric paper (with Fei Fei) (ICCV 2013)
* Ontology of events and social settings (Karpathy and Fei Fei 2015)
* VizWiz!
-->