Commit 8871ba9

Update ACL tutorials

mjpost committed Oct 29, 2024
1 parent 601aef2 commit 8871ba9
Showing 1 changed file with 21 additions and 21 deletions.
data/xml/2024.acl.xml (42 changes: 21 additions & 21 deletions)
@@ -14319,11 +14319,11 @@
<address>Bangkok, Thailand</address>
<month>August</month>
<year>2024</year>
-<url hash="bfe13ead">2024.acl-tutorials</url>
+<url hash="ccd1dd1c">2024.acl-tutorials</url>
<venue>acl</venue>
</meta>
<frontmatter>
-<url hash="26fcc4e2">2024.acl-tutorials.0</url>
+<url hash="7ce99bfa">2024.acl-tutorials.0</url>
<bibkey>acl-2024-tutorials</bibkey>
</frontmatter>
<paper id="1">
@@ -14334,8 +14334,8 @@
<author><first>Jixing</first><last>Li</last></author>
<author><first>Marie-Francine</first><last>Moens</last></author>
<pages>1-2</pages>
-<abstract>Computational linguistics (CL) has witnessed tremendous advancementsin recent years, with models such as large language models demonstratingexceptional performance in various natural language processing tasks. Theseadvancements highlight their potential to help understand brain languageprocessing, especially through the lens of brain encoding and decoding.Brain encoding involves the mapping of linguistic stimuli to brain activity,while brain decoding is the process of reconstructing linguistic stimulifrom observed brain activities. CL models that excel at capturing andmanipulating linguistic features are crucial for mapping linguistic stimulito brain activities and vice versa. Brain encoding and decoding have vastapplications, from enhancing human-computer interaction to developingassistive technologies for individuals with communication impairments. Thistutorial will focus on elucidating how computational linguistics canfacilitate brain encoding and decoding. We will delve into the principlesand practices of using computational linguistics methods for brain encodingand decoding. We will also discuss the challenges and future directions ofbrain encoding and decoding. Through this tutorial, we aim to provide acomprehensive and informative overview of the intersection betweencomputational linguistics and cognitive neuroscience, inspiring futureresearch in this exciting and rapidly evolving field.</abstract>
-<url hash="1f40b0da">2024.acl-tutorials.1</url>
+<abstract>Computational linguistics (CL) has witnessed tremendous advancements in recent years, with models such as large language models demonstrating exceptional performance in various natural language processing tasks. These advancements highlight their potential to help understand brain language processing, especially through the lens of brain encoding and decoding. Brain encoding involves the mapping of linguistic stimuli to brain activity, while brain decoding is the process of reconstructing linguistic stimuli from observed brain activities. CL models that excel at capturing and manipulating linguistic features are crucial for mapping linguistic stimuli to brain activities and vice versa. Brain encoding and decoding have vast applications, from enhancing human-computer interaction to developing assistive technologies for individuals with communication impairments. This tutorial will focus on elucidating how computational linguistics can facilitate brain encoding and decoding. We will delve into the principles and practices of using computational linguistics methods for brain encoding and decoding. We will also discuss the challenges and future directions of brain encoding and decoding.</abstract>
+<url hash="8c531153">2024.acl-tutorials.1</url>
<bibkey>sun-etal-2024-computational</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.1</doi>
</paper>
@@ -14346,8 +14346,8 @@
<author><first>Claire</first><last>Gardent</last></author>
<author><first>Wei</first><last>Xu</last></author>
<pages>3-4</pages>
-<abstract>In this tutorial, we focus on text-to-text generation, a class ofnatural language generation (NLG) tasks, that takes a piece of text as inputand then generates a revision that is improved according to some specificcriteria (e.g., readability or linguistic styles), while largely retainingthe original meaning and the length of the text. This includes many usefulapplications, such as text simplification, paraphrase generation, styletransfer, etc. In contrast to text summarization and open-ended textcompletion (e.g., story), the text-to-text generation tasks we discuss inthis tutorial are more constrained in terms of semantic consistency andtargeted language styles. This level of control makes these tasks idealtestbeds for studying the ability of models to generate text that is bothsemantically adequate and stylistically appropriate. Moreover, these tasksare interesting from a technical standpoint, as they require complexcombinations of lexical and syntactical transformations, stylistic control,and adherence to factual knowledge, – all at once. With a special focus ontext simplification and revision, this tutorial aims to provide an overviewof the state-of-the-art natural language generation research from four majoraspects – Data, Models, Human-AI Collaboration, and Evaluation – and todiscuss and showcase a few significant and recent advances: (1) the use ofnon-retrogressive approaches; (2) the shift from fine-tuning to promptingwith large language models; (3) the development of new learnable metric andfine-grained human evaluation framework; (4) a growing body of studies anddatasets on non-English languages; (5) the rise of HCI+NLP+Accessibilityinterdisciplinary research to create real-world writing assistant systems.</abstract>
-<url hash="789aa287">2024.acl-tutorials.2</url>
+<abstract>Automatic and Human-AI Interactive Text Generation focuses on text-to-text generation, a class of natural language generation (NLG) tasks, where the goal is to revise an input text according to specific criteria, such as readability or linguistic styles. Applications include text simplification, paraphrase generation, and style transfer. This tutorial provides an overview of the state-of-the-art in text generation, with a focus on recent advances in models, data, human-AI collaboration, and evaluation methods.</abstract>
+<url hash="982e0641">2024.acl-tutorials.2</url>
<bibkey>dou-etal-2024-automatic</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.2</doi>
</paper>
@@ -14357,45 +14357,45 @@
<author><first>Ryan</first><last>Cotterell</last></author>
<author><first>Anej</first><last>Svete</last></author>
<pages>5-5</pages>
-<abstract>Language models (LMs) are currently at the forefront of NLP researchdue to their remarkable versatility across diverse tasks. However, a largegap exists between their observed capabilities and the explanations proposedby established formal machinery. To motivate a better theoreticalcharacterization of LMs’ abilities and limitations, this tutorial aims toprovide a comprehensive introduction to a specific framework for formalanalysis of modern LMs using tools from formal language theory (FLT). Wepresent how tools from FLT can be useful in understanding the inner workingsand predicting the capabilities of modern neural LM architectures. We willcover recent results using FLT to make precise and practically relevantstatements about LMs based on recurrent neural networks and transformers byrelating them to formal devices such as finite-state automata, Turingmachines, and analog circuits. Altogether, the results covered in thistutorial will allow us to make precise statements and explanations about theobserved as well as predicted behaviors of LMs, as well as providetheoretically motivated suggestions on the aspects of the architectures thatcould be improved.</abstract>
-<url hash="adedb189">2024.acl-tutorials.3</url>
+<abstract>Computational Expressivity of Neural Language Models aims to provide a comprehensive introduction to the formal analysis of modern language models (LMs) using tools from formal language theory (FLT). The tutorial explains how FLT can help us understand the inner workings of LMs, including recurrent neural networks and transformers, and predict their behavior in natural language processing tasks.</abstract>
+<url hash="438ff09f">2024.acl-tutorials.3</url>
<bibkey>butoi-etal-2024-computational</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.3</doi>
</paper>
<paper id="4">
-<title>Presentation Matters: How to Communicate Science in the <fixed-case>NLP</fixed-case> Venues and in the Wild?</title>
+<title>Presentation Matters: How to Communicate Science in the <fixed-case>NLP</fixed-case> Venues and in the Wild</title>
<author><first>Sarvnaz</first><last>Karimi</last></author>
<author><first>Cecile</first><last>Paris</last></author>
<author><first>Gholamreza</first><last>Haffari</last></author>
<pages>6-7</pages>
-<abstract>Each year a large number of early career researchers join the NLP/Computational Linguistics community, with most starting by presenting their research in the *ACL conferences and workshops. While writing a paper that has made it to these venues is one important step, what comes with communicating the outcome is equally important and sets the path to impact of a research outcome. In addition, not all PhD candidates get the chance of being trained for their presentation skills. Research methods courses are not all of the same quality and may not cover scientific communications, and certainly not all are tailored to the NLP community. We are proposing an introductory tutorial that covers a range of different communication skills, including writing, oral presentation (posters and demos), and social media presence. This is to fill in the gap for the researchers who may not have access to research methods courses or other mentors who could help them acquire such skills. The interactive nature of such a tutorial would allow attendees to ask questions and clarifications which would not be possible from reading materials alone.</abstract>
-<url hash="cb503c09">2024.acl-tutorials.4</url>
+<abstract>Presentation Matters: How to Communicate Science in the NLP Venues and in the Wild. This tutorial is aimed at early-career researchers, covering a range of communication skills such as writing, oral presentations, and social media presence. The tutorial is interactive, allowing attendees to ask questions and gain insights that may not be available from written materials alone.</abstract>
+<url hash="b5d0c212">2024.acl-tutorials.4</url>
<bibkey>karimi-etal-2024-presentation</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.4</doi>
</paper>
<paper id="5">
<title>Vulnerabilities of Large Language Models to Adversarial Attacks</title>
<author><first>Yu</first><last>Fu</last></author>
<author><first>Erfan</first><last>Shayegan</last></author>
-<author><first>Md.</first><last>Mamun Al Abdullah</last></author>
+<author><first>Md. Mamun Al</first><last>Abdullah</last></author>
<author><first>Pedram</first><last>Zaree</last></author>
<author><first>Nael</first><last>Abu-Ghazaleh</last></author>
<author><first>Yue</first><last>Dong</last></author>
<pages>8-9</pages>
-<abstract>This tutorial serves as a comprehensive guide on the vulnerabilities of Large Language Models (LLMs) to adversarial attacks, an interdisciplinary field that blends perspectives from Natural Language Processing (NLP) and Cybersecurity. As LLMs become more complex and integrated into various systems, understanding their security attributes is crucial. However, current research indicates that even safety-aligned models are not impervious to adversarial attacks that can result in incorrect or harmful outputs. The tutorial first lays the foundation by explaining safety-aligned LLMs and concepts in cybersecurity. It then categorizes existing research based on different types of learning architectures and attack methods. We highlight the existing vulnerabilities of unimodal LLMs, multi-modal LLMs, and systems that integrate LLMs, focusing on adversarial attacks designed to exploit weaknesses and mislead AI systems. Finally, the tutorial delves into the potential causes of these vulnerabilities and discusses potential defense mechanisms.</abstract>
-<url hash="383af55e">2024.acl-tutorials.5</url>
+<abstract>Vulnerabilities of Large Language Models to Adversarial Attacks. This tutorial explores the vulnerabilities of large language models (LLMs) in the face of adversarial attacks, blending perspectives from natural language processing and cybersecurity. It reviews safety-aligned LLMs, different attack architectures, and potential defenses to safeguard AI systems.</abstract>
+<url hash="6266da91">2024.acl-tutorials.5</url>
<bibkey>fu-etal-2024-vulnerabilities</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.5</doi>
</paper>
<paper id="6">
-<title>Detecting Machine-Generated Text: Techniques and Challenges</title>
-<author><first>Li</first><last>Gao</last></author>
-<author><first>Wenhan</first><last>Xiong</last></author>
-<author><first>Taewoo</first><last>Kim</last></author>
+<title>Watermarking for Large Language Models</title>
+<author><first>Xuandong</first><last>Zhao</last></author>
+<author><first>Yu-Xiang</first><last>Wang</last></author>
+<author><first>Lei</first><last>Li</last></author>
<pages>10-11</pages>
-<abstract>As AI-generated text increasingly resembles human-written content, the ability to detect machine-generated text becomes crucial in many applications. This tutorial aims to provide a comprehensive overview of text detection techniques, focusing on machine-generated text and deepfakes. We will discuss various methods for distinguishing between human-written and machine-generated text, including statistical methods, neural network-based techniques, and hybrid approaches. The tutorial will also cover the challenges in the detection process, such as dealing with evolving models and maintaining robustness against adversarial attacks. By the end of the session, attendees will have a solid understanding of current techniques and future directions in the field of text detection.</abstract>
-<url hash="e3a066a0">2024.acl-tutorials.6</url>
-<bibkey>gao-etal-2024-detecting</bibkey>
+<abstract>Watermarking for Large Language Models. This tutorial provides an in-depth exploration of text watermarking, a subfield of linguistic steganography, to embed hidden messages within a text passage. It covers the fundamentals of watermarking, challenges in identifying AI-generated text, and future directions for this field.</abstract>
+<url hash="8f4901ef">2024.acl-tutorials.6</url>
+<bibkey>zhao-etal-2024-watermarking</bibkey>
<doi>10.18653/v1/2024.acl-tutorials.6</doi>
</paper>
</volume>
