8642 | 8642 | <author><first>Ashutosh</first><last>Modi</last><affiliation>IIT Kanpur</affiliation></author>
8643 | 8643 | <pages>11460-11499</pages>
8644 | 8644 | <abstract>Legal systems worldwide are inundated with exponential growth in cases and documents. There is an imminent need to develop NLP and ML techniques for automatically processing and understanding legal documents to streamline the legal system. However, evaluating and comparing various NLP models designed specifically for the legal domain is challenging. This paper addresses this challenge by proposing IL-TUR: Benchmark for Indian Legal Text Understanding and Reasoning. IL-TUR contains monolingual (English, Hindi) and multi-lingual (9 Indian languages) domain-specific tasks that address different aspects of the legal system from the point of view of understanding and reasoning over Indian legal documents. We present baseline models (including LLM-based) for each task, outlining the gap between models and the ground truth. To foster further research in the legal domain, we create a leaderboard (available at: https://exploration-lab.github.io/IL-TUR/ ) where the research community can upload and compare legal text understanding systems.</abstract>
8645 | | - <url hash="7de7ce82">2024.acl-long.618</url>
| 8645 | + <url hash="7fe016c0">2024.acl-long.618</url>
8646 | 8646 | <bibkey>joshi-etal-2024-il</bibkey>
8647 | 8647 | <doi>10.18653/v1/2024.acl-long.618</doi>
| 8648 | + <revision id="1" href="2024.acl-long.618v1" hash="7de7ce82"/>
| 8649 | + <revision id="2" href="2024.acl-long.618v2" hash="7fe016c0" date="2024-12-01">Added Acknowledgement Section (at the end of Appendix, on Page 39).</revision>
8648 | 8650 | </paper>
8649 | 8651 | <paper id="619">
8650 | 8652 | <title><fixed-case>J</fixed-case>ump<fixed-case>C</fixed-case>oder: Go Beyond Autoregressive Coder via Online Modification</title>
11829 | 11831 | </paper>
11830 | 11832 | <paper id="843">
11831 | 11833 | <title><fixed-case>I</fixed-case>ndic<fixed-case>LLMS</fixed-case>uite: A Blueprint for Creating Pre-training and Fine-Tuning Datasets for <fixed-case>I</fixed-case>ndian Languages</title>
11832 | | - <author><first>Mohammed</first><last>Khan</last><affiliation>Indian Institute of Technology, Madras, Dhirubhai Ambani Institute Of Information and Communication Technology</affiliation></author>
| 11834 | + <author><first>Mohammed Safi Ur Rahman</first><last>Khan</last><affiliation>Indian Institute of Technology, Madras, Dhirubhai Ambani Institute Of Information and Communication Technology</affiliation></author>
11833 | 11835 | <author><first>Priyam</first><last>Mehta</last><affiliation>Gujarat Technological University Ahmedabad</affiliation></author>
11834 | 11836 | <author><first>Ananth</first><last>Sankar</last><affiliation>Annamalai University</affiliation></author>
11835 | 11837 | <author><first>Umashankar</first><last>Kumaravelan</last><affiliation>AI4Bharat</affiliation></author>
11840 | 11842 | <author><first>Anoop</first><last>Kunchukuttan</last><affiliation>Microsoft</affiliation></author>
11841 | 11843 | <author><first>Pratyush</first><last>Kumar</last><affiliation>Indian Institute of Technology Madras, Dhirubhai Ambani Institute Of Information and Communication Technology</affiliation></author>
11842 | 11844 | <author><first>Raj</first><last>Dabre</last><affiliation>National Institute of Information and Communications Technology (NICT), National Institute of Advanced Industrial Science and Technology</affiliation></author>
11843 | | - <author><first>Mitesh</first><last>Khapra</last><affiliation>Indian Institute of Technology, Madras, Dhirubhai Ambani Institute Of Information and Communication Technology</affiliation></author>
| 11845 | + <author><first>Mitesh M.</first><last>Khapra</last><affiliation>Indian Institute of Technology, Madras, Dhirubhai Ambani Institute Of Information and Communication Technology</affiliation></author>
11844 | 11846 | <pages>15831-15879</pages>
11845 | 11847 | <abstract>Despite the considerable advancements in English LLMs, the progress in building comparable models for other languages has been hindered due to the scarcity of tailored resources. Our work aims to bridge this divide by introducing an expansive suite of resources specifically designed for the development of Indic LLMs, covering 22 languages, containing a total of 251B tokens and 74.8M instruction-response pairs. Recognizing the importance of both data quality and quantity, our approach combines highly curated manually verified data, unverified yet valuable data, and synthetic data. We build a clean, open-source pipeline for curating pre-training data from diverse sources, including websites, PDFs, and videos, incorporating best practices for crawling, cleaning, flagging, and deduplication. For instruction-fine tuning, we amalgamate existing Indic datasets, translate/transliterate English datasets into Indian languages, and utilize LLaMa2 and Mixtral models to create conversations grounded in articles from Indian Wikipedia and Wikihow. Additionally, we address toxicity alignment by generating toxic prompts for multiple scenarios and then generate non-toxic responses by feeding these toxic prompts to an aligned LLaMa2 model. We hope that the datasets, tools, and resources released as a part of this work will not only propel the research and development of Indic LLMs but also establish an open-source blueprint for extending such efforts to other languages.</abstract>
11846 | 11848 | <url hash="28f6a48c">2024.acl-long.843</url>