
rudysemola/SQuAD-NLP-Model


ELECTRA and state-of-the-art transformer-based models for the Stanford Question Answering task

We apply transformer-based neural models such as BERT, DistilBERT, and ELECTRA to question-answering tasks for the [HLT] project.

Abstract

The Stanford Question Answering Dataset (SQuAD) version 2.0 allowed us to investigate the performance of transformer-based models on reading-comprehension tasks. As a starting point we introduce BERT as a baseline model, followed by DistilBERT and ELECTRA. Our main objective is to propose a journey through transformer-based models: starting from a solid baseline, we obtain competitive results with the latter two while requiring fewer computational resources, yielding fast-to-deploy DistilBERT and ELECTRA models fine-tuned for the SQuAD task.
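All three models are extractive question-answering systems: given a context and a question, the fine-tuned model emits per-token start and end logits, and the answer is the highest-scoring valid token span. As a minimal sketch of that decoding step (the function name, logit values, and `max_answer_len` default are illustrative, not taken from this repository):

```python
# Sketch of answer-span decoding for extractive QA (BERT / DistilBERT /
# ELECTRA fine-tuned on SQuAD): the model outputs one start logit and one
# end logit per context token; we pick the best span with start <= end.

def best_answer_span(start_logits, end_logits, max_answer_len=30):
    """Return (start_idx, end_idx, score) for the highest-scoring span.

    score is the sum of the start and end logits, a common span-ranking
    heuristic; max_answer_len caps the span length (assumed default).
    """
    best = (0, 0, float("-inf"))
    for i, s in enumerate(start_logits):
        # Only consider ends at or after the start, within the length cap.
        for j in range(i, min(i + max_answer_len, len(end_logits))):
            score = s + end_logits[j]
            if score > best[2]:
                best = (i, j, score)
    return best

# Toy logits: token 2 is the most likely start, token 3 the most likely end.
start = [0.1, 0.2, 5.0, 0.3, 0.1]
end = [0.1, 0.2, 0.3, 4.0, 0.2]
print(best_answer_span(start, end))  # → (2, 3, 9.0)
```

In SQuAD 2.0 this span score is additionally compared against a "no answer" score (typically the logits of the `[CLS]` position) to decide whether the question is answerable; that comparison is omitted here for brevity.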

(Main) Frameworks

  • HuggingFace
  • PyTorch
  • Microsoft Azure
  • Flask
