
# Probing Pre-trained Language Models

This project aims to ascertain what linguistic information is captured by BERT-encoded representations.
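To make the idea concrete, here is a minimal probing sketch (an illustration, not this repo's actual code): a pre-trained BERT encoder is frozen and a small linear classifier is trained on top of its token representations for a task such as POS tagging. The model name, probe shape, and label count are assumptions for the example.

```python
# Minimal probing sketch (illustrative): freeze BERT, train only a linear probe.
import torch.nn as nn
from transformers import BertModel, BertTokenizerFast

class LinearProbe(nn.Module):
    def __init__(self, num_labels: int):
        super().__init__()
        self.encoder = BertModel.from_pretrained("bert-base-uncased")
        for p in self.encoder.parameters():
            p.requires_grad = False  # encoder stays frozen; only the probe learns
        self.classifier = nn.Linear(self.encoder.config.hidden_size, num_labels)

    def forward(self, input_ids, attention_mask):
        hidden = self.encoder(input_ids, attention_mask=attention_mask).last_hidden_state
        return self.classifier(hidden)  # per-token logits, e.g. one POS tag per token

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")
batch = tokenizer(["BERT encodes surprisingly rich syntax"], return_tensors="pt")
probe = LinearProbe(num_labels=17)  # 17 = Universal POS tag set, as an example
logits = probe(batch["input_ids"], batch["attention_mask"])
```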

## Getting started

The steps below assume Linux or Git Bash.

1. Clone the repo: `git clone https://github.com/JlKmn/ProbingPretrainedLM.git && cd ProbingPretrainedLM`
2. Create a virtualenv: `python -m venv env`
3. Activate the virtualenv: `source env/bin/activate`
4. Install the requirements: `pip install -r requirements.txt`
5. Log into wandb: `wandb login`
6. Start training: `python run.py --model 1 --dataset pos --epochs 5` (a sketch of how these flags might be parsed follows the list)
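The flag semantics above are defined by run.py itself; as a rough sketch of how such a command line might be parsed (hypothetical, the real script may differ):

```python
# Hypothetical sketch of the step-6 command line; run.py's real interface may differ.
import argparse

parser = argparse.ArgumentParser(description="Probe BERT representations")
parser.add_argument("--model", type=int, required=True,
                    help="index of the model/probe variant to train")
parser.add_argument("--dataset", choices=["pos", "ner"], required=True,
                    help="probing task: POS tagging or named entity recognition")
parser.add_argument("--epochs", type=int, default=50,
                    help="number of training epochs")
args = parser.parse_args()  # e.g. python run.py --model 1 --dataset pos --epochs 5
```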

## Results

### Hyperparameters

| Epochs | Batch size | Learning rate |
| ------ | ---------- | ------------- |
| 50     | 64         | 0.01          |

The LinearBert model is an exception: it was trained with an initial learning rate of 0.0001.
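In PyTorch terms, that learning-rate exception might be wired up like this (an assumption-laden sketch: the optimizer type and the class-name check are illustrative, not taken from the repo):

```python
# Illustrative learning-rate selection matching the table above; the repo's
# actual optimizer choice and model naming are assumptions.
import torch

def make_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    lr = 1e-4 if type(model).__name__ == "LinearBert" else 1e-2
    return torch.optim.SGD(model.parameters(), lr=lr)
```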

### POS

### NER