BERT-Deep-Learning

Using the pretrained BERT model for text classification

BERT stands for Bidirectional Encoder Representations from Transformers. It is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context.
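
As a quick illustration of this bidirectional conditioning (not code from this repository), BERT's masked-language-model head predicts a masked word using context from both sides of the gap:

```python
from transformers import pipeline

# BERT fills in [MASK] by conditioning on both the left context
# ("The reviews for this product were") and the right context
# ("so I bought it").
fill = pipeline("fill-mask", model="bert-base-uncased")
for pred in fill("The reviews for this product were [MASK], so I bought it."):
    print(pred["token_str"], round(pred["score"], 3))
```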

Here we use BERT to produce a pretrained embedding vector for each review. Transfer learning reduces the training time and resources required, and building our classifier on top of a pretrained model gives remarkably good performance. A sketch of this setup follows.
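
Below is a minimal sketch of that setup using the Hugging Face transformers library. The checkpoint name, head sizes, and the choice of the [CLS] embedding are assumptions for illustration; the notebook's exact architecture may differ.

```python
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
bert = AutoModel.from_pretrained("bert-base-uncased")

# Freeze BERT so only the small classifier head is trained
# (transfer learning: this is what saves time and resources).
for param in bert.parameters():
    param.requires_grad = False

# Illustrative classification head on top of the BERT embedding.
classifier = nn.Sequential(
    nn.Linear(bert.config.hidden_size, 64),
    nn.ReLU(),
    nn.Dropout(0.1),
    nn.Linear(64, 1),  # one logit: positive vs. negative review
)

def embed(reviews):
    """Return one pooled BERT vector per review string."""
    batch = tokenizer(reviews, padding=True, truncation=True,
                      return_tensors="pt")
    with torch.no_grad():
        out = bert(**batch)
    return out.last_hidden_state[:, 0]  # [CLS] token embedding

logits = classifier(embed(["Great product, works as advertised!"]))
probs = torch.sigmoid(logits)  # probability the review is positive
```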

The sample data and predictions displayed in the notebook give a good sense of how well the model classifies the provided reviews.

Reviews are rated on a scale of 1-5 stars; a review is labelled positive only if its star rating is above 3 (see the sketch below).
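
Assuming the ratings sit in a pandas DataFrame with a hypothetical stars column (the notebook's column names may differ), that labelling rule is a one-line mapping:

```python
import pandas as pd

# Hypothetical example data; column names are illustrative.
df = pd.DataFrame({"review": ["Loved it", "Terrible", "It's okay"],
                   "stars": [5, 1, 3]})

# Positive only when the star rating exceeds 3;
# 3 stars and below count as negative.
df["label"] = (df["stars"] > 3).astype(int)
```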

A readable Jupyter notebook is included; the step-by-step workflow is described there.
