Using a Pretrained BERT Model for Text Classification
BERT stands for Bidirectional Encoder Representations from Transformers. It is designed to pre-train deep bidirectional representations from unlabeled text by jointly conditioning on both left and right context.
Here we use the BERT model to produce a pretrained embedding vector for each review. Using transfer learning reduces the training time and resources required, and building our model on top of a pretrained model gives us strong performance.
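A minimal sketch of this idea, assuming the Hugging Face transformers library and the bert-base-uncased checkpoint (the notebook's exact model, checkpoint, and classifier head may differ):

```python
import torch
from transformers import BertTokenizer, BertModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
bert = BertModel.from_pretrained("bert-base-uncased")
bert.eval()  # keep the pretrained weights frozen; only a small head is trained

def embed(review: str) -> torch.Tensor:
    """Return the 768-dim [CLS] embedding for one review."""
    inputs = tokenizer(review, truncation=True, max_length=128, return_tensors="pt")
    with torch.no_grad():
        outputs = bert(**inputs)
    return outputs.last_hidden_state[:, 0, :]  # [CLS] token representation

# Illustrative classifier head built on top of the frozen BERT embedding
classifier = torch.nn.Sequential(
    torch.nn.Linear(768, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, 1),
    torch.nn.Sigmoid(),  # probability that the review is positive
)

embedding = embed("The product arrived quickly and works great.")
prob_positive = classifier(embedding)
```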
From the data displayed and the predictions made, we can see that the model produces reasonable predictions for the provided reviews.
On the 1-5 star scale, a review is labeled positive only if its star rating is above 3.
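A small sketch of this labeling rule, assuming the reviews live in a pandas DataFrame with a "stars" column (the column name is an assumption):

```python
import pandas as pd

df = pd.DataFrame({"stars": [1, 2, 3, 4, 5]})
df["positive"] = (df["stars"] > 3).astype(int)  # positive only when rating > 3
print(df)
#    stars  positive
# 0      1         0
# 1      2         0
# 2      3         0
# 3      4         1
# 4      5         1
```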
The Jupyter notebook is readable and describes the workflow step by step.