
sentiment predictions are not consistent #25

Open

fanyuyu opened this issue Nov 16, 2021 · 1 comment

Comments
@fanyuyu

fanyuyu commented Nov 16, 2021

I am using your sentiment model to predict sentences from calls. There are two sentences:

  • The probability of neutral is .99 for the sentence 'Thanks, Martin.'
  • The probability of positive is .94 for the sentence 'Thank you.'

I am trying to understand why it gives quite different labels. Initially I thought it was label confusion, but you answered that question in Issue #17.
Could you explain more about how you fine-tuned the model for analyst tones, and what data you used for the classification model? Thank you!

@fanyuyu fanyuyu changed the title sentiment labels are not consistent sentiment predictions are not consistent Nov 18, 2021
@yya518
Owner

yya518 commented Jun 2, 2022

I cannot reproduce the issue that you described. I got Neutral for both sentences:

from transformers import BertTokenizer, BertForSequenceClassification, pipeline

# Load the FinBERT tone model and its tokenizer from the Hugging Face Hub
finbert = BertForSequenceClassification.from_pretrained('yiyanghkust/finbert-tone', num_labels=3)
tokenizer = BertTokenizer.from_pretrained('yiyanghkust/finbert-tone')

# Classify both sentences with a text-classification pipeline
nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)
results = nlp(['Thank you.',
               'Thanks, Martin.'])
print(results)
# [{'label': 'Neutral', 'score': 0.8304300308227539}, {'label': 'Neutral', 'score': 0.9677242040634155}]
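
One plausible explanation for the discrepancy, not confirmed anywhere in this thread, is that the hosted yiyanghkust/finbert-tone weights were updated on the Hugging Face Hub between the two runs. If reproducibility matters, the model can be pinned to a specific Hub revision via the `revision` parameter of `from_pretrained`; the commit hash below is a hypothetical placeholder.

from transformers import BertTokenizer, BertForSequenceClassification, pipeline

# Pin the model to a fixed Hub revision so later updates to the model
# repository cannot change the predictions. The hash below is a
# hypothetical placeholder -- substitute a real commit hash from the
# model's "Files and versions" tab on the Hugging Face Hub.
REVISION = "<commit-hash>"

finbert = BertForSequenceClassification.from_pretrained(
    'yiyanghkust/finbert-tone', num_labels=3, revision=REVISION)
tokenizer = BertTokenizer.from_pretrained(
    'yiyanghkust/finbert-tone', revision=REVISION)

nlp = pipeline("text-classification", model=finbert, tokenizer=tokenizer)
print(nlp(['Thank you.', 'Thanks, Martin.']))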
