# Data Science Interview Questions And Answers

## Natural Language Processing

Contents

  - RNNs and LSTMs
  - Density Estimation
  - Word Embeddings
  - TF/IDF and Cosine Similarity

  1. RNNs
    1. What’s the motivation for RNN?
    2. What’s the motivation for LSTM?
    3. How would you do dropouts in an RNN?
Answer
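
A minimal PyTorch-style sketch of two common ways dropout is applied to a recurrent model, assuming a stacked LSTM: the built-in `dropout` argument (applied between stacked layers, not inside the recurrence) and a variational-style mask sampled once per sequence and reused at every time step. Module names and hyperparameters are illustrative.

```python
import torch
import torch.nn as nn

class LSTMWithDropout(nn.Module):
    """Illustrative only: dropout between stacked LSTM layers plus a single
    dropout mask reused across all time steps (variational dropout)."""
    def __init__(self, vocab_size, embed_dim=128, hidden_dim=256, p=0.3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        # PyTorch's `dropout` arg drops activations between stacked layers,
        # not inside a single layer's recurrent connections.
        self.lstm = nn.LSTM(embed_dim, hidden_dim, num_layers=2,
                            dropout=p, batch_first=True)
        self.p = p

    def forward(self, x):
        e = self.embed(x)  # (batch, time, embed_dim)
        if self.training:
            # Variational dropout: sample one mask per sequence and reuse it
            # at every time step, instead of resampling per step.
            mask = torch.bernoulli(
                torch.full((e.size(0), 1, e.size(2)), 1 - self.p, device=e.device)
            ) / (1 - self.p)
            e = e * mask
        out, _ = self.lstm(e)
        return out
```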

  2. What’s density estimation? Why do we say a language model is a density estimator?
Answer
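
As a rough illustration of the density-estimation view: a language model assigns a probability to every possible sequence via the chain rule, so it defines a distribution over text. The toy bigram model below (corpus and test sentences invented here) makes that concrete.

```python
from collections import Counter

# Toy corpus; the counts here are purely illustrative.
corpus = "the bird gets the worm the duck gets the worm".split()
unigrams = Counter(corpus)
bigrams = Counter(zip(corpus, corpus[1:]))

def sentence_prob(sentence):
    """P(w1..wn) ~= P(w1) * prod_i P(w_i | w_{i-1}) under a bigram model.
    Summed over all possible sentences this defines a distribution,
    which is why a language model is a density (mass) estimator over text."""
    words = sentence.split()
    p = unigrams[words[0]] / len(corpus)
    for prev, cur in zip(words, words[1:]):
        p *= bigrams[(prev, cur)] / unigrams[prev]
    return p

print(sentence_prob("the bird gets the worm"))
print(sentence_prob("the worm gets the duck"))  # unseen bigram -> 0 without smoothing
```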

  3. Language models are often referred to as unsupervised learning, but some argue that their mechanism isn't that different from supervised learning. What are your thoughts?
Answer

  4. Word embeddings.
    1. Why do we need word embeddings?
    2. What’s the difference between count-based and prediction-based word embeddings?
    3. Most word embedding algorithms are based on the assumption that words that appear in similar contexts have similar meanings. What are some of the problems with context-based word embeddings?
Answer
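
A small, purely illustrative sketch contrasting the two families: a count-based embedding built from a co-occurrence matrix factorized with SVD, versus a prediction-based one that learns vectors by predicting context words (skip-gram, shown here only as a commented gensim call). The corpus and dimensions are made up.

```python
import numpy as np

# Toy corpus; everything here is illustrative.
sentences = [["the", "duck", "eats", "the", "worm"],
             ["the", "bird", "eats", "the", "worm"]]
vocab = sorted({w for s in sentences for w in s})
idx = {w: i for i, w in enumerate(vocab)}

# Count-based: build a word-by-word co-occurrence matrix (window = 1)
# and factorize it with SVD to get dense vectors.
C = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for i, w in enumerate(s):
        for j in (i - 1, i + 1):
            if 0 <= j < len(s):
                C[idx[w], idx[s[j]]] += 1

U, S, _ = np.linalg.svd(C)
count_vectors = U[:, :2] * S[:2]  # 2-d count-based embeddings

# Prediction-based: a skip-gram model learns vectors by predicting
# context words from the centre word (e.g. gensim's Word2Vec with sg=1).
# from gensim.models import Word2Vec
# w2v = Word2Vec(sentences, vector_size=2, window=1, min_count=1, sg=1)

print(dict(zip(vocab, count_vectors.round(2))))
```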

  5. Given 5 documents:

     D1: The duck loves to eat the worm
     D2: The worm doesn’t like the early bird
     D3: The bird loves to get up early to get the worm
     D4: The bird gets the worm from the early duck
     D5: The duck and the birds are so different from each other but one thing they have in common is that they both get the worm
    
    1. Given a query Q: “The early bird gets the worm”, find the two top-ranked documents according to the TF/IDF rank using the cosine similarity measure and the term set {bird, duck, worm, early, get, love}. Are the top-ranked documents relevant to the query?
    2. Assume that document D5 goes on to tell more about the duck and the bird and mentions “bird” three times, instead of just once. What happens to the rank of D5? Is this change in the ranking of D5 a desirable property of TF/IDF? Why?
Answer
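
One possible way to work through the first part, sketched with one common TF-IDF variant (raw term frequency and log(N/df) IDF) and crude prefix matching so that inflected forms like "gets", "loves", and "birds" map onto the given term set. Other weighting and stemming choices change the exact scores, so treat the numbers as illustrative rather than the official answer.

```python
import numpy as np

terms = ["bird", "duck", "worm", "early", "get", "love"]
docs = {
    "D1": "the duck loves to eat the worm",
    "D2": "the worm doesn't like the early bird",
    "D3": "the bird loves to get up early to get the worm",
    "D4": "the bird gets the worm from the early duck",
    "D5": ("the duck and the birds are so different from each other "
           "but one thing they have in common is that they both get the worm"),
}
query = "the early bird gets the worm"

def tf(text):
    # Crude stemming: count tokens whose prefix matches a term.
    tokens = text.lower().split()
    return np.array([sum(t.startswith(term) for t in tokens) for term in terms], float)

doc_tf = {name: tf(text) for name, text in docs.items()}
# IDF = log(N / df); smoothed or +1 variants give different numbers.
df = sum((v > 0).astype(float) for v in doc_tf.values())
idf = np.log(len(docs) / df)

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

q_vec = tf(query) * idf
ranking = sorted(((cosine(q_vec, v * idf), name) for name, v in doc_tf.items()),
                 reverse=True)
print(ranking)  # top two entries are the top-ranked documents
```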

  6. Your client wants you to train a language model on their dataset, but the dataset is very small, with only about 10,000 tokens. Would you use an n-gram or a neural language model?
Answer

  7. For n-gram language models, does increasing the context length (n) improve the model’s performance? Why or why not?
Answer
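
A toy experiment (corpus invented here) illustrating the data-sparsity trade-off: as n grows, more of the n-grams in held-out text were never seen in training, so counts become unreliable even though a larger context is in principle more informative.

```python
# Illustrative sparsity check: fraction of held-out n-grams unseen in training.
train = ("the bird gets the worm the duck loves the worm "
         "the early bird gets up early").split()
test = "the early duck gets the worm".split()

def ngrams(tokens, n):
    return list(zip(*(tokens[i:] for i in range(n))))

for n in range(1, 5):
    seen = set(ngrams(train, n))
    held_out = ngrams(test, n)
    unseen = sum(g not in seen for g in held_out) / len(held_out)
    print(f"n={n}: {unseen:.0%} of test n-grams unseen in training")
```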

  8. What problems might we encounter when using softmax as the last layer for word-level language models? How do we fix it?
Answer
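
A hedged sketch of one common mitigation. The core cost is that a full softmax normalizes over the entire vocabulary at every step; PyTorch's `nn.AdaptiveLogSoftmaxWithLoss` is one existing implementation of an adaptive (clustered) softmax, with the cutoffs below chosen arbitrarily for illustration.

```python
import torch
import torch.nn as nn

vocab_size, hidden_dim = 100_000, 512

# A full softmax materializes a (hidden_dim x vocab_size) projection and
# normalizes over every word at every step -- the dominant cost for large vocabularies.
full_softmax = nn.Linear(hidden_dim, vocab_size)

# One fix: adaptive softmax gives frequent words a cheap head cluster and
# factorizes rare words into smaller tail clusters.
adaptive = nn.AdaptiveLogSoftmaxWithLoss(
    hidden_dim, vocab_size, cutoffs=[2_000, 10_000, 50_000])

hidden = torch.randn(32, hidden_dim)           # a batch of hidden states
targets = torch.randint(0, vocab_size, (32,))  # next-word indices
out = adaptive(hidden, targets)                # .output = target log-probs, .loss = NLL
print(out.loss)
```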

  9. What's the Levenshtein distance between the two words “doctor” and “bottle”?
Answer
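
A standard dynamic-programming implementation, useful for checking the answer by hand; this is a textbook edit-distance routine rather than anything specific to this repository.

```python
def levenshtein(a: str, b: str) -> int:
    """Textbook dynamic programme: prev[j] / curr[j] hold the edit distance
    between a prefix of `a` and the first j characters of `b`."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, start=1):
        curr = [i]
        for j, cb in enumerate(b, start=1):
            cost = 0 if ca == cb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution or match
        prev = curr
    return prev[-1]

print(levenshtein("doctor", "bottle"))
```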

  10. BLEU is a popular metric for machine translation. What are the pros and cons of BLEU?
Answer
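
For reference, a tiny example of computing sentence-level BLEU with NLTK; the reference and hypothesis sentences are made up. The need for smoothing on short sentences already hints at one of BLEU's weaknesses.

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = [["the", "early", "bird", "gets", "the", "worm"]]
hypothesis = ["the", "early", "bird", "catches", "the", "worm"]

# BLEU = brevity penalty * geometric mean of modified 1..4-gram precisions.
# Smoothing is needed here because some higher-order n-grams have zero matches.
score = sentence_bleu(reference, hypothesis,
                      smoothing_function=SmoothingFunction().method1)
print(round(score, 3))
```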

  11. On the same test set, language model A has a character-level entropy of 2 while language model B has a word-level entropy of 6. Which model would you choose to deploy?
Answer
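
The two entropies are reported per different units, so a rough way to compare them is to convert bits per character into bits per word using an average word length. That average is corpus-dependent, so the arithmetic below is only an illustration under an assumed ~5 characters per word plus a space.

```python
# Rough unit conversion; the average word length is an assumption, not a given.
char_entropy = 2.0          # bits per character, model A
word_entropy = 6.0          # bits per word, model B
avg_chars_per_word = 5 + 1  # assumed average word length plus the space

model_a_per_word = char_entropy * avg_chars_per_word
print(f"Model A ~= {model_a_per_word} bits/word vs Model B = {word_entropy} bits/word")
```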

  12. Imagine you have to train a NER model on the text corpus A. Would you make A case-sensitive or case-insensitive?
Answer

  13. Why does removing stop words sometimes hurt a sentiment analysis model?
Answer

  14. Many models use relative position embedding instead of absolute position embedding. Why is that?
Answer

  15. Some NLP models use the same weights for both the embedding layer and the layer just before softmax. What’s the purpose of this?
Answer
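
A minimal PyTorch-style sketch of weight tying, assuming the hidden size equals the embedding size so the shapes line up; module names here are illustrative.

```python
import torch.nn as nn

class TiedLM(nn.Module):
    """Illustrative weight tying: the pre-softmax projection shares its
    (vocab_size x embed_dim) matrix with the input embedding."""
    def __init__(self, vocab_size, embed_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, embed_dim, batch_first=True)
        self.decoder = nn.Linear(embed_dim, vocab_size, bias=False)
        self.decoder.weight = self.embed.weight  # the tied parameter

    def forward(self, x):
        h, _ = self.rnn(self.embed(x))
        return self.decoder(h)  # logits over the vocabulary
```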