Commit 542024c
update comments
Signed-off-by: karthik2804 <karthik.ganeshram@fermyon.com>
karthik2804 committed Sep 17, 2024
1 parent 9c61749
Showing 1 changed file with 5 additions and 7 deletions.
crates/llm-local/src/token_output_stream.rs (12 changes: 5 additions & 7 deletions)
@@ -1,9 +1,8 @@
-/// This is a wrapper around a tokenizer to ensure that tokens can be returned to the user in a
-/// streaming way rather than having to wait for the full decoding.
-/// Implementation for TokenOutputStream Code is borrowed from
-///
-/// Borrowed from https://github.com/huggingface/candle/blob/main/candle-examples/src/token_output_stream.rs
-/// (Commit SHA 4fd00b890036ef67391a9cc03f896247d0a75711)
+//! This is a wrapper around a tokenizer to ensure that tokens can be returned to the user in a
+//! streaming way rather than having to wait for the full decoding.
+//! Implementation for TokenOutputStream Code is borrowed from
+//!
+//! Borrowed from https://github.com/huggingface/candle/blob/4fd00b890036ef67391a9cc03f896247d0a75711/candle-examples/src/token_output_stream.rs
 pub struct TokenOutputStream {
     tokenizer: tokenizers::Tokenizer,
     tokens: Vec<u32>,
@@ -24,7 +23,6 @@ impl TokenOutputStream
     /// Processes the next token in the sequence, decodes the current token stream,
     /// and returns any newly decoded text.
-    ///
     /// Based on the following code: <https://github.com/huggingface/text-generation-inference/blob/5ba53d44a18983a4de32d122f4cb46f4a17d9ef6/server/text_generation_server/models/model.py#L68>
     pub fn next_token(&mut self, token: u32) -> anyhow::Result<Option<String>> {
         let prev_text = if self.tokens.is_empty() {
             String::new()
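
Note: the switch from /// to //! is semantic as well as cosmetic. In Rust, //! attaches documentation to the enclosing module (here, the file), whereas /// would attach it to the next item, the TokenOutputStream struct. The comments describe candle's streaming-decode pattern: a partial token sequence may decode to an incomplete multi-byte character, so the stream re-decodes the accumulated tokens on each step and emits only the suffix that has become stable. The diff above shows only the start of next_token; what follows is a minimal sketch of the full pattern, based on the upstream candle file pinned in the new comment. The prev_index/current_index fields and the decode helper are taken from that file and are assumptions here, not necessarily Spin's final code.

use anyhow::Result;

/// Incremental detokenizer: feed token ids one at a time, get back text
/// as soon as it can no longer change (a sketch after the upstream candle
/// file; field names follow that file, not necessarily Spin's code).
pub struct TokenOutputStream {
    tokenizer: tokenizers::Tokenizer,
    tokens: Vec<u32>,
    prev_index: usize,    // start of the window already emitted as text
    current_index: usize, // end of that window
}

impl TokenOutputStream {
    pub fn new(tokenizer: tokenizers::Tokenizer) -> Self {
        Self { tokenizer, tokens: Vec::new(), prev_index: 0, current_index: 0 }
    }

    fn decode(&self, tokens: &[u32]) -> Result<String> {
        self.tokenizer
            .decode(tokens, true)
            .map_err(|e| anyhow::anyhow!("cannot decode: {e}"))
    }

    /// Processes the next token and returns any newly decoded text.
    pub fn next_token(&mut self, token: u32) -> Result<Option<String>> {
        // Text for the tokens we have already emitted.
        let prev_text = if self.tokens.is_empty() {
            String::new()
        } else {
            self.decode(&self.tokens[self.prev_index..self.current_index])?
        };
        self.tokens.push(token);
        // Text for the same window plus everything sampled since.
        let text = self.decode(&self.tokens[self.prev_index..])?;
        // Emit only when the text grew and ends in an alphanumeric character;
        // an in-progress multi-byte sequence decodes to the replacement
        // character and is held back until it completes.
        if text.len() > prev_text.len() && text.chars().last().unwrap().is_alphanumeric() {
            let new_text = text.split_at(prev_text.len()).1.to_string();
            self.prev_index = self.current_index;
            self.current_index = self.tokens.len();
            Ok(Some(new_text))
        } else {
            Ok(None)
        }
    }
}

A caller would loop over sampled token ids, printing whatever next_token returns as it arrives, and flush any held-back text once generation ends (the upstream candle file exposes that final step as decode_rest).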
