Use Case #3

@sruckh

Description

I am trying to find the use case for this, because I don't think I am using it correctly. I have a 3-tier setup: a front end, middleware (an OpenAI TTS to Runpod serverless bridge), and a backend Runpod serverless instance handling inference.

I initially thought I would LinaCodec-encode on the backend and LinaCodec-decode on the front end, but that would require some horsepower on the front end, plus dependencies I could not control. So instead I was encoding and decoding on the backend, thinking the overall token count would be lower and that I could send the decoded stream upsampled to 48kHz, but I don't believe that's the correct use case either. On paper this sounds really good, but I am not sure I can take advantage of it in my scenario.
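For what it's worth, the bandwidth win from a discrete-token codec only materializes if the compressed tokens are what crosses the network link; encoding and decoding both on the backend and shipping 48kHz PCM forfeits it. A rough back-of-the-envelope comparison (the token rate, codebook size, and quantizer count below are illustrative assumptions, not LinaCodec's actual figures):

```python
import math

# Rough bandwidth math: raw PCM vs. a neural-codec token stream.
# All codec parameters here are illustrative assumptions, not LinaCodec specifics.

def pcm_bits_per_second(sample_rate_hz: int, bit_depth: int, channels: int = 1) -> int:
    """Uncompressed PCM bitrate in bits per second."""
    return sample_rate_hz * bit_depth * channels

def codec_bits_per_second(tokens_per_second: int, codebook_size: int,
                          num_quantizers: int = 1) -> float:
    """Bitrate of a discrete-token stream: tokens/s * log2(codebook) * quantizers."""
    return tokens_per_second * math.log2(codebook_size) * num_quantizers

pcm = pcm_bits_per_second(48_000, 16)        # 48 kHz mono, 16-bit PCM
codec = codec_bits_per_second(75, 1024, 8)   # assumed: 75 tok/s, 1024-entry codebook, 8 quantizers
print(f"PCM: {pcm/1000:.0f} kbps, codec tokens: {codec/1000:.1f} kbps, ratio {pcm/codec:.0f}x")
# → PCM: 768 kbps, codec tokens: 6.0 kbps, ratio 128x
```

Under these assumed numbers the token stream is two orders of magnitude smaller than the PCM, which is why decode is usually pushed to wherever the audio is actually played.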
