New streaming model #3577
Merged
This PR implements the long-awaited change to our content-generation API.
With these changes, frames, our internal notion of content chunks being prepared for output, become small, variable-length collections of tracks.
This should finally move us away from the fantastic footgun that our current breaks/partial-fill mechanism for track marks has been.
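To make the new shape concrete, here is a minimal, non-authoritative OCaml sketch of a frame as a small collection of tracks. The type names and fields below are assumptions made for illustration; they are not the actual definitions introduced by this PR.

```ocaml
(* Illustrative sketch only: all names below are assumptions, not the
   PR's actual types. The point is the overall shape: a frame is no
   longer a fixed-size buffer with implicit breaks but a variable-length
   collection of tracks prepared during one streaming cycle. *)
type content =
  | Audio of float array array
  | Video of string
  | Metadata of (string * string) list

type track = {
  field : string;    (* e.g. "audio", "video", "metadata" *)
  content : content; (* the data carried by this track *)
}

(* A frame: a small, variable-length collection of tracks. *)
type frame = track list
```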
Notes
Because track marks could previously appear as part of the runtime streaming cycles, they were often implicit. For instance, if a source failed, a track mark would implicitly be added.
With the new API, we have to manually specify each track mark. This leads to situations for which we don't really have a convention yet. Typically: should all sources emit an initial track mark when starting?
If we want to be true to our muxers, it should be possible to drop all track marks, so the answer to the above question is no. However, we have relied in the past on operators such as `on_track` to detect when a source becomes available again. This would also require changing existing code and, most likely, introducing a new operator.
Most of the current work was focused on stabilizing our current set of tests. Next, we shall clean up and implement the new conventions and operators that we need.
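As a concrete illustration of the explicit track-mark convention discussed above, here is a hedged sketch; the `frame` record and the `add_track_mark` helper are hypothetical stand-ins, not the actual API touched by this PR.

```ocaml
(* Hypothetical stubs for illustration only. *)
type frame = {
  mutable chunks : string list;    (* content prepared this cycle *)
  mutable track_marks : int list;  (* explicit track-mark positions *)
}

let add_track_mark frame position =
  frame.track_marks <- position :: frame.track_marks

(* Old model: a failing source implicitly produced a track mark.
   New model: nothing is implicit; the caller must decide to add it. *)
let handle_source_failure frame ~position =
  (* This call is now an explicit decision made by the caller. *)
  add_track_mark frame position
```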
Changed operators
The following operators are directly impacted by these changes:
- `crossfade`: the non-conservative mode has been removed. It was already complicated enough to keep the conservative mode. Other than that, the operator is expected to behave as close as possible to the existing one.
- `source.dynamic`, `switch` and `sequence`: these operators, the only internal ones which deal with dynamic selection of sources within a streaming cycle, have been rewritten to use a factored-out `Source.generate_from_multiple_sources` class. This class is in charge of capturing all the logic pertaining to composing a full frame from multiple sources. Although tricky to finalize, it seems to be satisfying. Most of the tricks were about calling `on_track` handlers at the right time when switching sources, to allow implementations based on them, such as `rotate`, `random`, etc., to update their internal state accordingly (a rough sketch of this logic follows below).
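Below is a rough, non-authoritative sketch of the kind of logic described for that factored-out class: picking a ready source, filling the frame from it, and firing `on_track` handlers when switching so that selection operators can update their state. Everything here (the `source` record, `generate`, `is_ready`, the string-based content) is an assumption made for illustration; only the overall responsibility mirrors the description above.

```ocaml
(* Illustrative sketch only; names and shapes are assumptions. *)
type source = {
  name : string;
  is_ready : unit -> bool;         (* can this source produce content now? *)
  generate : int -> string;        (* produce at most [n] ticks of content *)
  on_track : (unit -> unit) list;  (* handlers fired when switching to this source *)
}

(* Compose a full frame (here: a list of chunks) from several sources,
   switching whenever the current source can no longer fill the frame. *)
let generate_from_multiple_sources ~sources ~frame_size =
  let rec fill acc remaining current =
    if remaining <= 0 then List.rev acc
    else
      match List.find_opt (fun s -> s.is_ready ()) sources with
      | None -> List.rev acc (* no ready source: return a partial frame *)
      | Some s ->
          (* Fire on_track handlers when switching to a different source,
             so rotate/random-style operators can update their state. *)
          if current <> Some s.name then List.iter (fun f -> f ()) s.on_track;
          let chunk = s.generate remaining in
          if chunk = "" then List.rev acc
          else fill (chunk :: acc) (remaining - String.length chunk) (Some s.name)
  in
  fill [] frame_size None
```

In this sketch the frame is just a list of string chunks; the real class presumably deals with typed content and track marks, but the source-switching and handler-firing pattern is the part the description above emphasizes.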
New API
The API for generating frames becomes, for sources:
TODO
Future work
- `can_generate_frame`: this needs some adjustments with synchronous operators such as `input.ffmpeg`.
- `get_mutated_field` is not optimized yet. This requires some coordination with the clock (e.g. to know which operators are consuming a given source's data during the current streaming cycle) that should be implemented when reworking the clocks (a rough sketch of this bookkeeping follows below).
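As a loose illustration of the clock coordination mentioned in the last item, here is a hedged sketch of per-cycle bookkeeping that records which operators consume a given source's data. Whether this is how `get_mutated_field` will actually be optimized is left open by the PR; every name below is an assumption.

```ocaml
(* Hypothetical sketch: per-cycle tracking of which operators consume a
   given source's data, so a later optimization could decide when content
   may be handed over for in-place mutation instead of being copied. *)
module Clock_sketch = struct
  (* source name -> operators that consumed its data during this cycle *)
  let consumers : (string, string list) Hashtbl.t = Hashtbl.create 16

  let begin_cycle () = Hashtbl.reset consumers

  let record ~source ~operator =
    let seen = Option.value ~default:[] (Hashtbl.find_opt consumers source) in
    Hashtbl.replace consumers source (operator :: seen)

  (* If exactly one operator consumed the source this cycle, its content
     could, in principle, be mutated in place rather than copied. *)
  let sole_consumer ~source =
    match Hashtbl.find_opt consumers source with
    | Some [ op ] -> Some op
    | _ -> None
end
```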