
Implemented Dynamic Task Iteration #62922

Draft

dabla wants to merge 126 commits into apache:main from dabla:feature/dynamic-task-iteration

Conversation


@dabla dabla commented Mar 5, 2026


Was generative AI tooling used to co-author this PR?
  • Yes (please specify the tool below)

Description

This PR is the initial implementation of Dynamic Task Iteration (DTI), as discussed in the devlist and building upon the foundations of AIP-98.

For further context on the use cases and performance benefits of DTI, see this Medium Article.

The XCom Database Constraint Challenge
While porting our internal "monkey-patched" version of DTI (used since Airflow 2.x) to the core, I've identified a significant technical hurdle regarding XCom handling.

The Issue

Around Airflow 2.10/2.11, a change was introduced to the database constraints for the XCom table. Specifically:

  • Current State: The DB prevents creating indexed XComs (map_index >= 0) unless a corresponding mapped TaskInstance exists in the task_instance table.
  • The Conflict: DTI is designed to process multiple indexed XComs within a single Task Instance. Because there is no 1-to-1 mapping of map_index to a physical TI row, the DB constraint blocks the insertion of these results.

Current Workaround in this PR

To maintain functionality without immediate schema changes, I have implemented a custom XComIterable. This appends the index directly to the XCom key to bypass the constraint and manages the iteration logic internally.
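To illustrate the described workaround, here is a minimal sketch of a key-indexing iterable. The class and method names (XComIterable, _indexed_key) and the key format are assumptions for illustration, not the PR's actual code:

```python
# Hypothetical sketch: instead of writing one XCom per map_index (blocked by
# the DB constraint), each result is stored under a derived key such as
# "return_value.0", "return_value.1", ... and iterated internally.

class XComIterable:
    """Iterates over indexed results stored under suffixed XCom keys."""

    def __init__(self, base_key: str, length: int):
        self.base_key = base_key
        self.length = length

    def _indexed_key(self, index: int) -> str:
        # Append the index to the key instead of using map_index,
        # so no mapped TaskInstance row is required.
        return f"{self.base_key}.{index}"

    def keys(self) -> list[str]:
        return [self._indexed_key(i) for i in range(self.length)]
```

With this scheme, XComIterable("return_value", 3) yields the keys "return_value.0" through "return_value.2", all attached to the single real TaskInstance.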

I believe the cleanest path forward is to adjust the DB constraint to allow indexed XComs even in the absence of an indexed TI. This would:

  • Simplify the DTI implementation (e.g. remove the need for XComIterable), which would mean the already existing LazyXComIterator is used automatically.
  • Align the DB schema with the more flexible task patterns introduced in Airflow 3.x.

What this PR doesn't implement yet

The partitioning feature, i.e. combining Dynamic Task Mapping with Dynamic Task Iteration in one fluent API.
It also doesn't take pools into account yet; at the moment, concurrency is controlled via the max_active_tis_per_dag parameter, which defaults to os.cpu_count() when not defined.


  • Read the Pull Request Guidelines for more information. Note: commit author/co-author name and email in commits become permanently public when merged.
  • For fundamental code changes, an Airflow Improvement Proposal (AIP) is needed.
  • When adding dependency, check compliance with the ASF 3rd Party License Policy.
  • For significant user-facing changes create newsfragment: {pr_number}.significant.rst or {issue_number}.significant.rst, in airflow-core/newsfragments.

@dabla dabla requested review from amoghrajesh, ashb and kaxil as code owners March 5, 2026 09:28
@dabla dabla marked this pull request as draft March 5, 2026 09:35
@dabla dabla force-pushed the feature/dynamic-task-iteration branch 2 times, most recently from 23a6d2b to d8a30b9 Compare March 5, 2026 12:33
@dabla dabla force-pushed the feature/dynamic-task-iteration branch from d8a30b9 to edad5de Compare March 5, 2026 12:39
kaxil
kaxil previously requested changes Mar 5, 2026

@kaxil kaxil left a comment


Thanks for working on this — excited to see DTI taking shape for Airflow 3.2. I've gone through the full diff and have feedback on the implementation, some are bugs that would crash at runtime, others are design choices worth iterating on.

A few high-level things:

  1. No tests. ~700 lines of new production code with zero test coverage. We need tests for IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs methods — covering success, failure, retry, deferral, and edge cases.

  2. Worker resilience. Since DTI runs N sub-tasks inside a single worker process, we need to think through what happens when that worker dies mid-execution — the scheduler has no record of which sub-tasks completed. Worth documenting the expected behavior and trade-offs here (and whether we want to add checkpointing later).

  3. Thread safety. Several shared mutable structures (context dict, os.environ) are accessed concurrently from multiple threads without synchronization. This needs to be addressed before merge.

Inline comments below with specifics.


dabla commented Mar 5, 2026

> Thanks for working on this — DTI is an interesting concept and I can see the use case. I've gone through the full diff and have a number of concerns, some are bugs that would crash at runtime, others are architectural questions worth discussing before this goes further.

A few high-level things:

> 1. No tests. ~700 lines of new production code with zero test coverage. We need tests for IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs methods — covering success, failure, retry, deferral, and edge cases.

Thanks for pointing this out. As mentioned earlier on Slack, this PR is currently intended as an initial draft to demonstrate the concept and gather early architectural feedback.

I agree that proper test coverage is essential before this can move forward. The plan is to add unit tests covering the components you mentioned (IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs APIs), including scenarios for success, retries, failures, deferral, and edge cases.

Once we converge on the architectural direction, I will add the corresponding test suite.

> 1. Architectural concern. This builds a mini-executor inside an operator — running N tasks in threads with in-memory XCom, custom retry logic, and sleep()-based retry delays. The scheduler has no visibility into sub-task states, so if the worker dies mid-execution there's no record of which sub-tasks completed. This feels like it needs broader design discussion (probably an AIP) before merging, since it fundamentally changes how task execution works.

I agree this is an important architectural concern and worth discussing further.

The goal of this prototype is to explore a trade-off between observability and scheduling overhead; @ashb and @potiuk raised the same point before. If we try to preserve the same visibility and lifecycle guarantees as Dynamic Task Mapping, we essentially end up re-implementing DTM semantics, which brings back the same scheduler overhead that this approach is trying to avoid.

This proposal intentionally explores a different point in that trade-off space: executing iterations within a single task while allowing controlled parallelism. That does mean the scheduler has less visibility into the internal execution units (but also less load).

> 1. Thread safety. Several shared mutable structures (context dict, os.environ) are accessed concurrently from multiple threads without synchronization.

Good point — thread safety needs to be handled carefully here.

Regarding the task context, my understanding is that operators already receive a per-task context instance, but you're right that when running iterations concurrently we should avoid sharing mutable structures across threads. One possible approach would be to create a shallow or deep copy of the context for each execution unit to ensure isolation.

If you have concerns about specific structures (e.g., os.environ or others), I'm happy to address them and introduce appropriate synchronization or isolation mechanisms where needed.
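The per-execution-unit context copy mentioned above could look roughly like this; the function name and the ThreadPoolExecutor wiring are illustrative assumptions, not the PR's actual implementation:

```python
import copy
from concurrent.futures import ThreadPoolExecutor

def execute_isolated(fn, context: dict, items):
    """Run fn once per item, each with its own copy of the task context."""

    def run_one(item):
        # Shallow copy per thread; a deep copy would be needed if callers
        # mutate nested values inside the context.
        local_ctx = copy.copy(context)
        local_ctx["map_index"] = item  # safe: only the copy is mutated
        return fn(local_ctx)

    with ThreadPoolExecutor() as pool:
        return list(pool.map(run_one, items))
```

The shared context dict is never written to, so concurrent iterations cannot race on it.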

@dabla dabla changed the title from "refactor: Implemented Dynamic Task Iteration" to "Implemented Dynamic Task Iteration" Mar 5, 2026
dabla added 21 commits March 18, 2026 16:23
def resume_execution(self, next_method: str, next_kwargs: dict[str, Any] | None, context: Context):
    """Entrypoint method called by the Task Runner (instead of execute) when this task is resumed."""
    if next_kwargs is None:
        next_kwargs = {}

Bug: next_callable already binds **next_kwargs via partial(execute_callable, **next_kwargs) when kwargs are present, but resume_execution on the next line still passes **next_kwargs again:

return execute_callable(context, **next_kwargs)

When next_kwargs is non-empty, this calls partial(fn, **kw)(context, **kw), which raises TypeError: got multiple values for keyword argument.

The fix is likely:

return execute_callable(context)

since next_callable already handles binding kwargs when they exist.

except TaskDeferred as task_deferred:
self._task_deferred = task_deferred
# Recursively handle nested deferrals
return await self.aexecute(context=context)

This is unbounded recursion. If the callback keeps raising TaskDeferred, Python will hit its recursion limit. A while True loop with a break would be safer:

while True:
    event = await run_trigger(self._task_deferred.trigger)
    if not event:
        return None
    if not self._task_deferred.method_name:
        return None
    try:
        ...
        return runner.run(context, event.payload)
    except TaskDeferred as td:
        self._task_deferred = td
        continue

# Export context in os.environ to make it available for operators to use.
airflow_context_vars = context_to_airflow_vars(context, in_env_var_format=True)
os.environ.update(airflow_context_vars)


os.environ.update(airflow_context_vars) mutates the global process environment while child threads may be running concurrently (via HybridExecutor/ThreadPoolExecutor). This is a data race — os.environ is not thread-safe for concurrent reads and writes.

Consider passing context vars through the context dict or using thread-local storage instead of mutating the shared environment.

return isinstance(v, (MappedArgument, XComArg))


class ExpandInput(ABC, ResolveMixin):

Changing ExpandInput from a Union type alias to an abstract class is a semantic breaking change for any downstream code that relied on the union (e.g., isinstance(x, get_args(ExpandInput)) or type-narrowing). Since this is in _internal, the blast radius is limited, but it's worth flagging — especially for providers or third-party code that may have imported it.
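A minimal illustration of the breakage pattern described above, using stand-in classes rather than Airflow's real ones:

```python
from abc import ABC
from typing import Union, get_args

class DictOfLists: ...
class ListOfDicts: ...

# Old style: ExpandInput as a Union alias — downstream code could
# type-narrow with typing.get_args().
ExpandInputUnion = Union[DictOfLists, ListOfDicts]
assert isinstance(DictOfLists(), get_args(ExpandInputUnion))  # matched before

# New style: ExpandInput as an ABC — get_args() on a plain class returns (),
# so the same downstream isinstance check silently stops matching anything.
class ExpandInput(ABC): ...
assert get_args(ExpandInput) == ()
```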

@dabla dabla force-pushed the feature/dynamic-task-iteration branch from 960438c to 765fcfb Compare March 18, 2026 23:10