Thanks for working on this — excited to see DTI taking shape for Airflow 3.2. I've gone through the full diff and have feedback on the implementation: some of it concerns bugs that would crash at runtime, the rest design choices worth iterating on.
A few high-level things:
- **No tests.** ~700 lines of new production code with zero test coverage. We need tests for `IterableOperator`, `TaskExecutor`, `MappedTaskInstance`, `HybridExecutor`, `XComIterable`, `DecoratedDeferredAsyncOperator`, and the `iterate`/`iterate_kwargs` methods — covering success, failure, retry, deferral, and edge cases.
- **Worker resilience.** Since DTI runs N sub-tasks inside a single worker process, we need to think through what happens when that worker dies mid-execution — the scheduler has no record of which sub-tasks completed. Worth documenting the expected behavior and trade-offs here (and whether we want to add checkpointing later).
- **Thread safety.** Several shared mutable structures (the `context` dict, `os.environ`) are accessed concurrently from multiple threads without synchronization. This needs to be addressed before merge.
Inline comments below with specifics.
Thanks for pointing this out. As mentioned earlier on Slack, this PR is currently intended as an initial draft to demonstrate the concept and gather early architectural feedback. I agree that proper test coverage is essential before this can move forward. The plan is to add unit tests covering the components you mentioned (IterableOperator, TaskExecutor, MappedTaskInstance, HybridExecutor, XComIterable, DecoratedDeferredAsyncOperator, and the iterate/iterate_kwargs APIs), including scenarios for success, retries, failures, deferral, and edge cases. Once we converge on the architectural direction, I will add the corresponding test suite.
I agree this is an important architectural concern and worth discussing further. The goal of this prototype is to explore a trade-off between observability and scheduling overhead; @ashb and @potiuk raised the same point earlier. If we try to preserve the same visibility and lifecycle guarantees as Dynamic Task Mapping, we essentially end up re-implementing DTM semantics, which brings back the same scheduler overhead this approach is trying to avoid. This proposal intentionally explores a different point in that trade-off space: executing iterations within a single task while allowing controlled parallelism. That does mean the scheduler has less visibility into the internal execution units, but it also carries less load.
Good point — thread safety needs to be handled carefully here. Regarding the task context, my understanding is that operators already receive a per-task context instance, but you're right that when running iterations concurrently we should avoid sharing mutable structures across threads. One possible approach would be to create a shallow or deep copy of the context for each execution unit to ensure isolation. If you have concerns about specific structures (e.g., `os.environ` or others), I'm happy to address them and introduce appropriate synchronization or isolation mechanisms where needed.
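A minimal sketch of the copy-per-iteration idea, assuming a plain dict-like context (the names and structure here are illustrative, not the actual Airflow `Context` API):

```python
import copy
from concurrent.futures import ThreadPoolExecutor

def run_iteration(shared_context: dict, index: int) -> dict:
    # Shallow-copy the context so each thread mutates its own mapping,
    # never the shared one (nested values are still shared by reference;
    # use copy.deepcopy if those must be isolated too).
    ctx = copy.copy(shared_context)
    ctx["map_index"] = index  # per-iteration mutation stays local to this copy
    return ctx

shared = {"dag_id": "example", "map_index": None}
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(lambda i: run_iteration(shared, i), range(4)))

assert shared["map_index"] is None  # shared context never mutated
assert sorted(r["map_index"] for r in results) == [0, 1, 2, 3]
```

A shallow copy is cheap and suffices as long as sub-tasks only rebind top-level keys; if they mutate nested structures, a deep copy (or per-key isolation) would be needed.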
```python
def resume_execution(self, next_method: str, next_kwargs: dict[str, Any] | None, context: Context):
    """Entrypoint method called by the Task Runner (instead of execute) when this task is resumed."""
    if next_kwargs is None:
        next_kwargs = {}
```
Bug: `next_callable` already binds `**next_kwargs` via `partial(execute_callable, **next_kwargs)` when kwargs are present, but `resume_execution` on the next line still passes `**next_kwargs` again:

```python
return execute_callable(context, **next_kwargs)
```

When `next_kwargs` is non-empty, this calls `partial(fn, **kw)(context, **kw)`, which raises `TypeError: got multiple values for keyword argument`.

The fix is likely:

```python
return execute_callable(context)
```

since `next_callable` already handles binding kwargs when they exist.
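A standalone illustration of the double-binding failure mode (function names are stand-ins, not the PR's actual code). Note that `functools.partial` merges repeated keywords with call-time values winning, so the `TypeError` surfaces when a positional argument collides with a keyword the partial already bound:

```python
from functools import partial

def execute(context, event=None):
    """Stand-in for the resumed callable; names are illustrative."""
    return (context, event)

next_kwargs = {"event": "payload"}
next_callable = partial(execute, **next_kwargs)

# The partial already carries next_kwargs, so calling with context alone
# produces the intended invocation -- this is the proposed fix:
assert next_callable("ctx") == ("ctx", "payload")

# The "multiple values" TypeError arises when an argument bound by the
# partial is also supplied positionally at call time:
bound = partial(execute, context="ctx")
try:
    bound("other-ctx")  # positional arg also targets 'context'
    raised = False
except TypeError:
    raised = True
assert raised
```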
```python
except TaskDeferred as task_deferred:
    self._task_deferred = task_deferred
    # Recursively handle nested deferrals
    return await self.aexecute(context=context)
```
This is unbounded recursion. If the callback keeps raising `TaskDeferred`, Python will hit its recursion limit. A `while True` loop would be safer:

```python
while True:
    event = await run_trigger(self._task_deferred.trigger)
    if not event:
        return None
    if not self._task_deferred.method_name:
        return None
    try:
        ...
        return runner.run(context, event.payload)
    except TaskDeferred as td:
        self._task_deferred = td
        continue
```

```python
# Export context in os.environ to make it available for operators to use.
airflow_context_vars = context_to_airflow_vars(context, in_env_var_format=True)
os.environ.update(airflow_context_vars)
```
`os.environ.update(airflow_context_vars)` mutates the global process environment while child threads may be running concurrently (via `HybridExecutor`/`ThreadPoolExecutor`). This is a data race — `os.environ` is not thread-safe for concurrent reads and writes.
Consider passing context vars through the context dict or using thread-local storage instead of mutating the shared environment.
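One way to sketch the thread-local alternative (the helper functions here are hypothetical, not an existing Airflow API):

```python
import threading

# Keep per-iteration context vars in thread-local storage so concurrent
# iterations never observe each other's values, unlike a shared os.environ.
_ctx_vars = threading.local()

def set_context_vars(vars_: dict) -> None:
    _ctx_vars.current = dict(vars_)

def get_context_var(key: str, default=None):
    return getattr(_ctx_vars, "current", {}).get(key, default)

results = {}

def worker(index: int) -> None:
    set_context_vars({"AIRFLOW_CTX_MAP_INDEX": str(index)})
    # Reads see only this thread's values:
    results[index] = get_context_var("AIRFLOW_CTX_MAP_INDEX")

threads = [threading.Thread(target=worker, args=(i,)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()

assert results == {0: "0", 1: "1", 2: "2", 3: "3"}
```

The trade-off: operators that read the environment directly (e.g. subprocess-spawning operators) would need an explicit hand-off of these vars, since thread-local state is invisible to child processes.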
```python
    return isinstance(v, (MappedArgument, XComArg))


class ExpandInput(ABC, ResolveMixin):
```
Changing `ExpandInput` from a `Union` type alias to an abstract class is a semantic breaking change for any downstream code that relied on the union (e.g., `isinstance(x, get_args(ExpandInput))` or type-narrowing). Since this is in `_internal`, the blast radius is limited, but it's worth flagging — especially for providers or third-party code that may have imported it.
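A standalone illustration of why that narrowing pattern breaks (the concrete variant classes are stubs here, mimicking the real ones only in name):

```python
from abc import ABC
from typing import Union, get_args

# Stub stand-ins for the concrete ExpandInput variants:
class DictOfListsExpandInput: ...
class ListOfDictsExpandInput: ...

# Old style: a Union alias. Downstream code could narrow like this:
ExpandInputAlias = Union[DictOfListsExpandInput, ListOfDictsExpandInput]
x = DictOfListsExpandInput()
assert isinstance(x, get_args(ExpandInputAlias))  # matches

# New style: an ABC. get_args() on a plain class returns an empty tuple,
# so the same narrowing pattern silently stops matching anything:
class ExpandInput(ABC): ...
assert get_args(ExpandInput) == ()
assert not isinstance(x, get_args(ExpandInput))  # isinstance(x, ()) is False
```

The failure is silent rather than loud — `isinstance` against an empty tuple simply returns `False` — which is what makes this worth flagging for downstream code.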
Description
This PR is the initial implementation of Dynamic Task Iteration (DTI), as discussed in the devlist and building upon the foundations of AIP-98.
For further context on the use cases and performance benefits of DTI, see this Medium Article.
The XCom Database Constraint Challenge
While porting our internal "monkey-patched" version of DTI (used since Airflow 2.x) to the core, I've identified a significant technical hurdle regarding XCom handling.
The Issue
Around Airflow 2.10/2.11, a change was introduced to the database constraints for the XCom table. Specifically:
Current Workaround in this PR
To maintain functionality without immediate schema changes, I have implemented a custom XComIterable. This appends the index directly to the XCom key to bypass the constraint and manages the iteration logic internally.
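A toy sketch of the key-suffix idea described above (the key format and class shape are illustrative, not the PR's actual `XComIterable`):

```python
class XComIterableSketch:
    """Toy model of the workaround: store each iteration's value under an
    indexed key (base key plus index suffix) so the XCom uniqueness
    constraint is never violated, and iterate values back in order."""

    def __init__(self, base_key: str):
        self.base_key = base_key
        self._store: dict[str, object] = {}  # stand-in for the XCom table

    def push(self, index: int, value) -> None:
        # Append the iteration index to the key, e.g. "return_value__2".
        self._store[f"{self.base_key}__{index}"] = value

    def __iter__(self):
        # Yield values ordered by their numeric index suffix.
        keys = sorted(self._store, key=lambda k: int(k.rsplit("__", 1)[1]))
        for key in keys:
            yield self._store[key]

xc = XComIterableSketch("return_value")
for i, value in enumerate(["a", "b", "c"]):
    xc.push(i, value)

assert list(xc) == ["a", "b", "c"]
```

This keeps the constraint intact at the cost of overloading the key namespace, which is why adjusting the DB constraint is proposed as the cleaner long-term path.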
I believe the cleanest path forward is to adjust the DB constraint to allow indexed XComs even in the absence of an indexed TI. This would:
What this PR doesn't implement yet
The partitioning feature, i.e. combining Dynamic Task Mapping with Dynamic Task Iteration in one fluent API.
It also doesn't take pools into account yet; at the moment, concurrency is controlled via the `max_active_tis_per_dag` parameter, which defaults to `os.cpu_count()` when not set.
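The described fallback can be expressed as a one-line default (a sketch, assuming the parameter is `None` when undefined):

```python
import os

max_active_tis_per_dag = None  # i.e. the user did not set the parameter

# Fall back to the machine's CPU count when no explicit limit is given.
max_workers = max_active_tis_per_dag or os.cpu_count()

assert max_workers == os.cpu_count()
```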
`{pr_number}.significant.rst` or `{issue_number}.significant.rst`, in `airflow-core/newsfragments`.