:::warning[In Development]

Coming to production end of Q1 2025. Early developer access planned before then.

:::
The Composability Stack solves a critical challenge in DeFi development: handling transactions where the input values depend on the results of previous transactions or bridging operations.
Until now, achieving transaction composability in DeFi required developers to write and deploy custom smart contracts. Here's why this was necessary and what it meant for development:
To chain DeFi operations together, developers had to:
- Deploy a new smart contract for each composite operation
- Write custom logic to handle the intermediate state
- Maintain separate contracts for different chains
This approach led to several challenges:
- Increased Development Time: Each new combination of DeFi operations required a new smart contract
- Higher Security Risks: More smart contracts meant more potential attack vectors
- Expensive Deployments: Each new contract needed to be deployed and maintained
- Limited Flexibility: Changes to the transaction flow required new contract deployments
- Complex Testing: Each contract needed extensive testing and auditing
Custom smart contracts often resulted in:
- Additional gas costs for contract deployment
- Extra storage operations for maintaining state
- Multiple contract calls instead of direct interactions
Developers faced significant hurdles when:
- Integrating with new protocols
- Updating existing flows
- Managing cross-chain operations
- Handling protocol upgrades
- Supporting multiple versions
This rigid architecture made it difficult to adapt to the rapidly evolving DeFi ecosystem and limited the possibilities for composing different protocols together.
Understanding these historical limitations makes it clear how the Composability Stack addresses these pain points by:
- Eliminating the need for custom smart contracts
- Providing flexible parameter injection
- Reducing development overhead
- Enabling rapid protocol integration
This shift from custom smart contracts to a standardized composability layer represents a significant advancement in DeFi development.
When building DeFi applications that chain multiple transactions together, developers often can't determine the exact values needed for subsequent transactions until earlier ones complete. This creates significant complexity when trying to automate multi-step DeFi operations.
Here's a typical scenario developers face:
1. Swap USDC for AAVE tokens on Uniswap
2. Deposit the received AAVE tokens into Aave's lending protocol
The fundamental issue is that the exact number of AAVE tokens received from Uniswap can't be known beforehand. While developers can set `amountOutMin` to specify a minimum acceptable amount, the actual received amount will vary based on:
- Current market conditions
- Slippage
- Transaction timing
This uncertainty makes it impossible to pre-encode the exact deposit amount for the Aave transaction, as this value directly depends on how many tokens were received from Uniswap.
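The gap between the minimum bound and the actual amount received can be made concrete with a short sketch. The `minAmountOut` helper below is illustrative only, not part of Uniswap or any SDK:

```typescript
// Illustrative helper, not a real DEX API: derive the amountOutMin bound
// from a quoted output and a slippage tolerance in basis points.
function minAmountOut(quotedOut: bigint, slippageBps: bigint): bigint {
  return (quotedOut * (10_000n - slippageBps)) / 10_000n;
}

// A quote of 1000 units with a 0.5% tolerance accepts anything >= 995,
// but the exact amount received is only known after the swap executes.
const floor = minAmountOut(1000n, 50n); // 995n
```

The swap can settle anywhere at or above that floor, which is exactly why the follow-up deposit amount cannot be encoded ahead of time.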
This pattern appears frequently in DeFi development:
- Token swaps followed by lending
- Multi-hop trades across different DEXs
- Automated yield farming strategies
- Arbitrage operations
- Collateral management
In each case, subsequent transactions need accurate data from previous transaction results to execute correctly.
Without proper tooling, developers often resort to:
- Over-estimating values (leading to failed transactions)
- Building complex monitoring systems
- Creating multi-step processes requiring user intervention
- Limiting functionality to avoid dynamic values
The Composability Stack provides a standardized solution to handle these dynamic transaction dependencies, enabling developers to build more robust and automated DeFi applications.
The Composability Stack extends beyond single-chain operations to support cross-chain transactions. This is particularly powerful when dealing with:
- Bridge operations across different networks
- Multi-chain intent solving
- Complex cross-chain DeFi strategies
Just as with single-chain operations, cross-chain transactions face similar challenges with dynamic values. When bridging assets or using intent solvers:
- Bridge rates may fluctuate due to liquidity and market conditions
- Different bridging providers might offer varying rates
- Intent solvers can have variable slippage
- Multiple providers might need to be combined for optimal execution
The Composability Stack handles these scenarios by allowing dynamic parameter injection as values become available during execution. This means you can:
- Chain multiple bridge operations together
- Combine different intent solvers
- Optimize for best execution across multiple providers
- Handle cross-chain slippage dynamically
This flexibility enables developers to build sophisticated cross-chain applications without sacrificing reliability or user experience.
Let's look at a common DeFi operation: bridging tokens and supplying them to Aave.
:::tip[Pseudocode]

The following is pseudocode to demonstrate the concept.

:::
```js
// Create a transaction that will:
// 1. Bridge USDC from Arbitrum to Optimism
// 2. Approve Aave to use the bridged USDC
// 3. Supply all received USDC to Aave
transaction = {
  steps: [
    // Step 1: Bridge tokens
    // This takes 1000 USDC from Arbitrum and bridges it to Optimism
    bridgeTokens({
      from: "arbitrum",
      to: "optimism",
      token: "USDC",
      amount: 1000
    }),

    // Step 2: Approve Aave
    // Uses whatever amount was actually received after bridging
    approve({
      token: "USDC",
      spender: "aave",
      amount: executionTimeBalanceOf(USDC) // References USDC balance when executing
    }),

    // Step 3: Supply to Aave
    // Again uses the actual received amount
    supplyToAave({
      token: "USDC",
      amount: executionTimeBalanceOf(USDC) // References USDC balance when executing
    })
  ]
}
```
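One way to picture a reference like `executionTimeBalanceOf` is as a deferred value that is only resolved against real balances at execution time. The TypeScript model below is a sketch of that concept with assumed types, not the real SDK:

```typescript
// Illustrative model only: a step amount is either a fixed value or a
// deferred "balance at execution time" reference.
type Amount = bigint | { balanceOf: string };

function resolveAmount(amount: Amount, balances: Map<string, bigint>): bigint {
  if (typeof amount === "bigint") return amount;
  // Deferred reference: read the token balance held at this moment.
  return balances.get(amount.balanceOf) ?? 0n;
}

// Suppose the bridge delivered 998 USDC (1000 minus fees). The approve and
// supply steps both resolve to that execution-time balance.
const balances = new Map<string, bigint>([["USDC", 998n]]);
const approveAmount = resolveAmount({ balanceOf: "USDC" }, balances); // 998n
```

Because resolution happens per step, the approve and supply amounts always match what the bridge actually delivered.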
Here's a more complex example showing multiple dependent operations.
:::tip[Pseudocode]

The following is pseudocode to demonstrate the concept.

:::
```js
transaction = {
  steps: [
    // Step 1: Swap USDT for USDC on Arbitrum
    swap({
      from: "USDT",
      to: "USDC",
      amount: 1500,
      dex: "uniswap"
    }),

    // Step 2: Bridge the received USDC to Polygon
    bridgeTokens({
      from: "arbitrum",
      to: "polygon",
      token: "USDC",
      amount: executionTimeBalanceOf(USDC) // Uses whatever we got from the swap
    }),

    // Step 3: Supply bridged tokens to Aave on Polygon
    supplyToAave({
      token: "USDC",
      amount: executionTimeBalanceOf(USDC) // Uses whatever we received after bridging
    })
  ]
}
```
- `executionTimeBalanceOf(token)` is a special reference that means "use whatever balance exists for this token at the moment of execution"
- Each step can use results from previous steps
- The actual amounts received might differ from what's expected due to:
  - Bridge fees
  - Slippage during swaps
  - Exchange rate changes
  - Network fees
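To see how those deductions compound through a chain of steps, here is a toy simulation with assumed fee numbers (not real rates or a real fee model):

```typescript
const BPS = 10_000n;

// Deduct a fee expressed in basis points (illustrative fee model).
function afterFee(amount: bigint, feeBps: bigint): bigint {
  return (amount * (BPS - feeBps)) / BPS;
}

// Step 1: swap 1500 USDT -> USDC, losing ~30 bps to slippage and fees.
const usdcFromSwap = afterFee(1500n, 30n); // 1495n
// Step 2: bridge whatever the swap produced, minus a 10 bps bridge fee.
const usdcOnPolygon = afterFee(usdcFromSwap, 10n); // 1493n
// Step 3: supply the execution-time balance, not the original 1500.
const supplied = usdcOnPolygon;
```

Each step consumes the resolved output of the previous one, which is why pre-encoding 1500 into the supply step would fail.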
Let's look at how to store NFT mint results and use them in subsequent operations.
:::tip[Pseudocode]

The following is pseudocode to demonstrate the concept.

:::
```js
// Create a transaction that will:
// 1. Mint an NFT and store its ID
// 2. Read that stored ID
// 3. Use the ID for marketplace approval
transaction = {
  steps: [
    // Step 1: Mint NFT and store the returned tokenId
    mintNFT({
      contract: "coolNFT",
      chain: "arbitrum",
      recipient: userAddress,
      // Store the mint result for later use
      storeToStorageAfterExecution: {
        outputIndex: 0, // First output from the mint function
        key: "lastMintedNFT" // Key to store it under
      }
    }),

    // Step 2: Approve marketplace using the stored ID
    approveForTrading({
      nftContract: "coolNFT",
      operator: "marketplace",
      tokenId: executionTimeStorageOf("lastMintedNFT") // Read the stored mint ID
    })
  ]
}
```
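The store-then-read flow above can be sketched as a small key/value store scoped to one transaction sequence. All names below are assumptions for illustration; the real SDK's API will differ:

```typescript
// Key/value storage scoped to a single transaction sequence (illustrative).
const sequenceStorage = new Map<string, unknown>();

// After a step executes, capture one of its outputs under a key.
function storeAfterExecution(outputs: unknown[], outputIndex: number, key: string): void {
  sequenceStorage.set(key, outputs[outputIndex]);
}

// A later step reads the stored value back by key.
function readStoredValue(key: string): unknown {
  if (!sequenceStorage.has(key)) throw new Error(`nothing stored under "${key}"`);
  return sequenceStorage.get(key);
}

// Step 1: the mint's first output is the new tokenId; store it.
storeAfterExecution([42n], 0, "lastMintedNFT");
// Step 2: the approval reads the stored tokenId.
const tokenId = readStoredValue("lastMintedNFT"); // 42n
```

The key point is the ordering: the value only exists in storage after the mint step has executed, so the approval must read it at execution time rather than encode it up front.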
:::tip[Pseudocode]

The following is pseudocode to demonstrate the concept.

:::
```js
transaction = {
  steps: [
    // Step 1: Mint multiple NFTs and store their IDs
    mintNFTBatch({
      contract: "coolNFT",
      chain: "optimism",
      amount: 3,
      // Store the array of minted IDs
      storeToStorageAfterExecution: {
        outputIndex: 0, // Array of IDs is first output
        key: "batchMintIds"
      }
    }),

    // Step 2: Read specific data from the first NFT and store it
    readNFTAttributes({
      nftId: executionTimeStorageOf("batchMintIds[0]"),
      storeToStorageAfterExecution: {
        // Store a specific storage slot from the NFT contract
        slotIndex: "0x1234...", // Storage slot containing attributes
        key: "nftAttributes"
      }
    }),

    // Step 3: Use both stored values in a marketplace listing
    listForSale({
      marketplace: "nftMarket",
      nftIds: executionTimeStorageOf("batchMintIds"),
      attributes: executionTimeStorageOf("nftAttributes"),
      price: 0.1
    })
  ]
}
```
- `storeToStorageAfterExecution` lets you capture and store:
  - Function return values (`outputIndex`)
  - Contract storage slots (`slotIndex`)
  - Arrays and complex data structures
- Storage options:
  - Store direct function outputs
  - Store specific storage slot values
  - Store arrays or batched results
- Reading stored values:
  - Use `executionTimeStorageOf(key)` to read previously stored values
  - Read individual array elements with `key[index]`
  - Values are scoped to the current transaction sequence
This pattern enables complex operations where you need to track and use data across multiple steps in your transaction sequence.
While the above examples show simplified pseudocode, the actual implementation uses the Composability Stack SDK with more specific parameters and types. The key concept to understand is that you can reference token balances that will only exist at execution time, making complex multi-step transactions possible.