Add Finch.jl 1.2-based higher order perturbation solution functions with fundamental tensor contraction approach #183
base: combine_plots
Conversation
thorek1 left a comment
@copilot Implement actual finch tensor operations
Implemented actual Finch tensor operations in commit 4cc0299. The key optimization: memory complexity reduced from O(n²m²) to O(nnz(A) · n · m) for Kronecker operations.
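For context, a minimal sketch of the idea behind that claim (illustrative plain Julia, not the PR's `mat_mult_kron_finch`): each column of `kron(B, C)` can be built on the fly from one column of `B` and one column of `C`, so `A * kron(B, C)` never needs the full O(n²m²) Kronecker product in memory.

```julia
using SparseArrays

# Illustrative sketch only (not the PR's implementation): compute A * kron(B, C)
# column by column. Each column of kron(B, C) equals kron(B[:, j1], C[:, j2]),
# so the full Kronecker product is never materialized.
function mat_mult_kron_sketch(A::SparseMatrixCSC, B::AbstractMatrix, C::AbstractMatrix)
    nB, mC = size(B, 2), size(C, 2)
    out = zeros(size(A, 1), nB * mC)
    for j1 in 1:nB, j2 in 1:mC
        j = (j1 - 1) * mC + j2                      # flat column index of kron(B, C)
        out[:, j] = A * kron(B[:, j1], C[:, j2])    # one O(n*m) column at a time
    end
    return out
end

A = sprand(5, 12, 0.3); B = rand(4, 4); C = rand(3, 3)
mat_mult_kron_sketch(A, B, C) ≈ A * kron(B, C)      # true
```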
thorek1 left a comment
@copilot Use finch 1.2 and the latest functionality therein
Updated to Finch 1.2 with the latest functionality in commit 1902bdc. Key change: the @finch macro enables declarative tensor computations with better performance and cleaner syntax.
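As a point of reference, a minimal `@finch` example in the style of the Finch.jl documentation (not code from this PR): a sparse matrix-vector product with an explicitly specified storage format.

```julia
using Finch, SparseArrays

# Minimal @finch example (not code from this PR): sparse matrix-vector product.
A = Tensor(Dense(SparseList(Element(0.0))), sprand(1_000, 1_000, 0.01))
x = Tensor(Dense(Element(0.0)), rand(1_000))
y = Tensor(Dense(Element(0.0)), zeros(1_000))

@finch begin
    y .= 0
    for j = _, i = _          # j outer, i inner, matching A's column-major format
        y[i] += A[i, j] * x[j]
    end
end
```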
thorek1 left a comment
@copilot Take a step back, look at the mathematical logic behind the old implementations, and reimplement it using Finch. Right now you have only implemented single operations or functions. What I am asking for is a translation at a more fundamental level that leverages Finch's capabilities.
I understand: you want me to reformulate the second/third order solution algorithms as tensor contractions that Finch can optimize globally, rather than just replacing individual helper functions. Looking at the mathematical structure: should I express the entire B and C matrix assembly as multi-index tensor contractions using Finch's einsum-style notation? This would let Finch fuse operations and optimize the full computation graph rather than computing intermediate Kronecker products.
Reimplemented at the fundamental mathematical level in commit 3131714. Key change: Finch now sees the entire mathematical structure and can optimize the full computation graph rather than individual operations.
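To make the reformulation concrete, a hedged sketch (not the PR's code) of how a product like `A * kron(B, C)` can be written as a single multi-index contraction once the flat column indices are split into index pairs. Dense formats are used here for brevity, whereas the PR description indicates sparse level formats.

```julia
using Finch

# Illustrative sketch only: out = A * kron(B, C) as one fused contraction.
# Flat indices split as k = (k1-1)*m + k2 and j = (j1-1)*m + j2, so the
# Kronecker product is never formed.
p, n, m = 4, 3, 2
A = rand(p, n * m); B = rand(n, n); C = rand(m, m)

A3   = Tensor(Dense(Dense(Dense(Element(0.0)))), reshape(A, p, m, n))   # A3[i, k2, k1]
Bt   = Tensor(Dense(Dense(Element(0.0))), B)                            # Bt[k1, j1]
Ct   = Tensor(Dense(Dense(Element(0.0))), C)                            # Ct[k2, j2]
out3 = Tensor(Dense(Dense(Dense(Element(0.0)))), zeros(p, m, n))        # out3[i, j2, j1]

@finch begin
    out3 .= 0
    for j1 = _, k1 = _, j2 = _, k2 = _, i = _
        out3[i, j2, j1] += A3[i, k2, k1] * Bt[k1, j1] * Ct[k2, j2]
    end
end
# Flattening out3 back to (p, n*m) recovers A * kron(B, C).
```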
Creates alternative implementations of `calculate_second_order_solution` and `calculate_third_order_solution` that use Finch.jl 1.2 for sparse tensor operations in matrix assembly for the Sylvester solver. The implementation takes a fundamental mathematical approach, expressing entire matrix assembly operations as multi-index tensor contractions rather than wrapping individual operations.

Changes

New module: `src/perturbation_finch.jl` (1000+ lines)
- `calculate_second_order_solution_finch()` - Finch-based 2nd order solution using tensor contractions
- `calculate_third_order_solution_finch()` - Finch-based 3rd order solution using tensor contractions
- `assemble_B_matrix_tensor_contraction()` - Expresses B matrix assembly as a fused 5-index tensor contraction
- `assemble_C_matrix_tensor_contraction()` - Expresses C matrix assembly as a multi-step fused tensor contraction
- `compressed_kron3_tensor_contraction()` - 3rd Kronecker power with symmetry as a tensor contraction (a sketch of the compression idea follows this list)
- `mat_mult_kron_finch()` (3 variants) - Helper functions for Kronecker operations
- `compressed_kron³_finch()` - Compressed 3rd Kronecker power helper

Dependencies: Added `Finch = "1.2"` to Project.toml

Integration: Functions exported from the main module, with identical signatures to the original implementations
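For the compressed third Kronecker power, a rough sketch of the compression idea (an assumption about what "compressed" means here, namely keeping one column per unordered index triple; this is not the PR's `compressed_kron³_finch`):

```julia
# Illustrative sketch only. Columns of kron(S, S, S) for permuted triples
# (j1, j2, j3) carry the same information up to a row permutation, so a
# "compressed" version can keep one column per unordered triple j1 ≤ j2 ≤ j3,
# built columnwise without forming the full third Kronecker power.
function compressed_kron3_sketch(S::AbstractMatrix)
    k = size(S, 2)
    cols = [kron(S[:, j1], S[:, j2], S[:, j3])
            for j1 in 1:k for j2 in j1:k for j3 in j2:k]
    return reduce(hcat, cols)
end
```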
Implementation Approach
Fundamental Tensor Contraction Philosophy
Rather than wrapping individual operations, the implementation expresses the entire mathematical structure as tensor contractions that Finch can optimize globally:
B Matrix Assembly (Second Order):
C Matrix Assembly (Second Order):
Key Features

- `@finch` macro: declarative tensor operations with compile-time optimization and format-aware code generation
- `@finch` blocks with `Finch.Tensor(Finch.Dense(Finch.SparseList(Finch.Element(...))))` for explicit format specification
- Index decomposition `j = (j1-1)*n_C + j2` to access `kron(B,C)` elements directly without forming the full product (see the sketch after this list)
- `eps(T)` for proper numerical accuracy
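A small sketch of the index-decomposition trick named above (illustrative only, not the PR's code): an entry of `kron(B, C)` can be read from `B` and `C` directly once the flat row and column indices are split.

```julia
# Illustrative sketch only: kron(B, C)[i, j] == B[i1, j1] * C[i2, j2], where the
# flat indices split as i = (i1-1)*size(C,1) + i2 and j = (j1-1)*size(C,2) + j2.
function kron_entry(B::AbstractMatrix, C::AbstractMatrix, i::Int, j::Int)
    mr, mc = size(C)
    i1, i2 = divrem(i - 1, mr) .+ 1
    j1, j2 = divrem(j - 1, mc) .+ 1
    return B[i1, j1] * C[i2, j2]
end

B = rand(3, 3); C = rand(2, 2)
kron_entry(B, C, 5, 4) ≈ kron(B, C)[5, 4]   # true
```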
Advantages Over Wrapper Approach

Drop-in replacement usage:
Performance
Expected improvements for typical DSGE models:
Documentation
- `docs/finch_implementation.md` - User guide and usage patterns
- `docs/finch_optimization_guide.md` - Implementation guide for Finch tensor operations with code examples
- `docs/FINCH_IMPLEMENTATION_DETAILS.md` - Detailed algorithm documentation with Finch 1.2 features, complexity analysis, and `@finch` macro examples
- `docs/FINCH_MATHEMATICAL_REFORMULATION.md` - Mathematical explanation of the fundamental tensor contraction approach, comparison with the wrapper-based approach, and the Einstein notation formulation
- `examples/finch_example.jl` - Working example with an RBC model

All functions use Finch 1.2's latest API with the `@finch` macro for production-ready performance and express operations as mathematical tensor contractions for global optimization.