DLR creation memory usage is linear in $\beta$ (should be $\log\beta$) #3
Comments
Please see my response in the TRIQS issue report. I'm going to label this as a feature request here, not a bug report.
Since this is really a cppdlr issue, let's discuss here. I am quoting your response in the corresponding TRIQS issue below:
This is basically my suggestion (3) in my previous reply, and if it appears to be working, I am all in favor of implementing it, with some caveats. Given that this is a heuristic, and to avoid a proliferation of flags, for the time being I would try to do this by modifying the current code to give the user the option to pass in either an … Also, it would be great if you could include a reasonably convincing test that this works as well as the full fine imaginary frequency grid (e.g., giving just as small interpolation error for a few examples). If that sounds reasonable to you and you want to go ahead and implement it, perhaps you can do it as a pull request and then we can look it over. Thanks, Jason
Hey guys, just stumbled over this issue, hope it is okay to comment "across the aisle": I agree with @jasonkaye and @HugoStrand, I think strategy (3) should work nicely. For the IR basis, we found that you have considerable leeway in how exactly you choose the sampling frequencies without affecting the conditioning of the problem too much, as long as you make sure that the lowest couple of Matsubara frequencies are in your set. This is why we are fine with choosing the discrete sign changes of the highest-order basis function ... In general, we find that the conditioning of the fitting problem in tau-space is $0.5 \sqrt{\Lambda}$, while in Matsubara, you get $2 \sqrt{\Lambda}$ if you do it optimally, and you should still be within $4 \sqrt{\Lambda}$ or so if you don't exactly nail your frequency points. As for strategies (1) and (2), what one finds empirically for many bases, for example the Fourier-transformed Chebyshev polynomials, is that for the lowest sampling frequencies, the "true" sign changes on the imaginary axis are extremely close to the Matsubara frequencies. So that is something you could think about. This might also explain why choosing Matsubara frequencies works reasonably well.
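To make the sign-change heuristic concrete, here is a small standalone sketch (not IR/DLR library code): Chebyshev polynomials on $[-1, 1]$ stand in for the imaginary-frequency basis, and we compare the conditioning of the fitting matrix when sampling at the extrema of the highest-order basis function against a naive equispaced choice. The names `fit_matrix`, `good`, and `naive` are illustrative, not part of any API.

```python
# Illustration only: sampling near the extrema/sign changes of the
# highest-order basis function keeps the least-squares fit well conditioned,
# while equispaced sampling degrades it badly as the basis size grows.
import numpy as np

n = 16  # number of basis functions T_0 .. T_{n-1}

def fit_matrix(points):
    # Rows: sampling points; columns: Chebyshev polynomials T_0..T_{n-1}
    # evaluated at those points.
    return np.polynomial.chebyshev.chebvander(points, n - 1)

# Extrema of T_{n-1} (Chebyshev-Lobatto points) -- the analogue of the
# sign-change rule discussed above.
good = np.cos(np.pi * np.arange(n) / (n - 1))
naive = np.linspace(-1.0, 1.0, n)  # equispaced points

print("cond(extrema):   ", np.linalg.cond(fit_matrix(good)))
print("cond(equispaced):", np.linalg.cond(fit_matrix(naive)))
```

The extrema grid gives a condition number of order one, while the equispaced grid is orders of magnitude worse, consistent with the observation that there is considerable leeway as long as the points roughly track the basis oscillations.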
Hey @mwallerb! Thank you for the input! I'll have a closer look at the prefactors on the condition numbers of the transforms. Regarding the sign changes in the Matsubara frequency representation of the Chebyshev polynomials, we saw the same trend when extending the Matsubara sparse sampling to Legendre polynomials, see Appendix B in https://doi.org/10.1103/PhysRevB.106.125153. Cheers, Hugo
Thanks a lot, @HugoStrand - I missed this appendix in your paper!
Description
This is a reposting of the TRIQS issue TRIQS/triqs#917, showing that the memory usage when building the DLR basis is linear in the inverse temperature $\beta$, preventing the use of the DLR at low temperatures. This is probably caused by the current approach to selecting DLR Matsubara frequencies, since a dense Matsubara grid with an upper cutoff proportional to $\beta$ is used.
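A back-of-the-envelope sketch (not the cppdlr API; `dense_grid_size` and `dlr_rank_estimate` are hypothetical names) of why this matters: the dense Matsubara candidate grid scales with $\Lambda = \beta \omega_{\max}$, i.e. linearly in $\beta$, while the number of DLR basis functions is known to scale only like $\log(\Lambda)\log(1/\epsilon)$.

```python
# Rough scaling estimates, assuming Lambda = beta * wmax and the standard
# asymptotic DLR rank bound O(log(Lambda) * log(1/eps)).
import math

def dense_grid_size(beta, wmax):
    # Dense Matsubara grid with cutoff ~ Lambda: grows linearly in beta.
    return int(beta * wmax)

def dlr_rank_estimate(beta, wmax, eps=1e-10):
    # Asymptotic DLR rank estimate: grows only logarithmically in beta.
    return int(math.log(beta * wmax) * math.log(1.0 / eps))

for beta in [1e2, 1e3, 1e4, 1e5]:
    print(f"beta={beta:8.0f}  dense grid ~ {dense_grid_size(beta, 10.0):8d}"
          f"  DLR rank ~ {dlr_rank_estimate(beta, 10.0):4d}")
```

Going from $\beta = 10^2$ to $10^5$ multiplies the dense grid (and hence peak memory) by $10^3$, while the rank estimate barely doubles, which is why a $\log\beta$-sized candidate set should suffice in principle.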
See the memory scaling test here: TRIQS/triqs#917
Expected behavior: The peak memory usage should be $\log \beta$ for the DLR basis to be applicable in the low temperature regime.
Actual behavior: Peak memory usage is linear in $\beta$.