Conversation
|
Your PR no longer requires formatting changes. Thank you for your contribution! |
|
This is very interesting. I would like to know more about how coarse graining is performed here. |
|
The diagram for this coarse graining is attached in the file. |
|
An observation: since the system size grows by a factor of three after each RG step, the error accumulates much faster than on the square lattice. The leading-order perturbation can be estimated to be
Moved all loop algorithms together into one folder, deleted some repeated code, and defined two new abstract types, giving the hierarchy LinearLoopScheme <: LoopScheme <: TNRScheme.
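A minimal Julia sketch of what the new hierarchy described above could look like; `TNRScheme` is the existing root type in TNRKit, while `SomeLoopScheme` is a hypothetical concrete scheme used purely for illustration (the actual concrete types and their fields live in TNRKit and may differ):

```julia
abstract type TNRScheme end                       # existing root type for all TNR schemes
abstract type LoopScheme <: TNRScheme end         # new: schemes built around loop optimization
abstract type LinearLoopScheme <: LoopScheme end  # new: loop schemes with a linear update step

# A concrete scheme then subtypes the appropriate level, e.g. (hypothetical):
struct SomeLoopScheme <: LinearLoopScheme
    # fields omitted in this sketch
end

SomeLoopScheme() isa TNRScheme  # true: dispatch on TNRScheme covers all schemes
```

Grouping the loop algorithms under shared abstract types lets the deduplicated code dispatch once on `LoopScheme` (or `LinearLoopScheme`) instead of being repeated per concrete scheme.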
|
A little strange: the tests pass locally, but CI reports
Strange error: why does the CI not see the cft_data! in TNRKit, and instead insist on using the cft_data! in Main?
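This behavior is standard Julia name-resolution rather than a CI quirk: if a test file defines its own function with the same name in `Main`, unqualified calls and new method definitions bind to `Main.cft_data!` and shadow TNRKit's function. A sketch of the situation and the usual fixes (`scheme` is a placeholder value):

```julia
using TNRKit

# Defining a same-named function in Main (e.g. inside a test script) creates
# Main.cft_data!, which shadows TNRKit.cft_data! for all unqualified calls:
cft_data!(scheme) = error("this is Main.cft_data!, not TNRKit's")

# Two standard fixes:
# (a) qualify the call so TNRKit's function is used explicitly:
#     TNRKit.cft_data!(scheme)
# (b) import the name before adding methods, so new methods extend
#     TNRKit's function instead of creating a separate one in Main:
#     import TNRKit: cft_data!
#     cft_data!(scheme::MyScheme) = ...
```

So if CI runs the tests in a context where such a local definition exists (or where the name was never imported from TNRKit), it will insist on the `Main` version even though the local session happens to resolve the right one.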
|
As of #122 the cft data functions should not mutate the tensors they are calculating with.
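One cheap way to satisfy this requirement, sketched here as a hypothetical non-mutating wrapper (the actual signatures and fix in TNRKit may well be different, e.g. rewriting the internals to avoid in-place operations entirely):

```julia
# Hypothetical sketch: compute the cft data from a deep copy of the scheme,
# so the caller's tensors are left untouched; only the copy is mutated.
cft_data(scheme) = cft_data!(deepcopy(scheme))
```

This keeps the in-place implementation available while giving callers a safe, non-mutating entry point, at the cost of one copy of the tensors.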
|
@VictorVanthilt OK, will try to correct those parts. |
|
Summary of the modification:
|
|
The current one is not very compatible with NNR-TNR. I am preparing to isolate the |
By performing one step of contraction, a 2D tensor network on the honeycomb graph can be transformed into one on a kagome graph, on which LoopTNR can be formulated.
This PR implements LoopTNR on the kagome graph; it can be applied to the honeycomb graph as well. Most functions of LoopTNR can be reused here. The only subtleties are the entanglement filtering and the final contraction.
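For concreteness, the elementary move underlying such a lattice transformation is contracting the shared bond between two three-leg tensors into a single four-leg tensor. A generic sketch with plain Julia arrays (TNRKit itself works with TensorMap-style objects, and the actual honeycomb-to-kagome index regrouping follows the attached Convention.pdf, not this toy function):

```julia
# Contract A[i, j, k] with B[k, l, m] over the shared bond k,
# producing C[i, j, l, m] = sum_k A[i, j, k] * B[k, l, m].
function contract_bond(A::Array{<:Number,3}, B::Array{<:Number,3})
    dA, dB = size(A), size(B)
    Am = reshape(A, dA[1] * dA[2], dA[3])   # flatten (i, j) into one row index
    Bm = reshape(B, dB[1], dB[2] * dB[3])   # flatten (l, m) into one column index
    reshape(Am * Bm, dA[1], dA[2], dB[2], dB[3])
end
```

Applying such bond contractions to a chosen subset of honeycomb bonds regroups the network onto new effective sites; the specific choice that yields the kagome graph is fixed by the conventions in the attached PDF.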
Convention.pdf
Further testing is still needed.