tip: 362
title: Optimized node broadcast data caching
author: forfreeday@163.com
discussions to: https://github.com/tronprotocol/TIPs/issues/348
status: Final
type: Standards Track
category: Core
created: 2022-01-17
Before broadcasting, a node caches a large amount of transaction data and uses that cache to decide whether a transaction is a duplicate. The default cache size of 100_000 makes it easy to guarantee that no duplicate transactions are broadcast, but in practice caching 100_000 transactions occupies a large amount of memory and creates many redundant objects.
Optimize the size of the transaction broadcast caches to reduce the memory they occupy.
In a TRON network node, the broadcast transactions to be sent are kept in a local cache whose default size is 100_000. In practice, data is added to these caches but not cleared immediately, so a large number of objects accumulate and occupy JVM heap space.
This leaves room for reducing the pressure on JVM garbage collection.
The caches are built on Guava Cache with a size-based eviction policy, so eviction is only triggered once the cache is full. In other words, up to 100_000 Item/Long key-value pairs stay in memory at all times. This wastes memory; the number of cached objects can be reduced to save heap space.
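A minimal sketch (not the actual java-tron code) of the kind of cache described here: a Guava cache keyed by the broadcast inventory item with a timestamp value, capped at 100_000 entries, where nothing is evicted until that cap is reached. The `Item` record below is a hypothetical stand-in for `org.tron.core.net.peer.Item`.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;

public class BroadcastDedupCacheSketch {

  // Hypothetical stand-in for org.tron.core.net.peer.Item (inventory type + hash).
  record Item(String type, String hash) {}

  // Inventory items mapped to the time they were seen, capped at the default
  // 100_000 entries. With only a size bound, nothing is evicted until the cap
  // is reached, so up to 100_000 Item/Long pairs stay resident on the heap.
  private final Cache<Item, Long> advInvReceive = CacheBuilder.newBuilder()
      .maximumSize(100_000)
      .build();

  boolean isDuplicate(Item item) {
    // A broadcast is treated as a duplicate if its inventory item is already cached.
    return advInvReceive.getIfPresent(item) != null;
  }

  void markSeen(Item item) {
    advInvReceive.put(item, System.currentTimeMillis());
  }
}
```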
Reduce the footprint of expired cache entries to free up more memory.
There are two important caches:
- advInvReceive: cache of incoming inv messages
- advInvSpread: cache of sent inv messages
The current default size of these caches is 100_000; this proposal adjusts that size. In actual use, data is added to the two caches but not cleared immediately, so the accumulated objects take up JVM heap space.
This leaves room for reducing JVM garbage-collection pressure.
Guava Cache is used with a size-based eviction policy, and eviction is only triggered once the cache is full. This means that up to 100_000 Item/Long key-value pairs stay in memory at all times. This wastes memory, and the number of objects held in the cache can be reduced to save heap space.
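As a sketch of the direction the proposal takes, the two caches could be rebuilt with a smaller size cap plus time-based expiry so that stale entries stop occupying the heap. The concrete numbers below (50_000 entries, 10-minute expiry) are illustrative assumptions, not values fixed by this TIP, and the `Item` record is again a hypothetical stand-in.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.concurrent.TimeUnit;

public class AdjustedBroadcastCachesSketch {

  // Hypothetical stand-in for org.tron.core.net.peer.Item.
  record Item(String type, String hash) {}

  // Example adjusted configuration: a smaller size cap plus time-based expiry,
  // so entries stop occupying heap once they are no longer needed for dedup.
  // The 50_000 / 10-minute values are illustrative, not mandated by this TIP.
  final Cache<Item, Long> advInvReceive = CacheBuilder.newBuilder()
      .maximumSize(50_000)
      .expireAfterWrite(10, TimeUnit.MINUTES)
      .build();

  final Cache<Item, Long> advInvSpread = CacheBuilder.newBuilder()
      .maximumSize(50_000)
      .expireAfterWrite(10, TimeUnit.MINUTES)
      .build();

  // Guava reclaims expired entries lazily during normal reads and writes;
  // cleanUp() can be invoked periodically to force maintenance when traffic is low.
  void maintain() {
    advInvReceive.cleanUp();
    advInvSpread.cleanUp();
  }
}
```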
Before the optimization, the cached Item instances alone number 1310237 and use 41927584 bytes:
10: 1310237 41927584 org.tron.core.net.peer.Item
252: 27 4968 org.tron.core.net.peer.PeerConnection
2556: 1 24 org.tron.core.net.peer.PeerStatusCheck
3496: 1 16 org.tron.core.net.peer.PeerStatusCheck$$Lambda$411/507363919
The local cache relies on Guava Cache, so reducing the number of cached entries also greatly reduces the number of internal objects Guava Cache creates. Guava Cache stores each entry by constructing a series of objects based on the ReferenceEntry interface.
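The histograms below show those internal LocalCache objects. As an illustrative way to observe how the number of live entries, and hence the internal bookkeeping objects, tracks `maximumSize`, a cache can be built with `recordStats()` and its size and eviction counts inspected. This is a generic demo, not java-tron code.

```java
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import com.google.common.cache.CacheStats;

public class CacheFootprintDemo {

  public static void main(String[] args) {
    // Each live entry is backed by internal LocalCache objects (entry + value reference),
    // so the resident entry count is a good proxy for the cache's own object overhead.
    Cache<Long, Long> cache = CacheBuilder.newBuilder()
        .maximumSize(100_000)
        .recordStats()
        .build();

    for (long i = 0; i < 150_000; i++) {
      cache.put(i, i);
    }

    CacheStats stats = cache.stats();
    // size() stays near maximumSize once eviction kicks in; lowering maximumSize
    // directly lowers the number of internal entry objects kept on the heap.
    System.out.println("resident entries: " + cache.size());
    System.out.println("evictions: " + stats.evictionCount());
  }
}
```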
4: 2345662 150122368 com.google.common.cache.LocalCache$StrongAccessWriteEntry
10: 2345862 37533792 com.google.common.cache.LocalCache$StrongValueReference
108: 381 30480 com.google.common.cache.LocalCache$Segment
168: 96 12288 com.google.common.cache.LocalCache
187: 200 9600 com.google.common.cache.LocalCache$StrongAccessEntry
195: 381 9144 com.google.common.cache.LocalCache$AccessQueue$1
226: 265 6360 com.google.common.cache.LocalCache$WriteQueue$1
Most of the objects created by Guava Cache are held in the new generation, so usage of the new-generation space stays consistently high.
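The young-generation figures below come from inspecting a running node's heap. As a generic alternative, not necessarily the tooling used to produce this output, similar per-pool numbers can be sampled in-process through the standard `java.lang.management` API; heap pool names depend on the garbage collector in use.

```java
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryPoolMXBean;
import java.lang.management.MemoryType;
import java.lang.management.MemoryUsage;

public class HeapPoolSnapshot {

  public static void main(String[] args) {
    // Print committed/used/free bytes for each heap pool (Eden, Survivor, Old Gen, ...).
    for (MemoryPoolMXBean pool : ManagementFactory.getMemoryPoolMXBeans()) {
      if (pool.getType() != MemoryType.HEAP) {
        continue;
      }
      MemoryUsage usage = pool.getUsage();
      if (usage == null) {
        continue;
      }
      long committed = usage.getCommitted();
      long used = usage.getUsed();
      System.out.printf("%s: capacity=%d used=%d free=%d (%.2f%% used)%n",
          pool.getName(), committed, used, committed - used,
          committed == 0 ? 0.0 : 100.0 * used / committed);
    }
  }
}
```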
Heap Usage:
New Generation (Eden + 1 Survivor Space):
capacity = 2899116032 (2764.8125MB)
used = 2630018496 (2508.1810913085938MB)
free = 269097536 (256.63140869140625MB)
90.7179452967821% used
After the cache size is adjusted, the number of cached Item instances drops sharply:
11: 504003 16128096 org.tron.core.net.peer.Item
252: 30 5520 org.tron.core.net.peer.PeerConnection
2605: 1 24 org.tron.core.net.peer.PeerStatusCheck
3539: 1 16 org.tron.core.net.peer.PeerStatusCheck$$Lambda$411/757310910
After the adjustment, Guava Cache also needs roughly 50% fewer internal objects to maintain the cache:
6: 1145849 73334336 com.google.common.cache.LocalCache$StrongAccessWriteEntry
10: 1146053 18336848 com.google.common.cache.LocalCache$StrongValueReference
102: 417 33360 com.google.common.cache.LocalCache$Segment
164: 105 13440 com.google.common.cache.LocalCache
188: 417 10008 com.google.common.cache.LocalCache$AccessQueue$1
192: 204 9792 com.google.common.cache.LocalCache$StrongAccessEntry
230: 289 6936 com.google.common.cache.LocalCache$WriteQueue$1
New-generation memory usage drops by roughly 50% compared with before the optimization:
New Generation (Eden + 1 Survivor Space):
capacity = 2899116032 (2764.8125MB)
used = 1315915376 (1254.9546966552734MB)
free = 1583200656 (1509.8578033447266MB)
45.39022796863344% used