cache size too big #15
Here is my test code; the contract is bAR. `./redstone/state/data.mdb` gradually grows larger and larger:

```js
import { WarpFactory } from 'warp-contracts';
import path from 'path';
import { LmdbCache } from 'warp-contracts-lmdb';

const smartweave = WarpFactory
  .forMainnet()
  .useStateCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/state') }))
  .useContractCache(new LmdbCache({ inMemory: false, dbLocation: path.join(__dirname, 'redstone/contracts') }));

const contractTxId = 'VFr3Bk-uM-motpNNkkFg4lNW1BMmSfzqsVO551Ho4hA'; // bAR

async function updateState() {
  try {
    const result = await smartweave.contract(contractTxId)
      .setEvaluationOptions({
        allowBigInt: true,
        allowUnsafeClient: true,
        internalWrites: false
      })
      .readState();
    console.log('res: ', 'get success');
  } catch (error) {
    console.log('readState error:', error, 'contractId:', contractTxId);
  }
}

const delay = 60000; // 1 minute
updateState().then(() => {
  setTimeout(function run() {
    updateState().then(() => {
      setTimeout(run, delay);
    });
  }, delay);
});
```
@janekolszak, could you please guide us through the process of …? I believe this should be added to the readme.
Hi @kevin-zhangzh! As @ppedziwiatr said, there are a couple of things that you can do:
I'm experiencing the same issue: a single run of the state evaluation yields about 20 MB in db size. Re-evaluating the state (after waiting a bit) essentially seems to duplicate that data, even with min & max entries set to 1. The rewrite script does work in bringing the size down to where it should be, but it would be great if that could happen without having to run such a process. @ppedziwiatr mentioned possibly using better-sqlite3 to replace lmdb, so I'll keep an eye out for that.
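For anyone puzzled by why the cache keeps growing: every `readState()` call writes a new state snapshot keyed by the evaluation's sort key, so the per-contract entry count (and the file) grows until old entries are pruned. The min/max-entries pruning idea can be sketched with a toy in-memory cache (the class and field names below are illustrative, not the actual warp-contracts-lmdb internals):

```typescript
// Toy sketch of per-contract entry pruning. Each put() adds a new
// snapshot keyed by sortKey; once the count exceeds maxEntries, the
// oldest snapshots are dropped until only minEntries newest remain.
// Illustrative only -- not the real warp-contracts-lmdb implementation.
type Entry = { sortKey: string; value: unknown };

class PruningCache {
  private store = new Map<string, Entry[]>();

  constructor(private minEntries: number, private maxEntries: number) {}

  put(contractTxId: string, entry: Entry): void {
    const entries = this.store.get(contractTxId) ?? [];
    entries.push(entry);
    // Keep entries ordered by sort key, oldest first.
    entries.sort((a, b) => a.sortKey.localeCompare(b.sortKey));
    if (entries.length > this.maxEntries) {
      // Drop the oldest entries, retaining the newest minEntries.
      entries.splice(0, entries.length - this.minEntries);
    }
    this.store.set(contractTxId, entries);
  }

  count(contractTxId: string): number {
    return this.store.get(contractTxId)?.length ?? 0;
  }
}

// With min = max = 1, repeated evaluations keep only the latest snapshot.
const cache = new PruningCache(1, 1);
for (let i = 0; i < 5; i++) {
  cache.put('VFr3Bk-uM-motpNNkkFg4lNW1BMmSfzqsVO551Ho4hA', {
    sortKey: String(i).padStart(3, '0'),
    value: {}
  });
}
console.log(cache.count('VFr3Bk-uM-motpNNkkFg4lNW1BMmSfzqsVO551Ho4hA')); // 1
```

Note that even with pruning, LMDB generally does not shrink `data.mdb` on disk; freed pages are reused for new writes but the file keeps its high-water size, which is why an explicit rewrite/compaction step is needed to actually reclaim space.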
We're (i.e. @Tadeuchi is) working on the better-sqlite3 implementation. I guess fighting with lmdb makes no sense. Will let you know when it's ready!
@ppedziwiatr is this still being developed? I'd like to try this feature, but I'm wondering whether it is going to survive or not.
When I upgraded from leveldb to lmdb, the cache file became very large, with a size of 9 GB (file: data.mdb). What could be the reason for this?