Hello. I am working on a big data project using a Titan 1.0 graph backed by an HBase table. We are currently in the testing phase, and when we need to empty the graph for testing we run a Groovy script that closes the graph and then calls TitanCleanup.clear(). However, because of the high number of elements in the graph (up to ~800k vertices after our performance tests), this script takes a long time, which we would like to shorten as much as possible for the testers.
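For reference, here is a minimal sketch of the cleanup step as we run it; the properties file path is illustrative, substitute your own configuration:

```groovy
// Minimal sketch of the cleanup script described above. The properties file
// path is an assumption for illustration.
import com.thinkaurelius.titan.core.TitanFactory
import com.thinkaurelius.titan.core.TitanGraph
import com.thinkaurelius.titan.core.util.TitanCleanup

TitanGraph graph = TitanFactory.open('conf/titan-hbase.properties')
graph.close()              // the graph must be closed before it can be cleared
TitanCleanup.clear(graph)  // wipes all data, including the schema; slow on large graphs
```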
We found an alternative approach, which is dropping and re-creating the Titan table in HBase (a sketch follows below), but we would like to avoid it if possible, as we do not know whether it could cause issues such as lingering locks.
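A hedged sketch of that alternative, assuming an HBase 1.x client and the default Titan table name ('titan', configurable via storage.hbase.table); since Titan stores its schema as rows in the same table, truncation removes the schema definitions as well, and Titan should re-initialize them on the next open:

```groovy
// Sketch of the truncate alternative, assuming an HBase 1.x client and the
// default Titan table name. Close the Titan graph before touching the table.
import org.apache.hadoop.hbase.HBaseConfiguration
import org.apache.hadoop.hbase.TableName
import org.apache.hadoop.hbase.client.Admin
import org.apache.hadoop.hbase.client.Connection
import org.apache.hadoop.hbase.client.ConnectionFactory

Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create())
Admin admin = conn.getAdmin()
TableName table = TableName.valueOf('titan')
admin.disableTable(table)          // truncateTable requires a disabled table
admin.truncateTable(table, true)   // true = preserve existing region splits
admin.close()
conn.close()
```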
Do direct actions on the HBase table, such as truncating or dropping and re-creating it, present a risk to the stability of the graph database?
Are there valid alternatives, or steps to optimize the execution of the TitanCleanup.clear() method?