Performance bottleneck on relation add #9
Comments
Out of curiosity, wouldn't doing the following give a better speedup:
instead of a static value? What does your benchmark say about that?
It's a trade-off between memory and performance. The problem is that we can't waste a lot of memory on all the relationships of all the nodes.
If the maximum relationship size is less than 200, then a ×200% growth factor is better without much cost.
Memory matters, and it definitely depends on the expected number of elements to be stored, but
We have to be careful if we optimise the serialisation: if someone commits after every add operation, then our optimisation would be useless. From my point of view, the additional cost of persisting is not a big deal (and can be handled by a compression algorithm anyway); it is the in-memory cost that is problematic, not the additional disk space.
The point is: this growth factor applies per relationship, so the memory increase will be:
Hi everyone, good discussion and good benchmarks to isolate the problem.
After benchmarking, a big bottleneck currently is this code:
in the add method of the AbstractNode class, where the backing long[] array is grown by only one element at a time.
We should instead grow it by 30% each time: performance increases from 5,000 values/sec to 25,000,000 values/sec.
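To make the difference concrete, here is a minimal sketch of the proposed change. This is not the project's actual `AbstractNode` code; the class and method names below are hypothetical. It shows geometric growth (~30% per resize) of a `long[]`, which amortizes the copy cost to O(1) per add, versus O(n) per add when growing by a single slot:

```java
import java.util.Arrays;

// Hypothetical example, not the actual AbstractNode implementation.
public class GrowableLongArray {
    private long[] data = new long[4];
    private int size = 0;

    public void add(long value) {
        if (size == data.length) {
            // Grow by ~30% instead of by 1. Math.max guards the case
            // where length * 1.3 rounds down to the current length.
            int newCapacity = Math.max(data.length + 1, (int) (data.length * 1.3));
            data = Arrays.copyOf(data, newCapacity);
        }
        data[size++] = value;
    }

    public long get(int index) {
        return data[index];
    }

    public int size() {
        return size;
    }
}
```

With growth-by-one, adding n values triggers n array copies totalling O(n²) element moves; with a constant growth factor, the total copy work stays O(n), which is consistent with the order-of-magnitude speedup reported above.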