- The info (both in the post and here) about /dev/shm is completely wrong
- It is true that 64 GB is not currently required for a witness node (and certainly not 128 GB, which is overkill to the point of being useless). 32 GB works fine. 16 GB is pretty questionable, and I'm doubtful that 8 GB is even usable (I'm running some tests right now). 32 GB will enter that questionable territory once the state file exceeds physical memory by a sufficient ratio, which from past experience at various sizes seems to be about 2x (I believe the file is around 35 GB currently, so a roughly 64 GB file would be the tipping point for a 32 GB machine). The memory-mapped approach simply does not deliver good performance once the data size is much larger than memory (see the sketch after this list).
- Moving the data to a database should dramatically improve how data size scales relative to physical memory once development is finished, but it may have other tradeoffs. We will have to see how that works out.
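
For context on why performance falls off, here's a minimal sketch of how memory-mapped state access works, assuming a POSIX system and a raw `mmap` call (the file name is illustrative, and the actual node manages its state file through a library layer rather than a direct call like this):

```cpp
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>
#include <cstdint>
#include <cstdio>

int main() {
    // Open the (hypothetical) shared state file.
    int fd = open("shared_memory.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) != 0) { perror("fstat"); return 1; }

    // Map the whole file into the process address space. Nothing is
    // read from disk yet; pages are faulted in on first access.
    void* base = mmap(nullptr, st.st_size, PROT_READ | PROT_WRITE,
                      MAP_SHARED, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    // Touching a byte that is not resident triggers a page fault.
    // While the file fits in RAM, faults are rare after warm-up; once
    // the file is well past physical memory (~2x in my experience),
    // the kernel must constantly evict and re-read pages, which is
    // the slowdown described above.
    volatile uint8_t first = static_cast<uint8_t*>(base)[0];
    (void)first;

    munmap(base, st.st_size);
    close(fd);
    return 0;
}
```

The appeal of this design is that reads are plain pointer dereferences with no serialization step; the cost is that the kernel's page cache becomes the de facto database buffer, and it knows nothing about the application's access patterns, so it pages blindly once memory runs short.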