Performance testing and tuning for Autopsy 4.18

One question I have, when tuning memory settings, is the Solr JVM part of the Maximum JVM memory or a separate JVM?

@mckw99 Those are two separate, unrelated settings. The Maximum JVM memory is the (max) Java heap size allocated to the Autopsy process itself. In single-user mode, Autopsy starts a separate Solr process to do the indexing; the Solr JVM setting is the (max) Java heap allocated to that Solr process.
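To make the distinction concrete, here is a rough sketch of where each heap size lives under the hood. The file names and option values below are assumptions based on a default install (in practice, both values are easiest to change from the Autopsy Options panel); Autopsy is a NetBeans-platform application, so its own heap is a `-J-Xmx` launcher option, while standalone Solr reads its heap from `SOLR_JAVA_MEM`:

```shell
# etc/autopsy.conf -- JVM options for the Autopsy process itself.
# -J-Xmx corresponds to the "Maximum JVM memory" setting in the UI.
# (Values here are illustrative, not the shipped defaults.)
default_options="--branding autopsy -J-Xms24m -J-Xmx8G"

# solr.in.cmd / solr.in.sh -- heap for the *separate* Solr process.
# This corresponds to the "Maximum Solr JVM memory" setting in the UI.
SOLR_JAVA_MEM="-Xms2g -Xmx8g"
```

Raising one of these has no effect on the other, which is why the Solr heap can be starved even when Autopsy itself has plenty of memory.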

Originally I had my Max JVM Memory set to 8GB

That is a reasonable default and should work, though when possible I usually set this to 10-14GB.

my Max Solr JVM Memory at the default of 512MB (running Autopsy 4.18)

This is definitely not enough for large (multi-TB) data sources. If you're ingesting TBs of data, I'd use at least 4GB, and preferably 8GB. In fact, I analyzed the thread dump from your other thread (Slow ingest dump - version 4.18 - Keyword search) and I'm quite confident this setting is the cause of the issue. I believe the Solr server ran out of memory and essentially stopped working (at the very least, it is no longer able to index new documents). That drastically slows down ingest, because we re-try indexing several times. If I'm correct, you will see a lot of "Unable to send document batch to Solr. All re-try attempts failed!" errors when you open the Autopsy debug logs:

[screenshot: Autopsy debug log showing repeated "Unable to send document batch to Solr. All re-try attempts failed!" errors]
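If you'd rather check from the command line, counting those error lines in the log is enough to confirm or rule out the diagnosis. This is a small sketch; the log path in the usage example is an assumption (single-user Autopsy logs normally live under your user application-data directory, e.g. `%APPDATA%\autopsy\var\log` on Windows), so point it at your own `autopsy.log.0`:

```shell
# Count failed Solr batch-indexing attempts in an Autopsy debug log.
# A non-zero count suggests Solr stopped accepting documents.
count_solr_errors() {
  grep -c "Unable to send document batch to Solr" "$1"
}

# Example (hypothetical path -- adjust to your install):
#   count_solr_errors "$APPDATA/autopsy/var/log/autopsy.log.0"
```

If the count is large, bumping the Solr JVM heap as described above and re-running ingest is the first thing to try.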

I would also expect to see some error-message notifications in the bottom-right corner of the Autopsy UI. Please let me know if you don't see any error notifications there.