Hello. I’m running a regex in an attempt to extract Bitcoin wallets from an exhibit. I know, I know. I had one problem, so I broke out the regex and now I have two problems! The regex is `(bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}`, and when it runs Autopsy throws “java.lang.OutOfMemoryError: GC overhead limit exceeded” errors. Is there anything I can do to tune the garbage collection process when starting Autopsy?
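In case it helps anyone reproduce this outside Autopsy, here is a quick standalone Java sketch of the same pattern (the class name and the sample address are just illustrative, not from my case):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class WalletExtractor {
    // The pattern from the post, verbatim. Note: Base58 addresses never
    // contain 0, O, I, or l, so a stricter class such as
    // [a-km-zA-HJ-NP-Z1-9] would cut down on false positives (assumption:
    // fewer false hits is desirable here).
    static final Pattern WALLET =
            Pattern.compile("(bc1|[13])[a-zA-HJ-NP-Z0-9]{25,39}");

    // Collect every non-overlapping match in the input text.
    static List<String> extract(String text) {
        List<String> hits = new ArrayList<>();
        Matcher m = WALLET.matcher(text);
        while (m.find()) {
            hits.add(m.group());
        }
        return hits;
    }

    public static void main(String[] args) {
        String sample = "donate to 1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2 please";
        System.out.println(extract(sample));
        // prints [1BvBMSEYstWetqTFn5Au4m4GFg7xJaNVN2]
    }
}
```

The expression itself behaves fine in plain Java, so the memory blow-up appears to be on the search-engine side rather than in the pattern.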
Hello. It’s always difficult to offer solutions to an OutOfMemoryError, but the first thing I would try is increasing the Java heap size allocated to Autopsy. You can do that on the Tools->Options->Application tab by modifying the “Maximum JVM Memory” setting. If you are using a machine with 32GB of RAM, I would set it to something like 14GB and see what happens.
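If the Options dialog change doesn’t persist for some reason, I believe the same limit can also be set in Autopsy’s NetBeans-style launcher config. This is only a sketch; the exact path and the surrounding flags vary by version and install:

```
# <Autopsy install dir>\etc\autopsy.conf  (path and other flags are assumptions)
# -J-Xmx sets the maximum Autopsy JVM heap; the rest shown here is illustrative
default_options="--branding autopsy -J-Xms24m -J-Xmx14g"
```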
Hi folks, I’m getting the exact same issue as @atdt0 with version 4.10.1. I cannot successfully complete an ingest, even with all of the ingest modules except keyword search disabled. This is on a box with 32GB RAM, 16GB dedicated to JVM memory. The box is a fresh Win10 Pro install with some 3rd-party software installed via ninite.com, but very basic and freshly built solely to run Autopsy.
Exact same popup message as atdt0 (makes me want to +++ath0). The Java VM still seems to be running after this error, and the computer is exceedingly sluggish (as if it were still doing work), but the ingestion status bar is frozen at 90%.
I’ll try to find and install an older version and see if maybe this problem was introduced recently. I’m happy to pull logs if it will help. I appreciate any help that can be offered, and the work that went into this tool (I didn’t know how good it had gotten until the last OSDFCon).
Mark
@mlachniet @atdt0 I’m guessing you are using a single-user Autopsy case? If so, then I wonder if it is Solr (and not Autopsy) that is running out of heap. Try increasing the “Maximum Solr JVM Memory” setting on the Tools->Options->Application tab. In the upcoming Autopsy release we have increased the default to 2GB. If you have 32GB of RAM and you are running regex searches, you should probably set the Solr JVM heap to 4GB or 8GB. Regex searches are known to use up all of the heap allocated to Solr; that seems to be inherent to how Solr evaluates them.
Thanks. Yes, this is a single-user case. I’ll re-run it with just the Solr JVM heap bumped to 8GB, leave the JVM memory setting at the default, and report back with my findings.
Derrick
@atdt0 I think you should bump up the JVM memory setting as well. The Autopsy defaults are very conservative; they are designed to run on a machine with 8GB of RAM, and I have seen many situations where they are not sufficient. In my experience, the Autopsy JVM heap size should be closer to 10GB if the system allows.
Thank you kindly @Eugene_Livis
Also single-user here (though my keywords didn’t previously include regex, just static strings). I set the Solr heap to 4GB and the Autopsy JVM heap to 12GB, leaving 16GB for the OS, restarted the software so the change took effect, and began indexing again. I should know something in 36 hours or so.
Done @Eugene_Livis. I bumped the JVM to 16GB and Solr JVM heap to 8GB and am re-running the keyword ingest.
Derrick
Just an update on this. After bumping up my memory limits it has not crashed, but it has not completed either. It has been running for 6 days and is sitting at 24% complete. At this point I’m going to cancel the process, switch over to Autopsy 4.18.0, and then re-run the keyword search.
Update on my end as well: after I gave both processes more memory, they were able to complete without error.
However, I also tried running the same ingest on another machine and noticed something interesting. The ingest was MUCH faster on a much slower machine with gobs of RAM (240GB total, 32GB dedicated to Solr) than on a machine with a much faster CPU but only 32GB of RAM (8GB for Solr). I was really surprised how much faster the high-memory system worked, even with really old Xeon processors. In the future I may try to use Autopsy on old discarded servers, because it seems a heck of a lot faster than my fancy new i7-10700! I didn’t expect that.
Thanks for the help all