I’ve recently updated to Autopsy 4.19.1 on Linux, and I’ve noticed some changes regarding artifact types, namely the two categories: analysis results and data artifacts.
This runs without error and inserts data into the blackboard_artifacts and blackboard_attributes tables of the autopsy.db for the current case, but the UI is not updated to display the new results. I have to close and reopen the case to see the results in the UI.
Am I missing something? As far as I can tell from the source code, there should be an event fired so that the UI updates. Has anybody else dealt with this?
Hello, @amadan. Can you tell me which artifact types are not getting refreshed? Here at Basis Technology we are looking for bugs related to the data artifacts / analysis results refinement to fix in the next release, so further details would be helpful, thanks!
Hello, @Richard_Cordovano. Thank you for your willingness to talk and sorry for the belated reply.
I’m using a mix of artifact types already defined in Autopsy and custom data artifacts: roughly 8 from the artifact catalog and 67 custom ones. None of them are showing up, and I have not noticed any errors in the log files or in the bottom right corner of the main GUI.
If there are extra verbosity settings I can enable, or particular usage patterns of the API that are known to be problematic and that I should be aware of, please let me know. I’d like to get to the bottom of this, and if that has the added benefit of making Autopsy a little bit better for everybody, then all the more reason to look into it.
@amadan, we are not seeing this issue in routine use by the Autopsy developers. I am going to write up a ticket in our internal bug tracking system to see if we can reproduce it by adding some additional custom artifacts into the mix beyond what we routinely run with. I’ll reference this thread in the bug report, so we’ll provide an update when we have one.
@Richard_Cordovano turned the investigation of this issue over to me. I have tried a few scenarios in an attempt to reproduce this problem, and I have discussed several possibilities with @Richard_Cordovano and @carrier. We haven’t come to any definitive conclusions at this point. Would you be able to provide us with thread dumps? A thread dump can be taken by going to Help > Thread Dump, which generates a text file. Two thread dumps taken while the ingest module is running, preferably 5 minutes apart so we can see what has changed, would be very helpful. Also, would you be willing to run a special version of the code that writes diagnostic log messages to a dedicated log file, and send us that file? The thread dumps and the log file should contain only diagnostic information, but of course you are welcome to confirm that there is no sensitive information and redact as necessary. Thank you either way.
Thank you for looking into this. I have run the ingest module again, doing a thread dump 5 minutes after the ingest job started, 10 minutes after, and 12 minutes after (when the ingest job finished). Please let me know if these log files are useful.
I believe I have reached a conclusion regarding the possible cause of this error. I will give some more context about the module I am working on so that things may be clearer to anybody reading this.
The module I am working on reads a configuration file in which each line is a SQL LIKE pattern describing a path. For example, /Documents/%.png covers PNG files that may reside in /home/user/Documents/. The idea was to go over the files in the case and only parse the ones I had written a parser for, i.e., the ones mentioned in the configuration file.
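For readers unfamiliar with the format, here is a minimal sketch of reading such a configuration file. The class and method names are hypothetical and not part of my module, and skipping blank lines and '#' comments is my own assumption for robustness, not a requirement of the format:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of reading the configuration described above: one SQL LIKE path
// pattern per line, e.g. "/Documents/%.png". Skipping blank lines and '#'
// comments is an assumption added here, not part of the original format.
public class PatternConfig {

    static List<String> parse(List<String> lines) {
        List<String> patterns = new ArrayList<>();
        for (String line : lines) {
            String trimmed = line.trim();
            if (trimmed.isEmpty() || trimmed.startsWith("#")) {
                continue; // not a pattern line
            }
            patterns.add(trimmed);
        }
        return patterns;
    }
}
```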
Initially, I went the route of a data source ingest module. The process() method would read the lines in the configuration file and, for each target pattern, use org.sleuthkit.autopsy.casemodule.services.FileManager.findFiles(fileName, parentSubString) to identify the files matching that pattern, then proceed to parse them in a for loop. This was the state of the module when I created this topic asking for help.
Currently, I have rewritten the module as a file ingest module. This means reading the configuration file in the startUp method and then checking the AbstractFile passed to the process method against the patterns. If the file matches a pattern, then it is processed accordingly.
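The startUp/process split can be sketched generically, without any Autopsy types (all names below are hypothetical): compile the patterns once at startup, then test each file path as it arrives.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.regex.Pattern;

// Generic sketch of the file-ingest-module shape described above, using only
// plain Java. In the real module, the constructor work would happen in
// startUp() and shouldParse() would be called from process() on each file.
public class PathFilter {
    private final List<Pattern> patterns = new ArrayList<>();

    // Analogous to startUp(): compile each configured regex exactly once.
    public PathFilter(List<String> regexes) {
        for (String r : regexes) {
            patterns.add(Pattern.compile(r));
        }
    }

    // Analogous to process(): decide whether this file should be parsed.
    public boolean shouldParse(String path) {
        for (Pattern p : patterns) {
            if (p.matcher(path).matches()) {
                return true;
            }
        }
        return false;
    }
}
```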
Now artifacts are displayed properly, and the time it takes to parse the same data source is down to roughly 3 minutes from 12 minutes. From what I gather from the documentation, this is because Autopsy can spawn more worker tasks for file ingest modules than for data source ingest modules. In hindsight, it makes more sense to use a file ingest module for the kind of parsing I am interested in. I still have some bugs because I now have to use Java regexes instead of SQL wildcards, and I am getting quite a lot of “Attempt to make artifact event duplicate” messages, but these are separate issues.
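To illustrate the wildcard-to-regex bugs I mean: a naive conversion that only rewrites % leaves regex metacharacters such as the dot active, so the Java pattern matches more than the SQL LIKE pattern would. A hypothetical sketch of the pitfall and a safer version:

```java
import java.util.regex.Pattern;

// Hypothetical illustration of the SQL-wildcard-to-regex pitfall mentioned
// above; these helpers are a sketch, not the actual module code.
public class LikeConversion {

    // Naive conversion: only rewrites '%', leaving '.' and other regex
    // metacharacters active, so ".png" also matches e.g. "Xpng".
    static String naive(String like) {
        return ".*" + like.replace("%", ".*");
    }

    // Safer conversion: quote every literal character and translate only the
    // SQL wildcards '%' (any run of characters) and '_' (single character).
    static String quoted(String like) {
        StringBuilder sb = new StringBuilder(".*");
        for (char c : like.toCharArray()) {
            if (c == '%') sb.append(".*");
            else if (c == '_') sb.append('.');
            else sb.append(Pattern.quote(String.valueOf(c)));
        }
        return sb.toString();
    }
}
```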
I wrote this post in case anybody else faces this issue. I do not understand why the change from a data source ingest module to a file ingest module solves the problem, but hopefully @Richard_Cordovano, @apriestman, @gdicristofaro, or the other people working on Autopsy may have more insight. Thank you for your willingness to help!
Thanks for the additional information. That certainly helps. I have a few questions if you have a moment to answer.
It looks like the thread dumps may have been cut off. At the bottom, I see “C3P0PooledConnectionPoolManager[identityToken->z8kfsxakdufqv61k4d7cy|3709338f]-HelperThread-” as opposed to a full stack trace. Would you happen to have more than that in the thread dumps?
When you are running your ingest module, are you seeing progress in the bottom right of the Autopsy window?
In your ingest module, is there the possibility of some sort of long-running database transaction or case lock?
Would you be able to provide us with your logs from an ingest? The logs can be gathered from Help > Open Log Folder and then the file should be “autopsy.log.0”. If necessary, a redacted log and/or just the exceptions would also be helpful.
Can you try downloading them from here, please? These are the files as they were generated by Autopsy. The autopsy.log.0 file has been slightly redacted, but there were no exceptions.
I see progress up to 2% and it remains stuck there until the whole module is done processing and I get an ingest message indicating so.
I don’t think so. Some of the files I am parsing generate around 4,000 artifacts, but I am adding them one at a time with org.sleuthkit.datamodel.Blackboard.postArtifact(BlackboardArtifact artifact, String moduleName). I know batching them would be more efficient, but I wanted to see progress at a finer granularity.
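For reference, the batching I have in mind could look like the generic sketch below (plain Java with hypothetical names, not the Autopsy API): collect items and flush them in groups through a single callback, which in an ingest module could post a whole list of artifacts at once rather than one at a time.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.Consumer;

// Generic batching sketch: buffer items and flush them in groups of
// `batchSize` through one callback. In an ingest module, the callback could
// post the whole batch of artifacts in a single call.
public class Batcher<T> {
    private final int batchSize;
    private final Consumer<List<T>> flushAction;
    private final List<T> buffer = new ArrayList<>();

    public Batcher(int batchSize, Consumer<List<T>> flushAction) {
        this.batchSize = batchSize;
        this.flushAction = flushAction;
    }

    public void add(T item) {
        buffer.add(item);
        if (buffer.size() >= batchSize) {
            flush();
        }
    }

    // Flush any remaining items; call once when processing ends.
    public void flush() {
        if (!buffer.isEmpty()) {
            flushAction.accept(new ArrayList<>(buffer));
            buffer.clear();
        }
    }
}
```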
I created an installer based on the upcoming release branches (Autopsy: 4.19.2, TSK: 4.11.1) with some additional logging of events in the user interface. You can download the installer and sleuthkit jar here: 8019download - Google Drive, or you can build from my branches here: GitHub - gdicristofaro/sleuthkit at 8019-loggingBranch and here: GitHub - gdicristofaro/autopsy at 8019-loggingBranch. If you would be willing, could you run this on your machine and, if the problem persists, provide me with the logs? This will likely generate large logs and cause some slowdown due to file IO. I included logging of data so that I could track the beginning and end of handling per event, but I suspect the issue will most likely be logged as an exception. Redacted logs would be wonderful; if that is too much effort, any logged exceptions would also be helpful. Thanks once again for your time.
Library not found in jar (libtsk_jni)
SleuthkitJNI: failed to load libtsk_jni
Looking at the contents of the JAR file, it seems that there are native libs only for Windows on x86_64 and AMD64. Would you be so kind as to provide an updated JAR?
My mistake. Could you try running this script here to install the sleuthkit release: https://raw.githubusercontent.com/gdicristofaro/autopsy/7413-unixScripts/installation/scripts/install_tsk_from_src.sh. It can be called like install_tsk_from_src.sh -r <repo_parent_folder>/sleuthkit -b release-4.11.1. If you have a sleuthkit repo already, I would recommend using a different path for the script. When it completes, it should show you the currently installed sleuthkit jars and sleuthkit-4.11.1.jar should be in there. You should be able to run unix_setup.sh again and that should fix that error.
Ok. I updated the script: https://raw.githubusercontent.com/gdicristofaro/autopsy/7413-unixScripts/installation/scripts/install_tsk_from_src.sh. I think you can call install_tsk_from_src.sh -p <path>/sleuthkit -b 8019-loggingBranch -r https://github.com/gdicristofaro/sleuthkit.git to install the modified sleuthkit jar. If you already have a sleuthkit repo, I would recommend using a different path for the script. Then, when that script completes, you should see sleuthkit-4.11.1.jar as one of the jar files present. At that point, you should be able to run unix_setup.sh again to install this version of sleuthkit and fix the error. Thank you once again for your time.
Ok, I’ve uploaded the slightly redacted log file here. Please let me know if it provides any other useful information and thank you for looking into this.
Thank you for the log. Would you also be able to provide me with the relevant message.log file and its last modified timestamp? For me, it is in .autopsy/dev/var/log in my home directory (so /home/greg/.autopsy/dev/var/log in my case).
Also, did you still encounter the same problem with this code? When it finished ingest, did you see an update in the UI by any chance?
Thank you for the log, and sorry there hasn’t been any improvement. If it isn’t too much trouble, could you also provide me with the relevant autopsy.log from that .autopsy/dev/var/log folder? Also, going through the log, I noticed there were some factories that couldn’t be loaded. I don’t think the lack of these modules is the source of the issue, but did you happen to remove or disable the Recent Activity, Keyword Search, or Email Parser modules?