I’ve been developing a web interface for Autopsy, and I just finished building my own directory tree, but it has some differences from the tree Autopsy presents (mainly because I can’t tell directories apart from files).
I did look at the source code for both the “directorytree” and “datamodel” packages, but they seemed so confusing and so deeply tied to the UI that I had to come up with a different approach to providing the ingested data to my front-end client.
What would be the best way to tell whether a “child” is a directory? Being able to differentiate archives would also be nice.
Thanks
Assuming you have an AbstractFile object (or one of the many classes that inherit from it), you should be able to call isDir() to see if it’s a directory.
For archives you could use the FileTypeExtensions class.
String ext = file.getNameExtension();
if (FileTypeExtensions.getArchiveExtensions().contains(ext)) {
    // It's an archive.
}
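Putting both answers together, here is a self-contained sketch of the extension check. The extension list and getNameExtension() below are stand-ins so the snippet runs on its own; in Autopsy you would use the real FileTypeExtensions.getArchiveExtensions() and AbstractFile.getNameExtension() instead.

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Locale;
import java.util.Set;

public class ArchiveCheck {
    // Illustrative list only; the real one comes from
    // FileTypeExtensions.getArchiveExtensions() in Autopsy.
    private static final Set<String> ARCHIVE_EXTENSIONS =
            new HashSet<>(Arrays.asList("zip", "rar", "7z", "tar", "gz"));

    // Stand-in for AbstractFile.getNameExtension(): the text after the
    // last dot, lower-cased, or "" when there is no extension.
    static String getNameExtension(String fileName) {
        int dot = fileName.lastIndexOf('.');
        return (dot < 0 || dot == fileName.length() - 1)
                ? "" : fileName.substring(dot + 1).toLowerCase(Locale.ROOT);
    }

    static boolean isArchive(String fileName) {
        return ARCHIVE_EXTENSIONS.contains(getNameExtension(fileName));
    }
}
```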
Another question, sorry for bothering…
I wrote a function to query only 100 IDs at a time, but after running it a few times the server seems to get stuck in a loop (probably caused by a read lock on the skCase instance…)
What would be the correct approach to write a custom query? Here’s what I was doing:
private static List<Long> getChildrenIdsByParentId(SleuthkitCase skCase, String id, String lastId) throws TskCoreException {
    SleuthkitCase.CaseDbQuery caseDbQuery = skCase.executeQuery(
            "SELECT obj_id FROM tsk_objects WHERE par_obj_id = " + id + " AND obj_id > " + lastId + " LIMIT 100");
    ResultSet resultSet = caseDbQuery.getResultSet();
    List<Long> ids = new ArrayList<>();
    try {
        while (resultSet.next()) {
            ids.add(resultSet.getLong(1));
        }
        resultSet.close();
    } catch (SQLException e) {
        e.printStackTrace();
    }
    return ids;
}
Thanks
I believe the problem is that you’re not closing the query; either use a try-with-resources or close it manually. I don’t think you need to close the ResultSet, since that should be taken care of when you close the CaseDbQuery, though you can also put it in the try-with-resources. A few examples:
try (SleuthkitCase.CaseDbQuery query = Case.getCurrentCaseThrows().getSleuthkitCase().executeQuery(
        "SELECT time_zone FROM data_source_info WHERE obj_id = " + this.content.getId())) {
    ResultSet timeZoneSet = query.getResultSet();
    if (timeZoneSet.next()) {
        sheetSet.put(new NodeProperty<>(Bundle.VirtualDirectoryNode_createSheet_timezone_name(),
                Bundle.VirtualDirectoryNode_createSheet_timezone_displayName(),
                Bundle.VirtualDirectoryNode_createSheet_timezone_desc(),
                timeZoneSet.getString("time_zone")));
    }
} catch (SQLException | TskCoreException | NoCurrentCaseException ex) {
    logger.log(Level.SEVERE, "Failed to get time zone for the following image: " + this.content.getId(), ex);
}
try (SleuthkitCase.CaseDbQuery dbQuery = skCase.executeQuery(query)) {
    ResultSet resultSet = dbQuery.getResultSet();
    while (resultSet.next()) {
        final String mime_type = resultSet.getString("mime_type"); //NON-NLS
        if (!mime_type.isEmpty()) {
            // If the mime_type contained multiple slashes, everything after the first slash becomes the subtype.
            final String mediaType = StringUtils.substringBefore(mime_type, "/");
            final String subType = StringUtils.removeStart(mime_type, mediaType + "/");
            if (!mediaType.isEmpty() && !subType.isEmpty()) {
                final long count = resultSet.getLong("count");
                existingMimeTypeCounts.computeIfAbsent(mediaType, t -> new HashMap<>())
                        .put(subType, count);
            }
        }
    }
} catch (TskCoreException | SQLException ex) {
    logger.log(Level.SEVERE, "Unable to populate File Types by MIME Type tree view from DB: ", ex); //NON-NLS
}
One more question, sorry for bothering once more.
I’m having trouble differentiating the different kinds of results (blackboard artifacts).
I can get the structure for Extracted Content, but for everything else, like Keyword Hits or Interesting Items, I can’t figure out how to categorize the children (such as Email Addresses and IP Addresses inside Keyword Hits).
Does the Sleuth Kit include this kind of differentiation, or is it specific to Autopsy? How could I differentiate these entries?
Thanks
You’re probably going to need some custom code for those types of nodes. The category will generally be in the TSK_SET_NAME attribute. You can look at the file that builds the Interesting Item nodes:
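To illustrate the bucketing: the keyword list name (“Email Addresses”, “IP Addresses”, and so on) is stored as the TSK_SET_NAME attribute of each artifact, so the tree is built by grouping artifacts on that attribute’s value. Here is a self-contained sketch of the grouping step, with the attribute lookup replaced by a plain list of names (in real code you would read the value off each BlackboardArtifact):

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class SetNameGrouping {
    // Buckets artifact ids by set name. setNames.get(i) stands in for the
    // TSK_SET_NAME attribute value of the artifact with id ids.get(i).
    static Map<String, List<Long>> groupBySetName(List<Long> ids, List<String> setNames) {
        Map<String, List<Long>> groups = new LinkedHashMap<>();
        for (int i = 0; i < ids.size(); i++) {
            groups.computeIfAbsent(setNames.get(i), k -> new ArrayList<>())
                    .add(ids.get(i));
        }
        return groups;
    }

    // Convenience builders for testing.
    static List<Long> longs(long... vals) {
        List<Long> out = new ArrayList<>();
        for (long v : vals) out.add(v);
        return out;
    }

    static List<String> names(String... vals) {
        List<String> out = new ArrayList<>();
        for (String v : vals) out.add(v);
        return out;
    }
}
```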
I’m now trying to replicate the “Data Content Viewers”. Assuming I have either a Content or an AbstractFile instance, how can I access the different pieces of information Autopsy presents, such as hex data and file metadata?
I’ve looked at org.sleuthkit.autopsy.corecomponents.DataContentViewerHex and can’t figure out where the hex data comes from.
Thanks
In the hex viewer you’re looking for the setDataView() method. For metadata, it’s the setNode() method in Metadata.java.
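For context, the bytes themselves come from reading the file’s content (the Sleuth Kit’s AbstractFile has a read() method for this); the viewer then renders them as offset, hex bytes, and an ASCII column. Here is a self-contained sketch of that formatting step, taking a byte[] so it runs without a case open:

```java
public class HexDump {
    // Formats bytes the way a hex viewer does: offset, 16 hex bytes, ASCII.
    // In Autopsy the bytes would come from AbstractFile.read().
    static String hexDump(byte[] data, long baseOffset) {
        StringBuilder sb = new StringBuilder();
        for (int i = 0; i < data.length; i += 16) {
            sb.append(String.format("0x%08x: ", baseOffset + i));
            StringBuilder ascii = new StringBuilder();
            for (int j = 0; j < 16; j++) {
                if (i + j < data.length) {
                    int b = data[i + j] & 0xFF;
                    sb.append(String.format("%02x ", b));
                    // Printable ASCII as-is, everything else as a dot.
                    ascii.append(b >= 0x20 && b < 0x7F ? (char) b : '.');
                } else {
                    sb.append("   "); // pad short final row
                }
            }
            sb.append(' ').append(ascii).append('\n');
        }
        return sb.toString();
    }
}
```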
Thanks for all your help so far. I’ve finished all the content viewers except for Application and Other Occurrences.
When looking at org.sleuthkit.autopsy.centralrepository.contentviewers.OccurrencePanel, I can’t quite understand when this component has information to render. I’ve also looked through my cases and can’t find an artifact that populates this panel. From my understanding it’s related to artifacts present in multiple cases, but I’ve ingested the same source into two cases and can’t find a single artifact with other occurrences in either case.
As for the Application content viewer, that seems to be a tricky one. I haven’t looked too deeply into its source code, but I should have all the information I need. The only issue is that my Autopsy build crashes when viewing the Application tab on images or HTML files; it might also happen for other artifacts (the SQLite browser works flawlessly).
Here’s the log output from one of the crashes:
INFO: Mimetype not known for file: index.html
#
# A fatal error has been detected by the Java Runtime Environment:
#
# SIGSEGV (0xb) at pc=0x00007f6065fba290, pid=13861, tid=0x00007f6061142700
#
# JRE version: Java(TM) SE Runtime Environment (8.0_221-b11) (build 1.8.0_221-b11)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (25.221-b11 mixed mode linux-amd64 compressed oops)
# Problematic frame:
# C [libgtk-x11-2.0.so.0+0x1bb290]
#
# Failed to write core dump. Core dumps have been disabled. To enable core dumping, try "ulimit -c unlimited" before starting Java again
#
# An error report file with more information is saved as:
# /home/pedroferreira/Autopsy/autopsy/hs_err_pid13861.log
(java:13861): GLib-GObject-CRITICAL **: 15:56:34.664: g_param_spec_internal: assertion 'G_TYPE_IS_PARAM (param_type) && param_type != G_TYPE_PARAM' failed
#
# If you would like to submit a bug report, please visit:
# http://bugreport.java.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.
#
/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/nbexec: line 470: 13861 Aborted (core dumped) "/usr/java/jdk1.8.0_221-amd64/bin/java" -Djdk.home="/usr/java/jdk1.8.0_221-amd64" -classpath "/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/boot.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/org-openide-modules.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/org-openide-util.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/org-openide-util-lookup.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/org-openide-util-ui.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/boot_ja.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/boot_pt_BR.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/boot_ru.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/boot_zh_CN.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-modules_ja.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-modules_pt_BR.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-modules_ru.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-modules_zh_CN.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util_ja.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-lookup_ja.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-lookup_pt_BR.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-lookup_ru.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-lookup_zh_CN.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/l
ocale/org-openide-util_pt_BR.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util_ru.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-ui_ja.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-ui_pt_BR.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-ui_ru.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util-ui_zh_CN.jar:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform/lib/locale/org-openide-util_zh_CN.jar:/usr/java/jdk1.8.0_221-amd64/lib/dt.jar:/usr/java/jdk1.8.0_221-amd64/lib/tools.jar" -Dnetbeans.dirs="/home/pedroferreira/Autopsy/autopsy/build/cluster:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/harness:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/java:/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform" -Dnetbeans.home="/home/pedroferreira/Autopsy/autopsy/netbeans-plat/8.2/platform" '-Dnetbeans.logger.console=true' '-ea' '-Xms24m' '-Xmx4g' '-XX:MaxPermSize=128M' '-Xverify:none' '-XX:+UseG1GC' '-XX:+UseStringDeduplication' -DaddExports:java.desktop/sun.awt=ALL-UNNAMED -DaddExports:java.base/jdk.internal.jrtfs=ALL-UNNAMED -DaddExports:java.desktop/java.awt.peer=ALL-UNNAMED -DaddExports:java.desktop/com.sun.beans.editors=ALL-UNNAMED -DaddExports:java.desktop/sun.awt.im=ALL-UNNAMED -DaddExports:java.desktop/com.sun.java.swing.plaf.gtk=ALL-UNNAMED -DaddExports:java.management/sun.management=ALL-UNNAMED -XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath="/home/pedroferreira/Autopsy/autopsy/build/testuserdir/var/log/heapdump.hprof" org.netbeans.Main --cachedir "/home/pedroferreira/Autopsy/autopsy/build/testuserdir/var/cache" --userdir "/home/pedroferreira/Autopsy/autopsy/build/testuserdir" "--branding" "autopsy" 0<&0
Result: 134
I’m hoping this is caused by a missing library and that I can fix it, but if not I’ll probably open an issue on GitHub. I’m running CentOS 7, and I understand Autopsy isn’t fully supported on Linux.
There’s information on the Central Repo/Other Occurrences content viewer here:
http://sleuthkit.org/autopsy/docs/user-docs/4.13.0/central_repo_page.html
I don’t have any suggestions for the possible Linux library issue.
Hi, sorry to bother once more…
I’m now working on the keyword search service.
Where in the code is the service initialized?
In the Case class, when my code reaches the openAppServiceCaseResources() function, the lookup does not return any services (it returns plenty of services when I run Autopsy in debug mode).
Thanks
Hmm. I’m pretty sure openAppServiceCaseResources() is where everything is loaded. The line "for (AutopsyService service : Lookup.getDefault().lookupAll(AutopsyService.class)) {" should return all the classes that implement AutopsyService, including SolrSearchService (that’s the keyword search service). I’m not sure why it wouldn’t be finding it. Have you renamed or moved any classes?
The issue with the lookup is related to my framework (Quarkus): it can recompile at runtime, and after doing so the service seems to get destroyed.
I still have a question related to the previous one: how is the Solr endpoint started? Right now I can execute Solr queries, but I have to start Solr through Autopsy since my code isn’t starting the endpoint…
You can disregard my previous question as I’ve been able to start the service properly, thanks
Hey,
I’m having trouble figuring out how the progress is calculated for modules based on work units, can you give me a little help?
Thanks
I’m going to assume you’re talking about data source ingest modules. The general idea is that the progress can be broken up into a series of tasks. So suppose we had an ExtractBookmarks ingest module (really this is part of Recent Activity) that calls the following methods:
- extractBookmarksIE
- extractBookmarksFirefox
- extractBookmarksSafari
If it doesn’t do much else, we could say that there are three work units (one for each method). The time each takes doesn’t have to be equal; this is just a way to give some indication of how the processing is going. In this case, I believe we’d start by calling:
progressBar.switchToDeterminate(3);
That tells it there will be three total work units. Then we can update the progress bar with an approximate amount done and the current task:
progressBar.progress("Extracting bookmarks from IE", 0);
extractBookmarksIE();
progressBar.progress("Extracting bookmarks from Firefox", 1); // the bar will be at around 33% after this call
extractBookmarksFirefox();
progressBar.progress("Extracting bookmarks from Safari", 2); // the bar will be at around 67% after this call
extractBookmarksSafari();
progressBar.progress("Done", 3);
That might not be quite right but I’m pretty sure it’s the general idea. You can also update the work units complete and the message separately.
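If it helps to see the arithmetic: the displayed percentage is just completed work units over total work units. Below is a stand-in for the progress object (in Autopsy it’s a DataSourceIngestModuleProgress handed to the module) that returns the percentage so it can be checked; the real progress() returns void.

```java
public class ProgressDemo {
    private final int totalWorkUnits;

    ProgressDemo(int totalWorkUnits) {
        this.totalWorkUnits = totalWorkUnits; // set via switchToDeterminate(n) in Autopsy
    }

    // Mirrors progressBar.progress(message, unitsDone): the bar fraction
    // is simply unitsDone / totalWorkUnits, here returned as a percent.
    int progress(String message, int unitsDone) {
        System.out.println(message); // the status text shown next to the bar
        return (int) Math.round(100.0 * unitsDone / totalWorkUnits);
    }
}
```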
If this isn’t what you were asking or you have a more specific question let me know.
Yes, that’s exactly what I needed to know, except I have no idea where I can check the expected total work units for a module…
Edit: Okay, I found it. Your tip actually had all the info I needed, thanks
Hi once more, and happy new year,
Moving on to plugins: I got Jython plugins running fine, but I’m puzzled by .nbm plugins, since no code inside Autopsy’s source indicates any kind of interaction with these files; it all seems to be handled by NetBeans’ libraries.
Any pointers you can give me on achieving .nbm plugin installation and management, given that I can’t show the NetBeans plugin management interface to my client?
Thanks
Have you looked at Tools->Plugins?
Yes, I have. My issue is that I’m developing a web interface for Autopsy, and I wanted to replicate that plugin installation interface. The idea was to upload a .nbm module to the server and handle the installation in the background, but looking through Autopsy’s source there don’t seem to be any procedures related to .nbm plugin installation, so I just wanted to know if you could provide some more information about how Autopsy handles these installations.
Sorry if my first question wasn’t clear enough. Thanks