Automate Module Ingest

I have several questions below.

Regarding Autopsy 4.12 on Windows 10, single user, single server:
I frequently need to run several modules in a certain order when starting a new case. Is there a way to script/automate running modules?

I see in the documentation for multi-user cases that a multi-user setup has some auto ingest capabilities, but I have no need for a multi-user setup. Please let me know if you think there would be a performance boost from using a multi-user setup on a single server (no share over the network).

I did look for command line parameters for autopsy64.exe, and if I read everything correctly, the parameters apply to a new case only, not an existing case. Am I correct?

Please also note:
My cases involve several images in a single case (5 - 20 usually), so I would also like to specify which image each module runs against. (My testing shows that running a single module across several images at a time performs better than running several modules on a single image.)

The ability to select the order in which modules are run is important. Example use case: I need to run Embedded File Extraction before Hash Lookup so that the hash lookup is applied to any files within an archive.

Thanks for your time,

It sounds like what you want to do is load all your images into the case, and then run one ingest module at a time on every image. I don’t think there’s any way to automate this. Auto-ingest isn’t going to help - it adds an image at a time and runs all the ingest modules you’ve selected on the new data source. There’s no way to then run different ingest modules apart from opening the case and doing it manually. Command line ingest isn’t going to work either - it only runs ingest on a single data source at a time and you have to open Autopsy to configure which ingest modules will be run.
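For concreteness, here is a sketch of what driving command line ingest from a script looks like. The flag names follow the Autopsy "Command Line Ingest" documentation for newer releases (this mode may require a newer version than 4.12, and flag spellings can vary between releases); the install path and image paths are made-up examples:

```python
# Hedged sketch: building an Autopsy command line ingest invocation.
# Flag names are from the Autopsy "Command Line Ingest" docs and may
# differ by version; the ingest profile itself is still configured in
# the GUI (Tools -> Options), as noted above.
import subprocess

AUTOPSY = r"C:\Program Files\Autopsy\bin\autopsy64.exe"  # example path

def build_ingest_command(case_name, case_dir, image_path):
    """Create a case, add ONE data source, and run the configured
    ingest profile on it."""
    return [
        AUTOPSY,
        "--createCase",
        "--caseName=" + case_name,
        "--caseBaseDir=" + case_dir,
        "--addDataSource",
        "--dataSourcePath=" + image_path,
        "--runIngest",
    ]

cmd = build_ingest_command("MyCase", r"C:\Cases", r"C:\Images\disk1.E01")
# subprocess.run(cmd, check=True)  # uncomment on a machine with Autopsy
```

Note the limitation described above: each invocation handles a single data source, so per-image, per-module ordering still isn't expressible this way.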

I suspect you’re only mentioning order because it matters for your use case (where you’d run everything separately), but I just want to make sure you know that if you run both hash and embedded file extractor at the same time, the extracted files will be hashed and sent through any other enabled ingest modules.


Cool, I wasn’t sure if the ingest engine was smart enough to recognize that one module has produced more data and go back and address it, because I know the documentation for some modules mentions running a prerequisite first. That’s a big thumbs up to the developers!

@apriestman Just to make sure: would I expect the same for a third-party module? For example, I like to run several third-party modules such as Plaso from @Mark_McKinnon before running my keyword searches (to ensure the keyword search runs against everything).

Yes, we’ve got what we call the “ingest pipeline,” which runs all the file-type ingest modules on each file. There’s generally no guaranteed order among the ingest modules run on a particular file, though I believe we force file type identification to run first since other modules use its results. Any new files created during ingest should also go through this pipeline, though it’s up to the module writer to make that happen. I suspect Mark’s got it covered for his modules. 🙂
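For module writers wondering what “up to the module writer” means in practice: in the Autopsy module API the hook is `IngestJobContext.addFilesToJob`. A non-runnable Jython sketch of the pattern (this only executes inside Autopsy, and the usual module boilerplate and `addDerivedFile` arguments are omitted):

```python
# Sketch: sending newly created files back through the ingest pipeline.
# Class and method names are from the Autopsy Python (Jython) module API;
# this cannot run outside the Autopsy runtime.
from org.sleuthkit.autopsy.casemodule import Case
from org.sleuthkit.autopsy.ingest import IngestModule, FileIngestModule

class MyFileIngestModule(FileIngestModule):   # boilerplate omitted
    def process(self, file):
        # ... extract or carve something from 'file' ...
        fm = Case.getCurrentCase().getServices().getFileManager()
        derived = fm.addDerivedFile(...)  # arguments omitted for brevity
        # The step that matters: hand the derived file back to Autopsy so
        # every enabled file-level ingest module runs on it too.
        self.context.addFilesToJob([derived])
        return IngestModule.ProcessResult.OK
```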


I believe most of my modules should index the artifacts as they are added to the case. If you find one that does not, let me know and I can fix it so it does.
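For anyone writing their own module, “indexing artifacts as they are added” corresponds to posting the artifact rather than only creating it. A hedged Jython sketch (only runs inside Autopsy; newer releases use `Blackboard.postArtifact`, older ones `Blackboard.indexArtifact`, and the artifact/attribute types here are illustrative):

```python
# Sketch: posting a blackboard artifact so keyword search can index it
# and the UI is notified. Runs only inside the Autopsy runtime.
from org.sleuthkit.autopsy.casemodule import Case
from org.sleuthkit.datamodel import BlackboardArtifact, BlackboardAttribute

MODULE_NAME = "My Example Module"  # hypothetical module name

art = file.newArtifact(BlackboardArtifact.ARTIFACT_TYPE.TSK_INTERESTING_FILE_HIT)
art.addAttribute(BlackboardAttribute(
    BlackboardAttribute.ATTRIBUTE_TYPE.TSK_SET_NAME, MODULE_NAME, "My Set"))

blackboard = Case.getCurrentCase().getSleuthkitCase().getBlackboard()
blackboard.postArtifact(art, MODULE_NAME)  # indexes and fires module events
```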
