Hi all,
I have been using Ollama (local LLM) + Claude Code + the Autopsy MCP server for a long time, and it is really slow and often unresponsive. So I tried integrating it into LM Studio instead (Developer → Local Server → edit mcp.json), using the JSON shown in Autopsy's MCP options, and then started a chat with the MCP Server Autopsy tool activated. Even with a local model like gpt-oss:20b it works great and is very fast on my Acemagic F5A MiniPC (64 GB DDR5 RAM, Ryzen AI 9 HX, Radeon 890M with 32 GB of RAM dedicated). Just for your information: maybe the dedicated MCP server page on the website could also cover LM Studio as a client, instead of only Claude Code, or document both. Just adding this to your great work! Thanks
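In case it helps anyone reproduce this: LM Studio's mcp.json uses the common "mcpServers" schema. Below is a minimal sketch, assuming Autopsy exposes its MCP server over HTTP on a local port; the server name, host, port, and path are placeholders, so paste the exact JSON from Autopsy's MCP options panel rather than these values (Autopsy may give you a stdio "command" entry instead of a "url"):

```json
{
  "mcpServers": {
    "autopsy": {
      "url": "http://localhost:9300/mcp"
    }
  }
}
```

After saving mcp.json, the Autopsy tool should show up as a toggle in the chat view, and the loaded model can call its tools once it is switched on.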
Hi Nanni,
I have been testing this out with Ollama + Jan and I am getting ok results with it. One of the models I have been testing with is qwen3.5:35b. I will try the gpt-oss model you are using to see how that one works.
Mark
I tested with Qwen 3.5:9b; it seems better than gpt-oss:20b for the Autopsy MCP server.
@Nanni_Bassetti and @Mark_McKinnon Do you have any resources for getting started with MCP clients and Ollama? I’ve been meaning to try it out.
Hi Brian,
as I wrote before, I did it using Ollama with the command ollama launch claude and Autopsy running its MCP server, but it is slow. It is faster using LM Studio with a local LLM; you can configure LM Studio to use the Autopsy MCP server, as in the sketch below. Let me know what else you would like to know.
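If you would rather stay with Claude Code, the same "mcpServers" schema should work in a project-level .mcp.json in the directory where you launch it. A minimal sketch under the same assumption of an HTTP endpoint; the "autopsy" name, the transport type, and the URL are placeholders, so use whatever Autopsy's MCP options actually show:

```json
{
  "mcpServers": {
    "autopsy": {
      "type": "http",
      "url": "http://localhost:9300/mcp"
    }
  }
}
```

Either way, the JSON that Autopsy displays in its MCP options is the authoritative version; these snippets only show where it goes.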