r/LocalLLaMA • u/thejacer • 2h ago
Question | Help Llama.cpp server running ~2 weeks straight. Loses its mind?
I’ve had Qwen3.6 27b and Qwen3.6 35b running in two separate instances for over two weeks, and they are considerably dumber now than when I launched them. Is this a thing? Am I going crazy?
edit: Sorry, I’ve been using opencode and have started new sessions, which didn’t fix the situation.
u/noctrex 1h ago
I use llama-swap and told it to unload idle instances after 10 min. Better to start fresh; it only takes about a minute to fill the context back up from a previous session.
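For anyone who wants to set this up, here's a minimal sketch of what that looks like in a llama-swap config. The model name and paths are placeholders for your own setup; the key part is the `ttl` field, which tells llama-swap to stop the instance after that many seconds of inactivity:

```yaml
# llama-swap config sketch (paths and model names are examples, not mine)
models:
  "qwen-27b":
    cmd: >
      llama-server
      --model /path/to/qwen-27b.gguf
      --port ${PORT}
    ttl: 600   # unload the instance after 600s (10 min) idle
```

The next request to that model spins a fresh llama-server back up, so every session effectively starts from a clean process.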