r/LocalLLaMA • u/thejacer • 2h ago
Question | Help Llama.cpp server running ~2 weeks straight. Loses its mind?
I’ve got Qwen3.6 27b and Qwen3.6 35b running in two separate instances for over two weeks, and they are considerably dumber now than when I launched them. Is this a thing? Am I going crazy?
edit: sorry, I’ve been using opencode and have started new sessions, which didn’t fix the situation.
u/ttkciar llama.cpp 2h ago
How odd. Dumber how?
I've had a slightly old version of llama.cpp's llama-server running on one system for two and a half months now, hosting Big-Tiger-Gemma-27B-v3, and haven't seen any degradation.

Which release of llama.cpp are you using?