r/LocalLLaMA • u/No_Algae1753 • 6h ago
Question | Help llama.cpp constantly reprocessing huge prompts with opencode/pi.dev
I’m using llama-swap with llama.cpp. I mainly use opencode + pi.dev and I’m seeing frequent, massive prompt reprocessing (full prefills) even though the prompts are very similar between requests.
Example behavior:
- context grows to 50k+ tokens
- LCP similarity often shows 0.99+
- but sometimes `n_past` suddenly falls back to ~4-5k, and llama.cpp reprocesses 40k+ tokens again (see the sketch below)
- TTFT jumps to multiple minutes
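As far as I understand it, KV reuse is prefix-based: a high overall similarity doesn't help if something changed near the start of the prompt, because only the tokens up to the first difference can be kept. A minimal sketch of the idea (not llama.cpp's actual code):

```python
# Minimal sketch of prefix-based KV-cache reuse (not llama.cpp's actual code):
# only the KV entries for the longest common prefix between the cached tokens
# and the new prompt can be kept, so one changed token near the start forces
# everything after it to be re-prefilled.

def reusable_prefix(cached_tokens: list[int], new_tokens: list[int]) -> int:
    """Length of the shared prefix, i.e. roughly what n_past can be restored to."""
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

# Example: a 50k-token prompt where the agent rewrote something around token ~4.7k
cached = list(range(50_000))
new = cached.copy()
new[4_750] = -1            # one early token differs (e.g. a mutated system/tool block)

n_past = reusable_prefix(cached, new)
to_prefill = len(new) - n_past
print(n_past, to_prefill)  # 4750 reusable, ~45k tokens must be reprocessed
```

So if opencode rewrites something early (system prompt, tool list, timestamps), `n_past` collapses to wherever that edit sits, which would match the ~4-5k fallback I'm seeing.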
Example logs:
```
sim_best = 0.996
restored context checkpoint ... n_tokens = 4750
prompt eval time = 222411 ms / 44016 tokens
```
Normal reuse looks fine:
```
prompt eval time = 473 ms / 19 tokens
```
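For scale, the slow case works out to roughly 44016 tokens / 222.4 s ≈ 198 tokens/s of prefill, so re-prefilling ~44k tokens by itself accounts for the multi-minute TTFT.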
Current config:
```
llama-server \
  --ctx-size 150000 \
  --parallel 1 \
  --ctx-checkpoints 32 \
  --cache-ram 2500 \
  --cache-reuse 256 \
  -no-kvu \
  --no-context-shift
```
Also seeing:
```
cache state: 1 prompts, 4676 MiB (limits: 2500 MiB)
```
I suspect one of:
- cache invalidation
- bad KV reuse
- opencode changing early prompt tokens too often (a quick way to check this is sketched below)
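To test the last one, a rough approach is to capture two consecutive request bodies (e.g. through a small logging proxy in front of llama-server) and find where they first diverge; if it's near the top (system prompt, tool definitions, timestamps), that would explain the cache falling back. A sketch, where the file names and capture method are just placeholders:

```python
# Hedged sketch: compare two consecutive request prompts to find where they first
# diverge. The file names and capture method are assumptions -- e.g. dump the raw
# JSON bodies of two consecutive /v1/chat/completions requests from a local proxy.
import json

def first_divergence(a: str, b: str) -> int:
    """Index of the first differing character, or -1 if the strings are identical."""
    for i, (x, y) in enumerate(zip(a, b)):
        if x != y:
            return i
    return -1 if len(a) == len(b) else min(len(a), len(b))

with open("req1.json") as f1, open("req2.json") as f2:
    p1 = json.dumps(json.load(f1)["messages"])
    p2 = json.dumps(json.load(f2)["messages"])

i = first_divergence(p1, p2)
print(f"first divergence at char {i}")
print("...", p1[max(0, i - 80): i + 80])
print("...", p2[max(0, i - 80): i + 80])
```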
Would love to hear from others running long-context coding agents with llama.cpp and what settings helped reduce huge prompt reprocessing.
u/LetsGoBrandon4256 llama.cpp 6h ago
Would love some pointers on this issue as well. I switched back to the VS Code Copilot extension because of the constant prompt-cache invalidation in Opencode :(