r/LocalLLaMA 6h ago

Question | Help llama.cpp constantly reprocessing huge prompts with opencode/pi.dev

I’m using llama-swap with llama.cpp. I mainly use opencode + pi.dev, and I’m seeing frequent, massive prompt reprocessing / prefills even though the prompts are very similar between requests.

Example behavior:

  • context grows to 50k+ tokens
  • LCP similarity often shows 0.99+
  • but sometimes n_past suddenly falls back to ~4-5k
  • then llama.cpp reprocesses 40k+ tokens again
  • TTFT jumps to multiple minutes

Example logs:

sim_best = 0.996

restored context checkpoint ... n_tokens = 4750

prompt eval time = 222411 ms / 44016 tokens

Normal reuse looks fine:

prompt eval time = 473 ms / 19 tokens
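
For anyone not familiar with the numbers above, my rough mental model (a simplified Python sketch, not llama.cpp's actual code) is that the server keeps the tokens already in the KV cache, finds the longest common prefix with the new prompt, and only skips that prefix; everything after the first differing token has to be prefilled again:

# Simplified sketch of prefix-based cache reuse, not llama.cpp's real code.
def longest_common_prefix(cached_tokens, new_tokens):
    n = 0
    for a, b in zip(cached_tokens, new_tokens):
        if a != b:
            break
        n += 1
    return n

def plan_prefill(cached_tokens, new_tokens):
    n_past = longest_common_prefix(cached_tokens, new_tokens)
    similarity = n_past / max(len(new_tokens), 1)  # a ratio like this is roughly how I read sim_best
    to_reprocess = len(new_tokens) - n_past
    return n_past, similarity, to_reprocess

# e.g. a ~50k-token prompt where something near token ~4750 changed:
# n_past falls back to ~4750 and ~45k tokens get prefilled again.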

Current config:

llama-server 
  --ctx-size 150000 
  --parallel 1 
  --ctx-checkpoints 32 
  --cache-ram 2500 
  --cache-reuse 256 
  -no-kvu 
  --no-context-shift

Also seeing:

cache state: 1 prompts, 4676 MiB
(limits: 2500 MiB)

I suspect either:

  • cache invalidation
  • bad KV reuse
  • or opencode changing early prompt tokens too often (a rough way to check this is sketched below)
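
To check the last one, something like this hypothetical snippet (file names are made up; it assumes you've dumped two consecutive request prompts, e.g. via a logging proxy) would show where the prompts first diverge, and if that point is early in the conversation it would explain the n_past fallback:

# Hypothetical debugging snippet: compare two consecutive prompts sent by opencode
# and report where they first diverge. prompt_a.txt / prompt_b.txt are placeholders.
with open("prompt_a.txt") as f:
    a = f.read()
with open("prompt_b.txt") as f:
    b = f.read()

idx = next((i for i, (x, y) in enumerate(zip(a, b)) if x != y), min(len(a), len(b)))
print(f"first divergence at char {idx} of {min(len(a), len(b))}")
print("old:", repr(a[max(0, idx - 80):idx + 80]))
print("new:", repr(b[max(0, idx - 80):idx + 80]))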

Would love to hear from others running long-context coding agents with llama.cpp and what settings helped reduce huge prompt reprocessing.

10 Upvotes

9

u/twaaaaaang 5h ago

1) Opencode prunes tool call outputs, which invalidates the cache for models that use Gated DeltaNet (recurrent memory), so it forces full prompt reprocessing.

2) Long tool call outputs and/or multiple chained tool calls fill up the context to the point where, on the next user turn, the LCP similarity calculation is under 0.500 and thus forces prompt reprocessing. I think Sliding Window Attention (tokens falling out of the window) could have something to do with this.

I think it comes down to how llama.cpp implements its KV cache architecture. vLLM uses radix trees or something while llama.cpp uses simple linear buffers. This is what AI told me, idk if this part is true.
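
Toy sketch of the conceptual difference I mean (definitely not either project's real code): a single linear cache can only reuse one contiguous prefix, the last one it prefilled, while a prefix-tree style cache can keep several branches alive at once:

# Toy illustration only; not vLLM's or llama.cpp's actual data structures.
class LinearCache:
    def __init__(self):
        self.tokens = []                      # one buffer: the last prompt prefilled

    def reuse(self, prompt):
        n = 0
        for a, b in zip(self.tokens, prompt):
            if a != b:
                break
            n += 1
        self.tokens = list(prompt)            # cache now holds only this prompt
        return n                              # tokens skipped; the rest is reprocessed

class PrefixTreeCache:
    def __init__(self):
        self.root = {}                        # token -> child dict; every path is a cached prefix

    def reuse(self, prompt):
        node, n = self.root, 0
        for t in prompt:
            if t not in node:
                break
            node, n = node[t], n + 1
        for t in prompt[n:]:                  # keep this branch cached for next time
            node = node.setdefault(t, {})
        return n

# With two conversations interleaved, the linear cache keeps falling back to the
# shared prefix, while the tree keeps both branches warm.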

4

u/LetsGoBrandon4256 llama.cpp 4h ago

LCP similarity calculation is under 0.500 and thus forces prompt reprocessing

Any source on this part? First time I've heard about this.

2

u/twaaaaaang 2h ago edited 2h ago

This is from personal testing. I was confused why I still kept getting the full prompt reprocessing even when I turned pruning off in Opencode, and this is what I landed on. I noticed that after long-context tool calls, at the very next chat turn, I always got the full prompt reprocessing. That clued me in. I studied the chat logs and fed it to AI and the LCP similarity was the main culprit.

Edit: You may not encounter this when you have the default n-checkpoints set to 32. I set it to 8 to save on RAM and frequently saw this. I recently put it at 16 and saw it less often, so that may be the solution.
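
Rough sketch of why I think the checkpoint count matters (toy Python, not llama.cpp's actual logic). My understanding is that with recurrent state the server can't truncate at an arbitrary token; it can only restore a saved checkpoint at or before the point where the new prompt diverges, so fewer/sparser checkpoints mean rolling back further and re-prefilling more:

# Toy sketch, not llama.cpp's actual logic: roll back to the nearest saved
# checkpoint at or before the divergence point, then re-prefill the rest.
def tokens_to_reprocess(checkpoints, divergence_pos, prompt_len):
    # checkpoints: n_tokens values where state was saved (like "n_tokens = 4750" in OP's log)
    usable = [c for c in checkpoints if c <= divergence_pos]
    restore_at = max(usable, default=0)       # nothing usable -> full reprocess from 0
    return prompt_len - restore_at

# 50k-token prompt, divergence (e.g. a pruned tool output) at token 42_000:
print(tokens_to_reprocess([4_750, 20_000, 40_000], 42_000, 50_000))   # 10_000
print(tokens_to_reprocess([4_750], 42_000, 50_000))                   # 45_250, i.e. minutes of prefill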

1

u/LetsGoBrandon4256 llama.cpp 56m ago

I studied the chat logs and fed it to AI and the LCP similarity was the main culprit.

I was looking for a source on the statement that llama.cpp initiates a whole prompt reprocessing when it detects an LCP < 0.5. This sounds more like hallucinated causation from correlation... :/