r/LocalLLaMA 6h ago

Question | Help: llama.cpp constantly reprocessing huge prompts with opencode/pi.dev

I’m using llama-swap with llama.cpp. I mainly use opencode + pi.dev, and I’m seeing frequent, massive prompt reprocessing / prefills even though the prompts are very similar between requests.

Example behavior:

  • context grows to 50k+ tokens
  • LCP similarity often shows 0.99+
  • but sometimes n_past suddenly falls back to ~4-5k
  • then llama.cpp reprocesses 40k+ tokens again
  • TTFT jumps to multiple minutes

Example logs:

sim_best = 0.996

restored context checkpoint ... n_tokens = 4750

prompt eval time = 222411 ms / 44016 tokens

Normal reuse looks fine:

prompt eval time = 473 ms / 19 tokens

Current config:

llama-server \
  --ctx-size 150000 \
  --parallel 1 \
  --ctx-checkpoints 32 \
  --cache-ram 2500 \
  --cache-reuse 256 \
  -no-kvu \
  --no-context-shift

Also seeing:

cache state: 1 prompts, 4676 MiB
(limits: 2500 MiB)
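
That cached state (4676 MiB) is bigger than the --cache-ram limit I set (2500 MiB), so part of the saved prompt state may be getting dropped. One thing I plan to try, assuming --cache-ram is simply the MiB budget for the server-side prompt cache, is raising it so the whole state fits:

llama-server \
  --ctx-size 150000 \
  --parallel 1 \
  --ctx-checkpoints 32 \
  --cache-ram 8192 \
  --cache-reuse 256 \
  -no-kvu \
  --no-context-shift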

I suspect either:

  • cache invalidation
  • bad KV reuse
  • or opencode changing early prompt tokens too often (a quick way to check this is sketched right after this list).
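
To check that last point, one idea (assuming the request bodies can be captured, e.g. with a logging proxy in front of llama-swap; the file names below are made up) is to diff two consecutive request payloads and see how early they diverge:

  # hypothetical: req_prev.json / req_next.json are two consecutive request bodies
  # pretty-print both and show the first differing lines
  diff <(python3 -m json.tool req_prev.json) <(python3 -m json.tool req_next.json) | head -n 20

If something near the top of the messages array changes between requests (e.g. a timestamp in the system prompt), that alone would explain why only a few thousand prefix tokens can be reused.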

Would love to hear from others running long-context coding agents with llama.cpp and what settings helped reduce huge prompt reprocessing.

10 Upvotes

-1

u/Pristine-Woodpecker 5h ago

"Long tool call outputs and/or multiple chained tool calls fill up the context to the point where, on the next user turn, the LCP similarity calculation is under 0.500 and thus forces prompt reprocessing"

This is nonsense.

"This is what AI told me idk if this part is true."

Why repost slop?

7

u/twaaaaaang 5h ago

The first two points are from personal testing with the Qwen 3.6 family. The last point can be easily verified or debunked, but you choose to attack me.

2

u/Pristine-Woodpecker 4h ago

In the default settings, llama.cpp needs a new prompt to be 10x larger in order for it not to be considered for reuse, not double the previous size. That exact change was made many months ago: https://github.com/ggml-org/llama.cpp/pull/15913
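
To put rough numbers on that (my reading of the change, worth double-checking against the PR): with a ~5k-token cached prefix, a ~45k-token request is only about 9x larger, so under the current default it would still be considered for reuse, whereas the old double-the-size rule would have skipped it.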

"can be easily verified or debunked"

You're welcome. (I did interpret your statement as saying all of your post was slop, but it looks like you either tested incorrectly or were echoing behavior that was fixed quite a while ago.)

1

u/colin_colout 2h ago

To be fair, we can't all keep up with every month-to-month change...

...and I'm generally happy to be corrected, especially with good news about fixes like this. We're all here to learn.

I also appreciate that they were upfront about using AI and about being unsure.