From 300KB to 69KB per Token: How LLM Architectures Solve the KV Cache Problem (future-shock.ai)