There's (exactly) seven ways to optimize latency in an LLM application (platform.openai.com)
Show HN: ShellAI – My (Pretty) Online/Offline Terminal Assistant (github.com/ibigio)