Toggle Cache

POST http://localhost:8000/api/llm-cache/toggle

Enable or disable LLM response caching.
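A minimal sketch of calling this endpoint from Python using only the standard library. The source documents the method and URL but not the request payload, so the JSON body shape (`{"enabled": ...}`) and the `enabled` parameter are assumptions for illustration:

```python
import json
import urllib.request

def build_toggle_request(base_url="http://localhost:8000", enabled=True):
    """Build a POST request for the llm-cache toggle endpoint.

    NOTE: the {"enabled": ...} body is an assumed payload shape; the
    endpoint may instead flip the cache state with an empty body.
    """
    return urllib.request.Request(
        url=f"{base_url}/api/llm-cache/toggle",
        data=json.dumps({"enabled": enabled}).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Prepare (but do not send) a request that disables caching.
req = build_toggle_request(enabled=False)
print(req.method, req.full_url)
```

To actually send the request against a running server, pass it to `urllib.request.urlopen(req)`; the example above only constructs it so it can be inspected without a live backend.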