The AI monitoring crisis no one's talking about
When I spoke at AWS London earlier this year, I had the chance to discuss something more and more teams are starting to feel: traditional observability doesn't cut it for AI systems. With AI, "Is it running?" is no longer enough. We have to ask, "Is it right?" When I delivered that line, I saw heads nodding across the room. Everyone's excited to build with LLMs, but when it comes to actually monitoring them in production? That's where things fall apart.