
What "AI-Ready Data" actually means for observability teams

Many organizations deploying AI are learning the same lesson right now: the challenge isn’t any particular AI model, it’s the data. According to Gartner, 60% of AI projects will be abandoned because organizations fail to support them with AI-ready data. What’s more, 63% of organizations either lack the right data management practices to get there or aren’t sure they have them.

Who's on call? How Claude helped us calculate this 2,500x faster

Schedules are a core part of any on-call system. In ours, they define who to page and when. But people use them in lots of other ways too: checking their next shift, asking for cover while at the gym, keeping a Slack user group up to date, or updating a Linear triage responsibility. For many of our customers, schedules are one of the main ways they interact with our product, and since they’re such a foundational part of On-call, it’s very important they work well.

Introducing Seer Agent: The answer is already in Sentry. Now you can ask for it.

This is a story about an engineer’s night that could have been bad, but ended up… not so bad. A few weeks ago, on a Saturday, our AI debugger, Seer, started failing. Note the big scary spike on the right. The errors were generic failures from the LLM calls, nothing that pointed at a root cause. Most of the team wasn’t scheduled to be on call that weekend, and it just so happened that Indragie, our Head of AI, was online. He started paging engineers.

Context-Driven AI You Can Trust: How Edwin AI Earns Confidence in Production

Most legacy AIOps investments underdeliver because the AI lacks context, not capability. LogicMonitor’s latest innovations expand Edwin AI’s contextual intelligence across every dimension, so recommendations are accurate, explainable, and trusted by the teams that need to act on them. Reduce incident resolution time with AI that understands your environment—not just your alerts.

LogicMonitor Advances Autonomous IT with No Blind Spots, Trusted AI, and Closed-Loop Action

LogicMonitor’s latest innovations span the entire platform to deliver the operational foundation enterprises need for Autonomous IT—complete visibility from infrastructure to end user, AI that reasons in full context, and closed-loop automation that moves from detection to resolution. Over 90% of organizations rely on at least two or three monitoring solutions—and many enterprises operate five or more.

Stop watching the looms: why the AI era belongs to infrastructure

I live in Manchester, England now. I moved here from Texas last summer (which is its own story), but the thing I wasn't prepared for is how the Industrial Revolution isn't history here. It's the city itself. And if you're American like me, you might need to hear this: the Industrial Revolution didn't start in the US. It started here. Manchester is where the modern world was born. You see it everywhere. The old cotton mills converted into apartments.

Why Your Agentic AI Aspirations Need to Evolve from Models to a Workflow Data Fabric

Enterprise conversations today are dominated by one phrase: Agentic AI. Across boardrooms and innovation labs, organizations are experimenting with copilots, autonomous agents, and AI bots capable of resolving tickets, recommending actions, and orchestrating complex processes. The promise is real — AI that doesn't just generate insights, but takes meaningful action. Here's the uncomfortable truth: most enterprises are architecturally unprepared for the agentic future they're trying to build.

Understanding disaggregated GenAI model serving with llm-d

llm-d is an open source solution for managing high-scale, high-performance Large Language Model (LLM) deployments. LLMs are at the heart of generative AI – so when you chat with ChatGPT or Gemini, you’re talking to an LLM. Simple LLM deployments – where an LLM runs on a single server – can suffer from latency issues, even with just one user. This can happen because the server lacks memory bandwidth, or because of KV cache pressure on system memory.
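To make the KV cache pressure concrete, here is a rough back-of-the-envelope estimate of how much memory the cache consumes per sequence. The model dimensions below are illustrative assumptions (a Llama-style 32-layer model with grouped-query attention), not figures from llm-d itself:

```python
def kv_cache_bytes(n_layers: int, n_kv_heads: int, head_dim: int,
                   seq_len: int, dtype_bytes: int = 2) -> int:
    """Estimate KV cache size for one sequence.

    Each layer stores a key and a value tensor (the factor of 2)
    for every token, across all KV heads, in the given precision.
    """
    return 2 * n_layers * n_kv_heads * head_dim * dtype_bytes * seq_len

# Hypothetical model: 32 layers, 8 KV heads, head_dim 128, fp16 weights.
per_seq = kv_cache_bytes(n_layers=32, n_kv_heads=8, head_dim=128, seq_len=4096)
print(f"{per_seq / 2**20:.0f} MiB per 4K-token sequence")  # prints "512 MiB per 4K-token sequence"
```

At half a gigabyte per 4K-token sequence, a handful of concurrent long-context users can exhaust a single GPU's memory, which is one reason disaggregated serving spreads the load across machines.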