LLM Cost Monitoring with OpenTelemetry

Teams running LLM applications in production face a cost problem that traditional APM tools were never designed to solve. CPU and memory costs are relatively predictable — a web service processing 1,000 requests per second costs roughly the same week over week. LLM API costs are not. A single user session can cost $0.01 or $5 depending on prompt length, model choice, conversation history, and how many retries happen inside your chain.
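One way to make that variance visible is to attach token counts and an estimated cost to each request's trace. The sketch below computes a per-request cost estimate and packages it as attributes using the OpenTelemetry gen_ai semantic-convention names; in a real service these would be set on the active span via `span.set_attribute(...)`. The model names and per-1K-token prices are illustrative placeholders, not real provider pricing.

```python
# Sketch: estimate per-request LLM cost and build a dict of span
# attributes using OpenTelemetry gen_ai semantic-convention naming.
# In production you would call span.set_attribute() for each entry.

# Illustrative USD prices per 1K tokens as (input, output) pairs.
# Real prices vary by provider and change over time.
PRICES = {
    "model-large": (0.0025, 0.010),
    "model-small": (0.00015, 0.0006),
}

def llm_cost_attributes(model: str, prompt_tokens: int,
                        completion_tokens: int) -> dict:
    """Return trace attributes describing one LLM call's token cost."""
    in_price, out_price = PRICES[model]
    cost = (prompt_tokens / 1000) * in_price \
         + (completion_tokens / 1000) * out_price
    return {
        "gen_ai.request.model": model,
        "gen_ai.usage.input_tokens": prompt_tokens,
        "gen_ai.usage.output_tokens": completion_tokens,
        "llm.cost_usd": round(cost, 6),
    }
```

Aggregating `llm.cost_usd` by user or session in your tracing backend then surfaces exactly the $0.01-versus-$5 sessions described above.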

How AI Is Powering the Next Era of IT Operations

AI is redefining the future of IT. In this Nexus Live 2025 keynote, ScienceLogic CEO and Founder Dave Link shares the vision behind Skylar AI, explains why the industry is shifting toward autonomous operations, and shows how organizations can move faster, smarter, and more proactively than ever before.

IREX Enhances FireTrack AI Module for Faster, More Accurate Fire Detection

WASHINGTON, DC - IREX, a global developer of ethical AI and intelligent video analytics, has announced a significant upgrade to its FireTrack fire and smoke detection module, expanding its capabilities across a wide range of environments. As outlined in an article on TNW, the updated solution is designed to work seamlessly with existing camera infrastructure, eliminating the need for additional hardware. Its use now extends to critical infrastructure, public institutions, residential and commercial properties, and natural environments such as parks and forests.

From AI Idea to Real System: What Changes Along the Way

Most companies don't struggle with the idea of AI. They struggle with what to do with it. The potential is clear: automation, predictions, better decisions. But translating that into something useful inside a business is where things become less obvious. That's usually when AI/ML consulting services start to make sense.

How Will We Hold AI Accountable For Risky Investments?

The word “Trillion” never fails to set the tech world on fire. Foundation Capital’s Jaya Gupta and Ashu Garg are two of the most recent firestarters. Late in December, they co-wrote “AI’s trillion-dollar opportunity: Context graphs,” outlining how AI will transition from organizational knowledge to organizational comprehension.

AI Working for You: MCP, Canvas, and Agentic Workflows - Part 2

In our previous post in our series on observability for the agent era, we looked at how Honeycomb provides unique visibility into LLMs operating in your production environment. Now, let’s flip it around and explore how Honeycomb provides observability insights uniquely suited to helping your AI agents rapidly diagnose and fix production issues, and build production feedback into the next round of development.

The Fundamentals: Fast, Deep, and Ready for What Comes Next - Part 3

The previous two posts in this series have looked at some of the use cases Honeycomb customers are implementing to observe LLMs in production and power agentic observability workflows. In this third and final post, we’ll take it back to basics and look at how the fundamental capabilities and infrastructure of Honeycomb provide the comprehensive data and fast performance that makes these use cases work at production scale. AI capabilities built on a weak observability foundation fall apart fast.