
MCP: Why AI Needs Git Intelligence

GitKraken CTO Eric Amodio breaks down the Model Context Protocol (MCP) and explains why Git intelligence is critical for AI agents in this session from GitKon 2025. Eric covers:

- What MCP is and why every major AI company adopted it
- Why AI needs Git history, not just file system access
- How GitKraken MCP removes Git pain safely
- The future of agentic developer workflows
- How Commit Composer uses AI to organize commits without losing data

Zebra Tablets | Portfolio Overview

Today’s tablets need to be built to survive the rough and tumble of harsh, always-on enterprise work environments. But they also need to be built to adapt to your unique workflows. From screen size to operating system and customization options, they should have the right combination of capabilities to help you achieve more, every day. Meet the Zebra Tablet Portfolio – ET4, ET4-HC, ET6, and ET8 Series.

Zero Tickets Starts with DEX: Why DEX Data Is Your Missing Ingredient

Every IT leader wants fewer tickets. Many invest in automation, self-service portals, and AI agents to get there. Yet ticket volumes remain stubbornly high, and the service desk stays overloaded. The issue is not the effort or intent. It’s the approach. Most organizations are trying to eliminate tickets without understanding the experience that creates them. They optimize workflows after something breaks but ignore the conditions that cause issues in the first place.

How to Troubleshoot BGP Faster with Kentik AI Advisor

A BGP session goes down because a transit provider exceeded the maximum-prefix limit. How do you find the root cause, fast? In this 10-minute demo, we walk through two approaches using Kentik AI Advisor. First, we troubleshoot step by step using natural language: asking AI Advisor to identify the affected interface, check for interface flapping, and review syslog messages until we find the maximum-prefix violation. Then we show how custom network context and natural language runbooks let AI Advisor run the entire investigation autonomously, following the same four steps a senior engineer would.
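For readers unfamiliar with the failure mode in this demo: most routers protect themselves by tearing down a BGP session when a peer advertises more prefixes than a configured maximum. A minimal sketch of that decision logic (illustrative only — the function and its parameters are invented here, not Kentik or router code):

```typescript
type PeerAction = "keep" | "warn" | "teardown";

// Illustrative model of a router's maximum-prefix check. Real
// implementations (e.g. Cisco's `maximum-prefix` knob) typically also
// support a warning threshold and a warning-only mode.
function maxPrefixCheck(
  receivedPrefixes: number,
  maxPrefixes: number,
  warningThresholdPct = 75,
  warningOnly = false
): PeerAction {
  if (receivedPrefixes > maxPrefixes) {
    // Session reset: this is the outage being troubleshot in the demo.
    return warningOnly ? "warn" : "teardown";
  }
  if (receivedPrefixes >= (maxPrefixes * warningThresholdPct) / 100) {
    return "warn"; // log a warning before the hard limit is reached
  }
  return "keep";
}
```

The teardown (rather than a silent drop of excess routes) is why the symptom shows up as a flapping session and a syslog message, which is exactly the evidence trail the demo follows.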

React 19 is coming to Grafana: what plugin developers need to know

As part of the upcoming Grafana 13 release in April, we will be updating to React 19, the latest major version of the frontend library for building user interfaces. Grafana uses React as the core technology for its frontend UI and its vibrant ecosystem of plugins. This update ensures we stay aligned with the broader React ecosystem, and allows us to take advantage of ongoing performance enhancements and new functionality provided by React APIs.
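One concrete example of what the React 19 jump can mean for plugin code: React 19 removes `defaultProps` support for function components, and the replacement is plain ES default values in the props destructuring. A sketch of the pattern — the `Panel` component is invented for illustration and returns markup as a string so it runs without a React dependency:

```typescript
// React 19 drops `defaultProps` on function components.
// Before (React 18 and earlier):
//   Panel.defaultProps = { title: "Untitled" };
// After: default values directly in the destructuring.

interface PanelProps {
  title?: string;
  rows: string[];
}

// Stand-in for a JSX component: returns markup as a string purely so
// the default-props pattern is runnable outside a React project.
function Panel({ title = "Untitled", rows }: PanelProps): string {
  const body = rows.map((r) => `<p>${r}</p>`).join("");
  return `<section><h2>${title}</h2>${body}</section>`;
}
```

The same mechanical rewrite applies to real JSX components; it is worth auditing plugin code for `defaultProps` (and the also-removed `propTypes`) before the Grafana 13 update lands.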

ChatOps that actually works: Grafana Cloud, Slack, and AI-powered observability

Context switching isn’t just inefficient; under pressure, it’s exhausting. It slows decision-making, increases the risk of mistakes, and makes even experienced engineers feel like they’re always a step behind the system they’re responsible for. At Grafana Labs, we want to build tools that meet you where you are. That's why we embedded Grafana Assistant, our context-aware AI assistant, directly in Grafana Cloud.

Measuring Claude Code ROI and Adoption in Honeycomb

At Honeycomb, we’ve been using Claude Code across our engineering team for a while. Anecdotally, I had a sense of who the power users were, and I had seen some examples of complex usage. But I wanted to be able to answer questions about adoption and ROI with data, not anecdotes. Claude Code supports OpenTelemetry out of the box, which means sending telemetry to Honeycomb takes just a few minutes of configuration.
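For context on that last claim: Claude Code's telemetry is switched on with an environment variable and then configured through the standard OpenTelemetry OTLP variables. A sketch of what pointing it at Honeycomb can look like — verify the variable names against the current Claude Code and Honeycomb docs, and note the API key is a placeholder:

```shell
# Opt in to Claude Code telemetry (check current Anthropic docs for this flag).
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Standard OpenTelemetry exporter settings: ship metrics and logs over OTLP.
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf

# Honeycomb's OTLP endpoint; the header carries your Honeycomb API key.
export OTEL_EXPORTER_OTLP_ENDPOINT=https://api.honeycomb.io
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```

Because these are the generic OTLP variables, the same pattern works for any OpenTelemetry-compatible backend, not just Honeycomb.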

Monitoring microservices and distributed systems with Sentry

If you’ve ever tried to debug a request that touched five services, a queue, and a database you don’t own, you already know why monitoring distributed systems is hard. Logs live in different places, requests disappear halfway through a flow, and when something breaks in production, you’re reconstructing what happened from fragments. Microservices make this worse by design. A single request fans out across small, independently deployed services, often communicating asynchronously.
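The way tracers (Sentry's included) reassemble those fragments is by propagating a trace ID across every hop, most commonly via the W3C Trace Context `traceparent` header. A minimal sketch of that mechanism — this models the header format itself, not Sentry's actual API:

```typescript
import { randomBytes } from "node:crypto";

// W3C Trace Context traceparent: version-traceid-parentid-flags,
// e.g. 00-<32 hex chars>-<16 hex chars>-01
function newTraceparent(): string {
  const traceId = randomBytes(16).toString("hex"); // shared by every service in the request
  const spanId = randomBytes(8).toString("hex");   // unique to this hop
  return `00-${traceId}-${spanId}-01`;
}

// Each downstream service keeps the trace ID but mints a new span ID,
// so the backend can stitch all hops into a single trace.
function propagate(traceparent: string): string {
  const [version, traceId, , flags] = traceparent.split("-");
  const spanId = randomBytes(8).toString("hex");
  return `${version}-${traceId}-${spanId}-${flags}`;
}
```

That shared trace ID is what lets a tool like Sentry show the queue hop and the database call as parts of one request instead of unrelated log lines in five different places.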

Understanding Lighthouse: Largest Contentful Paint

Your hero image takes 5 seconds to show up. Your headline sits invisible while JavaScript churns away. Your users? They’ve already hit the back button. That’s the cost of a slow Largest Contentful Paint, and it’s killing your conversions and search rankings. LCP is one of Google’s Core Web Vitals, which means it directly impacts how Google ranks your website. A slow LCP doesn’t just frustrate users; it actively hurts your SEO.
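You can observe LCP yourself with the browser's PerformanceObserver API, and Google's published Core Web Vitals thresholds say under 2.5 s is good and over 4 s is poor. A sketch of both — the rating function mirrors the documented buckets, and the observer portion only runs in a browser:

```typescript
// Browser global, declared so this sketch also type-checks outside the DOM.
declare const window: any;

// Google's documented Core Web Vitals buckets for LCP (milliseconds).
function rateLCP(ms: number): "good" | "needs improvement" | "poor" {
  if (ms <= 2500) return "good";
  if (ms <= 4000) return "needs improvement";
  return "poor";
}

// Browser-only: log each LCP candidate as the page renders. The last
// candidate before user interaction is the page's final LCP.
if (typeof window !== "undefined" && "PerformanceObserver" in window) {
  const obs = new window.PerformanceObserver((list: any) => {
    for (const entry of list.getEntries()) {
      console.log(`LCP candidate: ${entry.startTime.toFixed(0)} ms (${rateLCP(entry.startTime)})`);
    }
  });
  obs.observe({ type: "largest-contentful-paint", buffered: true });
}
```

Paste the observer half into a page's console (or a `<script>` tag) to see which element the 5-second hero image actually is; `buffered: true` replays candidates that fired before the observer was registered.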