React 19 is coming to Grafana: what plugin developers need to know

As part of the upcoming Grafana 13 release in April, we will be updating to React 19, the latest major version of the frontend library for building user interfaces. Grafana uses React as the core technology for its frontend UI and its vibrant ecosystem of plugins. This update ensures we stay aligned with the broader React ecosystem, and allows us to take advantage of ongoing performance enhancements and new functionality provided by React APIs.

ChatOps that actually works: Grafana Cloud, Slack, and AI-powered observability

Context switching isn’t just inefficient—under pressure, it’s exhausting. It slows decision-making, increases the risk of mistakes, and makes even experienced engineers feel like they’re always a step behind the system they’re responsible for. At Grafana Labs, we want to build tools that meet you where you are. That's why we embedded Grafana Assistant, our context-aware AI assistant, directly in Grafana Cloud.

Measuring Claude Code ROI and Adoption in Honeycomb

At Honeycomb, we’ve been using Claude Code across our engineering team for a while. Anecdotally, I had a sense of who the power users were, and I had seen some examples of complex usage. But I wanted to be able to answer those questions confidently, with data. Claude Code supports OpenTelemetry out of the box, which means sending telemetry to Honeycomb takes just a few minutes of configuration.
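That configuration is mostly environment variables. A sketch of what it might look like, combining Claude Code's telemetry toggle with the standard OpenTelemetry exporter variables (variable names and the Honeycomb endpoint should be checked against the current Claude Code and Honeycomb docs; the API key is a placeholder):

```shell
# Turn on Claude Code's built-in OpenTelemetry export
# (verify the variable name against your installed version's docs).
export CLAUDE_CODE_ENABLE_TELEMETRY=1

# Standard OTel exporter settings, pointed at Honeycomb's OTLP endpoint.
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=http/protobuf
export OTEL_EXPORTER_OTLP_ENDPOINT="https://api.honeycomb.io"
export OTEL_EXPORTER_OTLP_HEADERS="x-honeycomb-team=YOUR_API_KEY"
```

With these set in the shell where engineers run Claude Code, usage metrics flow to Honeycomb without any code changes.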

Monitoring microservices and distributed systems with Sentry

If you’ve ever tried to debug a request that touched five services, a queue, and a database you don’t own, you already know why monitoring distributed systems is hard. Logs live in different places, requests disappear halfway through a flow, and when something breaks in production, you’re reconstructing what happened from fragments. Microservices make this worse by design. A single request fans out across small, independently deployed services, often communicating asynchronously.

Understanding Lighthouse: Largest Contentful Paint

Your hero image takes 5 seconds to show up. Your headline sits invisible while JavaScript churns away. Your users? They’ve already hit the back button. That’s the cost of a slow Largest Contentful Paint, and it’s killing your conversions and search rankings. LCP is one of Google’s Core Web Vitals, which means it directly impacts how Google ranks your website. A slow LCP doesn’t just frustrate users, it actively hurts your SEO.

Unify and correlate frontend and backend data with retention filters

Teams can use Datadog Real User Monitoring (RUM) and RUM without Limits to get full visibility into the frontend health of their applications while retaining only the sessions that contain critical problems that affect the end-user experience. But application errors or slowness often result from backend issues, such as database bottlenecks. To diagnose these issues, you need to correlate the frontend data from RUM with the backend data from Datadog Application Performance Monitoring (APM).

Drift Under Control: Keep Your Infrastructure Consistent with Continuous Detection, Intelligent Analysis, and Safe Remediation

In cloud-native environments, infrastructure is in constant flux. Teams move fast, leveraging Infrastructure-as-Code (IaC), ephemeral resources, and automation to iterate quickly. But speed brings a cost: configuration drift. A single manual change in the cloud console, an untracked automation script, or an out-of-band fix can cause your infrastructure to fall out of sync with code. Over time, this erodes trust, breaks pipelines, and introduces silent risk.
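At its core, continuous drift detection is a scheduled diff between the state your code declares and the state that actually exists. A toy sketch of that comparison (hypothetical resource dicts for illustration, not any specific IaC tool's API):

```python
def detect_drift(declared: dict, actual: dict) -> dict:
    """Diff declared (IaC) state against observed cloud state."""
    drift = {"missing": [], "unmanaged": [], "changed": {}}
    for name, want in declared.items():
        have = actual.get(name)
        if have is None:
            drift["missing"].append(name)  # resource deleted out-of-band
        elif have != want:
            # Record attribute-level diffs, e.g. a manual console edit.
            drift["changed"][name] = {
                k: (want.get(k), have.get(k))
                for k in want.keys() | have.keys()
                if want.get(k) != have.get(k)
            }
    # Resources that exist in the cloud but nowhere in code.
    drift["unmanaged"] = [n for n in actual if n not in declared]
    return drift


declared = {"web-sg": {"port": 443, "cidr": "10.0.0.0/8"}}
actual = {
    "web-sg": {"port": 443, "cidr": "0.0.0.0/0"},  # console change, wide open
    "debug-vm": {"port": 22},                       # untracked out-of-band fix
}
report = detect_drift(declared, actual)
```

Real detectors read both sides from the provider and the state backend, but the output shape is the same: what's missing, what's unmanaged, and which attributes changed, which is exactly the input that intelligent analysis and safe remediation need.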

From Monitoring Signals to Observability Maturity

Efficient monitoring delivers fast results: alerts fire within seconds, dashboards refresh continuously, and teams know the moment something changes. Understanding arrives later. An alert may show that a value shifted, but it does not explain why it shifted, how far the impact will spread, or which components truly matter. Teams see the signal, not the system behavior behind it. This gap defines the limit of traditional monitoring. Detection has improved, but explanation has not kept pace.

AI Anomaly Detection: Catch AI Cost Surprises Before They Kill Margins

Consider this: traditional cloud cost monitoring was like checking your fuel gauge once a month — after the trip was already over. That model worked when infrastructure scaled slowly. You provisioned resources predictably and paid for stable, linear usage. AI breaks that model. Today, AI costs behave like a high-performance engine with a hypersensitive throttle. A small input, like a prompt change or a single power user, can dramatically increase your fuel burn in seconds.
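That hypersensitive throttle is why static budget thresholds miss the surge until the bill arrives: the detector has to track a moving baseline and flag deviations from it. A minimal rolling z-score sketch (illustrative only, not any particular product's algorithm):

```python
from statistics import mean, stdev


def cost_anomalies(costs: list[float], window: int = 5,
                   threshold: float = 3.0) -> list[int]:
    """Flag indices where a cost sample sits more than `threshold`
    standard deviations above its trailing-window baseline."""
    flagged = []
    for i in range(window, len(costs)):
        baseline = costs[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma == 0:
            sigma = max(mu * 0.01, 1e-9)  # avoid div-by-zero on flat spend
        if (costs[i] - mu) / sigma > threshold:
            flagged.append(i)
    return flagged


# Hourly spend: steady, then a prompt change roughly 5x's the burn rate.
hourly = [10.0, 11.0, 10.5, 10.2, 10.8, 11.1, 58.0]
spikes = cost_anomalies(hourly)
```

Production-grade detectors add seasonality and per-tenant baselines, but the principle is the same: compare each new sample to what recent history says is normal, not to a fixed number.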

A cleaner, customizable Bitbucket navigation is here

Last month we shared that a new navigation system is coming to Bitbucket, and we know many of you have been eager to see what it looks like. Today, we’re happy to share that the new navigation is available to all Bitbucket users. This article covers what’s changing in Bitbucket, when it’s happening, and how you can share feedback with us.