
5 key takeaways from the 2026 State of Software Delivery

AI has made it easier than ever to write code. Shipping it is a different story. Today we released the 2026 State of Software Delivery report, sponsored by Thoughtworks. In it, we analyzed more than 28 million CI/CD workflows across thousands of engineering teams. The picture that emerged is clear: teams are producing more code than ever, but fewer of them are able to turn that activity into software that actually reaches customers.

Introducing: Checkly Agent Skills

AI coding agents are excellent at writing code. Ask Claude Code, Codex, or Cursor to add a feature, and it just works. At Checkly, we were ready for the new agentic world from the start! Monitoring as Code means your entire monitoring setup lives in your repository. API Checks, Browser Checks, alert channels, status pages: everything is defined in code, managed with the Checkly CLI, and version-controlled like any other part of your stack.
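As a rough sketch of what that looks like in practice, here is an API Check defined with the Checkly CLI's TypeScript constructs. The check name, logical ID, and URL are illustrative placeholders, not from the article:

```typescript
// __checks__/health.check.ts — an API Check as code (illustrative example).
// Committed to the repo and deployed with `npx checkly deploy`.
import { ApiCheck, AssertionBuilder } from 'checkly/constructs'

new ApiCheck('health-api-check', {
  name: 'Health endpoint',
  request: {
    method: 'GET',
    url: 'https://example.com/api/health', // hypothetical endpoint
    // Fail the check unless the endpoint returns HTTP 200.
    assertions: [AssertionBuilder.statusCode().equals(200)],
  },
})
```

Because the definition is plain code in the repository, an AI agent can read, modify, and version it alongside the application it monitors.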

Top 6 AI-Powered Procurement Platforms Transforming Supply Chain Management

The procurement landscape has undergone a seismic shift over the past few years. What once relied heavily on manual processes, spreadsheets, and gut instinct now operates in an environment where artificial intelligence analyses millions of data points in seconds, predicts supplier risks before they materialise, and optimises spending patterns with machine learning algorithms that continuously improve.

AI Is Changing Healthcare Faster Than Most Systems Are Ready For

Healthcare is shifting fast, and artificial intelligence is no longer a future concept sitting in research labs or pilot programs. It’s already embedded in clinical workflows, operational systems, and patient interactions, often in ways that feel subtle, uneven, and sometimes uncomfortable.

What is the Model Context Protocol (MCP)?

Iron Man’s J.A.R.V.I.S. is the artificial intelligence (AI) almost everyone wishes they had: a conversational assistant that answers questions like a friend would. The rise of large language models (LLMs) seems to finally deliver the friendly robotic sidekick that generations of children grew up dreaming about.

Teaching AI How to Refinery

At the beginning of February, we released v3.1 of Refinery, our advanced, tail-based sampling solution. The new version comes with more performance enhancements, bug fixes, and a few new pieces of telemetry. In tandem with the 3.1 release, we also released a new tool for our MCP server that helps your AIs understand Refinery and how Honeycomb handles sampling.
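For readers new to the idea, tail-based sampling means deferring the keep/drop decision until a trace is complete, so the sampler can use information only visible at the tail, such as whether any span errored. This is a minimal illustrative sketch of the concept, not Refinery's actual implementation:

```typescript
// Minimal tail-based sampling sketch (conceptual, not Refinery's code):
// buffer every span for a trace, then decide keep/drop once the trace ends.

interface Span {
  traceId: string;
  name: string;
  error?: boolean;
}

class TailSampler {
  private traces = new Map<string, Span[]>();

  // keepRate: fraction of healthy (error-free) traces to keep.
  constructor(private keepRate = 0.1) {}

  // Buffer spans until the trace is complete.
  record(span: Span): void {
    const spans = this.traces.get(span.traceId) ?? [];
    spans.push(span);
    this.traces.set(span.traceId, spans);
  }

  // Called when the trace ends: keep every errored trace in full,
  // otherwise keep a random fraction of healthy traces.
  decide(traceId: string, rand: () => number = Math.random): Span[] | null {
    const spans = this.traces.get(traceId) ?? [];
    this.traces.delete(traceId);
    const hasError = spans.some((s) => s.error);
    return hasError || rand() < this.keepRate ? spans : null;
  }
}
```

The key trade-off, which Refinery engineers around, is that buffering whole traces costs memory in exchange for far better sampling decisions than a head-based coin flip.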