Operations | Monitoring | ITSM | DevOps | Cloud

AI Assistant for Calico: Troubleshooting at the Speed of Thought

Despite the wealth of data available, distilling a coherent narrative from a Kubernetes cluster remains a challenge for modern infrastructure teams. Even with powerful visualization tools like the Policy Board, Service Graph, and specialized dashboards, users often find themselves spending significant time piecing together context across different screens.

Claude Code + Lightrun MCP: Your AI Agent Now Has Live Runtime Vision

Claude Code, Anthropic’s coding agent, now integrates with Lightrun through MCP. AI code assistants have been flying blind: Google’s 2025 DORA report found that AI-assisted development is associated with an almost 10% increase in code instability. Even with up to 1M tokens of context available in Claude, this powerful agent cannot see how the code it writes actually behaves inside a live system under real traffic, real dependencies, and loads of 10,000 requests per second.

Architecting MCP for AI Agents: Lessons from Our Redesign | Harness Blog

Key Takeaways:
-- The Harness MCP server is an MCP-compatible interface that lets AI agents discover, query, and act on Harness resources across CI/CD, GitOps, Feature Flags, Cloud Cost Management, Security Testing, Resilience Testing, Internal Developer Portal, and more.
-- The first wave of MCP servers followed a natural pattern: take every API endpoint, wrap it in a tool definition, and expose it to the LLM.
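The "wrap every endpoint" pattern the post critiques can be sketched in a few lines. This is a hypothetical illustration, not Harness's actual implementation; the endpoint names and routes are invented, and only the general MCP tool shape (name, description, inputSchema) follows the protocol.

```python
# Naive "first wave" MCP pattern: mechanically turn every REST endpoint
# into a tool definition and hand the whole list to the LLM.
# Endpoint names and routes below are hypothetical.
ENDPOINTS = [
    ("list_pipelines", "GET /pipelines", "List all CI/CD pipelines"),
    ("get_pipeline", "GET /pipelines/{id}", "Fetch a single pipeline"),
    ("list_feature_flags", "GET /flags", "List all feature flags"),
]

def endpoint_to_tool(name, route, description):
    """Wrap one API endpoint as a minimal MCP-style tool definition."""
    return {
        "name": name,
        "description": f"{description} ({route})",
        "inputSchema": {"type": "object", "properties": {}},
    }

# One tool per endpoint -- the tool count grows linearly with the API
# surface, which is exactly why this pattern stops scaling for agents.
tools = [endpoint_to_tool(*e) for e in ENDPOINTS]
```

The redesign the article describes moves away from this one-tool-per-endpoint mapping toward tools shaped around what agents actually need to discover and act on.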

CI Pipeline Optimization Guide for Platform Engineering Leaders | Harness Blog

Definition: CI pipeline optimization is the practice of reducing build and test time and the cost per build by running only what matters, reusing unchanged components, and enforcing standardized governance. Platform teams waste thousands of hours every year on inefficient pipelines: developers wait 45 minutes for builds, and Jenkins maintenance alone consumes 20% of a team's capacity.

The Future of UK Digital Infrastructure | Pulsant CEO Rob Coupland Interview

In the latest Platform Insight interview, Pulsant CEO Rob Coupland discusses Pulsant’s evolution into the only truly UK‑wide, interconnected data centre platform. The conversation with Nicola Hayes of @PlatformMarketsGroup explores the growing momentum behind edge and sovereign infrastructure, and why real‑world enterprise demand is shaping the next phase of AI. If you want to understand where the UK digital infrastructure landscape is heading – and why regional platforms matter more than ever – this is a must‑watch.

Real-Time Data: The Engine of Efficient, Sustainable Data Centers

Imagine knowing every detail of your data center as it happens. Real-time data makes this possible. You can monitor systems, track performance, and adjust resources on the fly. This proactive approach leads to smoother operations and reduced downtime. By constantly having up-to-date information, you can maintain peak efficiency in your facility. Such insights allow you to optimize cooling and power use, which are crucial to keeping costs down.

Webinar Recap: Building The Finance Function For The Future

Women leaders from CloudZero, Campfire, and Preql AI sat down to talk about what it actually takes to modernize finance in 2026: AI spend, smarter tooling, and the skills that matter now for finance practitioners and executives managing cloud and AI spend in a rapidly changing, unpredictable financial environment. On March 19, 2026, CloudZero and Campfire co-hosted a virtual panel in honor of International Women’s Month, called Building the Finance Function for the Future.

Amazon Lex Pricing in 2026 Explained (And Practical Cost Saving Tips to Use Immediately)

If your SaaS product handles 1 million chatbot interactions per month, Amazon Lex alone could cost between $4,000 and $7,500. That range assumes current Amazon Lex V2 pricing of about $0.00075 per text request and $0.004 per speech request. Multiply the requests by the rate, and you’re done. Or are you? Conversational AI services rarely behave that neatly in production — and that includes AWS Lex. Amazon Lex is AWS’s conversational AI service for building chatbots.
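The quoted range can be reproduced from the per-request rates in the blurb. The rates ($0.00075 per text request, $0.004 per speech request) come from the article; the assumption that an interaction may involve more than one request (multi-turn conversations) is mine, used here to show how the upper bound stretches past the naive single-request estimate.

```python
# Amazon Lex V2 rates as quoted in the article (USD per request).
TEXT_RATE = 0.00075
SPEECH_RATE = 0.004

def lex_cost(text_requests, speech_requests):
    """Naive cost model: requests times rate, nothing else."""
    return text_requests * TEXT_RATE + speech_requests * SPEECH_RATE

# 1M interactions, all speech, one request each -> the low end of the range.
low = lex_cost(0, 1_000_000)   # $4,000

# Real conversations are multi-turn: assuming ~1.9 speech requests per
# interaction (a hypothetical multiplier) reaches the high end.
high = lex_cost(0, 1_875_000)  # $7,500
```

The gap between the two numbers is the article's point: the per-request rate is simple, but the request count per interaction is not, which is where production costs drift from back-of-envelope estimates.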

FinOps In Action Playbook For Engineering Personas

In 2025, many teams built strong FinOps foundations; these practices created visibility and control. Now it’s time to elevate. FinOps in Action is a three-part series focused on applying that foundation in real engineering scenarios. Each post highlights a different persona and shows how to move from visibility to operational discipline. Today, we focus on Engineering. Engineering teams influence cost through architecture decisions, scaling policies, and workload design.