
Cloud Strategy for 2026: the Year of Repatriation, Resilience, and Regional Rebalancing

2026 is shaping up to be a pivotal year for cloud strategy: repatriation is gaining momentum under shifting legislative, geopolitical, and technological pressures, and the focus on data sovereignty keeps growing. Together, these forces have set the stage for a year of repatriation, resilience, and regional rebalancing. Here, Rob Coupland, Chief Executive Officer at Pulsant, offers his insights.

AI coding assistants are only as good as the context you give them

AI coding assistants have quickly become part of everyday development. Teams now rely on them to explain unfamiliar code, suggest configuration files, debug errors, and accelerate delivery across the stack. But as these tools move from experimentation into real production workflows, a consistent pattern is emerging: AI breaks down at the platform boundary.

Beyond the Blue Link: UX Patterns for Google's AI Overviews, AI Mode & Answer Engines

The blue link is dying—but not in the way we expected. When Google’s AI Overviews began appearing at the top of the search results page, the SEO community panicked. Publishers watched click-through rates plummet. The Pew Research Center confirmed their fears: searchers who encounter an AI summary are half as likely to click on traditional search results (8% vs. 15%).

Vibe coding tools observability with VictoriaMetrics Stack and OpenTelemetry

AI-powered coding assistants have transformed how developers write software. Tools like Claude Code, OpenAI Codex, Gemini CLI, Qwen Code, and OpenCode have introduced what many call “vibe coding” — a new paradigm where users describe their intent and AI agents handle the implementation details. But as these tools become integral to development workflows, a critical question emerges: how do we understand what’s happening under the hood?
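To make that question concrete, here is a minimal sketch, not taken from the article, of how a thin wrapper around a coding agent could emit OpenTelemetry spans over OTLP for a backend such as the VictoriaMetrics stack to ingest. The endpoint URL, the attribute names, and the call_model helper are illustrative assumptions.

```python
# Minimal sketch: wrap each agent task in an OpenTelemetry span and export
# it over OTLP/HTTP. Endpoint and attribute names are illustrative; point
# the exporter at whatever OTLP-compatible backend you run.
from opentelemetry import trace
from opentelemetry.sdk.resources import Resource
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.exporter.otlp.proto.http.trace_exporter import OTLPSpanExporter

provider = TracerProvider(
    resource=Resource.create({"service.name": "vibe-coding-agent"})
)
provider.add_span_processor(
    BatchSpanProcessor(OTLPSpanExporter(endpoint="http://localhost:4318/v1/traces"))
)
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("agent.instrumentation")

def call_model(prompt: str) -> str:
    # Stand-in for the real model call; replace with your agent's client.
    return f"echo: {prompt}"

def run_agent_task(prompt: str) -> str:
    # One span per user request; child spans could cover tool use and retries.
    with tracer.start_as_current_span("agent.task") as span:
        span.set_attribute("agent.prompt_chars", len(prompt))
        result = call_model(prompt)
        span.set_attribute("agent.response_chars", len(result))
        return result
```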

Lightrun MCP: Your AI Assistant Now Debugs and Validates Production Code

Intermittent production bugs are hard to debug and rarely reproduce locally. Teams fall into a loop of adding logs and redeploying, and every rollback slows them down further. In this demo, R&D team leads Maor Yaffe and Or Golan show how an AI assistant can verify production issues using real runtime data, without redeploying. By connecting Cursor to Lightrun MCP, the agent inspects live production behavior, collects real variable values, and confirms the root cause with evidence instead of assumptions.

What the Latest Google "AI Mode" Means for Users Who Care about Privacy and Better Experiences

When Google introduced its AI highlights above the main search results, we thought that was all the company would do to turn traditional Google Search, long prized by businesses for its expansive SEO opportunities, into an AI-powered experience. But if you live in the U.S. and have recently looked at the Google homepage, you’ll have noticed a new button called "AI Mode." It turns out the company is still working hard not to lose its dominance to competitors.

Top tips: RAG isn't the problem, context is. Here are 3 fixes.

Top Tips is a weekly column where we highlight what’s trending in the tech world and list ways to explore these trends. This week, we’ll be talking about how to improve retrieval-augmented generation (RAG) systems using contextual engineering. Prompt engineering got plenty of attention over the past year; contextual engineering moves beyond it, shaping the information a model retrieves and reasons over, and with it the quality of the answers we get back.
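As one concrete example of contextual engineering, the sketch below shows contextual chunking: each chunk is embedded together with a short document-level preamble so retrieval preserves its surroundings. The Chunk class and contextualize function are illustrative placeholders of mine, not the column’s code or any particular library.

```python
# Minimal sketch of contextual chunking for RAG: store each chunk alongside
# a document-level preamble, and embed the contextualized text so retrieval
# keeps the chunk's surroundings. Names here are illustrative placeholders.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str            # raw chunk, returned to the model at answer time
    contextualized: str  # preamble + chunk, used for embedding and retrieval

def contextualize(doc_title: str, doc_summary: str, chunks: list[str]) -> list[Chunk]:
    out = []
    for i, text in enumerate(chunks):
        preamble = (
            f"Document: {doc_title}\n"
            f"Summary: {doc_summary}\n"
            f"Section {i + 1} of {len(chunks)}:\n"
        )
        out.append(Chunk(text=text, contextualized=preamble + text))
    return out

# Usage: embed chunk.contextualized, but hand chunk.text (or both) to the
# model when building the final prompt.
```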

Make Your Engineering Processes Resilient. Not Your Opinions About AI

Why strong reviews, accountability, and monitoring matter more in an AI-assisted world

Artificial intelligence has become the latest fault line in software development. For some teams, it’s an obvious productivity multiplier. For others, it’s viewed with suspicion: a source of low-quality code, unreviewable pull requests, and latent production risk. One concern we hear frequently is that AI-generated code will quietly erode quality faster than reviews can catch it. It’s an understandable fear, and also the wrong conclusion.

When is it OK, and when is it not, to trust AI SRE with your production reliability?

There’s a moment every engineer knows: an AI suggests a fix, it looks reasonable, maybe even obvious, but production is on the line and you hesitate before clicking execute. There’s a big difference between an AI that can recommend an action and one you’re willing to let take that action. All it takes is one bad call, one kubectl command that makes things worse, and suddenly every automated suggestion is a potential liability instead of a help.
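One way to draw that recommend-versus-act line is an explicit approval gate: read-only commands run automatically, while anything mutating waits for a human. The sketch below is an assumption of mine, not any vendor’s product; the allowlist and the interface are illustrative.

```python
# Minimal sketch of a recommend-vs-execute gate for AI-suggested commands.
# Read-only kubectl commands run automatically; anything else needs a human.
# The allowlist is illustrative and deliberately conservative.
import shlex
import subprocess

READ_ONLY_PREFIXES = [
    ["kubectl", "get"],
    ["kubectl", "describe"],
    ["kubectl", "logs"],
]

def is_read_only(command: str) -> bool:
    argv = shlex.split(command)
    return any(argv[: len(prefix)] == prefix for prefix in READ_ONLY_PREFIXES)

def execute_suggestion(command: str) -> None:
    if not is_read_only(command):
        answer = input(f"AI proposes: {command!r}. Execute? [y/N] ")
        if answer.strip().lower() != "y":
            print("Declined; suggestion logged for review.")
            return
    subprocess.run(shlex.split(command), check=False)

# execute_suggestion("kubectl get pods -n payments")      # runs immediately
# execute_suggestion("kubectl delete pod payments-7f9c")  # asks for approval
```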