Operations | Monitoring | ITSM | DevOps | Cloud

Introducing Ubuntu 26.04 LTS | Resolute Raccoon

Ubuntu 26.04 LTS, codenamed Resolute Raccoon, is now available to download. Resolute Raccoon builds on the resilience-focused improvements introduced in interim releases, with TPM-backed full-disk encryption, improved support for application permission prompting, Livepatch updates for Arm-based servers, and Rust-based utilities for enhanced memory safety. This release also brings native support for industry-leading AI/ML toolkits like NVIDIA CUDA and AMD ROCm, making Ubuntu 26.04 LTS the ideal platform for AI development and production workloads.

AI agents are only as smart as the data you feed them

AI is only as useful as the context you give it. An autonomous observability agent can unlock serious value from your telemetry, but only when the foundation is right: good telemetry, a strong data layer, and efficient access to the data. Annie Freeman and Lewis Isaac had a lot to say about this at AWS Summit London this week! #Observability #AI #AWSSummitLondon #DevOps #OpenTelemetry

Why Mandating AI Tools Backfires on Engineering Teams

Responsible AI adoption for engineering teams starts with culture, not compliance. In this GitKon talk, Rizel Scarlett (Tech Lead of Open Source DevRel at Block) shares how Block helped thousands of engineers actually want to use AI tools, including Goose, Cursor, Claude Code, and more, without mandates, vibe coding disasters, or security gaps.

GPT Image 2 Brings Visual Work Closer

Most AI image tools are easy to praise in a vague way. They can generate striking pictures, imitate styles, and turn a short prompt into something that looks impressive enough to share. But that kind of praise has started to feel cheap. The image model market is crowded now, and "it makes beautiful images" is no longer a meaningful claim by itself.

Blind Tokenmaxxing Is The New Cloud Waste. Focus on Outcome-Maxxing Instead

Meta's internal token leaderboard sparked a frenzy — and a reckoning. Tokenmaxxing without attribution is just cloud waste 2.0. Companies like Hudl and Duolingo use cost intelligence to connect every AI dollar to a business outcome.

Why Enterprise AI Demands More Than Just Automation

Based on insights from The Intelligent Enterprise podcast episode “The Evolution from Automation to Autonomy”. Every couple of weeks, The Intelligent Enterprise podcast steps away from the day-to-day noise of enterprise life to explore big ideas from a fresh perspective. In one recent episode, the focus turned to a question many organizations are still grappling with: What does it really take to build an AI-powered enterprise that works with people, not against them?

Episode 10 - How I Learned to Stop Worrying and Love AI

Are we still in the first chapter of AI, and mistaking it for the whole story? In this episode of The Intelligent Enterprise, host Tom Stoneman zooms out from the headlines to explore where we really are in the AI journey. He’s joined by journalist and independent analyst Joe McKendrick, who has spent decades documenting how emerging technologies reshape business and society. As co-chair of the AI Summit in New York and a senior contributor to Forbes and ZDNet, Joe brings the perspective of someone who understands how these stories unfold over time.

The New Economics of Enterprise AI: Why Small Models Win Where It Matters

For years, progress in AI was equated with scale. Larger models, broader parameter counts, and increasingly complex cloud architectures were treated as signals of advancement. In enterprise operations, however, scale alone does not determine success. Economics does. As AI becomes embedded in operational workflows, organizations are discovering that model size is less important than cost stability under continuous load. AI-driven operations do not run in bursts. They run constantly.
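The cost argument here is easy to make concrete. A minimal sketch of the arithmetic, using invented per-token prices and request volumes (none of these figures come from the article; they are illustrative assumptions only):

```python
# Hypothetical illustration of inference economics under continuous load.
# All prices and volumes below are invented for illustration, not vendor quotes.

def monthly_cost(requests_per_day: int, tokens_per_request: int,
                 price_per_1k_tokens: float) -> float:
    """Estimated monthly token spend in dollars for a steady workload."""
    tokens_per_month = requests_per_day * 30 * tokens_per_request
    return tokens_per_month / 1000 * price_per_1k_tokens

# A workflow that runs constantly: 200k requests/day at ~1,500 tokens each.
large = monthly_cost(200_000, 1_500, 0.01)   # assumed large-model rate per 1k tokens
small = monthly_cost(200_000, 1_500, 0.001)  # assumed small-model rate per 1k tokens

print(f"large model: ${large:,.0f}/month")  # $90,000
print(f"small model: ${small:,.0f}/month")  # $9,000
```

Under continuous load the per-token rate dominates: a 10x cheaper model is a 10x cheaper monthly bill, regardless of which model scores higher on a leaderboard.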

What Is LLM Observability? For CFOs And Engineers, The Missing Layer Is Cost

You probably have Datadog. Maybe New Relic, maybe Dynatrace. Your observability stack has been solid for years — and you're still flying blind on AI cost. Here's why LLM observability needs a fourth pillar most tools skip, and how to build one that actually tells you what your models are costing you per request, per feature, per customer.
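The per-request, per-feature, per-customer breakdown described above can be sketched in a few lines. This is a minimal illustration, not any vendor's implementation; the price table and request fields are assumptions:

```python
# Minimal sketch of LLM cost attribution. Rates and field names are hypothetical.
from collections import defaultdict

PRICE_PER_1K = {"prompt": 0.003, "completion": 0.015}  # assumed $/1k tokens

def request_cost(prompt_tokens: int, completion_tokens: int) -> float:
    """Dollar cost of a single LLM call from its token counts."""
    return (prompt_tokens / 1000 * PRICE_PER_1K["prompt"]
            + completion_tokens / 1000 * PRICE_PER_1K["completion"])

def attribute(requests):
    """Roll per-request cost up to the feature and customer dimensions."""
    by_feature = defaultdict(float)
    by_customer = defaultdict(float)
    for r in requests:
        cost = request_cost(r["prompt_tokens"], r["completion_tokens"])
        by_feature[r["feature"]] += cost
        by_customer[r["customer"]] += cost
    return dict(by_feature), dict(by_customer)

requests = [
    {"feature": "search", "customer": "acme",
     "prompt_tokens": 1200, "completion_tokens": 300},
    {"feature": "summarize", "customer": "acme",
     "prompt_tokens": 4000, "completion_tokens": 800},
]
features, customers = attribute(requests)
```

The point of the "fourth pillar" is exactly this join: token counts already exist in your traces; attaching a feature and customer tag to each span is what turns them into a cost you can attribute.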