Fix bugs faster with CircleCI's Chunk AI agent

Bugs hide in plain sight. A date validator that rejects February 29th in leap years. An edge case that slips through code review. A flaky test that passes locally but fails in CI. These issues erode trust in your codebase and waste hours of debugging time. In the era of AI-assisted development, code is being written faster than ever. But speed creates risk.
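As a concrete illustration of the kind of bug described above, here is a minimal, hypothetical sketch of a date validator that mishandles leap years, alongside a corrected check. The function names are illustrative and not taken from CircleCI's product.

```python
# Hypothetical example of the leap-year bug described above (not CircleCI code).

def is_valid_date_buggy(year: int, month: int, day: int) -> bool:
    # Bug: assumes February always has 28 days, so Feb 29 is rejected
    # even in leap years such as 2024.
    days_in_month = [31, 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= month <= 12 and 1 <= day <= days_in_month[month - 1]

def is_valid_date_fixed(year: int, month: int, day: int) -> bool:
    # Fix: account for leap years (divisible by 4, except centuries not divisible by 400).
    is_leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if is_leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return 1 <= month <= 12 and 1 <= day <= days_in_month[month - 1]

assert not is_valid_date_buggy(2024, 2, 29)  # the hidden bug: a valid date is rejected
assert is_valid_date_fixed(2024, 2, 29)      # corrected validator accepts it
```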

Boost your test coverage with CircleCI's Chunk AI agent

Test coverage is one of those metrics everyone agrees matters, until it’s time to actually write the tests. Between shipping features, fixing bugs, and handling production issues, writing comprehensive tests for edge cases and error paths often falls to the bottom of the backlog. The result is coverage gaps that accumulate technical debt and leave your codebase vulnerable to regressions. As AI-powered development tools reshape how we write code, the volume and velocity of changes are accelerating.
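The edge-case and error-path tests that tend to fall off the backlog often look something like this hypothetical pytest sketch; the function under test and its behavior are assumptions for illustration, not anything from CircleCI.

```python
# Hypothetical pytest sketch of the edge-case and error-path tests that often go unwritten.
import pytest

def parse_price(value: str) -> float:
    # Small function under test (illustrative only).
    if not value:
        raise ValueError("empty price")
    return float(value.replace("$", "").replace(",", ""))

def test_parse_price_happy_path():
    assert parse_price("$1,234.50") == 1234.50

def test_parse_price_empty_string_raises():
    # Error path: empty input should raise, not silently return 0.0.
    with pytest.raises(ValueError):
        parse_price("")

def test_parse_price_thousands_separator():
    # Edge case: thousands separators should be stripped before conversion.
    assert parse_price("12,000") == 12000.0
```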

How Cisco Revolutionized Platform Engineering with Komodor's Agentic AI

In the world of cloud-native infrastructure, complexity is the silent killer of innovation. For Cisco Outshift, the company’s incubation engine, managing a sprawling environment of AWS EKS clusters and edge-based MicroK8s workloads created a classic bottleneck: the Platform Engineering team was drowning in toil. Facing SRE burnout and the limits of human scaling, Cisco embarked on an ambitious journey to evolve its internal operations from standard DevOps to Agentic AI.

Scaling AI Reliability: Real-world lessons from Mistral AI

How does one of the world's leading AI companies keep its infrastructure reliable while shipping new models constantly? In this webinar, Devon Mizelle, Senior SRE at Mistral AI, shares the real story. Devon walks through how Mistral built an automated system that generates synthetic checks for every model the moment it goes live—no manual configuration, no forgotten monitors, no inconsistent alerting. Using monitoring as code, his team eliminated the toil of maintaining hundreds of checks across a rapidly evolving model ecosystem.
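A minimal sketch of the monitoring-as-code pattern described above, assuming a hypothetical model registry, endpoint, and check schema; none of this is Mistral AI's actual tooling, it only shows the idea of generating one synthetic check per model from code.

```python
# Minimal monitoring-as-code sketch: generate one synthetic check per model.
# The registry, URL, and check schema below are assumptions for illustration only.
import json

MODEL_REGISTRY = ["model-small", "model-large", "model-code"]  # hypothetical model list

def synthetic_check_for(model: str) -> dict:
    # One uniform check definition per model, so newly launched models are
    # covered automatically and no monitor is configured by hand.
    return {
        "name": f"synthetic-{model}-chat-completion",
        "request": {
            "method": "POST",
            "url": "https://api.example.com/v1/chat/completions",  # placeholder URL
            "body": {"model": model, "messages": [{"role": "user", "content": "ping"}]},
        },
        "assertions": [
            {"type": "status_code", "equals": 200},
            {"type": "latency_ms", "below": 5000},
        ],
        "schedule": "every 5m",
    }

if __name__ == "__main__":
    # Emit check definitions as config that can be committed and deployed from CI.
    checks = [synthetic_check_for(m) for m in MODEL_REGISTRY]
    print(json.dumps(checks, indent=2))
```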

The Hidden Cost of 30% AI-Generated Code #speedscale #aicoding #devops #technews #ai

AI now writes 30% of Big Tech’s code, but the resulting surge in defects is crashing platforms like AWS and GitHub. Manual testing can no longer keep up with this velocity; it's time to deploy AI Quality Agents to save our systems. Is AI speed worth the decline in code quality, or are we headed for a breaking point? Let me know if you’ve noticed more bugs in your workflow lately. Video collab with @ScottMooreConsultingLLC.

Why Context, Not Prompts, Determines AI Agent Performance

Prompt engineering improves single responses, but agent performance is determined by how execution context is captured, replayed, and constrained over time. For the past few years, enterprises have obsessed over prompts, with entire roles emerging around their design and an ecosystem of tooling and templates following close behind. This focus delivered early gains because it allowed teams to rapidly improve outputs without modifying the surrounding system. Over time, those gains flattened.
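One way to picture the distinction is a sketch of an agent loop in which the prompt stays fixed and the behavior is shaped by how execution context is captured, replayed, and constrained. Everything below is hypothetical and schematic, not a specific vendor's agent framework.

```python
# Hypothetical sketch: the prompt is constant; what changes agent behavior is
# which pieces of execution context are captured, replayed, and constrained.
from dataclasses import dataclass, field

@dataclass
class ContextStore:
    max_events: int = 20                              # constraint: bounded context
    events: list[str] = field(default_factory=list)

    def capture(self, event: str) -> None:
        self.events.append(event)
        self.events = self.events[-self.max_events:]  # drop stale context

    def replay(self) -> str:
        # Replay only the retained slice of prior execution.
        return "\n".join(self.events)

SYSTEM_PROMPT = "You are a deployment assistant."     # fixed prompt, never tuned here

def build_agent_input(store: ContextStore, user_request: str) -> str:
    # The model sees the same prompt every turn; performance hinges on
    # what the context store chose to keep.
    return (f"{SYSTEM_PROMPT}\n\n# Prior execution context\n{store.replay()}"
            f"\n\n# Request\n{user_request}")

store = ContextStore(max_events=3)
for step in ["ran tests: 2 failures", "rolled back deploy", "tests now green", "opened incident"]:
    store.capture(step)
print(build_agent_input(store, "Summarize current deployment status."))
```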

Datadog acquires Propolis

Generative AI enables teams to write and ship code faster than ever. But current methods for testing and quality assurance have not evolved to match the new pace and scale of deployments. Manual and deterministic testing paths quickly become obsolete when new features are released, and they fundamentally can’t test AI outputs, leaving a massive untested surface area. To keep up, teams need new testing methods that can define the goals users are trying to accomplish and verify that the outcomes actually meet them.
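A rough sketch of what goal-based testing of AI output could look like: instead of asserting on exact strings, the test checks whether the response covers the user's goal. The judge here is a placeholder heuristic, and nothing below is a real Datadog or Propolis API.

```python
# Rough sketch of goal-oriented testing for AI output. The judge is a placeholder
# heuristic; in practice it might be an LLM-as-judge or a dedicated product.

def generate_answer(question: str) -> str:
    # Stand-in for the AI feature under test.
    return "To reset your password, open Settings > Security and choose 'Reset password'."

def meets_goal(answer: str, required_points: list[str]) -> bool:
    # Deterministic string assertions break on every rephrasing; instead, check
    # that the outcome covers the user's goal, however it is worded.
    return all(point.lower() in answer.lower() for point in required_points)

def test_password_reset_goal():
    goal = ["settings", "reset password"]   # what the user needs to accomplish
    answer = generate_answer("How do I reset my password?")
    assert meets_goal(answer, goal)

if __name__ == "__main__":
    test_password_reset_goal()
    print("goal met")
```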