

Selector MCP and the Future of Modular Automation

In the first two parts of this series, we explored why modern network operations demand intelligent automation and how AI agents can reason, act, and collaborate to solve complex problems. We examined the frameworks – such as ReAct, LangGraph, and Pydantic – that power these agents, and how the Model Context Protocol (MCP) facilitates seamless integration with tools and services. But theory alone doesn’t improve network uptime or reduce manual toil.

EU AI Act: what changes in August 2025 and how to prepare

On August 2, 2025, a key part of the EU AI Act comes into force, with serious implications for how you manage incidents related to artificial intelligence. While the full regulation will not apply until 2026, new obligations for providers of general-purpose AI (GPAI) models begin this summer. If you are building or deploying AI-powered services in Europe, the clock is ticking.

Securing the Invisible: Why Ambient AI Needs Next-Gen Security

If, like me, you’re continuously striving to keep pace with the ever-evolving world of artificial intelligence, you’re probably hearing a lot about how Ambient AI is poised to dominate discussions and developments throughout the second half of 2025. Ambient AI refers to artificial intelligence systems that operate unobtrusively in the background of our daily environments, constantly sensing, analyzing, and responding to various inputs without explicit human interaction.

Applying AI/ML in Observability - Tech Talk #7

Ready to master anomaly detection? Join us for Part 2 of our "Applying AI/ML in Observability" series, where we do a deep dive into vmanomaly! In this live stream, Mathis and Marc will be joined by a very special guest: Fred Navruzov, the lead developer and mastermind behind VictoriaMetrics' vmanomaly. If you want to move beyond the basics and unlock the full potential of AI-driven observability, this is a session you can't afford to miss.

Running AI without blowing up your storage

Storage is often underestimated: in infrastructure discussions, compute and networking get most of the attention, while storage is treated as secondary. For AI workloads, that can be a costly oversight.

- Data throughput for specialized hardware: AI infrastructure powered by GPUs can process massive volumes of data at unprecedented speeds. This puts immense pressure on the storage system to keep up.
- Scale-out performance: an on-prem, scale-out, software-defined storage setup allows you to meet high performance demands, grow capacity as needed, and stay in control of infrastructure costs.

Building your AI infra, our tips

- Modular architecture: decouple compute from storage so each can scale independently. This makes it easier to adapt to growing or shifting workloads over time.
- Future-ready hardware: select GPUs and CPUs not just for current workloads but with an eye on scalability, including support for newer accelerator types.
- Scalable design: ensure the system allows seamless addition of compute nodes or storage without a full redesign.

CapCut for Real Estate: AI Voice Narration for Property Tours

Listing videos are a powerful way to showcase properties online, but not every well-shot video cuts through the market. CapCut Desktop Video Editor is an all-in-one editing tool that lets real estate professionals build a property tour with AI voiceover, dynamic transitions, and high-definition footage. With CapCut, you can create high-quality, compelling virtual tours even without a professional narrator or a studio to shoot in.
Sponsored Post

When AI Becomes the Judge: Understanding "LLM-as-a-Judge"

Imagine building a chatbot or code generator that not only writes answers but also grades them. In the past, ensuring AI quality meant recruiting human reviewers or using simple metrics (BLEU, ROUGE) that miss nuance. Today, we can leverage generative AI itself to evaluate its own work. LLM-as-a-Judge means using one Large Language Model (LLM), such as GPT-4.1 or Claude 4 Sonnet/Opus, to assess the outputs of another. Instead of a human grader, we prompt an LLM with questions like "Is this answer correct?" or "Is it on-topic?" and have it return a score or label. This approach is automated, fast, and surprisingly effective.
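The grading loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a fixed standard: `call_llm` is a placeholder for whatever chat-completion client you use, and the rubric, scale, and prompt wording are assumptions you would tune for your own evaluation task.

```python
# Minimal LLM-as-a-Judge sketch: one model's output is graded by a
# second model against a simple numeric rubric.

# Illustrative grading rubric; real deployments usually add criteria,
# few-shot examples, and a structured output format.
JUDGE_PROMPT = """You are an impartial evaluator.
Question: {question}
Candidate answer: {answer}

Rate the answer for correctness and relevance on a 1-5 scale.
Reply with only the integer score."""


def build_judge_prompt(question: str, answer: str) -> str:
    """Fill the rubric template with the item under evaluation."""
    return JUDGE_PROMPT.format(question=question, answer=answer)


def parse_score(reply: str, low: int = 1, high: int = 5) -> int:
    """Extract the first integer in the judge's reply, clamped to the scale."""
    for token in reply.split():
        cleaned = token.strip(".,")
        if cleaned.lstrip("-").isdigit():
            return max(low, min(high, int(cleaned)))
    raise ValueError(f"No score found in judge reply: {reply!r}")


def judge(question: str, answer: str, call_llm) -> int:
    """Score one candidate answer by asking a second model to grade it."""
    reply = call_llm(build_judge_prompt(question, answer))
    return parse_score(reply)


# Example with a stubbed judge standing in for a real API call:
fake_llm = lambda prompt: "4"
score = judge("What is 2 + 2?", "4", fake_llm)  # → 4
```

In practice `call_llm` would wrap an API client, and you would run the judge over a whole evaluation set, averaging scores per model or per prompt variant; parsing defensively (as `parse_score` does) matters because judge models occasionally wrap the score in extra text.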

Beyond AI hype: put reliability at the forefront

Reliability is a constant for every technology, whether it’s cloud, microservices, or AI. Full transcript: Just a few years ago everybody was screaming about microservices, "That's the wave of the future," and now everybody's looking at AI. No matter what the change in technology hot topic is, your reliability should still be at the forefront of everything that you're doing.