
Latest Posts

Canonical announces the general availability of Charmed Kafka

27 February 2024: Today, Canonical announced the release of Charmed Kafka – an advanced solution for Apache Kafka® that provides everything users need to run Apache Kafka at scale. Apache Kafka is an event store that supports a range of contemporary applications including microservices architectures, streaming analytics and AI/ML use cases. Canonical Charmed Kafka simplifies deployment and operation of Kafka across public clouds and private data centres alike.
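To illustrate the event-store pattern the post refers to, here is a minimal sketch that publishes and replays events using the third-party kafka-python client. The broker address, topic name and payload are assumptions for illustration, not part of the announcement.

```python
# A minimal sketch, assuming the kafka-python client and a broker reachable
# at localhost:9092; the "orders" topic and payload are hypothetical.
import json

from kafka import KafkaProducer, KafkaConsumer

# Publish an event to the "orders" topic.
producer = KafkaProducer(
    bootstrap_servers="localhost:9092",
    value_serializer=lambda v: json.dumps(v).encode("utf-8"),
)
producer.send("orders", {"order_id": 42, "status": "created"})
producer.flush()

# Replay events from the beginning of the topic.
consumer = KafkaConsumer(
    "orders",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
    consumer_timeout_ms=5000,  # stop iterating when no new messages arrive
)
for message in consumer:
    print(message.value)
```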

Crafting new Linux schedulers with sched-ext, Rust and Ubuntu

In our ongoing exploration of Rust and Ubuntu, we delve into an experimental kernel project that leverages these technologies to create new schedulers for Linux. Playing around with CPU scheduling policies has always been a dream for many kernel hackers and OS enthusiasts. However, the area has typically remained the domain of a few core kernel developers with years of experience.

What is a Kubernetes operator?

Operators take a real-world operations team’s knowledge, wisdom, and expertise, and codify it into a computer program that helps operate complex server applications like databases, messaging systems, or web applications. Operators provide implementations of these operational tasks that are testable, and therefore more reliable at runtime.
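To make the idea concrete, below is a conceptual sketch of the reconcile loop at the heart of an operator. The resource fields and helper functions are hypothetical stand-ins for real Kubernetes API calls, not how any particular operator is implemented.

```python
# A conceptual sketch of an operator's reconcile loop; observe_current_state
# and apply_change are hypothetical stand-ins for real Kubernetes API calls.
import time

desired_state = {"replicas": 3, "version": "10.4"}


def observe_current_state():
    """Stand-in for querying the cluster for the managed application's state."""
    return {"replicas": 2, "version": "10.4"}


def apply_change(key, current, target):
    """Stand-in for the codified operational action (scale up, upgrade, ...)."""
    print(f"reconciling {key}: {current} -> {target}")


def reconcile():
    current = observe_current_state()
    for key, target in desired_state.items():
        if current.get(key) != target:
            apply_change(key, current.get(key), target)


while True:
    reconcile()      # continuously drive the actual state toward the desired state
    time.sleep(30)
```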

Preview Confidential AI with Ubuntu Confidential VMs and Nvidia H100 GPUs on Microsoft Azure

With Ubuntu confidential AI on Azure, businesses can confidently undertake tasks such as ML training, inference, confidential multi-party data analytics, and federated learning. The effectiveness of AI models depends heavily on access to large amounts of good-quality data. While using publicly available datasets has its place, tasks like medical diagnosis or financial risk assessment require access to private data during both training and inference.

Edge AI: what, why and how with open source

Edge AI is transforming the way that devices interact with data centres, challenging organisations to stay up to speed with the latest innovations. From AI-powered healthcare instruments to autonomous vehicles, there are plenty of use cases that benefit from artificial intelligence on edge computing. This blog will dive into the topic, covering the key considerations when starting an edge AI project, the main benefits and challenges, and how open source fits into the picture.

5 Edge Computing Examples You Should Know

In the fast-paced world of technology, innovation is the key to staying ahead of the curve. As businesses strive for efficiency, speed, and real-time data processing, the spotlight is increasingly turning towards edge computing. Edge computing represents a paradigm shift in the way data is processed and analysed. Unlike traditional cloud computing, which centralises data processing in distant data centres, edge computing brings the processing power closer to the source of data.

Charmed MongoDB: use cases for financial services

Financial institutions handle vast amounts of sensitive and confidential data, including customer information, transaction details, and regulatory compliance records. A trusted database ensures the security and privacy of this sensitive information, protecting it from unauthorised access, breaches, or cyber threats. MongoDB is an ideal fit and one of the most widely used databases in the financial services industry, providing a sturdy, adaptable and trustworthy foundation.
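As a rough illustration of why the document model suits records like these, the sketch below uses the pymongo driver. The connection string, database and collection names, and the document fields are illustrative assumptions, not details from the post.

```python
# A minimal sketch using the pymongo driver; the connection string, database,
# collection and document fields are illustrative only.
from pymongo import MongoClient

client = MongoClient("mongodb://localhost:27017")  # enable TLS/auth in production
db = client["payments"]

# Transaction records map naturally onto MongoDB's document model.
db.transactions.insert_one({
    "transaction_id": "txn-0001",
    "account": "GB00XXXX00000000",   # placeholder account reference
    "amount": 125.50,
    "currency": "GBP",
    "status": "settled",
})

# Query recent transactions for a given account.
for doc in db.transactions.find({"account": "GB00XXXX00000000"}).limit(10):
    print(doc["transaction_id"], doc["amount"], doc["currency"])
```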

Cloud storage security best practices

Data is like the crown jewels of any organisation: if it is lost or exposed, the repercussions can be severe. Failing to protect against system failure could lead to the loss of business data, rendering a business non-functional and ultimately causing its collapse. Exposing sensitive data to unauthorised parties not only leads to reputational damage, but can also cause businesses to incur massive fines.
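One widely recommended practice is encrypting data client-side before it ever reaches cloud storage. The sketch below, using the third-party cryptography library, is a minimal illustration of that idea; the file names and key handling are placeholders, and the actual upload step is left out.

```python
# A minimal client-side encryption sketch using the cryptography library;
# file names are placeholders and the upload to cloud storage is omitted.
from cryptography.fernet import Fernet

# Generate and securely store this key (e.g. in a key management service);
# losing it makes the encrypted data unrecoverable.
key = Fernet.generate_key()
cipher = Fernet(key)

with open("report.csv", "rb") as f:
    ciphertext = cipher.encrypt(f.read())

with open("report.csv.enc", "wb") as f:
    f.write(ciphertext)  # upload this encrypted blob instead of the plaintext

# Decryption after download requires the same key.
plaintext = cipher.decrypt(ciphertext)
```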