
The latest news and information on containers, Kubernetes, Docker, and related technologies.

Stay Ahead of India's Data Regulations with Civo's Sovereign Cloud

As India continues its digital transformation, data sovereignty and compliance have become top priorities for businesses. Civo's India Sovereign Cloud is designed to meet the evolving needs of Indian businesses, providing a secure and compliant cloud infrastructure that empowers organizations to scale securely within the country. Learn more about the solution and how it can help your business stay compliant, secure, and ready for the future at civo.com/India.

Is observing TLS traffic through eBPF a security risk?

Monitoring deployed applications with eBPF is quickly becoming the standard, for good reason (see eBPF: Revolutionizing Observability for DevOps and SRE Teams), not least because it makes monitoring a purely operations affair instead of requiring each and every application to be instrumented individually. The security-conscious SRE and SRE manager will immediately ask: is this secure? And what about the claim that HTTPS traffic can be monitored?
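To see why eBPF can observe HTTPS traffic at all, consider how TLS tooling typically hooks the user-space crypto library rather than the wire: a user-space probe on a function like OpenSSL's SSL_write fires before the buffer is encrypted. The sketch below uses bpftrace; the libssl path is an assumption for a typical Debian/Ubuntu x86_64 host, and running it requires root:

```shell
# Hedged sketch: attach a uprobe to OpenSSL's SSL_write(SSL *s, const void *buf, int num).
# The probe fires *before* encryption, so arg2 is the plaintext byte count the
# process is about to send over TLS. Library path is illustrative and varies by distro.
sudo bpftrace -e 'uprobe:/usr/lib/x86_64-linux-gnu/libssl.so.3:SSL_write {
  printf("%s is writing %d plaintext bytes via TLS\n", comm, arg2);
}'
```

This is also the crux of the security question: such probes require CAP_BPF/CAP_SYS_ADMIN-level privileges, so anyone able to load them could already inspect process memory by other means, but it does mean TLS alone is not a boundary against a privileged observer on the same host.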

Real-Time, Automated Resource Optimization for Kubernetes Workloads

Struggling with underutilized Kubernetes resources or rising cloud costs? Learn how Pepperdata Capacity Optimizer delivers real-time, automated resource optimization for Kubernetes and Amazon EMR workloads—helping teams reduce costs and boost performance without manual tuning. In this video, discover how Pepperdata helps DevOps, platform engineers, and FinOps teams.

Heroku vs AWS: Differences & What to Choose for Mid-Size & Startups in 2025?

Heroku and AWS offer distinct benefits for startups and mid-size companies. This guide compares pricing, scalability, security, and developer experience to help you choose the right cloud platform based on your team’s needs and growth goals.

Auto Scaling of Kubernetes Workloads Using Custom Application Metrics

Orchestration platforms such as Kubernetes and OpenShift help customers reduce costs by enabling on-demand, scalable compute resources. Customers can manually scale out and scale in their Kubernetes compute resources as needed. Autoscaling is the process of automatically adjusting compute resources to meet a system's performance requirements. As workloads grow, systems require additional resources to sustain performance and handle increasing demand.
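Scaling on custom application metrics, as the headline describes, is expressed in Kubernetes through the autoscaling/v2 HorizontalPodAutoscaler API. A minimal sketch, assuming a Deployment named `worker` and a custom per-pod metric named `queue_depth` exposed through a metrics adapter such as prometheus-adapter (both names are illustrative):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: worker-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: worker          # hypothetical workload to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: queue_depth        # hypothetical custom metric from a metrics adapter
        target:
          type: AverageValue
          averageValue: "30"       # add replicas when avg queue depth per pod exceeds 30
```

The controller adjusts replica count so that the average `queue_depth` across pods converges toward the target, which is how autoscaling on application-level signals (rather than raw CPU) is typically wired up.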

Data Sovereignty Demystified: What You Need to Know

As data continues to flow across borders, understanding data sovereignty is more important than ever. Kunal Kushwaha explores the laws and regulations governing data storage and transfer, and the implications of data sovereignty in the UK and India. Learn how data sovereignty affects individuals, businesses, and governments, and discover the challenges and opportunities that arise from it. For organizations looking to maintain control over their data, Civo offers Sovereign Cloud solutions in the UK and India.

What's Holding Back AI Adoption in India?

Earlier this year, I spent a few weeks in India, visiting universities, speaking at meetups, and catching up with founders. What stood out wasn’t just the excitement about AI, but the focus on what it can actually do today. The curiosity about GenAI and big-picture questions around AGI is there, but most conversations centered around real needs: learning faster, applying for jobs, and getting healthier.

Pepperdata In Collaboration with AWS | Optimize Utilization and Cost for Kubernetes Workloads

In this AWS Startup Partner Spotlight, discover how Pepperdata empowers cloud-native startups to optimize their Kubernetes and Amazon EMR workloads in real time. With automated resource optimization, companies can reduce costs by an average of 30% while increasing utilization by up to 80%—without any manual tuning. Whether you're scaling rapidly or managing unpredictable workloads, Pepperdata ensures your infrastructure runs efficiently and cost-effectively from day one.

Stop Guessing, Start Measuring: Optimizing Rancher Continuous Delivery With Fleet Benchmarks

Rancher Continuous Delivery (known as Fleet) can be used in a workflow to deploy applications to many clusters. With its GitOps support, it enables downstream clusters to pull updates from a Git repository. We know of users who monitor several hundred Git repositories and deploy to a thousand clusters. To make that scale possible, several intermediate steps are necessary: first, the application is converted into separate bundles, which are then targeted at clusters.
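The repository-to-cluster targeting described above is declared in a Fleet GitRepo resource. A minimal sketch, with the repository URL, path, and cluster label all being illustrative placeholders:

```yaml
apiVersion: fleet.cattle.io/v1alpha1
kind: GitRepo
metadata:
  name: demo-app
  namespace: fleet-default
spec:
  repo: https://github.com/example/demo-app   # hypothetical Git repository
  paths:
    - manifests                               # directory Fleet turns into a bundle
  targets:
    - name: prod
      clusterSelector:
        matchLabels:
          env: prod                            # bundle is deployed only to matching clusters
```

Fleet converts the content under each path into a bundle and distributes it to the downstream clusters selected by the target's label selector, which is the mechanism the benchmarks in the article exercise at scale.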

Why Manual Tuning Fails: A Better Way to Optimize Kubernetes Workloads

As a data platform engineer, you’re tasked with running complex workloads—Apache Spark jobs, AI/ML pipelines, batch ETL—across dynamic Kubernetes environments. Performance matters. Time spent tuning matters. And so does cost. But if you’re still relying on manual resource tuning to optimize your workloads, you’re playing a losing game. Sure, you can tweak CPU and memory requests by hand. You can comb through Prometheus metrics, look at job logs, and estimate peaks.
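The manual process described above — combing through metrics and estimating peaks to set a request value — boils down to percentile math over usage samples. A toy sketch of that calculation (not Pepperdata's algorithm; the percentile and headroom factor are illustrative assumptions):

```python
def recommend_request(samples_mcpu, percentile=0.95, headroom=1.2):
    """Suggest a CPU request in millicores from observed usage samples.

    Takes the given percentile of historical usage and adds a headroom
    factor, mimicking the by-hand estimate an engineer makes from
    Prometheus graphs. Both parameters are illustrative defaults.
    """
    if not samples_mcpu:
        raise ValueError("no usage samples collected")
    ordered = sorted(samples_mcpu)
    # Index of the chosen percentile, clamped to the last sample.
    idx = min(len(ordered) - 1, int(percentile * len(ordered)))
    return int(ordered[idx] * headroom)

# A mostly-idle job with one 500 mCPU spike: the p95 estimate keeps the spike.
print(recommend_request([100] * 9 + [500]))  # -> 600
```

The catch, which the article argues, is that this snapshot goes stale the moment workload shape changes — the whole exercise has to be redone per job, per cluster, continuously, which is exactly what automated optimization replaces.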