
Honeycomb

How We Leveraged the Honeycomb Network Agent for Kubernetes to Remediate Our IMDS Security Finding

Picture this: It’s 2 p.m. and you’re sipping on coffee, happily chugging away at your daily routine work. The security team shoots you a message saying the latest pentest or security scan found an issue that needs quick remediation. On the surface, that’s not a problem and can be considered somewhat routine, given the pace of new CVEs coming out. But what if you look at your tooling and find it lacking when you start remediating the issue?

Building a Secure OpenTelemetry Collector

The OpenTelemetry Collector sits at the core of most telemetry pipelines, which makes it a piece of infrastructure that must be as secure as possible. The general advice from the OpenTelemetry maintainers is to build a custom Collector executable, rather than use the stock distributions, when you run it in production. However, that isn't an easy task, and that's what prompted me to build something.
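The "custom Collector executable" workflow mentioned above can be sketched with the OpenTelemetry Collector Builder (`ocb`): you declare only the components you actually need in a manifest, and the builder compiles a binary containing nothing else, shrinking the attack surface. A minimal manifest might look like this (the `dist` names, output path, and module versions are illustrative; pin versions to match your Collector release):

```yaml
# builder-config.yaml: manifest for the OpenTelemetry Collector Builder (ocb).
# Only the components listed here are compiled into the resulting binary.
dist:
  name: otelcol-custom
  description: Minimal custom Collector with only the components we need
  output_path: ./otelcol-custom

receivers:
  - gomod: go.opentelemetry.io/collector/receiver/otlpreceiver v0.96.0

processors:
  - gomod: go.opentelemetry.io/collector/processor/batchprocessor v0.96.0

exporters:
  - gomod: go.opentelemetry.io/collector/exporter/otlpexporter v0.96.0
```

Running `ocb --config builder-config.yaml` then generates the Go sources and compiles the custom Collector binary from them.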

Escaping the Cost/Visibility Tradeoff in Observability Platforms

For developers, understanding the performance of shipped code is crucial. Over the last decade, a table-stakes function of software monitoring and observability solutions has been to save and track app metrics. Engineers love tools that get out of the way and just work, and the appeal of today's best-in-class application performance monitoring (APM) suites lies in a seamless day-zero experience: drop-in agent installs, one-click integrations, and immediate metrics collection.

Product Managing to Prevent Burnout

I’m currently working on a small team within Honeycomb where we’re building an ambitious new feature. We’re excited—heck, the whole company is—and even our customers are knocking on our door. The energy is there. With all this excitement, I’ve been thinking about a risk that—if I'm not careful—could severely hinder my team's ability to ship on time, celebrate success, and continue work after launch: burnout.

Ship First, Model Later: A Short Recap of AI.Dev

In a keynote at AI.Dev, Robert Nishihara (CEO, Anyscale) described the shift: A year ago, the people working with ML models were ML experts. Now, they’re developers. A year ago, the process was to experiment with building a model, then put a product on top of it. Now, it’s ship a product, find the market fit, then create customized models. The general-purpose generative AI models available to all of us today (such as ChatGPT) change the way work is done.

Introducing Honeycomb's Microsoft Teams Integration for Enhanced Alert Management

Today marks an exciting milestone at Honeycomb, and we're thrilled to share it with you: we've officially launched our integration with Microsoft Teams, a step forward in our continuing effort to streamline and enhance your observability experience. Teams now joins our growing list of over 100 Honeycomb integrations.

AI's Impact on Cloud-Native at KubeCon 2023

Cloud-native developers and practitioners gathered from around the world to learn, collaborate, and network at KubeCon/CloudNativeCon North America 2023 between November 6th and 9th at McCormick Place in Chicago, IL—myself included. This wasn’t my first time attending—I’ve been coming to KubeCon since 2016—but it was easily one of the most exciting experiences I’ve had as part of the Cloud Native community.

ShipHero's Observability Journey to Seamless Software Debugging

ShipHero needed a robust, cost-efficient observability platform to support DevOps, customer support, and more. Committed to timely service, ShipHero recognizes that the seamless performance of its software is paramount to customer satisfaction. To maintain this high standard, the development team needs the right data at their fingertips to quickly find and solve problems as they occur.

A Practical Guide to Debugging Browser Performance With OpenTelemetry

So you’ve taken a look at the core web vitals for your site and… it’s not looking good. You’re overwhelmed, and you don’t know what change to make because everything seems like too big of a project to make a real difference. There are so many measurements to keep track of and the standards cited seem even scarier. This is extremely normal. Web performance standards can feel impossible to meet for a lot of us.