
April 2020

NodeJS Instrumentation - Adding Custom Tags to Spans | Datadog Tips & Tricks

In Part 1 of this four-part series, you’ll learn how to use manual instrumentation to add more detail to your traces. We’ll add new tags, or attributes, to the spans generated by our NodeJS application, enabling more insightful data visualizations in App Analytics.
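To make the idea concrete, here is a minimal sketch of what that manual instrumentation can look like with the dd-trace library in an Express app; the route and tag names (customer.id, cart.value) are illustrative placeholders, not values from the video.

```javascript
// Sketch only: adding custom tags to the span dd-trace creates for a request.
// The route and tag names below are illustrative.
const tracer = require('dd-trace').init();
const express = require('express');

const app = express();

app.get('/checkout', (req, res) => {
  // dd-trace has already opened a span for this request; retrieve it.
  const span = tracer.scope().active();

  if (span) {
    // Attach business-level attributes so they appear on the span in Datadog.
    span.setTag('customer.id', req.query.customer_id);
    span.setTag('cart.value', 99.95);
  }

  res.send('ok');
});

app.listen(3000);
```

Note that the tracer is initialized before express is imported, so dd-trace can auto-instrument the framework and create the request spans that the custom tags are attached to.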

NodeJS Instrumentation - Creating Custom Spans for Method-Level Visibility | Datadog Tips & Tricks

In Part 2 of this four-part series, you’ll learn how to instrument your NodeJS application to capture custom method-level spans, giving you visibility into how specific methods behave. Flame graphs provide deep insight into the performance of your code, and capturing custom spans during instrumentation adds deeper layers of visibility to the resulting flame graphs. In this video, we capture a method-level span so we can see the performance of that specific method in our flame graphs in the Datadog UI.
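As a rough sketch of the technique, dd-trace’s tracer.trace() helper can wrap a single function in its own span; the method, span, and resource names here are hypothetical, not taken from the video.

```javascript
// Sketch only: capturing a custom method-level span with dd-trace.
// The method name and span/resource names are illustrative.
const tracer = require('dd-trace').init();

async function fetchUserProfile(userId) {
  // tracer.trace() opens a child span around the wrapped function and
  // finishes it when the returned promise settles.
  return tracer.trace('user.fetch_profile', { resource: String(userId) }, async () => {
    // ...the real database or HTTP call would go here...
    return { id: userId, name: 'example-user' };
  });
}

// The 'user.fetch_profile' span appears as its own bar in the flame graph,
// nested under the span of whatever request called this method.
fetchUserProfile(42).then(console.log);
```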

NodeJS Instrumentation - Adding Analyzed Spans for Improved Data Analytics | Datadog Tips & Tricks

In Part 4 of this four-part series, you’ll learn how to add Analyzed Spans to your traces to unlock even more data search and aggregation capabilities in App Analytics. In this video, we walk you through how to turn any span into an Analyzed Span. Analyzed Spans function like the root spans of a trace, allowing us to turn the tags embedded in them into facets for advanced data aggregation and searching in App Analytics. You can check out how to add tags to spans, and how to use them in App Analytics, in the first video of the series here (link to the first video).
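As a hedged sketch of how this looked in dd-trace around the time of this video: the span and tag names below are illustrative, and the ANALYTICS key assumes the tag export from dd-trace/ext/tags that was documented for App Analytics.

```javascript
// Sketch only: turning a custom span into an Analyzed Span with dd-trace.
// Assumes the ANALYTICS tag key exported by dd-trace/ext/tags for App Analytics;
// the span and tag names are illustrative.
const tracer = require('dd-trace').init();
const { ANALYTICS } = require('dd-trace/ext/tags');

function processOrder(order) {
  return tracer.trace('order.process', (span) => {
    // Tags on an Analyzed Span can be used as facets in App Analytics.
    span.setTag('order.tier', order.tier);

    // Flag the span so App Analytics indexes it like a root span.
    span.setTag(ANALYTICS, true);

    // ...the actual order processing would go here...
  });
}

processOrder({ tier: 'premium' });
```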

Key metrics for OpenShift monitoring

Red Hat OpenShift is a Kubernetes-based platform that helps enterprise users deploy and maintain containerized applications. Users can deploy OpenShift as a self-managed cluster or use one of the managed services available from major cloud providers, including AWS, Azure, and IBM Cloud. OpenShift provides a range of benefits over a self-hosted Kubernetes installation or a managed Kubernetes service (e.g., Amazon EKS, Google Kubernetes Engine, or Azure Kubernetes Service).

OpenShift monitoring with Datadog

In Part 1, we explored three primary types of metrics for monitoring your Red Hat OpenShift environment: cluster state metrics, resource usage metrics, and control plane metrics. We also looked at how logs and events from both the control plane and your pods provide valuable insights into how your cluster is performing. In this post, we’ll look at how you can use Datadog to get end-to-end visibility into your entire OpenShift environment.

OpenShift monitoring tools

In Part 1 of this series, we looked at the key observability data you should track in order to monitor the health and performance of your Red Hat OpenShift environment. Broadly speaking, these include cluster state data, resource usage metrics, and information about cluster activity such as control plane metrics and cluster events. In this post, we’ll cover how to access this information using tools and services that come with a standard OpenShift installation.

Configuring a Custom Agent Check to Run on IoT Devices (Raspberry Pi) | Datadog Tips & Tricks

In this video, you'll learn how to create, configure, and deploy a custom check for your Datadog Agent to run on a Raspberry Pi. The result is a set of custom metrics sent to your Datadog account that track your service provider's network speeds over time.

Monitor ECS applications on AWS Fargate with Datadog

AWS Fargate allows you to run applications in Amazon Elastic Container Service without having to manage the underlying infrastructure. With Fargate, you can define containerized tasks, specify the CPU and memory requirements, and launch your applications without spinning up EC2 instances or manually managing a cluster. Datadog has proudly supported Fargate since its launch, and we have continued to collaborate with AWS on best practices for managing serverless container tasks.

Monitoring Kafka with Datadog

Kafka deployments often rely on additional software packages not included in the Kafka codebase itself—in particular, Apache ZooKeeper. A comprehensive monitoring implementation includes all the layers of your deployment so you have visibility into your Kafka cluster and your ZooKeeper ensemble, as well as your producer and consumer applications and the hosts that run them all.

Monitor Jenkins jobs with Datadog

Jenkins is an open source, Java-based continuous integration server that helps organizations build, test, and deploy projects automatically. Jenkins is widely used, having been adopted by organizations like GitHub, Etsy, LinkedIn, and Datadog. You can set up Jenkins to test and deploy your software projects every time you commit changes, to trigger new builds upon successful completion of other builds, and to run jobs on a regular schedule.

Monitoring Kafka performance metrics

Kafka is a distributed, partitioned, replicated log service developed by LinkedIn and open sourced in 2011. Basically, it is a massively scalable pub/sub message queue architected as a distributed transaction log. It was created to provide “a unified platform for handling all the real-time data feeds a large company might have”. Kafka is used by many organizations, including LinkedIn, Pinterest, Twitter, and Datadog. The latest release is version 2.4.1.

Collecting Kafka performance metrics

If you’ve already read our guide to key Kafka performance metrics, you’ve seen that Kafka provides a vast array of metrics on performance and resource utilization, which are available in a number of different ways. You’ve also seen that no Kafka performance monitoring solution is complete without also monitoring ZooKeeper. This post covers some different options for collecting Kafka and ZooKeeper metrics, depending on your needs.