The latest News and Information on Monitoring for Websites, Applications, APIs, Infrastructure, and other technologies.


Monitor Hazelcast with Datadog

Hazelcast is a distributed, in-memory computing platform for processing large data sets with extremely low latency. Its in-memory data grid (IMDG) sits entirely in random access memory, which provides significantly faster access to data than disk-based databases. And with high availability and scalability, Hazelcast IMDG is ideal for use cases like fraud detection, payment processing, and IoT applications.
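For a feel of what "in-memory data grid" means in practice, here is a minimal sketch using the hazelcast-python-client package. It assumes a cluster member is already running locally; the map name, key, and score are made-up illustrations, and the block needs a live cluster to actually run.

```python
import hazelcast

# Connect to a running Hazelcast cluster member (address is an assumption).
client = hazelcast.HazelcastClient(cluster_members=["127.0.0.1:5701"])

# A distributed map lives in cluster RAM and is shared by all members,
# which is what gives the IMDG its low-latency reads and writes.
scores = client.get_map("fraud-scores").blocking()
scores.put("txn-1001", 0.92)  # hypothetical transaction id and risk score
risk = scores.get("txn-1001")

client.shutdown()
```

Because the map is replicated and partitioned across the cluster, the same `get_map("fraud-scores")` call from any connected client sees the same data.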


Building New Teams While Working Remotely

2020 has been a year of challenges, and across all industries, companies are working hard and fast to remain efficient in the face of a new normal. Now that hiring freezes are slowly thawing out, many companies are starting to hire new people virtually and want to build cohesion remotely between new and existing teammates. The lack of physical proximity means your team will need to ramp up on communication, transparency, and accountability.


Challenges of Going Serverless (2020 edition)

While the benefits of going serverless are well known - reduced costs via pay-per-use pricing, less operational overhead, instant scalability, increased automation - the challenges are rarely addressed as comprehensively. Understandable concerns over migrating can stall architectural decisions entirely, for fear of getting it wrong or of not having the right resources.


DRY (Don't Repeat Yourself) on the cloud with Pulumi

Any enterprise working on the cloud is likely to use Infrastructure as Code, as it simplifies the deployment process and makes iteratively developed serverless applications easier to manage. There are several open source tools out there that work with most cloud providers, each with its own implementation process. In this article we are going to dive into the workings of one particular tool, Pulumi.
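One way Pulumi supports DRY is by letting you wrap a group of resources in a reusable component and instantiate it per environment instead of copy-pasting stanzas. A sketch in Pulumi's Python SDK (the component type token, names, and environments are made up, and it assumes the pulumi and pulumi_aws packages plus AWS credentials):

```python
import pulumi
from pulumi_aws import s3

class StaticSite(pulumi.ComponentResource):
    """A reusable bucket-plus-website unit: define once, instantiate many times."""

    def __init__(self, name: str, opts=None):
        super().__init__("acme:web:StaticSite", name, None, opts)
        self.bucket = s3.Bucket(
            f"{name}-bucket",
            website=s3.BucketWebsiteArgs(index_document="index.html"),
            opts=pulumi.ResourceOptions(parent=self),
        )
        self.register_outputs({"bucket_name": self.bucket.id})

# DRY in action: one definition drives every environment.
for env in ["dev", "staging", "prod"]:
    StaticSite(f"site-{env}")
```

Parenting the bucket to the component keeps `pulumi up` output and the resource graph organized around the logical unit rather than individual resources.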


Community Highlight: How Supralog Built an Online Incremental Machine Learning Pipeline with InfluxDB OSS for Capacity Planning

This article was written by Gregory Scafarto, data scientist intern at Supralog, in collaboration with InfluxData’s DevRel Anais Dotis-Georgiou. At InfluxData, we pride ourselves on our awesome InfluxDB Community. We’re grateful for all of your contributions and feedback. Whether it’s Telegraf plugins, community templates, awesome InfluxDB projects, or third-party Flux packages, your contributions continue to both impress and humble us.

Connecting to 3rd-party APIs - How to work with the REST API connector

In this video, I’m working with our REST API connector to connect to our own Integration Management API. We offer several ways to authenticate against an API depending on its configuration; here we use basic authentication with a username and password. With our API, the username is the email address you registered with, which you can copy from the Profile Information page. For the password, use your API key, which you will also find on the Profile Information page. Just click on it and it will be automatically copied to your clipboard.

After you have added the credentials, you can proceed with the configuration. There are several options to choose from; I’m leaving everything as it is, but I need to add the URL. In our case, the URL is also provided on the Profile Information page; with other APIs, you will need to check the corresponding API documentation. Now all I need to do is retrieve a sample to check that the configuration is correct - and it looks good. Et voilà: I can now use our own API in my integration flow.
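Under the hood, basic authentication just sends `base64(username:password)` in an `Authorization` header (RFC 7617); the connector builds it for you from the email address and API key. A stdlib-only sketch of the same thing, with hypothetical values and a hypothetical endpoint URL:

```python
import base64
import urllib.request

def basic_auth_header(username: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value per RFC 7617."""
    token = base64.b64encode(f"{username}:{password}".encode("utf-8")).decode("ascii")
    return f"Basic {token}"

# The connector does this for you; values here are placeholders.
email = "jane@example.com"     # the email address you registered with
api_key = "your-api-key-here"  # copied from the Profile Information page

request = urllib.request.Request(
    "https://api.example.com/v1/integrations",  # hypothetical URL
    headers={"Authorization": basic_auth_header(email, api_key)},
)
```

Note that basic auth only encodes the credentials, it does not encrypt them, so it should only ever be used over HTTPS.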

Getting started with the Grafana Cloud Agent, a remote_write-focused Prometheus agent

Hi folks! Éamon here. I’m a recent-ish addition to the Solutions Engineering team and just getting my feet wet on the blogging side so bear with me. :) Back in March, we introduced the Grafana Cloud Agent, a remote_write-focused Prometheus agent. The Grafana Cloud Agent is a subset of Prometheus without any querying or local storage, using the same service discovery, relabeling, WAL, and remote_write code found in Prometheus.
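To make the remote_write focus concrete, here is a minimal sketch of an agent configuration file. The exact keys can vary between agent versions, and the endpoint and credentials below are placeholders, so treat this as an illustration rather than a copy-paste config:

```yaml
server:
  log_level: info

prometheus:
  wal_directory: /tmp/grafana-agent-wal   # WAL buffers samples before shipping
  global:
    scrape_interval: 15s
  configs:
    - name: agent
      scrape_configs:
        - job_name: node_exporter
          static_configs:
            - targets: ['localhost:9100']
      remote_write:
        - url: https://<your-remote-write-endpoint>/api/prom/push
          basic_auth:
            username: <your Grafana Cloud username>
            password: <your Grafana Cloud API key>
```

Everything scraped locally is forwarded via remote_write; there is no query endpoint or long-term local storage to manage, which is the point of the agent.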


Sentry Data Wash Now Offering Advanced Scrubbing

Over the past week, we rolled out access to Advanced Data Scrubbing for all users. If you were one of our Early Adopters, you’ve known about this for a couple of months. As the name implies, it’s an addition to our existing server-side data scrubbing features, meant to provide greater control and more tools to help you choose which data to redact from events. One of Sentry’s main selling points as an error monitoring platform is the data it collects and aggregates.
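Advanced Data Scrubbing runs server-side, but the Sentry SDKs also expose a `before_send` hook for redacting data client-side, before an event ever leaves your application. A minimal sketch of that complementary approach (the sensitive key names and redaction marker are assumptions, not Sentry's built-in rules):

```python
SENSITIVE_KEYS = {"password", "api_key", "credit_card", "ssn"}

def scrub(event):
    """Recursively replace values of sensitive keys with a redaction marker."""
    if isinstance(event, dict):
        return {
            k: "[Filtered]" if k.lower() in SENSITIVE_KEYS else scrub(v)
            for k, v in event.items()
        }
    if isinstance(event, list):
        return [scrub(v) for v in event]
    return event

# Wiring it up with the SDK (requires sentry_sdk to be installed):
# import sentry_sdk
# sentry_sdk.init(dsn="...", before_send=lambda event, hint: scrub(event))
```

Returning `None` from `before_send` instead would drop the event entirely; here we only redact matching fields and let the rest of the event through.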