Logging

datadog

Monitor JavaScript console logs and user activity with Datadog

Monitoring backend issues is critical for ensuring that requests are handled in a timely manner, and validating that your services are accessible to users. But if you’re not tracking client-side errors and events to get visibility into the frontend, you won’t have any idea how often these issues prompt users to refresh the page—or worse, abandon your website altogether.

unomaly

Best Practices for Observing Google Cloud Platform Services

Google Cloud Platform (GCP) is a suite of cloud computing services that runs on the same infrastructure Google uses internally for its end-user products, such as Google Search, Gmail, and YouTube. It provides infrastructure as a service, platform as a service, and serverless computing environments.

elastic

Testing data shapes with go-lookslike

I’d like to introduce you to a new open source Go testing/schema validation library we’ve developed here at Elastic. It’s called Lookslike. Lookslike lets you match against the shape of your Go data structures in a way similar to JSON Schema, but more powerful and more Go-like. It does a number of things that we couldn’t find in any existing Go testing library.
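
The post goes on to demonstrate the library’s own API; purely to illustrate the idea of shape matching (a minimal hand-rolled sketch in plain Go, not the Lookslike API itself), a validator of this kind might look like this:

```go
package main

import (
	"fmt"
	"reflect"
)

// Validator checks a single value and reports what, if anything, is wrong with it.
type Validator func(v interface{}) error

// IsString accepts any string value.
func IsString(v interface{}) error {
	if _, ok := v.(string); !ok {
		return fmt.Errorf("expected a string, got %T", v)
	}
	return nil
}

// Shape describes the expected form of a map: each key maps either to a
// literal value (compared with reflect.DeepEqual) or to a Validator.
type Shape map[string]interface{}

// Check verifies that every key named by the shape is present in data and
// that its value equals the literal or passes the validator.
func (s Shape) Check(data map[string]interface{}) error {
	for key, want := range s {
		got, ok := data[key]
		if !ok {
			return fmt.Errorf("missing key %q", key)
		}
		if validate, isValidator := want.(Validator); isValidator {
			if err := validate(got); err != nil {
				return fmt.Errorf("key %q: %v", key, err)
			}
			continue
		}
		if !reflect.DeepEqual(want, got) {
			return fmt.Errorf("key %q: expected %v, got %v", key, want, got)
		}
	}
	return nil
}

func main() {
	shape := Shape{
		"status":  "up",                // must equal "up" exactly
		"monitor": Validator(IsString), // any string is acceptable
	}
	event := map[string]interface{}{
		"status":  "up",
		"monitor": "http",
		"extra":   42, // keys not named by the shape are ignored in this sketch
	}
	fmt.Println(shape.Check(event)) // prints <nil>: the event matches the shape
}
```

The library itself covers much more than this sketch, as the post goes on to show.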

grafana

Loki's Path to GA: Loki-Canary Early Detection for Missing Logs

Launched at KubeCon North America last December, Loki is a Prometheus-inspired service that optimizes storage, search, and aggregation while making logs easy to explore natively in Grafana. Loki is designed to run easily either as a set of microservices or as a single monolith, and it correlates logs with metrics to save users money. Less than a year later, Loki has almost 6,500 stars on GitHub and is now quickly approaching GA.

sumologic

How to monitor NGINX logs

In part one of our introduction to NGINX, “What is NGINX?”, we went over the basic history of NGINX, the differences between Apache and NGINX, and why you would use NGINX over Apache in certain environments and web applications. Today we’ll be diving deeper into NGINX, covering topics such as web server performance, how to monitor that performance, how to obtain and archive logs for deeper analysis, and even how to tell which web server is running in your environment.
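
As a small illustration of the kind of data this log analysis works with (the access log line below is hypothetical, not taken from the article), here is a Go sketch that parses NGINX’s default “combined” access log format:

```go
package main

import (
	"fmt"
	"regexp"
)

// combinedLog matches NGINX's default "combined" access log format:
// $remote_addr - $remote_user [$time_local] "$request" $status $body_bytes_sent "$http_referer" "$http_user_agent"
var combinedLog = regexp.MustCompile(
	`^(\S+) \S+ (\S+) \[([^\]]+)\] "([^"]*)" (\d{3}) (\d+|-) "([^"]*)" "([^"]*)"`)

func main() {
	// A hypothetical access log line in the combined format.
	line := `203.0.113.7 - - [10/Oct/2019:13:55:36 +0000] "GET /index.html HTTP/1.1" 200 612 "-" "curl/7.58.0"`

	m := combinedLog.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("line does not match the combined log format")
		return
	}
	fmt.Printf("client=%s user=%s time=%s request=%q status=%s bytes=%s\n",
		m[1], m[2], m[3], m[4], m[5], m[6])
}
```

A real pipeline would tail the log file and forward the parsed fields to a collector rather than print them, but the field layout is the same.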

datadog

Introducing Metrics from Logs and Log Rehydration

As your application grows in size and complexity, it becomes increasingly difficult to manage the number of logs it generates and the cost of ingesting, processing, and analyzing them. Organizations often have little control over fluctuations in the volume of logs generated—and the resulting costs of collecting them—so they are forced to limit the number of logs generated by their applications, or to pre-filter logs before sending them to their log management platform.

datadog

Dash 2019: Guide to Datadog's newest announcements

At Dash 2019, we are excited to share a number of new products and features on the Datadog platform. With the addition of Network Performance Monitoring, Real User Monitoring, support for collecting browser logs, and single-pane-of-glass visibility for serverless environments, Datadog now provides even broader coverage of the modern application stack, from frontend to backend.

logdna

3X Growth is Quite a Milestone, And It's Only the Beginning

When you start a company – or a third company as is the case for Lee and me – you start with a problem statement, a product you believe in, and a lot of hope. This means when growth goes as planned or exceeds expectations, you shouldn’t be surprised. This is what is supposed to happen. Great Product + Market Opportunity + Great Team = Successful Business. Intellectually, I know all this, but it is still exciting to see it come to fruition.

elastic

How to monitor NGINX web servers with the Elastic Stack

In this article, we'll be looking at how we can monitor NGINX using the various components of the Elastic Stack. We'll use Metricbeat and Filebeat to collect data. This data will be shipped off to and stored within Elasticsearch. Finally, we'll view that data with Kibana. Metricbeat will collect data related to connections (active, handled, accepted, etc.) and the total number of client requests. Filebeat will gather data related to access and error logs.

logz.io

Deploying Redis with the ELK Stack

In a previous post, I explained the role Apache Kafka plays in production-grade ELK deployments, as a message broker and a transport layer deployed in front of Logstash. As I mentioned in that piece, Redis is another common option. I recently found out that it is even more popular than Kafka! Known for its flexibility, performance, and wide language support, Redis is used not only as a database and cache but also as a message broker.