The Future of Business Monitoring is Here & It's Autonomous

As the business world continues to integrate AI and machine learning to better manage big data processes, one area that has arguably benefited the most is business monitoring. From IT management to business intelligence, the last few years have seen a drastic shift in how companies monitor their data.


Qlik and Fortune Launch "History of the Fortune Global 500"

Earlier this year, we launched a unique partnership with Fortune Magazine, with the first-ever data analytics site supporting the publication of the annual Fortune 500 list. Today, we extended that partnership with the debut of the “History of the Fortune Global 500,” an interactive data analytics site timed to the publication of the 30th-anniversary Fortune Global 500 list.


How to Create SQL Percentile Aggregates and Rollups With PostgreSQL and t-digest

When it comes to data, let’s start with the obvious. Averages suck. As developers, we all know that percentiles are much more useful. Metrics like P90, P95, and P99 give us a much better indication of how our software is performing. The challenge, historically, has been how to track the underlying data and calculate the percentiles. Today I will show you how easy it is to create SQL-based percentile aggregates and rollups with PostgreSQL and t-digest histograms!
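The article's pipeline is SQL (PostgreSQL plus the t-digest extension), but the underlying idea — why percentiles beat averages — can be sketched in a few lines of Python. The latency numbers below are made up for illustration, and nearest-rank is just one of several percentile definitions:

```python
import math

def percentile(values, p):
    """Nearest-rank percentile: the smallest value with at least p% of the data at or below it."""
    ranked = sorted(values)
    k = max(math.ceil(p / 100 * len(ranked)) - 1, 0)
    return ranked[k]

# 95 fast requests (100 ms) and 5 slow outliers (5000 ms)
latencies = [100] * 95 + [5000] * 5

mean = sum(latencies) / len(latencies)   # 345 ms -- looks "okay"
p50 = percentile(latencies, 50)          # 100 ms
p99 = percentile(latencies, 99)          # 5000 ms -- the tail the average hides
```

The average (345 ms) suggests nothing is wrong, while P99 exposes the 5-second tail. What t-digest adds on top of this is a compact, mergeable summary, so pre-aggregated rollups can be combined later without keeping every raw value.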


How to achieve product-market fit

Imagine going to work only to find that your inbox is flooded with customers telling you how happy they are with your software. People are downloading your app so quickly that you need to scale your servers to meet the demand before the infrastructure crashes. Your phone rings: it’s a tech journalist trying to book an interview about your company's growth. This is the dream for every business owner and entrepreneur. But the reality often stands in stark contrast to this scenario.

What's Your Streaming-Data Strategy?

Are you ready to harvest the massive real-time data that your organization generates? In every industry, you need to master streaming data to gain an edge in business. But most businesses still rely on batch and incremental processing. If that’s you, don’t despair. Join this session to understand the key concepts, common technologies, and best practices you need to succeed with streaming data. You will also learn about the Hitachi Vantara streaming data stack and how we can help you meet your goals.

Deliver Analytics-Ready Data to the Cloud With Snowflake and Hitachi Vantara

One of the toughest challenges for data professionals today is migrating data from on-premises environments to the cloud. Many companies still lack the tools and infrastructure to ingest and process complex datasets and achieve critical business outcomes. Tune in for a joint session with Snowflake and Hitachi Vantara as we discuss best practices for addressing common edge-to-multicloud issues and how our joint offering can dramatically simplify data preparation, migration, and analytics tasks to help deliver analytics-ready data in the cloud.

Getting Started: Writing Data to InfluxDB

This is a beginner’s tutorial on how to write static data in batches to InfluxDB 2.0 using three different methods. Before you begin, make sure you’ve either installed InfluxDB OSS or registered for a free InfluxDB Cloud account. Registering for an InfluxDB Cloud account is the fastest way to get started with InfluxDB.
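Whichever write method you choose, the data ultimately reaches InfluxDB as line protocol: `measurement,tags fields timestamp`. As a rough, hand-rolled illustration of that format (the official client libraries build these lines for you, and this sketch skips the escaping of special characters that a real encoder must handle):

```python
def to_line_protocol(measurement, tags, fields, timestamp_ns):
    """Format one point as InfluxDB line protocol (illustrative; no escaping)."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))

    def fmt(v):
        if isinstance(v, bool):
            return "true" if v else "false"
        if isinstance(v, int):
            return f"{v}i"       # integer field values carry an 'i' suffix
        if isinstance(v, float):
            return repr(v)
        return f'"{v}"'          # string field values are double-quoted

    field_str = ",".join(f"{k}={fmt(v)}" for k, v in sorted(fields.items()))
    return f"{measurement},{tag_str} {field_str} {timestamp_ns}"

line = to_line_protocol("cpu", {"host": "server01"}, {"usage": 0.64},
                        1609459200000000000)
# cpu,host=server01 usage=0.64 1609459200000000000
```

Seeing the wire format makes the tutorial's write methods easier to compare: they all produce lines like this, differing only in how the points are collected and sent.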


Getting Started: Streaming Data into InfluxDB

This is Part Two of the Getting Started series for InfluxDB v2. If you’re new to InfluxDB v2, I recommend first learning about the different methods for writing static data in batches, covered in Part One of this series. This is a beginner’s tutorial on how and when to write real-time data to InfluxDB v2. The repo for this tutorial is here. For this tutorial, I used Alpha Vantage’s free “Digital & Crypto Currencies Realtime” API to get the data.
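Regardless of the data source, the usual pattern for streaming writes is: poll the source, buffer points, and flush in small batches rather than issuing one request per point. A minimal sketch of that loop — the `BatchingWriter` here is a stand-in for illustration, not the actual InfluxDB client or Alpha Vantage API:

```python
class BatchingWriter:
    """Buffer points and flush in batches -- a stand-in for a real write API."""

    def __init__(self, flush_fn, batch_size=5):
        self.flush_fn = flush_fn      # called with each full batch of points
        self.batch_size = batch_size
        self.buffer = []

    def write(self, point):
        self.buffer.append(point)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.flush_fn(list(self.buffer))
            self.buffer.clear()

# Example: collect flushed batches in a list instead of sending them over HTTP.
batches = []
writer = BatchingWriter(batches.append, batch_size=3)
for i in range(7):
    writer.write(f"crypto_price value={i} {i}")
writer.flush()   # send whatever remains before exiting
# batches now holds batches of 3, 3, and 1 points
```

The final explicit `flush()` matters in real streaming code too: without it, points still sitting in the buffer when the process exits are lost.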