
August 2021

Indexing Strategies for SQL Server Performance

One of the easiest ways to increase query performance in SQL Server is to make sure the engine can access the requested data as efficiently as possible, and one or more well-chosen indexes can be exactly the fix you need. In fact, indexes are so important that SQL Server can warn you when it detects a missing index that would benefit a query.
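As a rough illustration of where those warnings come from, the suggestions SQL Server records can be queried directly from its missing-index DMVs; the query below is a minimal sketch (ordering by estimated impact is one reasonable heuristic, not the only one):

    -- List the index suggestions SQL Server has accumulated since its last restart.
    SELECT TOP (10)
        d.statement          AS table_name,
        d.equality_columns,
        d.inequality_columns,
        d.included_columns,
        s.user_seeks,
        s.avg_user_impact    -- estimated % improvement if the suggested index existed
    FROM sys.dm_db_missing_index_details AS d
    JOIN sys.dm_db_missing_index_groups AS g
        ON g.index_handle = d.index_handle
    JOIN sys.dm_db_missing_index_group_stats AS s
        ON s.group_handle = g.index_group_handle
    ORDER BY s.avg_user_impact DESC;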

Indexes Matter - How Poor Index Management Can Ruin Query Performance

Ideally, database queries use the fewest possible resources: time, memory, bandwidth, etc. Lower resource consumption maps to better query performance. To find relevant data in a table, a query relies on lookup operations, and a table index can make those lookups efficient. With a well-designed index, a query can go straight to the table data it needs instead of having to "scan"—or search through—all the table data.
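To make that concrete, here is a minimal, hypothetical sketch (the Orders table and its columns are invented for illustration): with no index on the filtered column, the query below reads every row, while the nonclustered index lets it seek straight to the matching ones.

    -- Hypothetical table: with no index on CustomerID, this query scans all of dbo.Orders.
    SELECT OrderID, OrderDate
    FROM dbo.Orders
    WHERE CustomerID = 42;

    -- An index on the filtered column turns the scan into a seek; the INCLUDE
    -- columns cover the SELECT list so the base table is not touched at all.
    CREATE NONCLUSTERED INDEX IX_Orders_CustomerID
        ON dbo.Orders (CustomerID)
        INCLUDE (OrderID, OrderDate);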

MySQL queries - faster than light (almost)

At the moment I’m working on a tool for migrating Icinga 2 IDO history to Icinga DB. Sure, you could also run the IDO and Icinga DB in parallel for one year and then switch to Icinga DB if you only care about the history of the past year. But the disadvantage is: you would have to wait one year. Nowadays (in our quickly changing world) that’s quite a long time.

Basic SQL Server Query Tuning Secrets Every SQL Admin Should Know

The performance of your applications is a complex, multi-layered puzzle. Performance can be negatively impacted at the application layer or even by remote calls to networked services. However, the most common bottleneck for applications is the data storage layer. The most common data storage tier for applications is a relational database, whose performance can vary widely depending on query optimization.
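When you suspect the data tier, one quick way to confirm it is to measure a query's I/O and CPU cost directly; a minimal sketch, with a hypothetical query and table:

    -- Report logical reads, CPU time, and elapsed time for the statements that follow
    -- (the figures appear on the Messages tab in SSMS).
    SET STATISTICS IO ON;
    SET STATISTICS TIME ON;

    SELECT COUNT(*)              -- hypothetical query against a hypothetical table
    FROM dbo.Orders
    WHERE OrderDate >= '20210101';

    SET STATISTICS IO OFF;
    SET STATISTICS TIME OFF;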

How to Use Intelligent Query Processing to Boost Query Outcomes

Experienced SQL Server database administrators and developers spend years learning best practices within SQL Server and how to identify performance pitfalls in the query optimizer. Starting with SQL Server 2017, Microsoft introduced a family of features called “Intelligent Query Processing” to provide more consistent performance for your queries.

An Overview of Intelligent Query Processing in SQL Server

When you issue a query to SQL Server or Azure SQL, the engine internally builds and optimizes a query plan, making decisions such as whether to use an index. Much of that plan is based on SQL Server’s best guess of what will happen at run time when your query executes. Even when SQL Server guesses right, as your data changes (especially as the volume of data increases), once-optimal plans can end up performing so poorly that they drag your whole system’s performance down.
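Most of the IQP features light up simply by running a database at the newer compatibility level; a hedged sketch (the database name is hypothetical, and 150 corresponds to SQL Server 2019):

    -- Opt a (hypothetical) database into the SQL Server 2019 feature set, which
    -- enables most Intelligent Query Processing behaviors by default.
    ALTER DATABASE SalesDB SET COMPATIBILITY_LEVEL = 150;

    -- Individual features can also be toggled per database, for example:
    ALTER DATABASE SCOPED CONFIGURATION SET BATCH_MODE_ON_ROWSTORE = ON;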

Monitor and visualize database performance with Datadog Database Monitoring

When you’re running databases at scale, finding performance bottlenecks can often feel like looking for a needle in a haystack. In any troubleshooting scenario, you need to know the exact state of your database at the onset of an issue, as well as its behavior leading up to it.

NiCE Oracle Management Pack 5.2 for Microsoft SCOM

The Management Pack provides clear and precise performance indicators and timely alerts, enriched with pinpointed problem identification and troubleshooting information. It streamlines the workflow and helps with better planning based on detailed reports. The integration into System Center enables a single-pane-of-glass view into your Oracle environment, secured by Microsoft technologies.

Choose the right time series database | Aiven Info Bytes

You have a load of time-stamped data coming in and you realize you need a time series database. But which one should you choose? Watch the video to find out!

Visibility Into Distributed Availability Groups With SQL Sentry

I began my career as an associate software development engineer in June of 2020, and during my short time in the industry, I’ve had the opportunity to build and troubleshoot continuous integration and continuous delivery (CI/CD) pipelines, work on many different technologies within several SolarWinds® (formerly SentryOne) products, and learn proper engineering practices.

Why Observability Requires a Distributed Column Store

Honeycomb is known for its incredibly fast performance: you can sift through billions of rows, comparing high-cardinality data across thousands of fields, and get fast answers to your queries. And none of that is possible without our purpose-built distributed column store. This post is an introduction to what a distributed column store is, how it functions, and why a distributed column store is a fundamental requirement for achieving observability.
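Honeycomb's store is its own purpose-built, distributed system, but the underlying columnar idea also exists in relational engines; the sketch below uses a SQL Server columnstore index on a hypothetical events table purely to illustrate column-oriented storage, not Honeycomb's implementation.

    -- A clustered columnstore index stores each column's values together and compressed,
    -- so an aggregate over a couple of columns reads only those columns, not whole rows.
    CREATE CLUSTERED COLUMNSTORE INDEX CCI_Events ON dbo.Events;

    SELECT ServiceName, COUNT(*) AS Events, AVG(DurationMs) AS AvgDurationMs
    FROM dbo.Events
    GROUP BY ServiceName;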

How to Troubleshoot Apache Cassandra Performance Using Metrics and Logs

In the era of data abundance, there exists a significant need for database systems that can effectively manage large quantities of data. For certain types of applications, an oft-considered option is Apache Cassandra. Like any other piece of software, however, Cassandra has issues that could potentially impact performance. When this happens, it’s critical to know where to look and what to look for in the effort to quickly restore service to an acceptable level.

Exploring Your Data Universe

You can learn a lot about an organization by looking at their data. For example, if I see “LoanToValue” (LTV) or “CollateralValue,” I could surmise they deal with financial data—specifically loans—in some fashion. Developers and database administrators with domain knowledge about the intrinsic meaning of the data are precious friends. These individuals understand the meaning behind the data and how it works within the applications of the organization.

Compliance in your Database DevOps pipeline - continuous classification with SQL Data Catalog

Keeping classifications up to date across a constantly evolving structured data landscape is a difficult task; however, it can become part of your DevOps process instead of simply adding further red tape for your development teams. Join Chris Unwin, a solution engineer at Redgate Software, to see how you can include SQL Data Catalog within your upstream DevOps process so that nothing in your Production environments is ever without classification.

Generating DDL Statements to Recreate Single Objects

Every database administrator (DBA) is—first and foremost—human. And everyone makes mistakes. It’s not the absence of mistakes but rather how you prepare for those mistakes that makes you a great DBA. Luckily, there are many ways to prepare for those mishaps, whether the errors are made by you or someone else on your team. One commonly made mistake is to drop an object in a database or accidentally delete data.
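For programmable objects (procedures, views, functions), SQL Server can hand the definition back directly; a minimal sketch with a hypothetical procedure name (tables generally need a scripting tool such as SSMS's Generate Scripts instead):

    -- Retrieve the full CREATE statement for a (hypothetical) stored procedure,
    -- ready to be saved off or re-run if the object is ever dropped.
    SELECT OBJECT_DEFINITION(OBJECT_ID(N'dbo.usp_GetCustomerOrders')) AS object_ddl;

    -- sp_helptext returns the same definition, split into lines.
    EXEC sp_helptext N'dbo.usp_GetCustomerOrders';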

A Complete Guide to Database Monitoring

A database is a collection of organized information for easy access and management. Computer databases generally consist of aggregated data or files that contain information about customers, transactions, or inventories. Regular monitoring of the database’s performance is necessary to ensure that it is running properly and to detect issues as they arise. Here is a short database monitoring guide that can assist you in choosing the right tools.

A DBA's Perspective: What Is DevOps?

If you’ve worked in IT in recent years, you’re no doubt familiar with the term “DevOps.” The goal is to accelerate the pace of development and deliver new features faster. DevOps involves integrating the development life cycle with Agile methodology, and applying DevOps practices to database operations is now commonly referred to as DataOps.

How to Monitor Redis Logs and Metrics

With a multitude of digital options available in almost every industry, it’s become increasingly critical that applications and services provide a positive user experience. Doing so requires a high level of availability, made possible (in part) by efficiently identifying and resolving issues with the system when they occur. To achieve this, monitoring all critical components of an application and its infrastructure is a necessity.