
AI

ML and APM: The Role of Machine Learning in Full Lifecycle Application Performance Monitoring

The advent of Machine Learning (ML) has unlocked new possibilities in various domains, including full lifecycle Application Performance Monitoring (APM). With the diversity of modern applications, maintaining peak performance and seamless user experiences poses significant challenges. So where and how do ML and APM fit together? Traditional monitoring methods are often reactive, addressing issues only after they have already affected the application’s performance.
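To make the contrast concrete, here is a minimal sketch, using window sizes and thresholds chosen only for illustration, of the kind of proactive check an ML-assisted APM pipeline could run: a rolling z-score detector that flags latency samples deviating from their recent baseline before a fixed alert threshold would ever fire.

```python
# Illustrative sketch only: flag latency samples that deviate sharply from
# recent behavior instead of waiting for a fixed alert threshold to be crossed.
from collections import deque
from statistics import mean, stdev


def make_latency_detector(window: int = 60, z_threshold: float = 3.0):
    """Return a callable that scores each new latency sample in milliseconds."""
    history: deque = deque(maxlen=window)

    def check(latency_ms: float) -> bool:
        anomalous = False
        if len(history) >= 10:  # wait for enough samples to form a baseline
            mu, sigma = mean(history), stdev(history)
            if sigma > 0 and abs(latency_ms - mu) / sigma > z_threshold:
                anomalous = True
        history.append(latency_ms)
        return anomalous

    return check


detector = make_latency_detector()
for sample in [120, 118, 125, 119, 122, 121, 117, 123, 120, 119, 118, 480]:
    if detector(sample):
        print(f"Possible regression: {sample} ms deviates from the recent baseline")
```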

Paving the way for modern search workflows and generative AI apps

Elastic’s innovative investments to support an open ecosystem and a simpler developer experience

In this blog, we want to share the investments that Elastic® is making to simplify your experience as you build AI applications. We know that developers have to stay nimble in today’s fast-evolving AI environment. Yet a few common challenges make building generative AI applications needlessly rigid and complicated.
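As one hedged illustration of the kind of workflow the post has in mind (the index name, field name, and endpoint below are assumptions, not Elastic's example), a retrieval step with the official Python client can ground a generative AI prompt in your own data:

```python
# Minimal RAG-style sketch. Assumptions: a local Elasticsearch instance and an
# index called "docs" with a "content" field; this is not Elastic's own example.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")


def retrieve_context(question: str, size: int = 3) -> list:
    """Fetch the most relevant passages to ground an LLM prompt."""
    resp = es.search(
        index="docs",
        query={"match": {"content": question}},
        size=size,
    )
    return [hit["_source"]["content"] for hit in resp["hits"]["hits"]]


passages = retrieve_context("How do I tune search relevance?")
prompt = "Answer using only this context:\n" + "\n---\n".join(passages)
# `prompt` would then be passed to the LLM of your choice.
```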

Generative AI explained

When OpenAI released ChatGPT on November 30, 2022, no one could have anticipated that the following six months would usher in a dizzying transformation for human society with the arrival of a new generation of artificial intelligence. Since the emergence of deep learning in the early 2010s, artificial intelligence has entered its third wave of development. The introduction of the Transformer architecture in 2017 propelled deep learning into the era of large models.

How Generative AI Makes Observability Accessible for Everyone

We are pleased to share a sneak peek of Query Assistant, our latest innovation that bridges the world of declarative querying with Generative AI. Leveraging large language models (LLMs), Coralogix’s Query Assistant translates your natural language request for insights into data queries. This delivers deep visibility into all your data for everyone in your organization.
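The underlying pattern, shown here as a rough sketch rather than Coralogix's actual implementation (the model name, prompt, and Lucene-style output are placeholders), is to hand the user's question to an LLM with instructions to emit only a query:

```python
# Illustrative pattern only: have an LLM translate a plain-English question
# into a query string. Assumes the `openai` client library and an
# OPENAI_API_KEY in the environment; model and target syntax are placeholders.
from openai import OpenAI

client = OpenAI()

SYSTEM = (
    "You translate natural-language questions about log data into a "
    "Lucene-style query string. Return only the query."
)


def to_query(question: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": SYSTEM},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content.strip()


print(to_query("show me 5xx errors from the checkout service in the last hour"))
```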

Build Operational Resilience with Generative AI and Automation

For modern enterprises aiming to innovate faster, gain efficiency, and mitigate the risk of failure, operational resilience has become a key competitive differentiator. But growing complexity, noisy systems, and siloed infrastructure have created fragility in today’s IT operations, making the task of building resilient operations increasingly challenging.

Automate insights-rich incident summaries with generative AI

Does this sound familiar? The incident has just been resolved, and management is piling on the pressure. They want to understand what happened and why. Now. They want to make sure customers and internal stakeholders get updated about what happened and how it was resolved. ASAP. But putting together all the needed information about the why, how, when, and who can take weeks. Still, people are calling and writing. Nonstop.
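A minimal sketch of that automation, assuming a made-up event schema and a placeholder model rather than any particular vendor's API, is to collect the incident timeline and ask an LLM for a stakeholder-ready summary:

```python
# Hedged sketch: assemble incident timeline events into a prompt and ask an
# LLM to summarize them. Event fields and the model name are illustrative
# assumptions, not any vendor's schema.
from dataclasses import dataclass
from openai import OpenAI


@dataclass
class IncidentEvent:
    timestamp: str
    actor: str
    note: str


def summarize_incident(events: list) -> str:
    timeline = "\n".join(f"{e.timestamp} [{e.actor}] {e.note}" for e in events)
    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {
                "role": "system",
                "content": "Summarize this incident timeline: what happened, "
                "why, how it was resolved, and who was involved.",
            },
            {"role": "user", "content": timeline},
        ],
    )
    return resp.choices[0].message.content


events = [
    IncidentEvent("09:02", "pagerduty", "High error rate on payments API"),
    IncidentEvent("09:10", "alice", "Rolled back release 2024.6.1"),
    IncidentEvent("09:25", "monitoring", "Error rate back to baseline"),
]
print(summarize_incident(events))
```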

Using Honeycomb for LLM Application Development

Ever since we launched Query Assistant last June, we’ve learned a lot about working with, and improving, Large Language Models (LLMs) in production with Honeycomb. Today, we’re sharing those techniques so that you can use them to achieve better outputs from your own LLM applications. The techniques in this blog represent a new Honeycomb use case, and you can start using them today, for free.
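One way to approach this, sketched here as a general pattern rather than Honeycomb's published technique (the span and attribute names are my own, and an OTLP exporter pointed at Honeycomb is assumed to be configured elsewhere), is to wrap each LLM call in an OpenTelemetry span carrying the prompt, response, and latency you later want to query:

```python
# Hedged sketch: record each LLM call as an OpenTelemetry span with the
# inputs and outputs you want to analyze later. Assumes trace export to
# Honeycomb is configured separately; attribute names are illustrative.
import time

from opentelemetry import trace

tracer = trace.get_tracer("llm-app")


def fake_model(prompt: str) -> str:
    # Stand-in for a real LLM client call.
    return f"echo: {prompt}"


def call_llm(prompt: str) -> str:
    with tracer.start_as_current_span("llm.completion") as span:
        span.set_attribute("llm.prompt", prompt)
        start = time.monotonic()
        response = fake_model(prompt)
        span.set_attribute("llm.response", response)
        span.set_attribute("llm.duration_ms", (time.monotonic() - start) * 1000)
        return response


print(call_llm("Which service had the most 500s yesterday?"))
```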