Technology

The latest News and Information on APIs, Mobile, AI, Machine Learning, IoT, Open Source and more!

Graphite Energy Uses Time Series Data to Drive Industrial Decarbonization Efforts

One major challenge in decarbonizing industrial heat is converting the variability of renewable energy into the reliability that process plants require. Solar panels generate energy only when the sun is out, and wind turbines only when the wind blows; industry, however, needs energy consistently and continuously. Graphite Energy, based in Australia, recognized this disconnect and set out to solve it.

A Platform Gaining Momentum: Announcing New InfluxDB Features for Industrial IoT

Data – specifically time series data – continues to be the key ingredient for successful digital transformation. No matter the industry, time series data helps companies understand the activities and output of the people, processes and technologies that impact their business. Managing and using that data effectively has emerged as the best path toward that transformation.
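
As a rough illustration of what managing industrial time series data looks like in practice, the sketch below writes a single sensor reading to InfluxDB using the official Python client; the URL, token, org, bucket and measurement names are placeholders rather than anything described in the announcement.

    # Minimal sketch: record one industrial sensor reading as a time series point.
    # The connection details and names below are illustrative placeholders.
    from datetime import datetime, timezone
    from influxdb_client import InfluxDBClient, Point, WritePrecision
    from influxdb_client.client.write_api import SYNCHRONOUS

    client = InfluxDBClient(url="http://localhost:8086", token="my-token", org="my-org")
    write_api = client.write_api(write_options=SYNCHRONOUS)

    # One point: a measurement name, tags for context, a field value and a timestamp.
    point = (
        Point("furnace_temperature")
        .tag("site", "plant-01")
        .tag("line", "kiln-3")
        .field("celsius", 1042.7)
        .time(datetime.now(timezone.utc), WritePrecision.NS)
    )

    write_api.write(bucket="industrial-iot", record=point)
    client.close()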

Low latency Linux kernel for industrial embedded systems - Part III

Welcome to the concluding chapter of this three-part blog series on the low latency Ubuntu kernel for industrial embedded systems. Each blog is standalone and can be read independently of the others, although you may want to start at the beginning for continuity. If you need a quick refresher on userland and kernel space, we recommend checking out Part I first.

Minimize downtime and improve performance for Verizon 5G Edge applications with Sumo Logic

It is safe to say that customers and enterprises have come to expect their digital experiences to be near instantaneous. Fifty-three percent of consumers will wait no more than three seconds for a web page to render before abandoning the site. But new technologies, like connected vehicles, AR/VR, and industrial automation, are pushing the limits of what traditional architectures can handle when it comes to delivering ultra-low latency.

HPC workloads on Robin Cloud Native Platform (CNP) using Nvidia GPU (MIG A100)

In today’s world, graphics processing units (GPUs) have attracted a lot of attention as the optimal vehicle for running artificial intelligence (AI), machine learning (ML) and deep learning (DL) workloads. These workloads require massive amounts of data, ultra-high-speed and parallel processing, and both flexibility and high availability. It is clear that high-performance computing (HPC) systems built on GPUs are required to support such cutting-edge workloads.

Best Practices to Secure Your Corporate-Owned Personally Enabled (COPE) Android Devices

COPE stands for Corporate-Owned, Personally Enabled: devices that the company owns and provides for work but that employees are also expected to use for personal purposes. The term is especially relevant today, with the adoption of the Everywhere Workplace, as companies give employees more freedom on corporate-owned and controlled devices.

Low latency Linux kernel for industrial embedded systems - Part II

Welcome to Part II of this three-part blog series on adopting the low latency Ubuntu kernel for your embedded systems. In case you missed it, check out Part I for a brief intro to preemptible processes in multiuser systems and the memory split between kernel and user space. The low-latency Ubuntu kernel ships with a 1000 Hz tick timer granularity (CONFIG_HZ_1000) and the maximum preemption (CONFIG_PREEMPT) available in the mainline Linux kernel.
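
To make those two options concrete, here is a small sketch, not taken from the blog series, that reads the kernel configuration from the usual Ubuntu locations and reports whether CONFIG_HZ_1000 and CONFIG_PREEMPT are enabled on the running kernel.

    # Sketch: check whether the running kernel was built with the low-latency
    # options mentioned above. Paths are the typical Ubuntu locations; adjust
    # them if your distribution stores the kernel config elsewhere.
    import gzip
    import os
    import platform

    def load_kernel_config():
        """Return the kernel configuration as a dict of CONFIG_* options."""
        candidates = [
            f"/boot/config-{platform.release()}",  # standard on Ubuntu installs
            "/proc/config.gz",                     # available if CONFIG_IKCONFIG_PROC=y
        ]
        for path in candidates:
            if os.path.exists(path):
                opener = gzip.open if path.endswith(".gz") else open
                with opener(path, "rt") as f:
                    lines = f.read().splitlines()
                return dict(
                    line.split("=", 1)
                    for line in lines
                    if line.startswith("CONFIG_") and "=" in line
                )
        return {}

    config = load_kernel_config()
    for option in ("CONFIG_HZ_1000", "CONFIG_PREEMPT"):
        print(option, "=", config.get(option, "not set"))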

Expert believes machine learning can improve after failing for Covid

Machine learning and artificial intelligence (AI) systems have long been touted as the future of medicine. A patient can walk into a doctor's office, get a quick scan that reveals their risk for a variety of diseases, and be given information on how to prevent those diseases from occurring. Patients suffering from diseases like cancer can have treatment decisions made by an AI that optimizes care and maximizes the likelihood of survival.