The Impact of AI on the Data Analyst

The introduction of AI, automation, and data storytelling to the world of analytics has had an immediate impact not only on the end users of analytics but also on the people who work in the field. While many analysts may fear being replaced by automation and AI, Glen Rabie, CEO of Yellowfin, believes that the role of the data analyst will grow in both its significance to the business and the breadth of skills required.

Ghost Inspector

Automated UI Testing for WordPress

Many websites, and even online applications, are built on top of a CMS. According to recent survey data, WordPress has a 60% market share, making it by far the most popular CMS; the next closest competitor, Joomla, has only 5.2%. But unlike bespoke software, many WordPress websites are never tested. While the core of WordPress is fairly well tested by its creators, users, and the open source community, the same cannot be said for every plugin and theme.
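A lightweight first step before full UI testing is an HTTP smoke test of a site's standard WordPress endpoints. The sketch below is illustrative only, not Ghost Inspector's API; the endpoint list is an assumption about a default WordPress install, and the base URL is a placeholder.

```python
# Minimal WordPress smoke test: request a few standard endpoints and
# report their HTTP status codes. The path list is an assumption about
# a typical WordPress install, not an exhaustive check.
import urllib.request
import urllib.error

# Paths present on most default WordPress installs (assumption).
WP_PATHS = ["/", "/wp-login.php", "/feed/", "/wp-json/"]

def wp_smoke_urls(base_url: str) -> list[str]:
    """Build the full list of URLs to smoke-test for a site."""
    return [base_url.rstrip("/") + path for path in WP_PATHS]

def check_url(url: str, timeout: float = 10.0) -> int:
    """Return the HTTP status code for a GET request to `url`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status
    except urllib.error.HTTPError as err:
        # A 4xx/5xx response still tells us the endpoint answered.
        return err.code

# Usage (against your own site):
#   for url in wp_smoke_urls("https://example.com"):
#       print(url, check_url(url))
```

A check like this catches a broken plugin that takes down a page outright; it will not catch rendering or JavaScript failures, which is where browser-level UI testing comes in.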


5 Biggest Mistakes in IoT PoCs

Let me start by saying I love proofs of concept (PoCs), especially when Splunk is involved. PoCs allow me to validate the technical feasibility of our platform with customers and remove any doubts about implementing the technology. While a PoC is an effective way for businesses to evaluate new technologies, you may encounter pitfalls if you're not well prepared. If you want to give your PoC the best chance of success, it's important to understand these common mistakes and how to avoid them.


jsDelivr is now handling all load-balancing with FlexBalancer

jsDelivr, the first and only free multi-CDN for open source projects, is now using PerfOps FlexBalancer to load-balance between its sponsoring CDNs. Until now, jsDelivr had used Cedexis Openmix for its multi-CDN load balancing. After weighing the ease of migration and the benefits of FlexBalancer, jsDelivr decided it was time for a change.


Stop shackling your data scientists: tap into the dark side of ML/AI models

Developing Artificial Intelligence and Machine Learning models comes with many challenges. One of those challenges is understanding why a model acts in a certain way. What's really happening behind its 'decision-making' process? What causes unforeseen behavior in a model? To offer a suitable solution, we must first understand the problem. Is it a bug in the code? A structural error within the model itself? Or, perhaps, a biased dataset?
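One simple probe into a model's 'decision-making' is permutation importance: shuffle one input feature and measure how much accuracy drops. A large drop means the model leans on that feature; near zero means it ignores it. The toy model and data below are illustrative assumptions, not anything from the article.

```python
# Permutation importance sketch: shuffle one feature column and measure
# the resulting drop in accuracy against the unshuffled baseline.
import random

def accuracy(model, X, y):
    """Fraction of rows where the model's prediction matches the label."""
    return sum(model(row) == label for row, label in zip(X, y)) / len(y)

def permutation_importance(model, X, y, feature, seed=0):
    """Baseline accuracy minus accuracy after shuffling one feature column."""
    rng = random.Random(seed)
    base = accuracy(model, X, y)
    shuffled_col = [row[feature] for row in X]
    rng.shuffle(shuffled_col)
    X_perm = [row[:feature] + [v] + row[feature + 1:]
              for row, v in zip(X, shuffled_col)]
    return base - accuracy(model, X_perm, y)

# Toy "model" that only looks at feature 0 (an assumption for illustration).
model = lambda row: int(row[0] > 0.5)
X = [[i / 10, (9 - i) / 10] for i in range(10)]
y = [int(x0 > 0.5) for x0, _ in X]

print(permutation_importance(model, X, y, feature=0))  # accuracy drops
print(permutation_importance(model, X, y, feature=1))  # 0.0: feature unused
```

Probes like this don't explain *why* a model decided something, but they do reveal which inputs it actually depends on, which is often the first clue when hunting for bias in a dataset.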