The latest News and Information on DevOps, CI/CD, Automation and related technologies.
With automation and CI/CD practices, the entire AI workflow can be run and monitored efficiently, often by a single expert. Still, running AI/ML on GPU instances has its challenges. This tutorial shows you how to meet those challenges using the control and flexibility of CircleCI runners combined with Scaleway, a powerful cloud ecosystem for building, training, and deploying applications at scale.
In a traditional DevOps implementation, you automate the build, test, release, and deploy process by setting up a CI/CD workflow that runs whenever a change is committed to a code repository. This approach is also useful in MLOps: committing a change to your machine learning code can trigger the workflow. But what about changes that happen outside of your code repository?
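As a minimal sketch of the commit-triggered workflow described above, a CircleCI configuration might look like the following. The executor image, job name, and commands are illustrative assumptions, not taken from any specific project:

```yaml
# .circleci/config.yml -- illustrative sketch; job and command names are assumptions
version: 2.1
jobs:
  build-and-test:
    docker:
      - image: cimg/python:3.11        # example executor image
    steps:
      - checkout                       # pull the committed change
      - run:
          name: Install dependencies
          command: pip install -r requirements.txt
      - run:
          name: Run tests
          command: pytest
workflows:
  on-commit:
    jobs:
      - build-and-test                 # runs whenever a change is pushed
```

Because the workflow is keyed to repository events, it covers code changes automatically; changes that originate outside the repository (new training data, drifting model performance) need a separate trigger, which is the gap the article goes on to address.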
This week, NVIDIA unveiled what it is calling “the world’s most powerful GPU for supercharging AI and HPC workloads,” the H200 Tensor Core GPU. There is much hype around the H200 because it is the first GPU with HBM3e. Its larger and faster memory will further enable generative AI and large language models, and will advance scientific computing for HPC workloads. Read the NVIDIA press release.
The cognitive bias known as the streetlight effect describes our human tendency to look for clues where it’s easiest to search, regardless of whether that’s where the answers are. For decades in the software industry, we’ve focused on testing our applications under the reassuring streetlight of GitOps. It made sense in theory: wait for engineers to change the codebase, then trigger a re-test of your code. If your tests pass, you’re good to go.