The latest news and information on DevOps, CI/CD, automation, and related technologies.
In a traditional DevOps implementation, you automate the build, test, release, and deploy process by setting up a CI/CD workflow that runs whenever a change is committed to a code repository. This approach is also useful in MLOps: if you change the machine learning logic in your code, that change can trigger your workflow. But what about changes that happen outside of your code repository?
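One common way to catch changes that happen outside the repository is to fingerprint the external artifact (a training dataset, a feature table, a model config) and compare it to the hash recorded on the last pipeline run. The sketch below is a minimal illustration of that idea in Python; the state-file name and function names are hypothetical, not from any particular MLOps tool.

```python
import hashlib
import json
from pathlib import Path

# Hypothetical sketch: detect out-of-repo changes (e.g., a refreshed
# training dataset) by hashing the artifact and comparing against the
# hash recorded the last time the pipeline ran.

STATE_FILE = Path("last_seen_hash.json")  # illustrative name

def fingerprint(data: bytes) -> str:
    """Content hash of the external artifact (dataset, config, ...)."""
    return hashlib.sha256(data).hexdigest()

def should_trigger(data: bytes, state_file: Path = STATE_FILE) -> bool:
    """Return True if the artifact changed since the last recorded run,
    and record the new hash so the next check compares against it."""
    current = fingerprint(data)
    previous = None
    if state_file.exists():
        previous = json.loads(state_file.read_text()).get("hash")
    if current != previous:
        state_file.write_text(json.dumps({"hash": current}))
        return True
    return False
```

In practice a check like this would run on a schedule (a cron-style CI job), kicking off the full ML pipeline only when `should_trigger` reports a change, so retraining follows the data rather than the commit log.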
This week, NVIDIA unveiled what it is calling “the world’s most powerful GPU for supercharging AI and HPC workloads,” the H200 Tensor Core GPU. There is much hype around the H200, as it is the first GPU with HBM3e. The larger and faster memory will further enable generative AI and large language models, and advance scientific computing for HPC workloads. Read the NVIDIA press release.
The cognitive bias known as the streetlight effect describes our desire as humans to look for clues where it’s easiest to search, regardless of whether that’s where the answers are. For decades in the software industry, we’ve focused on testing our applications under the reassuring streetlight of GitOps. It made sense in theory: wait for changes to the codebase made by engineers, then trigger a re-test of your code. If your tests pass, you’re good to go.
You’ve done your research and decided to use the DevOps approach for your software development process and IT operations. However, before you start tossing around terms like “continuous integration” and “containerization,” there’s an important first step on your DevOps journey: creating an implementation roadmap.
How many software pipelines are required to keep your business running? How many of those build, test, release, or deploy code? The most likely answer here is “I don’t know…,” followed by a set of mental gymnastics to try to determine a rough number that is most likely incorrect.