
Delivering on the DevOps Promise with ITOA

February 10, 2015 | Author: Amir Halfon

In our last blog post, we talked about the emergence of a new generation of IT Operations Analytics (ITOA) tools, which empower sysadmins through Big Data analytics. Today we want to expand the discussion beyond operations and talk about the relevance of ITOA to developers, specifically from a DevOps perspective.

DevOps is becoming a key concern for many CIOs, as it’s seen as an enabler for driving IT agility and accelerated delivery in response to business change. There’s a growing recognition that a systematic, well-thought-out approach is needed, one that includes continuous monitoring and insight as an inherent component. In a sense, continuous insight is what makes continuous delivery possible, and continuous delivery is the linchpin of DevOps (as expressed so eloquently in the diagram Key DevOps Patterns and Practices, from Cameron Haight, Senior Research VP at Gartner). This raises the question: how do you make continuous monitoring of everything a reality?

This question is especially relevant in the face of all the different silos of monitoring tools and event and log data across the enterprise. Networks, servers, applications, and transactions are each monitored with different tools, and their logs and metrics are stored in different places and analyzed by different systems. This makes it all but impossible to get the comprehensive view that’s required to provide accurate feedback when deploying, upgrading, or modifying an application.

What’s needed is a centralized event data warehouse – one place to store all IT data. An event data warehouse combines information from across these silos, not necessarily replacing them, but allowing open access to all the data and providing predictive analytics and intelligent guidance on top of it. Both of these – large-scale data management and deep analytics – are the essential capabilities of ITOA, and the reason we consider it a key enabler for DevOps.
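To make the idea of consolidation more concrete, here is a minimal Python sketch of what normalizing siloed event data into one shared schema might look like. The field names (timestamp, source, host, service, message) and the two input formats are invented for illustration; they are not the data model of any particular product.

```python
import json
from datetime import datetime, timezone

def normalize_syslog(line: str) -> dict:
    """Map a raw syslog line onto the shared event schema (hypothetical format)."""
    # e.g. "2015-02-10T12:00:00Z host1 sshd: Failed password for root"
    timestamp, host, rest = line.split(" ", 2)
    service, message = rest.split(": ", 1)
    return {"timestamp": timestamp, "source": "syslog",
            "host": host, "service": service, "message": message}

def normalize_app_log(record: dict) -> dict:
    """Map a structured application log record onto the same schema."""
    return {"timestamp": datetime.fromtimestamp(record["time"], tz=timezone.utc).isoformat(),
            "source": "application",
            "host": record.get("host", "unknown"),
            "service": record["app"],
            "message": record["level"] + ": " + record["msg"]}

events = [
    normalize_syslog("2015-02-10T12:00:00Z host1 sshd: Failed password for root"),
    normalize_app_log({"time": 1423569600, "host": "host2", "app": "checkout",
                       "level": "ERROR", "msg": "payment gateway timeout"}),
]

# Once every source shares one schema, a single query can span the old silos.
for event in events:
    print(json.dumps(event))
```

The point of the sketch is the design choice, not the parsing details: once everything lands in one schema in one store, cross-silo questions become single queries rather than manual correlation exercises.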

But in order to fully realize the DevOps promise, ITOA needs to extend beyond mere guidance and take into account how users respond to the issues and alerts the system raises – do they mark them as valid or invalid? The system needs to learn from these responses in order to reduce false positives and improve the accuracy of future predictions. In other words, machine learning and predictive analytics should be an integral part of the system in order to provide the accurate, timely feedback required to make DevOps a reality.
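As a rough illustration of such a feedback loop, the hypothetical sketch below uses scikit-learn’s SGDClassifier as an online learner (assuming a recent scikit-learn release): each time a user marks an alert as valid or invalid, the model is updated incrementally, so the confidence it assigns to future alerts reflects that feedback. The feature vector and the simulated responses are invented for illustration and do not describe how any specific ITOA product works.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# Online alert classifier: updated incrementally as users mark alerts
# valid (1) or invalid (0). The per-alert feature vector is illustrative:
# [error_rate, normalized_latency, cpu_utilization], each scaled to 0..1.
model = SGDClassifier(loss="log_loss", random_state=42)
CLASSES = np.array([0, 1])

def record_feedback(features, user_verdict):
    """Fold one user response (valid/invalid) back into the model."""
    model.partial_fit(np.array([features]), np.array([user_verdict]), classes=CLASSES)

def alert_confidence(features):
    """Estimated probability that a new alert is valid, given past feedback."""
    return model.predict_proba(np.array([features]))[0, 1]

# Simulated feedback: analysts confirm severe alerts and dismiss noisy ones.
record_feedback([0.90, 0.85, 0.95], 1)  # confirmed incident
record_feedback([0.10, 0.05, 0.20], 0)  # dismissed as a false positive
record_feedback([0.80, 0.70, 0.90], 1)
record_feedback([0.15, 0.10, 0.25], 0)

print("confidence:", round(alert_confidence([0.85, 0.80, 0.92]), 2))
```

In a real deployment the same idea would operate at far larger scale, with features drawn from the event data warehouse itself, but the loop is the same: user verdicts flow back into the model, and false positives get progressively squeezed out.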

How do you build such a system? Our CTO, Eric Sammer, will be presenting our approach, from source to solution, during his session at the upcoming Strata + Hadoop World conference in San Jose, and we will be blogging about it in the next installment as well. In the meantime, you can email us at info@scalingdata.com if you’d like to discuss further.