Data has become the lifeblood of business operations. New applications make it possible to monitor thousands of systems in real time and proactively alert IT within milliseconds of a failure. These applications apply analytics to large volumes of data in order to explain what is happening within a company’s systems. With more data available to operational applications, algorithms can better detect system problems and recommend solutions with minimal human effort. As a result, operators gain more meaningful visibility into how well systems are functioning and can deliver a higher quality of service in increasingly complex environments. Gaining this kind of operational visibility in complex systems requires orders of magnitude more data than legacy solutions currently collect.
In large organizations, each operations team is focused on its own responsibilities and priorities. Working in purpose-built applications, each team has limited visibility into how other parts of the organization operate. Businesses end up with operational tool silos that mirror their organizational structures. Applications for triage and diagnostics, application performance monitoring, security, fraud prevention, operations and management, support, and marketing all collect, store, and analyze data in separate systems. Just like the business processes they support, each operations tool collects its own data from the same systems as every other tool.
These tools each store their own copy of full-fidelity data for a short period of time and then sample the data to retain a summary for reporting and analysis. The result is multiple copies of the same data in different, incompatible systems and no visibility between them. As data continues to grow, silos continue to sprawl, and the cost of data storage grows as a multiple of the number of data silos.
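The multiplication effect is easy to see with some back-of-the-envelope arithmetic. A minimal sketch (the ingest rate, retention window, and silo count below are hypothetical numbers chosen purely for illustration):

```python
def total_storage_tb(daily_ingest_tb: float, retention_days: int, num_silos: int) -> float:
    """Full-fidelity storage needed when every silo keeps its own copy
    of the same data for the same retention window."""
    return daily_ingest_tb * retention_days * num_silos

# Hypothetical example: 2 TB/day of machine data, kept at full
# fidelity for 7 days, copied into 6 separate operations tools.
siloed = total_storage_tb(2.0, 7, 6)   # six independent copies
single = total_storage_tb(2.0, 7, 1)   # one shared repository

print(siloed)  # 84.0 TB across silos
print(single)  # 14.0 TB in a single repository
```

Under these assumptions, consolidating six silos into one repository cuts raw storage six-fold before any compression or summarization is applied.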
The current model of collecting data into disparate IT tools is not scaling to meet business needs. The growing complexity of front-line business applications is putting a massive burden on IT that can only be relieved with smarter IT applications. Even if legacy tools were updated to handle data more intelligently, IT could not afford to keep multiple copies of the same data.
This data-centric approach to operations management is starkly different from how today’s inadequate tools operate. Massively increased amounts of data can more accurately inform algorithms and increase visibility and productivity for IT. To handle this volume of data, IT applications must use new infrastructure architected around a single repository of all IT data instead of multiple silos of the same data. This new class of IT operations applications will use the larger data set to inform operators about correlations and anomalies across the thousands of systems running the business.
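To make the anomaly-detection idea concrete, here is a minimal sketch of one common technique, z-score outlier detection over a stream of metric samples. This is an illustrative method of our own choosing, not a description of any particular product, and the sample values and threshold are hypothetical:

```python
import statistics

def zscore_anomalies(values: list[float], threshold: float = 2.0) -> list[int]:
    """Return the indices of samples whose z-score (distance from the
    mean, measured in standard deviations) exceeds the threshold."""
    mean = statistics.mean(values)
    stdev = statistics.stdev(values)
    if stdev == 0:
        return []  # a perfectly flat series has no outliers
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > threshold]

# Hypothetical latency samples (ms) from one monitored system;
# the spike at index 5 is flagged as anomalous.
samples = [10, 11, 9, 10, 10, 50, 10, 11]
print(zscore_anomalies(samples))  # [5]
```

With a single repository, the same routine can run across the full-fidelity data from every system at once, which is exactly what siloed tools that keep only short-lived summaries cannot do.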
We continue to speak with CIOs who need a new class of IT operations applications to effectively run their businesses, assure quality of service, and shorten mean time to resolution. Here at ScalingData, we believe these new applications should leverage the same principles that power front-line big data applications: machine learning, predictive models, and personalization algorithms, all fed by ever greater amounts of data. This paradigm shift will require eliminating silos in the data center and creating a single data repository.
What data collection and storage trends are you seeing in your organization? We at ScalingData (@scalingdata) would love to hear what you are doing about silos and duplicates. Email or tweet us!