There are more than 10 million idle servers worldwide, taking up space, soaking up power, and in many cases still costing companies money in maintenance and support. That translates to $30 billion annually in wasted computing power.
IT operations can be far more efficient if teams pay more attention to the data these systems are collecting. Most companies have huge volumes of data from data center operations at their disposal, both historical and real-time, collected by numerous data center management tools across the IT landscape.
Collectively, this data can provide both broad and deep insight into the data center. That insight can help IT ensure that it is meeting the performance and availability requirements of the business while also eliminating waste and optimizing data center spend. Unfortunately, this sort of data is usually scattered across multiple tools in multiple formats. And important segments of the data are unstructured.
Advanced analytics and reporting tools can assist IT in pulling these pieces of data together to create a holistic, 360-degree, living view of the data center. With these tools and techniques, you can access the fragmented data, consolidate it, normalize it, and bring it to a central place where the tools can be used to analyze and report on the data for a variety of business purposes.
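As a quick illustration of that consolidation step, the sketch below merges asset exports from two tools into one normalized table. It is a minimal example under stated assumptions: the file names, column mappings, and field names are hypothetical stand-ins for whatever your discovery and CMDB tools actually export.

```python
# Sketch: consolidate asset exports from multiple tools into one normalized table.
# File names and column mappings are hypothetical; substitute your tools' exports.
import pandas as pd

# Each source uses its own field names for the same concepts.
SOURCES = {
    "cmdb_export.csv":      {"ci_name": "asset", "site": "location", "owner_group": "owner"},
    "discovery_export.csv": {"hostname": "asset", "datacenter": "location", "team": "owner"},
}

frames = []
for path, mapping in SOURCES.items():
    df = pd.read_csv(path)
    df = df.rename(columns=mapping)[["asset", "location", "owner"]]
    df["source"] = path  # keep provenance for later auditing
    frames.append(df)

# One normalized, de-duplicated view of the estate.
inventory = pd.concat(frames, ignore_index=True).drop_duplicates(subset="asset", keep="first")
print(inventory.head())
```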
Here are five practical ways you can apply analytics and reporting to that data to increase efficiency and cut costs, so you can maximize your return on investment in IT assets.
1. Optimally balance resource capacity and usage
Data center management tools, such as automated discovery and the configuration management database (CMDB), capture a wealth of data on physical and virtual compute, network, and storage resources. This data includes what assets are out there, where they’re located, what their dependencies are, who owns them, what business services they support, and what they cost. The data also includes end-to-end views of applications running on those assets as well as services delivered by cloud providers.
Telemetry data offers both historical and real-time usage data on those assets. Some increases in usage are normal and predictable, such as the month-end spike driven by financial processing. Retailers experience seasonal spikes as well as spikes related to sales promotions.
Analytics and reporting tools can use this data to identify usage patterns that reveal how efficiently the organization is using compute, storage, and network capacity. You can leverage that insight to optimize capacity allocation, ensuring that you don’t overbuy capacity while still keeping enough headroom to meet demand.
Predictive analytics enables you to project usage into the future so that you can ensure that precisely the right capacity is available when it’s needed to maintain service levels.
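To make this concrete, here is a minimal sketch of that projection: a simple linear trend fitted to historical utilization per cluster and extended 90 days out. It assumes a hypothetical usage.csv with date, cluster, and cpu_pct columns and a placeholder 80 percent headroom threshold; a seasonal model would be a better fit if month-end or holiday spikes dominate.

```python
# Sketch: project CPU utilization forward from historical telemetry.
# Assumes a hypothetical usage.csv with columns: date, cluster, cpu_pct.
import numpy as np
import pandas as pd

usage = pd.read_csv("usage.csv", parse_dates=["date"])

HEADROOM_LIMIT = 80.0   # placeholder: alert when projected utilization crosses this line
FORECAST_DAYS = 90

for cluster, grp in usage.groupby("cluster"):
    grp = grp.sort_values("date")
    days = (grp["date"] - grp["date"].min()).dt.days.to_numpy()
    # First-order (linear) trend; swap in a seasonal model if periodic spikes dominate.
    slope, intercept = np.polyfit(days, grp["cpu_pct"].to_numpy(), 1)
    projected = slope * (days.max() + FORECAST_DAYS) + intercept
    if projected > HEADROOM_LIMIT:
        print(f"{cluster}: projected {projected:.1f}% CPU in {FORECAST_DAYS} days - plan more capacity")
```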
2. Eliminate wasteful resource allocation
The combination of asset and usage data also enables you to identify waste. Remember those 10 million idle servers worldwide? Analytics and reporting tools can pinpoint physical servers that are running but whose usage is zero. Retiring them or reallocating them to other workloads eliminates waste and reduces spending.
In addition, asset and usage data helps uncover resources that have been allocated more capacity than they need. Organizations moving into virtualization without a clear understanding of the capacity requirements of their virtual machines (VMs) may overallocate capacity to be sure there is enough for the VMs and applications running on a physical server. It’s not uncommon to see a virtual server running on a powerful host even though that server is not doing heavy work and does not require the capacity allocated to it. Moving it to a less powerful host in the data center frees up capacity on the more powerful one.
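The query behind both checks can be very simple once the asset and usage data sit in one table. The sketch below flags idle and overallocated servers; the file name, column names, and thresholds are illustrative assumptions, not fixed rules.

```python
# Sketch: flag idle and overallocated servers from a joined asset/usage table.
# Column names and thresholds are illustrative, not prescriptive.
import pandas as pd

servers = pd.read_csv("server_usage.csv")

# Idle candidates: powered on, but effectively no work over the reporting window.
idle = servers[servers["peak_cpu_pct"] < 1.0]

# Overallocated candidates: far more capacity assigned than has ever been used.
overallocated = servers[servers["used_vcpus"] < 0.25 * servers["allocated_vcpus"]]

print(f"{len(idle)} idle servers to retire or repurpose")
print(f"{len(overallocated)} servers with excess allocated capacity")
```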
3. Move intelligently to the cloud
Many organizations are jumping on the cloud bandwagon because of its perceived cost savings. In many cases, however, the savings are short-lived. An organization may deploy an application to the cloud only to discover that, as the application scales up, costs rise to unacceptable levels. At that point, the organization may opt to bring the application back into the data center.
Analytics and reporting empower you to understand the cost implications before moving an application or service to the cloud. What-if models help determine the true costs of running the workload in the cloud. As a result, you can determine which workloads are good cloud candidates and which are better left on premises. With this information, you can optimize the mix of cloud and on-premises services.
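A what-if model can start as small as the sketch below, which compares on-premises and cloud monthly cost as a workload scales. Every rate and sizing figure here is a placeholder rather than real pricing; the point is the shape of the comparison, not the numbers.

```python
# Sketch: what-if comparison of on-premises vs. cloud cost as a workload scales.
# All rates below are placeholders, not real pricing.
ON_PREM_MONTHLY_PER_SERVER = 450.0   # amortized hardware, power, space, support
CLOUD_HOURLY_PER_INSTANCE = 0.90     # on-demand instance rate
HOURS_PER_MONTH = 730

def monthly_cost(instances: int) -> tuple[float, float]:
    """Return (on_prem, cloud) monthly cost for a given footprint."""
    on_prem = instances * ON_PREM_MONTHLY_PER_SERVER
    cloud = instances * CLOUD_HOURLY_PER_INSTANCE * HOURS_PER_MONTH
    return on_prem, cloud

# Model the application scaling up over time.
for instances in (5, 20, 80):
    on_prem, cloud = monthly_cost(instances)
    cheaper = "cloud" if cloud < on_prem else "on-premises"
    print(f"{instances:3d} instances: on-prem ${on_prem:,.0f} vs. cloud ${cloud:,.0f} -> {cheaper}")
```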
4. Maintain regulatory compliance
Government regulations, industry standards, corporate policies … the list seems almost limitless. Maintaining compliance with all of them is complicated, especially in an environment characterized not only by constant change in the IT infrastructure but also by constantly evolving regulations. Compliance tools continually track and report on compliance. Unfortunately, as with other types of data, the compliance data is often scattered across multiple tools.
With analytics and reporting, you can consolidate compliance data to develop a holistic, up-to-date picture of compliance across the IT environment. With this high-level, comprehensive view, you can quickly identify drifts from compliance and move to rectify them to ensure compliance is maintained.
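One way to build that consolidated picture is to normalize each tool's scan results to a common asset/rule/status shape and then flag anything whose latest result is no longer passing. The sketch below assumes hypothetical CSV exports and column names.

```python
# Sketch: consolidate compliance-scan exports and flag drift from a passing state.
# File names and column names (asset, rule_id, status, scan_time) are illustrative.
import pandas as pd

scans = pd.concat(
    [pd.read_csv(f) for f in ("pci_scan.csv", "cis_scan.csv", "policy_scan.csv")],
    ignore_index=True,
)

# Drift: any asset/rule pair whose most recent result is no longer "pass".
latest = (
    scans.sort_values("scan_time")
         .groupby(["asset", "rule_id"], as_index=False)
         .last()
)
drift = latest[latest["status"] != "pass"]
print(drift[["asset", "rule_id", "status"]])
```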
5. Stay agile in meeting the needs of the business
IT organizations have to be quick on their feet to meet the demands of the business. Say a particular business unit sets a goal to double sales within the next year. That would mean a major usage increase for the IT assets supporting that business unit: web servers, application servers, order processing and invoicing systems, and more. IT has to ensure that it can meet that usage increase without degrading service quality, and meet it cost-effectively.
Here again, analytics and reporting help by leveraging data that is already available. IT can, for example, model a doubling of usage across the affected assets and determine the capacities required to meet that usage increase.
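A first pass at that model can be as plain as scaling observed peaks by the expected growth factor and comparing them against known limits, as in the sketch below. The file name, column names, and the 85 percent CPU ceiling are illustrative assumptions.

```python
# Sketch: model a doubling of usage across the assets supporting one business unit.
# Column names and the 85% CPU ceiling are illustrative assumptions.
import pandas as pd

assets = pd.read_csv("business_unit_assets.csv")
GROWTH = 2.0  # business unit plans to double sales, so assume demand roughly doubles

assets["projected_cpu_pct"] = assets["peak_cpu_pct"] * GROWTH
assets["projected_iops"] = assets["peak_iops"] * GROWTH

# Anything projected past its limits needs added capacity before the growth arrives.
at_risk = assets[
    (assets["projected_cpu_pct"] > 85) | (assets["projected_iops"] > assets["max_iops"])
]
print(at_risk[["asset", "projected_cpu_pct", "projected_iops"]])
```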
You've got a wealth of operational data
Imagine you’re a corporate vehicle fleet manager responsible for thousands of vehicles scattered across the country, but you have no overall visibility into the fleet. You don’t know how many vehicles your company has, where they’re located, or how they’re being used. All you have to go on are piecemeal reports submitted by various regional managers.
How can you possibly optimize the fleet to meet the needs of the business? Some vehicles are likely over-utilized, resulting in service degradation. Others might be under-utilized, or not used at all. Some may not even be on your radar screen. The result is gross inefficiency and waste.
This is why savvy IT organizations are beginning to use analytics and reporting to leverage the data they already have to uncover and exploit opportunities for data center optimization.
Your organization probably already has a wealth of operational data that can reveal dozens of opportunities for driving efficiency. With analytics and reporting, you can tap into that data and gain valuable insight into your “fleet” of IT assets. With that insight, you can optimize the use of those assets to maximize operational efficiency, reduce costs, and increase IT’s ability to meet the needs of the business.