Have you found yourself expecting more performance out of your host, only to find that you have reached or exceeded the maximum number of cycles your physical server can deliver? Don’t feel bad; it happens to many users, either because of a lack of capacity planning up front or because the loads on the VMs have grown heavier over time and outgrown the physical server. Either way, good capacity planning for performance, combined with monitoring that performance over time to know when more resources are needed, will help prevent you from ending up in this situation.
Capacity planning helps ensure not only good performance from the start but also helps you plan for increases in demand. The best way to capacity plan is to understand the workload of your current infrastructure and gauge what resources are needed. Whether you are moving VMs to new servers or moving from a physical to a virtual environment, there are important steps to follow:
Analyzing all resource dimensions of your current environment is very important, especially for each tier 1 application if possible. There are many tools that will not only help you collect and monitor performance information but can also help you analyze the data (see the next section). Key workload measures have to do with the supply and demand of resources: for example, whether latency is low, whether a resource is 100% utilized, and which objects are being starved.
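As a rough illustration of pairing those supply-and-demand measures, the sketch below flags resources that are both highly utilized and showing elevated latency, a combination that usually indicates starvation. The sample data and the 90% / 20 ms cutoffs are illustrative assumptions, not vendor recommendations.

```python
def summarize(samples):
    """samples: list of (resource_name, utilization_pct, latency_ms) tuples.

    Returns the names of resources that look starved: near-saturated
    utilization paired with high latency.
    """
    starved = []
    for name, util, latency in samples:
        if util >= 90 and latency >= 20:  # illustrative thresholds
            starved.append(name)
    return starved

samples = [
    ("vm01-cpu", 95, 35),   # saturated and slow: likely starved
    ("vm01-disk", 40, 5),   # healthy headroom
    ("vm02-cpu", 92, 28),
]
print(summarize(samples))  # -> ['vm01-cpu', 'vm02-cpu']
```

The point is that neither measure alone tells the story: a 100%-utilized resource with low latency may simply be well used, while rising latency on a saturated resource means demand has outstripped supply.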
The analysis should answer certain questions, such as:
Verifying that no resources are over-utilized ensures that you are right-sizing the hardware for the software. Also verify that best practices are being applied. Many tools allow threshold setting to help keep resources within healthy limits, or to raise an alarm and notify you when resources cross those thresholds.
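To make the threshold idea concrete, here is a minimal sketch of the kind of check-and-notify loop monitoring tools provide out of the box. The threshold values and the notify() stub are illustrative assumptions; a real tool would send email, SNMP traps, or tickets instead of printing.

```python
# Illustrative healthy limits per metric (assumed values, not best practices).
THRESHOLDS = {"cpu_pct": 85, "mem_pct": 90, "disk_latency_ms": 25}

def notify(resource, metric, value, limit):
    # Stand-in for a real notification channel (email, SNMP, ticketing).
    print(f"ALERT: {resource} {metric}={value} exceeds {limit}")

def check(resource, metrics):
    """Return the list of metrics that crossed their healthy limits."""
    breaches = []
    for metric, value in metrics.items():
        limit = THRESHOLDS.get(metric)
        if limit is not None and value > limit:
            notify(resource, metric, value, limit)
            breaches.append(metric)
    return breaches

print(check("host01", {"cpu_pct": 91, "mem_pct": 70, "disk_latency_ms": 30}))
```

Run periodically against collected samples, a loop like this is what keeps right-sizing honest after the initial plan is in place.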
Without tools to monitor performance, meeting SLAs becomes a guessing game, especially for virtualized infrastructures. Good ongoing performance monitoring is a great way to ensure that SLAs are continuously met. It is important to monitor both physical and virtual infrastructure holistically, since bottlenecks tend to move around and can cause issues in unexpected places. There are many products on the market today, both open and proprietary, that monitor entire IT infrastructures holistically. Plenty of established tools monitor server, networking, and storage performance individually, but the latest products, which monitor the infrastructure as a whole to identify root-cause bottlenecks, are best, especially for large datacenter environments. A good number of these tools have added analytics that make modeling and analysis much easier, so you can predict trends and head off performance issues and threats to SLAs before they happen.
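The trend-prediction analytics mentioned above can be as simple as fitting a line to historical utilization and projecting when it will cross a threshold. The sketch below does exactly that; the weekly data points and the 85% limit are illustrative assumptions, and real tools use far richer models.

```python
def linear_fit(points):
    """Least-squares fit of y = a*x + b over a list of (x, y) points."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    b = (sy - a * sx) / n
    return a, b

# Weekly CPU-utilization samples: (week number, percent used). Assumed data.
history = [(0, 52), (1, 55), (2, 59), (3, 61), (4, 66)]
slope, intercept = linear_fit(history)

# Project when utilization will cross an 85% threshold.
weeks_to_limit = (85 - intercept) / slope
print(round(weeks_to_limit, 1))  # -> 9.8
```

Even this crude projection turns monitoring data into lead time: instead of reacting to an alarm, you know roughly how many weeks of headroom remain.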