
With the active deployment of virtualization in the University of Agriculture, Makurdi data center, the allocation and scheduling of virtual CPUs raises new challenges and affects system performance, owing to workload fluctuations and changing needs. The objectives were to deploy a typical data center using the VMware hypervisor and to improve the reliability of the Zhikui et al. (2009) performance model by considering the number of cores available to a virtual machine as well as custom workloads in a production environment. In this work, the researcher presents a model to predict the performance of consolidated virtual machines hosted on VMware hypervisor 5.1, using Ubuntu 12.1 and Windows Server 2008 as guest operating systems. Using a CPU-controlled approach with 20%, 40%, 60%, 80% and 100% allocations, and workload generators on test beds and production environments, data were collected at peak periods during student registration, using basic system performance metrics on the virtual machines at 60-second intervals, and analyzed with the SPSS package. Empirical results showed that the proposed model's predicted values are closer to the observed values than those of Zhikui et al. (2009) for the three (3) workloads considered. Thus, by varying the CPU allocation based on business needs, the model predicted the response time as observed in the production environment.

Chapter One



Internet and business applications are increasingly being moved to large data centers that hold massive server and storage clusters. Current data centers can contain tens or hundreds of thousands of servers, and plans are already being made for data centers holding over a million servers (Appleby et al., 2001). Some data centers are built to run applications for a single company, such as the search engine clusters run by Google. Other data centers are operated by service providers that rent storage and computation resources to other customers at very low cost due to their large scale.

Virtualization in an IT environment is essentially the isolation of one computing resource from others. By separating the different layers in the logic stack, you enable greater flexibility and simplify change management. Although running services on dedicated servers achieves performance isolation, it also leads to lower resource utilization, higher operational costs and wasted power. There is therefore a strong incentive to consolidate hardware and services, which is what virtualization technology provides (Cong-Feng et al., 2011).

A recent focus on reducing the economic costs of information technology (IT) motivates increased resource sharing and on-demand computing. Towards this, virtualization technologies enable IT resources to be dynamically allocated among multiple applications. Such a model empowers organizations to flex their computing resources based on workloads and business needs, and hence improve the efficiency of IT operations (Kephart et al., 2007).

According to Rolia et al. (2005), virtualization can be used to "slice" a single physical host into one or more virtual machines (VMs) that share its resources. This can be useful in a hosting environment where customers or applications do not need the full power of a single server. In such a case, virtualization provides an easy way to isolate and partition server resources. The abstraction layer between the VM and its physical host also allows for greater control over resource management (Heo et al., 2007).

As data centers continue to deploy virtualized services, new problems have emerged such as determining optimal VM placements and dealing with virtualization overheads. At the same time, virtualization allows for new and better solutions to existing data center problems of power and maintenance cost of physical servers by allowing for rapid, flexible resource provisioning.


Resource contention is intensified in virtualized environments due to the consolidating nature of virtualization. Hence, the problem is how to minimize the allocation of CPU server resources to an IT service (or application) while satisfying Service Level Objectives (SLOs).

This work is an improvement on the approach taken by Zhikui et al. (2009) in modelling application performance in virtual machines. Zhikui et al. (2009) proposed the model

T = Cm / (em − λ·Cm)



where T is the response time, λ is the workload (request arrival rate), Cm is the CPU consumption per request in tier m, and em is the CPU resource allocated to tier m. The model predicts the resource demand needed to meet an application-level performance requirement based on the workload's transaction-mix history. However, the following problems were identified:
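The tier model above can be sketched in a few lines of code. This is an illustrative implementation under the assumption that the model takes the queueing form T = Cm / (em − λ·Cm); the function and variable names are the author's of this sketch, not from Zhikui et al. (2009).

```python
# Hedged sketch of the per-tier response-time model T = Cm / (em - lambda*Cm).
# Names are illustrative; units assume seconds of CPU time and requests/second.

def predicted_response_time(workload, consumption, allocation):
    """workload: arrival rate lambda (req/s); consumption: CPU demand per
    request Cm (s); allocation: CPU capacity em given to the tier."""
    utilization = workload * consumption / allocation
    if utilization >= 1.0:
        # The tier cannot keep up: demand meets or exceeds allocated capacity.
        raise ValueError("tier is saturated: lambda * Cm >= em")
    return consumption / (allocation - workload * consumption)

# Example: 10 req/s, 20 ms of CPU per request, 40% of a core allocated.
t = predicted_response_time(workload=10.0, consumption=0.02, allocation=0.4)
print(round(t, 3))  # 0.1 (seconds)
```

Note how the predicted response time grows without bound as λ·Cm approaches em, which is why the allocation must track the workload.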

  1. The approach failed to maximize server resources through consolidation.
  2. The approach failed to consider other hypervisors (e.g. VMware, 2012) where the available CPU cores can be shared among virtual machines.
  3. The design architecture in Zhikui et al. (2009) does not mimic a typical data center.
  4. The approach failed to consider hidden requests/transactions being processed by the CPU, especially on Windows guest operating systems where antivirus and other services are running.
  5. The Zhikui et al. (2009) model was not tested online on a production server to accommodate real user experience.
  6. The model assumed that a request is held at only one tier.
  7. Measuring response time on the client is associated with too many factors that affect performance (network availability and speed, the client system and its configuration, geographical location).

This research intends to address the challenges mentioned above.


As application portfolios expand to meet the target of making virtually all processes IT-based, budgets have tightened as well. The best approach is to increase server utilization and reduce costs using virtualization strategies that consolidate servers and pool IT resources, allowing better control while increasing the flexibility of the IT infrastructure (Krishnamurthy et al., 2006).

One of the challenges of this technology lies in the fact that many of the application servers, which are themselves easily virtualized, are attached to a myriad of storage devices that vary greatly in type and size. Monitoring the health of the virtual environment is crucial, as there is an expressed concern that the large increase in traffic to the data center may affect the overall performance experienced by the end user. Network bandwidth and overloaded virtual environments have also been cited as concerns that affect end-user response times.

Data center managers therefore have the responsibility to monitor user experience, and also to monitor the technical environment to determine whether performance in terms of application response time begins to deteriorate, in order to respond to issues that negatively impact the users' experience. This condition prompts questions such as: how can we efficiently manage server resources despite highly varying application workloads? This research attempts to provide solutions to these issues.


The primary goal of the service level objectives (SLOs) in a virtualized data center is to ensure appropriate performance of the corresponding IT service.

Most previous research has considered only average performance (e.g., mean response time). However, average performance guarantees are not sufficient for many applications, especially interactive ones. Hence, this research focuses on improving the Zhikui et al. (2009) model to optimize the performance of applications in a virtualized data center. This research model is important for the following reasons:

  • Capacity provisioning: enables a server farm to determine how much CPU capacity to allocate to an application in order for it to service its peak workload.
  • Performance prediction: enables the response time of the application to be determined for a given workload and a given hardware and software configuration.
  • Application configuration: enables various configuration parameters of the application to be determined for a certain performance goal.
  • Bottleneck identification and tuning: enables system bottlenecks to be identified for purposes of tuning.
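The capacity-provisioning use above follows directly from the tier model: if T = Cm / (em − λ·Cm), the allocation needed to hold a target response time at peak workload can be found by solving for em. A minimal sketch, assuming that algebraic form of the model (the function name and example numbers are illustrative, not from the thesis):

```python
# Hedged sketch: invert T = Cm / (em - lambda*Cm) to get the smallest CPU
# allocation em that keeps the predicted response time within a target.

def required_allocation(peak_workload, consumption, target_response):
    """Solve T = Cm / (em - lambda*Cm) for em with T = target_response."""
    return consumption / target_response + peak_workload * consumption

# Example: peak of 50 req/s, 20 ms of CPU per request, 0.5 s response target.
e = required_allocation(peak_workload=50.0, consumption=0.02, target_response=0.5)
print(round(e, 2))  # 1.04 (i.e., just over one full CPU core)
```

The second term, λ·Cm, is the raw CPU demand of the workload; the first term is the headroom needed to meet the response-time objective on top of it.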


  1. How is application-level performance, in terms of response time and throughput, affected in a virtualized system?
  2. How can we efficiently manage server CPU resources despite highly varying application workloads? (The data center environment makes this challenge particularly difficult, since it requires solutions with high scalability and extreme speed to respond quickly to fluctuating internet workloads.)
  3. What effect does resource management have on application performance? (The CPU and memory allocated to a virtual machine can be dynamically adjusted, and it is therefore important to know the impact of this flexibility on application performance.)
  4. To what extent can CPU contention affect application performance? (Due to the consolidated nature of data centers, resource contention is intensified, and there is a need to minimize the resources used by the different applications while satisfying service level objectives.)


The aim of the research is to model the performance of virtual machines based on workload transaction/request history and allocated CPU in VMware hypervisors.

The objectives of this research are to:

  1. Deploy a typical data center to measure performance in a consolidated production environment using the VMware hypervisor.
  2. Propose a model based on VMware hypervisors and the Zhikui et al. (2009) model.
  3. Improve the reliability of the performance model by considering the number of cores available to virtual machines.
  4. Test the reliability of the improved model using various testbeds/platforms and workload generators.
  5. Identify the influence of CPU allocation on application performance in virtual machines in offline and live environments.


To model the performance of virtual machines, the researcher deploys a typical virtualized data center using the VMware ESXi 5.1 hypervisor running on an HP SL170s 1U Intel Xeon 5600-series server with 32 GB RAM, two 4-core 2.93 GHz Intel Xeon processors and 2 x 1 TB internal HDD storage, connected (with NIC teaming configured) to a 12 TB SAN storage server of 12 RAIDs, to mimic an average data center solution. A CPU-controlled approach is adopted, such that the CPU allocation to the virtual machine is varied as the workload varies. All virtual machines were assigned the same number of CPU cores as are available on the physical host.
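The CPU-controlled measurement procedure described above (stepping the allocation through 20%–100% and sampling metrics at 60-second intervals, as in the abstract) can be sketched as a simple sweep. The `set_cpu_allocation` and `sample_metrics` callables here are hypothetical placeholders for the actual hypervisor resource-limit and guest performance-counter calls, which this sketch does not attempt to reproduce.

```python
# Hedged sketch of the CPU-controlled sweep: for each allocation level,
# collect periodic metric samples. Placeholder callables stand in for the
# hypervisor/guest APIs used in the actual experiment.

import time

ALLOCATIONS = [0.2, 0.4, 0.6, 0.8, 1.0]   # fraction of CPU given to the VM
SAMPLE_INTERVAL = 60                       # seconds between metric samples
SAMPLES_PER_LEVEL = 5                      # samples collected per allocation

def run_sweep(set_cpu_allocation, sample_metrics, sleep=time.sleep):
    """Return a list of (allocation, metrics) pairs across all levels."""
    records = []
    for level in ALLOCATIONS:
        set_cpu_allocation(level)          # apply the hypervisor CPU limit
        for _ in range(SAMPLES_PER_LEVEL):
            sleep(SAMPLE_INTERVAL)         # wait out one sampling interval
            records.append((level, sample_metrics()))
    return records
```

Injecting `sleep` as a parameter lets the schedule be tested instantly with a no-op, while the real run uses `time.sleep`.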

The methodology requires three different applications as workload generators, set up on three testbeds with the same configuration. These workloads include the PHP- and Java-controller-based RUBiS standard workload generator for offline estimation of parameters, and two custom applications, exMaster and AppSton, based on ASP.NET and PHP respectively, for online parameter estimation on the production server. Performance data collected from the applications is used to validate the model.
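The validation step above amounts to comparing model-predicted response times against those measured on the testbeds. A minimal sketch using mean absolute error as the comparison metric; the sample numbers are illustrative only, not measurements from the thesis.

```python
# Hedged sketch of model validation: how far do predicted response times
# deviate, on average, from the observed ones? Sample values are made up.

def mean_absolute_error(predicted, observed):
    """Average absolute deviation between paired predictions and observations."""
    pairs = list(zip(predicted, observed))
    return sum(abs(p - o) for p, o in pairs) / len(pairs)

predicted = [0.10, 0.14, 0.22]   # seconds, from the tier model
observed  = [0.11, 0.15, 0.20]   # seconds, measured on the testbed
print(round(mean_absolute_error(predicted, observed), 3))  # 0.013
```

A smaller error for the proposed model than for the Zhikui et al. (2009) baseline, across all three workloads, is what the abstract's "closer to the observed values" claim corresponds to.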

