Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment

ABSTRACT:
Cloud computing allows business customers to scale their resource usage up and down based on need. Many of the touted gains in the cloud model come from resource multiplexing through virtualization technology. In this paper, we present a system that uses virtualization technology to allocate data center resources dynamically based on application demands and that supports green computing by optimizing the number of servers in use. We introduce the concept of “skewness” to measure the unevenness in the multidimensional resource utilization of a server. By minimizing skewness, we can combine different types of workloads nicely and improve the overall utilization of server resources. We develop a set of heuristics that prevent overload in the system effectively while saving energy. Trace-driven simulation and experiment results demonstrate that our algorithm achieves good performance.


EXISTING SYSTEM:
Virtual machine monitors (VMMs) like Xen provide a mechanism for mapping virtual machines (VMs) to physical resources. This mapping is largely hidden from cloud users. Users of the Amazon EC2 service, for example, do not know where their VM instances run. It is up to the cloud provider to make sure the underlying physical machines (PMs) have sufficient resources to meet their needs. VM live migration technology makes it possible to change the mapping between VMs and PMs while applications are running. The capacity of PMs can also be heterogeneous because multiple generations of hardware coexist in a data center.


DISADVANTAGES OF EXISTING SYSTEM:

· A policy issue remains of how to decide the mapping adaptively so that the resource demands of VMs are met while the number of PMs used is minimized.
· This is challenging when the resource needs of VMs are heterogeneous, due to the diverse set of applications they run, and vary over time as the workloads grow and shrink. The two main challenges are overload avoidance and green computing.

PROPOSED SYSTEM:
In this paper, we present the design and implementation of an automated resource management system that achieves a good balance between two goals: overload avoidance and green computing.

1. Overload avoidance: The capacity of a PM should be sufficient to satisfy the resource needs of all VMs running on it. Otherwise, the PM is overloaded, which can lead to degraded performance of its VMs.
2. Green computing: The number of PMs used should be minimized as long as they can still satisfy the needs of all VMs. Idle PMs can be turned off to save energy.




ADVANTAGES OF PROPOSED SYSTEM:
We make the following contributions:
· We develop a resource allocation system that can avoid overload in the system effectively while minimizing the number of servers used.

· We introduce the concept of “skewness” to measure the uneven utilization of a server. By minimizing skewness, we can improve the overall utilization of servers in the face of multidimensional resource constraints (a minimal sketch of this metric follows the list).

· We design a load prediction algorithm that can capture the future resource usage of applications accurately without looking inside the VMs. The algorithm can capture the rising trend of resource usage patterns and helps reduce placement churn significantly.
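
The paper defines the skewness of a server as the square root of the sum, over all resources, of the squared deviation of each resource's utilization from the server's average utilization. The sketch below is a minimal Java illustration of that metric; the class name and the example utilization values are our own, not the authors'.

```java
// Minimal sketch of the skewness metric: for per-resource utilizations
// r_1..r_n, skewness = sqrt( sum_i (r_i / rAvg - 1)^2 ), where rAvg is the
// average utilization across the n resources of the server.
public class Skewness {

    /** Computes skewness from per-resource utilizations in [0, 1]. */
    public static double skewness(double[] utilizations) {
        double avg = 0.0;
        for (double r : utilizations) {
            avg += r;
        }
        avg /= utilizations.length;
        if (avg == 0.0) {
            return 0.0; // idle server: no unevenness to measure
        }
        double sum = 0.0;
        for (double r : utilizations) {
            double d = r / avg - 1.0;
            sum += d * d;
        }
        return Math.sqrt(sum);
    }

    public static void main(String[] args) {
        // A CPU-heavy server is more "skewed" than a balanced one.
        System.out.println(skewness(new double[] {0.90, 0.20, 0.20})); // high
        System.out.println(skewness(new double[] {0.50, 0.40, 0.45})); // low
    }
}
```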


SYSTEM ARCHITECTURE:

MODULES
· VM SCHEDULER
· PREDICTOR
· HOT SPOT SOLVER
· COLD SPOT SOLVER
· MIGRATION LIST

MODULES DESCRIPTION

VM SCHEDULER
The VM Scheduler is invoked periodically. In each round it receives from the LNM (Local Node Manager) on every node the resource demand history of the VMs (virtual machines), the capacity and load history of the PMs (physical machines), and the current layout of VMs on PMs. It then forwards this information to the predictor.
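
A minimal sketch of such a periodically invoked scheduler loop is shown below; the class name, the 10-second interval, and the placeholder steps are illustrative assumptions, not the project's actual code.

```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of a scheduler that runs one planning round at a fixed interval.
public class VmSchedulerLoop {

    private final ScheduledExecutorService timer =
            Executors.newSingleThreadScheduledExecutor();

    /** Starts one scheduling round every intervalSeconds. */
    public void start(long intervalSeconds) {
        timer.scheduleAtFixedRate(this::runOnce, 0, intervalSeconds, TimeUnit.SECONDS);
    }

    /** One scheduling round, following the module description above. */
    private void runOnce() {
        // 1. Collect the VM resource-demand history, the PM capacity and load
        //    history, and the current VM-to-PM layout reported by the LNMs.
        // 2. Forward the collected state to the predictor.
        // 3. Hand the predicted loads to the hot spot and cold spot solvers.
        System.out.println("scheduling round at " + System.currentTimeMillis());
    }

    public static void main(String[] args) {
        new VmSchedulerLoop().start(10); // e.g., one round every 10 seconds
    }
}
```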

PREDICTOR

The predictor estimates the future resource demands of VMs and the future load of PMs based on past statistics. We compute the load of a PM by aggregating the resource usage of its VMs; the details of the load prediction algorithm are described in the referenced paper. The LNM at each node first attempts to satisfy the new demands locally by adjusting the resource allocation of VMs sharing the same VMM. Xen can change the CPU allocation among the VMs by adjusting their weights in its CPU scheduler. The MM Alloter on domain 0 of each node is responsible for adjusting the local memory allocation.
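
One common way to realize such a predictor is an exponentially weighted moving average (EWMA) that reacts faster to rising load than to falling load, which matches the requirement above of capturing rising trends. The sketch below illustrates this; the smoothing factors and the class name are our own illustrative choices, not the paper's parameters.

```java
// EWMA-style load predictor sketch: E(t) = alpha * E(t-1) + (1 - alpha) * O(t),
// with a smaller alpha when the observed load rises so the estimate catches up quickly.
public class LoadPredictor {

    private static final double ALPHA_UP = 0.2;   // react quickly when load rises
    private static final double ALPHA_DOWN = 0.7; // decay slowly when load falls

    private double estimate = 0.0;
    private boolean initialized = false;

    /** Feeds one observed utilization sample and returns the updated estimate. */
    public double update(double observed) {
        if (!initialized) {
            estimate = observed;
            initialized = true;
            return estimate;
        }
        double alpha = observed > estimate ? ALPHA_UP : ALPHA_DOWN;
        estimate = alpha * estimate + (1.0 - alpha) * observed;
        return estimate;
    }

    public static void main(String[] args) {
        LoadPredictor predictor = new LoadPredictor();
        double[] samples = {0.20, 0.25, 0.60, 0.65, 0.70, 0.40};
        for (double s : samples) {
            System.out.printf("observed=%.2f predicted=%.2f%n", s, predictor.update(s));
        }
    }
}
```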

HOT SPOT SOLVER
The hot spot solver in the VM Scheduler detects whether the resource utilization of any PM is above the hot threshold (i.e., a hot spot). If so, some VMs running on that PM are migrated away to reduce its load. Control then passes to the cold spot solver.
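
A minimal sketch of this hot spot check is given below; the PhysicalMachine class and the 0.9 threshold used in main() are illustrative assumptions, not the project's actual types or settings.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// A PM counts as a hot spot when the utilization of any of its resources
// exceeds the hot threshold.
public class HotSpotSolver {

    static class PhysicalMachine {
        final String name;
        final double[] utilizations; // e.g., CPU, memory, network, each in [0, 1]

        PhysicalMachine(String name, double[] utilizations) {
            this.name = name;
            this.utilizations = utilizations;
        }
    }

    private final double hotThreshold;

    HotSpotSolver(double hotThreshold) {
        this.hotThreshold = hotThreshold;
    }

    /** Returns the PMs for which at least one resource is above the hot threshold. */
    List<PhysicalMachine> findHotSpots(List<PhysicalMachine> pms) {
        List<PhysicalMachine> hotSpots = new ArrayList<>();
        for (PhysicalMachine pm : pms) {
            for (double u : pm.utilizations) {
                if (u > hotThreshold) {
                    hotSpots.add(pm); // some of its VMs should be migrated away
                    break;
                }
            }
        }
        return hotSpots;
    }

    public static void main(String[] args) {
        HotSpotSolver solver = new HotSpotSolver(0.9);
        List<PhysicalMachine> pms = Arrays.asList(
                new PhysicalMachine("pm1", new double[] {0.95, 0.40, 0.30}),
                new PhysicalMachine("pm2", new double[] {0.50, 0.60, 0.40}));
        for (PhysicalMachine pm : solver.findHotSpots(pms)) {
            System.out.println(pm.name + " is a hot spot");
        }
    }
}
```
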
COLD SPOT SOLVER
The cold spot solver checks whether the average utilization of actively used PMs (APMs) is below the green computing threshold. If so, some of those PMs could potentially be turned off to save energy. It identifies the set of PMs whose utilization is below the cold threshold (i.e., cold spots) and then attempts to migrate away all of their VMs. The resulting migrations are added to the migration list.
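
The corresponding cold spot check could look like the sketch below; the green computing threshold of 0.4, the cold threshold of 0.25, and the single-scalar utilization per PM are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.List;

// If the average utilization of actively used PMs (APMs) falls below the green
// computing threshold, the PMs whose own utilization is below the cold threshold
// become candidates for consolidation.
public class ColdSpotSolver {

    private final double greenComputingThreshold;
    private final double coldThreshold;

    ColdSpotSolver(double greenComputingThreshold, double coldThreshold) {
        this.greenComputingThreshold = greenComputingThreshold;
        this.coldThreshold = coldThreshold;
    }

    /** Returns the indices of cold-spot PMs, or an empty list if consolidation is not needed. */
    List<Integer> findColdSpots(double[] apmUtilizations) {
        double avg = 0.0;
        for (double u : apmUtilizations) {
            avg += u;
        }
        avg /= apmUtilizations.length;

        List<Integer> coldSpots = new ArrayList<>();
        if (avg >= greenComputingThreshold) {
            return coldSpots; // the cluster is busy enough; do not consolidate
        }
        for (int i = 0; i < apmUtilizations.length; i++) {
            if (apmUtilizations[i] < coldThreshold) {
                coldSpots.add(i); // try to migrate away all VMs of this PM
            }
        }
        return coldSpots;
    }

    public static void main(String[] args) {
        ColdSpotSolver solver = new ColdSpotSolver(0.4, 0.25);
        double[] apmLoad = {0.10, 0.20, 0.60, 0.15};
        System.out.println("cold spots at indices: " + solver.findColdSpots(apmLoad));
    }
}
```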

MIGRATION LIST
The migration list receives the requests from the hot spot and cold spot solvers, compiles the list of VMs to be migrated, and passes it to the Usher CTRL (the Usher controller) for execution.
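
A minimal sketch of such a migration list is shown below; the MigrationDispatcher interface stands in for the Usher CTRL and, like the other names here, is an illustrative assumption rather than the project's actual API.

```java
import java.util.ArrayList;
import java.util.List;

// Each entry records which VM should move from which source PM to which target PM.
public class MigrationList {

    static class Migration {
        final String vm;
        final String sourcePm;
        final String targetPm;

        Migration(String vm, String sourcePm, String targetPm) {
            this.vm = vm;
            this.sourcePm = sourcePm;
            this.targetPm = targetPm;
        }

        @Override
        public String toString() {
            return vm + ": " + sourcePm + " -> " + targetPm;
        }
    }

    /** Stand-in for the controller that actually performs the live migrations. */
    interface MigrationDispatcher {
        void execute(List<Migration> migrations);
    }

    private final List<Migration> migrations = new ArrayList<>();

    /** Adds one planned migration compiled by the hot spot or cold spot solver. */
    void add(String vm, String sourcePm, String targetPm) {
        migrations.add(new Migration(vm, sourcePm, targetPm));
    }

    /** Hands the compiled list over for execution and clears it for the next round. */
    void dispatch(MigrationDispatcher controller) {
        controller.execute(new ArrayList<>(migrations));
        migrations.clear();
    }

    public static void main(String[] args) {
        MigrationList list = new MigrationList();
        list.add("vm-3", "pm-1", "pm-4");
        list.dispatch(plan -> plan.forEach(System.out::println));
    }
}
```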

SYSTEM CONFIGURATION:-

HARDWARE CONFIGURATION:-


· Processor         : Pentium IV
· Speed             : 1.1 GHz
· RAM               : 256 MB (min.)
· Hard Disk         : 20 GB
· Keyboard          : Standard Windows keyboard
· Mouse             : Two- or three-button mouse
· Monitor           : SVGA

 

SOFTWARE CONFIGURATION:-


· Operating System      : Windows XP
· Programming Language  : Java
· Java Version          : JDK 1.6 and above

REFERENCE:
Zhen Xiao, Weijia Song, and Qi Chen, “Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment,” IEEE Transactions on Parallel and Distributed Systems, vol. 24, no. 6, June 2013.
