Friday, October 23, 2015

DyScale: a MapReduce Job Scheduler for Heterogeneous Multicore Processors

ABSTRACT:
The functionality of modern multi-core processors is often driven by a given power budget that requires designers to evaluate different decision trade-offs, e.g., to choose between many slow, power-efficient cores, or fewer faster, power-hungry cores, or a combination of them. Here, we prototype and evaluate a new Hadoop scheduler, called DyScale, that exploits capabilities offered by heterogeneous cores within a single multi-core processor for achieving a variety of performance objectives. A typical MapReduce workload contains jobs with different performance goals: large, batch jobs that are throughput oriented, and smaller interactive jobs that are response time sensitive. Heterogeneous multi-core processors enable creating virtual resource pools based on “slow” and “fast” cores for multi-class priority scheduling. Since the same data can be accessed with either “slow” or “fast” slots, spare resources (slots) can be shared between different resource pools. Using measurements on an actual experimental setting and via simulation, we argue in favor of heterogeneous multi-core processors as they achieve “faster” (up to 40%) processing of small, interactive MapReduce jobs, while offering improved throughput (up to 40%) for large, batch jobs. We evaluate the performance benefits of DyScale versus the FIFO and Capacity job schedulers that are broadly used in the Hadoop community.
EXISTING SYSTEM:
• In the MapReduce model, computation is expressed as two functions: map and reduce. MapReduce jobs are executed across multiple machines: the map stage is partitioned into map tasks and the reduce stage is partitioned into reduce tasks. The map and reduce tasks run in map slots and reduce slots, respectively (see the sketch after this list).
• Daniel et al. propose using architecture signatures to guide thread scheduling decisions.
• Lee et al. propose dividing the resources into two dynamically adjustable pools and using a new metric, "progress share", to define a job's share in a heterogeneous environment so that better performance and fairness can be achieved.
• Polo et al. modify the MapReduce scheduler so that it can use special hardware such as GPUs to accelerate MapReduce jobs in a heterogeneous cluster.
• Jiang et al. developed a MapReduce-like system for heterogeneous CPU and GPU clusters.
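To make the map/reduce split above concrete, here is a minimal word-count job written against the standard Hadoop MapReduce API (new API, Hadoop 2.x style). It only illustrates how a job is expressed as a map function and a reduce function whose tasks then run in map and reduce slots; it is not part of DyScale itself.

import java.io.IOException;
import java.util.StringTokenizer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IntWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class WordCount {

  // Map task: split each input line into words and emit (word, 1).
  public static class TokenizerMapper extends Mapper<Object, Text, Text, IntWritable> {
    private final static IntWritable ONE = new IntWritable(1);
    private final Text word = new Text();

    public void map(Object key, Text value, Context context)
        throws IOException, InterruptedException {
      StringTokenizer itr = new StringTokenizer(value.toString());
      while (itr.hasMoreTokens()) {
        word.set(itr.nextToken());
        context.write(word, ONE);
      }
    }
  }

  // Reduce task: sum the counts emitted for each word.
  public static class IntSumReducer extends Reducer<Text, IntWritable, Text, IntWritable> {
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
        throws IOException, InterruptedException {
      int sum = 0;
      for (IntWritable v : values) {
        sum += v.get();
      }
      context.write(key, new IntWritable(sum));
    }
  }

  public static void main(String[] args) throws Exception {
    Job job = Job.getInstance(new Configuration(), "word count");
    job.setJarByClass(WordCount.class);
    job.setMapperClass(TokenizerMapper.class);
    job.setReducerClass(IntSumReducer.class);
    job.setOutputKeyClass(Text.class);
    job.setOutputValueClass(IntWritable.class);
    FileInputFormat.addInputPath(job, new Path(args[0]));
    FileOutputFormat.setOutputPath(job, new Path(args[1]));
    System.exit(job.waitForCompletion(true) ? 0 : 1);
  }
}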

DISADVANTAGES OF EXISTING SYSTEM:
• The existing approaches require modifying the applications to add architecture signatures, which makes them impractical to deploy.
• They cannot maintain good performance for large batch jobs.
PROPOSED SYSTEM:
• In this work, we design and evaluate DyScale, a new Hadoop scheduler that exploits the capabilities offered by heterogeneous cores to achieve a variety of performance objectives. The heterogeneous cores are used to create different virtual resource pools, each based on a distinct core type. These virtual pools form distinct virtual Hadoop clusters that operate over the same datasets and can share their resources when needed. The resource pools can then be exploited for multi-class job scheduling (a simplified illustration follows this list).
• We describe new mechanisms for enabling "slow" slots (running on slow cores) and "fast" slots (running on fast cores) in Hadoop and for creating the corresponding virtual clusters. Extensive simulation experiments demonstrate the efficiency and robustness of the proposed framework. Within the same power budget, DyScale running on heterogeneous multi-core processors provides a significant performance improvement for small, interactive jobs compared to homogeneous processors with (many) slow cores.
• Our goal is twofold: 1) design a framework for creating virtual Hadoop clusters with different processing capabilities (i.e., clusters with fast and slow slots); and 2) offer a new scheduler that supports jobs with different performance objectives by utilizing the created virtual clusters and sharing their spare resources.
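The following is a minimal, standalone sketch of the pool-sharing idea described above: interactive jobs are served from a "fast" slot pool and batch jobs from a "slow" slot pool, and a job may borrow a spare slot from the other pool when its own pool is exhausted. All names here (SlotPools, assignSlot, JobClass) are hypothetical illustrations under our own assumptions, not the actual DyScale code or API.

// Hypothetical sketch of DyScale-style virtual resource pools.
public class SlotPools {

  public enum JobClass { INTERACTIVE, BATCH }
  public enum SlotType { FAST, SLOW, NONE }

  private int freeFastSlots;
  private int freeSlowSlots;

  public SlotPools(int fastSlots, int slowSlots) {
    this.freeFastSlots = fastSlots;
    this.freeSlowSlots = slowSlots;
  }

  // Interactive tasks prefer fast slots, batch tasks prefer slow slots.
  // Because both slot types can reach the same HDFS data, a job may
  // borrow a spare slot from the other pool when its own pool is empty.
  public synchronized SlotType assignSlot(JobClass jobClass) {
    if (jobClass == JobClass.INTERACTIVE) {
      if (freeFastSlots > 0) { freeFastSlots--; return SlotType.FAST; }
      if (freeSlowSlots > 0) { freeSlowSlots--; return SlotType.SLOW; } // borrow spare slow slot
    } else {
      if (freeSlowSlots > 0) { freeSlowSlots--; return SlotType.SLOW; }
      if (freeFastSlots > 0) { freeFastSlots--; return SlotType.FAST; } // borrow spare fast slot
    }
    return SlotType.NONE; // no slot available; the task waits
  }

  // Called when a task finishes, returning its slot to the right pool.
  public synchronized void releaseSlot(SlotType type) {
    if (type == SlotType.FAST) freeFastSlots++;
    else if (type == SlotType.SLOW) freeSlowSlots++;
  }
}

In a real deployment such a policy would sit in the task-assignment path of the Hadoop scheduler, and each "fast" or "slow" slot would be realized by restricting the slot's task processes to the corresponding core type (for example, via CPU affinity).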

ADVANTAGES OF PROPOSED SYSTEM:
• DyScale can reduce the average completion time of time-sensitive interactive jobs by more than 40%.
• At the same time, DyScale maintains good performance for large batch jobs compared to a homogeneous fast-core design (with fewer cores).
• The considered heterogeneous configurations can reduce the completion time of batch jobs by up to 40%.

SYSTEM ARCHITECTURE:

SYSTEM REQUIREMENTS:
HARDWARE REQUIREMENTS:

• System       : Pentium IV, 2.4 GHz
• Hard Disk    : 40 GB
• Floppy Drive : 1.44 MB
• Monitor      : 15" VGA colour
• Mouse        : Logitech
• RAM          : 512 MB

SOFTWARE REQUIREMENTS:

• Operating System : Windows 7 / Ubuntu
• Coding Language  : Java 1.7, Hadoop 0.8.1
• IDE              : Eclipse
• Database         : MySQL
REFERENCE:
Feng Yan, Ludmila Cherkasova, Zhuoyao Zhang, and Evgenia Smirni, "DyScale: A MapReduce Job Scheduler for Heterogeneous Multicore Processors," IEEE Transactions on Cloud Computing, 2015.



