POWER SAVING LOAD BALANCING STRATEGY USING DVFS IN CLOUD ENVIRONMENT

Cloud computing is a technology that provides a platform for sharing resources such as software, infrastructure, applications and other information. Cloud computing is widely used all over the world by many IT companies, as it provides benefits to users such as cost saving and ease of use. However, with the growing demand of users for computing services, cloud providers are encouraged to deploy large datacenters which consume a very high amount of energy, resulting in carbon dioxide emissions. Power consumption is a key concern in data centers: it not only reduces the profit margin, but also contributes to high carbon production which is harmful for the environment and living organisms. Reducing power consumption has become an important requirement for cloud resource providers, not only to reduce operating costs but also to improve system reliability. In this research work, we have arranged the virtual machines in ascending order of load, so that cloudlets are assigned to the virtual machine with the least load. Cloudlets are divided into three categories (high, medium and low) on the basis of their instruction length. The DVFS approach implemented in this paper scales the power according to the length of the cloudlets; three DVFS modes have been implemented. Various parameters, such as processing time, processing cost and the total power consumed by all the cloudlets at the data center, have been computed and analyzed. CloudSim, a toolkit for modeling and simulating cloud computing environments, has been used to implement and demonstrate the experimental results.


INTRODUCTION
Cloud computing is a new paradigm which delivers computing as a utility through the Internet [1]. It provides data access, software, storage and computation as services to consumers over the Internet on a pay-as-you-go model. It offers significant benefits to IT companies, as they are relieved from setting up hardware and software infrastructure, thus reducing their costs. However, the growing demand of consumers for computing services is encouraging computing service providers, e.g., IBM, Facebook, Yahoo!, Google, Microsoft, etc., to deploy a large number of data centers all over the world that consume a very large amount of energy. Consequently, the energy consumption of the information industry is increasing. The total power consumption of datacenters in 2012 was about 38 Gigawatts (GW), around 63% more than the power consumption in 2011. It was estimated that this total power would have been enough to fulfill the energy requirements of all residential households of the United Kingdom. The fact that electricity consumption is set to rise 76% from 2007 to 2030, with datacenters contributing an important portion of this increase, emphasizes the importance of reducing energy consumption in clouds. An increase in the level of carbon dioxide in our ecosystem is another consequence of this growing energy consumption by datacenters. According to Gartner, the information and communication industry produces 2% of global carbon dioxide emissions [10]. Hence, there is a great need for more environmentally friendly computing, called "Green Cloud Computing", to minimize operational and energy consumption costs and to reduce the environmental impact. Green computing refers to attempts to maximize energy efficiency and to minimize power consumption, cost and CO2 emissions.
The development of new computing models, computer systems and applications having low cost and low energy consumption is the primary purpose of Green Computing.
Contrastingly, as more and more users shift to cloud computing, the energy demand of the cloud increases, and companies resort to coal-powered energy generation that increases carbon emissions [3]. Factors like availability, scalability, mobility and low infrastructure costs, along with usage models suiting different user needs, have driven many individual users and organizations towards this technology. As the number of users shifting to the cloud has increased, the resources used to provide the requested services have also increased, thereby increasing energy consumption on the cloud. Hence the cloud becomes a point of concentration of energy consumption, and conservation of energy on the cloud leads to large-scale energy savings. Therefore, energy conservation on the cloud will have a great impact on the reduction of global warming. Efforts must be made to design software at various levels (OS, compiler, algorithm and application) that facilitates system-wide energy efficiency. Although SaaS (Software as a Service) providers may still use already implemented software, they should analyze the runtime behavior of applications; the gathered empirical data can be used in energy-efficient scheduling and resource provisioning [4]. Energy management techniques are broadly classified as static and dynamic. Static energy management techniques are applied at compile time, whereas dynamic energy management techniques use run-time behaviour to conserve energy. Dynamic Voltage Scaling (DVS) and Dynamic Power Management (DPM) are the two widely used energy management techniques. In DVS, both the supply voltage and the CPU frequency are scaled down to save energy [5]. In DPM, unused system components are put into a sleep state [6]. Reliability and performance are two of the many key features of system design. Faults are of different types; permanent faults, for example, arise from hardware defects. The ill effects of permanent faults can be reduced by introducing hardware redundancy.
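To illustrate why DVS saves energy: the dynamic power of a CMOS processor grows with the square of the supply voltage and linearly with the clock frequency (P = C·V²·f). The sketch below (with illustrative constants, not values from this work) shows that scaling voltage and frequency together to 70% cuts dynamic power to roughly a third:

```java
public class DvsPower {
    // Dynamic CMOS power: P = C * V^2 * f, where C is the switched
    // capacitance, V the supply voltage and f the clock frequency.
    // The constants below are purely illustrative.
    static double dynamicPower(double capacitance, double voltage, double frequency) {
        return capacitance * voltage * voltage * frequency;
    }

    public static void main(String[] args) {
        double full = dynamicPower(1e-9, 1.2, 2.0e9);              // full speed
        // Scaling V and f together to 70% cuts power to ~0.7^3 = 34%.
        double scaled = dynamicPower(1e-9, 1.2 * 0.7, 2.0e9 * 0.7);
        System.out.printf("full=%.3f W, scaled=%.3f W%n", full, scaled);
    }
}
```

This cubic relationship is what makes frequency/voltage scaling so much more effective than simply idling a processor at full voltage.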
Transient faults are recoverable faults that are usually caused by radiation interference. When the CPU supply voltage is scaled down, the number of transient faults increases, resulting in decreased system reliability. Therefore, care should be taken to maintain system reliability. Real-time applications are deadline-critical applications, i.e. a task must strictly complete before its deadline. While scaling down the supply voltage and CPU frequency, care should be taken to complete the tasks before their respective deadlines. This helps in maintaining the performance of the system.

RELATED WORK
Yi-Ju Chiang et al. (2013) discussed that cloud computing is a new service model for sharing a pool of computing resources that can be rapidly accessed and released based on a converged infrastructure. In the past, an individual user or company could only use their own servers to manage application programs or store data, which caused the dilemma of complex management and heavy burden in "own-and-use" patterns. To satisfy uncertain workloads and to be highly available for users anywhere at any time, more resources must be provisioned. Consequently, resource overprovisioning and redundancy are common in traditional operating systems. However, most electricity-dependent facilities will inevitably suffer idle or under-utilized times for some days or months, since off-seasons usually occur due to the random nature of arrivals.

Lucio et al. (2014)
present a hybrid optimization model that allows a cloud service provider to establish virtual machine (VM) placement strategies for its data centers in such a way that energy efficiency and network quality of service are jointly optimized. Usually, VM placement is an activity not fully integrated with network operations. As such, the VM placement strategy does not take into account the impact it produces on network performance in terms of quality-of-service parameters such as packet losses and traffic delays. The proposed strategy allows cloud providers to reach a balance between the energy efficiency of their infrastructures and the network quality of service they offer to their customers.

Maurizio Giacobbe et al. (2015)
presented a new strategy to reduce carbon dioxide emissions in federated cloud ecosystems. More specifically, they propose a solution that allows providers to determine the best green destination to which virtual machines should be migrated in order to reduce the carbon dioxide emissions of the whole federated environment.

Samiran Roy et al. (2014)
state that cloud computing is a computational framework that provides a collection of virtualized resources as a service, and that it offers highly profitable, cost-effective services in the present-day business world. However, the energy consumption of data centers is a big problem emerging from the growing demand for cloud services. Such critical issues not only reduce the profit margin, but also contribute to high carbon production which is harmful for the environment and living organisms. On the other hand, green computing is a much-needed, environment-friendly computational framework characterized by a low emission rate, whose basic principles are directed towards environment-friendly computation.

Moona Yakhchi et al. (2015)
presented that with the rapidly increasing demand for cloud computing technology, energy efficiency has become highly important in cloud computing infrastructures. The cloud computing concept offers low cost and a high level of availability, but it still has some challenging problems, such as resource management and power consumption. Reducing energy consumption and maximizing resource utilization have become primary concerns of many resource management methods.

Md Sabbir Hasan et al. (2015)
discuss that while the proliferation of cloud services has greatly impacted our society, how green these services are is yet to be answered. Although demand for green services has escalated due to societal awareness, the approaches to providing green services and establishing Green SLAs remain unclear for cloud and infrastructure providers. The main challenge for a cloud provider is to manage Green SLAs with its customers while satisfying its business objectives, such as maximizing profits by lowering expenditure on green energy. They also discuss the need for cloud providers to optimize energy efficiency while maintaining high service-level performance for tenants, not only for their own benefit but also for social welfare (e.g., protecting the environment). Both simulation and real-world Amazon EC2 experimental results demonstrate the effectiveness of their pricing policy in incentivizing CSBs to save energy for cloud providers, and the superior performance of their algorithms in energy efficiency and resource utilization compared with previous algorithms.

Yibin Li et al. (2015)
state that dynamic voltage scaling (DVS) has emerged as a critical technique for power management by lowering the supply voltage and frequency of processors. Based on the DVS technique, they propose a novel Energy-aware Dynamic Task Scheduling (EDTS) algorithm to minimize the total energy consumption of smartphones, while satisfying stringent time constraints and the probability constraint for applications.

YunNi Xia et al. (2015)
state that with the increasing call for green cloud, reducing energy consumption has become an important requirement for cloud resource providers, not only to reduce operating costs but also to improve system reliability. Dynamic voltage scaling (DVS) has been a key technique for exploiting the hardware characteristics of cloud datacenters to save energy by lowering the supply voltage and operating frequency. Their work presents a novel stochastic framework for energy efficiency and performance analysis of DVS-enabled clouds.

RESEARCH GAP
• The growing demands of consumers for computing services are encouraging service providers to deploy a large number of data centers all over the world, which consume a very large amount of energy.
• The increasing energy consumption of datacenters is one of the causes of the rising level of carbon dioxide in our ecosystem. Research suggests that one Google search generates as much CO2 as a car produces by driving 3 inches, and could power a 100-watt light bulb for 11 seconds.
• According to Gartner, the information and communication industry produces 2% of global carbon dioxide emissions.
• In the existing work, no proper load balancing algorithm has been discussed. Without load balancing there is no proper division of load among the VMs.
• Tasks are assigned on a round-robin basis to the VMs, where the VMs are arranged in ascending order of their carbon footprints.
• The aim of a green cloud internet data center is to reduce power consumption while leveraging live virtual machine migration technology and guaranteeing performance from the users' perspective. A major challenge for the green cloud is to automatically make scheduling decisions for dynamically consolidating and migrating virtual machines among physical servers to meet workload requirements while saving energy.

OBJECTIVES 
• To study the performance of existing load balancing algorithms.
• To propose an efficient load balancing algorithm considering the load on VMs.
• To implement different DVFS modes for power saving.
• To improve processing cost.
• To improve processing time.
• To implement the proposed algorithm in CloudSim.
• To evaluate the performance of the proposed approach against the current approach.

PROPOSED APPROACH
• VMs are arranged in ascending order of the load.
• In each round, cloudlets are assigned to the VM that has the least load.
• Cloudlets are divided into three categories (high, medium, low) on the basis of their length.
• The DVFS approach implemented in this work scales the power according to the length of the cloudlets.
• Cloudlets: A cloudlet in CloudSim is defined as a job submitted to the cloud. Every task is represented as a cloudlet, i.e. a task to be executed in CloudSim is called a cloudlet. For example, to execute a sorting algorithm, the sorting program is the task; in CloudSim there is no need to supply the entire file that contains the sorting program, only its file size, length, input size, output size, etc.
• Power Consumption: The power consumed by the machine for executing a particular cloudlet or a group of cloudlets.
• Execution Time: The execution time or CPU time of a given task is the time spent by the system executing that task, including the time spent executing run-time or system services on its behalf.
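The load-balancing steps above can be sketched as follows. The `Vm` class and `assign` method here are simplified stand-ins for the CloudSim broker logic, not the actual CloudSim API:

```java
import java.util.Comparator;
import java.util.List;

public class LeastLoadBalancer {
    // Hypothetical VM record: an id and its current load in queued
    // instructions. This is not CloudSim's Vm class.
    static class Vm {
        final int id;
        long load;
        Vm(int id, long load) { this.id = id; this.load = load; }
    }

    // Sort VMs in ascending order of load and assign the cloudlet
    // (of the given instruction length) to the least-loaded VM.
    static int assign(List<Vm> vms, long cloudletLength) {
        vms.sort(Comparator.comparingLong(v -> v.load));
        Vm target = vms.get(0);          // least-loaded VM after sorting
        target.load += cloudletLength;   // cloudlet length adds to the VM's load
        return target.id;
    }
}
```

Because each assignment updates the chosen VM's load before the next sort, successive cloudlets naturally spread across the VMs instead of piling onto one machine.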

ENERGY IMPROVEMENT FOR DIFFERENT NO. OF CLOUDLETS
In this experiment, only the number of cloudlets changes; the number of hosts remains the same. By varying the number of cloudlets, the power consumption of the existing work is compared with that of the proposed work using three processors with DVFS. Table 1 and Table 2 show that as the number of cloudlets (i.e. the number of instruction lines) increases, the execution time also increases. Throughput remains the same, as the number of processes completed per second and the average execution time are approximately the same every time. This is the initial stage of the proposed algorithm, which confirms that as user demand increases the execution time also increases, i.e. the number of cloudlets is directly proportional to the execution time. Table 2 compares the power consumed with and without the DVFS technique. The table clearly shows that as the number of cloudlets increases, the power consumption also increases, and that without DVFS the power consumption is higher than with DVFS. Dynamic voltage/frequency scaling limits the frequency as follows: if the number of instructions is greater than or equal to 10 crore, the machine runs in high frequency mode, i.e. at 100% utilization; if the instruction length is between 1 and 10 crore, the machine runs in moderate mode, i.e. at 70%; and if the instruction length is less than 1 crore, the machine runs in low mode, i.e. at 50%. As the number of cloudlets increases, the difference between power consumption with and without DVFS also increases.
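The three frequency modes described above can be expressed as a simple threshold function. This sketch assumes 1 crore = 10^7 instructions and returns the utilization fraction of the machine:

```java
public class DvfsMode {
    static final long CRORE = 10_000_000L;  // 1 crore = 10^7 instructions

    // Select the CPU utilization fraction from the cloudlet's instruction
    // length, following the three thresholds described in the text:
    // >= 10 crore -> high (100%), 1-10 crore -> moderate (70%), else low (50%).
    static double utilization(long instructionLength) {
        if (instructionLength >= 10 * CRORE) return 1.0;  // high frequency mode
        if (instructionLength >= CRORE)      return 0.7;  // moderate mode
        return 0.5;                                       // low mode
    }
}
```

For example, a cloudlet of 5 crore instructions would run the machine in moderate mode at 70% utilization.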
% improvement in energy consumption = (power without DVFS - power with DVFS) / power without DVFS * 100. The results clearly show an incremental increase in energy and performance efficiency when load balancing is combined with the DVFS technique.
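As a worked example of the improvement formula: if the data center consumes 200 W without DVFS and 150 W with DVFS, the improvement is 25%. A minimal sketch (the wattage values are hypothetical, not measurements from this work):

```java
public class EnergyImprovement {
    // % improvement = (powerWithoutDvfs - powerWithDvfs) / powerWithoutDvfs * 100,
    // matching the formula given in the text.
    static double improvementPercent(double powerWithoutDvfs, double powerWithDvfs) {
        return (powerWithoutDvfs - powerWithDvfs) / powerWithoutDvfs * 100.0;
    }

    public static void main(String[] args) {
        // Hypothetical readings: 200 W without DVFS, 150 W with DVFS.
        System.out.println(improvementPercent(200.0, 150.0)); // prints 25.0
    }
}
```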
Table 2 shows the improvement in energy as the number of cloudlets changes: as the workload increases, the power consumed by the machine also increases. The following bar chart clearly shows the reduction in power consumption achieved by the proposed hybrid technique for energy-efficient load balancing.

Figure 2. Power Consumption Comparison
The bar chart above shows that as the number of cloudlets increases, the power consumed by the data center also increases. It also shows that energy consumption in the existing work is higher than with the DVFS technique. In this technique, the cloudlets are first sorted according to their instruction length, and then the frequency limit, i.e. the DVFS technique, is applied.
The bar chart in Figure 3 shows the processing times as the number of cloudlets changes; there is a significant reduction in processing time from the current work to the proposed work. In all the above results the number of cloudlets varies, and there is a significant improvement in the cost involved in processing the cloudlets. In this section, we have calculated the processing cost of the cloudlets for the existing work and the proposed work.

Figure 4. Comparison between with and without DVFS
The classification of user applications is based on the instruction size of the applications. The applications are divided into three cases, defined below. Case 1: High frequency mode. Applications with 10 crore or more instructions run under the high frequency mode, i.e. at 100% (full) utilization of the machine. As the number of instructions increases, the execution time of the application also increases, so these applications should be allocated to large or fast virtual machines, which reduces power consumption. Case 2: Moderate mode. Applications with between 1 and 10 crore instructions run at 70% utilization. Case 3: Low mode. Applications with fewer than 1 crore instructions run at 50% utilization.

CONCLUSION
This thesis gives an introduction to cloud computing and a background of various workload consolidation techniques for managing heterogeneous workloads. Since energy efficiency is one of the major problems in cloud computing, an energy-efficient technique has been proposed in this work. Many load balancing algorithms exist today, but none is energy efficient, as they only balance the load among the virtual machine nodes. The proposed technique balances the load and is also energy efficient. In the proposed technique, the cloud environment is developed in Java and deployed on the CloudSim toolkit, and experimental results have been gathered. Because existing load balancing algorithms are not energy efficient, the proposed technique combines the load balancing algorithm with DVFS, which decreases the CPU clock speed. Dynamic voltage/frequency scaling sets the frequency range, or mode, according to the instruction length of the applications. In this work the frequency is divided into three modes: high frequency mode, moderate frequency mode and low frequency mode.