Cloud computing is the most widely adopted computing model for processing scientific workloads on remote servers accessed through the internet. In the IaaS cloud, the virtual machine (VM) is the execution unit that processes user workloads. Virtualization enables the execution of multiple virtual machines (VMs) on a single physical machine (PM). Virtual machine placement (VMP) strategically assigns VMs to suitable physical machines within a data center. From the cloud provider's perspective, virtual machines must be placed optimally to reduce resource wastage, improve economic revenue, and support green data centers. Cloud providers need an efficient methodology to minimize resource wastage, power consumption, and network transmission delay. This paper uses NSGA-III, a multi-objective evolutionary algorithm, to simultaneously reduce these objectives and obtain a set of non-dominated solutions. The performance metrics (Overall Nondominated Vector Generation and Spacing) of the proposed NSGA-III algorithm are compared with those of other multi-objective algorithms, namely VEGA, MOGA, SPEA, and NSGA-II. The proposed algorithm performs 7% better than the existing algorithms in terms of ONVG and 12% better in terms of Spacing. ANOVA and DMRT statistical tests are used to cross-validate the results.
Cloud computing is a model for outsourcing an organization's computing needs to rented infrastructure. It is made possible by emerging service-oriented architecture, sophisticated servers, and software-defined networking technologies. A physical machine can host multiple operating systems with the help of a hypervisor software module installed on the physical device [1, 2]. Virtualization significantly reduces the resource wastage incurred when an entire machine hosts a single operating system. Resource wastage is the unused CPU and RAM remaining (residual) after virtual machines are placed on a physical machine. In simple terms, \(R_w = R_a - R_u\), where \(R_w\) denotes the resource wastage, \(R_a\) the resources available on the physical machine, and \(R_u\) the resources consumed by the virtual machines hosted on that physical machine [3].
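The residual-resource relation above can be sketched as follows; the VM and PM capacity figures are illustrative assumptions, not values from this paper.

```python
# Sketch of R_w = R_a - R_u, computed per resource dimension.
def resource_wastage(available, hosted_vms):
    """Residual CPU/RAM on a PM after hosting the given VMs."""
    return {dim: available[dim] - sum(vm[dim] for vm in hosted_vms)
            for dim in available}

pm_capacity = {"cpu": 16, "ram": 64}                          # R_a (hypothetical)
hosted_vms = [{"cpu": 4, "ram": 16}, {"cpu": 8, "ram": 24}]   # contributors to R_u
print(resource_wastage(pm_capacity, hosted_vms))  # {'cpu': 4, 'ram': 24}
```

A placement that drives these residuals toward zero without overcommitting is precisely what VMP algorithms aim for.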
The networking infrastructure is isolated using SDN and assigned to individual virtual machines for communication. SOA is used to expose the virtualized data center to end users over the internet. The cloud supports elasticity, on-demand service, and the pay-as-you-go model, and provides three fundamental service models to the end user: IaaS, PaaS, and SaaS. Many other prefabricated services, such as databases and Hadoop, also exist. To create a virtual machine, the user specifies the operating system, memory, CPU cores, and storage [4]. The preconfigured operating system is a machine image stored on the SAN that can be executed directly on the virtualized hardware without installation; the machine image is an operating system deployment file compatible with the hypervisor software. CPU and RAM are partitioned from the physical server and assigned to run the virtual machines. Virtualization benefits the data center through consolidation, migration, and load balancing: when two or more physical machines are underutilized, their virtual machines can be migrated to a single physical machine to save resources, and the freed servers can be put into hibernate mode to consume minimal energy.
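The consolidation decision described above can be sketched as a simple check; the 30% utilization threshold and the load values are assumptions for illustration only.

```python
# Sketch of a consolidation trigger: if two hosts are both underutilized and
# their combined load fits on one machine, migrate and hibernate the other.
UNDERUTILIZED = 0.3   # assumed utilization threshold (fraction of capacity)

def can_consolidate(load_a, load_b, capacity=1.0):
    """Both hosts underutilized and their combined load fits one machine."""
    return (load_a < UNDERUTILIZED and load_b < UNDERUTILIZED
            and load_a + load_b <= capacity)

print(can_consolidate(0.20, 0.25))  # True  -> migrate VMs, hibernate one host
print(can_consolidate(0.20, 0.40))  # False -> second host is not underutilized
```

Real consolidation also weighs migration cost, since moving a VM is itself a network-intensive operation.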
The challenge of running a data center efficiently lies in how its underlying resources are used. As per the Gartner report [5], physical machines consume 60% of data center power, and the remaining 40% is consumed by networking, cooling, and storage infrastructure. It is crucial to utilize data center resources efficiently by hosting the appropriate virtual machines on each server; reducing resource wastage significantly reduces the expense of running a data center. Another vital aspect is placing a virtual machine in a data center with low latency [6]. Data centers are distributed across various geographical locations, and a VM placed in a high-latency location suffers a performance bottleneck. Consider a virtual machine configured to host a database server in a location with significant latency: even if the workload runs on sophisticated, well-configured servers, that only speeds up data retrieval; the delivery of the information depends on network bandwidth and latency. As latency increases, the user experiences delays in content delivery for both get and put requests.
An objective of this work is to compare the performance metrics (Overall Nondominated Vector Generation and Spacing) of the proposed NSGA-III algorithm with those of other existing multi-objective algorithms, namely VEGA, MOGA, SPEA, and NSGA-II.
The motivation behind designing an efficient algorithm to place virtual machines on appropriate servers is to address resource wastage and power consumption in data centers. Currently, data centers consume approximately 2% of the total electricity generated by nations. This significant energy consumption demands substantial effort to generate electricity, leading to environmental impact and resource depletion. With the rapid growth of businesses adopting cloud platforms for their operations, data center electricity consumption is projected to increase by 95% in the coming years. This surge in demand makes it imperative to find solutions that reduce electricity consumption in data centers, given their crucial role in meeting escalating digital needs. By developing practical VM placement algorithms, we can optimize resource utilization, distribute workloads efficiently, and minimize energy consumption in data centers. This proactive approach to energy efficiency aligns with the urgent need to mitigate environmental impact and promote sustainable computing practices. As cloud computing becomes an integral part of modern business operations, the quest to reduce electricity consumption becomes paramount, and an efficient VM placement algorithm emerges as the need of the hour.
Building an energy-efficient data center is a crucial concern for any cloud provider. Server virtualization technologies provide the flexibility to host multiple operating systems, each with a partitioned resource allocation called a virtual machine, on the same physical machine [3]. This has greatly improved the utilization of cloud servers. The challenge has now shifted to placing virtual machines on cloud servers so as to increase utilization even further. Objectives have thus emerged for VM-to-PM placement, such as maximizing the resource utilization of servers and networking devices, minimizing power consumption, and maximizing economic revenue. Power consumption here means the amount of electricity the physical machine consumes [7]. Heuristic algorithms such as bin packing [8] and linear programming formulations [9] achieve good results on smaller-scale problems, while many novel stochastic algorithms have been proposed to obtain maximum benefits in large-scale data centers. Among these, bio-inspired and evolutionary algorithms are extensively applied, and the relevant literature is presented in this section.
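The bin-packing heuristic mentioned above can be sketched with a first-fit-decreasing pass; the single CPU dimension, unit capacities, and VM names are simplifying assumptions, not the formulation of [8].

```python
# First-fit-decreasing sketch: place each VM, largest demand first, on the
# first PM with enough residual capacity; open a new PM only when none fits.
def first_fit_decreasing(vm_demands, pm_capacity):
    pms = []          # residual capacity of each opened physical machine
    placement = {}    # vm name -> index of its physical machine
    for vm, demand in sorted(vm_demands.items(), key=lambda kv: -kv[1]):
        for i, residual in enumerate(pms):
            if demand <= residual:
                pms[i] -= demand
                placement[vm] = i
                break
        else:                                  # no existing PM fits
            pms.append(pm_capacity - demand)   # open a new PM
            placement[vm] = len(pms) - 1
    return placement, len(pms)

placement, active_pms = first_fit_decreasing(
    {"vm1": 0.5, "vm2": 0.7, "vm3": 0.3, "vm4": 0.4}, pm_capacity=1.0)
print(active_pms)  # 2 active servers instead of 4
```

Such greedy heuristics minimize the count of active servers quickly, but they optimize a single objective; the stochastic methods surveyed below trade extra computation for handling several objectives at once.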
Swarm intelligence (SI) is a technique that mimics the natural behaviour of a species in finding a food source or a mate. Many researchers have used swarm intelligence algorithms to solve virtual machine placement problems [10, 11]. Ants exhibit intelligence in finding a food source, whereas fireflies exhibit intelligence in finding a mate. In swarm intelligence, each agent works randomly until it finds a solution and then communicates the information to the remaining individuals, who tune themselves to achieve a better solution. The global solution is the individual that dominates all remaining individuals. Every swarm intelligence algorithm balances two factors, exploration and exploitation [12]: exploration searches the overall solution space, while exploitation searches within the best-known region of the solution space. The solution space is defined by the objective function. Many problems have more than one objective function to be minimized or maximized, and minimizing one objective function may negatively impact another. An algorithm that simultaneously optimizes two or more objective functions is called a multi-objective optimization algorithm [13].
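The dominance relation underlying these multi-objective algorithms can be sketched as follows; the objective vectors (resource wastage, power) are illustrative values, not data from this paper, and both objectives are assumed to be minimized.

```python
# Pareto dominance for minimization: a dominates b if a is no worse in every
# objective and strictly better in at least one.
def dominates(a, b):
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

solutions = [(0.2, 110.0), (0.3, 100.0), (0.4, 120.0)]  # (wastage, power)
front = [s for s in solutions
         if not any(dominates(o, s) for o in solutions if o is not s)]
print(front)  # [(0.2, 110.0), (0.3, 100.0)] -- (0.4, 120.0) is dominated
```

The first two points form the non-dominated front: each trades wastage against power, while the third is worse in both objectives.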
The authors of [14] proposed a multi-objective ant colony algorithm to minimize power consumption (η1) and maximize the revenue of communication (η2). The movement of an ant toward a food source is mapped to placing a VM on a PM, and the favorability of placing VMi on PMj is based on the pheromone trails η(i, j). The multi-objective solution is converted to a scalar quantity using the weighted-sum approach, \(\eta(i,j) = \eta_1(i,j) + \eta_2(i,j)\). In [15], a modified ACA called Order Exchange and Migration ACS was proposed to minimize the number of active servers, which favours energy-efficient data centers. The proposed algorithm was compared with ACS and shows significant performance improvement on a single objective function. The algorithm also orders and migrates the loads of overloaded and underloaded servers: the congested server's VM configurations are sorted, and the VM using the most resources is swapped with an underutilized server, an operation called load balancing. Load balancing is a network-intensive task once a virtual machine has been placed on a physical machine. In [16], an ant colony-based power-aware and performance-guaranteed methodology (PPVMP) is used to optimize data center power consumption and improve VM performance on a physical machine. In [4], the proposed Energy Efficient Knee point driven Evolutionary Algorithm (EEKnEA) uses the evolutionary algorithm framework with a modified selection strategy called KnEA, where the highest-fit Pareto-optimal solutions are carried to the next generation along with knee points. The algorithm uses a single-point crossover technique. The chromosomes are checked for feasibility during each population generation, and infeasible chromosomes are subjected to solution repair.
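The weighted-sum scalarization used in [14] can be sketched in a few lines; the weights and pheromone values below are illustrative assumptions (the cited formula uses the unweighted special case w1 = w2 = 1).

```python
# Weighted-sum scalarization: collapse two pheromone terms for placing
# VM i on PM j into one favorability scalar eta(i, j).
def favorability(eta1, eta2, w1=1.0, w2=1.0):
    """eta(i, j) = w1 * eta1(i, j) + w2 * eta2(i, j)."""
    return w1 * eta1 + w2 * eta2

# eta1: power-consumption pheromone, eta2: communication-revenue pheromone
print(favorability(0.5, 0.75, w1=0.5, w2=0.5))  # 0.625
```

Weighted sums are simple but require choosing weights up front; Pareto-based methods such as NSGA-III avoid this by keeping the objectives separate and ranking solutions by dominance.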
That work addressed four objectives: the energy consumption of servers, the energy consumption of inter-VM communication, resource utilization, and robustness.