Degree Type

Dissertation

Date of Award

2019

Degree Name

Doctor of Philosophy

Department

Computer Science

First Advisor

Morris J. Chang

Second Advisor

Ying Cai

Abstract

In recent years, the growth and popularity of cloud computing services have led to the rise of large-scale data centers. Data centers are among the largest energy consumers in the world, so saving data center energy efficiently while maintaining performance is one of the most important research issues in cloud computing.

In this thesis, we tackle the following research problems: 1. how to achieve a considerable amount of energy saving in cloud data centers; 2. how to maintain data center performance; and 3. how to provide practical and scalable solutions that can be implemented in modern enterprise data centers. We address these problems by proposing and analyzing three models for saving energy in data centers. The first model focuses on saving data center network energy while preserving network performance. The idea is to use route consolidation to shift traffic onto a small number of network devices and turn off the unused devices and links. To maintain network performance, safety thresholds on link utilization and valiant load balancing on the active switches are used.
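As a rough illustration of route consolidation under a safety threshold, the idea can be sketched as a greedy packing heuristic. The capacity, threshold value, flow and link names, and the heuristic itself are all assumptions for this example; the actual model works over real topologies:

```python
# Illustrative sketch of greedy route consolidation under a link-utilization
# safety threshold. All names, numbers, and the heuristic are assumptions
# for exposition, not the optimization model from the thesis.
LINK_CAPACITY = 1000      # assumed uniform link capacity (Mbps)
SAFETY_THRESHOLD = 0.7    # assumed cap: keep every active link under 70% utilized

def consolidate(flows, links):
    """Pack flows onto as few links as possible so the rest can be
    turned off, never pushing a link past the safety threshold."""
    headroom = SAFETY_THRESHOLD * LINK_CAPACITY
    load = {link: 0 for link in links}    # current load per link (Mbps)
    active = []                           # links left powered on
    assignment = {}
    for flow, demand in sorted(flows.items(), key=lambda f: -f[1]):
        # Reuse an already-active link with enough headroom if one exists...
        target = next((l for l in active if load[l] + demand <= headroom), None)
        if target is None:                # ...otherwise power on another link.
            target = next(l for l in links if l not in active)
            active.append(target)
        load[target] += demand
        assignment[flow] = target
    unused = [l for l in links if l not in active]
    return assignment, unused             # unused links can be switched off

# Example: three flows (Mbps) consolidate onto two of four links.
assignment, unused = consolidate({"f1": 400, "f2": 300, "f3": 200},
                                 ["l1", "l2", "l3", "l4"])
```

Placing the largest flows first tends to open fewer links; the threshold leaves headroom so traffic bursts on the consolidated links do not degrade performance.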

The second model addresses the energy-saving problem on the server side of the data center. It uses dynamic placement and live migration of virtual machines to save energy while taking the current state of the network into account. The model migrates virtual machines onto a subset of servers and puts unused servers into standby mode. At all times, the resource requirements of all virtual machines are satisfied, and the overhead that live migration introduces to the network is minimized.
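The server-side consolidation step can likewise be sketched as a bin-packing heuristic (first-fit decreasing here, purely for illustration). All names and units are invented for the example, and the sketch ignores the migration network overhead that the actual model minimizes:

```python
# Illustrative sketch of VM consolidation via first-fit decreasing.
# Names and units are assumptions; the thesis model additionally minimizes
# the network overhead that live migration introduces.
def plan_migrations(vms, current, hosts, capacity):
    """vms: VM -> resource demand; current: VM -> current host.
    Returns the migrations to issue and the hosts freed for standby."""
    load = {h: 0 for h in hosts}
    target = {}
    for vm, demand in sorted(vms.items(), key=lambda v: -v[1]):
        # Place each VM on the first host that can still satisfy its
        # resource requirement, packing work onto as few hosts as possible.
        host = next(h for h in hosts if load[h] + demand <= capacity)
        load[host] += demand
        target[vm] = host
    migrations = {vm: h for vm, h in target.items() if current[vm] != h}
    standby = [h for h in hosts if load[h] == 0]
    return migrations, standby

# Example: three VMs spread over three hosts consolidate onto two, freeing h3.
migrations, standby = plan_migrations(
    {"vm1": 5, "vm2": 4, "vm3": 3},           # demand in abstract units
    {"vm1": "h1", "vm2": "h2", "vm3": "h3"},  # where each VM runs now
    ["h1", "h2", "h3"], capacity=10)
```

Because placement never exceeds host capacity, every VM's resource requirement stays satisfied throughout; the hosts left with zero load are the candidates for standby mode.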

The third model combines the server and network sides to maximize energy saving while preserving network performance. It takes advantage of network traffic and virtual machine consolidation techniques to concentrate workloads on a subset of devices and turns the rest off or puts them into standby mode. The model is part of a framework that monitors the state of the data center by collecting and predicting run-time utilization data for server resources (CPU, memory, network, and disk) and network traffic, and uses this data as input. The model produces a new virtual machine placement and flow-routing matrix that ensures maximum data center energy saving while maintaining performance. Migration commands are then issued to adjust the placement of the virtual machines according to the solution. Finally, unused servers are moved to standby mode and unused switches are turned off.
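The bookkeeping behind one pass of the combined model can be sketched as: given a proposed placement and routing matrix, identify the devices the solution leaves idle and estimate the resulting saving. The per-device power figures and all names below are placeholder assumptions, not measurements from the thesis:

```python
# Illustrative accounting for one pass of the combined model: which devices
# does a proposed placement + routing matrix leave idle, and roughly how much
# power does idling them save? Power draws are assumed placeholder figures.
SERVER_STANDBY_SAVING_W = 100   # assumed watts saved per server in standby
SWITCH_OFF_SAVING_W = 150       # assumed watts saved per switch turned off

def energy_saving(placement, routing, servers, switches):
    """placement: VM -> server; routing: flow -> list of switches on its path."""
    busy_servers = set(placement.values())
    busy_switches = {sw for path in routing.values() for sw in path}
    standby = [s for s in servers if s not in busy_servers]
    off = [sw for sw in switches if sw not in busy_switches]
    saved = (len(standby) * SERVER_STANDBY_SAVING_W
             + len(off) * SWITCH_OFF_SAVING_W)
    return standby, off, saved

# Example: both VMs land on h1 and one flow uses sw1-sw2, idling h2 and sw3.
standby, off, saved = energy_saving(
    {"vm1": "h1", "vm2": "h1"}, {"f1": ["sw1", "sw2"]},
    ["h1", "h2"], ["sw1", "sw2", "sw3"])
```

In the framework, this accounting would run after the solver proposes a placement and routing, and before migration and power-down commands are issued.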

Copyright Owner

Motassem Al-Tarazi

Language

en

File Format

application/pdf

File Size

103 pages

Available for download on Saturday, April 25, 2020
