Dissertation: Energy-saving methods using software-defined networking technology in cloud computing environments
The extended power-control system (Ext-PCS) manages and controls both the servers and the network entities of a data center.

Figure 2.16: Extended Power-Control System (Ext-PCS)

Ext-PCS consists of four logical modules, named optimizer+, monitoring+, power control+ and configuring+, which are shown in Figure 2.16. The role of optimizer+ is to calculate the migration plan and to find an energy-efficient network subset that satisfies the requirements of the servers. Its inputs are the topology, the states and the power models of the switches and servers. Its outputs are the server migration plan and the set of active components (servers and network devices), which are sent to both the power control+ and configuring+ modules. Power control+ changes the power states of switches and servers, while configuring+ implements the server migration and selects the paths for the whole data center network. The following workflow shows the operation of the system in detail.

Step 1: Monitoring+ identifies the network states (on/off switches and links), the traffic flows, the server states (active/inactive) and the VM states, then sends this information to the optimizer+ module.

Step 2: Optimizer+ runs the proposed topology-aware VM placement algorithm, presented in Section 2.4.3, to determine the migration plan and the network configuration, and sends the results to both the power control+ and configuring+ modules.

Step 3: After receiving the results from optimizer+, configuring+ runs the routing algorithm to route the traffic flows and then migrates the VMs according to the migration plan. Configuring+ sends a confirmation signal to the power control+ module once the routing and migration processes have finished.

Step 4: Power control+ changes the states of the network devices and servers when it receives the confirmation signal from configuring+.

Use case

This section describes an example of the proposed platform. The inputs of the system are the power models of the devices, the traffic demand and the VM distribution. The resources of each VM are CPU and memory (RAM). According to [66], a VM is in one of two states: active or inactive. An active VM (handling tasks) uses both CPU and RAM; an inactive VM (idle) keeps only its RAM in use.

Table 2.4: Traffic demand

Source VM   Destination VM   Bandwidth demand
VM 08       VM 14            100 Mbps
VM 01       VM 09            100 Mbps
VM 07       VM 10            100 Mbps

Figure 2.17 shows an example of a part of a data center. The network is a k = 4 Fat-tree topology supporting 16 physical servers. Each physical server hosts three VMs with their corresponding CPU and RAM requirements (the traffic demand of the system is described in Table 2.4). The system maintains the MST (Minimum Spanning Tree) of the Fat-tree topology described in Section 2.2.2.2. In this example, the red switches are active switches carrying traffic flows, while the green switches are idle.

Figure 2.17: Example

On the one hand, Figure 2.18 shows the migration strategy of the popular first-fit VM migration algorithm [67], which focuses on minimizing the number of active physical servers. The resulting VM allocation plan is: VM07 to S8 (server 8), VM08 to S6, VM09 to S2, VM10 to S7 and VM01 to S3. After this process, the two physical servers S4 and S5 can be turned off to save energy.
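For concreteness, here is a minimal sketch of the first-fit packing idea in Python; the data layout and names are illustrative assumptions, not the dissertation's C# simulator code.

```python
# Minimal first-fit VM consolidation sketch (illustrative data layout).
def first_fit(vms, servers):
    """Pack each VM onto the first server with enough spare CPU and RAM,
    so that as few servers as possible end up active."""
    plan = {}
    for vm in vms:                       # vm: {"name", "cpu", "ram"} demands
        for srv in servers:              # srv: capacities plus hosted-VM list
            used_cpu = sum(v["cpu"] for v in srv["vms"])
            used_ram = sum(v["ram"] for v in srv["vms"])
            if (used_cpu + vm["cpu"] < srv["cpu_cap"]
                    and used_ram + vm["ram"] < srv["ram_cap"]):
                srv["vms"].append(vm)
                plan[vm["name"]] = srv["name"]
                break
    return plan

servers = [{"name": f"S{i}", "cpu_cap": 1.0, "ram_cap": 32.0, "vms": []}
           for i in range(1, 9)]
vms = [{"name": "VM07", "cpu": 0.25, "ram": 8.0},
       {"name": "VM08", "cpu": 0.25, "ram": 8.0}]
print(first_fit(vms, servers))  # both VMs land on S1; S2..S8 stay idle
```

Servers left with no hosted VMs after the packing (S4 and S5 in Figure 2.18) can then be powered off.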
With this first-fit plan, however, all the switches must remain turned on to maintain the traffic between the servers.

Figure 2.18: First-fit migration algorithm [67]

On the other hand, Figure 2.19 shows the proposed idea and algorithm. The topology-aware VM migration algorithm allocates VMs to servers based on both the traffic demand and the network topology. The allocation plan is: VM14 to S3, VM09 to S1, VM10 to S2, VM11 to S4 and VM12 to S8. After the re-allocation, the servers S5 and S6 and the edge switch E3 can be turned off. As a result, energy is saved on both the servers and the network.

Figure 2.19: Topology-aware placement algorithm

Topology-aware VM migration algorithm

In the Fat-tree topology with the MST working topology presented in Section 2.2.2.2, the number of switches that can be turned off depends on how many switches the required servers connect to. In other words, the fewer switches and PODs connected to the working servers, the more switches can be turned off. Figure 2.19 shows that switch E3 can also be turned off by using the proposed topology-aware VM migration algorithm.

The constraints of this algorithm concern the resources of the devices. Each physical server hosts many virtual machines (VMs), so the total resource requirement of the hosted VMs must be less than the physical capacity of the server in terms of CPU and RAM. These constraints are given in Eq. (2.25) and Eq. (2.26):

\forall VM_i \in S^p: \quad \sum_{i=0}^{n} VM_i^{CPU} < S^p_{CPU} \qquad (2.25)

\forall VM_i \in S^p: \quad \sum_{i=0}^{n} VM_i^{RAM} < S^p_{RAM} \qquad (2.26)

The total bandwidth demand of all VMs hosted on a server must likewise be less than the network capacity of the server, as stated in Eq. (2.27):

\forall VM_i \in S^p: \quad \sum_{i=0}^{n} VM_i^{Bw} < S^p_{NIC} \qquad (2.27)

In this dissertation, the proposed topology-aware VM migration algorithm migrates VMs with two objectives: (1) minimize the number of physical servers; and (2) reduce the number of switches used for interconnecting these physical servers. The inputs of the algorithm are the traffic matrix and the states of the VMs and servers. The idea is to consolidate the physical servers hosting VMs into fewer PODs and edge switches, and then either turn off the freed switches or put them into a low-power working state for energy efficiency.

Each migration consists of a source server, which hosts the current VM, and a destination server, to which the VM is migrated. First, the algorithm builds a list of source servers, Lsrc, sorted by increasing number of hosted active VMs (see Pseudocode 1). Servers in Lsrc hosting the same number of active VMs are then re-sorted by proximity to a neighboring server: near → middle → far. The list of destination servers, Ldst, is sorted by decreasing number of active VMs. After that, all active VMs that can be migrated from Lsrc to Ldst are found (a bubble-sort-based scan in the implementation); each candidate migration mig: vm_i → S_i^p is checked against the constraints of Eqs. (2.25), (2.26) and (2.27). The migration process for inactive VMs is analogous to that for active VMs. A runnable sketch of this procedure is given below, followed by the formal statement in Pseudocode 1.
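The following Python sketch illustrates the procedure just described, under stated assumptions: the server and VM records are hypothetical, and the "near → middle → far" key is simplified to the index of the server's edge switch.

```python
# Sketch of the topology-aware migration loop (assumed data model; the
# neighbor-distance key is simplified to an edge-switch index).
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class VM:
    name: str
    cpu: float   # CPU demand
    ram: float   # RAM demand
    bw: float    # bandwidth demand (e.g. Mbps)

@dataclass
class Server:
    name: str
    cpu_cap: float
    ram_cap: float
    nic_cap: float
    edge_switch: int              # which edge switch the server hangs off
    vms: List[VM] = field(default_factory=list)

def can_migrate(vm: VM, dst: Server) -> bool:
    """Feasibility of mig: vm -> dst, per Eqs. (2.25)-(2.27)."""
    return (sum(v.cpu for v in dst.vms) + vm.cpu < dst.cpu_cap
            and sum(v.ram for v in dst.vms) + vm.ram < dst.ram_cap
            and sum(v.bw for v in dst.vms) + vm.bw < dst.nic_cap)

def topology_aware_migration(servers: List[Server]) -> List[Tuple[str, str, str]]:
    # Lsrc: emptiest servers first; ties broken by edge-switch proximity.
    l_src = sorted(servers, key=lambda s: (len(s.vms), s.edge_switch))
    plan = []
    for src in l_src:
        for vm in list(src.vms):
            # Ldst: fullest servers first, re-evaluated after each migration
            # (the pseudocode's "Update State" step).
            l_dst = sorted(servers, key=lambda s: -len(s.vms))
            for dst in l_dst:
                if dst is not src and len(dst.vms) >= len(src.vms) \
                        and can_migrate(vm, dst):
                    src.vms.remove(vm)
                    dst.vms.append(vm)
                    plan.append((vm.name, src.name, dst.name))
                    break
    # Servers and edge switches left without VMs/traffic (S5, S6 and E3 in
    # Figure 2.19) become candidates for power-off.
    return plan
```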
Topology-aware VM Migration Algorithm
Input: State(Sw, Link, S^p_VM)
Begin
    // Create the list of source servers, sorted by increasing number of active VMs
    Lsrc ← sort(S^p_VM, key = activeVMs, order = increasing)
    // Servers with the same number of active VMs are re-sorted by proximity: near → middle → far
    Lsrc ← sort(Lsrc, key = neighbor, order = near → middle → far)
    // Create the list of destination servers, sorted by decreasing number of active VMs
    Ldst ← sort(S^p_VM, key = activeVMs, order = decreasing)
    For all vm_i ∈ Lsrc do
        For all S_i^p ∈ Ldst do
            If mig: vm_i → S_i^p is feasible (Eqs. (2.25)–(2.27)) then
                mig: vm_i → S_i^p
                Update State(Sw, Link, S^p_VM)
            End If
        End For
    End For
End
Output: State(Sw, Link, S^p_VM)

Pseudocode 1: Topology-aware VM migration algorithm (Proposed Algorithm 2)

VM migration cost and power modeling of a server

Physical server

In [68], De Maio et al. surveyed and formalized the energy model of a server with several objectives. In this dissertation, the model is integrated with a binary state indicator State(S_i^p, t), which is 1 if the i-th server is turned on at time t and 0 if it is turned off. Util is defined as the utilization of a server, in percent, in terms of CPU or memory. The energy consumption of the servers at time t is defined in the equation below, where γ and δ are the energy coefficient and the baseline power, respectively:

E_s(t) = \sum_{\forall S_i^p \in S^p} State(S_i^p, t) \left( \gamma \times Util + \delta \right) \qquad (2.28)

VM migration cost

The power cost of a migration comprises the power used by the source physical server, which initiates the migration, and the power used by the destination server. This cost is caused by the increased usage of server resources (CPU, memory) and I/O resources (network). The migration energy consumption is calculated as the sum of the energy consumption of the source server and that of the destination server:

E_{mig} = P^s_{mig} + P^d_{mig} \qquad (2.29)

The difference between a migrating VM and a normally working VM is:

\Delta E_{mig} = (P^s_{mig} - P^s_{nor}) + (P^d_{mig} - P^d_{nor}) \qquad (2.30)

where P^s_{mig} and P^s_{nor} are the powers of the source server at migration time and at normal working time, respectively, and P^d_{mig} and P^d_{nor} are the corresponding powers of the destination server.

Experimental Results

Simulation environment

A C#-based Ext-PCS simulator is implemented to investigate the energy performance of data centers under different traffic conditions, VM requirements, network topologies and device energy models. The tool also allows us to implement and analyze different energy-aware optimization algorithms and VM migration strategies. For the performance evaluation, a scenario generator module is built, which randomly creates a scenario in two steps:
- VM distribution, with the CPU and RAM requirements of each VM;
- Traffic flows between the active VMs, where the traffic matrix is generated according to realistic traffic distributions [42].
The benefit of this traffic generator is that it helps us choose an appropriate energy-saving approach for a specific traffic pattern. A great advantage of the Ext-PCS simulator is its capacity to analyze large data centers: Fat-tree topologies with k = 8 and k = 16 are analyzed, which support 128 and 1024 physical servers, respectively. In our lab, a Dell PowerEdge R710 server with a quad-core Intel Xeon 5520 and 32 GB of RAM is used as the reference server for energy measurement. Its measured energy consumption is described in Table 2.5. Fitted to Eq. (2.28), the energy coefficient γ is 1.113 and the baseline power δ is 205.1 W.
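To make the power model of Eq. (2.28) concrete, the short fit below re-derives γ and δ from the R710 measurements listed in Table 2.5 (shown next). It assumes, consistently with the quad-core host, that each full-load VM contributes 25% utilization; this mapping is an illustrative assumption, not stated in the measurement procedure.

```python
# Least-squares fit of P = gamma * Util + delta (Eq. 2.28) to Table 2.5.
# Assumption: one full-load VM = 25% utilization on the quad-core R710.
util = [0.0, 25.0, 50.0, 75.0, 100.0]        # utilization in percent
power = [205.1, 232.9, 260.7, 288.6, 316.4]  # measured power in watts

n = len(util)
mean_u = sum(util) / n
mean_p = sum(power) / n
gamma = (sum((u - mean_u) * (p - mean_p) for u, p in zip(util, power))
         / sum((u - mean_u) ** 2 for u in util))
delta = mean_p - gamma * mean_u

print(f"gamma = {gamma:.3f}")  # ~1.113, as reported in the dissertation
print(f"delta = {delta:.1f}")  # ~205.1 W baseline power
```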
Table 2.5: Power profile of the Dell PowerEdge R710 server

Number of serving VMs        Power (W)
Power off                    0
No VM serving (baseline)     205.1
1 VM (full load)             232.9
2 VMs (full load)            260.7
3 VMs (full load)            288.6
4 VMs (full load)            316.4

Experimental results

In this dissertation, the energy consumption of the network under the proposed algorithm is compared with the full-mesh scenario, in which all switches and servers work at maximum speed and performance. The network utilization (NU), in percent (%), is the ratio of the bandwidth currently occupied by the traffic flows to the maximum bandwidth that the DCN can handle.

Figure 2.20: k = 8, comparison with the full-mesh scenario

As shown in Figure 2.20 and Figure 2.21, the ratios of the power consumed by the proposed algorithm to the power consumed in the full-mesh scenario are remarkable. In these figures, the blue line, the consumed-energy ratio of the network devices, shows savings of up to 46%, while the red line depicts the consumed-energy ratio of the servers.

Figure 2.21: k = 16, comparison with the full-mesh scenario

The proposed topology-aware VM migration algorithm is also compared with the Honeyguide migration-aware algorithm [67], which likewise applies to the Fat-tree topology and is based on the first-fit algorithm widely used for VM consolidation in data centers [67]. As shown in Figure 2.22 and Figure 2.23, the two algorithms are evaluated on two sizes of Fat-tree DCN, k = 8 and k = 16, which support 128 and 1024 servers, respectively. The blue line, the ratio of the energy consumed by the network devices, shows that the proposed topology-aware algorithm saves up to 30% of the network power consumption compared with the Honeyguide algorithm. The ratio of the energy consumed by the servers, shown by the red line, is approximately equal to one. As a result, the proposed algorithm reduces the energy consumption of the network part of the DC while keeping the energy consumption of the server part the same as the Honeyguide migration algorithm.

Figure 2.22: k = 8, comparison with Honeyguide

Figure 2.23: k = 16, comparison with Honeyguide
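For clarity, the two quantities reported above can be summarized in a few lines; the function names and the simple aggregation are illustrative assumptions, not the simulator's code.

```python
# Illustrative definitions of the reported quantities (assumed names).
def network_utilization(flow_bw_mbps, dcn_capacity_mbps):
    """NU in percent: occupied bandwidth over the DCN's maximum bandwidth."""
    return 100.0 * sum(flow_bw_mbps) / dcn_capacity_mbps

def consumed_power_ratio(power_proposed_w, power_baseline_w):
    """Ratio plotted in Figures 2.20-2.23; e.g. a network ratio of 0.54
    versus full mesh corresponds to the reported 46% saving."""
    return power_proposed_w / power_baseline_w
```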
Conclusion

In this section, the extended power-control system (Ext-PCS) of a DCN has been proposed. The system is aware of energy consumption and supports administrators in monitoring, controlling and applying several energy-efficient strategies, such as power scaling and power scaling with energy-profile-aware algorithms. The chapter has presented two main energy-efficient approaches: (1) the energy-aware routing algorithm, namely power scaling and energy-profile-aware (PSnEP), which is based on power scaling and on the power profiles of the network devices. The experimental results show that the proposed PSnEP algorithm effectively reduces the energy consumption of the network and works well with several networking devices; its energy savings reach up to 41%, more efficient than the common power scaling algorithm. (2) The topology-aware VM migration algorithm, which migrates VMs with two goals: (a) minimizing the number of physical servers; and (b) reducing the number of switches used for interconnecting these physical servers, so that more devices can be turned off for energy efficiency. The most significant advantage of this algorithm is that the migration process saves as much server energy as the common first-fit migration while also reducing the energy consumption of the network devices. The experimental results show that up to 46% of the power consumed by the network devices can be saved compared with the full-mesh scenario, while the energy-saving level of the servers remains unchanged.

ENERGY-EFFICIENT NETWORK VIRTUALIZATION FOR CLOUD ENVIRONMENTS

As described in Chapter 2, the proposed power-control system works well in a data center network and accommodates the rapid growth in the number of DC servers as well as in the number of Internet services. In cloud computing environments, service models such as IaaS, NaaS and PaaS have emerged in the last few years as a promising paradigm. For these kinds of services, virtualization technologies, including network virtualization and data center virtualization, have developed quickly. In this chapter, energy efficiency in network virtualization technology is considered, while energy efficiency in data center virtualization will be presented in the next chapter.

Network virtualization is a technology with huge potential for green networking [5] [6] [7]. Network Virtualization (NV) allows multiple separate virtual networks to run on the same physical substrate network. From a theoretical perspective, the central question network virtualization must address is how to map a virtual network onto the physical infrastructure under specific requirements and constraints. In this work, the following open issues of network virtualization are of interest:

One major challenge of network virtualization is the virtual network embedding problem, which deals with the efficient mapping of virtual resources onto substrate network resources [69]. Specifically, efficient utilization of physical network resources depends strongly on the virtual network embedding algorithms under constraints such as node and link resources, admission control of requests, and so forth. Solving the virtual network embedding (VNE) problem is NP-hard, as it is related to the multiway separator problem [70]. For that reason, current research mostly follows heuristic and meta-heuristic approaches. Furthermore, virtual network embedding research often considers virtual node and virtual link embedding in combination, as well as optimization approaches to NV resource allocation. To the best of our knowledge, only a few studies address the energy-efficient VNE problem. The main reason is the lack of an energy-aware NV platform that allows researchers to develop new NV power-efficiency approaches and to evaluate their performance and efficiency.

The combination of more advanced technologies, such as SDN and network virtualization, enables the realization of a programmable and flexible network. Moreover, optimized virtualization technology will also help reduce energy consumption.
The implementation of network virtualization with SDN technology offers not only an integrated orchestration experience but also a unification of substrate and virtual innovations. FlowVisor [71] [72] is one of the most successful SDN-based network virtualization layers, widely used in network virtualization testbeds such as GENI [73], Ofelia [74] and OF@TEIN [75]. At the Future Internet Lab - HUST, we also deployed the ReServNet platform, which is based on FlowVisor. Architecturally, FlowVisor acts as a transparent proxy and is regarded as one of the first hypervisor-like virtualization architectures for network infrastructure, resembling the hypervisor model that is common for computing and storage (Figure 3.1). Network devices generate OpenFlow protocol messages, which go to FlowVisor and are then routed to the appropriate OpenFlow controller according to their network slice.

Figure 3.1: FlowVisor – hypervisor-like network layer [71]

From the implementation perspective, although there is a common consensus in the network research community that SDN is likely to be the technology that introduces new virtualization concepts, there is still a gap between theory and practice in SDN-based network virtualization. An important question is how to realize and evaluate the energy-saving level of network virtualization mechanisms by using SDN in real cloud computing environments. The current lack of an energy-aware network virtualization platform creates significant difficulties in deploying and evaluating network energy efficiency.

With the above motivations, the concept of energy-aware network virtualization with energy monitoring and control capabilities is proposed in this section. The contributions are as follows:

- Constructing an Energy-Aware Network Virtualization (EA-NV) platform for cloud environments. With Virtual Network Requests (VNRs) as inputs, the system performs separate VNE algorithms and evaluates their performance as well as their power-saving level.

- Proposing two novel heuristic-based energy-efficient VNE algorithms, namely the Heuristic Energy-Efficient (HEE) VNE algorithm and the Reducing Middle Node Energy Efficiency (RMN-EE) VNE algorithm. These proposed algorithms increase the energy-saving level while maintaining reasonable resource optimization, measured by the acceptance ratio.

The rest of this chapter is organized as follows. Section 3.1 provides the background on network virtualization and the concept of virtual network embedding. Section 3.2 presents the construction of the energy-aware SDN-based network virtualization platform. The modeling and problem formulation are described in Section 3.3. Sections 3.4 and 3.5 provide the proposed energy-efficient virtual network embedding algorithms and their performance evaluation, respectively. The last section concludes the chapter.

Network Virtualization and Virtual Network Embedding

Network virtualization is a highly flexible and cost-effective technology that satisfies the continuously rising demand for Internet services in the current network. NV provides an abstraction