
Design and test bed experiments of server operation system using virtualization technology

Abstract

According to recent studies, much of the electric power at data centers is consumed by the server cooling system, and the power consumption rate increases as the number of equipment items and servers grows. The proposed server operation system has therefore been designed to decrease the power consumption rate and CO2 emission volume by minimizing the number of such equipment items and simplifying the physical composition of the system. Virtualization technology was adopted in both the design and implementation phases to improve the resource efficiency of the system. As a result, a significant amount of cost was saved while constructing the server operation system described in this paper. The system's performance was evaluated with a virtual machine prior to its practical use through test bed experiments, and the results confirm our expectation that the virtual hardware works as efficiently as the actual hardware.

Background

It has been analyzed that about 65 % of the electric power consumed at the data center (where data is processed) goes to the cooling system that suppresses server overheating, which has a negative effect on the environment. In particular, electric power is consumed during the operation of storage servers, main servers and network equipment [e.g., network interface cards (NICs)], so a 'greening' plan should be considered for this process in order to improve energy efficiency [1, 2]. For PCs and network equipment, a large amount of data can be processed rapidly with high-performance CPUs. However, processing such volumes of data requires high electric power consumption and increases heat output, causing a negative impact on the environment. It is estimated that IT sector-related electric power consumption and CO2 emissions will increase, with power consumption anticipated to rise from 3.1 % (2007) to 11.1 % (2030) and CO2 emissions from 1.1 % (2007) to 4.7 % (2030) [2–4].

Most firms, institutions and server users have not been able to make the best use of their server or network resources. The usual average utilization rate is under 20–30 %, and the rest remain as unused resources. Moreover, systems are often constructed in excess of the required server or network capacity, and they mostly fall into disuse after 5 years. Maintaining such systems for about 5 years is not cheap either. Therefore, physical and logical virtualization work is required to address these problems [5–16].

Thus, in this paper, a server operation system that has considered the greening plan from the design phase of the operating servers has been devised. During the test bed implementation, the traffic in this system was generated by IGMP, responses between the DHCP server and DHCP client, ARP, ping and mail transmissions, and some of the core areas of this traffic have been analyzed with Wireshark.

Related research

Virtual computer

Similar to PCs and server computers, a virtual computer or virtual machine is an assembly of virtual hardware components created by the virtualized operating system; it also has a BIOS, CPU, hard disk, network interface card, and so on.

Virtualization

The capacity of an emulation test bed scales when experimental nodes are mapped onto limited physical resources [5]. For example, the DETER containers system [5, 7] can support experiments that are two orders of magnitude larger than the physical test bed.

Most emulation test beds support various state-of-the-art virtualization techniques. With root access on test bed machines, users are able to create different types of virtual machines on the provisioned test bed machines [7]. For instance, if multiple experimental nodes with different operating systems are hosted on a single test bed machine, the user can apply full virtualization solutions, such as KVM [8], VMware [9], or VirtualBox [10], to create virtual machines with different guest operating systems. If the user needs to minimize the virtualization overhead, the virtual machines can be created using lightweight OS-level virtualization techniques such as LXC [11]. VMware, Inc. has the largest market share in virtualization technology; it already has technologies that support Infrastructure-as-a-Service (IaaS), Platform-as-a-Service (PaaS) and Software-as-a-Service (SaaS), all of which are verified as virtualized services [13–20]. Since existing servers and network systems are constructed to fit their owners' purposes depending on the programs they run, high-cost investments in servers and networks and maintenance costs such as backup and storage processes have increased, resulting in an overall cost increase for IT apparatus construction and maintenance (Figs. 1, 2).

Fig. 1 Future of the Data Center

Fig. 2 VMware [9]

The virtualization technique lets users share the same resources by separating applications and services from the actual resources through a middle layer, treating IT resources as logical rather than individual resources [18–24]. Server partitioning is a typical example of the virtualization technique: it lets resources be shared by partitioning a single large server into many smaller systems. Additionally, the storage virtualization technique makes it possible to produce a virtualized disk in a 'disk pool' created by drawing together unused disks left in several physical storage systems. By accessing this virtualized disk, the application server can operate as if 1 TB of storage were attached to it, even though the actual maximum usable space is 300 MB.

Conversion to a virtualized system brings the following benefits, and we have therefore performed the virtualization work for the server and network in this paper with an emphasis on them. The benefits are: reduction in the costs of data center management and new investments; improved human resource efficiency for repetitive tasks; and achievement of environment-friendly IT [11] by enabling low-power operation through securing rack space and a working environment with constant temperature and humidity.

The virtualization technology can be largely divided into application virtualization [25–39], server virtualization [26–28], network virtualization [29–31], desktop virtualization [32–35] and storage virtualization. We describe the technical characteristics, trends and current foundation technologies for each category.

Application virtualization

The application virtualization technique is provided through a virtualization process for the applications individually installed on the user's PC. The user can instantly use his/her virtualized PC without installing the necessary applications each time. This technique has been developed over the past decades and has been termed server-based computing, presentation virtualization or application streaming. Currently, supporting technologies for rich applications (e.g., 3D CAD) and smart phones (e.g., IPv6) are being developed and applied, and they are implemented with the Cloud SaaS foundation technology.

Server virtualization

Server virtualization is an abstraction technique in which the level of computer resource utilization and flexibility is improved through the distribution of a single computing system into several resources by separating the operating system from physical hardware such as servers, storage and others; such techniques are expected to assume an important role in emission reduction [1, 2, 4]. The information resources to which virtualization can be applied include operating systems, hardware such as servers and storage networks, and application programs. For the hardware virtualization technique, virtualized hardware is provided to each virtual computer, offering an independent environment with the advantage of running multiple operating systems.

Server virtualization means hiding server resources, such as the number and type of physical servers, from the user, as in Fig. 3. Once server virtualization is completed, the user can run his/her application programs regardless of the number of servers and their types.

Fig. 3 Server virtualization

Server virtualization is a technique that integrates the workloads of dozens of physical servers in the data center into a few virtualized servers. Such virtualization has the benefits of reducing the costs of management and rack space, and of enhancing the applicability of resources, including power consumption, from the standpoint of environment-friendly IT. Server virtualization started with hardware partitioning technology in the UNIX mainframe age and advanced to the host-based virtualization method (i.e., the software emulation method). Today, the hypervisor-type server virtualization technology, a bare-metal-based virtualization engine, has become the mainstream technology. The foundation technology is realized through IaaS implementation.

Unrivaled in virtualization technology, VMware, Inc. is now facing challenges from corporate giants such as Citrix and Microsoft. Citrix has the advantage in the desktop virtualization field and is using it to expand into server virtualization. Its major product, XenServer, serves as a platform for cloud, server and desktop virtualization, and Citrix is reinforcing its cloud capability by adding interworking with the Amazon cloud service. Microsoft is narrowing the technological gap with VMware by offering the Hyper-V virtualization system; its advantages are the worldwide Windows-based computing environment and a wide variety of solutions. However, the fact that Hyper-V cannot support virtual hard disks is its weakness. Oracle Corp. is engaged in the market with Oracle VM, which is an open technology. Red Hat, Inc. possesses a Linux operating system that integrates the kernel-based virtual machine with its corporate virtualization technology, but its influence is minimal. As the name represents, VMware is a front-runner in the virtualization field, and thus we have chosen their line of products in this paper.

Network virtualization

Network virtualization means segmenting one or more physical networks that integrate available bandwidths (i.e., channel integration). Recently, smart and mobile devices have spread everywhere, so the mobility technology that continuously transmits data quickly and correctly is getting attention. Although extension of the physical networks is needed to manage this data, a logical network virtualization technique is required above all. Such change raises questions about existing network construction methods, since mobile devices and their contents are the mainspring of server virtualization. Existing network structures have a hierarchical topology; that is, several Ethernet switch layers are set up as a tree structure. This client–server based environment is not suitable for the new network structure.

Desktop virtualization

Desktop virtualization, also called server-side desktop virtualization, enables a user to virtually own dissimilar desktops running different operating systems such as Windows Vista or Windows 7. Client-side desktop virtualization makes it possible to operate dissimilar virtual desktops within one PC, and a separation between personal and company operating spaces is possible with this method. The technology that operates several client-based desktops began to appear in 1997, and the server-side virtual desktop technology, which accesses the virtual desktop remotely, emerged in 2006. Currently, the bare-metal-type client hypervisor, which is mounted directly on the hardware, has been developed, and it continues to advance as an integrated technology that interworks with the server-side technology. This foundation technology is being implemented with Cloud DaaS. Several PCs can be operated with a single PC through widely used desktop virtualization.

Storage virtualization

Storage virtualization is a technology that makes it possible to implement a service by virtually allocating only the minimum necessary space instead of the full requested space, through a technique called 'thin provisioning'. Additionally, it provides an environment that can be used to integrate dissimilar storage systems. Currently, technologies such as NAS, FC-SAN and IP-SAN continue to develop to take a supporting role as storage services in the virtual infrastructure environment. Storage virtualization provides the foundation technology for Cloud IaaS implementation. An explosive increase in storage usage has also brought increased pressure on everyday storage and data management; as a result, satisfying the service levels for usability and provisioning has become a huge task. Companies are now looking into disk and tape storage virtualization technology to avoid such burdens.

Server operation system using virtualization technology

Designing of the server operation system

We have configured a network by creating several servers using the virtualization technique on three physical servers (IBM System x3250 servers). A usable domain name has been assigned so that this network has servers that include various service elements under one domain name. The web server is constructed with Apache, PHP and MySQL, and the DNS server with bind9. E-mails can be exchanged using Postfix.

Also in this paper, to construct an efficient network environment and to run the test bed experiments, VMware Workstation 7.1.0 and VMware vSphere Client 4.1.0 (virtualization software from VMware, Inc. [9]) were used on the PC, and VMware ESXi 4.1.0 was installed on the three IBM servers mentioned above, all of which were used to construct a virtualized server operation system. The details of the hardware and software used for the construction of the system are shown in Table 1.

Table 1 The conditions of hardware and software

Figure 4 is the server operation system diagram; a total of three networks were configured by combining three nodes for each network (nine nodes in total). Since the nodes were combined with a single hub, a VLAN was established to separate the networks.

Fig. 4 The server operation system diagram

Figure 5 is the proposed server operation system. This system was constructed by integrating various kinds of servers in three IBM System x3250 machines, and for each virtual machine the network was configured by distributing the networks with VLANs. Additionally, three LANs were established in the virtual machines, and a WAN was established to connect the LANs.

Fig. 5 The server operation system

A server was constructed and operated in each LAN. All of the servers were combined with a single hub, so a VLAN was established to separate the networks. The node IPs in each server were set as in Fig. 3, so that a server in which one network has one domain name and contains various other service elements was constructed. In addition, the cost involved can be reduced when low-cost server equipment and virtualization software products are used. Meanwhile, the conditions of the physical networks of the IBM System x3250 machines are shown in Figs. 6 and 7.

Fig. 6 The system construction condition of the first and second physical servers

Fig. 7 The system construction condition of the third physical server

Traceroute test bed experiments

First, the traceroute test was performed in a condition where the link was complete. Figure 8 below shows the network configuration diagram and the anticipated paths of the packets, all of which are to be put into practice.

Fig. 8 Network configuration diagram and anticipated paths of the packets

The test was carried out assuming that the route between Routers 1 and 3 had been severed due to an accident. For the test, the router connections were disconnected in the VM settings, as shown in Figs. 9 and 10.

Fig. 9 Router disconnection at 172.16.0.1

Fig. 10 Router disconnection at 192.168.0.1

The network then becomes as shown in Fig. 11 below. With OSPF, the flow of packets changes in accordance with the altered network configuration.

Fig. 11 Modified network configuration and anticipated packet flow

Once the line has actually been altered, OSPF automatically changes the routing table according to the altered line. Thus, because of the severed line, the packets can no longer pass through Router 3, but after the modification they can be delivered by making a detour through Router 2.
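
As an illustration of how the altered path could be checked programmatically, the sketch below performs a minimal traceroute-style probe in Python with the Scapy library. Scapy is not part of the test bed described in this paper, and the destination address and hop limit are assumptions chosen to match the test bed addressing; running the probe before and after the link is severed would reveal the detour through Router 2.

# Minimal traceroute-style probe (a sketch; Scapy is assumed to be installed,
# and 192.168.0.1 is a hypothetical destination from the test bed addressing).
# Requires root privileges to send raw packets.
from scapy.all import IP, ICMP, sr1

def trace(destination: str, max_hops: int = 10) -> None:
    for ttl in range(1, max_hops + 1):
        # Send an ICMP echo request with an increasing TTL. Each router on the
        # path answers with ICMP "time exceeded" until the destination replies.
        reply = sr1(IP(dst=destination, ttl=ttl) / ICMP(), timeout=2, verbose=0)
        if reply is None:
            print(f"{ttl:2d}  *")                      # no answer within the timeout
        elif reply.haslayer(ICMP) and reply[ICMP].type == 11:
            print(f"{ttl:2d}  {reply.src}")            # intermediate hop
        else:
            print(f"{ttl:2d}  {reply.src}  (destination reached)")
            break

if __name__ == "__main__":
    trace("192.168.0.1")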

Performance evaluation

When analyzing a network with Wireshark, it is important to observe the time column. A network can slow down due to long delays, access errors and excessive packet requests to obtain data. One should check the time gaps between a request and its response, or between an acknowledgement response and a normal response, when network performance degrades because of delays.

Packet time measurement method of Wireshark for performance evaluation

When Wireshark performs a capture, a time stamp value is drawn from the libpcap/WinPcap library. This time stamp is stored together with the trace file so that the packet arrival time can be shown when the file is opened.
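
As a small illustration of how these stored time stamps can be used outside the Wireshark GUI, the Python sketch below reads a saved trace and prints the gap between consecutive packets; the file name trace.pcap is an assumption, and Scapy is used here only because it reads the same libpcap format.

# Sketch: print inter-packet time gaps from a saved capture.
# "trace.pcap" is a hypothetical file exported from Wireshark.
from scapy.all import rdpcap

packets = rdpcap("trace.pcap")
previous = None
for number, pkt in enumerate(packets, start=1):
    if previous is not None:
        gap = float(pkt.time) - float(previous.time)   # pkt.time is the libpcap time stamp
        # An unusually large gap between a request and its response is exactly
        # the kind of delay inspected in the Wireshark time column.
        print(f"packet {number}: +{gap:.6f} s since the previous packet")
    previous = pkt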

Filter

Many packets appear during packet analysis with Wireshark, so it is not easy to find the desired information. Filtering helps in this situation. There are two methods of filtering (display filtering and capture filtering), and they differ somewhat in their usage. In short, the former is used to find desired information among the captured packets and has richer functionality. The latter is used to prevent the stored capture from becoming too large; for this, some definitions have to be set prior to use. Figure 12 shows an example of the settings for the performance evaluation.

Fig. 12 Example of capture filter setting
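
The two filtering approaches can also be mimicked in a script, which may make the distinction clearer. The sketch below uses Scapy (an assumption, not part of the paper's toolchain): the BPF string passed to sniff() plays the role of a capture filter, and the predicate applied afterwards plays the role of a display filter.

# Sketch: capture filtering versus display-style filtering.
from scapy.all import sniff, TCP

# Capture filter: only SMTP traffic (TCP port 25) is recorded at all.
captured = sniff(filter="tcp port 25", count=20)

# Display-style filter: narrow the already-captured packets further,
# here to segments with the SYN flag set.
syn_only = [p for p in captured if p.haslayer(TCP) and p[TCP].flags & 0x02]

print(f"captured {len(captured)} packets, {len(syn_only)} with SYN set")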

Packet list pane on data link layer

In Fig. 13, all the packets captured with Wireshark are displayed in the packet list pane. Here, information such as the source/destination MAC/IP addresses, TCP/UDP port numbers, protocols and packet contents can be obtained.

Fig. 13 Packet list pane on data link layer
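
The same packet-list information (source/destination MAC and IP addresses, TCP/UDP port numbers and protocol) can be extracted from a saved trace with a few lines of Python; the sketch below uses Scapy and a hypothetical file name, as an assumed stand-in for the Wireshark packet list pane.

# Sketch: print a packet-list style summary from a saved capture.
from scapy.all import rdpcap, Ether, IP, TCP, UDP

for pkt in rdpcap("trace.pcap"):                 # "trace.pcap" is an assumed file name
    if not (pkt.haslayer(Ether) and pkt.haslayer(IP)):
        continue
    proto, sport, dport = "other", "-", "-"
    if pkt.haslayer(TCP):
        proto, sport, dport = "TCP", pkt[TCP].sport, pkt[TCP].dport
    elif pkt.haslayer(UDP):
        proto, sport, dport = "UDP", pkt[UDP].sport, pkt[UDP].dport
    print(f"{pkt[Ether].src} -> {pkt[Ether].dst}  "
          f"{pkt[IP].src}:{sport} -> {pkt[IP].dst}:{dport}  {proto}")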

Observing Fig. 14, if the value of the received type/length field is 0x0600 or greater, the frame is interpreted as Ethernet II; if it is less, the field is interpreted as an IEEE 802.3 length. Thus, when an Ethernet II frame carries the type value 0x0800, the protocol is IP.

Fig. 14 Analysis of packets on the data link layer
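
The 0x0600 rule can be demonstrated with a few lines of standard-library Python; the frame bytes below are fabricated purely for illustration.

# Sketch of the type/length rule: values of 0x0600 or greater are an
# Ethernet II EtherType, smaller values are an IEEE 802.3 length field.
import struct

def classify_frame(frame: bytes) -> str:
    # Destination MAC (6 bytes) + source MAC (6 bytes) + type/length (2 bytes).
    type_or_length = struct.unpack("!H", frame[12:14])[0]
    if type_or_length >= 0x0600:
        if type_or_length == 0x0800:
            return "Ethernet II, EtherType 0x0800 (IP)"
        return f"Ethernet II, EtherType 0x{type_or_length:04x}"
    return f"IEEE 802.3, length {type_or_length}"

sample = bytes(12) + b"\x08\x00" + bytes(20)      # fabricated frame header
print(classify_frame(sample))                     # -> Ethernet II, EtherType 0x0800 (IP)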

Packet details pane on network layer

The fourth record value in Fig. 15 shows the ICMP protocol used for the ping operation. The ICMP header elements can be checked by clicking the protocol.

Fig. 15 Analysis of packets on the data link layer

Figure 16 shows the structure of the IP header, and Fig. 17 shows a captured IP packet header. Comparing the elements of the IP header structure with the captured packet header, the relevant values are arranged sequentially from Version to Destination IP Address.

Fig. 16 Structure of IP header

Fig. 17 Analysis of packets on the data link layer
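
For readers who want to reproduce the comparison in code, the standard-library sketch below unpacks the fixed 20-byte IP header in the same order as Fig. 16, from Version to Destination IP Address; the header bytes and the destination address 192.168.0.10 are fabricated for illustration.

# Sketch: unpack the fixed 20-byte IP header field by field.
import socket
import struct

def parse_ip_header(data: bytes) -> dict:
    (ver_ihl, tos, total_len, ident, flags_frag,
     ttl, proto, checksum, src, dst) = struct.unpack("!BBHHHBBH4s4s", data[:20])
    return {
        "version": ver_ihl >> 4,
        "header_length": ver_ihl & 0x0F,
        "type_of_service": tos,
        "total_length": total_len,
        "identification": ident,
        "flags_fragment_offset": flags_frag,
        "ttl": ttl,
        "protocol": proto,                        # 1 = ICMP, 6 = TCP, 17 = UDP
        "header_checksum": checksum,
        "source": socket.inet_ntoa(src),
        "destination": socket.inet_ntoa(dst),
    }

# Fabricated header: version 4, IHL 5, TTL 64, protocol 6 (TCP),
# 192.168.0.3 -> 192.168.0.10, checksum left at zero.
header = struct.pack("!BBHHHBBH4s4s", 0x45, 0, 40, 1, 0, 64, 6, 0,
                     socket.inet_aton("192.168.0.3"),
                     socket.inet_aton("192.168.0.10"))
print(parse_ip_header(header))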

Dissector pane

Figure 18 shows the same contents as the packet details pane above, but here they are indicated as hexadecimal numbers, with the corresponding ASCII values shown alongside.

Fig. 18 Analysis of packets on the data link layer

Performance evaluation of the proposed server operation system: example of mail transmission and reception on a network

As shown in Fig. 19, a user can be added by entering the command below.

Fig. 19 Addition of user for performance analysis

sudo adduser test

SMTP

For the analysis environment, a mail was sent to the user (test) after accessing the mail server of pknu.com (192.168.0.0/16) via SMTP from dhcp-client. The transmitted mail is stored in the user/Maildir/new/ folder. This temporary file is then transferred to the cur folder (user/Maildir/cur) after it is opened and read using POP3 or IMAP. To send a mail using the SMTP protocol, follow the process below.

For telnet, enter the command below to access 192.168.0.3 (mail.pknu.com) on port number 25 (SMTP).

telnet 192.168.0.3 25

Appropriate commands for the SMTP protocol are needed to send a mail. To send a mail, enter the line below.

mail from: root@dslab.com

After receiving the message 250 2.1.0 OK, enter the recipient and the relevant mail server name.

rcpt to: test@mail.pknu.com

data

Subject: subject

body

.(ending line)

Then, enter the message contents after entering data and Subject. The result was successful, as shown in Figs. 20 and 21.

Fig. 20 Mail transmission performance analysis using SMTP (1)

Fig. 21 Mail transmission performance analysis using SMTP (2)
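
The same exchange can be scripted with Python's standard smtplib module instead of typing the commands over telnet; the sketch below mirrors the session above and reuses the test bed addresses, but it is only an assumed equivalent, not part of the reported experiments.

# Sketch: send the same test mail via SMTP using the standard library.
import smtplib

message = (
    "Subject: subject\r\n"
    "\r\n"
    "body\r\n"
)

server = smtplib.SMTP("192.168.0.3", 25)          # mail.pknu.com, port 25 (SMTP)
try:
    server.sendmail("root@dslab.com",             # corresponds to "mail from:"
                    ["test@mail.pknu.com"],       # corresponds to "rcpt to:"
                    message)                      # corresponds to DATA ... <CRLF>.<CRLF>
finally:
    server.quit()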

POP3

To check the mail previously sent using SMTP with the POP3 protocol, enter the command below to access one's own mail server.

telnet 192.168.0.3 110

Access port number 110 (POP3) of mail.pknu.com (192.168.0.3) and enter the command.

user user (test)

pass user (test)

list

retr (msg)

quit

To check the mail sent with SMTP, one should access the mail server remotely. After establishing remote access to the POP3 port, log in with the user name to check the mail. The 'list' command shows the mail list including all received mails; the list numbers are arranged in order of reception time (Figs. 22, 23).

Fig. 22 Mail transmission performance analysis using POP3 (1)

Fig. 23 Mail transmission performance analysis using POP3 (2)
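
The POP3 session above (user, pass, list, retr, quit) can likewise be scripted with Python's standard poplib module; the sketch below reuses the test bed address, while the password is a placeholder assumption.

# Sketch: retrieve the test mail over POP3 using the standard library.
import poplib

mailbox = poplib.POP3("192.168.0.3", 110)         # mail.pknu.com, port 110 (POP3)
try:
    mailbox.user("test")
    mailbox.pass_("test-password")                # assumed credential
    _, listings, _ = mailbox.list()               # LIST: one entry per stored mail
    print(f"{len(listings)} message(s) on the server")
    if listings:
        _, lines, _ = mailbox.retr(1)             # RETR 1: fetch the first message
        print(b"\r\n".join(lines).decode("utf-8", "replace"))
finally:
    mailbox.quit()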

IMAP

To check the mail previously sent by SMTP with the IMAP protocol, one needs to access one's own mail server.

telnet 192.168.0.3 143

The rest of the procedure is the same as above but the port number should be 143 (IMAP).

a01 login test test

a02 select inbox

The numbers (i.e., 01, 02, etc.) attached to the commands are just reference tags given by the user. The alphabetic head must remain, but the numbers are optional (Fig. 24).

Fig. 24 Mail transmission performance analysis using IMAP (1)

When the command 'select inbox' is given, the number of mails and of recently received unread mails is displayed, and a directory and a file are added to the /home/user/Maildir/ folder. If a mail has been read by the user, IMAP also adds it to /home/user/Maildir/cur/ (Fig. 25).

Fig. 25 Fetch command result

As in Figs. 24 and 26, the mail content can be checked using the 'fetch' command after entering the select inbox command. That is, enter fetch 7 body[header] to show the header, and fetch 7 body[text] to see its content.

Fig. 26 Mail transmission performance analysis using IMAP (2)
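
The IMAP session can also be scripted with Python's standard imaplib module, which generates the a01/a02-style tags automatically; the sketch below follows the commands above, with the password and the message number 7 taken as assumptions for illustration.

# Sketch: read the test mail over IMAP using the standard library.
import imaplib

imap = imaplib.IMAP4("192.168.0.3", 143)          # mail.pknu.com, port 143 (IMAP)
try:
    imap.login("test", "test-password")           # corresponds to "a01 login test test"
    imap.select("INBOX")                          # corresponds to "a02 select inbox"
    _, header = imap.fetch("7", "(BODY[HEADER])") # like "fetch 7 body[header]"
    _, text = imap.fetch("7", "(BODY[TEXT])")     # like "fetch 7 body[text]"
    print(header)
    print(text)
finally:
    imap.logout()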

Performance analysis of the network operation system using Wireshark

The major traffic in the network server operation system is generated by IGMP, responses between the DHCP server and DHCP client, ARP, ping and mail transmissions, and some of the core areas of this traffic have been analyzed with Wireshark.

SMTP, POP3 and IMAP connections are established with the TCP 3-way handshake and closed with the TCP link termination mechanism. Figure 27 shows the TCP 3-way handshake and TCP link termination. Examining the first three packets: in the first phase of the initial TCP connection, the SYN flag bit is set in the segment header; this segment is encapsulated in an IP datagram and sent to the server, and the packet involved is packet No. 9. The second phase of TCP connection setup, where SYN and ACK are set, can be seen in packet No. 10, and the third phase is observed in packets No. 49 and 50.

Fig. 27 TCP 3-way handshake and TCP link termination
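
The three handshake phases visible in Fig. 27 can also be identified programmatically from a saved trace; the sketch below uses Scapy (an assumed alternative to the Wireshark view, not part of the reported experiments) and a hypothetical capture file name.

# Sketch: label SYN, SYN/ACK and ACK segments in a saved capture.
from scapy.all import rdpcap, TCP

SYN, ACK = 0x02, 0x10

for number, pkt in enumerate(rdpcap("trace.pcap"), start=1):
    if not pkt.haslayer(TCP):
        continue
    flags = int(pkt[TCP].flags)
    if flags & SYN and not flags & ACK:
        print(f"packet {number}: SYN          (handshake phase 1)")
    elif flags & SYN and flags & ACK:
        print(f"packet {number}: SYN/ACK      (handshake phase 2)")
    elif flags & ACK:
        print(f"packet {number}: ACK or data  (phase 3 or later)")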

Figure 28 shows the contents of the SMTP packets within the traffic generated in the network system, captured using Wireshark. The red lines are the client's requests and the blue ones are the server's responses.

Fig. 28 Analysis of SMTP packets

Conclusion

The reduction of the number of equipment items required for system operation and the improvement of system resource efficiency through simplification of the physical composition have led to a more affordable and cost-effective system. By adopting virtualization technology, we designed a system that can be implemented at a comparatively low cost of around 5 million Korean won for the network equipment, whereas it would otherwise have cost nearly 85 million Korean won, and the test bed experiments were conducted subsequently. However, the cost of 5 million won does not refer to a full-scale network operation system but to the test bed model. It is not easy to estimate the entire cost needed to build a full-scale virtualized server operation system at this time, as further research is needed to answer that question. Meanwhile, the test bed we have constructed can still be used by small organizations such as schools and training institutions. Our aim in using virtualization technology was to examine the feasibility of constructing such a model before actually producing one in a full and workable form (Fig. 29).

Fig. 29 Analysis of cost

The proposed system's performance was evaluated with a virtual machine prior to its practical use. By making it possible to re-deploy resources and to recycle idle resources, we expect that the costs and resources involved can be reduced once such processes are applied to existing systems.

References

  1. Shin J (2009) Technological trends in green IT. Korean Institute of Information Scientists and Engineers, pp 35–36 (in Korean)

  2. Baek JS (2009) Operating system virtualization for mobile desktop environment on windows, M.S. thesis, Department of computer and communications engineering POSTECH Graduate School of Information Technology, pp 2–8 (in Korean)

  3. Wang S, Xu DS, Yan SL (2010) Analysis and application of Wireshark in TCP/IP protocol teaching. In: IEEE EDT, pp 269–272

  4. Huh JH, Seo K (2013) Designing and implementation of networks learning systems by using virtual computers. In: The 9th international conference on MITA 2013, Bali, pp 68–71

  5. Huh JH (2012) Designing and implementation of networks learning systems by using virtual computers, M.S. thesis, Department of Computer Science Education, Graduate School of Education, Pukyong National University at Daeyeon, Busan, pp 2–18. (in Korean)

  6. Yao WM (2013) Increasing scalability in network simulation and testbed experiments, Ph.D. thesis, Purdue University, West Lafayette, pp 18–19

  7. DETER Team (2011) Building apparatus for multi-resolution networking experiment using containers. Technical Report ISI-TR-683, DeterLab

  8. Kernel-based virtual machine. http://www.linux-kvm.org

  9. VMware. http://www.vmware.com

  10. Oracle VM virtualbox. http://www.virtualbox.org

  11. LinuX containers. http://lxc.sourceforge.net

  12. Huh JH, Seo K (2014) Development of competency-oriented social multimedia computer network curriculum. J Multimedia Inf Syst 1(2):133–142

  13. Kim BH (2014) Establishment of IT resource integration system using server and network virtualization, Ph.D. thesis, Department of Computer Engineering The Graduate School of Korea National University of Transportation, pp 11–16 (in Korean)

  14. Baek SJ, Park SM, Yang SH, Song EH, Jeong YS (2010) Efficient server virtualization using grid service infrastructure. J Inf Process Syst 6(4):553–562

  15. Ahn H, Jung B, Park J (2015) Effect of reagents on optical properties of asbestos and remote spectral sensing. J Converg 5:15–18

  16. Ju M, Ahn H, Yoo D, Kim H, Kim Y (2014) Feasibility test of wireless monitoring of changes of fish fauna according to habitat conditions of artificial lakes and wetlands. J Converg 5:19–22

  17. Peng G, Zeng K, Yang X (2013) A hybrid computational intelligence approach for the VRP problem. J Converg 4:1–4

  18. Liu J, Chung SH (2013) An efficient load balancing scheme for multi-gateways in wireless mesh networks. J Inf Process Syst 9:365–378

  19. Lv J, Guo J, Ren H (2014) Efficient greedy algorithms for influence maximization in social networks. J Inf Process Syst 10:1–12

  20. Kolici V, Herrero A, Xhafa F (2014) On the performance of oracle grid engine queuing system for computing intensive applications. J Inf Process Syst 10:491–502

  21. Feese S, Burscher MJ, Jonas K, Tröster G (2014) Sensing spatial and temporal coordination in teams using the smartphone. Hum Centric Comput Inf Sci 4:1–18

  22. Elsayed E, Eldahshan K, Tawfeek S (2013) Automatic evaluation technique for certain types of open questions in semantic learning systems. Hum Centric Comput Inf Sci 3:1–15

  23. Sinha A, Lobiyal DK (2013) Performance evaluation of data aggregation for cluster-based wireless sensor network. Hum Centric Comput Inf Sci 3:1–17

  24. Sharma MJ, Leung VCM (2012) IP multimedia subsystem authentication protocol in LTE-heterogeneous networks. Hum Centric Comput Inf Sci 2:1–19

  25. Vanus J, Kucera P, Martinek R, Koziorek J (2014) Development and testing of a visualization application software, implemented with wireless control system in smart home care. Hum Centric Comput Inf Sci 4:1–19

  26. Ueno H, Hasegawa S, Hasegawa T (2010) Virtage: server virtualization with hardware transparency. Lecture notes in computer science (LNCS). Springer, Berlin, pp 404–413

  27. Paessler D (2008) Server virtualization and network management. Database Netw J 38(5):13–16

  28. Taylor C (2006) Server consolidation: how to enhance utilization of servers and storage. Manuf Comput Solut 12(5):26

  29. Botero JF, Hesselbach X (2013) Greener networking in a network virtualization environment. Comput Net 57(9):2021–2039

  30. Jain R, Paul S (2013) Network virtualization and software defined networking for cloud computing: a survey. IEEE Commun Mag 51(11):24–31

  31. Zhao GY, Tang HF, Xiao LM, Li XQ (2013) Efficient inline deduplication on VM images in desktop virtualization environment. Appl Mech Mater 307:488–493

  32. Wang X, Zhang B, Luo Y (2013) Optimizing interactive performance for desktop-virtualization environment. Lecture notes in computer science (LNCS), vol 7719. Springer, Berlin, pp 541–555

  33. Jang SM, Choi WH, Kim WY (2013) Client rendering method for desktop virtualization services. ETRI J 35(2):348–351

  34. Cohen E, Paul W, Schmaltz S (2013) Theory of multi core hypervisor verification. Lecture notes in computer science (LNCS). Springer, Berlin, pp 1–27

  35. Thorpe S, Ray I, Grandison T, Barbir A (2013) Hypervisor event logs as a source of consistent virtual machine evidence for forensic cloud investigations. Lecture notes in computer science (LNCS). Springer, Berlin, pp 97–112

  36. Huh JH, Seo K (2015) Hybrid advanced metering infrastructure design for micro grid using the game theory model. Int J Softw Eng Appl 9(9):257–268

  37. Huh JH, Lee D, Seo K (2015) Implementation of graphic based network intrusion detection system for server operation. Int J Secur Appl 9(2):37–48

  38. Huh JH, Koh T, Kim NJ, Seo K (2015) Design and test bed experiments of smart grid-based PLC client node problem using OPNET. In: Proceedings of the 11th international conference on multimedia information technology and applications (MITA 2015), Tashkent, Uzbekistan, IEEE Region 10, Changwon Section, p 317–322

  39. Huh JH, Seo K (2015) Design and testbed of graphic-based server operation system using virtualization technology. In: The 2015 world congress on information technology applications and services proceedings of the “advanced mobile, communications, security, multimedia, vehicular, cloud, IoT, and computing” (World-IT 2015), p 96


Authors’ contribution

JH implemented the entire networking part with the three VMware hosts, tested the mail transmission and reception process, and evaluated the system's performance with Wireshark, which confirmed that the communication was successful. KS is the paper advisor, who provided the basic logic for preparing the dissertation and reviewed the entire system diagram for implementation after confirming the feasibility of the networking process. Both authors read and approved the final manuscript.

Competing interests

The authors declare that they have no competing interests.

Author information

Corresponding author

Correspondence to Jun-Ho Huh.

Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 4.0 International License (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted use, distribution, and reproduction in any medium, provided you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made.


About this article

Cite this article

Huh, JH., Seo, K. Design and test bed experiments of server operation system using virtualization technology. Hum. Cent. Comput. Inf. Sci. 6, 1 (2016). https://doi.org/10.1186/s13673-016-0060-7
