ABOUT ME

As of now, I am a research assistant at the Distributed Systems Laboratory and a PhD candidate at Iran University of Science and Technology (IUST). I take pleasure in doing my research at the Distributed Systems Laboratory, where I am advised by Dr. Mohsen Sharifi. I am mainly interested in computer systems, including distributed and operating systems. More specifically, my current research focuses on designing and building a cloud-specific hypervisor scheduler.

PUBLICATIONS

1. Kani: A QoS-Aware Hypervisor-Level Scheduler for Cloud Computing Environments
Esmail Asyabi, Azadeh Azhdari, Mostafa Dehsangi, Michel Gokan, Mohsen Sharifi, Seyed Vahid Azhari, Cluster Computing, 2016

2. cCluster: A Core Clustering Mechanism for Workload-Aware Virtual Machine Scheduling
Mostafa Dehsangi, Esmail Asyabi, Mohsen Sharifi and Seyed Vahid Azhari, The 3rd International Conference on Future Internet of Things and Cloud, Rome, Italy, 2015

3. A New Approach for Dynamic Virtual Machine Consolidation in Cloud Data Centers
Esmail Asyabi and Mohsen Sharifi, International Journal of Modern Education and Computer Science, 2015

4. Jungle Computing: Supercomputing beyond Clouds, Grids and Computing Clusters (Farsi)
Esmail Asyabi and Mohsen Sharifi, First National Workshop on Cloud Computing, Amirkabir University of Technology, Iran, October 31-November 1, 2012

RESEARCH

Showan: Clouds currently suffer from a lack of performance predictability, which plays a key role in determining the quality of delivered services. Resource contention caused by co-hosted VMs, or noisy neighbors, that monopolize physical resources makes it difficult to predict the performance of running VMs based on their configurations. Showan is a hypervisor scheduler that addresses this issue by employing new scheduling policies that make it possible to predict the performance of VMs from their initial configurations. I have implemented a prototype of Showan in the Xen hypervisor and conducted extensive evaluations on both I/O-bound and CPU-bound workloads. Experimental results demonstrate that Showan notably improves performance predictability.

Eaxen: The high energy consumption of cloud data centers is a matter of great concern. While many attempts have been made to address this issue through dynamic VM consolidation, mitigating the energy consumption of an individual physical machine in a cloud data center has received relatively little attention. Eaxen is an energy-aware hypervisor scheduler whose main goal is to reduce the energy consumption of processors while satisfying QoS requirements. I have implemented a prototype of Eaxen in the Xen hypervisor and conducted extensive evaluations. Experimental results demonstrate that it significantly reduces the energy consumption of processors.
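
Eaxen's scheduling policy is not spelled out above; as a rough illustration of how an energy-aware scheduler can trade processor frequency for QoS slack, the minimal C sketch below picks the lowest frequency whose projected response time still meets a VM's target. The frequency table, the inverse-scaling model, and all names and numbers are hypothetical assumptions for illustration, not Eaxen's actual mechanism.

/* Illustrative sketch only: a generic DVFS-style rule that runs the
 * processor at the lowest frequency still meeting the QoS target.
 * All names and numbers are hypothetical, not taken from Eaxen. */
#include <stdio.h>

static const int freq_mhz[] = {1200, 1800, 2400, 3000};  /* assumed P-states */
#define NR_FREQS (sizeof freq_mhz / sizeof freq_mhz[0])

/* Pick the lowest frequency whose projected response time still meets
 * the target, assuming response time scales inversely with frequency. */
static int pick_frequency(double measured_ms, int current_mhz, double target_ms)
{
    for (size_t i = 0; i < NR_FREQS; i++) {
        double projected = measured_ms * current_mhz / freq_mhz[i];
        if (projected <= target_ms)
            return freq_mhz[i];
    }
    return freq_mhz[NR_FREQS - 1];  /* QoS at risk: stay at top frequency */
}

int main(void)
{
    /* A VM measured at 12 ms while running at 3000 MHz, with a 20 ms
     * target, has slack, so the scheduler can drop to a lower P-state. */
    printf("chosen frequency: %d MHz\n", pick_frequency(12.0, 3000, 20.0));
    return 0;
}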

Kani: Hypervisors currently allocate processor resources among VMs regardless of the quality of service they deliver. Kani is a QoS-aware hypervisor scheduler that allocates processor resources based on each VM's delivered QoS. To do so, it leverages a monitoring tool called KQM that dynamically monitors the quality of delivered services to quantify the deviation between the desired and delivered levels of QoS for all running VMs. KQM periodically sends this QoS information to the hypervisor through a new hypercall added to the Xen VMM. The hypervisor then, based on the currently delivered QoS, properly allocates processor resources among the VMs. Our evaluations of the Kani scheduler prototype in Xen show that Kani outperforms the default Xen scheduler, namely the Credit scheduler. For example, Kani reduces the average response time of requests to an Apache web server by up to 93.6%, improves its throughput by up to 97.9%, and reduces the call setup time of an Asterisk media server by up to 96.6%.
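
The minimal C sketch below illustrates the feedback loop just described: a KQM-style report of desired versus delivered QoS drives an adjustment of the VM's CPU weight. The stub function standing in for the new hypercall, the weight formula, and all names and numbers are hypothetical; they demonstrate the deviation-driven allocation idea, not Kani's actual implementation.

/* Hypothetical user-space sketch of the Kani feedback loop. In the
 * real system the report travels from KQM into Xen via a new
 * hypercall; a plain function stands in for it here. */
#include <stdio.h>

#define NR_VMS 4

struct qos_report {
    double desired_ms;    /* target response time for the VM's service    */
    double delivered_ms;  /* response time measured by KQM-style monitor  */
};

/* Stub standing in for the hypercall that ships QoS data to the VMM.
 * Deviation > 0 means the VM is missing its QoS target, so its CPU
 * weight is raised in proportion (clamped to a sane range). */
static void send_qos_report(int vm, const struct qos_report *r, int weights[])
{
    double deviation = (r->delivered_ms - r->desired_ms) / r->desired_ms;
    int w = weights[vm] + (int)(deviation * 100.0);
    if (w < 64)   w = 64;
    if (w > 1024) w = 1024;
    weights[vm] = w;
}

int main(void)
{
    int weights[NR_VMS] = {256, 256, 256, 256};
    struct qos_report reports[NR_VMS] = {
        {50.0, 90.0},  /* missing its target: weight goes up    */
        {50.0, 40.0},  /* beating its target: weight goes down  */
        {20.0, 20.0},  /* exactly on target: weight unchanged   */
        {10.0, 35.0},  /* badly missing: weight goes up a lot   */
    };

    for (int vm = 0; vm < NR_VMS; vm++) {
        send_qos_report(vm, &reports[vm], weights);
        printf("VM%d new CPU weight: %d\n", vm, weights[vm]);
    }
    return 0;
}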

Eamd: Many applications hosted in clouds, such as web servers, streaming servers, and big data analytics, involve significant I/O activity. Meanwhile, clouds use virtualization technology (VT) to achieve higher resource utilization, but VT's CPU-sharing policies may adversely affect the performance of I/O-intensive applications. For example, a long time-slice benefits CPU-intensive applications, but responsiveness suffers; with short time-slices, in contrast, the system can react faster, but excessive context switches degrade the performance of CPU-intensive applications. This puts the scheduler in a give-and-take situation: it must find an appropriate time-slice length based on the type of applications consolidated on the VMs. In cCluster, our preliminary effort to address this issue, we enhanced the performance of I/O-intensive applications by allocating specific cores with a short time-slice to I/O-intensive workloads, which substantially boosts their performance. The project is ongoing, aiming at a hypervisor scheduler that is faster for I/O-intensive applications while keeping the performance of CPU-intensive applications high.
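
As a toy illustration of the core-clustering idea, the C sketch below classifies vCPUs by their wakeup rate and assigns I/O-bound ones to a cluster of cores scheduled with a short time-slice, and CPU-bound ones to a cluster with a long time-slice. The threshold and the slice lengths are assumptions made for the example, not cCluster's actual parameters.

/* Toy sketch of core clustering: physical cores are split into a pool
 * with a short time-slice for I/O-bound vCPUs and a pool with a long
 * time-slice for CPU-bound vCPUs. The wakeup-rate heuristic and the
 * slice lengths are illustrative assumptions. */
#include <stdio.h>
#include <stdbool.h>

#define SHORT_SLICE_MS  1   /* assumed slice for the I/O core cluster */
#define LONG_SLICE_MS  30   /* assumed slice for the CPU core cluster */

struct vcpu {
    const char *name;
    int wakeups_per_sec;  /* I/O-bound vCPUs block and wake frequently */
};

static bool is_io_bound(const struct vcpu *v)
{
    return v->wakeups_per_sec > 100;  /* illustrative threshold */
}

int main(void)
{
    struct vcpu vcpus[] = {
        {"web-server-vcpu", 800},
        {"batch-job-vcpu",    5},
        {"stream-vcpu",     450},
    };

    for (size_t i = 0; i < sizeof vcpus / sizeof vcpus[0]; i++) {
        const struct vcpu *v = &vcpus[i];
        if (is_io_bound(v))
            printf("%s -> I/O cluster (%d ms slice)\n", v->name, SHORT_SLICE_MS);
        else
            printf("%s -> CPU cluster (%d ms slice)\n", v->name, LONG_SLICE_MS);
    }
    return 0;
}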

TEACHING

CONTACT ME

Email

esmail.asyabi@gmail.com
e_asyabi@comp.iust.ac.ir

Address

Distributed Systems Laboratory, School of Computer Engineering, Iran University of Science and Technology, University Road, Hengam Street, Resalat Square, Narmak, Tehran, Iran