Prof. Mohsen Sharifi Webpage Research


Research Interests

The overall research interests span the broad fields of concepts, paradigms, models, languages, algorithms, architectures, infrastructures, frameworks, protocols and formalisms for operating systems, distributed computation, scientific computing, high performance cluster computing, wireless sensor networks, wireless sensor actor networks, mobile computing, virtualization and cloud computing, peer-to-peer computing, autonomic computing, ubiquitous and pervasive computing, web engineering, privacy, and security. The ultimate research goal is to find the know-how and technology for engineering a truly distributed operating system, i.e. a kernelware with embedded basic cells and primitives in support of all types of foreseen and unforeseen computations, running on heterogeneous platforms.

Research Areas and Projects

High Performance Computing

High Performance Computing is the study and realization of the highest envisioned degrees of performability of applications. Qualitative and quantitative measures of performability are set by the stakeholders of computational applications as well as by the nature of the applications themselves. Generally, high performance issues arise at six computational layers: (1) application programs, (2) compilation, (3) runtime system support, (4) operating systems, (5) networking, and (6) hardware and micro-architectures. All six layers must be properly programmed to fully support the qualitative and quantitative measures of performability of computational applications, resulting in a custom-designed solution for every class of applications. We have researched and developed middleware for high performance homogeneous and heterogeneous clusters and are currently focusing on developing middleware support for generating and managing fully distributed, scalable (ExaScale), possibly virtualized, heterogeneous, dynamically reconfigurable, high performance systems such as distributed complex event processing (CEP) systems.

ExaScale systems, as the current mainstream of high performance computing (HPC) research, are pursued by the Distributed Systems Laboratory for the realization of HPC goals, especially execution time, scalability, heterogeneity, flexibility and configurability. The key idea behind ExaScale computing is breaking HPC applications down into standalone, scalable, and possibly heterogeneous components (which may differ in their required system software stacks) that can be implemented independently of one another and run in the context of an entity called an enclave. The core technology for the practical implementation of this concept is virtualization, whose performability has improved in many respects in recent years. Virtualization can help ExaScale systems reach their desired performability with nearly minimal effort and cost. Given that an ExaScale system runs on a clustered collection of physical NUMA machines, achieving optimal local and global resource sharing, openness, transparency and scalability is challenging and requires consideration at all layers of system software, including the virtualization layer, the operating system, communication middleware, enclave management, and global system management. These subjects have motivated us to study the following research topics.

Static, dynamic and adaptive scheduling (see the sketch after this list), including:

  • a) Node-level, socket-level, CPU-level and core-level scheduling of processes and virtual CPUs/cores.
  • b) Enclave-wide and global scheduling of virtual machines to physical machines.
  • c) Virtualization aware/unaware job and task scheduling on virtual and physical machines.
  • d) Distributed, cooperative and coordinated gang scheduling of jobs, tasks, VMs and enclaves.
  • e) Communication-aware job, VM and enclave co-location, scheduling and organization.
  • f) Energy-aware scheduling of jobs, VMs and enclaves.
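
As a toy illustration of items b) and f) above, the sketch below packs VMs onto physical machines with a first-fit-decreasing heuristic so that fewer hosts need to stay powered on; the classes, capacities and VM names are hypothetical, not part of any lab system.

```python
from dataclasses import dataclass, field

@dataclass
class VM:
    name: str
    cores: int
    mem_gb: int

@dataclass
class Host:
    name: str
    cores: int
    mem_gb: int
    vms: list = field(default_factory=list)

    def fits(self, vm):
        # Capacity check against what is already placed on this host.
        used_cores = sum(v.cores for v in self.vms)
        used_mem = sum(v.mem_gb for v in self.vms)
        return used_cores + vm.cores <= self.cores and used_mem + vm.mem_gb <= self.mem_gb

def place_first_fit_decreasing(vms, hosts):
    """Pack VMs onto as few hosts as possible; fewer active hosts means less energy."""
    placement = {}
    # Largest VMs first: the classic bin-packing heuristic.
    for vm in sorted(vms, key=lambda v: (v.cores, v.mem_gb), reverse=True):
        for host in hosts:
            if host.fits(vm):
                host.vms.append(vm)
                placement[vm.name] = host.name
                break
        else:
            raise RuntimeError(f"no capacity for {vm.name}")
    return placement

hosts = [Host(f"pm{i}", cores=16, mem_gb=64) for i in range(4)]
vms = [VM("web", 4, 8), VM("db", 8, 32), VM("cache", 2, 16), VM("batch", 8, 16)]
print(place_first_fit_decreasing(vms, hosts))
print("active hosts:", [h.name for h in hosts if h.vms])  # idle hosts could be powered down
```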

Resource discovery and state information management (see the gossip sketch after this list), including:

  • a) Machine-wide high speed soft information bus and monitoring service.
  • b) Distributed high performance information bus.
  • c) Distributed resource discovery.
  • d) Machine-wide, enclave-wide and distributed highly efficient resource and job naming.
  • e) Energy-aware static information propagation and dissemination.
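
As a minimal sketch of the dissemination flavor of items c) and e) above, the following push-gossip loop lets every node spread its resource state to a few random peers each round; the node names, the load metric and the round count are invented for illustration.

```python
import random

def gossip_rounds(nodes, fanout=2, rounds=4):
    """Push gossip: each round, every node sends its whole view to `fanout` random peers.
    `nodes` maps node id -> dict of known per-node states, seeded with its own state."""
    for _ in range(rounds):
        updates = []
        for nid, view in nodes.items():
            peers = random.sample([p for p in nodes if p != nid], fanout)
            # Snapshot the view at send time; apply after the round so all
            # sends within a round are effectively simultaneous.
            updates.extend((peer, dict(view)) for peer in peers)
        for peer, view in updates:
            nodes[peer].update(view)
    return nodes

# Each node initially knows only its own (synthetic) CPU load.
nodes = {f"n{i}": {f"n{i}": {"cpu_load": round(random.random(), 2)}} for i in range(8)}
gossip_rounds(nodes)
print({nid: len(view) for nid, view in nodes.items()})  # views converge toward 8 entries
```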

High performance operating system and VMM including:

  • a) Light-weight OS and VMM.
  • b) Co-Kernels.
  • c) High performance VMM-level and OS-level I/O and network management.
  • d) High performance OS-level, VMM-level and user-level memory management.

High performance overlay communication networks (see the sketch after this list), including:

  • a) Topology aware/unaware application-level efficient overlays.
  • b) Topology aware/unaware VM-level efficient overlays.
  • c) Adaptive overlay communication networking.
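
As promised above, here is a minimal illustration of topology-aware neighbor selection, in which each overlay node links to its k lowest-latency peers so that overlay hops roughly follow the underlying network; the latency matrix is synthetic and the parameter k is an arbitrary choice.

```python
import random

def build_topology_aware_overlay(latency, k=3):
    """latency[a][b]: measured RTT between nodes a and b (symmetric, synthetic here).
    Each node links to its k lowest-latency peers."""
    overlay = {}
    for node, rtts in latency.items():
        peers = sorted((p for p in rtts if p != node), key=lambda p: rtts[p])
        overlay[node] = peers[:k]
    return overlay

ids = [f"n{i}" for i in range(6)]
rtt = {a: {b: 0.0 for b in ids} for a in ids}
for i, a in enumerate(ids):
    for b in ids[i + 1:]:
        rtt[a][b] = rtt[b][a] = random.uniform(1, 50)  # milliseconds, synthetic
print(build_topology_aware_overlay(rtt))
```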

Computational and programming model of HPC applications including:

  • a) Distributed computational models in support of new high performance computing applications.
  • b) Distributed programming models in support of new computational models.
  • c) Practical realization of computational and programming models of HPC applications.

Distributed realization of policies, algorithms, and techniques of ExaScale system management.

Distributed Computing

Distributed Computing is a field of computer science that studies distributed systems consisting of multiple relatively independent, frequently uncooperative, dispersed and sometimes even hostile components that communicate to achieve a common goal. A program that runs in a distributed system is called a distributed program, and distributed programming is the process of writing such programs. Distributed computing also refers to the use of distributed systems to solve computational problems: each problem is divided into many tasks, and each task is solved by one or more components of the distributed system.
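
As a small concrete instance of the task decomposition just described, this sketch splits a summation into independent tasks executed by worker processes using Python's standard concurrent.futures; in a real distributed system the workers would run on separate machines behind an RPC or messaging layer, so this is only a single-machine stand-in.

```python
from concurrent.futures import ProcessPoolExecutor

def partial_sum(chunk):
    """One task: each worker sums its own slice of the data."""
    return sum(chunk)

def distributed_sum(data, n_tasks=4):
    # Divide the problem into n_tasks independent tasks...
    size = (len(data) + n_tasks - 1) // n_tasks
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    # ...solve each task on a separate component, then combine the results.
    with ProcessPoolExecutor() as pool:
        return sum(pool.map(partial_sum, chunks))

if __name__ == "__main__":
    print(distributed_sum(list(range(1_000_000))))  # 499999500000
```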

Research activities include pure and applied research into all aspects of distributed systems and computing, extending from the theoretical foundations underpinning the field to systems engineering issues. Research fields include distributed operating systems, high performance computing, cloud computing, virtualization, peer-to-peer computing, ubiquitous computing, desktop computing, and autonomic computing.

Cloud Computing

Virtualization was first used to multiplex large and expensive mainframes into multiple computing units with time-sharing capabilities; IBM VM/370 was the first operating system that used virtualization to provide binary support for legacy code. Early virtualization was, however, inefficient and dramatically slowed down the virtualized systems, leading to a decline in the use and progress of virtualization technology (VT). VT has since been revived by the introduction of more efficient implementations that can contribute to the manageability, scalability and reliability of large-scale systems such as clouds and clusters. In general, virtualization maps the interface and visible resources of one system or component, such as a processor, memory or I/O devices, at a given abstraction level, onto the interface and resources of an underlying, possibly different, real system. Performance degradation aside, several properties of VT have made it useful for a wide variety of purposes. Virtualization can create the illusion of multiple virtual machines (VMs) on a single physical machine (consolidation), it can provide a software environment for debugging operating systems that is more convenient than using a physical machine, and it can provide a convenient interface for adding functionality such as fault injection, primary-backup replication, and undoable disks. It also offers valuable capabilities, such as ease of system administration, resource management, reliability, and security, that are harder to achieve in traditional, non-virtualized systems.
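
As a small, hedged illustration of consolidation, the sketch below uses the libvirt Python bindings to list the VMs multiplexed onto one physical machine; it assumes a host running a libvirt daemon with a qemu:///system hypervisor URI, which may differ on other installations.

```python
import libvirt  # pip install libvirt-python; requires a local libvirt daemon

# Connect read-only to the local hypervisor; the URI is an assumption and may
# be e.g. xen:///system or a remote URI on other setups.
conn = libvirt.openReadOnly("qemu:///system")
try:
    for dom in conn.listAllDomains():
        # dom.info() returns [state, maxMem KiB, memory KiB, nrVirtCpu, cpuTime ns].
        state, maxmem_kib, mem_kib, ncpu, cputime_ns = dom.info()
        print(f"{dom.name():<20} active={bool(dom.isActive())} "
              f"vcpus={ncpu} mem={mem_kib // 1024} MiB")
finally:
    conn.close()
```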

Research Topics

  • Distributed QoS-Aware Virtual Machine Scheduling in Support of Cloud Computing Environments
  • A Scalable High Performance Virtual Cluster
  • A Transparent and Portable Distributed Shared Memory Mechanism at the Virtual Machine Monitor Level
  • Application-Aware Dynamic Resource Management at the Virtual Machine Monitor Level
  • Network Service Provision at the Virtual Machine Monitor Level
  • Intrusion Detection at the Virtual Machine Monitor Level

Complex Event Processing

Complex Event Processing (CEP) encompasses a defined set of tools and techniques for analyzing and controlling the complex series of interrelated events that drive modern distributed message-based systems. This emerging technology helps information society and information technology professionals understand what is happening within a complex system, quickly identify and solve problems, and more effectively use events for enhanced operation, performance, and security. CEP is applicable to a broad spectrum of information system domains, including fraud detection, stock market trading, business process automation, process scheduling and control, intrusion detection, and network monitoring and performance prediction. Many of today’s large-scale applications, such as e-commerce systems, search engines and stock trading applications, generate huge amounts of event data that require real-time processing. CEP systems are used to monitor high throughput event streams and to detect complex events from particular event correlation patterns pre-specified by users. The computational power required to process high-rate event streams is usually greater than the capacity of a single central event processing engine. Existing event processing approaches and systems are designed for centralized event processing on single machines and therefore face throughput limitations; centralized event processing can only scale up by increasing the capacity of one host. Even with the ability to scale up a single event processing host by adding or replacing more powerful resources (CPU, RAM), such a system cannot achieve the required event processing throughput. State-of-the-art approaches to distributed event processing still face challenging scaling problems: they either partition and distribute events over an event processing network according to detected event patterns, or distribute the processing load over several processing agents to build a scalable event processing system. We are currently working on resolving the challenges of distributing CEP.
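
To make the notion of a pre-specified event correlation pattern concrete, here is a toy matcher that emits a complex event when two withdrawals hit the same account within a short window, a stand-in for a fraud-detection rule; the event schema, field names and 60-second window are all invented for illustration.

```python
def detect_rapid_withdrawals(events, window=60.0):
    """events: iterable of (timestamp_sec, kind, account) tuples. Yields a
    'rapid-withdrawal' complex event whenever two withdrawals on one account
    fall within `window` seconds of each other."""
    last_withdrawal = {}
    for ts, kind, account in events:
        if kind != "withdraw":
            continue
        prev = last_withdrawal.get(account)
        if prev is not None and ts - prev <= window:
            yield ("rapid-withdrawal", account, prev, ts)
        last_withdrawal[account] = ts

stream = [
    (0.0, "withdraw", "acct-1"),
    (10.0, "deposit", "acct-1"),
    (42.0, "withdraw", "acct-1"),   # 42 s after the first -> complex event
    (500.0, "withdraw", "acct-2"),  # no earlier withdrawal in window
]
for complex_event in detect_rapid_withdrawals(stream):
    print(complex_event)
```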

Research Topics

  • Designing highly scalable CEP for large-scale applications and high volumes of events
  • Applying CEP to a broader spectrum of critical information systems
  • Developing proper middleware for federated CEP systems
  • Developing simulation tools and environments specific to CEP

Wireless Sensor and Actuator Networks

A wireless sensor network (WSN) consists of spatially distributed autonomous sensors that cooperatively monitor physical or environmental conditions, such as temperature, sound, vibration, pressure, motion or pollutants.

Recent advances in pervasive computing, communication and sensing technologies have led to the emergence of a new generation of sensor networks called wireless sensor/actuator networks (WSANs). A WSAN is a distributed system of sensor nodes and actuator nodes that are interconnected over wireless links. Sensors gather information about the physical world, e.g., the environment or physical systems, and transmit the collected data to controllers/actuators through single-hop or multi-hop communications. From the received information, the controllers/actuators perform actions to change the behavior of the environment or the physical systems. In this way, remote, distributed interactions with the physical world are facilitated. Depending on the target application, nodes in a WSAN can be either stationary or mobile. In many situations, however, sensor nodes are stationary whereas actuator nodes, e.g., mobile robots and unmanned aerial vehicles, are mobile. Sensor nodes are usually low-cost, low-power, small devices equipped with limited sensing, data processing and wireless communication capabilities, while actuator nodes typically have stronger computation and communication powers and a larger energy budget that allows longer battery life. Regardless, resource constraints apply to both sensors and actuators.

Although WSANs and WSNs share many common network design considerations, such as reliability, connectivity, scalability and energy efficiency, the coexistence of sensors and actuators in WSANs causes substantial differences between these two types of networks. Applications in which some actions are introduced merely to enhance the monitoring capability of the sensor network do not embody the essential characteristics of WSANs. On the contrary, actuators in a WSAN should be an integral part of the network, performing actions that interact with the physical world. As a consequence, WSANs have the ability to change the physical world, while WSNs do not. In WSNs, power consumption is generally the primary concern; this may not be the case in some WSANs, where meeting real-time, reliable communication requirements may be more important.
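
To make the sense-transmit-act loop described above concrete, here is a toy round-based simulation in which stationary sensors report temperatures and an actuator node switches cooling on when any reading crosses a threshold; the node names, readings and threshold are invented, and a real WSAN would carry the readings over multi-hop wireless links.

```python
import random

THRESHOLD_C = 30.0  # hypothetical trigger point

def sensor_readings(n_sensors=5):
    """Stationary sensors sample the environment (synthetic readings here)."""
    return {f"sensor-{i}": random.uniform(20.0, 40.0) for i in range(n_sensors)}

def actuator_step(readings):
    """Actuator decision: act on the physical world when any reading is too hot."""
    hot = {sid: t for sid, t in readings.items() if t > THRESHOLD_C}
    if hot:
        print(f"cooling ON near {sorted(hot)} (max {max(hot.values()):.1f} C)")
    else:
        print("cooling OFF")

for _ in range(3):  # three sense-act rounds
    actuator_step(sensor_readings())
```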

Research Topics

  • Coordination in Weakly Connected Wireless Sensor Actor Networks
  • Improving Quality of Service in Implementations of CORBA Component Model
  • A Mechanism for Autonomous Detection and Repair of Actor Failures in Wireless Sensor Actor Networks
  • Reputation-Based Fault Map Extraction in Wireless Sensor Networks
  • Improving the Fault-Tolerance of Wireless Sensor Actor Networks
  • An Improved Key Management Scheme for Wireless Sensor Networks
  • A Hybrid Physical Architecture for Coordination in Wireless Sensor and Actor Networks
  • A Service-Oriented Middleware with QoS Support for Wireless Sensor Networks
  • Event-Driven Software Architecture for Wireless Sensor Networks Applications
  • Task Allocation in Wireless Sensor and Actor Networks to Reduce Task Completion Time
  • Energy-Aware Task Partitioning in Support of Real-Time Applications on Cluster-Based Wireless Sensor Networks

Computer Security

Computer Security is the branch of computer technology known as information security as applied to computers and networks. The objective of computer security includes protection of information and property from theft, corruption, or natural disaster, while allowing the information and property to remain accessible and productive for their intended users. The term computer system security means the collective processes and mechanisms by which sensitive and valuable information and services are protected from publication, tampering or collapse caused by unauthorized activities, untrustworthy individuals or unplanned events.

Web Engineering

The World Wide Web has become a major delivery platform for a variety of complex and sophisticated enterprise applications in several domains. In addition to their inherent multifaceted functionality, these Web applications exhibit complex behavior and place unique demands on their usability, performance, security and ability to grow and evolve. However, a vast majority of these applications continue to be developed in an ad hoc way, contributing to problems of usability, maintainability, quality and reliability. While Web development can benefit from established practices in other related disciplines, it has certain distinguishing characteristics that demand special consideration. In recent years, there have been some developments towards addressing these problems and requirements. Web Engineering promotes systematic, disciplined and quantifiable approaches towards the successful development of high-quality, ubiquitously usable Web-based systems and applications.

In particular, Web engineering focuses on the methodologies, techniques and tools that are the foundation of Web application development and that support their design, development, evolution, and evaluation. Web application development has certain characteristics that make it different from traditional software, information system, or computer application development. Web engineering is multidisciplinary and encompasses contributions from diverse areas: systems analysis and design, software engineering, hypermedia/hypertext engineering, requirements engineering, human-computer interaction, user interfaces, information engineering, information indexing and retrieval, testing, modeling and simulation, project management, and graphic design and presentation. Web engineering is neither a clone nor a subset of software engineering, although both involve programming and software development. While Web engineering adopts software engineering principles, it encompasses new approaches, methodologies, tools, techniques, and guidelines to meet the unique requirements of Web-based applications.

Research Topics

  • Load Balancing Distributed Web Crawlers Based on Web Structure
  • A Security Framework for Implementation of Web-Based Java Applications
  • A New Technique against Phishing Attacks
  • Scam Detection and Authentication for Preventing Phishing Attacks
  • Improving Application Proxies to Detect and Prevent Control Flow Tampering Attacks Targeted at Web Applications
  • Leveraging SELinux to Implement Mandatory Access Control in the Java Virtual Machine
  • Intrusion Detection at the Virtual Machine Monitor Level
  • Policy-Based Resource Management Controllers for an Autonomic Computing System
  • Improving Survivability of High Availability Clusters
  • Secure Covert Communication in the Web using a New Image Steganographic Scheme
  • An Improved Key Management Scheme for Wireless Sensor Networks
  • Extending X3D in Support of E-Commerce Requirements
  • Securing Electronic Forms using Elliptic Curves
  • A Software Development Framework Based on Test-Driven and Platform Independent Components
  • Information Hiding using Fractal Based Coding
  • Prototyped Stateful Intrusion Detection System
  • VPN Vulnerable Access Points: Attacks and Counterattacks
  • Agent-Based Solution for Safe and Transparent Dissemination of Certificate Status Information
  • Securing ICC Based E-Payment Systems
  • Securing XML Documents
  • An Enhanced Tuple Routing Strategy for Adaptive Processing of Continuous Queries in Wireless Sensor Networks
  • Fractal-Based Visualization of Clouds and Lightning in FractVRML
  • Distributed Reservation-Based Resource Management in Real-Time Systems
  • Fractal-Based Visualization of Mountains and Rivers in FractVRML
  • Non-Orthogonal Modeling of Quantum Key Distribution
  • Application of Fractals to Virtual Reality
  • Proactive Detection of Distributed Denial of Service Attacks using the MCB Traffic Variable
  • A Search Engine Based on Robots
  • Cryptography using Pseudo Random Numbers
  • Evaluation of Edge Detection Algorithms in Image Processing
  • How to Secure Mobile Agents Efficiently
  • Domain Engineering and its Application to Banking
  • Air Traffic Control Domain Engineering using the RAISE Formalism
  • Development of a Prototypical E-Commerce System
  • A 5-Axis Post-Processor for Anvil5K Software
  • An Expert System Kernel for Budget Planning
  • A Learner System for Cephalometry in Orthodontics using Genetic Algorithms
  • Expert Database for Rijal Science
  • Hardware Security Provisioning
  • A Hypermedia Development Environment for Persian Language
  • Numerical Simulation of Tooth Movement in Orthodontics
  • An Orthodontic Cephalometry Analyzer
  • An Expert Prescription Software