Computer
Omar Nowfal MohammedTaher; Mohammed Najm Abdullah; Hassan Awheed Jeiad
Abstract
Image processing operations executed without advanced, expensive microprocessors consume more time and power and require larger programs, so improving the cost-effectiveness of microprocessors is crucial. This paper proposes an improvement to the MIPS_32 architecture, called Customized MIPS_32 (CMIPS_32), to enhance its image processing (IP) capabilities. The proposal aims to increase throughput by collapsing the iterative fetching of the instructions required by a given IP operation into a single customized IP instruction. The MIPS_32 architecture was developed in two phases. First, the register file, control unit, and ALU were modified to manipulate the information related to IP operations. Second, two new units, an address calculation unit and a last-pixel detection unit, were added to determine an image's starting and ending addresses. Furthermore, the MIPS_32 pipeline is customized to have five or six stages, depending on the intensity of the operation required by a given IP instruction, to decrease the number of machine clocks and the power consumed. The proposal was implemented on the Zed-Board XC7Z020CLG484-1 FPGA. The results showed that the computation speedup increased by a factor equal to the number of standard instructions required to execute the same operation performed by one of the proposed IP instructions. The CMIPS_32 consumed less power than models implemented on the Spartan3-XC3S1500L, Virtex5-XC5VFX30T, Virtex6-XC6VLX75T, and Virtex6-Low-Power-XC6VLX75T by 0.0138 W, 0.6468 W, 1.31 W, and 0.7898 W, respectively. Compared with a GPU, the CMIPS_32 consumes 63.8698 W less than the NVIDIA GTX980.
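The reported speedup relation (one customized IP instruction replacing a run of standard instructions) can be sketched as follows. This is a minimal illustration; the instruction counts are hypothetical and not taken from the paper:

```python
def speedup_factor(std_instructions: int, custom_instructions: int = 1) -> float:
    """Speedup from replacing a sequence of standard MIPS_32 instructions
    with customized IP instructions (counts here are hypothetical)."""
    return std_instructions / custom_instructions

# If a hypothetical IP operation needs 8 standard instructions
# and one proposed IP instruction replaces them all:
print(speedup_factor(8))  # 8.0
```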
Computer
Saja Dheyaa Khudhur; Hassan Awheed Jeiad
Abstract
This paper introduces DLSTM-MSF, a distributed approach designed to address the challenge of demand forecasting in multimedia streaming workloads. DLSTM-MSF leverages multiple LSTM networks, each tailored to predict data demand for a specific type of multimedia streaming workload. The central problem addressed in this research is the accurate prediction of workload demand in a dynamic and diverse multimedia streaming environment. To achieve specialization, the training time series set for each LSTM network comprises examples whose targets belong exclusively to the workload type it is designed to predict. This specialization ensures that each LSTM network becomes proficient at capturing the unique demand patterns of its designated workload category. The methodology builds the best forecasting model for each multimedia streaming workload type by exploring combinations of LSTM hyper-parameters using grid search, enabling the approach to effectively capture nonlinear patterns in time series data. Furthermore, the implementation of DLSTM-MSF incorporates Apache Kafka for online demand prediction, utilizing the best-developed model for each workload type. Experimental evaluations of DLSTM-MSF compare the performance of two ensemble-learning LSTM models (Ensemble V1 and Ensemble V2) with a single LSTM model. The results highlight the superiority of Ensemble V1, with reductions of 71.85% and 74.88% in RMSE and MAE values, respectively, compared to the single LSTM model.
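The grid-search step described above can be sketched as follows. The hyper-parameter names and the scoring function are hypothetical stand-ins (a real run would train an LSTM per combination and return its validation RMSE):

```python
from itertools import product

# Hypothetical hyper-parameter grid; names are illustrative, not from the paper.
grid = {
    "units": [32, 64],
    "lookback": [12, 24],
    "learning_rate": [1e-2, 1e-3],
}

def evaluate(params):
    """Stand-in for training an LSTM on one workload type and
    returning its validation RMSE; here a toy deterministic score."""
    return params["lookback"] / params["units"] + params["learning_rate"]

# Exhaustively score every combination and keep the lowest-error one.
best = min(
    (dict(zip(grid, combo)) for combo in product(*grid.values())),
    key=evaluate,
)
print(best)
```

In DLSTM-MSF this search would be repeated once per workload type, yielding one specialized model for each.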
Computer
Sama Salam Samaan; Hassan Awheed Jeiad
Abstract
Traditional network capabilities fall drastically short of the demands of the current networking world. Software-Defined Networking (SDN) is a transformative development in the networking domain that separates the control and data planes, enlarges data-plane granularity, and simplifies network devices. All these factors accelerate and automate the evolution of new services. However, as the SDN topology grows large, new challenges arise in security, traffic management, and scalability, due to the vast amounts of traffic data generated and the need for additional controllers to manage the large number of networking devices. On the other hand, big data has become an attractive trend that can enhance network performance in general, and SDN specifically. Both SDN and big data have gained great attention from industry and academia. Traditionally, these two subjects have been studied separately in most preceding works; however, big data can thoroughly influence the design and implementation of SDN. This paper presents how big data can support SDN in various aspects, including intrusion detection, traffic monitoring, and controller scalability and resiliency. We suggest several approaches toward deeper cooperation between big data and SDN.
Computer
Sama Salam Samaan; Hassan Awheed Jeiad
Abstract
Modelling computer networks in general, and Software-Defined Networking (SDN) in particular, as a graph is beneficial for network planning and design, configuration management, traffic analysis, and security. Given the dynamic nature of SDN, a fast response is needed to the rapid changes in network state. The SDN topology can be modelled as a graph and stored in a graph database, with the traffic load of each switch stored in the created graph. A graph processing framework can then process the stored traffic data, and the results can be used in traffic engineering to assist the SDN controller in network management. This paper provides a comprehensive literature survey of graph techniques applied to SDN. A summary of graph algorithms is then presented, along with an overview of graph databases and graph processing frameworks. Finally, a model is suggested that integrates a graph database and a graph processing framework in SDN traffic analysis.
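The kind of query such an integration could answer can be sketched as follows: a toy SDN topology as a weighted graph, with edge weights standing in for per-link traffic load, and a Dijkstra search for the least-loaded path. The switch names, weights, and function are hypothetical, not from the paper:

```python
import heapq

# Toy SDN topology as adjacency lists; edge weights stand in for the
# per-link traffic load stored on the graph (all values hypothetical).
topology = {
    "s1": {"s2": 4, "s3": 1},
    "s2": {"s4": 1},
    "s3": {"s2": 1, "s4": 5},
    "s4": {},
}

def least_loaded_path(graph, src, dst):
    """Dijkstra over traffic load: the kind of query a graph
    processing framework could answer for the SDN controller."""
    pq, seen = [(0, src, [src])], set()
    while pq:
        load, node, path = heapq.heappop(pq)
        if node == dst:
            return load, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, w in graph[node].items():
            if nbr not in seen:
                heapq.heappush(pq, (load + w, nbr, path + [nbr]))
    return float("inf"), []

print(least_loaded_path(topology, "s1", "s4"))  # (3, ['s1', 's3', 's2', 's4'])
```

In the suggested model, the topology and loads would live in the graph database, and a query like this would run in the graph processing framework on the controller's behalf.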