Computer
Dhulfiqar Hakeem Dhayef; Sawsan S A Al-Zubaidi; Luma A H Al-Kindi
Abstract
Cell formation plays a crucial role in the development of cellular manufacturing systems (CMS). Previous studies in this field have typically assumed that each part is associated with a single process plan. However, incorporating alternative routes offers additional flexibility in CMS design. This paper addresses the cell formation problem with alternative routes and presents a two-stage approach to solving it. In the first stage, a Route Rank Index (RRI) is developed based on a correlation matrix to select the optimal alternative route for each part. A Genetic Algorithm (GA) is then employed in the second stage to form part families and machine cells. The computational performance of the proposed approach is evaluated on a set of generalized group technology datasets from the existing literature. The results demonstrate that the proposed approach is highly effective and efficient in addressing the cell formation problem with alternative routes. The practical ramifications of these findings are substantial. Our suggested approach demonstrates its resilience and adaptability by achieving comparable or better grouping results across a wide variety of benchmark datasets, showing that the method can be used in a broad range of practical situations, including matrices of varying sizes and shapes. The comparison study also contributes to the theoretical knowledge base on part-machine grouping strategies: by comparing the results of our suggested method with those of well-known heuristics, we shed light on its benefits and drawbacks.
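The abstract does not spell out how the Route Rank Index is computed from the correlation matrix; the sketch below is a minimal, hypothetical Python illustration that scores each alternative route by the summed pairwise correlations of the machines it visits and picks the highest-scoring route. The incidence matrix, the candidate routes, and the scoring rule are assumptions for illustration only, not the paper's formulation.

```python
import numpy as np

# Hypothetical part-machine incidence matrix (parts x machines); 1 = part visits machine.
A = np.array([
    [1, 1, 0, 0],
    [1, 0, 1, 0],
    [0, 1, 0, 1],
    [0, 0, 1, 1],
])

# Machine-machine correlation matrix derived from the incidence matrix.
C = np.corrcoef(A.T)

def route_rank_index(route, corr):
    """Score a candidate route (list of machine indices) by the summed
    pairwise correlations of the machines it uses (assumed scoring rule)."""
    total = 0.0
    for i in range(len(route)):
        for j in range(i + 1, len(route)):
            total += corr[route[i], route[j]]
    return total

# Two hypothetical alternative routes for one part; keep the higher-ranked one,
# then feed the selected routes to the GA stage for cell formation.
alternatives = [[0, 1], [0, 2, 3]]
best = max(alternatives, key=lambda r: route_rank_index(r, C))
print("selected route:", best)
```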
Computer
Omar Nowfal MohammedTaher; Mohammed Najm Abdullah; Hassan Awheed Jeiad
Abstract
Image processing operations performed without advanced and expensive microprocessors consume more time and power and require larger programs, so improving the capabilities of reasonably priced microprocessors is crucial. This paper proposes an improvement to the MIPS_32 architecture, called Customized MIPS_32 (CMIPS_32), to enhance the capabilities of image processing (IP) operations. The proposal aims to increase throughput by collapsing the iterative fetching of instructions required by a given IP operation into a single customized IP instruction. The MIPS_32 architecture was developed in two phases. First, the Register File, control unit, and ALU were modified to manipulate the information related to the IP operations. Second, two new units, the address calculation unit and the last pixel detection unit, were proposed to determine a given image's starting and ending addresses. Furthermore, the MIPS_32 pipeline is customized to have five to six stages, depending on the intensity of the operation required by a given IP instruction, to decrease the number of machine clocks and the power consumed. The proposal was implemented on the Zed-Board XC7Z020CLG484-1 FPGA. The results showed that the computation speedup increased by a factor equal to the number of standard instructions required to execute the same operation performed by one of the proposed IP instructions. The CMIPS_32 consumed less power than other models implemented on Spartan3-XC3S1500L, Virtex5-XC5VFX30T, Virtex6-XC6VLX75T, and Virtex6-Low-Power-XC6VLX75T by 0.0138 W, 0.6468 W, 1.31 W, and 0.7898 W, respectively. Comparing the power consumed by the proposal with that of a GPU showed that the CMIPS_32 consumes 63.8698 W less than the NVIDIA GPU GTX980.
Computer
Zainab Hashim; Hanaa Mohsin; Ahmed Alkhayyat
Abstract
Handwritten signature identification is a process that determines an individual's true identity by analyzing their signature. This is an important task in various applications such as financial transactions, legal document verification, and biometric systems. Various techniques have been developed for signature identification, including feature-based methods and machine learning-based methods. However, verifying handwritten signatures in digital transactions and remote document authentication is still challenging. The inherent variety in people's signatures, which may occur due to factors such as mood, exhaustion, or even the writing tool used, contributes to the problem. Furthermore, the proliferation of sophisticated forgery methods, such as freehand mimicking and sophisticated picture manipulation, necessitates the development of reliable and precise tools for distinguishing authentic signatures from fake ones. The present paper suggests a method for identifying signatures based on integrating static (off-line) handwritten signature data. This is done by fusing three types of signature features: Linear Discriminant Analysis (LDA) as appearance-based features, the Fast Fourier Transform (FFT) as frequency features, and the Gray-Level Co-occurrence Matrix (GLCM) as texture features. These fused features are then input to four types of machine learning algorithms, Naive Bayes, K-Nearest Neighbor, Decision Tree, and AdaBoost classifiers, to identify each person and to find the most robust algorithm in terms of accuracy, precision, and recall. For the experiments, we used two well-known datasets: SigComp2011 and CEDAR. After training, the highest accuracy achieved was 100% on the CEDAR dataset and 94.43% on the SigComp2011 dataset, using a Naive Bayes classifier.
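As a rough illustration of the feature-fusion pipeline described above, the following Python sketch concatenates LDA, FFT, and GLCM features and feeds them to a Naive Bayes classifier. It uses synthetic stand-in images rather than SigComp2011 or CEDAR, and the feature dimensions and fusion-by-concatenation choice are assumptions, not the paper's exact settings.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops        # texture (GLCM) features
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Synthetic stand-in for signature images: 60 grayscale 32x32 images from 3 writers.
images = rng.integers(0, 256, size=(60, 32, 32), dtype=np.uint8)
labels = np.repeat([0, 1, 2], 20)

def fft_features(img, k=16):
    """Frequency features: magnitudes of the first k FFT coefficients."""
    return np.abs(np.fft.fft2(img)).ravel()[:k]

def glcm_features(img):
    """Texture features: contrast and homogeneity from the GLCM."""
    glcm = graycomatrix(img, distances=[1], angles=[0], levels=256, normed=True)
    return np.array([graycoprops(glcm, "contrast")[0, 0],
                     graycoprops(glcm, "homogeneity")[0, 0]])

raw = images.reshape(len(images), -1).astype(float)
X_tr, X_te, y_tr, y_te, im_tr, im_te = train_test_split(
    raw, labels, images, test_size=0.3, stratify=labels, random_state=0)

# Appearance features via LDA (fitted on training data only).
lda = LinearDiscriminantAnalysis(n_components=2).fit(X_tr, y_tr)

def fuse(raw_rows, imgs):
    """Fuse the three feature types by simple concatenation."""
    lda_f = lda.transform(raw_rows)
    fft_f = np.array([fft_features(im) for im in imgs])
    glcm_f = np.array([glcm_features(im) for im in imgs])
    return np.hstack([lda_f, fft_f, glcm_f])

clf = GaussianNB().fit(fuse(X_tr, im_tr), y_tr)
print("accuracy:", clf.score(fuse(X_te, im_te), y_te))
```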
Computer
Afrah Salman Dawood
Abstract
Recently, the burgeoning disciplines of Machine Learning (ML) and Deep Learning (DL) have experienced considerable integration across diverse scientific domains. Of particular note is their integration into the medical sector, specifically in the intricate methodologies of pathological categorization. Present-day innovations underscore the pivotal role of Deep Convolutional Neural Networks (DCNN) in mediating the tasks of image-based taxonomies and prognostications within this domain. In this research, a new DCNN with different modified intelligent architectures, including a CNN and modified VGG-16, VGG-19, ResNet50, and DenseNet121, together with a newly added classification layer, was implemented and tested for the detection and classification of Alzheimer's disease. The evaluation and performance metrics are accuracy, loss, F1-score, precision, and recall. Experiments were conducted on a Kaggle-based dataset, and the test results show that the CNN-based model is the most accurate, with the highest accuracy of 96% and the lowest loss of 9.92%. Finally, the average performance of the overall proposed model is as follows: accuracy 91%, loss 19.75%, precision 89.4%, F1-score 88.83%, and recall 90%.
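A hedged sketch of the transfer-learning setup the abstract describes (a pretrained backbone with a newly added classification layer) is shown below using Keras and VGG16. The head size, dropout rate, and the assumption of four Alzheimer's classes are illustrative choices, not the paper's reported configuration.

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Base network: VGG16 pretrained on ImageNet, frozen; a new classification head is added.
base = tf.keras.applications.VGG16(weights="imagenet", include_top=False,
                                   input_shape=(224, 224, 3))
base.trainable = False

model = models.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(256, activation="relu"),
    layers.Dropout(0.3),
    layers.Dense(4, activation="softmax"),   # 4 Alzheimer's stages assumed
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()])

# Training data would be built from the image folders, for example with
# tf.keras.utils.image_dataset_from_directory("dataset/train", label_mode="categorical",
#                                             image_size=(224, 224))
# and then model.fit(train_ds, validation_data=val_ds, epochs=20).
model.summary()
```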
Computer
Mohammed E. Seno; Ban N. Dhannoon; Omer K. Jasim Mohammad
Abstract
Cloud computing is an evolving and high-demand research field at the forefront of technological advancements. It aims to provide software resources and operates based on service-oriented delivery. Within the infrastructure as a service (IaaS) framework, the cloud offers end customers access to crucial infrastructure resources, including CPU, bandwidth, and memory. When a cloud system fails to deliver as expected, it is referred to as an event, signifying a deviation from the anticipated service. To meet their service-level agreement (SLA) obligations, cloud service providers (CSPs) must ensure continuous access to fault-tolerant, on-demand resources for their clients, particularly during outages. Consequently, finding the most efficient ways to accomplish tasks while considering the rapid depletion of resources has become an urgent concern. Researchers are actively working to develop optimal strategies tailored to the cloud environment. Machine learning plays a critical role in these endeavors, serving as a key component in various cloud computing platforms. This study presents a comprehensive literature review of current research papers that employ machine learning algorithms to propose strategies for optimizing cloud computing environments. Additionally, the survey provides authors with invaluable resources by extensively exploring a diverse range of machine learning techniques and their applications in the field of cloud computing. By examining these areas, researchers aim to enhance their understanding of efficient resource allocation and scheduling, addressing the challenges posed by resource scarcity while meeting SLA obligations.
Computer
Umniah Hameed Jaid; Alia Karim Abdulhassan
Abstract
The voice signal carries a wide range of data about the speaker, including their physical characteristics, feelings, and level of health. The estimation of these physical characteristics from speech has several uses in forensics, security, surveillance, marketing, and customer service. The primary goal of this research is to identify the auditory characteristics that aid in estimating a speaker's age. To this end, an ensemble feature selection model is proposed that selects the best features from a baseline acoustic feature vector for age estimation from speech. Using a feature vector that covers various spectral, temporal, and prosodic aspects of speech, an ensemble-based automatic feature selection is performed by first calculating the feature importance or ranks with individual feature selection methods and then voting on the resulting feature ranks to obtain the subset ranked highest by all feature selection methods. The proposed method is evaluated on the TIMIT dataset and achieved a mean absolute error (MAE) of 5.58 years and 5.12 years for male and female age estimation, respectively.
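The ensemble selection-by-rank-voting idea can be sketched as follows: several feature-selection methods each rank the features, the ranks are averaged (a simple form of voting), and the top-ranked subset is used for regression with MAE as the error measure. The chosen selectors, the synthetic data, and the value of k are assumptions for illustration; the paper works with TIMIT acoustic features.

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.feature_selection import f_regression, mutual_info_regression
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error

# Synthetic stand-in for the baseline acoustic feature vectors.
X, y = make_regression(n_samples=400, n_features=40, n_informative=8, noise=5.0, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def to_ranks(scores):
    """Higher score -> better rank (0 = best)."""
    order = np.argsort(-scores)
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))
    return ranks

# Individual feature-importance scores from three selectors.
f_scores, _ = f_regression(X_tr, y_tr)
mi_scores = mutual_info_regression(X_tr, y_tr, random_state=0)
rf_scores = RandomForestRegressor(n_estimators=100, random_state=0).fit(X_tr, y_tr).feature_importances_

# Vote by averaging the ranks, then keep the top-k features agreed on by the ensemble.
avg_rank = np.mean([to_ranks(f_scores), to_ranks(mi_scores), to_ranks(rf_scores)], axis=0)
top_k = np.argsort(avg_rank)[:10]

model = Ridge().fit(X_tr[:, top_k], y_tr)
print("MAE:", mean_absolute_error(y_te, model.predict(X_te[:, top_k])))
```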
Computer
Rashad N. Razak; Hadeel N. Abdullah
Abstract
Multi-Object Detection and Tracking (MODT) is essential in many application fields. Still, many enhancements in the speed of detection and tracking are required to overcome the challenges encountered during implementation. This paper presents a new algorithm for MODT that improves the execution time so it is robust enough for real-time applications. A background subtraction detection algorithm with a Kalman filter was used to track and predict each object's position and speed parameters. To improve the processing time, some frames are skipped in a way that does not significantly affect detection accuracy; for those frames, the prediction and estimated values obtained from the Kalman filter are used for the tracked object instead. This work uses a single video camera to show how to compute and detect multiple objects concurrently; it is applied to daytime preprocessing in an automated traffic surveillance system. Preliminary testing shows that the suggested algorithm for this vehicle monitoring system is feasible and effective. It illustrates that the suggested algorithm with a single video camera can simultaneously watch, detect, and track several vehicles while improving execution time. Simulation results on the built system demonstrate that the proposed system reduced the execution time to approximately 41.5% compared with the standard background subtraction algorithm. The results also indicate that the proposed algorithm's position and speed errors for detected and tracked objects are approximately the same as those of the standard background subtraction algorithm.
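A minimal OpenCV sketch of the detection-plus-prediction loop described above is given below: a background subtractor provides detections on some frames, a constant-velocity Kalman filter predicts the object state on the skipped frames, and the skip interval is a parameter. For brevity it tracks only the largest moving blob and assumes a hypothetical video file name; the paper's multi-object handling and timing details are not reproduced.

```python
import cv2
import numpy as np

cap = cv2.VideoCapture("traffic.mp4")          # hypothetical daytime traffic clip
backsub = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

# Constant-velocity Kalman filter: state = [x, y, vx, vy], measurement = [x, y].
kf = cv2.KalmanFilter(4, 2)
kf.transitionMatrix = np.array([[1, 0, 1, 0],
                                [0, 1, 0, 1],
                                [0, 0, 1, 0],
                                [0, 0, 0, 1]], np.float32)
kf.measurementMatrix = np.array([[1, 0, 0, 0],
                                 [0, 1, 0, 0]], np.float32)
kf.processNoiseCov = np.eye(4, dtype=np.float32) * 1e-2

SKIP = 2                                        # run detection on every (SKIP+1)-th frame
frame_idx = 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    prediction = kf.predict()                   # available on every frame, even skipped ones
    if frame_idx % (SKIP + 1) == 0:
        mask = backsub.apply(frame)
        contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
            centre = np.array([[x + w / 2], [y + h / 2]], np.float32)
            kf.correct(centre)                  # update the filter with the measurement
    # On skipped frames the predicted position/velocity stands in for detection.
    frame_idx += 1
cap.release()
```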
Computer
Sanaa Ali Jabber; Soukaena H. Hashem; Shatha H Jafer
Abstract
Finding an optimal solution to a problem, such as minimizing or maximizing an objective function, is the goal of Single-Objective Optimization (SOP). Real-world problems, on the other hand, are more complicated and involve a wider range of objectives; several objectives should be maximized simultaneously in such problems. No single solution can be improved in all objectives without deteriorating at least one other objective, which is the definition of Pareto-optimality. Understanding the idea of Multi-Objective Optimization (MOP) is thus necessary to find the optimum solution. Multi-objective evolutionary algorithms (MOEAs) are designed to assess many objectives simultaneously and find Pareto-optimal solutions; an MOEA can solve both multi-objective and single-objective optimization problems. This paper introduces a survey of optimization problem solutions by comparing the techniques, advantages, and disadvantages of SOP and MOP with metaheuristics and evolutionary algorithms. From this study, we conclude that the strength of MOP lies in handling more than one single objective at once, but it takes longer to process and train and is not suitable for all applications, while SOP is faster and more useful in applications such as stock and profit maximization. Posterior techniques, built on the field of metaheuristics, are considered the dominant approach to solving multi-objective problems.
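The Pareto-optimality definition quoted above translates directly into a short dominance test; the snippet below, using hypothetical two-objective candidate solutions (both objectives to be maximized), returns the non-dominated set.

```python
def dominates(a, b):
    """True if solution a Pareto-dominates b (all objectives maximized):
    a is at least as good in every objective and strictly better in at least one."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(solutions):
    """Return the non-dominated (Pareto-optimal) solutions."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other is not s)]

# Hypothetical objective vectors (e.g. profit, reliability) for five candidate solutions.
candidates = [(3, 5), (4, 4), (2, 6), (4, 5), (1, 1)]
print(pareto_front(candidates))   # -> [(2, 6), (4, 5)]
```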
Computer
Asaad Raheem Kareem; Hasanen S. Abdullah
Abstract
The article provides an overview of two recent developments in technology: Business Intelligence (BI) and Deep Learning (DL). To support decision-making processes, BI entails gathering, integrating, and analyzing data from various sources, while DL uses artificial neural networks to learn and generate predictions from complicated datasets. This paper introduces the concepts and principles and highlights recent developments and applications in different research domains: education, organizations, the stock market, forecasting, real-time decision-making, and security. However, the fundamental problem with the business intelligence approach is that there is no learning involved. Other limitations and challenges include the capacity constraints that affect the data analysis process, the variety of data in the results, and the need for a complete presentation of results in the form of dashboards, scorecards, reports, and portals. The choice of approach hinges on the problem's context and requirements and on the nature and characteristics of the data. Although BI and DL are widespread, alternative methods such as machine learning, data mining, and statistical analysis may also be suitable; justifying the selection based on precise needs and goals is crucial. Recurrent neural networks (RNN), convolutional neural networks (CNN), long short-term memory (LSTM), gated recurrent units (GRU), and business intelligence tools are used in the research problem to address these limitations and to explore the potential advantages and difficulties of integrating BI and DL to achieve an advantage in a given sector.
Computer
Amar A. Mahawish; Hassan Jaleel Hassan
Abstract
The performance of the Internet is significantly impacted by network congestion. Because of the internet's current rapid growth, congestion can increase and cause more packets to be dropped. Transmission Control Protocol (TCP) connections provide reliable transmission of packets and traditionally use a Drop Tail (DT) mechanism, in which congestion signals occur only when the queue has become full. The Random Early Detection (RED) algorithm can be used for congestion control, eliminating this drawback of the full tail-drop queue. However, RED has difficulty dealing with different numbers of connections because of its fixed parameter tuning. In this study, an exhaustive search is combined with the RED algorithm to develop the EX-RED algorithm. Based on network performance metrics such as packet drops, delay, and throughput, the developed algorithm adjusts the default RED parameters to find the best values. When the number of TCP connections changes, the exhaustive search systematically enumerates all possible parameter settings. The simulation results showed that EX-RED improved the performance of the network compared with five other algorithms (GRED, RED, ARED, NLRED, and NLGRED) by decreasing delay and dropped packets and increasing throughput.
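As an illustration of the exhaustive-search idea, the sketch below enumerates candidate RED parameter triples (minimum threshold, maximum threshold, maximum drop probability) and keeps the best-scoring combination. The candidate ranges, the combined score, and the placeholder simulate_red function are assumptions; a real study would call a network simulator at that point.

```python
import itertools

def simulate_red(min_th, max_th, max_p, n_connections):
    """Placeholder for a simulator run; returns hypothetical
    (throughput, delay, drops) for one RED parameter setting."""
    # Dummy analytic stand-in so the sketch runs; a real study would invoke the simulator here.
    throughput = max_th - max_p * n_connections
    delay = max_th / 10.0
    drops = max_p * n_connections
    return throughput, delay, drops

def ex_red_search(n_connections):
    """Exhaustively enumerate candidate RED parameters and keep the best-scoring set."""
    best, best_score = None, float("-inf")
    for min_th, max_th, max_p in itertools.product(
            range(5, 30, 5),                 # candidate minimum thresholds (packets)
            range(30, 90, 10),               # candidate maximum thresholds (packets)
            [0.02, 0.05, 0.1, 0.2]):         # candidate maximum drop probabilities
        if min_th >= max_th:
            continue
        throughput, delay, drops = simulate_red(min_th, max_th, max_p, n_connections)
        score = throughput - delay - drops   # assumed combined metric
        if score > best_score:
            best, best_score = (min_th, max_th, max_p), score
    return best

print(ex_red_search(n_connections=50))
```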
Computer
Mohammed Majid Msallam
Abstract
In recent years, safe lockers have spread in public places to secure valuable belongings. People are concerned about losing safe-locker keys or about the spare key being used by others, and so remain worried about their things. To address this, this paper proposes a system that relies on biometrics to secure valuable items and can reduce these concerns and worries. The proposed system consists of two major parts, software and hardware. In the hardware part, a microcontroller with a camera and an electronic lock is used to securely open or close the door of the safe locker. In the software part, the images captured by the camera are prepared by an image processing algorithm, and then a Support Vector Machine (SVM) is trained on the images of the person. The person's images and information are saved until the belongings are retrieved, after which everything is deleted to prepare for the next user.
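A toy sketch of the software side is given below: an SVM is trained on stand-in face vectors for the locker owner versus others, and the lock is released only when the classifier's confidence passes a threshold. The synthetic data, the probability threshold, and the unlock helper are illustrative assumptions, not the paper's implementation.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

# Stand-in for preprocessed face images captured by the locker camera:
# flattened 32x32 grayscale crops for the "owner" and for "others".
rng = np.random.default_rng(0)
owner = rng.normal(0.6, 0.1, size=(40, 32 * 32))
others = rng.normal(0.4, 0.1, size=(40, 32 * 32))
X = np.vstack([owner, others])
y = np.array([1] * 40 + [0] * 40)            # 1 = owner, 0 = someone else

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, stratify=y, random_state=0)
clf = SVC(kernel="rbf", probability=True).fit(X_tr, y_tr)

def unlock(face_vector, threshold=0.9):
    """Open the electronic lock only if the SVM is confident the face is the owner's."""
    return clf.predict_proba([face_vector])[0][1] >= threshold

print("test accuracy:", clf.score(X_te, y_te))
print("unlock for first test face:", unlock(X_te[0]))
```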
Computer
Asmaa A Mohammed; Abdul Monem S. Rahma; Hala B. Abdulwahad
Abstract
The rapid "development of communication technology" has had a considerable impact on financial transactions, leading to the appearance of new types of "currency." These digital currencies are intended to streamline processes, decrease time and effort expenditures, and minimize financial losses while ...
Read More ...
The rapid "development of communication technology" has had a considerable impact on financial transactions, leading to the appearance of new types of "currency." These digital currencies are intended to streamline processes, decrease time and effort expenditures, and minimize financial losses while doing away with the need for traditional financial intermediaries and central bank regulation. Despite persistent worries and hazards that remain in the minds of people participating in currency trading and stock exchanges, digital currencies have significantly impacted the global financial industry. The fundamental issue with digital currencies is the question of the legal and regulatory frameworks. Since digital currencies are decentralized, conventional regulatory frameworks might find it difficult to keep up with the consequences of this rapidly changing technology. This may result in ambiguity and uncertainty regarding the governance and regulation of digital currency.Best in Class In addition to exploring the ideas of Bitcoin and block chain technology, this study attempts to give an overview of digital currencies, including their definition, emergence, and development. The research examined the stages of development of digital currency from its inception in 2009 to the present year, 2023. It traced the evolution of digital currency over this period. The study provided into how digital currency has evolved from its early days, marked by the introduction of Bitcoin in 2009, to its current state in 2023.
Computer
Dina M. Abdulhussien; Laith J. Saud
Abstract
Face detection technology is the first and essential step for facial-based analysis algorithms such as face recognition, face feature extraction, face alignment, face enhancement, and face parsing. It also serves other applications related to the analysis of human intention and action, such as facial expression recognition, gender recognition, and age classification. Face detection is used to detect faces in digital images. It is a special case of object detection and can be used in many areas such as biometrics, security, law enforcement, entertainment, and personal safety. Various methods have been proposed in the field of face detection, and they all compete to make it more advanced and accurate. These methods belong to two main approaches: the feature-based approach and the image-based approach. This paper reviews the face detection methods that belong to the feature-based approach, covering their working concepts, strengths, and limitations. It concentrates on the feature-based approach because of its simplicity and high applicability in real-time detection compared to the image-based approach.
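As one concrete example of the feature-based approach reviewed here, the snippet below runs OpenCV's Haar-cascade face detector (Haar-like features with a boosted cascade) on a hypothetical image; the file names and detection parameters are placeholders.

```python
import cv2

# Haar cascades are a classic feature-based detector (Haar-like features + boosted cascade).
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

img = cv2.imread("group_photo.jpg")                    # hypothetical input image
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)

# scaleFactor and minNeighbors trade detection rate against false positives.
faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5, minSize=(30, 30))
for (x, y, w, h) in faces:
    cv2.rectangle(img, (x, y), (x + w, y + h), (0, 255, 0), 2)

cv2.imwrite("detected.jpg", img)
print(f"{len(faces)} face(s) detected")
```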
Computer
Esraa Q. Naamha; Matheel E. Abdulmunim
Abstract
The World Wide Web (WWW) is a vast repository of knowledge, including intellectual, social, financial, and security-related data. Online information is typically accessed for instructional purposes and is available in a variety of formats and through a variety of access interfaces. Because of this, indexing or semantic processing of the data on websites may be difficult. Web data scraping is the method that seeks to resolve this issue: unstructured web data can be converted into structured data so that it can be stored and examined in a central local database or spreadsheet. This paper offers a metadata scraping system using a programmable Customized Search Engine (CSE), which can extract metadata from web pages (HTML pages) in the Google database and save it in an XML format for later analysis and retrieval. Documents that contain metadata are a relatively recent phenomenon on the web and increase the likelihood that users will find the information they need.
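A small Python sketch of the metadata-extraction and XML-storage steps is shown below; it assumes the result URLs have already been obtained (the paper obtains them through a programmable CSE) and simply pulls the title and meta tags from each HTML page with BeautifulSoup, then writes them with ElementTree. The field layout of the XML file is an assumption.

```python
import requests
import xml.etree.ElementTree as ET
from bs4 import BeautifulSoup

def scrape_metadata(url):
    """Fetch an HTML page and collect its <title> and <meta> name/content pairs."""
    html = requests.get(url, timeout=10).text
    soup = BeautifulSoup(html, "html.parser")
    meta = {"title": (soup.title.string or "") if soup.title else ""}
    for tag in soup.find_all("meta"):
        name = tag.get("name") or tag.get("property")
        if name and tag.get("content"):
            meta[name] = tag["content"]
    return meta

def save_as_xml(records, path):
    """Store the scraped metadata in an XML file for later analysis and retrieval."""
    root = ET.Element("pages")
    for url, meta in records.items():
        page = ET.SubElement(root, "page", url=url)
        for key, value in meta.items():
            field = ET.SubElement(page, "field", name=key)
            field.text = value
    ET.ElementTree(root).write(path, encoding="utf-8", xml_declaration=True)

urls = ["https://example.com"]                 # placeholder result URLs from the CSE step
save_as_xml({u: scrape_metadata(u) for u in urls}, "metadata.xml")
```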
Computer
Mohanad A. Mohammed; Hala B. Abdul Wahab
Abstract
Decentralization within IoT removes the need for IoT networks to communicate only with servers, which may face difficulties related to the internet, vulnerabilities, DDoS attacks, or hijacking; merging blockchain with IoT converts the IoT system into a decentralized one, with many benefits and outcomes resulting from this conversion. Homomorphic encryption (HE) is a technique that operates on encrypted data without the need to decrypt it; here, the Paillier encryption method is used. This paper proposes a system that integrates the Paillier homomorphic cryptosystem with IoT and lightweight blockchain technology to provide decentralization for the IoT environment and improve security. The proposed system improves the IoT device working environment by addressing the main challenges: security using blockchain, privacy using homomorphism, and data volume using blockchain. The dataset used to implement and evaluate the proposed system is Industrial Internet of Things data generated by an Industry 4.0 machine storage system, representing system failure and working status. The system is evaluated using standard metrics for blockchain effectiveness, time, and resources consumed, and shows better results in time and power consumption.
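The additive homomorphism the paper relies on can be illustrated with the python-paillier (phe) library: encrypted sensor readings can be summed without decryption, and only the private-key holder recovers the total. The sensor values and key length below are illustrative, and the integration with the lightweight blockchain is not shown.

```python
# pip install phe  (python-paillier)
from phe import paillier

# Key pair for the Paillier cryptosystem (additively homomorphic).
public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# Hypothetical IoT sensor readings, encrypted at the device side.
readings = [21.5, 22.0, 20.8]
ciphertexts = [public_key.encrypt(r) for r in readings]

# An aggregator can sum the encrypted readings without ever seeing the plaintexts.
encrypted_sum = ciphertexts[0]
for c in ciphertexts[1:]:
    encrypted_sum = encrypted_sum + c

print("decrypted sum:", private_key.decrypt(encrypted_sum))   # -> approximately 64.3
```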
Computer
Haider M. Al-Mashhadi; Hussain Jassim Fahad
Abstract
Electrical energy is one of the most important components of modern life, and many fields depend on it. The field of electrical energy distribution (the electricity network), which transmits electrical energy from sources to consumers, is one of the most important areas that need to be developed and improved. In addition to analyzing electrical energy consumption, there is a need to forecast consumption and determine consumer behavior in terms of consumption and how to balance supply and demand. This research aims to analyze weather data and find the relation between weather factors and energy consumption in order to prepare suitable data for a machine learning model for future use. This model analyzes the building consumption rate for a particular area and takes into account the weather factors that affect electrical energy consumption: temperature, dew point, and ultraviolet index are selected based on correlation confidence and then divided into a set of categories using the K-Means algorithm to show the effect of each factor on the others.
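A compact pandas/scikit-learn sketch of the described preparation steps is shown below: weather factors are kept only if their correlation with consumption passes a threshold, and the selected factors are then grouped into categories with K-Means. The toy records, the 0.8 threshold, and the three clusters are assumptions for illustration.

```python
import pandas as pd
from sklearn.cluster import KMeans

# Hypothetical hourly records: weather factors plus building energy consumption (kWh).
df = pd.DataFrame({
    "temperature": [30, 32, 35, 28, 25, 40, 38, 22],
    "dew_point":   [18, 19, 21, 17, 15, 24, 23, 14],
    "uv_index":    [5, 6, 8, 4, 3, 10, 9, 2],
    "humidity":    [40, 42, 39, 55, 60, 30, 33, 65],
    "consumption": [120, 130, 150, 110, 100, 180, 170, 90],
})

# Keep only weather factors whose correlation with consumption exceeds a confidence threshold.
corr = df.corr()["consumption"].drop("consumption")
selected = corr[corr.abs() > 0.8].index.tolist()
print("selected factors:", selected)

# Divide the selected factors into categories with K-Means to study their joint effect.
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(df[selected])
df["category"] = km.labels_
print(df.groupby("category")["consumption"].mean())
```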
Computer
Sama Salam Samaan; Hassan Awheed Jeiad
Abstract
Traditional networks fall drastically short of the requirements of the current networking world. Software-Defined Networking (SDN) is a pivotal development in the networking domain that separates the control and data planes, enlarges the data plane granularity, and simplifies the network devices. All these factors accelerate and automate the evolution of new services. However, when the SDN network topology becomes large, it poses new challenges in security, traffic management, and scalability due to the vast amounts of traffic data generated and the need for additional controllers to manage the significant number of networking devices. On the other hand, big data has become an attractive trend that can enhance network performance in general, and that of SDN specifically. Both SDN and big data have gained great attention from industry and academia. Traditionally, these two subjects have been studied separately in most of the preceding works. However, big data can thoroughly impact the design and implementation of SDN. This paper presents how big data can support SDN in various aspects, including intrusion detection, traffic monitoring, and controller scalability and resiliency. We suggest several approaches toward deeper cooperation between big data and SDN.
Computer
Suha Dh. Athab; Kesra Nermend; Abdulamir Abdullah Karim
Abstract
Microsoft Common Objects in Context (COCO) is a huge image dataset with over 300,000 images belonging to more than ninety-one classes. COCO holds valuable information for detection, segmentation, classification, and tagging, but the dataset suffers from being unorganized, and its classes interfere with each other. Working with it directly gives very low, unsatisfying results, whether calculating accuracy or intersection over union in classification and segmentation algorithms. A simple method is proposed to create a customized subset of the COCO dataset by specifying the class or classes of interest. The suggested method is very useful as a preprocessing step for any detection or segmentation algorithm, such as YOLO, SSPNET, RCNN, etc. The proposed method was validated using the LinkNet architecture for semantic segmentation. The results after applying the preprocessing are presented and compared to state-of-the-art methods; the comparison demonstrates the exceptional effectiveness of transfer learning with our preprocessing model.
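One way to realize such a subset-extraction step is with the pycocotools COCO API, as sketched below: the category names are mapped to IDs, the image IDs containing those categories are collected, and the matching files are copied into a new folder. The annotation path, image folder, and chosen classes are placeholders, and this is not necessarily the paper's exact procedure.

```python
# pip install pycocotools
from pycocotools.coco import COCO
import shutil, os

ANN_FILE = "annotations/instances_train2017.json"    # assumed COCO annotation path
IMG_DIR, OUT_DIR = "train2017", "coco_subset"
CLASSES = ["person", "car"]                          # classes to keep in the subset

coco = COCO(ANN_FILE)
cat_ids = coco.getCatIds(catNms=CLASSES)

os.makedirs(OUT_DIR, exist_ok=True)
img_ids = set()
for cat_id in cat_ids:
    img_ids.update(coco.getImgIds(catIds=[cat_id]))  # images containing this class

for info in coco.loadImgs(list(img_ids)):
    src = os.path.join(IMG_DIR, info["file_name"])
    if os.path.exists(src):
        shutil.copy(src, os.path.join(OUT_DIR, info["file_name"]))

print(f"copied {len(img_ids)} images for classes {CLASSES}")
```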
Computer
Hind Moutaz Al-Dabbas; Raghad Abdulaali Azeez; Akbas Ezaldeen Ali
Abstract
Iris identification is a well-known biometric identification technique for recognizing human beings based on their physical characteristics. The texture of the iris is essential, and its anatomy varies from individual to individual; humans have distinctive physical characteristics that never change. This has resulted in considerable advancement in the field of iris identification, which inherits the random variation of the data and is often a dependable technological area. This research examines three machine learning classification algorithms using feature extraction from the iris image. The applied recognition system uses several methods to enhance the input images for iris recognition using the Multimedia University (MMU) database. The Linear Discriminant Analysis (LDA) feature extraction method is applied, and its output is the input to three machine learning algorithms: the OneR, J48, and JRip classifiers. The results indicate that the OneR classifier with LDA achieves the highest performance with 94.387% accuracy, while J48 and JRip reached 90.151% and 86.885%, respectively.
Computer
Sama Salam Samaan; Hassan Awheed Jeiad
Abstract
Modelling computer networks in general, and Software-Defined Networking (SDN) in particular, as a graph is beneficial for network planning and design, configuration management, traffic analysis, and security. Owing to the dynamic nature of SDN, a fast response to the rapid changes in network state is needed. The SDN network topology can be modelled as a graph and stored in a graph database, with the traffic load of each switch stored in the created graph. Consequently, a graph processing framework can be used to process the stored traffic data, and the results are utilized in traffic engineering to assist the SDN controller in network management. This paper provides a comprehensive literature survey of graph techniques applied to SDN. Then, a summary of graph algorithms is presented, along with an overview of graph databases and graph processing frameworks. Finally, a model is suggested to integrate a graph database and a graph processing framework for SDN traffic analysis.
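A minimal networkx sketch of the suggested modelling is given below: switches become nodes carrying their traffic load as an attribute, links become edges, and a load-aware shortest path is computed as a simple traffic-engineering query. The topology, the load figures, and the edge-weighting rule are illustrative assumptions; a production system would use a graph database and a graph processing framework as the paper proposes.

```python
import networkx as nx

# Model the SDN topology as a graph: switches are nodes, links are edges,
# and each switch stores its current traffic load as a node attribute.
G = nx.Graph()
G.add_nodes_from([
    ("s1", {"load_mbps": 120}),
    ("s2", {"load_mbps": 430}),
    ("s3", {"load_mbps": 80}),
    ("s4", {"load_mbps": 250}),
])
G.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")])

# Weight each link by the loads of its endpoints so that path selection
# avoids congested switches (a simple traffic-engineering heuristic).
for u, v in G.edges:
    G[u][v]["weight"] = G.nodes[u]["load_mbps"] + G.nodes[v]["load_mbps"]

path = nx.shortest_path(G, "s1", "s4", weight="weight")
print("least-loaded path:", path)     # -> ['s1', 's3', 's4']
```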
Computer
Samara Mohammed Radhi; Raheem Ogla
Abstract
Securing information is difficult in the modern internet era, as terabytes of data are generated online daily and online transactions occur virtually every second. The current world's information security relies heavily on cryptography, which makes the internet a safer environment. Cryptography makes information incoherent to an unauthorized person, thereby providing legitimate users with confidentiality. There is a wide variety of cryptographic algorithms suitable for this purpose. An ideal cryptographic method would allow users to do their job without breaking the bank; unfortunately, there is no magic formula that can address every issue. Several algorithms balance cost and performance: a banking application needs robust security at a high cost, while gaming software that sends user patterns for analytics cares more about speed and cost. Thus, the appropriate encryption technique must be chosen for each user. This study offers important insights into the process of selecting cryptographic algorithms in terms of each algorithm's strengths, weaknesses, cost, and performance. To present a complete performance analysis, as opposed to purely theoretical comparisons, this research implemented and thoroughly examined the cost and performance of commonly used cryptographic algorithms, including DES, 3DES, AES, RSA, and Blowfish. According to the findings, Blowfish requires the smallest amount of time to decrypt files of various sizes (25K, 50K, 1M, 2M, 3M, and 4M) and also consumes the smallest amount of memory, making it approximately three times faster than the other cryptographic algorithms.
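For a flavour of how such a benchmark can be run in code, the sketch below times ECB-mode encryption of 1 MB of random data with PyCryptodome for AES, DES, 3DES, and Blowfish (RSA is omitted because it operates on small blocks and is used differently). The data size, mode, and the timing of encryption rather than decryption are illustrative choices, not the paper's test setup.

```python
# pip install pycryptodome
import time, os
from Crypto.Cipher import AES, DES, DES3, Blowfish

def time_encrypt(make_cipher, data, block=8):
    """Encrypt `data` with a freshly created cipher and return the elapsed time in seconds."""
    padded = data + b"\0" * (-len(data) % block)    # zero-pad to the block size
    cipher = make_cipher()
    start = time.perf_counter()
    cipher.encrypt(padded)
    return time.perf_counter() - start

data = os.urandom(1_000_000)                         # 1 MB of test data
results = {
    "AES":      time_encrypt(lambda: AES.new(os.urandom(16), AES.MODE_ECB), data, 16),
    "DES":      time_encrypt(lambda: DES.new(os.urandom(8), DES.MODE_ECB), data, 8),
    "3DES":     time_encrypt(lambda: DES3.new(DES3.adjust_key_parity(os.urandom(24)),
                                              DES3.MODE_ECB), data, 8),
    "Blowfish": time_encrypt(lambda: Blowfish.new(os.urandom(16), Blowfish.MODE_ECB), data, 8),
}
for name, secs in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name:8s} {secs * 1000:.2f} ms")
```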
Computer
Rasheed Abdul Ameer Rasheed; Ahmed Sabah Al-Araji; Mohammed Najm Abdullah; Hamed S Al-Raweshidy
Abstract
A client-server network is one of the most important topics in computer networking. In this work, a real-time computer control system is designed based on the Client-Server Model for a Multi-Agent Mobile Robot System (CSM-MAMRS) and is applied to a building that consists of (N) floors, using one mobile robot on each floor with specific actions in different types of environments. The new proposed CSM-MAMRS consists of four stages. The first stage manages the network communication for each agent. The second stage solves the major problems of path planning by using a proposed hybrid algorithm that combines the Rapidly Exploring Random Tree Star (RRT*) and Particle Swarm Optimization (PSO) algorithms in order to provide the shortest and smoothest collision-free path between the starting and target points in static and dynamic robot environments. The third stage is a velocity planner controller based on an inverse kinematic mobile robot model. In the fourth stage, the Hypertext Transfer Protocol (HTTP) is used to send velocity values to the real mobile robots via the Wireless Network Control Administration. The simulation results and experimental work achieved a significant real-time improvement when using three missions of three mobile robots on different static map floors in the building. The maximum tracking pose errors for the three robots in the static environments are 0.39 cm, 0.02 cm, and 0.32 cm, respectively, while the maximum tracking pose errors in the dynamic environments are 2.94 cm and 2.8 cm for only two mobile robots along a maximum distance of 250 cm.
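The fourth stage, pushing velocity values to the robots over HTTP, might look roughly like the sketch below; the robot addresses, the /velocity endpoint, and the JSON payload are purely hypothetical placeholders, since the abstract does not publish the message format.

```python
import requests

# Hypothetical robot endpoints on the building network; the controller pushes the
# velocity values computed by the velocity-planner stage over HTTP.
ROBOTS = {
    "floor1": "http://192.168.1.101/velocity",
    "floor2": "http://192.168.1.102/velocity",
}

def send_velocity(robot, v_right, v_left):
    """POST the wheel velocities (cm/s) to one mobile robot and return its reply."""
    resp = requests.post(ROBOTS[robot],
                         json={"v_right": v_right, "v_left": v_left},
                         timeout=2)
    resp.raise_for_status()
    return resp.text

# Example: command the floor-1 robot with the velocities from the planner.
print(send_velocity("floor1", v_right=12.5, v_left=11.8))
```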
Computer
Yasmin A. Hassan; Abdul Monem S. Rahma
Abstract
Since the Internet has become more widely used and more people have access to multimedia content, copyright hacking and piracy have risen. Watermarking techniques make security, asset protection, and authentication possible. In this paper, a comparison between fragile and robust watermarking techniques is presented so that recent studies can benefit from them to increase the level of security of critical media. A new technique is suggested in which an embedded value (129) is added to each pixel of the cover image and represented as a key to thwart attackers, increase security, raise imperceptibility, and make the system faster in detecting tampering by unauthorized users. Using the two watermarking types in the same system achieves better results, increases the strength of the system, makes it robust against any attack, and reveals any modification at the same time. PSNR has been used as a performance metric to evaluate the study; the result for the new proposed watermark is 54. It is preferable to utilize both a fragile and a robust watermark simultaneously.
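A rough sketch of the two ingredients the abstract names, the +129 key offset and the PSNR imperceptibility metric, is given below; because the abstract does not detail the embedding itself, a generic LSB embedding stands in for it (an assumption), and the cover image and watermark are random stand-ins.

```python
import numpy as np

def add_key(img, key=129):
    """Offset every pixel by the key value (mod 256); reversible with subtract_key."""
    return ((img.astype(np.int16) + key) % 256).astype(np.uint8)

def subtract_key(img, key=129):
    return ((img.astype(np.int16) - key) % 256).astype(np.uint8)

def embed_lsb(img, mark):
    """Generic fragile embedding: hide a binary mark in the least significant bits."""
    return (img & 0xFE) | (mark & 1)

def psnr(a, b):
    """Peak signal-to-noise ratio (dB), the imperceptibility metric used in the paper."""
    mse = np.mean((a.astype(float) - b.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

cover = np.random.randint(0, 256, (256, 256), dtype=np.uint8)    # stand-in cover image
mark = np.random.randint(0, 2, cover.shape, dtype=np.uint8)      # binary watermark

keyed = add_key(cover)                       # key offset applied before embedding (assumed order)
watermarked = embed_lsb(keyed, mark)
restored_view = subtract_key(watermarked)    # what an authorised user would recover

print("PSNR vs cover (dB):", round(psnr(cover, restored_view), 2))
```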
Computer
Farah Tawfiq Abdul Hussien; Abdul Monem S. Rahma; Hala Bahjat Abdul Wahab
Abstract
Providing security for each online consumer over the internet is a critical and time-consuming issue that may place a heavy load on the website server, especially for large websites at rush times. This process may generate a variety of issues, including response-time delays, lost client orders, and system crashes or deadlocks, all of which decrease system performance. This work presents a new multi-agent system prototype structure that solves the challenge of security while avoiding issues that might degrade system performance. This is accomplished by installing a software agent on the client's device that handles the purchase and encryption processes without the need for the user to intervene. The suggested agent avoids the problems of deadlock (i.e., failure) and request loss, ensuring that information exchanged between all entities is protected. According to test results, the use of a software agent to manage buying and encryption operations improves system performance by 10% and the system's reaction time by 30.5% (response time, page loading time, transaction processing speed, and orders per second).
Computer
Hayder I. Mutar; Muna M. Jawad
Abstract
Wireless Sensor Networks (WSNs) have become the most cost-effective monitoring solution due to their low cost, despite their major drawback of limited power caused by dependence on batteries. Sensor Nodes (SNs) are clustered in a particular location and form a network by self-organizing, and they often operate in some of the world's most unusual or dangerous conditions. Networking errors, memory and processor limitations, and energy constraints all pose problems for WSN developers. Many problems in WSNs are expressed as multivariate optimization problems that are solved using biologically inspired techniques. Particle swarm optimization (PSO) is a simple, algorithmically sound, and robust optimization technique. It has been used to address problems such as clustering, data routing, Cluster Head (CH) selection, and data collection in WSNs. This paper presents a brief analysis of WSN studies in which the PSO algorithm was used as the primary or secondary algorithm for enhancing the lifespan of WSNs, focusing on results that show energy efficiency in the sensors and an extension of the network's life.
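As a generic illustration of how PSO is applied to such WSN problems, the sketch below runs a standard PSO loop to place cluster heads so that the total node-to-cluster-head distance (a simple energy proxy) is minimized. The node layout, particle count, and PSO coefficients are illustrative assumptions rather than values from any surveyed paper.

```python
import numpy as np

rng = np.random.default_rng(1)
nodes = rng.uniform(0, 100, size=(40, 2))      # hypothetical sensor node positions (m)
N_CH, DIM = 3, 6                               # 3 cluster heads -> 6-dimensional particles

def cost(position):
    """Energy proxy: total distance from each node to its nearest cluster head."""
    chs = position.reshape(N_CH, 2)
    d = np.linalg.norm(nodes[:, None, :] - chs[None, :, :], axis=2)
    return d.min(axis=1).sum()

# Standard PSO loop with inertia (w), cognitive (c1), and social (c2) terms.
n_particles, iters, w, c1, c2 = 30, 100, 0.7, 1.5, 1.5
pos = rng.uniform(0, 100, size=(n_particles, DIM))
vel = np.zeros_like(pos)
pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
gbest = pbest[pbest_cost.argmin()].copy()

for _ in range(iters):
    r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0, 100)
    costs = np.array([cost(p) for p in pos])
    improved = costs < pbest_cost
    pbest[improved], pbest_cost[improved] = pos[improved], costs[improved]
    gbest = pbest[pbest_cost.argmin()].copy()

print("best cluster-head layout:", gbest.reshape(N_CH, 2).round(1))
```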