Communication
Maysoon Hashim Ismaal; Alaa Hussein Ali; Sabah M. Thaba
Abstract
V2O5 and VO2 nanoparticle films have been prepared hydrothermally from vanadium-oxygen systems as prototypes for photodetector design. X-ray diffraction (XRD) analysis shows that the films are polycrystalline, with 7 and 14 peaks and crystallite sizes of 19.59 nm for V2O5 and 12.92 nm for VO2. Atomic force microscopy (AFM) analysis shows large, neatly separated conical columnar grains across the surface, with some of the columnar grains coalescing in a few spots. The average particle size is 29.58 nm for V2O5 and 16 nm for VO2, with RMS roughness of 6.8 nm and 21.3 nm, respectively. The optical energy gap is 2.6 eV for V2O5 and 1.36 eV for VO2. In addition, the reflectance was found to increase in the visible and infrared regions, reaching 0.09 and 0.07, respectively. The maximum refractive indices of V2O5 and VO2 were 2.6 and 1.9, respectively. Two types of heterojunction photodetectors, Ag/VO2/PSi/n-Si/Ag and Ag/V2O5/PSi/n-Si/Ag, have been fabricated and characterized. For the Ag/V2O5/PSi/n-Si/Ag heterojunction photodetector at different PMMA:acetone concentrations, the responsivity was 0.7 A/W at 850 nm with a remarkable detectivity of 4.1 x 10^12 cm.Hz^0.5/W, while the highest detectivity of the Ag/VO2/PSi/n-Si/Ag device was 3.3 x 10^12 cm.Hz^0.5/W at wavelengths of 850 nm and above.
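Crystallite sizes of the kind quoted above are conventionally extracted from XRD peak widths via the Scherrer equation, D = K*lambda / (beta * cos(theta)). The sketch below is illustrative only: the Cu K-alpha wavelength, shape factor, and peak parameters are assumptions, not values taken from this work.

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, k=0.9):
    """Crystallite size D = K*lambda / (beta * cos(theta)).

    wavelength_nm : X-ray wavelength (Cu K-alpha ~ 0.15406 nm, assumed)
    fwhm_deg      : peak full width at half maximum, in degrees
    two_theta_deg : diffraction angle 2-theta of the peak, in degrees
    k             : dimensionless shape factor (0.9 is a common choice)
    """
    beta = math.radians(fwhm_deg)             # FWHM converted to radians
    theta = math.radians(two_theta_deg / 2)   # Bragg angle theta
    return k * wavelength_nm / (beta * math.cos(theta))

# Illustrative peak: 2-theta = 20.3 deg, FWHM = 0.42 deg
size_nm = scherrer_size(0.15406, fwhm_deg=0.42, two_theta_deg=20.3)
```

In practice a per-film size is usually reported as the average over several diffraction peaks, since each peak yields its own estimate.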
Computer
Dhulfiqar Hakeem Dhayef; Sawsan S A Al-Zubaidi; Luma A H Al-Kindi
Abstract
Cell formation plays a crucial role in the development of cellular manufacturing systems (CMS). Previous studies in this field have typically assumed that each part is associated with a single process plan. However, incorporating alternative routes offers additional flexibility in CMS design. This paper addresses the cell formation problem with alternative routes through a two-stage approach. In the first stage, a Route Rank Index (RRI) is developed based on a correlation matrix to select the optimal alternative route for each part. Subsequently, a Genetic Algorithm (GA) is employed in the second stage to form part families and machine cells. The proposed approach's computational performance is evaluated using a set of generalized group technology datasets from the existing literature. The results demonstrate that the proposed approach is highly effective and efficient at addressing the cell formation problem with alternative routes. The practical ramifications of these findings are substantial. Our suggested approach demonstrates its resilience and adaptability by achieving comparable or better grouping results across a wide variety of benchmark datasets. This shows the method can be used in a wide range of practical situations, including those involving matrices of varying sizes and shapes. The comparison study also benefits the theoretical knowledge base on part-machine grouping strategies: by comparing the results of our suggested method to those of well-known heuristics, we shed light on its benefits and drawbacks.
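The quality of a part-machine grouping is commonly scored with the grouping-efficacy measure from the cell-formation literature. The sketch below computes it for a toy incidence matrix; this is a standard metric, not necessarily the exact objective used in the paper, and the matrix and cell assignments are invented for illustration.

```python
import numpy as np

def grouping_efficacy(matrix, machine_cells, part_families):
    """Grouping efficacy = (e - e_out) / (e + e_v), where e is the total
    number of operations (1s), e_out the exceptional elements (1s outside
    the diagonal blocks) and e_v the voids (0s inside the blocks)."""
    m = np.asarray(matrix)
    e = m.sum()
    e_in = 0          # 1s inside the diagonal blocks
    voids = 0         # 0s inside the diagonal blocks
    for cell, family in zip(machine_cells, part_families):
        block = m[np.ix_(cell, family)]
        e_in += block.sum()
        voids += block.size - block.sum()
    e_out = e - e_in
    return (e - e_out) / (e + voids)

# Toy 4-machine x 5-part incidence matrix partitioned into two cells
M = [[1, 1, 0, 0, 0],
     [1, 1, 1, 0, 0],
     [0, 0, 0, 1, 1],
     [0, 0, 1, 1, 1]]
eff = grouping_efficacy(M, machine_cells=[[0, 1], [2, 3]],
                        part_families=[[0, 1], [3, 4]])
```

A GA for cell formation would evaluate candidate partitions with a score of this kind and evolve the assignment of machines to cells and parts to families.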
Computer
Omar Nowfal MohammedTaher; Mohammed Najm Abdullah; Hassan Awheed Jeiad
Abstract
Image processing operations without advanced, expensive microprocessors consume more time and power and require larger programs, so improving the cost-effectiveness of microprocessors is crucial. This paper proposes an improvement to the MIPS_32 architecture, called Customized MIPS_32 (CMIPS_32), to enhance its image processing (IP) capabilities. The proposal aims to increase throughput by collapsing the repeated fetching of the instructions required by a given IP operation into a single customized IP instruction. The MIPS_32 architecture was developed in two phases. First, the register file, control unit, and ALU were modified to handle the information related to IP operations. Second, two new units, an address calculation unit and a last-pixel detection unit, were added to determine an image's starting and ending addresses. Furthermore, the MIPS_32 pipeline is customized to have five or six stages, depending on the intensity of the operation required by a given IP instruction, to decrease the number of machine clocks and the power consumed. The proposal was implemented on the Zed-Board XC7Z020CLG484-1 FPGA. The results showed that the computation speedup increased by a factor equal to the number of standard instructions required to execute the same operation performed by one of the proposed IP instructions. The CMIPS_32 consumed less power than other models implemented on Spartan3-XC3S1500L, Virtex5-XC5VFX30T, Virtex6-XC6VLX75T, and Virtex6-Low-Power-XC6VLX75T by 0.0138 W, 0.6468 W, 1.31 W, and 0.7898 W, respectively. Comparing the power consumed by the proposal with a GPU showed that the CMIPS_32 consumes 63.8698 W less than the NVIDIA-GPU-GTX980.
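The reported speedup, a factor equal to the number of standard instructions replaced by one customized IP instruction, can be illustrated with an ideal-pipeline cycle model. The stage and instruction counts below are illustrative assumptions, not measurements from the paper.

```python
def pipeline_cycles(n_instructions, stages):
    """Cycles to complete n instructions on an ideal pipeline:
    `stages` cycles to fill, then one instruction retires per cycle."""
    return stages + n_instructions - 1

# Hypothetical per-pixel operation: 8 standard MIPS instructions on the
# 5-stage pipeline vs. 1 customized IP instruction on a 6-stage pipeline
std_cycles = pipeline_cycles(8, stages=5)      # standard instruction stream
custom_cycles = pipeline_cycles(1, stages=6)   # single customized instruction
speedup = std_cycles / custom_cycles
```

Over a long stream of pixels the fill cost amortizes away, and the speedup approaches the ratio of instruction counts, matching the paper's stated scaling.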
Computer
Zainab Hashim; Hanaa Mohsin; Ahmed Alkhayyat
Abstract
Handwritten signature identification is a process that determines an individual's true identity by analyzing their signature. This is an important task in various applications such as financial transactions, legal document verification, and biometric systems. Various techniques have been developed for signature identification, including feature-based methods and machine learning-based methods. However, verifying handwritten signatures in digital transactions and remote document authentication is still challenging. The inherent variety in people's signatures, which may occur due to factors such as mood, exhaustion, or even the writing tool used, contributes to the problem. Furthermore, the proliferation of sophisticated forgery methods, such as freehand mimicking and sophisticated picture manipulation, necessitates the development of reliable and precise tools for distinguishing authentic signatures from fake ones. The present paper suggests a method for identifying signatures based on integrating static (off-line) handwritten signature data. This is done by fusing three types of signature features: Linear Discriminant Analysis (LDA) as appearance features, Fast Fourier Transform (FFT) as frequency features, and Gray-Level Co-occurrence Matrix (GLCM) as texture features. These fused features are then fed into four types of machine learning algorithms, Naive Bayes, K-Nearest Neighbor, Decision Tree, and AdaBoost classifiers, to identify each person and to find the most robust algorithm in terms of accuracy, precision, and recall. For the experiments, we used two well-known datasets: SigComp2011 and CEDAR. After training, the highest accuracy achieved was 100% on the CEDAR dataset and 94.43% on the SigComp2011 dataset using a Naive Bayes classifier.
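The fusion step concatenates heterogeneous feature vectors into one descriptor before classification. The numpy-only sketch below fuses FFT-magnitude features with a GLCM contrast statistic for a random stand-in "signature" image; the LDA appearance features and the actual classifiers are omitted, and all parameter choices (feature count, gray levels, image size) are assumptions for illustration.

```python
import numpy as np

def fft_features(img, k=4):
    """Top-k magnitudes of the 2-D FFT, used as frequency features."""
    mag = np.abs(np.fft.fft2(img)).ravel()
    return np.sort(mag)[-k:]

def glcm_contrast(img, levels=4):
    """Contrast statistic of a horizontal gray-level co-occurrence matrix."""
    q = (img * (levels - 1)).astype(int)        # quantize to `levels` grays
    glcm = np.zeros((levels, levels))
    for i, j in zip(q[:, :-1].ravel(), q[:, 1:].ravel()):
        glcm[i, j] += 1                         # count horizontal neighbours
    glcm /= glcm.sum()
    idx = np.arange(levels)
    return ((idx[:, None] - idx[None, :]) ** 2 * glcm).sum()

def fused_features(img):
    """Concatenate frequency and texture features into one vector."""
    return np.concatenate([fft_features(img), [glcm_contrast(img)]])

rng = np.random.default_rng(0)
img = rng.random((8, 8))                        # stand-in for a signature
vec = fused_features(img)                       # 4 FFT + 1 GLCM feature
```

The fused vectors, one per signature image, would then form the training matrix handed to the classifiers.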
Computer
Afrah Salman Dawood
Abstract
Recently, the burgeoning disciplines of Machine Learning (ML) and Deep Learning (DL) have experienced considerable integration across diverse scientific domains. Of significant note is their integration into the medical sector, specifically in the intricate methodologies of pathological categorization. Present-day innovations underscore the pivotal role of Deep Convolutional Neural Networks (DCNN) in mediating the tasks of image-based taxonomies and prognostications within this domain. In this research, a new DCNN with different modified intelligent architectures, including CNN, modified VGG-16, VGG-19, ResNet50, and DenseNet121, together with a newly added classification layer, was implemented and tested for the detection and classification of Alzheimer's disease. The evaluation and performance metrics are accuracy, loss, F1-score, precision, and recall. Experiments were conducted on a Kaggle-based dataset, and the test results show that the CNN-based model is the most accurate, with the highest accuracy of 96% and the lowest loss of 9.92%. Finally, the average performance of the overall proposed model is as follows: accuracy 91%, loss 19.75%, precision 89.4%, F1-score 88.83%, and recall 90%.
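The evaluation metrics quoted above all derive from confusion-matrix counts. The short sketch below shows the standard definitions; the counts are invented for illustration and do not correspond to the paper's experiments.

```python
def classification_metrics(tp, fp, fn, tn):
    """Accuracy, precision, recall and F1 from confusion-matrix counts
    (true/false positives, false/true negatives)."""
    accuracy = (tp + tn) / (tp + fp + fn + tn)
    precision = tp / (tp + fp)          # of predicted positives, how many real
    recall = tp / (tp + fn)             # of real positives, how many found
    f1 = 2 * precision * recall / (precision + recall)
    return accuracy, precision, recall, f1

# Hypothetical counts for a binary Alzheimer's / healthy split
acc, prec, rec, f1 = classification_metrics(tp=90, fp=10, fn=10, tn=90)
```

For the multi-class case reported in the paper, these per-class values are typically averaged (macro or weighted) across classes.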
Communication
Mohammad Abd Abbas; Bilal Ghazal Ghazal; Ahmad Ghandour Ghandour
Abstract
Online payment methods for e-commerce and many websites in various fields have increased significantly. Credit cards are therefore easy targets, and the rate of credit card fraud is on the rise, which poses a major problem for online payments. The basic concept is to examine consumers' purchasing histories to extrapolate their typical behavior patterns, classify cardholders into different groups, and then attempt to detect credit card fraud. The proposed machine learning model for credit card fraud detection uses a combination of supervised and unsupervised learning techniques such as Random Forest, Decision Tree, Logistic Regression, and Extreme Gradient Boosting. We used the Synthetic Minority Oversampling Technique (SMOTE) to balance the dataset. The model is trained on a large set of credit card transaction data, collected as Creditcard.csv, and uses features such as transaction amount, transaction location, and time of day to identify patterns and anomalies in the data that indicate fraudulent activity. Our goal is to build a machine learning model that detects and analyzes online shopping fraud, since detecting fraud in credit card systems is crucial to protecting consumers from financial losses and maintaining the integrity of the financial ecosystem. Among the algorithms tested, the Random Forest (RF) algorithm reached 99% accuracy, Logistic Regression (LR) reached 97%, and Decision Tree (DT) reached 99%; the models were also evaluated using precision, recall, and F1-score, providing a comprehensive method for identifying fraud in credit card transactions. The proposed system includes four main steps: data collection, pre-processing, classification using the algorithms, and checking whether a transaction is fraudulent.
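SMOTE balances a dataset by synthesizing new minority-class samples along line segments between existing minority points and their nearest minority neighbours. The numpy sketch below implements that core interpolation step; a real pipeline would typically use `imblearn.over_sampling.SMOTE`, and the toy fraud points here are made up for illustration.

```python
import numpy as np

def smote_oversample(minority, n_new, k=3, seed=0):
    """Minimal SMOTE sketch: each synthetic sample is interpolated between
    a random minority point and one of its k nearest minority neighbours."""
    rng = np.random.default_rng(seed)
    X = np.asarray(minority, dtype=float)
    synthetic = []
    for _ in range(n_new):
        i = rng.integers(len(X))
        dist = np.linalg.norm(X - X[i], axis=1)
        neighbours = np.argsort(dist)[1:k + 1]   # skip the point itself
        j = rng.choice(neighbours)
        gap = rng.random()                       # position along the segment
        synthetic.append(X[i] + gap * (X[j] - X[i]))
    return np.array(synthetic)

# Toy 2-feature minority (fraud) class, doubled by oversampling
fraud = [[0.0, 1.0], [0.2, 0.9], [0.1, 1.1], [0.3, 1.0]]
new_samples = smote_oversample(fraud, n_new=4)
```

The synthetic rows are appended to the training set before fitting the classifiers, so the fraud class no longer drowns in the legitimate-transaction majority.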