
Journal Issue Information

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 3-20
Measures: 
  • Citations: 0
  • Views: 36
  • Downloads: 24
Abstract: 

With the significant growth of social media, individuals and organizations increasingly use the public opinion expressed in these media to make their own decisions. The purpose of sentiment analysis is to automatically extract people's emotions from these social networks. Social networks related to financial markets, including stock markets, have recently attracted the attention of many individuals and organizations. People on these social networks share their opinions and ideas about each share in the form of a post or tweet; sentiment analysis in this area, in effect, measures people's attitudes toward each share. One of the basic approaches to automatic sentiment analysis is the lexicon-based method. Most conventional lexicons are extracted manually, which is a difficult and costly process. In this article, a new method for automatically extracting a lexicon in the domain of stock social networks is proposed. A special feature of these networks is the availability of price information for each share. Taking into account the price information of a share on the day a tweet about it is posted, we extracted a lexicon to improve the quality of opinion mining in these social networks. To evaluate the lexicon produced by the proposed method, we compared it with the Persian version of the SentiStrength lexicon, which is designed for general-purpose use. Experimental results show a 20% improvement in accuracy compared to the general-purpose lexicon.
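To make the idea concrete, here is a minimal sketch of price-informed lexicon extraction, assuming each training tweet comes paired with the share's same-day price return; the data layout, the minimum-frequency cutoff, and the sign-based labeling rule are illustrative stand-ins, not the paper's exact procedure.

```python
from collections import defaultdict

def build_lexicon(tweets, min_count=5):
    """tweets: iterable of (token_list, daily_return) pairs."""
    score, count = defaultdict(float), defaultdict(int)
    for tokens, ret in tweets:
        label = 1.0 if ret > 0 else -1.0      # price move as a noisy sentiment label
        for tok in set(tokens):
            score[tok] += label
            count[tok] += 1
    # Average polarity per word; rare words are dropped as unreliable.
    return {w: score[w] / count[w] for w in score if count[w] >= min_count}

def classify(tokens, lexicon):
    s = sum(lexicon.get(t, 0.0) for t in tokens)
    return "positive" if s > 0 else "negative" if s < 0 else "neutral"
```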

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 21-38
Measures: 
  • Citations: 0
  • Views: 60
  • Downloads: 13
Abstract: 

With the growth of the urban population, the number of vehicles has also increased significantly. This increase has created challenges for safety, traffic, and comfort services. Congestion in urban areas is one of the main consequences of the growing number of vehicles, and it has also led to environmental challenges. To address these challenges, Intelligent Transportation Systems (ITS) have been developed. One of the key technologies in ITS is vehicular networks. These networks increase the efficiency of transportation systems in urban and highway areas by providing a wide range of services. However, they face challenges such as network partitioning in low-density areas and a lack of network capacity in dense areas, which prevent proper service provision and reduce the efficiency of vehicular networks and transportation systems in urban and highway scenarios. To overcome these problems, roadside units (RSUs) are deployed in urban environments, but the high cost of installing and maintaining RSUs prevents their widespread deployment. It is therefore necessary to install a minimum number of these units at suitable and necessary locations. The presence of vehicles parked in urban areas at predetermined places makes it possible to use them as RSUs, so the location of parking lots should be taken into account when placing roadside units in the urban environment. In this paper, a binary integer programming (BIP) model for RSU installation is developed that provides the minimum required coverage by treating parked vehicles as RSUs in the urban area. In this model, vehicles parked in parking lots are used as RSUs to increase the coverage and capacity of the network. Moreover, constraints are added to the model to achieve the minimum required coverage and to minimize multiple co-coverage, reducing installation costs. Finally, the proposed solution is evaluated using OMNeT++, SUMO, and Veins. To validate the proposed model, the evaluation was repeated on two different maps with different numbers of RSUs and different traffic scenarios, and packet loss rate and service delay were measured as performance parameters. The simulation results show improvements in service delay on the two maps of 39% and 43%, and in packet loss rate of 47% and 49%, compared to other related work.
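The coverage idea behind the BIP model can be illustrated with a toy exact solver: pick the fewest candidate sites (parking lots included among the candidates) so that every demand point is covered, breaking ties by the least multiple co-coverage. The coverage sets and the tie-breaking rule here are invented for illustration; the paper's actual model carries additional constraints and is solved as a proper BIP.

```python
from itertools import combinations

def place_rsus(cover, n_points):
    """cover[i] = set of demand points covered by candidate site i."""
    best = None
    for k in range(1, len(cover) + 1):            # fewest sites first
        for subset in combinations(range(len(cover)), k):
            covered = set().union(*(cover[i] for i in subset))
            if len(covered) < n_points:
                continue                          # minimum-coverage constraint violated
            overlap = sum(len(cover[i]) for i in subset) - n_points
            if best is None or overlap < best[1]:
                best = (subset, overlap)          # least co-coverage wins the tie
        if best is not None:
            return best
    return None

# Example: 4 candidate sites, 5 demand points.
print(place_rsus([{0, 1}, {1, 2, 3}, {3, 4}, {0, 2, 4}], 5))   # ((1, 3), 1)
```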

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 39-58
Measures: 
  • Citations: 0
  • Views: 50
  • Downloads: 10
Abstract: 

The Latent Dirichlet Allocation (LDA) model is a generative model with several applications in natural language processing, text mining, dimension reduction, and bioinformatics. It is a powerful technique for topic modeling in text mining, a data mining method for categorizing documents by topic. Basic methods for topic modeling, including TF-IDF, the unigram model, and the mixture of unigrams, have been successfully deployed in modern search engines. Although these methods have useful benefits, they do not provide much summarization or reduction. To overcome these shortcomings, latent semantic analysis (LSA) was proposed, which uses the singular value decomposition (SVD) of the word-document matrix to compress a big collection of text corpora. A user's search keywords can be queried by forming a pseudo-document vector. The next improvement in topic modeling was probabilistic latent semantic analysis (PLSA), which is closely related to LSA and matrix decomposition with SVD. By introducing exchangeability of the words in documents, topic modeling proceeded beyond PLSA and led to the LDA model. We consider a corpus containing M documents, where document d has N_d words and each word is an indicator of one of V vocabulary terms. We define a generative model for each document as follows: draw the document's topic proportions θ_d from a Dirichlet(α) distribution; then, for each word position, draw that word's topic z from Multinomial(θ_d) and draw the word w itself from the topic-word probability matrix β, with probability β_{z,w}. Repeating this procedure generates the whole corpus. We want to estimate the corpus-level parameters α and β as well as the latent variables θ_d and z for each document. Unfortunately, the posterior is intractable, and we have to choose an approximation scheme. In this paper we utilize LDA for collections of discrete text corpora and describe procedures for inference and parameter estimation. Since computing the posterior distribution of the hidden variables given a document is intractable in general, we use an approximate inference algorithm called the variational Bayes method. The basic idea of variational Bayes is to consider a family of adjustable lower bounds on the posterior and then find the tightest possible one. To estimate the optimal hyper-parameters in the model, we used the empirical Bayes method, together with a specialized expectation-maximization (EM) algorithm called the variational EM algorithm. Results are reported for document modeling, text classification, and collaborative filtering. The topic modeling of the LDA and PLSA models is compared on a Persian news data set, where LDA attains lower perplexity than PLSA, showing the dominance of LDA over PLSA. The LDA model was also applied for dimension reduction in a document classification problem, together with the support vector machine (SVM) classification method. Two competitor models are compared, the first trained on the low-dimensional representation provided by LDA and the second trained on all documents of the corpus; some accuracy is lost, but it remains in a reasonable range when the LDA model is used for dimensionality reduction. Finally, we used the LDA and PLSA methods with collaborative filtering on the MovieLens 1M data set and observed that the predictive perplexity of LDA remains lower than that of PLSA, again showing the dominance of the LDA method.
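As a quick illustration of the pipeline described above, the sketch below runs LDA with variational inference via scikit-learn's implementation; the toy corpus and the choice of two topics are placeholders, and this is not the paper's own code.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

docs = ["stock prices rise on strong earnings",
        "team wins the league final",
        "earnings beat forecasts as shares rally",
        "coach praises players after the match"]

X = CountVectorizer().fit_transform(docs)            # word-document count matrix
lda = LatentDirichletAllocation(n_components=2,      # number of topics K
                                learning_method="batch",  # variational EM
                                random_state=0).fit(X)
theta = lda.transform(X)     # per-document topic proportions (approximate posterior)
print(theta.round(2))
print("perplexity:", lda.perplexity(X))
```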

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 59-68
Measures: 
  • Citations: 0
  • Views: 28
  • Downloads: 12
Abstract: 

The acquisition of reliable flow velocity and streamflow estimates is vital in aquatic studies. Acoustic tomography is a branch of remote sensing that was innovatively developed for continuous monitoring of surface water currents in oceans and seas and, in recent years, in rivers; it is a promising method for measuring flow characteristics such as velocity and discharge continuously and with high accuracy. The output of this system is affected by unknown factors, and after the initial processing of the raw data, spikes appear in the data. The developers of this system have stated that one source of spurious data can be a complex salinity distribution in estuarine regions; failure to identify these outliers causes measurement errors and increases the error of data mining and time series forecasting algorithms. In previous studies, spikes were removed using the standard deviation method without any replacement. In this study, phase-space thresholding (PST), originally developed for despiking Acoustic Doppler Velocimeter (ADV) output, is proposed to detect and remove the spikes. This method combines three concepts: 1) differentiation enhances the high-frequency portion of a signal, 2) the expected maximum of a normal random series is given by the universal threshold, and 3) good data cluster in a dense cloud in phase space. These concepts are used to construct an ellipsoid in a three-dimensional phase space; points lying outside the threshold ellipsoid are designated as spikes. An important advantage of this method over various others is that it requires no parameters. Another advantage over the standard deviation method is the replacement of detected spikes with a reliable value: for replacement, we used the mean of the two data points adjacent to the detected spike. After 6 iterations of the PST method on the input dataset, a total of 8,017 data points, 32% of the 25,031 points, were identified as spikes and replaced with corrected values. Moreover, the standard deviation before despiking was 0.206 and improved to 0.119 after applying the PST method, showing that the dispersion of the data around the signal mean is reduced by the despiking process. The results show that the PST method has higher accuracy than the standard deviation approach. Finally, the relative discharge error between the output of the PST method and the rating curve data (as a reference) is almost always less than 20%, while this value exceeds 50% for the standard deviation method.
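A compact sketch of the PST idea, in the spirit of the ADV despiking method it borrows from, is given below. For brevity the ellipsoid rotation step is omitted and the three semi-axes are set directly from the universal threshold; the iteration count and neighbour-mean replacement follow the description above.

```python
import numpy as np

def pst_despike(u, max_iter=6):
    u = np.asarray(u, dtype=float).copy()
    n = len(u)
    lam = np.sqrt(2 * np.log(n))              # universal threshold
    for _ in range(max_iter):
        du = np.gradient(u)                   # first derivative
        d2u = np.gradient(du)                 # second derivative
        x = u - u.mean()
        a, b, c = lam * x.std(), lam * du.std(), lam * d2u.std()
        outside = (x / a) ** 2 + (du / b) ** 2 + (d2u / c) ** 2 > 1.0
        if not outside.any():
            break
        for i in np.where(outside)[0]:        # replace spike with neighbour mean
            u[i] = 0.5 * (u[max(i - 1, 0)] + u[min(i + 1, n - 1)])
    return u
```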

Author(s): MOTAMED SARA | ASKARI ELHAM

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 69-79
Measures: 
  • Citations: 0
  • Views: 62
  • Downloads: 23
Abstract: 

Since human actions in videos are 3D signals and the videos are long, searching for a specific action is difficult. A suitable technique for live security video is therefore required to detect armed thieves in progress and reduce the occurrence of crime and theft. The contribution of this paper is a rapid and efficient method for detecting guns in frames extracted from videos without deleting the main points. The object recognition pipeline is as follows: to extract frames from the videos, a separation algorithm is applied at a specified frame rate and all images are placed in a folder. The video samples are then divided into three categories, training, validation, and testing, and Haar Cascade (HC) classification is used to extract whole-body frames and remove the remaining background from the images. This method was chosen because HC classification is resistant to image rotation and has shown good performance without complex calculations, so we use it for whole-body detection in our proposed model. This is done by detecting the Region of Interest (ROI) by cutting the selected areas, followed by background subtraction to eliminate unwanted backgrounds. All selected and extracted key points are stored in a folder. Finally, all images are sent to 3D Convolutional Neural Networks (3D CNNs) to detect weapons in the images. To evaluate the performance of the system in terms of accuracy, the true positive rate, false positive rate, positive predictive value, and false detection rate are used. As the test results show, the highest gun detection rate belongs to the 3D CNN model, with a detection rate of 96.1%, followed by YOLO v3 with a detection rate of 95.6%.
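The frame-sampling and Haar-cascade stage could look roughly like the sketch below, which uses OpenCV's stock full-body cascade as a stand-in for the paper's classifier; the file name and sampling rate are hypothetical, and the 3D-CNN weapon-detection stage is not shown.

```python
import cv2

cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_fullbody.xml")

cap = cv2.VideoCapture("surveillance.mp4")     # hypothetical input video
rois, frame_no = [], 0
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_no % 10 == 0:                     # sample frames at a fixed rate
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for (x, y, w, h) in cascade.detectMultiScale(gray, 1.1, 3):
            rois.append(frame[y:y + h, x:x + w])   # ROI crop; background discarded
    frame_no += 1
cap.release()
# `rois` would next be resized and stacked into clips for the 3D CNN.
```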

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 81-98
Measures: 
  • Citations: 0
  • Views: 48
  • Downloads: 6
Abstract: 

With the advancement of computer science, dramatic developments in data mining, and their increasing applications, the identification of outlier or anomalous data has become one of the most important research topics. In most applications, outlier data contain beneficial information that can be used to gain useful knowledge. Today, there are a large number of applications on data streams, in the vast majority of which the discovery of outlier/anomalous data is very important and in some cases vital. Anomaly detection is an important means of detecting fraud, network intrusions, abnormal behavior in monitoring systems, and other rare events that are of great importance but often difficult to identify. Most existing efficient outlier detection algorithms were designed for static data, whereas outlier detection is more challenging in data streams, where data are generated continuously and have special properties such as unboundedness and transience. In this research, we introduce an approach based on the QLattice classification model, which works on quantum computing principles and performs better in the intended application than other classification methods. Given that the distribution of streaming data can change over time, the proposed method also employs online incremental learning. Considering the unbounded data flow and limited processing memory, the detection process is applied to a window of data that is continually updated with samples drawn from previous windows. A function is also designed to address data imbalance, using a random sampling technique. Experimental results on benchmark datasets show that the proposed approach outperforms other methods.
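A schematic of the windowed, incrementally retrained detector follows. The paper uses a QLattice model (Abzu's feyn library); here a generic scikit-learn online classifier stands in for it, and the carry-over size and oversampling rule are invented for illustration.

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

KEEP = 200                                  # samples carried into the next window
clf = SGDClassifier(loss="log_loss")

def rebalance(X, y, rng):
    """Randomly oversample the minority (outlier) class."""
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    if len(pos) == 0 or len(pos) >= len(neg):
        return X, y
    extra = rng.choice(pos, size=len(neg) - len(pos), replace=True)
    idx = np.concatenate([np.arange(len(y)), extra])
    return X[idx], y[idx]

def process_stream(windows, rng=np.random.default_rng(0)):
    carry_X, carry_y = None, None
    for X, y in windows:                    # each batch is one window of the stream
        if carry_X is not None:             # mix in samples from previous windows
            X, y = np.vstack([carry_X, X]), np.concatenate([carry_y, y])
        Xb, yb = rebalance(X, y, rng)
        clf.partial_fit(Xb, yb, classes=[0, 1])   # online incremental update
        keep = rng.choice(len(y), size=min(KEEP, len(y)), replace=False)
        carry_X, carry_y = X[keep], y[keep]
```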

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 99-112
Measures: 
  • Citations: 0
  • Views: 146
  • Downloads: 38
Abstract: 

One of the biggest problems facing humanity is energy supply, owing to shrinking resources and rising costs. The largest share of energy consumption in the world belongs to the building sector. The main sources of energy are coal, natural gas, and oil, all of which are non-renewable and will be depleted in the near future. Major energy consumers include the household, industrial, agricultural, public, commercial, and street-lighting sectors. Among these, the share of the domestic and office sectors is higher than that of other consumers, so reducing energy consumption and energy losses in the building sector is an unavoidable necessity. In this paper, a new building energy management method is proposed that controls the energy consumption of buildings with the help of Internet of Things (IoT) networks. An office building with six zones is considered. The proposed method consists of two phases. The first phase is the prediction stage, performed with an artificial neural network: six parameters (the outside temperature of the building, the set-point temperature, solar radiation, occupancy, the previous temperature, and the hour of the day) are given as inputs to a perceptron neural network, whose outputs are the inside temperature of the building and the energy consumption; these are passed as inputs to the next phase. The second phase uses the grey wolf algorithm to determine the optimal temperature for each part of the building at each hour of the day. The energy consumption and cost of the building are calculated in MATLAB, resulting in a significant reduction in energy consumption and optimized energy cost for the office. The proposed method shows a reduction in energy consumption of 22 kWh in the early morning hours.
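The second-phase optimizer can be sketched as a bare-bones grey wolf optimizer; the quadratic objective below is only a placeholder for the network-predicted energy cost of candidate set-point temperatures, and all parameter values are illustrative.

```python
import numpy as np

def gwo(fitness, dim, lb, ub, n_wolves=20, n_iter=100, seed=0):
    rng = np.random.default_rng(seed)
    X = rng.uniform(lb, ub, (n_wolves, dim))
    for t in range(n_iter):
        f = np.apply_along_axis(fitness, 1, X)
        alpha, beta, delta = X[np.argsort(f)[:3]]   # three best wolves lead the pack
        a = 2 - 2 * t / n_iter                      # coefficient decreasing from 2 to 0
        for i in range(n_wolves):
            Xnew = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                Xnew += (leader - A * np.abs(C * leader - X[i])) / 3.0
            X[i] = np.clip(Xnew, lb, ub)
    f = np.apply_along_axis(fitness, 1, X)
    return X[np.argmin(f)]

# Example: one set point per zone of a six-zone building, toy cost function.
best = gwo(lambda x: np.sum((x - 22.5) ** 2), dim=6, lb=18, ub=28)
```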

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 113-144
Measures: 
  • Citations: 0
  • Views: 46
  • Downloads: 9
Abstract: 

Nowadays, the most commonly used methods in various machine learning and artificial intelligence tasks are neural networks. Despite their wide use, neural networks and deep neural networks (DNNs) have vulnerabilities. A small distortion or adversarial perturbation of the input data, whether additive or non-additive, can change the output of the trained model; this is one kind of DNN vulnerability. Although such perturbations are imperceptible to humans, DNNs are vulnerable to them. Creating and applying a malicious perturbation, called an "attack", penetrates DNNs and makes them incapable of performing their assigned task. In this paper, attack approaches are categorized by the signal used in the attack procedure. Some approaches use the gradient signal to detect the vulnerability of a DNN and craft a powerful attack; others perturb blindly, changing a portion of the input to create a potentially malicious perturbation. Adversarial attacks include both black-box and white-box settings: the white-box setting exploits the training loss function and the architecture of the model, while the black-box setting relies on approximating the target model and dealing with the restriction of input-output queries. Making a deep neural network resilient against attacks is called "defense". Defense approaches fall into three categories: one modifies the input; another makes changes to the model itself, including its loss function; and the third places networks in front of the main network to purify and refine the input before passing it on. Furthermore, an analytical approach is presented for the entanglement and disentanglement representation of the trained model's inputs. The gradient is a very powerful signal used both in learning and in attacking, and adversarial training is a well-known loss-function-based defense against adversarial attacks. This study presents a critical literature review of the most recent research on DNN vulnerability. The literature and our experiments indicate that the projected gradient descent (PGD) and AutoAttack methods are the most successful approaches for l2- and l∞-bounded attacks, respectively. Our experiments also indicate that AutoAttack is much more time-consuming than the other methods. On the defense side, different experiments were conducted to compare attacks within adversarial training. Our results indicate that PGD is much more efficient in adversarial training than the fast gradient sign method (FGSM) and its derivatives such as MI-FGSM, and yields wider generalization of the trained model on the predefined datasets. Furthermore, integrating AutoAttack with adversarial training works well but is not efficient at low epoch counts. Adversarial training has also proven time-consuming. We have released our code for researchers interested in extending or evaluating the predefined models for standard and adversarial machine learning projects; a more detailed description of the framework can be found at https://github.com/khalooei/Robustness-framework.
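As one concrete instance of the gradient-based attacks discussed above, here is a standard l∞-bounded PGD attack in PyTorch; this follows the textbook formulation, not the cited repository's code, and the eps/alpha/step values are illustrative.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    # Random start inside the eps-ball, clipped to the valid image range.
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        with torch.no_grad():
            x_adv = x_adv + alpha * grad.sign()               # gradient ascent step
            x_adv = x.detach() + (x_adv - x).clamp(-eps, eps) # project onto the eps-ball
            x_adv = x_adv.clamp(0, 1)                         # stay a valid image
    return x_adv.detach()
```

In adversarial training, each minibatch would be replaced by `pgd_attack(model, x, y)` before the optimizer step, which is the loss-function defense compared in the experiments.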

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 145-162
Measures: 
  • Citations: 0
  • Views: 30
  • Downloads: 5
Abstract: 

Nowadays, financial markets are becoming more competitive, and banks face many challenges in attracting more deposits from depositors and increasing their fee income. Many banks use performance-based incentive plans to encourage their employees to achieve short-term goals. Fairness in the payment of bonuses is one of the important challenges for banks, because neglecting it can destroy employee motivation and prevent the bank from achieving its short-term and mid-term goals. This article tackles the problem of optimizing the coefficients of branch performance evaluation indicators based on the branches' business environment in one of the state banks of Iran, and proposes a two-objective genetic algorithm to solve it. The article comprises four main sections. The first section is dedicated to the problem definition: what we mean by optimizing the importance coefficients of branches based on the business environment. The second section describes our proposed solution. In the third section, we compare the performance of the proposed two-objective genetic algorithm on the defined problem with four well-known multi-objective algorithms: NSGA-II, SPEA-II, PESA-II, and MOEA/D. Finally, the ZDT set, a standard suite of multi-objective problems, is used to evaluate the general performance of the proposed algorithm against the same four algorithms. Our proposed solution includes two main steps: first, identifying the business environment of the branches, and second, optimizing the coefficients with the proposed two-objective genetic algorithm. In the first step, the k-means clustering algorithm is applied to cluster branches with similar business environments. In the second step, the fitness functions must be specified. The defined problem is two-objective: the first objective is to minimize the deviation of the branches' real performance from their expected performance, and the second is to minimize the deviation of the coefficients from those determined by the experts. To solve this two-objective problem, a two-objective genetic algorithm is proposed. Two approaches were adopted to assess the proposed solution. In the first stage, the results of the proposed two-objective genetic algorithm were compared with those of the four well-known multi-objective genetic algorithms on the coefficient optimization problem. This comparison shows that the proposed algorithm outperformed the other methods on the S indicator and run time, and ranked second after NSGA-II in terms of the HV indicator. Finally, the ZDT problems ZDT1, ZDT2, ZDT3, ZDT4, and ZDT6 were considered, and the performance of the proposed algorithm was compared with the four algorithms above on four key indicators: GD, S, H, and run time. The results show that the proposed algorithm is significantly faster on all five ZDT problems. In terms of the GD indicator, it ranks first or second among all the algorithms considered, and in terms of the S and H indicators it outperforms the other well-known algorithms in many cases.
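Under assumed data shapes, the two fitness functions and the dominance test that drive such a two-objective search might look like the sketch below; `real` and `expected` are per-branch indicator matrices and `expert_w` the expert-assigned coefficients, all hypothetical names.

```python
import numpy as np

def objectives(w, real, expected, expert_w):
    """real, expected: (n_branches, n_indicators); w, expert_w: (n_indicators,)."""
    f1 = np.abs(real @ w - expected @ w).sum()   # deviation of real from expected performance
    f2 = np.abs(w - expert_w).sum()              # deviation from expert coefficients
    return f1, f2

def dominates(p, q):
    """Pareto dominance test used when ranking a two-objective population."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))
```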

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 163-174
Measures: 
  • Citations: 0
  • Views: 48
  • Downloads: 10
Abstract: 

Hyperspectral image classification is a popular topic in the field of remote sensing. Hyperspectral images (HSI) are rich in both spectral and spatial information, yet traditional HSI classification methods typically use only the spectral features and do not make full use of the spatial or other features. In general, classification approaches use the spectral information of the data to produce a classification map that discriminates the classes of interest. Pixel-wise approaches classify each pixel independently, without considering spatial structure; classification results can be further enhanced by considering the spatial dependences between pixels. However, fusing and utilizing spectral-spatial features efficiently is a challenging task, so the combination of spectral and spatial information has become an effective means of obtaining good classification results. Specifically, first, the principal component analysis (PCA) algorithm is used to extract the first principal component of the original hyperspectral image. Second, Gabor, GLCM, and morphological profile (MP) features are extracted for each band to capture the spatial information of the image. Third, the image is classified using an SVM to obtain the final classification result. In this paper, we use a neural network classifier for hyperspectral image classification, integrating spectral and spatial properties in two ways: the stacking method and a method based on binary graphs. Unlike the traditional stacking method, the local binary graph method properly integrates spectral and spatial information and is a desirable approach for using spectral information together with spatial information (feature fusion) in hyperspectral image classification. In each of these methods, the neural network classifier is applied to the spectral and spatial features and then compared with a support vector machine classifier under similar conditions. The classification results show that the proposed method outperforms other traditional classification techniques.
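A condensed sketch of the stacked spectral-spatial pipeline is shown below; a mean filter on the first principal components stands in for the Gabor/GLCM/MP features, and the SVM variant is used for brevity, so this is an assumption-laden outline rather than the paper's exact pipeline.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.decomposition import PCA
from sklearn.svm import SVC

def classify_hsi(cube, labels, n_pc=1, win=5):
    """cube: (H, W, B) hyperspectral image; labels: (H, W) ints, 0 = unlabeled."""
    H, W, B = cube.shape
    pc = PCA(n_components=n_pc).fit_transform(cube.reshape(-1, B))
    spatial = uniform_filter(pc.reshape(H, W, n_pc), size=(win, win, 1))
    feats = np.concatenate([cube, spatial], axis=-1).reshape(-1, B + n_pc)
    mask = labels.reshape(-1) > 0                # train only on labeled pixels
    clf = SVC(kernel="rbf").fit(feats[mask], labels.reshape(-1)[mask])
    return clf.predict(feats).reshape(H, W)      # full classification map
```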

Author(s): Partabaian Jaafar

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 175-194
Measures: 
  • Citations: 0
  • Views: 22
  • Downloads: 3
Abstract: 

Model checking is among the most effective techniques for automatic verification of the properties of hardware and software systems. Generally, in this method, a model of the system under study is generated and all possible states in the state space graph are explored to find errors and undesirable patterns. In models of large and complex systems, when the generated state space grows so large that not all states can be explored within computational limits, the state space explosion problem occurs; this problem constrains the validation process in model checking systems. To use the model checking technique, the system must be described in a formal language. Graphs are very useful and intuitive tools for describing and modeling software systems, and graph transformation systems provide a proper tool for the formal description of software system features as well as their automatic verification. Various techniques have been investigated to reduce the effect of the state space explosion problem in model checking. Some of these methods try to reduce the required memory by reducing the number of states explored; among them are symbolic model checking, partial-order reduction, symmetry reduction, and scenario-driven model checking. In a complex system, these algorithms, along with conventional search methods such as DFS or BFS, may not produce a complete answer because of state space explosion. Hence, intelligent methods such as knowledge-based techniques, data mining, machine learning, and meta-heuristic algorithms, which do not require full state space exploration, can be advantageous; recent research attests that exploring the state space with intelligent methods is a promising idea. Therefore, an intelligent method is used in this research to explore the state space of large and complex systems. Accordingly, in this paper, a model of the system is first created using a graph transformation system, and a portion of the model's state space is generated. Then, using a conditional probability table, the dependencies between the rules on paths leading to a goal state are discovered. Finally, by means of the discovered dependencies, the rest of the model's state space is explored intelligently: only promising paths, i.e., those that match the detected dependencies, are explored to reach the goal state. The first aim of the proposed approach is to find a goal state, i.e., one in which either the safety property is violated or the reachability property is satisfied, in the shortest possible time; the second, less important aim is to reduce the number of states explored in the state space graph before the goal state is reached. This paper thus provides a way to check the reachability property in large and complex software systems modeled in the formal graph transformation language. The suggested method is implemented in GROOVE, an open-source toolset for designing and model checking graph transformation systems. Experimental results indicate that the proposed approach is faster than previous methods and produces a shorter counterexample/witness.
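A toy version of the dependency-guided exploration might look as follows: count how often rule b follows rule a on goal-reaching paths in the partial state space, then expand the most promising rules first. The state and rule encodings are stand-ins for the GROOVE graph-transformation model, supplied here as callbacks.

```python
from collections import defaultdict

def learn_dependencies(goal_paths):
    """goal_paths: lists of applied-rule names, each ending in a goal state."""
    counts = defaultdict(lambda: defaultdict(int))
    for path in goal_paths:
        for a, b in zip(path, path[1:]):
            counts[a][b] += 1
    # Normalize rows into conditional probabilities P(b | a).
    return {a: {b: c / sum(nxt.values()) for b, c in nxt.items()}
            for a, nxt in counts.items()}

def guided_search(state, last_rule, table, apply_rule, enabled, is_goal, depth=50):
    if is_goal(state):
        return []
    if depth == 0:
        return None
    ranked = sorted(enabled(state),
                    key=lambda r: table.get(last_rule, {}).get(r, 0.0),
                    reverse=True)               # most promising rule first
    for r in ranked:
        rest = guided_search(apply_rule(state, r), r, table,
                             apply_rule, enabled, is_goal, depth - 1)
        if rest is not None:
            return [r] + rest                   # counterexample/witness path
    return None
```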

Issue Info: 
  • Year: 2023
  • Volume: 20
  • Issue: 2
  • Pages: 195-210
Measures: 
  • Citations: 0
  • Views: 50
  • Downloads: 17
Abstract: 

Nowadays, wireless sensor networks (WSNs) have found many applications in a variety of areas. The main purpose of these networks is to measure environmental phenomena and to send the readings over multi-hop paths to the sink, where they can be exploited by users. The most important challenge in WSNs is to minimize the energy consumed from sensor batteries and thereby increase network lifetime. One of the most important techniques for reducing energy consumption in WSNs is compressive sensing (CS). CS reduces network energy consumption by reducing the amount of data transmitted in the network, increasing the network lifetime. Using the CS technique in a WSN gives rise to different models of CS signals, based on spatial, temporal, and spatio-temporal sensor readings. On the other hand, overcoming the energy consumption challenge requires exact knowledge of where energy is spent in the network. The energy consumption of a sensor node can be divided into two parts: (a) the energy used for computing and (b) the energy used for communication. The computing energy consists of three components: 1) sensing energy (data reading), 2) background energy consumption, and 3) processing energy. The communication energy includes: 1) energy for data transmission, 2) energy for data reception, 3) energy for sending messages, and 4) energy for receiving messages. Hence, a model for analyzing energy consumption in a CS-based WSN is necessary. Several models have been developed to analyze energy consumption in WSNs, but no complete model exists for CS-based WSNs. In this paper, we study all the energy consumption components mentioned above in a CS-based WSN and present a complete model for energy consumption analysis; this model can guide the energy-efficient design of CS-based WSNs. To evaluate the proposed model, we use it to analyze energy consumption in the compressive data gathering technique, a CS-based data aggregation method.
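The kind of per-node accounting such a model formalizes can be sketched with a first-order radio model; the constants below are typical textbook values, not the paper's, and the final comparison only illustrates why compressive data gathering cuts the relay load from O(n) readings to O(m) measurements.

```python
E_ELEC = 50e-9    # J/bit, TX/RX electronics
E_AMP = 100e-12   # J/bit/m^2, TX amplifier
E_SENSE = 5e-9    # J/bit, sensing (data reading)
E_PROC = 5e-9     # J/bit, processing
E_BG = 1e-6       # J per round, background consumption

def node_energy(bits_tx, bits_rx, bits_sensed, d):
    """Total energy of one node in one round; d = distance to the next hop."""
    e_tx = bits_tx * (E_ELEC + E_AMP * d ** 2)       # communication: transmit
    e_rx = bits_rx * E_ELEC                          # communication: receive
    e_cpu = bits_sensed * (E_SENSE + E_PROC) + E_BG  # computing components
    return e_tx + e_rx + e_cpu

# A relay near the sink: forward n raw readings vs m CS measurements.
n, m, word = 100, 20, 32
plain = node_energy(n * word, (n - 1) * word, word, d=50)
cs = node_energy(m * word, m * word, word, d=50)
print(f"plain {plain:.2e} J vs CS {cs:.2e} J per round")
```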
