Scientific Information Database (SID) - Trusted Source for Research and Academic Resources

Journal Issue Information


Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    3-12
Measures: 
  • Citations: 

    0
  • Views: 

    155
  • Downloads: 

    0
Abstract: 

In recent years, mobile ad-hoc networks have been used widely due to advances in wireless technology. These networks can be formed in any environment where they are needed, without fixed infrastructure or centralized management. Mobile ad-hoc networks have characteristics and advantages such as wireless medium access, multi-hop routing, low-cost deployment, and dynamic topology. In these networks, nodes join temporarily and can move freely, and each node has a limited energy supply provided by its battery. Because of this limited energy, energy-efficient routing is one of the most important and challenging issues in these networks, so most researchers seek to provide energy-aware routing methods. Soft computing methods can help mobile ad-hoc networks operate more efficiently. One such method is intuitionistic fuzzy logic, which improves evaluation parameters such as throughput. In this paper, an intuitionistic fuzzy logic system is used to adjust the node willingness parameter in the AODV protocol. The decision about each mobile node's participation in routing is made by the intuitionistic fuzzy logic system based on the node's remaining energy and consumed energy. To evaluate the proposed protocol, entitled IFEE-AODV (Intuitionistic Fuzzy logic for Energy Efficient routing based on AODV), we simulated IFEE-AODV in MATLAB and compared the results with the AODV (Ad hoc On-demand Distance Vector), DFES-AODV (Dynamic Fuzzy Energy State based AODV), and SFES-AODV (Static Fuzzy Energy State based AODV) protocols. The results show that the proposed protocol performs better than the other protocols in terms of packet delivery ratio and network lifetime.
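The abstract does not give the exact membership functions of IFEE-AODV, but the willingness decision it describes can be sketched with illustrative intuitionistic fuzzy values (membership mu, non-membership nu, and a hesitation margin pi = 1 - mu - nu); all function names and shapes below are assumptions for illustration only.

```python
# Hypothetical sketch of a willingness decision from remaining and
# consumed energy using intuitionistic fuzzy values. The triangular-style
# membership shapes are illustrative assumptions, not the paper's exact ones.

def membership_high_energy(remaining, capacity):
    """Degree to which the node's remaining energy counts as 'high'."""
    return max(0.0, min(1.0, remaining / capacity))

def nonmembership_high_energy(remaining, capacity, hesitation=0.1):
    """Non-membership, leaving a small hesitation margin (pi)."""
    mu = membership_high_energy(remaining, capacity)
    return max(0.0, 1.0 - mu - hesitation)

def willingness(remaining, consumed, capacity):
    """Score in [0, 1]; a low-scoring node declines to forward packets."""
    mu = membership_high_energy(remaining, capacity)
    nu = nonmembership_high_energy(remaining, capacity)
    drain = min(1.0, consumed / capacity)      # penalize fast consumers
    return max(0.0, (mu - nu) * (1.0 - drain))

# A fresh node is far more willing to route than a nearly drained one.
fresh = willingness(remaining=90.0, consumed=10.0, capacity=100.0)
drained = willingness(remaining=15.0, consumed=85.0, capacity=100.0)
```

In AODV terms, such a score would gate whether a node responds to route requests; the exact thresholding is left open here.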

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    13-28
Measures: 
  • Citations: 

    0
  • Views: 

    124
  • Downloads: 

    0
Abstract: 

With the expansion of social networks, the use of recommender systems in these networks has attracted considerable attention. Recommender systems have become an important tool for alleviating the information overload problem of users by providing personalized recommendations of items a user might like, based on past preferences or observed behavior. In these systems, user behavior is dynamic, and preferences change over time for different reasons; the adaptability of recommender systems to capture these constantly evolving preferences is essential. Recent studies point out that modeling and capturing the dynamics of user preferences leads to significant improvements in recommendation accuracy. Despite the importance of this issue, only a few recently proposed approaches take the dynamic behavior of users into account when making recommendations. Most of these approaches are based on the matrix factorization scheme. However, most of them assume that preference dynamics are homogeneous for all users, whereas changes in user preferences may be individual and the pattern of change over time differs for each user. In addition, because the number of numerical ratings drops dramatically within any specific time period, the sparsity problem in these approaches is more intense. Exploiting social information, such as trust relations between users, alongside the users' rating data can help alleviate the sparsity problem. Although social information is also very sparse, especially within a time period, it is complementary to rating information. Some works use tensor factorization to capture user preference dynamics; despite their success, solving the tensor decomposition is hard and usually leads to very high computing costs in practice, especially when the tensor is large and sparse.
In this paper, considering that user preferences change individually over time, and based on the intuition that social influence can affect users' preferences in a recommender system, a social recommender system is proposed. In this system, the users' rating information and social trust information are jointly factorized based on a matrix factorization scheme, in which each user and each item is characterized by a set of features indicating the latent factors of users and items in the system. In addition, it is assumed that user preferences change smoothly, so that a user's preferences in the current time period depend on his/her preferences in the previous time period. Therefore, user dynamics are modeled into this framework by learning, for each individual user, a transition matrix of preferences between two consecutive time periods. The complexity analysis implies that this system can scale to large datasets with millions of users and items. Moreover, experimental results on a dataset from a popular product review website, Epinions, show that the proposed system performs better than competitive methods in terms of MAE and RMSE.
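The per-user transition idea can be sketched in a few lines of NumPy: a user's latent vector from the previous period is carried forward by an individual transition matrix before being dotted with an item vector. All names (`p_prev`, `B_u`, `q_i`) and the near-identity drift are illustrative assumptions, not the paper's learned parameters.

```python
import numpy as np

# Minimal sketch: each user u has latent factors p_prev from period t-1
# and an individual transition matrix B_u; items have factors q_i.
# A rating is predicted from the *transitioned* user factors.

rng = np.random.default_rng(0)
k = 4                                              # number of latent factors
p_prev = rng.normal(size=k)                        # user factors at period t-1
B_u = np.eye(k) + 0.05 * rng.normal(size=(k, k))   # smooth, near-identity drift
q_i = rng.normal(size=k)                           # item factors

p_now = B_u @ p_prev                               # user preferences at period t
r_hat = p_now @ q_i                                # predicted rating

# Smoothness assumption: the transition barely moves the preference vector.
drift = np.linalg.norm(p_now - p_prev) / np.linalg.norm(p_prev)
```

In training, `B_u`, `p_prev`, and `q_i` would be learned jointly from ratings and trust links; only the prediction step is shown here.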

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    29-50
Measures: 
  • Citations: 

    0
  • Views: 

    219
  • Downloads: 

    0
Abstract: 

A rumor is a collective attempt to interpret a vague but attractive situation using the power of words; therefore, identifying the language of rumors can be helpful in detecting them. Previous research has focused more on the contextual information of reply tweets and less on the content features of the original rumor to address the rumor detection problem. Most studies have been in English, and far less work has been done on rumor detection in Persian. This study analyzed the content of the original rumor and introduced informative content features to identify Persian rumors early (i.e., when a rumor is published on news media but has not yet spread on social media) on Twitter and Telegram. The proposed model is based on physical and non-physical content features in three categories: lexical, syntactic, and pragmatic. These features combine common content features with newly proposed content-based features. Since no social context information is available at the time a rumor is posted, the proposed model is independent of propagation-based features and relies on the content-based information of the original rumor. Although the proposed model ignores much information (including user information, users' reactions to the rumor, and propagation structures), helpful content information can be obtained for classification by analyzing the content of the original rumor. Several experiments have been performed on various combinations of feature sets (i.e., common and proposed content features) to explore the capability of the features, separately and jointly, in distinguishing rumors from non-rumors. To this end, three machine learning algorithms, Random Forest (RF), AdaBoost, and Support Vector Machine (SVM), have been used as strong classifiers to evaluate the accuracy of the proposed model.
To achieve the best performance of the classification algorithms on the training dataset, feature selection techniques are necessary; in this study, the Sequential Forward Floating Search (SFFS) approach has been used to select valuable features. The statistical results of the t-test (P-value <= 0.05) demonstrate that most of the new features proposed in this study reveal statistically significant differences between rumor and non-rumor documents. The experimental results show that the newly proposed features improve the accuracy of rumor detection. The F-measure of the proposed model for detecting Persian rumors was 0.848 on the Twitter dataset, 0.952 on the Kermanshah earthquake dataset, and 0.867 on the Telegram dataset, which indicates the ability of the proposed method to identify rumors by focusing only on the content features of the original rumor text. The results of evaluating the proposed model on Twitter rumors show that, despite the short length of tweets and the limited content information that can be extracted from them, the proposed model can detect Twitter rumors with acceptable accuracy. Hence, the ability of content features to distinguish rumors from non-rumors is demonstrated.
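Content features of the three kinds described (lexical, syntactic, pragmatic) can be computed from the original post alone, with no propagation or user information. The feature names and cue list below are illustrative examples, not the paper's exact feature set.

```python
import re

# Illustrative content features in the lexical / syntactic / pragmatic
# categories, computed from the original post text only.

PRAGMATIC_CUES = {"reportedly", "allegedly", "rumor", "unconfirmed"}  # assumed cue list

def content_features(text: str) -> dict:
    words = re.findall(r"[A-Za-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        "n_words": len(words),                                  # lexical
        "avg_word_len": sum(map(len, words)) / max(1, len(words)),
        "n_sentences": len(sentences),                          # syntactic
        "n_question_marks": text.count("?"),
        "n_exclam": text.count("!"),
        "pragmatic_cues": sum(w in PRAGMATIC_CUES for w in words),  # pragmatic
    }

feats = content_features("Reportedly, the bridge has collapsed! Unconfirmed?")
```

Such feature dictionaries would then be vectorized and fed to the RF, AdaBoost, or SVM classifiers, with SFFS pruning the uninformative ones.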

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    51-60
Measures: 
  • Citations: 

    0
  • Views: 

    180
  • Downloads: 

    0
Abstract: 

Keywords can present the main concepts of a text without human intervention, according to the model. Keywords are important vocabulary words that describe the text and play a very important role in accurate and fast understanding of its content. The purpose of keyword extraction is to identify the subject and main content of the text in the shortest time. Keyword extraction plays an important role in text summarization, document labeling, information retrieval, and subject extraction from text. For example, summarizing large texts into smaller ones is difficult, but the keywords of a text can reveal its topics. Identifying keywords in text with manual methods is time-consuming and costly. Keyword extraction methods can be classified into two types: supervised and unsupervised. In general, the keyword extraction process works as follows: first the text is split into smaller units called words, then redundant words are removed and the remaining words are weighted, and finally the keywords are selected from these words. Our proposed method in this paper for identifying keywords is supervised. We first calculate the word correlation matrix per document using a feed-forward neural network and the Word2Vec algorithm. Then, using the correlation matrix and a limited initial list of keywords, we extract the most similar words in the form of a list of nearest neighbors. Next, we sort this list in descending order, select different percentages of words from the beginning of the list, and for each percentage repeat the process of training the neural network 10 times, creating a correlation matrix, and extracting the list of nearest neighbors. Finally, we calculate the average accuracy, recall, and F-measure.
We continue this process until the best evaluation results are obtained; the results show that acceptable results are achieved when at most 40% of the words are selected from the beginning of the list of nearest neighbors. The algorithm has been tested on a corpus of 800 news items whose keywords had been extracted manually, and experimental results show that the accuracy of the proposed method is 78%.
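The neighbor-expansion step described above can be sketched as follows: given per-word embedding vectors (produced by Word2Vec in the paper; random toy vectors here) and a small seed list of known keywords, rank the remaining words by cosine similarity to the seeds and keep the top fraction. The vocabulary and vectors are fabricated for the demo.

```python
import numpy as np

# Toy embeddings standing in for Word2Vec output; the weather words are
# deliberately made similar so the ranking is visible.
vocab = ["flood", "rain", "storm", "election", "vote"]
rng = np.random.default_rng(1)
emb = {w: rng.normal(size=8) for w in vocab}
emb["rain"] = emb["flood"] + 0.1 * rng.normal(size=8)
emb["storm"] = emb["flood"] + 0.1 * rng.normal(size=8)

def cos(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

seeds = ["flood"]                        # the limited initial keyword list
candidates = [w for w in vocab if w not in seeds]
scored = sorted(candidates,
                key=lambda w: max(cos(emb[w], emb[s]) for s in seeds),
                reverse=True)
# Keep the top 40% of the vocabulary, the cutoff the paper found best.
top_40_percent = scored[: max(1, int(0.4 * len(vocab)))]
```

In the full method this expansion feeds back into retraining, repeated 10 times per percentage before averaging the evaluation metrics.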

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    61-74
Measures: 
  • Citations: 

    0
  • Views: 

    381
  • Downloads: 

    0
Abstract: 

The proposed algorithm in this research is based on multi-agent particle swarm optimization, a form of collective intelligence in which several simple components regulate their behavior and their relationships with the rest of the group according to certain rules; as a result, self-organization emerges in collective activities. Community structure is crucial for many network systems, and the algorithm uses a special type of encoding to identify the number of communities without any prior knowledge. In this method, the modularity function is used as the fitness function of the particle swarm. Several experiments show that the proposed algorithm, called Multi-Agent Particle Swarm, is superior to other algorithms and is capable of detecting nodes in overlapping communities with high accuracy. The limitation of previously presented PSO algorithms for community detection is that they recognize only non-overlapping communities, which stems from their gene representation; the use of multi-agent collective intelligence in our algorithm makes it possible to identify nodes in overlapping communities. The results show that nodes shared between a set of agents are the active nodes that create the overlap between communities. Our experimental results show that when a node is a member of more than one community, it is a good candidate to be selected as an active node, which leads to the creation of overlapping communities.
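The modularity fitness that scores each particle's community assignment is the standard Newman formula Q = (1/2m) * sum_ij (A_ij - k_i k_j / 2m) * delta(c_i, c_j). A minimal sketch on a toy graph of two triangles joined by a bridge edge:

```python
import numpy as np

# Two obvious triangles (0-1-2 and 3-4-5) joined by the bridge edge 2-3.
A = np.array([[0, 1, 1, 0, 0, 0],
              [1, 0, 1, 0, 0, 0],
              [1, 1, 0, 1, 0, 0],
              [0, 0, 1, 0, 1, 1],
              [0, 0, 0, 1, 0, 1],
              [0, 0, 0, 1, 1, 0]])

def modularity(A, labels):
    """Newman modularity Q of a hard community assignment."""
    k = A.sum(axis=1)                       # node degrees
    two_m = A.sum()                         # 2m (each edge counted twice)
    delta = np.equal.outer(labels, labels)  # same-community indicator
    return float(((A - np.outer(k, k) / two_m) * delta).sum() / two_m)

good = modularity(A, np.array([0, 0, 0, 1, 1, 1]))   # matches the structure
bad = modularity(A, np.array([0, 1, 0, 1, 0, 1]))    # arbitrary split
```

A particle encoding the natural two-triangle split scores higher, which is exactly the gradient the swarm follows.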

Author(s): 

Haghniaz Jahromi Benyamin | Almodarresi Seyed MohammadTaghi | HAJEBI POOYA

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    75-86
Measures: 
  • Citations: 

    0
  • Views: 

    188
  • Downloads: 

    0
Abstract: 

Networked control systems (NCSs) are distributed control systems in which the nodes, including controllers, sensors, actuators, and plants, are connected by a digital communication network such as the Internet. One of the most critical challenges in networked control systems is the stochastic time delay of data packets traveling over the communication network among the nodes. Using the Smith predictor as the controller is a common solution to overcome network time delay. Online and accurate modeling of the plant improves the performance of the networked control system, especially when the plant is nonlinear, has unknown parameters, and exhibits time-variant behavior. In this paper, a novel controller, the Neural-Smith predictor, is proposed: it first models the plant using a perceptron neural network, and second, uses another neural network as the core of the controller's signal processing. The variation of the plant's parameters over time is tracked online by the controller, and the desired control signal is then generated. The Integral of Time multiplied by the Absolute value of Error (ITAE) is a suitable performance index for position control, so this index has been used to compare the results. Simulation results show that an NCS using the Neural-Smith predictor performs better than the common Smith predictor and a recent compensation method using a modified communication disturbance observer (MCDOB) when the network time delay and the variation of the plant's transfer function are large. For example, when the stochastic time delay ranges between 19 and 21 ms, the difference between the ITAE of the controllers is 0.0004; this value increases to 0.027 when the stochastic time delay ranges between 910 and 930 ms.
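The ITAE index used for the comparison is the integral of t * |e(t)| dt, computed here with a trapezoid rule over synthetic error signals standing in for the simulated closed-loop responses:

```python
import numpy as np

# ITAE = integral of t * |e(t)| dt, approximated by the trapezoid rule.
# The two exponentials are synthetic stand-ins for controller error signals.

t = np.linspace(0.0, 5.0, 501)
fast = np.exp(-3.0 * t)        # error of a well-compensated loop
slow = np.exp(-0.5 * t)        # error of a poorly compensated loop

def itae(t, e):
    w = t * np.abs(e)
    return float(np.sum((w[1:] + w[:-1]) * np.diff(t)) / 2.0)

itae_fast, itae_slow = itae(t, fast), itae(t, slow)
```

Because the integrand is weighted by time, ITAE punishes errors that persist late in the response, which is why it discriminates well between delay-compensation schemes.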

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    87-102
Measures: 
  • Citations: 

    0
  • Views: 

    190
  • Downloads: 

    0
Abstract: 

Classification of land cover is one of the most important applications of radar polarimetry images. The purpose of image classification is to assign image pixels to different classes based on their extracted feature vectors. Radar imaging systems provide useful information about ground cover by using a wide range of electromagnetic waves to image the Earth's surface. The purpose of this study is to present an optimal method for classifying polarimetric radar images. The proposed method combines a support vector machine with a binary gravitational search optimization algorithm. First, a set of polarimetric features, including original data values, target decomposition features, and SAR discriminators, is extracted from the images. Then, the binary gravitational search algorithm is used to select the appropriate features and determine the optimal parameters for the support vector machine classifier. To achieve a classification system with high accuracy, the optimal values of the model parameters and a subset of the optimal features are selected simultaneously. The results of the proposed algorithm are compared with two baselines, namely using all the extracted features and selection by a genetic algorithm, and the zoning results are examined for three regions: the separation of areas for the San Francisco and Manila regions, and the detection of oil slicks on the ocean surface of the Philippines. The improvement over the genetic algorithm was approximately 6% to 12%, and the improvement over using all features was between 13% and 20%. For the San Francisco area, 101 features were extracted, of which the proposed algorithm selected 47 optimal features. For the city of Manila, 31 optimal features were selected from 65 after applying the algorithm.
For the oil slick dataset of the Philippines, the stated accuracy was reached by selecting 33 of 69 features. For the first two regions, the population size was 50 and the number of iterations 30; for the third region, the population size was 30 and the number of iterations 10.
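The joint selection of features and classifier parameters boils down to a fitness function over a binary mask. A common convention, assumed here (the 0.99/0.01 weights and the toy accuracy oracle are illustrative, not the paper's), rewards classification accuracy and lightly penalizes the number of kept features:

```python
# Sketch of the fitness a binary gravitational search agent would maximize:
# accuracy of an SVM trained on the masked features, minus a small penalty
# proportional to how many features are kept. Weights are assumed.

def fitness(mask, accuracy_of):
    """mask: tuple of 0/1 over candidate features."""
    n_kept = sum(mask)
    if n_kept == 0:
        return 0.0                       # an empty subset is useless
    acc = accuracy_of(mask)              # e.g. cross-validated SVM accuracy
    return 0.99 * acc + 0.01 * (1.0 - n_kept / len(mask))

# Toy oracle: features 0 and 2 are the informative ones.
def toy_accuracy(mask):
    return 0.5 + 0.2 * mask[0] + 0.2 * mask[2]

best = max(((m, fitness(m, toy_accuracy))
            for m in [(1, 1, 1, 1), (1, 0, 1, 0), (0, 1, 0, 1)]),
           key=lambda p: p[1])
```

The compact mask (1, 0, 1, 0) wins: it matches the full set's accuracy while keeping half the features, mirroring the 47-of-101 and 31-of-65 selections reported above.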

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    103-118
Measures: 
  • Citations: 

    0
  • Views: 

    186
  • Downloads: 

    0
Abstract: 

LPWANs (Low-Power Wide-Area Networks) are a class of technologies with very low power consumption and long communication range. Along with their various advantages, these technologies also have many limitations, such as low bandwidth, connectionless transmission, and low processing power, which challenge encryption methods. The very small message size and the possibility of packet loss without the gateway or device being aware make cipher chaining methods such as CBC, OFB, or CTR impractical in LPWANs, because they either assume a connection-oriented medium or consume part of the payload for sending a counter or HMAC. In this paper, we propose a new way to re-synchronize the key between sender and receiver when a packet is lost, which enables cipher chaining encryption within LPWAN limitations. The paper provides two encryption synchronization methods for LPWANs. The first method synchronizes in a manner similar to proof of work in a blockchain. The second method synchronizes the sender and receiver with the least possible use of the message payload, recovering synchronization without consuming payload space. The proposed method is implemented on the Sigfox platform and then simulated in a sample application. The simulation results show that the proposed method is acceptable in environments where the probability of losing several consecutive packets is low.
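One way such re-synchronization can work (an illustrative sketch, not the paper's exact scheme) is a shared hash chain: both sides advance the key by hashing after every packet, and on a decryption failure the receiver searches ahead a bounded number of steps, much like checking proof-of-work candidates, to find the state the sender reached after the lost packets:

```python
import hashlib

# Illustrative key re-synchronization via a hash chain. The scheme, window
# size, and secret below are assumptions for the sketch.

def next_key(key: bytes) -> bytes:
    return hashlib.sha256(key).digest()

def resync(receiver_key: bytes, sender_key: bytes, max_ahead: int = 8):
    """Return how many steps ahead the sender is, or None if out of window."""
    probe = receiver_key
    for steps in range(max_ahead + 1):
        if probe == sender_key:
            return steps
        probe = next_key(probe)
    return None

shared = b"initial-shared-secret"
sender = shared
for _ in range(3):                  # three packets lost in transit
    sender = next_key(sender)
lag = resync(shared, sender)        # receiver discovers it is 3 steps behind
```

The search cost grows with the number of consecutive lost packets, which matches the paper's observation that the method suits environments where long loss bursts are rare.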

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    119-134
Measures: 
  • Citations: 

    0
  • Views: 

    107
  • Downloads: 

    0
Abstract: 

Recognition of visual events, as a video analysis task, has become popular in the machine learning community. While traditional approaches for detecting video events have been used for a long time, recently evolved deep learning based methods have revolutionized this area, enabling event recognition systems to achieve detection rates that were not reachable by traditional approaches. Convolutional neural networks (CNNs) are among the most popular types of deep networks used in both image and video recognition tasks. They are initially made up of several convolutional layers, each followed by appropriate activation and possibly pooling layers, and often end with one or more fully connected layers. The property exploited in this work is the ability of CNNs to extract mid-level features from video frames: in contrast to traditional approaches based on low-level visual features, CNNs make it possible to extract higher-level semantic features from the video frames. The focus of this paper is on recognition of visual events in video using CNNs. In this work, image-trained descriptors are used so that video recognition can be done with low computational complexity. A tuned CNN is used as the frame descriptor, and its fully connected layers are utilized as concept detectors, so the feature maps of the activation layers following the fully connected layers act as feature vectors. These feature vectors (concept vectors) are mid-level features that represent video better than low-level features and can partially fill the semantic gap between low-level features and the high-level semantics of video. The descriptors obtained from the CNNs for each video are a varying-length stack of feature vectors. To make the obtained descriptors organized and ready for classification, they must be properly encoded; the coded descriptors are then normalized and classified.
The normalization may consist of conventional normalization or the more advanced power-law normalization. The main purpose of normalization is to change the distribution of descriptor values so that they are more uniformly distributed, so very large or very small descriptor entries have a more balanced impact on event recognition. The main novelty of this paper is that the spatial and temporal information in the mid-level features is employed to construct a suitable coding procedure. We use temporal information in the coding of video descriptors; such information is often ignored, resulting in reduced coding efficiency. Hence, a new coding is proposed that improves the trade-off between the computational complexity of the recognition scheme and the accuracy of identifying video events. It is also shown that the proposed coding takes the form of an optimization problem that can be solved with existing algorithms. The optimization problem is initially non-convex and not solvable with existing methods in polynomial time, so it is transformed into a convex form, which makes it a well-defined optimization problem. While there are many methods to handle such convex optimization problems, we use a strong convex optimization library to efficiently solve the problem and obtain the video descriptors. To confirm the effectiveness of the proposed descriptor coding method, extensive experiments were conducted on two large public datasets: the Columbia Consumer Video (CCV) dataset and the ActivityNet dataset. Both are popular, publicly available video event recognition datasets with standard train/test splits, large enough to serve as reasonable benchmarks in video recognition tasks. Compared to the best practices available in the field of visual event detection, the proposed method provides a better model of video and much better mean average precision, mean average recall, and F-score on the test sets of the CCV and ActivityNet datasets.
The presented method not only improves accuracy but also reduces computational cost compared to the state of the art. The experiments clearly confirm the potential of the proposed method to improve the performance of visual recognition systems, especially in supervised video event detection.
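The power-law normalization mentioned above is commonly implemented as a signed power with exponent alpha < 1 followed by L2 normalization; it flattens the distribution of coded descriptor values so dominant entries do not overwhelm the classifier. A minimal sketch (alpha = 0.5 is a typical choice, assumed here):

```python
import numpy as np

def power_law_normalize(x, alpha=0.5):
    """Signed power-law transform followed by L2 normalization."""
    y = np.sign(x) * np.abs(x) ** alpha   # compress large magnitudes
    return y / np.linalg.norm(y)          # put the result on the unit sphere

desc = np.array([100.0, 1.0, -4.0, 0.0])  # toy coded descriptor
out = power_law_normalize(desc)
```

After the transform the largest entry dominates the second-largest by a factor of 10 instead of 100, which is exactly the balancing effect described in the abstract.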

Issue Info: 
  • Year: 

    2021
  • Volume: 

    18
  • Issue: 

    1 (47)
  • Pages: 

    135-150
Measures: 
  • Citations: 

    0
  • Views: 

    134
  • Downloads: 

    0
Abstract: 

Nowadays, Network Intrusion Detection Systems (NIDS) are widely used to provide full security on computer networks. IDSs fall into two primary types: signature-based systems and anomaly-based systems. The former is more commonly used than the latter due to its lower error rate. The core of a signature-based IDS is pattern matching. This process is inherently computationally intensive, and in the worst case about 80% of the total processing time of an IDS is spent on it. On the other hand, the rapid growth of network bandwidth and link speeds, which leads to a large number of inbound packets being dropped by the intrusion detection system, has become a crucial factor limiting the performance of this type of system. Snort is a signature-based NIDS that has attracted great interest for being open-source, free, and easy to use. To resolve the challenges mentioned above, we propose an enhanced version of Snort built on two key ideas. The first is filtering unnecessary packets based on a blacklist of source IP addresses; this filter is used as a preprocessing mechanism to improve Snort's efficiency. However, packet filtering slows down as network traffic volume increases, so to accelerate this mechanism we propose a second key idea: the data-parallel nature of Snort's functions lets us parallelize its two main computationally intensive functions on the graphics processing unit (GPU). These functions are the lookup in the blacklist filter during preprocessing and Snort's signature matching, which completes the intrusion detection process. To parallelize the preprocessing step, a blacklist is first built from the DARPA dataset and transferred, together with the Snort ruleset, to the global memory of the GPU.
Then, each thread concurrently matches a packet against the blacklist filters. To parallelize the signature matching step of Snort, the well-known Boyer-Moore pattern matching algorithm is parallelized similarly. Evaluation results show that the proposed method, being up to 30 times faster than the sequential version, significantly improves blacklist-based filtering performance. Also, the efficiency of the proposed method in using GPU resources for parallel intrusion detection is 81 percent higher than that of the best state-of-the-art method.
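The blacklist prefilter described above can be sketched sequentially: packets whose source IP is on the blacklist are flagged before the expensive signature matching runs. In the GPU version each thread checks one packet; here a hash set gives the same constant-time membership test. The addresses and payloads are fabricated for the demo.

```python
# Sketch of the blacklist preprocessing stage. The blacklist entries below
# are hypothetical stand-ins for the DARPA-derived list used in the paper.

blacklist = {"10.0.0.13", "192.168.7.7"}

packets = [
    {"src": "10.0.0.13", "payload": b"GET /exploit"},
    {"src": "172.16.0.2", "payload": b"GET /index.html"},
    {"src": "192.168.7.7", "payload": b"\x90\x90\x90"},
]

# Packets from blacklisted sources are flagged immediately; only the rest
# proceed to the costly Boyer-Moore signature matching.
flagged = [p for p in packets if p["src"] in blacklist]
clean = [p for p in packets if p["src"] not in blacklist]
```

The benefit is purely a reduction of the workload reaching the matcher; the detection logic itself is unchanged.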
