Scientific Information Database (SID) - Trusted Source for Research and Academic Resources

Journal Issue Information

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 3-17
Measures:
  • Citations: 0
  • Views: 745
  • Downloads: 0
Abstract: 

The use of 3D models in diverse fields has naturally created the need to store, index, classify, and retrieve 3D objects. Classification and retrieval of 3D models demand that a model be represented in a way that captures the local and global shape characteristics of the object. This requires a 3D descriptor, or signature, that summarizes the pivotal shape properties of the object. Therefore, in this work, a new shape descriptor based on global characteristics is proposed for recognizing 3D models. To perform feature extraction in the proposed method, a meshed bounding sphere surrounding the 3D model is contracted from the outside toward the center of the model. Then, the length of the path that each of the sphere's vertices travels from its starting position to the model's surface is measured. These values are used to compute the path function. The resulting function is robust against isometric variations, making it appropriate for recognizing non-rigid models. Next, the Fourier transform of the path function is calculated as the feature vector, and the extracted feature vector is fed to an SVM classifier. By exploiting the properties of the magnitude response of the Fourier transform of real signals, the model can be analyzed in a lower-dimensional space without losing its inherent characteristics, and no further pose normalization is needed. Simulation results with the SVM classifier on the McGill dataset show that the proposed method has the highest accuracy (i.e., 79.7%) among the compared related methods. Moreover, the confusion matrix of the SVM classifier trained on 70% of the data indicates suitable discrimination ability for similar models, and the method does not incur the high computational complexity of processing models in 3D space.
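As a toy illustration of why the magnitude spectrum removes the need for pose normalization, the following sketch (hypothetical, pure Python, not the paper's actual implementation) computes a descriptor from a sampled path function and shows that it is unchanged by a circular shift of the samples, i.e. by a change of the starting pose:

```python
import cmath

def dft_magnitude(signal):
    """Magnitude of the discrete Fourier transform of a real signal.

    The magnitude spectrum is invariant to circular shifts of the
    input, which is why a descriptor built from it needs no extra
    pose normalization.
    """
    n = len(signal)
    mags = []
    for k in range(n):
        s = sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n))
        mags.append(abs(s))
    return mags

def path_descriptor(path_lengths, n_coeffs=4):
    """Keep only the first few magnitudes as a compact feature vector."""
    return dft_magnitude(path_lengths)[:n_coeffs]

# A toy "path function": distances travelled by sphere vertices.
f = [2.0, 1.5, 1.0, 1.5, 2.0, 2.5, 3.0, 2.5]
shifted = f[3:] + f[:3]          # same shape, different starting pose
d1 = path_descriptor(f)
d2 = path_descriptor(shifted)    # identical descriptor
```

In a real pipeline the descriptor vectors would then be fed to an SVM; here the point is only the shift invariance of the features.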

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 19-40
Measures:
  • Citations: 0
  • Views: 630
  • Downloads: 0
Abstract: 

Reverse engineering of network applications, especially from the security point of view, is of high importance and interest. Many network applications use proprietary protocols whose specifications are not publicly available. Reverse engineering such applications can provide vital information for understanding their embedded unknown protocols. This facilitates many tasks, including deep protocol inspection in next-generation firewalls and analysis of suspicious binary code. The goal of protocol reverse engineering is to extract the protocol format and the protocol state machine. The protocol format describes the structure of all messages in the protocol, and the protocol state machine describes the sequences of messages that the protocol accepts. Recently, there has been rising interest in automatic protocol reverse engineering. Existing works are divided into those that extract the protocol format and those that extract the protocol state machine. They can also be divided by input: those that analyze network traffic and those that analyze the program implementing the protocol. Although there is some research in this field, it has mostly focused on extracting the syntactic structure of protocol messages. In this paper, new techniques are presented to improve extraction of the format (both syntax and semantics) of protocol messages via reverse engineering of the binary code of network applications. The research uses an integration of dynamic and static binary code analysis. The field-extraction approach first detects length fields and separators and then, by applying rules based on compiler principles, locates all the fields in the messages. The semantic-extraction approach is based on the semantic information available in the program implementing the protocol as well as information existing in the program's environment.
For evaluation, four different network applications, including DNS, eDonkey, Modbus, and STUN, were analyzed. Experimental results show that the proposed techniques not only extract a more complete syntactic structure of messages than similar works, but also extract a set of useful semantic information about the protocol messages that is not achievable in previous works.
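The two field-location cues the abstract mentions, length fields and separators, can be illustrated with a minimal sketch (hypothetical helpers, not the paper's analysis engine; the length-prefixed example follows DNS label encoding, one of the evaluated protocols):

```python
def split_by_separator(message: bytes, separator: bytes):
    """Separator-based field extraction: fields are the byte runs
    between occurrences of a known separator."""
    return message.split(separator)

def parse_length_prefixed(message: bytes):
    """Length-field extraction: each field is announced by a one-byte
    length prefix, as in DNS label encoding; a zero byte terminates."""
    fields, i = [], 0
    while i < len(message) and message[i] != 0:
        n = message[i]
        fields.append(message[i + 1:i + 1 + n])
        i += 1 + n
    return fields

# DNS-style encoding of "www.example.com"
msg = b"\x03www\x07example\x03com\x00"
labels = parse_length_prefixed(msg)   # [b"www", b"example", b"com"]
```

The actual work infers which bytes play these roles by watching how the binary's parsing loops consume the message; the sketch only shows what the recovered format lets you do.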

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 41-60
Measures:
  • Citations: 0
  • Views: 507
  • Downloads: 0
Abstract: 

With the advent of cheap indoor RGB-D sensors, proper representation of piecewise planar depth images is crucial for an effective compression method. Although geometrical wavelets exist for optimal representation of piecewise constant and piecewise linear images (i.e., wedgelets and platelets), an adaptation to the piecewise linear fractional functions that correspond to depth variation over planar regions is still missing. Such planar regions constitute major portions of indoor depth images and need to be well represented to allow a desirable rate-distortion trade-off. In this paper, the second-order planelet transform is introduced as an optimal representation for piecewise planar depth images with sharp edges along smooth curves. Also, to speed up the computation of the planelet approximation of depth images, an iterative estimation procedure is described based on non-linear least squares and discontinuity relaxation. The computed approximation is fed to a rate-distortion-optimized quadtree-based encoder, and the pruned quadtree is encoded into the bit-stream. Spatial horizontal and vertical plane-prediction modes are also introduced to further exploit the geometric redundancy of depth images and increase the compression ratio. Performance of the proposed planelet-based coder is compared with wedgelets, platelets, and general image encoders on synthetic and real-world Kinect-like depth images. The synthetic image dataset consists of 30 depth images of different scenes manually selected from eight video sequences of the ICL-NUIM RGB-D benchmark dataset. The real-world dataset also includes 30 depth images of indoor scenes selected from the Washington RGB-D Scenes V2 dataset captured by Kinect-like cameras. In contrast to former geometrical wavelets, which approximate the smooth regions of an image using constant and linear functions, the planelet transform exploits a non-linear model based on linear fractional functions to approximate every smooth region.
Visual comparisons by 3D surface reconstruction and visualization of the decoded depth images as surface plots reveal that, at a given bit-rate, the planelet-based coder better preserves the geometric structure of the scene than the former geometric wavelets and the general image coders. Numerical evaluations show that compression of synthetic depth images by planelets yields a considerable PSNR improvement of 0.83 dB and 6.92 dB over platelets and wedgelets, respectively. Due to the absence of noise, the plane-prediction modes were very successful on synthetic images and boosted the PSNR gap over platelets and wedgelets to 5.73 dB and 11.82 dB, respectively. The proposed compression scheme also performed well on real-world depth images. Compared with wedgelets, the planelet-based coder with spatial prediction achieved a noticeable quality improvement of 2.7 dB at a bit-rate of 0.03 bpp. It also led to a 1.46 dB quality improvement over platelets at the same bit-rate. In this experiment, the planelet-based coder yielded 2.59 dB and 1.56 dB increases in PSNR over the general-purpose JPEG2000 and H.264 coders. Similar results were also achieved in terms of the SSIM metric.
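The key modeling idea, that depth over a planar region is a linear fractional function because the reciprocal depth 1/z of a plane is linear in the image coordinates, can be sketched as a least-squares fit. This is a hypothetical pure-Python illustration, not the paper's iterative estimator with discontinuity relaxation:

```python
def solve3(a_mat, b_vec):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    m = [row[:] + [b] for row, b in zip(a_mat, b_vec)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(m[r][col]))
        m[col], m[piv] = m[piv], m[col]
        for r in range(col + 1, 3):
            f = m[r][col] / m[col][col]
            for c in range(col, 4):
                m[r][c] -= f * m[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (m[r][3] - sum(m[r][c] * x[c] for c in range(r + 1, 3))) / m[r][r]
    return x

def fit_planelet(samples):
    """Fit z = 1 / (a*x + b*y + c) to (x, y, z) samples.

    For a plane under perspective projection the reciprocal depth
    w = 1/z is linear in (x, y), so fitting the linear-fractional
    model reduces to ordinary least squares on w (normal equations).
    """
    s = {"xx": 0.0, "xy": 0.0, "yy": 0.0, "x": 0.0, "y": 0.0,
         "xw": 0.0, "yw": 0.0, "w": 0.0}
    for x, y, z in samples:
        w = 1.0 / z
        s["xx"] += x * x; s["xy"] += x * y; s["yy"] += y * y
        s["x"] += x;      s["y"] += y
        s["xw"] += x * w; s["yw"] += y * w; s["w"] += w
    n = float(len(samples))
    a_mat = [[s["xx"], s["xy"], s["x"]],
             [s["xy"], s["yy"], s["y"]],
             [s["x"],  s["y"],  n]]
    return solve3(a_mat, [s["xw"], s["yw"], s["w"]])

# Noise-free samples from the plane with 1/z = 0.01*x + 0.02*y + 0.5
points = [(x, y, 1.0 / (0.01 * x + 0.02 * y + 0.5))
          for x in (0.0, 1.0, 2.0) for y in (0.0, 1.0, 3.0)]
a, b, c = fit_planelet(points)    # recovers (0.01, 0.02, 0.5)
```

Constant (wedgelet) and linear (platelet) models cannot reproduce this reciprocal depth profile exactly, which is the gap the planelet transform closes.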

Author(s): Torkian Amin | MOALLEM PAYMAN

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 61-76
Measures:
  • Citations: 0
  • Views: 611
  • Downloads: 0
Abstract: 

License plate recognition (LPR) by digital image processing, widely used in traffic monitoring and control, is one of the most important goals of Intelligent Transportation Systems (ITS). In a real ITS, the resolution of input images is not very high, owing to technology challenges and the cost of high-resolution cameras. However, when a license plate image is captured at low resolution, the plate may not be readable, and the recognition algorithm cannot work well. There are many causes of degradation in captured license plate images, such as downsampling, blurring, warping, noise, and the distance of the car from the camera. Many researchers try to enhance the quality of input images with image restoration algorithms to improve the final LPR accuracy. Recently, super-resolution (SR) techniques have been widely used to construct a high-resolution (HR) image from several observed low-resolution (LR) images, thereby removing the degradations caused by imaging with a low-resolution camera. As mentioned, in a real ITS the resolution of the input image is not high, but successive frames of a target are available, so multi-frame SR methods can be used to overcome the ITS resolution challenge. In this paper, an SR technique based on POCS (Projection Onto Convex Sets) is used to reconstruct an HR license plate image from a set of registered LR images. The normalized convolution (NC) framework is used in POCS, in which the local signal is approximated through a projection onto a subspace. The window function of adaptive NC is adapted to local linear structures. As a result, more samples of the same modality are fused in the reconstruction, which in turn reduces diffusion across discontinuities; this is a very important factor in improving LPR accuracy.
The first step in multi-frame SR is image registration, which is necessary to improve the quality of the reconstructed HR image, especially in LPR, where the quality of the reconstructed character edges is very important. For simplicity, simple motions (usually translations) between successive frames are often assumed in multi-frame SR, but changes in scale, rotation, and translation may occur between successive license plate images. This makes registration one of the main challenges in SR for LPR. This paper proposes a two-step image matching algorithm to improve the quality of the registration stage. In the first step, Fourier-Mellin image matching is used for registration, which overcomes the scale and rotation challenge, but its registration accuracy is not sufficient. After matching the successive input images with the Fourier-Mellin algorithm, Keren or Vandewalle image matching is used to refine the final registration. For real LR images, Fourier-Mellin plus Keren shows higher performance, while for simulated LR images, Fourier-Mellin plus Vandewalle shows higher performance. To compare the results of the two proposed SR algorithms for LPR with other methods, we prepared three real datasets of successive frames for Persian LPR: the first and second consist of captured HR and LR successive frames, respectively, while the third is a downsampled LR version of the HR frames. The output HR image of every compared method is fed to a demo version of a Persian LPR software (www.farsiocr.ir), and the accuracy per character and the accuracy per license plate are reported. Five SR methods are compared: cubic interpolation, ASDS-AR (Adaptive Sparse Domain Selection and Adaptive Regularization), standard POCS, and our first and second proposed SR methods, both of which first use Fourier-Mellin registration; the first then uses Keren, and the second uses Vandewalle image matching for fine registration.
Moreover, to show the effectiveness of using SR methods before LPR, the LR images were also fed directly to the LPR software. The results show that when the length of the license plate is less than 50 pixels, applying SR methods before LPR improves the recognition accuracy, whereas when the plate length is less than 35 pixels, SR methods cannot improve the performance. Our investigations show that for LR images downsampled from HR ones, our proposed SR method with Fourier-Mellin plus Keren registration reaches the highest performance, while for real LR images, captured by a low-resolution camera, our proposed SR method with Fourier-Mellin plus Vandewalle registration reaches the highest performance. On the other hand, since some Persian numerical characters, like 2 (۲) and 3 (۳), are very similar to each other, all of the compared methods may confuse them in the LPR step; therefore, the accuracy per license plate of all compared methods is not high. Among the previously compared methods, for LR images with plate length between 35 and 50 pixels, standard POCS shows the best results, while our proposed SR methods improve the accuracy per character by around 25% with respect to the POCS method.
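The core POCS idea, each registered LR frame defines a constraint set and the HR estimate is cyclically projected onto all of them, can be sketched in 1D. This is a hypothetical toy (mean-downsampling observation model, integer-pixel shifts, no normalized convolution), not the paper's implementation:

```python
def observe(hr, offset, factor=2):
    """Observation model: an LR frame whose samples are means of
    `factor` adjacent HR pixels, starting at `offset` (the frame's
    registered shift)."""
    return [sum(hr[i:i + factor]) / factor
            for i in range(offset, len(hr) - factor + 1, factor)]

def pocs_sr(frames, hr_len, factor=2, iters=300):
    """Projection Onto Convex Sets: each registered LR frame defines an
    affine constraint set {hr : observe(hr) = frame}. The orthogonal
    projection onto it spreads each sample's residual evenly over the
    corresponding HR block; cycling through the projections converges
    to an HR signal consistent with every frame."""
    hr = [0.0] * hr_len
    for _ in range(iters):
        for offset, lr in frames:
            sim = observe(hr, offset, factor)
            for j, (s, o) in enumerate(zip(sim, lr)):
                err = o - s
                start = offset + j * factor
                for k in range(start, start + factor):
                    hr[k] += err
    return hr

# Two LR frames of the same scene, shifted by one HR pixel.
hr_true = [1.0, 2.0, 4.0, 3.0, 5.0, 6.0]
frames = [(0, observe(hr_true, 0)), (1, observe(hr_true, 1))]
hr_est = pocs_sr(frames, len(hr_true))
```

After convergence the estimate reproduces both observed frames; the shifted frame contributes information a single frame cannot, which is the point of multi-frame SR.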

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 77-89
Measures:
  • Citations: 0
  • Views: 540
  • Downloads: 0
Abstract: 

In this paper, a hybrid machine translation (MT) system is proposed by combining the output of a rule-based machine translation (RBMT) system with a statistical approach. RBMT uses a set of linguistic rules for translation, which leads to better translation results in terms of word order and syntactic structure. On the other hand, statistical machine translation (SMT) works better in lexical choice. Therefore, in our system, an initial translation is generated using RBMT. Then the proper lexical items for the resulting sentence are chosen using a decoder algorithm inspired by the SMT architecture. In the pure SMT approach, the decoder is responsible for selecting the final lexical items during translation; normally it handles lexical choice as well as reordering and requires exponential time complexity. By fixing the word order of the output, a polynomial version of this method, called monotone decoding, is used in this paper. The monotone decoder selects the best lexical items from a candidate list by maximizing the language-model score of the resulting sentence. The candidate list is gathered from the outputs of both the pure RBMT and pure SMT systems. Experiments with the proposed hybrid method on the English-Persian language pair show significant improvements over both the RBMT and SMT results: the hybrid method gains almost +5 BLEU over RBMT and about one BLEU point over SMT.
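Because the word order is fixed, monotone decoding reduces to a Viterbi search over per-slot lexical candidates under a language model. The sketch below is a hypothetical toy (invented bigram probabilities and candidates, English instead of Persian), only meant to show why the search is polynomial:

```python
import math

# Toy bigram language model; all probabilities are hypothetical.
BIGRAM = {
    ("<s>", "the"): 0.6, ("<s>", "a"): 0.4,
    ("the", "bank"): 0.2, ("the", "shore"): 0.5,
    ("a", "bank"): 0.3, ("a", "shore"): 0.1,
    ("bank", "</s>"): 0.5, ("shore", "</s>"): 0.5,
}

def monotone_decode(candidates):
    """Monotone decoding: word order is fixed, so choosing the best
    lexical item per slot is a Viterbi pass over the candidate
    lattice -- polynomial instead of the exponential reordering search."""
    slots = candidates + [["</s>"]]
    best = {"<s>": (0.0, ["<s>"])}          # state -> (log-prob, path)
    for slot in slots:
        nxt = {}
        for word in slot:
            scored = [
                (lp + math.log(BIGRAM.get((prev, word), 1e-9)), path)
                for prev, (lp, path) in best.items()
            ]
            lp, path = max(scored)
            nxt[word] = (lp, path + [word])
        best = nxt
    lp, path = best["</s>"]
    return path[1:-1]                        # drop sentence markers

# Candidate lists per slot, merged from the RBMT and SMT outputs.
result = monotone_decode([["the", "a"], ["bank", "shore"]])
```

With these toy scores the decoder prefers "the shore" because the bigram ("the", "shore") dominates, illustrating how the language model drives the lexical choice.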

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 91-104
Measures:
  • Citations: 0
  • Views: 628
  • Downloads: 0
Abstract: 

Facial expressions play an essential role in conveying emotions, so facial expression synthesis has gained interest in fields such as computer vision and graphics. Facial actions are generated by contraction and relaxation of the muscles innervated by the facial nerves, and the possible combinations of those muscle motions are numerous; therefore, facial expressions are often person-specific. In general, however, facial expressions can be divided into six groups: anger, disgust, fear, happiness, sadness, and surprise. Facial expression variations include both global facial feature motions (e.g., opening or closing of the eyes or mouth) and local appearance deformations (e.g., facial wrinkles and furrows). Ghent and McDonald introduced the Facial Expression Shape Model and the Facial Expression Texture Model for synthesizing global and local changes, respectively. Zhang et al. published an elastic model to balance local and global warping; they then added suitable illumination details to the warped face image with a muscle-distribution-based model. The goal of facial expression synthesis is to create an expressional face image of a subject given only a neutral face image of that subject. This paper proposes a new method for synthesizing human facial expressions, in which an elastic force simulates the displacement of facial points in various emotional expressions. The basis of this force is a set of control points with specific coordinates and directions on the face image. In other words, each control point applies an elastic force to the points of the face and moves them in a certain direction. The force applied to each point is inversely proportional to the distance between that point and the control point. With several control points, the force applied to each point of the face is the resultant of the forces associated with all control points.
To synthesize a specific expression, the locations of the control points and the parameters of the force are adjusted to achieve the expressional face. Facial detail is extracted with a Laplacian pyramid and added to the synthesized image. The proposed method was implemented on the KDEF and Cohn-Kanade (CK+) databases and the results were compared. Happy and sad expressions were selected for synthesis. The proper locations of the control points and the elastic force parameters were determined on the neutral image of the target person based on the expressional images in the database. Then the neutral image of the person was warped with the elastic forces, and facial expression details were added to the warped image with the Laplacian pyramid method. Finally, the experimental results were compared with the photo-realistic and facial-expression-cloning methods, demonstrating the high visual quality and low computational complexity of the proposed method in synthesizing the face image.
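The control-point force described above can be sketched in a few lines. This is a hypothetical simplification (one possible inverse-distance falloff; the paper's exact force law and parameters are not reproduced here):

```python
import math

def elastic_displacement(point, controls):
    """Displace `point` by the summed forces of all control points.

    Each control is (position, direction, strength): it pushes face
    points along `direction` with a magnitude that falls off as the
    inverse of the distance to the control point, so nearby pixels
    move more than distant ones."""
    px, py = point
    dx = dy = 0.0
    for (cx, cy), (ux, uy), strength in controls:
        w = strength / (1.0 + math.hypot(px - cx, py - cy))
        dx += w * ux
        dy += w * uy
    return (px + dx, py + dy)

# One hypothetical control point, e.g. a mouth corner pulled sideways.
controls = [((0.0, 0.0), (1.0, 0.0), 2.0)]
near = elastic_displacement((0.0, 0.0), controls)   # large motion
far = elastic_displacement((9.0, 0.0), controls)    # small motion
```

Warping every pixel of the neutral image through such a field gives the global deformation; the Laplacian-pyramid detail layer is then added on top.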

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 105-120
Measures:
  • Citations: 0
  • Views: 655
  • Downloads: 0
Abstract: 

Clustering, one of the main tasks in data mining, means grouping similar samples. There is a wide variety of clustering algorithms; one category is density-based clustering. Various algorithms have been proposed for this approach, one of the most widely used being DBSCAN. DBSCAN can identify clusters of different shapes in a dataset and automatically determine the number of clusters. The algorithm also has disadvantages: it is difficult for the user to determine its input parameters, and it is unable to detect clusters of different densities in a dataset. The ISB-DBSCAN algorithm is another density-based algorithm, which eliminates these disadvantages of DBSCAN. ISB-DBSCAN reduces the input parameters of DBSCAN to a single parameter k, the number of nearest neighbors. This method is also able to identify clusters of different densities but, owing to its new definition of the core point, it fails to identify some clusters in certain datasets. This paper presents a method for improving the ISB-DBSCAN algorithm. The proposed approach, like ISB-DBSCAN, uses the input parameter k as the number of nearest neighbors and provides a new definition of the core point. It performs clustering in three steps, with the difference that, unlike ISB-DBSCAN, it can create a new cluster in the final stage. In the proposed method, a new criterion based on the number of dataset dimensions is used to detect noise in the dataset. Since determining the parameter k may be difficult for the user, a method based on a genetic algorithm is also proposed for automatic estimation of k. To evaluate the proposed methods, tests were carried out on 11 standard datasets and the clustering accuracy of the methods was evaluated.
The results show that the proposed method achieves better results on different datasets compared to other available methods. The automatic determination of the parameter k also obtained acceptable results.
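The general idea of replacing DBSCAN's global radius with a per-point k-NN distance, so clusters of different densities can be found, can be sketched in 1D. This is a hypothetical illustration in the spirit of ISB-DBSCAN, not the paper's three-step algorithm (its core-point definition and dimension-based noise criterion are omitted):

```python
def knn_radius(points, i, k):
    """Distance from point i to its k-th nearest neighbour."""
    dists = sorted(abs(points[i] - points[j])
                   for j in range(len(points)) if j != i)
    return dists[k - 1]

def density_cluster(points, k):
    """DBSCAN-style clustering where each point's neighbourhood radius
    comes from its own k-NN distance, so dense and sparse clusters are
    treated on an equal footing. Noise handling is omitted here."""
    radii = [knn_radius(points, i, k) for i in range(len(points))]
    labels = [-1] * len(points)
    cid = 0
    for i in range(len(points)):
        if labels[i] != -1:
            continue
        labels[i] = cid
        queue = [i]
        while queue:                      # flood-fill the cluster
            p = queue.pop()
            for q in range(len(points)):
                if labels[q] == -1 and \
                        abs(points[p] - points[q]) <= max(radii[p], radii[q]):
                    labels[q] = cid
                    queue.append(q)
        cid += 1
    return labels

data = [1.0, 1.1, 1.2, 10.0, 10.1, 10.2]
labels = density_cluster(data, k=2)       # two well-separated clusters
```

Only k is supplied by the user, which is exactly the parameter the paper then estimates automatically with a genetic algorithm.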

Author(s): GHAZI MAGHREBI SAEED | Khordadpoor deylamani farbayan

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 121-135
Measures:
  • Citations: 0
  • Views: 973
  • Downloads: 0
Abstract: 

Orthogonal frequency division multiplexing (OFDM) is used in many modern communication systems to provide immunity against very hostile multipath channels. The OFDM technique divides the total available frequency bandwidth into several narrow bands. In conventional OFDM, the FFT algorithm is used to provide orthogonal subcarriers. Intersymbol interference (ISI) and intercarrier interference (ICI) impairments are caused by the time-domain rectangular-windowed sine and cosine basis functions. FFT-OFDM is a very popular multi-carrier modulation (MCM) technique. It has attractive features such as low-complexity modulation/demodulation and simple, fast frequency-domain channel estimation and equalization. Also, by transmitting data over different parallel frequencies, FFT-OFDM achieves spectral efficiency through overlapped sub-channels and immunity against fading channels. Unfortunately, FFT-OFDM has serious drawbacks, namely high sensitivity to ISI and ICI, which are caused by the time-domain rectangular-windowed sine and cosine basis functions and their high-level side lobes in the frequency domain. To cope with this, cyclic prefixes (CP) are added at the beginning of the OFDM symbols, which causes bandwidth and power inefficiency. To provide a more efficient MCM technique while preserving the advantages of conventional FFT-OFDM, discrete wavelet modulation (DWM) and wavelet packet modulation (WPM) have been introduced in recent years. With these, it is possible to use time-domain equalization (TEQ) or overlap frequency-domain equalization (overlap FEQ) to reduce the interference effectively in the absence of a CP. Although TEQ techniques are more complicated than the FEQ of conventional OFDM, WPT-OFDM has enhanced bandwidth and power efficiency, which makes it very appropriate for digital communication systems. In recent years, several studies have compared wavelet theory, wavelet modulation, and WPM with FFT-OFDM.
Because of the good performance of the wavelet packet transform (WPT), studies continue on its behavior in hostile channels in more detail. There are also studies on various kinds of FEQ and TEQ, such as zero-forcing (ZF) and minimum mean square error (MMSE), in the presence of AWGN and some fading channels. These works also compare FEQ for FFT-OFDM with overlap FEQ for WPT-OFDM. Today, the 3GPP standard is used in different domains such as 3G, 4G, and LTE-A technologies. In this paper, all parameters are chosen according to 3GPP standards. To demonstrate the benefits of the discrete WPT, two OFDM modulation schemes, FFT-OFDM and WPT-OFDM, are considered over two applied channels: the 6-tap rural area (RA6) and 6-tap typical urban (TU6) channels. The performance of the two systems is investigated by measuring the bit error rate (BER) at different SNRs (dB). Wavelet families, namely Haar, Daubechies-6, Symlet-5, and Coiflet-5, are compared with the FFT in an OFDM system with QPSK, 16-QAM, and 64-QAM constellation mappings. At the receiver side, FEQ is used in FFT-OFDM and overlap FEQ in WPT-OFDM to equalize the multipath fading channels. This is a comprehensive comparison between FFT-OFDM and WPT-OFDM with different constellations, several wavelet families, and different equalizers over two applied channels, in order to model a real environment. The simulation results demonstrate the performance improvement of the system using the WPT-OFDM scheme. To evaluate the two OFDM techniques, the SNRs required to reach BER = 10^-3 are extracted and compared for both systems. Better performance is obtained by using Haar wavelets as orthogonal basis functions rather than the FFT in OFDM modulation.
As a result, WPT-OFDM can be applied, with better performance, in different OFDM-based technologies such as DAB (Digital Audio Broadcasting), WiMAX (Worldwide Interoperability for Microwave Access), and DVB (Digital Video Broadcasting).
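The orthogonal subcarrier set that WPT-OFDM builds from Haar filters can be illustrated with a minimal wavelet packet transform. This sketch (a didactic pure-Python version, not a full modulator with channel and equalizer) shows the two properties the scheme relies on: perfect reconstruction and energy preservation of the orthonormal basis:

```python
def haar_step(x):
    """One Haar analysis step: orthonormal averages and differences."""
    s = 0.5 ** 0.5
    return ([s * (a + b) for a, b in zip(x[::2], x[1::2])],
            [s * (a - b) for a, b in zip(x[::2], x[1::2])])

def haar_packet(x, levels):
    """Full wavelet *packet* tree: both the approximation and the detail
    branches are split again at every level, yielding the orthogonal
    sub-band set used as subcarriers in WPT-OFDM."""
    bands = [x]
    for _ in range(levels):
        nxt = []
        for band in bands:
            lo, hi = haar_step(band)
            nxt += [lo, hi]
        bands = nxt
    return bands

def haar_packet_inverse(bands, levels):
    """Invert the packet tree; exact because the Haar basis is orthonormal."""
    s = 0.5 ** 0.5
    for _ in range(levels):
        nxt = []
        for lo, hi in zip(bands[::2], bands[1::2]):
            band = []
            for a, d in zip(lo, hi):
                band += [s * (a + d), s * (a - d)]
            nxt.append(band)
        bands = nxt
    return bands[0]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
bands = haar_packet(x, 2)            # 4 orthogonal sub-bands
y = haar_packet_inverse(bands, 2)    # recovers x exactly
```

Perfect reconstruction over an ideal channel is what lets WPT-OFDM drop the cyclic prefix and recover the CP's bandwidth and power overhead.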

Issue Info:
  • Year: 2019
  • Volume: 16
  • Issue: 2 (40)
  • Pages: 137-146
Measures:
  • Citations: 0
  • Views: 455
  • Downloads: 0
Abstract: 

In this paper, a novel graph-based method is proposed as a feature-extraction step for classifying sequences of variable length. The proposed method overcomes the problems traditional graphs have with variable-length data: without fixing the length of the sequences, it determines the most frequent instructions and maps the remaining instructions onto a single "other" node, saving time and memory. Based on the features and their similarities, a score is given to each sample and used for classification. To improve the results, the method is not used alone; in two approaches, it is combined with other existing techniques. In the first approach, which can be considered feature extraction, the features extracted by the scoring techniques (hidden Markov model, simple substitution distance, and similarity graph) on opcode sequences, hexadecimal sequences, and system calls are combined at the classifier input. The second approach consists of two steps. In the first step, the scores obtained from each scoring technique are given to three support vector machines; the outcomes are combined according to the weight of each technique, and the final decision is taken by majority vote. Among the support vector machine components, giving a higher weight to the similarity-graph method (the proposed method) yields better results, because the similarity-graph method is more accurate than the other two. In the second step, considering the strengths and benefits of each classifier, the classifier outputs are combined and majority voting is used. Three ensemble combination methods were tested: ensemble averaging, bagging, and boosting.
Ensemble averaging combines four classifiers: random forest, a support vector machine (as obtained in the previous step), k-nearest neighbors, and naive Bayes, with the final decision taken by majority vote; it is therefore used as the proposed method. The proposed approach detects metamorphic malware from the VX Heaven set and determines malware categories with an accuracy of 97%, while the SSD and HMM methods under the same conditions detect malware with accuracies of 84% and 80%, respectively.
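The fixed-size graph construction, keep only the most frequent opcodes as nodes and collapse everything else into "other", can be sketched as follows. This is a hypothetical illustration of the idea (the similarity score here is a simple edge-weight overlap, not necessarily the paper's exact scoring function):

```python
from collections import Counter

def opcode_graph(sequence, top_n=5):
    """Build an edge-frequency graph over the `top_n` most frequent
    opcodes; all other opcodes collapse into a single 'other' node,
    so sequences of any length map to a bounded-size graph."""
    common = {op for op, _ in Counter(sequence).most_common(top_n)}
    mapped = [op if op in common else "other" for op in sequence]
    edges = Counter(zip(mapped, mapped[1:]))
    total = sum(edges.values()) or 1
    return {e: c / total for e, c in edges.items()}

def graph_similarity(g1, g2):
    """Score two graphs by the overlap of their normalized edge-weight
    distributions: 1.0 = identical transition structure, 0.0 = disjoint."""
    keys = set(g1) | set(g2)
    return sum(min(g1.get(k, 0.0), g2.get(k, 0.0)) for k in keys)

g1 = opcode_graph(["mov", "add", "mov", "add"])
g2 = opcode_graph(["push", "pop", "push", "pop"])
```

A sample's score against known-malware graphs can then be fed, alongside the HMM and SSD scores, into the SVM ensemble described above.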

Issue Info: 
  • Year: 

    2019
  • Volume: 

    16
  • Issue: 

    2 (40)
  • Pages: 

    147-165
Measures: 
  • Citations: 

    0
  • Views: 

    633
  • Downloads: 

    0
Abstract: 

Soft computing models based on intelligent fuzzy systems can manage the uncertainty inherent in image-based disease diagnosis. Analysis and classification of breast tumors is critical for early diagnosis of breast cancer, a common cancer with a high mortality rate among women worldwide. Soft computing models based on fuzzy and evolutionary algorithms play an important role in the advances achieved by computer-aided detection (CAD) systems, combining the optimization power of swarm intelligence algorithms with the ability of fuzzy models to cope with uncertainty and complex environments. In this research, a fuzzy inference model is proposed for managing uncertainty in the input data. The main sources of uncertainty in breast tumor classification are modeled through linguistic terms, fuzzy variables, and fuzzy reasoning processes in the fuzzy inference model. Fuzzy linguistic terms and rule sets make the model interpretable and able to interact with clinicians. Furthermore, hybrid fuzzy-evolutionary models are proposed that tune the fuzzy membership functions for diagnosing malignant and benign breast tumors. The proposed hybrid evolutionary methods are: 1) fuzzy-genetic, 2) fuzzy-particle swarm intelligence, and 3) fuzzy-biogeography models. Combining nature-inspired evolutionary algorithms with the fuzzy inference model (FIM) improves the proficiency of the FIM by adapting it to the environment through a tuning process over training and testing datasets. To achieve this, the genetic algorithm was applied as the baseline evolutionary method. Then, the ability of the particle swarm intelligence algorithm to exploit the local and global experience of solutions in the search space was employed. Finally, the biogeographical behavior of species in finding optimal habitats with a high suitability index was leveraged in the optimization process of the FIM.
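A fuzzy inference step of the kind described above can be sketched with triangular membership functions and two Mamdani-style rules (min for AND, comparison of aggregated rule strengths for the decision). The feature names, term definitions, and rules below are purely illustrative assumptions; the abstract does not specify the paper's actual rule base.

```python
def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def classify(radius, density):
    """Two illustrative rules over hypothetical normalized features:
    R1: IF radius IS large AND density IS high THEN malignant
    R2: IF radius IS small AND density IS low  THEN benign
    """
    small = tri_mf(radius, -0.5, 0.0, 0.5)
    large = tri_mf(radius, 0.5, 1.0, 1.5)
    low = tri_mf(density, -0.5, 0.0, 0.5)
    high = tri_mf(density, 0.5, 1.0, 1.5)
    malignant = min(large, high)   # rule firing strengths (AND = min)
    benign = min(small, low)
    return "malignant" if malignant > benign else "benign"
```

The linguistic terms ("small", "large", "low", "high") are what make such a model readable by clinicians: each rule can be stated and checked in plain language.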
The evolutionary algorithms tune the fuzzy membership functions to improve the accuracy of the fuzzy inference model while preserving its simplicity and interpretability. For performance evaluation, an ROC curve analysis was conducted, a robust and reliable technique that represents the trade-off between a classification model's benefits and costs. For validation, 10-fold cross-validation was used to partition the dataset into training and testing sets for the evolutionary optimization algorithms. The performance of the proposed methods was evaluated on a dataset of 295 images and features extracted from the Mammographic Image Analysis Society (MIAS) dataset. The results reveal that the hybrid fuzzy-biogeography model outperforms the other evolutionary models, with an accuracy of 95.25% and an area under the ROC curve (AUC) of 91.43%. Comparison of the hybrid evolutionary models in this study with related methods for breast tumor classification on the MIAS dataset shows that the fuzzy-biogeography model offers the best trade-off between accuracy and interpretability, reaching 95.25% with only four extracted features. The fuzzy-GA and fuzzy-swarm intelligence models are competitive with the best results of counterpart methods, with 93.9% and 94.58% in terms of the AUC, respectively. The proposed fuzzy-evolutionary models are promising for diagnosing breast tumors in the early stages of the disease and for guiding suitable treatment.
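The genetic tuning loop described above can be sketched as a tiny GA over the three parameters of one triangular membership function. Everything here is an assumption for illustration: the one-rule classifier (label 1 when membership exceeds 0.5), truncation selection, and Gaussian mutation stand in for the paper's full rule base and operators.

```python
import random

def tri_mf(x, a, b, c):
    """Triangular membership function with feet a, c and peak b."""
    if x <= a or x >= c:
        return 0.0
    if x == b:
        return 1.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fitness(params, samples):
    """Accuracy of a one-rule classifier: predict 1 ('malignant')
    when membership exceeds 0.5 (illustrative decision rule)."""
    a, b, c = params
    correct = sum((tri_mf(x, a, b, c) > 0.5) == bool(label)
                  for x, label in samples)
    return correct / len(samples)

def ga_tune(samples, pop_size=20, gens=30, seed=0):
    """Tiny GA: truncation selection keeps the best half, then
    Gaussian mutation produces one child per survivor."""
    rng = random.Random(seed)
    pop = [tuple(sorted(rng.uniform(0, 1) for _ in range(3)))
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda p: fitness(p, samples), reverse=True)
        elite = pop[: pop_size // 2]
        children = [tuple(sorted(min(max(v + rng.gauss(0, 0.05), 0.0), 1.0)
                                 for v in parent))
                    for parent in elite]
        pop = elite + children
    return max(pop, key=lambda p: fitness(p, samples))
```

On a toy feature where malignant cases cluster near 0.7, the GA quickly finds a triangle whose high-membership region covers that cluster, just as the paper's evolutionary tuners reshape membership functions to fit the training data.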
