Scientific Information Database (SID) - Trusted Source for Research and Academic Resources

Journal Issue Information

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 3-14
Measures:
  • Citations: 0
  • Views: 832
  • Downloads: 801
Abstract: 

In order to have a fair market, it is crucial that regulators continuously monitor the stock market for possible fraud and market manipulation. Many types of fraudulent activity are defined in this context; in this paper we focus on "front running". According to the Association of Certified Fraud Examiners, front running relies on insider information and is therefore very difficult to detect. Front running is committed by brokerage-firm employees who are informed of a customer's large transaction request that could change the price by a substantial amount. The fraudster places his own order before that of the customer to enjoy the low price; once the customer's order is placed and the price has risen, he sells his shares and makes a profit. Detecting front running requires not only statistical analysis but also domain knowledge and filtering. For example, the authors learned from officials of Tehran's over-the-counter (OTC) stock exchange that fraudsters may use cover-up accounts to hide their identity, or may delay selling their shares to avoid suspicion. Before being able to present the case to a prosecutor, the analyst needs to determine whether predication exists; only then can he start testing and interpreting the collected data. Due to the large volume of daily trades, the analyst needs to rely on computer algorithms to reduce the list of suspicious transactions. One way to do this is to assign a risk score to each transaction. In this work we build two filters that determine the risk of each transaction based on the degree of statistical abnormality, using the Chebyshev inequality to flag anomalous transactions. In the first phase we focus on detecting a large transaction that changes the market price significantly. We then look at transactions around it to find people who profited as a consequence of that large transaction. We tested our method on two different stocks, with data kindly provided by the Tehran Exchange Market, and the officials confirmed that we were able to detect the fraudster.
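As a rough illustration of the Chebyshev-based risk scoring described above, the sketch below assigns each transaction a score derived from how many standard deviations its price impact lies from the sample mean; the column names and the exact risk mapping are assumptions for illustration, not the authors' implementation.

```python
import numpy as np
import pandas as pd

def chebyshev_risk(values):
    """Risk score in [0, 1): 1 - 1/k^2, where k is the number of standard
    deviations a value lies from the sample mean. By Chebyshev's inequality,
    P(|X - mu| >= k*sigma) <= 1/k^2, so values with k >> 1 are statistically
    rare regardless of the underlying distribution."""
    mu, sigma = values.mean(), values.std()
    k = np.abs(values - mu) / (sigma + 1e-12)
    return np.where(k > 1, 1.0 - 1.0 / k**2, 0.0)

# Hypothetical transaction log with a per-trade price-impact column.
trades = pd.DataFrame({
    "trade_id": [1, 2, 3, 4],
    "price_impact": [0.01, 0.02, 0.45, 0.015],
})
trades["risk"] = chebyshev_risk(trades["price_impact"])
print(trades.sort_values("risk", ascending=False))
```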

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 15-27
Measures:
  • Citations: 0
  • Views: 1603
  • Downloads: 715
Abstract: 

Filters are a particularly important class of LTI systems. Digital filters have a great impact on modern signal processing due to their programmability, reusability, and capacity to reduce noise to a satisfactory level. Over the past few decades, IIR digital filter design has been an important research field. Designing an IIR digital filter with desired specifications leads to a non-convex optimization problem, and designing such filters by minimizing the error between the frequency responses of the desired and designed filters, subject to constraints such as stability, linear phase, and minimum phase, using metaheuristic algorithms has gained increasing attention. The aim of this paper is to develop an IIR digital filter design method that provides relatively good time-response characteristics besides good frequency-response ones. One of the most important time characteristics required of digital filters for real-time applications is low latency. To design a low-latency digital filter, this paper minimizes the weighted partial energy of the filter's impulse response. By minimizing the weighted partial energy, the energy of the impulse response is concentrated at its beginning, which yields low latency in responding to inputs. This property, together with the minimum-phase property of the designed filter, leads to good time-domain specifications. In the proposed cost function, the maximum pole radius term is used to ensure a stability margin, the number of zeros outside the unit circle is considered to ensure the minimum-phase property, and a constant group delay is considered to achieve linear phase. Due to the non-convexity of the proposed cost function, three metaheuristic algorithms (GA, PSO, and GSA) are used for the optimization process. The reported results confirm the efficiency and flexibility of the proposed method for designing various types of digital filters (frequency-selective, differentiator, integrator, Hilbert transformer, equalizer, etc.) with low latency compared with traditional methods. A low-pass filter designed by the proposed method has a delay of only 1.79 samples, which is ideal for most applications.
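The latency-oriented cost described above can be sketched as follows: a weighted partial energy of the impulse response in which later samples are penalized more heavily, plus illustrative penalties on the maximum pole radius and on zeros outside the unit circle. The weighting profile, penalty coefficients, and filter coefficients below are assumptions for illustration, not the paper's exact cost function.

```python
import numpy as np
from scipy.signal import dimpulse, tf2zpk

def latency_aware_cost(b, a, n_imp=128, lam_stab=10.0, lam_minph=10.0):
    """Illustrative cost: weighted partial energy of the impulse response
    (later samples weighted more, pushing energy toward the start), plus
    penalties on the maximum pole radius (stability margin) and on the
    number of zeros outside the unit circle (minimum phase)."""
    _, (h,) = dimpulse((b, a, 1.0), n=n_imp)
    h = h.ravel()
    weights = np.arange(1, n_imp + 1)                 # assumed linearly growing weights
    partial_energy = np.sum(weights * h**2)
    z, p, _ = tf2zpk(b, a)
    stab_pen = max(np.max(np.abs(p)) - 0.95, 0.0)     # assumed pole-radius margin of 0.95
    minph_pen = np.sum(np.abs(z) > 1.0)
    return partial_energy + lam_stab * stab_pen + lam_minph * minph_pen

# A GA/PSO/GSA optimizer would search over (b, a) to minimize this cost.
b, a = np.array([0.2, 0.3, 0.2]), np.array([1.0, -0.4, 0.1])
print(latency_aware_cost(b, a))
```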

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 29-45
Measures:
  • Citations: 0
  • Views: 747
  • Downloads: 660
Abstract: 

This paper discusses the next stage of World Wide Web development, called the Semantic Web. Undoubtedly, the Web is one of the most important services on the Internet and has had the greatest impact on the spread of the Internet through human societies. Internet penetration has been an effective factor in the growth of the volume of information on the Web, and this massive growth has led to several problems, the most important of which is search. Nowadays, search engines use different techniques to deliver high-quality results, but search results are still far from ideal. It should also be noted that information retrieval techniques can increase search accuracy only to a certain extent. Most web content is designed for human consumption, and machines are only able to understand and manipulate data at the word level; this is the major limitation in providing better services to web users. The proposed solution is to represent web content in such a way that it is readily understandable and comprehensible to machines. This solution, which will lead to a huge transformation of the Web, is called the Semantic Web. The purpose of this research is to provide better results in response to the searches of Semantic Web users. In the proposed method, the expression searched by the user is examined according to its related topics. The response obtained from this stage enters a rating system, which consists of a fuzzy decision-making system and a hierarchical clustering system, to return better results to the user. It should be noted that the proposed method does not require any prior knowledge for clustering the data. In addition, the accuracy and comprehensiveness of the response are measured. Finally, the F test is applied to obtain a criterion for evaluating the performance of the algorithm and systems. The results show that the method presented in this paper provides a more precise and comprehensive response than similar methods and increases accuracy by 1.22% on average.
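As a loose illustration of the ranking stage, the sketch below clusters candidate result snippets hierarchically over TF-IDF vectors and re-scores them by combining query similarity with cluster cohesion through a simple triangular fuzzy membership. The scoring rule, breakpoints, and weights are assumptions for illustration only, not the paper's fuzzy decision system.

```python
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import AgglomerativeClustering
from sklearn.metrics.pairwise import cosine_similarity

def rank_results(query, snippets, n_clusters=2):
    vec = TfidfVectorizer()
    X = vec.fit_transform(snippets + [query])
    docs, q = X[:-1], X[-1]
    labels = AgglomerativeClustering(n_clusters=n_clusters).fit_predict(docs.toarray())
    sim = cosine_similarity(docs, q).ravel()
    # Triangular fuzzy membership on similarity: 0 below 0.1, 1 above 0.6 (assumed breakpoints).
    mu_sim = np.clip((sim - 0.1) / 0.5, 0.0, 1.0)
    # Cluster cohesion: mean query similarity of the snippet's own cluster.
    cohesion = np.array([sim[labels == labels[i]].mean() for i in range(len(snippets))])
    score = 0.7 * mu_sim + 0.3 * cohesion             # assumed aggregation weights
    return sorted(zip(snippets, score), key=lambda t: -t[1])

for s, sc in rank_results("semantic web search",
                          ["semantic web ontologies", "cooking recipes", "web search ranking"]):
    print(f"{sc:.3f}  {s}")
```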

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 47-59
Measures:
  • Citations: 0
  • Views: 478
  • Downloads: 531
Abstract: 

Elderly health is an important issue, since elderly people are priceless sources of experience in society. Older adults are more likely to be severely injured or to die following falls, so fast detection of such incidents may even save the life of the injured person. Several techniques have been proposed for fall detection, mostly falling into three classes. The first class is based on wearable or portable sensors [1-6], the second works with sound or vibration sensors [7-8], and the third is based on machine vision. Although the latter methods require cameras and image processing systems, access to inexpensive surveillance cameras has made them widely usable for the elderly. Motivated by this, this paper proposes a real-time technique in which the surveillance video frames of the person's room are processed. The proposed method is based on feature extraction and a type-II fuzzy algorithm for fall detection. First, using the improved visual background extraction (ViBe) algorithm, the pixels of the moving person are separated from those of the background. Then, from the resulting image of the moving person, six features are extracted: aspect ratio, motion vector, center of gravity, motion history image, the angle between the major axis of the bounding ellipse and the horizontal axis, and the ratio of the major to minor axis of the bounding ellipse. These features are given to a classifier; in this paper, an interval type-II fuzzy logic system (IT2FLS) is used. Three membership functions are considered for each feature; accordingly, the number of fuzzy rules for six features is very large, leading to high computational complexity. Since most of these rules are irrelevant or redundant for fall detection, an appropriate algorithm is used to select the most effective fuzzy membership functions. The multi-objective particle swarm optimization (MOPSO) algorithm is an effective tool for solving large-scale problems; here, this evolutionary algorithm selects the most effective membership functions so that classification accuracy is maximized while the number of selected membership functions is simultaneously minimized, resulting in a considerably smaller number of rules. To investigate the performance of the proposed algorithm, 136 videos of people's movements were produced, 97 of which contained falls and 39 of which showed normal (non-fall) activities. Three criteria are used for evaluation: accuracy (ACC), sensitivity (Se.), and specificity (Sp.). By changing the initial values of the ViBe parameters and re-tuning them frequently after multiple frames, moving objects are detected faster and with higher robustness against noise and illumination variations; this can be done by the proposed system even on microprocessors with low computational power. The results of applying the proposed approach confirm that the system is able to detect human falls quickly and precisely.
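To make the geometric features concrete, the sketch below derives an aspect ratio, an ellipse orientation, and an axis ratio from a binary foreground mask such as the one a ViBe-style background subtractor would produce; the mask source and this reduced feature set are illustrative assumptions, not the paper's full pipeline.

```python
import cv2
import numpy as np

def shape_features(foreground_mask):
    """Extract simple fall-related features from a binary person mask:
    bounding-box aspect ratio, ellipse angle, axis ratio, center of gravity."""
    contours, _ = cv2.findContours(foreground_mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    person = max(contours, key=cv2.contourArea)
    x, y, w, h = cv2.boundingRect(person)
    aspect_ratio = w / float(h)                       # grows when the person is lying down
    (cx, cy), (ax1, ax2), angle = cv2.fitEllipse(person)
    axis_ratio = max(ax1, ax2) / (min(ax1, ax2) + 1e-6)
    return {"aspect_ratio": aspect_ratio,
            "ellipse_angle": angle,
            "axis_ratio": axis_ratio,
            "center_of_gravity": (cx, cy)}

# Example on a synthetic horizontal blob (a rough stand-in for a fallen person).
mask = np.zeros((240, 320), np.uint8)
cv2.ellipse(mask, (160, 170), (100, 20), 0, 0, 360, 255, -1)
print(shape_features(mask))
```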

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 61-77
Measures:
  • Citations: 0
  • Views: 759
  • Downloads: 599
Abstract: 

The human face plays a very important role in a person's appearance, and many facial defects affect appearance significantly. Facial plastic surgeries can correct these defects, and the analysis of facial color images is very important due to its numerous applications in such surgeries. Different types of facial surgery, such as rhinoplasty, otoplasty, blepharoplasty, and chin augmentation, are performed to create a more attractive structure. Rhinoplasty and otoplasty are the most widely used facial plastic surgeries: the former is performed to correct the air passage, correct structural defects, and reshape the bone, cartilage, and soft nasal tissue, while the latter is performed to correct defects in the ear area. Developing tools for facial surgery analysis can help surgeons before and after surgery. The main purpose of this study is the anthropometric analysis of facial soft tissue based on image processing methods applicable to rhinoplasty and otoplasty. The proposed method includes three parts: (1) contour detection, (2) feature extraction, and (3) feature selection. An active contour model (ACM) based on local Gaussian distribution fitting (LGDF) is used to extract contours from the facial lateral view and the ear area. The LGDF model is a region-based model which, unlike other models such as the Chan-Vese (CV) model, is not sensitive to spatial intensity inhomogeneity in the image. The Harris corner detector (HCD) is applied to the extracted contour for feature extraction; HCD is based on computing the auto-correlation matrix and the change in gray values. In this study, the dataset of the orthogonal stereo imaging system of Sahand University of Technology (SUT), Tabriz, Iran has been used. After detecting the facial key points, metrics of the facial profile view and the ear area are measured. In the profile-view analysis, 7 angles used in rhinoplasty are measured; the ear anthropometry analysis includes measuring the length, width, and external angle. In the rhinoplasty analysis, the accuracy of the proposed method was about 90% for all measured parameters, and in the otoplasty analysis it was 96.432%, 97.423%, and 85.546% for measuring the length, width, and external angle of the ear on the AMI database, respectively. Using the proposed system in the planning of facial plastic surgeries can help surgeons in rhinoplasty and otoplasty analysis, and this research can be very effective in developing simulation and evaluation systems for these surgeries.
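As a small illustration of the key-point step, the sketch below runs the Harris corner detector on a contour image and keeps the strongest responses as candidate landmarks, then measures an angle at one of them; the detector parameters, threshold, and toy contour are assumptions, not the paper's anthropometric procedure.

```python
import cv2
import numpy as np

def contour_keypoints(contour_img, max_points=20):
    """Harris corner response on a grayscale contour image; returns the
    coordinates of the strongest responses as candidate landmarks."""
    response = cv2.cornerHarris(np.float32(contour_img), 2, 3, 0.04)  # blockSize=2, ksize=3, k=0.04
    ys, xs = np.where(response > 0.01 * response.max())               # assumed relative threshold
    order = np.argsort(response[ys, xs])[::-1][:max_points]
    return np.stack([xs[order], ys[order]], axis=1).astype(np.float64)

def angle_deg(p0, p1, p2):
    """Angle at vertex p1 formed by points p0-p1-p2 (e.g., a profile angle)."""
    v1, v2 = p0 - p1, p2 - p1
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2) + 1e-9)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Toy contour: an L-shaped polyline standing in for a lateral-view contour.
img = np.zeros((200, 200), np.uint8)
cv2.polylines(img, [np.array([[30, 170], [30, 60], [150, 60]], dtype=np.int32)], False, 255, 2)
pts = contour_keypoints(img)
if len(pts) >= 3:
    print(pts[:3], angle_deg(pts[0], pts[1], pts[2]))
```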

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 79-97
Measures:
  • Citations: 0
  • Views: 761
  • Downloads: 521
Abstract: 

"Coreference resolution" or "finding all expressions that refer to the same entity" in a text, is one of the important requirements in natural language processing. Two words are coreference when both refer to a single entity in the text or the real world. So the main task of coreference resolution systems is to identify terms that refer to a unique entity. A coreference resolution tool could be used in many natural language processing tasks such as machine translation, automatic text summarization, question answering, and information extraction systems. Adding coreference information can increase the power of natural language processing systems. The coreference resolution can be done through different ways. These methods include heuristic rule-based methods and supervised/unsupervised machine learning methods. Corpus based and machine learning based methods are widely used in coreference resolution task in recent years and has led to a good performance. For using such these methods, there is a need for manually labeled corpus with sufficient size. For Persian language, before this research, there exists no such corpus. One of the important targets here, was producing a through corpus that can be used in coreference resolution task and other associated fields in linguistics and computational linguistics. In this coreference resolution research, a corpus of coreference tagged phrases has been generated (manually annotated) that has about one million words. It also has named entity recognition (NER) tags. Named entity labels in this corpus include 7 labels and in coreference task, all noun phrases, pronouns and named entities have been tagged. Using this corpus, a coreference tool was created using a vector space machine, with precision of about 60% on golden test data. As mentioned before, this article presents the procedure for producing a coreference resolution tool. This tool is produced by machine learning method and is based on the tagged corpus of 900 thousand tokens. In the production of the system, several different features and tools have been used, each of which has an effect on the accuracy of the whole tool. Increasing the number of features, especially semantic features, can be effective in improving results. Currently, according to the sources available in the Persian language, there are no suitable syntactic and semantic tools, and this research suffers from this perspective. The coreference tagged corpus produced in this study is more than 500 times bigger than the previous Persian language corpora and at the same time it is quite comparable to the prominent ACE and Ontonotes corpora. The system produced has an f-measure of nearly 60 according to the CoNLL standard criterion. However, other limited studies conducted in Farsi have provided different accuracy from 40 to 90%, which is not comparable to the present study, because the accuracy of these studies has not been measured with standard criterion in the coreference resolution field.

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 99-116
Measures:
  • Citations: 0
  • Views: 468
  • Downloads: 513
Abstract: 

In this paper, a speech enhancement method based on sparse representation of data frames is presented. Speech enhancement is one of the most widely applicable areas of signal processing. The objective of a speech enhancement system is to improve either the intelligibility or the quality of speech signals; this is carried out using speech signal processing techniques that attenuate the background noise without distorting the speech signal. In this paper, we focus on single-channel enhancement of speech corrupted by additive Gaussian noise. In recent years, there has been increasing interest in employing sparse representation techniques for speech enhancement. Sparse representation makes it possible to capture the major information of the speech signal with a smaller number of basis dimensions than the original space. The capability of a sparse decomposition method depends on the learned dictionary and the match between the dictionary atoms and the signal features. An overcomplete dictionary is obtained in two main steps: a dictionary learning process and a sparse coding technique. In the dictionary selection step, a pre-defined dictionary such as a Fourier, wavelet, or discrete cosine basis can be employed; alternatively, a redundant dictionary can be constructed through a learning process, often based on alternating optimization strategies. In the sparse coding step, the dictionary is fixed and a sparse coefficient matrix with low approximation error is obtained. The goal of this paper is to investigate the role of data-based dictionary learning in speech enhancement in the presence of white Gaussian noise. The dictionary learning method in this paper is based on the greedy adaptive algorithm, a data-based dictionary learning technique. The dictionary atoms are learned from the data frames taken from the speech signals, so the atoms contain the structure of the input frames. The atoms are learned directly from the training data using a norm-based sparsity measure, to obtain a better match between the data frames and the dictionary atoms. The sparsity measure proposed in this paper is based on the Gini parameter: we present a new sparsity index using Gini coefficients in the greedy adaptive dictionary learning algorithm. These coefficients are designed to find atoms that are sparser than those found by other sparsity indices defined from the norms of speech frames. The proposed learning method iteratively extracts the speech frame with the minimum sparsity index according to this measure and adds it as an atom to the dictionary matrix. Also, the range of the sparsity parameter is selected based on the initial silent frames of the speech signal in order to build a suitable dictionary: a speech frame from the input data matrix is added to the first columns of the overcomplete dictionary only if it does not have a structure similar to the noise frames. The data-based dictionary learning process makes the algorithm faster than other dictionary learning methods such as K-singular value decomposition (K-SVD), the method of optimal directions (MOD), and other optimization-based strategies. The sparsity of an input frame is measured using the Gini-based index, which yields smaller values for speech frames because of their sparse content, whereas high values of this parameter are obtained for frames containing the Gaussian noise structure.
The performance of the proposed method is evaluated using different measures such as the improvement in signal-to-noise ratio (ISNR), the time-frequency representation of atoms, and PESQ scores. The proposed approach results in a significant reduction of the background noise in comparison with dictionary learning methods such as principal component analysis (PCA) and the norm-based learning method, which are traditional procedures in this context. The proposed speech enhancement method achieves good reconstruction error in the signal approximations, and it also leads to reasonable computation time, which is a prominent factor in dictionary learning methods.
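As a concrete illustration of Gini-based sparsity, the sketch below implements the standard Gini sparsity coefficient of a frame and compares a sparse, speech-like frame with a white Gaussian noise frame. Treating this exact formula as the paper's index is an assumption; note that the paper's index is described as taking smaller values for sparse speech frames, so it is presumably a decreasing transform of the coefficient shown here.

```python
import numpy as np

def gini_sparsity(frame):
    """Standard Gini sparsity coefficient of a frame: near 0 for a flat
    (non-sparse) frame, approaching 1 when the energy is concentrated
    in only a few samples."""
    x = np.sort(np.abs(frame))
    n = x.size
    l1 = x.sum()
    if l1 == 0:
        return 0.0
    k = np.arange(1, n + 1)
    return 1.0 - 2.0 * np.sum((x / l1) * (n - k + 0.5) / n)

# A sparse (speech-like) frame vs. a white Gaussian noise frame.
rng = np.random.default_rng(0)
speech_like = np.zeros(256)
speech_like[:8] = rng.normal(0, 1, 8)
noise_frame = rng.normal(0, 1, 256)
print(gini_sparsity(speech_like), gini_sparsity(noise_frame))
# A greedy adaptive learner would repeatedly pick the frame with the most
# extreme sparsity index and append it to the dictionary as a new atom.
```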

Author(s): 

Asheghi Dizaji Zahra | Asghari Aghjeh Dizaj Sakineh | Soleimanian Gharehchopogh Farhad

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 117-129
Measures:
  • Citations: 0
  • Views: 640
  • Downloads: 576
Abstract: 

Due to the exponential growth of electronic texts, organizing and managing them requires tools that can deliver the information users search for in the shortest possible time. Classification methods have therefore become very important in recent years. In natural language processing, and especially text processing, one of the most basic tasks is automatic text classification; it is also one of the most important parts of data mining and machine learning. Classification can be considered the most important supervised technique: it partitions the input space into k groups based on similarity and difference, such that targets in the same group are similar and targets in different groups are different. Text classification systems have been widely used in many fields, such as spam filtering, news classification, web page detection, bioinformatics, machine translation, automatic response systems, and applications for the automatic organization of documents. The key to an efficient text classification method is the extraction and selection of the key features of the texts. It has been shown that only 33% of the words and features of texts are useful for extracting information; most words in a text serve to express its purpose and are often repeated. Feature selection is known as a good solution to the high dimensionality of the feature space: an excessive number of features not only increases computation time but also degrades classification accuracy. In general, the purpose of extracting and selecting text features is to reduce the data volume, the training time, and the computational cost, and to increase the speed of the proposed text classification methods. Feature extraction refers to the process of generating a small set of new features by combining or transforming the original ones, while in feature selection the dimension of the space is reduced by selecting the most prominent features. In this paper, a method for improving the support vector machine algorithm using the imperialist competitive algorithm is provided: the imperialist competitive algorithm is used for feature selection and the support vector machine for classifying the texts. In the feature extraction stage, weighting schemes such as NORMTF, LOGTF, ITF, SPARCK, and TF are used to allocate each extracted word a weight that determines its role as a keyword of the text; the weight of each word indicates the extent of its effect on the main topic of the text compared to the other words in the same text. The proposed method uses the TF weighting scheme, in which the features are a function of the distribution of the different features in each of the documents. Moreover, at this stage a pruning process removes low-frequency features, i.e. words that are used fewer than two times in the text; pruning basically filters low-frequency features in a text [18]. To reduce the dimensionality of the features and decrease the computational complexity, the imperialist competitive algorithm (ICA) is utilized in the proposed method. The main goal of employing ICA is to minimize the loss of information in the texts while maximizing the reduction of the feature dimensions.
Since ICA is used for feature selection, a mapping must be created between the parameters of ICA and the proposed method. Accordingly, when using ICA to select the key features, the search space consists of the feature dimensions, and a fraction of all the extracted features is attributed to each country. Since this mapping is carried out randomly, a country may also contain repeated features. Next, following the general procedure of ICA, the more powerful countries are considered imperialists while the remaining countries are considered colonies. Once the countries are identified, the optimization process begins. Each country is defined as an array of variable values, as in Equations 2 and 3: (2) Country = [p1, p2, ..., pN]; (3) Cost = f(Country). The variables attributed to each country can be structural features, lexical features, semantic features, the weight of each word, and so on; accordingly, the power of each country for identifying the class of each text increases or decreases based on its variables. One of the most important phases of ICA is the imperialistic competition phase, in which every imperialist tries to increase the number of colonies it owns, and the more powerful empires try to seize the colonies of the weakest empires. In the proposed method, colonies with the highest classification error and the largest number of features are considered the weakest empires. Based on trial and error, and considering the target function of the proposed method, the number of key features relevant to the main topic of the texts is set to a fixed fraction of the total extracted features, and the class of a text is determined using only these key features together with a classifier such as a support vector machine (SVM) or nearest neighbors. Since text classification is a nonlinear problem, the problem must first be mapped into a linear one; in this paper, the RBF kernel function is used for this mapping. The hybrid algorithm is implemented on the Reuters-21578, WebKB, and Cade12 data sets to evaluate the accuracy of the proposed method. The simulation results indicate that the proposed hybrid algorithm is more efficient than the basic support vector machine in terms of precision, recall, and F-measure.
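A compact sketch of the country/cost idea under stated assumptions: a country is a binary mask over features, and its cost combines the SVM classification error on the selected features with a penalty on how many features it keeps. The toy data, penalty weight, and random mask initialization below are illustrative, not the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def country_cost(mask, X, y, lam=0.01):
    """Cost = classification error with the selected features + lam * fraction kept.
    Lower cost means a 'more powerful' country in ICA terms."""
    if mask.sum() == 0:
        return 1.0
    Xs = X[:, mask.astype(bool)]
    acc = cross_val_score(SVC(kernel="rbf"), Xs, y, cv=3).mean()
    return (1.0 - acc) + lam * mask.mean()

# Toy data standing in for TF-weighted document vectors.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 40))
y = (X[:, 0] + X[:, 3] > 0).astype(int)                   # only two features are informative
countries = [rng.integers(0, 2, 40) for _ in range(5)]    # random binary feature masks
costs = [country_cost(m, X, y) for m in countries]
print(min(costs))   # an ICA loop would evolve the masks toward lower cost
```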

Author(s): 

Hadizadeh Hadi

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 131-146
Measures:
  • Citations: 0
  • Views: 335
  • Downloads: 429
Abstract: 

Compressive sampling (CS) is a new technique for simultaneous sampling and compression of signals in which the sampling rate can be very low under certain conditions. Due to the limited number of samples, image reconstruction from CS samples is a challenging task. Most existing CS image reconstruction methods have high computational complexity because they operate on the entire image. To reduce this complexity, block-based CS (BCS) image reconstruction algorithms have been developed, in which the image sampling and reconstruction processes are applied block by block. In almost all existing BCS methods, a fixed transform is used to obtain a sparse representation of the image; however, such fixed transforms usually do not achieve very sparse representations, thereby degrading the reconstruction quality. To remedy this problem, we propose an adaptive block-based transform that exploits the correlation and similarity of neighboring blocks to achieve sparser transform coefficients. We also propose an adaptive soft-thresholding operator applied to the transform coefficients to reduce any potential noise and perturbations produced during the reconstruction process, and also to impose sparsity. Experimental results indicate that the proposed method outperforms several prominent existing methods according to four different popular image quality assessment metrics.
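The soft-thresholding step can be sketched as follows: shrink each transform coefficient toward zero by a threshold, where the threshold here is chosen per block from a robust noise estimate of the coefficients. This particular threshold rule and multiplier are assumptions for illustration, not the paper's adaptive operator.

```python
import numpy as np

def soft_threshold(coeffs, t):
    """Classic soft-thresholding: shrink magnitudes by t, zeroing anything below it."""
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def adaptive_soft_threshold(block_coeffs):
    """Pick the threshold per block from a robust noise estimate
    (median absolute deviation of the coefficients), then shrink."""
    sigma = np.median(np.abs(block_coeffs - np.median(block_coeffs))) / 0.6745
    return soft_threshold(block_coeffs, 1.5 * sigma)      # assumed multiplier

rng = np.random.default_rng(1)
coeffs = np.concatenate([rng.normal(0, 5, 5), rng.normal(0, 0.2, 59)])  # few large, many small
print(np.count_nonzero(adaptive_soft_threshold(coeffs)))  # most small coefficients are zeroed
```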

Issue Info:
  • Year: 2020
  • Volume: 17
  • Issue: 1 (43)
  • Pages: 147-158
Measures:
  • Citations: 0
  • Views: 444
  • Downloads: 471
Abstract: 

Heterogeneous wireless sensor networks consist of several different types of sensor nodes deployed in a particular area. Different sensor types can measure different quantities of a source, and by combining different measurement techniques the minimum number of sensors required in localization problems is reduced. In this paper, we focus on single-source localization in a heterogeneous sensor network containing two types of passive anchor nodes: omni-directional sensors and vector sensors. An omni-directional sensor can simply measure the received signal strength (RSS) without any additional hardware. An acoustic vector sensor (AVS), on the other hand, consists of a velocity-sensor triad and an optional acoustic pressure sensor, all spatially collocated in a point-like geometry. The velocity-sensor triad has an intrinsic ability for direction finding; moreover, despite its directivity, it can isotropically measure the received signal strength and thus has the potential to be used in RSS-based ranging methods. Employing a heterogeneous sensor pair consisting of one vector sensor and one omni-directional sensor, this study aims to obtain an unambiguous estimate of the location of an unknown source in three-dimensional (3D) space. Using a velocity-sensor triad as an AVS, it is possible to determine the direction of arrival (DOA) of the source without any restriction on the spectrum of the emitted signal. However, range estimation is challenging when the target is closer to the omni-directional sensor than to the vector sensor. The existing method proposed for such a configuration suffers from a fundamental limitation in localization coverage: it cannot provide an estimate of the target range for 50 percent of target locations because of its dependence on the relative sensor-target geometry. Our proposed method can be summarized as follows. Initially, the target's DOA is estimated from the velocity-sensor triad's data. Then, using the estimated DOA and the RSS measured by the two sensors, we propose a computationally efficient algorithm for uniquely estimating the target range. To this end, the ratio of the RSS measured by the two sensors is defined and shown to be a monotonic function of the target range, and the bisection search method is used to find an estimate of the range. Since the proposed algorithm is based on a bisection search, a solution for the target range is guaranteed regardless of its location. Moreover, a set of future aspects and trends is identified that might be interesting for future research in this area. With its low computational complexity, the proposed method enlarges the coverage area to roughly twice that of the existing method. Simulations confirm the speed and accuracy of the developed algorithm and show its robustness against various target ranges and different sensor spacings.
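A sketch of the range-recovery idea under stated assumptions: with an inverse-square path-loss model and a known baseline between the two sensors, the RSS ratio along the estimated DOA varies monotonically with range (for the geometry chosen here), so bisection can invert a measured ratio. The propagation model, sensor positions, and tolerances are illustrative, not the paper's exact formulation.

```python
import numpy as np

def rss_ratio(r, u, s_omni):
    """Predicted RSS(vector)/RSS(omni) for a target at range r along unit DOA u,
    assuming inverse-square path loss and sensors at the origin and at s_omni."""
    d_omni = np.linalg.norm(r * u - s_omni)
    return d_omni**2 / r**2

def estimate_range(ratio_meas, u, s_omni, r_lo=0.1, r_hi=1e3, tol=1e-6):
    """Bisection on the monotonic ratio-vs-range curve to invert a measured ratio."""
    for _ in range(200):
        r_mid = 0.5 * (r_lo + r_hi)
        if rss_ratio(r_mid, u, s_omni) > ratio_meas:   # ratio decreases with range here
            r_lo = r_mid
        else:
            r_hi = r_mid
        if r_hi - r_lo < tol:
            break
    return 0.5 * (r_lo + r_hi)

u = np.array([-1.0, 2.0, 1.0]); u /= np.linalg.norm(u)   # DOA estimated by the AVS triad
s_omni = np.array([10.0, 0.0, 0.0])                      # omni sensor position (assumed)
true_r = 25.0
print(estimate_range(rss_ratio(true_r, u, s_omni), u, s_omni))  # recovers ~25.0
```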
