Archive

Scientific Information Database (SID) - Trusted Source for Research and Academic Resources
Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    1-11
Measures: 
  • Citations: 

    0
  • Views: 

    432
  • Downloads: 

    0
Abstract: 

Increasing agricultural production in the country faces many problems due to climatic conditions, limited water resources, the scarcity of suitable agricultural land, and financial constraints. Therefore, to secure the food supply, the efficiency of production factors, especially water and soil, must be increased, and this requires regular crop monitoring. Remote sensing is one of the most important techniques for agricultural crop monitoring. SAR remote sensing can bridge the gap between the need for crop information over large areas and the necessity of frequent observations, since SAR observables are sensitive to various crop characteristics. Today, developing large-scale monitoring methods is an important issue for the sound management of natural resources, especially in populous countries. The purpose of this study is to monitor and retrieve agricultural crop parameters using a time series of polarimetric interferometric synthetic aperture radar (PolInSAR) images. PolInSAR time series combine intensity, polarimetric, and interferometric observations and therefore carry a large amount of information on various crops. Optical, intensity, and polarimetric data are not suitable for retrieving some crop parameters, such as height; interferometric data, however, can support monitoring and retrieving these parameters during the growing season. The proposed monitoring method is based on features derived from a decomposition model together with regression-based methods. First, an optimal polarization basis with the maximum correlation between the slave and master images is calculated. Eigenvalue decomposition is then applied to the polarimetric interferometric covariance matrix in that optimal basis, and features such as entropy and alpha are calculated.
Some of these features have a strong linear relationship with height, biomass, and phenology, while others provide useful information that improves estimation performance. Finally, the crop parameters are estimated from the 13 PolInSAR features using an artificial neural network and support vector regression. Validation is carried out on E-SAR images of the DEMMIN region in Germany, acquired between May and June 2006, for which ground data over the growth cycle are available. The results for wheat and barley indicate the good performance of the proposed method in monitoring and retrieving the parameters. Both estimators, the neural network and support vector regression, yield good estimates of the crop parameters and can be used for crop monitoring. For wheat, for example, the RMSE values for height, biomass, and phenology were 0.21, 0.59, and 0.21 with the neural network, and 0.21, 0.52, and 0.46 with support vector regression, respectively. Height and phenology are estimated better than biomass, and the neural network has a relatively higher computational cost. The proposed method can be an appropriate alternative to the empirical and physical models for estimating parameters such as height, which cannot be used when data with a suitable baseline are unavailable.
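The entropy feature mentioned above can be illustrated with a minimal sketch. This is the generic eigenvalue-based entropy of H/alpha-style decompositions, not the authors' exact PolInSAR implementation, and the eigenvalues below are invented for illustration:

```python
import math

def entropy_from_eigenvalues(eigvals):
    """Polarimetric entropy H from the eigenvalues of a coherency matrix:
    H = -sum(p_i * log3(p_i)) with pseudo-probabilities p_i = l_i / sum(l)."""
    total = sum(eigvals)
    ps = [lv / total for lv in eigvals]
    return -sum(p * math.log(p, 3) for p in ps if p > 0)

# Fully depolarized target (three equal eigenvalues) -> H = 1
print(entropy_from_eigenvalues([1.0, 1.0, 1.0]))
# A single dominant scattering mechanism -> H close to 0
print(entropy_from_eigenvalues([1.0, 1e-9, 1e-9]))
```

H near 0 indicates one dominant scattering mechanism (e.g. bare soil), H near 1 indicates volume-like depolarized scattering, which is why it tracks crop development.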

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    13-34
Measures: 
  • Citations: 

    0
  • Views: 

    519
  • Downloads: 

    0
Abstract: 

Optical and polarimetric synthetic aperture radar (PolSAR) earth observations offer valuable sources of information for agricultural applications and crop mapping. Various spectral features, vegetation indices, and textural indicators can be extracted from optical data. These features contain information about the reflectance and spatial arrangement of crop types. By contrast, PolSAR data provide quad-polarization backscattering observations and target decompositions, which give information about the structural properties and scattering mechanisms of different crop types. Combining these two sources of information can provide a complementary data set with a significant number of spectral, textural, and polarimetric features for crop mapping and monitoring. Moreover, a temporal combination of both observations may lead to more reliable results than single-date observations. However, cropland classification with this large amount of information faces several challenges. The first is the possible correlation among some optical or radar features, which leads to redundant features. Moreover, some optical or radar features may have low relevance to some or all crop types. These two issues increase the complexity and computational load of classification. In addition, when the ratio of the number of samples to the number of features is very low, the curse of dimensionality may occur. Another challenge is the imbalanced distribution among crop types, the so-called imbalanced data. Various classifiers have been applied to cropland classification from optical and radar data; among them, multiple classifier systems (MCS), and especially the random forest (RF), have proven effective. The main aim of this paper is to present an alternative to RF that can address two of these challenges, the curse of dimensionality and imbalanced data, simultaneously.
The proposed MCSs also modify the feature selection and fusion steps of RF. The two methods are called balanced filter forest (BFF) and cost-sensitive filter forest (CFF). The study area was the southwest district of Winnipeg, Manitoba, Canada, which is covered by various annual crops. The data were bi-temporal optical and radar images acquired by the RapidEye satellites and the UAVSAR system. RapidEye is a spaceborne platform with five spectral channels: blue (B), green (G), red (R), NIR, and RE. Two optical images, collected on 5 and 14 July 2012, were orthorectified on the local North American 1983 datum (NAD-83) with a spatial resolution of about 5 m. The UAVSAR sensor is an airborne SAR sensor operating at L-band in full polarization mode (i.e., HH, HV, VH, and VV). The radar images were acquired simultaneously with the optical images. They were orthorectified on the World Geodetic System 1984 datum (WGS-84) with an SRTM3 digital elevation model, and multilooked by 2 pixels in azimuth and 3 pixels in range. Moreover, a de-speckling process using a 5 × 5 boxcar filter was applied to alleviate the speckle effect, giving a spatial resolution of approximately 15 m. The results indicated that the proposed methods could increase the overall accuracy by up to 10% and run up to six times faster than the classical RF method.
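The class-balancing idea underlying methods such as BFF can be sketched with a balanced bootstrap, in which each tree of an ensemble is trained on an equal number of samples per class. This is a generic illustration with made-up class labels, not the published BFF/CFF algorithms:

```python
import random
from collections import Counter

def balanced_bootstrap(labels, rng):
    """Draw a class-balanced bootstrap sample: the same number of indices
    (the minority-class count) is drawn, with replacement, from every class."""
    by_class = {}
    for i, y in enumerate(labels):
        by_class.setdefault(y, []).append(i)
    n = min(len(v) for v in by_class.values())
    sample = []
    for idxs in by_class.values():
        sample += [rng.choice(idxs) for _ in range(n)]
    return sample

rng = random.Random(0)
labels = ["wheat"] * 90 + ["canola"] * 10  # imbalanced training set
sample = balanced_bootstrap(labels, rng)
print(Counter(labels[i] for i in sample))  # 10 of each class
```

Each tree then sees both classes equally often, so the rare crop class is not drowned out by the majority class during training.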

Author(s): 

HAMIDI M. | EBADI H. | KIANI A.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    35-52
Measures: 
  • Citations: 

    0
  • Views: 

    444
  • Downloads: 

    0
Abstract: 

Building detection from remote sensing images is of significant importance for map updating, urban monitoring, and a wide range of other applications. High-spatial-resolution images are an important data source for geospatial information extraction. They provide extraordinary facilities for extracting features such as buildings and for spatial analysis in urban areas. However, the task suffers from problems caused by spectral complexity in the image scene. Since high-resolution images contain many scene details, such as non-homogeneous building roofs with sloping and flat parts, they can exhibit varied spectral properties. Also, because similar materials are used, some buildings cannot be completely separated from streets and parking lots. To overcome these issues, the use of neighborhood and height information is essential. Accordingly, the major part of this research is the use of the spatial features of adjacent pixels in a multispectral image, together with elevation data, to increase the accuracy of building classification. On the one hand, by expanding the feature space in a quasi-deep manner, the goal was to train the classification algorithm on higher-level and more comprehensive information. However, not all the features available for distinguishing buildings from non-buildings are useful. On the other hand, because of the large amount of input data and the increased computing time and memory required, feature selection is necessary to reduce the processing cost. Despite many efforts over the past decades to develop automatic building detection methods for these images, high-performance methods are still unavailable because of uncertainties such as optimal feature selection.
Therefore, with a view to improving automatic building detection from remote sensing data, this study proposes a new hybrid approach that selects the optimal features of a large dataset in a reasonable time. The proposed method first extracts high-level features for optimal building detection using quasi-deep texture structures. It then selects the optimal features by integrating a developed AdaBoost algorithm (Confidence-Based AdaBoost) with support vector machines optimized by particle swarm optimization (CB-SVMpso), and performs binary classification into building and background. The experiments were performed on the standard Vaihingen dataset in Germany, and the results of the proposed method were compared with efficient machine learning methods. A comparison was also made between the quasi-deep feature sets and traditional GLCM textures. To keep the final results unbiased, no pre-processing or post-processing steps were applied. The experimental results showed that, on average, the highest overall accuracy and kappa coefficient obtained by the proposed method were 93.25% and 83.06%, respectively; compared with conventional methods, the kappa coefficient increased by 7.27% and the computational time was halved, indicating the reliability and efficiency of the proposed method in detecting the majority of buildings.
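The boosting component can be illustrated with one round of the classic discrete AdaBoost weight update, in which misclassified samples gain weight. This is the textbook rule, not the paper's Confidence-Based AdaBoost variant, and the toy weights are invented:

```python
import math

def adaboost_round(weights, correct):
    """One discrete AdaBoost step: compute the weak learner's weighted error,
    its vote alpha, and the re-normalized sample weights (misclassified up)."""
    eps = sum(w for w, c in zip(weights, correct) if not c) / sum(weights)
    alpha = 0.5 * math.log((1 - eps) / eps)
    new_w = [w * math.exp(-alpha if c else alpha) for w, c in zip(weights, correct)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

# Four equally weighted samples; the weak learner misses the last one.
alpha, w1 = adaboost_round([0.25] * 4, [True, True, True, False])
print(round(w1[3], 6))  # 0.5: the missed sample now carries half the weight
```

The next weak learner is trained against these re-weighted samples, which forces it to concentrate on the building pixels the previous learner confused with background.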

Author(s): 

TALEBI D. | KARIMI M.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    53-68
Measures: 
  • Citations: 

    0
  • Views: 

    363
  • Downloads: 

    0
Abstract: 

Recent decades have witnessed the use of the multi-version approach in multi-scale spatial databases. Multi-scale spatial databases developed with this method have disadvantages, such as storage and time issues and economic problems. As multi-scale spatial databases are among the basic components of Spatial Data Infrastructures and play a ruling role in the construction of digital cities, this method cannot ensure economic benefits. Today, a generalization process is used to resolve these disadvantages. Because individual preferences enter into choosing and adjusting the parameters, rules, operators, and algorithms, different spatial databases are generated, and selecting the best alternative becomes a significant problem. Considering the importance of the generalization process in building multi-scale spatial databases and the variety of outputs produced by different generalization algorithms and operators, selecting the best and most similar output is necessary. The main aim and innovation of this research is an integrated approach for calculating the spatial similarity degree of the outputs of generalization algorithms applied to linear topographic databases in multi-scale space, combining models and criteria that express the spatial similarity of individual linear objects with matching methods for groups of linear objects. In this paper, road networks are used as representatives of linear topographic databases. First, the Douglas-Peucker generalization algorithm was used to produce different outputs of the road networks. Two road networks at scales of 1:25000 and 1:50000 were used, and the Douglas-Peucker algorithm was run with multiple thresholds on the 1:25000 road network.
Then the matching process was conducted between the 1:50000 road network and each of the outputs generated by the generalization algorithm, using five geometric criteria: tangent function, direction, length-based median Hausdorff distance, buffer common area, and length. For each pair of matched road networks, the F-score, which represents the accuracy of the matching process, was calculated, and those with an F-score above 95% were chosen for the calculation of the spatial similarity degree. After matching and selecting road networks with an F-score above 95%, the spatial similarity degree between the selected road networks was calculated using four kinds of relations: distance, direction, topology, and attribute. The distance relations include density, length, number of lines, straight-line distance, sinuosity, complexity, linear object area, curvilinearity, and tangent function. The direction relations include direction difference and average angularity. The topological relations include topology difference, number of points, and degrees of points, and the attribute relation includes a significance value. Finally, the total spatial similarity degree was calculated for each pair of compared road networks. The results show that the Douglas-Peucker algorithm with a 3-meter tolerance has the highest spatial similarity among the different outputs, with a spatial similarity degree of 77.106 percent. In future research, to increase the matching accuracy, it is suggested that topological and attribute criteria be used in the matching process in addition to the geometric criteria used in this study.
It is also recommended that, in addition to the criteria used here to calculate the spatial similarity degree between two road networks, topological criteria describing the relationships between lines in the two networks be applied, as well as descriptive criteria such as road width and road degree. Furthermore, besides the Douglas-Peucker algorithm, other generalization algorithms are recommended for producing road networks at different scales, in order to identify the smaller-scale road networks most similar to the large-scale ones.
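The Douglas-Peucker algorithm used above admits a compact recursive sketch: keep the endpoints, and keep the intermediate point farthest from the chord only while its offset exceeds the tolerance. The toy polylines are invented:

```python
def perp_dist(p, a, b):
    """Distance from point p to the infinite line through a and b."""
    (x, y), (x1, y1), (x2, y2) = p, a, b
    dx, dy = x2 - x1, y2 - y1
    if dx == dy == 0:
        return ((x - x1) ** 2 + (y - y1) ** 2) ** 0.5
    return abs(dy * x - dx * y + x2 * y1 - y2 * x1) / (dx * dx + dy * dy) ** 0.5

def douglas_peucker(points, tol):
    """Recursively keep the farthest intermediate point while its offset
    from the chord exceeds the tolerance; otherwise keep only the endpoints."""
    if len(points) < 3:
        return list(points)
    dists = [perp_dist(p, points[0], points[-1]) for p in points[1:-1]]
    imax = max(range(len(dists)), key=dists.__getitem__) + 1
    if dists[imax - 1] <= tol:
        return [points[0], points[-1]]
    left = douglas_peucker(points[:imax + 1], tol)
    right = douglas_peucker(points[imax:], tol)
    return left[:-1] + right

print(douglas_peucker([(0, 0), (1, 0.05), (2, 0)], 0.1))  # small bump dropped
print(douglas_peucker([(0, 0), (1, 2), (2, 0)], 0.1))     # sharp vertex kept
```

The tolerance plays the role of the thresholds varied in the experiments: larger tolerances drop more vertices and yield a coarser road network.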

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    69-82
Measures: 
  • Citations: 

    0
  • Views: 

    883
  • Downloads: 

    0
Abstract: 

The use of multi-source data, especially the fusion of optical and radar images, is a promising way to improve the interpretability of remote sensing data, leading to significant performance gains. In most practical situations, image classification methods face two important challenges: feature space generation and choosing an appropriate feature selection method. This paper aims at reducing the time required to achieve the optimal features. To this end, a new feature selection method is suggested based on the fusion of optical and radar images. In the proposed method, Minimal Redundancy-Maximal Relevance (MRMR) and Genetic Algorithms (GA) are combined, and the optimal features are sought to improve classification accuracy based on the SAR and optical data. First, the SAR data and the optical image are fused using a wavelet procedure. Afterward, several features are extracted from the fused image. In the next stage, feature selection is carried out with the GA method and with the combination of MRMR and GA, termed the MRMR-GA algorithm. Lastly, the fused image is classified using a support vector machine classifier. For the performance analysis, TerraSAR and Ikonos images acquired over Shiraz, Iran, are employed. The suggested method reaches an overall accuracy of 97.25 percent, 3% higher than SVM classification with the entire feature set. Moreover, the overall accuracies of the proposed approach and GA are approximately equal, while the MRMR-GA method runs approximately 2.5 times faster than GA. The obtained results therefore confirm the efficiency of the proposed feature selection method for image classification.
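The greedy MRMR criterion can be sketched as follows. Note that MRMR is usually stated with mutual information; this illustration substitutes absolute Pearson correlation as the relevance/redundancy measure, and the tiny feature set is invented:

```python
def pearson(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a) ** 0.5
    vb = sum((y - mb) ** 2 for y in b) ** 0.5
    return cov / (va * vb)

def mrmr(features, target, k):
    """Greedy mRMR: at each step pick the feature maximizing
    relevance |corr(f, target)| minus mean redundancy |corr(f, selected)|."""
    selected, remaining = [], list(range(len(features)))
    while len(selected) < k and remaining:
        def score(i):
            rel = abs(pearson(features[i], target))
            red = (sum(abs(pearson(features[i], features[j])) for j in selected)
                   / len(selected)) if selected else 0.0
            return rel - red
        best = max(remaining, key=score)
        selected.append(best)
        remaining.remove(best)
    return selected

t = [0, 0, 1, 1]
feats = [[0, 0, 1, 2],   # most relevant
         [0, 0, 2, 4],   # scaled copy of feature 0 (fully redundant)
         [1, 0, 1, 1]]   # weaker but complementary
print(mrmr(feats, t, 2))  # [0, 2]: the redundant copy is skipped
```

In MRMR-GA this filter ranking would prune the candidate pool before the genetic search, which is where the reported speed-up over plain GA comes from.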

Author(s): 

SEDAGHAT A. | MOHAMMADI N.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    83-97
Measures: 
  • Citations: 

    0
  • Views: 

    479
  • Downloads: 

    0
Abstract: 

Image registration aims to establish a precise geometric alignment between two images of the same scene taken at different times, from different viewpoints, or by different sensors. It is an essential task in diverse remote sensing and photogrammetry processes such as change detection, 3D modeling, information fusion, geometric correction, and bundle block adjustment. Nowadays, feature-based approaches are generally used for satellite image registration because of their resistance to geometric and radiometric variations. A feature-based image registration method comprises three main steps: (1) control point (tie-point) extraction, (2) transformation model computation, and (3) image resampling. In the first step, distinctive conjugate features are automatically extracted using various image matching approaches. In the second step, a suitable transformation model between the two images is computed using the extracted control points. In the third step, the input image is rectified to the geometry of the base image using the computed transformation model. Transformation models describe the spatial relation between the images and play a critical role in the positional accuracy of the registration process. Various transformation models have been proposed for remote sensing image registration; they are generally divided into two types: (a) global models and (b) adaptive models. Global models have a constant number of parameters and describe the global spatial relations between two images. In contrast, the number of parameters in adaptive models is not constant and varies with the severity of the geometric variations and the number of control points. Each transformation model has its strengths and drawbacks.
In this paper, the capability of several popular transformation models, including similarity, projective, polynomials of degrees 1 to 4, piecewise linear (PL), multiquadric (MQ), and pointwise (PW), is evaluated for high-resolution stereo satellite imagery. To extract highly accurate and well-distributed control points, an integrated image matching process based on the FAST (Features from Accelerated Segment Test) detector, the SIFT (Scale Invariant Feature Transform) descriptor, and the least squares matching algorithm is proposed. In this method, initial point features are efficiently extracted with the FAST algorithm in a gridding strategy. The well-known SIFT descriptors are then computed for the extracted features. Control points are determined by comparing feature descriptors in the two images, and their positional accuracy is improved with least squares matching. Two evaluation criteria, computation speed and positional accuracy, are used to investigate the capability of the transformation models. To investigate the effect of feature density on the quality of the transformation models, the extracted control points are divided into four classes with different numbers and distributions of control points. Experimental results on two high-resolution image pairs from the ZY3 and IKONOS sensors show that the adaptive MQ model provides the best results, followed by the PW and PL models. In contrast, the similarity, projective, and global polynomial models do not provide acceptable results for accurate registration of high-resolution remote sensing images. However, the computation time of adaptive models, especially MQ, is very high. The registration accuracy of the proposed approach with the MQ model is 2 and 1.9 pixels for the ZY3 and IKONOS images, respectively.
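As an illustration of the simplest global model, a degree-1 polynomial (affine) transformation can be fitted to control points by least squares. This generic sketch, with invented control points, is not the paper's implementation:

```python
def solve3(A, b):
    """Solve a 3x3 linear system by Gaussian elimination with partial pivoting."""
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0] * 3
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_affine(src, dst):
    """Least-squares affine fit x' = a*x + b*y + c (likewise for y'),
    via normal equations built from any number (>= 3) of control points."""
    def normal(target):
        A = [[0.0] * 3 for _ in range(3)]
        rhs = [0.0] * 3
        for (x, y), t in zip(src, target):
            row = (x, y, 1.0)
            for i in range(3):
                rhs[i] += row[i] * t
                for j in range(3):
                    A[i][j] += row[i] * row[j]
        return solve3(A, rhs)
    return normal([p[0] for p in dst]), normal([p[1] for p in dst])

src = [(0, 0), (1, 0), (0, 1), (1, 1)]
dst = [(2, 3), (3, 3), (2, 4), (3, 4)]  # a pure translation by (2, 3)
px, py = fit_affine(src, dst)  # recovers a=1, b=0, c=2 and a=0, b=1, c=3
```

Adaptive models such as MQ replace this single global polynomial with radial basis terms centered on the control points, which is why their parameter count, and their computation time, grows with the number of points.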

Author(s): 

BABADI M. | SATTARI M.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    99-113
Measures: 
  • Citations: 

    0
  • Views: 

    424
  • Downloads: 

    0
Abstract: 

In recent years, Light Detection and Ranging (LiDAR) systems, an active laser remote sensing technology, have become one of the most promising tools for measuring and modeling the Earth's surface. With the advent of airborne and satellite LiDAR systems, it has become possible to extract information and parameters related to the vertical structure of targets, especially trees, which was not possible with passive remote sensing data such as multispectral images. The point cloud generated by these sensors provides precise information about the targets along the laser path and their vertical distribution. Applications of these systems include forest management, measurement of forest parameters, digital terrain model generation, sea depth determination, polar ice thickness determination, 3D city modeling, bridge and power line detection, coastal mapping, open-cast mine mapping, and land cover classification. Because the output of early (discrete-return) LiDAR systems is merely a point cloud, with little associated intensity information, there are limitations in applications such as tree species classification and single-tree detection, especially in densely forested areas. Since 2004, new commercial airborne laser scanners, namely full-waveform LiDAR systems, have appeared. Recording the full waveform has made it possible to rectify some weaknesses of discrete systems, such as the low density of the generated point cloud and limitations in classification tasks; by providing features of the return waveforms such as the amplitude and intensity of the return pulses, these systems make it possible to classify different tree species and targets more precisely. One challenge with these data is how to decompose the return waveforms into a point cloud and the additional waveform-related information.
A great deal of research has been done in Iran on using discrete LiDAR data in forest management and 3D city modeling; however, full-waveform LiDAR data, the process of decomposing LiDAR waveforms into point clouds, and the different decomposition methods are still little known. Among the most important reasons are the unavailability of these data, insufficient knowledge about their nature, the lack of software, especially free software, for processing them, and the lack of information from the commercial firms producing LiDAR sensors. In this research, LiDAR waveforms of a forested area are investigated, showing how to decompose raw full-waveform LiDAR data into a 3D point cloud and extract the information and features related to each return waveform. In addition, the point cloud generated from the full-waveform data is compared with the point cloud delivered by the LiDAR sensor to show how full-waveform analysis can increase point cloud density. Finally, the generated point cloud is visualized based on its extracted features, such as amplitude, width, intensity, and number of returns, to show their application in clustering and classification tasks.
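The waveform model commonly used in such decompositions, a sum of Gaussian echoes, and the conversion of echo times to ranges can be sketched as follows; the component amplitudes and times below are invented:

```python
import math

C = 299_792_458.0  # speed of light, m/s

def gaussian_mixture(t, components):
    """Full-waveform model: a sum of Gaussian echoes A*exp(-(t-mu)^2/(2*s^2)),
    one component per reflecting surface along the laser path."""
    return sum(a * math.exp(-((t - mu) ** 2) / (2 * s * s)) for a, mu, s in components)

def echo_range(t_seconds):
    """Convert a two-way travel time to a one-way range in meters."""
    return C * t_seconds / 2

# Two hypothetical echoes, e.g. canopy top and ground: (amplitude, center, width)
comps = [(120.0, 20e-9, 2e-9), (80.0, 60e-9, 2e-9)]
print(round(echo_range(60e-9) - echo_range(20e-9), 2))  # ~6 m vertical separation
```

Decomposition runs this model in reverse: the fitted centers become extra 3D points (densifying the cloud), while the fitted amplitudes and widths become the per-point features used for classification.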

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    115-132
Measures: 
  • Citations: 

    0
  • Views: 

    369
  • Downloads: 

    0
Abstract: 

Since anomalies are unknown targets with low probabilities of occurrence that differ significantly from their neighbors, anomaly detection can be considered one of the most important information extraction approaches for hyperspectral data. Various parametric and non-parametric algorithms have been developed in this area since the 1990s. Recently, sparse representation methods have been introduced and successfully accepted as a useful tool for anomaly detection, based on recovering the majority of high-dimensional signals via a low-dimensional subspace through a dictionary of normalized signals called atoms. In other words, a dictionary composed of bases describing the background subspace enables the accurate recovery of background signals, while anomaly signals, assuming they deviate from the background subspace, will not be estimated precisely by the background dictionary. Hence the main idea of these anomaly detection methods is to evaluate the recovery errors of signals under a dictionary that describes the background subspace. In such a procedure, removing the atoms that describe anomalies from the background dictionary is one of the essential actions. To this end, introducing diversity into the definition of the spatial neighborhoods of spectral signals, together with voting-based judgment over different spatial configurations, is proposed. In other words, by designing an optimized local dictionary based on a local sliding window, the votes of each signal regarding the presence of an anomaly in each spatial neighborhood can be collected with the aim of reaching a better judgment. In this paper, a new anomaly detector for hyperspectral images is proposed based on simultaneous sparse representation using a new structured sliding window.
The main contribution of this research is to improve the judgment about the probability of an anomaly being present using information collected as the sliding window passes over each pixel under test. In this algorithm, each pixel experiences various spatial positions with respect to its neighbors through the transition of the sliding window. In each position, an optimized local background dictionary is learned using the well-known K-SVD method as an iterative process, and the sparse-coding recovery error for each pixel under test is calculated using the simultaneous orthogonal matching pursuit (SOMP) algorithm. The votes of each pixel regarding the presence of an anomaly in each neighborhood are thus collected, and finally the variance of these estimated errors is taken as the anomaly detection criterion. Experimental results on four datasets (synthetic and real) showed higher performance than the GRX, LRX, CRD, and BJSR detectors, with an average efficiency improvement of about 9%. Automatic tuning of the algorithm's parameters (the sparsity level and the size of the sliding window) and the development of parallel processing techniques to improve its running time are the focus of our future research. Notably, the success of this idea showed that voting algorithms and the combination of results form an efficient approach that could also be utilized in other hyperspectral image processing algorithms.
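The GRX baseline used in the comparison can be sketched for two-band data: each pixel is scored by its squared Mahalanobis distance from the global background statistics. The scene below is synthetic:

```python
def grx_scores(pixels):
    """Global RX score: squared Mahalanobis distance of each pixel from the
    scene mean, shown here for two-band pixels (2x2 covariance matrix)."""
    n = len(pixels)
    m = [sum(p[i] for p in pixels) / n for i in (0, 1)]
    c = [[sum((p[i] - m[i]) * (p[j] - m[j]) for p in pixels) / n
          for j in (0, 1)] for i in (0, 1)]
    det = c[0][0] * c[1][1] - c[0][1] * c[1][0]
    inv = [[c[1][1] / det, -c[0][1] / det], [-c[1][0] / det, c[0][0] / det]]
    scores = []
    for p in pixels:
        d = [p[0] - m[0], p[1] - m[1]]
        scores.append(d[0] * (inv[0][0] * d[0] + inv[0][1] * d[1])
                      + d[1] * (inv[1][0] * d[0] + inv[1][1] * d[1]))
    return scores

background = [(10 + i % 3, 20 + i % 2) for i in range(50)]
scene = background + [(60, 90)]  # one injected anomaly
scores = grx_scores(scene)
print(scores.index(max(scores)))  # 50: the injected pixel scores highest
```

The sparse-representation detector replaces this single global statistic with per-neighborhood dictionary recovery errors, which is what makes it more robust when the background is not one Gaussian cluster.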

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    133-150
Measures: 
  • Citations: 

    0
  • Views: 

    557
  • Downloads: 

    0
Abstract: 

Wheat is a strategic crop worldwide, and in Iran it is a major food source; for many years, one of the government's goals has been self-sufficiency in its supply. Estimating the winter wheat crop area and its spatial distribution across the country during the planting and growth period plays a vital role in value assessment, storage planning, and import-export planning. Remote sensing classification techniques based on a single image are widely used for this task, and remote sensing in general is a suitable solution to the problems of common, traditional inventory methods. However, a drawback of these classical methods is that the winter wheat spectral signature is very similar to that of some other crops growing simultaneously (e.g. barley, alfalfa), which limits classification performance. To improve classification accuracy, multi-temporal imagery acquired during the growth period is used instead. Classification algorithms based on ensemble classifiers are suitable tools for reducing the classification problems of time series images, and the random forest classifier is one appropriate technique for multi-temporal classification. In most studies on determining the area under wheat cultivation, wheat and barley are treated as a single class due to their high similarity. In this research, ten Landsat satellite images of Marvdasht, Fars province, with cloud cover of less than 20%, were selected.
Radiometric and atmospheric corrections were then applied to all images, and the time series analysis was performed as follows to increase the accuracy of wheat classification:
  • Analysis and selection of optimal images in the time series, instead of using all images, to decrease the computation volume and processing time.
  • Improving the classification results by producing new features in the time series, and surveying the resulting gain in final accuracy and in the ability of the proposed method to separate wheat from barley.
  • Determining the most important input features and the most suitable imaging dates by calculating variable importance in the random forest algorithm.
  • Early classification of wheat, separating it from other classes at least two months before harvest.
The results of this research are as follows. Although using the spectral bands of the time series images as input features of a random forest model increased accuracy compared to using a single image, these features alone do not yield high precision in separating wheat from barley (due to the high similarity of these two crops). If the two crops are considered a single class, this method can separate them from the other classes in the region with an overall accuracy of 89.5% and a Kappa coefficient of 93.1%. Using the time series of spectral bands, vegetation indices, and their differences increased the average overall accuracy, Kappa coefficient, and barley producer's accuracy; when only the three optimal images selected by the algorithm were used, barley producer's accuracy improved by 47%. By examining the variable importance of the random forest algorithm, the optimal images and the most important vegetation indices were recognized as the most effective features.
The results showed that among the vegetation indices and their differences, the difference of the STVI3 index, followed by EVI and MSAVI, for the images of 2015.4.17 and 2015.5.3, are the most important features. In addition, by surveying all features used in this research, it was determined that the images of 2015.1.11, 2015.4.17 and 2015.5.3 are the most important in the time series. Finally, through analysis of the importance of the developed features, we found that the various vegetation indices and the spectral gradient of the multi-temporal images' bands are the most important features for improving the classification results. Since this method surveys the stages of plant growth at different times and analyzes all the information extracted from the images as a whole, the suggested approach can be generalized to the classification of other crops, considering their different planting times, peak greenness, and harvest, and to other regions.
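The date-selection step can be illustrated with a toy stand-in: a simple Fisher-style separability score replaces random-forest variable importance, and the vegetation-index trajectories for wheat and barley are synthetic; only the date labels echo those discussed in the abstract.

```python
import numpy as np

def fisher_score(x_a, x_b):
    """Class-separability score (stand-in for random-forest variable
    importance): squared gap between class means over pooled variance."""
    num = (x_a.mean() - x_b.mean()) ** 2
    den = x_a.var() + x_b.var() + 1e-12
    return float(num / den)

rng = np.random.default_rng(1)
dates = ["2015-01-11", "2015-04-17", "2015-05-03"]
# synthetic index trajectories: wheat and barley diverge late in the season
wheat = np.column_stack([rng.normal(0.20, 0.05, 50),
                         rng.normal(0.70, 0.05, 50),
                         rng.normal(0.60, 0.05, 50)])
barley = np.column_stack([rng.normal(0.20, 0.05, 50),
                          rng.normal(0.55, 0.05, 50),
                          rng.normal(0.30, 0.05, 50)])

scores = {d: fisher_score(wheat[:, i], barley[:, i])
          for i, d in enumerate(dates)}
best_date = max(scores, key=scores.get)
```

The late-season date, where the synthetic trajectories diverge most, receives the highest importance, mirroring the role of the optimal acquisition dates identified by the paper.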

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    151-161
Measures: 
  • Citations: 

    0
  • Views: 

    389
  • Downloads: 

    0
Abstract: 

The compatibility of urban land uses is one of the important issues in optimizing their spatial arrangement. The most common way of mitigating the negative effects of conflicting land uses on each other is to maintain a certain distance between them. Due to the large amount of information that must be examined when optimizing the arrangement of different land uses, and the limitations of exact methods, researchers have focused on meta-heuristic methods (e.g. the genetic algorithm) to solve such problems. Furthermore, because multiple objectives and criteria must be considered, multi-objective optimization methods have received attention. To ensure adequate separation between incompatible land uses, the required distances can be entered as constraints into these optimization methods. In this research, a hybrid method is proposed to satisfy distance constraints in an optimization problem for locating multiple land uses. For this purpose, a multi-objective genetic algorithm was used to maximize the location suitability and compatibility of the land uses, and Simulated Annealing (SA) was applied to repair infeasible individuals and satisfy the distance constraints in the corresponding solutions. SA is a probabilistic technique for approximating the global optimum of a given function. It starts with an initial solution and selects a neighboring solution; if the neighbor is better than the current solution, it becomes the current solution; otherwise, the candidate is accepted as the current solution with an acceptance probability, in order to escape local optima. In this study, the solutions are generated by the genetic algorithm, and each gene of a chromosome represents the location of a candidate site. After generating the population, the distance constraints are checked and the infeasible solutions are identified.
A solution in which all the distance constraints are met is feasible; otherwise it is infeasible. Infeasible chromosomes were repaired as follows:
  • Identify the gene(s) that make the chromosome infeasible.
  • Identify the neighbors of those gene(s) according to the distances between genes.
  • Create new solutions using the neighbors.
  • Calculate the violation rate of each new solution.
  • If all new solutions are infeasible, the solution is replaced by the one with the minimum violation.
  • If only one feasible solution is generated, the initial solution is replaced by it.
  • If more than one feasible solution is generated, the objective function values are calculated for the feasible solutions, the non-dominated solutions are identified, and among them the solution closest to the initial solution is selected.
The results show that the proposed method can be effective in repairing infeasible individuals and converting them to feasible ones with regard to the distance constraints. In this method, several alternatives are generated for each infeasible individual, from which the feasible solution closest to the original one, with better objective function values, can be selected. Increasing the number of neighbors considered for each site in SA makes it easier to obtain feasible solutions; however, including farther neighbors may increase the distance between the initial solution and the new solution that replaces it.
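The SA repair loop described above can be sketched as follows. The sites, the single incompatibility pair, and the 2.0-unit minimum separation are illustrative assumptions, and the neighbor move is a simple random nudge rather than the paper's distance-ranked neighbor list.

```python
import math
import random

MIN_DIST = 2.0  # assumed required separation between incompatible uses

def violations(sol, incompatible):
    """Count incompatible site pairs closer than the minimum distance."""
    return sum(1 for i, j in incompatible
               if math.dist(sol[i], sol[j]) < MIN_DIST)

def repair(sol, incompatible, rng, steps=200, temp=1.0, cooling=0.97):
    """Simulated-annealing repair of an infeasible chromosome: nudge one
    site at a time, accepting worse moves with a temperature-dependent
    probability to escape local optima."""
    best = cur = list(sol)
    best_v = cur_v = violations(cur, incompatible)
    for _ in range(steps):
        cand = list(cur)
        k = rng.randrange(len(cand))
        x, y = cand[k]
        cand[k] = (x + rng.uniform(-1, 1), y + rng.uniform(-1, 1))
        cand_v = violations(cand, incompatible)
        # accept improvements always, worsenings with probability exp(dv/T)
        if cand_v <= cur_v or rng.random() < math.exp((cur_v - cand_v) / temp):
            cur, cur_v = cand, cand_v
            if cur_v < best_v:
                best, best_v = cur, cur_v
        temp *= cooling
    return best, best_v

rng = random.Random(0)
sites = [(0.0, 0.0), (0.5, 0.5), (5.0, 5.0)]  # sites 0 and 1 conflict
incompat = [(0, 1)]
repaired, remaining = repair(sites, incompat, rng)
```

Starting from one violated constraint, the loop drifts the conflicting sites apart until the chromosome becomes feasible, while the best-so-far solution is retained.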

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    163-171
Measures: 
  • Citations: 

    0
  • Views: 

    359
  • Downloads: 

    0
Abstract: 

Particulate matter (PM) with an aerodynamic diameter of less than 10 microns causes serious damage to human health. Moreover, its presence can have a critical impact on climate change, global warming, and the Earth's radiation budget. Therefore, obtaining precise information about PM concentration and spatial distribution is crucial for public health and environmental studies. A high concentration of PM10 is a major environmental and public health problem, especially for industrial and populated cities around the world. Thus, policymakers and environmental organizations have established pollution monitoring stations to measure various pollutants, including PM10. Obviously, it is not economically justifiable to establish many stations, so only a limited number of these instruments are located in each city. Although these instruments measure and record PM10 concentration with high precision, they only provide sparse point observations. In this case, remote sensing data can be utilized to fill this gap and solve the discontinuity problem. Generally, two kinds of remote sensing data that represent the pollutants present in the atmosphere can be used for this purpose: aerosol optical depth (AOD) and the aerosol contribution to apparent reflectance (ACR). ACR images can be calculated simply from any satellite image containing Red and SWIR (2.1 µm) bands, by estimating the surface reflectance (SR) of the Red band from the top-of-atmosphere reflectance (TOAR) of the SWIR band. The difference between the SR and TOAR of the Red band then represents the portion of atmospheric reflectance related to the pollutants present. In this study, we used ACR images instead of AOD data to estimate PM10 concentrations and produce PM10 pollution maps for the city of Tehran, Iran, for three reasons.
First, ACR images have better spatial resolution. Second, they are spatially continuous, in contrast to AOD data, which contain many gaps in the study area due to dark-target limitations in AOD retrieval. Lastly, no aerosol robotic network (AERONET) station, which would be required to evaluate the precision of retrieved AOD values, is located in this area. MODIS level-1B images (MOD02HKM) for 8 days in 2017, with corresponding ground measurements of PM10 concentration from 14 pollution stations, were utilized. Four regression models, namely linear, exponential, logarithmic, and power regression, were employed to estimate PM10 concentrations and produce the pollution map. Three criteria, namely R-squared, the correlation between estimated and observed (measured) PM10 concentrations, and the root mean square error (RMSE), were employed to investigate the performance of the four regression models. Based on the R-squared criterion, the linear regression model, with 0.5912, performs better than the exponential, logarithmic and power regressions, with R-squared values of 0.5826, 0.5808, and 0.5782, respectively. Since the four regression models performed differently across the three evaluation criteria, we applied a ranking method over the criteria to determine the best regression model. Based on this ranking, the exponential regression model performs better than the linear, logarithmic and power regressions.
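The four regression forms can be compared on synthetic data as below. The ACR values and the roughly linear PM10 relation are fabricated for illustration, and each non-linear form is linearized with a log transform before least-squares fitting, then scored by R-squared in the original space.

```python
import numpy as np

def r_square(y, y_hat):
    """Coefficient of determination of predictions y_hat against y."""
    ss_res = np.sum((y - y_hat) ** 2)
    ss_tot = np.sum((y - np.mean(y)) ** 2)
    return float(1.0 - ss_res / ss_tot)

def fit_models(acr, pm10):
    """Fit the four single-predictor regression forms compared in the
    paper (linear, exponential, logarithmic, power) by least squares on
    suitably transformed variables."""
    out = {}
    b, a = np.polyfit(acr, pm10, 1)                       # y = a + b x
    out["linear"] = r_square(pm10, a + b * acr)
    b, a = np.polyfit(acr, np.log(pm10), 1)               # y = e^a e^(bx)
    out["exponential"] = r_square(pm10, np.exp(a) * np.exp(b * acr))
    b, a = np.polyfit(np.log(acr), pm10, 1)               # y = a + b ln x
    out["logarithmic"] = r_square(pm10, a + b * np.log(acr))
    b, a = np.polyfit(np.log(acr), np.log(pm10), 1)       # y = e^a x^b
    out["power"] = r_square(pm10, np.exp(a) * acr ** b)
    return out

rng = np.random.default_rng(2)
acr = rng.uniform(0.05, 0.5, 100)               # synthetic ACR values
pm10 = 40 + 220 * acr + rng.normal(0, 8, 100)   # roughly linear truth

scores = fit_models(acr, pm10)
best = max(scores, key=scores.get)
```

With a linear ground truth the linear form wins on R-squared, which is why the paper's ranking over several criteria (not R-squared alone) can still prefer a different form.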

Author(s): 

HOOSHANGI N. | ALESHEIKH A.A.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    173-188
Measures: 
  • Citations: 

    0
  • Views: 

    359
  • Downloads: 

    0
Abstract: 

Proper task allocation among agents enhances a system's performance and reduces the probability of disorder in resolving a wide range of issues. Appropriate allocations are critical for the efficient execution of tasks undertaken in natural hazard environments. Task allocation plays an important role in coordinating a multi-agent system (MAS): multi-agent systems consist of several automatic and autonomous agents that coordinate their activities to achieve a goal, and agents fail to reach their ultimate goal without proper assignment of tasks. A proper task allocation approach plays an important role in decision-making, particularly in urban search and rescue (USAR) operations in crisis-stricken areas. In the last decade, several studies have addressed task allocation, and different approaches have been presented for assigning tasks in MASs. This paper provides an approach to task allocation in disaster environments that considers appropriate spatial strategies for dealing with disturbances. The challenge of this study is to support task reallocation in order to deal with uncertainties and events during execution. The main innovation of the study is an approach that improves conditions during reallocations, or future allocations, when initial allocations face problems due either to uncertainties or to the addition of a new task. In other words, given the nature of the execution environment (natural disaster environments), the current allocation is not considered in isolation but is performed with regard to future allocations. The spatial strategies selected for changing the order of tasks (preparation for reallocation) differ according to the conditions and the studied phenomenon. In general, strategies are selected in such a way that the final cost of the system will not increase abnormally if the initial allocations face a problem.
For example, building destruction levels are roughly uniformly distributed after an earthquake. Therefore, the convergence of rescue groups should be prevented as much as possible, and the initial allocation should be performed in such a way as to decrease agent movement in future allocations. The proposed method consists of five phases: ordering the existing tasks, finding the coordinating agent, holding an auction, applying allocation strategies, and execution with observation of environmental uncertainties. The scalability of the proposed method was evaluated against the contract net protocol (CNP). In comparison with CNP, the standard rescue operation time under the proposed approach improved by at least 12%, by at most 30%, and by 19% on average. The simulation results indicated that the rescue operation time under the proposed scenarios was always less than that required by the CNP method. Further, evaluations based on casualties and incorrect allocations indicated the feasibility of the proposed approach. Comparing the proposed strategies at different levels of uncertainty showed that an increase in uncertainty leads to an increased rescue time for CNP. An effective assignment approach should incorporate replanning strategies so that the least time is wasted during system disruptions; this optimizes planning for better execution time and provides conditions for fault tolerance. Considering strategies in the task allocation process, especially spatial strategies, resulted in optimization and increased flexibility of the allocation, as well as conditions for fault tolerance and the stability of agent-based cooperation in emergency management.
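A single round of the lowest-bid auction at the heart of contract-net-style allocation can be sketched as follows; the agent and task names and positions are hypothetical, and the bid is plain Euclidean travel cost.

```python
import math

def auction(tasks, agents):
    """One-round, lowest-cost-bid auction in the spirit of the contract
    net protocol: each task is announced in order and awarded to the free
    agent whose travel cost (Euclidean distance) is smallest."""
    assignment = {}
    free = dict(agents)  # agent name -> current position
    for t_name, t_pos in tasks.items():
        if not free:
            break  # more tasks than agents: remaining tasks wait
        bids = {a: math.dist(pos, t_pos) for a, pos in free.items()}
        winner = min(bids, key=bids.get)
        assignment[t_name] = winner
        free.pop(winner)  # the winner is busy for this round
    return assignment

agents = {"rescue1": (0, 0), "rescue2": (10, 0)}
tasks = {"site_a": (1, 1), "site_b": (9, 1)}
plan = auction(tasks, agents)
```

Reallocation under uncertainty, as proposed in the paper, would amount to re-running such an auction with the updated positions and the surviving tasks.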

Author(s): 

HOSSEINY S.B. | AMINI J.

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    189-198
Measures: 
  • Citations: 

    0
  • Views: 

    716
  • Downloads: 

    0
Abstract: 

Nowadays, synthetic aperture radar (SAR) imaging systems are used in a wide range of applications. SAR signal processing means processing radar signals acquired with a specific data acquisition geometry in order to form radar images with high resolution in the range and azimuth directions. For SAR processing to be possible, the radar antenna must move over the imaged scene; with antenna motion and SAR signal processing techniques, a synthetic antenna with a narrow beamwidth in the along-track direction can be generated. To make use of a SAR image in different applications, the first step is to process the raw signals acquired by the SAR system and produce the single look complex (SLC) image. Different SAR signal processing algorithms have been developed to create the SLC image. The range migration algorithm (RMA) is one of the best and most popular image formation algorithms for SAR signals; it operates in the frequency domain and is mostly used for imaging with airborne and spaceborne systems. Furthermore, other capabilities of RMA make it possible to use this algorithm for ground-based SAR (GB-SAR) systems. GB-SAR imaging follows the same geometric principles as airborne and spaceborne systems, but differs in the shorter synthetic antenna in the azimuth direction, the look angle of the antennas, and the target-to-sensor ranges, which are very close in the GB-SAR scenario. In ground-based systems, the radar sensor is mounted on a straight rail, moves step-wise along it, and acquires data for the corresponding range profile at each step. At the end of the data acquisition process, all the raw signals are stored in a two-dimensional array, which is used as input to the image formation processor to derive the final image.
The range migration algorithm consists of four main steps: (1) a one-dimensional along-track Fourier transform; (2) matched filtering; (3) Stolt interpolation; and (4) a two-dimensional inverse Fourier transform in the range and along-track directions. In this paper, we develop the RMA for image formation of a ground-based SAR system intended for very close range applications. Our simulated SAR system operates in S-band and sweeps the signal from 2.4 GHz to 2.5 GHz. It is mounted on a three-meter rail and acquires data every two centimeters. We simulate different distributions of point targets, such as one point target or nine point targets in the imaging scene, based on the reflected signal model of the targets. All target ranges are less than 40 meters. After simulating the acquired two-dimensional raw signal of the targets, the RMA is used to extract the focused targets. In the post-processing step, a Hann window is used to suppress the sidelobes of the compressed signals. PSLR and ISLR are used as parameters to analyze the quality of the detected targets. The mean PSLR of all examined targets is -13.1143 dB in the range direction and -13.2153 dB in the azimuth direction, and the mean ISLR of the nine targets is -5.9726 dB in the range direction and -6.1159 dB in the azimuth direction. The obtained results are acceptable compared to other imaging modes at longer ranges, such as airborne and spaceborne imaging systems.
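Step (2), matched filtering, is the range-compression core of the RMA and can be demonstrated in isolation. The sketch below compresses a noiseless point-target echo of a linear FM chirp with the 100 MHz bandwidth of the paper's 2.4-2.5 GHz sweep; the sampling rate, pulse length, and 30 m target range are illustrative assumptions, not the paper's parameters.

```python
import numpy as np

c = 3e8                              # speed of light, m/s
bandwidth = 100e6                    # 2.4 -> 2.5 GHz sweep
pulse_len = 1e-6                     # assumed pulse length, s
fs = 4 * bandwidth                   # assumed sampling rate
t = np.arange(0, pulse_len, 1 / fs)
k = bandwidth / pulse_len            # chirp rate, Hz/s
chirp = np.exp(1j * np.pi * k * t ** 2)   # baseband linear FM chirp

target_range = 30.0                  # metres, within the paper's <40 m scene
delay = 2 * target_range / c         # two-way propagation delay
n_delay = int(round(delay * fs))

# noiseless point-target echo: the chirp shifted by the two-way delay
rx = np.zeros(len(t) + n_delay, dtype=complex)
rx[n_delay:n_delay + len(chirp)] += chirp

# matched filter = correlate the echo with the transmitted chirp
compressed = np.abs(np.correlate(rx, chirp, mode="valid"))
est_range = compressed.argmax() / fs * c / 2
```

The compressed peak falls at the two-way delay, recovering the range to within the c/(2B) = 1.5 m range resolution; the full RMA applies this idea in the wavenumber domain together with the Stolt remap.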

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    199-213
Measures: 
  • Citations: 

    0
  • Views: 

    497
  • Downloads: 

    0
Abstract: 

To reduce the effect of systematic error (bias) in the RPC data on the DEM (relative model) generated from satellite stereo images, an existing elevation model, as an absolute model, can replace the need for ground control points. In this research, a DEM matching strategy was introduced based on a development of the slope-based approach. Unlike other existing DEM matching methods, which first apply a projection system, the proposed mathematical model was developed in the coordinate system of the input data, so the transformation parameters are obtained in that coordinate system. As a result, it is possible to directly improve the RPC data of the stereo images, which are expressed in the same coordinate system. Inspired by the classical absolute orientation of aerial images, the two-stage transformation was carried out separately. To evaluate the proposed method, a Cartosat-1 image pair and an SRTM model of a mountainous region were used. The method was compared with the original slope-based approach and provided a better approximation of the three-dimensional offset between the relative and absolute models. The generation of the relative DEM (MATDEM) was implemented in the MATLAB environment. A dense image matching method was adopted for generating the relative DEM; therefore, an area-based solution was used to reach the desired density of results. The least squares image matching (LSM) method, as an area-based method, has the potential to achieve high precision and is mainly used to increase the accuracy of other matching methods. However, LSM has a small convergence radius and may fall into local minima of the correlation function, which reduces the reliability of the results.
It is therefore important to produce proper seed points that lie within the small convergence radius of this method. Here, these seed points were obtained using the raw RPC data and the SRTM model: a regular grid was assumed on the first image, and after extraction of the seed points, precise matching was performed using LSM. For comparison, a relative model (PCIDEM) was also produced using the PCI Geomatica software. The most important achievement was discovering the actual bias of the raw RPCs in the MATDEM case, under the assumption that the systematic errors propagate from the RPC data into the generated relative DEMs. The estimated offset parameters, particularly the offset in the longitude direction, differed between PCIDEM and MATDEM; according to the evaluations, the values obtained from MATDEM were more accurate. The reference for this assessment was the offset calculated using ground control points. In those evaluations, the offset values estimated by the proposed method along the latitude and longitude directions were 0.77 and 1.23 m, respectively. With regard to the pixel size of Cartosat-1 images, the planimetric offset was estimated as 0.58 pixels.
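The slope-based matching can be sketched as a linearized least-squares problem: height differences between the two models are regressed on the slopes of the relative DEM to recover a 3-D shift. The synthetic surface and the imposed (dx, dy, dz) shift below are assumptions for illustration, not the paper's data.

```python
import numpy as np

def estimate_shift(dem_rel, dem_abs, spacing=1.0):
    """Slope-based 3-D offset estimation between a relative and an
    absolute DEM: linearize h_abs(x, y) ~ h_rel(x+dx, y+dy) + dz and
    solve for (dx, dy, dz) by least squares on the height differences."""
    gy, gx = np.gradient(dem_rel, spacing)          # slopes of the relative model
    dh = (dem_abs - dem_rel).ravel()                # observed height differences
    A = np.column_stack([gx.ravel(), gy.ravel(), np.ones(dh.size)])
    (dx, dy, dz), *_ = np.linalg.lstsq(A, dh, rcond=None)
    return float(dx), float(dy), float(dz)

def surface(xx, yy):
    """Synthetic 'mountainous' test surface."""
    return 20 * np.sin(xx / 15.0) + 15 * np.cos(yy / 12.0)

x, y = np.meshgrid(np.arange(100.0), np.arange(100.0))
dem_abs = surface(x, y)
dem_rel = surface(x - 2.0, y + 1.5) - 3.0   # true shift: dx=2.0, dy=-1.5, dz=3.0

dx, dy, dz = estimate_shift(dem_rel, dem_abs)
```

In the paper's setting the recovered shift is then fed back to correct the raw RPC bias, since both models live in the same coordinate system.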

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    215-224
Measures: 
  • Citations: 

    0
  • Views: 

    413
  • Downloads: 

    0
Abstract: 

With the increasing accuracy and resolution of gravity data derived from terrestrial, airborne and satellite methods, the accuracy and resolution of global geopotential models have been significantly improved. For example, EGM2008 and EIGEN-6C4 are among the most accurate global geopotential models and have been expanded up to degree 2190. Nevertheless, global geopotential models do not have adequate accuracy everywhere. Therefore, local gravity field modeling based on the geodetic boundary value problem approach and local gravity data has always been a subject of interest. In Iran, first-, second- and third-order gravity networks with spatial resolutions of 30', 15', and 5' have been designed for geodetic applications. Two important questions now arise: (1) How important is the spatial resolution of the ground gravity data in local modeling? (2) Can local gravity data of any resolution improve the global models? To answer these questions, the effect of the spatial resolution of the Iranian ground gravity data on local geoid determination, based on the geodetic boundary value problem solution with the remove-compute-restore technique, is studied. To this end, four regions over Iran with different spatial resolutions were selected as test regions. Region 1 contains 1738 gravity points with a spatial resolution of 5.7', region 2 contains 165 points at 21.6', region 3 contains 234 points at 18', and region 4 contains 1728 points at 5.7'. The geodetic boundary value problem was then solved separately for each region, using the EGM2008 global model up to degrees 360, 720, 1080 and 2160 as the reference model. Finally, the computed local geoids and the global geoids were compared with the GPS/levelling geoid.
From the results, we found that the local geoid in regions 1 and 4 has an accuracy of about 23 cm in terms of the root mean square error (RMSE), while the local geoid in regions 2 and 3 has an accuracy of about 32 cm. That is, the local geoid in regions 1 and 4, where the spatial resolution of the gravity data is higher, is more accurate than in regions 2 and 3. Moreover, we found that the local geoid of region 1 is more accurate than the global geoids up to degrees 360, 720 and 1080, while its accuracy is consistent with the global geoid up to degree 2160; a similar result was obtained for region 4. For regions 2 and 3, the local geoid is more accurate than the global geoid only up to degree 360, while its accuracy is consistent with the global geoids up to degrees 720, 1080 and 2160. This is because the spatial resolution of the gravity data in regions 1 and 4 is 5.7', equivalent to a degree of about 2160, while the spatial resolutions in regions 2 and 3 are 21.6' and 18', equivalent to degrees of about 500 and 600, respectively. It is therefore concluded that when the spatial resolution of the ground gravity data is lower than the corresponding degree of the reference model, the local geoid does not outperform the corresponding global geoid.
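The resolution-to-degree correspondence invoked above follows the usual rule of thumb N ≈ 180°/Δ, sketched below with the resolutions quoted in the abstract (note that 5.7' works out to roughly degree 1900, which the abstract rounds up to "about 2160").

```python
def equivalent_degree(resolution_arcmin):
    """Rule-of-thumb maximum spherical-harmonic degree resolvable by
    gravity data of the given spatial resolution: N ~ 180 deg / delta."""
    return 180.0 / (resolution_arcmin / 60.0)

# resolutions quoted in the abstract for the four test regions
regions = {"regions 1 and 4": 5.7, "region 2": 21.6, "region 3": 18.0}
degrees = {name: round(equivalent_degree(r)) for name, r in regions.items()}
```

This is why the 21.6' and 18' datasets cannot improve on a degree-720 reference model, while the 5.7' datasets remain competitive up to the highest reference degree tested.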

Issue Info: 
  • Year: 

    2018
  • Volume: 

    8
  • Issue: 

    2
  • Pages: 

    225-234
Measures: 
  • Citations: 

    0
  • Views: 

    394
  • Downloads: 

    0
Abstract: 

A wireless sensor network (WSN) is composed of sensor nodes located within a certain target region that are capable of monitoring different environmental phenomena, such as forest fire detection and pollutant leakage. In a WSN, sensor nodes are distributed across the region of interest (ROI), and the overall coverage of the region directly influences the efficiency of the network in obtaining useful information. Establishing global coverage in the network with the fewest sensor nodes plays a very important role in reducing the costs of a wireless sensor network. However, coverage holes appear in the study region for various reasons, such as non-uniform distribution of sensors, node failure, and energy dissipation. Existing methods for healing holes in the study area are generally divided into global and local methods. In global methods, prior to the distribution of sensor nodes in the region, the optimal positions of the nodes are determined using optimization functions so as to avoid coverage holes. In local methods, on the other hand, after the primary distribution of the sensor nodes in the region, coverage holes are detected and the sensors are moved locally to cover them. The latter methods are mostly designed based on geometric structures such as Delaunay triangulations and Voronoi diagrams. The present study focuses on determining optimal locations to achieve global coverage in a wireless sensor network. For this purpose, the performance of an existing local method, called the tree-based method, in adding new sensors to a wireless sensor network was evaluated. The results show that, despite the generally acceptable performance of this method, it does not work well in some special configurations of sensors, mainly due to a lack of attention to the positions of the sensors when new sensors are added.
Therefore, by combining the tree-based method with a proposed method, called the center of gravity method, the results were improved so that holes of different sizes and geometric shapes can be covered properly. Moreover, in the proposed method, before the coverage holes in the area are identified, an improvement phase is applied to the configuration of the existing sensors after their random distribution in the target area: nearby sensors are moved apart in order to prevent both large coverage holes and overlapping coverage of the sensor nodes. This phase is therefore effective in creating optimal coverage in the area. The results show that the combined method performs better than the tree-based method, because it covers the region in fewer iterations and with fewer new sensors.
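The center-of-gravity healing step can be sketched on a toy ROI: a new sensor is dropped at the centroid of the currently uncovered sample points. The grid, sensor layout, and sensing radius below are illustrative assumptions, not the paper's configuration.

```python
import math

def coverage_fraction(sensors, points, r):
    """Fraction of sample points covered by at least one sensor disc."""
    covered = sum(1 for p in points
                  if any(math.dist(p, s) <= r for s in sensors))
    return covered / len(points)

def heal_hole(sensors, points, r):
    """Center-of-gravity placement: add one sensor at the centroid
    of the uncovered sample points (the detected coverage hole)."""
    holes = [p for p in points
             if all(math.dist(p, s) > r for s in sensors)]
    if not holes:
        return sensors
    cx = sum(p[0] for p in holes) / len(holes)
    cy = sum(p[1] for p in holes) / len(holes)
    return sensors + [(cx, cy)]

# 10 x 10 ROI sampled on a unit grid; four corner sensors leave a central hole
points = [(x, y) for x in range(11) for y in range(11)]
sensors = [(0, 0), (10, 0), (0, 10), (10, 10)]
r = 4.0
before = coverage_fraction(sensors, points, r)
after = coverage_fraction(heal_hole(sensors, points, r), points, r)
```

On this symmetric layout the centroid lands in the middle of the ROI, so one added sensor closes most of the central hole; repeated application mirrors the iterative healing loop evaluated in the paper.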
