Archive

Scientific Information Database (SID) - Trusted Source for Research and Academic Resources
Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    1-16
Measures: 
  • Citations: 

    0
  • Views: 

    1141
  • Downloads: 

    0
Abstract: 

Nowadays, automatic point cloud processing is an important and challenging topic in photogrammetry and remote sensing. LiDAR can collect an accurate 3D point cloud of the earth's surface directly. Moreover, recent advances in image processing make it possible to produce highly accurate 3D point clouds by dense matching of digital aerial images. Point cloud segmentation and classification algorithms are usually time consuming and computationally expensive. In this paper, an object-based approach is proposed for point cloud classification. In this approach, the points are first segmented into regions; these regions are then classified into the target classes. To this end, boxes with a predefined side length are first placed over the point cloud and each box is analyzed separately. To reduce the point density, all points in each box are removed except the one nearest to the box center. Then, a region growing algorithm segments the density-reduced points based on the normal vector and curvature value of each point. Afterwards, the neighborhood of each segmented point is searched to label the remaining points: points whose normal vectors are close to that of the considered point receive the same label. After segmentation, a set of potentially discriminative features is computed for each segment in order to detect buildings, vegetation, and ground. The features should be chosen according to the geometrical and structural characteristics of the objects. In this paper, features including mean curvature, area, perimeter, boundary irregularity, flatness, elevation, and a terrain/off-terrain flag are generated. The alpha shape is a triangulation-based algorithm that can reconstruct an object's shape from a set of dense, irregular points; the alpha value determines the level of detail in the reconstructed shape.
After computing the shape of a segment with the alpha shape algorithm, its area and perimeter can be calculated. To analyze the boundary irregularity of a segment, the ratio of the areas of two alpha shapes reconstructed with two different alpha values is computed. For each segment, a plane is fitted using the MSAC algorithm, and the ratio of points on that plane to points off it is taken as the flatness value. The SMRF algorithm is employed to identify off-terrain points; the height of an off-terrain point is obtained as the difference between that point and the closest terrain point. A feature vector is thus obtained for each segment. Finally, training data are collected and the segments are classified with the KNN algorithm. The proposed approach was implemented and evaluated on six different test areas. Areas 1, 2, 5, and 6 were acquired by LiDAR: the point density of areas 1 and 2 is 4 points per m², while that of areas 5 and 6 is 65 points per m². Areas 3 and 4 were produced by dense matching of digital aerial images, with an average point density of 20 points per m². The accuracy of the proposed approach in areas 1 to 6 was 92.25%, 93.44%, 91.44%, 89.23%, 92.46%, and 89.73%, respectively. The evaluation results demonstrate the good performance of the proposed approach across areas with various land covers and point densities.
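The box-based density reduction described in the abstract (keep, in each box, only the point nearest to the box center) can be sketched roughly as follows; the function name and toy points are illustrative, not from the paper:

```python
import numpy as np

def box_downsample(points, box_size):
    """Keep, in each box of side `box_size`, only the point
    nearest to the box centre (density reduction step)."""
    idx = np.floor(points / box_size).astype(np.int64)
    centres = (idx + 0.5) * box_size
    d2 = np.sum((points - centres) ** 2, axis=1)
    keep = {}
    for i, key in enumerate(map(tuple, idx)):
        if key not in keep or d2[i] < d2[keep[key]]:
            keep[key] = i          # remember the closest point per box
    return points[sorted(keep.values())]

pts = np.array([[0.1, 0.1, 0.0],
                [0.4, 0.6, 0.0],   # nearest to the centre of box (0,0,0)
                [0.9, 0.9, 0.0],
                [1.2, 1.4, 0.0]])
reduced = box_downsample(pts, box_size=1.0)
print(len(reduced))  # 2 occupied boxes -> 2 points kept
```

The region growing, alpha-shape and KNN stages of the pipeline would then operate on `reduced`.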

Author(s): 

AMRAEI E. | MOBASHERI M.R.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    17-26
Measures: 
  • Citations: 

    0
  • Views: 

    660
  • Downloads: 

    0
Abstract: 

The presence of different kinds of noise in Landsat images makes the extraction of useful information difficult or sometimes impossible. One of these is bit-flip noise, which occurs when signals are transferred from the satellite to other platforms and/or ground stations. During this transfer, a single bit may be altered (e.g. 1 to 0 or 0 to 1). Clearly, if this bit occupies a high-order position, the change may be as large as the signal itself; for dark pixels it may even be a few orders of magnitude larger. In this work, a novel method for detecting the affected bit and correcting it is introduced. First, a fuzzy detector is used to identify the noise-affected pixels. Then the pixels' DN values are compared with those of neighboring pixels, from which the affected pixels and the degree of corruption are determined. Since the change must be of the order of 2^n, the proper value can be identified and corrected accordingly. The method was compared with other well-known methods using statistical measures such as SSIM and PSNR. The values of SSIM and PSNR were 0.9 and 28 dB respectively, considerably better than those of the other methods.
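A minimal sketch of the bit-flip repair idea, using a simple median-based detector as a stand-in for the paper's fuzzy detector; the function name and threshold are illustrative assumptions:

```python
import numpy as np

def correct_bitflips(img, threshold=64):
    """Toy bit-flip repair: a pixel whose deviation from its
    neighbourhood median is (close to) a power of two has that
    bit flipped back. Simplified stand-in for the paper's fuzzy
    detector; `threshold` is an illustrative choice."""
    out = img.copy()
    h, w = img.shape
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            nb = np.delete(img[r-1:r+2, c-1:c+2].ravel(), 4)  # 8 neighbours
            med = int(np.median(nb))
            diff = int(out[r, c]) - med
            if abs(diff) >= threshold:
                bit = int(round(np.log2(abs(diff))))  # deviation ~ 2**bit
                out[r, c] = int(out[r, c]) ^ (1 << bit)  # undo the flip
    return out

img = np.full((5, 5), 40, dtype=np.uint8)
img[2, 2] ^= 1 << 7          # simulate a flip of bit 7: 40 -> 168
fixed = correct_bitflips(img)
print(fixed[2, 2])           # 40
```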

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    27-38
Measures: 
  • Citations: 

    0
  • Views: 

    1144
  • Downloads: 

    0
Abstract: 

Hyperspectral images contain hundreds of narrow, contiguous spectral bands. Because of this high spectral resolution, they provide valuable information about earth surface materials and objects. With advances in remote sensing technology and the production of hyperspectral data with rich spatial and spectral information, the use of such data for the detailed study of phenomena is spreading quickly. One of the most important applications of hyperspectral data analysis is supervised or unsupervised classification for land cover mapping. Among unsupervised methods, the Gaussian mixture model (GMM) has attracted much attention due to its performance and efficient computation time, and GMMs have frequently been applied to hyperspectral image classification tasks. The problem of estimating the parameters of a Gaussian mixture model has been studied extensively in the literature; the Gibbs sampler is one method that can be applied. Another is the Expectation-Maximization (EM) algorithm, a general method for optimizing likelihood functions that is useful when data might be missing or simpler optimization methods fail. On the other hand, the large number of bands in a hyperspectral image leads to a large number of parameters to estimate, and the enormous amount of information provided by hyperspectral images increases both the computational burden and the correlation among spectral bands. Dimensionality reduction is therefore often conducted as an essential preprocessing step, both to maximize performance and to minimize the computational burden. In this paper, we use PCA and Random Projection (RP) to address the high dimensionality of the data.
To evaluate the proposed algorithm in realistic scenarios, we used two benchmark hyperspectral data sets, collected by AVIRIS and by the Reflective Optics System Imaging Spectrometer (ROSIS), acquired over agricultural and urban areas, respectively. The proposed method is based on GMMs whose parameters are estimated with the Gibbs sampler. For further evaluation we also used simulated data generated with the HYDRA toolbox. Experiments on the simulated dataset and the two real hyperspectral datasets showed that reducing the number of bands in the preprocessing stage, using either RP or PCA, yields the highest accuracy and efficiency for thematic mapping. We also demonstrated the superiority of the Gibbs sampler over the EM algorithm for estimating the GMM parameters. For instance, on the Pavia University dataset, the overall accuracy and Kappa coefficient were 88.80% and 0.84 for the GMM-Gibbs-RP method, versus 84.21% and 0.80 for the GMM-EM-RP method. Furthermore, in the urban area (Pavia University dataset) with its small structures, the improvement achieved by the Gibbs sampler over the EM algorithm was larger than on the AVIRIS dataset, which covers an agricultural area with larger regions. This shows the capability of the Gibbs sampler in dealing with singularities.
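The dimensionality-reduction-plus-GMM pipeline can be sketched with scikit-learn on synthetic stand-in data; note that scikit-learn's `GaussianMixture` estimates parameters by EM, so the paper's Gibbs-sampler estimation would replace that step:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# Toy stand-in for a hyperspectral cube: 200 "pixels" x 50 "bands",
# drawn from two spectral classes (real data would come from AVIRIS/ROSIS).
a = rng.normal(0.2, 0.05, (100, 50))
b = rng.normal(0.8, 0.05, (100, 50))
X = np.vstack([a, b])

# Dimensionality reduction before clustering, as in the paper (PCA or RP).
feats = PCA(n_components=5, random_state=0).fit_transform(X)

# scikit-learn fits the GMM by EM; the paper's Gibbs-sampler variant
# would replace this estimation step.
gmm = GaussianMixture(n_components=2, random_state=0).fit(feats)
labels = gmm.predict(feats)
print(len(set(labels[:100].tolist())), len(set(labels[100:].tolist())))
```

Each synthetic class should map cleanly to one mixture component.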

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    39-52
Measures: 
  • Citations: 

    0
  • Views: 

    998
  • Downloads: 

    0
Abstract: 

Earthquakes, as the most devastating natural disaster in urban areas, cause huge physical and human damage worldwide. One way to help reduce the impact of earthquakes on people and infrastructure is to produce a reliable seismic vulnerability map. The physical seismic vulnerability of a region is a multi-criteria problem involving seismic intensity, land slope, the number of building floors, and building age and quality. Among the most important sources of uncertainty in determining the vulnerability of each urban statistical unit is the uncertainty arising from conflicts in expert opinion on the severity of the seismic vulnerability. The main objective of this paper is to manage the uncertainty over the vulnerability classes allocated by the experts when integrating the relevant parameters. In this model, interval mathematics, a genetic algorithm, and granular computing are used to reduce the uncertainty in the decision-making process related to the experts' allocation of a physical seismic vulnerability class to each urban statistical unit. The physical seismic vulnerability map was produced for Tehran on the basis of the activation of the North Tehran fault. Among 3174 urban statistical units, 150 randomly selected samples were assessed by 5 experts in related geoscience fields, who were asked to fill in a questionnaire allocating a physical seismic vulnerability class to each sample. Because the experts' knowledge of the physical seismic vulnerability of each statistical unit diverges, their opinions were integrated using interval mathematics; the genetic algorithm was used to resolve conflicts among the experts; and granular computing was applied to manage the uncertainty caused by the large amount of information from the parameters affecting physical vulnerability.
The relations among the input parameters and the vulnerability classes are presented in a decision table, from which the rules with minimum conflict are extracted. The vulnerability classes are ordered from 1 (the least vulnerable class) to 5 (the most vulnerable class). According to the results, most of the statistical units in Tehran fall within the interval vulnerability classes [3 4] and [4 5]. To measure the similarity between the results of this model and those of the previous research by Khamespanah in the same study area, who used an integrated model of granular computing and rough set theory, the Spearman rank correlation coefficient was employed; its value of 0.47 shows some similarity between the results. An accuracy of 76% was achieved in this research using the Kappa index, verifying the importance of managing uncertainty using interval mathematics.
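The Spearman rank comparison between two vulnerability maps can be computed as below; the per-unit class vectors are hypothetical, not the study's data:

```python
from scipy.stats import spearmanr

# Hypothetical per-unit vulnerability classes (1 = least, 5 = most
# vulnerable) from two models, mirroring the comparison with the
# granular-computing/rough-set results of the earlier study.
model_a = [3, 4, 4, 5, 2, 3, 5, 1, 4, 2]
model_b = [3, 3, 4, 5, 1, 4, 5, 2, 4, 2]

rho, p = spearmanr(model_a, model_b)  # rank correlation, handles ties
print(round(rho, 2))
```

A value near 1 indicates the two models rank the units almost identically; the study reported 0.47, i.e. only moderate agreement.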

Author(s): 

KHAZAEI S. | KARAMI A.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    53-67
Measures: 
  • Citations: 

    0
  • Views: 

    859
  • Downloads: 

    0
Abstract: 

Camouflage is the art of disguising or blending objects into a natural background so as to make them more difficult for viewers to see. Traditional camouflage is usually based on the designer's experience and consists of macro patterns (irregular spots and stripes) whose outlines or boundaries are sharp and therefore easier to see. To overcome this drawback, digital camouflage combines macro and micro patterns with computer assistance. Most work on digital camouflage is in the field of color pattern design; however, designing a color pattern that best matches the target to the background in shape and color characteristics remains a major challenge. Modern digital camouflage is based on the principles of visual psychology and uses digital image processing techniques to characterize background features. The common digital camouflage techniques are based on fuzzy, neural network, and greedy methods. Their main problem is that the number of main colors is chosen manually or experimentally, while it differs from image to image; the optimal colors for blending targets into their backgrounds therefore cannot be obtained. The main objective of this study is to provide a novel method of designing a digital pattern that automatically extracts the number of original colors based on the specific features of each image. The proposed method builds on the conventional greedy algorithm, which tries to minimize the difference between the shape perceived by the viewer and the shape patterned on the target. The proposed method first uses the minimum description length (MDL) criterion to determine the optimum number of clusters in the image, and then uses the well-known K-means clustering method to extract the original colors from the image.
Finally, the proposed method uses the greedy algorithm to obtain an optimal distribution or arrangement of the pattern templates stored in a database. In this study, the proposed method is compared with the color similarity algorithm proposed by Yang and Yin (2015). The quantitative and qualitative assessments of both methods are based on the saliency map, a common criterion for camouflage assessment. The saliency map was originally intended to model covert attention: it attaches a value to each location in the visual field given the visual input and the current task, with regions of higher salience being more likely to be fixated. For the comparison, 11 images captured under different conditions were used, covering different seasons (spring, summer, autumn, and winter) and different locations (desert, forest, sea, urban, etc.). Experimental results show that the mean saliency over the 11 images is 53% for the color similarity algorithm and 42% for the proposed method, indicating that the proposed method is superior at blending targets into their backgrounds.
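Automatic selection of the number of main colours can be sketched with a penalised-likelihood score; BIC is used below as a stand-in for the paper's MDL criterion (the two are closely related), and the pixel data are synthetic:

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
# Toy "image" pixels in RGB with three dominant colours; real input would
# be the background photograph the camouflage must blend into.
colours = np.array([[40, 90, 30], [120, 110, 60], [200, 190, 150]], float)
pixels = np.vstack([c + rng.normal(0, 4, (300, 3)) for c in colours])

# BIC as a stand-in for MDL: pick the component count that minimises
# the penalised-likelihood score.
bics = {k: GaussianMixture(k, random_state=0).fit(pixels).bic(pixels)
        for k in range(1, 7)}
k_best = min(bics, key=bics.get)

# Extract the main colours with K-means, as in the paper.
palette = KMeans(n_clusters=k_best, n_init=10,
                 random_state=0).fit(pixels).cluster_centers_
print(k_best, palette.shape)
```

The recovered `palette` rows are the dominant colours that would seed the greedy pattern-arrangement step.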

Author(s): 

ASGARI J. | ZAHEDI M.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    69-78
Measures: 
  • Citations: 

    0
  • Views: 

    839
  • Downloads: 

    0
Abstract: 

Global Navigation Satellite Systems (GNSS) are widely used for geodetic and geodynamic purposes. Meteorological applications, such as Precipitable Water Vapor (PWV) estimation, have also grown with the deployment of permanent GNSS stations all over the world. The continuity of GNSS observations and the spatial resolution of permanent GNSS stations are among the strengths of GNSS remote sensing with permanent arrays. In this study we demonstrate one of the real-time meteorological applications of GNSS networks: the spatial distribution of PWV was investigated during extreme rainfall. PWV data from SuomiNet network stations in the state of Texas were used, and the PWV within the station coverage was determined by linear interpolation. It was observed that the water vapor estimated from the GNSS observations moves gradually towards the precipitation site; once it accumulates over the area of precipitation, rainfall begins and the PWV then decreases. Therefore, using a network of uniformly distributed GNSS stations, GNSS observations can be used to measure the accumulation of atmospheric water vapor over a region and to assess the probability of rainfall. Such predictions will be effective if the network is sufficiently dense and the precipitable water vapor is estimated with short latency. The zenith path delay can be estimated from GNSS by relative or absolute methods, and slant delay estimation is also possible in dense GNSS networks; tropospheric tomography will aid scientists in future GNSS applications. In this paper the precision of real-time PWV estimation from GNSS data is investigated. Data from the French RGP GNSS network were used for PPP processing. The processing was performed with ultra-rapid IGS orbit and clock products and then repeated with final IGS products. The precision of the Zenith Total Delay (ZTD) obtained with the final ephemeris is about 3 mm.
The real-time ZTD estimates based on ultra-rapid products were compared with the final solution; the RMSE for the different stations is approximately 3 to 7 mm, which is sufficient for real-time estimation of PWV and real-time precipitation prediction. The relation between rainfall occurrence and PWV changes is also examined. By studying PWV behavior, its spatial variations and its spatial distribution, it may be possible to identify patterns for a region prior to intense rainfall and hence predict extreme rainfall; it is therefore suggested that the existence of such patterns be examined by analyzing PWV behavior accurately. To determine the accuracy of the PWV obtained from GNSS observations by the PPP method with ultra-rapid orbit and clock products, the results can also be compared with PWV obtained from radiosonde measurements as a reliable reference. Comparing the ultra-rapid results with the final IGS products, the consistency of the estimated ZTD values is about 3-7 millimeters. Rainfall prediction using the permanent GNSS stations in Iran is also feasible: several permanent arrays can provide GNSS observation files instantaneously. The national geodynamic network, Tehran's instantaneous network, the national cadastre RTK network and the Isfahan municipality RTK network could be used for PWV estimation with high spatial and temporal resolution, and instantaneous meteorological application of a unified network is possible.
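A minimal sketch of a standard ZTD-to-PWV conversion (Saastamoinen hydrostatic delay plus a Bevis-style conversion factor); the constants and sample inputs are the commonly quoted textbook values, not necessarily those used in the paper:

```python
import math

def zhd_saastamoinen(p_hpa, lat_deg, h_m):
    """Zenith hydrostatic delay (m) from surface pressure (Saastamoinen)."""
    return 0.0022768 * p_hpa / (
        1 - 0.00266 * math.cos(2 * math.radians(lat_deg)) - 2.8e-7 * h_m)

def pwv_from_ztd(ztd_m, p_hpa, ts_k, lat_deg, h_m):
    """PWV (mm) from a GNSS zenith total delay, Bevis-style conversion.
    Constants are the commonly quoted Bevis et al. values in SI units
    (assumption: the paper may use slightly different ones)."""
    zwd = ztd_m - zhd_saastamoinen(p_hpa, lat_deg, h_m)
    tm = 70.2 + 0.72 * ts_k                  # mean temperature of water vapour
    k2p, k3 = 0.221, 3739.0                  # K/Pa, K^2/Pa
    pi = 1e6 / (1000.0 * 461.5 * (k3 / tm + k2p))   # dimensionless, ~0.15
    return pi * zwd * 1000.0                 # m -> mm

pwv = pwv_from_ztd(ztd_m=2.40, p_hpa=1013.0, ts_k=290.0,
                   lat_deg=32.0, h_m=50.0)
print(round(pwv, 1))
```

With a ZTD precision of 3-7 mm, and a conversion factor near 0.15, the resulting PWV uncertainty is roughly 0.5-1 mm, consistent with the paper's conclusion that ultra-rapid products suffice for real-time PWV.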

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    79-92
Measures: 
  • Citations: 

    0
  • Views: 

    1136
  • Downloads: 

    0
Abstract: 

Documentation, preservation, maintenance and restoration of cultural heritage sites, as well as their buffer zones, are important responsibilities of both the public and the government, and administrators and managers of civil and development projects must adhere to them. Exact engineering maps are essential for this: detailed maps are the basis for the maintenance, identification, rehabilitation and archiving of national cultural heritage. Today, with the rapid growth of urbanization and the spread of modern technologies that accelerate and facilitate construction, more attention is being paid to the identification and preservation of historical monuments. A principal approach to monitoring and ensuring the preservation of cultural heritage and its buffer zone is scientific documentation. The main focus of the present research is the development and design of an optimal method for the automatic detection and documentation of the Qanat (Kariz) and its environment, an engineering feat and a unique part of Iran's cultural heritage, by extracting and recording spatial information. In this research, data fusion methods were used to integrate aerial and satellite imagery in order to identify water wells automatically. Two types of integration were performed to obtain data suitable for Kariz detection and documentation: (1) integration of aerial and satellite imagery; (2) integration of the features extracted from the fused image at the decision level. Satellite and aerial images of the study region in Eslamshahr were merged using Ehlers' method. After analyzing the different fusion methods, the histograms of the images before and after fusion, and the quantitative criteria, different radiometric characteristics of the qanat wells were extracted.
These features include the TC3 index (62% success in identifying the target pixels), NDWI (62% success), and SAVI (52% success), together with a segmentation algorithm applied to different image bands; on the Ehlers-fused image, segmentation was 76% successful in identifying the desired features. In the next step, for integration at the decision level, two further information layers (the slope layer of the region and the layer obtained from a template matching algorithm, with 54% success in identifying the target pixels) were extracted from the geometric properties and, together with the features obtained in the previous step, fed into the decision-making stage. A fuzzy method was used to integrate the results at the decision level. Finally, the features of the Kariz system were detected with an accuracy better than 90%. The results show, however, that this method is not optimal under all conditions. It is therefore recommended to fuse at hybrid levels: instead of a single procedure, an optimal feature-extraction method should be chosen for each information layer as input to the next step, and the features extracted from the different layers merged at the end. In the present study, integration at the decision level was based on fuzzy logic; to achieve optimal performance, the different layers enter the fusion with their own weights and coefficients. For the decision step, a combination of a multi-layer neural network and fuzzy logic could be used, which will be tested and evaluated in the next stages of the research.
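The NDWI index used above is straightforward to compute per pixel, NDWI = (Green - NIR) / (Green + NIR); the sample arrays below are synthetic:

```python
import numpy as np

def ndwi(green, nir):
    """Normalized Difference Water Index; guards against zero denominators."""
    g = green.astype(float)
    n = nir.astype(float)
    return (g - n) / np.where(g + n == 0, 1, g + n)

green = np.array([[80, 120], [60, 200]], dtype=np.uint16)
nir   = np.array([[160, 40], [180, 50]], dtype=np.uint16)
mask = ndwi(green, nir) > 0   # positive NDWI suggests water/moist surfaces
print(mask.tolist())
```

TC3 (tasseled-cap wetness) and SAVI would be computed analogously as band combinations, then combined with the slope and template-matching layers in the fuzzy decision step.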

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    93-110
Measures: 
  • Citations: 

    0
  • Views: 

    869
  • Downloads: 

    0
Abstract: 

In the last two decades, knowledge of the distribution of the ionospheric electron density has been a major challenge for geodesy and geophysics researchers. Computerized ionospheric tomography (CIT) has proved an efficient and effective way to study the physical properties of the ionosphere. Usually the total electron content (TEC) is used as the input to CIT, and inversion methods are then used to compute the electron density at any time and location. However, CIT is an ill-posed inverse problem, due to the lack of input observations and the non-uniform distribution of TEC data. Many algorithms and methods have been presented for CIT modeling. Two-dimensional CIT was first suggested by Austin et al. (1988), who used algebraic reconstruction techniques (ART) to obtain the electron density. Since then, other researchers have also studied and examined CIT. Although the results of these studies indicate the high efficiency of CIT, the method faces two major limitations: first, because of the poor spatial distribution of GPS stations and the limitations of the signal viewing angle, CIT is an ill-posed inverse problem; second, in most cases observations are discontinuous in time and space, so it is not possible to determine density profiles at arbitrary times and locations around the world. In this paper, a residual minimization training neural network is proposed as a new method of ionospheric reconstruction. In this method, vertical and horizontal objective functions are minimized. Because of the poor vertical resolution of ionospheric tomography, empirical orthogonal functions (EOFs) are used as the vertical objective function. A proper training algorithm is used to optimize the weights and biases of the neural network: training a neural network can be viewed as an optimization problem whose goal is to adjust the weights and biases to reach a minimum training error.
In this paper, back-propagation (BP) and particle swarm optimization (PSO) are used as training algorithms, and three new methods are investigated and analyzed. In residual minimization training neural network (RMTNN), a 3-layer perceptron artificial neural network (ANN) with the BP training algorithm is used to model the ionospheric electron density. In the second method, a wavelet neural network (WNN) with the BP algorithm replaces the perceptron of RMTNN, so the method is named modified RMTNN (MRMTNN). In the third method, a WNN with a PSO training algorithm is used to solve pixel-based ionospheric tomography; this method is named ionospheric tomography based on the neural network (ITNN). GPS measurements of the Iranian permanent GPS network (IPGN), together with 1 ionosonde and 4 testing stations, were used to construct a 3-D image of the electron density. For the numerical experiments in the IPGN, observations collected at 36 GPS stations on 3 days in 2007 (2007.01.03, 2007.04.03 and 2007.07.13) were used. The results were also compared with those of the spherical cap harmonic (SCH) method, as a local ionospheric model, and with ionosonde data. Relative and absolute errors, root mean square error (RMSE), bias, standard deviation and the correlation coefficient were computed and analyzed as statistical indicators for the three proposed methods. The analyses show that the ITNN method has a high convergence speed and high accuracy relative to RMTNN and MRMTNN. The results indicate an improvement of 0.5 to 5.65 TECU in the IPGN with respect to the other empirical methods.
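The statistical indicators used in the comparison (RMSE, bias, correlation coefficient) can be computed as below; the profile samples are hypothetical, in arbitrary TECU-like units:

```python
import numpy as np

def fit_stats(model, reference):
    """RMSE, bias and correlation coefficient, the indicators used to
    compare reconstructed electron densities with ionosonde profiles."""
    model = np.asarray(model, float)
    reference = np.asarray(reference, float)
    resid = model - reference
    return {"rmse": float(np.sqrt(np.mean(resid**2))),
            "bias": float(np.mean(resid)),
            "corr": float(np.corrcoef(model, reference)[0, 1])}

# Hypothetical electron-content samples (not the paper's data).
ref = [10.0, 12.0, 15.0, 18.0, 16.0]
itnn = [10.5, 11.5, 15.5, 18.5, 16.5]
stats = fit_stats(itnn, ref)
print(round(stats["rmse"], 2), round(stats["bias"], 2))
```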

Author(s): 

SEYDI S.T. | HASANLOU M.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    111-126
Measures: 
  • Citations: 

    0
  • Views: 

    617
  • Downloads: 

    0
Abstract: 

Earth, as the human habitat, has been affected by natural events such as tornadoes, floods, thunderstorms and droughts. In addition, some human activities, such as urban development and deforestation, have caused changes in many ways. Although these changes are unintentional, they constantly threaten the environment, so predicting them is really important in order to face the consequences. Remotely sensed images, due to their wide coverage, high resolution and low cost for providing data from the earth, play an important role in environmental monitoring. One of the most important applications of remote sensing is change detection. Change detection is a process which measures the differences between objects in the same place at different times. It is an essential tool for monitoring and managing resources at the local and global scales. The most important criteria in change detection are the real-time and accurate detection of land cover changes. Hyperspectral sensors operate at continuous wavelengths with a bandwidth of approximately 10 nanometers. When carrying out change detection procedures on hyperspectral images, some problems appear that affect the results, such as the presence of noise in the images and different atmospheric conditions, all of which lead to more computational complexity and an increase in execution time. This paper presents a new unsupervised change detection method for land use monitoring utilizing multi-temporal hyperspectral images. By incorporating similarity/distance-based criteria and the Otsu algorithm in a hierarchical manner, this method can detect any changes. The proposed method is implemented in two main phases: (1) the corrected data are transformed, using distance- and similarity-based criteria, into a new computing space called the similarity space, in which the changed areas can be highlighted against the unchanged areas; (2) the second phase makes a decision about the nature of pixels through a hierarchical process using the Otsu algorithm, and the result of this phase is a binary change map. The main advantages of the proposed method are being unsupervised, simple usage, low computing burden, and high accuracy. The efficiency of the presented method has been evaluated using Hyperion multi-temporal hyperspectral imagery. The first dataset is a farmland near the city of Yuncheng, Jiangsu Province, China. The data were acquired on May 3rd, 2006, and April 23rd, 2007, respectively. This scene is mainly a combination of soil, river, trees, buildings, roads and agricultural fields. The second study area covers an irrigated agricultural field in Hermiston City, Umatilla County, Oregon, USA. These data were acquired on May 1st, 2004, and May 8th, 2007. The land cover types are soil, irrigated fields, river, buildings, types of cultivated land and grassland. The results on the two real datasets show the high efficiency and accuracy of the proposed method, with low false alarm rates compared to common change detection methods: an overall accuracy of 98.48%, a kappa coefficient of 0.965 and a false alarm rate of 1.5% for the China dataset, as well as an overall accuracy of 95.12%, a kappa coefficient of 0.87 and a false alarm rate of 4.8% for the USA dataset.
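The decision phase relies on Otsu's method, which picks the threshold that maximizes the between-class variance of the values in the similarity space. A minimal histogram-based sketch follows; the bin count and interface are illustrative assumptions, and the paper applies the thresholding hierarchically rather than in a single pass.

```python
def otsu_threshold(values, bins=256):
    """Otsu's method over a 1-D set of values (e.g. pixel magnitudes
    in the similarity space): build a histogram, then choose the bin
    that maximizes the between-class variance of the two classes."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return lo
    hist = [0] * bins
    for v in values:
        k = min(int((v - lo) / (hi - lo) * bins), bins - 1)
        hist[k] += 1
    total = len(values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    w_b = 0          # background weight (pixel count below threshold)
    sum_b = 0.0      # background intensity sum
    best_var, best_k = -1.0, 0
    for k in range(bins):
        w_b += hist[k]
        if w_b == 0:
            continue
        w_f = total - w_b            # foreground weight
        if w_f == 0:
            break
        sum_b += k * hist[k]
        m_b = sum_b / w_b            # background mean
        m_f = (sum_all - sum_b) / w_f  # foreground mean
        var_between = w_b * w_f * (m_b - m_f) ** 2
        if var_between > best_var:
            best_var, best_k = var_between, k
    return lo + (best_k + 0.5) * (hi - lo) / bins
```

Pixels above the returned threshold would be labeled "change" in the binary change map.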

Author(s): 

SHOKRI M. | SAHEBI M.R.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    127-138
Measures: 
  • Citations: 

    0
  • Views: 

    2028
  • Downloads: 

    0
Abstract: 

Satellite remote sensing (RS) gathers data with different spatial and spectral characteristics of objects or phenomena from a distance, each of which represents part of the object's properties. Although multispectral data give rich spectral information about objects, they are significantly influenced by environmental factors such as smoke, fog, clouds and sunlight. In contrast to optical sensors, synthetic aperture radar sensors have the ability to acquire data in all weather conditions, day and night. Synthetic Aperture Radar (SAR) data can highlight the structural and textural details in the image, being sensitive to the terrain components of shape, direction, roughness, and moisture. So optical data provide detailed spectral information useful for discriminating between surface cover types, while radar imagery highlights the structural detail in the image. Therefore, image fusion techniques can combine the different properties of optical images and SAR data, giving a complete view of the target and yielding higher accuracy and reliability of results. The curvelet transform is more suitable than many other transforms for the analysis of curved edges, offering high approximation precision and a good description of scattering and directionality. In this paper, the SAR and optical images are transformed into curvelet space using the curvelet transform; the weighted average method is then used for fusion in curvelet space, and finally the fused image is obtained by applying the inverse curvelet transform. The case study is the city of Shiraz, whose data were used for the implementation of the proposed method. Statistical methods and classification were used to evaluate the fused images, and the IHS and wavelet transform methods were used for comparison with the proposed method. Statistical parameters, including standard deviation, entropy, spatial frequency, correlation and the image quality index, show the improvement of the images fused by the proposed method over the other methods. Considering that classification accuracy depends on the image's spatial and spectral information, the images were classified to evaluate the effectiveness of the fusion on spatial and spectral resolution. Classifying the input optical image and the fused image, overall accuracy improved by 4 percent and the kappa coefficient increased by 0.05 compared to the input image. The results show the suitability of the proposed algorithm for fusing SAR and optical images.
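The fusion rule itself is a weighted average applied coefficient-by-coefficient in the transform domain. A curvelet implementation is beyond the scope of a sketch, so in the fragment below the input lists simply stand in for the curvelet coefficients of the two co-registered images; the function name and weights are illustrative assumptions.

```python
def fuse_coefficients(coeffs_sar, coeffs_opt, w_sar=0.5, w_opt=0.5):
    """Weighted-average fusion rule: each fused coefficient is a
    convex combination of the corresponding SAR and optical
    coefficients. In the paper this runs in the curvelet domain,
    between the forward and inverse curvelet transforms."""
    assert abs(w_sar + w_opt - 1.0) < 1e-9  # weights must sum to 1
    return [w_sar * a + w_opt * b for a, b in zip(coeffs_sar, coeffs_opt)]
```

The full pipeline would be: forward curvelet transform of both images, `fuse_coefficients` on each coefficient band, then the inverse transform to obtain the fused image.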

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    139-151
Measures: 
  • Citations: 

    0
  • Views: 

    759
  • Downloads: 

    0
Abstract: 

More than half of the world's population lives in areas where the water crisis and rainfall shortages are serious. To cope with these crises, climatology researchers require rainfall data, pattern analysis, and rainfall estimation in order to manage these conditions. Iran is located in the world's dry belt, which is characterized by low rainfall and high evapotranspiration. The average rainfall in the country is 250 mm and is subject to severe spatial and temporal changes. A variety of spatial factors, such as position, elevation, and topographic characteristics like slope and aspect, are the most effective factors in the spatial variation of rainfall; each of these characteristics is able to shape the pattern of precipitation behavior. Therefore, the aim of this paper is to develop a comprehensive mechanism for describing this geographical problem with the help of various earth science tools and techniques, considering the various environmental and spatial factors affecting rainfall. The basin of Lake Urmia is one of the most important and most valuable aquatic ecosystems in Iran and the world. The ecosystem of this lake is a typical example of a closed basin in which all runoff drains. The catchment area of Lake Urmia was selected as a case study due to the critical situation it has been facing in recent years. At first, the synoptic data of 21 stations of the Meteorological Organization and the Ministry of Energy were used. These data were collected over a 63-year statistical period, from 1951 to 2014, and the annual precipitation rates of the stations were then calculated as the dependent variable based on these statistics. In addition, the longitude, latitude, elevation and slope of each station, as well as the average annual wind speed, were extracted as independent variables. 
First, initial statistical tests (rainfall data normalization at stations, trend review and deletion) were performed. Then, a combination of traditional and statistical methods was reviewed and examined; as a result, the ordinary kriging method was selected, with an RMS of 4.15. Then, with the help of different analytical and spatial methods, including cluster analysis, the southern and southwestern regions of the lake were identified as hot spots with high rainfall, while cold spots were identified in the northern and central parts of the Lake Urmia basin, along with two spots of low rainfall concentration in the Sarab and Salmas areas. Finally, in order to model the spatial relationships, a general regression was fitted to rainfall, and latitude was obtained as the most effective independent variable, while longitude and wind speed were detected as the least effective variables on precipitation in the Lake Urmia basin. The results of this paper show that geostatistical methods are more accurate than traditional methods for the Lake Urmia region.
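The regression step, relating rainfall to a spatial variable such as latitude, reduces to ordinary least squares. A single-variable sketch is given below; the paper fits a general regression over several independent variables at once, so this is a simplified, illustrative form.

```python
def linear_fit(x, y):
    """Ordinary least-squares fit of y ~ a*x + b, e.g. annual
    rainfall (y) against station latitude (x), the variable the
    study found most effective. Returns the slope a and intercept b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    b = my - a * mx
    return a, b
```

Extending this to several regressors (longitude, elevation, slope, wind speed) means solving the normal equations of a multiple regression instead.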

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    153-165
Measures: 
  • Citations: 

    0
  • Views: 

    766
  • Downloads: 

    0
Abstract: 

Land surface temperature (LST) is among the most important indices in studies related to the earth's surface, such as conservation and the exchange of energy and water between the land surface and the atmosphere. The main goal of this study is to present an algorithm to estimate the land surface temperature using data from the Hyperspectral Thermal Emission Spectrometer (HYTES). The HYTES sensor has 256 bands in the range of 7.4-12 micrometers; bands in the range of 7.4-8 micrometers are removed due to strong water vapor absorption in this spectral region, and bands above 11.5 micrometers are removed due to calibration issues, leaving 202 bands. In this study, we select optimal bands from these 202 bands using a genetic algorithm and then obtain the land surface temperature from those bands through a split window algorithm. We need to define a cost function and appropriate initial parameters for the genetic algorithm to select the optimal bands. In this research, the cost function minimizes the temperature difference between the sensor's thermal product and the land surface temperature obtained with the split window algorithm; the number of variables in each gene equals the number of bands (202), and the initial population is 80. The split window coefficients are obtained with the least squares method, and with these coefficients and the selected bands, the land surface temperature is obtained for two different datasets through the split window algorithm. 
In this research, a small part of the first dataset was used as training data for the genetic algorithm to obtain the split window coefficients and the optimal bands, which were then used to calculate the land surface temperature for the rest of the data. In the split window algorithm, in addition to the algorithm coefficients, we need the emissivity of the bands used; in this study, we used the emissivity product of the HYTES sensor. Among the 202 bands, 110 bands were selected using the genetic algorithm. Using these 110 bands, the split window coefficients and the band emissivities, the land surface temperature was calculated for the two datasets and evaluated. Finally, the HYTES thermal product was used to evaluate our proposed method and indicate its accuracy. The temperature obtained using the proposed algorithm for both datasets was evaluated against the reference data (thermal product), and the RMSE values are 0.025 and 0.999 for the first and second datasets, respectively. Therefore, according to the obtained errors, we can argue that the proposed algorithm is an appropriate method for obtaining the land surface temperature from HYTES data.
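The band-selection step can be sketched as a binary genetic algorithm: each gene is one flag per band, and the cost function scores a band subset (in the paper, the temperature difference between the split-window estimate and the reference thermal product). The operators, rates and population size below are illustrative assumptions (the paper's population is 80 over 202 bands).

```python
import random

def genetic_band_selection(n_bands, cost, pop_size=20, generations=50,
                           mutation_rate=0.02, rng=None):
    """Minimal binary GA for band selection. Each individual is a
    list of n_bands booleans (band selected or not); `cost` maps an
    individual to a value to be minimized."""
    rng = rng or random.Random()
    pop = [[rng.random() < 0.5 for _ in range(n_bands)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[:pop_size // 2]          # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            p1, p2 = rng.sample(survivors, 2)
            cut = rng.randrange(1, n_bands)      # one-point crossover
            child = p1[:cut] + p2[cut:]
            child = [(not g) if rng.random() < mutation_rate else g
                     for g in child]             # bit-flip mutation
            children.append(child)
        pop = survivors + children
    return min(pop, key=cost)
```

In the paper's setting, `cost` would run the split-window retrieval on the selected bands and return the RMSE against the thermal product over the training subset.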

Author(s): 

HEMMATI Z. | EBADI H. | HOSSEINI NAVEH AHMADABADIAN A. | ESMAEILI F.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    167-180
Measures: 
  • Citations: 

    0
  • Views: 

    1359
  • Downloads: 

    0
Abstract: 

3D models of ancient sites are produced and utilized for different purposes such as research, restoration and renovation of valuable ancient objects, creation of virtual museums and documentation of ancient sites. Nowadays, Geomatics techniques, as the most efficient methods for geometrical measurements, analysis and interpretation concerning issues in cultural heritage, are applied to produce geometric and thematic information. Buildings are susceptible to change and damage through the passage of time due to natural agents and disasters such as rain, wind, earthquakes and floods, or damage imposed by human beings. The characteristics of these changes bear special importance in buildings with ancient value. The first step in creating 3D models, providing information about ancient monuments and documenting them is to have accurate maps of their present condition, to which other information, such as the type of construction materials, can be added. Special techniques should be employed to provide maps with high accuracy while spending the least expense and time for continuous map production. Changes are recognized by comparing maps from different time spans, based on which due decisions can be made. To provide these maps, many different techniques have long been used, such as traditional surveying (using the usual total stations), photogrammetry (especially close-range photogrammetry) and laser scanners. In comparison to other techniques, photogrammetry has unique characteristics in the documentation of ancient sites. 
No need for contact with the object, the possibility of obtaining texture and color information and the compliance of these characteristics with the 3D output data, the high flexibility of this method in achieving the desired measurement accuracy and its potential to reach accuracies at the micrometer level, as well as the capability of low-expense observations and image archiving, are parameters that have given rise to the greater usage of photogrammetric techniques in the modelling of ancient sites. Yet, the usual techniques of photogrammetry sometimes have limitations, for example, in rare cases of inaccessible features. As a result, the requirement to obtain accurate information about features, especially in dangerous and remote areas, and also the necessity to economize expense and time, have led to the usage of UAV-based photogrammetry. UAV-based photogrammetry is a combination of aerial photogrammetry and close-range photogrammetry in which a sensor, which can be a metric or non-metric camera or any other data collection tool, acquires images from low height. Access to imaging stations with appropriate angles toward all parts of a feature and the low flight height result in images with high spatial resolution, which consequently bring about more accurate and precise 3D information. Different categorizations have been presented for UAVs based on different criteria and applications, among them flexibility, fixed or rotating blades or wings, and the source of energy. Given the application of this research, namely ancient sites, it is obvious at first glance that UAVs with fixed wings, fixed or semi-flexible parachutes, and wingless platforms are practically of no use due to their low flexibility in flying and imaging, as well as the limited flight space. 
Therefore, low expense, high flexibility and appropriate flight time have made quadrotors the most suitable option among all systems with rotating blades for this research. Low production expense, no need for airports and long runways, and better maneuverability are some particular characteristics of UAVs. There are also factors that limit the function of UAVs, such as instability while flying due to light weight, a limited power supply, limitations in carrying bigger and more accurate measuring tools, and the need for longer time for imaging, processing and calculations. Fortunately, all these limitations can be mitigated to some extent by an appropriate network design. In spite of all the aforementioned capabilities of UAV systems, no specific standards have been designed for their use. Therefore, it is clearly necessary to investigate the feasibility of using these systems and to design appropriate networks that locate them at proper points for photogrammetric imaging. Thus, the need for high accuracy in UAV-based photogrammetry for the documentation and restoration of ancient sites necessitates more attention to the network geometry to achieve the desired accuracy. This article presents an appropriate method for selecting optimal UAV imaging locations. The proposed method is based on an ellipsoid fitted to the object, the principles and constraints of photogrammetric network design, and finally the exploration of hidden areas. The results from images taken of a cultural heritage site showed that the number of images was reduced almost 4 times by applying network design principles. Consequently, the speed of 3D modelling would be increased almost eight times by applying the proposed method.
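The idea of placing imaging stations on an ellipsoid fitted around the object can be sketched geometrically: enlarge the fitted ellipsoid by a standoff factor and sample candidate viewpoints on rings of constant polar angle. The spacing parameters and function name below are illustrative assumptions, not the paper's network design procedure, which also applies photogrammetric constraints and hidden-area analysis.

```python
import math

def candidate_stations(a, b, c, n_rings=3, per_ring=8, standoff=1.5):
    """Generate candidate UAV imaging stations on an ellipsoid with
    semi-axes (a, b, c) scaled outward by `standoff`. Stations are
    sampled on n_rings rings of constant polar angle, per_ring
    azimuths each, all looking toward the object at the origin."""
    stations = []
    for i in range(1, n_rings + 1):
        phi = math.pi * i / (n_rings + 1)          # polar angle
        for j in range(per_ring):
            theta = 2 * math.pi * j / per_ring     # azimuth
            x = standoff * a * math.sin(phi) * math.cos(theta)
            y = standoff * b * math.sin(phi) * math.sin(theta)
            z = standoff * c * math.cos(phi)
            stations.append((x, y, z))
    return stations
```

A network design step would then prune this candidate set against visibility, accuracy and overlap constraints, which is what reduces the image count reported in the abstract.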

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    181-199
Measures: 
  • Citations: 

    0
  • Views: 

    562
  • Downloads: 

    0
Abstract: 

Vehicle detection is a significant step for many applications such as automatic monitoring of vehicles, traffic control, and transportation. In these systems, a fixed camera is usually installed in places such as highways, streets and parking lots. Images captured by the camera are first processed, and then the vehicles in them are identified. Generally, in these systems, cars are the objects of interest in a video frame. Therefore, providing methods that can increase the overall accuracy and efficiency of vehicle detection is very important. Several methods have been proposed by researchers to detect cars in images. Frame subtraction is one of the best known, in which every change between the current frame and the background image is regarded as a moving car. Since a car's shadow is also identified as changed pixels, the shadow remains connected to the car, which causes two problems in these systems: firstly, the actual shape and appearance of the car can disappear because of this conjunction, making it difficult to detect the location of the vehicle and to segment the image by car; secondly, the shadows of one or more cars may overlap, and all of them are identified as a single vehicle. These items cause many problems in monitoring and traffic control systems, such as car counting, vehicle location estimation and behaviour analysis, so shadows need to be identified. To solve this problem, Karami recently developed a method based on region growing to remove the shadow of the vehicle: first the seed point is found, and then, based on the Euclidean distance, the distance from the seed point to each pixel of the image in the feature space is used as a weight; these weights are then used to separate the shadow of the car. The main problems with that approach are its complexity and computational cost, low precision, and a strong dependency on grey-level changes. 
Therefore, in this research we solve the above problems by improving the background subtraction method, weighting the pixels of the image with a combination of several texture features to remove the shadows of vehicles. To do this, each pixel in the background image and the current frame is weighted based on a combination of textural features. This makes the shadow and background (asphalt) pixels take very close values, so they are removed in the subtraction. The proposed method is evaluated on four datasets based on the OA, HR, FAR, MODP and MOTP criteria. Using these criteria, the proposed method is compared with the region growing method, the median and averaging methods, and several other shadow detection methods. In general, according to the results, the proposed method outperforms the other algorithms, with 2 to 15 percent improvement in the mentioned criteria. To continue and complete the research in the field of vehicle detection and shadow removal, the following are recommended: 1. using several different datasets, obtained in different situations and at different times; 2. using other non-statistical colour and texture features; 3. detecting vehicles based on their geometric structure.
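The core subtraction step can be sketched as follows: each pixel of the frame and the background is scaled by a texture-derived weight before differencing, so shadow and asphalt pixels take near-equal values and cancel. The flat-list representation, weight source and threshold value are illustrative assumptions, not the paper's exact feature combination.

```python
def weighted_subtraction(frame, background, weights, threshold=0.2):
    """Pixel-wise weighted background subtraction. `frame`,
    `background` and `weights` are flat, equal-length lists of pixel
    values; the returned mask is True where the weighted difference
    exceeds the threshold (i.e. a moving object, shadow suppressed)."""
    mask = []
    for f, b, w in zip(frame, background, weights):
        mask.append(abs(w * f - w * b) > threshold)
    return mask
```

In the paper the weights come from a combination of texture features computed around each pixel; here they are simply passed in.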

Author(s): 

ZAREEI A. | EMAMI H.

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    201-214
Measures: 
  • Citations: 

    0
  • Views: 

    822
  • Downloads: 

    0
Abstract: 

Lake Urmia is the 20th largest lake and the second largest hypersaline lake (before September 2010) in the world, with an area varying from 5200–6000 km2 in the 20th century. It is also the largest inland body of salt water in the Middle East. Nevertheless, the lake has been in a critical situation in recent years due to decreasing surface water and increasing salinity. In this study, the surface area changes of Lake Urmia, Iran were investigated. The lake is the habitat of a unique bisexual Artemia (a species of brine shrimp), and hosts more than 20,000 pairs of Flamingo and about 200–500 pairs of White Pelican every winter. Lake Urmia forms a rare and important ecologic, economic and geo-tourism zone and was recognized as a Biosphere Reserve by the United Nations Educational, Scientific and Cultural Organization (UNESCO) in 1975. In addition, the lake helps moderate the temperature and humidity of the region, providing a suitable place for agricultural activities. Assessment and monitoring of lake water level changes, in order to protect lakes in terms of their importance, nature, and location, are considered at the national, regional and global levels. In recent years, activities have been carried out to improve the water level of the lake, but unfortunately, due to the decrease in its water level, the lake is in a critical state. Therefore, it is necessary that the rise or fall of the Urmia Lake water level and its impact on the environment be monitored on a regular basis, so that correct planning and decision-making can effectively improve its situation. 
In this research, a new model for forecasting the recovery period (compared to 14 years ago) of the Urmia Lake water level is developed, and the spatiotemporal changes of its stabilization are assessed using multi-temporal Landsat 5, 7 and 8 imagery over the period 2002 to 2016. Two main factors are considered in the proposed model: the average annual rainfall over the catchment area and the recovery activities taking place in recent years. To this end, to assess the spatiotemporal changes of the Urmia Lake water level, four different water body extraction indices were used: the water ratio index (WRI), the automated water extraction index (AWEI), the normalized difference water index (NDWI) and the normalized difference vegetation index (NDVI). The performance of each of the four indices was compared with a base map and the error of each index was determined; NDWI had the lowest error of the four. As a result, the proposed model was built on the results of this index in three different states, taking different weights for the previously mentioned factors into account. The results showed that a marked reduction (78%) of the Urmia Lake water level occurred in the period 2002 to 2014 compared to 2002. In contrast, from 2014 to 2016, the Urmia Lake water level increased by 57.33% and reached a relatively stable condition. This relative stability is fragile and depends on the two main factors previously mentioned. In addition, the results of the proposed model in the three different states show that, based on the increasing trend in the second period and taking the different weights of the main factors into consideration, it will take at least 11 years (at best), 18 years (the status quo) or at most 49 years (reduced recovery activity) for the Urmia Lake water level to return to its 2002 level and achieve a stable condition. 
The proposed model is a suitable method and can be used with any number of recovery activity factors in the future.
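Two of the four indices compared in the study, NDWI and NDVI, are simple normalized band ratios and can be computed per pixel as below (standard formulations; the function names are illustrative, and in practice the inputs are whole Landsat band arrays rather than scalars).

```python
def ndwi(green, nir):
    """Normalized Difference Water Index from green and near-infrared
    reflectance; water pixels give positive values. This is the index
    the study found most accurate for delineating the lake surface."""
    return (green - nir) / (green + nir)

def ndvi(red, nir):
    """Normalized Difference Vegetation Index from red and NIR
    reflectance; one of the four indices compared in the study."""
    return (nir - red) / (nir + red)
```

Thresholding the NDWI map (e.g. NDWI > 0 for water) yields the lake mask whose area is tracked across the 2002–2016 image series.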

Issue Info: 
  • Year: 

    2017
  • Volume: 

    7
  • Issue: 

    2
  • Pages: 

    215-230
Measures: 
  • Citations: 

    0
  • Views: 

    1012
  • Downloads: 

    0
Abstract: 

Social network-based services are among the services that have been welcomed by smart phone users. Location recommendation is a popular service in social network. This service suggested the unvisited places to the users based on the is based on the users’ visiting histories and location related information such as location categories. The existing methods which utilize check-in data and category information, only consider temporal and spatial information. Since the influence of user’s social relations can play an important role in location recommendation since it can improve the algorithm performance. In this paper, a PCLRSGT method is developed that consider temporal, spatial and social components. The spatial component models a user’s probability of checking in to a location. The spatial model obtains user’s home location by using his check-in dataset. It calculates the distance from the location to the user’s home. The spatial PDF filters out those locations that are far away from a user’s home and are not in the users’ interest. These locations should not be recommended to the user. The temporal component employs similar users’ check-in probabilities to model a user’s probability of checking in to a location. It constructs users’ temporal curves to represent a user’s periodic check-in behavior. A User Temporal Curve U for category j is defined as a sequence of probability values. The probability value is denoted as that means probability of checking into category j in hour m (1≤m≤24).The probability sequence for user u into category is denoted as. Since the distances between users temporal curves are used to find users’ similarity, in this paper the distances are measured by curve coupling method. Temporal similarity is used to predict user’s probability of checking in to a location. The periodic behavior of a certain user is predicted by a weighted summation of the periodic behaviors of his similar users. 
If two users are more similar in terms of temporal similarity, they influence each other's periodic check-in behavior more. The social component models a user’s probability of checking in to a location by considering similarity between user and his friends in terms of social connection, periodic check-in behavior and check-in activities into locations. The social influence weight between two friends is concluded based on all three similarities between a user and his friends in terms of social connection, periodic check-in behavior and check-in activities into locations. Therefore, the social influence weight between two friends is calculated by combining the three above factors. The social influence weight between two friends is used to predict user’s probability of checking in to a location. The dataset employed in this paper was collected from Gowalla. Gowalla was one of the popular location based social network launched in 2007 and closed in 2012..The data set contains 1000 users and 15905 check-in records. A check-in record indicates a user has visited a location at a given time. It contains the user ID, location ID, and time stamp of the check-in. To evaluate the performance of the recommendation algorithm, the dataset was divided into training and testing datasets. So, one of the check-in records of each user was randomly moved to the testing dataset. The rest of the dataset formed the training dataset. As the result, the testing dataset contained 1000 check-in records, and the training dataset contained 14905 check-in records. In this paper, Precision and Recall were used to evaluate the performance of the location recommendation algorithm, which are widely accepted as the performance measurement for recommender systems. 
The performance of the proposed algorithm is compared with two existing location recommendation methods: Probabilistic Category-based Location Recommendation (PCLR) and Probabilistic Category-based Location Recommendation Utilizing Temporal Influence and Geographical Influence (sPCLR). Performance is reported for top-N recommendation lists on the testing set (N = 1, 2, 5, 10, 15, and 20). The experimental results show that PCLRSGT outperformed the other algorithms by about 10 to 15 percent in terms of both precision and recall, confirming that using social influence helps to improve location recommendation. It can also be observed that the precision value decreases as the number of recommendations increases. There are two possible reasons for this: either the number of correct recommendations decreases as the number of recommendations increases, or the number of correct recommendations increases at a lower rate than the number of recommendations. The recall values reveal which explanation holds: recall increases as the number of recommendations increases, and since the number of correct answers is constant, it can be concluded that the number of correct recommendations increases as the number of recommendations grows.
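The precision@N and recall@N measures used in this evaluation can be sketched as below. Under the leave-one-out split described earlier, each user has exactly one relevant (held-out) location, so recall@N is 1 when the held-out location appears in the top-N list and 0 otherwise; the location IDs in the usage example are invented.

```python
def precision_recall_at_n(recommended, relevant, n):
    """Standard top-N evaluation: precision@N = hits / N,
    recall@N = hits / |relevant|, where hits counts relevant
    items appearing in the top-N recommendation list."""
    top_n = recommended[:n]
    hits = len(set(top_n) & set(relevant))
    precision = hits / n
    recall = hits / len(relevant)
    return precision, recall
```

This also explains the trend reported above: as N grows, the hit count can only stay flat or rise (so recall never decreases), while the denominator of precision grows, so precision tends to fall.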
