Archive

Author(s): EMAMI H. | MOJARADI B. | SAFARI A.

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 1-17
Measures:
  • Citations: 0
  • Views: 1361
  • Downloads: 0
Abstract: 

Land surface temperature (LST) is one of the key parameters in a wide range of geoscience and environmental studies. Remote sensing technology can monitor the spatial and temporal variations of this quantity over large areas, but evaluating the accuracy of the retrieved parameter is among the most challenging issues. Because temperature changes rapidly in space and time, and because of the mismatch between the spatial scales of satellite and ground-based sensors, validating LST with ground-based temperature data is difficult. Validation is the process of independently assessing the uncertainty of the data derived from the system outputs; without validation, no method, algorithm, or parameter derived from remotely sensed data can be used with confidence. As LSTs retrieved from satellite TIR data involve corrections to the satellite-observed radiances to account for atmospheric effects and non-unity LSEs, it is necessary to assess the accuracy of the retrieval to provide potential LST users with reliable information regarding the quality of the LST product. However, validation of satellite-derived products is often a challenge in itself. This is true for sensors with coarse resolution, and even more so for the thermal bands over land. Although many algorithms have been proposed over recent decades to retrieve LST from satellite TIR data, validation of satellite-derived LSTs remains far behind the development of new retrieval algorithms, for several reasons: (i) it is difficult to obtain representative LST data at the scale of a pixel, which usually covers an area of several square kilometers, due to the spatial variability of the landscape and the large spatial-temporal variations in the LST itself; (ii) temporal sampling is difficult, since surface temperature has a very high temporal frequency: in a matter of seconds, the surface skin temperature may change by several degrees due to wind, shadow, etc., and while ground radiometers often integrate the temperature over several minutes, the satellite sensor acquires its data in a fraction of a second; (iii) it is difficult to conduct in situ LST measurements at the local scale, because LST, LSE, and the atmospheric and environmental radiances are coupled, so to obtain LST one has to measure the atmospheric and environmental radiances and the LSE simultaneously, which is time-consuming and hard to monitor. Despite these problems and difficulties, many studies in recent years have validated the LSTs derived from different satellite sensors. The methods used to validate LST values retrieved from space may be roughly grouped into four categories: the temperature-based (T-based) method, the radiance-based (R-based) method, the cross-validation method, and the indirect validation method. Among these, owing to the development of thermal sensors, cross-validation is the most common way to evaluate the accuracy of surface temperature. Due to the large spatial and temporal variations in the LST, geographic coordinate matching, time matching, and view zenith angle matching are basic requirements of cross-validation. In this paper, an alternative scaling method is proposed for cross-validation between the LST products of the LDCM and MODIS sensors.
Because of its wide coverage and because each MODIS sensor (Terra and Aqua) acquires the LST product twice per day, this product was selected as the reference temperature. The MODIS LST products have been validated against in situ measurements by various methods in more than 50 clear-sky cases, with accuracies within 1 K for both Terra and Aqua. For the first examined LDCM scene, over highly homogeneous areas, the mean difference and mean absolute difference were 0.6 K and 1.63 K, respectively; for the second examined LDCM scene, the corresponding measures were 0.94 K and 1.27 K. Overall, the experimental results on the two LDCM scenes demonstrate that the proposed cross-validation method is not only a robust and accurate LST validation approach but is also applicable to any thermal sensor at any time and place.
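As a rough illustration of the statistics reported above, here is a minimal sketch of the comparison step, assuming the LDCM-derived LST has already been aggregated to the MODIS grid and matched in time and view zenith angle; the arrays and cloud mask are toy placeholders, not the paper's actual data handling.

```python
import numpy as np

def cross_validate_lst(lst_ldcm_1km, lst_modis_1km, cloud_mask):
    """Mean difference (bias) and mean absolute difference, in kelvin,
    over cloud-free pixels of two co-registered 1 km LST grids."""
    valid = ~cloud_mask & np.isfinite(lst_ldcm_1km) & np.isfinite(lst_modis_1km)
    diff = lst_ldcm_1km[valid] - lst_modis_1km[valid]
    return diff.mean(), np.abs(diff).mean()

# toy example: 4x4 grids with one cloudy pixel and a 0.6 K bias by construction
rng = np.random.default_rng(0)
modis = 300.0 + rng.normal(0, 1.0, (4, 4))
ldcm = modis + 0.6 + rng.normal(0, 1.0, (4, 4))
clouds = np.zeros((4, 4), dtype=bool)
clouds[0, 0] = True
md, mad = cross_validate_lst(ldcm, modis, clouds)
print(f"MD = {md:.2f} K, MAD = {mad:.2f} K")
```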

Author(s): ESHGHI M. | ALESHEIKH A.A.

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 19-32
Measures:
  • Citations: 0
  • Views: 1032
  • Downloads: 0
Abstract: 

Quality assessment is one of the most important challenges facing Volunteered Geographic Information (VGI). Nowadays, assessing VGI data quality mainly consists of documenting and measuring errors, with little concern for satisfying the various needs of users. Despite the widespread acceptance of "fitness for use" as one of the key quality elements of VGI data, it is not considered in practice when VGI data are produced and shared. The aim of this paper is therefore to present a conceptual model that treats this indicator as a key to better data storage. Geographical features, and the data acquired about them, have different applications; a road, for instance, plays roles in both navigation and emergency management applications. Accordingly, different users will consider different applications of the data at the time of use, depending on their needs, and different applications may likewise be considered when the data are produced and stored. Consequently, data acquired about an object may be stored more than once with different attributes (labels related to applications), or stored without consideration of other possible applications, which leads to redundancy and incomplete attributes, respectively. Therefore, in this paper we design a conceptual model, inspired by CityGML concepts, that prepares the storage of this type of data to be as comprehensive as possible, without inconsistency, so that the data can be used better and more optimally. This is done by considering a set of labels representing the roles of a specific data item in different applications. With this approach, if a user considers only a single application of a data item at storage time, the other applications are automatically detected and attached to the data. For example, a single linear object could be considered a primary road, an arterial road, and a highway. As a result, data storage is improved, which can lead to better processing and better presentation of data in response to users' queries, needs, and purposes. To implement the proposed conceptual model, Zhao's semantic similarity measurement algorithm is used. OpenStreetMap is used as the VGI data source: linear data comprising the streets and roads of districts 2, 5, and 9 of the Tehran municipality are selected. ArcObjects programming in C# is used, SQL Server serves as the DBMS, and ArcMap is used to present the results. Following the proposed conceptual model, existing data as well as newly imported data are processed and stored, so many applications can be attached to a single data item, resulting in more complete data. To evaluate the performance of the presented model, database responses to user queries are analyzed and compared across two databases: one with the ordinary storage scheme and one with the new storage scheme presented in this paper. The results show that, compared with the ordinary database, the new database's responses to a group of users are (1) more available across different queries and (2) more accurate with respect to the users' needs and purposes. The improvement in the performance of the new database, measured by the number of correct answers to 15 queries from 10 different users, exceeds 50 percent.
As a recommendation for future work, the proposed model can be used to create an automatic filter at the time of data entry.
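A minimal sketch of the label-enrichment idea described above, assuming a hypothetical similarity table standing in for Zhao's semantic similarity measure; the label names and the threshold are illustrative only.

```python
# Candidate application labels a linear road feature might carry.
CANDIDATE_LABELS = ["primary_road", "arterial_road", "highway", "evacuation_route"]

# Hypothetical pairwise similarities (would come from the semantic measure).
SIMILARITY = {
    ("primary_road", "arterial_road"): 0.82,
    ("primary_road", "highway"): 0.74,
    ("primary_road", "evacuation_route"): 0.41,
}

def enrich_labels(stored_label: str, threshold: float = 0.7) -> list[str]:
    """Attach every candidate label semantically close to the stored one."""
    labels = [stored_label]
    for cand in CANDIDATE_LABELS:
        if cand == stored_label:
            continue
        sim = SIMILARITY.get((stored_label, cand)) or SIMILARITY.get((cand, stored_label), 0.0)
        if sim >= threshold:
            labels.append(cand)
    return labels

print(enrich_labels("primary_road"))  # ['primary_road', 'arterial_road', 'highway']
```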

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 33-46
Measures:
  • Citations: 0
  • Views: 1439
  • Downloads: 0
Abstract: 

These days, geospatial data plays an important role in every organization and in ordinary people's lives; such data are needed for almost every activity. A common belief in the geosciences is that more than 80 percent of all data in any organization has a spatial component, produced by the public sector, commercial companies, and individuals (volunteered geospatial data). In addition to the data already available, the development of data-capturing technologies raises the rate of data production every day. Information overload remains a major challenge on the current web, and it creates correspondingly large challenges in the world of geospatial information systems. Data and services on the web are beneficial only when users can discover them efficiently. Difficulties such as the ambiguity of the keywords used for resource description by data providers and data users, the different contexts of data providers and producers, the trouble users have with spatial portal interfaces and spatial resources, the number of clicks required to reach a search panel, the variety of discoverability criteria on search panels, and the irregular or insufficient descriptions of the spatial resources listed on result panels can cause users to waste a great deal of time investigating interfaces and resources, leaving geoportal users bewildered when discovering geospatial resources. A geoportal is a kind of portal used mostly for geospatial resources; as a gateway to an SDI, it helps expert and non-expert users discover geo-information services, and it offers many functionalities such as address identification, measuring, map viewing, and printing. Most approaches to these problems emphasize keyword-based enhancement of data discoverability, so the problems persist. The aim of this research is to enhance geoportal functionality by integrating a new information retrieval procedure, a recommender system, into geoportals. These days, most web portals and information sources recommend items to users that may attract their attention and satisfy their expectations by personalizing content. Personalized systems such as recommender systems help users find content, products, and services (movies, music, news, articles, etc.) using processed and combined ratings (like/dislike) collected from other users' interactions on the website. Recommender systems are a new paradigm in information discovery and presentation, quite different from existing data retrieval procedures. We offer a geoportal architecture with an embedded recommender component, aiming to integrate a recommender system into the geoportal. In this way, we can help overcome the existing problems of geoportals by recommending related items to users based on user profiles and information-item metadata and by extracting their similarities. We apply ontological similarity as a semantic tool for calculating the relatedness of a geospatial resource's metadata to a geoportal user. Our semantic recommender system uses ontological similarity and Singular Value Decomposition (SVD) to predict related items and to overcome the cold-start problem. We developed a geoportal based on the Road Management and Transportation Organization geodatabase and a purpose-built ontology.
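A minimal sketch of the hybrid prediction idea, assuming a rank-k SVD of the user-item rating matrix blended with an ontological similarity score for cold-start items; the blend weight and similarity values are assumptions, not the paper's formula.

```python
import numpy as np

def predict_scores(ratings, onto_sim, alpha=0.7, k=2):
    """Blend a rank-k SVD reconstruction of the rating matrix with an
    ontological similarity score per item (helps unrated, cold-start items).
    ratings: (n_users, n_items), 0 = unrated; onto_sim: (n_items,)."""
    U, s, Vt = np.linalg.svd(ratings, full_matrices=False)
    approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]   # latent-factor estimate
    return alpha * approx + (1 - alpha) * onto_sim   # broadcast over users

ratings = np.array([[5., 3., 0., 1.],
                    [4., 0., 0., 1.],
                    [1., 1., 0., 5.]])
onto_sim = np.array([0.2, 0.4, 0.9, 0.1])  # item 2: no ratings, but semantically close
print(predict_scores(ratings, onto_sim).round(2))
```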

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 47-57
Measures:
  • Citations: 0
  • Views: 556
  • Downloads: 0
Abstract: 

Optimization methods such as Simulated Annealing (SA), the Genetic Algorithm (GA), Ant Colony Optimization (ACO), and Particle Swarm Optimization (PSO) are very popular for solving inverse problems. Gravity inversion is one of the fields of geophysical exploration, used mainly for mining and oil exploration. In this research, an optimization method is applied to the gravity inversion of a sedimentary basin to determine its geometry, and a new method for the 3D inversion of gravity data is developed. In such optimization methods, a forward model is needed before anything else; here, the forward model relates Bouguer gravity anomalies to a combination of prisms. The gravity anomalies of the density interface are generated by equating the material below the interface to a series of juxtaposed rectangular blocks. The stochastic optimization method used to solve the inverse problem is Simulated Annealing. First, by trial and error, the cost function of many candidate solutions is evaluated, and the minimum among them is taken as the initial set of variables, i.e., the starting configuration of the model. The iterative algorithm then runs until the cost function becomes as small as possible, subject to the geophysical constraints of the study area. Finally, the geometry of the prisms, i.e., the depth of each prism, is obtained; this geometry reveals the situation of the anomaly source. For a better inversion, the regional anomaly is accounted for by introducing some unknown constants into the model. The values of the unknown model parameters are drawn from a continuous range obtained from a priori information about the region. The algorithm was first applied to a synthetic problem defined by generating random data over an arbitrary region; by repeatedly applying the forward model, the cost function was driven to its minimum value. To evaluate the success or failure of the algorithm, the contour map of depths used in the forward model was compared with the map produced after inversion, and the discrepancies were examined. The good results on the synthetic problem led to applying the algorithm to real gravity data from the Aman Abad region near Arak. For this purpose, the gravity data must first be reduced to Bouguer gravity anomalies. The iterative algorithm is then run, and the results are compared with a priori information obtained from boreholes drilled around the region, which indicates that the prism depths lie between 70 and 120 meters. The results of the algorithm are compatible with this information and demonstrate its power in solving this kind of inverse problem. It must be mentioned that without this a priori information the results of gravity inversion are not unique, and many different interpretations could be drawn, including interpretations that are not geophysically acceptable. As a measure of the algorithm's performance, the cost function, which quantifies the misfit, is informative; its maximum value in this research was 0.5 mGal in some regions.
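A minimal simulated-annealing sketch for the prism-depth inversion, assuming a caller-supplied forward() that evaluates the prism gravity response; the depth bounds mirror the 70-120 m borehole constraint mentioned above, while the cooling schedule and step size are illustrative choices.

```python
import math
import random

def simulated_annealing(depths, observed, forward, bounds=(70.0, 120.0),
                        t0=10.0, cooling=0.99, n_iter=5000):
    """Minimize the misfit between forward(depths) and observed anomalies."""
    def cost(d):
        g = forward(d)
        return sum((gi - oi) ** 2 for gi, oi in zip(g, observed))

    best = current = list(depths)
    best_cost = current_cost = cost(current)
    t = t0
    for _ in range(n_iter):
        # perturb one prism depth, clipped to the a priori borehole range
        cand = list(current)
        i = random.randrange(len(cand))
        cand[i] = min(max(cand[i] + random.gauss(0, 2.0), bounds[0]), bounds[1])
        c = cost(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if c < current_cost or random.random() < math.exp((current_cost - c) / t):
            current, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling  # geometric cooling schedule
    return best, best_cost
```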

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 59-71
Measures:
  • Citations: 0
  • Views: 817
  • Downloads: 0
Abstract: 

Nowadays, SAR imaging is a well-developed remote sensing technique for providing high-spatial-resolution images of the Earth's surface. Fully polarimetric SAR systems, which transmit and receive two orthogonal polarizations, provide precise information about targets but have some limitations: transmitting two interleaved waves for each scene doubles the transmitted power, the pulse repetition frequency, and the data volume compared with dual-polarized SAR systems, which transmit only one polarization and receive two. Moreover, the swath width, an important parameter for surveillance applications, is half that of dual-polarized systems. Compact polarimetric systems have therefore been proposed. Compact polarimetric systems are dual-polarimetric systems with special transmitted and received polarizations; these combinations let compact polarimetric systems retain many capabilities of fully polarimetric systems. Three candidate compact polarimetry configurations have been proposed: the π/4 mode, the Dual Circular Polarization (DCP) mode, and the Circular Transmit Linear Receive mode. In DCP mode, one antenna transmits right or left circular polarization, and the responses of scatterers are received by two orthogonal right and left circular polarization antennas. The literature has some gaps regarding the reconstruction of fully polarimetric data from dual circular data, so in this paper we explain the reconstruction methods for dual circular polarization data. To reconstruct fully polarimetric data from compact polarimetric data, two approximate assumptions must be made. The first important assumption is reflection symmetry about the radar line of sight, which is essential for reducing the number of unknowns in the reconstruction problem. For the second assumption, Souyris et al. first linked the magnitude of the linear coherence and the cross-polarization ratio through a parameter named N and set this parameter approximately to 4. Subsequently, Nord modified Souyris' algorithm by replacing N with the ratio of double-bounce scattering power to volume scattering power, updating it iteratively during the calculation. These assumptions are not very accurate in reality, so the reconstruction results contain errors. In this research, we observed a high correlation between N and R, the ratio of the cross-polarization power to the sum of the co-polarization powers. Using regression analysis, we linked these two parameters and propose that N is related to R by a rational model whose numerator and denominator are linear polynomials of R. Using several RADARSAT-2 images, we show that the reflection symmetry assumption is not very accurate for DCP mode and that Nord's assumption performs better than Souyris', with Souyris' assumption more appropriate for vegetated areas. We investigated the accuracies of the proposed and previous assumptions and found the proposed assumption more accurate than the other two. Using the new assumption, we propose a modified reconstruction algorithm. The proposed and previous reconstruction algorithms were simulated, and quantitative and qualitative analyses show that the results of the proposed algorithm are closer to the fully polarimetric data, so the proposed method is more accurate.
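A minimal sketch of the proposed N-R link, assuming training samples of N and R taken from fully polarimetric data; the rational model N(R) = (aR + b)/(cR + 1) is fitted by linearized least squares, and the sample values below are synthetic placeholders.

```python
import numpy as np

def fit_rational(R, N):
    """Fit N = (a*R + b) / (c*R + 1). Linearizing gives
    N = a*R + b - c*R*N, which is ordinary least squares."""
    A = np.column_stack([R, np.ones_like(R), -R * N])
    (a, b, c), *_ = np.linalg.lstsq(A, N, rcond=None)
    return a, b, c

def predict_N(R, a, b, c):
    return (a * R + b) / (c * R + 1.0)

R = np.array([0.05, 0.10, 0.20, 0.35, 0.50])  # cross-pol / sum of co-pol powers
N = np.array([6.1, 5.2, 4.4, 3.9, 3.5])       # observed from FP training data
a, b, c = fit_rational(R, N)
print(predict_N(0.25, a, b, c))               # N to use in the reconstruction
```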

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 73-86
Measures:
  • Citations: 0
  • Views: 2570
  • Downloads: 0
Abstract: 

The use of wind power plants is considered one of the most important ways of producing electricity from renewable energy, and the lack of adequate wind power facilities has led governmental agencies to increase their number. In this regard, it is essential to determine suitable locations for wind power equipment. GIS is a useful tool for conveying and presenting information by overlaying geographically referenced data, and integrating geographic information systems (GIS) with multi-criteria decision analysis (MCDA) methods provides a powerful tool for site selection. MCDA methods help select the best among different spatial alternatives based on multiple criteria. The Analytic Network Process (ANP) is one MCDA method; it generalizes the Analytic Hierarchy Process (AHP). The main innovation of the ANP is its network structure, which captures interactions between elements situated in different clusters and dependencies between elements in the same cluster; in effect, the ANP overcomes interdependence and feedback problems between criteria or alternatives. OWA is a multi-criteria evaluation procedure; the OWA-based method allows participants to define a decision strategy on a continuum between pessimistic and optimistic strategies. The OWA operators provide a very general framework for the kinds of local aggregations used in the ANP, so to create a general and powerful decision-making tool, it is natural to merge the two techniques. The objective of this paper is to integrate the Analytic Network Process (ANP) and Ordered Weighted Averaging (OWA) for the site selection of wind power plants. In step 1, an overlay process was performed in GIS with 12 data layers under three main criteria (environmental, technical, and economic); the land suitability map was produced, and 12 candidate sites for wind power plants were selected from it. In step 2, the criteria weights were determined using ANP: a network model capturing the criteria and their relationships was constructed, attributes were pairwise-compared for their relative importance toward their control attribute according to the (outer/inner) interdependencies among attributes, and finally the supermatrix and the attributes' final weights were determined. In step 3, the alternatives' scores were obtained from the ANP-OWA model: for each alternative, the attribute weights obtained from ANP were ordered according to the attribute values, the order weights were computed from the order-weight equation, the ordered values and order weights were multiplied together, and their sum gives the score of the alternative. Different degrees of risk were applied by changing the orness value in the order-weight equation. The ANP-OWA model thus yields different results ranging from optimistic to pessimistic and can generate many maps at different risk levels. Finally, the wind power plant alternatives were prioritized at the different risk levels.
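A minimal sketch of the step-3 scoring, assuming order weights generated from the RIM quantifier Q(r) = r^alpha, whose exponent controls the orness (risk attitude); the criterion values and ANP weights are illustrative placeholders, and the way ANP weights are folded into the order weights follows a common WOWA-style convention rather than the paper's exact equation.

```python
import numpy as np

def owa_score(values, anp_weights, alpha=1.0):
    """ANP-OWA score of one alternative: order criterion values descending,
    derive order weights from Q(r) = r**alpha, fold in the ANP weights,
    and sum the weighted ordered values."""
    v = np.asarray(values, dtype=float)
    w = np.asarray(anp_weights, dtype=float)
    order = np.argsort(v)[::-1]
    v, w = v[order], w[order]
    r = np.arange(1, v.size + 1) / v.size
    order_w = r ** alpha - (r - 1 / v.size) ** alpha  # Q(i/n) - Q((i-1)/n)
    combined = order_w * w
    combined /= combined.sum()
    return float(np.dot(combined, v))

site = [0.8, 0.4, 0.6]          # suitability of one site on three criteria
anp_w = [0.5, 0.2, 0.3]         # ANP-derived criterion weights
for alpha in (0.5, 1.0, 2.0):   # optimistic, neutral, pessimistic
    print(alpha, round(owa_score(site, anp_w, alpha), 3))
```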

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 87-100
Measures:
  • Citations: 0
  • Views: 878
  • Downloads: 0
Abstract: 

Due to population growth and its impact on urban development, and to avoid the resulting problems, studying and predicting urban expansion and its impacts is one of the most important issues in the scientific and environmental communities. Better planning for the future requires a vision of it; predicting urban expansion gives experts and decision makers a good view of the future, so they can plan to meet the city's needs. Tehran is the capital and one of the fastest-growing cities of Iran, so the major purpose of this paper is to examine and display the growth of urbanization in Tehran in recent years and to forecast it. The combined results of an optimized feed-forward artificial neural network with different neighborhood filters were used for this modeling, and for the urban-growth purpose only the building land-use class was treated as urban land use. Landsat images from 1994, 2004, and 2014 were classified with the SVM method in ENVI to produce the urban maps; the 1994 and 2004 images were used for temporal mapping of the trend of changes. Twelve parameters were considered effective for urban expansion in Tehran: the digital elevation model, slope, population density, distance from building blocks, distance from new buildings created between 1994 and 2004, distance from villages, distance from roads, distance from farmland, distance from forests and parks, the number of building pixels in a 3×3 neighborhood, non-proliferation areas, and existing buildings. These factors were extracted from existing maps and remotely sensed data using the ArcGIS and ENVI software. The feed-forward artificial neural network was run with these parameters in two phases: (i) training the network and (ii) prediction. To predict the suitability map with high precision, the network architecture was optimized according to the lowest RMSE over the number of hidden-layer neurons. The predicted suitability map was then combined with different neighborhood filters at different thresholds to produce the urban map of 2014, and a combined method derived from the results of the different neighborhood filters was presented. For accuracy assessment, the model output was validated in two phases: (i) the suitability map reached 92.46% accuracy under the Relative Operating Characteristic (ROC), and (ii) the final 2014 urban map from the combined method was compared with the real 2014 map, yielding a kappa index of agreement of 82.31% and an overall accuracy of 92.22%. Finally, the proposed combined method was used to predict the urban map for 2024. The results indicate that uncontrolled growth is set to occur in the west and southwest of Tehran; accordingly, this paper warns city managers and experts to plan better and to prepare sound strategies for critical situations.
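A minimal sketch of the suitability-modelling step, assuming scikit-learn's MLPRegressor in place of the paper's own network implementation; the random samples stand in for the 12 raster-derived driving factors, and the RMSE here is computed on the training data purely for brevity.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(0)
X = rng.random((500, 12))                 # placeholders for the 12 driving factors
y = (0.6 * X[:, 0] + 0.4 * X[:, 3] > 0.5).astype(float)  # synthetic change label

X = MinMaxScaler().fit_transform(X)
best = None
for n_hidden in (4, 8, 16, 32):           # scan hidden-layer sizes for lowest RMSE
    net = MLPRegressor(hidden_layer_sizes=(n_hidden,), max_iter=2000, random_state=0)
    net.fit(X, y)
    rmse = float(np.sqrt(np.mean((net.predict(X) - y) ** 2)))
    if best is None or rmse < best[1]:
        best = (n_hidden, rmse, net)

n_hidden, rmse, net = best
suitability = net.predict(X)              # per-pixel growth suitability
urban = suitability > 0.5                 # threshold before neighborhood filtering
print(f"best hidden size: {n_hidden}, train RMSE: {rmse:.3f}, "
      f"predicted urban pixels: {int(urban.sum())}")
```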

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 101-115
Measures:
  • Citations: 0
  • Views: 1501
  • Downloads: 0
Abstract: 

The combination of GIS and multi-criteria decision analysis (MCDA) in the web setting has recently gained much attention in participatory (collaborative) spatial decision making. GIS-based MCDA tools offer a wide spectrum of visual and computational decision support capabilities that experts and lay participants can use on the web for the selection, prioritization, and integration of spatial decision options, providing analytical tools and platforms for directly involving the public in the spatial planning process. The overall aim of GIS-based MCDA techniques is to support decision making by selecting the best alternative from a set of feasible alternatives according to multiple criteria. This involves geographical data, decision-maker preferences, and an aggregation function (decision rule) that combines the spatial data and the preferences to evaluate the decision alternatives. The main rationale for integrating GIS and MCDA is that these two research areas complement each other: GIS is a powerful, integrated tool with exclusive abilities for storing, manipulating, analyzing, and visualizing geographically referenced information for decision making, while MCDA provides a rich collection of procedures and algorithms for structuring decision problems and for designing, evaluating, and prioritizing alternatives. Studies on asynchronous spatial MCDA suggest that the space-time distributed environment of the Internet not only provides the flexibility to work at different places and times for the convenience of participants but also offers equal opportunity for participation. In other words, web-based GIS-MCDA provides access to the relevant GIS-MCDA data and tools anyplace (anywhere with Internet access), anytime (24 hours a day, seven days a week), through any PC or handheld device (e.g., PDAs, smartphones) and network (wired or wireless). However, current GIS-MCDA tools do not let participants choose their own criteria, alternatives, and preferences. To address this problem, this study presents a user-oriented GIS-MCDA system in which decision participants can (i) define their own alternatives and derive individual decision maps and (ii) integrate their individual decision maps using a group decision rule to reach a compromise solution. The problem of public toilet site selection in district 1 of Mashhad, Iran, was selected as the case study to demonstrate the proposed approach, with the system serving as a potential decision support tool for collaborative site selection of public toilets. Each decision maker could define his or her alternatives by drawing a set of polygons, evaluate them with an AHP-based weighted linear combination, and eventually develop an individual decision map. The group decision rule, a fuzzy majority approach, was employed to create a group decision map from the individual solution maps; in other words, the individual solutions (individual rankings of public toilet facilities) are aggregated by the group decision rule to obtain a group solution (group rankings of public toilet facilities). Such a tool enhances the level of community participation in spatial planning, and the study's implications can advance public participatory planning and allow more informed and democratic land-use decisions.
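A minimal sketch of the two-stage aggregation, assuming an AHP-style weighted linear combination for each participant's map and a quantifier-guided OWA ("most") for the fuzzy-majority group rule; the quantifier breakpoints (0.3, 0.8) are a common choice assumed here, not taken from the paper.

```python
import numpy as np

def wlc(criteria, weights):
    """AHP-style weighted linear combination: one participant's decision map."""
    return np.tensordot(np.asarray(weights, float), np.asarray(criteria, float), axes=1)

def q_most(r, a=0.3, b=0.8):
    """Linguistic quantifier 'most' as a piecewise-linear RIM quantifier."""
    return np.clip((r - a) / (b - a), 0.0, 1.0)

def fuzzy_majority(individual_maps):
    """Quantifier-guided OWA over participants, per candidate site."""
    stack = np.sort(np.asarray(individual_maps, float), axis=0)[::-1]  # best first
    n = stack.shape[0]
    r = np.arange(1, n + 1) / n
    w = q_most(r) - q_most(r - 1 / n)    # order weights from the quantifier
    return np.tensordot(w, stack, axes=1)

# participant 1 builds a decision map from three criterion layers (four sites)
crit = [[0.9, 0.1, 0.5, 0.3],   # proximity to demand
        [0.8, 0.4, 0.7, 0.6],   # land availability
        [1.0, 0.2, 0.6, 0.4]]   # accessibility
p1 = wlc(crit, [0.5, 0.3, 0.2])
p2 = np.array([0.7, 0.3, 0.8, 0.5])     # other participants' maps (given)
p3 = np.array([0.8, 0.1, 0.7, 0.9])
print(fuzzy_majority([p1, p2, p3]).round(2))  # group ranking of the four sites
```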

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 117-128
Measures:
  • Citations: 0
  • Views: 742
  • Downloads: 0
Abstract: 

Nowadays, with considerable developments in technology, the accessibility and usage of positioning devices have increased. These systems facilitate the generation of position data as streams of spatio-temporal data, so-called trajectories. In recent years, research on trajectory data management has mostly focused on techniques for storage/retrieval, modeling, and data mining and knowledge discovery from trajectory data, emphasizing mainly geometric aspects. However, the emergence of application areas ranging from shipment tracking to geo-social networks motivates detecting the semantic behavior (patterns) of moving objects. Semantic patterns include not only geometric patterns but also knowledge derived from data about the geographical and application domains. Most studies in the field of semantic trajectories propose different ways to add meaning to trajectories, and little work has been carried out on preparing the data for semantic enrichment. In practice, knowledge discovery and semantic enrichment form a computing framework that involves several steps. In this paper, an effective method structured as a multi-step computing framework is proposed, and the results are evaluated. In the proposed methodology, the outputs of the levels are, respectively, cleaned raw data, spatio-temporal trajectories, and structured trajectories. The first level comprises techniques for cleaning the raw data collected from moving objects: algorithms remove outliers and unnecessary data. To identify and eliminate outliers from the data set, a two-step algorithm based on the three-sigma rule is provided. The next step uses compression techniques to detect and remove redundant data. Compression techniques are typically based on linear simplification and use distance functions to approximate the data, but such methods cannot preserve the stop and move points of trajectories. In this study, a compression method based on point velocity is provided, combining two functions: the first uses the instantaneous velocity of points to compute a distance function, and the second is the perpendicular Euclidean distance, in which an assumption of constant acceleration, instead of constant velocity, is used for the interpolation. The implementation results show that the suggested algorithm reduces the number of points while preserving the important points of the trajectories. At the second level of abstraction, spatio-temporal trajectories are derived from the cleaned data; trajectory identification at this stage depends on the type of data and the application. In this work, given the target application of transportation and traffic management, daily trajectories were identified from the cleaned data. The final step in preparing data for semantic enrichment is producing structured trajectories as stop and move episodes. For this task, a velocity-based method is implemented that uses the moving object's data and the environment in which it moves to identify episodes. The proposed algorithm in the semantic enrichment framework was applied to real trajectory data, and the evaluation results show the effectiveness of the method.
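A minimal sketch of the outlier-removal step, assuming speeds computed between consecutive fixes and a single three-sigma pass; the paper's two-step version would recompute the statistics on the kept points and filter again. The track is a synthetic placeholder with one obvious spike.

```python
import math

def clean_three_sigma(points):
    """Drop fixes whose implied speed deviates from the mean by more than
    three standard deviations. points: (t_seconds, x_m, y_m) tuples."""
    speeds = []
    for (t0, x0, y0), (t1, x1, y1) in zip(points, points[1:]):
        speeds.append(math.hypot(x1 - x0, y1 - y0) / max(t1 - t0, 1e-9))
    mean = sum(speeds) / len(speeds)
    std = math.sqrt(sum((s - mean) ** 2 for s in speeds) / len(speeds))
    kept = [points[0]]
    for pt, s in zip(points[1:], speeds):
        if abs(s - mean) <= 3 * std:   # three-sigma rule
            kept.append(pt)
    return kept

# a steady ~1.5 m/s walk with one GPS spike at the end
track = [(i * 10, i * 15.0, 0.0) for i in range(20)] + [(200, 5000.0, 0.0)]
cleaned = clean_three_sigma(track)
print(len(track), "->", len(cleaned))  # the spike fix is removed
```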

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 129-139
Measures:
  • Citations: 0
  • Views: 1066
  • Downloads: 0
Abstract: 

Hyperspectral images provide valuable information about objects on the Earth's surface. Target detection (TD) is a fast-growing research field in hyperspectral image processing, and developing TD algorithms has received growing interest in recent years. The aim of TD algorithms is to find specific targets with known spectral signatures. Nevertheless, the enormous amount of information in hyperspectral images increases the computational burden as well as the correlation among spectral bands. Besides, even the best TD algorithms exhibit many false alarms due to spectral similarity between target and background, especially at the subpixel level, where the target of interest is smaller than the ground pixel size of the image. Dimensionality reduction is therefore often conducted as one of the most important steps before TD, both to maximize detection performance and to minimize the computational burden. However, in hyperspectral image analysis, few studies address dimension reduction or band selection for target detection compared with the hyperspectral image classification field. Moreover, band selection has a great impact on remote sensing processing, because selecting an optimal subset of bands reduces the dimensionality and computational burden of hyperspectral image processing. This paper presents a simple method to improve the efficiency of subpixel TD algorithms by removing bad bands in a supervised manner. The idea behind the proposed method is to compare field and laboratory spectra of the desired target to detect bad bands. Since the laboratory spectrum of a target is measured under standard conditions with minimal noise and atmospheric effects, it can be considered an ideal spectrum. The recorded field reflectance spectrum, on the other hand, is affected by surrounding objects such as vegetation cover and by atmospheric effects, especially water vapor absorption; obviously, the spectrum also becomes progressively noisier at longer wavelengths due to the reduced radiance of the illumination source, i.e., the sun. In this way, bad bands can be observed in the field spectrum when it is compared with the laboratory spectrum of the target of interest. By fitting a normal distribution to the laboratory-field spectral differences over all corresponding bands, the best bands are selected and passed to the target detection methods. For evaluation, the proposed method is compared with six popular band selection methods implemented in PRTools, with the false alarm rate used for validation. The comparison uses two well-known subpixel TD algorithms, the adaptive coherence estimator (ACE) and constrained energy minimization (CEM), on the Target Detection Blind Test dataset. This dataset includes two HyMap radiance and reflectance images of Cooke City, Montana, USA, acquired by an airborne HyMap sensor with 126 spectral bands and a ground sample distance of 3 m, and contains 10 subpixel targets located in an open grass region. Experimental results show that the proposed method improves the efficiency of ACE and CEM compared with the other band selection methods used; across all target detection experiments, results degraded in only 12 percent of cases. Moreover, high speed, simplicity, low computational burden, and low time consumption are advantages of the proposed method.
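A minimal sketch of the supervised bad-band screening, assuming the selection rule keeps bands whose laboratory-field difference lies within k standard deviations of the fitted normal's mean; the two spectra are synthetic placeholders for the real target measurements.

```python
import numpy as np

def select_bands(lab_spectrum, field_spectrum, k=1.0):
    """Fit a normal distribution to the laboratory-field differences and
    keep bands within k standard deviations of the mean difference."""
    diff = np.abs(np.asarray(field_spectrum) - np.asarray(lab_spectrum))
    mu, sigma = diff.mean(), diff.std()
    return np.where(np.abs(diff - mu) <= k * sigma)[0]   # indices of good bands

rng = np.random.default_rng(1)
lab = rng.random(126)                    # 126 HyMap-like bands
field = lab + rng.normal(0, 0.01, 126)   # small discrepancies everywhere
field[[60, 61, 110]] += 0.5              # water-vapor / noisy bands
good = select_bands(lab, field)
print(f"{good.size} bands kept; dropped: "
      f"{sorted(set(range(126)) - set(good.tolist()))}")
```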

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 141-158
Measures:
  • Citations: 0
  • Views: 1077
  • Downloads: 0
Abstract: 

An important part of human activity in an urban environment is mobility behavior, and measuring the movement of people is a fundamental activity in modern societies. A first insight into mobility within a region can be captured by extracting the origin-destination (OD) matrix, which specifies the travel demands between origin and destination nodes on a network. This matrix can be built at different scales: macroscopic, e.g., the inter-urban level, or microscopic, e.g., the intra-urban level, where the OD matrix describes the movement of people between different areas of the city. Many methods have been suggested for OD-matrix estimation; they can be classified into three main categories: survey-based methods, traffic counts, and methods based on positioning technology. Using location-based social networking (LBSN) data is a positioning-based method raised in the recent literature as a new source of travel demand data. LBSN users provide location-sensitive data interactively via mobile devices, including smartphones and tablets. These data have the potential to provide origin-destination movement estimates with significantly higher spatial and temporal resolution, at much lower cost, than traditional methods. Data from these networks are a modern, continuously updated source that has drawn researchers' attention to urban management and lets them investigate three significant aspects of human movements and interactions: location, time, and social communities. On the other hand, various models have been proposed for OD-matrix estimation. In this paper, two such models are applied in order to investigate their relative performance when using LBSN data. The first is the radiation model, in which the number of trips is estimated from the populations of the origin and destination and the total population within the circle centered at the origin whose radius is the origin-destination distance (excluding the origin and destination populations). The second is the population-weighted opportunities model, in which the travel rate from origin to destination is calculated from the attraction of the destination; this model assumes that the attraction of a destination for a traveler equals the destination's opportunities inversely weighted by the number of opportunities between the destination and the origin. Although the two models have no adjustable parameters, they require inputs such as the population distribution and location attraction. To extract these inputs from LBSN data and make the models compatible with this kind of data, the models must be revised and extended in view of the specific characteristics and limitations of LBSN data. This study examines the efficiency of LBSN data for estimating the intra-urban OD matrix. Foursquare check-in data for Manhattan, one of the five boroughs of New York City, were collected. Since each LBSN record carries a time attribute, the check-ins were sorted by time and individual trajectories were extracted from consecutive check-ins. The study area was then partitioned by census tracts, and from the trajectories aggregated between these areas, the movement flow intensity between each pair of tracts was estimated by each model.
Finally, the model outputs were evaluated against real observations using several criteria, including the distance distribution, destination travel constraints, and flux. The results demonstrate the promising potential of LBSN data for urban travel demand analysis and monitoring.
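A minimal sketch of the radiation model described above, T_ij = T_i · m_i n_j / ((m_i + s_ij)(m_i + n_j + s_ij)), where s_ij is the population inside the circle of radius d_ij around origin i, excluding both endpoints; the zone populations, coordinates, and trip counts are illustrative placeholders for the tract-level inputs.

```python
import numpy as np

def radiation_flows(pop, coords, trips_out):
    """OD matrix from the parameter-free radiation model."""
    n = len(pop)
    dx = coords[:, None, 0] - coords[None, :, 0]
    dy = coords[:, None, 1] - coords[None, :, 1]
    d = np.hypot(dx, dy)
    T = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            # population strictly inside the circle, minus the origin itself
            # (the destination j lies on the circle and is already excluded)
            s_ij = pop[d[i] < d[i, j]].sum() - pop[i]
            m, nj = pop[i], pop[j]
            T[i, j] = trips_out[i] * m * nj / ((m + s_ij) * (m + nj + s_ij))
    return T

pop = np.array([1000.0, 500.0, 2000.0, 800.0])   # tract populations
coords = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 2.0], [3.0, 1.0]])
trips = np.array([300.0, 150.0, 600.0, 240.0])   # check-in-derived outflows
print(radiation_flows(pop, coords, trips).round(1))
```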

Author(s): NADI S. | DELAVAR M.R.

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 159-169
Measures:
  • Citations: 0
  • Views: 924
  • Downloads: 0
Abstract: 

Time and space are integral parts of any phenomenon in the real world. In geospatial information systems (GIS), modeling processes are the first step in analyzing phenomena, so including temporal aspects alongside spatial capabilities in GIS is necessary. One of the most important requirements in the spatio-temporal modeling of phenomena in GIS is the ability to investigate their spatial and temporal topological relationships. In this paper, we first explain the different aspects of time and space and then propose a novel approach to extract a full list of temporal and spatio-temporal topological relationships. Time, as a distinct part of any phenomenon, can be considered zero- or one-dimensional. A zero-dimensional temporal space refers to a phenomenon whose occurrence starts and ends within one chronon, called an event; a one-dimensional temporal space refers to a phenomenon with a duration of existence, called a state. Correspondingly, the spatial dimensions are point, line, area, and volume. The proposed approach extracts all possible relationships between the different temporal aspects. It is based on a Boolean comparison matrix with two rows and four columns: each cell compares the start or end of one temporal phenomenon with the start or end of the other, yielding true or false. This matrix leads to 2^8 = 256 relations, not all of which are logically acceptable, so we explain how to find and discard the unacceptable ones. After removing them, we obtain 13 relations between two temporal states, 6 between a temporal state and a temporal event, and 3 between two temporal events. Furthermore, to conflate spatial and temporal topological relationships, two strategies are proposed: the first uses logical AND and OR operators, and the second treats time as an orthogonal dimension added to the 2D spatial space. Each strategy leads to a comprehensive list of spatio-temporal relationships. Finally, to illustrate the applicability of the extracted relationships, we implement them in a land information management system that answers spatial, temporal, and spatio-temporal queries about the historical situation of any parcel. As an example of a spatial topological query, the system can find parcels that overlap a newly designed green space; it can likewise find land parcels whose durations of existence share the same start point in time, a "start" temporal topological relationship; and, for a spatio-temporal relationship, it can find land parcels that coincide with a given parcel and are also its neighbors.
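A minimal sketch of the comparison-matrix idea, assuming intervals given by start and end chronons: Boolean comparisons of the endpoints reproduce the 13 state-state relations (Allen's interval algebra), and the state-event and event-event cases follow by letting start equal end.

```python
def interval_relation(s1, e1, s2, e2):
    """Classify the temporal relation of state [s1, e1] against [s2, e2]
    using only comparisons of start and end chronons."""
    assert s1 <= e1 and s2 <= e2, "start must not follow end"
    if e1 < s2:  return "before"
    if e2 < s1:  return "after"
    if e1 == s2: return "meets"
    if e2 == s1: return "met-by"
    if s1 == s2 and e1 == e2: return "equal"
    if s1 == s2: return "starts" if e1 < e2 else "started-by"
    if e1 == e2: return "finishes" if s1 > s2 else "finished-by"
    if s2 < s1 and e1 < e2: return "during"
    if s1 < s2 and e2 < e1: return "contains"
    # only partial overlaps remain at this point
    return "overlaps" if s1 < s2 else "overlapped-by"

print(interval_relation(2000, 2005, 2003, 2010))  # overlaps
print(interval_relation(1990, 1995, 1995, 1999))  # meets
```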

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 171-183
Measures:
  • Citations: 0
  • Views: 848
  • Downloads: 0
Abstract: 

The World Wide Web Consortium (W3C) defines the semantic web as a framework for sharing and reusing data among groups, communities, and applications. In this context, Tim Berners-Lee proposed the idea of a "web of data" that can be processed by machines. To reach the aims of the semantic web, data representation languages, knowledge bases, and rule bases, collectively known as semantic web technologies, are needed. In the world of spatial data, semantic information plays an important role in many activities of spatial analysis and data management. In this research, spatial semantics are treated at two levels: the first covers the semantics of the location and geometry of an individual object, and the second concerns the arrangement and mixture of objects over an extent. The latter relates more closely to spatial relations, which introduce complexities that are hard to cope with. The semantic explication of such concepts in relational spatial databases faces limitations such as the inflexibility of the database schema, inflexibility of data types, restricted definition of subclasses, implicit semantics of relations, and a finite number of joins between tables. The distinctive capabilities of semantic web technologies for storing, managing, and retrieving data and knowledge enable a wide range of applications, including applications independent of the web environment. Databases built on semantic web technologies are based on graph structures and combine the advantages of logical, object-oriented, and network databases. These capabilities are applied here to explicate the concept of "deteriorated structures" in urban environments. The concept is inherently non-spatial but has two complementary spatial interpretations, permeability and fine granularity, which make it possible to render the concept explicit in the spatial data of urban layouts. The scenario by which the semantics of a "deteriorated area" are made explicit starts from a point: given its coordinates, the system determines whether the point lies within such an area. Handling the spatial data of the case study, district 12 of Tehran, and the procedure for semantically explicating "deteriorated area" are carried out in the spatial and semantic extensions of the Oracle DBMS. The spatial data of urban parcels and streets, together with parcel areas and street widths, are used. To model the spatial relations, all neighborhood relations between parcels and all intersection relations between streets are entered as linked data into the triple store. In addition to the RDF data, a rule base is needed to derive the concepts of "connected streets" and "co-blocked parcels" and to define the conditions for narrow streets and small parcels. Using SPARQL queries, the percentages of parcels with areas under 200 m² and streets narrower than 6 m are reported. The need for an unbounded number of inter-table joins, which arise frequently when working with spatial relations, is the main shortcoming of conventional databases: they cannot express an infinite, recursively defined sequence of similar expressions. The procedure shows that the deficiencies of relational spatial databases regarding recursive and implicit relations, especially spatial ones, can be remedied by semantic web data models and knowledge management techniques.
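A minimal sketch of the triple-store layer, assuming rdflib and a hypothetical vocabulary (ex:areaM2, ex:adjacentTo) rather than the paper's actual ontology; the SPARQL 1.1 property path ex:adjacentTo+ illustrates how a recursively defined relation is expressed directly, with no fixed number of joins.

```python
from rdflib import Graph, Namespace, Literal

EX = Namespace("http://example.org/urban#")
g = Graph()
for pid, area in [("p1", 150.0), ("p2", 420.0), ("p3", 90.0)]:
    g.add((EX[pid], EX.areaM2, Literal(area)))   # parcel areas
g.add((EX.p1, EX.adjacentTo, EX.p2))             # neighborhood relations
g.add((EX.p2, EX.adjacentTo, EX.p3))

# small parcels (< 200 m2), as in the reported percentage query
small = g.query("""
    PREFIX ex: <http://example.org/urban#>
    SELECT ?p WHERE { ?p ex:areaM2 ?a . FILTER(?a < 200) }""")
print([str(r.p) for r in small])

# transitive closure via a property path: no fixed number of joins needed
reachable = g.query("""
    PREFIX ex: <http://example.org/urban#>
    SELECT ?q WHERE { ex:p1 ex:adjacentTo+ ?q }""")
print([str(r.q) for r in reachable])
```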

Issue Info:
  • Year: 2016
  • Volume: 6
  • Issue: 1
  • Pages: 185-198
Measures:
  • Citations: 0
  • Views: 1313
  • Downloads: 0
Abstract: 

In recent years, volunteered geographic information has developed rapidly and is considered an important new source of spatial information. In the beginning, only two-dimensional (2D) data were shared in voluntary databases, but recently users have begun to collect and record three-dimensional data such as heights, roof shapes, numbers of levels, and door and window information. Three-dimensional models are used in many fields, such as urban management, navigation, disaster management, and traffic modeling, and the existence of three-dimensional digital data increases the accuracy and efficiency, and improves the results, of many spatial analyses. OpenStreetMap (OSM) is one of the most important projects in the area of voluntary spatial data and has the potential to store three-dimensional geospatial data. The mission of OSM is to create a free online map covering the whole world. Any user, without needing a license, can create, share, edit, and improve spatial data in OSM through three editors (iD, Potlatch, and JOSM). Geometric data may be shared as nodes, ways, and relations in the OSM database, and semantic information in the form of key-value pairs can be attached to the geometric data. Much research has focused on extracting three-dimensional models from OSM data. The CityGML standard, compared with other existing 3D formats, has greater potential for representing three-dimensional information, especially 3D city models. In this research, an algorithm is proposed for automatically extracting three-dimensional information from the OSM database and creating a three-dimensional model at level of detail (LOD) 4. The output of this algorithm is three-dimensional models of Iranian buildings in CityGML format, which may serve as input to many spatial analyses and applications. To implement the proposed algorithm, the Java programming language and the NetBeans IDE are used. First, the intended area/building is selected and extracted in .osm format from the OSM database, and then it is fed to the Java program for further processing. Our case study is the new building of the School of Electrical and Computer Engineering at the University of Tehran, located in Tehran, Iran. This building has 9 levels and many interior and exterior doors and windows. The program needs key-value pairs such as the number of levels, level height, window height, and door height to create the three-dimensional model. After the essential key-value pairs are extracted from the OSM data, construction of the 3D building model starts. For each level, all walls, windows, and doors are constructed one after another; the various levels of the building, with interior partitions and interior and exterior doors and windows, are created by the proposed algorithm. After the levels are constructed, the building's roof is created using the semantic and geometric information. Finally, the output 3D model of the building(s) is stored in CityGML format. One advantage of this method is its minimal use of semantic information for creating three-dimensional models. The algorithm also focuses on improving the appearance of the output while maintaining the topological relations between features. The results of this research could be used to develop three-dimensional virtual cities based on volunteered information in VGI databases.
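A minimal sketch of the tag-driven step that precedes the wall/door/window construction loop, assuming the common OSM keys building:levels and height; the defaults used for missing tags are assumptions, not the paper's rules.

```python
def level_extents(tags):
    """Derive the vertical extent (z0, z1) of each building level from
    OSM key-value pairs, falling back to an assumed 3 m per level."""
    levels = int(tags.get("building:levels", 1))
    level_h = float(tags.get("height", levels * 3.0)) / levels
    return [(i * level_h, (i + 1) * level_h) for i in range(levels)]

# tags as they might appear on a 9-level building way in the .osm extract
tags = {"building": "yes", "building:levels": "9", "height": "31.5"}
for i, (z0, z1) in enumerate(level_extents(tags)):
    print(f"level {i}: {z0:.1f} m .. {z1:.1f} m")
```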

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    199-213
Measures: 
  • Citations: 

    0
  • Views: 

    1497
  • Downloads: 

    0
Abstract: 

Spatial data mining and weighted comparison methods can play an effective role in determining the value of local utilities for the effective indices in real estate pricing. In this study, the factors affecting the price of real estate are divided into four general categories: economic and market factors, physical and welfare factors, neighborhood and access characteristics, and organizational characteristics. Organizational characteristics, a group of indices influencing property prices whose information is collected from the relevant agencies and organizations, can contribute significantly to pricing, marketing, and management for the different users involved in the real estate sector, including buyers, sellers, and real estate agencies. Assuming stable economic and market factors at the time of a user request, the proposed classification divides the indices, according to how they influence quantity and quality, into three groups: Boolean, Multi-Value, and Fuzzy/Logarithmic. Boolean indices denote features that either exist or do not exist on a property and take the values 1 and 0, respectively. Multi-Value indices are those that can take more than one value; their influence coefficients are calculated like those of Boolean indices. Finally, Fuzzy/Logarithmic indices are those whose influence functions are continuous, fuzzy-like or logarithmic-like, and must be determined from the available information using appropriate techniques. An influence coefficient is the effect that the presence or absence of an index has on the property price, for example, how the presence of an elevator affects the price. An influence function, used for the Fuzzy/Logarithmic indices, is the effect that increasing or decreasing an index has on the property price, for example, how the distance to the main road affects the price. In this study, spatial data mining and weighted comparison methods are applied interactively to information gathered in a crowd-sourced environment. With these methods, the indices, influence coefficients, and influence functions that have a considerable effect on local utilities in property pricing are obtained from different classifications of the information. The study area of this research is part of the metropolitan area of Kerman, Iran, located between 57°01′ and 57°04′ longitude and 30°16′ and 30°19′ latitude in the UTM coordinate system (zone 40). The required information includes both spatial and descriptive data. The spatial information includes the locations of properties, public places (such as educational, religious, shopping, and industrial centers, parks, offices, and official buildings), and main and secondary streets. Hence, using available maps, such as those produced by the National Cartographic Centre (NCC) in 2003 and cadastral maps from registration organizations, shapefiles of the properties, public places, and streets were prepared after GIS-ready preprocessing. All of these files are in the UTM coordinate system on the WGS84 reference ellipsoid. In total, 18,053 properties, 260 public places, and 1,009 main and secondary streets were prepared in polygon format and entered into the database.

Descriptive information for 150 properties was also gathered from agencies in the study area. The results obtained for the 16 Boolean indices show that the greatest influence and utility belong to the parking, store, elevator, and pool indices, while the basement and furnished indices have the lowest utility. The substructure-area index is highly dependent on the number of bedrooms: an increase in the number of bedrooms will not necessarily lead to an increase in the property price. The best number of bedrooms is 2 for a substructure area of up to 90 m², 3 for up to 120 m², 4 for up to 150 m², and 5 for up to 200 m². For the Multi-Value group, 15 indices were studied. For the number-of-stories index, each additional storey adds between 2 and 3 percent to the price of a property. The number of units per storey is inversely related to the property price: under otherwise equal conditions such as area, a higher number of units per storey means a lower property price. For the registration-document index, the highest utility belongs to a full document and the lowest to a fully devoted document. (A full document is one that belongs entirely to its owner with all the legal rights of the property. When the previous owner applies restrictions to the property, for example on its use, the degree of ownership decreases; each level of restriction removes one sixth of the ownership, ranging from a full document down to a fully devoted document, which carries the greatest restriction.) The registration range of a property is a suitable index for regional utilities and depends on several factors; the results show that the highest utility belongs to registration tag 5 and the lowest to tag 2773. Organizational-restriction indices, such as being located within a heritage-policy zone or within the urban future plan, have a negative influence on local utilities and property prices; being located within a heritage-policy zone has the lowest utility and the greatest negative influence on price. The property-view index shows no utility in the study area, so it is considered ineffective. Finally, for the Fuzzy/Logarithmic indices, calculating the influence function or finding a clear pattern of change is not as easy as for the two previous groups, and the influence of these indices differs at different distances. The influence of some indices is estimated by a single influence function: land area, construction time, land price, reconstruction time, substructure area, distance from shopping centers, and distance from main streets. For the distances from health centers, educational centers, administrative centers, and the city center, the influence functions take different forms at different distances. The highest utility for these indices occurs at distances that are neither too far nor too close, avoiding the negative effects of proximity while still providing convenient access. At some distances the utility is neutral, meaning that locating a property there has no significant effect on its price. The influence functions for the distances from industrial centers, historical and social centers, and religious centers are almost the same.

According to the results, the Boolean and Multi-Value indices were obtained through a more disciplined and predictable process than the Fuzzy/Logarithmic indices. Finally, for evaluation, the prices calculated by the system for 10 properties were compared with their actual prices and the accuracy was computed.
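As a toy illustration of how the three index groups could enter a price estimate, the sketch below combines invented Boolean influence coefficients, a per-storey Multi-Value adjustment, and a bell-shaped Fuzzy/Logarithmic distance function. All coefficients, functions, and the base price are hypothetical; the paper derives them from the crowd-sourced data.

```python
import math

# Hypothetical influence coefficients for Boolean indices (presence -> multiplier).
boolean_coeffs = {"parking": 1.08, "elevator": 1.05, "store": 1.04, "basement": 1.01}

def boolean_adjustment(features):
    """Multiply the coefficients of the Boolean indices present on the property."""
    adj = 1.0
    for name, present in features.items():
        if present:
            adj *= boolean_coeffs.get(name, 1.0)
    return adj

def stories_adjustment(n_stories, per_story=0.025):
    """Multi-Value index: each extra storey adds ~2-3% (here 2.5%) to the price."""
    return (1.0 + per_story) ** (n_stories - 1)

def road_distance_utility(d_m):
    """Fuzzy/Logarithmic index: utility peaks at a moderate distance from the
    main road, penalizing both 'too close' (noise) and 'too far' (poor access)."""
    return math.exp(-((d_m - 150.0) / 120.0) ** 2)  # invented bell-shaped function

base = 100_000  # hypothetical base price
price = (base
         * boolean_adjustment({"parking": True, "elevator": True, "basement": False})
         * stories_adjustment(3)
         * (0.9 + 0.2 * road_distance_utility(80)))
print(round(price))
```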

Author(s): 

BIJANDI M. | KARIMI M.

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    215-233
Measures: 
  • Citations: 

    0
  • Views: 

    835
  • Downloads: 

    0
Abstract: 

The physical conditions in informal settlements are not appropriate due to the uncontrolled and unplanned growth of these areas. Simulating the growth of informal settlements leads to a better understanding of the complexity of its spatial process and of how such settlements form. As a complex system, informal settlement growth exhibits a macro pattern that results from the interaction of agents at the micro level, and in turn the feedback of the macro pattern affects the interactions at the micro level. In this study, we attempt to take this complex process of informal settlement growth into account. Agent-based models have the innate ability to model the behavior and interaction of agents and provide an ideal framework for studying informal settlement growth.

Since informal settlements often form organically, the geometric shapes of the land parcels are highly irregular; choosing a suitable spatial unit is therefore one of the challenges confronting local-scale simulation, and this study attempts to address it. Decision-maker agents were combined with land-parcel agents and the simulation was performed at a local scale. In this model, the suitability of the land-parcel agents was calculated from factors of neighborhood, accessibility, physical suitability, and spatial constraints using spatial analyses in a GIS environment; each parcel computes its own general suitability value from these factors. Decisions on selecting parcels were made by household agents who work downtown and by those who work in the industrial areas around the city. For urban-worker agents, a desirable land parcel has high suitability, a small area, a short distance to the city center, and a majority of inhabitants of the same agent type around it. For industrial-worker agents, a desirable parcel satisfies all the conditions above and is also close to the industrial areas. The sensitivity of the developed model to the shape and size of the land parcels was investigated in particular; rectangular and square shapes, the common geometric shapes of land parcels, were used in 20 different scenarios. The area of residential parcels is always considered among the most important physical indicators for investigating informal settlements at a local scale, so land parcels of various areas were examined. In addition, a fishnet grid of cells was created at two cell sizes, 2 m and 5 m, and the study area was also investigated with this configuration; in other words, by creating a raster-like space and comparing its results with the vector case, the role of the choice of spatial unit in simulating local informal settlement growth was also analyzed from the perspective of the data model. All 22 scenarios were implemented using the developed agent-based model on one of the informal settlement neighborhoods of the city of Kashan, Isfahan province, Iran. Evaluating the results against real data showed that the accuracy of the model with rectangular land parcels of 50 to 150 m² is better than that of the other vector and raster-like spatial units, reaching 64%. The results also showed that the overall accuracy of the raster-like models decreases significantly compared to the vector structures.
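A minimal sketch of the parcel-scoring and agent-choice logic described above follows. The factor weights, penalty terms, and parcel attributes are invented for illustration and simplify the model (the same-agent-type neighborhood preference, for instance, is folded into a single neighborhood score).

```python
import random

def parcel_suitability(p, w=(0.3, 0.3, 0.2, 0.2)):
    """General suitability of a land-parcel agent from four factors (weights invented)."""
    if p["constrained"]:                      # spatial constraints veto development
        return 0.0
    return (w[0] * p["neighborhood"] + w[1] * p["accessibility"]
            + w[2] * p["physical"] + w[3] * (1.0 - p["norm_dist_center"]))

def household_choice(parcels, worker_type):
    """Household agent picks the most desirable free parcel; industrial workers
    additionally weight proximity to industrial areas."""
    def desirability(p):
        d = parcel_suitability(p) - 0.1 * p["norm_area"] - 0.2 * p["norm_dist_center"]
        if worker_type == "industrial":
            d -= 0.2 * p["norm_dist_industry"]
        return d
    free = [p for p in parcels if not p["occupied"]]
    return max(free, key=desirability) if free else None

# Hypothetical toy run with randomly generated parcel attributes in [0, 1].
parcels = [{"constrained": False, "occupied": False, "neighborhood": random.random(),
            "accessibility": random.random(), "physical": random.random(),
            "norm_dist_center": random.random(), "norm_dist_industry": random.random(),
            "norm_area": random.random()} for _ in range(100)]
chosen = household_choice(parcels, "industrial")
chosen["occupied"] = True
```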

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    235-248
Measures: 
  • Citations: 

    0
  • Views: 

    1576
  • Downloads: 

    0
Abstract: 

Pan-sharpening methods aim to produce a more informative image containing the positive aspects of both source images. However, the pan-sharpening process usually introduces spectral and spatial distortions into the resulting fused image, and the amount of these distortions varies strongly with the pan-sharpening technique as well as the type of data. Among the existing pan-sharpening methods, the Intensity-Hue-Saturation (IHS) technique is the most widely used because of its efficiency and high spatial resolution. This method converts a color image from RGB space to the IHS color space; in the next step, the I (intensity) band is replaced by the panchromatic image. Before fusing the images, histogram matching is performed between the multispectral and panchromatic images. When the IHS method is used on IKONOS or QuickBird imagery, there is significant color distortion, mainly due to the wavelength range of the panchromatic band: in green vegetated regions, the panchromatic gray values are much larger than those of the intensity image. A novel method is therefore proposed that spatially adjusts the intensity image in vegetated areas. To this end, the normalized difference vegetation index (NDVI) is used to identify vegetated areas, where the green band is enhanced according to the relation between the red and NIR bands: vegetated areas are first found by thresholding the NDVI, and the green band is then enhanced in these areas. In this way an intensity image is obtained whose gray values are comparable to the panchromatic image. The intensity image is produced as a linear combination of the MS bands, so its weight parameters have a direct effect on the final fusion result; in the proposed method, these weight parameters are estimated by a genetic algorithm in order to obtain the best intensity image. Visual interpretation and statistical analysis demonstrate the efficiency of the proposed method, which significantly improves the fusion quality compared to the conventional IHS technique. Spatial quality can be judged visually, but color changes are more difficult to recognize in this manner; the spectral quality of the pan-sharpened images was therefore determined from the changes in the colors of the fused images relative to the MS reference images. The accuracy of the proposed pan-sharpening technique was also evaluated in terms of different spatial and spectral metrics: seven metrics (Correlation Coefficient, ERGAS, RASE, RMSE, SAM, SID, and Spatial Coefficient) were used to determine the quality of the pan-sharpened images. Experiments were conducted on three different high-resolution data sets obtained by three different imaging sensors: IKONOS, QuickBird, and WorldView-2. The IHS pan-sharpening method yields good spatial quality and is commonly used for its speed and simplicity; the results show that the evaluation metrics are more promising for our fused images than for other pan-sharpening methods. In fact, the combination of enhanced vegetated areas, genetic-algorithm optimization, and IHS integration improves both the spectral and the spatial quality.
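The sketch below illustrates one plausible reading of this scheme: a fast (additive) IHS fusion with NDVI-guided green-band enhancement. The band order, threshold, gain, and histogram-matching step are assumptions, and the genetic-algorithm search for the intensity weights is omitted (the weights are plain inputs here).

```python
import numpy as np

def gihs_pansharpen(ms, pan, weights, ndvi_thresh=0.3, green_gain=0.2):
    """Simplified additive (fast) IHS fusion with NDVI-guided green enhancement.
    ms: float array (H, W, 4), bands assumed ordered [R, G, B, NIR] in [0, 1];
    pan: float array (H, W); weights: the four intensity weights (paper: GA-tuned)."""
    r, g, b, nir = (ms[..., i] for i in range(4))
    ndvi = (nir - r) / (nir + r + 1e-6)
    veg = ndvi > ndvi_thresh
    # Enhance the green band in vegetated pixels so that the intensity image
    # becomes comparable to the panchromatic gray values there.
    g = np.where(veg, g + green_gain * (nir - r), g)
    intensity = weights[0] * r + weights[1] * g + weights[2] * b + weights[3] * nir
    # Match pan's mean/std to the intensity image before substitution.
    pan_m = (pan - pan.mean()) / (pan.std() + 1e-6) * intensity.std() + intensity.mean()
    detail = pan_m - intensity  # additive IHS: inject pan detail into every band
    fused = np.stack([r + detail, g + detail, b + detail, nir + detail], axis=-1)
    return np.clip(fused, 0.0, 1.0)
```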

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    249-262
Measures: 
  • Citations: 

    0
  • Views: 

    1029
  • Downloads: 

    0
Abstract: 

Sub-pixel mineral information of reasonable accuracy can be a valuable guide for the geological and exploration community before expensive ground and/or laboratory experiments to discover economic deposits, and several studies have demonstrated the feasibility of hyperspectral images for sub-pixel mineral mapping. Target detection is one of the most important applications in hyperspectral remote sensing image analysis. Relative to multispectral sensing, hyperspectral sensing can increase the detectability of pixel- and subpixel-size targets by exploiting finer detail in the spectral signatures of targets and natural backgrounds. Over the past several years, different algorithms have been developed for detecting subpixel targets with known spectral signatures.

Using derivative spectra is an established technique in analytical chemistry known as derivative spectroscopy, and the nature of hyperspectral data allows derivative spectroscopy to be applied to hyperspectral images as well. Its value in hyperspectral remote sensing lies in its ability to resolve the complex spectra of several target species within individual pixels, and several studies have shown that applying derivative spectra in target detection yields better detection results in each individual pixel. However, one point that requires special attention is that differentiating a spectrum eliminates its low-frequency components. Hence, combining a spectrum with its best derivative order in a unified approach increases the information content of the spectral curve, and an ensemble approach is therefore proposed in this research.

To test the above framework, this study was carried out with airborne hyperspectral data, in contrast to previous works that used standard laboratory images. Both proposed detection algorithms were used to identify four mineral targets, alunite, kaolinite, epidote, and hematite, located in a hydrothermally altered mineral region (Gonabad County) in eastern Iran. The HyMap images were acquired on September 11th, 2006. Gonabad is structurally part of Iran's central desert (the Lut Desert). The regional rock units show the presence of altered mineralized rocks consisting of volcanic and subvolcanic trachyte, agglomerate, tuff, rhyolite, rhyodacite, dacite, etc.; these rock masses were altered by hydrothermal solutions, and the importance of geological research in this region stems from these hydrothermally altered zones.

Ground sample data were collected for the four dominant targets in the study area: alunite, kaolinite, hematite, and epidote. A stratified random sampling strategy, guided by the opinions of geological experts, was used for collecting the minerals in the field, and the precise coordinates of the targets were determined using GPS receivers. After the field sampling was completed, the samples were transferred to the laboratory and ground for spectral measurements. Reflectance measurements were then made with an SVC HR-1024 laboratory spectrometer covering the full VIS/SWIR spectral range (350 to 2500 nm). After the spectral measurements were recorded, all obtained spectra were resampled to the HyMap spectral response functions. Finally, the resampled spectra were applied in the proposed ensemble approach for subpixel mapping of the above-mentioned mineral targets. Experimental results show that the proposed method gave clearly better detection results for all of the samples under study; the best performance improvements were about 28, 24, 26, and 16 percent for the alunite, kaolinite, hematite, and epidote targets, respectively.
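To make the ensemble idea concrete, the sketch below pairs finite-difference derivative spectra with a classic matched filter and averages the two detection maps. The matched filter and the weighted-average fusion rule stand in for the paper's detectors and ensemble scheme, and are assumptions.

```python
import numpy as np

def first_derivative(spectra, wavelengths):
    """Finite-difference derivative spectra; removes low-frequency (continuum) content."""
    return np.gradient(spectra, wavelengths, axis=-1)

def matched_filter_scores(cube, target, eps=1e-6):
    """Classic matched filter: score each pixel against a target signature."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)
    cov = np.cov(x, rowvar=False) + eps * np.eye(b)   # regularized background covariance
    cov_inv = np.linalg.inv(cov)
    d = target - mu
    scores = (x - mu) @ cov_inv @ d / (d @ cov_inv @ d)
    return scores.reshape(h, w)

def ensemble_detection(cube, target, wavelengths, alpha=0.5):
    """Average the detection maps from the raw and derivative domains (alpha invented)."""
    s_raw = matched_filter_scores(cube, target)
    d_cube = first_derivative(cube, wavelengths)
    d_target = first_derivative(target[None, None, :], wavelengths)[0, 0]
    s_der = matched_filter_scores(d_cube, d_target)
    return alpha * s_raw + (1 - alpha) * s_der
```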

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    263-274
Measures: 
  • Citations: 

    0
  • Views: 

    1652
  • Downloads: 

    0
Abstract: 

Having a proper Digital Surface Model (DSM) is crucial in many Earth-related applications such as change detection, 3D urban modelling, urban planning, and environmental monitoring, and generating DSMs from high-resolution stereo satellite images offers good possibilities here. The image matching technique plays an important role in DSM generation from stereo satellite images and directly affects the quality of the DSM. Many image matching algorithms have been proposed to date, such as Least Squares Matching (LSM), Dynamic Programming (DP), and Semi-Global Matching (SGM), of which the last is the most efficient.

One of the main inputs to the SGM algorithm is a pair of epipolar images. Epipolar images are rectified so that each row of the left image corresponds to the same row of the right image (i.e., the y-parallax is zero); this accelerates the matching process and reduces the search space from 2D to 1D. Unlike images with perspective geometry (frame cameras), the epipolar geometry of linear-array images cannot be modeled with straight lines, so generating epipolar images from high-resolution stereo satellite images has been a research topic in photogrammetry and remote sensing. In the literature, different epipolar resampling methods have been proposed in image and object space. All of these methods need Rational Polynomial Coefficients (RPCs), orientation parameters, or ground control points (GCPs). Unfortunately, the orientation parameters are not available, the RPCs need to be refined, and measuring GCPs is not affordable, so it is valuable to reduce or completely remove the need for this information in epipolar resampling.

In this paper, we propose a method to build epipolar images by modeling the epipolar curves without any extra information. The proposed method divides the original image into overlapping tiles and uses computer vision algorithms to find corresponding points in the stereo images automatically. Our framework consists of four main steps. The first step is data pre-processing, in which the stereo pair is automatically registered and divided into overlapping tiles. To register the images in the absence of RPCs, GCPs, or other metadata, the SURF feature detector and the RANSAC algorithm are employed: the SURF operator automatically identifies point features in the images, RANSAC filters out wrongly matched conjugate points, and the remaining matches are used to register the image pair with an affine transformation. In the second step, all registered stereo image tiles are epipolarly resampled; within a tile of a satellite image, the epipolar geometry is equivalent to simple straight lines, and the fundamental matrix and Morgan's method are used to build the epipolar images. In the third step, disparity maps of the corresponding epipolar images are computed using Semi-Global Matching. The SGM algorithm needs a large amount of temporary memory for storing the matching-cost and aggregated-cost cubes; the size of this memory depends on the image size and the disparity range, and the solution proposed for SGM is to divide the epipolar images into small tiles, an idea that our method also follows. Finally, in the fourth step, the experimental results are evaluated.

The mean and standard deviation of the y-parallax for a set of conjugate points are used to compare the results of the different methods. A stereo pair acquired by the GeoEye-1 high-resolution pushbroom satellite sensor, covering urban areas of the city of Qom, Iran, is used in our experiments. Epipolar images are produced using the fundamental matrix and Morgan's method, and the mean and standard deviation of the y-parallax are computed for each. The results show that our proposed method can produce epipolar images with sub-pixel accuracy without using RPCs.
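A generic OpenCV analogue of the tile-wise pipeline (feature matching, RANSAC-filtered fundamental matrix, uncalibrated rectification, SGM) is sketched below. ORB is substituted for SURF (which requires opencv-contrib), Morgan's method is omitted, and the tile file names are hypothetical.

```python
import cv2
import numpy as np

left = cv2.imread("left_tile.png", cv2.IMREAD_GRAYSCALE)    # hypothetical tile files
right = cv2.imread("right_tile.png", cv2.IMREAD_GRAYSCALE)

# Feature matching (ORB here; the paper uses SURF).
orb = cv2.ORB_create(5000)
k1, d1 = orb.detectAndCompute(left, None)
k2, d2 = orb.detectAndCompute(right, None)
matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
pts1 = np.float32([k1[m.queryIdx].pt for m in matches])
pts2 = np.float32([k2[m.trainIdx].pt for m in matches])

# Fundamental matrix with RANSAC outlier rejection.
F, mask = cv2.findFundamentalMat(pts1, pts2, cv2.FM_RANSAC, 1.0, 0.99)
inl1, inl2 = pts1[mask.ravel() == 1], pts2[mask.ravel() == 1]

# Uncalibrated rectification -> epipolar (row-aligned) tiles.
h, w = left.shape
_, H1, H2 = cv2.stereoRectifyUncalibrated(inl1, inl2, F, (w, h))
left_r = cv2.warpPerspective(left, H1, (w, h))
right_r = cv2.warpPerspective(right, H2, (w, h))

# Semi-global matching on the rectified tiles (disparity is returned as 16*d).
sgm = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = sgm.compute(left_r, right_r).astype(np.float32) / 16.0
```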

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    275-291
Measures: 
  • Citations: 

    0
  • Views: 

    1321
  • Downloads: 

    124
Abstract: 

Following a request from the Tehran municipality, and in order to provide the spatial information required in its various projects, a real-time kinematic (RTK) network has been designed for Tehran. Based on existing measures such as the dilution of precision (DOP) at the network point positions, two different designs have been proposed, each using a minimum of six GNSS stations. In contrast to the first design (Design 1), the second design (Design 2) provides better coverage within the city. The impact of distance-dependent errors within the proposed designs, as well as the other existing measures, has been analyzed in order to select the optimum design. The GNSS almanac, the IGS global ionosphere maps (GIM), and meteorological data from the synoptic stations within and near the city are used for this purpose. Overall, only the effect of tropospheric refraction points to the second design as the optimum for this project. GNSS measurements made at the network point positions are used to verify the measures used in the design process. Two testing approaches have been used to check the efficiency of the designed network in real-time applications. In the first approach, point positions within the city of Tehran are determined independently using kinematic as well as static positioning techniques; different correction estimation and dissemination algorithms, including the FKP, MAX, and i-MAX techniques, have been applied for this purpose. In the second approach, the central and northern reference stations of the network separately play the role of a rover receiver (test point) while not contributing to the computation of the network corrections. The repeatability, accuracy, and precision of these points' positions have been analyzed, and the results of this verification confirm the method applied to design the TNRTK. An accuracy of 1 to 3 cm is achieved in the coordinate components within the network area; however, the accuracy of point positions outside the network area, especially in the height component, does not fulfill the presupposed accuracy of an NRTK, due to tropospheric effects.
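Since the dilution of precision at the network points is one of the design measures, the sketch below shows the standard DOP computation from receiver-to-satellite unit vectors. The satellite geometry is hypothetical, and this is the textbook formula rather than the project's design software.

```python
import numpy as np

def dops(sat_enu_unit_vectors):
    """Dilution-of-precision values from receiver-to-satellite unit vectors (ENU).
    Each row of the design matrix A is [-e, -n, -u, 1]; Q = (A^T A)^{-1}."""
    e = np.asarray(sat_enu_unit_vectors)            # shape (n_sats, 3)
    A = np.hstack([-e, np.ones((len(e), 1))])       # receiver clock term appended
    Q = np.linalg.inv(A.T @ A)
    gdop = np.sqrt(np.trace(Q))
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    return gdop, pdop, hdop, vdop

# Hypothetical geometry: five satellites at assorted azimuths/elevations (degrees).
az_el = np.radians([(30, 60), (120, 45), (210, 30), (300, 55), (0, 80)])
vecs = [(np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el)) for az, el in az_el]
print(dops(vecs))
```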

Issue Info: 
  • Year: 

    2016
  • Volume: 

    6
  • Issue: 

    1
  • Pages: 

    292-303
Measures: 
  • Citations: 

    0
  • Views: 

    1202
  • Downloads: 

    105
Abstract: 

Most land use change (LUC) modelers have modeled binary land use change rather than multiple land use changes. As the first objective of this study, we compared two well-known LUC models, the classification and regression tree (CART) and the artificial neural network (ANN), drawn from two groups of data mining tools, global parametric and local non-parametric models, for modeling multiple LUCs. The case study is located in the north of Iran and includes the cities of Sari and QaemShahr. Urban and agricultural changes over a 22-year period between 1992 and 2014 have been modeled. The results showed that CART and ANN are effective tools for modeling multiple LUCs: while the results of CART were easier to interpret, ANN was more effective at modeling multiple LUCs. In earlier studies, despite the use of CART, the extraction of the effective factors of LUCs using a precise index has not been adequately considered. As the second objective, this study performed a sensitivity analysis using a variable importance index to identify the significant drivers of LUCs. While the ANN remained a black box for sensitivity analysis, CART identified the significant drivers of LUCs easily. The results showed that the most important factors were distance from urban areas and distance from rivers, while aspect was the least effective factor. As the third and final objective, the recently modified version of the receiver operating characteristic (ROC), called the total operating characteristic (TOC), as well as the ROC itself, was used for the accuracy assessment of CART and ANN. The areas under the ROC curves for urban changes were 78% and 75% for the ANN and CART models, respectively; for agricultural changes they were 72% and 65%, respectively. We found that although the TOC and ROC are similar, the TOC proved more informative than the conventional ROC as a goodness-of-fit metric. The outcome of this study can assist planners and managers in sustaining natural resources and in developing better plans for the future, given the need to understand the factors contributing to urban and agricultural change.
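A minimal sketch of the CART-versus-ANN comparison with ROC-based assessment, using scikit-learn on synthetic driver data, is given below. The drivers, labels, and hyperparameters are invented, and the TOC metric is omitted (it is not part of scikit-learn).

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical driver matrix: e.g., distance to urban areas, distance to rivers,
# slope, aspect (all synthetic here).
rng = np.random.default_rng(0)
X = rng.random((2000, 4))
y = (0.6 * X[:, 0] + 0.3 * X[:, 1] + 0.1 * rng.random(2000) < 0.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

cart = DecisionTreeClassifier(max_depth=6, random_state=0).fit(X_tr, y_tr)
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0).fit(X_tr, y_tr)

# Area under the ROC curve for each model on held-out data.
for name, model in [("CART", cart), ("ANN", ann)]:
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(name, "AUC:", round(auc, 3))

# CART exposes driver importance directly; for the ANN this needs a separate
# sensitivity analysis (e.g., permutation importance).
print("CART variable importance:", cart.feature_importances_)
```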
