Theses 2018
Multi-Scale Soil Moisture Retrieval from Satellite Radars in a Novel Data Cube Architecture
Department für Geodäsie und Geoinformation, Forschungsgruppe Fernerkundung, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Wolfgang Wagner
Abstract
Spaceborne remote sensing has been profiting from technological advances in numerous fields and has entered the era of Big Data. The growing sector of civilian data providers and the European Copernicus Earth observation programme with its Sentinel satellite constellation provide an unprecedentedly rich source of geophysical data. While fuelling science as well as public and private endeavours, the produced data volumes of several terabytes per day constitute a major challenge and place high demands on processing and storage facilities. When aiming for global data processing, efficient handling of remote sensing data is of vital importance, demanding a well-suited definition of spatial grids for the data's storage and manipulation. For high-resolution image data, regular grids defined by map projections have been identified as practicable, cognisant of their drawbacks due to geometric distortions and data inflation. A newly defined metric, the grid oversampling factor (GOF), estimates the local data oversampling that arises when generic satellite images are projected to a regular raster grid. With this, an optimised grid system named Equi7Grid is defined that minimises image distortions and data oversampling, with a global mean oversampling of 2% (compared to 35% for the widely used global Plate Carrée projection). The Equi7Grid consists of seven continental subgrids featuring a coordinate and tiling system, based on Equidistant Azimuthal projections. This choice contrasts with previous studies that suggested equal-area projections, which were found in the course of this study to be disadvantageous due to critical raster image distortions. One application of satellite remote sensing is to provide data on Soil Moisture (SM). SM is a key environmental variable, important to farmers, meteorologists, and disaster management units, among others. In climatology, knowledge of SM is essential for the assessment of the global water, energy, and carbon cycles. This study presents a method able to retrieve Surface Soil Moisture (SSM) from the Sentinel-1 satellites, which carry C-band Synthetic Aperture Radar (S-1 CSAR) sensors that provide the richest freely available SAR data source so far, unprecedented in accuracy and coverage. The SSM retrieval method, which adapts well-established change detection algorithms, builds the first globally deployable soil moisture observation dataset with 1 km resolution and is suited to operation in data cube architectures like the Equi7Grid and in High Performance Computing (HPC) environments. It includes the novel Dynamic Gaussian Upscaling (DGU) method for spatial upscaling of SAR imagery, harnessing its field-scale information and successfully mitigating effects of the SAR signal's high complexity. Furthermore, a new regression-based approach for estimating the radar slope is defined that copes with Sentinel-1's inhomogeneous spatial coverage. For a single remote sensing system, there always exists a trade-off between the spatial and temporal resolution of the observations, leading to missed dynamics in either the spatial or the temporal domain. Harnessing the Equi7Grid data cube's common data space and its inherent ability to directly access both the space and time domains, this scale gap in remote sensing of SM is closed with a novel data fusion approach. Through temporal filtering of the joint signal of spatio-temporally complementary radar sensors, a kilometre-scale, daily soil water content product is obtained, named SCATSAR-SWI.
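Before turning to the fusion, the adapted change detection principle used for the SSM retrieval can be made concrete with a minimal Python sketch: relative soil moisture is obtained by scaling the current backscatter between per-pixel dry and wet reference levels, as in the classical change detection approach the abstract refers to. The sketch is illustrative only; masking, vegetation correction, and the DGU resampling are omitted, and all values are hypothetical.

import numpy as np

def ssm_change_detection(sigma0_db, dry_ref_db, wet_ref_db):
    # Relative surface soil moisture (0..100 %) from backscatter (dB),
    # scaled between per-pixel historically dry and wet reference levels.
    ssm = (sigma0_db - dry_ref_db) / (wet_ref_db - dry_ref_db) * 100.0
    return np.clip(ssm, 0.0, 100.0)

# hypothetical per-pixel values in dB
sigma0 = np.array([-14.2, -11.8])
dry = np.array([-17.0, -16.5])
wet = np.array([-9.0, -10.0])
print(ssm_change_detection(sigma0, dry, wet))  # about [35.0, 72.3] percent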
With 25 km Metop ASCAT SSM and 1 km Sentinel-1 SSM serving as input, the SCATSAR-SWI is globally applicable and achieves daily full coverage over the operated areas. For evaluation, both the S-1 SSM retrieval algorithm and the SCATSAR-SWI data fusion algorithm are employed on a three-year data cube over Italy, and the SM data are compared against in-situ measurements, reference data from ASCAT SSM, a 1 km soil moisture model, and rainfall observations. The experiments for the Sentinel-1 SSM yield a consistent set of model parameters and product masks, unperturbed by coverage discontinuities. The SSM shows high agreement over plains and agricultural areas and low agreement over forests and strong topography. While positive biases during the growing season are detected, an excellent capability to capture small-scale soil moisture changes, such as those from rainfall or irrigation, is evident. For the SCATSAR-SWI, the experiments yield consistently high agreement with all reference datasets. However, while the Sentinel-1 signal appears to be attenuated, the ASCAT signal dynamics are fully transferred to the SCATSAR-SWI and benefit from the Sentinel-1 parametrisation. Finally, the SCATSAR-SWI shows excellent capability to reproduce rainfall observations over Italy. In the end, the insights gained during the conducted experiments and investigations have led to the realisation of an optimised data cube architecture, and to the successful production of a soil moisture product ingesting satellite measurements observed at complementary spatio-temporal scales. The grid and algorithms defined here form the basis for the upcoming operational Sentinel-1 SSM and SCATSAR-SWI production in the frame of the Copernicus Global Land Service (CGLS).
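The temporal filtering that merges the two SSM streams into a daily soil water content estimate is commonly realised as a recursive exponential filter, the standard Soil Water Index (SWI) formulation. The sketch below assumes SCATSAR-SWI follows this principle; it does not reproduce the thesis' exact parametrisation, and the input series is hypothetical.

import numpy as np

def swi_recursive(t, ssm, T=5.0):
    # Soil Water Index via the recursive exponential filter:
    # t: observation epochs [days], ssm: surface soil moisture values,
    # T: characteristic time length [days].
    swi = np.empty_like(ssm, dtype=float)
    swi[0], gain = ssm[0], 1.0
    for n in range(1, len(ssm)):
        gain = gain / (gain + np.exp(-(t[n] - t[n - 1]) / T))
        swi[n] = swi[n - 1] + gain * (ssm[n] - swi[n - 1])
    return swi

# hypothetical merged time series of ASCAT and Sentinel-1 SSM [%]
t = np.array([0.0, 0.5, 1.0, 3.0, 4.0])
ssm = np.array([32.0, 45.0, 41.0, 30.0, 55.0])
print(swi_recursive(t, ssm, T=5.0))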
Hybrid Orientation of LiDAR Strips and Aerial Images
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer
Abstract
Airborne LiDAR (Light Detection And Ranging) and airborne photogrammetry are both proven and widely used techniques for the 3D topographic mapping of extended areas. Although the two techniques are based on different reconstruction principles (polar measurement vs. ray triangulation), they ultimately serve the same purpose, the 3D reconstruction of the Earth's surface. For many applications it is therefore natural to integrate the data from both techniques to generate more accurate and complete results. Many works have been published on this topic of data fusion. However, prior to this work no integrated solution existed for the first steps that need to be carried out after data acquisition, namely (a) the lidar strip adjustment and (b) the aerial triangulation. A consequence of solving these two optimization problems independently can be large discrepancies (of up to several decimeters) between the lidar block and the image block. This is especially the case in challenging situations, e.g. corridor mapping with one strip only or when little or no ground truth data is available. To avoid this problem and to profit from many other advantages, a first rigorous integration of these two tasks, the hybrid orientation of lidar point clouds and aerial images, is presented in this thesis. The main purpose of the presented method is to simultaneously optimize the relative orientation and the absolute orientation (georeference) of the lidar and image data. These data can afterwards be used to generate accurate and consistent 3D or 2D mapping products. The orientation of the lidar and image data is optimized by minimizing the discrepancies (a) within the overlap area of these data and (b) with respect to ground truth data, if available. The measurement process is thereby rigorously modelled using the original measurements of the sensors (e.g. the polar measurements of the scanner) and the flight trajectory of the aircraft. This way, systematic measurement errors can be corrected where they originally occur. Both lidar scanners and cameras can be fully re-calibrated by estimating their interior calibration and mounting calibration. Systematic measurement errors of the flight trajectory can be corrected individually for each flight strip. For the highest accuracy demands, time-dependent errors can be modelled by natural cubic splines. The methodological framework of the hybrid adjustment was adapted from the ICP algorithm. Consequently, correspondences are established iteratively and on a point basis to maintain the highest possible resolution level of the data. Four different strategies are presented for the selection of correspondences within the overlap area of point clouds. Among these, the Maximum Leverage Sampling strategy is newly introduced: it automatically selects those correspondences that are best suited for the estimation of the transformation parameters. The various aspects of the hybrid adjustment are discussed on the basis of four examples. It is demonstrated that the integration of lidar strip adjustment and aerial triangulation leads to many synergetic effects. Two of the major advantages are an increased block stability (avoiding block deformations, e.g. bending) and an improved determinability of the parameters.
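The selection criterion behind Maximum Leverage Sampling can be illustrated compactly: in a linearised adjustment with design matrix A, the leverage of observation i is the i-th diagonal element of the hat matrix A(A^T A)^-1 A^T, and observations with the highest leverage constrain the estimated parameters most. The following Python sketch uses this textbook definition and is not the thesis' implementation; the design matrix is a random placeholder.

import numpy as np

def max_leverage_sample(A, k):
    # Indices of the k observations with the highest leverage.
    # Leverage h_i = [A (A^T A)^-1 A^T]_ii, computed stably via the
    # thin QR decomposition A = QR, for which h_i = ||row i of Q||^2.
    Q, _ = np.linalg.qr(A)
    leverage = np.sum(Q**2, axis=1)
    return np.argsort(leverage)[-k:]

# hypothetical design matrix: 1000 candidate correspondences, 6 parameters
rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 6))
selected = max_leverage_sample(A, k=100)  # best-suited correspondences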
Satellite Observations with VLBI
Department für Geodäsie und Geoinformation, Forschungsgruppe Höhere Geodäsie, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Johannes Böhm
Abstract
The application of the Very Long Baseline Interferometry (VLBI) technique for observations of artificial Earth-orbiting satellites instead of extra-galactic radio sources has been actively discussed in the geodetic community for several years. Promising applications can be found, among others, in the field of inter-technique frame ties. In this respect, the fundamental idea is to establish a co-location in space by combining the sensors of different space-geodetic techniques on a common satellite platform orbiting the Earth. Observations of this satellite can then be used to connect the technique-specific coordinate frame solutions. This approach is particularly relevant for the realization of the International Terrestrial Reference Frame (ITRF), which is a combination product of long-term time series of observations with VLBI, Satellite Laser Ranging (SLR), Global Navigation Satellite Systems (GNSS), and Doppler Orbitography and Radiopositioning Integrated by Satellite (DORIS). Additionally, the ITRF combination fundamentally relies on so-called local ties: terrestrially measured vectors between the reference points of geodetic instruments at co-location sites. Connecting the individual techniques via a co-location in space (i.e. by establishing so-called space ties), complementary to using local ties, provides promising possibilities to reveal technique-specific biases and to investigate discrepancies between local tie vectors and space-geodetic coordinate solutions, which are widely present at the cm level. Additionally, a co-location in space promotes the rigorous integration of all space-geodetic techniques, which was identified as one of the main goals of the Global Geodetic Observing System (GGOS) of the International Association of Geodesy (IAG). From the perspective of VLBI, satellite observations would allow connecting the purely geometric coordinate frame, realized by VLBI observations of extremely remote radio sources, with the dynamic coordinate frames of the geodetic satellite techniques (GNSS, SLR, and DORIS), which are subject to the Earth's gravity field. Although space ties between the satellite techniques have already been demonstrated, the space tie with VLBI has not been realized so far and could only be studied by simulations. One of the main reasons for this deficiency is that actual observation data are largely missing. Observations of satellites with geodetic VLBI systems are non-standard, and the required observation and analysis processes were not in place to collect real observation data. Addressing this issue, one goal of this work was to establish, for the first time, a closed process chain which makes it possible to obtain group delays from observations of satellites with VLBI. This process chain includes all required steps, from scheduling through observation, correlation, and post-correlation processing to the final analysis of the delays. To stay as close as possible to the data acquisition and processing scheme operationally used for geodetic VLBI sessions, standard software tools were adopted for satellite observations: the Vienna VLBI and Satellite Software (VieVS) was used for scheduling and data analysis, the software DiFX for correlation, and the Haystack Observatory Postprocessing System (HOPS) for the fringe fitting. The second goal of this work was to apply the established process chain in actual observation experiments, in order to validate and test all processing steps and to refine and adapt them whenever necessary.
Hence, in 2015 and 2016 a series of VLBI sessions with observations of GNSS satellites (GPS and GLONASS) was carried out, mainly on the Australian baseline Hobart-Ceduna. At the end of 2016, the network was extended with the antenna at Warkworth (New Zealand). All antennas were equipped with L-band receivers suitable for recording the GNSS L1 and L2 signals, and with modern backends. The final experiments in this series lasted up to 6 h and yielded results in terms of observed minus computed (O-C) residuals at the level of a few ns. In November 2016 the Chinese APOD-A nano satellite was tracked over a few days, whenever visible, by the Australian AuScope VLBI array. This small CubeSat was a particularly interesting observation target, as it can be considered a first realization of a co-location satellite, enabling GNSS, SLR, and VLBI on a common platform in a low Earth orbit (LEO). APOD was equipped with a dedicated VLBI beacon emitting narrow-bandwidth tones in the S- and X-band that could be observed with the standard receiver equipment used for geodetic applications. Although APOD was challenging to track due to its low orbit height of about 450 km, all observations were successfully correlated and yielded O-C residuals below 10 ns. All experiments are described in detail within this thesis. Although the results of the conducted satellite observation experiments did not reach an accuracy level which would allow studying actual frame ties with VLBI, the work is still valuable due to the hands-on observation experience gained. Furthermore, the newly developed procedures and programs now make it possible to perform further observations in a semi-manual manner, similar to standard observations of natural radio sources, enabling further research and development in the field of VLBI satellite observations.
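To illustrate how such O-C residuals are formed for a satellite target, the sketch below uses a purely geometric near-field delay model: the difference of the light travel times from the satellite to the two stations. Real analysis must additionally model clock offsets, troposphere, ionosphere, and relativistic terms; all positions here are hypothetical.

import numpy as np

C = 299792458.0  # speed of light [m/s]

def near_field_delay(sat, st1, st2):
    # Purely geometric delay [s] for a near-field target: light travel
    # time to station 2 minus light travel time to station 1.
    return (np.linalg.norm(sat - st2) - np.linalg.norm(sat - st1)) / C

# hypothetical Earth-fixed positions [m]
sat = np.array([-4.0e6, 2.9e6, -4.6e6])              # LEO satellite
st1 = np.array([-3950237.0, 2522347.0, -4311562.0])  # station 1
st2 = np.array([-3753472.0, 3912741.0, -3348541.0])  # station 2

computed = near_field_delay(sat, st1, st2)
observed = computed + 5e-9                # pretend measurement
print((observed - computed) * 1e9, "ns")  # O-C residual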
Description of natural surfaces by laser scanning
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer
Abstract
Laser scanning (also LiDAR, light detection and ranging) provides accurate and high-resolution geometric and radiometric measurements of natural surfaces at different spatial scales, which is relevant for many environmental and physical models. However, high-resolution laser scanning data are often not fully explored, or are not used at all, for surface description in such models. The aim of this research is to revisit current methods and to introduce new methods for the description of natural surfaces by exploring the full potential of novel high-resolution laser scanning data. The work comprises (a) natural surfaces such as soil, gravel, and vegetation; (b) a range of laser scanning techniques, such as TLS (terrestrial laser scanning), ULS (unmanned aerial vehicle laser scanning), and ALS (airborne laser scanning); and (c) ranging methods such as time-of-flight ranging, phase-shift ranging, and active and passive triangulation. The work focuses on three land-surface parametrisations, surface roughness, a 3D model of a conifer shoot, and canopy transmittance, which are selected as representatives of geometric-stochastic, geometric-deterministic, and geometric-radiometric surface descriptions, respectively. As those parametrisations have also been the subject of several research projects, particular objectives are set and analysed in six separate studies. The research contributes new methods and improvements of current methods for deriving those parametrisations from contemporary high-resolution laser scanning data. Surface roughness is mainly analysed in the frequency domain by means of the roughness spectrum. A new method is introduced that optimizes the interpolation parameters so that a DTM (digital terrain model) derived from a laser scanning point cloud has a unique stochastic property (the fractal dimension is maximized at high frequencies), which is important for an unbiased surface roughness assessment. Furthermore, multi-scale laser scanning point clouds are analysed to determine the spatial scales over which corresponding roughness spectra can be used interchangeably. A conifer shoot is (to the author's best knowledge) modelled in 3D from point clouds down to individual needles for the first time. The modelling is based on a semiautomatic method developed here for micro-scale triangulating laser scanning data. Then, a new method is introduced to estimate canopy transmittance from small-footprint ALS waveform data, in which assumptions on vegetation-ground scattering properties are not required. To enable upscaling of the canopy transmittance information to the space-borne LiDAR footprint scale, a waveform stacking method is developed in an additional study. The stacking method and the simulated space-borne LiDAR waveforms are then used, along with field measurements of forest inventory, to estimate aboveground biomass. The information and methods derived here concerning surface roughness, 3D shoot geometry, and canopy transmittance provide a basis for a better understanding and description of natural surfaces in environmental and physical models.
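The connection between the roughness spectrum and the fractal dimension can be illustrated for a 1D height profile: a self-affine profile has a power-law spectrum P(f) ~ f^(-beta), and its fractal dimension follows from the spectral slope as D = (5 - beta)/2. The sketch below estimates D for a synthetic profile; both the spectral synthesis and the slope-to-dimension relation are textbook assumptions, not the thesis' procedure.

import numpy as np

def fractal_dimension_1d(z, dx=1.0):
    # Fractal dimension of a 1D height profile from the log-log slope
    # of its power spectrum, using D = (5 - beta) / 2 for self-affine
    # profiles with P(f) ~ f^(-beta).
    z = z - np.mean(z)
    spec = np.abs(np.fft.rfft(z))**2
    f = np.fft.rfftfreq(z.size, d=dx)
    m = (f > 0) & (f < f[-1])            # skip DC and Nyquist bins
    slope, _ = np.polyfit(np.log(f[m]), np.log(spec[m]), 1)
    return (5.0 + slope) / 2.0           # slope = -beta

# synthetic self-affine profile by spectral synthesis (beta = 2, D = 1.5)
rng = np.random.default_rng(1)
n = 4096
f = np.fft.rfftfreq(n)
amp = np.zeros_like(f)
amp[1:] = f[1:] ** -1.0                  # amplitude ~ f^(-beta/2)
z = np.fft.irfft(amp * np.exp(1j * rng.uniform(0, 2 * np.pi, f.size)), n)
print(fractal_dimension_1d(z))           # close to 1.5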
The use of SAR backscatter time series for characterising rice phenology
Department für Geodäsie und Geoinformation, Forschungsgruppe Fernerkundung, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Wolfgang Wagner
Abstract
Detailed knowledge of the area and location of rice cropland is of great importance to any nation whose economy depends on rice production. Research in the field of rice cropland monitoring is necessary to investigate the different factors and effects of rice cultivation. Areas of application include risk management for the insurance industry, environmental reporting, determination of greenhouse gas emissions from rice cultivation, analysis of life and water cycles, and crop forecasts. Space-borne active microwave instruments are important sources of data for rice cropland records, owing to their insensitivity to cloud cover. A Synthetic Aperture Radar (SAR) is an active imaging system operating in the microwave spectrum. The resulting images reflect the backscatter properties of the surface, which are determined by the physical (e.g., surface roughness, geometric structure, orientation) and electrical (e.g., dielectric constant, moisture content, conductivity) characteristics of the surface, and by the radar frequency of the sensor (e.g., L-, C-, X-band). Multi-temporal SAR image analysis is a common approach for rice cropland monitoring. The high variation of the SAR backscatter signal during rice growth, in comparison with other types of land use and land cover, is therefore the most important basis for rice monitoring from space. However, no study so far has been able to utilize the complete Envisat Advanced Synthetic Aperture Radar (ASAR) archive to map rice fields, because the incidence angle dependency affects the backscatter signal. In addition, the exploitation of the potential of the Sentinel-1 mission for rice monitoring (i.e., on regional and continental scales) is still subject to ongoing research. This dissertation develops a backscatter time series analysis method aimed at classifying rice areas and determining the seasonality of rice crops. A SAR-based phenology approach is proposed and successfully applied for rice monitoring, allowing a more objective interpretation of rice areas from historical Envisat ASAR data (HH polarization) and the current Sentinel-1 SAR mission (VH polarization).
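The core idea, that rice paddies show a much larger temporal backscatter dynamic than other land cover due to the flooding-to-full-canopy cycle, translates into a simple per-pixel time series feature once the incidence angle dependency is removed. In the Python sketch below, a first-order linear slope normalizes backscatter to a reference angle before the seasonal dynamic range is thresholded; slope, reference angle, and threshold are illustrative assumptions, not the calibrated values of the dissertation.

import numpy as np

def normalize_incidence(sigma0_db, theta_deg, slope=-0.15, theta_ref=30.0):
    # First-order incidence angle normalization of backscatter (dB),
    # with slope in dB per degree.
    return sigma0_db - slope * (theta_deg - theta_ref)

def rice_flag(sigma0_db, theta_deg, min_range_db=8.0):
    # Flag a pixel as rice if the seasonal dynamic range of its
    # normalized backscatter time series exceeds min_range_db.
    s = normalize_incidence(sigma0_db, theta_deg)
    return (np.percentile(s, 95) - np.percentile(s, 5)) > min_range_db

# hypothetical VH time series over one season: flooding (around -20 dB)
# followed by canopy growth (around -11 dB), two orbit geometries
sigma0 = np.array([-19.5, -20.3, -17.0, -14.2, -12.1, -11.0, -11.5])
theta = np.array([31.0, 38.5, 31.0, 38.5, 31.0, 38.5, 31.0])
print(rice_flag(sigma0, theta))  # True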
Classification and change detection using point clouds
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Supervisor: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer
Abstract
The point cloud is a very powerful source for deriving 3D models, which are widely applied in natural resource and environmental management and in the urban domain. Point cloud classification and change detection are used in the context of Earth observation to monitor and assess the status and change of the natural and built environment. Compared to the 2D information provided by traditional raster images, they play an essential role in providing and updating information in three dimensions. A number of sensors and platforms acquire point clouds at different resolutions and extents; among them, airborne laser scanning (ALS) and image matching (IM) are the two main sources that allow point clouds to be collected over large areas. The number of published research articles on point cloud classification and change detection is increasing. Many studies use ALS data for classification and change detection, but most concentrate on raster representations, with fewer publications working on the point clouds themselves. In addition, the classification of image matching point clouds has so far drawn less attention than that of ALS data. The objectives of this dissertation focus on point cloud classification and change detection using both raster-based and point-based approaches, considering the advantages they offer at different levels of detail and for different types of datasets. This includes finding effective attributes for classifying and detecting changes, transferring attribute thresholds between different datasets and locations, and evaluating the benefit of machine learning in classification and change detection. The study questions, ranging from measurement technology via feature derivation to processing methods, are investigated and evaluated in four research articles. The presented studies are published in peer-reviewed journals and a conference paper. Articles I and II investigate classification using (i) full-waveform airborne laser scanning data and (ii) an image matching point cloud, based on a simple decision tree and a machine learning method. The presented approaches show high potential for classifying multiple object classes over urban areas. Article III investigates the removal of individual trees in forested areas using a traditional image differencing method. The presented method identifies new features of the LiDAR point cloud that are useful for detecting single-object changes in wooded areas. Finally, Article IV investigates the simultaneous detection and classification of changes for multi-object change detection in urban areas based on airborne laser scanning data. The presented studies demonstrate that the point cloud, whether acquired by airborne laser scanning or by image matching, is an effective and practicable data source for accurate classification and change detection over large areas.
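As an illustration of the point-based machine learning classification, the sketch below trains a random forest on per-point attributes with scikit-learn. The attribute set, the random training data, and the class scheme are placeholders, not the features and classes used in Articles I and II.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# hypothetical per-point attributes, e.g. height above ground, echo
# ratio, normalized echo number, echo width; hypothetical labels:
# 0 = ground, 1 = vegetation, 2 = building
rng = np.random.default_rng(42)
X_train = rng.normal(size=(500, 4))
y_train = rng.integers(0, 3, size=500)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

X_new = rng.normal(size=(10, 4))  # attributes of unclassified points
labels = clf.predict(X_new)       # one class label per point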
Quantification of single-tree structure in mountain forests using terrestrial laser scanning
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Supervisors: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer, Dipl.-Ing. Dr. Markus Hollaus
Abstract
Mountain forests provide a wide range of values, from protection against natural hazards, timber production, and biodiversity conservation to carbon storage and climate change mitigation. Understanding and monitoring detailed structure information at the single-tree level in mountain forests is as important as area-wide assessments for sustainably managing these mountain forest services. Fine-scale three-dimensional (3D) forest structures can be assessed by using terrestrial laser scanning (TLS) systems, which provide accurate and high-resolution measurements (i.e., 3D point clouds) of objects. TLS has greatly advanced single-tree quantification by successfully extracting attributes such as tree stem location, diameter, stem curve, stem volume, and biomass components. However, existing approaches are mainly developed for managed forests or forests in flat environments. Due to factors such as site fertility, spacing and light conditions, wind, and landslide events, mountain forests have more complex below-canopy structures, mainly featuring multifarious understory, stems with non-vertical orientations, and cross-sections that differ significantly from a circular shape. These conditions make it difficult to directly apply existing methods in mountain forests. This dissertation tackles such challenges by developing novel methods that overcome the high degree of complexity in processing TLS data acquired in mountain forests. The work focuses on methodological developments associated with three scientific objectives: (a) separation of tree wood and leaf components; (b) tree stem detection and modeling in mountain landslide-affected forests; and (c) reconstruction of stem cross-sections. A secondary focus is placed on smart point cloud structuring to assist the processing of large-volume point cloud data. First, an empirical study is carried out to examine the feasibility of four popular supervised machine learning methods and the impact of feature calculation. A follow-up work develops a novel approach that is fully automatic and unsupervised; experiments confirm its strength in separating wood and leaf components in plot-level mountain forests. Second, a new method is introduced that detects and reconstructs tree stems with irregular, non-vertical orientations. The reconstructed stems reach high accuracy compared to field references. Lastly, a new method is developed to model the actual shape of stem cross-sections, which drops the assumption that the cross-section of tree stems is circular. The works conducted in this dissertation provide practical examples and guidelines for understanding mountain forest structures at the single-tree level, and at the same time demonstrate that the required data processing can be largely automated. These contributions can help to achieve more intelligent and sustainable mountain forest management in the future.
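The circular cross-section assumption that the last method drops can be made concrete: the conventional baseline fits a circle to the points of a horizontal stem slice by linear least squares (the algebraic Kasa fit), and the large residuals this produces on irregular stems are what motivate a free-form model. A minimal sketch with a hypothetical elliptical slice:

import numpy as np

def fit_circle(xy):
    # Algebraic least-squares circle fit (Kasa method) to a 2D slice:
    # x^2 + y^2 = 2*cx*x + 2*cy*y + (r^2 - cx^2 - cy^2).
    x, y = xy[:, 0], xy[:, 1]
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x**2 + y**2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    return cx, cy, np.sqrt(c + cx**2 + cy**2)

# hypothetical slice of a non-circular stem: an ellipse [m]
t = np.linspace(0, 2 * np.pi, 200)
xy = np.column_stack([0.18 * np.cos(t), 0.12 * np.sin(t)])
cx, cy, r = fit_circle(xy)
resid = np.hypot(xy[:, 0] - cx, xy[:, 1] - cy) - r
print(r, np.max(np.abs(resid)))  # large residuals reveal non-circularity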
Determination of Arctic land surface and soil properties with Synthetic Aperture Radar information from satellites
Department für Geodäsie und Geoinformation, Forschungsgruppe Fernerkundung, Technische Universität Wien, 2018
Supervisor: Priv.-Doz. Dr. Annett Bartsch
Abstract
Permafrost is an essential climate variable and prone to change with future warming. Extensive permafrost degradation is likely to occur within this century. Currently stored carbon will potentially be mobilized, affecting the global carbon cycle. Furthermore, permafrost degradation will impact infrastructure and ecosystems. Permafrost monitoring is therefore essential, yet often challenging, because the Arctic regions affected by permafrost are vast and often remote. Remote sensing therefore holds great potential due to its continuous coverage. As permafrost is a subsurface phenomenon, it cannot be measured directly via satellite data. However, its state can be derived indirectly and degradation impacts can be observed. This thesis focuses on the possibilities of synthetic aperture radar (SAR) for circumpolar monitoring. Relationships between SAR backscatter and Arctic land cover as well as soil properties are explored, incorporating SAR data of different spatial scales and wavelengths as well as in situ data gathered during field campaigns. In a first publication, the influence of vegetation types of certain wetness regimes on C-band summer and winter backscatter is investigated in order to derive a circumpolar wetness map, which is subsequently applied at site scale and medium resolution. Soil properties are further explored in a second paper, where the interrelations of Arctic vegetation, soil moisture, and active layer thickness are analyzed and connected to X-band backscatter in order to delineate a continuous active layer map for a study site on the central Yamal Peninsula. In a third paper, a simplified normalization approach is introduced by investigating land-cover-specific incidence angle dependencies for Arctic regions.
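The land-cover-specific normalization of the third paper rests on estimating, separately for each land cover class, how backscatter changes with incidence angle. Its simplest form is a linear fit per class, sketched below with hypothetical data; the class scheme and slope values are invented for illustration.

import numpy as np

def class_slopes(sigma0_db, theta_deg, labels):
    # Linear incidence angle dependency (dB per degree) estimated
    # separately for each land cover class.
    return {c: np.polyfit(theta_deg[labels == c], sigma0_db[labels == c], 1)[0]
            for c in np.unique(labels)}

# hypothetical observations: class 0 (wet fen) has a steeper angle
# dependency than class 1 (dry tundra)
rng = np.random.default_rng(3)
theta = rng.uniform(20, 45, 200)
labels = rng.integers(0, 2, 200)
sigma0 = np.where(labels == 0, -10 - 0.25 * theta, -14 - 0.10 * theta)
sigma0 = sigma0 + rng.normal(0, 0.3, 200)
print(class_slopes(sigma0, theta, labels))  # about {0: -0.25, 1: -0.10}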
TLS point clouds at two wavelengths for the analysis of tree structures
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Supervisors: Dipl.-Ing. Martin Wieser, Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer
Abstract
In addition to the 3D coordinates of the measured points, terrestrial laser scanners also record the strength of the backscattered signal. Besides the reflectance properties of the object, this signal also depends strongly on the measurement geometry as well as on instrumental and atmospheric influences. By thoroughly calibrating these additional influences, the object's reflectance properties can be inferred from the backscattered signal. Instrumental influences in particular vary strongly between TLS models, so that a direct comparison of the intensities of different scanners is not possible. By measuring targets of known reflectivity, calibration curves can be derived for each scanner. If these calibration curves are applied to the intensities of different TLS models, the resulting reflectivities can be compared with one another. Using a test dataset of a forest plot, this thesis shows how tree structures (stems, branches, needles, and foliage) can be inferred purely by comparing reflectivities at different wavelengths. A classification into the two classes stem and needles/foliage based on a modified NDVI yields an accuracy of 74%. If only single echoes are used for the classification, an accuracy of almost 90% is reached. This is because single echoes exhibit more distinct reflectivities, so that the NDVI of the individual structures differs more clearly. A classification based on the reflectivities at a wavelength of 1.5 µm yields an accuracy of 90%, and even 94% for single echoes, whereas a separation of the tree structures based on the reflectivity at 1.0 µm alone is not possible. This shows that comparing and combining reflectivities from different TLS instruments and wavelengths is possible, but not necessary for the purpose of such a classification.
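A two-band index of this kind follows directly from the calibrated reflectivities. The sketch below assumes the modified NDVI contrasts the 1.0 µm and 1.5 µm bands (water absorption at 1.5 µm lowers the reflectivity of needles and foliage); the per-point values and the threshold are hypothetical.

import numpy as np

def modified_ndvi(refl_10, refl_15):
    # Modified NDVI from reflectivities at 1.0 um and 1.5 um
    # (assumed band combination).
    return (refl_10 - refl_15) / (refl_10 + refl_15)

def classify_points(refl_10, refl_15, threshold=0.2):
    # 0 = stem, 1 = needles/foliage; threshold is illustrative.
    return (modified_ndvi(refl_10, refl_15) > threshold).astype(int)

# hypothetical calibrated per-point reflectivities
r10 = np.array([0.45, 0.50, 0.48])  # 1.0 um
r15 = np.array([0.40, 0.20, 0.18])  # 1.5 um
print(classify_points(r10, r15))    # [0 1 1]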
Entwicklung eines Wissensdokumentationsrahmens für groß angelegte GIS-Projekte am Beispiel GIP Kärnten
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Gernot Paulus
Kurzfassung/Abstract
The Graphenintegrationsplattform (GIP) is the multimodal transport reference system for all of Austria. The GIP covers all modes of transport (public transport, cycling, walking, car traffic) and is more up to date and more detailed than conventional commercially available graphs covering all of Austria. Across the country, the GIP brings together the various databases and geographic information systems with which transport infrastructure is recorded and managed in the public sector. Transport data are processed by different experts in many different departments and organisational units concerned with transport infrastructure. Consequently, an organisation holds a large amount of heterogeneous information, data, and "informal knowledge" residing "in the experts' heads". This expert knowledge is often not fully documented and therefore constitutes a potential bottleneck for a transparent flow of information, especially during the handover and continuation of projects. The aim of this project is the structured and complete capture and documentation of the knowledge of the individual experts, with particular focus on the GIP Kärnten. The information is documented in such a way that other or new staff members can learn to understand these processes and independently find the current documents, tools, and data. Furthermore, this information is stored and filed in an organised manner. At the beginning of the project, a domain overview was developed showing the organisational units involved, the existing processes and tools, and the documentation and guidelines already available. Subsequently, a list of requirements for a structured process documentation was compiled in collaboration with the experts. In addition, the requirements for a filing system were defined. To be able to create the documentation in the best possible way, workflows describing the procedures are developed. To better understand the data management process, a data management plan based on the Horizon 2020 requirements was developed. This data management plan describes the data management life cycle for the data within the GIP Kärnten.
Einfluss eines Dichtemodells auf die regionale Schwerefeldmodellierung
Department für Geodäsie und Geoinformation, Forschungsgruppe Höhere Geodäsie, Technische Universität Wien, 2018
Betreuer: Dipl.-Ing. Jadre Maras, Ao.Univ.-Prof. Dipl.-Ing. Dr. Robert Weber
Kurzfassung/Abstract
In the reduction of gravity values or deflections of the vertical measured at the Earth's surface, a homogeneous density distribution (ρ = 2.67 g/cm³) within the Earth's crust is generally assumed. This assumption of a constant subsurface density is also applied by the program TOPOGRAV, which computes the topographic correction using the so-called prism method. This thesis investigates the influence of a subsurface density model on the computation of reduced deflections of the vertical (ξ and η) and gravity values using the software TOPOGRAV. The planar density model implemented by the author makes it possible to pass the depth and density value of a subsurface density jump as parameters for the computation of the reduction. The investigations carried out in this thesis showed that including a density model appears particularly suitable for stations beneath massive mountain ranges and for large density contrasts. For such a station, the reduced gravity values obtained with the different computation methods (with or without density model) differ by almost 30 mGal. A considerably smaller difference (<1) was found for the reduced deflections of the vertical. For a more detailed investigation of the influence of a density model, tests with different terrain masks (above all, sea) will be necessary.
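The order of magnitude of such differences can be illustrated with the infinite Bouguer plate, a far simpler model than the prism method used in TOPOGRAV; the numbers below are illustrative, not taken from the thesis. A plate of thickness t and density ρ attracts with

    g = 2 \pi G \rho t .

If, below some depth, the standard density ρ₀ is replaced by ρ₁ over a layer of thickness t, the reduction changes by

    \Delta g = 2 \pi G (\rho_1 - \rho_0)\, t ,

so an assumed contrast of Δρ = 0.2 g/cm³ over t = 5 km already yields Δg ≈ 42 mGal, consistent with the tens of mGal reported above.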
GIS-based Inventory of Tourism Infrastructure in Ukraine
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Adrijana Car
Kurzfassung/Abstract
The tourism industry is one of the most important sectors of the Ukrainian economy. Ukraine has many natural and man-made attractions that are surrounded by a number of amenities and accommodation. As a result, the number of people from inside as well as outside of Ukraine visiting popular attractions increases every year. Currently there are very few sources of information on these attractions and the surrounding infrastructure such as accommodation or amenities. Hardly any applications allow a user to find amenities within an area of interest, routes to access a destination, or the approximate time needed for travelling to a specific place. Most of these tasks are spatial in nature and can therefore be solved using a GIS. Thus, the main goal of this thesis is to create a GIS-based inventory of the tourism infrastructure in Ukraine. To achieve this, the following questions need to be answered: (1) What kind of tourism infrastructure is needed to plan a trip in Ukraine? (2) How well can the tourism concepts of the 4A's and the tourism product be used to model tourism infrastructure for Ukraine? (3) What methods and tools can be used to analyze the inventory of tourism infrastructure, and who may be interested in this kind of analysis? We use the tourism concepts of the 4A's and the tourism product to conceptualize this GIS application. The 4A's refer to Attractions (e.g. museums, lakes, cathedrals), Accommodation (e.g. hotels), Amenities (e.g. ATMs, gas stations) and Accessibility (e.g. roads, railways); this concept is used to model the tourism infrastructure. Data gathering focuses on open sources such as OpenStreetMap and GADM due to the limited availability of data from official governmental sources. The tourism product consists of three elements: nucleus (the attraction itself), inviolate belt (the context of the attraction) and zone of closure (all services and facilities); it is used to inspect the tourism infrastructure that surrounds and services attractions. The concept is described in an entity-relationship model and corresponding diagram (ERM/D), which is then used as a specification for implementation. The prototype of the application is implemented in an ArcGIS 10.5 environment with the following functionality: (1) visualization of the tourism inventory in a series of thematic maps following current cartographic standards; (2) implementation of the idea of the tourism product as one possible means of analyzing the distribution and density of Ukraine's tourism infrastructure. The prototype of the GIS application created in this thesis contains a geodatabase of the current tourism infrastructure in Ukraine. The spatial analysis of the tourism infrastructure and the visualization of the analysis results in a series of thematic maps provide input for a discussion of the usefulness of the achieved results for different user groups. Potential users of this inventory are primarily experts from the tourism industry and from tourism research who are interested primarily in sustainable tourism development. “Ordinary” tourists, however, can also benefit from its use, given that the tourism inventory is expected to integrate tourism-relevant data and information from different sources.
Quantifying the impact of climate oscillation on Mediterranean hydrology using multivariate statistics
Department für Geodäsie und Geoinformation, Forschungsgruppe Fernerkundung, Technische Universität Wien, 2018
Betreuer: Univ.Ass. Dipl.-Ing. Bernhard Bauer-Marschallinger, Univ.-Prof. Dr. Wouter Arnoud Dorigo MSc
Kurzfassung/Abstract
The Mediterranean area has a complex geography covering several climate zones. The interactions and processes of the hydrological cycle in the area are currently the focus of many scientific studies due to the increase in extreme weather events and climate change impacts. The ever-increasing need for water in tourism and agriculture reinforces the problem in drought-affected areas. Therefore, monitoring and a better understanding of the hydrological cycle are crucial in order to create better long-term forecasts for this area. Variabilities in climate that follow distinct repeating spatio-temporal patterns, known as climate modes, are among the major drivers of the hydrological cycle. This study therefore seeks to quantify the relationship between regional climate modes and the hydrological cycle in the study area. Empirical Orthogonal Functions (EOFs), and variations thereof, are applied to a wide range of hydrological datasets to extract the major variations over the study period. More than ten datasets, describing precipitation, soil moisture, and evapotranspiration, have been analysed to support and enrich the findings of earlier studies. The time span of the datasets varies but lies within 1980-2015. The resulting EOFs are then correlated with regional climate modes using Spearman rank correlation analysis. This is done for the entire time span of the EOFs as well as for monthly and seasonal means. There is evidence for relationships between hydrological phenomena and the climate modes North Atlantic Oscillation (NAO), Arctic Oscillation (AO), Eastern Atlantic (EA), and Tropical Northern Atlantic (TNA). The analysis by seasonal and monthly means reveals especially high correlations in the winter months. However, the results strongly depend on the extent of the study area. The findings suggest an impact of regional climate modes on the hydrological cycle in the Mediterranean area.
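A minimal sketch of this analysis chain, an EOF decomposition via singular value decomposition followed by a Spearman rank correlation of the leading principal component with a climate index, is given below on synthetic data; the array sizes and the synthetic NAO-like index are assumptions for illustration:

    import numpy as np
    from scipy.stats import spearmanr

    rng = np.random.default_rng(42)
    t, ny, nx = 432, 20, 30                 # e.g. 36 years of monthly fields
    field = rng.standard_normal((t, ny * nx))

    # EOF analysis: remove the temporal mean, then SVD of the anomalies
    anom = field - field.mean(axis=0)
    u, s, vt = np.linalg.svd(anom, full_matrices=False)
    pcs = u * s                             # principal components (time series)
    eofs = vt                               # spatial patterns
    explained = s**2 / np.sum(s**2)         # explained variance fractions

    # correlate the leading PC with a (here synthetic) climate index
    nao = rng.standard_normal(t)
    rho, p = spearmanr(pcs[:, 0], nao)
    print(f"EOF1 explains {explained[0]:.1%}; Spearman rho={rho:.2f} (p={p:.2f})")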
Use of Pictures from Social Media to Assess the Local Attractivity as an Indicator for Real Estate Value Assessment
Department für Geodäsie und Geoinformation, Forschungsgruppe Geoinformation, Technische Universität Wien, 2018
Betreuer: Privatdoz. Dipl.-Ing. Dr. Gerhard Navratil
Kurzfassung/Abstract
In recent years there has been a massive increase in the production and collection of data [Goodchild 2007]. In the field of social media in particular, an overwhelming quantity of pictures is produced. This raises the question of whether spatial models can be derived from these images, or, in other words, whether social media data can be used for spatial and/or semantic purposes. Studies by Hochmair [2009] and Alivand [2013] found that people tend to take more pictures in places that appear more attractive than in those that seem less appealing. Other studies (Brunauer et al. [2013] and Helbich et al. [2013]) come to the conclusion that areas that appear more appealing have higher real estate prices. This study will link these components together. Images are collected from social media and classified based on their focus: social interaction or documentation of the surroundings. Images of the latter kind will be used for further analysis. A neural network will be used for classification. Vienna is chosen as the study area. In the next step, a further large set of geolocated social media images is gathered and filtered with the newly trained neural network, and the location information of the valid images is stored. From these data a heat map is created, with the density of the images taken as the indicator. For the validation of the created model, the company DataScience Service GmbH compares the heat map with their real estate price model to see whether there is a link between social media output and real estate prices.
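The heat map step can be sketched as a kernel density estimate over the photo locations; the coordinates and grid below are synthetic stand-ins for the filtered, geolocated images, not data from the study:

    import numpy as np
    from scipy.stats import gaussian_kde

    # synthetic geotagged photo locations (lon, lat) around Vienna
    rng = np.random.default_rng(1)
    lon = 16.37 + 0.05 * rng.standard_normal(500)
    lat = 48.21 + 0.03 * rng.standard_normal(500)

    # kernel density estimate evaluated on a regular grid -> heat map raster
    kde = gaussian_kde(np.vstack([lon, lat]))
    gx, gy = np.mgrid[16.2:16.55:200j, 48.1:48.32:200j]
    heat = kde(np.vstack([gx.ravel(), gy.ravel()])).reshape(gx.shape)
    print(heat.min(), heat.max())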
Untersuchung der Genauigkeit eines Bündelblockausgleichs im Wald
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Betreuer: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer, Dipl.-Ing. Dr. Andreas Roncat
Kurzfassung/Abstract
The task of this diploma thesis is to reconstruct, from a series of photographs, the path a photographer has taken through a patch of forest. The path is about 50 metres long, and the coordinates of its starting point are assumed to be known. The images are to be taken without a tripod. To verify the result, a reference network with check points is established so that the quality of the reconstructed path can be assessed. This is done by defining and surveying artificial and natural control points in the Austrian national survey system, using a total station as the measuring instrument. The natural control points are defined during the field survey by selecting distinctive natural features that can be unambiguously identified both in the images and in the field. The image data are acquired row by row with a digital single-lens reflex camera. Five images are taken per camera station, overlapping by about 50 percent and covering a field of view of 180° (90° to the left and right of the path axis). One image block consists of 145 images in 29 image rows. The first two image rows contain artificial and natural control points; they are needed to determine the orientation parameters of the initial block. Natural points are also surveyed in the last image row; these are exclusively check points, used only to verify the orientation result, and do not enter the bundle block adjustment. The pre-orientation of the image block is carried out with Agisoft PhotoScan, the bundle block adjustment for optimising the orientation parameters with the software package OrientAL, which is being developed at TU Wien. MATLAB by MathWorks and GNU Octave are used to visualise the results. This thesis shows that solving a bundle block adjustment is possible under the given conditions. However, the evaluation of the results also shows that a number of improvements and extensions of the acquisition and evaluation process would be conceivable to further increase the accuracy of the bundle block adjustment.
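At the core of the bundle block adjustment is the least-squares minimisation of image residuals under the collinearity equations; in one standard textbook form (sign and rotation conventions vary between software packages such as PhotoScan and OrientAL):

    x' = x'_0 - c \, \frac{r_{11}(X - X_0) + r_{21}(Y - Y_0) + r_{31}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}

    y' = y'_0 - c \, \frac{r_{12}(X - X_0) + r_{22}(Y - Y_0) + r_{32}(Z - Z_0)}{r_{13}(X - X_0) + r_{23}(Y - Y_0) + r_{33}(Z - Z_0)}

Here (x'_0, y'_0, c) is the interior orientation, (X_0, Y_0, Z_0) the projection centre, r_ij are the elements of the rotation matrix, and (X, Y, Z) is the object point. The adjustment estimates these parameters jointly from all image observations, with the surveyed control points fixing the datum and the check points serving for independent verification.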
Electrical modeling for an improved understanding of GPR signatures in alpine permafrost
Department für Geodäsie und Geoinformation, Forschungsgruppe Geophysik, Technische Universität Wien, 2018
Betreuer: Dipl.-Ing. Matthias Steiner, Dr. Adrian Flores-Orozco
Kurzfassung/Abstract
Within the frame of this diploma thesis, a series of Ground Penetrating Radar (GPR) surveys at the summit of Hoher Sonnblick was conducted. The objective was to determine the internal structures and distribution of mountain permafrost and associated changes due to seasonal variations in temperature. 3D GPR surveys organised by the Geophysics research group of TU Wien were repeated at different times between 2015 and 2017, as GPR has successfully been applied to delineate frozen materials in permafrost regions. In contrast to previous studies, however, the GPR investigations aimed not only at the identification of possible interfaces, but also at developing a methodology for modelling the electrical properties of the subsurface that permits an improved understanding and interpretation of GPR and Electrical Resistivity Tomography (ERT) imaging results. Besides the processing and interpretation of the raw data, a quasi-continuous model of the electrical properties of the subsurface at the summit of Hoher Sonnblick was obtained, with regard to lithological contacts and discontinuities (e.g., fractures) controlling atmospheric-subsurface interactions. The modelling approach was tested on three case studies in porous and unconsolidated media and finally applied to the highly fractured media present at Hoher Sonnblick. For validation, the GPR modelling results were compared to borehole temperature data, revealing consistent results.
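The link between GPR signatures and electrical properties rests on the propagation velocity of the radar wave in the subsurface; as a brief reminder (the standard low-loss approximation, not a formulation specific to this thesis):

    v = \frac{c}{\sqrt{\varepsilon_r}}, \qquad z = \frac{v \, t_{2w}}{2},

where c ≈ 0.3 m/ns is the speed of light in vacuum, ε_r the relative permittivity, and t_{2w} the two-way travel time to a reflector at depth z. Because frozen ground (ε_r roughly 4-8) has a much lower permittivity than unfrozen, wet material (ε_r up to about 25-30), the associated velocity contrasts make permafrost-related interfaces visible in the radargram.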
Die Rückführung von Katastergrenzen: Ist die Dokumentation von Änderungen im Kataster dafür gerüstet?
Department für Geodäsie und Geoinformation, Forschungsgruppe Geoinformation, Technische Universität Wien, 2018
Betreuer: Privatdoz. Dipl.-Ing. Dr. Gerhard Navratil
Kurzfassung/Abstract
With its 200-year history, the Austrian cadastre has become part of Austria's cultural heritage. Since the creation of the stable cadastre in 1817 through the land tax patent, the cadastre has been kept up to date with more or less precisely documented maintenance. This means that from the very beginning, care was taken to incorporate changes into the cadastre in order to keep it current. For this reason, it must in theory be possible to trace parcel boundaries back through history and to determine the ownership situation at any point in time. This diploma thesis attempts to restore former states and boundaries in the cadastre. For this purpose, subdivision plans, site plans, other relevant technical documents, and old map sheets are consulted. The central questions to be answered in this thesis are: Is such a retracing of the boundaries possible, and where do problems occur? Do gaps appear between the boundaries during restoration, and what are their magnitude and causes? The thesis begins with a theoretical part explaining the definition and content of the cadastre as well as its history, with particular attention to the maintenance of the cadastre and the major changes in the cadastral concept. A separate chapter deals with the digital cadastral map and its development, objectives, and contents. A further chapter then describes in more detail the documentation of changes in the digital cadastral map, as it contains essential information for the practical part. The last section of the theoretical part deals with the control point network and its development and changes. The practical part comprises the retracing of boundaries, carried out in a small test area in the cadastral municipality of Strasserfeld in Lower Austria. The results show that such a retracing is possible in principle, but that problems arose repeatedly, especially with graphically defined boundaries. The thesis therefore goes on to describe which challenges had to be overcome in the retracing process and the outcome of the comparison between the DKM of 1997 and the retraced boundaries. Finally, it is discussed which changes in the documentation of the cadastre would be required to enable better maintenance and easier retracing of boundaries in the cadastre.
Normal equation combination of VLBI and SLR for CONT14
Department für Geodäsie und Geoinformation, Forschungsgruppe Höhere Geodäsie, Technische Universität Wien, 2018
Betreuer: Dipl.-Ing. Jakob Franz Gruber, Univ.-Prof. Dipl.-Ing. Dr. Johannes Böhm
Kurzfassung/Abstract
In this master thesis, VLBI and SLR (SINEX) data from a 15-day measurement campaign in 2014 are combined on the level of normal equations (NEQs). This combination method plays an important role in the generation of Terrestrial Reference Frames and follows an approach of the Deutsches Geodätisches Forschungsinstitut (DGFI), which is considered an alternative to the state-of-the-art method used at the Institut Géographique National (IGN), where the ITRF is derived on the solution level. In this approach, residuals (dX) for the VLBI and SLR ground stations are estimated by least squares adjustment (LSA) and added to given a-priori coordinates, thus generating a dedicated terrestrial reference system. To this end, different definitions of the geodetic datum are tested. The two space-geodetic techniques are connected via local ties at four co-location sites, which are implemented in the NEQs as conditions fixing the distance between the respective observing units. The results are investigated with respect to differences between VLBI and SLR stations, as well as differences between the (inter-technique) combined solution and the technique-specific individual solutions. It is shown that the VLBI system is more stable than the SLR system; however, this is partly because the available VLBI data are more homogeneous. Hence, the VLBI stations are also used for the definition of the geodetic datum. On average the residuals have a size of 1.5 cm, varying between and within the two techniques. Furthermore, the variation of scale between the systems was investigated. Results show that the radius of the Earth (approximately 6371 km) is about 1 cm longer in the VLBI system than in the SLR system. This indicates a difference in scale of 1.7 ppb, which is comparable to the results found by Altamimi et al. [2016] with the combination of VLBI and SLR data on the solution level. This can contribute to a better understanding of technique-specific characteristics, which is necessary in order to improve the accuracy of a global TRF. The thesis also points out relevant parameters and their influence on the combination of VLBI and SLR NEQs, and discusses challenging aspects that need to be considered, such as discrepancies between the individual reference systems.
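Schematically, and as a minimal sketch rather than the exact formulation of the thesis: each technique contributes a normal equation system N_i x = b_i in a common parameter space, and the combined solution follows from the accumulated system, extended by datum and local-tie conditions,

    \left( \lambda_V N_V + \lambda_S N_S + N_{datum} + N_{tie} \right) \hat{x} = \lambda_V b_V + \lambda_S b_S + b_{tie},

where λ_V and λ_S are relative weights of the VLBI and SLR contributions, the datum terms realise the chosen datum definition (here based on the VLBI stations), and the tie terms constrain the coordinate differences between co-located VLBI and SLR reference points to the terrestrially measured local ties.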
GIS-based analysis of Tourism Infrastructure in Central Asia
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Adrijana Car
Kurzfassung/Abstract
Central Asia is becoming an increasingly popular tourism destination for national and international tourists because of its history, culture, and natural beauty. Tourists from both inside and outside the Central Asian countries come to enjoy many different outdoor activities and cultural events while enjoying local hospitality. The aim of this project was to develop a prototype of a GIS application for Tourism Infrastructure (TI-GIS application) for Central Asia (CA), using spatial analysis in order to support tourism development in Central Asia. The Tourism Infrastructure (TI) geodatabase of Central Asia covers Kazakhstan, Turkmenistan, Tajikistan, Kyrgyzstan, and Uzbekistan. The concept of the TI-GIS application combines tourism research and GIS, i.e. it is based on the tourism concept of the 4A's: attraction, accommodation, amenity, and accessibility. Kernel Density Estimation (KDE) was used to analyze the density of attractions, accommodation, and amenities. Spatial autocorrelation was used to analyze the cross-correlation between TI variables. The tourism product concept was applied to analyze the provision of attractions with TI. Different scenarios for the TI-GIS application were created to demonstrate the usefulness of the geodatabase for different tourism users such as tourists, tour operators, and tourism planners. Different thematic maps visualize the results of the scenarios of the TI-GIS application. The results demonstrate the usefulness of the TI geodatabase and the associated GIS application to potential users, both in tourism industry and research, and can in turn support sustainable tourism development in Central Asia.
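The spatial autocorrelation step can be illustrated with global Moran's I; the toy weight matrix and values below are assumptions for illustration, not data from the project:

    import numpy as np

    def morans_i(values, w):
        # Global Moran's I for values with spatial weight matrix w,
        # where w[i, j] > 0 if locations i and j are neighbours.
        z = values - values.mean()
        n = len(values)
        num = n * np.sum(w * np.outer(z, z))
        den = w.sum() * np.sum(z**2)
        return num / den

    # toy example: 5 locations on a line, adjacent cells as neighbours
    vals = np.array([1.0, 2.0, 2.5, 4.0, 5.0])
    w = np.zeros((5, 5))
    for i in range(4):
        w[i, i + 1] = w[i + 1, i] = 1.0
    print(morans_i(vals, w))   # positive: similar values cluster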
Evaluating Temporal Approximation Methods Using Burglary Data
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Gernot Paulus
Kurzfassung/Abstract
During the past 15 years, spatiotemporal crime analysis has focused almost exclusively on the spatial component of crimes, while the temporal component has rarely been considered and remains little researched; consequently, fewer tools are available for temporal analysis than for spatial analysis. Yet knowing when crimes occur is crucial for preventing them, because this information rules out a large group of possible perpetrators. Because crimes such as burglaries often lack precise time information, law enforcement agencies are interested in applying temporal approximation methods to estimate the real occurrence times of crimes from less accurate records. This research discusses four traditional methods and three novel methods, among them the “Grazer Tatzeitmodell” method, which is used by the Austrian police and has never before been published or compared to any other temporal approximation method. This study aims to fill this gap with an objective comparison and evaluation of the prediction accuracy of each method under different scenarios; the results provide law enforcement agencies with valuable information and improvements to existing models. The first step towards this objective is to implement an automatic test environment for all seven methods. In the second step, offenses with inaccurate occurrence times, out of a total of 138,752 burglary offenses, are applied to each method, using scenarios that vary, for example, between crime types (apartment, house, or car burglary), study areas (Vienna or Graz), and time periods (2008-2015). Each method is then evaluated based on crime events with known occurrence times. Each method's quality is assessed by comparing the distribution of predicted occurrence times with the distribution of known occurrence times using, for instance, the root mean square error or the Spearman correlation coefficient. Results show that the two aoristic methods approximate a similar and not significantly better result than the naïve random method. The end method shows surprisingly good results for some scenarios. The “Grazer Tatzeitmodell” shows very good results for both evaluation measures in most scenarios but does not perform well when limited data are available, whereas the other methods deliver constant results in that case.
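Of the methods compared, the aoristic approach is the easiest to sketch: each offense with an uncertain time window spreads a unit weight uniformly over the hours the window covers. A minimal sketch follows; the hour-of-day binning and the wrap-around handling are illustrative assumptions:

    import numpy as np

    def aoristic_profile(starts, ends):
        # Each offense with window [start, end) in hours (0-24, may wrap
        # past midnight) contributes a total weight of 1, spread evenly
        # over the hour-of-day bins it covers.
        profile = np.zeros(24)
        for s, e in zip(starts, ends):
            span = (e - s) % 24 or 24          # window length in hours
            hours = [int(h) % 24 for h in np.arange(s, s + span)]
            for h in hours:
                profile[h] += 1.0 / len(hours)
        return profile

    # burglary known only to have occurred between 22:00 and 06:00
    print(aoristic_profile([22], [6]).round(3))  # 0.125 in each of 8 bins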
UAS Based Morphological Change Detection of Wetland Areas
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Gernot Paulus
Kurzfassung/Abstract
Unmanned Aerial Systems (UAS) offer a new and innovative approach to high-resolution spatiotemporal environmental monitoring. Not only is there an increasing number of professional UAS platforms, but also a wide range of affordable sensors and software tools. This research project focuses on UAS-based morphological change detection of a wetland area. The selected test site is the “Bleistätter Moor”, a protected wetland area located near Ossiacher See and the Tiebel River in Carinthia, Austria, and one of the largest renaturation projects in Austria. The major goal of this research project is the monitoring of morphological changes and changes in water level between March 2017 and October 2017 with UAS-based photogrammetric methods. At four different time stamps (T1, T2, T3, and T4), high-resolution images were captured using a fixed-wing UAS. Additionally, a T0 time stamp prior to construction is used as a baseline in the change detection analysis. From the aerial photographs, a Structure-from-Motion (SfM) approach was applied to generate orthorectified mosaic images and Digital Surface Models (DSMs). The DSMs were highly detailed, with a resolution of up to 0.06 m. The change detection for elevation and water level was carried out for all time stamps using the Geomorphic Change Detection (GCD) ArcGIS extension. Sixteen comparison scenarios were formulated, and a minimum level of detection (minLOD) and probabilistic thresholding with a 95% confidence level were applied to each comparison scenario. This research demonstrates that UAS-based morphological change detection can monitor morphological changes in wetland areas, including elevation and water level changes. The morphological changes were visually analyzed by domain experts. The quantification of the results yielded the total area of surface lowering, the total area of surface raising, the average depth of surface raising, and the average depth of surface lowering, including error estimates. The biggest changes occurred between T2 and T3. Most of the significant changes were detected on open water areas and vegetation during the whole period of investigation. The water level change detection results show the water level increasing from T1 until T3 and decreasing from T3 until T4.
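The thresholding step in such a DEM-of-Difference analysis can be sketched as follows; the uncertainty values and the simple per-cell error model are illustrative assumptions, not the exact GCD configuration used here:

    import numpy as np

    def dod_with_minlod(dem_new, dem_old, sigma_new, sigma_old, t=1.96):
        # DEM of Difference, masked at the minimum level of detection:
        # the propagated error of the difference, scaled by the critical
        # value t (t = 1.96 corresponds to 95% confidence).
        dod = dem_new - dem_old
        minlod = t * np.sqrt(sigma_new**2 + sigma_old**2)
        return np.where(np.abs(dod) >= minlod, dod, np.nan)

    # toy 3x3 surfaces with 0.05 m vertical uncertainty each
    old = np.zeros((3, 3))
    new = np.array([[0.02, 0.30, 0.0], [0.0, -0.25, 0.0], [0.0, 0.0, 0.1]])
    print(dod_with_minlod(new, old, 0.05, 0.05))  # only +-0.14 m survives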
Quantitative Estimation of the Structure and morphological Parameters in Vineyards using Close Range Photogrammetry
Studiengang Spatial Information Management, Fachhochschule Technikum Kärnten, 2018
Betreuer: FH-Prof. Dr. Karl-Heinrich Anders
Kurzfassung/Abstract
This thesis focuses on developing a non-destructive workflow to estimate the structure and morphological parameters of a grapevine, such as the number, shape, and size of grapes, in order to monitor and quantify phenological changes. The conceptual workflow combines close-range photogrammetry with computer vision, potentially enabling large-scale yield estimation in the future. In this thesis, only one grapevine was used as a proof of concept. The grapevine is reconstructed in 3D using close-range photogrammetry. The high-density point cloud is classified using the CANUPO plug-in in CloudCompare and binary-segmented into grape and non-grape points. From the quantified morphological attributes of the grapes, the volume can be calculated. The volumetric calculation is done by comparing three different methods: firstly with the RANSAC shape detection tool, secondly with a convex hull approach, and thirdly with the Poisson surface reconstruction method. To validate the proposed workflow, one bunch of grapes is removed from the vine for exact measurement purposes, i.e. to weigh it, count the berries, and measure the size of each grape in order to establish ground truth.
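The convex hull variant is the simplest of the three volume estimates and can be sketched in a few lines; the synthetic berry-sized point cloud below stands in for the segmented grape points:

    import numpy as np
    from scipy.spatial import ConvexHull

    # synthetic point cloud of one grape: noisy samples on a ~1 cm sphere
    rng = np.random.default_rng(7)
    pts = rng.standard_normal((500, 3))
    pts /= np.linalg.norm(pts, axis=1, keepdims=True)         # unit sphere
    pts *= 0.01 * (1 + 0.02 * rng.standard_normal((500, 1)))  # radius ~1 cm

    hull = ConvexHull(pts)
    # close to the 4.19 cm^3 of an ideal sphere with 1 cm radius
    print(f"convex hull volume: {hull.volume * 1e6:.2f} cm^3")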
Unterstützung der menschlichen Selbstlokalisierung
Department für Geodäsie und Geoinformation, Forschungsgruppe Geoinformation, Technische Universität Wien, 2018
Betreuer: Univ.-Prof. Dr. Ioannis Giannopoulos MSc BSc
Kurzfassung/Abstract
Human self-localisation is an essential part of everyday life. To determine one's own position and orientation, the allocentric representation, usually in the form of a map, must be aligned with one's own egocentric representation of the real world. This requires objects (anchor points) that are present in both representations. This thesis presents two novel approaches intended to simplify this alignment process and thus self-localisation. The viewshed approach is based on a visibility analysis to help users choose suitable anchor points: since only the buildings actually visible in reality are highlighted on the map, all other buildings can be excluded from the choice. The image-recognition approach, by contrast, simplifies the self-localisation process by automating part of the task and marking an anchor point on the map for the user. In an empirical experiment with 30 participants in Vienna's tenth district, the two methods were compared with each other and additionally with a baseline method in several respects: efficiency, user experience, cognitive load, and required effort. The results show that the image-recognition method provided the best support for self-localisation and was also the most popular with users, while the viewshed method fell clearly short of expectations.
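A minimal sketch of the idea behind the viewshed approach, assuming building footprints as 2D polygons (shapely): a building is kept (and highlighted on the map) only if the sight line from the observer to it is not blocked by another footprint. This is an illustration of the concept, not the thesis's actual visibility analysis.

```python
from shapely.geometry import Point, LineString, Polygon

def visible_buildings(observer: Point, footprints: list[Polygon]) -> list[int]:
    """Return indices of footprints with an unobstructed 2D sight line
    from the observer to their centroid (a deliberately crude viewshed)."""
    visible = []
    for i, building in enumerate(footprints):
        sight = LineString([observer, building.centroid])
        # blocked if the sight line passes through any *other* footprint
        blocked = any(sight.crosses(other)
                      for j, other in enumerate(footprints) if j != i)
        if not blocked:
            visible.append(i)
    return visible
```

The sketch deliberately simplifies: it targets centroids, works purely in 2D, and ignores terrain and building heights, all of which a production viewshed would have to handle.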
Analyse von Bewegungsdaten mit ArcGIS in der Cloud
Department für Geodäsie und Geoinformation, Forschungsgruppe Geoinformation, Technische Universität Wien, 2018
Betreuer: Privatdoz. Dipl.-Ing. Dr. Gerhard Navratil
Kurzfassung/Abstract
Spatial data mining is a rapidly emerging field, a consequence of the tremendous growth in spatial data collection. This growth has been driven by various applications, such as remote sensing, GIS, environmental assessment, planning, web-based spatial data sharing, and location-based services. Through advanced spatial data mining methods and analysis, valuable knowledge can be extracted and used to support decision making based on spatial data. As data-based decision making becomes more and more important, and a large proportion of data includes significant spatial components, spatial algorithms are becoming an important part of modern data mining. The dataset used for this thesis is based on user data from a smartphone application for indoor navigation, developed and designed for a fashion trade show in Copenhagen. This thesis evaluates whether this movement data can be analysed to gain useful knowledge with the toolset provided by commercial GIS software. The functions provided by this software were embedded and adjusted in several scripts to process datasets automatically in post-processing. Testing the feasibility of these methods in post-processing also allows the possibility of future real-time analysis to be evaluated. Furthermore, a comparison is made of how processing large amounts of data differs from processing smaller datasets, and whether cloud computing can mitigate possible issues. In conclusion, the study found that valuable knowledge can be extracted from the provided movement data despite certain limitations. These limitations are primarily related to data acquisition rather than to the data analysis methods. Firstly, analysing some phenomena, for example detecting movement patterns, requires large amounts of data with a dense temporal structure; this limitation is even more severe for real-time applications. Secondly, a relatively high spatial accuracy is necessary to yield high-quality results. Lastly, some issues related to pre-processing tasks were observed, especially concerning coordinate transformations.
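The thesis relies on ArcGIS tooling, but the kind of trajectory post-processing it describes can be sketched generically. Assuming a table of timestamped indoor positions with hypothetical column names, the snippet below derives per-user speeds and a crude dwell indicator with pandas; thresholds and file names are illustrative.

```python
import numpy as np
import pandas as pd

# Hypothetical export of the indoor-positioning traces:
# columns user_id, t (timestamp), x, y (planar coordinates in metres)
df = pd.read_csv("traces.csv", parse_dates=["t"]).sort_values(["user_id", "t"])

g = df.groupby("user_id")
df["dt"] = g["t"].diff().dt.total_seconds()          # time between fixes [s]
df["step"] = np.hypot(g["x"].diff(), g["y"].diff())  # distance between fixes [m]
df["speed"] = df["step"] / df["dt"]                  # instantaneous speed [m/s]

# crude dwell detection: fixes slower than 0.2 m/s count as "staying"
df["dwelling"] = df["speed"] < 0.2
```

The two acquisition limitations named above show up directly here: sparse fixes make `dt` large and the speed estimate meaningless, and positioning noise in `x`/`y` inflates `step` even for a stationary visitor.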
Vergleich von GNSS‐Echtzeitkorrekturmodellen zur Kompensation von Spannungen im Landesnetz
Department für Geodäsie und Geoinformation, Forschungsgruppe Höhere Geodäsie, Technische Universität Wien, 2018
Betreuer: Ao.Univ.-Prof. Dipl.-Ing. Dr. Robert Weber
Kurzfassung/Abstract
When reducing gravity values or deflections of the vertical measured at the Earth's surface, a homogeneous density distribution (ρ = 2.67 g/cm³) within the Earth's crust is generally assumed. This assumption of a constant subsurface density is also applied in the program TOPOGRAV, which computes the topographic correction using the prism (Quader) method. This thesis investigates the influence of a subsurface density model on the computation of reduced deflections of the vertical (ξ and η) and gravity values with the TOPOGRAV software. The planar density model implemented by the author allows the depth and density value of a density discontinuity in the subsurface to be passed as parameters for computing the reduction. The investigations carried out in this thesis showed that including a density model appears particularly worthwhile for stations beneath massive mountain ranges and with large density contrasts. For such a station, the reduced gravity values obtained with the different computation methods (with or without density model) differ by almost 30 mGal. A considerably smaller difference (< 1″) was found for the reduced deflections of the vertical. A more detailed investigation of the influence of a density model will require tests with different terrain masks (above all, the sea).
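The prism (Quader) method rests on a closed-form expression for the attraction of a homogeneous rectangular prism. The sketch below implements the classical Nagy (1966) formula for the vertical component; the coordinate convention (bounds relative to the computation point, z positive downward) and the two-prism handling of a density jump are illustrative assumptions, not TOPOGRAV's actual code.

```python
import numpy as np

G = 6.674e-11  # gravitational constant [m^3 kg^-1 s^-2]

def prism_gz(x, y, z, rho):
    """Vertical attraction [m/s^2] of a homogeneous rectangular prism
    (Nagy 1966). x, y, z are pairs of prism bounds [m] relative to the
    computation point; z is taken positive downward (an assumption)."""
    gz = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                r = np.sqrt(x[i]**2 + y[j]**2 + z[k]**2)
                sign = (-1.0) ** (i + j + k + 1)  # alternating corner signs
                gz += sign * (x[i] * np.log(y[j] + r)
                              + y[j] * np.log(x[i] + r)
                              - z[k] * np.arctan2(x[i] * y[j], z[k] * r))
    return G * rho * gz

# A density jump at depth d is then modelled by stacking two prisms:
# rho_upper from the surface down to d, rho_lower below d (values illustrative).
gz = (prism_gz((-500, 500), (-500, 500), (0, 3000), rho=2670.0)
      + prism_gz((-500, 500), (-500, 500), (3000, 10000), rho=2900.0))
print(f"{gz / 1e-5:.2f} mGal")  # 1 mGal = 1e-5 m/s^2
```

Summing such prisms over a digital terrain model, with the density switched at the interface depth, is exactly the kind of computation the implemented density model parameterises.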
Classification of 3D Point Clouds using Deep Neural Networks
Department für Geodäsie und Geoinformation, Forschungsgruppen Photogrammetrie und Fernerkundung, Technische Universität Wien, 2018
Betreuer: Univ.-Prof. Dipl.-Ing. Dr. Norbert Pfeifer, Dipl.-Ing. Dr. Gottfried Mandlburger
Kurzfassung/Abstract
3D point clouds derived with laser scanning and other techniques are large amounts of raw data which cannot be used directly. To make sense of these data and allow the derivation of useful information, the points must be segmented into groups, units, or classes fit for the specific purpose. Since point clouds contain information about the geometric distribution of the points in space, spatial information has to be included in the classification. To assign class labels on a per-point basis, this information is usually represented by aggregating features for each point from a certain neighbourhood. Studies on the relevance of the different features that can be derived from such a neighbourhood exist, but their findings depend strongly on the specific case at hand. This thesis aims to overcome this difficulty by implementing a Deep Neural Network (DNN) that automatically optimises the features to be calculated. After an introduction to the state-of-the-art methods in both point cloud classification and neural networks, this novel approach is presented in detail. Three datasets were investigated: an airborne laser scan (ALS) of a large area (Vorarlberg, ca. 2,700 km²), a UAV-based scan (ULS) of a forest (Großgöttfritz) with a very high point density, and a benchmark dataset by the ISPRS (Vaihingen/Enz, 3D Semantic Labelling Contest). The transfer of models between these datasets showed that point distribution patterns and point densities had a large influence on the result; however, using a pre-trained model on a new dataset greatly accelerated convergence. For the Vorarlberg dataset, the achieved overall accuracy with respect to the reference classification was 82.2%, with a maximum of 95.8% in urban areas. The accuracy showed a strong spatial correlation, especially with land cover, suggesting the use of different models for different land covers. On the ISPRS benchmark dataset, the presented method achieved an overall accuracy of 80.6%, comparable to other methods in the benchmark. Tiling the input dataset into chunks for processing was shown to influence the classification result, especially in areas where the classification was incorrect. A per-class probability for each point was additionally obtained in the classification process and may be used in further processing steps, e.g. as a priori weights in DTM generation. Future applications of the method include tasks such as tree stem or deadwood detection in forests. Especially with a growing number of attributes, the approach significantly reduces the input required from the operator (i.e. the selection of features). The method can also be extended to more dimensions, such as time, which would allow the classification of multi-temporal data, including change detection and displacement monitoring.
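For context, the hand-crafted neighbourhood features that such a DNN is designed to make obsolete typically look like the sketch below: eigenvalue-based linearity, planarity, and sphericity aggregated over the k nearest neighbours of each point (the choice k = 20 is illustrative, not from the thesis).

```python
import numpy as np
from scipy.spatial import cKDTree

def eigen_features(points: np.ndarray, k: int = 20) -> np.ndarray:
    """Per-point linearity, planarity and sphericity from the covariance
    eigenvalues of the k-nearest-neighbour neighbourhood (points: N x 3)."""
    tree = cKDTree(points)
    _, idx = tree.query(points, k=k)
    feats = np.empty((len(points), 3))
    for i, nb in enumerate(idx):
        cov = np.cov(points[nb].T)
        lam3, lam2, lam1 = np.linalg.eigvalsh(cov)  # ascending: l3 <= l2 <= l1
        lam1 = max(lam1, 1e-12)                     # guard against degenerate cases
        feats[i] = ((lam1 - lam2) / lam1,           # linearity
                    (lam2 - lam3) / lam1,           # planarity
                    lam3 / lam1)                    # sphericity
    return feats
```

A classifier then consumes such features; the thesis's point is that a network can learn this kind of aggregation itself instead of relying on a fixed, manually selected catalogue.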