Virtual Reality GIS Platform
The ViRGIS project schema is a custom JSON schema developed based upon components of the following standards and pseudo-standards:
The full definition is shown in the Scripting Reference.
| Layer Type | Feature Types | Formats | Feature Contents |
|---|---|---|---|
| Vector | Point, MultiPoint | Any GDAL/OGR | Datapoint GO |
| Vector | LineString, MultiLineString | Any GDAL/OGR | Vertex GOs and Line Segment GOs |
| Vector | Polygon, MultiPolygon | Any GDAL/OGR | Vertex GOs, Line Segment GOs and a Mesh body |
| Raster | Raster Data Cloud | Any GDAL | Particle System |
| Mesh | Graphic Mesh | OBJ, OFF, STL, 3DS | Mesh |
| MDAL | Data Mesh | Any MDAL | Mesh |
| PointCloud | Point Cloud | Any PDAL | Particle System |
The Raster formats supported are detailed here.
The Vector formats supported are detailed here.
The Point Cloud formats supported are detailed here.
The MDAL mesh formats supported are detailed here.
ViRGiS v2 is implemented using the following Data Abstraction Libraries:
Vector features are loaded using the GDAL/OGR library, Raster features using the GDAL library, Point Clouds using the PDAL library (rendered as a Particle System) and data-carrying Meshes and Graphs using the MDAL library. Purely graphic meshes (e.g. Wavefront .obj files) are loaded using g3 (geometry3Sharp).
The core of the GIS system is the Georeference framework that provides the basis for mapping from Real-World coordinates to VR-World coordinates in a zoomable map.
A special CRS is created by the system for each model. This is a Transverse Mercator projection with:
- the central meridian set to the longitude of the model origin (as defined in the project.json project configuration file),
- the Latitude of Origin to the latitude of the model origin,
- using the WGS84 Ellipsoid and
- meters as the unit.
This gives a model space in Map Local Space (i.e. in the Map Gameobject - which is only different from the World Space by the zoom transformation) that is a flat 3D space where one Unity unit corresponds to one real-world meter and the origin (0,0,0) of the space is the model origin, as per project.json, on the WGS84 ellipsoid.
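The zoom relationship between Map Local Space and World Space can be sketched as a uniform scale about the Map origin. This is an illustrative simplification (a real Map GameObject transform would also carry rotation and translation), and the function names are hypothetical:

```python
def map_local_to_world(p_local, zoom, map_origin=(0.0, 0.0, 0.0)):
    """Map Local Space -> World Space: a uniform zoom about the Map origin.

    At zoom == 1.0, one Unity unit in World Space corresponds to one
    real-world meter in Map Local Space. Illustrative only.
    """
    return tuple(o + zoom * c for o, c in zip(map_origin, p_local))


def world_to_map_local(p_world, zoom, map_origin=(0.0, 0.0, 0.0)):
    """Inverse transform: World Space back to Map Local Space."""
    return tuple((c - o) / zoom for o, c in zip(map_origin, p_world))
```

At a zoom factor of 2.0, a feature one meter from the model origin appears two Unity units from the Map origin.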
Proj is then used to reproject all data into this custom CRS, and also to transform the axes from Z-up (the geospatial convention) to Y-up (the Unity convention).
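ViRGiS delegates the actual reprojection to Proj on the WGS84 ellipsoid. Purely as an illustration of the mapping, a simplified spherical Transverse Mercator centred on the model origin, followed by the Z-up to Y-up axis swap, might look like this (a sketch, not the real pipeline):

```python
import math

R = 6378137.0  # WGS84 semi-major axis in meters (sphere approximation)

def to_model_space(lon_deg, lat_deg, h, origin_lon_deg, origin_lat_deg):
    """Map WGS84 lon/lat/height to approximate model-local meters.

    Spherical Transverse Mercator centred on the model origin, then a
    Z-up -> Y-up axis swap for the engine. Illustration only: the real
    system uses Proj with the full WGS84 ellipsoid.
    """
    lam = math.radians(lon_deg - origin_lon_deg)   # from central meridian
    phi = math.radians(lat_deg)
    phi0 = math.radians(origin_lat_deg)            # latitude of origin

    b = math.cos(phi) * math.sin(lam)
    easting = R * math.atanh(b)
    northing = R * (math.atan2(math.tan(phi), math.cos(lam)) - phi0)
    # Geospatial (east, north, up) -> engine (x, y=up, z=north)
    return (easting, h, northing)
```

By construction, the model origin itself maps to (0, 0, 0), and one unit in the result corresponds to roughly one real-world meter near the origin.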
GDAL includes a number of access methods to load files, including HTTPS and FTP.
In particular, this means that the libraries natively support WFS3 access to Vector data and WMS/WCS access to Raster data.
Point Data and Mesh Data must be loaded from the local file system.
Use of the Mapbox layer is not obligatory.
ViRGiS App V2 includes an implementation of geometry3Sharp to provide comprehensive and advanced 3D geometry tools.
As well as the usual geometry tools, this library has a comprehensive set of tools for 3D Mesh manipulation.
A key part of the integration of this toolkit into Unity and Mapbox is the marshalling of multiple data types across the three libraries. See Appendix 2 for details.
In ViRGiS v2, the entity model maintains a basic event model using messaging up and down the entity tree. When an entity triggers an event (the triggering entity is usually, but not necessarily, a leaf node), it uses SendMessage to send the event to all of its ancestors. When an entity receives an event, it can use BroadcastMessage to broadcast the event to all of its descendants, based upon the current state of the entity.
This relatively simple model preserves the entity structure in the event structure without needing any further configuration. The basic idea can be shown by the Selected event that is triggered on a component by the UI. This event is propagated up so that the Feature and Layer that the component is part of know that the event has occurred. If the feature is a line and is in blockEdit mode, it will broadcast the Selected event to all of the components of the line.
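Unity's messaging calls only run inside the engine, but the up/down propagation pattern itself is simple enough to sketch standalone. The class and method names below are illustrative, not the actual ViRGiS types:

```python
class Entity:
    """Minimal stand-in for a ViRGiS entity (illustrative only)."""

    def __init__(self, name, parent=None):
        self.name = name
        self.parent = parent
        self.children = []
        self.received = []          # events this entity has seen
        if parent:
            parent.children.append(self)

    def send_up(self, event):
        """Send an event to this entity and all of its ancestors."""
        self.received.append(event)
        if self.parent:
            self.parent.send_up(event)

    def broadcast_down(self, event):
        """Broadcast an event to this entity and all of its descendants."""
        self.received.append(event)
        for child in self.children:
            child.broadcast_down(event)

# A layer containing a line feature with two vertex components.
layer = Entity("layer")
line = Entity("line", parent=layer)
v1 = Entity("vertex-1", parent=line)
v2 = Entity("vertex-2", parent=line)

# The UI selects one vertex: the event travels up to feature and layer...
v1.send_up("Selected")
# ...and, if the line is in blockEdit mode, back down to every component.
line.broadcast_down("Selected")
```

After this runs, the layer has seen the event once (from the upward pass), and every component of the line has seen it via the downward broadcast.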
The entity model includes the following prototypes:
- IVirgisEntity. Covers all entities in the Virgis Entity Model
- IVirgisFeature. Covers all visible features and parts of Features (like lines)
- IVirgisLayer. Covers all GIS Layers in the model. Note that a GIS Layer is not the same thing as a Unity Layer. GIS layers are collections of like data. Unity Layers are used by Unity to categorise GameObjects. There is no connection between these concepts.
In V2, Layers can be hierarchical.
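One way to picture how these prototypes relate is the sketch below, written as Python abstract base classes. The member names are hypothetical; the actual C# interfaces are defined in the Scripting Reference:

```python
from abc import ABC, abstractmethod

class IVirgisEntity(ABC):
    """Covers all entities in the Virgis Entity Model."""
    @abstractmethod
    def on_event(self, event): ...

class IVirgisFeature(IVirgisEntity, ABC):
    """Covers all visible features and parts of features (like lines)."""

class IVirgisLayer(IVirgisEntity, ABC):
    """Covers all GIS Layers: collections of like data. In v2, layers
    can be hierarchical, so a layer may hold sub-layers as well."""
    def __init__(self):
        self.sub_layers = []   # hierarchical layers (v2)
        self.features = []

# Hypothetical concrete types, for illustration only.
class PointFeature(IVirgisFeature):
    def on_event(self, event):
        return f"feature saw {event}"

class VectorLayer(IVirgisLayer):
    def on_event(self, event):
        return f"layer saw {event}"
```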
There is also an event system called AppState that is used to communicate changes in the application state. This is explained further in the Scripting Reference.
The Overall VR Interface architecture is described in this article.
- This is NOT a game. This is an environment the user is exploring. The User’s representation in the space is not a character. It does not need a body or (much) physics (although a bit of physics about how the user moves always makes it easier on our brains). The User’s representation in the space is a camera or probe that we use to explore the space.
- The User is NOT the centre of the space. The data is the centre of the space. This is different from many mapping applications on the web and in games - where the space reveals itself almost infinitely as the User moves. This is GIS and in GIS - the data has bounds (called the “extent” of the data). We will use that paradigm. We move around in that extent.
- We have two eyes and two hands. Therefore, the User Avatar has two cameras and two representations of hands. These are provided by the XR Rig (currently OVR XR Rig). The Game Space is set up with a putative scale of one real-world meter to one Unity unit, but there is also internal scaling (i.e. zoom factors) in the map that changes this. The space can be zoomed in-game.
- Real users are going to demand the ability to look from multiple viewpoints without the effort of moving, e.g. “in close” to change things, a “wide outside” view to get the overview, or alternate angles to understand parallax effects.
- The hands are used to control things.
- Select and manipulate,
- Move the character in the model,
- Move and scale the model,
- Change the state of the model and of the avatar.
- From Guideline 1, all avatars are First Person view only.
- Guideline 4 suggests an approach of having a main avatar and one or more drones to get alternate views.
- From Guideline 5:
- We use a ray pointer interactor on the left hand to allow the selection and manipulation of data entities,
- We use the controls on the right hand to move and control the avatar and manipulate entities,
- We use the controls on the left hand to zoom and rotate the model, and
- There is an interactive menu attached to the left hand to allow the application state to be changed.
Movement of the avatar is by using a 3D “jet-pack” analog.
This concept is common in GIS software. The basic concept is that changes to geospatial data are complicated and have many interdependencies - which means that a simple “undo” function can get you into more problems than solutions! Therefore, the usual concept is, effectively, to create a simple type of “checkpoint” at the start of an Edit Session and to always be able to get back to the “checkpoint”. The “checkpoint” is always what is currently saved in the data file.
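The checkpoint idea can be sketched as follows. This is the generic pattern, not the actual ViRGiS implementation, and all names are illustrative:

```python
import copy

class EditSession:
    """Checkpoint-style editing: the only 'undo' is reverting the whole
    session to the state last saved to the data file."""

    def __init__(self, saved_data):
        self._saved = saved_data          # what is on disk: the checkpoint
        self._working = None

    def start(self):
        """Begin an Edit Session on a copy of the checkpoint."""
        self._working = copy.deepcopy(self._saved)
        return self._working

    def save(self):
        """Commit the session: working state becomes the new checkpoint."""
        self._saved = copy.deepcopy(self._working)
        self._working = None

    def discard(self):
        """Abandon the session: fall back to the checkpoint."""
        self._working = None

    @property
    def data(self):
        return self._working if self._working is not None else self._saved

session = EditSession({"features": [1, 2, 3]})
working = session.start()
working["features"].append(4)   # edits with many interdependencies...
session.discard()               # ...can always be rolled back wholesale
```

Because the checkpoint is whatever is currently saved in the data file, discarding the session restores exactly that state, with no per-operation undo stack to maintain.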
1 JSON.NET is a C# library and is available under an MIT Open Source Licence. ⮐
2 GeoJSON.NET is a C# library and is available under an MIT Open Source Licence. ⮐