Virtual Reality GIS Platform
The ViRGIS project schema is a custom JSON schema developed based upon components of the following standards and pseudo-standards:
The full definition is shown in the Scripting Reference.
ViRGIS App v1 will support the following types:
| Layer Type | Feature Types | Formats | Feature Contents |
|---|---|---|---|
| Point | Point, MultiPoint | GeoJSON | Datapoint GO |
| Line | LineString, MultiLineString | GeoJSON | Vertex GOs and Line Segment GOs |
| Polygon | Polygon, MultiPolygon | GeoJSON | Vertex GOs, Line Segment GOs and a Mesh body |
| Mesh | Mesh | OBJ, OFF, STL, 3DS | Mesh |
| PointCloud | Point Cloud | PLY | Particle System |
In ViRGIS App V1, data ingestion will be performed by a set of dedicated IO drivers. For the JSON-based formats, these are based upon the Newtonsoft JSON.NET library, using the GeoJSON.NET type definitions to address GeoJSON objects. For the mesh-based formats, the geometry3Sharp library is used to create meshes from the raw data, and for Point Clouds a custom script reads the PLY file and creates a Particle System.
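As an illustration of the GeoJSON side of this pipeline, the sketch below deserializes a GeoJSON document into GeoJSON.NET types using JSON.NET. The `JsonConvert` and `FeatureCollection` names are from those two libraries; the wrapping `GeoJsonReader` class is hypothetical, not part of the ViRGIS driver API.

```csharp
using Newtonsoft.Json;          // JSON.NET
using GeoJSON.Net.Feature;      // GeoJSON.NET type definitions

// Hypothetical reader sketch: turn raw GeoJSON text into typed features.
public static class GeoJsonReader
{
    public static FeatureCollection Load(string json)
    {
        // GeoJSON.NET types carry the converters needed by JSON.NET,
        // so a plain DeserializeObject call is sufficient.
        return JsonConvert.DeserializeObject<FeatureCollection>(json);
    }
}
```

Each `Feature` in the resulting collection carries its geometry and properties, which a driver can then map onto the GameObjects listed in the table above.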
TODO: add PCX.
The core of the GIS system is the Georeference framework that provides the basis for mapping from Real-World coordinates to VR-World coordinates in a zoomable map.
As well as the usual geometry tools, this library has a comprehensive set of tools for 3D Mesh manipulation.
A key part of the integration of this toolkit into Unity and Mapbox is the marshalling of multiple data types across the three libraries. See Appendix 2 for details.
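The core mapping idea can be sketched as a simple transform: a map origin in real-world coordinates plus a zoom factor converts real-world metres to VR-world (Unity) units, and back. This is an illustrative sketch only; the class and member names are hypothetical and the actual Georeference framework handles considerably more (projections, marshalling across libraries, etc.).

```csharp
using UnityEngine;

// Hypothetical sketch of the zoomable-map idea: origin + zoom factor.
public class MapTransform
{
    public Vector3 MapOrigin;    // real-world position mapped to the VR-world origin
    public float Zoom = 1.0f;    // internal zoom factor (1 m real-world = 1 Unity unit at Zoom = 1)

    // Real-world metres -> VR-world (Unity) units.
    public Vector3 WorldToVr(Vector3 realWorld)
    {
        return (realWorld - MapOrigin) * Zoom;
    }

    // VR-world units -> real-world metres (the inverse mapping).
    public Vector3 VrToWorld(Vector3 vr)
    {
        return vr / Zoom + MapOrigin;
    }
}
```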
In V1, the entity model maintains a basic event model using messaging up and down the entity tree. When an entity triggers an event (usually, but not necessarily, a leaf node), that entity uses SendMessage to send the event to all its ancestors. When an entity receives an event, it can use BroadcastMessage to broadcast the event to all descendants, based upon the current state of the entity.
This relatively simple model preserves the entity structure in the event structure without needing any further configuration. The basic idea can be shown by the `Selected` event that is triggered on a component by the UI. This event is propagated up so that the Feature and Layer that this component is part of know that the event has occurred. If the feature is a line and is in `blockEdit` mode, it will broadcast the `Selected` event to all of the components of the line.
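The `Selected` example above can be sketched with Unity's built-in messaging, assuming `SendMessageUpwards` for the upward leg and `BroadcastMessage` for the downward leg. The component names, the `BlockEdit` field, and the re-entrancy guard are illustrative, not the actual ViRGIS API.

```csharp
using UnityEngine;

// A vertex of a line: the leaf node where the UI triggers the event.
public class VertexComponent : MonoBehaviour
{
    public void OnUserSelect()
    {
        // Propagate up the entity tree so the Feature and Layer are informed.
        SendMessageUpwards("Selected", gameObject,
                           SendMessageOptions.DontRequireReceiver);
    }
}

// A line feature: may fan the event back out to its components.
public class LineFeature : MonoBehaviour
{
    public bool BlockEdit;        // current edit mode of this feature
    private bool _propagating;    // guard: BroadcastMessage also hits this object

    public void Selected(GameObject source)
    {
        if (_propagating || !BlockEdit) return;
        _propagating = true;
        // Broadcast down so every vertex and segment of the line reacts.
        BroadcastMessage("Selected", source,
                         SendMessageOptions.DontRequireReceiver);
        _propagating = false;
    }
}
```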
The entity model contains the following prototypes:
- IVirgisEntity. Covers all entities in the Virgis Entity Model
- IVirgisFeature. Covers all visible features and parts of Features (like lines)
- IVirgisLayer. Covers all GIS Layers in the model. Note that a GIS Layer is not the same thing as a Unity Layer: GIS Layers are collections of like data, whereas Unity Layers are used by Unity to categorise GameObjects. There is no connection between the two concepts.
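A minimal sketch of how these three prototypes might relate is shown below. The member names are hypothetical placeholders; the real interfaces, documented in the Scripting Reference, carry more members than this.

```csharp
using UnityEngine;

// Hypothetical shape of the prototype hierarchy, for illustration only.

// Covers all entities in the Virgis Entity Model.
public interface IVirgisEntity
{
    void Selected(GameObject source);   // entry point for the event model
}

// Covers all visible features and parts of features (like lines).
public interface IVirgisFeature : IVirgisEntity
{
    void MoveTo(Vector3 newPosition);   // geometry edit on the feature
}

// Covers all GIS Layers (collections of like data).
public interface IVirgisLayer : IVirgisEntity
{
    void Save();                        // persist the layer's data
}
```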
There is also an event system called AppState that is used to communicate changes in the application state. This is explained further in the Scripting Reference.
The Overall VR Interface architecture is described in this article
- This is NOT a game. This is an environment the user is exploring. The User’s representation in the space is not a character. It does not need a body or (much) physics (although a bit of physics about how the user moves always makes it easier on our brains). The User’s representation in the space is a camera or probe that we use to explore the space.
- The User is NOT the centre of the space. The data is the centre of the space. This is different from many mapping applications on the web and in games - where the space reveals itself almost infinitely as the User moves. This is GIS and in GIS - the data has bounds (called the “extent” of the data). We will use that paradigm. We move around in that extent.
- We have two eyes and two hands. Therefore, the User Avatar has two cameras and two representations of hands. These are provided by the XR Rig (currently the OVR XR Rig). The Game Space is set up with a putative scale of 1 m in the real world to one Unity unit, but there is also internal scaling (i.e. zoom factors) in the map that changes this. The space can be zoomed in-game.
- Real users are going to demand the ability to look from multiple viewpoints without the effort of moving: e.g. “in close” to change things, a “wide outside” view to get the overview, or alternate angles to understand parallax effects.
- The hands are used to control things.
- Select and manipulate,
- Move the character in the model,
- Move and scale the model,
- Change the state of the model and of the avatar.
- From Guideline 1, all avatars are First Person view only.
- Guideline 4 suggests that an approach will be to have a main avatar and one or more drones to get alternate views.
- From Guideline 5:
- We use a ray pointer interactor on the left hand to allow the selection and manipulation of data entities,
- We use the controls on the right hand to move and control the avatar and manipulate entities,
- We use the controls on the left hand to zoom and rotate the model, and
- There is an interactive menu attached to the left hand to allow the application state to be changed.
Movement of the avatar is by using a 3D “jet-pack” analogue.
This concept is common in GIS software. The basic concept is that changes to geospatial data are complicated and have many interdependencies - which means that a simple “undo” function can get you into more problems than solutions! Therefore, the usual concept is, effectively, to create a simple type of “checkpoint” at the start of an Edit Session and to always be able to get back to the “checkpoint”. The “checkpoint” is always what is currently saved in the data file.
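The checkpoint idea above can be sketched as follows: the data file on disk is the checkpoint, so ending an Edit Session either writes the in-memory state back (moving the checkpoint forward) or reloads the file (discarding every edit since the session started). The class and member names are hypothetical, not the ViRGIS implementation.

```csharp
using System.IO;

// Hypothetical sketch of the Edit Session checkpoint pattern.
public class EditSession
{
    private readonly string _path;   // the data file IS the checkpoint

    public EditSession(string path) => _path = path;

    // Commit: save the edited state, which becomes the new checkpoint.
    public void Commit(string serializedLayer) =>
        File.WriteAllText(_path, serializedLayer);

    // Rollback: discard all edits by reloading whatever is on disk.
    public string Rollback() =>
        File.ReadAllText(_path);
}
```

Because there is only ever one checkpoint per file, there is no undo stack to keep consistent across interdependent features, which is exactly the complexity the pattern avoids.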
1. JSON.NET is a C# library and is available under an MIT Open Source Licence.
2. GeoJSON.NET is a C# library and is available under an MIT Open Source Licence.
3. The Mapbox Unity SDK is a C# library that is available under an MIT Open Source Licence.
4. geometry3Sharp is a C# library that is available under a Boost Open Source Licence.