Page 40 - Fister jr., Iztok, and Andrej Brodnik (eds.). StuCoSReC. Proceedings of the 2018 5th Student Computer Science Research Conference. Koper: University of Primorska Press, 2018
2. METHODOLOGY

The rendering procedure presented in this paper consists of the following steps. First, the visible tiles are calculated. Then, the vertices and their UV coordinates are calculated, and the vertex data is sent to the graphics card for drawing. Finally, the vertex, geometry, and fragment shaders finalize the calculations, and the result is drawn to the screen. These steps are presented in more detail in the following subsections.

2.1 Vertex generation

The first step in the 3D graphics pipeline is the generation of vertices. In many applications that use 3D graphics, modelling software is used to generate the model, already complete with vertices, UV coordinates, and normals of the object. However, such models are not suitable for high-resolution realistic display applications, for three reasons:

• A model file with a sufficient number of vertices for a realistic display of the highest quality would be extremely large,

• Rendering too many vertices results in a very low FPS (frames per second) on mediocre graphics cards, due to insufficient graphics memory or shader processing cores,

• Older versions of shader programs do not support 64-bit precision, which is required for rendering the most detailed views of the Earth.

Because our goal was compatibility with some of the older versions of the shading language, 64-bit precision could not be used in shader programs. Therefore, the presented application uses dynamic vertex generation, which also means that the precision-critical data cannot be processed by shader programs on the GPU. Because of that, the FPS in our application is somewhat lower than it could have been in an ideal setting.

For the vertex generation, a ray casting algorithm is used. The screen is divided into up to 121 equally spaced points, and from each of those points a ray is sent out. This produces a grid on the screen with up to 10 rows and columns. The rays’ intersections with the sphere representing the Earth are then calculated. Given a ray r = (rx, ry, rz), a sphere with radius R, the camera positioned at c = (cx, cy, cz), and the dot product dt between the ray direction and the camera position vector, the closest intersection p between the ray and the sphere is calculated by Equation 1.

    p = c − (dt + √(dt² + R² − |c|²)) r.    (1)

Note that the expression under the square root is negative when the ray and the sphere do not intersect. In that case, the ray’s direction is adjusted slightly, so that it touches the sphere. The tangent point, t, is calculated using Equation 4, with intermediate vectors a (Equation 2) and b (Equation 3):

    a = c − dt r,    (2)

    b = (R / |a|) a,    (3)

    t = c + √(|c|² − R²) (b − c) / |b − c|.    (4)

Finally, the coordinates of the points are normalized.

2.2 UV coordinate calculation

The UV coordinates for texturing are calculated from the vertex coordinates. If the map tiles use the Mercator projection, which distorts the image in the direction of the vertical axis (the V coordinate), that distortion has to be reversed first [10]. The U coordinate is calculated simply from the normalized vertex position, p = (x, y, z), as in Equation 5.

    U = arc tan(x / z) / (2π) + 0.5.    (5)

With the help of the constant M (Equation 7), the V coordinate is then calculated using Equation 6:

    V = arc sinh[tan(arc sin(y))] / (2 arc sinh[tan(M)]) + 0.5,    (6)

    M = arc tan[sinh(π)].    (7)

Because shader programs use 32-bit precision, an additional step is required when calculating the UV coordinates. When rendering a close-up view, only a very small part of the Earth is visible, which means the difference between the minimum and maximum UV coordinates is very small too. Because of that, the UV coordinates can be normalized to the visible part of the Earth. The minimum and maximum visible row and column of tiles were already calculated before the vertex generation step. Instead of having the V value 0 at the south pole and 1 at the north pole, we can adjust it to be 0 at the bottom of the bottommost visible row and 1 at the top of the topmost visible row. The U coordinate is likewise mapped from 0 at the left of the leftmost visible column to 1 at the right of the rightmost visible column. That way, the precision errors in shader programs are kept in check when rendering high-resolution close-up views of the Earth.

2.3 Vertex shader

The vertex shader program’s only task is to transform the vertex position from world space into screen space by multiplying it with the standard model-view-projection matrix. In case the model vertex buffer is used instead of the dynamically calculated vertex buffer, the vertex shader also remaps the UV coordinates to adapt them to the Mercator projection, using Equation 6 and the constant M from Equation 7.
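The ray casting step of Section 2.1 can be sketched as follows. This is a minimal illustration of Equations 1–4, not the authors’ implementation: the function name is ours, the ray direction is assumed to be unit length, and the camera is assumed to lie outside the sphere (as it does when viewing the Earth).

```python
import math

def ray_sphere_point(c, r, R):
    """Closest ray-sphere intersection (Eq. 1), or an approximate
    tangent point (Eqs. 2-4) when the ray misses the sphere.
    c: camera position, r: unit ray direction, R: sphere radius."""
    dt = sum(ci * ri for ci, ri in zip(c, r))       # dot(r, c)
    c2 = sum(ci * ci for ci in c)                   # |c|^2
    disc = dt * dt + R * R - c2                     # term under the square root
    if disc >= 0.0:
        # Ray hits the sphere: take the closer of the two roots (Eq. 1).
        s = -dt - math.sqrt(disc)
        return tuple(ci + s * ri for ci, ri in zip(c, r))
    # Ray misses: nudge its direction so it touches the sphere.
    a = tuple(ci - dt * ri for ci, ri in zip(c, r))  # ray point closest to centre (Eq. 2)
    na = math.sqrt(sum(ai * ai for ai in a))
    b = tuple(R * ai / na for ai in a)               # a scaled onto the sphere (Eq. 3)
    bc = tuple(bi - ci for bi, ci in zip(b, c))
    nbc = math.sqrt(sum(v * v for v in bc))
    L = math.sqrt(c2 - R * R)                        # tangent distance from the camera
    return tuple(ci + L * v / nbc for ci, v in zip(c, bc))   # Eq. 4
```

For a near miss the returned point lies very close to the sphere’s surface, which is all the adjustment needs to achieve; the error grows as the ray points further away from the sphere.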

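The UV calculation of Section 2.2 can be sketched in the same way. This is a minimal illustration of Equations 5–7 plus the visible-range normalization; the function names are ours, and atan2 is used so that the longitude formula of Equation 5 covers all quadrants.

```python
import math

M = math.atan(math.sinh(math.pi))   # maximum Mercator latitude (Eq. 7), ~1.4844 rad

def uv_from_vertex(p):
    """Map a normalized vertex p = (x, y, z) on the unit sphere
    to global UV coordinates (Eqs. 5 and 6)."""
    x, y, z = p
    u = math.atan2(x, z) / (2.0 * math.pi) + 0.5     # Eq. 5
    # Reverse the Mercator distortion along the V axis (Eq. 6).
    # Note that arc sinh(tan(M)) = pi, so the denominator equals 2*pi.
    v = math.asinh(math.tan(math.asin(y))) / (2.0 * math.asinh(math.tan(M))) + 0.5
    return u, v

def normalize_to_visible(u, v, u_min, u_max, v_min, v_max):
    """Remap global UVs to the visible tile range, so that 32-bit
    shader precision covers only the visible part of the Earth."""
    return (u - u_min) / (u_max - u_min), (v - v_min) / (v_max - v_min)
```

A point on the equator facing the camera, p = (0, 0, 1), maps to the centre of the texture, (U, V) = (0.5, 0.5), while y = sin(M) reaches the top edge of the Mercator map, V = 1.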
Ljubljana, Slovenia, 9 October