Fusion of Optical Flow and Dead Reckoning Algorithms for UAV Navigation Without GPS

J. Walczak, B. Szykuła, P. Targowski & T. Pałys
Military University of Technology, Warsaw, Poland

The International Journal on Marine Navigation and Safety of Sea Transportation (http://www.transnav.eu), Volume 19, Number 4, December 2025. DOI: 10.12716/1001.19.04.02

ABSTRACT: The article presents an innovative navigation system for unmanned aerial vehicles (UAVs) operating in environments with limited or unavailable GPS signal. The developed solution integrates dead reckoning navigation methods with advanced computer vision algorithms utilizing optical flow analysis, enabling precise drone positioning. The research includes implementation of data fusion algorithms from inertial measurement units (IMU) and analysis of image sequences from a downward-facing camera. Experimental results from tests conducted in both simulated environments and on real flight platforms are presented. The study demonstrates that the application of optical flow techniques significantly improves dead reckoning navigation accuracy in GPS-denied conditions, reducing the accumulating position error. The proposed system offers a promising solution for UAV operations in challenging navigation conditions, such as urban areas, indoor environments, or regions with electromagnetic interference.

1 INTRODUCTION
In recent years, there has been a dynamic development
of Unmanned Aerial Vehicles (UAVs), which have
found wide application in areas such as environmental
monitoring, infrastructure inspection, search and
rescue operations, and precision agriculture. A key
component enabling the autonomous operation of
UAVs is the navigation system, which allows for the
determination of the vehicle’s current position and
orientation in space. In most contemporary UAV
systems, the Global Positioning System (GPS) is
utilized due to its availability, accuracy, and ease of
integration with other control system components [1].
However, in many operational scenarios, such as missions conducted in dense forests, narrow urban streets, tunnels, indoor environments, or under conditions of electromagnetic interference (e.g., due to intentional GPS jamming), the availability of satellite
signals is limited or entirely absent [2]. In such cases,
the development of alternative localization methods
becomes essential to maintain the navigational
functionality of UAVs in GPS-denied environments.
Figure 1. GPS signal interference in the vicinity of the
Kaliningrad region (source: https://gpsjam.org)
One alternative approach involves the use of optical
flow, which enables the estimation of the UAV's
relative motion with respect to its surroundings based
on the analysis of image sequences captured by
onboard cameras [3]. Another approach is inertial
navigation based on dead reckoning, which estimates
the current position by integrating data from
accelerometers and gyroscopes without relying on
external signals [4]. Both approaches, however, have
inherent limitations: optical flow is susceptible to
errors in homogeneous or poorly textured
environments, while dead reckoning suffers from
cumulative position drift over time.
The aim of this study is to propose and evaluate a
fusion method combining optical flow and dead
reckoning navigation to improve the accuracy and
robustness of UAV navigation in GPS-denied
environments. Integrating these two approaches
allows for mutual compensation of their respective
weaknesses, resulting in a more stable and accurate
position estimation. The following sections present a
review of related work on GPS-independent
localization methods, a description of the implemented
algorithms, and the results of simulation-based and
field experiments.
2 SURVEY OF ALTERNATIVE UAV NAVIGATION
APPROACHES
In response to the growing need for reliable UAV
navigation in environments where access to GPS
signals is limited or entirely unavailable, several
alternative localization methods have been proposed
in the literature. Particular attention has been given to
systems based on the fusion of data from optical and
inertial sources, such as cameras and Inertial
Measurement Units (IMUs). This chapter presents
previous approaches that utilize optical flow and dead
reckoning independently, as well as studies that
integrate both techniques within GPS-independent
localization systems.
2.1 Optical Flow in UAV Navigation
Optical flow is a technique for estimating a vector field
that describes the motion of image points between
consecutive frames. It is commonly used in mobile
robotics to estimate both velocity and direction of
movement within an environment. Classical
algorithms such as the Horn-Schunck and Lucas-Kanade methods enable accurate optical flow
estimation in well-textured scenes [5].
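For illustration only, the snippet below obtains a sparse optical-flow estimate with the pyramidal Lucas-Kanade implementation available in OpenCV; the file names and tracker parameters are placeholders, and this is not the method used by the system described in Section 3, which relies on Farnebäck's dense algorithm.

```python
import cv2
import numpy as np

# Illustrative sketch: sparse Lucas-Kanade optical flow between two frames.
# File names are placeholders for two consecutive grayscale frames.
prev_gray = cv2.imread("frame_t.png", cv2.IMREAD_GRAYSCALE)
next_gray = cv2.imread("frame_t_plus_1.png", cv2.IMREAD_GRAYSCALE)

# Detect well-textured corners to track between the two frames.
prev_pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=200,
                                   qualityLevel=0.01, minDistance=7)

# Track the corners into the next frame; `status` marks successful matches.
next_pts, status, _err = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray,
                                                  prev_pts, None)

ok = status.ravel() == 1
flow = (next_pts - prev_pts)[ok]               # per-feature displacement in pixels
print("median displacement [px]:", np.median(np.linalg.norm(flow, axis=-1)))
```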
In the context of UAVs, Santos-Victor and Sandini
demonstrated that optical flow can be effectively
employed for docking manoeuvres and navigation
within enclosed spaces [6]. More recent studies, such as
those by Kendall and Cipolla, have applied deep
learning techniques to improve flow estimation
accuracy under challenging lighting conditions and in
environments with low texture [7].
Although optical flow provides valuable
information about relative motion, its accuracy is
highly dependent on image quality, and it may result
in significant errors in homogeneous or poorly lit
environments. Therefore, it is often combined with
other sources of information to enhance robustness and
reliability.
2.2 Inertial Navigation and the Dead Reckoning Method
Dead reckoning is a method for estimating the current
position based on the previous state and information about
the traversed path, typically obtained from inertial sensors
such as accelerometers and gyroscopes. These solutions are
characterized by high-frequency measurements and
independence from external sources of information [8].
However, the primary limitation of this method lies in the accumulation of error over time, owing to its recursive nature: the current state is inferred solely from the last known state.
Classical inertial navigation systems (INS) have
been thoroughly described by Titterton and Weston,
who emphasized the necessity of incorporating
auxiliary data sources to compensate for inertial drift
[9]. In the case of UAVs, dead reckoning remains
effective only for short durations without correction
from external positioning systems.
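As a minimal illustration of why this error accumulates, the sketch below double-integrates body-frame accelerations rotated by the yaw angle into a planar navigation frame; sensor bias and noise are deliberately ignored, which is exactly what causes the unbounded drift described above. The function and variable names are our own.

```python
import numpy as np

def integrate_imu(acc_body, yaw, dt):
    """Naive planar dead reckoning: rotate body-frame accelerations into the
    navigation frame using the yaw angle and double-integrate them.
    Any uncorrected sensor bias enters the position quadratically in time."""
    v = np.zeros(2)   # velocity in the navigation frame [m/s]
    p = np.zeros(2)   # position in the navigation frame [m]
    for (ax, ay), psi in zip(acc_body, yaw):
        c, s = np.cos(psi), np.sin(psi)
        a_nav = np.array([c * ax - s * ay,
                          s * ax + c * ay])    # body -> navigation rotation
        v += a_nav * dt                        # integrate acceleration once
        p += v * dt                            # integrate velocity once more
    return p, v
```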
2.3 Fusion of Optical Flow and Dead Reckoning
One of the most promising approaches to GPS-denied
UAV localization involves the integration of visual and
inertial data, commonly referred to as Visual-Inertial
Odometry (VIO). Mourikis and Roumeliotis proposed
the Multi-State Constraint Kalman Filter (MSCKF),
which enables real-time trajectory estimation of UAVs
while accounting for measurement uncertainties [10].
Weiss et al. developed a lightweight VIO system for
micro-UAVs, capable of real-time operation with minimal computational overhead, an important consideration for onboard applications [11]. Similarly,
Badino et al. demonstrated that combining optical flow
with IMU data enhances the robustness of the system
against sudden lighting changes and temporary loss of
visual features [12].
More recent approaches leverage deep neural
networks to estimate position based on sequences of
images and inertial data. The work by Zhan et al. introduces DeepVO, a recurrent network that learns temporal dependencies between visual and inertial
observations [13]. Although such methods achieve
high accuracy, their implementation requires
substantial computational resources and access to
extensive training datasets.
A review of the literature indicates that the fusion
of optical flow and inertial data represents a promising
solution to the challenge of UAV navigation in GPS-
denied environments. Key challenges include drift
mitigation, robustness to visual disturbances, and real-
time operation on platforms with limited resources. In
the following sections, the proposed system
architecture is presented, along with the results of its
validation in both simulated and real-world
environments.
3 METHOD FOR ESTIMATING UAV POSITION
3.1 Estimating UAV Velocity Using Optical Flow
To estimate the velocity of the UAV relative to the
ground, optical flow analysis was applied to images
captured by a downward-facing onboard camera. The
method is based on the Farnebäck algorithm [14],
which allows for efficient computation of pixel
displacement vectors between two consecutive image
frames.
The approach proposed by Farnebäck [14] estimates
dense optical flow by approximating image intensity
using a quadratic function within local
neighbourhoods. It is assumed that the displacement of
points in the image between two frames results
primarily from the movement of the camera relative to
the environment. The optical flow v = (v_x, v_y) is estimated as a displacement vector for each pixel between frames I_t and I_{t+1}. The resulting flow field is decomposed into horizontal and vertical components u(x, y) and v(x, y), and the average flow magnitude is then computed as:
\bar{v} = \frac{1}{N} \sum_{x,y} \sqrt{u(x,y)^2 + v(x,y)^2}    (1)
where N is the number of pixels in the image.
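For concreteness, a short sketch of this step is given below: it computes the dense Farnebäck flow between two consecutive grayscale frames with OpenCV and reduces it to the mean magnitude of Eq. (1). The parameter values are common defaults and are not claimed to be the exact settings used in the experiments.

```python
import cv2
import numpy as np

def mean_flow_magnitude(frame_prev, frame_next):
    """Dense Farneback optical flow between two consecutive grayscale frames,
    reduced to the average flow magnitude of Eq. (1), in pixels per frame."""
    flow = cv2.calcOpticalFlowFarneback(
        frame_prev, frame_next, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    u, v = flow[..., 0], flow[..., 1]                 # horizontal / vertical components
    return float(np.mean(np.sqrt(u ** 2 + v ** 2)))  # mean over all N pixels
```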
To convert pixel displacement into real-world units (meters), known camera parameters are used: the focal length f (in millimetres), the flight altitude h, and the image diagonal length d_{px} (in pixels). If the camera's field of view corresponds to that of a pinhole camera with focal length f, the physical length of the image diagonal projected onto the ground, d_m, is given by:

d_m = 2 h \tan\left(\frac{\alpha}{2}\right)    (2)

where \alpha is the camera's diagonal field of view in radians. The final scaling factor from pixels to meters, s = d_m / d_{px}, allows the computation of the real-world velocity as:

v_{real} = s \, \bar{v} \, f_{FPS}    (3)

where \bar{v} is the mean flow magnitude from Eq. (1) and f_{FPS} is the frame rate of the video stream (frames per second).
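A minimal sketch of this pixel-to-meter conversion, under the pinhole-camera assumption of Eq. (2), could look as follows; the function name and argument list are illustrative and not part of the original implementation.

```python
import math

def flow_to_groundspeed(v_px, altitude_m, fov_diag_rad, d_px, fps):
    """Convert the mean optical-flow magnitude (pixels per frame) into
    groundspeed (m/s), following Eqs. (2)-(3)."""
    d_m = 2.0 * altitude_m * math.tan(fov_diag_rad / 2.0)  # ground diagonal, Eq. (2)
    s = d_m / d_px                                         # meters per pixel
    return s * v_px * fps                                  # Eq. (3)
```

For example, with a hypothetical 90-degree diagonal field of view and a 61 m altitude, the image diagonal covers about 122 m on the ground, so one pixel corresponds to roughly 122 / d_px meters.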
3.2 Integration of Velocity and Orientation in Position
Estimation
The integration process is based on continuously
updating the UAV’s position P(t) using its previous
state and the currently estimated velocity and flight
direction:
P(t + \Delta t) = P(t) + V(t)\,\Delta t    (4)

where V(t) is the velocity estimated from optical flow using the Farnebäck method, \Delta t is the time interval
between successive measurements, and P(t) represents
the UAV’s position in the geographic coordinate
system (latitude, longitude, altitude).
The displacement \Delta d over the interval \Delta t is computed from the current speed V(t) and the heading angle \theta(t), which can be obtained from the onboard IMU and magnetometer:

\Delta d = V(t)\,\Delta t    (5)
Figure 2. Diagram of data flow used for determining UAV
motion parameters (own work).
The new geographic position is calculated using
spherical formulas that solve the direct geodesic
problem, given a distance and a bearing:
\varphi_2 = \arcsin\left(\sin\varphi_1 \cos\frac{d}{R} + \cos\varphi_1 \sin\frac{d}{R} \cos\theta\right)    (6)

\lambda_2 = \lambda_1 + \mathrm{arctan2}\left(\sin\theta \sin\frac{d}{R} \cos\varphi_1,\; \cos\frac{d}{R} - \sin\varphi_1 \sin\varphi_2\right)    (7)
where \varphi and \lambda denote geographic latitude and longitude, respectively, \theta is the heading (bearing), d is the ground distance travelled by the UAV, and R is the Earth's radius.
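Below is a compact sketch of a single dead-reckoning update combining Eqs. (4)-(7); the mean Earth radius value and the function signature are our illustrative assumptions.

```python
import math

EARTH_RADIUS_M = 6_371_000.0  # mean Earth radius used as R in Eqs. (6)-(7)

def dead_reckoning_step(lat_deg, lon_deg, speed_ms, heading_rad, dt_s):
    """Advance the geographic position by the distance travelled at the
    current speed along the current heading (direct geodesic problem)."""
    d = speed_ms * dt_s                    # displacement over dt, Eq. (5)
    ang = d / EARTH_RADIUS_M               # angular distance d/R
    lat1, lon1 = math.radians(lat_deg), math.radians(lon_deg)
    lat2 = math.asin(math.sin(lat1) * math.cos(ang)
                     + math.cos(lat1) * math.sin(ang) * math.cos(heading_rad))  # Eq. (6)
    lon2 = lon1 + math.atan2(math.sin(heading_rad) * math.sin(ang) * math.cos(lat1),
                             math.cos(ang) - math.sin(lat1) * math.sin(lat2))   # Eq. (7)
    return math.degrees(lat2), math.degrees(lon2)
```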
The main advantage of the proposed approach lies
in its resilience to temporary GNSS signal loss. The
system maintains continuity in position estimation by
relying on motion information from independent
sources, such as a downward-facing camera and
inertial sensors. As a result, it enables navigation in
environments with limited satellite visibility, such as
urban canyons, tunnels, or indoor spaces.
Additionally, velocity estimation based on optical flow
contributes to smoother trajectory estimation,
eliminating the abrupt positional jumps typically
associated with weak or degraded GPS signals.
4 EXPERIMENTAL ENVIRONMENT
The experiments were conducted in a simulated
environment, where sensor data (camera images and
IMU measurements) were previously recorded during
real-world UAV flights and subsequently processed
locally on a computer. The objective was to evaluate
the trajectory estimation system under GNSS-denied
conditions.
The block diagram in Figure 3 illustrates the overall
processing pipeline. The system takes as input video
frames and corresponding samples from the Inertial
Measurement Unit (IMU), which include altitude and
orientation data. These inputs are then passed to the
optical flow processing module, which estimates the
horizontal ground-relative velocity (groundspeed).
The resulting velocities are subsequently used by the
dead reckoning module to determine the UAV’s
position in geographic coordinates, based on current
speed and orientation.
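The loop below sketches how these modules could be chained in the offline pipeline of Figure 3, reusing the helper functions outlined in Section 3 (mean_flow_magnitude, flow_to_groundspeed, dead_reckoning_step). The record field names (altitude_m, heading_rad) are assumptions made for illustration and do not reflect the actual log format of the platform.

```python
def estimate_trajectory(frames, imu_records, fov_diag_rad, d_px, fps, lat0, lon0):
    """Offline sketch of the processing pipeline: optical-flow groundspeed
    feeding a dead-reckoning position update, starting from the last known
    position (lat0, lon0). Uses the helpers sketched in Section 3."""
    lat, lon = lat0, lon0
    trajectory = [(lat, lon)]
    for k in range(1, len(frames)):
        rec = imu_records[k]
        v_px = mean_flow_magnitude(frames[k - 1], frames[k])           # pixels/frame
        v_ms = flow_to_groundspeed(v_px, rec["altitude_m"],
                                   fov_diag_rad, d_px, fps)            # m/s
        lat, lon = dead_reckoning_step(lat, lon, v_ms,
                                       rec["heading_rad"], 1.0 / fps)  # Eqs. (4)-(7)
        trajectory.append((lat, lon))
    return trajectory
```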
Figure 3. Diagram illustrating the operation of the software
prototype developed for experimental purposes (own work).
For the experimental validation, the industrial UAV
platform Yuneec H520 was used. This hexacopter is
equipped with a high-resolution camera (E90, 4K) and
was chosen for its stable flight performance, precise
trajectory control, and full access to sensor logs,
including:
- downward-facing video recordings from the onboard camera,
- IMU data: accelerations, angular velocities, and magnetometer readings,
- barometric altitude readings,
- telemetry and navigation data (including GPS, which was used for reference only).
The Yuneec H520 platform allows for data export in
standard formats and ensures proper time
synchronization of onboard sensors, making it a
suitable testbed for evaluating alternative GNSS-
independent navigation algorithms.
5 EXPERIMENTAL RESULTS
5.1 Experiment No. 1
The test flight was conducted over flat, open terrain
with sparse vegetation (individual trees not exceeding
30 meters in height) and good visual texture. The
mission duration was 129 seconds, consisting of 3885
frames, with an average flight altitude of 61 meters.
Weather conditions were sunny with light wind.
Table 1. Position error metrics compared to GNSS (Experiment 1)
Mean error (MAE) [m]    RMSE [m]    Maximum error [m]
17.82                   19.80       33.65

Table 2. Velocity error metrics compared to GNSS (Experiment 1)
Mean error (MAE) [m/s]  RMSE [m/s]  Maximum error [m/s]
0.25                    0.32        1.36
The obtained results indicate high effectiveness of
the proposed navigation method in open-space
conditions. The estimated position trajectory without
the use of GNSS shows a moderate mean error (17.82
m) and a relatively low maximum error (33.65 m),
which can be considered acceptable given the absence
of satellite-based positioning. The nature of the error
suggests a stable, systematic drift caused by motion
integration through dead reckoning without external
position correction.
Velocity estimation using optical flow
demonstrated very good agreement with GPS data.
The low mean error of 0.25 m/s and a maximum
deviation of 1.36 m/s confirm the validity of the applied
algorithm in a well-textured environment with stable
lighting conditions. Minor discrepancies may have
resulted from brief overflights of taller vegetation or
transitions over less distinct surfaces (e.g., uniform
greenery).
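For completeness, the error metrics reported in Tables 1-8 can be obtained from the per-sample deviations between the estimate and the GNSS reference as in the short helper below (our own illustrative code, not part of the described prototype).

```python
import numpy as np

def error_metrics(errors):
    """MAE, RMSE and maximum error from per-sample deviations (e.g. position
    errors in meters or velocity errors in m/s) against the GNSS reference."""
    e = np.abs(np.asarray(errors, dtype=float))
    return {"MAE": float(e.mean()),
            "RMSE": float(np.sqrt(np.mean(e ** 2))),
            "max": float(e.max())}
```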
Figure 4. Experiment 1: Trajectory comparison, GPS vs. estimated (GNSS-free navigation)
Figure 5. Experiment 1: Velocity over time, Optical Flow vs. GPS
5.2 Experiment No. 2
The test flight was conducted over flat, open terrain
with isolated trees up to a maximum height of 30
meters and good visual texture. The mission duration
was 120 seconds, comprising 3619 frames, with an
average flight altitude of 120 meters. Weather
conditions were sunny with light wind.
Table 3. Position error metrics relative to GNSS (Experiment 2)
Mean error (MAE) [m]    RMSE [m]    Maximum error [m]
38.67                   47.10       85.67

Table 4. Velocity error metrics relative to GNSS (Experiment 2)
Mean error (MAE) [m/s]  RMSE [m/s]  Maximum error [m/s]
1.16                    1.22        2.51
Compared to the previous scenario, both position
and velocity estimation errors are noticeably higher.
The average position error exceeds 38 meters, with a
maximum error over 85 meters, indicating a significant
deviation of the estimated trajectory relative to the
GNSS reference. The most probable factor contributing
to this accuracy degradation is the increased flight
altitude. At 120 meters, ground texture becomes
significantly less distinct in the camera image, which
negatively affects the quality of optical flow
calculations. In addition, suboptimal camera alignment
relative to the flight direction may have introduced
distortions in the motion vector field.
Velocity estimation errors are also more pronounced: an average difference of 1.16 m/s and a maximum deviation of 2.51 m/s. This confirms that for
flights at higher altitudes, or when the camera is not
properly aligned, a drop in estimation quality should
be expected. Such conditions highlight the importance
of future work focused on error compensation and
adaptive calibration of input parameters.
Figure 6. Experiment 2: Trajectory comparison, GPS vs. estimated (GNSS-free navigation)
Figure 7. Experiment 2: Velocity over time, Optical Flow vs. GPS
5.3 Experiment No. 3
The test flight was conducted partially over flat, open
terrain with good visual texture, and partially over
densely forested areas. The mission lasted 190 seconds
and included 5712 frames, with an average flight
altitude of 60 meters. Weather conditions were sunny
with light wind.
Table 5. Position error metrics relative to GNSS (Experiment 3)
Mean error (MAE) [m]    RMSE [m]    Maximum error [m]
59.84                   74.44       116.40

Table 6. Velocity error metrics relative to GNSS (Experiment 3)
Mean error (MAE) [m/s]  RMSE [m/s]  Maximum error [m/s]
0.97                    1.78        6.07
The results of the experiment show a significant
increase in estimation errors in the second half of the
flight. The primary cause is not directly related to the
forested terrain itself, but rather to the fact that the tree
canopies were much closer to the camera than assumed
based on the flight altitude obtained from the
magnetometer.
As a result, the optical flow was incorrectly scaled,
leading to an overestimation of speed and,
consequently, a substantial trajectory deviation. This
phenomenon is clearly visible in the velocity plot (Fig. 9), where sudden spikes in speed appear during the
middle segment of the mission. Future improvements
should include consideration of relative height above
the observed surface, for instance by integrating
additional depth sensors or analysing scene structure.
Figure 8. Experiment 3: Trajectory comparison, GPS vs. estimated (GNSS-free navigation)
Figure 9. Experiment 3: Velocity over time, Optical Flow vs. GPS
5.4 Experiment No. 4
The test flight was conducted over flat, open terrain
with sparse vegetation (individual trees up to 30
meters in height) and good visual texture. The mission
lasted 138 seconds and consisted of 4143 frames, with
an average flight altitude of 119 meters. Weather
conditions were sunny with light wind.
Table 7. Position error metrics relative to GNSS (Experiment 4)
Mean error (MAE) [m]    RMSE [m]    Maximum error [m]
11.17                   12.34       20.81

Table 8. Velocity error metrics relative to GNSS (Experiment 4)
Mean error (MAE) [m/s]  RMSE [m/s]  Maximum error [m/s]
0.46                    0.65        2.92
Despite the relatively high flight altitude (119 m),
the system demonstrated very good accuracy in both
position and velocity estimation. The average position
error was only 11.17 meters, and the maximum
deviation did not exceed 21 meters. The velocity
estimated via the optical flow method closely matched
the GPS data, with an average error below 0.5 m/s.
These results suggest that, given sufficient visual
texture and uniform terrain, the system can operate
reliably even at higher altitudes. A key factor was the
homogeneous and well-defined surface structure (e.g.,
fields, roads, low vegetation), which enabled stable
optical flow analysis and minimized disturbances in
metric scaling.
Figure 10. Experiment 4: Trajectory comparison, GPS vs. estimated (GNSS-free navigation)
Figure 11. Experiment 4: Velocity over time, Optical Flow vs. GPS
6 CONCLUSION
This work presented a prototype navigation system for
an unmanned aerial vehicle (UAV) designed to
estimate its position in environments where GNSS
signals are unavailable. The proposed method relies on
optical flow analysis from a nadir-facing camera and a
simplified fusion with spatial orientation data. The
implemented model was evaluated in a simulated
environment using real flight data, and the results
indicate that it is possible to maintain consistent
trajectory and velocity estimation even under
challenging observational conditions.
The key advantages of the presented approach
include its resilience to temporary GNSS signal loss
and the smoothness of the estimated trajectory, enabled by velocity estimation derived from optical
flow. The method is characterized by low
computational complexity, making it suitable for on-
board implementation in resource-constrained UAV
platforms.
The observed limitations (particularly positional drift over time, sensitivity to heterogeneous terrain structure, and distortions caused by objects such as trees or buildings) highlight the need for further development. Specifically, errors resulting from
inaccurate estimation of the UAV's height above
ground can be mitigated using additional sensors such
as laser rangefinders. Alternatively, more advanced
fusion with classical inertial navigation methods could
be implemented, incorporating airspeed
measurements and inertial sensor data.
The results obtained in this study provide a
promising foundation for future work toward a
lightweight, GNSS-independent positioning system,
intended for use in environments with limited satellite
visibility.
REFERENCES
[1] B. Grocholsky, S. Thrun, V. Kumar, "Multi-Robot
Exploration and Localization Using GPS and Dead-
Reckoning," Robotics and Autonomous Systems, 2006.
[2] T. Bailey, H. Durrant-Whyte, "Simultaneous Localization
and Mapping (SLAM): Part I," IEEE Robotics &
Automation Magazine, 2006.
[3] J. Santos-Victor, G. Sandini, "Visual Behaviors for
Docking," Computer Vision and Image Understanding,
1995.
[4] D. Titterton, J. Weston, "Strapdown Inertial Navigation
Technology," IET Radar, Sonar and Navigation Series,
2004.
[5] B. K. P. Horn, B. G. Schunck, “Determining optical flow,”
Artificial Intelligence, 1981.
[6] J. Santos-Victor, G. Sandini, “Visual behaviors for
docking,” Computer Vision and Image Understanding,
1995.
[7] A. Kendall, R. Cipolla, “Geometric loss functions for
camera pose regression with deep learning,” CVPR, 2017.
[8] J. A. Farrell, Aided Navigation: GPS with High Rate
Sensors, McGraw-Hill, 2008.
[9] D. Titterton, J. Weston, Strapdown Inertial Navigation
Technology, IET, 2004.
[10] A. I. Mourikis, S. I. Roumeliotis, “A Multi-State
Constraint Kalman Filter for Vision-aided Inertial
Navigation,” ICRA, 2007.
[11] S. Weiss, D. Scaramuzza, R. Siegwart, “Monocular-
SLAM-based navigation for autonomous micro
helicopters in GPS-denied environments,” Journal of
Field Robotics, 2012.
[12] H. Badino, D. Huber, T. Kanade, “Visual topometric
localization,” IVS, 2013.
[13] H. Zhan et al., “DeepVO: Towards end-to-end visual
odometry with deep Recurrent Convolutional Neural
Networks,” International Journal of Computer Vision,
2019.
[14] G. Farnebäck, "Two-Frame Motion Estimation Based on Polynomial Expansion," Image Analysis (SCIA 2003), Lecture Notes in Computer Science, vol. 2749, Springer, 2003.