[6] Qiao, Y.; Yin, J.; Wang, W.; Duarte, F.; Yang, J.; Ratti, C.
Survey of Deep Learning for Autonomous Surface
Vehicles in Marine Environments. IEEE Trans. Intell.
Transp. Syst. 2023, 24, 3678–3701.
https://doi.org/10.1109/TITS.2023.3235911.
[7] Hao, G.; Xiao, W.; Huang, L.; Chen, J.; Zhang, K.; Chen, Y.
The Analysis of Intelligent Functions Required for Inland
Ships. J. Mar. Sci. Eng. 2024, 12, 836.
https://doi.org/10.3390/jmse12050836.
[8] Li, Y.; Hu, Y.; Rigo, P.; Lefler, F.E.; Zhao, G., Eds. Proceedings of PIANC Smart Rivers 2022: Green Waterways and Sustainable Navigations; Lecture Notes in Civil Engineering; Springer: Singapore, 2023; Volume 264. https://doi.org/10.1007/978-981-19-6138-0.
[9] Fan, W.; Zhong, Z.; Wang, J.; Xia, Y.; Wu, H.; Wu, Q.; Liu, B. Vessel-Bridge Collisions: Accidents, Analysis, and Protection. China Journal of Highway and Transport 2024, 37(5), 38–66.
[10] Łubczonek, J.; Włodarczyk, M. Wykorzystanie geobazy danych w procesie tworzenia elektronicznych map nawigacyjnych dla żeglugi śródlądowej [Application of a geodatabase in the process of creating electronic navigational charts for inland shipping]. Archiwum Fotogrametrii, Kartografii i Teledetekcji 2010, 21, 221–234.
[11] Łubczonek, J. Opracowanie i implementacja
elektronicznych map nawigacyjnych dla systemu RIS w
Polsce [Elaboration and implementation of electronic
navigational charts for RIS in Poland]. Roczniki
Geomatyki 2015, 13, 359–368.
[12] Adamski, P.; Lubczonek, J. A Comparative Analysis of the Usability of Consumer Graphics Cards for Deep Learning in the Aspects of Inland Navigational Signs Detection for Vision Systems. Appl. Sci. 2025, 15, 5142. https://doi.org/10.3390/app15095142.
[13] United Nations. SIGNI: European Code for Signs and Signals on Inland Waterways (Resolution No. 90); United Nations: New York, NY, USA, 2018.
[14] Redmon, J. Darknet: Open Source Neural Networks in C. Available online: https://pjreddie.com/darknet (accessed on 13 July 2025).
[15] Jocher, G.; Qiu, J.; Chaurasia, A. Ultralytics YOLO. Available online: https://github.com/ultralytics/ultralytics (accessed on 13 July 2025).
[16] Redmon, J.; Divvala, S.; Girshick, R.; Farhadi, A. You Only Look Once: Unified, Real-Time Object Detection. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Las Vegas, NV, USA, 27–30 June 2016; pp. 779–788.
[17] Redmon, J.; Farhadi, A. YOLO9000: Better, Faster, Stronger. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Honolulu, HI, USA, 21–26 July 2017; pp. 7263–7271.
[18] Redmon, J.; Farhadi, A. YOLOv3: An Incremental Improvement. arXiv 2018, arXiv:1804.02767. https://doi.org/10.48550/arXiv.1804.02767.
[19] Bochkovskiy, A.; Wang, C.-Y.; Liao, H.-Y.M. YOLOv4: Optimal Speed and Accuracy of Object Detection. arXiv 2020, arXiv:2004.10934. https://doi.org/10.48550/arXiv.2004.10934.
[20] Wang, C.-Y.; Bochkovskiy, A.; Liao, H.-Y.M. YOLOv7: Trainable Bag-of-Freebies Sets New State-of-the-Art for Real-Time Object Detectors. arXiv 2022, arXiv:2207.02696. https://doi.org/10.48550/arXiv.2207.02696.
[21] Bochkovskiy, A. Darknet: Open Source Neural Networks in C. Available online: https://github.com/AlexeyAB/darknet (accessed on 13 July 2025).
[22] Khanam, R.; Hussain, M. What is YOLOv5: A Deep Look into the Internal Features of the Popular Object Detector. arXiv 2024, arXiv:2407.20892. https://doi.org/10.48550/arXiv.2407.20892.
[23] Li, C.; Li, L.; Jiang, H.; Weng, K.; Geng, Y.; Li, L.; Ke, Z.; Li, Q.; Cheng, M.; Nie, W.; Li, Y.; Zhang, B.; Liang, Y.; Zhou, L.; Xu, X.; Chu, X.; Wei, X.; Wei, X. YOLOv6: A Single-Stage Object Detection Framework for Industrial Applications. arXiv 2022, arXiv:2209.02976. https://doi.org/10.48550/arXiv.2209.02976.
[24] Varghese, R.; M., S. YOLOv8: A Novel Object Detection Algorithm with Enhanced Performance and Robustness. In Proceedings of the 2024 International Conference on Advances in Data Engineering and Intelligent Computing Systems (ADICS), Chennai, India, 2024; pp. 1–6. https://doi.org/10.1109/ADICS58448.2024.10533619.
[25] Wang, C.-Y.; Yeh, I.-H.; Liao, H.-Y.M. YOLOv9: Learning What You Want to Learn Using Programmable Gradient Information. arXiv 2024, arXiv:2402.13616. https://doi.org/10.48550/arXiv.2402.13616.
[26] Wang, A.; Chen, H.; Liu, L.; Chen, K.; Lin, Z.; Han, J.; Ding, G. YOLOv10: Real-Time End-to-End Object Detection. arXiv 2024, arXiv:2405.14458. https://doi.org/10.48550/arXiv.2405.14458.
[27] Khanam, R.; Hussain, M. YOLOv11: An Overview of the Key Architectural Enhancements. arXiv 2024, arXiv:2410.17725. https://doi.org/10.48550/arXiv.2410.17725.
[28] Tian, Y.; Ye, Q.; Doermann, D. YOLOv12: Attention-Centric Real-Time Object Detectors. arXiv 2025, arXiv:2502.12524. https://doi.org/10.48550/arXiv.2502.12524.
[29] NVIDIA Corporation. CUDA Toolkit Documentation, Version 12.4. Available online: https://docs.nvidia.com/cuda/archive/12.4.0/ (accessed on 13 July 2025).
[30] Paszke, A.; et al. PyTorch: An Imperative Style, High-Performance Deep Learning Library. arXiv 2019, arXiv:1912.01703. https://doi.org/10.48550/arXiv.1912.01703.