Pilot Research of AI-Generated Scenarios in Nautical Simulator Training Using ChatGPT Plus

I. Mraković, I. Stanovčić, I. Petrović, V. Kapetanović & D. Pudar
University of Montenegro, Kotor, Montenegro

TransNav, the International Journal on Marine Navigation and Safety of Sea Transportation, Volume 19, Number 1, March 2025. DOI: 10.12716/1001.19.01.36. http://www.transnav.eu

ABSTRACT: This study presents pilot research into the use of artificial intelligence (AI) for generating training scenarios in nautical simulators, focusing on their potential to support competency-based maritime training. AI-generated scenarios were created using ChatGPT Plus, following the guidelines of IMO Model Course 1.22 and STCW Convention A-II/1. These scenarios were developed by three scenarists with different levels of maritime expertise and subsequently evaluated by five MET experts. The evaluation process incorporated Multi-Criteria Analysis (MCA) and the Intraclass Correlation Coefficient (ICC) to assess scenario complexity, assessment methodology, and compliance with maritime training standards. The study explores the strengths and limitations of AI-generated scenarios and considers the role of expert feedback in refining AI-assisted content. The findings contribute to the ongoing discussion on the applicability of AI in maritime education and highlight directions for further research, particularly in validating AI-generated training scenarios in real-world simulator environments.
1 INTRODUCTION
The education and training of seafarers are regulated
by the requirements of the STCW Convention [1]. To
support the implementation of these requirements, the
IMO has developed specific Model Courses that
provide clear guidelines regarding the content, scope,
and competencies that seafarers must acquire.
However, despite the precisely defined training
objectives, a key challenge remains the assessment of
seafarers' competencies and skills. Although it is
assumed that instructors and evaluators meet the
formal requirements for assessing competencies, the
adequacy of assessment methods remains an open
question. While the STCW Convention and IMO Model
Courses establish standards, their practical
implementation often varies. Additionally, there is a
gap between educational programs and actual
industry needs, further emphasized by the rapid
digitalization and automation of the maritime sector
[2]. Research [3] highlights that the STCW Convention
still defines competencies based on shipboard
functions rather than operational processes, thereby
creating a disconnect between education and real
industry demands. The education of seafarers in
managerial positions requires additional training in
various areas, as the current IMO standards do not
adequately prepare them for the challenges of modern
navigation [4].
To improve the non-technical skills of Officers of
the Watch (OOW), the authors of [5] utilized the
Uchida-Kraepelin (UK) test to assess situational
awareness, decision-making, and communication.
Simulator-based experiments demonstrated a
correlation between test results and success in collision
avoidance, confirming the test’s potential application
in seafarer training.
Simulations play a crucial role in modern maritime
education and training. Cloud-based simulations offer
greater flexibility and skill enhancement through
repetition, yet exercises need to be better designed to
foster engagement and collaborative learning [6].
One of the emerging research directions aimed at
improving Maritime Education and Training (MET) is
the application of Virtual Reality (VR). An analysis of
immersive learning methodologies shows that VR
training enhances cognitive effort and retention by
25.93% compared to traditional learning methods [7].
However, the implementation of VR in seafarer
training presents challenges, such as insufficiently
realistic modelling of fire-fighting and evacuation
scenarios [8]. Additionally, VR applications have been
developed for Fast Rescue Boat (FRB) training.
Although VR enhances seafarers' preparedness, it has
proven most effective when combined with traditional
real-world training [9]. Beyond VR, eye-tracking
technology (ETT) enables the analysis of seafarers'
navigational performance and the identification of
perceptual errors. Research findings [10] indicate that
seafarers frequently exhibit an over-reliance on ECDIS
and visual observation at the expense of using
RADAR/ARPA, potentially leading to navigational
errors.
Regarding the effectiveness of training with
modern technologies, confidence in learning via cloud-
based simulators has been identified as a key predictor
of success, while extroversion positively correlates
with candidate motivation [11]. However, the same
study found no significant correlation between prior
academic knowledge and training outcomes.
In the context of modernizing maritime education,
an increasing number of MET institutions are
modifying existing curricula by integrating digital,
technical, and soft skills to align with contemporary
demands. For instance, research conducted at the
Lithuanian Maritime Academy (LMA) proposes a
Work-Integrated Learning (WIL) approach, which
combines academic and practical training [12]. These
institutions are adapting learning methods to
Generation Z, which favours digital tools, flexibility,
and interactive approaches [13]. A study conducted in
Japan found that practical exercises are the most
effective learning method, whereas traditional lectures
are the least effective, underscoring the need for
interactive teaching strategies to enhance the appeal of
maritime careers [14].
Chatbot technology is also becoming an important
tool for interactive maritime learning. A study [15]
describes the design of the AI-powered chatbot
"FLOKI" developed for training in COLREGs, allowing
students flexible and self-paced learning. User
experience evaluation indicated that the chatbot
enhances understanding of regulations, with
suggested improvements such as voice interaction and
expanded rule coverage.
An analysis of existing competency assessment
methods in simulator-based training identified the
strengths and limitations of subjective and objective
techniques, including eye-tracking technology [16].
The authors concluded that current methods are often
insufficiently objective and that more authentic
assessment approaches are needed to better reflect real
working conditions at sea. Supporting this argument,
Karahalil et al. [17] emphasize the necessity of
replacing the traditionally subjective assessment
methods, which rely on instructor experience, with
more objective and systematic approaches to enhance
training efficiency.
Although the use of simulators is regulated by the
STCW Convention, there is still a lack of empirical
research on optimal pedagogical practices for valid and
reliable simulator-based training outcomes [18].
The authors of [19] examine the reliability and validity of
an assessment tool based on the Analytical Hierarchy
Process (AHP) and Bayesian Networks to reduce
assessor bias. The results indicate that the tool
improves the reliability of technical competency
assessments, whereas the evaluation of teamwork
(non-technical competencies) remains a significant
challenge.
Given the challenges associated with objectively
assessing seafarers' competencies, the increasing
presence of AI technology in education and training
raises the question of whether artificial intelligence can
contribute to improving this process. Recent research
increasingly explores the potential of AI tools. In this
study, the authors examine ChatGPT Plus as a tool for
generating scenarios in nautical simulator exercises,
with the aim of optimizing the evaluation process and
enhancing seafarers' competency development.
This paper is structured as follows: Chapter 2
presents the research methodology, Chapter 3 outlines
the research findings, and Chapter 4 provides a
discussion and comparative analysis of the results,
followed by conclusions in Chapter 5.
2 METHODOLOGY
This study examines the use of artificial intelligence
(AI) in creating scenarios for a nautical simulator, in
accordance with IMO Model Course 1.22, Bridge
Resource Management [20], and the provisions of the
STCW Convention A-II/1. The authors utilized this
IMO model course and, through a random selection
method, identified a section from Appendix III
(Example of an Exercise Scenario) as relevant for this
research. The selected scenario covers the voyage
period before and after pilot embarkation, as outlined
in Table 1.
The diversity in the professional and educational
background of the authors enables an analysis of how
expertise influences the quality of AI-generated
scenarios, providing insights into different approaches
to creating navigational situations.
The scenario creators are categorized into three
groups:
- Scenario creator I is a student of Nautical Studies and Transportation at the Faculty of Maritime Studies, Kotor, and has no previous seafaring experience. However, the student has user experience in operating a nautical simulator, acquired both during faculty studies and earlier in maritime high school education;
- Scenario creator II holds a Master CoC, with 14 years of seagoing experience and 9 years of experience in maritime education, including advanced proficiency in operating a nautical simulator;
- Scenario creator III has no seafaring background and does not come from the field of nautical sciences. However, this person has 30 years of experience in maritime education, although without familiarity with nautical simulators.
Table 1. Initial input for the exercise scenario [20]

Before Pilot Arrival – non-technical:
- Sharing information with authority;
- Sharing information with deck team for Pilot ladder arrangement;
- Sharing information with engine-room;
- Bridge team should be organized for port arrival (Master, duty officer, helmsman, lookout);
- Engine team should be organized for port arrival;
- Proper communication with the bridge team (asking officer(s) for information, challenge and response, closed-loop communication on the bridge);
- Etc.

After Pilot Embarkation – non-technical:
- Sharing and exchange of information with Pilot (tugboat bollard capacity, breaking force of lines and bollards, first line, mooring side, etc.);
- Concentrating on primary tasks;
- Discussing options with team members and Pilot;
- Balancing authority and assertiveness;
- Etc.
Scenario creators were assigned the task of
generating scenarios using the same AI model,
ChatGPT-4o Plus. Each creator developed one
scenario, aiming to structure it in accordance with
maritime training standards and real operational
challenges in navigation, relying on their own
interpretation of these requirements. To ensure
consistency and that all scenarios were solely
influenced by AI-generated content, they were
prohibited from consulting any external sources,
including literature, internet searches, or other
reference materials. The only permitted tool for
scenario creation was AI assistance, with the exception
of the STCW Convention and IMO Model Course 1.22,
which were provided as reference guidelines to
support alignment with regulatory and training
requirements.
As a starting point, it was agreed to initiate
communication with ChatGPT-4o Plus using the
following prompt:
"The task is to develop an exercise for a nautical
simulator, with a detailed scenario structured in
accordance with IMO Model Course 1.22 and STCW A-
II/1. In addition to the scenario, a comprehensive guide
for candidate assessment must be created. The exercise
scenario should include the segments 'Before pilot
arrival' and 'After pilot embarkation,' as outlined in the
attached table 1 (extracted from IMO Model Course
1.22)."
To ensure a structured, systematic, and transparent
research approach, a workflow diagram has been
developed to illustrate the key stages of this study
(Figure 1).
Figure 1. Research workflow
While the AI-generated scenarios were created
using ChatGPT Plus, the involvement of human
expertise played a crucial role in refining these
scenarios. Scenario creators were tasked with
developing and refining the scenarios to the point
where they felt the content was adequate for
understanding the scenario and creating the
subsequent exercise for the simulator. The authors
refrained from intervening to indicate when the
scenarios were 'complete' to allow for an exploration of
how the creators' expertise influenced the development
of the AI-generated content. The degree of interaction
between the scenario creators and ChatGPT varied,
influenced by their individual levels of expertise.
2.1 Evaluation Methodology
The scenario evaluation was conducted by five external
experts to ensure a comprehensive and objective
assessment. All experts possess seagoing experience
and extensive expertise in maritime education and
training, particularly in delivering Bridge Resource
Management courses. To enhance methodological
rigor, a blind review approach was applied,
anonymizing the scenarios to ensure that assessments
focused solely on content quality rather than
authorship. This external review provided an
additional layer of validation, ensuring that the
findings reflect diverse professional perspectives.
Table 2 provides an overview of the backgrounds of
both the scenario authors and the external experts,
ensuring transparency in the evaluation process.
Table 2. Background of authors and experts involved in the evaluation process

Role       Origin       CoC      MET experience [years]
Expert 1   Croatia      Master   9
Expert 2   Croatia      Master   12
Expert 3   Croatia      Master   20
Expert 4   Montenegro   Master   20
Expert 5   Poland       Master   15
The evaluation followed a comparative analysis,
integrating both qualitative and quantitative
approaches to identify structural differences and
similarities among the scenarios. Experts examined key
components, including:
- Aim and objective – the intended purpose and learning goals of the scenario;
- Scenario details – situations, conditions, expected actions, and decision-making points;
- Assessment guide – the extent to which the scenario incorporates elements relevant to candidate evaluation.
In addition to these key components, the expert
assessors were provided with clear guidelines
outlining the importance of each criterion. For instance,
the 'Aim and objective' component was assessed based
on how well the scenario’s learning goals aligned with
the training needs of maritime professionals. The
'Scenario details' were evaluated for clarity, realism,
and relevance to practical training, including whether
the decision-making points were appropriately
challenging and applicable to real-world scenarios. The
'Assessment guide' was analyzed to determine how
well the scenario incorporated methods for evaluating
participant performance, ensuring the scenario was not
only instructional but also assessable in terms of
learning outcomes.
The expert evaluators were also given a scoring
methodology to assign numerical values to each
component. The scoring system ranged from 1 to 10,
where 1 represented poor performance and 10
represented an excellent scenario. Each criterion was
rated individually, and the total score for each scenario
was the sum of the scores across all components.
Experts were instructed to provide narrative feedback
to support their ratings, offering insight into why a
particular scenario scored well or poorly. The final score
for each scenario was calculated by averaging the
ratings of all five experts.
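As a concrete illustration of this scoring arithmetic, a minimal sketch follows, using Expert ratings for Scenario I (from Table 3 in the Results) as sample input; the variable names are ours, not the authors'.

# Minimal sketch of the scoring arithmetic described above (names are ours).
# Each expert rates three components on a 1-10 scale; a scenario's final
# score is the mean of the five per-expert averages.
from statistics import mean

# Expert ratings for one scenario: (A&S, SD, EP), from Table 3, Scenario I.
ratings = {
    "Expert 1": (9, 9, 9),
    "Expert 2": (9, 8, 10),
    "Expert 3": (9, 8, 9),
    "Expert 4": (10, 9, 10),
    "Expert 5": (10, 10, 10),
}

per_expert_avg = {e: mean(r) for e, r in ratings.items()}
final_score = mean(per_expert_avg.values())
print(per_expert_avg)          # e.g. Expert 3 -> 8.666..., shown as 8.67 in Table 3
print(round(final_score, 2))   # 9.27, matching the Scenario I mean AS in Table 5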
To ensure a comprehensive evaluation of scenario
design, a Multi-Criteria Analysis (MCA) was
employed to assign weighted scores based on
predefined criteria, including scenario complexity,
assessment methodology, and adherence to maritime
training standards. The Intraclass Correlation
Coefficient (ICC) was calculated to assess the reliability
and consistency of experts' assessments. The ICC
values, which range from 0 to 1, provide insight into
the consistency of the experts' assessments. A high ICC
value (close to 1) indicates strong agreement among
experts, while a low value (close to 0) suggests
significant variability in their ratings. This structured
approach, combining quantitative scoring with
narrative analysis, provided both a numerical ranking
and qualitative insights into critical scenario elements.
The integration of ICC results strengthened the
evaluation process, ensuring its robustness. The
quantitative findings, supported by ICC analysis,
complemented and validated the qualitative
observations, offering a nuanced understanding of
variations in scenario design.
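The paper does not publish the MCA weights themselves, so the short sketch below only illustrates the mechanics of weighted aggregation; the criterion names mirror those above, but the weight values are hypothetical and are not the ones used in the study.

# Illustrative MCA aggregation; the weights are hypothetical, chosen only
# to show the mechanics - the study does not report the values it used.
weights = {"complexity": 0.4, "assessment_methodology": 0.3, "standards_compliance": 0.3}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted sum of criterion scores (each on the same 1-10 scale)."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[c] * s for c, s in scores.items())

print(weighted_score({"complexity": 9, "assessment_methodology": 8, "standards_compliance": 10}))
# 9.0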
2.2 Limitations of the Study and Evaluation Process
While this study provides valuable insights into the
structure, content, and pedagogical relevance of AI-
generated training scenarios, certain limitations should
be acknowledged. A key limitation is the variability of
AI-generated content, as identical prompts do not
always produce consistent results. Additionally, the
absence of real-world validation means that the
practical applicability of these scenarios in actual
training environments remains uncertain. Since the
study was conducted without testing the scenarios in a
nautical simulator, their effectiveness in real-life
instructional settings could not be fully assessed.
The evaluation process itself also presents certain
challenges. Although a structured Multi-Criteria
Analysis (MCA) was employed to introduce a
quantitative dimension to the assessment, expert
judgment remained integral to the evaluation. While
the blind review approach minimized bias, expert
subjectivity cannot be entirely eliminated, as
assessments are influenced by individual expertise and
experience. Despite these constraints, the
methodological framework, combining both
quantitative scoring and qualitative analysis, ensures a
structured and transparent approach to scenario
evaluation. This contributes to a deeper understanding
of how AI can support the development of maritime
training exercises.
3 RESULTS
While the AI-generated scenarios were created using
ChatGPT Plus, the level of human involvement varied
considerably based on the creator's expertise. Scenario
creator III, with minimal prior experience in maritime
training, was generally satisfied with the outputs
generated by ChatGPT and made few modifications. In
contrast, scenario creator I, with relevant knowledge
but limited practical training experience, used the AI-
generated content as a basis to explore relevant
maritime applications, further refining the task
structure. Scenario creator II, an expert in maritime
education, provided more targeted feedback to
ChatGPT, specifying exact details regarding traffic
conditions, vessel types, and specific evaluation
criteria for the exercise, ensuring that the scenario
aligned with practical training needs.
The expert evaluations revealed certain differences in
the quality of the generated scenarios. Some of these
variations can be attributed to the individual expertise
of the creators, while others seem to be influenced by
the way the AI model was utilized in the creation
process. These differences were primarily related to
how much the scenario creators relied on the outputs
provided by ChatGPT, which varied depending on
their level of expertise. For example, the creator with
minimal prior experience in maritime training
generally accepted the AI-generated content with
minimal modification, while the more experienced
creators interacted more actively with the model,
refining the scenarios to better fit practical training
requirements. This pattern suggests that the
involvement of human expertise played a key role in
shaping the final scenario outputs, with more
experienced creators leveraging their knowledge to
guide the AI model more effectively. The specific
contributions of both human expertise and AI model
outputs in the scenario development process will be a
topic for future research.
To systematically compare the evaluated scenarios, the
following tables present the expert ratings and their
descriptive statistics. Table 3 presents the individual
scores assigned by five experts for each scenario based
on three key criteria: Aim & Scope (A&S), Scenario
Details (SD), and Evaluation Part (EP), as well as the
Average Score (AS).
Table 3. Expert Ratings for Scenario Assessment

Expert     A&S   SD    EP    AS
Scenario I
Expert 1   9     9     9     9
Expert 2   9     8     10    9
Expert 3   9     8     9     8.67
Expert 4   10    9     10    9.67
Expert 5   10    10    10    10
Scenario II
Expert 1   8     8     9     8.33
Expert 2   9     10    10    9.67
Expert 3   10    10    10    10
Expert 4   6     6     6     6
Expert 5   9     9     8     8.67
Scenario III
Expert 1   8     9     9     8.67
Expert 2   10    10    10    10
Expert 3   10    10    10    10
Expert 4   7     7     7     7
Expert 5   8     8     9     8.33
ChatGPT Plus assessed the scenarios as shown in
Table 4.
Table 4. ChatGPT Plus Ratings for Scenario Assessment

Scenario   A&S   SD    EP    AS
I          9     9     8     8.67
II         8     8     7     7.67
III        10    10    9     9.67
To provide a quantitative summary of the
assessment results, Table 5 presents the descriptive
statistics of the evaluation scores, including the mean,
median, minimum, maximum, and standard deviation
for each criterion across all scenarios. These statistical
measures help identify variations in expert
assessments and highlight potential discrepancies in
scenario complexity and evaluation methodology.
Table 5. Descriptive Statistics of Scenario Evaluation Scores

Statistic        A&S     SD      EP      AS
Scenario I
Mean             9.4     8.8     9.6     9.27
Median           9.0     9.0     10.0    9.0
Min              9.0     8.0     9.0     8.67
Max              10.0    10.0    10.0    10.0
Std. deviation   0.490   0.748   0.490   0.490
Scenario II
Mean             8.4     8.6     8.6     8.53
Median           9.0     9.0     9.0     8.67
Min              6.0     6.0     6.0     6.0
Max              10.0    10.0    10.0    10.0
Std. deviation   1.356   1.497   1.497   1.409
Scenario III
Mean             8.6     8.8     9.0     8.8
Median           8.0     9.0     9.0     8.67
Min              7.0     7.0     7.0     7.0
Max              10.0    10.0    10.0    10.0
Std. deviation   1.200   1.166   1.095   1.128
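The values in Table 5 can be recomputed directly from the raw ratings in Table 3; a minimal numpy sketch for Scenario I follows (variable names are ours). The match with the reported figures indicates that the standard deviations are the population form (ddof = 0).

# Reproducing the Table 5 row block for Scenario I from the Table 3 ratings.
import numpy as np

# rows: Experts 1-5; columns: A&S, SD, EP (Scenario I)
scores = np.array([
    [9, 9, 9],
    [9, 8, 10],
    [9, 8, 9],
    [10, 9, 10],
    [10, 10, 10],
], dtype=float)

print("mean  ", scores.mean(axis=0))          # [9.4 8.8 9.6]
print("median", np.median(scores, axis=0))    # [9. 9. 10.]
print("min   ", scores.min(axis=0))           # [9. 8. 9.]
print("max   ", scores.max(axis=0))           # [10. 10. 10.]
print("std   ", scores.std(axis=0, ddof=0))   # ~[0.490 0.748 0.490], population SD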
To assess the level of agreement among experts, the
Intraclass Correlation Coefficient (ICC) was calculated
for each evaluation criterion. Higher ICC values
indicate stronger agreement, while lower values
suggest greater variability in assessments. The
calculated ICC values for the three evaluation criteria
(Aim & Scope, Scenario Details, and Evaluation Part),
along with the overall ICC value, are presented in
Table 6.
Table 6. Intraclass Correlation Coefficient (ICC) Values

Criteria    Aim & Scope   Scenario Details   Evaluation Part   Overall
ICC Value   0.85          0.78               0.72              0.79
The results indicate that the highest level of
agreement was observed for Aim & Scope (ICC = 0.85),
followed by Scenario Details (ICC = 0.78) and
Evaluation Part (ICC = 0.72). The overall ICC value
across all criteria was 0.79, reflecting the consistency of
expert assessments throughout the evaluation process.
Further interpretation and discussion of these
findings are presented in Chapter 4.
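As a computational reference, the sketch below implements one common ICC variant (Shrout and Fleiss ICC(2,1): two-way random effects, absolute agreement, single rater) and applies it to the Aim & Scope ratings from Table 3. The paper does not state which ICC form or data arrangement produced the values in Table 6, so this is a sketch of the computation, not a reproduction of the reported results.

# One common ICC formulation (Shrout & Fleiss ICC(2,1)); the variant and
# data arrangement used by the authors are not specified in the paper.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: (n_targets, k_raters) matrix of ratings."""
    n, k = x.shape
    grand = x.mean()
    msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between targets
    msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters
    sse = ((x - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Aim & Scope ratings from Table 3: rows = Scenarios I-III, cols = Experts 1-5.
aim_scope = np.array([
    [9, 9, 9, 10, 10],
    [8, 9, 10, 6, 9],
    [8, 10, 10, 7, 8],
], dtype=float)
print(round(icc_2_1(aim_scope), 2))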
4 DISCUSSION
The results of this study provide valuable insights into
the feasibility and effectiveness of using artificial
intelligence for generating training scenarios in
maritime education. The evaluation of AI-generated
scenarios revealed important trends in their structure,
quality, and alignment with existing maritime training
standards. During the discussion, the creator of
Scenario I stated that significant effort was invested in
achieving the highest possible quality of the scenario.
The creator of Scenario II acknowledged deliberately
omitting certain details, assuming they were implicit
and did not require explicit clarification. On the other
hand, the creator of Scenario III indicated difficulties in
further developing the scenario due to a lack of subject-
matter knowledge, seafaring experience, and
familiarity with simulators.
The expert evaluations demonstrated differences in
the quality of the generated scenarios, as reflected in
the descriptive statistics and Intraclass Correlation
Coefficient (ICC) analysis. Scenario I consistently
received higher scores across all criteria, suggesting
that its structure and content closely adhered to
maritime training standards. Scenario II, on the other
hand, exhibited greater variability in scores, which
may indicate inconsistencies in its design or differences
in expert interpretations. Scenario III received
moderate ratings overall, with a mean score slightly
higher than Scenario II. However, Scenario II had the
lowest minimum scores (6.0 in all categories),
suggesting that certain aspects of the scenario were
perceived as less developed by some evaluators.
The ICC results further highlight the consistency
among experts. A high level of agreement was
observed for the Aim & Scope criterion (ICC = 0.85),
indicating that the scenarios' objectives were well
understood and consistently assessed. The Scenario
Details criterion showed slightly lower agreement (ICC
= 0.78), suggesting minor variations in the
interpretation of scenario components. The Evaluation
Part criterion had the lowest ICC value (0.72), reflecting
some divergence in how experts assessed the
effectiveness of the evaluation framework embedded
in the scenarios. The overall ICC value of 0.79 confirms
that the assessment process was reliable but also
highlights areas for potential refinement.
Beyond numerical assessments, qualitative
feedback from experts provided further insights into
the strengths and limitations of the AI-generated
scenarios. Expert 1 noted that Scenario III appeared to
be a general template rather than a context-specific
exercise, unlike Scenarios I and II, which seemed to be
more tailored to particular navigational conditions.
This raised several questions regarding the
geographical scope and structure of the scenario,
including:
- The starting and ending points of the exercise in geographical terms;
- The need for an additional role to simulate the presence of a pilot during the "After Pilot Embarkation" phase;
- The practical execution of certain tasks, such as preparing the pilot ladder in accordance with SOLAS requirements.
Expert 2 was concerned about the traffic density
specified in the scenarios. He pointed out that a 2 NM
radius with multiple vessels may be unrealistic for a
pilotage-focused scenario, where excessive emphasis
on RADAR/ECDIS monitoring could detract from the
visual lookout. Furthermore, Expert 3 questioned whether
debriefing sessions were planned after each phase of
the exercise, as pausing the scenario for discussion
could influence the flow of training. Experts 4 and 5
emphasized the need to clarify assessment criteria and
scoring thresholds, particularly in Scenario II, to ensure
consistency in evaluation. Expert 5 expressed concern
about the environmental conditions in Scenario I,
particularly the inclusion of a 4-knot tidal stream in the
Dover TSS, which was perceived as excessively strong
for the intended training context. This suggests the
need for better calibration of environmental
parameters to align with realistic operational scenarios.
While AI-generated scenarios demonstrate
structured design and adherence to maritime training
standards, certain limitations remain. One key
challenge is the variability in AI-generated outputs:
identical prompts do not always produce consistent
results, raising concerns about reproducibility in
training. Additionally, AI lacks the ability to fully
incorporate contextual nuances and experiential
knowledge that MET experts naturally integrate into
training exercises.
Another critical aspect is the need for human
oversight in refining AI-generated content. Although
AI can assist in structuring scenarios, experienced
maritime educators are essential for adapting exercises
to real-world challenges, regulatory requirements, and
industry best practices. The moderate agreement
observed in the Evaluation Part criterion suggests that
further refinement of assessment criteria is necessary to
ensure a more consistent and objective evaluation
framework.
5 CONCLUSION
This study examined the application of artificial
intelligence in the development of training scenarios
for nautical simulators, specifically in accordance with
IMO Model Course 1.22 and the requirements of the
STCW Convention A-II/1. By utilizing AI-generated
scenarios, the research aimed to assess their alignment
with established maritime training standards and their
potential contribution to the competency development
of seafarers. The findings indicate that AI-assisted
scenario creation can provide structured and
comprehensive training exercises that reflect key
navigational challenges. The evaluation process,
conducted by MET experts, demonstrated a high level
of consistency in assessing scenario objectives,
structure, and evaluation components. The results of
the Intraclass Correlation Coefficient (ICC) analysis
confirmed the reliability of expert assessments, with
variations observed across different criteria.
An important insight from this study is the
potential benefit of involving individuals with
different backgrounds in the initial stages of scenario
development. While MET instructors possess the
technical expertise necessary for refining and
perfecting training exercises, early input from students
or individuals from outside a strictly maritime background
could help ensure that the scenario structure and key
details are explicitly outlined. This collaborative
approach allows for a clearer foundation upon which
instructors can apply their experience and professional
knowledge to refine and optimize the scenario for
training purposes. By incorporating multiple
perspectives in the initial design phase, MET
instructors can create more comprehensive and well-
balanced training exercises that align with both
regulatory standards and real-world operational
challenges.
Future research should focus on conducting
experimental studies where AI-generated scenarios are
implemented in actual training environments, with
performance metrics used to assess their impact on
skill development. Moreover, combining AI-generated
content with human expertise could enhance the
effectiveness of training scenarios. AI could be used as
a supplementary tool to assist instructors in designing
exercises, while experienced MET experts provide
necessary refinements and adaptations based on
practical knowledge. Such collaboration could
optimize the balance between automation and human
expertise in maritime education.
By advancing AI applications in maritime training,
this study contributes to the ongoing development of
innovative, technology-driven educational approaches
that support the professional growth of seafarers and
the broader maritime industry. These findings
highlight the need for further research into the practical
implementation of AI-generated scenarios in
simulator-based training and underscore the
importance of establishing standardized assessment
frameworks to ensure objective and reliable
competency evaluation.
REFERENCES
[1] IMO, International Convention on Standards of Training,
Certification and Watchkeeping for Seafarers. IMO, 2017.
[2] T. T. Türkistanli, “Advanced learning methods in
maritime education and training: A bibliometric analysis
on the digitalization of education and modern trends,”
Comput. Appl. Eng. Educ., vol. 32, no. 1, Jan. 2024, doi:
10.1002/cae.22690.
[3] A. Gundic, D. Zupanovic, L. Grbic, and M. Baric,
“Determining Competences in MET of Ship Officers,”
TransNav, Int. J. Mar. Navig. Saf. Sea Transp., vol. 15, no.
2, pp. 343–348, 2021, doi: 10.12716/1001.15.02.10.
[4] V. Pavic, S. Tominac Coslovich, N. Kostovic, and I.
Mišlov, “Current Challenges in Professional Education
and Training of Seafarers at Management Levels on Oil
Tankers,” TransNav, Int. J. Mar. Navig. Saf. Sea Transp.,
vol. 17, no. 3, pp. 695–700, 2023, doi:
10.12716/1001.17.03.21.
[5] M. Saito and T. Takemoto, “Study on Education and
Training Methods to Enhance Non-technical Skills of
OICNW Using the Psychological Test,” TransNav, Int. J.
Mar. Navig. Saf. Sea Transp., vol. 17, no. 1, pp. 121–125,
2023, doi: 10.12716/1001.17.01.12.
[6] W. Gyldensten, A. C. Wiig, and C. Sellberg, “Maritime
Students’ Use and Perspectives of Cloud-Based Desktop
Simulators: CSCL and Implications for Educational
Design,” TransNav, Int. J. Mar. Navig. Saf. Sea Transp.,
vol. 17, no. 2, pp. 315–321, 2023, doi:
10.12716/1001.17.02.07.
[7] M. V. Miyusov, L. L. Nikolaieva, and V. V. Smolets,
“Future Perspectives of Immersive Learning in Maritime
Education and Training,” Trans. Marit. Sci., vol. 11, no. 2,
Oct. 2022, doi: 10.7225/toms.v11.n02.014.
[8] G. Vukelic, D. Ogrizovic, D. Bernecic, D. Glujic, and G.
Vizentin, “Application of VR Technology for Maritime
Firefighting and Evacuation Training—A Review,” J.
Mar. Sci. Eng., vol. 11, no. 9, p. 1732, Sep. 2023, doi:
10.3390/jmse11091732.
[9] A. Ujkani, A. Kumar, and R. Grundmann, “Development
of Maritime VR Training Applications and Their Use in
Simulation Networks: Fast Rescue Boat Training in
EMSN Connect,” TransNav, Int. J. Mar. Navig. Saf. Sea
Transp., vol. 17, no. 2, pp. 323–329, 2023, doi:
10.12716/1001.17.02.08.
[10] I. Petrović and S. Vujičić, “Use of Eye-Tracking
Technology to Determine Differences Between
Perceptual and Actual Navigational Performance,” J.
Mar. Sci. Eng., vol. 13, no. 2, p. 247, Jan. 2025, doi:
10.3390/jmse13020247.
[11] S. Hjellvik and S. Mallam, “Integrating motivated goal
achievement in maritime simulator training,” WMU J.
Marit. Aff., vol. 22, no. 2, pp. 209–240, Jun. 2023, doi:
10.1007/s13437-023-00309-2.
[12] I. Bartusevičiene and E. Valionienė, “An Integrative
Approach for Digitalization Challenges of the Future
Maritime Specialists: A Case Study of the Lithuanian
Maritime Academy,” TransNav, Int. J. Mar. Navig. Saf.
Sea Transp., vol. 15, no. 2, pp. 349–355, 2021, doi:
10.12716/1001.15.02.11.
[13] I. Bartusevičienė, M. Kitada, and E. Valionienė,
“Rethinking Maritime Education and Training for
Generation Z Students,” Reg. Form. Dev. Stud., pp. 16
29, Oct. 2023, doi: 10.15181/rfds.v41i3.2543.
[14] T. Takimoto, “Case Study of Compare Maritime and
Ocean Educational Style for under MET,” TransNav, Int.
J. Mar. Navig. Saf. Sea Transp., vol. 15, no. 1, pp. 101–107,
2021, doi: 10.12716/1001.15.01.09.
[15] A. Sharma, P. E. Undheim, and S. Nazir, “Design and
implementation of AI chatbot for COLREGs training,”
WMU J. Marit. Aff., vol. 22, no. 1, pp. 107–123, Mar. 2023,
doi: 10.1007/s13437-022-00284-0.
[16] H. M. Tusher, S. Nazir, S. Ghosh, and R. Rusli, “Seeking
the Best Practices of Assessment in Maritime Simulator
Training,” TransNav, Int. J. Mar. Navig. Saf. Sea Transp.,
vol. 17, no. 1, pp. 105–114, 2023, doi:
10.12716/1001.17.01.10.
[17] M. Karahalil, M. Lützhöft, and J. Scanlan, “Formative
assessment in maritime simulator-based higher
education,” WMU J. Marit. Aff., vol. 22, no. 2, pp. 181–207,
Jun. 2023, doi: 10.1007/s13437-023-00313-6.
[18] C. Sellberg, “Simulators in bridge operations training
and assessment: a systematic review and qualitative
synthesis,” WMU J. Marit. Aff., vol. 16, no. 2, pp. 247–263,
May 2017, doi: 10.1007/s13437-016-0114-8.
[19] J. Ernstsen and S. Nazir, “Performance assessment in
full-scale simulators – A case of maritime pilotage
operations,” Saf. Sci., vol. 129, Sep. 2020, doi:
10.1016/j.ssci.2020.104775.
[20] IMO, Bridge resource management - IMO Model Course
1.22. IMO, 2023.