TG of the Month: Training Technical Group (TTG)
Posted September 11, 2025
Team Training in the Age of AI
Lillian Asiala & Jim McCarthy
Human-AI hybrid teaming is rapidly becoming essential in various fields, prompting a need for focused research and training. Traditionally, team training has assumed all members are human; however, this approach doesn’t fully account for the unique characteristics of AI teammates – their computational strengths contrasted with a lack of real-world understanding and common sense. This discrepancy creates communication and collaboration challenges, especially in complex, highly interactive team workflows.
Several key areas for research and training development include designing training content and platforms, calibrating human expectations regarding AI capabilities, and adapting traditional instructional methods to account for AI’s strengths and limitations. A key consideration is whether to view AI as a teammate or a tool, as this perspective shapes training approaches.
Ultimately, this field requires new theoretical models specifically designed for human-AI teams. Successful training will depend on addressing technical infrastructure needs, developing effective instructional design, ensuring instructor expertise, and finding the right balance in AI scaffolding – providing enough assistance to foster trust without creating over-reliance on the technology.
HFES TTG Newsletter Article
Team Training in the Age of AI: Models and Research Considerations
Dr. Lillian Asiala & Dr. Jim McCarthy
Sonalysts, Inc.
As Artificial Intelligence (AI) becomes increasingly integrated into modern life, the concept of Human/AI Hybrid Teaming has moved from a niche research area to a crucial component of how we work and live. We already team with AI daily – from using GPS navigation and online shopping to conducting simple web searches. This widespread integration highlights the need to understand and optimize how humans and AI interact effectively.
The Growing Importance of Human-AI Team Training
Recognizing this need, organizations are prioritizing training programs focused on human-AI collaboration. In 2021, the Air Force Research Laboratory (AFRL) Human Performance Wing tasked the National Academies of Sciences, Engineering, and Medicine (NASEM) with examining the role of AI in human-AI teams. AFRL was particularly interested in identifying research gaps to guide the development of future systems that maximize human performance when AI is involved. The NASEM report pinpointed training human-AI teams as a primary focus area. Several research sub-areas were identified as critical to advancing our understanding of this field:
- Developing Training Content
- Evaluating Traditional Methods
- Calibrating Expectations
- Designing Training Platforms
- Adaptive Training
- Building Trust
With these priorities in mind, it is important to consider how human/AI hybrid teams differ from traditional all-human teams, and how those differences affect team training.
Understanding the Differences: Human vs. AI Teammates
AI agents possess virtually unlimited computational power, excelling in data analysis and processing. However, they lack a grounded understanding of real-world situations: the common-sense knowledge and general context that humans possess. This difference can lead to communication challenges and hinder effective collaboration, which impacts team performance. These communication and collaboration challenges tend to increase with the complexity of the team workflow and team composition.
Team Workflows & AI Integration
The way AI integrates into a team is heavily influenced by the team’s workflow. Saavedra, Earley, and Van Dyne (1993) identified several workflow types with varying degrees of interdependence:
- Pooled Workflow: Individuals work independently, and the team’s output is the sum of individual contributions.
- Sequential Workflow: Tasks flow in a unidirectional, linear fashion, like an assembly line.
- Reciprocal Workflow: Tasks flow bi-directionally, with teammates exchanging information and assistance.
- Team Workflow: All members actively interact and collaborate to achieve a common goal.
As you move down this list toward more interactive workflows, the need for clear communication and collaboration increases; these are qualities that many modern AI-based agents lack. As the number of agents and the complexity of the situation increase, the attentional demands on human teammates grow, and more sophisticated communication is required. This has significant implications for both AI development and team training.
Conceptual Models for Hybrid Team Dynamics
A common approach to research in human/AI hybrid teaming is to assume AI teammates require the same skills as humans. However, this “all-human team” model may not be entirely appropriate (and in fact does not have universal support). An alternative is to draw parallels with human-animal teams, where a supervisory control structure is often present. Groom and Nass (2007) highlighted this connection with dog-handling teams. Still others reject the teammate metaphor in favor of a tool metaphor.
Ultimately, the field needs to develop new theoretical models that explicitly account for the unique characteristics of autonomous agents and their human teammates to maximize team effectiveness. This requires fundamental research to understand the various factors that affect hybrid teams.
Considerations for Training Hybrid Teams
What does all this mean for training? Several factors need to be addressed:
- Technical Infrastructure: Ensuring the necessary hardware and software are in place.
- Instructional Design: Developing effective training programs that address the specific challenges of human-AI collaboration.
- Instructor Expertise: Providing instructors with the knowledge and skills to facilitate effective training.
One specific challenge to emerge from various teaming considerations is that of calibrating the expectations of human team members. Zhang and her colleagues (2021) found that humans often have high expectations for their AI teammates, expecting flawless performance and seamless communication. This implies that a key aspect of training is helping human teammates understand the capabilities and limitations of their AI counterparts.
Another challenge is the strategy selected for creating and incorporating AI agents within team training scenarios. Traditional instructional design models, like Merrill's Pebble-in-a-Pond, emphasize scaffolding: providing learners with decreasing levels of support as they develop mastery. In the context of hybrid teaming, AI agent design could facilitate scaffolding by adjusting agent capabilities over time. However, determining what constitutes appropriate “part solutions” with AI agents remains an open question. Should the AI agents be more helpful in the beginning, or less helpful? Finding the “sweet spot” for fostering trust without over-reliance on the technology poses a challenge for human/AI hybrid teaming researchers and instructors alike.
Sonalysts is currently conducting internal research in this area. For more information, please contact Dr. Lillian Asiala at lasiala@sonalysts.com.
References
Groom, V., & Nass, C. (2007). Can robots be teammates?: Benchmarks in human–robot teams. Interaction Studies, 8(3), 483-500.
Saavedra, R., Earley, P. C., & Van Dyne, L. (1993). Complex interdependence in task-performing groups. Journal of Applied Psychology, 78(1), 61.
Zhang, R., McNeese, N. J., Freeman, G., & Musick, G. (2021). “An ideal human”: Expectations of AI teammates in human-AI teaming. Proceedings of the ACM on Human-Computer Interaction, 4(CSCW3), 1-25.
Connect with the HFES TTG: LinkedIn Group | Website | HFES Connect
Questions? Email our Communications Chair: KLEBERJ@erau.edu