High-stakes professions face a troubling paradox: they’ve built elaborate systems to transfer critical expertise from master practitioners to the next generation, yet they can’t reliably measure whether these systems actually work. This isn’t just a theoretical concern. It’s playing out right now across industries where institutional knowledge determines life-or-death outcomes.

A 2025 scoping review published in *Gerontology & Geriatrics Education* examined dozens of validated geriatrics knowledge assessment tools used in medical education. The startling finding? None of these instruments directly measured competency in Age-Friendly Health Systems principles, the framework now guiding geriatric care. Here’s a field with strong assessment traditions that can’t validate whether its trainees acquire the competencies it considers essential.

This gap points to a wider problem across professions that build knowledge transfer programs. Some, like surgical fellowships, demonstrate a complete structural model. Others, like crisis-tested operational systems, produce measurable outcomes without revealing their mechanisms. And cross-domain integration exposes a layer of complexity whose underlying architecture remains largely undisclosed.

Each of these contexts illustrates a different dimension of knowledge transfer, and together they underscore the need for validated measurement tools that can confirm capabilities actually transfer.

The Proximity Fallacy

There’s a common assumption that complex skills can be learned by working alongside experienced practitioners. But proximity gives you observation, not the specialised expertise that’s built from tacit knowledge – pattern recognition, decision-making under uncertainty, and knowing when to break from standard approaches. Tacit knowledge stays locked inside experienced practitioners unless you’ve got deliberate mechanisms forcing its articulation and transfer.

You’d think watching a master at work would unlock their secrets. Actually, it often does the opposite – observers see actions but miss the decades of pattern recognition guiding each choice.

The gap becomes obvious when junior practitioners copy procedures without acquiring the underlying expertise that enables adaptation when standard approaches fail. Real knowledge transfer needs deliberate structural elements beyond just hanging around experienced practitioners.

High-stakes professions that successfully preserve expertise use specific architectural components. These include volume that creates pattern recognition, forced articulation that converts tacit to explicit knowledge, and accountability mechanisms that validate transfer. Surgical training shows the clearest example of this complete architecture.

The Complete Architecture of Expertise Transfer

Pattern recognition in high-stakes specialties requires experiencing variations across hundreds of cases, while tacit knowledge must be converted to explicit principles through deliberate mechanisms. Execution under supervision differs fundamentally from watching procedures – errors receive immediate correction, and near-misses become learning moments. Sufficient repetition enables practitioners to recognise subtle variations, anticipate complications, and adapt techniques to individual presentations.

This requires structured training programs where trainees execute procedures under direct supervision with sufficient volume to build pattern recognition, combined with requirements that force articulation of developing expertise into explicit principles.

One approach to building this dual architecture is demonstrated by Dr Timothy Steel, a neurosurgeon and minimally invasive spine surgeon at St Vincent’s Private Hospital and St Vincent’s Public Hospital in Sydney, who directs a Spine Surgery Fellowship that integrates both volume and articulation requirements. Fellows assist across approximately 500 procedures annually during the 6–12 month program, receiving hands-on exposure to minimally invasive decompression, fusion techniques, and vertebral reconstruction under his direct supervision. Steel has held a consultant appointment since 1998, and a career total exceeding 8,000 minimally invasive spine procedures informs the fellowship’s scope. Alongside this clinical training, the fellowship requires fellows to complete two research projects to final-draft level during the program.

The threshold of roughly 500 procedures provides sufficient exposure to anatomical variations, complication presentations, and the technique adaptations required for different patient contexts. Direct oversight allows intervention before errors compound, converting potential complications into corrective learning experiences. Sure, it’s intensive – but that’s exactly what converts near-misses into permanent learning rather than forgotten close calls. Fellows make decisions and perform actions with immediate feedback, building confidence alongside competence. Simultaneously, the research requirement forces fellows to articulate surgical principles in written form: why a given approach achieves its outcomes, which variables affect success, and how modifications change results. Writing research exposes vague understanding, requiring fellows to examine what they’re learning through supervised procedures and convert experiential pattern recognition into communicable frameworks.

This structured combination shows how effective knowledge transfer keeps expertise from remaining tacit: accountable execution builds pattern recognition, while the research requirement forces practitioners to examine and communicate their developing expertise, converting tacit knowledge into explicit principles. But what happens when stable training conditions suddenly disappear?

When Crisis Tests the System

Stable conditions hide the cracks in how organisations pass on knowledge. Everything looks fine until operations scale fast or disruption hits workflows. That’s when informal mentorship networks snap – senior practitioners get swamped, standard training becomes impossible. Organisations that’ve documented their operational knowledge and built structured training survive the pressure. Those banking on proximity-based learning don’t.

This demands documented operational systems and structured training frameworks. Knowledge transfer has to work reliably even when standard mentorship networks collapse during crisis.

CSL Limited shows what this resilience looks like. Paul McKenzie, who became Chief Executive Officer and Managing Director of CSL Limited in March 2023, worked on this challenge while serving as Chief Operating Officer from 2019. Under his operational leadership, CSL Plasma navigated COVID-19 challenges and ultimately surpassed pre-pandemic plasma collection volumes. The pandemic disrupted plasma collection operations globally – donor hesitancy, staffing constraints, safety protocol complications. McKenzie’s three-decade background in biotechnology operations informed his approach to operational systems.

Operational knowledge transferred reliably at scale during crisis. This suggests documented protocols, training systems, or operational frameworks enabled resilience. We can see it worked without seeing exactly how it worked.

The specific mechanisms enabling CSL Plasma’s crisis performance remain less visible in public materials than the outcome itself. But the result demonstrates that documented operational systems enable knowledge transfer to withstand crisis pressure. This contrasts sharply with informal mentorship networks that break under disruption when senior practitioners become overwhelmed and standard pathways become unavailable. Crisis testing reveals system strength in single domains, but what happens when multiple specialised fields must integrate their expertise?

Crossing Domain Boundaries

When challenges need multiple specialties, you’ve got to bring together practitioners who’ve been trained in completely different domains. They use different frameworks, speak different professional languages, and operate under different assumptions. Aerospace engineers and telecommunications specialists don’t just approach problems differently – they think differently. You can’t bridge these gaps with standard mentorship structures.

Cross-domain transfer gets messy fast. It’s not like preserving knowledge within a single specialty.

You need integration mechanisms that can actually bridge these different professional frameworks. The goal? Getting expertise to move across those traditional boundaries between specialised fields.

James Taiclet at Lockheed Martin works on this cross-domain challenge through the 21st Century Security initiative. Taiclet, who’s been chairman, president, and CEO of Lockheed Martin since June 2020, focuses on integrating leaders from the aerospace, defence, technology, and telecommunications industries. The initiative aims to enhance national defence capabilities through advanced digital and physical technologies, delivering integrated deterrence for the United States and its allies by bringing together diverse expertise toward common defence objectives. Taiclet’s more than 5,000 flying hours as a U.S. Air Force officer and pilot, including Gulf War service, provide deep operational military expertise that informs his corporate leadership approach.

The 21st Century Security initiative emphasises the integration goal and participating sectors. But here’s what’s missing: specific architectural elements that actually enable cross-domain knowledge transfer. Are they using joint programs? Rotational assignments? Shared analytical frameworks? The public descriptions don’t spell out these structural mechanisms.

Maybe that’s the point. The complexity of bridging multiple professional frameworks might explain why cross-domain architecture isn’t as standardised as surgical fellowship models. It’s probably more context-specific.

Taiclet’s initiative shows us that cross-domain knowledge integration creates architectural complexity you don’t see in single-domain transfer. You need mechanisms that can bridge different professional frameworks. But even as professions develop these sophisticated transfer architectures, a fundamental question remains: how do we know if any of them actually work?

Measuring What We Cannot See

Medical education runs elaborate training programs. Yet it lacks validated tools to check whether trainees actually acquire critical competencies. The 2025 scoping review by Varma, Quinlan, Bain, and Schwartz, published in *Gerontology & Geriatrics Education* (Epub June 2025; October–December 2025 issue), systematically reviewed validated geriatrics knowledge assessment instruments used among physicians in training and practice.

Researchers reviewed 36 studies of validated assessment tools. They found substantial variation in what each instrument measured. Critically, no instruments directly aligned with Age-Friendly Health Systems competencies, specifically the 5Ms framework (What Matters, Medication, Mentation, Mobility, Multicomplexity) now guiding geriatric care.

Thirty-six different ways to measure geriatric knowledge. Not one measures what the field considers essential.

Medical education has strong assessment traditions. If they can’t create validated tools aligned with their own competency frameworks, what does that say about other professions? Other high-stakes professions are likely facing similar or worse measurement gaps – possibly without even realising it.

Consider this: a profession might operate sophisticated knowledge transfer programs that look successful based on completion rates or participant satisfaction. Meanwhile, actual competency transfer quietly fails. Everyone gets certificates, everyone feels good, and nobody acquires the critical expertise. This failure stays hidden until a complex case reveals the gap between credentials and capability.

The geriatrics assessment gap exposes a fundamental weakness across professional knowledge transfer. Even fields with structured programs often can’t validate whether those systems work. Why? Validated measurement tools don’t exist or don’t align with current competency requirements. Expertise erosion between generations may occur silently until capability gaps create visible failures.

Rethinking Knowledge Transfer Validation

High-stakes professions need deliberately architected knowledge transfer systems. Dr Timothy Steel’s surgical fellowship demonstrates volume building pattern recognition and research requirements forcing articulation. Looking across contexts also reveals systems whose mechanisms remain only partly visible (McKenzie’s crisis-tested operations) and validation tools that are critically inadequate.

Professions develop elaborate architectural responses to knowledge transfer – fellowships, operational programs – then operate these systems without validated instruments to assess effectiveness. We’re building increasingly sophisticated transfer programs while flying blind about whether they actually work. Fields may be implementing transfer programs that quietly fail as master practitioners retire without successfully transferring capabilities to successors.

Perhaps the first step in preserving expertise across generations isn’t building more elaborate transfer programs but developing instruments to determine whether existing programs work. Until professions can measure knowledge transfer effectiveness with validated tools, they’re operating training systems that might preserve critical expertise – or generate credentialed practitioners who lack specialised capabilities.

Just like those 36 geriatrics assessment tools that missed the essential framework, we may be measuring everything except what matters. The gap between elaborate transfer architecture and actual validation remains the unseen weakness in expertise preservation.