The Implementation Gap: What Actually Works When Hospitals Deploy AI
NeuroEdge Nexus — Season 1, Week 4 (October 2025) PART 2
The greatest barrier is not technical failure, but the failure to translate potential into practice
Last article, we examined why regulatory frameworks block even validated AI from reaching patients. But regulation is only one barrier. Even when AI tools receive FDA approval, most never reach clinical practice. This week: what a Dutch academic medical center built over three years to solve infrastructure, workflow, and cultural challenges—and what it means for neurology.
The Problem Beyond Regulation
Last week, we discussed why regulatory compliance consumes 6-12 months per AI application (Kim et al., 2024), why neural data sovereignty remains unresolved, and why FDA approval pathways aren't designed for continuously learning algorithms.
But here’s the uncomfortable truth: Even when AI tools have regulatory approval, they still don’t deploy.
Multiple FDA-cleared AI algorithms exist for stroke detection, yet adoption rates remain below 30% in eligible hospitals. The barrier isn’t regulation—it’s infrastructure, workflow disruption, and organizational resistance.
Between 2019 and 2022, a large academic medical center in the Netherlands spent three years systematically addressing these barriers. Their experience—documented through 43 days of observations, 30 meetings, 18 interviews, and 41 analyzed documents—reveals what actually works when hospitals attempt to deploy AI at scale (Kim et al., 2024).
The lessons apply directly to neurology, where promising AI tools for MS monitoring, seizure detection, and Parkinsonian disorder classification face identical implementation challenges.
Barrier #1: Infrastructure Hell (The 18-24 Month Problem)
Before the Dutch hospital established centralized infrastructure, deploying a single AI application required 18-24 months from initial approval to clinical use—assuming everything proceeded smoothly (Kim et al., 2024).
Here’s the breakdown:
Legal and Contractual Phase: 6-12 months
Individual data processing agreements with each AI vendor (Kim et al., 2024)
Privacy and security reviews for each application (Kim et al., 2024)
Separate contractual negotiations addressing liability, performance guarantees, and long-term support (Kim et al., 2024)
Vendor risk assessments evaluating financial stability (Kim et al., 2024)
GDPR compliance documentation for data retention and patient rights (Kim et al., 2024)
Technical Integration: 3-12 months
PACS system integration—each vendor (Sectra, Philips, GE, Siemens) requires custom API development (Kim et al., 2024)
HL7/FHIR data pipeline configuration to connect EHR data with AI inputs (a minimal sketch follows this breakdown)
Clinical front-end software modification to display AI results within existing workflows
API testing, debugging, and version compatibility management
Local Validation: 3-6 months
Retrospective analysis on institution-specific data to verify algorithm performance on local scanner protocols
Performance verification across patient population characteristics
Algorithm tuning for site-specific variations
Total: 18-24 months per application—before radiologists or neurologists ever use it clinically.
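To make the pipeline work concrete, here is a minimal sketch of the kind of HL7/FHIR plumbing this item involves: pulling patient and imaging-study context from an EHR's FHIR endpoint and packaging it as input metadata for an AI application. The endpoint URL, resource selection, and field names are illustrative assumptions, not the Dutch hospital's actual configuration, and a real deployment would add authentication, error handling, and vendor-specific input mapping.

```python
# Minimal sketch (assumed FHIR R4 endpoint): fetch EHR context and package it
# as metadata for an AI application. Base URL and field choices are hypothetical.
import requests

FHIR_BASE = "https://fhir.example-hospital.nl/R4"  # hypothetical endpoint
HEADERS = {"Accept": "application/fhir+json"}

def build_ai_input(patient_id: str) -> dict:
    # Basic demographics, e.g. for adult-only eligibility checks.
    patient = requests.get(
        f"{FHIR_BASE}/Patient/{patient_id}", headers=HEADERS, timeout=30
    ).json()

    # Most recent imaging study, so the platform knows modality and protocol context.
    studies = requests.get(
        f"{FHIR_BASE}/ImagingStudy",
        params={"patient": patient_id, "_sort": "-started", "_count": "1"},
        headers=HEADERS,
        timeout=30,
    ).json()

    entries = studies.get("entry") or [{}]
    latest = entries[0].get("resource", {})
    return {
        "patient_id": patient_id,
        "birth_date": patient.get("birthDate"),
        "sex": patient.get("gender"),
        "modality": [m.get("code") for m in latest.get("modality", [])],
        "study_description": latest.get("description"),
    }
```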
Most AI tools never survive this timeline. Vendor contracts expire. Institutional priorities shift. Clinical champions lose patience. The innovation dies not from technical failure but from organizational exhaustion.
The Solution: Vendor-Neutral AI Platforms
The Dutch hospital implemented a vendor-neutral AI (VNAI) platform that centralized data processing, legal frameworks, and technical integration. This platform enabled upscaling and streamlining of AI implementations by automatically routing eligible scans to relevant AI applications based on metadata like imaging modality, scanning protocol, and patient age (Kim et al., 2024).
What this meant in practice:
Centralized Legal Framework: The VNAI acted as a centralized data processor for all AI applications hosted on the platform, reducing the need for separate privacy and security paperwork with individual vendors. An overarching regulatory framework was established for all AI projects (Kim et al., 2024).
Standardized Technical Integration: Instead of custom API development for each AI tool, the VNAI provided standardized interfaces that AI vendors could plug into. Security reviews, contractual standardization, and technical installation were handled centrally (Kim et al., 2024).
Accelerated Deployment: Once the VNAI became operational, new AI applications could be deployed in a couple of months, whereas stand-alone applications had previously taken more than a year (Kim et al., 2024).
Cost Reduction: Because the platform hosted only certified AI applications, the hospital could quickly install and test credible tools without redundant validation processes. It also moved from an on-premise to a cloud-based VNAI to ease software upgrades and shift maintenance work to the vendor.
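As a rough illustration of the metadata-based routing described above, the sketch below shows how a platform might decide which hosted applications receive a given scan. The rule structure, thresholds, and application names are invented for illustration; the paper does not publish the VNAI's actual routing logic.

```python
# Illustrative sketch of metadata-based routing on a vendor-neutral AI platform:
# each hosted application declares eligibility rules, and incoming studies are
# matched on DICOM-style metadata. App names and rules are hypothetical.
from dataclasses import dataclass

@dataclass
class HostedApp:
    name: str
    modalities: set        # e.g., {"MR"}, {"CT"}
    protocols: set         # substrings expected in the protocol name
    min_age: int = 0
    max_age: int = 130

    def accepts(self, study: dict) -> bool:
        return (
            study["modality"] in self.modalities
            and any(p in study["protocol"].upper() for p in self.protocols)
            and self.min_age <= study["patient_age"] <= self.max_age
        )

REGISTRY = [
    HostedApp("ms_lesion_quantification", {"MR"}, {"FLAIR"}, min_age=18),
    HostedApp("stroke_lvo_triage", {"CT"}, {"ANGIO", "CTA"}, min_age=18),
    HostedApp("chest_xray_normal_abnormal", {"CR", "DX"}, {"CHEST"}),
]

def route(study: dict) -> list:
    """Return the names of hosted applications eligible for this study."""
    return [app.name for app in REGISTRY if app.accepts(study)]

# Example: an adult FLAIR brain MRI is routed to the MS lesion tool only.
print(route({"modality": "MR", "protocol": "Brain FLAIR 3D", "patient_age": 42}))
```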
The lesson for neurology: Individual algorithm deployment is unsustainable. Healthcare systems require AI infrastructure—platforms that handle integration, validation, and monitoring at scale.
Imagine a neurology department attempting to deploy AI for:
MS lesion quantification (FLAMeS or similar)
Stroke triage algorithms
Seizure detection in EEG
Parkinsonian gait analysis
Without centralized infrastructure, each requires a separate 18-24 month implementation cycle. With a vendor-neutral platform, the marginal deployment time for each additional tool drops to a few months.
Barrier #2: Workflow Disruption (Why Radiologists Reject Accurate AI)
Even when AI applications were technically functional, radiologists often refused to use them if integration was poor. The study documented that “superficial URL integration”—where AI results opened in a separate browser window—was consistently rejected by clinicians as workflow-disruptive, even when the AI itself was accurate.
Why? Radiologists and neurologists operate in tightly optimized workflows. Any tool requiring:
Separate logins
Additional browser windows
Manual data export/import
Extra clicks beyond core workflow
...is abandoned within weeks, regardless of accuracy.
Radiologists at the Dutch hospital demanded:
Seamless integration into existing PACS viewers
AI results accessible with minimal additional clicks
Modifiable outputs (not fixed PDFs)
Real-time interaction capability
Achieving this required extensive collaboration. In one documented case, making AI lung nodule detection results modifiable—so radiologists could accept, reject, or adjust measurements directly in their PACS viewer—added several months to the integration timeline, because it required configuring APIs and communication standards between the AI software and multiple PACS vendors.
But after this investment: Radiologists were substantially more willing to use the tool. They did not reject AI because they mistrusted algorithms—they rejected AI that disrupted their workflow.
The finding: User experience and seamless integration were prerequisites for adoption, not optional enhancements.
The lesson for neurology: AI for EEG seizure detection that requires exporting waveforms to a separate platform will fail. AI for MS monitoring that generates PDFs outside the radiology workstation will be ignored. The interface matters as much as the accuracy.
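To make "modifiable outputs" concrete, here is a minimal sketch of the kind of result structure a platform could hand back to the viewer: the AI's proposed measurement stays editable, and only the clinician's reviewed value is committed to the report. The field names and workflow states are assumptions for illustration, not the format used in the Dutch deployment.

```python
# Sketch of a modifiable AI finding: the algorithm proposes a measurement,
# the radiologist accepts, rejects, or adjusts it, and only the reviewed value
# flows into the report. Field names and states are hypothetical.
from dataclasses import dataclass
from typing import Optional

@dataclass
class NoduleFinding:
    location: str
    ai_diameter_mm: float                      # AI-proposed measurement
    reviewed_diameter_mm: Optional[float] = None
    status: str = "proposed"                   # proposed | accepted | adjusted | rejected

    def accept(self) -> None:
        self.reviewed_diameter_mm = self.ai_diameter_mm
        self.status = "accepted"

    def adjust(self, diameter_mm: float) -> None:
        self.reviewed_diameter_mm = diameter_mm
        self.status = "adjusted"

    def reject(self) -> None:
        self.reviewed_diameter_mm = None
        self.status = "rejected"

# A static PDF allows none of this; an editable finding keeps the human in the loop.
finding = NoduleFinding(location="right upper lobe", ai_diameter_mm=6.4)
finding.adjust(7.1)
assert finding.status == "adjusted" and finding.reviewed_diameter_mm == 7.1
```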
Barrier #3: Organizational Culture (The Trust and Expertise Gap)
The study identified “divergent expectations and limited experience with AI” among radiologists as a fundamental barrier. One radiology resident noted: “There were a lot of talks, a lot of presentations going on about AI, but in practice, you never get in touch with AI” (Kim et al., 2024).
Clinicians had heard about AI’s promise for years. But without hands-on experience, they remained skeptical, uncertain about appropriate use cases, and unable to distinguish credible tools from hype.
The Dutch hospital built three organizational structures to address this:
1. Clinical AI Implementation Group (CAI Group)
A multidisciplinary team of data scientists, legal experts, ethicists, and clinicians that:
Assessed viability of proposed AI projects
Provided checklists for the entire AI lifecycle
Evaluated whether AI was actually necessary—or if simpler technology would suffice
Disseminated lessons learned across departments
Why this mattered: Not every clinical problem requires AI. The CAI Group prevented wasted resources on projects where traditional software or process improvements would suffice.
2. AI Champions Network
Representatives from each subspecialty (neuroradiology, cardiology, musculoskeletal imaging, etc.) who:
Gathered use case ideas through a bottom-up approach
Set realistic expectations with colleagues
Facilitated peer-to-peer teaching
Created feedback loops from implementation back to future decisions
Why this mattered: Top-down mandates (“You will use this AI tool”) fail. Clinicians trust peers more than administrators. The champions network enabled organic adoption driven by clinical need rather than institutional decree.
3. Image Processing Group (IPG)
A centralized hub where nine specialized radiographers and two technical physicians deployed AI tools in standardized, protocol-driven ways. Radiographers analyzed images and pre-populated reports before radiologists reviewed cases. Technical physicians validated algorithm performance and continuously monitored quality.
Why this mattered: The IPG fostered technology expertise by centralizing AI use. Radiographers became highly skilled at using AI tools efficiently, while technical physicians—with backgrounds in both medicine and engineering—coordinated stakeholders within and beyond the hospital.
For example, technical physicians collaborated with an AI vendor to retrospectively analyze over 15,000 chest X-rays from the hospital to validate an AI application for normal/abnormal detection. They created a dashboard to automatically compare AI results with radiology reports, enabling quality monitoring.
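As a rough sketch of the kind of comparison such a dashboard performs, the code below checks AI normal/abnormal flags against a simple reading of the corresponding radiology report and tallies agreement. The keyword heuristic and record format are simplifying assumptions; the actual dashboard's matching logic is not described in the paper.

```python
# Illustrative sketch: compare AI normal/abnormal output against radiology
# report conclusions and tally agreement for quality monitoring.
from collections import Counter

ABNORMAL_TERMS = ("consolidation", "nodule", "effusion", "pneumothorax", "mass")

def report_is_abnormal(report_text: str) -> bool:
    text = report_text.lower()
    return any(term in text for term in ABNORMAL_TERMS)

def agreement_summary(records: list[dict]) -> Counter:
    """records: [{'ai_abnormal': bool, 'report_text': str}, ...]"""
    tally = Counter()
    for rec in records:
        ai, rad = rec["ai_abnormal"], report_is_abnormal(rec["report_text"])
        if ai and rad:
            tally["true_positive"] += 1
        elif ai and not rad:
            tally["ai_only_flag"] += 1   # candidate false positives to review
        elif not ai and rad:
            tally["ai_missed"] += 1      # candidate false negatives to review
        else:
            tally["true_negative"] += 1
    return tally

# Example with two synthetic cases.
print(agreement_summary([
    {"ai_abnormal": True, "report_text": "Right lower lobe consolidation."},
    {"ai_abnormal": False, "report_text": "No acute cardiopulmonary abnormality."},
]))
```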
The lesson for neurology: Cultural change requires dedicated personnel, cross-functional teams, and mechanisms for continuous learning. AI adoption is as much a people problem as a technology problem.
A neurology department attempting to deploy seizure detection AI needs:
Neurophysiologists who understand both EEG interpretation and algorithmic limitations
EEG technologists trained in AI-assisted workflows
A feedback system where neurologists report when AI fails—and those lessons improve future deployments
Without this organizational infrastructure, even excellent AI tools will be underutilized or abandoned.
The Paradigm Shift: Four Layers of Change
The Dutch study concluded that successful AI implementation requires systemic change across multiple dimensions simultaneously (Kim et al., 2024):
Infrastructure Layer
Vendor-neutral platforms for AI deployment
Standardized APIs for medical imaging and EHR integration
Cloud-native architectures for scalability
Centralized security and legal frameworks
Workflow Layer
Seamless integration with existing clinical software
User-centered interface design
Modifiable AI outputs with human oversight
Real-time interaction capabilities
Organizational Layer
Multidisciplinary implementation teams
Dedicated technical expertise
Cross-departmental knowledge sharing
Bottom-up use case development
Governance Layer
Clear institutional AI strategies
Ethical review processes
Resource allocation frameworks
Quality monitoring and post-market surveillance systems
Institutions that align these four dimensions holistically achieve sustainable AI adoption. Those inserting AI into legacy systems without structural reform achieve only transient gains—and often revert to pre-AI workflows.
The Timeline: What “Success” Actually Looks Like
The Dutch hospital followed AI implementation over three years before documenting sustainable success. Even with deliberate institutional investment, meaningful AI adoption is measured in years, not months.
Year 1: Infrastructure development, organizational structures established
Year 2: Initial AI applications deployed, lessons learned documented
Year 3: Acceleration of additional applications, sustainable workflows emerging
This is not failure. This is realistic.
For neurology departments considering AI adoption:
If you’re starting from scratch: Expect 2-3 years to establish infrastructure, workflows, and expertise
If you have some infrastructure: Expect 12-18 months for first applications, 6-12 months for subsequent tools
If you’re well-established: Expect 3-6 months for new applications within existing frameworks
The key insight: Early infrastructure investment accelerates all future deployments. The Dutch hospital's VNAI platform reduced deployment time from 18-24 months to 2-3 months for new applications (Kim et al., 2024).
Implications for Neurology
These barriers aren’t unique to radiology. Neurology faces identical challenges:
MS Lesion Monitoring: Tools like FLAMeS require PACS integration, radiologist workflow adaptation, and neurologist training to interpret AI-generated quantifications. Without infrastructure, deployment will take years.
Stroke Triage AI: FDA-cleared large vessel occlusion detection algorithms exist, yet adoption remains low because:
Emergency departments lack technical expertise to deploy them
Integration with CT scanners and stroke team notifications is complex
Liability concerns about missed strokes persist
Seizure Detection in EEG: Automated algorithms must integrate with EEG review software, provide modifiable outputs, and include neurophysiologist feedback loops—all requiring workflow redesign.
Parkinsonian Disorder Classification: AI-assisted diagnosis requires movement disorder specialists who trust the tool, understand its limitations, and know when to override it—a cultural challenge, not a technical one.
The pattern is consistent: Technical algorithm performance is necessary but insufficient. Infrastructure, workflow integration, and organizational culture determine whether AI reaches patients.
What Neurologists Should Do Now
1. Stop Waiting for Perfect AI
The algorithms are ready. Tools such as FLAMeS, stroke detection algorithms, and seizure detection AI report human-level or better performance in published evaluations. Waiting for “better AI” is not the bottleneck.
2. Invest in Infrastructure Before Algorithms
Build vendor-neutral platforms, standardized data pipelines, and centralized technical teams before selecting individual AI tools. The infrastructure will accelerate all future deployments.
3. Design for Workflow Integration From Day One
Engage end-users (neurologists, EEG technologists, radiographers) from the beginning. Seamless integration is non-negotiable.
4. Build Multidisciplinary Teams
AI deployment requires neurologists, data scientists, engineers, legal experts, and ethicists working collaboratively. Institutions need dedicated personnel with these competencies.
5. Establish Continuous Evaluation
Deploy AI with monitoring systems that track performance over time, detect drift, and enable rapid response to quality concerns.
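One simple, hedged example of what such monitoring can look like: track the AI's positive-finding rate over a rolling window and flag deviations from the rate observed during local validation. The window size and tolerance below are placeholders, not validated values, and an alert should trigger human review rather than any automatic action.

```python
# Minimal sketch of drift monitoring: compare the AI's recent positive-finding
# rate against the locally validated baseline and flag large deviations.
from collections import deque

class PositiveRateMonitor:
    def __init__(self, baseline_rate: float, tolerance: float = 0.05, window: int = 500):
        self.baseline_rate = baseline_rate   # rate observed during local validation
        self.tolerance = tolerance           # allowed absolute deviation (placeholder)
        self.recent = deque(maxlen=window)   # rolling window of 0/1 outcomes

    def record(self, ai_positive: bool) -> None:
        self.recent.append(1 if ai_positive else 0)

    def drift_detected(self) -> bool:
        if len(self.recent) < self.recent.maxlen:
            return False                     # wait for a full window before judging
        current_rate = sum(self.recent) / len(self.recent)
        return abs(current_rate - self.baseline_rate) > self.tolerance

# Usage: feed each prediction in; an alert prompts human review of recent cases.
monitor = PositiveRateMonitor(baseline_rate=0.12)
for flag in (True, False, False, True):
    monitor.record(flag)
print(monitor.drift_detected())  # False until a full window has accumulated
```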
6. Set Realistic Timelines
Expect 2-3 years for meaningful AI adoption. Anyone promising faster deployment is either overselling or underestimating complexity.
Conclusion: Systems Thinking Required
The Dutch case demonstrates that clinical AI implementation is possible—but it requires fundamentally rethinking how healthcare institutions approach technology adoption.
Impressive algorithms are necessary but insufficient. The infrastructure, workflow integration, organizational culture, and governance frameworks must evolve alongside algorithmic development. In practice, that means:
Building AI platforms before deploying individual tools
Redesigning workflows to accommodate AI seamlessly
Training neurologists, technologists, and administrators in AI capabilities and limitations
Establishing quality monitoring systems from day one
Accepting that meaningful adoption takes years, not months
The opportunity exists. The Dutch case proves that institutions willing to invest in holistic, long-term infrastructure can successfully deploy AI at scale.
The question is whether healthcare systems have the organizational capacity, financial resources, and institutional patience required for this transformation.
The answer to that question will determine whether the next generation of neurological care is defined by human-AI collaboration—or by the persistent gap between algorithmic promise and clinical reality.
References
Kim B, Romeijn S, van Buchem M, Mehrizi MHR, Grootjans W. A holistic approach to implementing artificial intelligence in radiology. Insights Imaging. 2024;15:22. doi:10.1186/s13244-023-01586-4
Dereskewicz E, La Rosa F, Dos Santos Silva J, et al. FLAMeS: A Robust Deep Learning Model for Automated Multiple Sclerosis Lesion Segmentation. medRxiv [Preprint]. 2025. PMC12140514.
Editorial Note
Citation Standards: All references have been independently verified through PubMed, PubMed Central, and publisher databases. The primary anchor study (Kim et al., 2024) is peer-reviewed and open access. The FLAMeS study is available in PubMed Central (PMC12140514) pending formal journal publication; claims regarding this work are qualified appropriately throughout this analysis.
Scope: This article uses specific examples (neuroimaging AI, MS lesion segmentation) to illustrate broader systemic challenges in clinical AI implementation. The lessons derived are relevant across medical specialties but may manifest differently in various institutional and clinical contexts.
Perspective: This analysis reflects a systems-thinking approach to healthcare technology adoption, emphasizing that technical algorithm performance is necessary but insufficient for clinical impact. Regulatory, infrastructural, organizational, and cultural dimensions are equally critical determinants of success.
NeuroEdge Nexus examines the intersection of neuroscience, technology, and healthcare systems—identifying not just what is possible, but what is required for meaningful clinical translation.