The Hospital That Got AI Wrong, and What Hospitality Must Learn From It
Sol Rashidi shared a case study at MURTEC 2026 that did not come from hospitality. It came from healthcare. And it should be required reading for every hospitality technology leader deploying AI in any guest-facing or operationally consequential workflow.
The case study is a precise illustration of how a technically successful AI deployment can become an operationally damaging one, not because the AI failed, but because the humans surrounding it were positioned incorrectly in the workflow. The lesson translates directly to hospitality, where the same positioning error is being repeated at scale.
The Case Study: When AI Works and Things Still Go Wrong
A hospital implemented AI to accelerate cancer diagnosis. The results on the technology side were genuine: diagnosis time dropped from three weeks to two days. By any measure of AI performance, the deployment was a success.
But within three months, the hospital’s liabilities had increased by 33%. The AI was not at fault. The workflow design was.
The hospital had positioned doctors at the end of the AI workflow rather than at the beginning. Clinicians were reviewing AI-generated diagnostic reports after the fact, rubber-stamping outputs they had played no part in producing, rather than validating findings at the point where their judgment could still shape them. That gap between AI generation and human validation is where the liability exposure accumulated.
The fix was not to remove the AI. It was to reposition the clinicians. Doctors moved from authors to editors, validating AI findings upstream rather than reviewing AI outputs downstream. Liabilities came back down. Diagnosis speed stayed fast.
Why This Pattern Is Everywhere in Hospitality
The hospital’s mistake is not unusual. It reflects a nearly universal pattern in enterprise AI deployment: organizations build AI into their workflows at the point of output generation and position humans at the point of output review. The AI does the work. The human approves it. The accountability for what happens between generation and approval is diffuse, and that diffusion is where risk accumulates.
In hospitality, this pattern appears in AI-generated guest communications, where a junior team member reviews and approves outbound messages without the context or authority to meaningfully edit them.
It appears in revenue management, where AI-generated pricing recommendations are reviewed by analysts who lack the time or the data access to interrogate the model’s assumptions before the rate goes live.
It appears in demand forecasting, where AI-generated projections are reviewed in a weekly meeting by a leadership team that cannot easily identify when the model is extrapolating from anomalous historical data.
In each case, the AI is doing what it was designed to do. The human review is real but structurally inadequate. And the accountability gap between AI generation and human validation is the same gap that produced the hospital’s liability problem.
Editors, Not Authors
For hospitality operators trying to redesign their deployments, Rashidi's reframing of the human role in AI workflows is the most practically useful concept in her MURTEC keynote.
The shift is from humans as authors who are replaced by AI, to humans as editors who are repositioned upstream. An editor does not simply approve what has been written. An editor shapes what gets written by establishing the context, constraints, and quality standards the writer, in this case the AI, must work within.
In hospitality terms, this means a revenue manager does not review an AI-generated rate recommendation and approve or reject it. A revenue manager defines the strategic parameters, competitive context, and relationship considerations the AI must incorporate before generating its recommendation. The AI produces within those parameters. The revenue manager edits toward the output, not after it.
The practical difference is significant. An editor positioned upstream can catch a model’s blind spots before they produce a guest-facing error. A reviewer positioned downstream catches them after the damage is done, if they catch them at all.
The Workforce Dimension
The hospital case study involves clinical liability. The hospitality equivalent involves something equally consequential: the guest relationship.
Rashidi identified four human capabilities that AI cannot replicate: creativity, connection, accountability, and integration. The entire value proposition of hospitality as an industry is built on the first two. A guest does not remember the algorithm that optimized their room assignment. They remember the front desk associate who recognized them by name on their fifth stay and upgraded them without being asked.
When AI is positioned as the author of guest interactions and humans are positioned as downstream reviewers, the accountability for relationship quality becomes diffuse in exactly the way the hospital’s clinical accountability became diffuse. The AI finds the most efficient path. The human determines whether it is the right path for this guest, this relationship, and this context. That determination only happens effectively when the human is upstream, not downstream.
The service recovery moment that turns a dissatisfied guest into a lifelong advocate is not algorithmic. It requires the kind of contextual judgment, empathy, and creative problem-solving that Rashidi identified as irreducibly human. AI can surface the information that makes that moment possible. Only the human can execute it.
The Practical Redesign
Moving humans from downstream reviewers to upstream editors requires deliberate workflow redesign, not just a change in policy. Three principles apply directly to hospitality AI deployments.
First, identify the decision points in each AI workflow where human context materially changes the output. In revenue management, that is the point where competitive intelligence, ownership expectations, and relationship considerations need to enter the model. In guest communications, that is the point where tone, history, and relationship sensitivity need to shape the message. Position humans at those points, not at the end of the process.
Second, staff the upstream role with people who have the authority and context to actually change what the AI produces. A junior associate reviewing AI-generated outputs without the authority to edit them substantively is a downstream reviewer regardless of where they sit in the workflow diagram.
Third, build the feedback loop that allows upstream human input to improve the model over time. The editor-AI relationship should be iterative. If the revenue manager’s upstream constraints consistently produce better outcomes than the model’s unguided recommendations, the model should learn from that pattern. That learning only happens if the feedback architecture is built deliberately.
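The upstream-editor pattern behind these three principles can be made concrete in code. The sketch below is purely illustrative: `RateConstraints` and `generate_rate` are hypothetical names, not any vendor's revenue management API, and the clamping logic stands in for whatever parameterization a real model would accept. The point is structural: the human's input enters before generation, so the AI produces within human-set bounds rather than producing freely and awaiting downstream approval.

```python
from dataclasses import dataclass

@dataclass
class RateConstraints:
    """Upstream parameters a revenue manager sets BEFORE the AI generates a rate.
    These names and fields are illustrative, not a real system's schema."""
    floor: float             # minimum acceptable rate (ownership expectations)
    ceiling: float           # maximum rate given competitive positioning
    loyalty_discount: float  # relationship consideration for repeat guests

def generate_rate(model_suggestion: float, c: RateConstraints) -> float:
    """The AI's raw suggestion is shaped by human-set constraints at generation
    time, not reviewed and overridden after the rate is already live."""
    rate = max(c.floor, min(c.ceiling, model_suggestion))
    return round(rate * (1 - c.loyalty_discount), 2)

# The manager "edits toward the output" by setting constraints first:
constraints = RateConstraints(floor=120.0, ceiling=240.0, loyalty_discount=0.10)
print(generate_rate(260.0, constraints))  # suggestion clamped to ceiling, then discounted: 216.0
```

In the downstream-reviewer anti-pattern, `generate_rate` would take no constraints at all, and the manager would only see the finished number. Logging the manager's constraints alongside the model's raw suggestions is also the natural place to build the feedback loop the third principle calls for.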
The Bottom Line
The hospital reduced cancer diagnosis time from three weeks to two days and watched its liabilities increase 33% in the same quarter. The AI worked perfectly. The workflow design failed.
Hospitality operators deploying AI in guest communications, revenue management, demand forecasting, and operational workflows are building the same liability, at the same point in the process, for the same structural reason.
The fix is not complicated. Move your people earlier. Make them editors, not reviewers. Validation upstream beats rubber-stamping downstream every time, in hospitals and in hotels.
Up Next in the Series:
This was Post 4. Post 5 examines the data governance gap that Rashidi identified as the primary reason AI projects fail between proof of concept and production, and what hospitality operators must implement before scaling any AI initiative.
IHL Group covers retail and hospitality technology markets globally. For more information on our research, visit https://www.ihlservices.com. Sol Rashidi keynoted MURTEC 2026 in Las Vegas. All data and frameworks cited in this post are attributed directly to her presentation.