The Action Plan: What Hospitality Technology Leaders Should Do Monday Morning
Seven posts into this series, the diagnosis is complete. The failure rates are documented. The framework is clear. The risks are named. The question that remains is the one every hospitality technology leader in that Las Vegas auditorium was asking by the time Sol Rashidi finished her MURTEC 2026 keynote: what do we actually do next?
Rashidi closed her keynote with a set of action items that synthesize everything in her framework into a sequenced roadmap. This post works through each of them in detail, with specific application to the hospitality operating context.
Action 1: Evaluate Your AI Posture Honestly
The first action is the one most organizations are most reluctant to take, because the honest answer is almost always more uncomfortable than the aspirational one.
Applied AI, Generative AI, and Agentic AI require different organizational maturity levels. Match your use case complexity to your infrastructure and talent readiness.
In practice, this means conducting an honest audit against Rashidi’s three-tier framework. Where does your organization’s current data infrastructure place you? Do you have clean, connected data across your core operational systems? Do you have sanctioned governance policies for AI tool usage? Do you have the organizational trust and audit infrastructure to allow autonomous AI decision-making in consequential workflows?
Most multi-property hospitality operators will find themselves solidly in the Applied AI tier on honest assessment, with aspirations toward Generative AI and vendor pitches for Agentic AI arriving simultaneously. Acknowledging that gap is not a failure. It is the precondition for making deployment decisions that will actually succeed.
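The audit questions above can be sketched as a simple decision rule. This is an illustrative sketch, not an official scoring tool from the keynote; the field names are shorthand for the three audit questions, and the tier labels follow Rashidi’s framework as described in this series:

```python
from dataclasses import dataclass

@dataclass
class ReadinessAudit:
    clean_connected_data: bool       # clean, connected data across core systems?
    sanctioned_ai_governance: bool   # sanctioned policies for AI tool usage?
    autonomous_audit_trust: bool     # trust + audit infra for autonomous decisions?

def highest_supportable_tier(audit: ReadinessAudit) -> str:
    """Map honest audit answers to the highest AI tier the org can support today."""
    if not audit.clean_connected_data:
        return "Applied AI (build the data foundation first)"
    if not audit.sanctioned_ai_governance:
        return "Applied AI (governance gap blocks Generative AI)"
    if not audit.autonomous_audit_trust:
        return "Generative AI (Agentic AI not yet supportable)"
    return "Agentic AI"

# A typical multi-property operator lands on one of the first two branches.
print(highest_supportable_tier(
    ReadinessAudit(clean_connected_data=True,
                   sanctioned_ai_governance=True,
                   autonomous_audit_trust=False)))
```

The point of the sketch is the ordering: each gate must pass before the next tier is even worth evaluating.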
Action 2: Start with Foundational Data Governance
Implement DSPM and DLP capabilities before scaling any AI initiative. Know your data: where it lives, who accesses it, and whether that access is sanctioned.
This action has a clear sequence. First, complete a data inventory across all operational systems. Not a high-level systems map but an actual inventory of where guest PII (Personally Identifiable Information), payment data, behavioral data, and pricing intelligence lives, in what formats, under what access controls, and with what retention policies.
Second, implement automated DSPM tooling that provides continuous visibility into data posture rather than point-in-time audits. The data environment in a hospitality operation changes constantly as staff turn over, as integrations are added and deprecated, and as AI tools acquire new data access capabilities. Static audits cannot keep pace.
Third, implement DLP controls that prevent sensitive guest and business data from leaving the organization’s governed environment through unsanctioned AI tool usage. This is not a policy exercise. It is a technical control that needs to be in place before generative AI tools are deployed at scale.
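The shape of the inventory in step one can be sketched as a minimal record per data asset, with the kind of access check a DSPM tool automates. System names, roles, and fields here are hypothetical illustrations, not a vendor schema:

```python
from dataclasses import dataclass

@dataclass
class DataAsset:
    system: str            # e.g. "PMS", "POS", "CRM" — illustrative names
    data_class: str        # "guest_pii", "payment", "behavioral", "pricing"
    sanctioned_roles: set  # roles approved to access this asset
    retention_days: int    # retention policy attached to the asset

def flag_unsanctioned_access(asset: DataAsset, access_log: list) -> list:
    """Return (role, action) pairs from the log that fall outside sanctioned roles.

    A real DSPM tool runs this check continuously against live access data;
    this is the point-in-time version the post argues is insufficient on its own.
    """
    return [(role, action) for role, action in access_log
            if role not in asset.sanctioned_roles]

guest_profiles = DataAsset("PMS", "guest_pii",
                           {"front_desk", "revenue_mgr"}, 365)
log = [("front_desk", "read"), ("marketing_intern", "export")]
print(flag_unsanctioned_access(guest_profiles, log))
```

The record forces the questions the post lists: where the data lives, who accesses it, and whether that access is sanctioned.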
This action is unglamorous. It does not generate conference session buzz. It determines whether every other action on this list produces a return.
Action 3: Shift ROI Measurement Beyond Efficiency
Add Relevancy, Cultural, and Operational ROI metrics to every AI investment evaluation.
In practice, this means building a measurement framework before the deployment, not after. Define the specific metrics that will demonstrate value across all six ROI dimensions. Establish the baseline measurements before the AI goes live. Commit to a measurement cadence that allows the organization to demonstrate progress to ownership groups within a twelve-month window.
For hospitality specifically, the Operational ROI dimensions of Prevention and Prediction offer the most accessible quick wins. Churn prediction models, predictive maintenance systems, and demand forecasting improvements are measurable within a single operating season and produce the empirical foundation that makes board conversations about Relevancy and Cultural ROI credible.
The goal is to reframe the board conversation from a binary success-or-failure assessment of financial ROI to a multi-dimensional evaluation of organizational capability building. That reframe requires measurement infrastructure, and measurement infrastructure requires the decision to build it before the deployment launches.
Action 4: Redesign Workflows Before Automating Them
Doing AI requires business process optimization first. Using AI on broken processes produces faster broken processes.
For each workflow targeted for AI deployment, conduct a process audit before selecting a technology solution. Map the workflow at the task and sub-task level. Identify which tasks are genuinely AI-automatable, which require AI-assisted human execution, and which require unassisted human judgment. Identify the data inputs each task requires and whether those inputs are currently clean, accessible, and governed.
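The three-way classification in that audit can be sketched as a simple rule over two questions per task. The task names and flags are illustrative placeholders, not examples from the keynote:

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    requires_judgment: bool  # does the task need human judgment at all?
    ai_can_draft: bool       # can AI produce a usable first pass?

def classify(task: Task) -> str:
    """Bucket a workflow task per the three-way audit described above."""
    if task.requires_judgment and not task.ai_can_draft:
        return "unassisted human judgment"
    if task.requires_judgment:
        return "AI-assisted human execution"
    return "AI-automatable"

workflow = [
    Task("send standard checkout reminder", False, True),
    Task("draft response to a guest complaint", True, True),
    Task("approve comped stay for a VIP", True, False),
]
for t in workflow:
    print(t.name, "->", classify(t))
```

The value of running this per sub-task, rather than per workflow, is that most hospitality workflows mix all three buckets, and a single label for the whole workflow hides that.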
Then optimize the workflow for its AI-augmented future state before deploying the technology. This means eliminating redundant steps that exist because of current manual limitations, repositioning human roles from execution to validation and judgment, and building the feedback loops that allow human judgment to improve AI outputs over time.
The hospitality workflows with the highest process redesign ROI are the ones where manual execution currently consumes time that human judgment does not require. Routine guest communication responses, standard housekeeping dispatch, scheduled maintenance notifications, and rule-based revenue management decisions all meet this criterion. They are also the workflows where AI deployment without process redesign produces the most visible disappointments.
Action 5: Move Humans Earlier in AI Workflows
Make your people editors, not reviewers. Validation upstream beats rubber-stamping downstream every time.
This action requires a specific organizational design decision for each AI-augmented workflow: at what point in the workflow does human input materially change the AI output? That is the point where the human must be positioned, with the authority, the information, and the organizational expectation to actually exercise judgment.
The hospital case study is the template. Doctors positioned at the end of the diagnostic workflow were reviewing AI outputs they had not shaped. Doctors positioned at the beginning were editing AI findings they had helped to establish. Diagnosis speed improved in both scenarios. Liability only improved in the second.
For revenue management, this means the revenue manager’s strategic parameters enter the model before the recommendation is generated, not after. When it comes to guest communications, the communication strategy and tone guidelines are built into the AI’s operating context before the first message is drafted, not reviewed after the message is already in the queue. For demand forecasting, local market intelligence and known anomalies are inputs to the model, not corrections applied to the model’s output.
Action 6: Develop a Real Workforce Preparation Strategy
Three hours of Copilot training is not workforce preparation. Build trust and belief before technical skills.
Rashidi’s framing here is important. Most hospitality operators treating workforce preparation as a training exercise are underinvesting in the organizational change management that actually determines whether AI tools get adopted and used effectively. Technical skill acquisition is the last step in workforce preparation, not the first.
The first step is building organizational trust in AI tools by demonstrating that the deployment is designed to augment the workforce, not replace it. This requires transparent communication about how AI will change specific roles, what the organization’s commitments are to the people in those roles, and what the career development pathway looks like in an AI-augmented operating model.
The second step is building belief that the AI tools will actually work as described. Early deployment in low-stakes, high-visibility use cases builds organizational credibility for AI tools before they are deployed in consequential workflows. Procurement, administrative documentation, and scheduling optimization are the hospitality functions where early wins build the organizational confidence that makes higher-stakes deployments successful.
Technical training follows organizational trust and belief. In that sequence, it is effective. In isolation, it is a check-the-box exercise that produces low adoption and no ROI.
The Closing Framing
Rashidi closed her MURTEC 2026 keynote with a statement that should anchor every hospitality AI strategy: AI needs to happen with us and not to us. Technology scales efficiencies, but in relationship-driven industries, relationships scale opportunities.
The six actions in this roadmap are not a technology strategy. They are an organizational readiness strategy that makes technology investment productive. The technology is available. Most vendors will happily sell it regardless of whether the organization is ready to receive it. The operators who follow this sequence will still be running their AI deployments in three years. The operators who skip it will be explaining to their boards why the POC never made it to production.
The hospitality industry’s relationship-driven DNA is not a liability in the AI era. It is the advantage. Protect it. Build on it. Deploy AI in its service, and the industry’s best operators will emerge from this transition stronger than they entered it.
Series Summary: Key Action Items from Rashidi’s Framework
Evaluate your AI posture honestly. Applied AI, Generative AI, and Agentic AI require different organizational maturity levels. Match your use case complexity to your infrastructure and talent readiness.
Start with foundational data governance. Implement DSPM and DLP capabilities before scaling AI initiatives. Know your data: where it lives, who accesses it, and whether that access is sanctioned.
Shift ROI measurement beyond efficiency. Add Relevancy, Cultural, and Operational ROI metrics to every AI investment evaluation.
Redesign workflows before automating them. Doing AI requires business process optimization first. Using AI on broken processes produces faster broken processes.
Move humans earlier in AI workflows. Make your people editors, not reviewers. Validation upstream beats rubber-stamping downstream every time.
Develop a real workforce preparation strategy. Three hours of Copilot training is not workforce preparation. Build trust and belief before technical skills.