Insights from Optum’s James Lukenbill and Karl Schelhammer on Health Biz Talk
Artificial intelligence isn’t just a buzzword for Silicon Valley anymore—it’s rapidly becoming a practical tool for state health and human services agencies, especially in Medicaid. On a recent episode of Health Biz Talk, host Tony Trenkle, former CIO of CMS and health IT industry leader, sat down with two Optum leaders to unpack what AI really means for state programs today.
- James Lukenbill: Strategic Product Manager for Analytics, Optum State Government Solutions, with a focus on advanced AI and machine learning.
- Karl Schelhammer: Senior Director for AI/ML Engineering, Optum State Government Solutions, specializing in turning AI innovations into customer-focused products.
Together, they explored how AI can modernize aging systems, fight fraud more effectively, improve member experiences, and what it takes to adopt AI responsibly in government settings.
The Urgency: Why AI Matters for State Medicaid Programs
Medicaid today is more complex than ever. Karl Schelhammer explained that this complexity is driven by a few major trends: an explosion of policies and rules, standardization challenges due to shared state/federal responsibility, and massive enrollment (peaking around 93 million during the pandemic and still sitting around 77 million). These factors put enormous pressure on systems to scale and manage risk.
AI, he argued, is uniquely positioned to help by simplifying and consolidating complex systems, improving access to information, and speeding time to value for both agencies and members.
James Lukenbill added another critical dimension: legacy technology. Many Medicaid systems are decades old, still running on mainframes. The people who know how to maintain them are retiring. With the rise of Large Language Models (LLMs), there is now an opportunity to:
- Accelerate the retirement of legacy platforms.
- Recode COBOL logic into modern languages using LLM-based approaches.
- Deploy ‘agentic AI’ — virtual investigators and analysts that can read policy, scan claims, and generate draft case summaries — to supplement state staff and help them do more with less.
Demystifying AI: From Hype and Fear to Practical Understanding
State leaders and staff often hear constant hype about AI, both positive and negative. Karl highlighted two extreme misconceptions: AI as a “boogeyman” or as an “all-powerful digital butler.” The reality, he said, is that AI is disruptive, but its real power lies in augmenting people, not replacing them.
James pointed out that technology providers are iterating toward safer systems with improving model accuracy. He emphasized a key cultural shift: employees are already using AI through consumer or enterprise tools. State CIOs shouldn’t try to suppress this curiosity, but rather channel it safely by providing secure, enterprise-controlled tools that protect sensitive data while enabling responsible experimentation.
The Governance Imperative: What Makes a Trusted AI Partner?
To cut through AI hype and risk, a state needs more than just a vendor; it needs a trusted partner that brings rigor and governance.
James described Optum’s internal Machine Learning Review Board (MLRB), which reviews all models before they go into production. The board requires evidence of predictive power and efficacy, and it evaluates and mitigates potential bias. As a health services company, Optum refines these models internally at scale, bringing real-world rigor that ensures responsible implementation for vulnerable populations before the technology is offered to clients.
AI in Action: Real-World Use Cases in Medicaid Today
AI is already transforming key areas of state operations:
- Fraud, Waste, and Abuse (FWA) Detection
An AI capability ingests state Medicaid policy and writes SQL code to detect variances and potential fraud. Since policies change frequently, AI can rapidly generate updated logic, eliminating the need to manually search through complex codebases. Additionally, virtual investigator agents summarize existing analytics and prioritize fraud leads for human review. This makes human investigators dramatically more effective by focusing their time on the highest-impact cases.
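To make the policy-to-SQL idea concrete, here is a minimal sketch in Python using the standard-library `sqlite3` module. In the capability described above, an LLM would generate queries like this from policy text; in this sketch the rule, the billing limit, and all claim data are invented for illustration.

```python
import sqlite3

# Fabricated claims data in an in-memory database.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE claims (
        claim_id TEXT, provider_id TEXT, procedure_code TEXT,
        units INTEGER, service_date TEXT
    )
""")
conn.executemany(
    "INSERT INTO claims VALUES (?, ?, ?, ?, ?)",
    [
        ("C1", "P100", "97110", 4, "2024-03-01"),   # within limit
        ("C2", "P100", "97110", 12, "2024-03-01"),  # exceeds limit
        ("C3", "P200", "97110", 9, "2024-03-02"),   # exceeds limit
    ],
)

# Hypothetical policy rule: no more than 8 units of procedure 97110
# per claim. The SQL below is the kind of detection logic an LLM
# could regenerate whenever the policy changes.
MAX_UNITS = 8
flagged = conn.execute(
    "SELECT claim_id, provider_id, units FROM claims "
    "WHERE procedure_code = '97110' AND units > ?",
    (MAX_UNITS,),
).fetchall()

for claim_id, provider_id, units in flagged:
    print(f"variance: claim {claim_id} (provider {provider_id}) billed {units} units")
```

When the policy limit changes, only the generated query needs to change, which is exactly the maintenance burden the article says AI can absorb.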
- From Static Dashboards to Dynamic Insights
AI transforms traditional analytics by scanning broader datasets and tailoring relevant, personalized findings directly within dashboards for specific job roles. Natural language interfaces are also making dashboards more interactive: users can ask in plain language for new views or breakdowns, reducing friction and helping staff make better, data-driven decisions faster.
- Staying Ahead of Fraudsters
Recognizing that bad actors are also using AI, James explained how AI helps states keep pace. Prepay models interrogate claim streams in real time to spot anomalies before payment, and Agentic AI helps investigators cope with huge volumes of leads by highlighting the most suspicious patterns and opening cases for human review. The goal is to shift human effort toward high-value judgment and decision-making.
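As a rough illustration of the prepay idea, the sketch below screens incoming claim amounts against a provider's billing history using a simple z-score. This is an assumed, simplified stand-in for the production models described above; the threshold, function name, and dollar figures are all invented.

```python
from statistics import mean, stdev

def flag_anomalies(history, incoming, z_threshold=3.0):
    """Flag incoming claim amounts that deviate sharply from a
    provider's billing history (simple z-score screen)."""
    mu, sigma = mean(history), stdev(history)
    flags = []
    for amount in incoming:
        z = (amount - mu) / sigma
        flags.append((amount, z > z_threshold))
    return flags

# Fabricated example: a provider usually bills around $120 per claim.
history = [110, 125, 118, 122, 115, 130, 119, 121]
incoming = [124, 650]  # 650 sits far outside the normal range
for amount, suspicious in flag_anomalies(history, incoming):
    status = "hold for review" if suspicious else "pay"
    print(f"${amount}: {status}")
```

A real prepay model would weigh many more signals than amount alone, but the shape is the same: score each claim before payment and route only the outliers to a human investigator.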
AI for the Member: Improving Equity and Access
AI can make Medicaid less confusing and frustrating for beneficiaries, significantly reducing friction:
- Real-Time Multilingual Support: In a country where many residents speak a language other than English, AI agents can provide on-demand, voice-enabled translation in contact centers. Karl noted this improves equity and access by reducing language as a barrier to care.
- Faster, Less Painful Eligibility Applications: AI can pre-fill many fields using existing system information, asking the member only to confirm or correct. This can reduce application time from 30–45 minutes down to just a few minutes, making it easier for members to enroll, disenroll, and keep their records current.
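The pre-fill pattern can be sketched in a few lines: fill each form field from data the state already holds, and mark the rest for the member to supply. The field names and member record below are invented for illustration.

```python
# Hypothetical application fields; a real eligibility form has many more.
FORM_FIELDS = ["name", "dob", "address", "household_size", "income"]

def prefill_application(known_record):
    """Return a draft application: fields filled from existing system
    data where available, left for the member to complete otherwise."""
    draft = {}
    for field in FORM_FIELDS:
        value = known_record.get(field)
        draft[field] = {"value": value, "needs_member_input": value is None}
    return draft

# Fabricated record of what the state already knows about a member.
known = {"name": "Jane Doe", "dob": "1988-04-12", "address": "12 Elm St"}
draft = prefill_application(known)
to_confirm = [f for f, slot in draft.items() if not slot["needs_member_input"]]
to_complete = [f for f, slot in draft.items() if slot["needs_member_input"]]
print("confirm:", to_confirm)    # pre-filled; member just verifies
print("complete:", to_complete)  # member must supply
```

The member's work shrinks from filling every field to confirming a handful and supplying the few the system cannot know, which is where the 30–45 minute savings comes from.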
Governance and Ethical Frameworks
When agency leaders implement AI, governance is top of mind. Karl and James pointed to essential frameworks:
- NIST AI Risk Management Framework: Karl emphasized this framework as foundational because AI risks (bias, explainability, continuous monitoring) require specialized handling. The four pillars—Govern, Map, Measure, Manage—align with federal expectations.
- Section 1557 of the Affordable Care Act: Any new AI feature must be tested for fair outcomes, especially for protected groups, to ensure the technology does not introduce or amplify inequities in care, access, or decisions.
What’s Next: AI in State Health & Human Services (1–3 Year Outlook)
Looking ahead, the Optum leaders highlighted key areas for innovation:
- Smarter Insights: AI agents curating and tailoring dashboards to specific roles, delivering more relevant, timely information.
- More Effective FWA Detection: Building “systems of tomorrow” that can spot and prevent emerging threats.
- Serving a Diverse Population: AI that can generate forms, communications, and real-time translations across many languages.
James sees a future where analysts operate at a “meta-analysis” level, interpreting insights surfaced by agentic systems, and IT teams become supervisors of AI agents that recode, test, and update systems.
Rapid-Fire Takeaways
| Question | Answer |
| --- | --- |
| Why does AI matter for Medicaid right now? | Because the work is exploding in complexity and volume, and AI is the only realistic way to reduce paperwork, manage risk, and keep millions of people from falling through the cracks. |
| What’s the biggest myth about AI in state government? | That it will either replace everyone or magically fix everything. In reality, it’s a powerful tool—but still just a tool. |
| If a state wants to start small with AI, where should they begin? | Pick a narrow pain point—like summarizing case notes or adding a natural language interface to a dashboard—and wrap it in good governance so you can show value quickly without taking on huge risk. |
