AI-enabled Clinical Coding

UX Researcher · Change Healthcare

Research exploring how to better support clinical coders in processing charts faster while maintaining quality, so that insurers can increase risk capture and remain compliant

↳ $19M in cost savings and $12M in revenue within 1.5 years of launch

Highlights: Observational shadowing, Concept testing, Usability testing, 0 to 1 build, Iterative research and design partnership

  • Risk adjustment is a major profitability lever for health insurers, and it helps ensure that members managing heavy clinical burdens get the fair coverage they deserve. But to run a profitable risk adjustment business, insurers have to know what risk their member population carries. Historically, this has been a highly manual process: teams of people with clinical experience comb through medical charts to find evidence of risk. I was tasked with exploring: how might we support Risk Adjustment coders to process medical charts faster while maintaining quality, so that health insurers can increase risk capture and drive a profitable, accurate, and compliant Risk Adjustment business?

  • Approach:

    • Generative Research: In-depth interviews and observational shadowing to understand coder workflows, existing needs, and pain points

    • Design: Synthesis and design working sessions to build low-fidelity wireframes

    • Evaluative Research: Early concept testing to validate critical assumptions

    • Design: Iterative design to address findings from concept testing

    • Evaluative Research: Final concept testing to identify any usability issues

    Sample: 24 clinical coders of varying levels of experience - 50% “First Pass” Coders, 50% “Second Pass” Coders

  • Building a 0 to 1 tool means iteratively capturing and applying countless insights, but here are some of the big ones that critically informed the MVP:

    • Users primarily navigate with keyboard shortcuts. Tool had to enable use of tab, arrow keys, etc. to accommodate speed of work.

    • Users were very driven by internal quality scores. Tool had to visualize real-time scoring to match user motivations.

    • To gain trust, tool had to reinforce coders as the expert decision-makers and visually denote AI as a co-pilot to assist.

  • Outcomes:

    • Concept and prototype were validated by users

    • Resulted in chart processing that is 91% faster than human-only coding

    • Within 1.5 years of go-live, the tool processed 5.2M charts and generated $19M in cost savings and $12M in revenue

  • Learnings:

    • Automation does not remove the need for user research

    • Harmonizing AI into expert workflows works best when we reinforce the user as the final decision-maker; expert users need to feel like they’re still the expert

    • Ergonomic details matter - workspace set-up, keyboard shortcuts, and mouse use all make a huge difference in tool expectations, which makes observational techniques essential
