RDI3.png

Roostify Document Intelligence (Doc AI) Case Study

The Challenge

Document processing is the most expensive and time-consuming part of the mortgage lending process. It is repetitive, error-prone, and operationally costly for our clients, the mortgage lenders. Through our partnership with Google Cloud, we now have Lending Doc AI, technology that classifies lending documents and extracts their data. Our business challenge is to integrate this AI into our existing platform. To do so effectively, we need to solve the user’s problem. Our target users for this product are our clients’ employees, the loan processors. They are the intermediaries who review loan documents for accuracy before handing the file off to an underwriter, who determines whether a loan file is approved.

User Problem: Loan processors currently review documents in a slow, repetitive, and error-prone manner. Our goal is to help them process documents more efficiently and accurately so they can hand completed loan files to the underwriter in the shortest possible timeframe.

I was the lead designer on this project, responsible for the end-to-end process. My task was to design an MVP that would win early adoption from clients and serve as a foundation for iteration.

My Role

Product Design Lead

Time Frame

4 Months (This Phase)

Collaborators

2 Product Managers, VP of Platform Engineering, Data Engineering Team

Project Summary

I began this project by teaching myself AI and machine learning, reaching out to designers in my personal network and seeking outside resources. I joined when the engineering team had already started building AI attributes on the core platform. I recognized usability problems with their approach and conducted internal user testing. With evidence from that testing, I influenced the team to shift direction, take a step back, and do more research. Through user research, I distinguished the 3 lender personas that the team had originally conflated and gained empathy for our target persona, the loan processor. Using insights from research, I defined the design strategy and explored different solutions. After completing phase 1 of the project, we gained 4 key clients in our Early Adopter Program. Through this program, our clients share their usage metrics and give us access to their employees (our target users) so we can do more research and iteratively improve the product.


Context: Background on Roostify

Roostify is a white-labeled mortgage point-of-sale platform (B2B2C). Our clients are banks, including Chase, HSBC, and TD Bank. The banks’ mortgage borrowers submit applications through our borrower platform, and their loan teams manage the submitted applications using our lender platform.


Challenge 1: Understanding the AI Logic

I started this project without any background in AI/ML, so I reached out to contacts who had worked in AI and devoured books, blogs, and articles on the topic. I created the diagram below to explain the AI logic behind this project.

 

AI Best Practices for Product Design

I synthesized my key learnings on designing with AI/ML below and influenced the broader team to apply these guidelines while building the product.

 

Challenge 2: Convincing the Broader Team to Shift Direction

The engineering team had been building AI-generated attributes in the Tasks Tab before I joined the project. The Tasks Tab was designed solely for users to upload documents as separate tasks. I recognized usability issues with this approach: it increased the cognitive load for users who were simply trying to upload documents, because they now had to reconcile how numeric confidence scores and error messages related to each document.

Convincing a large team with VPs from several departments was no easy feat. I knew I needed substantial evidence. Because of the confidentiality of our project, I could not bring in outside users to test it, so I approached Roostify employees who had mortgage experience and conducted internal user testing. The testing showed that users spent more time uploading tasks. It also showed that some users were distracted by the confidence score numbers and spent extra minutes making sure they were not making a mistake. Finally, the AI-generated error messages were not clear enough, costing users additional time as they tried to interpret them.

Using the evidence from testing, I crafted a story and presented the user’s dilemma to the broader RDI team. I suggested we shift the AI attributes from the Tasks Tab to the Documents Tab because it was more intuitive for users and more scalable as we expand AI attributes in future phases. I also requested that we do more user research before charging ahead. After some work, the team was on board.

 

Challenge 3: Distinguishing Different Lender Personas

In a typical mortgage transaction, there are 3 lender-side players: loan officers, loan processors, and underwriters. Initially, we combined these personas as the “loan team,” but through user interviews, we discovered that the person who handles the mortgage documents is the loan processor. We applied this insight and narrowed our target persona to the loan processor. This finding allowed me to define and focus the problem statement, which had been ambiguous. I captured my interview findings in the following persona and user journey map.

Customer Journey Map.png

Finding Opportunity from the User’s Pain Points

The impediment to action advances action. What stands in the way becomes the way.
— Marcus Aurelius

Our target users’ goals are efficiency and accuracy, while their pain points revolve around repetitive actions that lead to errors. The research gave us insight into these pain points in the context of their workflow. Knowing this is key to finding opportunities to use AI to optimize their workflow and reduce their pain points. To brainstorm more divergently, I rewrote our challenge as separate “how might we” questions. Here is a sample.

  • How might we help users review documents more accurately?

  • How might we surface AI-detected document errors in an easy-to-digest way?

  • How might we use AI-generated attributes to help users achieve their goals?

  • How might we reduce the repetition in the user’s workflow?

  • How might we increase the speed that users handle documents?

Actionable Design Strategies

After conducting competitive research and an extensive brainstorming process, I synthesized these actionable opportunities to prioritize for phase 1 of this project.

  1. Match the user’s mental models
    Show the status of completion on each document and digitally group documents by their organic relationships (e.g., asset documents, income documents), as users would physically do in their work.

  2. Focus on the user’s key motivations: efficiency and accuracy
    When in doubt, ask whether each decision furthers the user’s goals.

  3. Use human language to convey AI-attributes

  4. Keep the experience as simple as possible

    Show only what needs the user’s attention in a way that helps them take the next step forward. Make AI as invisible as possible. Don’t fall victim to AI ‘featuritis.’

How users group loan documents into organic categories

 

Wireframing and Design Explorations

Here is a tiny snippet of the dozens of pages of wireframes I experimented with before arriving at the final designs.

I conducted 2 more rounds of internal user testing. The user feedback was extremely insightful and I used their input to guide my design decisions.

 

Replacing “Confidence Scores” with Simple Human Language that Conveys the Same Meaning

From user testing, I learned that users had trouble understanding the concept of “confidence scores” and struggled to connect these numbers to the documents they were reviewing.
I explored different design solutions for displaying confidence scores, such as using the terms low, medium, and high coupled with color indicators. This approach was more digestible for users but still made them think. Ultimately, I removed confidence scores as a category of their own and focused on conveying the same information in simple human language. This solution won out because users didn’t have to “think” at all.
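The shift from numeric scores to plain language can be sketched as a simple threshold mapping. The function name, thresholds, and wording below are hypothetical illustrations of the approach, not the product’s actual logic:

```python
# Illustrative sketch: translating a raw model confidence score into
# plain language for the UI. The thresholds and copy are hypothetical,
# not the values used in the shipped product.
def describe_extraction(doc_name: str, confidence: float) -> str:
    """Return a human-readable status instead of a numeric score."""
    if confidence >= 0.9:
        return f"{doc_name} looks good."
    if confidence >= 0.6:
        return f"Please double-check {doc_name}."
    return f"{doc_name} needs your review."
```

The key design choice is that the user never sees the number itself, only a next step phrased in their own language.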

 

Final UI

Results

After completing phase 1, our product team showcased the designs to clients, and we gained 4 major bank clients for our Early Adopter Program. Our executives were thrilled with the outcome. Through the program, the banks agree to let us interview their loan team users and access their usage metrics. This is critical because access to clients’ usage metrics has always been a challenge for us; now our team can use that data to iteratively improve the product.

Reflection

Through this project, I gained more confidence in pushing back on product and engineering decisions when their direction does not serve the user’s goals. I experimented with different ways to influence them and found that influence itself is a UX project: product and engineering are the users, and I have to develop empathy for them so I can craft a persuasive story that helps them see the design objective I want to achieve. This is a valuable takeaway that I’ll continue to use in future projects.