Over the course of two years, I worked with various stakeholders and developers to build the product line for mdbrain, an AI-powered software that facilitates the diagnosis of neurodegenerative diseases by analyzing MRIs and generating comprehensive reports for radiologists. I designed the full reports and the digital web platform used by radiologists.
Neuroradiologists carry an immense workload, analyzing thousands of MRIs daily (a single patient can generate several hundred MRI images) in order to identify and diagnose neurodegenerative diseases. As routine work increases, so does human error. Fortunately, machine learning algorithms can automate many of a radiologist’s daily tasks and provide more granular results, aiding diagnosis and reducing human error. But without properly communicated data, radiologists can’t benefit.
The data is best viewed as a PDF report or on the web app where users send images to be assessed. Since the field of radiology lacks large machine learning data sets, contributing to this data pool would make our business more valuable. The product must therefore also allow users to mark our software’s analysis as correct or incorrect, improving our algorithm while fulfilling our business requirements.
SUCCESS CRITERIA:
The first step in answering the problem question was to understand our main user, a neuroradiologist, and the problems they face in their day-to-day work. I determined the best research method would be a contextual inquiry session: I would go to a radiologist’s office, watch the doctor work through his usual tasks, and ask him questions throughout the session.
I drew up an empathy map to summarize my findings from the session. This helped us to see where our software fits into the user flow and what the goals of the platform and reports should be based on the user’s actions, thoughts, pains and gains.
The research led me to define the following problem statements:
The user research showed us which brain regions are important to assess when diagnosing Alzheimer’s/dementia and multiple sclerosis. I ordered the regions and data in the reports from top to bottom by their importance for diagnosing these diseases. Shown below are the Volumetry and Lesion reports as they looked before I joined the team, followed by the final design iteration after conducting research.
The research helped me understand the goals of the platform and helped to inform the new designs. Shown below is a comparison of the initial platform design, existing before I joined the team, to the newer design that was influenced by the research.
One to two months after upgrading the platform and reports, we sent our users a SurveyMonkey survey asking about their platform use and the effectiveness of the reports. We aimed to find out which features were valuable for our users. We had 8 respondents. From this survey we learned:
After analyzing the results from the survey, we started to think about the next mdbrain update. Since users expressed a need for more features on the reports, and the platform analysis screens are based on the reports, we first ideated on the reports.
For the new reports, we aimed to create a dynamic volumetry report, expand on the dynamic lesion data, quantify more regions, and add a greater visual representation of the data that would help radiologists quickly identify certain diseases.
To validate these reports, we sent out a quick survey with images of the new reports and questions about the new features. When all 8 respondents answered that the new reports aligned with their needs, we shifted our focus to the platform.
We ran usability tests on the existing platform and found some issues, with the following being the most significant:
Taking the learnings from the survey and the usability tests, I began to iterate on the design of the platform. Take a scroll in the section below to see all the changes the new platform underwent.
1. Changed platform from mdbrain platform to mediaire platform, allowing all product lines (incl. mdbrain) to be shown here.
2. Improved the visual design by switching the platform to dark mode. As radiologists stare at black screens all day, this colour scheme is easier on their eyes.
3. Reorganized the platform to be patient-centric: each patient now has a single entry instead of several, and all analyses for all products (organs) can be viewed from one analysis screen.
4. Cleaned up the entries in the “completed” tab, with less important data shown only upon expanding an entry.
5. Added options to set patient consent and clinical finding before going to the analysis screen (user requests).
6. Created a third “rejected” tab in the first level of the platform to show more elaborate reasons for erroneously processed studies.
7. Made the different products available via side tabs in the analysis screen.
8. Added option to copy whole report as an image file to attach in medical report.
9. Changed the prime action on the analysis screen from “Send to PACS” to “Approve” as not all users wish to send their reports to their PACS and approval is necessary for algorithm improvement.
10. Created more options to copy data from the reports and paste it into users’ medical reports in word processing software.
Once the new platform design was implemented, we waited to see what our customers thought. Here is some of the feedback we got: