Consistency Checker: UX Case Study

Role: Product Designer

Project overview

The Consistency Checker is a critical tool within the for:sight platform that uses machine learning to detect potential data anomalies. By referencing historical quarterly and yearly data uploads, it intelligently flags specific cells across large datasets for review. This eliminates the need for users to manually sift through thousands of rows, allowing them to quickly identify and address potential issues.
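
As a rough illustration of this idea (a sketch using assumed names and a fixed threshold, not the production for:sight model), the check can be pictured as comparing each cell in a new upload against an expected value derived from historical uploads and flagging large deviations:

```typescript
// Hypothetical sketch of the flagging idea; field names and the tolerance are
// illustrative, not the actual for:sight implementation.
interface CellCheck {
  row: number;
  column: string;
  value: number;
  expected: number; // derived from historical quarterly/yearly uploads
  flagged: boolean;
  reason?: string;
}

// Flag a cell when it deviates from the historical expectation by more than a tolerance.
function checkCell(
  row: number,
  column: string,
  value: number,
  expected: number,
  tolerance = 0.1
): CellCheck {
  const deviation =
    expected === 0 ? Math.abs(value) : Math.abs(value - expected) / Math.abs(expected);
  const flagged = deviation > tolerance;
  return {
    row,
    column,
    value,
    expected,
    flagged,
    reason: flagged
      ? `Value differs from the expected result by ${(deviation * 100).toFixed(1)}%`
      : undefined,
  };
}
```

In the real tool the expected value and the sensitivity of the check come from the machine-learning model rather than a fixed threshold; the sketch only shows the comparison step.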

For each flagged item, the tool provides an analysis explaining why the system suspects a discrepancy. Users can then choose to:

  • Keep the original data

  • Replace it with the expected result suggested by the system

  • Add notes and approve the row for submission

This process continues until all data rows are reviewed and resolved.
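
Conceptually, every flagged row passes through one of this small set of resolution actions. A minimal sketch of that model, assuming hypothetical type and field names rather than the platform's actual API:

```typescript
// Hypothetical row-resolution model; type and field names are assumptions for
// illustration, not the platform's actual API.
type ResolutionAction =
  | { kind: "keep" }                           // keep the original data
  | { kind: "replace" }                        // accept the system's expected result
  | { kind: "approveWithNote"; note: string }; // add a note and approve for submission

interface FlaggedRow {
  id: string;
  value: number;
  expected: number;
  note?: string;
  resolved: boolean;
}

function resolveRow(row: FlaggedRow, action: ResolutionAction): FlaggedRow {
  switch (action.kind) {
    case "keep":
      return { ...row, resolved: true };
    case "replace":
      return { ...row, value: row.expected, resolved: true };
    case "approveWithNote":
      return { ...row, note: action.note, resolved: true };
  }
}

// Submission becomes available only once every row has been reviewed and resolved.
const readyToSubmit = (rows: FlaggedRow[]) => rows.every(r => r.resolved);
```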

Research objectives

To ensure the redesigned Consistency Checker aligned with user needs, the research phase focused on understanding:

  • How users interact with flagged data and validation tools

  • Pain points with the existing Fix & Verify interface

  • Desired improvements for reviewing and approving flagged data

  • Expectations around usability, clarity, and efficiency in large dataset review

Key insights

Overwhelming interface
Users found the original UI dense and difficult to scan, especially with nested content and large headers.

Lack of clarity around flags
Many users struggled to understand why certain cells were flagged or what the expected correction was.

Frequent scrolling
Due to large row heights and deep layouts, users had to scroll excessively, impacting workflow speed.

High mental load
Users had to manually compare flagged data against expected results and their prior knowledge, with limited visual assistance.

Desire for batch actions
Users often wanted to approve or edit similar rows in bulk rather than one at a time.

Need for in-row analysis
Users expressed a strong need to view analysis inline without losing the table’s context or increasing visual clutter.

How research informed design decisions

Confusing flagging logic
Added inline expandable panels with machine-generated analysis.

Table too deep
Reduced row height and header size; added density toggle.

No bulk actions
Introduced multi-row select and batch edit/approve.

Poor scannability
Improved use of colour, icons, and visual cues to quickly distinguish flagged vs. approved data.

Users unsure what to do next
Implemented clearer step-by-step flow and call-to-action buttons.

Design goals
The original interface was functional but overly complex. The objective was to redesign the experience to enhance usability without sacrificing core capabilities.

Key goals

1. Simplify the interface

  • Reduce visual clutter in the header and streamline table complexity.

  • Provide a cleaner, more approachable UI for both technical and non-technical users.

2. Improve data visibility

  • Decrease table depth to display more rows on screen without scrolling.

  • Optimise layout for large datasets.

3. Enhance flagged data interaction

  • Clearly highlight flagged cells with intuitive visual cues.

  • Ensure users can immediately identify which rows require attention.

4. Contextual analysis and actions

  • Introduce collapsible analysis panels within each row to display expected results and system reasoning.

  • Maintain a compact row height while ensuring clarity of the analysis feature.

5. Introduce new functionality

  • Add core interactions that let users:

    • Analyse flagged data

    • Edit directly within the table

    • Approve corrected or verified data

    • Add notes for context or documentation

    • Adjust row display density

    • Select/edit multiple rows for batch updates (see the sketch after this list)
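
To illustrate the batch interaction referenced above (again a hypothetical sketch with assumed names, not the shipped implementation), applying one action across a multi-row selection could look like this:

```typescript
// Hypothetical multi-row selection with a batch approve; names are illustrative only.
interface ReviewRow {
  id: string;
  flagged: boolean;
  approved: boolean;
  note?: string;
}

// Apply one action (here: approve, optionally with a shared note) to every selected row,
// leaving unselected rows untouched.
function batchApprove(rows: ReviewRow[], selectedIds: Set<string>, note?: string): ReviewRow[] {
  return rows.map(row =>
    selectedIds.has(row.id) ? { ...row, approved: true, note: note ?? row.note } : row
  );
}
```

The design intent is that unselected rows stay untouched, so users can clear groups of similar flags in a single step rather than approving each row individually.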

Design outcome

Macbook Pro - Light Background.jpg

Core functionality

Flagged, edited, and flagged & edited cells

Flagged.png

Analysis expand

The analysis expand can be opened in any row that has an “expected result”. Its purpose is to explain to the user why the machine learning model expects a specific column value to be different, which may be based on previous data uploads.
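
A plausible shape for the data that drives this inline panel might look like the following; the field names are assumptions for illustration, not the real payload:

```typescript
// Hypothetical shape of the data behind the analysis expand; field names are assumptions.
interface RowAnalysis {
  column: string;         // the column whose value was flagged
  currentValue: number;   // value in the current upload
  expectedResult: number; // value the model expects
  explanation: string;    // plain-language reasoning shown in the panel
  basedOn: Array<{        // historical uploads the expectation was derived from
    period: string;       // e.g. "Q3 2023"
    value: number;
  }>;
}
```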

Row edit

Edit.png

Approved row

Approve.png

Add a note to row

Note.png

Row display density

Display density.png

Multi-edit

Learnings

  • Improved visibility and decision-making through contextual analysis

  • Increased efficiency due to batch actions and reduced scrolling

  • Better user comprehension of flagged results and expected corrections

  • Positive feedback from users regarding the simplified interaction model