Case Study: Reducing Cognitive Overload in a Threat Intelligence Tool
Overview
Role: UX Designer / Research Lead
Project Type: Workflow Redesign for Cybersecurity Platform
Client: Cyborg Security (https://hunter.cyborgsecurity.io/login)
Duration: 2 months
Team: 1 UX Designer (me), 1 PM, 2 Engineers, 1 Data Analyst
Problem Statement
A threat intelligence platform was designed to give cybersecurity threat hunters access to a rich dataset. However, the tool delivered too much information at once—without clear prioritization or structure. Users were overwhelmed by volume and complexity, leading to frustration and abandonment of searches. Drop-off rates during core search flows had climbed to 60%.
Goal
Redesign the tool to:
Reduce cognitive load and decision fatigue
Improve the search and investigation flow for cybersecurity professionals
Increase task completion and reduce drop-off rates
Align information hierarchy with how threat hunters think and work
Research
Methods:
Contextual user interviews with five experienced threat hunters
Workflow mapping and mental model analysis
Data inventory and content prioritization exercises
Competitive analysis of adjacent investigation platforms
Key Findings:
Users needed to act quickly, often under time pressure
Over 70% of information shown in the tool was rarely used in initial triage
The issue was not the amount of data, but the lack of relevance-based structure
Search results often surfaced noise before signal, requiring unnecessary filtering
Design Process
1. Workflow Understanding
Mapped out common investigation tasks such as entity enrichment, link analysis, and IOC (Indicator of Compromise) triage. Focused on decision points and patterns of information use.
2. Information Architecture Refinement
Grouped data by task-critical priority and structured it around investigative actions. Created basic data object templates tailored to each task (e.g., Domain, IP, File Hash views).
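To make the template idea concrete, here is a minimal TypeScript sketch of how a task-tailored entity view might be modeled. The type names, fields, and priority tiers are illustrative assumptions for this write-up, not the platform's actual schema.

```typescript
// Illustrative model of task-tailored entity templates; the type and field
// names here are assumptions for this sketch, not the platform's real schema.

type Priority = "immediate" | "contextual" | "historical";

interface FieldSpec {
  key: string;        // data attribute to display, e.g. "registrar"
  label: string;      // human-readable label
  priority: Priority; // drives ordering and default visibility
}

interface EntityTemplate {
  entityType: "domain" | "ip" | "fileHash";
  fields: FieldSpec[];
}

// Example: a Domain view surfaces triage-critical fields first.
const domainTemplate: EntityTemplate = {
  entityType: "domain",
  fields: [
    { key: "riskScore", label: "Risk Score", priority: "immediate" },
    { key: "firstSeen", label: "First Seen", priority: "immediate" },
    { key: "registrar", label: "Registrar", priority: "contextual" },
    { key: "whoisHistory", label: "WHOIS History", priority: "historical" },
  ],
};

// Only immediate-priority fields render on initial load; everything else
// sits behind progressive disclosure.
const initialFields = domainTemplate.fields.filter(
  (field) => field.priority === "immediate"
);
```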
3. Wireframes and Prototyping
Built low-fidelity wireframes that stripped away non-essential content. Used progressive disclosure to layer complexity only when needed.
4. Testing and Iteration
Tested prototypes with four users over two rounds. Iterated based on observed navigation behavior, decision-making ease, and information relevance.
Key Metrics (Before vs. After Redesign)
| Metric | Before | After | Change |
| --- | --- | --- | --- |
| Drop-off Rate (search workflow) | 60% | 20% | 67% decrease |
| Time to First Insight | 4.2 minutes | 1.3 minutes | 69% faster |
| Task Success Rate | 45% | 91% | More than doubled |
| User Confidence (survey, 1–5) | 2.4 | 4.3 | +1.9 (79% increase) |
| Redundant Information Displayed | High | Low | Drastically reduced |
Design Highlights
Simplified Result View: Displayed only critical attributes on load, with expandable detail cards
Relevance-Based Grouping: Organized information into “Immediate Action,” “Contextual Data,” and “Historical Noise”
Progressive Disclosure Model: Advanced fields and visualizations revealed only after the user signals intent (see the sketch after this list)
Entity-Centric Interface: Each search result followed a clear, consistent object layout, reducing scan time
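As a rough illustration of the progressive disclosure pattern behind the detail cards, a collapsible card might look like the following React/TypeScript sketch. The component and prop names are hypothetical and not drawn from the actual build.

```tsx
import { useState } from "react";

// Hypothetical collapsible detail card implementing progressive disclosure:
// critical attributes render on load; advanced fields appear only on request.
interface DetailCardProps {
  title: string;
  critical: Record<string, string>; // task-critical attributes, always visible
  advanced: Record<string, string>; // deeper context, revealed on intent
}

export function DetailCard({ title, critical, advanced }: DetailCardProps) {
  const [expanded, setExpanded] = useState(false);

  const renderFields = (fields: Record<string, string>) =>
    Object.entries(fields).map(([label, value]) => (
      <p key={label}>
        <strong>{label}:</strong> {value}
      </p>
    ));

  return (
    <section>
      <h3>{title}</h3>
      {renderFields(critical)}
      <button aria-expanded={expanded} onClick={() => setExpanded(!expanded)}>
        {expanded ? "Hide details" : "Show details"}
      </button>
      {expanded && renderFields(advanced)}
    </section>
  );
}
```

The design choice worth noting: expansion is an explicit user action, so the default view stays scannable and the "what not to read" problem never arises.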
Iteration Example
Initial Design:
All data fields were shown at once in dense, scroll-heavy layouts.
User Feedback:
“I can’t tell what’s important. I waste time figuring out what not to read.”
Revised Design:
Implemented collapsible data cards, reordered based on task relevance, and highlighted high-confidence indicators.
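For illustration, the reordering and confidence highlighting could be expressed as a small TypeScript sketch. The field names and the 0.8 threshold are assumptions for this example, not production values.

```typescript
// Illustrative reordering of data cards by task relevance, flagging
// high-confidence indicators for visual emphasis. Field names and the
// threshold are assumptions, not production values.

interface DataCard {
  label: string;
  relevance: number;  // 0–1, task relevance from triage heuristics
  confidence: number; // 0–1, indicator confidence
}

type RenderedCard = DataCard & { highlighted: boolean };

const HIGH_CONFIDENCE = 0.8;

function orderCards(cards: DataCard[]): RenderedCard[] {
  return [...cards]
    .sort((a, b) => b.relevance - a.relevance) // most relevant first
    .map((card) => ({
      ...card,
      highlighted: card.confidence >= HIGH_CONFIDENCE,
    }));
}
```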
Outcome
Reduced user abandonment significantly across investigative flows
Helped threat hunters complete tasks faster, with higher confidence
Platform usability improved not just in perception, but in tangible efficiency
Laid the foundation for a user-centered design system to scale future modules
Tools Used
Figma (wireframes, prototyping)
Miro (workflow mapping and task analysis)
Notion (research synthesis and reporting)
Zoom (user interviews and testing sessions)
Reflection
This project emphasized the importance of prioritization, not simplification. Threat hunters didn’t want less information—they needed smarter presentation of what mattered most at the right time. Had we tested earlier with even basic prototypes, we could have surfaced the issue with cognitive overload before the MVP shipped.
The core lesson: understanding user intent and sequence can be more powerful than adding new features. It’s not about what you show; it’s when and why you show it.
Next Steps
Develop customizable data views based on user roles
Add AI-assisted prioritization of key indicators
Conduct longitudinal studies to assess impact on full investigations over time