Mindflow: Blog Entry 6

Initial Insights from Pilot Testing

After conducting pilot tests, we discovered several limitations in our experimental design. We had too little quantitative data to perform meaningful statistical analysis. Additionally, we didn't properly measure participants' immediate engagement with the design elements. A key oversight was not clearly defining what "effectiveness" meant in the context of our study, especially since user preference represents only one aspect of overall effectiveness.

Modifications to Experimental Design

1. Comprehensive Effectiveness Metrics

To address these limitations, we have defined both subjective and objective metrics to measure effectiveness more comprehensively. For reminder type evaluation (minimalist, encouraging, informative, urgency, and personalized), the experiment will now measure:

For distraction log evaluation, effectiveness will be assessed through:

2. Improved Experiment Flow

We have redesigned the experiment flow for reminder testing while keeping the original popup designs. Slides will now be sent directly to participants for interaction rather than being presented by researchers. The popups will appear at predetermined intervals during the testing session. Participants can dismiss popups by clicking on them and can close distraction elements with a close button; both interactions were implemented in PowerPoint. To ensure clarity, participants now receive a thorough briefing on the interaction methods and complete a practice trial featuring one distraction element (a YouTube advertisement) and one fake reminder.

3. Enhanced Measurement Techniques

We have introduced a memory quiz component for the distraction log assessment. Participants will view each log format for a controlled 5-second period before being tested on content retention. Quiz questions will ask whether more time was spent focused or distracted, and which distraction source consumed the most time.

4. Expanded Participant Pool

To gather more robust data, we have increased our participant numbers: reminder testing and log-format testing now have five participants each.

These adjustments should provide more reliable data and clearer insights into which design elements most effectively support focus maintenance and distraction management.

Due to the changes in variables, we’ve rewritten our hypotheses as well:

H1.1: Either personalized or encouraging reminders will provide higher motivation compared to other types of reminders.

H1.2: Either personalized or encouraging reminders will be more preferred by participants compared to other types of reminders.

H2: Summary distraction logs will lead to higher motivation to change routine behaviors compared to Basic and Detailed logs.

H3: Summary distraction logs will be more preferred by participants compared to Basic and Detailed logs.

H4: Detailed distraction logs will be more memorable compared to Basic and Summary logs.

Experiment Abstract

This study evaluated distraction reminders and logs designed to support focus and reduce distractions in unstructured work environments. In a 5-minute session, participants engaged in a math exercise while exposed to five reminder types: minimalist, encouraging, informative, urgency, and personalized. In a second experiment, participants completed quizzes on and rated three distraction log formats: basic, detailed, and summary. We measured motivation to resume work, time to refocus, and preference ratings for reminders, as well as information retention, behavioral motivation, and preferences for distraction logs.

We hypothesized that personalized or encouraging reminders would yield higher motivation (H1.1) and be more preferred (H1.2) than other types. For distraction logs, we predicted that summary logs would enhance motivation (H2) and preference (H3), while detailed logs with pie charts and bar graphs would be more memorable (H4). The ANOVAs we ran did not find significant support for H1.1, H1.2, H2, or H3. Examining the means alongside qualitative feedback, we found that encouraging and informative reminders elicited the highest motivation and preference ratings, although informative reminders were linked to longer refocusing times. Personalized reminders produced mixed effects, whereas the minimalist and urgency variants were less effective. Distraction logs were valued for clarifying distraction patterns, with detailed logs achieving significantly higher quiz scores, supporting H4; however, no significant differences emerged in motivation or preference between log types. These results offer insights for optimizing digital productivity tools.

Annotated Output from Quantitative Analysis

Results of asking participants to rank the reminders from worst to best, 1 being worst, 5 being best.

All of our reminder-type participants ranked the encouraging reminder as either the worst (1) or the best (5). There could be any number of confounding factors at play that we lack the sample size to verify. For example, the two participants who ranked the encouraging reminder worst preferred flexible work schedules, were recruited by the same researcher, and were both men; the three who ranked it best preferred structured work schedules, were recruited by the same researcher, and were mostly women. The participants who ranked the encouraging reminder worst explained that it made them want to take a break once they were made aware of how long they had been focusing. Those who ranked it best said it made their work feel rewarding and did not feel negative like the other reminders.

Results of running ANOVA on ranking vs reminder type

                               Df Sum Sq Mean Sq F value Pr(>F)
                reminder_type  4    8.4    2.10    1.01  0.426
                Residuals     20   41.6    2.08
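
For reference, tables like the one above are standard output from R's aov. Below is a minimal sketch of how such a table is produced, assuming the responses are in long format; the column names and the sample() rankings are placeholders for illustration, not our actual data. Note that the degrees of freedom match: 5 reminder types give 5 - 1 = 4 for the factor, and 25 observations leave 25 - 1 - 4 = 20 residual.

    # Placeholder data: 5 participants x 5 reminder types in long format.
    # Column names (reminder_type, ranking) are assumptions for illustration.
    set.seed(1)
    df <- data.frame(
      reminder_type = factor(rep(c("minimalist", "encouraging", "informative",
                                   "urgency", "personalized"), times = 5)),
      ranking = sample(1:5, 25, replace = TRUE)  # placeholder, not real responses
    )
    summary(aov(ranking ~ reminder_type, data = df))  # prints a table like the one above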

Results of asking participants how motivated they are to return to work after seeing a reminder, 1 being they would ignore the reminder, 5 being they will return to work right away.

Results of running ANOVA on motivation vs reminder type

                               Df Sum Sq Mean Sq F value Pr(>F)
                reminder_type  4  12.54   3.135   1.984  0.136
                Residuals     20  31.60   1.580             

Results of asking participants to rank the log types from worst to best, 1 being worst, 3 being best.

Results of running ANOVA on ranking vs log type

                            Df Sum Sq Mean Sq F value Pr(>F)  
                log_type     2    3.6  1.8000   3.375 0.0687 .
                Residuals   12    6.4  0.5333                 
                ---
                Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1

Results of asking participants how motivated they are to change their work habits after seeing the logs, 1 being they would not do anything, 5 being they will make an effort every day.

Results of running ANOVA on motivation vs log type

                            Df Sum Sq Mean Sq F value Pr(>F)
                log_type     2    3.9   1.950   2.229   0.15
                Residuals   12   10.5   0.875 

Since none of the ANOVA tests suggested a difference between any of the means, we did not need to run follow-up paired t-tests to determine which conditions differ significantly.
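
Had any of the ANOVAs come back significant, the follow-up would look something like the sketch below (reusing the placeholder data frame from earlier); the Bonferroni adjustment compensates for the inflated false-positive rate of running many pairwise comparisons:

    # Hypothetical follow-up: paired comparisons between every pair of
    # reminder types, with Bonferroni-adjusted p-values.
    pairwise.t.test(df$ranking, df$reminder_type,
                    paired = TRUE, p.adjust.method = "bonferroni")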

Mindflow: Blog Entry 5

Horizontal vs. Vertical Prototyping

For our prototype, we'll focus more on vertical prototyping. Horizontally, we’ll provide an overview of all key features: different types of reminders (e.g., minimalist, informative, encouraging) and distraction logs (e.g., basic, detailed, summary). This gives participants a sense of the complete system.

Vertically, we'll simulate a real distraction scenario with an advertisement video. During this, we'll present different types of pop-ups for participants to compare and evaluate for effectiveness. Additionally, we'll show distraction log formats with different levels of information, visuals, and encouragement, so participants can assess which strikes the best balance between information, ease of understanding, and productivity encouragement.

Simulated vs. Wizard-of-Oz'd Functionality

Simulated Functionality

  • The participant’s online work environment (e.g., web-based math homework) is simulated using a screenshot.
  • The distraction is simulated using a video clip embedded in the slides.
  • We also simulate the appearance and disappearance of the pop-ups under manual facilitator control. We want to show different types of reminders (minimalist, informative, encouraging, etc.) at set times during the session. For the distraction logs, since we want to focus on the level of information displayed rather than user interactions, we will simply present each design to the participant by manually switching slides.

Wizard-of-Oz (Faked) Elements

  • Participant’s working environment webpage (simulated via a screenshot).
  • Distraction (played manually by facilitator).
  • The timing of reminders and the tracking of distractions. Instead of tracking real-time data, we’ll use predefined logs and manually control when they appear. For reminders, a facilitator can trigger them by advancing through slides in Google Slides.

Implementation Approach

Simulating the Work Environment: We’ll have a screenshot of webwork math questions. We can ask the participant to read the questions, to simulate their brain’s “work mode”.

Simulating the Distraction: The facilitator will click the video thumbnail on the slide to start playing it. This simulates the spontaneous, disrupting nature of a distraction like an ad.

Simulating Reminders and Logs: We'll use Google Slides to simulate both the timing of reminders and the participant's daily productivity data. Each reminder and log type will have its own slide, and the facilitator will control the flow by switching to the relevant slide at the right moment, based on how the session is going, keeping the experience flowing naturally.

This approach lets us simulate the system without needing complex backend functionality, keeping things simple while still providing a realistic experience and letting us focus on key feedback.

Timing & Transitions

Reminder Timing

The facilitator will move through the slides at just the right moments to make it feel like reminders are appearing naturally. We’re aiming for a new reminder every 3-5 minutes, but we’ll adjust based on how engaged the participant is.

Distraction Logs Timing

After a reminder pops up, we'll pause for 30-60 seconds to give participants time to acknowledge and react to it. Then, about 2-3 minutes later, we'll introduce a distraction log summarizing what the system has "tracked" so far.

Guiding Participants Through Transitions

The facilitator will guide participants through transitions using natural conversation prompts such as:

  • “You just got a reminder—take a second to think about how it felt.”
  • “Okay, let’s check out how distractions get logged.”

Importance of Appearance

Appearance is moderately important. The main focus of this experiment is to assess which reminder or distraction log types the user prefers, so the visual design should support the text and overall structure of the reminders and logs. Appearance does play a role in ensuring a smooth user experience and reinforcing positive energy and incentives, but the visuals should stay clean and intuitive, with easy-to-read reminders and logs, so the participant can focus on providing feedback without distraction.

Form Mockups

Since this is a digital-only prototype, there are no physical elements involved. All of our elements will be graphical and exist within the Google Slides interface. This means we won’t need any form mockups, as everything participants interact with will be through the slides and their screen.

Use of Prototyping Tools

We've decided to use Google Slides for the prototype. It's a great tool for creating interactive simulations, especially when we're looking for a functional yet simple prototype. A lot of distraction tracking happens in the background, and the user does not initiate the distractions or the tracking themselves. Google Slides lets us simulate this timing well by letting us facilitators step in and control the slides, and participants can also click through them to experience the different reminders and logs.

Prototype Link: Google Slides Prototype

Mindflow: Blog Entry 4

Revised Goals of Experiment

The experiment goals have been revised between Blogs #3 and #4. The updated goal is now focused on assessing the effectiveness of the distraction tracking system, particularly examining different reminder types and distraction log formats, as described in Blog #4b. The goal is to understand how various reminder strategies (e.g., informative, encouraging, personalized) and distraction log types (e.g., detailed, basic, summary) affect participants' productivity and engagement.

Experiment Method

1. Participants

2. Conditions

The experiment will be assessing design choices for two things:

Reminder Types:

Distraction Log Types:

3. Experimental Tasks

Participants will complete a 45-minute work session while the system runs in the background. Their goal is to complete predefined tasks while facing common distractions (e.g., social media, notifications, multitasking).

4. Procedure

5. Apparatus

6. Independent and Dependent Variables

Independent Variable:

Dependent Variables:

7. Hypotheses

8. Planned Statistical Analyses

9. Expected Limitations

10. Within-Subjects Design

In this setup, each participant experiences all conditions (i.e., all reminders and all distraction logs) during a single session. This allows us to compare preferences and effects across all types of reminders and logs for the same individual.

Supplemental Experiment Materials

Pre-Experiment Questionnaire

1. Demographic Information:

2. Work Style & Productivity Habits:

3. Prior Tool Usage:

4. Experiment Goals:

Post-Experiment Questionnaire

1. Self-Reported Productivity:

2. Distraction Awareness:

3. Reminder + Distraction Log Preferences:

Observation Sheet

Distraction Tracking Engagement:

Reminder Preferences:

Task Behavior During Work Session:

General Notes & Observations:

Experiment Script

1. Introduction:

2. Work Session Setup:

3. During the Session:

4. Wrap-Up:

Mindflow: Blog Entry 3

Further Updated Task Examples

Our task examples have not changed since Milestone II. They can be found lower down on the page in the Milestone II blog post, as well as under “Additional Information about Prototype”.

Low-Fidelity Prototype(s) Demonstration

Desktop design

Additional Information about Prototype

Task example 1: Sarah, a remote worker, often gets distracted by social media. A quick notification from Instagram leads to hours of scrolling, leaving her feeling unproductive.

The design supports the task example by sending a pop-up notification that reminds Sarah how long she has been distracted. It disrupts the inertia of continuing the distraction (like social media scrolling) and pulls her back to reality by reminding her how long she's been away from the task she initially wanted to focus on. The distraction logs track Sarah's focus and distraction times, helping her become aware of her habits. If the logs are too much to read, the design also displays her patterns of distraction, helping her identify where improvements can be made.

Task example 2: John finds it difficult to break large projects into smaller tasks, often leading to procrastination and missed deadlines.

The mini-task feature allows John to break down large tasks into manageable steps. The Task Heap system then automatically prioritizes these mini-tasks, ensuring that John focuses on the most important parts first. By tackling one small task at a time, John feels more in control and productive.

Task example 3: Emily works in a field with tasks of varying and dynamic complexity, making it difficult to estimate how long things will take and adjust her workflow accordingly.

The Task Heap system adjusts priorities as Emily works and inserts new tasks, helping her refine her time management and productivity despite the high variability and dynamic nature of her work. The Task Heap also lets her manually adjust task order, making it flexible for unexpected urgencies. In addition, the Distraction Log & Productivity Report feature helps Emily track and identify patterns in her work habits and monitor how they change over time. Together, the Task Heap and the Distraction Logs & Productivity Report let Emily respond better to unexpected tasks and develop a clearer picture of how her work patterns change over time.

The scope revolves around both task management and distraction awareness, both of which were strong desires expressed by study participants. The Distraction Logs, Productivity Report, and Distraction Reminder features speak to the need for better distraction awareness, while the Task Heap responds to the need for better task management.

We've also included usability improvements: 1) the ability to rearrange task order, 2) color codes for different deadlines, estimated durations, and importance levels, and 3) the ability to add and remove tasks and subtasks. Participants praised the convenience and usability of existing task management tools, and we want to incorporate those strengths.

Walkthrough Report

Summary:

Good: The straightforward priority ranking system and dynamic heap reduce fatigue both in navigating the system and in making decisions. The design reflects awareness of user privacy and experience in details like the nudge reminder pop-up, clear action flow, and color codes.

Bad: Distraction tracking is not effective enough to improve on existing tools. In particular, users might not be incentivized enough to check the productivity report and learn the "hard truth" of their time sinks.

Task example 1:

Good: The distraction reminder nudges the user to get back to work, without being too intrusive.

Bad: The user might not be incentivized to check the productivity logs and may choose to stay blind to their time sinks. The distraction logs are long and hard to interpret, making them even less appealing to look at. Similarly, there is not enough incentive to acknowledge a reminder and stop the distraction immediately.

Task example 2:

Good: The task heap is straightforward and allows John to focus on one thing at a time, relieving decision fatigue. The color codes help visualize the time, deadline, and importance for each task, creating better intuition and promoting action-taking. There is a clear priority ranking system with levels 1-4.

Bad: Usability could be improved: there is currently no way to delete a task, and no way to represent things like big projects that span multiple days. The subtask feature helps the user with task breakdown, but doesn't do it automatically.

Task example 3:

Good: The productivity logs and productivity report provide a nice summary, helping the user identify patterns. The heap mechanism is spot-on for Emily's need to dynamically update task urgency. The ability to rearrange tasks manually allows flexibility and accommodates unexpected changes.

Bad: The heap structure lets the user focus on the next item, but makes it hard to foresee future plans (say, what the user will be doing on Sunday at 5). Right now it is unclear what exactly will be in the productivity report, so it is hard to judge whether it is good or bad.

Proposed goal(s) of the experiment

Goal #1: The distraction tracking system can be more effective. Specifically, there should be more incentive/punishment strategies so that it is difficult to stay blind to the time sunk into distractions.

Goal #2: The task planning component can be more comprehensive. For example, right now there is no way to delete a task, to visualize the tasks on a calendar, or to visualize the hierarchy of tasks and subtasks.

Importance: Goal #1 should be higher priority than goal #2.

Ability to test: Goal #1 seems easier to test than goal #2.

We will only address goal #1. It involves assessing our entire distraction system, from the distraction reminder to the distraction logs and distraction report. Goal #2 is already well explored in the field (with tools like Todoist, Outlook, and Google Calendar), and is hard to test (a tool always remains incomplete if we consider all the use cases).

Mindflow: Blog Entry 2

Next Steps

Based on our field study findings, our next development steps will focus on refining key features that address major productivity challenges remote workers face.

Adjusting Project Scope

Given time constraints, we will prioritize adaptive scheduling and distraction tracking as core features for our prototype, while keeping productivity insights and environmental optimizations as future considerations. Our next step is refining feature designs based on these priorities and beginning prototype development.

Revised Task Examples

System Requirements

Absolutely Must Include

Should Include

Could Include

Could Exclude

User Categories

Must Include

Should Include

Could Include

Could Exclude

Design Alternatives

Minimalist Timeline-Based Planner

An interface with a continuous, scrollable timeline where users can plan tasks and track focus time. Distractions are logged as small markers along the timeline, helping users visualize when they lose focus. The user can adjust plans dynamically by dragging tasks. Workload is visualized with time blocks for better planning.

Pros:

Cons:

Desktop design

Mobile design

Task Heap with Smart Prioritization

Instead of using a calendar or list, this design organizes tasks like a heap data structure: tasks sit in a general backlog, but the task that should be done next is always at the top. The system suggests which task to tackle next based on deadlines, focus time, and urgency. Users can reorder the heap manually, but the system provides guidance on priority. The ranking should adjust dynamically based on user behavior and input, such as problems the user describes or unexpected complications the user adds. Urgency levels can be visualized with color-coded priority indicators and integrated with distraction tracking to recommend optimal work periods. A minimal code sketch of this prioritization idea appears at the end of this section.

Pros:

Cons:

Mobile design
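
To make the heap idea concrete, here is a minimal R sketch of the prioritization logic described above. The task fields and the scoring formula are illustrative assumptions, not a committed design:

    # Illustrative only: the fields and the priority formula are assumptions.
    tasks <- data.frame(
      name             = c("Write report", "Email client", "Prep slides"),
      days_to_deadline = c(1, 3, 7),
      importance       = c(3, 2, 4)   # priority levels 1 (low) to 4 (high)
    )

    # Hypothetical scoring: importance scaled by urgency, so near-deadline
    # tasks float to the top of the heap.
    priority <- function(t) t$importance / pmax(t$days_to_deadline, 1)

    # The "top of the heap" is the highest-scoring task; manual reordering
    # could be layered on by letting users override scores.
    next_task <- function(t) t$name[which.max(priority(t))]
    next_task(tasks)   # "Write report"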

Mindflow: Blog Entry 1

Original project direction:

9. Interactive Focus Assistant for Remote Work and Study

This proposal suggests a utility for communication platforms like Slack, Discord, or Microsoft Teams to assist neurodivergent individuals or those with executive dysfunction in managing productivity. Additionally, it would track distractions using a webcam and send reminders to refocus during personalized focus periods. The app would provide daily summaries and productivity suggestions based on user patterns. The target audience includes remote workers and students who struggle with focus and task management.

Changes to project direction:

Instead of targeting neurodivergent individuals, we chose to target anyone working remotely, to make recruiting easier. We personally feel it's hard to commit to 9-to-5 productivity when working remotely. Plus, apps like Minimalistic Phone already exist to help the general population manage distractions. We will not decide on the exact implementation at this point; rather, we will focus on task planning and distraction/productivity tracking.

Task examples