DATE
December to January 2021
ROLE
Product Designer
DESCRIPTION
Designing a new interface for online tutors to close the learning gap created by COVID
The project was funded by a grant to provide tutors with more meaningful data to facilitate tutoring and help close the learning gap caused by COVID.
Cignition is a company that provides tutoring for K-12 students. For this project, Cignition would provide the tutoring, and their tutors would use the dashboard.
ASSISTments provides a tool for teachers to assign, grade, and assess math homework for K-12 students.
For this project, ASSISTments would provide tutors with students' learning progress in math, giving tutors more context on where students are struggling.
The primary outcome of the grant was a dashboard that gave Cignition tutors insight into their students' classroom performance so they could help them in specific areas.
After developing the interface, the goal was to put it out in the field and see how meaningful the data was for tutors.
The program would run for a whole semester and then measure the increase in student performance by comparing students' test scores to the class average over time.
The process started with meeting the stakeholders involved in writing the grant, understanding all the goals it outlined, and preparing for discovery with tutors.
The main stakeholders were the product managers at Cignition, who helped clarify how information would flow from ASSISTments and what the various touchpoints would be.
The grant had specific data requirements: for instance, providing tutors with all relevant math standards for a particular assignment, and the ability to group students for tutoring purposes.
This would be the landing page for tutors, where they can select either individuals or groups.
For each student, tutors can access recent work, look at all of the coursework, or browse individual concept mastery.
The modules section was based on coursework and would allow tutors to understand how students performed on their previous assignments.
This helps tutors understand if there are basic topics the student might need help with.
The idea behind concept mastery was to give the tutor a quick understanding of how the student is performing across various standards compared to the rest of the classroom.
Tutors had about 5-10 minutes on average to prepare for a tutoring session, which limited the amount of information they could process to make the most impactful lesson.
Tutors were more interested in looking at student work than at scores and course information, because the work lets them start a conversation with the student and explore where the student might be struggling.
The initial wireframes presented too much information without clear workflows, which made it very frustrating for tutors to make sense of anything.
The landing page was changed to focus on the student's work from the past week, since tutoring was going to be a weekly session.
We also added a bar graph at the top of the page to let tutors quickly grasp the information without having to read the table with all the data.
We added a standard page, which can be accessed by clicking on a standard in the bar graph.
This page gives tutors an organized list of problems and allows them to focus on the areas where the student(s) are struggling.
The standard view would open up to give further detail on the student work, helping tutors start that exploratory conversation with the student.
There would also be an assignment page with all the lesson information associated with it. This was part of the grant requirements.
The usability test included five tutors: three new tutors and two from previous rounds, to diversify the group and guard against bias toward the process. The moderation guide focused on testing whether users could find the student work and create a lesson plan from the available information.
Finding 1: Tutors wanted a high-level overview of which standards were covered in each assignment, so they could browse only the assignments covering the standard they were focusing on.
Finding 2: Tutors found the No Credit, Partial Credit, and Full Credit categories confusing, because they showed redundant data when multiple students were involved.
Finding 3: Tutors found it difficult to understand what score each student had on a particular problem.
Based on the feedback from usability testing, we decided to move into creating high-fidelity designs and get the pilot running because of the program's constraints.
Improvements: Standards were added to the assignment list. Along with that, we limited groups to a maximum of two students, which simplified the screens.
We removed the No Credit, Partial Credit, and Full Credit categories, replacing them with the students' actual scores across the standard.
We reduced visual noise by removing the additional score indicators for each student.
The lesson information now included a focus standard for each lesson, so tutors knew what the assignment was trying to teach.
The assignment detail was another view of similar information, for tutors who wanted to focus on assignments rather than standards.
The pilot program didn't go well, primarily because teachers weren't assigning enough work in class to provide data for the dashboard.
This gave us an opportunity to improve our operations and provide better training to the teachers participating in the grant.
Based on this interface, the program director managed to get a new grant from the Louisiana Department of Education to support their tutoring program.
One thing was clear from this process: doing multiple rounds of user research produced results that helped create software that works for tutors and helps them get their work done.
Designing for when the system fails matters: in our case, when teachers weren't creating enough assignments, the dashboard appeared fairly empty and failed to provide any value to tutors.
The biggest challenge was recruiting a diverse set of users for our usability testing. Since recruitment was handled by Cignition, we didn't have much control over the recruiting criteria.
Designing for all the use cases was a challenge. I predicted the basic engineering outcomes and designed for those cases, but planning for edge cases where the data fails was hard.
Balancing the needs of the tutors against the commitments in the grant was a challenge. For instance, the grant required certain pieces of data to be available, but in reality the tutors never used that data and felt it was excessive.