Georgia Syro

SaaS platform for psychometric assessments

Designing Thomas' 4 core assessments for mobile optimisation, and candidate feedback reports.

Client
Thomas International
Role
UX Designer
When
2020
Team
Psychology Team
Product Management
Development
Quality Assurance
Product Design

Project Summary

Objective

Thomas International provides online psychometric assessments to aid recruitment and development processes. In the current solution, employers receive 'raw' scores from candidates, which they are trained to interpret. Since there is no automated candidate-friendly report, providing feedback is time-consuming for employers, and many don't do it. If we can create candidate-friendly reports that provide instant feedback and allow individuals to gain value from the assessments, it will also reflect positively on the employer.

Role

My challenge as a Product Designer was to translate complex psychological concepts into easy-to-understand information for the user base, whilst maintaining the integrity of the underlying science throughout the rebranding and mobile optimisation of Thomas' four core products (psychometric assessments).

Outcome

Successful redesign of the 4 core psychometric assessments and creation of candidate feedback reports.

Benefit hypotheses

Updated feel

The new branding will increase enjoyment for users taking assessments for personal use.

Device agnostic

Users can successfully complete assessments on their preferred devices.

Intuitive

Save time for administrators by removing the training barrier and evolving the platform towards a more intuitive experience.

Curated

Minimise the potential for bad practice by developing more intuitive results.

Key pain points

1

Designed for B2B

The existing solution was geared toward B2B customers.


We identified an opportunity and need to create a B2C platform to give candidates a more meaningful and enjoyable experience.

2

Designed for desktop

Assessments weren't yet optimised for mobile phones, which was proven to negatively influence users' results.

In addition, many users don't have access to a computer, or simply want the option to save time by completing an assessment on mobile.

3

Steep learning curve

Administrators were required to complete a 2-day course in order to interpret results.

Results were delivered in a form that placed the responsibility for correct interpretation, and therefore the risk of bad practice, directly on the B2B user.

Personas

1

Identified a persona gap

It was essential to involve our clinical specialists from the get-go

2

User interviews

After creating initial designs, we wanted to test with real-life users

3

Avoiding stereotype bias

Continued to iterate on designs based on further insight and feedback from users

Personas: Identifying a persona gap

The current B2B platform allows businesses to send assessments to candidates or employees, but there isn't a platform that gives candidates autonomy and access to their own results.


We needed to research who our B2C users are and learn how they would like to get the most out of the assessment feedback.

User interviews

As part of a multi-disciplinary focus group consisting of Customer Support representatives, Engineers, and Product Managers, we identified three key motivations to help structure our persona interviews and source key user groups.

It's important to note that these acted as starting points for our research which would need to be validated through interviews.

Avoid perpetuating stereotypes

Generally, personas compile research into a fictional character with a name, an image, and traits that help us empathise with and meet the needs of users.


After presenting the initial personas, many were concerned that the use of imagery and names, although accurate representations of the industry, would perpetuate stereotypes.

Looking at the current pool of users, we realised it was male-centric and ethnically undiverse. We wanted to ensure our personas were inclusive so that future products would be too.


By including ethnicity, gender, and clothing, we ended up perpetuating stereotypes and narrowing inclusivity.

Based on this traditional persona model, we end up prioritising bias over motivations and goals, which can be common amongst users regardless of gender, ethnicity, name, or clothing.


I worked with Product Managers to collate the interview research and create personas based on shared motivations, rather than on gender, ethnicity, names, or clothing, which would perpetuate stereotypes.

Assessment style

Thomas' core assessments are Behaviour, Aptitude, Emotion, and Personality. Each assessment differs in question type, ranging from Likert scales to multiple choice. Due to the nature of the assessments, we are constrained to use only psychologically validated question methods.
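
As a rough illustration of how these differing question formats could sit under one consistent model, here is a minimal TypeScript sketch. The type and field names are assumptions for illustration only, not Thomas' actual schema:

// Hypothetical question model covering the formats described above.
// Type and field names are illustrative only.

interface LikertQuestion {
  kind: "likert";
  statement: string;
  points: 5 | 7; // number of points on the scale
}

interface MultipleChoiceQuestion {
  kind: "multiple-choice";
  prompt: string;
  options: string[];
}

interface ForcedChoiceQuestion {
  kind: "forced-choice";
  adjectives: string[]; // candidate picks one "most" and one "least"
}

// An assessment can then be typed as a list of validated question formats.
type AssessmentQuestion =
  | LikertQuestion
  | MultipleChoiceQuestion
  | ForcedChoiceQuestion;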

As a designer, my responsibility was to ensure the validity of the science, that symbols are culturally universal, and that the solution doesn't impact a person's results.

Case study: Behaviour assessment

What is the assessment?

The Thomas Behaviour assessment is a forced-choice assessment that uses an ipsative referencing method – an individual’s response patterns are compared to themselves rather than the scores of a comparison group.

Individuals are asked to select one adjective which they believe describes them most and one which describes them least.
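
To make the most/least mechanic concrete, here is a small TypeScript sketch of how a forced-choice response could be captured and validated. The shapes and names are assumptions for illustration, not Thomas' real implementation:

// Illustrative only: a forced-choice (most/least) response and a basic validity check.
interface ForcedChoiceResponse {
  questionId: string;
  mostIndex: number;  // index of the adjective chosen as "most like me"
  leastIndex: number; // index of the adjective chosen as "least like me"
}

function isValidResponse(response: ForcedChoiceResponse, adjectiveCount: number): boolean {
  const { mostIndex, leastIndex } = response;
  const inRange = (i: number) => Number.isInteger(i) && i >= 0 && i < adjectiveCount;
  // Exactly one "most" and one "least", and they cannot be the same adjective.
  return inRange(mostIndex) && inRange(leastIndex) && mostIndex !== leastIndex;
}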

Requirements
  • Translation into 12+ languages, so it's necessary to design for varying character counts

  • Ensure finger-friendly design requirements on mobile for ease of use

  • Avoid creating a bias of choice between 'most' and 'least'

  • Avoid red and green colours, which might create a bias of choice or carry cultural inferences

Challenges

One of the key goals is to ensure consistency across all of the assessments. Due to time constraints, the Behaviour assessment would launch first, and the other assessments would come later.

We must future-proof the branding and design to fit these future assessments, which will differ in terms of length and question style.

Communicating results

Thomas customers rely heavily on training programs that award accreditation and help employers interpret the results of each assessment.


This is particularly important because 'raw' results, if not interpreted correctly, can negatively impact the candidate. We must put the onus on the platform to deliver user-friendly reports and avoid the risk of human error and malpractice.

Of all the assessments, people are generally most sensitive about their Aptitude scores, and it can be detrimental if those results are not communicated properly. When thinking about migrating the platform to a training-free experience, we wanted to identify what people need to know to achieve their goals, and what the most common 'bad practices' are.

Communicating results

In the new candidate portal, we digested the results and, instead of focusing on a number, which was proven to impact candidates negatively, decided to use a sliding scale.
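
As a purely illustrative sketch of the idea, assuming a normalised 0-100 score rather than Thomas' actual scoring model, a raw result could be mapped to a marker position and a candidate-friendly band like this (TypeScript):

// Hypothetical mapping from a normalised score to a sliding-scale position
// and a descriptive band. Thresholds, names, and score range are assumptions.
interface ScalePoint {
  position: number; // 0-1, where to place the marker on the slider
  band: "lower" | "typical" | "higher";
}

function toScalePoint(score: number, min = 0, max = 100): ScalePoint {
  const clamped = Math.min(Math.max(score, min), max);
  const position = (clamped - min) / (max - min);
  const band: ScalePoint["band"] =
    position < 0.33 ? "lower" : position < 0.66 ? "typical" : "higher";
  return { position, band };
}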

User validation

Next, we tested designs through a survey. It emerged that existing users who were used to the complexity of the results lacked trust in the new simplified results.


Existing users were used to seeing a granular level of information, so we realised we needed to strike a balance in the level of technicality. Without training, we also noticed the need to add further information about each sub-test to increase trust in the product.

Ways of working

Combining Figma variants and auto layout helped us design more quickly and efficiently.