# What is the Best Way to Measure and Analyze Your Data? A Quick and Easy Guide

Understanding the range of analysis options for different types of data can unlock your research's potential—and you don't need a data science degree to do it.

As researchers, we often take measurements to assess the usability of a product or service. These measures aren’t arbitrary—they’re based on measurement methods developed in psychology and the other social sciences.

But there's a whole other level of complexity that might trip you up: when you assign numbers, labels, and units of analysis to represent categories at different levels of measurement, how do you make sure you're doing it accurately and in the best way possible?

In order to determine the optimal way to analyze your data, understanding core measurement types is an important place to start.

## Jump to

- What measure should you use?
- Why does this matter?
- How to analyze each data type

## Four types of measurement

Variables are sorted into four levels of measurement: nominal, ordinal, interval, and ratio. I’ll explain each of these categories in detail, and talk about why it’s important for researchers to understand these levels as they apply to their jobs.

### 1. Nominal measurement

The most basic level of measurement sorts observations into two or more categories on some variable, such as age, gender, income level, or education. For research purposes, a number is assigned to each category for ease of collecting and analyzing the data.

For example, age could be classified as:

- Less than 20
- 20 – 29
- 30 – 39
- 40 – 49
- 50 – 59
- 60 – 69
- 70 – 79
- 80+

Even though numbers are assigned to categories, you can’t do any math with these numbers. However, the categories themselves must be both exhaustive *and* mutually exclusive.

**Exhaustive** means that all options are represented in the set. So, looking at the age list above, if we removed #8 for 80+ the list would *not* be exhaustive. Sometimes you may either not know all the possible categories, or the list would be too long (for example, asking about religious affiliation), and in those cases the “other” option comes in handy.

A list of categories is **mutually exclusive** if one and only one category could be applied. So, looking again at the age list above, if option #2 was 19-29, then someone aged 19 would be categorized as both #1 and #2. You may have heard this requirement described as “MECE”, which stands for mutually exclusive, collectively exhaustive.
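As a minimal sketch, the MECE binning rules from the age list above can be written as a small Python function (the function name and the numeric category codes are just for illustration):

```python
def age_category(age: int) -> int:
    """Map an age to one of the 8 categories in the example list.

    The bins don't overlap (mutually exclusive) and every non-negative
    age lands in exactly one bin (collectively exhaustive).
    """
    if age < 0:
        raise ValueError("age must be non-negative")
    if age < 20:
        return 1            # Less than 20
    if age >= 80:
        return 8            # 80+
    return age // 10        # 20-29 -> 2, 30-39 -> 3, ..., 70-79 -> 7
```

Note that the returned numbers are only labels for tracking; doing arithmetic on them (say, averaging category codes) would be meaningless for nominal data.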

*Related article:* Choosing the "Right" Number of Research Participants

### 2. Ordinal measurement

Ordinal measures involve ranking items in order based on a particular variable. An example is categorizing items by size, from smallest to largest. However, the numbers you assign to each value are not necessarily proportional.

Even though #1 might represent the smallest value and #2 the next smallest of five values, that doesn’t mean the size difference between #1 and #2 is the same as the difference between #2 and #3. This is important in understanding the difference between this type of measurement and the next one.

One reason to use ordinal measures is that social science research has discovered that people typically have an easier time assigning a *relative value* to something as opposed to an *absolute value*. For example, you would be able to assess more quickly and accurately whether you weighed less or more than another person rather than trying to identify the person’s exact weight.

Again, for this type of measure, you can’t perform any mathematical operations when analyzing the data.

### 3. Interval measurement

With interval measures, levels build upon each other. Interval measurements have the characteristics of both the nominal and ordinal levels, with the additional quality of representing equal distances between the numbers in the variable being measured.

The classic example of an interval value is temperature measured in Fahrenheit or Celsius. The intervals between numbers are the same—the difference between 10 and 20 degrees is the same as the difference between 50 and 60 degrees. The ability to measure variables in such a way is based on the establishment of a standard metric for variable measurement.

What this means is that with this kind of data, you can now perform *some* mathematical operations on it. You can add and subtract this type of data, but you can’t multiply or divide.

Going back to the temperature example, you can’t say that 40 degrees is twice as hot as 20 degrees, because this type of measure doesn’t have an absolute zero. The zero value is only arbitrarily assigned. This is why zero degrees Celsius is not the same as zero degrees Fahrenheit.
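A quick Python check makes this concrete: converting Fahrenheit to Celsius shows that "twice the number" does not survive a change of units (so ratios are meaningless), while differences do, up to a constant scale:

```python
def f_to_c(f: float) -> float:
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

# 80 F is NOT "twice as hot" as 40 F: in Celsius the same two
# temperatures are ~4.4 C and ~26.7 C, nowhere near a 2:1 ratio.
print(f_to_c(40))                 # ~4.4
print(f_to_c(80))                 # ~26.7

# Differences, however, are meaningful: a 40-degree-F gap is always
# the same ~22.2-degree-C gap, wherever it sits on the scale.
print(f_to_c(80) - f_to_c(40))    # ~22.2
print(f_to_c(60) - f_to_c(20))    # ~22.2
```

The ratio breaks precisely because the zero point is arbitrary; shift the zero (as the two scales do) and any apparent ratio changes.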

### 4. Ratio measurement

Now we get to the final level of measurement, ratio. Ratio measures include the features of the previous three levels, with the addition of—you may have guessed it—an absolute zero. Adding this feature allows you to multiply and divide values, creating a ratio.

An example of this kind of measure is the number of likes on a social media post. Zero truly means the person has no likes, and if one post has ten likes and another has twenty, that means the second post has twice as many likes as the first one.

## What measure should you use?

The best advice I can give is to use the highest possible level you can. The higher the measure, the more you can do with the data—so don’t unnecessarily limit yourself.

For example, if you were doing a survey that included a list of potential features that users might find valuable in a product, you could simply provide a list of all the options, and have them multi-select all the ones they want. That would be nominal data.

But even better, you could have users rank the options or indicate how much they value each feature on a Likert scale, so that you can compare the features against each other and have the ability to assign a priority to some over others.

## Why does this matter?

You may be wondering why it’s important to know these measurement levels. The key reason to understand this is so that you don’t misuse data when representing findings and summaries. The level determines the type of reporting and statistical analysis you can or cannot do with the results.

## How to analyze each data type

Now that we’ve gone over the basic types of measurement, let’s explore how you can perform basic analysis and reporting on results for each level of measurement. While some levels afford you the ability to perform statistical tests, that’s a subject for another article.

Also, much of the research we do in the field of UX does not require that treatment, primarily for two reasons. First, statistics is not a standard requirement for a UX Researcher and requires formal training, which is great to have but not typical.

Second, many of these tests require statistical analysis software or a lot of patience with manual or Excel-supported calculations. Typical UX researchers I know don’t have a lot of spare time on their hands to learn these more complex topics.

### 1. Nominal data

Use descriptive statistics to analyze and report nominal data. Two important descriptions you can use are **frequency distribution** and **mode**. A frequency distribution summarizes the number of values in each category, and indicates how the data is dispersed.

This is something you can easily generate if you tracked your findings in an Excel spreadsheet by inserting a simple “COUNTIF” or “COUNTA” function into a table.

Let’s go back to the age range example we used earlier. I added some data to the categories and created a simple distribution table below. You can report the data simply as a count, or as a percentage, or both. You can also put Excel charts to use by showing this data in a bar chart to make it easier to interpret.

| Age range | Totals | Percent |
| --- | --- | --- |
| Less than 20 | 4 | 3% |
| 20 – 29 | 12 | 9% |
| 30 – 39 | 16 | 12% |
| 40 – 49 | 24 | 18% |
| 50 – 59 | 25 | 19% |
| 60 – 69 | 30 | 23% |
| 70 – 79 | 14 | 11% |
| 80+ | 6 | 5% |

The other useful measure here is the **mode**, which is a measure of central tendency. Simply put, it’s the value that appears most frequently in the dataset. To test your understanding of this level of data, why would you only use the mode and not the median or mean? Close your eyes and think of your answer before reading the next paragraph.

You got it—since these numbers have no numeric meaning, you won’t have a true middle value or average in the dataset. The mode makes sense since it is simply the category with the most data points in it, which says nothing about its relationship to any of the other categories.
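Both the frequency distribution and the mode are easy to get from Python's standard library; here is a short sketch using a handful of hypothetical responses:

```python
from collections import Counter

# Hypothetical nominal responses (age-range categories).
responses = ["30-39", "20-29", "30-39", "50-59", "30-39", "20-29", "80+"]

# Frequency distribution: a count per category.
freq = Counter(responses)

# Mode: the category with the most data points.
mode, count = freq.most_common(1)[0]

for category, n in freq.most_common():
    print(f"{category}: {n} ({n / len(responses):.0%})")
print("Mode:", mode)
```

This mirrors what a COUNTIF table in Excel gives you: a count (and optionally a percentage) per category, plus the most frequent category.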

### 2. Ordinal data

With ordinal data, you have a few more descriptive statistics you can use. In addition to the **frequency distribution** and the **mode**, you can also report on the **median** and the **range**. The median, as you may know, is the middle value in the dataset. This gives you an idea of what the average number might be, but you can’t compute a true average since you don’t know the actual distance between categories.

One of the more common measures you have likely seen for ordinal data is a Likert scale. Most of the subjective usability measurements in our field are Likert scales—SUS, SEQ, SUMI, UMUX, and similar scales. Users choose an answer to a question based on a scale of agreement with a statement, typically from low to high and shown and/or coded as numbers. The SEQ is depicted below as an example.

The range describes the spread between the highest and lowest values in the dataset, which speaks to its variability. To compute this, simply subtract the lowest score from the highest. Using the SEQ scale example, if your responses were:

3 5 3 6 5 7 6 6 4 6 5 6 5 3 6 6 7 7 5

Then the highest value is 7 and the lowest 3, so your range is 4.
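Using the same SEQ responses, the median and range can be computed directly with Python's `statistics` module:

```python
import statistics

# The 19 SEQ responses from the example above.
responses = [3, 5, 3, 6, 5, 7, 6, 6, 4, 6, 5, 6, 5, 3, 6, 6, 7, 7, 5]

# Median: the middle value of the sorted responses.
print("Median:", statistics.median(responses))    # 6

# Range: highest minus lowest.
print("Range:", max(responses) - min(responses))  # 4
```

With 19 responses, the median is the 10th value once sorted, so no averaging of two middle values is needed, conveniently sidestepping the "can you average ordinal data" debate for this summary.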

You might notice that I didn’t mention using the mean for ordinal data. This is an area of controversy in the world of statistics, with some saying that you should never apply means to ordinal data because this is not true interval data, where the distance between each category is equal.

However, one of the leading academics in the field, Jeff Sauro of Measuring Usability, explains in Chapter 9 of his book Quantifying the User Experience why he finds using the mean acceptable. I will leave it up to you where you decide to land on this issue.

### 3. Interval data

Moving up to the interval level, you can add more descriptive statistics on top of the ones from the previous levels: frequency distribution, mode, median, and range.

Now you can include **mean**, **standard deviation**, and **variance**. The mean is simply the average of the values in the dataset. Again, you can easily generate this value in an Excel spreadsheet using the “AVERAGE” function.

**Standard deviation** tells you how much each score differs from the mean value so that you have a sense of how the data is distributed. This gives you an indicator of the size of the observed variability. Excel has a function for this called “STDEV.S” or “STDEV.P”.

For example, let’s say that for whatever reason you wanted to compare two seven-day samples of summer temperature measures (for the sake of simplicity, we’ll assume they were all in the range of the 80s).

| Temperature | Sample 1 | Sample 2 |
| --- | --- | --- |
| 80 ℉ | 1 | 0 |
| 81 ℉ | 2 | 1 |
| 82 ℉ | 0 | 0 |
| 83 ℉ | 1 | 0 |
| 84 ℉ | 0 | 0 |
| 85 ℉ | 1 | 2 |
| 86 ℉ | 0 | 0 |
| 87 ℉ | 1 | 4 |
| 88 ℉ | 1 | 0 |
| 89 ℉ | 0 | 0 |

For Sample 1, the sample standard deviation of the daily readings is about 3.15, and for Sample 2 it’s about 2.23—the readings in Sample 2 cluster more tightly around their mean.
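As a cross-check, the frequency table above can be expanded into the raw daily readings and fed to Python's `statistics.stdev` (the sample standard deviation, equivalent to Excel's STDEV.S):

```python
import statistics

# The seven daily readings implied by each column of the table above.
sample_1 = [80, 81, 81, 83, 85, 87, 88]
sample_2 = [81, 85, 85, 87, 87, 87, 87]

# Sample standard deviation (divides by n - 1, like STDEV.S).
print(round(statistics.stdev(sample_1), 2))  # ~3.15
print(round(statistics.stdev(sample_2), 2))  # ~2.23
```

Note that the standard deviation is computed over the temperature readings themselves, not over the frequency counts in the table; running STDEV.S on the count columns would answer a different (and not very useful) question.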

Finally, **variance** also looks at the distribution of your data, here focusing on the total amount of variability of the values in the dataset from the mean. The Excel function “VAR.S” (or the legacy “VAR”) gets you this value. You can combine the standard deviation and variance values with a measure of central tendency (such as the mean) to provide a short, meaningful description of the observed data.

### 4. Ratio data

As you’ve probably caught on by now, you can use all of the descriptive statistics available for interval data with ratio data, with one addition that again probably won’t apply to your role as a UX Researcher: the **coefficient of variation**. This is another look at the variability in your dataset that can only be applied to ratio data, since it depends on a meaningful zero. You find this value by dividing the standard deviation by the mean.
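A minimal sketch of the coefficient of variation, using an invented set of task times in seconds (the numbers are purely illustrative):

```python
import statistics

# Hypothetical task times in seconds for seven participants.
task_times = [32, 41, 38, 55, 47, 36, 60]

# Coefficient of variation: sample standard deviation divided by the mean.
cv = statistics.stdev(task_times) / statistics.mean(task_times)
print(f"CV: {cv:.1%}")
```

Because both the numerator and denominator carry the same units (seconds), the CV is unitless, which is what makes it handy for comparing variability across datasets measured on different scales.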

In the field of UX research, by far the most common ratio data researchers collect and analyze is time-on-task (ToT). Time obviously has a meaningful zero and the difference between 30 seconds and 40 seconds is the same as that between 70 and 80 seconds.

When analyzing and reporting task time, you can present the data in several ways. Commonly, researchers share the mean time it took for users to successfully complete the task. Alternatively or additionally, you can report the mean time for users to fail the task if the failure rate was fairly high. You might also show any of the descriptive statistics on the distribution of the time dataset to see how much it varies.

One final question to consider here is whether you should use the median rather than the mean when reporting task times. To cite Jeff Sauro again, he recommends using the **geometric mean** for this type of data, because task-time data is typically positively skewed, as opposed to following the more common normal distribution pattern.
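The effect of skew is easy to see with a small sketch (the task times below are invented): one slow outlier pulls the arithmetic mean up, while the geometric mean stays closer to the typical completion time. Python 3.8+ ships `statistics.geometric_mean`:

```python
import statistics

# Hypothetical, positively skewed task times in seconds:
# four typical completions and one slow outlier.
task_times = [10, 12, 15, 20, 60]

print(statistics.mean(task_times))                       # 23.4
print(round(statistics.geometric_mean(task_times), 1))   # ~18.5
```

The arithmetic mean (23.4 s) exceeds every value except the outlier, while the geometric mean (about 18.5 s) sits nearer the bulk of the observations, which is why it is often preferred for skewed time data.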

## What levels are qualitative and what are quantitative?

Now that you understand the levels of measurement, you can put a wider lens on the data and classify it into two types: qualitative and quantitative. Qualitative data is anything that is non-numeric. Nominal-level data falls into this category. Remember that even if you assign numbers to these categories for the purposes of tracking, they still have no numeric meaning.

The other three levels can be considered quantitative data, which is simply anything that *can* be described with numbers. It’s important to remember that classifying ordinal data as quantitative is still an open debate in statistics.

One final grouping that can be relevant when classifying data is whether it is **discrete** or **continuous**. Nominal and ordinal data are considered discrete, which means they can only take certain values. Counts are discrete too: if you are counting errors, one observation can’t have a value of 0.5; it would have to be 1.

The highest two levels of measurement—interval and ratio—are called continuous, which simply means that the data can take any value in a range. A good example here is ToT, where you could technically compute very small amounts of time, such as 0.5 seconds.

Knowing the best way to measure and analyze different types of data can give you all kinds of tools for better research and analysis. I hope this discussion will help you feel more confident when discussing data with others who have a background in data science or statistics.

Molly is a User Experience Research Manager in the financial services industry. She has a master’s degree in communication and has over 20 years of experience in the UX field. She loves learning more about how people think and behave, and off-work enjoys skiing, reading, and eating almost anything, but first and foremost ice cream.