
Asking Six Million Users and Grading on a Curve

[OPINION]: Research must scale. That requires new methods and tools, an exponential increase in input data, and a greater responsibility to use that data well.



In a recent People Nerds workshop, Nick gave us a hands-on look at how we can scale our research with effective data visualization. You can stream it on demand here.


“How’m I doin?”

There was a time, not long ago, when even talking to our users and daring to ask why was seen as exposing a rare moment of vulnerability.

In the 1970s and 1980s, New York was under the mayoral supervision of Ed Koch, who ran under the slogan "How'm I doin'?" Immersing himself with the people by riding the subway every day, the mayor would repeatedly set up tables on busy street corners to talk to New Yorkers from every borough.

This inclusivity persisted despite a difficult economic and cultural period for the Big Apple. Irrespective of how Ed Koch was actually doing, it is worth admiring both the principles of openness and accountability he established as a public figure and his edict to elicit a constant stream of feedback for guidance.

In the private sector, meanwhile, the forebears of our industry assured us that we need only interview a handful of users (and then grade on a curve), as often referenced by contributors to this publication. We have been told to position our research efforts as 'Lean', 'Agile', or perhaps 'Just Enough', conditioning us to believe that even taking time for outreach would be a distraction from more important processes.

However, nearly ten years on, there is now an acceptance and formalization of User Experience Research that has grown to include a wide range of backgrounds and disciplines, perspectives and walks of life. Research departments are growing by leaps and bounds, now incorporating operations, ethics, and transparency responsibilities, and standardizing as they refine both practice and approach.

With this specialization in place, I question whether the Nielsen Norman Group's manifesto of a weighted, conditional approach to conducting research still holds the same validity today. In fairness, the context of the N/N Group's instruction was to frame usability and user-acceptance research methods, rather than to address the generative, primary research conducted today. However, with greater scope comes greater responsibility.

Responding to scale

During the recent People Nerds Democratization panel, Roy Opata Olende lamented that he couldn't act on an executive's request to follow up their qualitative research with a study of 10,000 people. I understand the executive's instinct. Those tasked with growing the businesses we work for often have a hard time not thinking exponentially. They fall into the same trap that we as researchers often do: attempting to stack as many conversations as possible into a limited time and budget. As vendors, one of our most common follow-ups to client RFPs is the question, "Why do you want to interview [insert exorbitant number here] people?" As any analytics professional will tell you, we humans are entranced by the definitive sum, the big number: the larger and more specific, the better.

Yet the costs involved in the return on investment of research have shifted considerably in our favor. Surveying applications and participant rosters have advanced in both value and convenience. There have never been more IDI participants categorized, queued up, and ready to go. We can now quickly teleport ourselves into people's living rooms (or, in the case of dscout, into their pockets). Further, the hurdles to incorporating some degree of quantitative analysis are relatively low, given the ease of use of the statistical features built into surveying tools. So why shouldn't the executive want big numbers to increase their own personal confidence interval in validating and sponsoring the research process?

Inspired by a qualitative research workshop I attended only a few years ago, I bought a foot pedal to increase my efficiency in transcribing, a practice that was relatively commonplace 18 months ago but is now considered "white glove." The foot pedal is now an artifact of manual practice, quickly rendered obsolete by the increased accuracy of automated transcription services. I point to the influx of voice assistants listening from our homes and pockets as the harbingers of this sea change. By barking trivial commands at Alexa, Google, or Siri, we have trained these same natural language understanding algorithms on our herd's speech patterns, increasing the confidence, comfort level, and overall accuracy of generated transcripts, tuning the metaphorical ears on Big Brother's head to comprehend us better.

A thousand words is worth a picture

I would venture to guess that, as a species, we have spent more time looking at data in the last six months than at any time previously in our collective history. The once foreign concepts of logarithmic scale and linear regression are now relatively commonplace. COVID has forced us to understand just how fast the virus among us is spreading exponentially, and to plot trajectories forecasting the rates at which people live or die. We have been forced to transition to a macro-level view, one that demands increased data literacy.
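That newfound data literacy is more accessible than it sounds. The reason log scale and linear regression appear together in pandemic coverage is that exponential growth becomes a straight line on a logarithmic axis, so an ordinary linear fit on log-transformed counts recovers the growth rate. A minimal sketch, using invented case numbers:

```python
import numpy as np

# Hypothetical daily case counts growing roughly 30% per day.
days = np.arange(10)
cases = np.array([100, 130, 168, 220, 285, 371, 482, 627, 815, 1060])

# On a log scale, exponential growth is a straight line, so a linear
# regression on log(cases) estimates the exponential growth rate.
slope, intercept = np.polyfit(days, np.log(cases), 1)

growth_rate = np.exp(slope) - 1      # fractional growth per day
doubling_time = np.log(2) / slope    # days until cases double

print(f"daily growth: {growth_rate:.1%}, doubling time: {doubling_time:.1f} days")
```

The same three lines of fitting apply to any metric suspected of compounding, from infections to daily active users; only the input array changes.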

This notion carries over to how we approach and conduct qualitative research as well. There is precedent for this: business intelligence gleaned from dashboards of quantitative data has been integral to strategic decision-making for decades. Yet our approach to unstructured text is still in its infancy. The rudimentary tag cloud has lingered on for decades in its jumbled attempt to show context, with little efficacy. Even among the major qualitative-researcher-focused platforms such as MaxQDA and NVivo, there isn't the same priority on organizing unstructured text, nor on applying natural language understanding to the voluminous data already captured, coded, and categorized.

Like the moribund foot pedal, too often we build research paradigms without the capacity to handle the exponential amount of unstructured data being generated, and needing to be parsed, in today's world. Both our approach and the tools that support us need to catch up. Despite the increased amount of input, we are often still looking at spreadsheets: scanning, poking, and interpreting for results. Never have we had access to more information, yet less capacity to decipher what it's saying.


For a deeper look at tactics that help us respond to the call for scale, stream Nick's free People Nerds workshop on data visualization or his upcoming workshop on surveys and segmentation.


Evidence-based fortune

As you, the researcher, are the one who holds insight, your stakeholders turn their lonely eyes to you. You're the unlucky one tasked with parsing this increasingly large amount of information, making sense of what others cannot readily interpret.

Your role is the soothsayer, tasseographer, diviner, fortune teller, tasked to interpret (with some confidence) the scattered bones on the ground, the lines on a stranger's palm, or the tea leaves lying idle at the bottom of a porcelain cup. The research community is the one in whom others place their faith to guide them to truth.

If you conduct qualitative research, whether you realize it or not, you're already interpreting unstructured data, which is indeed just that: unstructured and extremely noisy. Human language is constantly evolving and complicated in its syntax, hierarchy, organization, and paradigms of inferred tone.

In the field of Natural Language Processing, there is a common model known as the "bag of words." When a colleague referenced this terminology as our method of approach in front of a client, I cringed at addressing their needs with such vagueness, not yet appreciating that this interpretivist approach is inherent to dealing with such volumes of unstructured data. When does the random begin to look consistent?
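Despite the off-putting name, the bag-of-words model is simple: a document is reduced to unordered word counts, discarding syntax and sequence entirely. A minimal sketch in Python, with invented interview snippets as input:

```python
import re
from collections import Counter

def bag_of_words(text: str) -> Counter:
    """Reduce a document to unordered word frequencies: the 'bag of words'."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)

# Two hypothetical interview excerpts:
a = bag_of_words("The app is slow, really slow, when I upload photos.")
b = bag_of_words("Uploading photos is where the app really slows down.")

# Word order and grammar are gone; only frequencies remain,
# which is what makes the model feel "vague" yet scale so well.
print(a.most_common(3))
print(a & b)  # words the two excerpts share
```

Once documents are bags of counts, they can be compared, clustered, or fed to downstream models, which is exactly why the approach underpins so much large-scale text analysis despite its apparent crudeness.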

I surmise this drives research toward an increasing desire to be evidence-based, giving us the tools to provide further justification for why we're seeing what we're seeing with regard to a product, opinion, or hypothesis. To codify and prove that which is too often dismissed as mere subjective interpretation. To more confidently replicate and defend our findings. To hold the gaze of doubt with fact, in a time and cultural environment where truth is increasingly seen as interpretivist. We start with the acknowledgement that the only known is that there is a cup, and that it holds water. How full or empty it is remains subjective.

Dangers and detractions

So, what now? This article advocates for pouring gasoline on the tire fire that is research's typical role in corporate culture: attempt more spectacularly and at greater scale so that others will stand up and take notice, brutally addressing our common angst that "no one is listening to the words that I speak."


Let’s go slow. As we advocate for introducing shiny new methods of data visualization and analysis into our research process, it is worth weighing the inherent dangers and detractions of such approaches.

  • Ethically, aggregating data, whether from transcripts of 1:1 interviews or from tens of millions of rows from web-based applications, should require a change in terms of use and approach depending on when and how we aggregate it. We intend to surveil in a way that isn't expected, and should be held accountable for that intent. Clearly state your practices for treating and archiving data.
  • Increased signal means increased noise. In applying data visualization practices, weaving our narrative becomes abstract and difficult to communicate. We're overstimulated as it is, and bringing shiny objects to attention without establishing their role in the greater research process can be a distraction.
  • We can fall into the trap of becoming entranced with the big numbers and searching for patterns, without taking the time or effort to return to a micro-level view and listen deeply to clarify our hypotheses through traditional research methods. We fall prey to perceived convenience and scale.

There are numerous other caveats to applying techniques of large-scale qualitative data analysis. The costs and inconvenience are high, very high. Foundational legal and operational logistics need to be addressed early, and the bounds of the research department's influence and approvals will be stretched outside their friendly confines. Yet these efforts should not be seen as laborious, but as a growth exercise: building rapport and trust with stakeholders in your organization, especially those who are not direct reports, is how our influence and appreciation are earned. We seek to operate at the scale of the organization, not just within our department.

Despite these harsh economic conditions, the application of Research as a corporate practice is flying with a tailwind. UX and the Research that drives it have followed Design in squirming their way into a 'seat at the table'. Just as we're seeing CXOs emerge from the seeds sown a decade ago, we're not far from the Chief Research/Insights Officer title becoming commonplace in corporate culture. However, as you evolve as a researcher, challenge yourself to instead make your own table, using the tools that best accommodate the environment you practice in. Set that table up on the sidewalk and talk to as many people as humanly (or mechanically) possible. How ya doin'?

Nick Cawthon

Nick leads human-centered strategy and research efforts for enterprise and technology companies seeking to better position their digital efforts. He founded a small boutique agency called Gauge in 2001 to better serve corporations, agencies, and startups in the San Francisco Bay Area and beyond; its clients have grown to include Genentech, Adobe, Wells Fargo, and many others. Nick teaches Data Visualization in the MBA in Design Strategy curriculum at CCA.
