How We Hack Tech (And What That Says About Us)
Clinical psychologist and former Intel research scientist Margaret Morris on how individual technology hacks shine a light on what users really crave.
“How could a phone be a shrink?” That’s the question clinical psychologist and former Intel researcher Margaret Morris asks in the first sentence of her new book, Left to Our Own Devices (MIT Press). It’s also the question that drove Morris’s research at Intel in the early aughts, and the development of a prototype called the Mood Phone. The Mood Phone was intended to help bring talk therapy into the 21st century. Instead of relegating therapy to once a week on a couch, people could track their problems and reactions in real time.
But she hadn’t anticipated what would happen when it got into the hands (and daily lives) of end users. People used the Mood Phone for self-reflection, but they also started to use it to help defuse tension, and to improve their communication with others. One woman took the phone into a bar, and after hearing someone bad-mouth a friend of hers, held her Mood Phone up to the group. The rhetorical question on the phone read: “Might I be villainizing?”
Morris describes that moment as a breakthrough for her in understanding the relationship between people and technology. How much we benefit from tech, she says, depends on how much we make it our own. It’s a pattern Morris has seen play out repeatedly, and it’s changed her perception of how our emotions and well-being are connected to the tech surrounding us every day. Rather than suggesting things like screen time limits or tech detoxes (to address concerns about social media and mental health), in Left to Our Own Devices Morris argues for using tech more intentionally and adapting it to our personal goals and values. “The value we get from technology,” she says, “depends on how we challenge it and let it challenge us.”
dscout: In your book, you argue that the true benefits of technology often happen when people break the rules—when we deviate from the original intended use of tech and make it our own.
Margaret: It was something I first observed at Intel, with the research prototypes my colleagues and I created. The users who had the most compelling stories about how the prototype tech had helped them were the ones who’d used it in ways that deviated from what we intended, or who “broke the rules.”
For some people, breaking the rules means taking a tool intended for something serious and using it in a playful way. One example in the book is about a woman with diabetes who had a continuous glucose monitor. The monitor had sharing functionality, a feature really designed for kids who needed help taking care of themselves and being reminded to take medicine. But the woman ended up using that sharing feature as a way to instigate playful banter with her sister. That ended up being exactly the support she needed. She was very capable of taking care of herself when it came to the disease, but what she really needed, and got, from the tech was emotional support.
There are other instances of people taking tech used for one purpose, like self-help, and using it for another, like to help manage interpersonal relationships. One of the stories in the book is about a man who struggled with anxiety for years, and volunteered to use a mood tracking app my colleagues and I had developed. In the course of using it, he noticed that coming home from work was a rough transition for him—his wife would rush out to the gym, his kids would demand dinner, and his mood would drop, all of which contributed to a pattern of resentment. As he continued to use the phone, his resentment shifted to curiosity, and he began wondering how his wife was feeling. He imagined that if she also had a mood-tracking app, they might be able to open conversations about how they were feeling more naturally. That curiosity alone actually helped him open some of those conversations himself.
That kind of shift changes not just how we think about the flexibility of technology, but also psychotherapy. It suggests that sometimes what we think of as individual issues might more effectively be treated as relationship issues. It’s also a very different kind of sharing than the kind we most commonly think about when it comes to tech, like posting on Facebook or Twitter.
On the flip side of that, there are people who break the rules by taking products designed for relationships and using them for self-reflection or personal validation. Some people use Tinder not to meet other people, but as a solitary self-esteem game: they swipe and like with no intention of ever actually meeting someone.
You talk quite a bit about online dating in the book—one of the most compelling stories is about a woman who live-streamed her OkCupid dates to get real-time feedback.
Yes—that’s Lauren McCarthy. She’s an artist and professor at UCLA who investigates technology, surveillance, and intimacy.
She would take two phones out with her on dates. One live-streamed the date to an observer, in this case a worker from Amazon’s Mechanical Turk marketplace, which is typically used for crowdsourcing small tasks. The other phone streamed that observer’s reactions and suggestions about how the date was going back to her. It’s an innovative way of getting real-time feedback.
It’s interesting—many of us are inclined to consult others about the things that are important in our lives. Asking for advice about our personal or professional relationships, or running emails by people before we send them. We often seek out the advice of people whose skills or expertise we feel is relevant to each situation—the person you’re asking for advice about buying a car is probably not the same person you’re asking for advice on your Tinder profile.
What Lauren’s project really did was create a new kind of tool for interpersonal coaching. The Mechanical Turk workers may not have given her the most nuanced guidance, but she found something helpful about getting even imperfect feedback in the moment. It freed her to respond differently than she ordinarily might, opening up new choices for how she could act in that situation.
That kind of real-time feedback could potentially be very helpful. One of the limitations of psychotherapy is that we don’t have therapists following us around in our daily lives and observing how we’re acting. A therapist relies largely on self-report. But being able to watch someone in a rich way, and see how others are responding to them, may be very informative. It could be particularly valuable for people who are trying to change how they relate to other people.
There are a few examples in the book that touch on how tech can help us improve communication. You tell the story of Matthew, a man who has a tendency to be unintentionally terse in his emails and wanted to find a way to mitigate that. In his case, he needed help with how he communicated, rather than what he communicated.
Matthew is someone who often comes across as too negative, brash, or harsh in his emails. So he’s set up a system where, after he pushes send on an email, it goes into his drafts folder for a minute before it’s really sent. That’s just enough time for him to reflect on what he’s said and retract a message if he realizes, after pushing “send,” that it was too harsh. He can’t afford to let emails sit for a day, because the issues people write him about are too urgent. But that one-minute delay gives him the window he needs, and he’s pulled messages back a number of times.
What I like about that example is that it’s so low tech. It doesn’t draw on AI or sentiment analysis. It’s just a simple timer. While a number of systems have been developed to help people gauge whether they’re matching someone else’s emotional tone, none of them are really accessible to most of us yet. So people are creating their own hacks.
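Mechanically, a hack like Matthew’s is just a delay queue sitting between “send” and actual delivery. Below is a minimal sketch of that idea in Python; the outbox, queue_send, retract, and flush names are illustrative stand-ins, not anything a real mail client exposes.

```python
import time

SEND_DELAY_SECONDS = 60  # the one-minute window described in the interview

# A toy outbox: messages wait here before they are actually delivered.
outbox = []  # entries are (queued_at, message_id, body)

def queue_send(message_id, body):
    """Instead of delivering immediately, park the message in the outbox."""
    outbox.append((time.time(), message_id, body))

def retract(message_id):
    """Pull a message back (e.g. into drafts) if it hasn't been delivered yet."""
    for entry in list(outbox):
        if entry[1] == message_id:
            outbox.remove(entry)
            return True   # caller can restore the body to the drafts folder
    return False          # too late: the message has already gone out

def flush(deliver):
    """Actually deliver anything that has waited out the full delay."""
    now = time.time()
    for entry in list(outbox):
        queued_at, message_id, body = entry
        if now - queued_at >= SEND_DELAY_SECONDS:
            deliver(message_id, body)
            outbox.remove(entry)

# Example: queue a message, think better of it, and retract within the window.
queue_send("msg-1", "This proposal is a mess. Redo it.")
retracted = retract("msg-1")  # True, since the minute hasn't elapsed
flush(lambda mid, body: print("sending", mid))  # nothing left to send
```

Gmail’s built-in “Undo Send” works on the same principle, briefly holding a message so the sender can pull it back.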
Why do you think those more sophisticated tools haven’t been fully integrated into our daily lives yet?
I think some of them are seeping in without major announcements, like automated email response tools. But it’s also hard to get these things just right. Interpersonal communication is so nuanced. There are so many different factors—historical, situational—and it’s hard for an algorithm to get all of that right. If tools are overly prescriptive, they’re bound to backfire. One concern people have raised is that the suggested phrasing in some tools may make everyone’s emails start sounding the same. It may become harder to pick up on meaningful interpersonal cues.
Beyond that, there’s also value in a system like the one Matthew made for himself, because it’s his creation. There’s self-efficacy that comes with making your own solutions and improvising your own tools.
It seems to speak to our agency as humans. On that note, you’re also a psychologist. From a human perspective, are there things that we might be asking tech to do that it simply can’t?
Humans have the patience and ability to interpret a wide variety of signals that technology may not capture or process. Take spoken conversation, which requires us to respond to different cues, some of which are non-verbal. A conversational bot can’t read those cues, so communicating with one is very constrained. There’s another example in the book about a couple who resort to location tracking after an affair to try to rebuild trust. There’s been a huge rupture in their relationship, and part of the way the betrayed spouse tries to assuage the wound is to demand access to the other’s coordinates. And it doesn’t really work. Location tracking provides information, but information isn’t the same as trust. Sometimes we reach for technology to get assurances, and rebuilding trust is a different kind of work. That’s not something technology can really do.
You also argue that too often tech design focuses on discrete tasks without considering the individual’s broader objective.
Technology is pretty good at helping with immediate gratification, like pointing you to the nearest Starbucks or to the Tinder date a couple blocks away. But what we need to be thinking about is a shift away from these isolated tasks, and toward how tech can help us with broader life objectives and long-term personal goals. That requires some degree of self-awareness and reflection on our values and priorities. There are different ways of doing that. Some people are quite aware of their goals and conflicts, and they may not need to do anything explicit. But others need some guidance. One thing I’ve done in a class I teach is have people interview themselves using a life story framework developed by psychologist Dan McAdams. It helps them look at the different chapters of their lives, and the various themes, conflicts, and struggles. Then I have them layer on top of that the role technology has played throughout their lives and how it relates to the themes and struggles that they explored. This kind of exercise can help people realize how they could benefit from using technology differently.
Those are things individuals can do. But researchers, as an industry, should try to understand the broader picture of what people want and how they want to live. It’s tricky, because long-term goals (say, those for relationships or finances) may be at odds with the way people are currently using tech. But there are certainly ways of looking at the whole person, how they see themselves over time, and their relationships.
Like what? What can designers and product managers and researchers actively do to be more holistic in thinking about user objectives?
It can be as straightforward as asking users questions about their relationships and goals, and about how the structure of their daily lives might pose obstacles. Asking “what are the most important things to you right now?” Finding out what’s getting in the way of doing the things that matter most to them.
For example, if you’re interviewing someone about how they use streaming TV, you might ask not only how you can make the user experience more seamless, but also about the costs, on their end, of making it more seamless. What does having this service get in the way of in their life? Or: is there a downside to this product? Does it conflict with the person’s values? Is there a way we could make a version that doesn’t?
Research and development shouldn't just focus on giving us seamless or frictionless experiences. It also needs to look at the conflicts that exist for users, try to understand them, and create designs that mitigate them, whether between long- and short-term goals or between the different goals of people within the same household.
Companies have a lot of responsibility in this area. We need to be thinking about how the products we’re creating can address the long-term objectives of users as well as their immediate needs. And design should invite users to adapt technologies to the nuances of their lives.
Carrie Neill is a New York-based writer, editor, design advocate, bookworm, travel fiend, dessert enthusiast, and a fan of People Nerds everywhere.