Technologist, humanist, venture capitalist
Josh Elman knows what people want. A visionary product manager who helped grow some of the most prominent startups in Silicon Valley into the juggernauts they are today, Elman seems to have a sixth sense for zeroing in on what’s next.
He was an early product manager at both LinkedIn and Twitter, and during his tenure, each of the social networks grew its user base by 10x. He also helped launch Facebook Connect, and for the last several years has been a general partner at the venture capital firm Greylock Partners, where he invests in new social networking and media startups. He serves on the boards of new social apps including Medium, Houseparty, and Musical.ly. So what makes Elman such a soothsayer when it comes to how we connect and communicate?
Elman says that, even as a kid, he was always more interested in the why than the how when it comes to tech. “My brother was the coder,” he says. “I was always a user, never a programmer. I was more into why people used software.”
For Elman, the secret to building the next great product lies in understanding the problems humans are trying to solve. Tech, he says, is just the latest iteration of the tools we’ve been building for thousands of years. Figuring out which tools will be the most successful is largely dependent on understanding what we’re missing in our day-to-day interactions, and what will help us feel more fulfilled.
“What we’re really looking for is something that will add real value to people’s lives, that will replace something that they’re struggling with, or address an issue they didn’t even know they had,” Elman says.
He acknowledges that some of the platforms we’ve built to help us feel more connected can actually make us feel more alone, especially when it comes to social media. The next wave of tech, he says, will help us get back to the fundamentals of the human experience that make us happiest and healthiest.
“The real times that we’re happiest are when we actually hang out and spend time together, live,” Elman says. “I’m working with a couple of apps that try to emulate those interactions, one in particular with games—sort of like how your grandparents used to play bridge. That wasn’t just about playing the game. They talked about life while they were playing. These are the kinds of things that bring people together that are very different than sharing and shouting at each other, and a lot of this is still untapped.”
dscout recently sat down with Elman to chat about the advantages of being both a technologist and a humanist.
dscout: The intersection of technology and psychology has been a longtime interest for you—your major at Stanford was “Symbolic Systems,” a program that centers on studying the human-computer relationship.
Josh Elman: I got my first computer, a VIC-20, when I was five years old. But I never really thought of myself as a programmer or a coder—I was really into “how can we make things that people use?” and why people used software. I was lucky to find the Symbolic Systems program at Stanford. It’s a mix of psychology, philosophy, linguistics and computer science, and fundamentally it’s about creating a bridge between the design of technology and how we bring it into our lives.
I was actually just at a 30th anniversary reunion for Sym Sys a few weeks ago, and the professor who currently runs the program gave an incredible, impassioned speech about why it exists. He said, “Look, we don’t just build technology for technology’s sake. We’re not just technologists, we’re humanists. We build things that make society better. We have to deeply understand technology in order to do that, but that doesn’t mean we just build technology because it’s possible.”
You put something along those lines on your resume when you were first job-hunting.
Yes, that I wanted to “create great technology that changes people’s lives.” After college, I wanted to work on ideas that I thought could have a very positive impact, things that you could imagine would have a big effect on people’s lives.
It seems safe to say the companies where you’ve worked—LinkedIn, Facebook, Twitter—have done that. Now you’re in the venture capital world, looking for startups and entrepreneurs to invest in. Do you think your curiosity about users and people is an advantage as a VC?
You know, I still don’t know if it’s an advantage. When you’re meeting with companies really early on you have to be open-minded. I focus mostly on consumer investments, and so understanding the people, the trends they’re seeing and experiencing in the world, what they want more of—understanding those fundamentals is a big part of it.
For instance, when Snapchat came out, there were a lot of people who thought the only reason you’d ever send a picture that disappeared was if it was a sexual photo. My perception of it was completely the opposite. Think of it this way: for kids who’ve grown up in a world where everything digital is public record, and if you’re tagged on Facebook your mom sees it within 30 seconds—the ability to flip the model and send things that go away makes you more committed, more trusting of a product. I felt it had a chance to be something that reached more people in a meaningful way. That came from really trying to understand those users and those needs and how they keep shifting.
Research is something that often comes into play a bit later in the game in the startup world especially—there’s often a sense of, “let’s get some stickiness first.” Is it something we should be thinking about earlier in the process, or is it better to let it happen organically?
I think research is incredibly important. In the earliest days of startups, there’s formal research and then there’s the primary research of just talking to people who are using a product. If someone isn’t constantly talking to their users, learning how they’re using it, and getting feedback, they’re not going to get the insights that will help them reach those next levels of scale.
The other thing that really seems to be critical early on is the depth of engagement—it’s more important that a product have a big impact on the lives of a few people, rather than a casual impact on a lot of users. That seems to go back to the idea that we need to be thinking about how tech can deeply affect people’s lives.
Yes, that’s the interesting thing, that’s when you can start to see the potential of something being transformative. Imagine a piece of a tech or a platform that’s very small, that very few people use, and that to a lot of people, doesn’t even make sense. And then project a future where it’s adding real value to hundreds of millions of people’s lives, replacing something that they’re struggling with or solving problems they didn’t know they had.
Obviously part of what social networks are doing is capturing the conversation the world is having. That’s probably especially true with Twitter, which is creating this unprecedented historical record of this moment in time. But it probably wasn’t part of the thought process in building it.
I think that’s true. I’m friends with a bunch of the early Twitter folks, and I think if you asked them, “If you knew that we could have gotten a president like Trump elected who would use Twitter as his primary way to express himself, but also potentially browbeat other people, is that a good or bad thing?”—I don’t think they would have said “That’s a good thing. Gosh, I hope we build the product that does that.”
But then you realize those larger implications. What happens when we give a voice to everybody? What happens when we create an open system where everybody can hear whatever is being said at a moment’s notice and it can get massively distributed? Are these all good or bad things? It’s a constant tradeoff.
It’s interesting thinking about generations that have never lived in a world without tech—one of the concerns being voiced more and more is whether there’s a tradeoff for our reliance on technology. We’re becoming more technically able every day, but because of that are we losing a human element? That’s definitely a concern people have when it comes to AI as well.
It’s a really interesting question. As humans, we’ve always been toolmakers, and we’ve always created tools that manipulate our environment. We don’t always understand the ramifications of that. We create these great chemicals to make our plants do better, and then we realize we’re causing massive environmental damage.
AI is one of the first things where I worry that people don’t understand it. Now we say, “Oh, let’s just build an AI that will help us figure out the best solutions here,” but then we do and sometimes even the experts can’t explain why the system comes up with a specific solution.
I think education is a big part of it. We’re under-serving ourselves by not teaching computer science and basic computer programming in schools. Part of the reason we understand why something rusts, or feel comfortable with food and calories, is that we took basic biology and chemistry. I feel fortunate that I learned programming, but unless you were trained in it, you may still look at a lot of tech as a set of magic boxes that some wizards figured out how to program. It’s something we have to change.
Ultimately there are so many good ways to use the tools we have. If we can use our collective intelligence to get answers faster and get maps that route people because they’ve learned the way people drive and which routes are better at which times, that’s really helpful. If we can use AI to help us find a better search result because we have some problem that we can’t even formulate the right words to describe, but the AI has seen enough patterns that it can help us come up with the right words to get the right result, that’s really helpful. There’s just so much that AI can do. I think we’re scared because we don’t understand all the implications. But I’m a believer that we’re generally toolmakers, that we make things better and if there are ramifications, then we adjust—and hopefully we aren’t too late.