Learnist
Farbood Nivi is an expert on lean startup metrics. Eric Ries featured his online social standardized test-prep company Grockit in his book’s chapter on that topic. Nivi sold the Grockit name to Kaplan in July 2013, and since then, he has focused full-time on Learnist, an online platform for curated and crowdsourced non-fiction ebooks. Learnist entered private beta in May 2012 and launched its first mobile app three months later. Fueled by $20 million from Discovery, Summit, Atlas, Benchmark and others, the service reached 1 million users in late 2013. Nivi took time out to explain how he’s using metrics to capture the next million.
What’s your personal approach to innovation?
I have a history of making fun of the word innovation. It’s a word that people have said over and over until you can’t really hear it anymore. I’m not even sure what it means.
How about your approach to product development?
I have two. One is a gut approach: Wouldn’t it be cool if you could do this? A lot of ideas come out of that question. The other is to suss out whether an idea just sounds cool or whether it actually could be cool: Does it solve a problem? What is the problem? Who has that problem? And, most important, how big a problem is it? I firmly believe that I have an infinite number of problems, and I won’t solve 99.9 percent of them in my lifetime. So the real question is, is this problem big enough that someone is going to do something about it? Building a product because it solves a problem is not enough. Metrics and instrumentation can help you figure out if you’re solving a big enough problem to make it worth investing your time, money, and passion into.
What’s your approach to metrics?
We’ve been hardcore into metrics for years, constantly learning, changing, fine-tuning.
There are macro-level metrics that are important to the business, and once you dial in on them, they probably won’t change much. Those are the ones you use to make business decisions.
Then there are metrics related to specific features you’re building. You use them to get a sense of how, where, and when those features are being used. Those metrics change as you add, subtract, and revise features. You need to watch both kinds.

Also, I believe it’s important to have a dedicated metrics person. If your team includes only two to four people, you probably won’t have one. But if a corporation is trying to implement the lean startup method and you have 15 or 20 people, one of them ought to be a full-time data person. That person doesn’t need to have a PhD in computer science, but you need someone working on data full-time, working with the stakeholders to set up instrumentation and reporting so you can tell whether what you’re doing is going anywhere.
How do you pick the right ones?
Finding the metrics that are important for your business is a discovery process. In the end, it has to do with your business model. You may be building a cool app and feeling like, “Let’s see if we can get a few million users and then figure out a business model.” In that case, you’re just trying to get a few million monthly users, so there’s just one number.
Maybe you want to build an advertising-based business. Then monthly users aren’t important, but pageviews are. Start by looking at the numbers that are relevant to comparable businesses, and to your market in general. At some point, you’ll need to attract investors or sell the business. What metrics do investors or acquirers find relevant in your industry? If you’re an ad-based business and you talk about the number of monthly users, the market might not understand: “We don’t care. We want to know how many _pageviews_ those users are consuming each month. Then we can tell whether you have a good business.”

The metrics associated with specific features give you the information you need to make those features work the way you want them to. You might think a particular feature should move the needle on one of the macro metrics that are basic to your business, but it doesn’t. That doesn’t mean you should abandon the feature. You need to look at the metrics around it, because maybe you can fix it so that it has the right effect on the macro metric. Sometimes a feature does well in its own right but has a detrimental effect on the macro metrics that matter to the business. If you’re only tracking metrics associated with features, you might get a lot of usage, but you won’t improve the business.
But if you’re only watching the macro metrics, you might not understand why a feature that should have an effect isn’t working.
What are the fundamental metrics at Learnist?
There are just a handful of things we really care about: month-over-month retention rate; pageviews; and returning-user rate, which is tied to month-over-month retention and the number of times an average user uses the app in a month. We build features designed to move the needle on those things, launch them, and see whether they work.

One hypothesis was that if people had access to more fresh content, they’d engage more with the app, and that would improve the retention rate. So we built a feed-like experience, released it, and watched. We saw a really big improvement. Now we’re looking at what we learned from that experiment, as well as other data, and we’re rolling that into our next sprint. We’ve grown 20x since this time last year, so we feel good, pushing a couple of million active monthlies. Now we have to do it again.
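The two headline numbers Nivi names — month-over-month retention and returning-user rate — can both be computed from a simple event log of which users were active in which month. A minimal sketch (the event schema and field names are illustrative, not Learnist’s actual data model):

```python
from collections import defaultdict

def monthly_actives(events):
    """Group active user IDs by month from (user_id, month) event tuples."""
    actives = defaultdict(set)
    for user_id, month in events:
        actives[month].add(user_id)
    return actives

def retention_rate(actives, prev_month, month):
    """Fraction of prev_month's active users who came back the next month."""
    prev = actives[prev_month]
    if not prev:
        return 0.0
    return len(prev & actives[month]) / len(prev)

# Toy data: users a and c return in December; b churns; d is new.
events = [
    ("a", "2013-11"), ("b", "2013-11"), ("c", "2013-11"),
    ("a", "2013-12"), ("c", "2013-12"), ("d", "2013-12"),
]
actives = monthly_actives(events)
print(retention_rate(actives, "2013-11", "2013-12"))  # 2 of 3 returned
```

Running a feature experiment like the feed then reduces to comparing this number before and after the release (or between cohorts).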
How did you decide on your fundamental metrics?
First we got our assumptions down. What handful of things must occur for this business to be successful? Then we picked a metric to reflect each assumption and instrumented it to make sure we were getting that data. We threw the app out there and got a baseline for each number. From there, we modeled out the business to see whether those data points would result in a scalable business. We built a dashboard that has each of the assumptions written in English and, next to each, the metric we’re tracking, and that feeds a model of the business.
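The structure Nivi describes — plain-English assumptions paired with tracked metrics, feeding a projection — can be sketched in a few lines. The assumptions, metric names, and numbers below are purely illustrative, not Learnist’s actual dashboard or figures:

```python
# Each business assumption, paired with the metric chosen to reflect it
# and the measured baseline. All values here are hypothetical.
assumptions = [
    {"assumption": "Fresh content brings users back every month",
     "metric": "mom_retention", "baseline": 0.30},
    {"assumption": "Organic sign-ups continue at the current pace",
     "metric": "new_users_per_month", "baseline": 800_000},
]

def project_monthly_actives(current_users, retention, new_users, months):
    """Toy model: next month's actives = retained actives + new sign-ups."""
    users = current_users
    for _ in range(months):
        users = users * retention + new_users
    return users

# Feed the baselines into the model to see where the business trends.
projection = project_monthly_actives(
    current_users=1_000_000,
    retention=assumptions[0]["baseline"],
    new_users=assumptions[1]["baseline"],
    months=6,
)
print(round(projection))
```

The point of the exercise is the one Nivi makes: with baselines plugged in, you can see whether the assumptions, if they hold, add up to a scalable business — here, actives converge toward new_users / (1 − retention) rather than growing without bound.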
Tell us about the dashboard.
We built a pretty simple dashboard because we’ve built complicated ones in the past.
You’ve got to keep things simple. I can’t stress that enough. The overhead of building good instrumentation can crush your ability to get good metrics. We’ve gone from rolling a lot of our own tools to using as much off-the-shelf stuff as we possibly can.
How can focusing on instrumentation crush your ability to get good metrics?
When you roll your own instrumentation, you’re building another product, which requires resources that could be devoted to your actual product. Instead, they’re devoted to making a chart to show you metrics. Depending on the size of your team and the amount of data you’re gathering, it can slow things down. You’re trying to build Instagram, but you’re also building an instrumentation product, and trying to do too much at once is the kiss of death for a small organization. You don’t realize it’s happening. Suddenly it’s, “We’re building two products and neither is getting done.”

I have a quantum physicist doing my data analytics, but I’d rather he spend his time making meaning out of data than writing code to do instrumentation. We go through periods where it seems like the instrumentation is three weeks behind the product, but worrying about that is pointless. We can still look at the data a couple of weeks later; it’s not the end of the world. The salient point is to match the speed of your instrumentation to the speed of your feature development.
What tools do you use?
We’re super hardcore instrumented in Google Analytics, and I recommend that people put a lot of energy into getting good at using it. If Google Analytics can’t provide the information you need, then I wonder whether you’re looking for the right info. We also use Chartio. It lets you mash up your own data with Google Analytics to produce charts that we use to educate other stakeholders in the organization. We’ve built a nice split-testing infrastructure.
We use a combination of our own code and third-party stuff that makes it easy to run split tests and get reporting back.
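The interview doesn’t describe how Learnist’s split-testing code works, but a common minimal approach is deterministic hash-based bucketing: hashing the experiment name together with the user ID assigns each user to a variant with no assignment table to store, and the same user always lands in the same bucket. A sketch under that assumption (function and experiment names are hypothetical):

```python
import hashlib

def assign_variant(user_id, experiment, variants=("control", "treatment")):
    """Deterministically bucket a user for a split test.

    Hashing the (experiment, user) pair means assignment is stable across
    sessions and servers, and each experiment reshuffles users independently.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# The same user gets the same variant every time they load the feature.
print(assign_variant("user-42", "fresh-content-feed"))
```

Reporting then just segments the macro metrics (retention, pageviews) by the variant each user was assigned.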
Do you recall a pivot-or-persevere moment when metrics made the difference?
The biggest thing for us was looking at return rate. We wanted to see whether we could get people to return without spending money. It doesn’t seem like a serviceable business if the organic return rate isn’t over 30 percent pretty early. I think about it in terms of trying to double that number. If you go from 5 to 10 percent or 10 to 20 percent, you’re not accomplishing much. But if you can go from 30 to 60 percent, that’s great. So it was a major milestone when we got to 30 percent.
Did you consider that point product/market fit?
It depends on your definition. Anything early-stage is the netting out of the team, idea, and market. To the extent that there’s enough alignment between two of those three, the market, including employees, investors, and so on, will probably be interested in seeing what can come of this. Eventually you reach the point where the players involved feel that the unknowns are worth exploring because the potential rewards are very high.
How do you define product/market fit?
Some people say you’ll know when you get there. That’s like saying, you’ll know you’re rich when you have a billion dollars. I want to know when I’m trending toward becoming rich. So it’s not useful to say that product/market fit is when a product is growing like crazy. What’s useful is knowing when you’re close enough that it’s worth going on, or you’re so far away that it’s pointless to continue.
How do you avoid vanity metrics and keep the focus on actionable metrics?
You have to have intellectual honesty, with yourself and among your team, to ask which metrics are relevant to your business; all other metrics are vanity. It’s not as simple as saying that pageviews are a vanity metric. If you’re building an ad-based business, pageviews may be relevant to how you model your business. If you’re building a SaaS project management tool like Pivotal Tracker, you probably don’t care about pageviews, because you want people to pay monthly for the product.
How does a project’s engine of growth affect your choice of metrics?
It’s a useful framework for thinking about what to do. That’s a starting point, and your metrics should match what you’re doing.
At what point in a project do you start modeling?
If it’s a direct-purchase business, I always start with a model. If I’m selling, say, jet engines, I’m looking at how many engines I need to sell a year, at what price, and how much it costs to build one. If it’s a consumer media property based on usage, I build a product first. A model won’t be useful for telling me whether I’ll have 20 million or 30 million active monthly users, so I just focus on scaling up. Once I have baseline metrics, I build a specific and sophisticated model of the business informed by those baselines, my assumptions about how to get to 20 million active users, and tests that help me determine user acquisition costs and per-user revenue.
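The direct-purchase model Nivi starts with is simple arithmetic: units sold times per-unit margin, minus fixed costs. A sketch with entirely hypothetical jet-engine numbers:

```python
def annual_profit(units_sold, price, unit_cost, fixed_costs):
    """Direct-purchase model: profit = units * (price - unit cost) - fixed costs."""
    return units_sold * (price - unit_cost) - fixed_costs

# Illustrative numbers only: 40 engines/year at a $3M margin each,
# against $50M in fixed costs.
profit = annual_profit(
    units_sold=40, price=12_000_000, unit_cost=9_000_000, fixed_costs=50_000_000
)
print(profit)  # 70000000
```

With a model this explicit, each input is an assumption you can test, which is exactly the contrast Nivi draws with usage-based consumer products, where baselines have to be measured before a model is worth building.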
What features have you added that had the greatest impact?
Search engine optimization. I didn’t expect that. SEO had a lot to do with our 20x growth over the past year. You have to study how the search engines are looking at you.
The search engines provide tools to do that, and there are other tools as well. They’ll point out problem areas that make your site difficult for Google to see, and if you fix them, you’ll rise in the search rankings, provided Google finds your pages relevant. It goes beyond optimizing single pages. You can tell Google how the information is organized and provide additional forms of metadata. Most developers don’t think along those lines, so frequently you’ll get a website that Google can’t read well. About 30 percent of our traffic comes from SEO. That was one of the big learnings early on, and it focused us on other questions: What percentage is mobile? What percentage is organic versus referral versus direct? Is 30 percent the maximum? Can we get more? Asking those questions and having good ways to answer them is extremely helpful.
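One concrete form of the “additional metadata” Nivi mentions is schema.org structured data, which Google reads from JSON-LD embedded in a page. A sketch of generating such a snippet — the page type and field values are hypothetical, not Learnist’s actual markup:

```python
import json

# Hypothetical schema.org description of a content page; values illustrative.
structured_data = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Intro to Lean Startup Metrics",
    "author": {"@type": "Person", "name": "Farbood Nivi"},
    "datePublished": "2013-11-01",
}

# Embed as a JSON-LD script tag in the page's <head>.
snippet = (
    '<script type="application/ld+json">'
    + json.dumps(structured_data)
    + "</script>"
)
print(snippet)
```

Markup like this tells the crawler what a page is about in a machine-readable way, rather than leaving it to infer structure from the HTML alone.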
Farbood Nivi, Co-founder