How it works

  1. Form your cart
  2. Checkout with invoice
  3. We'll send you a link to book a discovery call
  4. Once we've clarified everything together and the invoice is paid, we start working on your project
Full service agency

For those who need design, coding, or digital marketing, GRIN tech is a full-service agency that delivers turn-key solutions.

🤙 Book a discovery call

👋 Can a full service agency build you a business?

White label

For agencies & industry professionals who want to focus on strategy and business growth, GRIN tech offers a white-label solution that covers the full cycle: design, code & marketing (including PPC, SEO, link building, content, and lead generation).

🤙 Book a discovery call

Prospecting platform

For sales professionals engaged in b2b prospecting, Hound is a prospecting platform that helps fill the pipeline with qualified leads. Unlike the competition, our product features both data discovery & message delivery tools.

🤘 Give it a try

Outreach

For companies who need backlinks, the Launcher is a productized outreach service that delivers opportunities to your Trello board on autopilot. Unlike the competition, our service is transparent & pricing is performance-based.

👋 Watch explainer video

🤙 Book a discovery call

Cohort analysis: growth metrics vs product metrics

Cohort analysis is a subset of behavioral analytics and a vital part of business analytics and business intelligence in the pursuit of improving user experience and reducing churn. You can run it in tools like Google Analytics. The datasets you acquire through cohort analysis provide invaluable information for improving your business, whether it's an e-commerce platform or a SaaS company, and the data-driven decisions they enable will strongly benefit it.

But before we start speaking about cohort analysis, it's essential to understand the difference between growth & product metrics.

Let’s compare two cars and try to figure out which one is better:

  • Car A has a mileage of 1,200 miles, and car B has a mileage of 7,200 miles;
  • Car A is used five times a week, and car B — four times;
  • During the recent month, car A averaged 5 miles per day, car B — 10 miles per day;
  • At this very moment, car A moves at a speed of 60 mph, car B — at a speed of 40 mph.

The given data points don't answer the question of which car is superior. What is this all about? When it comes to online projects or mobile app development, everyone starts tracking metrics like DAU, MAU, revenue, and the total number of registrations to evaluate the results of changes and the efficiency of the marketing strategy. Just like in the case above: it makes no sense.

All of the above are growth metrics, and they're only good for taking an overview of the situation but completely useless for product development. Why? Because you can neither make product decisions nor evaluate product changes based on them.

The head of product development should be interested first of all in the product's "volume" and "density," not its "mass" in general. "Mass" states a fact without explaining where the mass came from or how you can influence it. You should strive to decompose key metrics into separate elements, determining the leverage each one can provide.

Read also: why you should avoid vanity metrics and what they are

Nobody can handle these tasks without analysts. Analytics is your quality feedback on actions. First, an analyst lets you understand where you are as a business, what kind of product you have, and how it is used. Then they let you see how updates and changes affect the product. Analytics consists of three stages: "measure," "data," and "learn." See the picture below:


One of the most effective tools of product analytics is cohort analysis. We are going to look into what it is and why it is essential.

Why growth metrics are meaningless for product analytics

Let’s pretend that there is a product that has the following characteristics:

  • customer acquisition cost — $1;
  • the average revenue per user is $2 for the next four months;
  • 30% of new users continue to use the product after a month (further on, there’s a gradual drop off to 15%);
  • the promotion team will attract 10,000 new users in the first month after launch; 15,000 in the second month, 20,000 in the third, and so on;
  • the product manager, responsible for product development, introduces monthly changes to it. The changes are unsuccessful: after each one, the average revenue per user drops by $0.1, and the retention rate falls by 2%.
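The model above can be sketched in a few lines of Python. The survival curve is a simplifying assumption on my part (a cohort keeps its month-1 share for one month, then plateaus at 15%), but it is enough to show growth metrics rising while the product itself gets worse:

```python
def simulate_mau(months=9):
    """MAU per month for a product whose promotion scales up while
    each monthly release degrades month-1 retention by 2 points."""
    mau_history = []
    cohorts = []  # (cohort size, month-1 retention)
    for m in range(months):
        new_users = 10_000 + 5_000 * m            # 10k, 15k, 20k, ...
        m1_retention = max(0.30 - 0.02 * m, 0.0)  # each release hurts retention
        cohorts.append((new_users, m1_retention))
        mau = 0
        for i, (size, r) in enumerate(cohorts):
            age = m - i
            if age == 0:
                mau += size            # acquired this month
            elif age == 1:
                mau += size * r        # survived one month
            else:
                mau += size * 0.15     # assumed long-term plateau at 15%
        mau_history.append(round(mau))
    return mau_history

print(simulate_mau(9))
```

Despite the deteriorating retention of every new cohort, the MAU series climbs month after month, because acquisition grows faster than the product decays.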

The company that develops this product customarily monitors the monthly audience (MAU, or Monthly Active Users) and the profit of each project. KPIs and the team's success are evaluated according to these metrics.

After the first nine months of tracking these metrics, upper management is delighted with the results and the success of the product manager. But remember: our product manager consistently spoils the product every month! Why do the growth metrics steadily go up, then?

You can see the same charts below, but on a scale of 16 months. We can finally notice the first signs of lousy product changes; they only show up after 12 months.


The fact is that growth metrics are affected by two components: product and promotion. You cannot separate these two factors merely by tracking growth metrics. That is why growth metrics are inappropriate for product analytics.

With a properly conducted analysis, we would be able to see the impact of product updates during the first months or even weeks.

By the way, we have a solution called Hire dedicated team to build your MVP. It also features extensive writing on launching products and getting feedback as early as possible.

The essence of cohort analysis

The audience of your product is a mix of those who started using your service today, yesterday, a month ago, and so on. Analyzing this heterogeneous mass and trying to draw any conclusions from it is a thankless job.

The idea of cohort analysis is to split users into groups sharing common characteristics, and analyze user behavior in these groups over a defined period.

User groups (cohorts) are usually formed according to the week (month, year) when users started using the app. By creating such groups, we can monitor them over a period of time and measure critical metrics for each particular cohort. Comparing the metrics of different cohorts side by side, we can objectively compare the versions of the product released over time.
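The grouping itself is mechanical; here is a minimal sketch of forming weekly cohorts and computing a retention table from raw signup and activity records (the data shapes are my own assumptions, not tied to any analytics tool):

```python
from datetime import date, timedelta
from collections import defaultdict

def week_start(d: date) -> date:
    """Monday of the week containing d, used as the cohort key."""
    return d - timedelta(days=d.weekday())

def retention_table(signups: dict, activity: list):
    """signups: user -> signup date; activity: (user, active date) pairs.
    Returns {cohort week: {week offset: share of the cohort active}}."""
    cohort_of = {u: week_start(d) for u, d in signups.items()}
    cohort_size = defaultdict(int)
    for c in cohort_of.values():
        cohort_size[c] += 1
    active = defaultdict(set)  # (cohort, week offset) -> users seen
    for user, day in activity:
        c = cohort_of[user]
        offset = (week_start(day) - c).days // 7
        active[(c, offset)].add(user)
    return {
        c: {off: len(users) / cohort_size[c]
            for (cc, off), users in active.items() if cc == c}
        for c in cohort_size
    }
```

For example, two users signing up in the week of Jan 1 with only one of them returning the next week would yield a week-1 retention of 0.5 for that cohort.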

There is a correlation between the depth of analysis and cohort size: for a more in-depth analysis, we need to segment the selected cohorts by traffic source, platform, country, and other factors that make sense for your particular product.

The values of critical metrics will most likely differ for each segment, just like different product changes will affect various segments of users in different ways.

Key product metrics: LTV and CAC

Two key metrics that ultimately determine the financial success of your product are LTV (Lifetime Value) and CAC (Customer Acquisition Cost).

  1. LTV is the money that an average user spends in your mobile application during the customer lifecycle.
  2. CAC is how much it costs you to attract that average user.
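Using the numbers from the hypothetical product above ($1 CAC, $2 of revenue per user over the first four months), the relationship between the two metrics is a one-liner; the monthly breakdown of the $2 is my own illustrative assumption:

```python
# Hypothetical monthly ARPU breakdown summing to the $2 from the example.
monthly_arpu = [0.8, 0.6, 0.4, 0.2]
cac = 1.0  # cost to acquire the average user, $

ltv = sum(monthly_arpu)
ratio = ltv / cac
# A ratio above 1.0 means the average user eventually pays back
# their acquisition cost; here the product is profitable per user.
```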

In this article, we only cover the ways to work with these metrics; separate articles explain why these two metrics are so crucial for your product and how they affect your business performance.

LTV is a critical metric that reflects the value of your product for your users and customers. It is this metric that should be at the forefront of product development.

LTV is a beautiful metric, but it is high-level. To understand how to influence it, you need to decompose it into more straightforward and mundane metrics.

Decomposing LTV into product metrics

Metrics are typically bound to the critical points of the user’s life cycle in the application. Thus, we create an opportunity to track the success of the application promotion and find bottlenecks that require our attention.

I usually track the user's path in the product in terms of their involvement and monetization.

Involvement is represented by these steps of the user's lifecycle:

  1. Activation in the application
  2. User gets hooked on the app
  3. Long-term retention (how many users continue to use the product in a month, two months, and so on after registration)

Monetization is represented by this sequence of steps in the user's lifecycle:

  1. Activation in the application
  2. Purchase offer displayed
  3. First purchase made
  4. Second purchase made

The metrics corresponding to each of the stages of the customer lifecycle in the product (the metrics may differ for different products):

  • Activation in the application (% of those who passed the tutorial or completed a key target action in the application, for example, registered and added their first friends);
  • User gets hooked on the app (% of users who reached level N or, for example, added N friends; the number N is determined experimentally);
  • Purchase offer displayed (% of users who saw the offer to buy);
  • First purchase (% of those purchasing something in the app, plus the average amount of the first purchase);
  • Second purchase (% of those who made a repeat purchase, the average sum of a repurchase, and the average number of repeat purchases);
  • User retention (% of users who still use the application a month/two/three after registration).

All these metrics ultimately affect LTV. Every product has its individual aspects, but the basic milestones and metrics given above will work well for most.
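One way to make "all these metrics affect LTV" concrete is to write the decomposition out. Every number below is an illustrative assumption of mine, not a figure from the article:

```python
# Hypothetical product metrics for an abstract app:
p_first_purchase = 0.014   # share of users making a first purchase
avg_first = 5.00           # average first-purchase amount, $
p_repeat = 0.30            # share of buyers who buy again
avg_repeat = 4.00          # average repeat-purchase amount, $
n_repeats = 2.5            # average number of repeat purchases per repeat buyer

# Revenue per acquired user: the first purchase, plus the expected
# stream of repeat purchases from the buyers who come back.
ltv = p_first_purchase * (avg_first + p_repeat * avg_repeat * n_repeats)
```

A decomposition like this shows the leverage each lower-level metric gives you: doubling the share of users who see the purchase offer, or lifting the repeat-purchase rate, moves LTV directly.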

Product metrics and how they affect LTV

Consider the product metrics described above and how they affect LTV, using the example of an abstract game.

Activation in the application

In any game, the user is first trained through a tutorial. Those who have not made it past the tutorial will most likely not continue to play, let alone pay. That's why it's critical for us to track users who successfully pass this stage.

It is also useful to track the share of those who were able to perform some targeted actions at the end of the tutorial (meaning a user who has successfully learned and can now play independently). Such a metric reflects how well the learning process is designed.

The user gets hooked on the application

Users most likely will not pay if they are not carried away by your game. That's why we need to track those who play often and consistently. So we measure the share of those who have made it to level N, or those who have used the app more than five times during the week since installation.

This metric is usually determined experimentally.

The user saw a purchase offer and made the very first purchase

One of our goals is to generate revenue, so we need to stimulate the first purchase in the application. But the purchase is made from a particular screen of our app (for example, from the store screen), so you need to track the share of users who made it to this screen.

If only 10% of incoming users see the purchase screen, this automatically limits the share of users who can make the first purchase in our game.


The first purchase is good, but financially successful products usually have high repeat-purchase rates. The first purchase is a real advance of the user's trust in the application: if they are satisfied with the result and the benefit obtained, a repurchase is most likely to follow. Therefore, the share of users making repeat purchases (as well as the average number of repurchases) is another critical metric.


For users to have a chance to make several purchases, they must continue to play your game for a long time, rather than abandon and uninstall it after a day. To track this phenomenon, we measure retention.

Building product analytics and an example of using cohort analysis

The easiest way to implement product analytics is to create funnels for each of the events described above.

In most cases, you will get a monetization funnel and a user involvement funnel.
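The monetization funnel can be sketched as a simple ordered count over an event log. The event names here are illustrative placeholders, not the API of any particular analytics tool:

```python
from collections import defaultdict

MONETIZATION_FUNNEL = ["activated", "saw_offer", "first_purchase", "second_purchase"]

def funnel_conversion(events, steps=MONETIZATION_FUNNEL):
    """events: (user, event name) pairs. Returns the share of all users
    reaching each step, requiring every earlier step to be reached too."""
    seen = defaultdict(set)  # event name -> users who fired it
    for user, name in events:
        seen[name].add(user)
    users = {u for u, _ in events}
    shares = []
    reached = users
    for step in steps:
        reached = reached & seen[step]  # must have passed all prior steps
        shares.append(len(reached) / len(users))
    return shares
```

Feeding in events for, say, four users where only one reaches the second purchase yields a shrinking list of shares, one per funnel step, which is exactly the shape you would then compare cohort by cohort.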

Next, you need to compare the metrics of your product for each specific cohort, formed based on the week they started using the app. Mixpanel and Localytics are ideal tools for such analysis.

Using cohort analysis for product analytics is the more complex but also the most productive approach. It provides a deeper understanding of the product, how people use it, and how usage changes over time (a behavioral analytics approach also allows us to form behavioral cohorts).

We will build user cohorts based on when users started using the application. To simplify the example, it contains only these metrics: CAC, LTV, retention, % of users who made a first purchase, % of users who made a repurchase, and acquisition date. The cohorts were not segmented further.

Below is a cohort analysis chart of the product in question (you can assume that this is a game or a tourist application).

During the first week, the first version of our application gained 3,000 users. At the end of "week zero," 25% of them had completed the tutorial, but no one had paid. By the end of the first week, another 5% had gone through the tutorial (for a total of 30%), while 1.2% had made a first purchase. By the end of the second week, the tutorial had been completed by 34% of the users in the cohort, and a first purchase had been made by 1.4%.

A week later, we changed the tutorial and released a new version of the application. As we can see from the cohort analysis report, it worked! By the end of the fourth week, 47% of users had completed the tutorial (previously only 34%). Widening the monetization funnel at the tutorial stage also increased the share of those who made a first purchase. Unfortunately, our users do not make repeat purchases, which makes it impossible for our product to break even, even though the promotion team managed to significantly reduce CAC (albeit while reducing the inflow of new users). We spend $0.8 to attract a customer, and after eight weeks we earn only $0.5 from the average user.

In the third version of the app, we finalized the tutorial and added new purchases, increasing the variety. That allowed us to improve the share of repeat purchases and equalize LTV with CAC.

That is an approximate example of how cohort analysis allows us to understand our product and figure out which updates work and which don't.

To sum up cohort analysis

The most challenging stage of working on a product comes after the primary metrics have emerged, so the following questions arise:

  • Are the values of these metrics good or bad?
  • Which metric should I prioritize to work on in the next iteration of the product?
  • Which improvement hypotheses should I test first?

Stay tuned for upcoming articles.