Startup Metrics

Sometimes, however, the metrics may not be the best gauge of what’s actually happening in the business, or people may use different definitions of the same metric in a way that makes it hard to understand the health of the business. So, while some of this may be obvious to many of you who live and breathe these metrics all day long, we compiled a list of the most common or confusing ones. Where appropriate, we tried to add some notes on why investors focus on those metrics. Ultimately, though, good metrics aren’t about raising money from VCs — they’re about running the business in a way where founders know how and why certain things are working (or not) … and can address or adjust accordingly. Business and Financial Metrics #1 Bookings vs. Revenue A common mistake is to use bookings and revenue interchangeably, but they aren’t the same thing. Bookings is the value of a contract between the company and the customer. It reflects a contractual obligation on the part of the customer to pay the company. Revenue is recognized when the service is actually provided or ratably over the life of the subscription agreement. How and when revenue is recognized is governed by GAAP. Letters of intent and verbal agreements are neither revenue nor bookings. #2 Recurring Revenue vs. Total Revenue Investors more highly value companies where the majority of total revenue comes from product revenue (vs. from services). Why? Services revenue is non-recurring, has much lower margins, and is less scalable. Product revenue is what you generate from the sale of the software or product itself. ARR (annual recurring revenue) is a measure of revenue components that are recurring in nature. It should exclude one-time (non-recurring) fees and professional service fees. ARR per customer: Is this flat or growing? If you are upselling or cross-selling your customers, then it should be growing, which is a positive indicator for a healthy business. MRR (monthly recurring revenue): Often, people will multiply one month’s all-in bookings by 12 to get to ARR. Common mistakes with this method include: (1) counting non-recurring fees such as hardware, setup, installation, professional services/consulting agreements; (2) counting bookings (see #1). #3 Gross Profit While top-line bookings growth is super important, investors want to understand how profitable that revenue stream is. Gross profit provides that measure. What’s included in gross profit may vary by company, but in general all costs associated with the manufacturing, delivery, and support of a product/service should be included. So be prepared to break down what’s included in — and excluded from — that gross profit figure. #4 Total Contract Value (TCV) vs. Annual Contract Value (ACV) TCV (total contract value) is the total value of the contract, and can be shorter or longer in duration. Make sure TCV also includes the value from one-time charges, professional service fees, and recurring charges. ACV (annual contract value), on the other hand, measures the value of the contract over a 12-month period. Questions to ask about ACV: What is the size? Are you getting a few hundred dollars per month from your customers, or are you able to close large deals? Of course, this depends on the market you are targeting (SMB vs. mid-market vs. enterprise). Is it growing (and especially not shrinking)? If it’s growing, it means customers are paying you more on average for your product over time. 
That implies either your product is fundamentally doing more (adding features and capabilities) to warrant that increase, or is delivering so much value to customers (improved functionality over alternatives) that they are willing to pay more for it. See also this post on ACV. #5 LTV (Life Time Value) Lifetime value is the present value of the future net profit from the customer over the duration of the relationship. It helps determine the long-term value of the customer and how much net value you generate per customer after accounting for customer acquisition costs (CAC). A common mistake is to estimate the LTV as a present value of revenue or even gross margin of the customer instead of calculating it as net profit of the customer over the life of the relationship. Reminder, here’s a way to calculate LTV: Revenue per customer (per month) = average order value multiplied by the number of orders. Contribution margin per customer (per month) = revenue from customer minus variable costs associated with a customer. Variable costs include selling, administrative and any operational costs associated with serving the customer. Avg. life span of customer (in months) = 1 divided by your monthly churn. LTV = Contribution margin from customer multiplied by the average lifespan of customer. (A short worked sketch of this arithmetic appears as code at the end of this article.) Note, if you have only a few months of data, the conservative way to measure LTV is to look at historical value to date. Rather than predicting average life span and estimating how the retention curves might look, we prefer to measure 12 month and 24 month LTV. Another important calculation here is LTV as it contributes to margin. This is important because a revenue or gross margin LTV suggests a higher upper limit on what you can spend on customer acquisition. Contribution Margin LTV to CAC ratio is also a good measure to determine CAC payback and manage your advertising and marketing spend accordingly. See also Bill Gurley on the “dangerous seductions” of the lifetime value formula. #6 Gross Merchandise Value (GMV) vs. Revenue In marketplace businesses, these are frequently used interchangeably. But GMV does not equal revenue! GMV (gross merchandise volume) is the total sales dollar volume of merchandise transacting through the marketplace in a specific period. It’s the real top line, what the consumer side of the marketplace is spending. It is a useful measure of the size of the marketplace and can be useful as a “current run rate” measure based on annualizing the most recent month or quarter. Revenue is the portion of GMV that the marketplace “takes”. Revenue consists of the various fees that the marketplace gets for providing its services; most typically these are transaction fees based on GMV successfully transacted on the marketplace, but can also include ad revenue, sponsorships, etc. These fees are usually a fraction of GMV. #7 Unearned or Deferred Revenue … and Billings In a SaaS business, this is the cash you collect at the time of the booking in advance of when the revenues will actually be realized. As we’ve shared previously, SaaS companies only get to recognize revenue over the term of the deal as the service is delivered — even if a customer signs a huge up-front deal. So in most cases, that “booking” goes onto the balance sheet in a liability line item called deferred revenue. 
(Because the balance sheet has to “balance,” the corresponding entry on the assets side of the balance sheet is “cash” if the customer pre-paid for the service or “accounts receivable” if the company expects to bill for and receive it in the future). As the company starts to recognize revenue from the software as service, it reduces its deferred revenue balance and increases revenue: for a 24-month deal, as each month goes by deferred revenue drops by 1/24th and revenue increases by 1/24th. A good proxy to measure the growth — and ultimately the health — of a SaaS company is to look at billings, which is calculated by taking the revenue in one quarter and adding the change in deferred revenue from the prior quarter to the current quarter. If a SaaS company is growing its bookings (whether through new business or upsells/renewals to existing customers), billings will increase. Billings is a much better forward-looking indicator of the health of a SaaS company than simply looking at revenue because revenue understates the true value of the customer, which gets recognized ratably. But it’s also tricky because of the very nature of recurring revenue itself: A SaaS company could show stable revenue for a long time — just by working off its billings backlog — which would make the business seem healthier than it truly is. This is something we therefore watch out for when evaluating the unit economics of such businesses. #8 CAC (Customer Acquisition Cost) … Blended vs. Paid, Organic vs. Inorganic Customer acquisition cost or CAC should be the full cost of acquiring users, stated on a per user basis. Unfortunately, CAC metrics come in all shapes and sizes. One common problem with CAC metrics is failing to include all the costs incurred in user acquisition such as referral fees, credits, or discounts. Another common problem is to calculate CAC as a “blended” cost (including users acquired organically) rather than isolating users acquired through “paid” marketing. While blended CAC [total acquisition cost / total new customers acquired across all channels] isn’t wrong, it doesn’t inform how well your paid campaigns are working and whether they’re profitable. This is why investors consider paid CAC [total acquisition cost/ new customers acquired through paid marketing] to be more important than blended CAC in evaluating the viability of a business — it informs whether a company can scale up its user acquisition budget profitably. While an argument can be made in some cases that paid acquisition contributes to organic acquisition, one would need to demonstrate proof of that effect to put weight on blended CAC. Many investors do like seeing both, however: the blended number as well as the CAC, broken out by paid/unpaid. We also like seeing the breakdown by dollars of paid customer acquisition channels: for example, how much does a paying customer cost if they were acquired via Facebook? Counterintuitively, it turns out that costs typically go up as you try and reach a larger audience. So it might cost you $1 to acquire your first 1,000 users, $2 to acquire your next 10,000, and $5 to $10 to acquire your next 100,000. That’s why you can’t afford to ignore the metrics about volume of users acquired via each channel. Product and Engagement Metrics #9 Active Users Different companies have almost unlimited definitions for what “active” means. 
Some charts don’t even define what that activity is, while others include inadvertent activity — such as having a high proportion of first-time users or accidental one-time users. Be clear on how you define “active.” #10 Month-on-month (MoM) growth Often this is measured as the simple average of monthly growth rates. But investors often prefer to measure it as CMGR (Compounded Monthly Growth Rate) since CMGR measures the periodic growth, especially for a marketplace. Using CMGR [CMGR = (Latest Month / First Month)^(1 / # of Months) - 1] also helps you benchmark growth rates with other companies, which would otherwise be difficult to compare due to volatility and other factors. The CMGR will be smaller than the simple average in a growing business. #11 Churn There’s all kinds of churn — dollar churn, customer churn, net dollar churn — and there are varying definitions for how churn is measured. For example, some companies measure it on a revenue basis annually, which blends upsells with churn. Investors look at it the following way: Monthly unit churn = lost customers / prior month total. Retention by cohort: Month 1 = 100% of installed base; Latest Month = % of original installed base that are still transacting. It is also important to differentiate between gross churn and net revenue churn — Gross churn: MRR lost in a given month / MRR at the beginning of the month. Net churn: (MRR lost minus MRR from upsells) in a given month / MRR at the beginning of the month. The difference between the two is significant. Gross churn estimates the actual loss to the business, while net revenue churn understates the losses (as it blends upsells with absolute churn). #12 Burn Rate Burn rate is the rate at which cash is decreasing. Especially in early stage startups, it’s important to know and monitor burn rate as companies fail when they are running out of cash and don’t have enough time left to raise funds or reduce expenses. As a reminder, here’s a simple calculation: Monthly cash burn = (cash balance at the beginning of the year minus cash balance at the end of the year) / 12. It’s also important to measure net burn vs. gross burn: Net burn [gross burn minus revenues (including all incoming cash you have a high probability of receiving)] is the true measure of the amount of cash your company is burning every month. Gross burn on the other hand only looks at your monthly expenses + any other cash outlays. Investors tend to focus on net burn to understand how long the money you have left in the bank will last for you to run the company. They will also take into account the rate at which your revenues and expenses grow as monthly burn may not be a constant number. See also Fred Wilson on burn rate. (The CMGR, churn, and burn calculations above are also illustrated in a short code sketch at the end of this article.) #13 Downloads Downloads (or number of apps delivered by distribution deals) are really just a vanity metric. Investors want to see engagement, ideally expressed as cohort retention on metrics that matter for that business — for example, DAU (daily active users), MAU (monthly active users), photos shared, photos viewed, and so on. Presenting Metrics Generally #14 Cumulative Charts (vs. Growth Metrics) Cumulative charts by definition always go up and to the right for any business that is showing any kind of activity. But they are not a valid measure of growth — they can go up-and-to-the-right even when a business is shrinking. Thus, the metric is not a useful indicator of a company’s health. Investors like to look at monthly GMV, monthly revenue, or new users/customers per month to assess the growth in early stage businesses. 
Quarterly charts can be used for later-stage businesses or businesses with a lot of month-to-month volatility in metrics. #15 Chart Tricks There are a number of such tricks, but a few common ones include not labeling the Y-axis; shrinking scale to exaggerate growth; and only presenting percentage gains without presenting the absolute numbers. (This last one is misleading since percentages can sound impressive off a small base, but are not an indicator of the future trajectory.) #16 Order of Operations It’s fine to present metrics in any order as you tell your story. When initially evaluating businesses, investors often look at GMV, revenue, and bookings first because they’re an indicator of the size of the business. Once investors have a sense of the size of the business, they’ll want to understand growth to see how well the company is performing. These basic metrics, if interesting, then compel us to look even further. As one of our partners who recently had a baby observes here: It’s almost like doing a health check for your baby at the pediatrician’s office. Check weight and height, and then compare to previous estimates to make sure things look healthy before you go any deeper! #17 Total Addressable Market (TAM) TAM is a way to quantify the market size/opportunity. But using the size of an existing market might actually understate the opportunity of new business models: For example, SaaS relative to on-premise enterprise software may have much lower average revenue per user but more than make up for it by expanding the number of users, thus growing the market. Or, something that provides an order of magnitude better functionality than existing options (like eBay relative to traditional collectible/antique dealers) can also grow the market. While there are a few ways to size a market, we like seeing a bottoms-up analysis, which takes into account your target customer profile, their willingness to pay for your product or service, and how you will market and sell your product. By contrast, a top-down analysis calculates TAM based on market share and a total market size. (There’s a primer with more detail about these approaches here.) Why do we advocate for the bottom-up approach? Let’s say you’re selling toothbrushes to China. The top-down calculation would go something like this: If I can sell a $1 toothbrush every year to 40% of the people in China, my TAM is 1.36B people x $1/toothbrush x 40% ≈ $540M/year. This analysis not only tends to overstate market size (why 40%?), it completely ignores the difficult (and expensive!) reality of getting your toothbrush into the hands of 540M toothbrush buyers: How would they learn about your product? Where do people buy toothbrushes? What are the alternatives? Meanwhile, the bottoms-up analysis would figure out TAM based on how many toothbrushes you’d sell each day/week/month/year through drugstores, grocery stores, corner mom-and-pop stores, and online stores. This type of analysis forces you to think about the shape and skillsets of your sales and marketing teams — required to execute on addressing market opportunity — in a far more concrete way. It is important not to “game” the TAM number when pitching investors. Yes, VCs seek to invest in big ideas. But many of the best internet companies sought to address what appeared to be modest TAMs in the beginning. 
Take eBay (collectibles and antiques) and Airbnb (rooms in other people’s places); in both these cases, the companies and their communities of users took the original functionality and dramatically expanded use cases, scaling well beyond original market size estimates. [See also our partner Benedict Evans on ways to think about market size, especially as applied to mobile.] #18 ARR ≠ Annual Run Rate While we’ve already made this point in part one of this post, we want to emphasize again that when software businesses use ARR, they mean annual recurring revenue, NOT annual run rate. It’s a mistake to multiply the recognized bookings — and in some cases revenue — in a given month by 12 (thus “annualizing it”) and call that number ARR. In a SaaS business, ARR is the measure of recurring revenue on an annual basis. It should exclude one-time fees, professional service fees, and any variable usage fees. This is important because in a given month you may recognize more revenue as a result of invoicing one-time services or support, and multiplying that number by 12 could significantly overstate your true ARR potential. In marketplace businesses — which are more transaction-based and typically do not have contracts — we look at current revenue run rates, by annualizing the GMV or revenue metric for the most recent month or quarter. One mistake we frequently see is marketplace GMV being referred to as “revenue”, which can overstate the size of the business meaningfully. GMV typically reflects what consumers are spending on the site, whereas revenue is the portion of GMV that the marketplace takes (“the take”) for providing their service. #19 Average Revenue Per User (ARPU) ARPU is defined as total revenue divided by the number of users for a specific time period,  typically over a month, quarter, or year. This is a meaningful metric as it demonstrates the value of users on your platform, regardless of whether those users buy subscriptions (such as telecom monthly subscriptions) or click on ads as they consume content. For pre-revenue companies, investors will often compare the prospects of a company against the known ARPU for established companies. For example, we know that Facebook generated $9.30 ARPU in FY2015Q2 from its U.S. and Canada users: So if we’re evaluating a company with an advertising business that has monetization potential comparable to Facebook, we ask: Do we believe the company can generate a quarter, half, just as much, or even more ARPU compared to Facebook? What would need to be true to justify this belief? How would the company achieve that (and do they have the ability do so)? #20 Gross Margins Continuing the conversation about gross margins from our first post, we wanted to say a little more here. Gross margin — which is a company’s total sales revenue minus cost of goods sold — can be considered an equalizer across businesses with different business models, where comparing relative revenue would otherwise be somewhat meaningless. Gross margin tells the investor how much money the company has to cover its operating expenses and (hopefully!) drop to the bottom line as profitability. A few examples to illustrate the point: E-commerce businesses typically have relatively low gross margins, as best exemplified by Amazon and its 27% figure. By contrast, most marketplaces (note here the distinction between e-commerce) and software companies should be high gross-margin businesses. 
Paraphrasing Jim Barksdale (the celebrated COO of Fedex, CEO of McCaw Cellular, and CEO of Netscape), “Here’s the magical thing about software: software is something I have, I can sell it to you, and after that, I still have it.” Because of this magical property, software companies should have very high gross margins, in the 80%-90% range. Smaller software companies might start with lower gross margins as they provision more capacity than they need, but these days with pay-as-you-go public cloud services, the need for small companies to buy and operate expensive gear has vanished, so even early stage companies can start out of the gate with relatively high gross margins. #21 Sell-Through Rate & Inventory Turns Sell-through rate is typically calculated in one way — number of units sold in a period divided by the number of items at the beginning of the period — but has different uses and implications in different types of businesses. In marketplace businesses, sell-through rate can also go by “close rate”, “conversion rate”, and “success rate”. Regardless of what it’s called, sell-through rate is one of the single most important metrics in a marketplace business. As investors, we like to see a relatively high rate so that suppliers are seeing good returns on the effort they put into posting listings on the marketplace. We also like to see this ratio improving over time, particularly in the early stages of marketplace development (as it often indicates developing network effects). In businesses that buy any kind of inventory — retailers, wholesalers, manufacturers — the sell-through rate is a key operating metric for managing inventory on a weekly or daily basis. It can reveal how well you matched supply of your product to demand for it, on a product-by-product basis. For many investors, however, inventory turns is a more useful metric than sell-through rate in inventory-based businesses, because it: — Talks to the capital efficiency of the business, where more turns are better — Provides clues as to the quality of the inventory, where slowing inventory turns over time can signal slowing demand as well as potential inventory impairments (which can lead to mark-downs or write-offs) Inventory turns typically are calculated by dividing the cost of goods sold for a period against the average inventory for that period. The most typical period used is annual. There are two different ways to improve inventory turns — (1) By increasing sales velocity on the same amount of inventory; (2) By decreasing the inventory needed to generate a given amount of sales. While both are fine, one caution on the latter: Managing inventory too closely can potentially impact sales negatively by not having enough stock to fulfill consumer demand. Economic and Other Defining Qualities #22 Network Effects Simply put, a product or service has a network effect when it becomes more valuable as more people use it/ devices join it (think of examples like the telephone network, Ethernet, eBay, and Facebook). By increasing engagement and higher margins, network effects are key in helping software companies build a durable moat that insulates them from competition. However, there is no single metric to demonstrate that a business has “network effects” (Metcalfe’s Law is a descriptive formulation, not a measure). But we often see entrepreneurs assert that their business has network effects without providing any supporting evidence. 
It’s hard for us to resolve whether a business indeed has network effects without this — leading us to more heated debates internally as well! Let’s use OpenTable as an example of a business with network effects. The OpenTable network effect was that more restaurant selection attracted diners, and more diners attracted restaurants. Here are some of the measures that helped demonstrate those network effects (we typically used measurements within one city to illustrate the point, as OpenTable’s network effect was largely local): The sales productivity of OpenTable sales representatives grows substantially over time, due in part to large increases in the number of inbound leads from restaurants over time.  This is more meaningful than the fact that the total restaurant base grows over time, as that can happen even without network effects. The number of diners seated at existing OpenTable restaurants grows substantially over time. This again is more meaningful than the fact that the total number of diners grows over time. The share of diners who come directly to OpenTable to make their reservation (versus going to the restaurants’ websites) grows substantially over time. Restaurant churn declines over time. As you can see, most of these metrics are specific to the network that OpenTable is building.  Other network-effects businesses — such as Airbnb, eBay, Facebook, PayPal — have very different metrics. So the most important thing in managing a business with network effects is to define what those metrics are, and track them over time. This may seem obvious, but the more intentional you are about — vs. “surprised” by — your network effects, the better your business will be able to sustain and grow them. Similarly, it’s important for prospective investors to see evidence of a network effect, that the entrepreneur understands exactly what it is, and how he or she is driving it. #23 Virality Where network effects measure the value of a network, virality is the speed at which a product spreads from one user to another. Note that viral growth does not necessarily indicate a network effect; this is important as these concepts are sometimes conflated! Virality is often measured by the viral coefficient or k-value — how much users of a product get other people to use the product [average number of invitations sent by each existing user * conversion rate of invitation to new user]. The bigger the k-value, the more this spread is happening. But it doesn’t only have to happen by word-of-mouth; the spread can also occur if users are prompted but not incentivized to invite friends, through casual contact with participating users, or through “inherent” social graphs such as the contacts in your phone. Here’s the basic math behind the k-value [there are some other more nuanced and sophisticated calculations here]: 1. Count your current users. Let’s say you have 1,000 users. 2. Multiply that count by the average number of invitations that your user base sends out. So if your 1,000 users send an average of 5 invites to their friends, the total number of users invited is 5,000. 3. Figure out how many of those invited users took the desired action within a defined period of time. As with all measurements, pick a meaningful metric for this action. For example, app downloads are not a great metric, because someone could easily download your app but never actually launch it. 
So let’s say you instead count users who register and play the first level of your game, and that comes out to 15% of the people who got invited or 750 people. 4. This means you started with 1,000 people and ended up with 1,750 people through this viral loop during your defined time period. The viral coefficient is the number of new people divided by the number of users you started with; in this case, 750/1000 = 0.75. Anything under 1 is not considered viral; anything above 1 is considered viral. The higher the number, the better, because it means your cost to acquire new customers will be lower than a product with a lower virality coefficient. Now if you can marry that with a high ARPU or lifetime value per customer, you have the beginnings of a great business. #24 Economies of Scale (“Scale”) Economies of scale imply that the product becomes cheaper to produce as business increases in size and output. A good measure of economies of scale is decreasing unit cost over time. A classic example is amazon’s 1P sales: It has economies of scale (shared warehouse facilities, cheaper shipping options, etc.). As the volume goes up, cost per unit of output decreases as fixed costs are spread over more units. Economies of scale could also reduce variable costs because of operational efficiencies. Just remember that “economies of scale” is different from “virality” and from “network effects”! Other Product and Engagement Metrics #25 Net Promoter Score (NPS) This is one that a number of people mentioned as missing from part one of this post. Which is a bit ironic given that we ourselves measure it for our own business (i.e., with both entrepreneurs we turn down and those who join our portfolio)! Basically, net promoter score is a metric (first shared in 2003) used to gauge customer satisfaction and loyalty to your offering. It is based on asking How likely is it that you would recommend our company/product/service to a friend or colleague? Here’s one way to calculate NPS: Ask your customers the above question and let them answer on a 0-to-10 Likert-type scale, with 10 being definitely likely % of promoters = number of respondents who ranked 9 or 10, divided by total number of respondents % of detractors = number of respondents who ranked ≤ 6, divided by total number of respondents NPS = % of promoters minus % of detractors One obvious issue with reporting NPS scores is skewing the sample by only surveying a subset of customers. The un-obvious issue here is that you may think it’s only worth measuring people who use your product “enough” — e.g., users who used the service >x times a month or for a period of at least y months — but that creates a biased sample. Some other common issues with reporting NPS metrics include only showing % of promoters (not accounting for detractors), or basing the score off a too-small sample size. Another issue, [as raised by Brad Porteus via Facebook comment], is comparing companies, which leads to misunderstanding and gaming scores; “Rather, focus on same company NPS trends — and pay close attention to optional comments from users.” Porteus also shares the UI advice that if NPS ratings are presented vertically on mobile devices, “the scores can differ by 20 points depending if you put 10 at the top and scroll down to 0, or vice versa”, and therefore recommends doing a 50/50 split on phone screens. When looking at NPS, we look for a couple of things: 1. To state the obvious, the higher the score the better. 
It indicates satisfied users, and satisfied users are more likely to be retained over time. On a related note, we also evaluate a score relative to the company’s competitive trend set whenever that information is available. 2. We also like to see NPS scores trending up over time. It’s a good leading indicator that the company is not only focused on their users, but is improving its value proposition over time. #26 Cohort Analysis Cohort analysis breaks down activities/ behavior of groups of users (“cohorts”) over a specific period of time that makes sense for your business — for example, everyone who signed up for your service in the first week of January — and then follows this group of users longer term: Who’s still using your product after 1 month, 3 months, 6 months, and so on? A good cohort analysis helps reveal how users engage with your product over time. Startup investors especially appreciate this because it helps us gauge how much people really love your product, since many startups are pre-revenue and so users may not have voted with their wallets just yet. Here are the steps for a cohort analysis: Pick the right set of metrics rather than a vanity metric (like app downloads) Pick the right period for a cohort — this will be typically be a day, a week, or a month depending on the business (shorter time periods typically make sense for younger businesses, and longer ones for more mature businesses) Period 1 (day, week, or month) — 100% of install base takes some action that is a leading indicator for revenue, such as buying a product, listing a product, sharing a photo, etc. Period 2 — calculate the % of install base that is still engaging in that action a week or month later Repeat the analysis for every subsequent cohort to see how behavior has evolved over the lifetime of each cohort Here’s an example of a weekly cohort analysis in Mixpanel. In this chart, you can observe the engagement levels of each cohort over time as measured by week. For example, of the 44 people who joined the week of October 7th, 2013, 2.27% were still engaged (color-coded below as a sort of “heat map” with shades getting lighter) 12 weeks later: The two trends we like to see in cohort analyses are: 1. Stabilization of retention in each cohort after a period such as 6 or 12 months. This means you are retaining your users and that your business is building a progressively larger base of recurring usage. 2. Newer cohorts performing progressively better than older cohorts. This typically implies that you are improving your product and its value proposition over time — and also gives us an indication of the team’s capabilities. #27 Registered Users Commenters pointed out the absence of this metric in the first post we published. And in some businesses, the number of registered users (as a proxy for engaged customers) can indeed provide some useful signal. But we often tend to discount registered users since we’ve seen multiple instances where it has been gamed, and growth in registered users did not lead to a growth in actual product usage. Also, registered users is one of those dreaded “cumulative” metrics that can go up-and-to-the-right even when a business is shrinking. So in most cases our preferred user metric is active users, which is more indicative of actual product use — and often translates directly to revenue potential over the long term. Read on for more about measuring and reporting on active users… #28 Active Users What does “active” users really mean? Inquiring minds want to know! 
But there is no single answer, since the definition of active user really varies by company; it depends on the business model. For instance, Facebook defines “active” as a registered user who logged in and visited the site via any device, or as a user who took an action to share content or activity with Facebook friends via 3rd-party sites integrated with Facebook. The important things to remember when measuring your active users are to: (1) clearly define it; (2) make sure it’s a true representation of “activity” on your platform; and (3) be consistent in applying that definition. Here are a few other examples of how companies define active users for their general categories of business… …on social sites In social and mobile platforms, common metrics of measure for activity are MAUs (monthly active users), WAUs(weekly active users), DAUs (daily active users), and HAUs (hourly active users). When evaluating social businesses, we look carefully at the ratios of these metrics — e.g., DAUs-to-MAU or WAUs-to-MAUs — to get a sense of user engagement. The most valuable social properties typically demonstrate high relative engagement rates on all these ratios. …on content sites A common measure of active users and activity on all kinds of content-based sites has been “uniques” (monthly unique visitors) and visits (pageviews or sometimes “sessions” if defined at a minimum period of complete activity). While there is much debate about the merits and tradeoffs of each — which ones are more accurate, revealing, etc. — the key is to optimize for the measure that matters for your business, and that you can actually do something with. For example, as media sites and types of advertising have evolved, some sites and advertisers may care more about true engagement as measured by time on site, repeat visits, shares, number of commenters/comments, uptake in content, results of sentiment analysis, or other such metrics. While the metrics depend on your business goals and what moves you’re trying to optimize for, we tend to look at both uniques and visits/sessions, since the former reflects the size of the audience (and if growing through new visitors brought in every month), and the latter reveals stickiness (though for engagement, time on site is perhaps still best). The very best businesses have both: large, growing audiences that are highly engaged. …on e-commerce sites We don’t typically place a lot of weight on active users in most e-commerce businesses. These businesses have a much more telling metric — actual revenue (and gross margin) — so then “show me [us] the MONEY” by showing total revenue, revenue per user, average order size, repeat usage, gross margins, return rates, and other measures that tell us about the transactions per visitor rather than the number of visitors. How many users visit the company’s properties could provide a modest indication of their conversion efficiency, but this is also impacted by other factors like how much of their traffic comes from mobile — which typically converts at significantly lower rates than the website, at least for now. #29  Sources of Traffic You — and we — don’t want all your revenues to be driven by a single source; it’s the online equivalent of putting all your eggs in one basket. 
This is because the economics of customer acquisition can change over time (for example, Facebook mobile ads generated strong returns for companies early on but costs got quickly bid up); the channel could elect to compete for that same traffic (Google adding its own sponsored links in the search engine results page); or the channel partner could change its policy in a way that results in a dramatic, material reduction of traffic. This is why it’s key to differentiate between sources of traffic — i.e., whether direct or indirect — because it reveals platform risk (dependence on a specific platform or channel). This is very similar to customer concentration risk, defined below. More importantly, the ability to differentiate traffic reveals your understanding of where your customers are coming from, especially if your goal is to build a standalone destination brand. Direct traffic is traffic that comes directly — i.e., not through an intermediary — to your online properties. Users going directly to Target.com (as opposed to buying Target products on Amazon.com) are direct users. Users searching for specific items on Google and arriving at a website like Target.com or Amazon.com are not technically direct users. But this definition does get tricky as Google searches that include your brand in the search term can be considered direct traffic in some ways, because many people don’t bother typing in URLs anymore! Organic traffic definitions vary. SEO experts and certain marketing-analytics providers define “organic” as purely unpaid traffic from search results. Others define it more broadly as the opposite of anything paid or paid sources, in which case it would include direct traffic as defined above; traffic that came from search results for specific keywords; and even traffic generated via retention marketing efforts (such as emails to their existing customer base) … as long as it’s all “free”. There is no right or wrong definition for organic traffic. It is just important for you to track and understand it as distinct from other channels, so you can see where customers come from and where to focus your existing or new customer efforts. But we do get a little more excited when we see a company with a high proportion of direct traffic. A hitch: An important nuance to be aware of when considering traffic sources is the existence of “dark social”, as coined by tech editor Alexis Madrigal. This term describes web traffic that comes from outside sources or referrals that web analytics are not able to track, for example, users coming in via a link shared over email or chat. [Some sites just started clumping people pursuing links outside the homepage and landing page as “direct social”.] Finally, another nuance to be aware of when considering traffic is the difference between search engine optimization (SEO) and search engine marketing (SEM), because they are sometimes used interchangeably even though they are different: SEO is the process of optimizing website visibility in a search engine’s “unpaid” results through carefully placing keywords in metadata and site body content, creating unique and accurate content, and even optimizing page loading speed. SEO impacts only organic search results and not paid or sponsored ad results. SEM, on the other hand, involves promoting your website through paid advertising or listings, whether in search engines or promoted ads in social networks. SEO and SEM are thus complementary not competing services and many businesses use both. 
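To make the direct/organic/paid distinctions above concrete, here is a minimal, hypothetical sketch of how one might bucket incoming sessions by source. The rules, field names, and buckets are our own simplification for illustration, not a definition from the source; real analytics tools use far richer signals.

    # Toy traffic-source classifier (illustrative only).
    from urllib.parse import urlparse

    SEARCH_ENGINES = {"google.com", "www.google.com", "bing.com", "www.bing.com"}

    def classify_session(referrer: str, utm_medium: str = "") -> str:
        """Return a coarse traffic-source bucket for one session."""
        if utm_medium in {"cpc", "ppc", "paid_social"}:
            # Tagged paid campaigns (SEM, paid social ads).
            return "paid"
        if not referrer:
            # No referrer: typed-in URL or bookmark -- but note this bucket also
            # absorbs untrackable "dark social" links shared via email or chat.
            return "direct_or_dark_social"
        host = urlparse(referrer).netloc.lower()
        if host in SEARCH_ENGINES:
            return "organic_search"   # unpaid search results (SEO)
        return "referral"

    print(classify_session("", ""))                                     # direct_or_dark_social
    print(classify_session("https://www.google.com/search?q=shoes"))    # organic_search
    print(classify_session("https://news.example.com/story", "cpc"))    # paid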
#30 Customer Concentration Risk In keeping with the “don’t keep all your eggs in one basket” theme, we often look at customer concentration when evaluating enterprise businesses. Customer concentration is defined as the revenue of your largest customer or handful of customers relative to total revenue, with both revenues reflecting the same time period. So if your largest customers pay you $2M/year and your total revenue is $20M/year, the concentration of your largest customer is 10%. As a rule of thumb, we tend to prefer companies with relatively low customer concentration because a business with only one or few customers runs a number of risks. Besides the most obvious one of the customer(s) moving their business elsewhere, which creates a large revenue hole, the risks include the reality that: — The customers have all the leverage over pricing and other key terms — The customers may unduly influence the product roadmap, sometimes demanding features unique to only their needs — The customers use their importance to force the company to sell to them at below-market terms There is a flip side here, however: In some industries there are relatively few customers, but those customers are gargantuan. Industries with these characteristics include mobile phone carriers, cable networks, and auto companies. Very successful companies can be (and have been!) built supplying to these industries, but they tend to have a higher degree of go-to-market risk because the small number of buyers know how to exercise their power — which you’ll see in metrics such as median time to close a deal, discount from list price, number of approvers (including the dreaded procurement department), and cost of sales. Presenting Metrics Generally #31 Truncating the Y-Axis Please do not do this when presenting data for evaluation. Here’s a less tongue-in-cheek example of why changing the data range in y-axis to “zoom in” on differences is misleading, as originally presented by Ravi Parikh in Gizmodo. The zero baseline below (right) shows how interest rates are not, in fact, skyrocketing (left): [Another interesting concept to be aware of when considering baselines — especially when considering historical and multi-generational data — is the notion of shifting baselines.] #32 Cumulative Charts, Again We mentioned the problem of cumulative charts in our previous post, but a related issue is presenting metrics that are not supposed to be cumulative in a cumulative chart. For example, please do not do this (also originally presented by Ravi Parikh here)… …when this is what’s really going on: Metrics that should never be reported in a cumulative fashion include revenue, new users, and bookings. Bottom line: if you are reporting something in a cumulative fashion, make sure you can explain why that’s material and why it’s appropriate to measure your business that way. Source: https://a16z.com/2015/08/21/16-metrics/ and https://a16z.com/2015/09/23/16-more-metrics/
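Pulling a few of the formulas defined in the article above into one place, here is a minimal Python sketch of LTV, LTV:CAC, CMGR, gross vs. net MRR churn, net burn and runway, and billings. The numbers and function names are hypothetical illustrations of the definitions given earlier, not a prescribed implementation.

    def ltv(contribution_margin_per_month, monthly_churn):
        """LTV = monthly contribution margin x average lifespan (1 / monthly churn)."""
        return contribution_margin_per_month * (1 / monthly_churn)

    def cmgr(first_month_value, latest_month_value, num_months):
        """CMGR = (Latest Month / First Month) ** (1 / # of Months) - 1."""
        return (latest_month_value / first_month_value) ** (1 / num_months) - 1

    def gross_and_net_churn(mrr_start, mrr_lost, mrr_from_upsells):
        """Gross churn = MRR lost / starting MRR; net churn also credits upsells."""
        return mrr_lost / mrr_start, (mrr_lost - mrr_from_upsells) / mrr_start

    def net_burn(gross_burn, monthly_revenue):
        """Gross burn = monthly expenses + other outlays; net burn subtracts incoming cash."""
        return gross_burn - monthly_revenue

    def billings(quarterly_revenue, deferred_rev_now, deferred_rev_prior):
        """Billings = revenue in the quarter + change in deferred revenue."""
        return quarterly_revenue + (deferred_rev_now - deferred_rev_prior)

    # Hypothetical numbers, purely to show the arithmetic:
    print(ltv(contribution_margin_per_month=40, monthly_churn=0.02))      # 2000.0 -> $2,000 LTV
    print(round(ltv(40, 0.02) / 500, 1))                                  # 4.0 -> LTV:CAC with a $500 CAC
    print(round(cmgr(100_000, 180_000, 6), 3))                            # 0.103 -> ~10.3% compounded monthly
    print(gross_and_net_churn(mrr_start=100_000, mrr_lost=3_000,
                              mrr_from_upsells=1_000))                    # (0.03, 0.02)
    print(net_burn(gross_burn=300_000, monthly_revenue=120_000))          # 180000
    print(round(2_000_000 / 180_000, 1))                                  # 11.1 months of runway
    print(billings(quarterly_revenue=1_000_000, deferred_rev_now=600_000,
                   deferred_rev_prior=400_000))                           # 1200000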
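Similarly, the viral-coefficient walkthrough above (1,000 users, 5 invites each, 15% conversion) and the NPS formula translate directly into a few lines of Python. This is just the article's arithmetic restated as code, with our own function names and a made-up survey sample.

    def viral_coefficient(starting_users, avg_invites_per_user, invite_conversion_rate):
        """k-value = new users generated by the loop / users you started with."""
        invited = starting_users * avg_invites_per_user
        new_users = invited * invite_conversion_rate
        return new_users / starting_users

    print(viral_coefficient(1_000, 5, 0.15))   # 0.75 -> below 1, so not (yet) viral

    def nps(scores):
        """NPS = % promoters (9-10) minus % detractors (0-6) on a 0-10 scale."""
        promoters = sum(1 for s in scores if s >= 9)
        detractors = sum(1 for s in scores if s <= 6)
        return 100 * (promoters - detractors) / len(scores)

    sample = [10, 9, 9, 8, 7, 7, 6, 5, 10, 3]   # hypothetical survey responses
    print(nps(sample))                           # 40% promoters - 30% detractors = 10.0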


The relationship between web design and user engagement

As reported by Fast Company and Inc. Magazine, a new EyeQuant study has shown that there's a surprisingly strong relationship between the "visual clarity" of a website (as rated by an algorithm) and its bounce rate. In fact, the results suggest that up to one-third of a user's decision to stay or bounce comes down to a snap judgment of whether or not the page is too cluttered. In this post, we'll take a closer look at the data and the methodology behind the study. Why study the impact of visual clarity? Within the design community, there's been a definite trend towards simpler, more stripped-back design. At EyeQuant, we've seen many of our customers "de-clutter" their way to higher conversion rates, and even observed that amongst a collection of online retailers, the ones with "cleaner" design were growing the fastest. What we wanted to understand is this: does "clean" design have a positive impact on user engagement across the board, or is it limited to specific cases like overly cluttered-sites or retail? The Experiment Setup Using Amazon's Alexa service, we gathered approximate engagement stats for 300 popular websites across several website categories: from fashion, to insurance, to travel. In particular, we looked at bounce rates for the desktop homepage of those websites. Why bounce rates? First, it's one of the engagement metrics that almost all companies measure. But it's also easier to compare bounce rates across different website categories than, for example, time on site or number of page views. To measure how "clean" each of the designs were, we used the EyeQuant visual clarity algorithm, which assigns a 0-100 rating to each design. We built the algorithm by recruiting hundreds of users to participate in a study where they were shown a series of randomized pairs of designs. The participants' task was simple: to identify which of the 2 designs on the screen they felt was "more clean". This "forced-choice" approach helps to identify patterns in which kinds of designs people feel are more "clean", and helps us determine how much people tend to agree with each other (turns out, it's more often than you'd think). Using machine learning, we were able to take this data and build a predictive model that analyzes any design and rates it, with over 85% accuracy compared to a 200-person study. The Extremes: here are examples of a very low clarity score (the famous Ling's Cars), and a very high clarity score (Google).  The clarity score is driven by factors like the amount of text on the screen, layout, and the imagery used on the page (pictures with many sharp contrasts and lines tend to make the whole page feel cluttered). Finally, we calculated a Pearson Correlation between the Clarity Score and Bounce Rate for all 300 websites. Results We observed a surprisingly strong negative correlation (r= -0.57, p < 0.001) between Clarity Scores and Bounce Rates across the 300 websites, meaning that cleaner sites do tend to have lower bounce rates. Similar results were observed when we looked at individual website categories by themselves. The graph above plots each website's clarity score and bounce rate. The green line shows the trend in the data. Perhaps the most striking data point is the r-squared value of 0.327, which implies that roughly one-third of the variance in bounce rates can be explained by the variance in clarity scores - a much stronger effect than we expected to find. 
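For readers who want to run this kind of analysis on their own site data, the correlation itself is a one-liner. The snippet below uses scipy on made-up clarity/bounce-rate pairs; it is not EyeQuant's pipeline, just the standard Pearson calculation the study describes (and note that squaring r = -0.57 gives roughly the 0.33 r-squared quoted above).

    import numpy as np
    from scipy import stats

    # Hypothetical (clarity score, bounce rate) pairs for a handful of sites.
    clarity = np.array([22, 35, 48, 55, 63, 71, 80, 88])
    bounce = np.array([0.62, 0.55, 0.58, 0.47, 0.44, 0.40, 0.35, 0.33])

    r, p_value = stats.pearsonr(clarity, bounce)
    print(f"r = {r:.2f}, p = {p_value:.4f}, r^2 = {r**2:.3f}")
    # A negative r means cleaner designs tend to bounce less; r^2 is the share
    # of bounce-rate variance that the clarity score "explains".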
An open question is whether or not there's another variable at play here that tends to move in the same direction as the clarity score and magnifies the perceived impact of visual clarity. But if such a variable exists, we haven't found it yet. What should we take away from this? For anyone involved in design decisions online, this study should serve as a warning to fight the natural tendency for pages - particularly home pages - to get cluttered over time. Think about which content is really important for users and focus on (only) that content. The results also suggest that it's worthwhile to try "de-cluttering" existing designs, as improved user engagement often leads to higher conversion rates. This is especially true for companies that are running an A/B testing program, and are capable of measuring the impact of de-cluttering on their own website. Source: http://www.eyequant.com/blog/website-clarity-bounce-study


The Easiest Guide to Cohort Analysis

A cohort is a group of users experiencing a common event within the same time period. An oft-repeated but very relevant example of a cohort is a group of students joining in the same year. So the class of 2017 is a cohort, and so is the class of 2018, and so on and so forth. What is cohort analysis? Cohort analysis is an analytical method used to study a cohort’s characteristics over a period of time and the elements that influence change in those characteristics. It traces its roots to medical research, where cohort studies are done to identify the cause of a disease. “In a prospective cohort study, researchers first raise a research question, forming a hypothesis about the potential causes of a disease. The researchers then observe a group of people, the cohort, over a period of time (often several years), collecting data that may be relevant to the disease. This allows the researchers to detect any changes in health in relation to the potential risk factors they have identified.” via Medical News Study So, to identify the cause of lung cancer, doctors would create a hypothesis that it is caused by smoking. Then they would take two groups: smokers and non-smokers. Thereafter, both groups would be studied to identify the influence of smoking on a person’s likelihood of getting lung cancer. How do we employ this in business analytics? In business applications, we compare cohorts (users sharing a common experience in a given time frame) or analyze the behavior of a single cohort, to identify a pattern that supports a growth hypothesis. That hypothesis could be anything. For instance, we may create a hypothesis that users acquired via display ads have higher LTV than the ones acquired via Facebook. To prove the hypothesis we would do a cohort analysis. Likewise, let’s suppose we want to identify the cause of an aggregate dip in retention. We would form a hypothesis that retention has a correlation with the first purchase of the customer. To establish the relation we would cohortize users on the basis of their first purchase and plot their, say, monthly retention %. From the graph above it is apparent that the users who purchased marshmallows the first time displayed higher LTV than the others, despite the fact that overall retention of the product has declined. Naturally, the intent of the business now would be to get more users to purchase marshmallows post-acquisition. Important: that’s not to say that marshmallows are the cause of retention. Our analysis simply told us that there is a correlation between marshmallows and retention. Correlation doesn’t amount to causation. So we have to test whether marshmallows really lead to higher retention or not. Cohort analysis gives us insight into the trend and a basis for testing, not the cause. Cohorts and Segments are not the same Most folks use ‘Cohort’ and ‘Segment’ interchangeably, which is not correct. For two users to be part of the same cohort, they have to be bound by a common event and time period, e.g. 2017 graduates, or men born in 1990. However, to create a Segment you could use almost any condition as a basis, which need not be time- and event-based, e.g. graduates, or men. A cohort is a subset of a segment. So, there can be a cohort of ‘new users this week’ and, likewise, there can also be a ‘segment of new users this week’. Now that we have understood the fundamentals of cohorts, let’s look at some business use-cases. 
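Before the use cases, here is a tiny, hypothetical sketch of the cohort-versus-segment distinction described above: a cohort is bounded by a shared event within a time window, while a segment can be cut on any attribute at all. The records and field names are invented for illustration.

    from datetime import date

    users = [
        {"id": 1, "signup": date(2017, 1, 3), "country": "IT"},
        {"id": 2, "signup": date(2017, 1, 20), "country": "US"},
        {"id": 3, "signup": date(2017, 6, 9), "country": "IT"},
    ]

    # Cohort: same event (signup) within the same time period (January 2017).
    jan_2017_cohort = [u for u in users if u["signup"].year == 2017 and u["signup"].month == 1]

    # Segment: any condition at all, not necessarily time- or event-bound.
    italian_segment = [u for u in users if u["country"] == "IT"]

    print([u["id"] for u in jan_2017_cohort])   # [1, 2]
    print([u["id"] for u in italian_segment])   # [1, 3]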
Some powerful use-cases of Cohort Analysis To follow the use-cases, start with the Google Sheets (linked below), which contain the cohort chart for every use-case. Cohort Analysis | Worksheet 1. Understanding customer retention But before we do that, a quick refresher on how to read a cohort chart. We are skipping the data crunching part and jumping right into the presentation. How to read a cohort chart? Table 1 Link- Cohort by Active users- Sheet 1 | Excel Let’s go through the rows and columns one by one. You can see that the first column lists the activation month, and each row holds the number of returning customers for that cohort. Rows: B4 represents the number of new customers we acquired in the month of Jan. C4 tells us the number of customers who were acquired in Jan and returned in Feb. Likewise, D4 is the number of customers acquired in Jan who returned in March, E4 the ones who returned in April, and so on and so forth. Basically, as we move along Jan’s row, we understand how the retention of new customers acquired in Jan fluctuated until Dec. Columns: each column holds the number of returning or new customers for a given calendar month. D4 represents the number of customers acquired in Jan who returned in March; D5 is the number of customers acquired in Feb who returned in March; D6 is the number of new customers acquired in March. The same pattern repeats in every column as we move across the table. Table 2 Now, let’s understand how each cohort behaves, retention-wise, over a period of time. To do that, we would slightly pivot the above table. We would change the columns from the actual month to the ‘# of months since acquisition’ (from Jan, Feb to 0, 1, 2), which would pull all the row data to the left. You may notice that the table changed from a right-aligned triangle to a left-aligned one. So, in the first row, as we move along, we would know how many customers acquired in Jan returned in the succeeding months. Table 3 In this table, we changed the numbers into percentages to get a better view of the data. Now, looking at each row, we get the retention curve of the corresponding month. However, what if we want to understand how the retention has been over the past 12 months? So, in the final row, we have calculated the aggregate. The aggregate gives us the retention curve of the past 12 months. (A short pandas sketch of this Table 1 to Table 3 pivot appears at the end of this article.) 2. Correlation between category and retention A friend of mine had worked on the cohort analysis of one of the world’s largest retailers. He told me that one of the conclusions from their analysis was that the users who purchased baby products in their first visit showed a higher propensity to visit again. This prompted the retailer to promote their baby section more aggressively. One can create a hypothesis that there are some categories which trigger maximum stickiness among users when they are the first purchase. To determine that category, let’s cohortize users on the basis of the category of their first purchase and plot their retention. Link- Cohorts by Category- Sheet 2 From the chart, one can draw the following conclusions: Users buying Sportswear in their first purchase showed higher retention than the rest. Users buying Jewellery in their first purchase showed the lowest retention rate. The 5th month is critical, as churn seems to increase beyond it. Some possible inferences are that the marketing spend on sportswear could be increased; that the retention strategies for Jewellery purchasers need to be rethought; and that the retention strategy for users entering the 5th month since acquisition has to be re-evaluated. 3. What features correspond to maximum retention A report by Quettra shows that an average app loses 77% of its DAUs within 3 days post-install. Now, if your product itself isn’t worth keeping, then nothing can prevent an uninstall. If it is, however, then the first three days are critical in determining the user’s retention. Three days was the average trend; your critical number could vary accordingly. You could determine your own critical number through the method that we discussed in #1. Let’s suppose it is x days for the time being; then you have to do something within the first x days post-install to hook users. How cohort analysis comes into the picture Let’s create a hypothesis that there are some features in the app which, when used, increase stickiness among users. Create an aggregate retention curve of the last 12 months like we did in #1. Note: the retention curve of a mobile app, unlike that of a web app, is going to keep declining, because a web app doesn’t need to be installed on your device. A user can log in any time he wishes. With a mobile app, once it is uninstalled you potentially lose the user forever. Now, screen the users who have been retained and jot down the features used by them on the first day. Suppose you are analysing an e-commerce app and conclude the following traits to be common among all retained users. Let’s say “push notification clicked” and “added to wishlist” are the two most common actions. Now we would narrow our analysis to both of these events and do a comparison between them. The result Cohort Analysis | Cohort by Features Visit the above sheet and change the value for each feature from the drop-down to see how the graph changes. From the above chart, it would be clear that users who added to wishlist display a higher propensity to be retained than the rest. The ones who clicked a push notification perform even worse than the average. Again, this graph gives us the correlation, not the cause, of retention. P.S. This is a very interesting method and extensively used by consumer businesses. I just discussed the basic framework; there are various refinements that can lead you to a more definite conclusion. 4. How customers react to a new feature release Conversely, the above cohort analysis could also be used to figure out which obsolete features need some rework. For instance, the cohort curve of users who clicked on a push notification fares worse than the average retention curve. Push notifications are obviously meant to complement your retention, so the above chart prompts us to rethink our strategy. Creating cohorts in Mixpanel, Amplitude, Adobe- First event and Returning event If you are using Amplitude or Mixpanel, or any of the similar products, to do your cohort analysis, these are the two fields that you have to specify to create a cohort chart: First event and Returning event. Let’s see some examples: Amplitude, Mixpanel, Adobe, Localytics. First event is the primary criterion for building the cohort: the ‘experience’ element in creating a cohort that we discussed at the very beginning. Returning event is the baseline that you want to track for your users. In the above charts, retention has been the baseline of our analysis. In analytics, retention could be defined as ‘any event performed by the user’ on your platform. So, if we create a cohort in Amplitude, it would look somewhat like this. Conclusion Cohort analysis is a respite from vanity metrics. 
Conclusion

Cohort analysis is a respite from vanity metrics. Momentary growth can be bought at any time, which may give you temporary pleasure, but cohort analysis lets you stay skeptical: it gives a critical view of churn and doesn't let churn be masked by growth. For instance, if you invest in acquisition there can be an instant surge in MAU, but a high MAU by itself is not an indicator of growth; a cohort analysis tells you how many of those acquired users actually stick with you. Similarly, a particular channel might account for the most acquisitions, but a cohort analysis tells you which of them contribute the most profit. Whatever your key metrics are, you will be able to see how they evolve over the customer or product lifecycle. Source: https://monk.webengage.com/cohort-analysis/


Customer Lifetime Value in Ecommerce

For any company to be profitable, it must earn more from each customer (Customer Lifetime Value, or LTV) than it spends on acquiring them (Customer Acquisition Cost, or CAC). If your average Customer Lifetime Value is lower than your Cost Per Acquisition, that should be a big point of concern, because it means you are losing money on every customer. Being unable to keep Customer Acquisition Cost below Customer Lifetime Value is one of the main causes of business failure.

How to calculate Customer Lifetime Value

Lifetime value is how much your store profits from a client during the time they remain a customer. For example, if your average client comes back to your store three times, spends on average $100 per purchase, and your profit margin is 10% ($10 per purchase), your Customer Lifetime Value is $30. This matters because LTV is directly linked to profitability: a company with a high LTV can spend more to attract customers and will have a higher margin. To estimate LTV, you need to look at your historical data and:
- Forecast the average customer lifetime (how long the customer continues to purchase your product or service);
- Forecast future revenues, based on estimates of future products purchased and prices paid;
- Estimate the costs of delivering those products;
- Calculate the net value of these future amounts.

Famous best practices in retention and Customer Lifetime Value

Companies with high retention (their customers keep coming back to shop more) are more successful because their Customer Lifetime Value is higher. For example:
- Zappos won against their competition by keeping customers coming back through an excellent customer-service strategy. The more often those customers buy, the higher Zappos' LTV gets.
- Amazon's massive product offering lets them upsell or cross-sell to nearly everyone through automated, personalized email marketing. Users spend more, which in turn improves Amazon's LTV.
- Netflix's recommendation system keeps viewers constantly engaged with new content. Netflix customers keep their subscriptions for a year or more, paying every month, which increases LTV.
- Facebook's "habit loop" keeps users coming back to the site daily (and often several times a day). When users visit Facebook more often, they tend to click more ads, and since Facebook profits from each ad click, this greatly improves their LTV.

Why the ratio between CAC and LTV is crucial for running your business

Customer Acquisition Cost (CAC) is the amount of money you spend to acquire a customer. For example, if you pay $1 for each click on your Facebook ad and 1 in every 10 people who click ends up buying from you, your CAC is $10. Taking the example above of a company whose Customer Lifetime Value is $30, a CAC of $10 means a profit of $20 per customer. A $20 profit is not bad if your company has a high volume of sales. However, if that company's clients came back to shop an average of 10 times, their LTV would be $100; with an LTV of $100 and a CAC of $10, the profit per customer would be $90, which is obviously much better. If you're running Facebook Ads or any other paid marketing channel, you'll appreciate how difficult it is to keep Cost Per Acquisition down, so it's in your interest to keep Customer Lifetime Value as high as possible. And the secret to keeping LTV high is retention.
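The arithmetic in the examples above fits in a few lines. This sketch only restates the figures quoted in the text ($100 orders, a 10% margin, $1 clicks, a 10% conversion rate):

```python
# The article's example, worked through step by step.
avg_purchases = 3          # the average client comes back three times
avg_order_value = 100.0    # $ spent per purchase
profit_margin = 0.10       # 10% margin, i.e. $10 profit per purchase

ltv = avg_purchases * avg_order_value * profit_margin   # 3 * $100 * 10% = $30

cost_per_click = 1.0       # $1 per click on the Facebook ad
conversion_rate = 0.10     # 1 in every 10 clickers buys
cac = cost_per_click / conversion_rate                  # $1 / 0.10 = $10

profit_per_customer = ltv - cac                         # $30 - $10 = $20

# With 10 repeat purchases instead of 3, LTV rises to $100 and profit per customer to $90.
ltv_10 = 10 * avg_order_value * profit_margin           # $100
print(f"LTV=${ltv:.0f}, CAC=${cac:.0f}, profit=${profit_per_customer:.0f}, "
      f"LTV with 10 purchases=${ltv_10:.0f}")
```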
How Retention Rates Impact Customer Lifetime Value

The first example below is a company struggling to retain its customers; the second shows a business with high retention rates. The first graph demonstrates the retention curve of a company where only 30% of customers return the following month: they end up with almost zero customers from each cohort in less than 5 months. Low retention rates mean Customer Lifetime Value barely increases over time. Companies with low Customer Lifetime Value can really only count on one purchase per customer for all of their profit, so their average CAC needs to be well below average for them to profit from each customer.

The company represented in the next graph, on the other hand, has a subscription-based model and maintains a much higher retention rate: more than 20% of its users are still active 18 months after their first payment. That reflects very positively on their Customer Lifetime Value, as the following graph shows. Even though the revenue from the first purchase is low, in the long run each customer becomes extremely valuable because of the high retention rate.

Whatever your case and whatever market you are in, a low LTV / CAC ratio is a problem that should be addressed as soon as possible. If that's a problem you have, we strongly encourage you to work on your retention after the first transaction.

Conclusion

For many young businesses, keeping a healthy LTV / CAC ratio is a challenge. If that's your case, you need to identify whether your CAC is too high or your LTV too low (or both). Benchmark your numbers against your competition to understand which of the two is your bigger problem, then put your focus on fixing it. If the problem is retention, you have a few different options to test:
- Focus on customer satisfaction by providing an excellent experience with your product.
- Build a recommendation engine and an automated email system, and use them to personalize offers based on each customer's activity on your site or app.
- Work on a habit-forming loop that inserts your product into your users' daily routine.
Or look at your own data and come up with your own strategy. The important thing is to focus on your LTV / CAC ratio immediately; the lifetime value of your company depends on it. Source: https://blog.compass.co/how-to-build-a-profitable-business-demystifying-customer-lifetime-value-with-exclusive-data-from-compass/
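To make the contrast between the two retention curves described above concrete, here is a minimal sketch with assumed retention and margin numbers; they are illustrative, not taken from the charts referenced in the article.

```python
# How monthly retention compounds into cumulative lifetime value per acquired customer.
def cumulative_ltv(monthly_retention, profit_per_active_month, months=18):
    surviving = 1.0           # share of the cohort still active
    total = 0.0               # cumulative profit per originally acquired customer
    curve = []
    for _ in range(months):
        total += surviving * profit_per_active_month
        curve.append(round(total, 2))
        surviving *= monthly_retention
    return curve

low_retention = cumulative_ltv(0.30, 10.0)    # ~30% return the next month
high_retention = cumulative_ltv(0.92, 10.0)   # subscription-like retention
print(low_retention[-1], high_retention[-1])
```

With roughly 30% monthly retention the cumulative value flattens within a few months, while retention in the low nineties leaves more than 20% of a cohort active at month 18 and keeps LTV growing.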


4 Steps for Effective Customer Acquisition in the Digital Era

It's no secret that acquiring new customers is difficult. While most companies work to derive as much value as possible from existing customers (which they should), your business will have a tough time reaching its growth goals if new customers are never brought into the fold. In the digital era, customer interactions occur online and in shorter, more frequent stints, rather than the longer but less frequent in-person interactions of old. Likewise, traditional marketing meant focusing on customer segmentation and campaign performance measurement. That no longer works. Instead, the focus needs to be on individual preferences and intentions; not doing so can lead to missed opportunities.

There are several factors to consider in your customer acquisition strategy, and they all come down to an explicit focus on the customer: understanding how (and why) individuals interact with each channel differently, recognizing how to leverage multi-channel data to connect with the right customers at the right time, and respecting that customers want to be treated as individuals. To win the battle for new customers, companies must continuously leverage digital technologies to attract new customers and connect in a relevant, meaningful way based on the prospect's individual preferences. Here are four key steps to acquire new customers in the digital era.

1. End the Silos
Your business can't be effective in its consumer interactions if you're working with siloed data. Marketing teams must know and understand information around sales calls, online behavior, marketing program feedback and so on, to make the most of each marketing campaign and next best offer. These make up an ongoing cycle of events that contribute to a personalized understanding of the prospect or consumer. Having a holistic, real-time view is the only way to be relevant and effective in your marketing efforts.

2. Detect Opportunities
Pinpointing new opportunities at the prospecting stage allows your teams to allocate resources in the areas that will have the most impact. Signals of intent, like multiple visits to your website, need to be merged and used during the acquisition process as quickly as possible. Typical prospect journeys will become visible and can be mapped to other prospects, all the while improving acquisition. You want to be able to compare the past activities of the customers you have already acquired and apply those behavioral patterns to new prospects to gain a better understanding of predictive behavior. This can help you make the right offers to convert the prospect into a customer.

3. Turn Insight into Action
Detecting an opportunity is not enough on its own to improve customer acquisition; you need to act on that opportunity as quickly as possible and use the most appropriate channel to connect with the prospect while the opportunity is still there. That can be a call by the sales team, but it can also be a digital interaction (an email, an online banner offer, or another digital conversation) depending on the profile of that customer. The key is delivering the right message, one that resonates with that particular prospect based on the behavioral, contextual data mentioned above.

4. Test Multiple Strategies
If you're in a rut, it's beneficial to engage prospects with different marketing messages and track response rates to learn what's working and what's not. Being able to track and change course based on individual behavior patterns will allow you to improve your customer acquisition tactics.
Delivering the right message via the right channel at just the right time is crucial. Despite the number of ways in which we can reach prospects, if we haven't carefully considered what they want or need, or how they want to hear that message, we likely won't get very far. Companies that embrace a customer-centric approach instead build customer-focused concepts into their entire makeup. "Personal," "thoughtful," "anytime," "anywhere": these are requirements for growth in the digital era, and they shape the customer acquisition process. It may take some trial and error, but a successful acquisition strategy is all about committing to better understanding your customers, at all stages of engagement and across a variety of digital channels, and building personalized relationships with each. Source: http://customerthink.com/4-steps-for-effective-customer-acquisition-in-the-digital-era/
