I think data is always the starting point: the platform that allows development. After all, we can’t improve what we can’t measure.
Since 2006 I have been involved in multi-touch attribution projects, studied data science (to the extent of my moderate understanding of statistics), and implemented analytics frameworks.
It turns out that some stuff just can’t be measured accurately.
The other interesting thing is that the more sophisticated you get with data, the more uncertainty creeps in; interpreting that uncertainty is a skill in itself. The ones who have been there know this: they are precisely 78% confident of it :)
Accuracy of attribution, marketing attribution in particular, is a key challenge; another is the famous single customer view. The SCV, for those not familiar, is the ability to know everything about your customers, in aggregate and in detail: which channels they came in from, when, how much it cost, how much they spent, what they bought, where they live, what their second purchase was, the lag from first to second purchase, the delivery time of the second purchase, and so on.
That’s also close to impossible. Getting all the databases together might even be possible (a data lake), but the extent to which that data is usable is a whole different level of complexity.
The following factors come into play:
This is where what is called the Analytics Operating Model comes in.
To start, this can be implemented in five steps:
1. Map all KPIs (see next framework for reference)
2. Map out the stakeholders
3. Understand the minimum accuracy, granularity and frequency of data required
4. Ensure consistency between the KPIs
5. Design distribution and visualisation
Point 4 here is the most critical one, and it leads to another key concept: local truth vs absolute truth. As a perfect single customer view is not possible, the next best thing is having separate tracking solutions that may deliver different results, but in a consistent way.
This means that if one report shows that sales go up by 20%, other reports might show a different total number of sales, but that number for the same period should also go up by about 20%. If this isn’t the case, then fixing that is a priority.
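The consistency idea above can be sketched as a simple check: two tracking sources may disagree on absolute totals, but their period-over-period growth should agree within some tolerance. This is a minimal illustration with made-up numbers; the function names and tolerance are assumptions, not part of any specific tool.

```python
def growth(previous, current):
    """Period-over-period growth as a fraction, e.g. 0.20 for +20%."""
    return (current - previous) / previous

def consistent(source_a, source_b, tolerance=0.05):
    """True if two sources report growth within `tolerance` of each other.

    Each source is a (previous_period, current_period) pair of totals.
    The absolute totals may differ; only the trend needs to match.
    """
    return abs(growth(*source_a) - growth(*source_b)) <= tolerance

# Analytics tool counts 1,000 -> 1,200 sales (+20%);
# finance system counts 950 -> 1,140 (+20%): different totals, same trend.
print(consistent((1000, 1200), (950, 1140)))  # consistent

# If finance instead showed 950 -> 980 (+3%), the discrepancy is the
# thing to fix before trusting either report.
print(consistent((1000, 1200), (950, 980)))   # inconsistent
```

A check like this can run automatically whenever two systems report the same KPI for the same period.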
While perfect data accuracy will never be achieved, ongoing work will bring more accuracy and more intelligence over time. To achieve this, it’s important to look at analytics as a constant workflow, managed in sprints, that aims to get closer to the truth. With time, different areas of improvement can be addressed:
- Cookie deletion
- Consistency between tracking solutions (e.g. Doubleclick vs Omniture vs Salesforce vs finance data)
- Multi touch attribution
- Online to offline sales
- Predictive analytics
- Cohort analysis
- Multi device attribution
- …and more
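To make one of the areas above concrete, here is a minimal sketch of multi-touch attribution using a simple linear model, where each sale’s value is split evenly across every touchpoint in the journey. The channel names and values are illustrative only; real models (position-based, data-driven) weight touchpoints differently.

```python
from collections import defaultdict

def linear_attribution(journeys):
    """Distribute each sale's value equally across its touchpoints.

    `journeys` is a list of (touchpoint_list, sale_value) pairs.
    Returns total credit per channel.
    """
    credit = defaultdict(float)
    for touchpoints, sale_value in journeys:
        share = sale_value / len(touchpoints)
        for channel in touchpoints:
            credit[channel] += share
    return dict(credit)

journeys = [
    (["display", "search", "email"], 90.0),  # 30 credit each
    (["search", "email"], 60.0),             # 30 credit each
]
print(linear_attribution(journeys))
# display gets 30, search and email get 60 each
```

A last-click model, by contrast, would give all 150 to email; the gap between the two models is exactly the uncertainty attribution work tries to narrow.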
All the above will improve accuracy, but the most important output of the Analytics Operating Model needs to be actionability. That’s why, once the initial five points are decently implemented, the bias should be on improving distribution rather than accuracy. This means ensuring that all the relevant people get the data they need to make the decisions their role requires; that is what will move things forward.
For marketing, even if there is just a single KPI (sales), it may require multiple layers of reporting. For example:
- Campaign Manager: requires granular data daily; accuracy is secondary, but must be enough to ensure campaigns are heading in the right direction
- CMO: may need data only monthly, but accuracy needs to be high in order to make budget allocation decisions across channels
- CFO: may need data only twice a year, very top line, but very accurate: how much was spent and how much was made
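The layers above can be encoded explicitly, which makes the trade-offs auditable. This is a hypothetical sketch; the field names and accuracy figures are assumptions for illustration, not a standard.

```python
from dataclasses import dataclass

@dataclass
class ReportingLayer:
    stakeholder: str
    frequency: str       # how often the report is delivered
    granularity: str     # level of detail in the report
    min_accuracy: float  # required accuracy, as a fraction (illustrative)

# One KPI (sales), three very different reporting requirements.
layers = [
    ReportingLayer("Campaign Manager", "daily", "per campaign", 0.85),
    ReportingLayer("CMO", "monthly", "per channel", 0.95),
    ReportingLayer("CFO", "bi-yearly", "total spend vs revenue", 0.99),
]

for layer in layers:
    print(f"{layer.stakeholder}: {layer.frequency}, {layer.granularity}")
```

Writing the requirements down like this forces the conversation in step 3 of the model (minimum accuracy, granularity and frequency) to produce concrete numbers per stakeholder.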
In order to set up this model in the most effective way, it’s important to bring big data/analytics skills together with subject matter expertise in marketing and growth. This combination will be able to make the trade-off calls between accuracy and distribution work wisely, and to prioritise areas of work. Marketers often forget that analysts and data scientists have amazing skills, but don’t necessarily know what data is needed, or how to read that data, in a marketing and performance context.
Another interesting area of work is turning analytics from a reporting tool into a diagnostic solution. This is a framework that I started to implement in 2012 and that has a strong bias towards action.
The first concept to clarify here is that of metrics hierarchy:
- KPIs: these are very few for a business (around 5) and must give a top-line view of the company’s overall performance. They should represent the internal value creation process of the firm
- Primary metrics: these impact directly the performance of the KPIs. Each KPI will be impacted by 2-4 other metrics
- Secondary metrics: these impact directly the performance of the primary metrics. Each primary metric will be impacted by 2-4 other metrics
- Tertiary metrics: ….you get the idea
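The hierarchy above is naturally a tree: each metric is driven by a handful of lower-level metrics. Here is a minimal sketch of that structure; the metric names are illustrative, not prescriptive.

```python
from dataclasses import dataclass, field

@dataclass
class Metric:
    name: str
    drivers: list = field(default_factory=list)  # metrics that impact this one

# KPI -> primary metrics -> secondary metrics (example names only)
sales = Metric("sales", drivers=[
    Metric("conversion rate", drivers=[
        Metric("site speed"),
        Metric("checkout errors"),
    ]),
    Metric("website traffic", drivers=[
        Metric("paid search clicks"),
        Metric("organic visits"),
    ]),
])

def depth(metric):
    """Hierarchy depth: 1 for a metric with no mapped drivers."""
    if not metric.drivers:
        return 1
    return 1 + max(depth(d) for d in metric.drivers)

print(depth(sales))  # 3 levels here: KPI, primary, secondary
```

Keeping each metric to 2-4 drivers, as described above, keeps the tree shallow enough that a human can reason about it.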
Here is a graphical representation:
The one above is a simplification, and it does not necessarily need to span across marketing and product, but in larger companies this can help increase the speed of analysis and collaboration.
It’s important to keep in mind that the scenarios analysed in the Diagnostic Solution are not mutually exclusive. E.g. registrations might increase because of seasonality AND site speed.
Building a hierarchy allows us to exclude scenarios quickly (e.g. if website traffic didn’t increase, we know that all the metrics below it can be ignored) and drill down where needed. Similarly, a set of automated alerts at each level can flag step changes in a timely manner, and later help contextualise performance for all team members.
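The drill-down idea can be sketched as a top-down walk of the hierarchy that only inspects a metric’s drivers when the metric itself shows a step change, pruning stable branches. The thresholds, metric names and readings below are illustrative assumptions.

```python
def step_change(previous, current, threshold=0.10):
    """Flag a relative change larger than the threshold (default 10%)."""
    return abs(current - previous) / previous > threshold

def diagnose(name, readings, hierarchy, flagged=None):
    """Depth-first drill-down, pruning branches whose metric is stable.

    `readings` maps metric -> (previous, current);
    `hierarchy` maps metric -> list of driver metrics.
    """
    if flagged is None:
        flagged = []
    prev, curr = readings[name]
    if not step_change(prev, curr):
        return flagged  # stable: everything below this metric is ignored
    flagged.append(name)
    for child in hierarchy.get(name, []):
        diagnose(child, readings, hierarchy, flagged)
    return flagged

hierarchy = {
    "registrations": ["website traffic", "form completion rate"],
    "website traffic": ["paid clicks", "organic visits"],
}
readings = {
    "registrations": (1000, 1250),          # +25%: investigate
    "website traffic": (50000, 51000),      # +2%: stable, prune its subtree
    "form completion rate": (0.02, 0.025),  # +25%: the likely driver
    "paid clicks": (10000, 9000),           # never inspected (parent stable)
    "organic visits": (40000, 42000),       # never inspected (parent stable)
}
print(diagnose("registrations", readings, hierarchy))
# ['registrations', 'form completion rate']
```

Because traffic barely moved, the whole traffic subtree is skipped and the investigation lands directly on the completion rate, which is the point of building the hierarchy in the first place.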