Data collection is the foundation of your analytics stack, and it forms the basis of building a data-driven company culture.
However, I’ve never seen a website analytics setup that was free of mistakes, and many of those mistakes occur at the collection level.
This article will cover the most common event tracking mistakes I see.
Before we cover mistakes, though, let’s briefly get on the same page on the definition of “events” and “event tracking.”
What is “Event Tracking”?
In analytics, events are simply “things that your users do.” This is a really broad definition, and that’s on purpose. See, an event could be anything from a pageview to a button click to a user signup and more. How you define your events is an important matter, both strategic and technical.
In many cases, people talking about “event tracking” are referring to a specific feature in Google Analytics (and an advanced one at that). However, some analytics tools are completely based upon event tracking, and almost every analytics solution will, in some way, revolve around events to make sense of data.
How can you distinguish event analytics from other forms of data? By exclusion: think of events as anything that happens, whereas other forms of data are descriptive of the user themselves (firmographic, demographic, or psychographic data). In reality, most of what we talk about with web and product analytics is actually event analytics.
So in this article, we’ll be referring to the generic capturing of user and visitor interactions and the codifying of these interactions as “events.”
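To make this concrete, here’s a minimal sketch of what a captured event typically looks like: a name, who did it, when it happened, and a bag of properties describing the interaction. (The field names and example values here are hypothetical, not any particular tool’s schema.)

```python
from datetime import datetime, timezone

def build_event(name, user_id, properties):
    """Assemble a generic analytics event: what happened, who did it, when, and context."""
    return {
        "event": name,              # what happened, e.g. "signup" or "button_click"
        "user_id": user_id,         # who did it
        "timestamp": datetime.now(timezone.utc).isoformat(),  # when it happened
        "properties": properties,   # arbitrary context about the interaction
    }

event = build_event(
    "button_click",
    "user_123",
    {"button_id": "cta_pricing", "page": "/pricing"},
)
```

Pageviews, clicks, and signups all fit this one shape; only the name and properties change, which is why events are such a flexible unit of analytics data.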
9 Common Event Tracking Mistakes (and How to Avoid Them)
- Lack of Regular Reviews and QA
- No Knowledge Sharing
- Unclear Data Governance
- No Data Cataloging
- Too Many Event Types
- Disorganized Naming Conventions
- Events Firing Incorrectly
- Interaction vs Non-Interaction Events
- Too Much Unactionable Event Data
1. Lack of Regular Reviews and QA
This issue, and many others on this list, isn’t confined to the world of event tracking. As analysts, it’s important for us to be both skeptical and organized.
Many companies take their data at face value and, because of that, suffer from data integrity issues and entropy. Just because you’ve set something up correctly in the past doesn’t mean your existing collection is still set up correctly.
Imagine, for example, that you undergo a website redesign or change blog templates (or even just implement winning A/B test results). Are the events you want to track still the same? In many cases, your needs will change progressively over time.
Personally, I think much of this can be resolved by assigning a singular person or team as the “DRI” (directly responsible individual) for analytics collection. This prevents “too many cooks in the kitchen” or the tragedy of the commons.
But beyond that, you should still have quality assurance rituals, both at a high level and at each level of implementation.
A quarterly audit and review should suffice for the former; for the latter, everything that is implemented should undergo a quality assurance process (and should be implemented in a staging or sandbox view first).
Even if you have a DRI in-house, I like to bring in an external consultant from time to time to clean things up and look at things with fresh eyes.
2. No Knowledge Sharing
What good is information if it’s confined to a singular team?
Much of the value in modern analytics comes from enablement: giving people the data they need to make decisions, as well as the understanding and data literacy to interpret the numbers properly.
There are many levels to knowledge sharing when it comes to analytics. Many of them can be solved swiftly using self-serve analytics tools like Woopra.
Fundamentally, to get the most value out of your analytics, the answers to these questions should be shared:
- Who should someone contact when they want to implement or change the data that is being collected?
- What does the data mean? Where can someone find the events they care about within the system?
- How can business users export, share, and visualize reports important to them?
Again, a tool like Woopra can get you far on this path due to user-friendly reporting, user permissions, and intuitive visual dashboards.
But you can further clarify your event tracking infrastructure by creating a team or project Wiki (with links to resources and contact information), building out a data catalog that is readable by non-technical users, and creating dashboards for non-technical teams.
3. Unclear Data Governance
Who owns what? A common problem in many domains, but critical to answer in the world of analytics.
If anyone can implement or change event tracking protocols, you’re going to be living in chaos. If only one person has access, you’re going to have a bottleneck.
The best structures tend to be multi-layered, with a single person or team being responsible and accountable for your event tracking efforts, but more people empowered to implement, edit, or remove events (within the confines of your agreed-upon protocol, naming conventions, and overall process).
Data governance is a complex issue, one without a clear and universal solution. At a minimum, you want to make ownership clear (at whatever company stage you’re at). As your company and team grows, things will become more complex, so make sure to adapt as your needs change.
4. No Data Cataloging
According to Oracle, “a data catalog is an organized inventory of data assets in the organization. It uses metadata to help organizations manage their data. It also helps data professionals collect, organize, access, and enrich metadata to support data discovery and governance.”
Simply put, it’s like a glossary for what you’re tracking.
Perhaps you think this isn’t useful until you hit a certain scale of event collection. You may be right, but it helps to form the habit early; otherwise, you may later find yourself facing an overwhelming amount of clutter and chaos.
Even something as simple as a spreadsheet to track UTM campaigns can be useful at a small scale (it’s super common for marketers not to tag their drip campaigns, which results in murky data).
And when you scale, well, there are some great products to help you with data cataloging (my favorite being data.world).
A data catalog also helps you with things like knowledge sharing and quality assurance. It’s an organization tool at its core, but it has many purposes beyond simple data organization.
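Even a tiny in-memory sketch captures the core idea: each tracked event gets a definition, an owner, and a location, so anyone can look it up. (The event names and team names below are hypothetical; in practice this might live in a spreadsheet or a dedicated cataloging tool.)

```python
# A minimal data catalog: event name -> metadata describing it.
CATALOG = {
    "signup_completed": {
        "description": "User finished the signup flow",
        "owner": "growth-team",
        "implemented_in": "signup form submit handler",
    },
    "pricing_page_view": {
        "description": "Pageview of /pricing",
        "owner": "marketing",
        "implemented_in": "site-wide pageview snippet",
    },
}

def describe(event_name):
    """Answer 'what does this event mean and who owns it?' from the catalog."""
    entry = CATALOG.get(event_name)
    if entry is None:
        return f"'{event_name}' is not cataloged -- flag it in the next audit."
    return f"{event_name}: {entry['description']} (owner: {entry['owner']})"
```

Uncataloged events surfacing in reports become an audit signal in their own right, which is how cataloging feeds back into quality assurance and knowledge sharing.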
5. Too Many Event Types
A lot of what I’m covering here deals with the balance between utility and complexity.
Sure, you can track every single event and event type that could possibly occur on your product or website. But as with customer metrics, gluttony can sometimes be worse than starvation: the more data you have available, the harder it can be to make decisions.
From a data utility perspective, too much collection can place you in a worse position to make sense of your data. Take this quote from Nassim Taleb’s Antifragile:
“More data – such as paying attention to the eye colors of the people around when crossing the street – can make you miss the big truck. When you cross the street, you remove data, anything but the essential threat.”
Outside of that, it just becomes an organizational nightmare for whoever is running your analytics implementation (and also those who are using the data for analysis).
Good analytics programs start with strategy and planning and then work towards implementation (not the other way around).
6. Disorganized Naming Conventions
A data catalog can help you organize your events, but your team will still need to determine and enforce naming conventions (both upfront and as you implement more events).
Standardization of naming conventions helps everyone stay on the same page and understand data, and it also helps you to avoid errors, duplicate events, and messy data in general.
While your naming conventions could differ from those of other companies, there are a few helpful rules to consider (from David Wells):
- The pattern established must scale to fit multiple products/touch-points
- The pattern must be parseable by humans and machines
- All products/touch-points must adhere to the pattern
- Validation and enforcement of the pattern are also quite important
Little things matter, too. Are you going to use dashes or underscores? All lowercase or camelCase? How will you deal with abbreviations?
The answer, to a certain extent, doesn’t matter; it’s only important that you think these things through and agree on them (and make sure everyone else adheres to the agreement).
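Whatever you agree on, enforce it mechanically rather than by memory. Here’s a sketch that validates an assumed lowercase `object_action` snake_case convention; the regex is a placeholder for whatever pattern your team actually adopts:

```python
import re

# Assumed convention: lowercase snake_case with at least two words,
# e.g. "video_played" or "signup_completed". Swap in your own pattern.
EVENT_NAME_PATTERN = re.compile(r"^[a-z]+(_[a-z]+)+$")

def validate_event_name(name):
    """True if the event name follows the agreed naming convention."""
    return EVENT_NAME_PATTERN.match(name) is not None

assert validate_event_name("video_played")
assert not validate_event_name("VideoPlayed")   # camelCase rejected
assert not validate_event_name("video-played")  # dashes rejected
```

A check like this can run in code review or CI, so a misnamed event never reaches production in the first place.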
7. Events Firing Incorrectly
Despite your best intentions, implementations will fail. It could happen during QA or it could happen months later due to unforeseen interactions with other website or product implementations.
Finding events that are misfiring is one huge value add of doing a quarterly analytics audit.
Simply put, if your events aren’t firing correctly (whether due to double tracking, spurious firing, or simply not firing at all), your data isn’t going to be accurate, and therefore it’s not going to be actionable.
While your debugging process will vary based on the tool you’re using, this is a great post if you’re implementing events with Google Tag Manager (and the learnings can often be applied to other tools, too).
8. Interaction vs Non-Interaction Events
This one mostly applies to Google Analytics, but other event analytics tools can also be affected.
In Google Analytics, there are interaction and non-interaction events. The main difference, in layman’s terms, is that interaction events affect engagement metrics like bounce rate; non-interaction events do not.
For example, let’s say you want to trigger an event when someone views your navbar (or hovers on an item in the menu). If this were set up as an interaction event, that action would count as engagement, and your bounce rate on the page would probably be near 0% (because most users would see or hover on the navbar).
How you define events is up to you, but this is one of the most common errors I see, at least in Google Analytics setups.
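A quick back-of-the-envelope sketch shows why the distinction matters. In the simplified model below (illustrative only, not Google Analytics’ actual implementation), a single-page session counts as a bounce unless it contains at least one interaction event. Counting a near-universal action like a navbar hover as an interaction drives the measured bounce rate toward zero:

```python
def bounce_rate(sessions, interaction_events):
    """Fraction of single-page sessions containing no interaction event
    (a simplified, Universal Analytics-style bounce definition)."""
    bounced = sum(1 for events in sessions if not (events & interaction_events))
    return bounced / len(sessions)

# Four single-page sessions: all hovered the navbar, one also clicked a CTA.
sessions = [
    {"navbar_hover"},
    {"navbar_hover"},
    {"navbar_hover"},
    {"navbar_hover", "cta_click"},
]

# Hover counted as non-interaction: only the CTA click counts as engagement.
print(bounce_rate(sessions, {"cta_click"}))                  # 0.75
# Hover mistakenly counted as an interaction: bounce rate collapses to 0.
print(bounce_rate(sessions, {"cta_click", "navbar_hover"}))  # 0.0
```

Same traffic, wildly different bounce rates; the only variable is how one event was classified.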
9. Too Much Unactionable Event Data
Some new analytics tools help you “track everything.”
I see the appeal, especially for small teams with low development resources.
However, it’s a Faustian bargain.
Track everything, and you may find you’re learning very little. At the end of the day, analytics isn’t about collecting information; it’s about using it to make better business decisions.
As I mentioned before, gluttony of data is often worse than starvation. Before you “track everything,” ask yourself, “Do I really need to know [x]?” Or better yet, “What action will I take if I learn [x] by tracking [y]?”
If you have a huge team of data scientists, you may be able to learn surprising things through exploratory data analysis. But for the bulk of companies, you’re better off tracking what matters and making decisions on that.
Data collection is the centerpiece of an analytics strategy and program. Most errors that occur do so at the collection stage.
Event tracking can deliver rich insights that help inform experiment hypotheses, strategy pivots, and all kinds of optimization opportunities.
But if your event tracking is riddled with errors, you won’t get the full value from your data.
With a couple of quick steps, you can avoid most mistakes. Assign a DRI to your implementation efforts. Catalog and define your data and naming conventions. Audit regularly and skeptically.
And above all, prioritize utility from your data. It’s there to help you make better business decisions, build growth models, and identify optimization opportunities, not to lie dormant and passive in some data warehouse.