Designing, implementing, and maintaining a data layer is not for the faint of heart, and maintenance is the hardest part. As your analytics efforts grow, your data layer can grow into an unwieldy maze: track these users, store this data, change this location, deploy this design, and so on. While your organization may constantly be putting expansion plans into motion, difficulties arise over time as you add more to an already complicated data layer setup.
For example, let’s say your data layer is constructed through a multitude of deployments in Ensighten, or is designed to be implemented through Ensighten. Everything may go well at first, but you may run into roadblocks when handling legacy data: data that comes from an older system or that needs to be upgraded. Technical issues aside, maintenance problems can affect a variety of other factors, including user experience. What’s the solution? While it may seem crazy, the key is to think of legacy data layers as living, breathing objects that need to be maintained correctly in order to translate information to the consumer. Here’s why:
The Bad & The Ugly
In practice, this old data layer scheme became problematic whenever changes or additions to the data layer needed to be made. Because the many deployments affecting the data layer served different functions, their names, conditions, spaces, and tag code were not uniform. Finding the deployment with the proper data layer variable and scope became a hassle. Moreover, whenever a new deployment needed data from the data layer, its dependencies had to include every other deployment that set or changed the required data layer variables. Bottom line: this is an undesirable practice, and confusing for anyone operating within the Ensighten profile.
So, how did we resolve this challenge? Simple. We launched a single deployment to handle all data layer variable definitions. All changes and additions to the data layer are now applied in this one place, and because each data layer variable follows a naming convention, it can be easily searched for within the deployment. In addition, all states and conditions of each data layer variable are now visually localized, providing a thorough overview of the use cases. Ultimately, deployments requiring the data layer only need to set their dependencies to this single data layer deployment.
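To make the idea concrete, here is a minimal, illustrative sketch (not Ensighten-specific; all names and the naming pattern are hypothetical) of a single consolidated data layer module that enforces one naming convention, so every variable is defined and searchable in one place:

```javascript
// Illustrative sketch of a consolidated data layer (hypothetical names).
// One module owns all variable definitions and enforces one convention,
// e.g. dotted lowercase namespaces like "page.section" or "user.loginState".
var dataLayer = (function () {
  var store = {};
  var NAME_PATTERN = /^[a-z]+(\.[a-zA-Z]+)+$/; // e.g. "page.section"

  return {
    // Set a variable; the naming convention is enforced in this one place.
    set: function (name, value) {
      if (!NAME_PATTERN.test(name)) {
        throw new Error('Data layer name violates convention: ' + name);
      }
      store[name] = value;
    },
    // Read a variable; returns undefined if it was never set.
    get: function (name) {
      return store[name];
    }
  };
})();

dataLayer.set('page.section', 'checkout');
dataLayer.set('user.loginState', 'authenticated');
```

Because every other deployment reads through this single module, their dependency lists shrink to just this one deployment, which is the consolidation described above.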
Lastly, we took advantage of the dataManager API to make all data layer variables accessible as data definitions. One limitation of data definitions is that, upon creation, a data definition is immediately active and in production. Having the data layer both consolidated and routed through dataManager allowed us to control this to a greater degree.
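One way to picture that extra control is a gating wrapper. This is a hypothetical sketch, not the dataManager API itself: since a definition is live the moment it is created, the value it resolves to can be held back behind a flag the team controls.

```javascript
// Hypothetical sketch: a data definition is live on creation, so instead
// of exposing its raw value immediately, resolve it through a gate that
// the team activates deliberately.
function makeGatedDefinition(resolve) {
  var active = false;
  return {
    activate: function () { active = true; },
    // Returns undefined until the definition is explicitly activated.
    value: function () { return active ? resolve() : undefined; }
  };
}

// Example with a trivial resolver (in practice this would read from the
// consolidated data layer).
var pageName = makeGatedDefinition(function () { return 'Home'; });

pageName.value();   // undefined: created, but not yet exposed
pageName.activate();
pageName.value();   // 'Home': now safe to consume
```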
Apart from the technical process of upgrading legacy data, consider overall governance as well; the steps for managing change should always be mapped out.
Governance, just like data layers, can always change. The governance of legacy data layers should also reflect the current state of your team, your strategy, and your goals.
Legacy Data Relies On Preemptive Action
As I noted at the beginning of this post, you need to think of legacy data layers as living, breathing objects; maintenance is key. You may take the time to update a data layer once, but continuous updates will be in the picture whenever you change business strategy, launch a new offering, or change your design. The overhaul is certainly arduous, but the longer you put off updating your data layer, the more time you spend working around its inadequacies and the longer the eventual overhaul will take. Instead, have a plan in place to update not only current data layers, but also the legacy data layers that may hold just as much importance.