Operational analytics is a relatively new term appearing more and more frequently within the data integration space. The same use case has also been referred to as Reverse ETL, but are the two really the same? Regardless of the terminology, why does this matter, and how can you best activate your data?

In Part One of our “Be a Data Hero” series, we focused on five initial steps leaders can take to overcome common challenges. Once you’ve begun your data journey and implemented a cloud data warehouse, how can you begin to better operationalize data flows within your organization to drive business growth and transformation?

Understanding operational analytics is a good starting point.

What is Operational Analytics?

Operational analytics refers to the process of syncing data that has already been centralized within a data warehouse (such as Snowflake, Databricks, BigQuery, or Redshift) with business-critical SaaS applications (such as Salesforce, HubSpot, NetSuite, and Marketo) and other marketing tools. This process is often also referred to as Reverse ETL. It allows data that has already been modeled or transformed within the warehouse to be passed back into its source application, often at high frequency, so that business users can work with the most up-to-date data directly inside the applications they use every day.

For example, say your team decides to create a new industry vertical field within your CRM in order to target customers with more personalized marketing messages. Once this field is updated within your data warehouse to reflect the new customer segmentation strategy, any new customer activity (i.e., events tracked by your other marketing applications) will not only be captured within the data warehouse but also passed back into the CRM. The marketing team can then immediately update its segmentation models based on near-real-time data. Setting up this operational analytics flow is therefore a crucial step toward better understanding customers and activating customer data.
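As a concrete (and greatly simplified) sketch of this flow: the snippet below uses an in-memory SQLite table as a stand-in for the warehouse and builds CRM update payloads in place of real API calls. The table name, column names, and payload shape are all hypothetical and purely for illustration; a Reverse ETL tool would handle the actual API sync for you.

```python
import sqlite3

# Stand-in for the warehouse: a customer table whose industry_vertical
# field was derived during modeling/transformation.
# (Table and column names are hypothetical.)
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE dim_customer (crm_id TEXT, industry_vertical TEXT)"
)
conn.executemany(
    "INSERT INTO dim_customer VALUES (?, ?)",
    [("001", "Healthcare"), ("002", "Fintech")],
)

def build_crm_updates(conn):
    """Read the modeled field from the warehouse and shape it into
    update payloads that a CRM sync would send back to the source app."""
    rows = conn.execute(
        "SELECT crm_id, industry_vertical FROM dim_customer"
    ).fetchall()
    # A real Reverse ETL flow would push each record via the CRM's API;
    # here we simply return the payloads.
    return [
        {"id": crm_id, "fields": {"industry_vertical": vertical}}
        for crm_id, vertical in rows
    ]

updates = build_crm_updates(conn)
```

The key idea is that the field is maintained once, in the warehouse model, and every downstream application receives the same value on each sync.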

Related Resource: The Essential Guide to Modern Marketing Analytics

How to Activate Your Data

Once you have your modern data stack infrastructure in place (including the components outlined above such as a cloud data warehouse, data ingestion, data modeling and transformation, data visualization, etc.), here are some more specific steps you can take to help you get started on activating your data.

1: Evaluate Operational Analytics vendors based on your specific needs and requirements.

The Reverse ETL space is still relatively new and, as such, vendor capabilities differ, as do the use cases each vendor targets. As a short list, some of the vendors we work with include Flywheel Software, Hightouch, Census, and Matillion.

2: Evaluate integrations.

Look for native data source and data destination integrations from each vendor that correspond to the specific business applications you use. Almost all vendors in this space include integrations with cloud data warehouse and data lake environments, as well as common sources such as Salesforce, HubSpot, Marketo, and Google Analytics. Verifying out-of-the-box integrations up front will allow you to quickly and easily set up the necessary operational analytics flows.

3: Set up analytic flows and establish business rules.

Setting up a series of operational analytics flows is only part of the equation. Once the flows are in place, you will also need to establish business rules within your operational analytics vendor and each targeted data source. Per our previous example, automating the flow might require ensuring consistent customer segmentation criteria across all your applications, verifying data sync frequencies (keeping in mind that higher sync frequencies may drive higher costs), updating individual data source dashboards as well as any company-wide dashboards or visualizations, and adjusting any external communications or activity (advertising, marketing, etc.) that depends on the automated data workflows.
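One way to keep such business rules explicit is to encode them as a small, declarative sync configuration that can be validated before any flow runs. Everything below (destination names, field names, and the cost-review threshold) is a hypothetical illustration, not any vendor's actual configuration format.

```python
# Hypothetical sync configurations: one entry per operational flow.
SYNC_FLOWS = [
    {"destination": "CRM", "sync_minutes": 15,
     "fields": ["industry_vertical"]},
    {"destination": "Marketing", "sync_minutes": 1440,
     "fields": ["industry_vertical"]},
]

# Business rules: every flow must carry the shared segmentation field,
# and very frequent syncs get flagged for a cost review.
REQUIRED_FIELD = "industry_vertical"
COST_REVIEW_THRESHOLD_MINUTES = 30

def validate_flows(flows):
    """Return a list of rule violations; an empty list means all pass."""
    warnings = []
    for flow in flows:
        if REQUIRED_FIELD not in flow["fields"]:
            warnings.append(
                f"{flow['destination']}: missing shared field {REQUIRED_FIELD}"
            )
        if flow["sync_minutes"] < COST_REVIEW_THRESHOLD_MINUTES:
            warnings.append(
                f"{flow['destination']}: high sync frequency may drive cost"
            )
    return warnings

warnings = validate_flows(SYNC_FLOWS)
```

Checking rules like these in one place, rather than separately inside each application, is what keeps segmentation criteria and sync behavior consistent across every destination.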

4: Work with a partner to help with customization or implementation.

Data Clymer and other systems integration (SI) partners can greatly accelerate the development and implementation of your overall modern data stack, or help with a specific use case (such as setting up an operational analytics solution). At Data Clymer, we not only architect and implement a solution but can also train internal data engineers and analysts (as desired) so they can ultimately take over day-to-day management of the modern data stack.

As an example, in a recent engagement with Kentik, a leading network observability software provider, Data Clymer helped Kentik to not only build out its modern data stack but to also remove data silos and improve data visibility across the organization. In this use case, Kentik already had some of the necessary data tools. However, the challenge was that these tools were not operating seamlessly with one another, leading to a lack of data trust and an inability to create any efficient operational analytic workflows. With Data Clymer’s help, Kentik was able to make its data warehouse more efficient, improve overall company dashboarding, and set up new operational workflows with its financial applications to gain insight into its sales pipeline and revenue forecasts.

The Benefits of Operational Analytics to Your Business

Some of the benefits that can be realized from an operational analytics investment include:

  • Complete your data journey: Operational analytics completes the data cycle, allowing data to move from source to warehouse, from basic schema to transformed data model, and from the warehouse back into the source applications where it is activated. This complete cycle gives data consumers the up-to-date data they need to make better, more informed business decisions.
  • Create real-time analytics: In this Operational Analytics model, source data applications no longer need to wait for nightly or weekly updates. Data can be pushed back into source applications, based on desired frequency, in order to ensure the freshest data is made available.
  • Enable data democratization: Up-to-date data can now be made available to all data consumers of a specific application, not only those with visibility into an executive dashboard. This empowers users with the information they need to be successful in their roles.
  • Increase data efficiency: As in the Kentik example above, operational analytics can improve the efficiency of your existing data tools and data stack. It allows you to not only centralize data but also distribute it to any number of business applications based on business needs, enabling a one-to-many distribution approach that improves overall efficiency.
  • Improve data trust: Eliminating data silos and creating a single source of truth for all your operational data will create a more consistent and accurate picture of your business across all your data tools. This consistency will help ensure all data consumers are accessing the same data and improve overall data trust.

The Emergence of Data Apps

In Part 3 of our “Be a Data Hero” series, we explore the emergence of data apps: applications that are not SaaS based, but that ultimately run directly on your cloud data warehouse. Many cloud data warehouse providers are working to unlock marketplaces that provide a platform-as-a-service approach for developers to create data applications and for customers to consume and run these applications within the warehouse.

Ready to Continue Your Data Journey?

If your organization has a cloud data warehouse but is now struggling with how to best operationalize your data, our team at Data Clymer can help! We have helped many organizations like the Las Vegas Raiders, Kentik, and the Big Ten Conference advance their data strategies and operationalize their data to better understand their customers and their business.

Contact us or send an email to sales@dataclymer.com to learn more about how our team can help you become a data hero.


About the Author

Jesse McCabe, Head of Marketing

Jesse has over two decades of experience as a marketing professional at successful technology and data management companies including Fivetran, Matillion, and SendGrid/Twilio. As Head of Marketing at Data Clymer, Jesse is responsible for promoting the Data Clymer brand and amplifying the success of Data Clymer customers.