Extract test statistics using the Reporting API

Learn how to automatically retrieve test statistics, summary data, and statistical design parameters for any sequential evaluation (AGILE) test.

This article will help you
  • use the Reporting API endpoint to retrieve data and statistical estimates for your A/B tests

The Analytics Toolkit Reporting API allows you to retrieve A/B test data and statistical estimates through a programmable endpoint. It is a powerful way for advanced analytics teams and CRO agencies to employ the Analytics Toolkit statistical engine in the analysis of their online experiments.

Prerequisites for using the Reporting API

You should be conducting sequential evaluation tests through the A/B testing hub.

With a minimal investment in programming your end of the API, you can apply the powerful AGILE statistical engine to the analysis of your A/B tests. The API accepts requests via both GET and POST, though POST is recommended as the more future-proof method. The API is simple enough to use from Google Sheets, if that is your preferred platform for storing and processing A/B test statistics.
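
For illustration, here is a minimal sketch of a Reporting API call in Python using the requests library. The endpoint URL and the parameter names ("auth", "test_id") are placeholders rather than the documented ones; substitute the exact values and names from the API reference available in your account.

```python
# Minimal sketch of a Reporting API call (assumes the third-party "requests" library).
# The endpoint URL and parameter names below are placeholders; use the values and
# names documented in the API reference downloaded from your account.
import requests

ENDPOINT_URL = "https://www.analytics-toolkit.com/api/reporting/"  # placeholder URL
params = {
    "auth": "your-authentication-hash",  # placeholder parameter name
    "test_id": 1234,                     # placeholder parameter name; test ID from the A/B testing hub
}

# A GET request with the same parameters also works, but POST is the
# recommended, future-proof method.
response = requests.post(ENDPOINT_URL, data=params, timeout=30)
response.raise_for_status()
report = response.json()  # test data and statistical estimates as a dictionary
```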

The Reporting API is currently only available to clients on our Enterprise plan.

Setting up the Reporting API for the first time

To begin, navigate to Account > Sources & APIs > Reporting API. If this is your first time setting up the API, you will see a screen like the one below:

[Screenshot: initial Reporting API configuration details]

You should download the API reference and go through the document to understand the parameters supported by the interface. It also contains detailed information on responses and error codes.

The two pieces of information shared across all A/B tests for which you use the API to retrieve test data and statistical estimates are the endpoint URL and the authentication hash. The endpoint URL should therefore be hardcoded in your code. Ideally, your code should handle redirects gracefully so that if the URL ever changes your API implementation continues to work as before. This is only a forward-looking precaution, however, as we will make sure to communicate any API changes to all users in advance.

The second piece of information is the authentication hash, which is required for your API requests to be honored. It serves as a security mechanism against unauthorized data extraction.
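
Since both values are the same for every test, they can be kept in one place and reused by a small helper. The sketch below assumes the requests library and placeholder parameter names; it also logs any redirect so a changed endpoint URL can be noticed and the hardcoded constant updated.

```python
# Sketch of a reusable helper around the shared endpoint URL and authentication
# hash. Parameter names are placeholders; see the API reference for the real ones.
import requests

ENDPOINT_URL = "https://www.analytics-toolkit.com/api/reporting/"  # placeholder; copy yours from Account > Sources & APIs
AUTH_HASH = "your-authentication-hash"                             # copy from the same page

def call_reporting_api(params: dict) -> dict:
    """POST a request to the Reporting API and return the parsed JSON response."""
    payload = {"auth": AUTH_HASH, **params}  # "auth" is a placeholder parameter name
    response = requests.post(ENDPOINT_URL, data=payload, allow_redirects=True, timeout=30)
    if response.history:
        # The endpoint redirected; note the new location so the constant can be updated.
        print(f"Reporting API endpoint redirected to {response.url}")
    response.raise_for_status()
    return response.json()
```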

You can then create a new test or use any existing sequential evaluation test to debug your implementation.

Ongoing use of the API

To retrieve the test ID of any test, simply look at the URL of any page related to that test, e.g. a reporting page is of the format https://www.analytics-toolkit.com/tools/testhub/test/analysis/NNNN/ where NNNN is the test ID you need to use the API. The ID is also provided on the screen confirming the successful creation of a test in the A/B testing hub.
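
If you want to extract test IDs programmatically, a simple pattern match on the URL format shown above is enough; a minimal sketch:

```python
# Sketch: extract the test ID from an analysis page URL of the format shown above.
import re

url = "https://www.analytics-toolkit.com/tools/testhub/test/analysis/1234/"
match = re.search(r"/analysis/(\d+)/?", url)
test_id = int(match.group(1)) if match else None
print(test_id)  # -> 1234
```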

Once you start using the API, the same page under Sources & APIs will contain a list of your most recent API calls, including their outcome and any errors encountered. This can be useful for monitoring and debugging your API calls.

[Screenshot: configuration details for the Reporting API]

Should you need to get an updated documentation file or retrieve the endpoint URL or your authentication hash, these will be available in the right sidebar, alongside the API call log, as shown above.

Common use-cases for the Reporting API

Your implementation of the Reporting API can be as lightweight or as deep as you need it to be. While each request returns a wealth of information, it is not necessary to store and/or process all of it. Below are some example scenarios which can be implemented separately or all at the same time; a response-parsing sketch follows the list:

  • Using the API to generate and store shareable links - since each call of the reporting API creates a shareable link (if one has not already been created), the returned accessHash can be stored for future use, such as to generate easily accessible links to the test results in a test management system.
  • Get information on the status of a test - extracting meta information for a test, such as its current status (pending, ongoing, success, fail), can be valuable as it can be used to trigger further actions. Even though Analytics Toolkit automatically sends out emails on major events, it may be beneficial to automate the processing of such information.
  • Extract statistical estimates - for ongoing and completed tests one can use the reporting API to extract key statistics such as the estimated lift, p-value, and confidence interval bounds. This can be used to automate reporting or to feed data to meta analyses.
  • Retrieve bounds data - efficacy and futility bounds, as well as interim statistics can be extracted for all tests. These can be used to generate test design graphs such as the ones present in the A/B testing hub.
  • Get test design parameters - the number and spacing of the planned data evaluations (analyses), the maximum sample size, and other test design parameters can be extracted and used, including to guide the operation of a Data API implementation.
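
The sketch below illustrates how such values might be picked out of a response, reusing the call_reporting_api helper sketched in the setup section. Apart from accessHash, which is mentioned above, the field names are hypothetical; consult the API reference for the actual response structure.

```python
# Illustrative only: apart from "accessHash", these field names are hypothetical
# placeholders for the kinds of values described in the list above.
report = call_reporting_api({"test_id": 1234})  # helper sketched in the setup section

access_hash = report.get("accessHash")         # store to build a shareable results link
status = report.get("status")                  # e.g. pending / ongoing / success / fail
lift = report.get("estimatedLift")             # estimated lift
p_value = report.get("pValue")                 # observed p-value
ci = report.get("confidenceInterval")          # confidence interval bounds
bounds = report.get("bounds")                  # efficacy and futility bounds, interim statistics
design = report.get("design")                  # planned analyses, maximum sample size, etc.
```
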
Key tips
  • once the API has been set up, you only need the test IDs to start retrieving data for any A/B test
  • calling the API for a given test automatically creates a shareable link to its test results
  • API responses can be used to determine when to stop gathering more data in a test (see the sketch after these tips)
  • recent API responses are also available in the Analytics Toolkit interface under Account > Sources & APIs > Reporting API
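
As a final illustration, a minimal stopping check could poll the test status and halt data collection once the test has concluded. The "status" field name and its values are assumptions based on the statuses mentioned above, and the sketch reuses the call_reporting_api helper from the setup section.

```python
# Sketch of a stopping check: stop sending data once the test has concluded.
# The "status" field name and its values are assumptions; verify them against
# the API reference.
def should_stop_collecting(test_id: int) -> bool:
    report = call_reporting_api({"test_id": test_id})  # helper sketched earlier
    return report.get("status") in ("success", "fail")

if should_stop_collecting(1234):
    print("Test concluded; stop gathering more data for it.")
```
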
See this in action

A/B Testing Hub