How to monitor your data flow for issues and delays, and resolve them
Updated 31/07/2019

The status monitor is available on the overview page of your organization. It shows you, in a single place, the state of the key components responsible for processing your data efficiently.


Getting an overview of your Organization's Dataflow Health

Why is data taking some time to appear in Hull?

Monitoring the Ingestion Pipeline

All incoming records coming from connectors, website tracking or imports are processed by the platform to detect attribute and segment changes, which are then sent out to all relevant connectors. The latency you see on the status monitor is the upper bound on the average time it can take for a single user, event or account to be fully processed.

Fully processed changes are visible in the dashboard on the segment preview and on the user or account profile, and are queued in the backlogs of all relevant connectors as well.

The status monitor currently shows observed delays rounded up to 15-minute windows - so a 10-minute delay is reported as < 15 minutes, while a 19-minute delay is reported as < 30 minutes. It also reports a 24-hour history of all occurrences where latency exceeded a threshold. Currently, we apply the following thresholds:

  • < 15 minutes - OK - green
  • 15 - 60 minutes - WARNING - yellow
  • > 60 minutes - CRITICAL - red
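As a rough illustration, the rounding and thresholds above can be sketched in a few lines of Python. The function name and the exact boundary handling are assumptions for the sketch, not Hull's actual implementation:

```python
# Sketch: map an observed processing delay (in minutes) to the reported
# 15-minute window and status level, using the thresholds listed above.
# latency_status is an illustrative name, not part of Hull's API.

def latency_status(delay_minutes):
    # Round up to the next 15-minute window, e.g. 10 -> 15, 19 -> 30
    window = (int(delay_minutes // 15) + 1) * 15
    if delay_minutes < 15:
        status = "OK"        # green
    elif delay_minutes <= 60:
        status = "WARNING"   # yellow
    else:
        status = "CRITICAL"  # red
    return f"< {window} minutes", status

print(latency_status(10))  # ('< 15 minutes', 'OK')
print(latency_status(19))  # ('< 30 minutes', 'WARNING')
print(latency_status(75))  # ('< 90 minutes', 'CRITICAL')
```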

NOTICE: for now, reported platform delays do not include the time a connector needs to pick up changes from the external service. Some changes are transferred via webhooks, which do not cause significant delays, but others require polling the 3rd party API at a predefined time interval. Most connectors have a hardcoded interval, while some allow you to adjust it in the settings. The actual value can differ, but the default we apply for most connectors is 5 minutes.
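Interval-based polling, as described above, adds at most one interval of extra delay: a change made just after a poll waits until the next one. A minimal sketch, assuming a 5-minute interval (the function names and the `max_cycles` escape hatch are illustrative, not Hull's code):

```python
# Sketch: a connector polling a 3rd party API on a fixed interval.
# Worst-case added delay for any single change equals the interval.
import time

POLL_INTERVAL_SECONDS = 5 * 60  # the 5-minute default mentioned above (assumed)

def poll_loop(fetch_changes, handle_change,
              interval=POLL_INTERVAL_SECONDS, max_cycles=None):
    """Repeatedly fetch changes from the external service, then sleep."""
    cycles = 0
    while max_cycles is None or cycles < max_cycles:
        for change in fetch_changes():   # one API call per polling cycle
            handle_change(change)        # hand each change to the pipeline
        cycles += 1
        time.sleep(interval)
```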

Read more about ingestion

Why is data taking some time to be updated in the destination service?

Monitoring Connector Updates

Connectors integrate your Hull organization with all external services or sources of data. They are responsible not only for sending out updates, but also for detecting and fetching changes back into Hull. Issues behind alerts and warnings on an integration can slow down outgoing or incoming data flow or, in the worst case, stop it completely.

Update Notifications emitted by the platform are queued before being processed by connectors. If the destination service cannot keep up with the pace of new updates, the queue grows; the notifications waiting to be sent are called the "backlog".

The backlog size will affect the delay you can observe between the moment a change is captured at the platform level and the moment this change is reflected in your external service.
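As a back-of-the-envelope estimate, if the connector drains notifications at a roughly steady rate, a new change waits behind the whole backlog before being sent. The processing rate below is an assumption for illustration, not a Hull guarantee:

```python
# Sketch: rough estimate of the delay a backlog introduces, assuming a
# steady per-minute processing rate. Both numbers below are illustrative.

def estimated_delay_minutes(backlog_size, notifications_per_minute):
    """A new change waits behind the whole backlog before being sent."""
    return backlog_size / notifications_per_minute

# e.g. 3,000 queued notifications drained at 200/minute ~ 15 minutes of delay
print(estimated_delay_minutes(3000, 200))  # 15.0
```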

Having no backlog is the ideal situation, because it means that changes are processed right away, without any delay.

In practice, there are various reasons why the backlog can grow. For instance, you will probably observe transient backlogs due to sudden traffic spikes or manual mass operations.

On the other hand, a growing backlog sometimes can mean deeper data flow problems, such as infinite loops, slow external services, exhausted API call quotas or lack of optimization of your Hull setup.

The status monitor reports general connector warnings and alerts for all installed and configured connectors (if a connector was just installed and its required configuration has not been set yet, it won't appear in the status monitor).

For those connectors that process outgoing backlogs, it additionally monitors the size of the backlog and takes it into account when showing the connector status. Currently, we apply the following thresholds:

  • < 500 - OK - green
  • 500 - 10000 - WARNING - yellow
  • > 10000 - CRITICAL - red


Additionally, the status monitor shows the global health of all connectors in the past 24 hours. Every critical or warning entry on the timeline means that at least one connector was either in error state or its outgoing backlog exceeded the threshold.

How did the backlog size evolve over time?

In the connector Overview, the Metrics section shows you how the backlog size evolved over time. It's useful for understanding why you might have noticed a delay in the past, and whether you should expect more delays.


How does the number of outgoing notifications translate into the number of 3rd party API calls? It's very hard to predict the resulting number of API calls: some updates in the outgoing backlog may not be relevant given your configuration, so the connector will ignore them; on the other hand, the connector does its best to group as many changes as possible into a single API call. That way your API quota usage is optimized.
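The two effects just described - irrelevant updates being dropped, and the remainder being grouped into as few calls as possible - can be sketched as follows. The batch size of 100 and the function names are assumptions for illustration, not documented Hull or 3rd-party limits:

```python
# Sketch: plan 3rd party API calls from an outgoing backlog.
# Irrelevant updates are filtered out first, then the remaining updates
# are grouped into batches; each batch corresponds to one API call.

BATCH_SIZE = 100  # assumed maximum records per API call, for illustration

def plan_api_calls(updates, is_relevant, batch_size=BATCH_SIZE):
    """Return batches of relevant updates; len(result) == API calls needed."""
    relevant = [u for u in updates if is_relevant(u)]
    return [relevant[i:i + batch_size]
            for i in range(0, len(relevant), batch_size)]

# 250 queued updates, half filtered out by configuration -> 2 API calls
updates = list(range(250))
batches = plan_api_calls(updates, is_relevant=lambda u: u % 2 == 0)
print(len(batches))  # 2
```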

Why do I have a backlog when sending data? How can I solve this?

There may be multiple reasons for a growing backlog. Below is a list of some of them, with proposed resolutions:

Traffic spikes

You had a traffic spike within the organization (this can be a natural traffic spike, a manual mass operation such as a new marketing campaign, or an import of new data). This is usually a transient situation and, if no other issues are present, the spike should be processed over time.

Connector misconfiguration

One common cause of a growing backlog is connector misconfiguration. For instance, missing or expired credentials prevent the connector from updating the service, so the backlog accumulates.

External Service API problem

Occasionally the external service API may be slow or unavailable, and the connector won't be able to process any data until the 3rd party recovers. In that case, the best thing to do is check the status of the external service to confirm the root cause. If it is downtime on their side, there is nothing to be done but wait for them to resolve the outage.

API calls limits

Most services have strict quotas regarding the usage of their APIs. Hull uses the best available methods to batch, consolidate traffic and reduce volume, but that might not be enough to ensure zero backlog. Once a limit is hit, the connector won't be able to process any more updates until the limit is reset. Some services provide different API quotas on different plans, so an upgrade can be an option. If you frequently hit API limits, it is a good strategy to try and save some API calls by optimizing your data flow.
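To make the "wait for the limit to reset" behavior concrete, here is a minimal sketch assuming the service signals its quota with an HTTP 429-style response carrying a retry delay (a common but not universal convention). The exception class, `send_batch`, and the retry count are all illustrative:

```python
# Sketch: sending a batch against a rate-limited 3rd party API.
# While the connector waits, updates simply stay in the outgoing backlog.
import time

class RateLimited(Exception):
    """Illustrative: raised when the API answers with a quota error."""
    def __init__(self, retry_after_seconds):
        self.retry_after_seconds = retry_after_seconds

def send_with_quota_handling(send_batch, batch, max_attempts=5):
    for _attempt in range(max_attempts):
        try:
            return send_batch(batch)
        except RateLimited as err:
            # Nothing to do but wait out the quota window, then retry.
            time.sleep(err.retry_after_seconds)
    raise RuntimeError("still rate-limited after retries")
```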

Update loops

Sometimes updates coming from different sources can "fight" with each other due to misconfiguration. In that case you can see the same updates happening multiple times for the same attributes. You can use the attributes view to identify such loops. If you see anything concerning, review your connector setup and contact our CS team.

Not optimized dataflow

More complex dataflow scenarios may trigger multiple consecutive updates on the same entity, leading to multiple update notifications over time for the same user or account. You can try to optimize your logic to limit the number of times you touch a given entity by combining the changes together. One update notification can carry any number of changes.
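Combining consecutive changes to the same entity into one notification can be sketched as a simple merge; the dict-based shape of a change is an assumption for illustration, with later changes to the same attribute winning:

```python
# Sketch: coalesce several consecutive per-attribute changes on one entity
# into a single update payload, so one notification carries all of them.

def coalesce_changes(changes):
    """Merge a list of change dicts; later values override earlier ones."""
    merged = {}
    for change in changes:
        merged.update(change)
    return merged

# Three separate touches on the same user become one notification
changes = [{"plan": "pro"}, {"mrr": 99}, {"plan": "enterprise"}]
print(coalesce_changes(changes))  # {'plan': 'enterprise', 'mrr': 99}
```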

In case of any concerns related to your outgoing backlogs, feel free to contact our CS team.


  • currently, we can provide delay values reported in 15-minute time windows only, and there may still be occurrences of items whose processing takes longer than the reported value
  • the status monitor does not show actual error messages for historical connector alerts; these details can be found on the connector's overview page (you can click through directly from the status monitor list) by changing the timeline to "Notifications" to see historical errors
  • for now, some connector warnings and alerts are informational only and do not prevent the connector from operating normally; these levels will be adjusted for all integrations, so that going forward warnings and alerts are only reported when there is an actual issue affecting dataflow