The status monitor is available on the overview page of your organization. It shows, in a single place, the state of the key components responsible for processing your data efficiently.
Monitoring the Ingestion Pipeline
All incoming records, whether from connectors, website tracking or imports, are processed by the platform to detect attribute and segment changes, which are then sent out to all relevant connectors. The latency you see on the status monitor is the upper limit on the average time it can take for a single user, event or account to be fully processed.
Fully processed changes are visible in the dashboard on the segment preview and on the user or account profile, and are queued on all connectors' backlogs as well.
The status monitor currently shows observed delays rounded up to the nearest 15 minutes, so a 10-minute delay is reported as < 15 minutes, while a 19-minute delay is reported as < 30 minutes. It also keeps a 24-hour history with all occurrences of latency exceeding a threshold. Currently, we apply the following thresholds:
NOTICE: as of now, reported platform delays do not include the time a connector needs to pick up changes from an external service. Some changes are transferred via webhooks, which do not cause significant delays, but others require polling the 3rd party API at a predefined time interval. Most connectors have a hardcoded interval, while some allow you to adjust it in the settings. The actual value can differ, but the default we apply for most connectors is 5 minutes.
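As a rough illustration of how these numbers combine, the worst-case end-to-end delay for a polling connector is approximately the platform latency plus the connector's fetch interval. The sketch below is a minimal example of that arithmetic, assuming the 5-minute default interval mentioned above; the function names and figures are purely illustrative.

```typescript
// Illustrative only: estimate the worst-case end-to-end delay for a change
// that a polling connector has to pick up from an external service.

// Round a delay up to the 15-minute granularity the status monitor reports,
// so a 10-minute delay shows as "< 15 minutes" and 19 minutes as "< 30 minutes".
function reportedBucketMinutes(delayMinutes: number): number {
  return Math.ceil(delayMinutes / 15) * 15;
}

// Worst case: the change appears just after the last poll, so it waits a full
// polling interval before the connector even sees it, then goes through the
// platform pipeline.
function worstCaseEndToEndMinutes(
  platformLatencyMinutes: number,
  pollIntervalMinutes: number = 5 // typical default mentioned in the notice
): number {
  return platformLatencyMinutes + pollIntervalMinutes;
}

const total = worstCaseEndToEndMinutes(10); // 10 min platform latency + 5 min poll
console.log(`~${total} minutes end to end, platform part reported as < ${reportedBucketMinutes(10)} minutes`);
```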
Monitoring Connector Updates
Connectors integrate your Hull organization with external services and sources of data. They are responsible not only for sending out updates, but also for detecting and fetching changes back into Hull. Issues reported as alerts and warnings on an integration can slow down the outgoing or incoming data flow or, in the worst case, stop it completely.
Update Notifications emitted by the platform are queued before being processed by connectors. If the destination service cannot keep up with the pace of new updates, the queue grows; the notifications waiting to be sent are called the "backlog".
The backlog size will affect the delay you can observe between the moment a change is captured at the platform level and the moment this change is reflected in your external service.
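To make this relationship concrete, the backlog behaves like any queue: it grows when notifications arrive faster than the connector can deliver them, and the extra delay is roughly the backlog size divided by the delivery rate. The snippet below is a back-of-the-envelope sketch with made-up numbers, not how the platform computes its figures.

```typescript
// Back-of-the-envelope queue model; all numbers are illustrative.
const incomingPerMinute = 1200;  // notifications produced by the platform
const deliveredPerMinute = 1000; // notifications the connector can process

// The backlog grows by the difference between the two rates.
const backlogGrowthPerMinute = Math.max(0, incomingPerMinute - deliveredPerMinute);

// A backlog of a given size adds roughly this much delay before a new change
// is reflected in the external service.
function extraDelayMinutes(backlogSize: number): number {
  return backlogSize / deliveredPerMinute;
}

console.log(backlogGrowthPerMinute);   // 200 notifications per minute of growth
console.log(extraDelayMinutes(30000)); // a 30,000-item backlog ≈ 30 minutes of delay
```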
Having no backlog is obviously the ideal situation because it means that changes are processed right away without any delays.
In reality, there are several reasons why the backlog can grow. For instance, you will probably observe transient backlogs due to sudden traffic spikes or manual mass operations.
On the other hand, a growing backlog can sometimes indicate deeper data flow problems, such as infinite loops, slow external services, exhausted API call quotas or an unoptimized Hull setup.
The status monitor reports general connector warnings and alerts for all installed and configured connectors (if a connector has just been installed and its required configuration has not been set yet, it won't appear in the status monitor).
For those connectors which process outgoing backlogs, it additionally monitors the size of the backlog and takes it into consideration when showing connector status. Currently, we apply the following thresholds:
Additionally, the status monitor shows the global health of all connectors over the past 24 hours. Every critical or warning entry on the timeline means that at least one connector was either in an error state or its outgoing backlog exceeded the threshold.
In the connector Overview, the Metrics section shows you how the backlog size evolved over time. It's useful for understanding why you might have noticed a delay in the past, and whether you should expect further delays.
How does the number of outgoing notifications translate into the number of 3rd party API calls? It is hard to predict the resulting number of API calls: some of the updates in the outgoing backlog may not be relevant given your configuration, and the connector will ignore them; at the same time, the connector does its best to group as many changes as possible into a single API call. That way, your API quota usage is optimized.
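The sketch below only illustrates that idea; it is not any specific connector's code. Irrelevant notifications are dropped first, and the remaining changes are grouped so that many notifications share a single API call. The filtering rule and the 100-per-call batch size are assumptions.

```typescript
// Illustrative sketch of how many update notifications can turn into far
// fewer 3rd party API calls. Names and limits are hypothetical.

interface UpdateNotification {
  userId: string;
  inSyncedSegment: boolean;         // e.g. driven by your segment filter
  changes: Record<string, unknown>; // attributes that changed for this user
}

function planApiCalls(notifications: UpdateNotification[], batchSize = 100) {
  // 1. Drop updates that are not relevant for this connector's configuration.
  const relevant = notifications.filter((n) => n.inSyncedSegment);

  // 2. Group as many changes as possible into each API call.
  const batches: UpdateNotification[][] = [];
  for (let i = 0; i < relevant.length; i += batchSize) {
    batches.push(relevant.slice(i, i + batchSize));
  }
  return batches; // one API call per batch
}

// 5,000 queued notifications with half of them irrelevant and 100 updates per
// call would result in roughly 25 API calls instead of 5,000.
```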
There may be multiple reasons for a growing backlog. Below is a list of some of them, together with proposed resolutions:
Traffic spikes
You had a traffic spike within the organization (this can be a natural traffic spike, a manual mass operation such as a new marketing campaign, or an import of new data). Usually this is a transient situation and, if no other issues are present, the spike should be processed over time.
Connector misconfiguration
One common cause of a growing backlog is connector misconfiguration. For instance, missing or expired credentials prevent the connector from updating the service, so the backlog accumulates.
External Service API problem
There is a slowness or outage on the 3rd party API. Occasionally the external service API may be slow or unavailable, and the connector won't be able to process any data until the 3rd party recovers. In that case, the best thing to do is check the status of the external service to confirm the root cause. If it is a downtime, there is nothing to be done but wait for them to resolve the outage.
API calls limits
Most services have strict quotas regarding the usage of their APIs. Hull uses the best available methods to batch, consolidate traffic and reduce volume, but this might not be enough to ensure a zero backlog. The connector won't be able to process any more updates until the limit is reset. Some services provide different API quotas in different plans, so an upgrade can be an option. If you frequently hit API limits, it is a good strategy to try to save some API calls by optimizing your data flow.
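To illustrate what waiting for the limit to reset means in practice, the sketch below pauses sending whenever the external API answers with HTTP 429 and resumes after the advertised reset time. This is a generic pattern, not the actual connector implementation; the Retry-After header and the 60-second fallback are assumptions about the external service.

```typescript
// Generic rate-limit handling pattern (not actual connector code). Assumes the
// external API answers 429 with a Retry-After header expressed in seconds.
async function sendWithQuota(url: string, payload: unknown): Promise<void> {
  while (true) {
    const response = await fetch(url, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify(payload),
    });

    if (response.status !== 429) return; // delivered, or failed for another reason

    // Quota exhausted: nothing more can be processed until the limit resets,
    // which is exactly when the outgoing backlog starts to accumulate.
    const retryAfterSeconds = Number(response.headers.get("Retry-After") ?? 60);
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
}
```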
Update loops
Sometimes updates coming from different sources can "fight" with each other due to incorrect configuration. In that case, you can see the same updates happening repeatedly for the same attributes. You can use the attributes view to identify such loops. If you see anything concerning, review your connectors' setup and contact our CS team.
Unoptimized dataflow
More complex dataflow scenarios may trigger multiple consecutive updates on the same entity. This may lead to multiple update notifications over time related to the same user or account. You can try to optimize your logic to limit the number of times you touch a given entity by combining the changes together. One update notification can carry any number of changes.
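For example, instead of touching the same user several times in a row, the changes can be merged and written once. The sketch below uses a hypothetical applyTraits helper as a stand-in for whatever mechanism writes attributes in your setup; it only illustrates the pattern of combining changes.

```typescript
// Hypothetical helper standing in for whatever writes attributes in your setup.
declare function applyTraits(userId: string, traits: Record<string, unknown>): Promise<void>;

async function updateUser(userId: string): Promise<void> {
  // Instead of touching the same user three times...
  //   await applyTraits(userId, { lead_score: 42 });
  //   await applyTraits(userId, { lifecycle_stage: "MQL" });
  //   await applyTraits(userId, { last_campaign: "spring-launch" });

  // ...combine the changes and emit a single update notification.
  await applyTraits(userId, {
    lead_score: 42,
    lifecycle_stage: "MQL",
    last_campaign: "spring-launch",
  });
}
```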
In case of any concerns related to your outgoing backlogs, feel free to contact support@hull.io.