Connector Outgoing Backlog

Updated 18/05/2019


The Connector Outgoing backlog numbers can help you understand why a given update hasn't arrived yet in your external service. Learn more about what creates backlog and how to reduce it.


Outgoing backlog

Update Notifications emitted by the platform are queued before being processed by connectors. When a connector cannot keep up with the pace of new updates, the queue grows; the notifications waiting to be sent are called the "backlog".

IMPORTANT: currently, the outgoing backlog does NOT include replays of users or accounts sent from the dashboard.

The backlog size affects the delay you can observe between the moment a change is captured at the platform level and the moment this change is reflected in your external service.
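To make the relationship concrete, here is a toy sketch of how a backlog builds up, and how it translates into delay, when notifications arrive faster than the connector can process them. The rates below are made up for the example and are not actual platform throughput figures.

```typescript
// Toy model of an outgoing notification queue; all numbers are illustrative.
const incomingPerMinute = 1200; // notifications emitted by the platform
const processedPerMinute = 900; // notifications the connector can send out

let backlog = 0;
for (let minute = 1; minute <= 5; minute++) {
  backlog = Math.max(0, backlog + incomingPerMinute - processedPerMinute);
  // Rough delay a newly queued notification would experience at this point.
  const delayMinutes = backlog / processedPerMinute;
  console.log(`minute ${minute}: backlog=${backlog}, ~${delayMinutes.toFixed(1)} min delay`);
}
```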

Having no backlog is the ideal situation, because it means that changes are processed right away without any delay.

In reality, there are different reasons why the backlog can grow. For instance, you will probably observe transient backlogs due to sudden traffic spikes or manual mass operations.

On the other hand, a growing backlog can sometimes indicate deeper data flow problems, such as infinite loops, slow external services, exhausted API call quotas, or a lack of optimization in your Hull setup.

Current backlog overview


You can get a quick overview of the current backlog size for a connector on the connectors listing.

  • Connectors with a small (< 500 messages) backlog will show "< 500".
  • Connectors that don't consume outgoing data (such as Incoming Webhooks or Google Sheets) won't show any number.

You can see a historical trend for backlog size by entering the connector details screen. This graph shows the last 3 hours of data.

How does the number of outgoing notifications translate into the number of 3rd party API calls? It is hard to predict the resulting number of API calls: some of the updates in the outgoing backlog may not be relevant given your configuration, so the connector will ignore them, and the connector also does its best to group as many changes as possible into a single API call. That way your API quota usage is optimized.
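As a rough sketch of why the two numbers differ, the snippet below (with hypothetical field names and a hypothetical whitelist, not the connector's actual implementation) drops updates that are irrelevant to the configuration and merges the remaining changes per user before sending them:

```typescript
// Hypothetical shape of an update notification; not the actual payload schema.
interface UserUpdate {
  userId: string;
  changes: Record<string, unknown>; // changed attributes and their new values
}

// Example whitelist of attributes the connector is configured to send.
const synchronizedAttributes = new Set(["email", "plan", "last_seen_at"]);

function groupUpdatesIntoCalls(updates: UserUpdate[]): Map<string, Record<string, unknown>> {
  const callsByUser = new Map<string, Record<string, unknown>>();
  for (const update of updates) {
    // Drop changes the configuration does not care about.
    const relevant = Object.fromEntries(
      Object.entries(update.changes).filter(([key]) => synchronizedAttributes.has(key))
    );
    if (Object.keys(relevant).length === 0) continue; // notification ignored entirely
    // Merge all remaining changes for the same user into a single payload.
    callsByUser.set(update.userId, { ...callsByUser.get(update.userId), ...relevant });
  }
  return callsByUser; // one API call per user instead of one per notification
}
```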


Reasons and possible resolutions

There may be multiple reasons for a growing backlog. Below is a list of some of them with proposed resolutions.

Traffic spikes

You had a traffic spike within the organization (this can be related to a natural traffic spike, a manual mass operation such as a new marketing campaign, or an import of new data). Usually this is a transient situation and, if no other issues are present, the spike should be processed over time.

Connector misconfiguration

One common cause of a growing backlog is connector misconfiguration. For instance, missing or expired credentials prevent the connector from updating the external service, so the backlog accumulates.

External Service API problem

There is a slowness or outage on the 3rd party API. Occasionally the external service API may be slow or unavailable, and the connector won't be able to process any data before the 3rd party recovers. In that case, the best thing to do is check the status of the external service to confirm the root cause. If it is a downtime, there is nothing to be done but wait for them to resolve the outage.

API calls limits

Most services have strict quotas regarding the usage of their APIs. Hull uses the best available methods to batch and consolidate traffic and reduce volume, but that might not be enough to ensure zero backlog: the connector won't be able to process any more updates before the limit is reset. Some services provide different API quotas on different plans, so an upgrade can be an option. If you frequently hit API limits, it is also a good strategy to save API calls by optimizing your data flow.
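As an illustration of why processing stalls when a quota is exhausted, a typical pattern a connector-style client follows on HTTP 429 responses is to wait until the quota window resets before retrying. This is a minimal sketch under those assumptions; the header name and retry policy are examples, not the connector's actual implementation.

```typescript
// Retry a request when the external service answers 429, waiting for the
// quota window to reset. Header names differ per service; these are examples.
async function callWithRateLimitRetry(url: string, init: RequestInit, maxRetries = 3): Promise<Response> {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    const response = await fetch(url, init);
    if (response.status !== 429) return response;
    // Prefer the Retry-After header when present, otherwise back off exponentially.
    const retryAfterSeconds = Number(response.headers.get("Retry-After")) || 2 ** attempt;
    await new Promise((resolve) => setTimeout(resolve, retryAfterSeconds * 1000));
  }
  throw new Error(`Rate limit still exceeded after ${maxRetries} retries`);
}
```

While the client is sleeping between retries, no further updates are processed, which is exactly when the backlog grows.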

Update loops

Sometimes updates coming from different sources can "fight" with each other due to incorrect configuration. In that case you can see the same updates happening multiple times for the same attributes. You can use the attributes view to identify such loops. If you see anything concerning, review your connectors' setup and contact our CS team.
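If you transform data in a Processor-style connector, one way to avoid feeding such a loop is to write an attribute only when its computed value actually differs from what is already stored. Below is a minimal sketch; the `user` and `hull` objects are stand-ins for what a Processor-style connector exposes, so treat the exact interface and attribute names as assumptions.

```typescript
// Minimal stand-ins for the objects a Processor-style connector exposes.
// The real interface may differ; these are assumptions for the example.
const user: { score?: number; external_score?: number } = { score: 21, external_score: 40 };
const hull = {
  asUser: (claims: Record<string, string>) => ({
    traits: (attrs: Record<string, unknown>) => console.log("would write", claims, attrs),
  }),
};

const computedScore = (user.score ?? 0) * 2;

// Writing only when the value actually changes keeps this connector from
// endlessly re-triggering the source of `score` (and vice versa).
if (user.external_score !== computedScore) {
  hull.asUser({ email: "jane@example.com" }).traits({ external_score: computedScore });
}
```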

Not optimized dataflow

More complex dataflow scenarios may trigger multiple consecutive updates on the same entity. This may lead to multiple update notifications over time related to the same user or account. You can try to optimize your logic to limit the number of times you touch a given entity by combining changes together; one update notification can carry any number of changes.
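For example, rather than writing each attribute in a separate call (each of which may produce its own update notification), you can collect the changes and write them once. This is a sketch under the same assumptions as above; the client object and attribute names are hypothetical.

```typescript
// Collect all computed attributes first, then write them in a single call,
// so the change produces one update notification instead of several.
const hullClient = {
  asUser: (claims: Record<string, string>) => ({
    traits: (attrs: Record<string, unknown>) => console.log("single write:", claims, attrs),
  }),
};

const attributes: Record<string, unknown> = {};
attributes.lifecycle_stage = "customer";
attributes.mrr = 99;
attributes.last_synced_at = new Date().toISOString();

// One call instead of three separate traits() calls.
hullClient.asUser({ email: "jane@example.com" }).traits(attributes);
```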

In case of any concerns related to your outgoing backlog, feel free to contact support@hull.io.