Firehose Guarantees and Best Practices
The Firehose API provides the guarantees described in this documentation. Typically, many customers share internal Firehose event queues, so delivery characteristics may be affected by the total traffic placed on these queues.
Availability
Every request is handled within at most 1 second. If no data is fetched from an internal Firehose queue within that time, Firehose returns an empty collection. This may happen even if there is still data further in the queue for a particular account, because Firehose had to pass through other accounts' data first. When this occurs, your position may still be moved forward, and you should simply query the API again with your new position.
You must always re-check the top property to determine whether there is still data in the stream; do not rely on the number of items returned in the page.
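As an illustration, here is a minimal polling sketch that keeps reading until the top flag is set. The endpoint URL, authentication header and the response fields (items, meta.position, meta.top) are assumptions made for this example only.

```python
import requests

STREAM_URL = "https://api.getbase.com/v3/deals/stream"  # hypothetical resource endpoint
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}     # placeholder credentials


def fetch_page(position):
    # Fetch one page of events starting at the given position.
    resp = requests.get(STREAM_URL, params={"position": position}, headers=HEADERS)
    resp.raise_for_status()
    return resp.json()


page = fetch_page("tail")
# Keep polling based on the top flag, never on the number of items returned:
# an empty page can still be followed by more data further in the queue.
while not page["meta"]["top"]:
    page = fetch_page(page["meta"]["position"])
```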
End-to-end Latency
This SLA is a function of the data generation rate. Firehose guarantees that events representing a user interaction will be available within a few seconds, provided that the number of such interactions is below 10 per second.
Consistency
Data appearing in the Firehose API stream will eventually become consistent. A Firehose API client will attempt to find a consistent state, subject to the latency SLA.
Ordering
The Firehose API stream is returned in order of event processing time (i.e. from the oldest to the newest event) for a single resource id (e.g. a particular deal instance).
Firehose does not provide ordering guarantees between different resources, e.g. there is no correlation between changes on deals and changes on contacts. Nor does it provide ordering guarantees between different instances of a single resource type, e.g. between different deals.
Sequence number
This is an upward-trending number that exists in every snapshot. You can use it to determine which snapshot is older within a particular resource id.
While you cannot see two different snapshots with the same number, you can see the same snapshot, carrying the same sequence number, more than once (duplicates from the at-least-once delivery guarantee). Gaps may occur in the sequence of delivered numbers.
Message Delivery
Events in the Firehose API are delivered with an at-least-once guarantee. This means that some events may appear in the stream more than once, but they will always maintain their expected order. Duplicates are identical: they carry the same data and the same sequence number. As a result, clients should handle events in an idempotent way.
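One simple way to achieve idempotency is to remember the highest sequence number already processed per resource id and skip anything at or below it. The sketch below keeps this bookkeeping in memory; the payload shape (event["data"]["id"], event["meta"]["sequence"]) is an assumption made for illustration.

```python
last_seen = {}  # resource id -> highest sequence number processed so far


def process(event):
    print("processing", event["data"]["id"])  # your business logic


def handle_once(event):
    # Skip duplicate deliveries of a snapshot that was already processed.
    resource_id = event["data"]["id"]
    sequence = event["meta"]["sequence"]
    previous = last_seen.get(resource_id)
    if previous is not None and sequence <= previous:
        return
    process(event)
    last_seen[resource_id] = sequence  # record progress only after success
```

In a real worker this bookkeeping would typically live in durable storage so that deduplication survives restarts.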
Data Retention
Event Stream Endpoint
This endpoint keeps the full event history for the last 72 hours. Events older than 72 hours are deleted. You can use this data to backfill your application if it has been down for some time.
Firehose guarantees that the retention time is at least 72 hours; occasionally it may be slightly longer.
Events Granularity
User interaction with the Sell UI (Web or Mobile) may produce one or more events in the Firehose API. Firehose tries to aggregate changes to produce a single event per user interaction, but some data changes in Sell happen asynchronously, and the Firehose API favours low latency over a smaller number of events. As a result, you may see more than one event in Firehose for the same user interaction. For example, when changing a contact's address together with their customer and prospect status, you will see separate events, because customer and prospect status changes are made asynchronously in Sell.
Best Practices
Consumption Strategies
The client uses the position parameter to control its position in the stream. Depending on the actual use case, two different consumption strategies can be implemented with the Firehose API: at-least-once and at-most-once consumption.
At Least Once
In this case the client aims to guarantee that every message is processed at least once. To achieve that, implement the following algorithm (a minimal code sketch follows below):
- Send a GET request against the Firehose API with the position received in the last call (or tail if reading from the beginning). The response will include a page of events since the specified position and will return the new position to get the next page of events.
- Process the events.
- Save the position parameter you received in non-volatile storage to track the fact that you processed all the messages up to this point. During the next request, use this position to get newer events.
If your worker crashes at any time during the processing phase, a new worker instance can take over from the last saved position. This means that messages since the last save point will be processed again.
In case of individual failures during the processing phase, there are two options:
- Store the message in a dead-letter queue for later processing or retries.
- Retry processing the data, typically with some back-off policy. In this case you will not proceed within the stream until the current item is processed successfully.
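Below is a minimal at-least-once consumer sketch in Python. The endpoint URL, authentication header, response fields (items, meta.position, meta.top) and the file-based save point are assumptions made purely for illustration.

```python
import os
import time

import requests

STREAM_URL = "https://api.getbase.com/v3/deals/stream"  # hypothetical resource endpoint
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}     # placeholder credentials
POSITION_FILE = "firehose.position"                      # non-volatile save point


def load_position():
    # Resume from the last save point, or start at the tail of the stream.
    if os.path.exists(POSITION_FILE):
        with open(POSITION_FILE) as f:
            return f.read().strip()
    return "tail"


def save_position(position):
    with open(POSITION_FILE, "w") as f:
        f.write(position)


def process(event):
    # Idempotent business logic; individual failures can be retried with
    # back-off or routed to a dead-letter queue.
    print("processing event")


position = load_position()
while True:
    resp = requests.get(STREAM_URL, params={"position": position}, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    for event in page["items"]:
        process(event)                     # process first ...
    position = page["meta"]["position"]
    save_position(position)                # ... then persist the new position
    if page["meta"]["top"]:
        time.sleep(5)                      # at the top of the stream; poll again later
```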
At Most Once
In this case the client aims to guarantee that messages are processed at most once, which means some items may not be processed at all. To achieve that, implement the following algorithm (a sketch follows below):
- Send a GET request against the Firehose API with the position received in the last call (or tail if reading from the beginning). The response will include a page of events since the specified position and will return the new position to get the next page of events.
- Save the position parameter you received to track the fact that you have possibly processed all the messages up to this point.
- Process the events.
If your worker crashes before you have processed all the data, the next run will start from the new position, meaning some messages might be skipped.
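For comparison, here is a minimal at-most-once sketch; it is identical to the previous one except that the position is persisted before the events are processed. The same assumptions about the endpoint, response fields and the file-based save point apply.

```python
import os
import time

import requests

STREAM_URL = "https://api.getbase.com/v3/deals/stream"  # hypothetical resource endpoint
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}     # placeholder credentials
POSITION_FILE = "firehose.position"


def process(event):
    # Business logic; a crash here will NOT replay the current page of events.
    print("processing event")


position = "tail"
if os.path.exists(POSITION_FILE):
    with open(POSITION_FILE) as f:
        position = f.read().strip()

while True:
    resp = requests.get(STREAM_URL, params={"position": position}, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    position = page["meta"]["position"]
    with open(POSITION_FILE, "w") as f:   # save the position first ...
        f.write(position)
    for event in page["items"]:
        process(event)                    # ... then process the events
    if page["meta"]["top"]:
        time.sleep(5)
```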
Scaling Events Collection
With the Firehose API you can use a single client process to effectively collect data from a single account with a high read rate, proper ordering and a very limited number of duplicates. Using multiple client processes for a single account would require either complex synchronisation of the clients' positions or duplicate processing.
Scaling Data Processing
If your client needs to consume at very high throughput, you should not process the data on the same thread that collects it. Processing time would effectively reduce your collection rate and, as mentioned above, you cannot parallelize collection within a single account. The recommended steps to decouple event processing from consumption are (a sketch follows the list):
- Send a GET request to collect the page of events.
- Possibly filter out uninteresting data.
- Spawn an asynchronous job to process the events, or push the data to a queue consumed by another worker or set of workers. In case of failures, background processes should handle retries.
- Save your position and proceed within the stream.
This way you will keep consuming the Firehose stream very quickly and will be able to scale data processing independently.
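A minimal sketch of this pattern using a bounded in-process queue and a small pool of worker threads is shown below. The endpoint URL and response field names are assumptions, and in production the queue would typically be an external message broker rather than an in-memory one.

```python
import queue
import threading
import time

import requests

STREAM_URL = "https://api.getbase.com/v3/deals/stream"  # hypothetical resource endpoint
HEADERS = {"Authorization": "Bearer <ACCESS_TOKEN>"}     # placeholder credentials
events = queue.Queue(maxsize=1000)  # bounded queue provides back-pressure


def is_interesting(event):
    return True  # placeholder for filtering out data you do not need


def worker():
    # Runs on its own thread; retries and failure handling belong here,
    # not on the collection thread.
    while True:
        event = events.get()
        try:
            print("processing event")
        finally:
            events.task_done()


for _ in range(4):  # scale the worker pool independently of collection
    threading.Thread(target=worker, daemon=True).start()

position = "tail"
while True:
    resp = requests.get(STREAM_URL, params={"position": position}, headers=HEADERS)
    resp.raise_for_status()
    page = resp.json()
    for event in page["items"]:
        if is_interesting(event):
            events.put(event)            # hand off; never process on this thread
    position = page["meta"]["position"]  # save position and proceed within the stream
    if page["meta"]["top"]:
        time.sleep(5)
```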