Most of the APIs provided by Chakra are synchronous (sync) APIs. This means that upon receiving an API request, processing is completed before a response is returned to the client. A simple example is the Update Process API.
However, in certain scenarios it is not possible to complete the processing before the API response is sent back. This might be because the time needed to process the request is significantly higher than our nominal API response time, or because the system needs to wait for an event that has not happened yet.
In either case, these asynchronous (async) APIs accept the request, push it to our Background Job service, and return a reference to that job. The client is expected to check back at a later time to find out the status of the background job.
The Bulk Process Update API is an example of such an asynchronous API.
Sync API Characteristics
Unless otherwise mentioned, Chakra APIs are synchronous. Synchronous APIs guarantee consistent results when requests are made sequentially. Example:
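A minimal sketch of the sequential case, assuming a hypothetical client whose update call blocks until the API responds (the client class, method name, and payload shape below are illustrative, not the actual Chakra API):

```python
class FakeChakraClient:
    """In-memory stand-in for a synchronous Update Process call."""

    def __init__(self):
        self.scores = {}

    def update_process(self, process_id, score):
        # Synchronous API: processing completes before this returns.
        self.scores[process_id] = score
        return {"process_id": process_id, "score": score}


client = FakeChakraClient()
for score in (1, 2, 3):
    # Each call returns only after the update is fully processed,
    # so the next request starts after the previous one finished.
    client.update_process("lead-42", score)

print(client.scores["lead-42"])  # 3 — the last update wins
```

Because each request completes before the next begins, the final state always reflects the last update sent.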
In this example we update a particular process 3 times, one after the other. Under these circumstances the final score of the lead will be 3, which is expected, as 3 was the last score update.
However, if we change the example and fire all the updates almost simultaneously, without waiting for preceding requests to complete:
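The concurrent case can be sketched with threads standing in for simultaneous API calls (again a hypothetical client, not the actual Chakra API); whichever request is processed last wins, and that order is unpredictable:

```python
import threading


class FakeChakraClient:
    """In-memory stand-in for a synchronous Update Process call."""

    def __init__(self):
        self.scores = {}

    def update_process(self, process_id, score):
        self.scores[process_id] = score


client = FakeChakraClient()

# Fire all three updates at (almost) the same time, without waiting
# for the preceding requests to complete.
threads = [
    threading.Thread(target=client.update_process, args=("lead-42", s))
    for s in (1, 2, 3)
]
for t in threads:
    t.start()
for t in threads:
    t.join()

final = client.scores["lead-42"]
assert final in (1, 2, 3)  # which update wins is a race
```

Each run may print a different final score, which is exactly the race condition described below.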
The final score of the lead in this case could be 1, 2, or 3. There is no way to predict the final score of the lead after all the updates, because all 3 API requests are accepted in parallel, resulting in a race condition.
So in the case of synchronous APIs, it is recommended that the client serialize requests made to a single resource.
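One way a multi-threaded client can serialize such requests is to funnel all updates for a resource through a single queue, so they execute in submission order. This is a sketch of one possible client-side pattern, not something the Chakra API provides:

```python
import queue
import threading


class FakeChakraClient:
    """In-memory stand-in for a synchronous Update Process call."""

    def __init__(self):
        self.scores = {}

    def update_process(self, process_id, score):
        self.scores[process_id] = score


client = FakeChakraClient()
updates = queue.Queue()


def worker():
    # Single consumer: updates run one at a time, in submission order.
    while True:
        item = updates.get()
        if item is None:  # sentinel: stop the worker
            break
        process_id, score = item
        client.update_process(process_id, score)


t = threading.Thread(target=worker)
t.start()
for score in (1, 2, 3):  # producers can enqueue from anywhere
    updates.put(("lead-42", score))
updates.put(None)
t.join()

print(client.scores["lead-42"])  # 3 — order is preserved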
Async API Characteristics
In the case of async APIs, the request payload is processed in the background as soon as possible. These APIs return a background job id which can be used to track the status of the request.
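A typical client loop submits the request, then polls the returned job id until the job reaches a terminal state. The client, method names, and status strings below are illustrative assumptions, not the actual Chakra API:

```python
import time


class FakeAsyncClient:
    """In-memory stand-in for an async Chakra API plus its job tracker."""

    def __init__(self):
        self._jobs = {}
        self._next_id = 0

    def submit_bulk_update(self, payload):
        # The request is accepted and queued; a job reference is returned
        # immediately, before any processing happens.
        self._next_id += 1
        job_id = f"job-{self._next_id}"
        self._jobs[job_id] = {"status": "queued", "polls": 0}
        return job_id

    def get_job_status(self, job_id):
        job = self._jobs[job_id]
        job["polls"] += 1
        if job["polls"] >= 2:  # pretend the background job finishes quickly
            job["status"] = "completed"
        return job["status"]


client = FakeAsyncClient()
job_id = client.submit_bulk_update([{"process_id": "lead-42", "score": 3}])

status = client.get_job_status(job_id)
while status not in ("completed", "failed"):
    time.sleep(0.01)  # back off between polls
    status = client.get_job_status(job_id)

print(status)  # completed
```

In practice the poll interval should back off (and eventually time out) rather than hammer the status endpoint in a tight loop.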
Bulk Async APIs
Async APIs which accept bulk data, i.e. a large number of sub-requests, try to parallelize these sub-requests as much as possible. A higher degree of parallelism increases the speed of processing.
Because of this parallel processing, bulk APIs do not guarantee the order of execution of sub-requests. This can result in race conditions if the same resource (process/task/record) is part of multiple sub-requests within the same bulk API call.
So plan your bulk API calls to ensure that a resource is part of only one sub-request.
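One way to enforce this on the client side is to collapse duplicate sub-requests before submitting, keeping only the latest update for each resource. The field names here are illustrative assumptions:

```python
def dedupe_subrequests(subrequests):
    """Keep only the last sub-request per process_id.

    This preserves last-write intent while guaranteeing each resource
    appears at most once in the bulk call.
    """
    latest = {}
    for sub in subrequests:
        latest[sub["process_id"]] = sub  # later entries overwrite earlier ones
    return list(latest.values())


subrequests = [
    {"process_id": "lead-42", "score": 1},
    {"process_id": "lead-7", "score": 5},
    {"process_id": "lead-42", "score": 3},  # duplicate resource
]
print(dedupe_subrequests(subrequests))
# [{'process_id': 'lead-42', 'score': 3}, {'process_id': 'lead-7', 'score': 5}]
```

If both updates to a resource genuinely matter (not just the last one), send them in separate bulk calls and wait for the first job to complete before submitting the second.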