Design Decision Log
Sync
The server should be responsible for handling, or offering a mechanism to handle, errors once data reaches it. (The error handling mechanism is being designed by Platform.)
Sync will be manually triggered by the user (except for configuration, which is automatically synced down on first login).
Sync down of data is not required for the first rollout (good to have).
Sync will be done through bulk APIs
Sync will send each save action performed by the field user while offline to the server, preserving an audit trail of the actions performed.
E.g., if a field user registers an individual, saves the record, and then goes back to update the individual, this results in two API calls to the server (one create and one update).
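A minimal sketch of such an offline action queue, assuming hypothetical entity names, endpoint paths, and request shapes rather than the actual DIGIT contract:
```typescript
// Assumed shape of a queued save action; names are illustrative only.
type SyncAction = {
  kind: "create" | "update";
  entityType: string;                 // e.g. "individual" (hypothetical)
  payload: Record<string, unknown>;   // the entity as saved offline
  performedAt: number;                // preserves the order of actions
};

const queue: SyncAction[] = [];

// Each offline save appends an action instead of merging into earlier ones,
// so registering and then editing an individual yields two queued actions.
function recordSave(action: SyncAction): void {
  queue.push(action);
}

// On manual sync, the queue is replayed in order against the bulk endpoints.
// Consecutive actions of the same kind could be batched up to the configured
// batch size without losing the create-then-update audit trail.
async function sync(baseUrl: string): Promise<void> {
  for (const action of queue.sort((a, b) => a.performedAt - b.performedAt)) {
    const endpoint = action.kind === "create" ? "bulk/_create" : "bulk/_update";
    await fetch(`${baseUrl}/${action.entityType}/v1/${endpoint}`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ [action.entityType]: [action.payload] }),
    });
  }
}
```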
API response compression (gzip) will be turned on for all endpoints.
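For illustration only, a sketch of gzip response compression using Node's built-in modules; DIGIT's actual services handle compression in their own HTTP layer:
```typescript
import { createServer } from "node:http";
import { gzipSync } from "node:zlib";

// A minimal sketch: compress the JSON response body when the caller
// advertises gzip support via the Accept-Encoding header.
createServer((req, res) => {
  const body = JSON.stringify({ ResponseInfo: { status: "successful" } });
  if (req.headers["accept-encoding"]?.includes("gzip")) {
    res.writeHead(200, {
      "Content-Type": "application/json",
      "Content-Encoding": "gzip",
    });
    res.end(gzipSync(body));
  } else {
    res.writeHead(200, { "Content-Type": "application/json" });
    res.end(body);
  }
}).listen(8080);
```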
Unique identifier to be used between client and server
The client will create a unique, client-generated identifier per entity (clientReferenceId) to relate entities while operations are performed offline, and will send this id to the server for create operations.
Any subsequent operations (such as update or delete) will use the server-generated id, so the client has to obtain it by calling the search API with the clientReferenceId, which returns the server-generated id.
The client will store the server-generated id in its local records and use it for subsequent updates to those entities.
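A sketch of this id-resolution flow, assuming a hypothetical individual search endpoint and response shape:
```typescript
// Assumed local record shape; field names are illustrative only.
type LocalRecord = {
  clientReferenceId: string;          // generated on the client while offline
  serverId?: string;                  // filled in after the first sync
  data: Record<string, unknown>;
};

async function resolveServerId(baseUrl: string, record: LocalRecord): Promise<string> {
  if (record.serverId) return record.serverId;    // already resolved locally
  // Create operations carried only the clientReferenceId; the client now
  // searches by that id to learn the server-generated id.
  const res = await fetch(`${baseUrl}/individual/v1/_search`, {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({
      Individual: { clientReferenceId: [record.clientReferenceId] },
    }),
  });
  const body = await res.json();
  // Store the server-generated id locally; all later updates use it.
  const serverId = body.Individual[0].id as string;
  record.serverId = serverId;
  return serverId;
}
```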
Processing on the backend will continue to be asynchronous and will not be changed for the health use case.
Configurability
Backend
Search API max limit and default limit (see the limit-clamping sketch after this list)
Service details and endpoints
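As referenced above, a sketch of how configurable default and max limits could be applied to search calls; the configuration keys and values are assumptions:
```typescript
// Hypothetical configuration; actual keys and values would come from config.
const searchConfig = {
  defaultLimit: 10,   // applied when the caller sends no limit
  maxLimit: 100,      // hard ceiling regardless of the requested limit
};

function effectiveLimit(requested?: number): number {
  if (requested === undefined || requested <= 0) return searchConfig.defaultLimit;
  return Math.min(requested, searchConfig.maxLimit);
}

// effectiveLimit()    -> 10  (default applied)
// effectiveLimit(25)  -> 25
// effectiveLimit(500) -> 100 (clamped to the configured max)
```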
Frontend
Service details and endpoints
Number of entities sent to server in bulk API calls
Additional fields per entity in the app
The drop-down values or select options to be presented in the app for field users
Configuration changes are expected to be additive so that data captured against earlier configuration remains usable.
Number of retries on API call failure, after which the app should stop retrying
Timeout for API calls (retry and timeout handling is sketched after this list)
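A sketch of the retry and timeout configuration referenced above; the config keys, values, and fetch-based transport are assumptions:
```typescript
// Hypothetical frontend configuration; real values come from app config.
const apiConfig = {
  maxRetries: 3,        // stop retrying after this many failed retries
  timeoutMs: 30_000,    // per-call timeout
  batchSize: 100,       // number of entities per bulk API call
};

async function callWithRetry(url: string, body: unknown): Promise<Response> {
  let lastError: unknown;
  // First attempt plus up to maxRetries retries.
  for (let attempt = 0; attempt <= apiConfig.maxRetries; attempt++) {
    try {
      // AbortSignal.timeout aborts the request once timeoutMs elapses.
      return await fetch(url, {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(body),
        signal: AbortSignal.timeout(apiConfig.timeoutMs),
      });
    } catch (err) {
      lastError = err;   // network error or timeout; try again
    }
  }
  // After the configured number of retries the app stops and surfaces the error.
  throw lastError;
}
```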
Access Control
Access to data will be defined by the linkage of staff to a project and the corresponding boundary, while respecting role-action mappings, instead of the tenant-based approach.
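A sketch of what such an access check might look like; the StaffLink and role-action shapes are assumptions for illustration:
```typescript
// Hypothetical shapes; the actual linkage and mapping models may differ.
type StaffLink = { staffId: string; projectId: string; boundaryCode: string };
type RoleAction = { role: string; action: string };

function canAccess(
  links: StaffLink[],
  roleActions: RoleAction[],
  staffId: string,
  role: string,
  action: string,
  projectId: string,
  boundaryCode: string,
): boolean {
  // Data access follows the staff-to-project/boundary linkage...
  const linked = links.some(
    (l) =>
      l.staffId === staffId &&
      l.projectId === projectId &&
      l.boundaryCode === boundaryCode,
  );
  // ...while still respecting the role-action mapping.
  const allowed = roleActions.some((ra) => ra.role === role && ra.action === action);
  return linked && allowed;
}
```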
Data
All deletes are soft deletes
The server is responsible for deleting nested/child entities, if applicable, when the parent entity is deleted (sketched below).
Undeletion is not permitted
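A sketch of a cascading soft delete under these rules, assuming an isDeleted flag and an in-memory entity tree; the real services mark rows in the database instead:
```typescript
// Hypothetical entity shape; field names are illustrative only.
type Entity = { id: string; isDeleted: boolean; children?: Entity[] };

function softDelete(entity: Entity): void {
  // Soft delete: the record is flagged, never physically removed, and no
  // code path ever sets the flag back (undeletion is not permitted).
  entity.isDeleted = true;
  // The server cascades the soft delete to nested/child entities.
  for (const child of entity.children ?? []) {
    softDelete(child);
  }
}
```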
Duplicate updates do not need to be detected or filtered out; the client is responsible for sending duplicate updates on an entity as separate requests to the server.
Since persistence in DIGIT is asynchronous, but the sync process from the field worker app is likely to send dependent data within a short span of time, a cache is introduced against which dependent data can be validated, if required, while it is still being persisted to the DB. This cache is exposed via the search API of the corresponding service, and the caller is agnostic to whether a result came from the cache or the database.
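A sketch of a cache-backed search along these lines, assuming a hypothetical record shape and database interface:
```typescript
// Hypothetical record shape; field names are illustrative only.
type EntityRecord = { id: string; clientReferenceId: string };

// Records accepted for asynchronous persistence sit here until the
// persister confirms they have reached the database.
const pendingCache = new Map<string, EntityRecord>();

function accepted(record: EntityRecord): void {
  pendingCache.set(record.clientReferenceId, record);
}

function persisted(record: EntityRecord): void {
  pendingCache.delete(record.clientReferenceId);
}

// The search checks the cache first, then the database, so dependent data
// can be validated even before persistence completes. The caller cannot
// tell which source answered.
async function search(
  db: { findByClientReferenceId: (id: string) => Promise<EntityRecord | undefined> },
  clientReferenceId: string,
): Promise<EntityRecord | undefined> {
  return (
    pendingCache.get(clientReferenceId) ??
    (await db.findByClientReferenceId(clientReferenceId))
  );
}
```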
For v1, the sequence of updates can be maintained only for requests on a single entity by a single user; i.e., updates on the same entity by multiple users will result in the last caller's updates going through.
To maintain the sequence of updates, a rowVersion field is introduced in every entity. Callers pass the rowVersion as received from the server, so the server can detect out-of-sequence updates and reject them or pass them to the error handling mechanism.
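A sketch of rowVersion-based sequencing with a simplified in-memory store; the entity shape is an assumption, and the real services would hand rejected updates to the error handling mechanism rather than throw:
```typescript
// Hypothetical versioned entity; extra fields elided for brevity.
type Versioned = { id: string; rowVersion: number };

const store = new Map<string, Versioned>();

function update(incoming: Versioned): void {
  const current = store.get(incoming.id);
  // Callers must send the rowVersion they last received from the server;
  // a mismatch means an out-of-sequence update.
  if (current && incoming.rowVersion !== current.rowVersion) {
    throw new Error(`Out-of-sequence update for entity ${incoming.id}`);
  }
  // Accept the update and bump the version for the next caller.
  store.set(incoming.id, { ...incoming, rowVersion: (current?.rowVersion ?? 0) + 1 });
}
```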
Search APIs will not return (soft) deleted data by default. If deleted data is required, the includeDeleted param can be passed to the search APIs.
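A sketch of the includeDeleted behavior; the isDeleted field name is an assumption:
```typescript
// Hypothetical searchable row; field names are illustrative only.
type Searchable = { id: string; isDeleted: boolean };

function search(rows: Searchable[], includeDeleted = false): Searchable[] {
  // Soft-deleted rows stay hidden unless the caller explicitly opts in
  // by passing includeDeleted.
  return includeDeleted ? rows : rows.filter((r) => !r.isDeleted);
}
```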
Limitations
From the product requirement perspective, there is no unique identifier available, so the Platform is unable to check the uniqueness of registry entities.