This document defines the software requirements for the distribution point features of Reveal. A distribution point is a centralized location where individuals meet to receive care. The primary workflows are funded under a grant for Neglected Tropical Diseases (NTDs), which focuses on distributing medication to children attending schools in Eswatini. The functional requirements are outlined in NTD School Workflow and the data dictionary. This document maps those functional requirements onto the Reveal technical architecture.
...
The reporting needs are defined in the data dictionary from Akros. This will require changes to the Canopy flow to make sure the data is available to the data warehouse and Superset.
Web UI Reporting Needs
Dashboards will follow the current IRS model
Drill down to school level with the same set of stats (Region, Inkhundla, School)
At the school level, cut statistics by age category
6-10
11-15
16-18
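The age categories above can be sketched as a simple bucketing helper. This is an illustrative assumption, not existing Reveal code; the function name and signature are hypothetical, while the category labels and bounds come from this document.

```python
# Hypothetical helper mapping a client's age in years to the reporting
# category used when cutting school-level statistics. The bounds and
# labels follow the list above; everything else is an assumption.
def age_category(age):
    """Return the age-category label for school-level statistics."""
    if 6 <= age <= 10:
        return "6-10"
    if 11 <= age <= 15:
        return "11-15"
    if 16 <= age <= 18:
        return "16-18"
    return None  # outside the targeted school-age range
```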
Canopy Components
A NiFi and Superset instance will need to be deployed (along with all necessary backend data stores) to support the OpenSRP Web UI dashboards and task generation features. The NiFi instance will have two main flows:
The latest version of the Reveal OpenSRP connector, or a generic version if possible
Customized NTD task generation logic. Initially this will be single-pass generation of all tasks when a plan becomes active.
If it cannot be guaranteed that all clients affected by a plan will be registered in OpenSRP when the plan becomes active, a second iteration that performs some form of dynamic task generation in response to client changes will need to be researched and built.
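The single-pass generation described above can be sketched as follows. This is a minimal illustration under stated assumptions: the `Client`, `Task`, and `generate_tasks` names and shapes are hypothetical and do not reflect the actual OpenSRP/Reveal data model.

```python
# Sketch of single-pass task generation when a plan becomes active:
# emit one task per eligible registered client. All names and field
# shapes here are illustrative assumptions, not the Reveal API.
from dataclasses import dataclass
from typing import List, Set

@dataclass
class Client:
    client_id: str
    jurisdiction_id: str

@dataclass
class Task:
    client_id: str
    plan_id: str
    status: str

def generate_tasks(plan_id: str, target_jurisdictions: Set[str],
                   clients: List[Client]) -> List[Task]:
    """Create a READY task for every client in a targeted jurisdiction."""
    return [
        Task(client_id=c.client_id, plan_id=plan_id, status="READY")
        for c in clients
        if c.jurisdiction_id in target_jurisdictions
    ]
```

A dynamic second iteration would instead run this logic incrementally, in response to client registration events rather than once at plan activation.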
The Superset instance will have the following slices exposed:
Aggregate target statistics per jurisdiction and level
Non-aggregated target statistics
Due to the complexity of these statistics, they will probably need to be recomputed on a (smart or dumb) periodic basis rather than aggregated on every request. As a first iteration, a simple time interval will be used; if this is not sufficient, we can iterate with marked-as-dirty tables, triggers, or other solutions.
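The "simple time interval" first iteration can be sketched as a cache that recomputes only when stale. The names (`get_target_stats`, `REFRESH_SECONDS`) are illustrative assumptions; in practice this would likely live in the warehouse layer rather than application code.

```python
# Sketch of interval-based recomputation: serve cached aggregate stats
# and recompute only when the cached copy is older than the interval.
# All names are illustrative, not part of Canopy or Superset.
import time

REFRESH_SECONDS = 300  # assumed interval: recompute at most every 5 minutes
_cache = {"computed_at": 0.0, "stats": None}

def get_target_stats(compute):
    """Return cached stats, invoking compute() when the interval elapses."""
    now = time.time()
    if _cache["stats"] is None or now - _cache["computed_at"] > REFRESH_SECONDS:
        _cache["stats"] = compute()
        _cache["computed_at"] = now
    return _cache["stats"]
```

A marked-as-dirty variant would replace the time check with a flag set by triggers on the underlying tables.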
CSV Upload → Event Generator
...
Users should not have the capability to download large lists that contain personally identifiable information about the clients.
Access to download rosters will, at maximum, provide identifiers that can be linked to people registered in the system.
All uploads need to be validated to include the following:
Valid CSV structure with UTF-8 encoding
Individual fields are parsed and validated to remove exposure to SQL injection threats
Individual fields are parsed and validated to remove exposure to cross-site scripting threats
(We probably need to add more here)
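The validation steps above can be sketched as a single upload check. This is a minimal illustration, not a complete defence: the rejection patterns are example heuristics only, and parameterised queries plus output escaping remain the real protections against SQL injection and XSS.

```python
# Illustrative upload validation covering the listed checks: UTF-8
# decoding, CSV parsing, and a crude field-level screen for injection
# and script content. The SUSPICIOUS patterns are assumptions meant
# to show the shape of the check, not a production rule set.
import csv
import io
import re

SUSPICIOUS = re.compile(r"(<script|javascript:|;\s*drop\s)", re.IGNORECASE)

def validate_upload(raw: bytes):
    """Return parsed rows, or raise ValueError if the upload is invalid."""
    try:
        text = raw.decode("utf-8")              # must be valid UTF-8
    except UnicodeDecodeError as exc:
        raise ValueError("upload is not valid UTF-8") from exc
    rows = list(csv.reader(io.StringIO(text)))  # must parse as CSV
    for row in rows:
        for field in row:
            if SUSPICIOUS.search(field):        # crude injection/XSS screen
                raise ValueError(f"suspicious field: {field!r}")
    return rows
```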
Out of Scope
Linking people who were registered in the distribution point workflows to families or structures
...