UAT 2 SOP
Objective
The purpose of this document is to outline the User Acceptance Testing (UAT) process for the final round of testing of the Salama (DIGIT-HCM) platform rollout in Mozambique, as part of the Group 4 distribution in Tete province. The first round of testing was conducted between April 27th and 28th, 2023, and the platform was found to satisfy most of the requirements for the campaign. Key learnings and feedback from the first round of testing have been incorporated into the platform, and the app is now being presented for the final round of testing.
UAT Methodology
During the testing process, a pre-approved list of test cases/scripts will be executed to verify the behaviour and performance of various functionalities, with special focus on the feedback gathered from the last round of testing. Observations from the testing will be noted and classified as defects or enhancements. Suggested enhancements, if any, will be taken up for inclusion in the platform if found necessary.
UAT observation classification:
| # | Observation type | Description | Addressing mechanism |
|---|------------------|-------------|----------------------|
| 1 | Defects | Any observation pertaining to a feature or functionality not working as expected, or as agreed at the time of scope review, will be classified as a defect. | Observations classified as defects will be taken up by the eGov programme team for further validation, prioritisation, and fixing. Minor issues arising from incorrect configurations, erroneous labels, or translations will be fixed and made available for re-testing during the next UAT cycle. |
| 2 | Change requests (CR) | Any recommendation to enable or disable a functionality from the initial requirements, or to add a new functionality. | These will be handled via the change control process as per the defined SoP, and will be evaluated based on their impact and effort. |
The UAT team will execute all the test scripts. Users may also perform additional tests that are not detailed in the plan but remain relevant and within the scope of the project.
Users will report feedback to the eGov team for documentation and escalation using a Google Sheet. Defects will be described, prioritised, and tracked using screen captures, descriptions, and the steps necessary for the development team to reproduce the defect. For change requests, the requirements will be described, analysed, prioritised, and tracked. Information on defect and CR processing can be found later in this document.
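For illustration, a minimal sketch of the kind of record each reported defect could carry is shown below; the field names are hypothetical and should be aligned with the agreed Google Sheet template.

```python
from dataclasses import dataclass, field

# Hypothetical defect record mirroring the fields described above:
# description, priority, screen captures, and reproduction steps.
@dataclass
class DefectReport:
    defect_id: str                    # e.g. "UAT2-001"
    description: str                  # observed vs. expected behaviour
    priority: str                     # "Critical" | "Major" | "Minor"
    steps_to_reproduce: list[str] = field(default_factory=list)
    screenshots: list[str] = field(default_factory=list)  # file names or links
    status: str = "New"               # updated as the defect progresses

example = DefectReport(
    defect_id="UAT2-001",
    description="Save button stays disabled after all mandatory fields are filled",
    priority="Major",
    steps_to_reproduce=[
        "Open household registration",
        "Fill in all mandatory fields",
        "Observe the Save button state",
    ],
)
```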
Test Phases
The various phases of UAT are shown in the diagram below:
Scope for UAT
UAT will be conducted in two phases with the scope mentioned below. This document deals with the UAT-2 scope, as UAT-1 has already been completed.
| Test phase | Tracks | Scope |
|------------|--------|-------|
| UAT 1 | HCM mobile app | Registration and service delivery, supervision, inventory, raising complaints from the mobile app |
| UAT 2 | Retest issues/observations from UAT 1 | Retesting observations and accepted CRs/bug fixes, and sign-off |
Out of scope of UAT
For the functionalities mentioned below, separate demos and/or training sessions will be conducted for the targeted user groups.
1. Helpdesk for complaints and user management:
   a. User management module to add/update/activate/deactivate users and role mapping.
   b. Handling complaints via the inbox functionality.
2. Central, provincial, and district dashboards, and reports.
During UAT, the team will validate the end-to-end flow and features of the application, checking that:
- The end-to-end business flow works for each of the identified user flows
- All the required fields are present
- The system accepts only valid values
- The user can save the form after filling in all the mandatory fields
- Labels and their translations are correct
Prerequisites for UAT
Following is the list of prerequisites for conducting the UAT:
1. The HCM mobile app for UAT deployed in the UAT environment.
2. Mobile phones set up with the HCM app.
3. Configuration of the HCM mobile app in the UAT environment with master data from Zambezia.
4. Readiness of handouts:
   - Test case document
   - Mockups
   - Defect/change request reporting template
5. Availability of teams from NMCP, DIS, and DTIC for UA testing.
6. Nomination of the participants for the UAT session so that test accounts can be created by eGov.
7. Configuration of the ticket tracking tool for UAT (JIRA).
UAT Environment
The UAT environment will be similar to the production environment in terms of specifications, and will be provided by eGov, so that accurate assumptions can be drawn regarding the application's performance.
The eGov team will provide the applicable IP addresses and URLs to the UAT team, and all the mobile phones will be configured for access to the test environment.
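As a quick pre-session check, a script along the following lines can confirm that the UAT environment is reachable from the network the test phones will use; the URL is a placeholder, to be replaced with the address shared by eGov.

```python
import urllib.request

# Placeholder address: substitute the UAT environment URL provided by eGov.
UAT_BASE_URL = "https://uat.example.org/health"

def uat_reachable(url: str, timeout: int = 10) -> bool:
    """Return True if the UAT endpoint responds without an HTTP/network error."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status < 400
    except OSError:
        return False

if __name__ == "__main__":
    print("UAT environment reachable:", uat_reachable(UAT_BASE_URL))
```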
UAT Process
Each test participant will be provided with a checklist to verify access to the applications within the defined scope of testing. The tester will then log in, perform the prescribed steps, and verify the expected results for each activity. Any missing feature or bug identified during testing in the UAT environment should be reported back to eGov.
UAT Data
Access to test data is a vital component of conducting a comprehensive test of the system. All UAT participants will require test accounts and other pertinent test data, which should be provided by NMCP upon request. All user roles should fully emulate production in the UAT environment. Test accounts and role mapping will be set up by eGov for the nominated users. Following is the sample test data for UA testing; an illustrative sketch of its shape follows the list below:
Sample master data
- User data for test login creation
- Location master (AP/locality/village)
- Inventory module master (warehouse and product)
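The snippet below sketches one plausible shape for these masters; all codes and names are illustrative placeholders, not actual Zambezia master data.

```python
# Illustrative shapes only; real codes and names come from the Zambezia
# masters loaded into the UAT environment.
location_master = [
    {"code": "AP01", "type": "administrative_post", "name": "Sample AP"},
    {"code": "AP01-L01", "type": "locality", "name": "Sample locality", "parent": "AP01"},
    {"code": "AP01-L01-V01", "type": "village", "name": "Sample village", "parent": "AP01-L01"},
]

inventory_master = {
    "warehouses": [
        {"code": "WH-D01", "level": "district"},
        {"code": "WH-S01", "level": "satellite", "parent": "WH-D01"},
    ],
    "products": [{"sku": "BEDNET-STD", "name": "Bed net", "unit": "piece"}],
}

test_users = [
    {"username": "uat_registador_01", "role": "Registrar"},
    {"username": "uat_supervisor_01", "role": "Supervisor"},
    {"username": "uat_warehouse_01", "role": "Warehouse manager"},
]
```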
UAT Team
The test team is composed of members who:
- Possess knowledge of the operations and processes on the ground
- Are able to initiate test input and review the results
- Have prior experience and learnings from the campaign digitisation in Mozambique

The eGov team will present all team members with an overview of the test process and their specific roles in UAT. The eGov team will oversee testing by assigning scripts to testers, providing general support, and serving as the primary UAT contact point throughout the test cycle.
| Name of the participant | Project role/designation | Phone extension | Entity (NMCP/DIS/DTIC/partners) |
|-------------------------|--------------------------|-----------------|---------------------------------|
| | | | |

Note: The above table is to be filled in by NMCP/DIS/DTIC with the details of the nominated participants of the UAT session.
UAT Deliverables
The following sections detail milestones crucial to the completion of the UAT phase of the project. Once all dependent milestones have been completed, NMCP/DIS/DTIC will formally sign off on the system's functionality and distribute an email to all project stakeholders.
UAT Activities and Schedule
All core UAT activities, along with their deliverable dates, are described in the table below:

| Task | Owner | Start date | Expected closure date |
|------|-------|------------|-----------------------|
| UAT-2 | | | |
| Incorporating the UAT-1 feedback in the HCM application based on the results of the triaging process | eGov | 8-May-23 | 7-June-23 |
| Creation of the UAT test cases/scripts, and sharing with NMCP for review | eGov | 31-May-23 | 7-June-23 |
| Nomination of UAT participants so that user accounts can be created | NMCP/DIS/DTIC | 5-June-23 | 7-June-23 |
| Arrangement of venue and other logistics for the UAT session | NMCP | 5-June-23 | 9-June-23 |
| Readiness of the UAT server instance | eGov | 2-June-23 | 5-June-23 |
| Installing the test version of the HCM application on test phones | CHAI | 8-June-23 | 9-June-23 |
| Conducting the second round of UAT along with NMCP stakeholders | eGov, NMCP, DIS, DTIC, CHAI | 16-June-23 | 16-June-23 |
| Collation and triaging of UAT feedback and communication to NMCP | eGov | 16-June-23 | 19-June-23 |
| Incorporating any minor suggestions arising out of the second round of UAT | eGov | 20-June-23 | 20-June-23 |
| Fit for go-live | | | |
| Review of test results and declaration of the application as fit for use during the Group 4 distribution | eGov | 21-June-23 | |
UAT - 2 Session Plan & Structure
The UAT session will be conducted in person in Maputo. The agenda for the UAT session is given below; it may change based on the scope for UAT-2.
| # | Activity | Approximate duration | Time |
|---|----------|----------------------|------|
| | Day 1 (16th June 2023) | | |
| 1 | Introduction to the UAT-2 session | 15 minutes | 8.00 am - 8.15 am |
| 2 | Walkthrough of the mobile app; walkthrough of the feedback from the previous (UAT-1) session; walkthrough of the test scenarios to be executed; walkthrough of the defect/CR reporting templates; distribution of printed templates for capturing defects/CRs; distribution of test cases and mobile devices | 60 minutes | 8.15 am - 9.15 am |
| 3 | Testing of the Salama (DIGIT HCM) app - Session 1 | 45 minutes | 9.15 am - 10.00 am |
| | Tea break 1 | 15 minutes | 10.00 am - 10.15 am |
| 4 | Testing of the Salama (DIGIT HCM) app - Session 2 | 150 minutes | 10.15 am - 12.45 pm |
| 5 | Review of the testing performed | 15 minutes | 12.45 pm - 1.00 pm |
| | Lunch break | 60 minutes | 1.00 pm - 2.00 pm |
| 6 | Testing of the Salama (DIGIT HCM) app - Session 3 | 60 minutes | 2.00 pm - 3.00 pm |
| | Tea break 2 | 15 minutes | 3.00 pm - 3.15 pm |
| 7 | Defect review: clarifying Q&A on defects raised (if any) | 30 minutes | 3.15 pm - 3.45 pm |
| 8 | Session closure | 15 minutes | 3.45 pm - 4.00 pm |
UAT Sign-off
The mutually agreed defects/CRs from UAT-1 will be re-tested in UAT-2.
After the second round of testing as part of UAT-2, the application will be deemed ready for deployment in the production environment.
UAT Feedback
Any minor feedback arising out of the UAT-2 testing that requires minimal changes in the Salama (DIGIT HCM) platform will be incorporated before the campaign goes live.
UAT Defect Lifecycle
Defects must be clearly captured and escalated to ensure prompt resolution by the development team. Each defect submitted by UAT will be assigned a priority, worked on by development, resolved, and re-tested by UAT prior to closure. The following is a snapshot of the standard defect lifecycle:
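The snapshot referred to above is a diagram; as a textual stand-in, the sketch below models one plausible version of that lifecycle. The exact state names and transitions are assumptions and should be read against the diagram.

```python
from enum import Enum

class DefectState(Enum):
    NEW = "New"
    ASSIGNED = "Assigned"
    FIXED = "Fixed"
    RETEST = "Retest"
    CLOSED = "Closed"
    REOPENED = "Reopened"

# Assumed transitions: a submitted defect is triaged and assigned, fixed by
# development, re-tested by UAT, and then either closed or reopened.
ALLOWED_TRANSITIONS = {
    DefectState.NEW: {DefectState.ASSIGNED},
    DefectState.ASSIGNED: {DefectState.FIXED},
    DefectState.FIXED: {DefectState.RETEST},
    DefectState.RETEST: {DefectState.CLOSED, DefectState.REOPENED},
    DefectState.REOPENED: {DefectState.ASSIGNED},
    DefectState.CLOSED: set(),
}

def advance(current: DefectState, target: DefectState) -> DefectState:
    """Move a defect to the next state, rejecting invalid transitions."""
    if target not in ALLOWED_TRANSITIONS[current]:
        raise ValueError(f"Cannot move a defect from {current.value} to {target.value}")
    return target
```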
Prioritisation
eGov and NMCP will together prioritise and classify defects. Defects found in UAT can be assigned one of three levels of severity:
Critical defects: Defects found during testing that render the application unusable and prevent it from being used during the campaign. These will be resolved on priority.
Major defects: Defects that make a part of the application's functionality unavailable, but the user can still use the application with limited functionality or a workaround exists. These will be taken up and fixed before the campaign starts.
Minor defects: Defects that reflect a deviation from the agreed scope but do not hamper the use of the application in any way. These will be considered for fixing only if time permits.
UAT Change Request Lifecycle
Change requests (CRs) must be clearly captured and reported for analysis so that the eGov team can identify their effort and impact. Each CR submitted will be validated and categorised for acceptance, and then assigned a priority. The development team will work on accepted CRs, which will then be made available for testing. The following is a snapshot of the standard CR lifecycle:
Categorisation
eGov, in consultation with NMCP, will decide on the acceptance and categorisation of change requests. Change requests raised in UAT can be assigned one of three categories:
Must have – Change requests that are needed for the success of the campaign. No workaround exists.
Should have – Change requests that are required for better tracking and monitoring, and will increase the ease of use of the system for users. A workaround has been identified and is listed in the CR.
Good to have – Change requests that are simply for better visualisation and reporting. These can be excluded if not resolved by the scheduled implementation date.
eGov will endeavour to cover the "Must Have" changes before distribution. Lower priority changes will be taken through the eGov gating process for planning subsequent releases.
Success criteria
- No critical defects found during the execution of the test scenarios.
- At least 90% of the total test cases executed successfully, with the observed behaviour matching the expected results (a worked example follows below).
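As a worked example of applying these criteria, with hypothetical counts:

```python
# Hypothetical counts for illustration only.
total_cases = 120
passed_cases = 111
critical_defects = 0

pass_rate = passed_cases / total_cases  # 111 / 120 = 0.925
fit_for_go_live = critical_defects == 0 and pass_rate >= 0.90
print(f"Pass rate: {pass_rate:.1%}, fit for go-live: {fit_for_go_live}")
```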
Post test activities and checklists
1. SUS Questionnaire (for the application)
(On a 5-point scale: Strongly Disagree, Disagree, Neutral, Agree, Strongly Agree. A scoring sketch follows the item list.)
I think I would like to use this tool frequently
I found the tool unnecessarily complex
I thought the tool was easy to use
I think that I would need the support of a technical person to be able to use this app
I thought there was too much inconsistency in this tool
I would imagine that most people would learn to use this tool very quickly
I found the tool very difficult to use
I felt very confident using the tool
I needed to learn a lot of things before I could get going with this tool
I could efficiently complete my tasks using the system
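Standard SUS scoring maps each 1-5 response to a 0-4 contribution (positively worded items score the response minus 1; negatively worded items score 5 minus the response) and multiplies the sum by 2.5 for a 0-100 score. Since the item list above departs slightly from the canonical SUS wording, the positive/negative split in the sketch below is inferred from the wording and should be confirmed before use.

```python
# Responses are 1 (Strongly Disagree) .. 5 (Strongly Agree), in item order 1-10.
# Inferred from the wording above: items 1, 3, 6, 8, and 10 are positively
# worded; items 2, 4, 5, 7, and 9 are negatively worded.
POSITIVE_ITEMS = {1, 3, 6, 8, 10}

def sus_score(responses: list[int]) -> float:
    """Compute a 0-100 SUS-style score from ten 1-5 Likert responses."""
    assert len(responses) == 10 and all(1 <= r <= 5 for r in responses)
    total = sum(
        (r - 1) if item in POSITIVE_ITEMS else (5 - r)
        for item, r in enumerate(responses, start=1)
    )
    return total * 2.5  # 10 items x 4 points max = 40, scaled to 100

# Example: a fairly positive response set scores 85.0.
print(sus_score([5, 2, 4, 2, 1, 4, 1, 5, 2, 4]))
```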
2. Feedback on the UAT process
On a scale of 1 to 10, how well did the UAT task represent real-world scenarios and user interactions with the system?
Was the UAT task comprehensive, covering all key functionalities and features of the system?
Did the UAT include a variety of scenarios, data inputs, and user interactions to thoroughly test the system?
Was the UAT task representative of different user roles, permissions, and use cases that are relevant to the system?
Did the UAT task adequately address any specific requirements or criteria that were defined for the UAT phase?
3. Sign-Off Checklist
All UAT test cases completed: Verified that all UAT test cases, as defined in the UAT plan, have been executed and completed.
Business requirements validated: Validated that all business requirements defined in the requirements documentation (features, functions, workflows, calculations, and translations) have been thoroughly tested and verified.
Compatibility tested: Verified that the application has been tested on the specified device, which is used for the distribution (Samsung A09) and the operating system (Android), and any compatibility issues have been addressed.
Feedback documented: All feedback has been identified and documented, and priorities have been agreed.
Documentation updated: Ensured that all relevant documentation, including user manuals, has been updated to reflect any changes made during UAT.
UAT summary report prepared: Prepared a UAT summary report that includes the UAT results, findings, and any outstanding issues or risks identified during UAT.
References
The following reference documents were leveraged for project-specific information in the design of this UAT plan:
SUS Questionnaire
Annexure
UAT-2 Test Scenarios for Salama
Location: MISAU, Maputo
Day: 16th June, 2023
Scenarios
Registration & Distribution:
Register 5 households residing in a village in Zambezia. Capture how many members are living in the household, and fill in the details (age, gender, mobile number) for the head of the household. Then, proceed to deliver bed nets:
a. Which is less than the number of bed nets suggested by the app for 2 households
b. Which is more than the number of bed nets suggested by the app for 1 household
c. Which is the same as the number of bed nets suggested by the app for 2 households
As a distributor, you are required to carry out the distribution in a village other than the one you covered earlier. Change the village and register two households. This time, skip adding the landmark on the household details page. Deliver the exact number of bed nets suggested by the app.
You realise that you made a mistake while registering one member and want to correct it. Search for a household head you have registered who has individual members added. First, correct the household head's details by changing the age. Next, replace the mobile number you entered previously with a new one.
You are a registador and you forgot your password. What will you do?
You are not able to see your assigned village in the list of villages. What action will you take?
Sync all the records pending to be synced in the application.
You are a registador and you realise that one of the members you registered has already been registered by your colleague, and is a duplicate entry. What will you do to remove the duplicate entry from your records?
Supervision:
You are the supervisor for the team working in village A in Zambezia, and your team members are in the field distributing nets. As per protocol, you are expected to go and observe them work and record your observations in the monitoring checklists. Fill out and submit your observations for the following checklists, marking 'No' as the response to at least 2 questions in each checklist and providing the reasons.
a. Registration & distribution monitoring
b. District warehouse monitoring
c. Satellite warehouse monitoring
d. Local monitors training monitoring
e. Registration & distribution training monitoring
You are the supervisor for the team working in a village in Zambezia, and you have made four checklist entries so far. You are supposed to make five entries daily for the "Registration & Distribution Monitoring" checklist. How will you check how many entries you have made so far today, and for which boundaries?
Sync all the records pending to be synced in the application.
Warehouse Management:
You are a warehouse manager for a warehouse in an administrative post in Zambezia, and you received a stock of 2 bales into your satellite warehouse today from a district warehouse. Enter the receipt of this stock in the app.
You are a local monitor for an administrative post and received a stock of 45 bed nets back from the distribution team. Enter the receipt of this stock as a return for the satellite warehouse you manage in the app.
You are a district warehouse manager and you distributed the following. Make entries for both distributions in the app:
a. 200 nets to the delivery team
b. 5 bales to a community warehouse
You are inspecting the warehouse at the end of the day as a community warehouse manager and have counted the number of nets in stock. How will you make an entry for this in the following cases?
a. Your count shows 50 more nets in the warehouse than the number suggested by the system.
b. Your count shows 10 fewer nets in the warehouse than the number suggested by the system.
Sync all the records pending to be synced in the application.