OpenSRP Developer's Guide

Introduction to OpenSRP

OpenSRP is an open source mHealth platform for government frontline health workers who provide Reproductive, Maternal, Newborn, and Child Health (RMNCH) services in low-resource settings. The platform combines a simple, easy-to-learn tablet application with the health expertise of the WHO and the pragmatic approach of top field implementers. The code base is open to outside collaborators, with the current core technical collaborators located in Indonesia, Bangladesh, Pakistan, and Kenya.

Who should read this

If you want to commit code to any OpenSRP repository, you need to read this entire document.

Coding best practices

All OpenSRP code is hosted on GitHub at https://github.com/OpenSRP.  

Write readable code.

Naming conventions

  • Classes - Class names always begin with a capital letter; if there are multiple words in the class name, each word must also begin with a capital letter, i.e. CamelCase.

  • Packages - Package names should always be in lowercase.

  • Objects/Variables - Instance and variable names should start with a lowercase letter; if there are multiple words in the name, capitalize the first letter of each word except the first. This is known as lowerCamelCase.

  • Methods - Method names in Java follow the same lowerCamelCase convention as objects and variables.

  • Constants - In Java, constants are declared using the “static final” modifiers. Such variables must contain only uppercase characters, with multiple words separated by ‘_’. Other constants that change based on build flavours, e.g. test, staging, prod, should be specified in XML files for Android and in .properties files in the webapps, e.g. the DB connection URL.

  • Database tables - Relational database table names should be lowercase and plural. If there are multiple words in the name, separate them with an underscore. The corresponding Java model should be singular and follow the class naming conventions (see the sketch after this list).
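
A brief sketch illustrating these conventions (the class, field, and table names below are hypothetical):

package org.opensrp.register;                      // package name in lowercase

public class ChildVisit {                          // class name in CamelCase

    public static final int MAX_RETRY_COUNT = 3;   // constant in uppercase, words separated by '_'

    private String visitDate;                      // variable in lowerCamelCase

    public String getVisitDate() {                 // method in lowerCamelCase
        return visitDate;
    }
}

// The corresponding relational table would be named child_visits (lowercase, plural, underscore-separated).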


Third party libraries

External libraries/JAR files should only be added when absolutely necessary; e.g. adding the Apache Commons library to the Android code just to use classes like StringUtils is undesirable, since it may unnecessarily push the app towards multidex issues. Always check an external library’s license details before including it in the project. If you are not sure how to interpret a library’s license, do not use it.

Imports

Do not use wildcard imports. If you are importing multiple classes from a package, list them individually, as in the following code block; this makes it clear exactly which imported classes are in use in any file.

import java.util.ArrayList;
import java.util.List;
import java.util.Random;

Do not do the following:

import java.util.*;

In Eclipse, Organize Imports can be set to run automatically on file save by enabling the feature in Project → Properties → Save Actions. This automatically removes unused imports from a class.


Code formatting

It is recommended to use the default formatting feature of an IDE, e.g. Android Studio (AS), to format code. This produces consistently formatted code across developers, which in turn prevents unnecessary conflicts when merging code. To format code in AS, use Ctrl+Alt+L (Option+Command+L on Mac). For Eclipse, use Ctrl+Shift+F (Command+Shift+F on Mac). In Eclipse, code formatting can be set to run automatically on file save by enabling the feature in Project → Properties → Save Actions.

Code commenting

Do not leave commented-out code in the source files; just remove it. There are some exceptions, of course, such as when you are fairly sure it needs to be re-enabled at some point. To make sure you don't forget why, leave a clear TODO comment, e.g. "TODO this needs to be re-enabled when TRUNK-XXXX is fixed." Most IDEs are good at parsing out all TODO statements. When adding comments to the code, follow the javadoc guidelines going forward, so that any developer can pick up the code and know what it does (remember the team is spread across the globe).

Exception handling

Use the given platform’s best exception logging infrastructure; in Android, for example, the Log.* classes are the recommended way of logging exceptions, since exceptions logged this way can automatically be captured by crash reporting tools such as Crittercism or Crashlytics. When creating new exception classes, create an intuitive hierarchy of exceptions within the package they belong to, and only create new Exception classes when the client/webapp needs to branch on different types of exceptions (add them as needed instead of making extra classes "just in case" for every possible variation). In Android, as a general rule, use the class name as the log tag and define it as a static final field at the top of the file. For example:

public class LoginActivity {
   private static final String TAG = LoginActivity.class.getSimpleName();

   public void myMethod() {
       Log.e(TAG, "error message");
   }
}

VERBOSE and DEBUG logs must be disabled on release builds. It is also recommended to disable INFO, WARN, and ERROR logs, keeping them enabled only where they are useful for identifying issues on release builds. If they are enabled, make sure that they do not leak private information such as email addresses, user IDs, etc.
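
A minimal sketch of one way to keep VERBOSE/DEBUG logging out of release builds, assuming the app's generated BuildConfig class is available (the wrapper class name is hypothetical):

import android.util.Log;

public class AppLog {

    // Only emit VERBOSE and DEBUG output on debug builds; release builds stay silent.
    public static void v(String tag, String message) {
        if (BuildConfig.DEBUG) {
            Log.v(tag, message);
        }
    }

    public static void d(String tag, String message) {
        if (BuildConfig.DEBUG) {
            Log.d(tag, message);
        }
    }
}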

Resources


Code testing in OpenSRP

OpenSRP adopts Test Driven Development (TDD), an evolutionary approach that combines test-first development (writing a test before writing just enough production code to fulfill that test) with refactoring. It is a way to think through your requirements or design before writing the functional code, which makes TDD both an important agile requirements technique and an agile design technique.

Using this process, you write an automated test first, ensure that the test fails, write the code, and then ensure that the test passes. The benefits of the TDD process are that it drives the API and design, provides good code coverage, promotes safe code refactoring, and keeps developers focused on the task.

Generally, following TDD methodology, the developer will:

  • Create a feature branch

  • Write failing high-level tests and commit them (with a well defined commit message) to the new branch

  • Work on the implementation code, continuously committing it to the feature branch

  • When the feature is finished, update and run the tests to make sure they pass

  • Merge the feature branch into the release branch

Unit tests

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. A unit test should address a single point of functionality and should be fast, understandable (by a human), automatic (no human intervention), and isolated from external resources such as databases, file systems, web services, JMS brokers, etc.

Unit tests are useful in that:

  • They allow you to check the expected behavior of the piece(s) of code you are testing, serving as a contract that it must satisfy.

  • They also allow you to safely refactor code without breaking the functionality (contract) of it.

  • They allow you to make sure that bug fixes stay fixed by implementing a Unit test after correcting a bug.

  • They may serve as a way to write decoupled code (if you have testing in mind while writing your code).


Rule of thumb: At least one unit test method per nontrivial public method (99% of methods will need multiple tests)
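
A minimal sketch of a unit test written with JUnit 4 (the DateUtil class and its formatDate method are hypothetical):

import static org.junit.Assert.assertEquals;

import org.junit.Test;

public class DateUtilTest {

    @Test
    public void formatDateWithValidDateReturnsIsoString() {
        // Fast and isolated: no database, file system or network access.
        DateUtil dateUtil = new DateUtil();
        assertEquals("2015-08-04", dateUtil.formatDate(2015, 8, 4));
    }
}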

Integration testing

Integration tests address another aspect of testing: software modules are tested under realistic production conditions to validate overall functionality. As a white-box strategy, integration testing is aware of internal implementation details. Here we use a real database and real API calls, and mocking is generally discouraged in these tests.

Since integration tests use a real database, extreme care should be taken to make sure that the tests are not set to run against production databases or resources. To handle this, a test.properties file should be set up with the test resource definitions, and the test build should be made to fail if the build environment parameters are not provided; i.e. test builds should always check for, or automatically switch to, the test.properties file.
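
One possible sketch of such a guard, assuming a JDBC-style db.connection.url property and a convention in which test database names end in "_test" (both the property name and the suffix are assumptions):

import static org.junit.Assert.assertTrue;

import java.io.InputStream;
import java.util.Properties;

import org.junit.BeforeClass;

public abstract class BaseIntegrationTest {

    @BeforeClass
    public static void verifyTestDatabase() throws Exception {
        Properties properties = new Properties();
        try (InputStream stream =
                     BaseIntegrationTest.class.getResourceAsStream("/test.properties")) {
            properties.load(stream);
        }
        // Fail fast if the configured database is not a dedicated test database.
        String dbUrl = properties.getProperty("db.connection.url", "");
        assertTrue("Integration tests must run against a test database, not: " + dbUrl,
                dbUrl.endsWith("_test"));
    }
}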

How to distinguish integration tests from unit tests - a test is an integration test if any of the following is true:

  • a test uses the database

  • a test uses the network

  • a test uses an external system (e.g. a queue or a mail server)

  • a test reads/writes files or performs other I/O

Server side integration testing

The Spring Framework provides support for integration testing through the Spring TestContext Framework, which lets you write integration tests using a testing framework such as JUnit. In JUnit, the @RunWith annotation sets a Runner class, which is responsible for launching tests, triggering events, and marking tests as successful or failed.

For server side integration testing, use the following Spring TestContext Framework features:

  • The @ContextConfiguration annotation is used to load a Spring context from an XML file, for example, @ContextConfiguration("/path/context.xml"). By convention, if an application context path isn’t set, the path is set to [TestClassName]-context.xml.

  • Spring TestContext caches the application context for better performance. This saves time on each test startup.

  • The @DirtiesContext annotation indicates that the application context for a test is dirty and that TestContext should close it after each test.

  • Spring TestContext defines a listener API that lets tests interact with objects via the @TestExecutionListeners annotation. For example, the DependencyInjectionTestExecutionListener class provides support for dependency injection and initialization of test instances. Spring TestContext adds this listener by default and injects fields by type (annotated with @Autowired) or by name (annotated with @Resource).

The following simple example uses these features:

@RunWith(SpringJUnit4ClassRunner.class)
@ContextConfiguration("/path/to/context.xml")
public class OpenSRPIntegrationTest {
  @Autowired
  private OpenSRPObject object;
  @Test
  public void test() {
  }
}

Functional testing

Functional tests focus on the functional requirements of an application, and use the black-box testing strategy. This involves providing input values and verifying output values without concern for any implementation details. Functional testing ensures that the app meets its functional requirements and achieves a high standard of quality such that it is more likely to be successfully adopted by users.

OpenSRP-client (Android)

Functional tests should be set up using Espresso to simulate user interaction, using the ViewMatchers class, Hamcrest matchers, or the onData method to locate view items.
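
A minimal sketch of an Espresso test, assuming a LoginActivity and hypothetical view IDs (none of these identifiers come from the OpenSRP code base):

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.action.ViewActions.typeText;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;

import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class LoginActivityTest {

    @Rule
    public ActivityTestRule<LoginActivity> activityRule =
            new ActivityTestRule<>(LoginActivity.class);

    @Test
    public void loginWithEmptyPasswordShowsValidationError() {
        // Locate views with ViewMatchers, interact via ViewActions,
        // and verify the outcome with ViewAssertions.
        onView(withId(R.id.login_user_name_edit_text)).perform(typeText("demo"));
        onView(withId(R.id.login_login_btn)).perform(click());
        onView(withId(R.id.login_error_text)).check(matches(isDisplayed()));
    }
}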

OpenSRP-server (Java, HTML)

Functional tests should be setup using Selenium to simulate user interaction.

For both server and client apps, if a click on a view results in the creation, updating, or deletion of records in the database, the respective test should verify this change as well. For example, a test for the login page should check that all the required views are visible and respond to clicks, either by displaying validation error messages or by navigating to a new screen, and should also validate that user data is saved in the shared preferences or in the local client/server database.

As a good practice, and for code reuse and maintainability, all view access code for a given page should be put in a page object, as sketched below.
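
A minimal sketch of a Selenium page object for a login page (the element IDs are hypothetical):

import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;

public class LoginPage {

    private final WebDriver driver;

    public LoginPage(WebDriver driver) {
        this.driver = driver;
    }

    // All view access for the login page lives here, so tests never
    // touch raw locators directly.
    public void loginAs(String userName, String password) {
        driver.findElement(By.id("userName")).sendKeys(userName);
        driver.findElement(By.id("password")).sendKeys(password);
        driver.findElement(By.id("loginButton")).click();
    }

    public String errorMessage() {
        return driver.findElement(By.id("errorMessage")).getText();
    }
}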

Continuous integration

OpenSRP uses Travis as its continuous integration server. On both the OpenSRP server and client, we have set up a .travis.yml file, which calls the build tool that runs the tests.

On the OpenSRP server, we use the Maven build tool to build the code and manage its dependencies. Therefore, in the .travis.yml file, we run the Maven tests as shown below:

- mvn test

Testing conventions

  1. To implement a test case, start by creating a public class; by convention, suffix the class name with Test or TestCase and prefix it with the name of the class being tested, for example, SignInServiceTest. Because we use annotations, the test class no longer needs to inherit from the JUnit TestCase class as it did in JUnit version 3. In Android, every Espresso test class usually targets an Activity, so the name should match the name of the targeted Activity followed by Test, e.g. LoginActivityTest.

  2. Sometimes a class may contain a large number of methods, each of which requires several tests. In this case, it is recommended to split the test class into multiple classes. For example, if the SignUp class contains a lot of methods, we may want to divide its tests into SignUpResetAccountsTest, SignUpRegisterUserTest, etc. Generally, you will be able to see which tests belong together, because they share a common test fixture.

  3. To create a test method, write a public method with no return value, no arguments, and a @Test annotation. By convention, the method name starts with the name of the method that is being tested, i.e.:

    1. Template

      1. @Test void methodNamePreconditionExpectedBehaviour()

    2. Example

      1. @Test void signInWithEmptyEmailFails()

    3. Note: Precondition and/or expected behaviour may not always be required if the test is clear enough without them.

  4. JUnit/TestNG runs methods annotated with @Before before each test method. Use @Before methods to initialize new instances of test objects stored in instance variables. These objects are available in each test method and are called fixtures. JUnit runs methods annotated with @After after each test method to clean up fixtures or other resources as necessary.

  5. Because some resources are expensive to initialize and manage, you can have JUnit call setup and teardown methods once per test run. Use the @BeforeClass annotation to call a method once, before JUnit runs any test method. Use the @AfterClass annotation to call a method once, after JUnit has run all test methods.

  6. Use methods in the JUnit Assert class to compare an expected value to an actual value. If an assertion passes, the test continues; if not, the method throws an Error, and the test is stopped and marked as failed (see the sketch after this list).
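
A short sketch pulling these conventions together (the SignInService class and its signIn method are hypothetical):

import static org.junit.Assert.assertFalse;

import org.junit.After;
import org.junit.Before;
import org.junit.Test;

public class SignInServiceTest {

    private SignInService signInService;

    @Before
    public void setUp() {
        // A fresh fixture is created before each test method.
        signInService = new SignInService();
    }

    @Test
    public void signInWithEmptyEmailFails() {
        assertFalse(signInService.signIn("", "password"));
    }

    @After
    public void tearDown() {
        signInService = null;
    }
}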

Testing best practices

  • At least one test class per new class

  • All tests should be granular, i.e. a single test should test for a single condition or outcome only. For example, it is not recommended to have if statements in tests; every condition in a method under test should have its own test method in the test class.

  • Every test must use Assert.assert___ methods to test output. DO NOT use println statements!

  • If committing new code, the tests for that code should be committed at the same time

  • If fixing a bug, write a failing test proving the bug, then fix the code to make the test pass.

  • Every unit test method should be well documented (javadoc), describing what is expected and what it operates on.

  • Every database-based test must be accompanied by a dbunit test dataset.

  • Always remember to test for exceptions for methods that throw Exception

  • For a complex object, you only want to verify the behavior of the object, not the dependencies the object relies on. To achieve this, create mock objects for the dependencies; e.g. when writing a unit test for a method that makes an expensive API call to other services, consider mocking the call to those services (see the sketch after this list).

  • Tests that manipulate db/file data or any other shared resource should reset the resources on exit

  • Tests shouldn’t have try/catch blocks; instead they should throw exceptions so that the tests report failures when an exception occurs. In the very special cases where a try/catch is required, add Assert.fail("") in the catch block.
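
A minimal sketch of mocking an expensive dependency, assuming Mockito is on the test classpath (the ClientService and ClientRepository classes are hypothetical):

import static org.junit.Assert.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

import org.junit.Test;

public class ClientServiceTest {

    @Test
    public void getClientNameReturnsNameFromRepository() {
        // Mock the dependency instead of hitting the real service/API.
        ClientRepository repository = mock(ClientRepository.class);
        when(repository.findNameById("abc-123")).thenReturn("Jane Doe");

        ClientService clientService = new ClientService(repository);
        assertEquals("Jane Doe", clientService.getClientName("abc-123"));
    }
}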

Resources


Milestones

OpenSRP milestones are two-week durations: they start each Wednesday morning and run until COB on Tuesday of the second week. Milestones are labeled with the OpenSRP version number, the milestone number, and the date of the last day of the milestone. E.g. V2 -1- 04 Aug 2015 means this milestone falls under the version 2 development tasks, it is the first milestone, and the milestone is due by COB on 04 August 2015 (a Tuesday).

The current milestone contains work expected to be completed within it. Later milestones show upcoming work, or work that may take longer than two weeks.

When to change an issue's milestone

  • If you run out of work in the current milestone, pull up an issue from the next milestone, based upon prioritization, and make a comment.

  • If you realize you cannot complete your assigned issues in the current milestone, bump an issue down, based upon prioritization, and make a comment.

  • If you think there is an error in the prioritization, e.g. dependencies/blockers, production errors, user request, make a comment and talk to a Project Manager.


Issues

We use GitHub issues to track development tasks, bugs and documentation.  All issues should be labeled as appropriate, assigned to someone, and given a milestone.  All discussion about an issue should take place (or be transferred to) comments in the issue. Details, plans, and questions on implementation should be put in comments as well.

Sizing issues

  • All issues should have a size label of:

    • Small (< 1 week)

    • Medium (~1 week)

    • Large (1 - 2 weeks)

    • Extra large (2+ weeks)

NB: Before assigning Large or Extra large labels, explore possibilities of splitting the issue into smaller, more manageable tasks.

  • If you think an issue is the wrong size, change the label and make a comment.

  • If the issue is taking longer than its size label indicates, change the label and make a comment that will help the team size issues correctly in future.

  • If you need help sizing an issue, ask someone.

  • Only create issues that can be closed.


Prioritization

Prioritization of tasks is determined by two factors:

  1. Milestone: issues in the current milestone should be completed before issues in later milestones

  2. The Priority labels: issues within the same milestone can be prioritized using the Low, Medium, and High Priority labels  

NB: There is a third factor that affects prioritization: specific directions communicated by a Project Manager. This communication should be immediately translated into a milestone and appropriate labels.

Blockers

If an issue depends on another open issue, feedback from a stakeholder, or discussion with the tech/science team, it is considered blocked.

  • Add the Blocked label and make a comment. If it is blocked by another issue, reference that issue in your comment.

  • If all blockers are removed, i.e. you close an issue that was blocking another issue, remove the Blocked label on the other issue and make a comment.

Errors and deploy blockers

  • Reported or newly found errors (bugs are errors) should get the Bug label.

  • Critical errors and any regressions (things that used to work but are now broken) need to be fixed before a deploy; these get the Blocks Deploy label.


Branches

We use GitHub Flow for our source control workflow. GitHub Flow is a lightweight, branch-based workflow that supports teams and projects where deployments are made regularly. If you are unfamiliar with GitHub Flow, please review its documentation carefully to become familiar with the steps.

Except for release branches, branches should be named after the issue or issues (refer to the Issues section above) they relate to, so that other developers can easily see what is currently being worked on. For example, on the OpenSRP client repo, if you are working on issue #166, your work should be in a corresponding branch named issue166. If there is no existing issue for what you want to work on, first create the issue, then the branch.

No one should be opening branches in personal or private repositories - all OpenSRP-related work should go in the OpenSRP source code repository.

Rule of thumb: Keep your branch updated with the latest master changes to avoid conflicts when merging back into the production branch.


Commits

First, be aware of ABC - Always Be Committing! Make frequent and small commits.

Second, the 7 rules of writing great commit messages:

  1. Separate subject from body with a blank line

  2. Limit the subject line to 50 characters

  3. Capitalize the subject line

  4. Do not end the subject line with a period

  5. Use the imperative mood in the subject line

  6. Wrap the body at 72 characters

  7. Use the body to explain what and why vs. how

Now some OpenSRP specifics:


Prefix all commits and pull requests with the initials of the person(s) who worked on it

When you see a commit or pull request, you should easily know who to ask to get more details about what's going on. To make this easy at a glance, prefix all commits and pull requests with the initials of the person or persons who worked on it, with multiple people delimited using a pipe (|). For example:

DC|RA: Migrate to gradle
MK: Remove live server calls from connector and web

Your initials should be the first characters of your first and last name; you can make an exception as long as it is 2-3 characters, consistent, easily attributable to you, and always the same 2-3 characters.


Include the issue number in the commit message; this enables GitHub to show the commit message on the issue’s page:

DC|RA Fixed Issue: Migrate to gradle #212


Closing issues via commit messages

To close an issue in the same repository, use one of the keywords: close, closes, closed, fix, fixes, fixed, resolve, resolves, resolved followed by a reference to the issue number in the commit message. For example, a commit message with fixes #45 will close issue 45 in that repository once the commit is merged into the default branch.


Rule of thumb:

  • Don’t commit broken code

  • Always commit at the end of the day before leaving the office

  • Don’t commit untested code

  • Don’t commit when the build is broken

  • Don’t go home after committing until the system builds and there are no broken features.

Examples of version control best practices:

https://blog.rainforestqa.com/2014-05-28-version-control-best-practices/

https://github.com/torvalds/linux/pull/17#issuecomment-5659933

Pull Requests (PRs)

Everyone should open pull requests early and aggressively to get others looking at the work and to get feedback. A good rule of thumb is to have one PR per issue. This leads to two types of pull requests:

  1. Pull requests that can be merged

  2. Pull requests that can not (or should not) be merged

The title should succinctly describe the issue it is closing (including the issue number) and the solution chosen. If it is the second type of PR and cannot be merged, the title of the pull request needs to be prefixed with [DO NOT MERGE], for example:

[DO NOT MERGE] 63 Remove live server calls from connector and web

For either type of PR, the description should follow the below format:

  • New Features - list the new features and/or functionalities that are being added

  • Changes - list any changes that were made to existing code

  • Fixes - list the issues that are being fixed with this PR

  • Known Issues - list any remaining known issues that are still not resolved

Someone other than the original author should review a PR prior to merging.  One should never merge one’s own PRs unless absolutely necessary.  All PRs should be merged within the same milestone they were created.

What to look out for before merging a pull request:

  • Tests to confirm that the code changes are fine (include the full test class package and name)

  • Well commented and formatted code

  • No conflicts

To effectively merge a pull request the following procedure should be followed:

Assume you want to merge a pull request for a branch called issue212:

  1. First, make sure you have the latest master code in your local repo.
    git checkout master
    git pull origin master

  2. Then, pull the changes to your local repo and switch to the pull request branch.
    git fetch origin
    git checkout -b issue212 origin/issue212

  3. Merge master into the pull request branch.
    git merge master

  4. Test the code for conflicts, build errors, and failing tests, and most importantly confirm that the issue has been fixed. At this point you could use the GitHub user interface to merge the pull request and skip step 5.

  5. Switch back to master and merge the changes from the pull request branch and push the changes.
    git checkout master
    git merge --no-ff issue212
    git push origin master

  6. Close the pull request.

  7. Delete the remote branch

    git push origin :issue212

Peer Reviews (Code Reviews)

Every time a feature is updated or a bug is fixed, a Peer review or Code review is required before the code/patch is merged to master.

The Developer is required to:

  • Create a PR (Pull Request) to the parent branch, usually master. However, it could be any other branch depending on the project setup, e.g. a develop branch
  • Assign 1 or 2 reviewers to the PR
  • Notify them. While GitHub does send notifications for the above actions, it is recommended to notify the assigned reviewers directly, especially depending on urgency

The Reviewer is required to: 

  • Ensure all the CI status checks set up on GitHub pass, e.g. build, Codacy, code coverage
  • Perform a manual review of the code. If anything requires an update, add a comment to the corresponding lines of the file within the GitHub PR
  • Check out the code on your local workstation, deploy it, and:
    • Test the new feature while referencing the corresponding spec/Github issue
    • Test any other related feature the update might affect
    • Test for any possible backward compatibility breaks

If all of the above pass, the reviewer approves the PR. Note that although GitHub does send notifications for the above actions, it is recommended to notify the requesting developer once you are done reviewing, in case the update/patch is urgent.

The developer requesting the PR then follows these steps:

  • Merge the PR code from the issue branch to master
  • Delete the issue branch

In the case that the update requires a release/snapshot, the developer is also required to create it.

NB: If it is a release (as opposed to a snapshot), create a Release on GitHub as described here: https://help.github.com/en/articles/creating-releases (uploading artefacts with the tags is optional), and also check out the OpenSRP guidelines here: How to create a release APK

Release management

The release tag for major releases should also have an accompanying text document with complete, clear release notes for the code released. This release document should also be shared with the other developers through the mailing list.
Also check out How to create a release APK










