Most Powerful Open Source ERP

ERP5 Guideline Commit Best Practice


Introduction

This guideline summarizes conventions covering testing and committing code to the ERP5 codebase, but it also applies to publishing quality code in general.

Testing

ERP5 is developed in a test-driven approach. This means that before the first line of code is written, a failing test should be made; then the code to make this test pass is created. Fast forward to complete ERP5 development and large-scale projects, and there will be the need to automate testing to ensure that any code submitted does not compromise the existing codebase. Having tests is very important, because tests save large amounts of development time. They allow checking parts of the code quickly, automating regression tests and doing critical refactoring of code. Without tests, it is impossible to reach a good quality of code and services.

Using Test Suites And Automated Tests

A large ERP5 project uses "test suite" documents accessible via the Test Suite Module to run automated tests within ERP5 and output results to the Test Result Module. Inside an ERP5 repository:

/tests/__init__.py
/product/*
/bt5/*

tests/__init__.py defines a test suite with a list of tests to be executed in a project-specific manner (the ERP5 repository itself has a class "ERP5" for unit tests and a "PERF" class for performance tests).
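
As a hypothetical sketch, such a tests/__init__.py could look like the following. The real file subclasses ERP5TypeTestSuite (mentioned further below); since that package is not assumed here, a minimal stand-in base class is defined inline, and the class and test names are invented:

```python
class TestSuite(object):           # stand-in for ERP5TypeTestSuite
    def getTestList(self):
        raise NotImplementedError

class ExampleProject(TestSuite):
    """Selects the tests to run, in the style of the "ERP5" suite class."""
    def getTestList(self):
        # Project-specific list of unit tests to execute.
        return ['testExampleModule', 'testExampleWorkflow']
```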

Test Suite Configuration

Several parameters can be defined on a test suite level:

  • Title: must be unique and is used to retrieve results in Test Result Module.
  • Test Suite: name of suite, like "ERP5" or "PERF".
  • Additional bt5 path list: list of bt5 paths which are not part of the software release of ERP5 (for example, custom bt5s that would otherwise be missing).
  • Project Title: link to associated project.
  • Distributor: Should be ERP5-Projects, used to select different clouds of testnodes.
  • Priority: Number of cores required. Be reasonable. 15 cores for small projects makes no sense and pollutes available infrastructure.

Once a test suite is set, you can add test suite repository objects for every repository relevant to this test (usually only one repository is needed), specifying the following parameters:

  • Url: url of the repository; it may contain a password for private repositories (for a private repository, you could use the nxdtestbot:opah9Fo9Vi username and password).
  • Branch: which branch of this repository should be used.
  • Buildout section id: what is the section name of the software to override (the section in the software getting this repository).
  • Profile Path: path, in first given repository, to a buildout configuration file describing the environment to test.
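
For illustration only, the repository parameters above could be filled in like this (all values are invented):

```python
# Invented example values for the test suite repository parameters above.
repository_config = {
    'url': 'https://lab.example.com/project/erp5-project.git',  # hypothetical
    'branch': 'master',
    'buildout_section_id': 'erp5-project',  # section overridden in the software
    'profile_path': 'software/test/software.cfg',  # buildout profile to test
}
```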

The automated test suite will check out every repository in vcs_repository_list, and as soon as there is a commit in one of them, the list of revisions of all these repositories will be given to Nexedi ERP5. Nexedi ERP5 then checks whether other automated test suites are running on other computers and will either create a new test result or ask the current node to synchronize.

Once the testnode knows that it must run tests, it first runs slapos commands in order to update/create the software and to create a new instance (so we have a software and an instance inside the erp5testnode partition). Once the software is built and the instance is created, the runTestSuite command is launched.

In order to define where email notifications will be sent, please set the "Stakeholder" field on the project. If you are using a mailing list, you can create an organisation representing the mailing list (that is, a group of persons).

Selenium UI Tests

UI tests are run with Firefox inside a virtual X screen provided by Xvfb. As these tests simulate user interaction with the system, always make sure that node-quantity=1. If a developer wants to see what Firefox is doing, the following commands can be used on the machine running the tests (or via ssh tunneling if appropriate):

x11vnc -display :123
vncviewer localhost:5900
If the tests run on a remote machine, an ssh tunnel can be used as follows:

ssh -t -L 5900:localhost:5900 user@machine_ip-474 'x11vnc -localhost -display :123'
vncviewer localhost:5900

Live tests and production environment

Live tests are good candidates for checking that a production server is running correctly. The advice above can help, but there are risks: data entered by tests may be shown to users, messages could be wrongly sent to external software (like a bank money transfer!), and data entered by users could be altered.

We will publish more information here once we have more experience with it.

Testing infrastructure

Architecture

Unit tests are run by launching one or several nodes, each running one or more tests in parallel. All nodes are synchronized through Nexedi ERP5, which is in charge of distributing the work and making sure every node is testing the same revision.

Each node is installed with the SlapOS tools; soon they will even be managed through the SlapOS Master (Vifib). Code is handled by git, and nodes are in charge of checking for new commits; when there is a new commit, they ask Nexedi ERP5 whether there is work to do.

The trick is that some projects may depend on several repositories. You can check below how Nexedi ERP5 handles synchronization.

Nexedi ERP5 Test Suite Distribution

Test nodes do not know in advance which project they are going to work on. Therefore every test node is configured with the url of a distributor. The test node calls "startTestSuite" on the distributor and gets all the parameters needed to work on one or many projects.

The first time a test node calls startTestSuite, Nexedi ERP5 looks up whether this test node already exists. If not, it is created in the test node module.

From time to time, an alarm (task_distributor_alarm_optimize) looks at all defined test suites and available test nodes and distributes the work. This alarm avoids moving a test suite from one test node to another uselessly. At the same time, it checks for test nodes that seem dead (10 hours without sending a message) and invalidates them, so that test suites allocated to a dead test node are moved to another test node automatically.

Nexedi ERP5 Task API

Nexedi ERP5 provides an API to start and stop unit tests. This API also allows reporting failures, results (tests passed or failed) and status, and knowing whether there is still ongoing work. The API is available here.

Any test-running tool reporting results to Nexedi ERP5 must use this API. Soon a library will exist to make using it easy. The API handles everything from simple cases (one unit test at a time on a single server) to complex ones (a unit test running on several servers, with a different number of cores on each). For now, working examples using this library are available in runTestSuite and ERP5TypeTestSuite.
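
As a hedged sketch of the call shape only: the method name startTestSuite comes from this document, but the StubDistributor class and its return value are invented for illustration; the real API lives in Nexedi ERP5.

```python
class StubDistributor(object):
    def startTestSuite(self, test_node_title):
        # The real distributor registers the node if needed and returns the
        # configuration of every test suite allocated to it.
        return [{'test_suite_title': 'ERP5', 'vcs_repository_list': []}]

config_list = StubDistributor().startTestSuite('COMP533-3Nodes-ERP5PROJECT1')
```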

Test Nodes

Test nodes are created on vifib.net using the nexed_development_service account. Their configuration is quite simple; please see existing examples if you wish to create a new test node. A typical configuration is:

<?xml version='1.0' encoding='utf-8'?>
<instance>
  <parameter id="test-node-title">COMP533-3Nodes-ERP5PROJECT1</parameter>
  <parameter id="node-quantity">3</parameter>
  <parameter id="test-suite-master-url">https://webbuildbot:a2fgt5e1@www.tiolive.com/nexedi/portal_task_distribution/erp5_project</parameter>
</instance>

Parameters:

  • test-node-title: title of the test node; must be unique and should give an idea of the computer used.
  • node-quantity: how many tests to run in parallel.
  • test-suite-master-url: url of the distributor; make sure to select the right one depending on which "cloud" you want your test node to belong to (erp5_project, erp5_performance, erp5_scalability?).

For a 4-core server (8 with hyperthreading), it is good to create only 2 test nodes with a node-quantity of 3, so that at most 6 threads are used. Having only 2 test nodes also matters for disk space: with 6 testnodes of 1 core each, much more software must be installed than with only 2 nodes. So, to save disk space, it is better not to have too many test nodes.
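
The sizing above is simple arithmetic:

```python
# Sizing check for the recommendation above: 2 test nodes with
# node-quantity=3 use at most 6 threads on an 8-thread server.
cores = 8                 # 4 physical cores, 8 threads with hyperthreading
test_nodes = 2
node_quantity = 3         # parallel tests per node
threads_used = test_nodes * node_quantity
assert threads_used <= cores   # 6 <= 8: some headroom is left
```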

Nexedi ERP5 synchronization

Let's say we have two testing node daemons (A and B) running on two different computers. Each daemon checks out or updates the repositories. Since A and B can run several minutes or even more apart, Nexedi ERP5 needs to make sure that both A and B test the same revision. Testnode A (assuming no test has already started) will therefore:

  • Check out/update repository 1 and get revision x; check out/update repository 2 and get revision y.
  • Ask Nexedi ERP5 to start a unit test for a particular test_suite title with the following pairs of repository and revision: [('repository 1', x), ('repository 2', y)].
  • Nexedi ERP5 then checks if a test is already running; if not, it creates a new one and returns the pairs to test. Here we assume there was no test, so it returns [('repository 1', x), ('repository 2', y)].
  • runTestSuite is then launched for [('repository 1', x), ('repository 2', y)].

Testnode B, running a little after testnode A, will then:

  • Check out/update repository 1 and get revision X; check out/update repository 2 and get revision Y.
  • Ask Nexedi ERP5 to start a unit test for a particular test_suite title with the following pairs of repository and revision: [('repository 1', X), ('repository 2', Y)].
  • Nexedi ERP5 checks if a test is already running, sees there is one, and returns the earlier revisions x and y: [('repository 1', x), ('repository 2', y)].
  • runTestSuite then resets the git repositories and launches the test for [('repository 1', x), ('repository 2', y)].

This way we are sure that all computers running the same test suite are synchronized.
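
The steps above can be sketched in a self-contained way: the first node to ask fixes the (repository, revision) pairs, and later nodes get those same pairs back as long as a test is still running. The class and method names below are illustrative, not the real ERP5 API.

```python
class Distributor(object):
    def __init__(self):
        self.running = None

    def start_unit_test(self, revision_list):
        if self.running is None:     # no test running: record new revisions
            self.running = revision_list
        return self.running          # everyone tests the same revisions

d = Distributor()
a = d.start_unit_test([('repository 1', 'x'), ('repository 2', 'y')])  # node A
b = d.start_unit_test([('repository 1', 'X'), ('repository 2', 'Y')])  # node B
# b == a: node B resets its repositories to node A's revisions
```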

Commits

Checklist

The following applies only to the commit itself. It is expected that the changes to be committed have been tested, which means:

  • You checked that the system is usable after your change (the instance is able to start, rendering is working, etc.).
  • You used specific tools to validate files (for Python, the minimum is to run pyflakes/pylint, EVEN after fixing a typo).
  • You read your patch before committing.
  • You accept responsibility for the code you commit and its effects.
  • You never commit any pdb-related code (import, call to set_trace, ...).
  • You follow the rules stated in Subversion Community Guide.

The Case of Business Templates

Since Business Templates consist of automatically generated files, the developer's control over their content is limited: things like indentation changes happen independently of the developer's will. Some of the following rules cannot be strictly applied to Business Templates. Still, the developer must stick to them as much as possible, for example by committing functional changes separately from file-format changes (indentation, ...).

Recommendation


Create Reasonable Log Messages

The code one commits will end up in multiple contexts:

  • Developer testing a site
    He can accept seeing many logs, but often he is only interested in his own.
  • Unit test automatons
    Logging can be very useful to track down a problem after the test is over and the problem has disappeared along with the site which generated it.
  • Production environment
    Logging has a cost, namely slowing down Zope with extra disk I/O. Those I/O are done synchronously and so cost much more than any other kind of disk access.
  • Newcomer trying it out
    Anybody starting an unknown service for the first time can get lost in the heap of log messages it may generate. So we must make it as easy as possible for a newcomer to understand whether the system is working at all, which is often harder to tell than developers imagine.

Keep these "audiences" in mind when adding a log message.


Read Patch Before Commit

Read your patch before committing. You are responsible for the code you commit and its effects.

Rule


Live Tests Should Not Be Run On Production

(This rule used to be "all tests must be live tests".) Live tests are good candidates for checking that a production server is running correctly. The advice above can help, but there are risks: data entered by tests may be shown to users, messages could be wrongly sent to external software (like a bank money transfer!), and data entered by users could be altered.

So rules and good usage practices must be written to do live tests on a production environment in a safe way. We will publish more information here once we have more experience with it.


Modify Correct Changelog On Business Template Commits

There are two changelogs to be aware of.

One is a property of the Business Template. It should contain only information about major feature changes, like the addition of a workflow, a module, an action on a portal type, etc.

The other is asked for at commit time. It must contain the purpose of the commit itself: refactoring, feature change, bug fix, ...


Log Clear Enough To Disable Easily

The log message must be clear enough to track it down in the code (this applies both to products and to the ZODB) in order to disable it.


Whitespace In A Separate Commit

We are explicit about whitespace and blank lines. If the purpose of the patch is not to remove/add an empty line, then it must not contain such a change. The same goes for Python-meaningless spaces. A separate commit must be made when fixing coding style, especially in Python, where extra spaces can cause bugs. Although everyone reading this page presumably has decent Python knowledge, it is better to be safe than sorry.
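
A small illustration of why Python-meaningful whitespace deserves its own commit: the two functions below differ only by one level of indentation on one line, yet behave differently.

```python
def total_v1(items):
    total = 0
    for n in items:
        total += n
    return total              # returns the full sum

def total_v2(items):
    total = 0
    for n in items:
        total += n
        return total          # one indent more: returns after the first item

assert total_v1([1, 2, 3]) == 6
assert total_v2([1, 2, 3]) == 1
```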


End A File With Empty Line

Always have a newline at the end of files, so that a unified diff does not show a spurious change on the last line. Some tools also misbehave if the last line of a text file is not terminated with a newline or carriage return: they ignore that line as if it were terminated with ^Z (EOF). A final newline also shows that the file was not truncated.
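
A minimal check for this rule, as a sketch (the helper name is invented):

```python
import os
import tempfile

def ends_with_newline(path):
    """Return True if the file is empty or its last byte is a newline."""
    with open(path, 'rb') as f:
        data = f.read()
    return data == b'' or data.endswith(b'\n')

# Demonstration on a temporary file that correctly ends with a newline.
fd, path = tempfile.mkstemp()
os.write(fd, b'last line\n')
os.close(fd)
ok = ends_with_newline(path)
os.unlink(path)
```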


Logs Must Contain Their Cause

The log must be complete enough that you can get ideas about its cause just by looking at it in a unit test report.


Test Changes Before Committing

Test your changes, with test runs and/or manually. Check that the system is usable after the change (the instance is able to start, rendering is working, etc.). You can use specific tools on files (for Python, the minimum is to run pyflakes/pylint, EVEN after fixing a typo).


Commits Are Purpose Based

A change in the code comes from a motivation: fix a bug, add a functionality, fix typos, add comments, ... This purpose must be clearly stated in the changelog message, and it must be the purpose itself, not how it was implemented. This way, one provides others with a way to check whether a given behaviour of the modified code is desired or not.


Commits Are Atomic

Provided such a motivation exists, all the modifications made on the working copy to fulfill it must be committed at the same time. This prevents the repository from being broken in the middle of a split commit.


Use Log Levels

Use log levels wisely, so that any user can understand whether a message indicates an important failure or just some execution-progress symptom.
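
The ERP5/Zope severity levels are listed under "Never Log Level Info Or More" below; the same idea with the standard Python logging module (logger name and messages are illustrative) looks like this:

```python
import logging

# DEBUG for execution-progress symptoms, ERROR for important failures.
log = logging.getLogger('erp5.example')    # logger name is illustrative
log.debug('indexing document %s', 'foo')   # progress symptom: low level
log.error('failed to index %s', 'foo')     # important failure: high level
```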

Crime


Never Create Interdependent Test Methods

When unit tests are interdependent, it is hard to identify which unit is really broken. Indeed, if test_Y needs test_X, then when test_X fails, test_Y also fails, even if the unit tested by test_Y might still work. Moreover, this way you cannot run a single test at once, and you will thus lose much time either running all tests or finding which tests must run before this one.

There are many ways to avoid this practice; see the good example below, with some shared code that creates the needed objects.


Bad Example:

def test_X(self):
  foo = self.getFooModule().newContent(id='foo')
  # Here come some assertions for functionality A

def test_Y(self):
  foo = self.getFooModule().getObject('foo')
  # Here come some assertions for functionality B

Good Example:

def createFoo(self):
  return self.getFooModule().newContent()

def test_X(self):
  foo = self.createFoo()
  # Here come some assertions for functionality A

def test_Y(self):
  foo = self.createFoo()
  # Here come some assertions for functionality B

Never Commit Debugger Code

Never commit any Python debugger (pdb) related code, such as an import or a call to set_trace.


Never Log Level Info Or More In Execution Path

There should be no "trivial" logs in the normal execution path (see Logging in Python and EventLogger).

Existing Log Levels are:


def severity_string(severity, mapping={
    -300: 'TRACE',
    -200: 'DEBUG',
    -100: 'BLATHER',
       0: 'INFO',
     100: 'PROBLEM',
     200: 'ERROR',
     300: 'PANIC',
    }):
    """Convert a severity code to a string."""
    s = mapping.get(int(severity), '')
    return "%s(%s)" % (s, severity)
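For instance, applied to the mapping above (the function is repeated here so the snippet is self-contained):

```python
def severity_string(severity, mapping={
    -300: 'TRACE', -200: 'DEBUG', -100: 'BLATHER', 0: 'INFO',
     100: 'PROBLEM', 200: 'ERROR', 300: 'PANIC'}):
    """Convert a severity code to a string."""
    s = mapping.get(int(severity), '')
    return "%s(%s)" % (s, severity)

label = severity_string(100)   # 'PROBLEM(100)'
```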