
Project Conventions

The main project conventions are outlined below. They consist of conventions related to the development of analysis notebooks and API code.

1. Notebook Conventions

The notebook conventions aim to provide a coherent experience across all circuits and analysis types.

1.1. Reproducibility

The project is an R&D endeavour with continuously evolving requirements. In order to keep track of changes, both the API and the notebooks are versioned. Both versions are indicated at the top of each HTML report. This allows results to be reproduced even after the project changes.

1.2. Plotting

  • Scales are labeled with the signal name followed by a comma and the unit in square brackets, e.g., I_MEAS, [A].
  • If a reference signal is present, it is represented with a dashed line.
  • If the main current is present, its axis is on the left; the remaining signals are attached to the axis on the right. Their legends are located in the lower left and upper right corners, respectively.
  • The grid comes from the left axis.
  • The title contains the timestamp, circuit name, and signal name, which allows the signal to be queried again.
  • The plots assigned to the left scale have colors: blue (C0) and orange (C1). Plots presented on the right have colors green (C2) and red (C3).
  • Each plot has an individual time-synchronization mentioned explicitly in the description.
  • If an axis has a single signal, the color of its label matches the signal's color; otherwise, the label color is black. These conventions are illustrated in the sketch below.
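A minimal matplotlib sketch of the conventions above, using hypothetical signal names, values, and title metadata:

```python
# A sketch of the plotting conventions: main current on the left axis, auxiliary
# signal on the right, dashed reference, grid from the left axis, legends in the
# lower-left and upper-right corners. All data and names are hypothetical.
import matplotlib.pyplot as plt
import numpy as np

time = np.linspace(0, 10, 200)
i_meas = 11_000 * np.exp(-time / 5)        # main current, left axis (C0)
i_ref = 11_000 * np.exp(-time / 5.2)       # reference signal, dashed (C1)
u_res = np.gradient(i_meas, time) * 1e-3   # auxiliary signal, right axis (C2)

fig, ax_left = plt.subplots(figsize=(8, 5))
ax_right = ax_left.twinx()

ax_left.plot(time, i_meas, color="C0", label="I_MEAS")
ax_left.plot(time, i_ref, color="C1", linestyle="--", label="I_REF")
ax_right.plot(time, u_res, color="C2", label="U_RES")

ax_left.set_xlabel("time, [s]")
ax_left.set_ylabel("I_MEAS, [A]")               # two signals on this axis -> black label
ax_right.set_ylabel("U_RES, [V]", color="C2")   # single signal -> label matches its color
ax_left.grid(True)                              # the grid comes from the left axis
ax_left.legend(loc="lower left")
ax_right.legend(loc="upper right")
ax_left.set_title("2021-01-01 12:00:00.000, RB.A12, I_MEAS")
plt.show()
```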

1.3. Analysis

  • We consider standard analysis scenarios, i.e., ones in which all signals can be queried. If a signal is missing or corrupted, an analysis either raises a warning and continues, or raises an error and aborts; in the latter case, one can still continue with the following analysis cells.
  • It is recommended to execute the cells one after another. However, since the signals are queried prior to analysis, any order of execution is allowed. If an analysis cell is aborted (e.g., because I_MEAS is not present), the following cells may not be executable.
  • Each analysis has a cell with a description explaining its assumptions, criteria, etc.
  • An analysis code snippet should be compact and contain as little code as possible. This improves the user experience (less scrolling) and reduces the need to update the notebooks (changes are made in the API instead, which is transparent to the user).
  • Each function behaves statically, i.e., its execution does not alter its input arguments. As a consequence, a cell can be executed multiple times and generate the same output (see the sketch after this list). The only exception is the update of the MP3 quench table, which spans multiple cells in a notebook; nonetheless, re-executing a cell that sets fields in the MP3 table simply updates those fields again.
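A minimal sketch of this convention, using a hypothetical helper function and signal values (not part of the actual API):

```python
# A sketch of the "no side effects on inputs" convention; the function and
# signal values are hypothetical. Re-executing this cell yields the same output.
import pandas as pd


def add_current_in_kiloamps(i_meas_df: pd.DataFrame) -> pd.DataFrame:
    """Return a new dataframe with an extra column; the input is left untouched."""
    result_df = i_meas_df.copy()
    result_df["I_MEAS_kA"] = result_df["I_MEAS"] / 1e3
    return result_df


i_meas_df = pd.DataFrame({"I_MEAS": [11000.0, 10950.0, 10900.0]})
converted_df = add_current_in_kiloamps(i_meas_df)  # i_meas_df itself is unchanged
```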

2. Coding Conventions

The project follows several coding conventions. Furthermore, for certain tasks we employ standard design patterns. Last but not least, the project components undergo manual and automated testing.

2.1. Variable Naming

  • For the most part we follow the Python PEP 8 standard (https://www.python.org/dev/peps/pep-0008/).
  • Physical quantities are named in a LaTeX-like style: i_meas_rb, u_res_qds, etc.
    • suffix _df denotes a dataframe
    • suffix _dfs denotes a list of dataframes; _dfss a nested list of dataframes (quite rare)
  • Type hints are used to support development and reduce ambiguity.
  • Modest error checking (decorators handle missing data and NaN timestamps; if a signal cannot be queried, an empty dataframe is returned); see the sketch after this list.
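A minimal sketch of these conventions; the decorator and query function below are hypothetical, not the actual API:

```python
# A sketch of the naming, type-hint, and error-handling conventions.
# handle_missing_signal and query_i_meas_rb are hypothetical names.
from functools import wraps
from typing import Callable, List

import pandas as pd


def handle_missing_signal(query_func: Callable[..., pd.DataFrame]) -> Callable[..., pd.DataFrame]:
    """Return an empty dataframe instead of raising when a signal cannot be queried."""
    @wraps(query_func)
    def wrapper(*args, **kwargs) -> pd.DataFrame:
        try:
            return query_func(*args, **kwargs)
        except Exception:
            return pd.DataFrame()
    return wrapper


@handle_missing_signal
def query_i_meas_rb(circuit_name: str, timestamp: int) -> pd.DataFrame:
    """Placeholder query simulating a signal missing from the logging database."""
    raise ValueError(f"I_MEAS not found for {circuit_name} at {timestamp}")


i_meas_rb_df: pd.DataFrame = query_i_meas_rb("RB.A12", 1544622149598000000)  # empty dataframe
i_meas_rb_dfs: List[pd.DataFrame] = [i_meas_rb_df]  # suffix _dfs: a list of dataframes
```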

2.2. Design Patterns

The main design patterns used in the project are:

  • Factory – database query (read/write)
  • Builder – pyeDSL
  • Fluent interface – pyeDSL (see the sketch after this list)
  • Mediator – GUI
  • Strategy – time conversion
  • Closure – handling of missing timestamps/signals
  • Mixin – multiple inheritance for analysis modules
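As an illustration of the Builder and Fluent interface patterns, here is a minimal sketch of a query builder in the spirit of pyeDSL; the class and method names are hypothetical and do not reflect the actual pyeDSL API:

```python
# A hypothetical fluent query builder illustrating the Builder / Fluent
# interface patterns only; it is not the actual pyeDSL API.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class SignalQuery:
    circuit_type: Optional[str] = None
    timestamp: Optional[int] = None
    signals: List[str] = field(default_factory=list)


class SignalQueryBuilder:
    """Builds a SignalQuery step by step; every method returns self (fluent interface)."""

    def __init__(self) -> None:
        self._query = SignalQuery()

    def with_circuit_type(self, circuit_type: str) -> "SignalQueryBuilder":
        self._query.circuit_type = circuit_type
        return self

    def with_timestamp(self, timestamp: int) -> "SignalQueryBuilder":
        self._query.timestamp = timestamp
        return self

    def with_signal(self, *signal_names: str) -> "SignalQueryBuilder":
        self._query.signals.extend(signal_names)
        return self

    def build(self) -> SignalQuery:
        return self._query


query = (SignalQueryBuilder()
         .with_circuit_type("RB")
         .with_timestamp(1544622149598000000)
         .with_signal("I_MEAS", "I_REF")
         .build())
```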

2.3. Testing

Given the importance of the HWC tests and signal monitoring applications, we perform thorough testing of the API and notebooks:

  • Manually
    • GUI (hard to automate)
    • Integration tests for notebooks (time consuming)
  • Automatically
    • Unit and integration tests (covering most API calls made from notebooks, for standard scenarios and for each reported error)
      • arrange, act, assert pattern (see the sketch below)
      • ~75% coverage, ~1000 tests
    • Tests of examples in documentation
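A minimal sketch of the arrange, act, assert pattern; the function under test is a local placeholder, not part of the actual API:

```python
# A hypothetical unit test illustrating the arrange, act, assert pattern.
import pandas as pd


def convert_amps_to_kiloamps(i_meas_df: pd.DataFrame) -> pd.DataFrame:
    """Placeholder function under test: adds a kA column without altering the input."""
    result_df = i_meas_df.copy()
    result_df["I_MEAS_kA"] = result_df["I_MEAS"] / 1e3
    return result_df


def test_convert_amps_to_kiloamps():
    # arrange
    i_meas_df = pd.DataFrame({"I_MEAS": [11000.0, 5500.0]})

    # act
    converted_df = convert_amps_to_kiloamps(i_meas_df)

    # assert
    assert converted_df["I_MEAS_kA"].tolist() == [11.0, 5.5]
    assert "I_MEAS_kA" not in i_meas_df.columns  # input left untouched
```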