2.10. Test script#

2.10.1. Overview#

This script provides a workflow for the CROCO preprocessing toolchain. It automates:

  • Preparing required datasets

  • Building the conda environment

  • Compiling Fortran modules

  • Running any combination of preprocessing phases

  • Producing a JUnit XML test report

  • Displaying a colored CLI summary of all results

This script is designed for:

  • GitLab CI pipelines

  • Local debugging of full preprocessing workflows

It ensures consistent, reproducible runs with structured reporting.

2.10.2. Main Features#

  • Data Preparation: Download bathymetry, tide data, shapefiles, and Mercator data required for CROCO examples.

  • Environment Management: Creation of the croco_pyenv environment from env.yml or use of an existing environment.

  • Fortran Module Compilation: Build f2py modules located in ../Modules/tools_fort_routines.

  • Phase Execution: Run specific CROCO preprocessing phases (grid generation, ERA5 download, HYCOM, Mercator, GLOFAS, initial/boundary files, tides, rivers).

  • Case Management: Read a JSON configuration file describing processing phases for one or more test cases.

  • JUnit Reporting: Generates a machine-readable XML report for CI systems.

  • Colored Logging: Clear, human-readable terminal output.

2.10.3. Usage#

test.py [-h] [--cases CASES] [--env_dir ENV_DIR]
            [--modes MODE [MODE ...]]
            [--cdsapikey CDSAPIKEY]
            [--cmems_user CMEMS_USER] [--cmems_pass CMEMS_PASS]
            [--skip_download] [--report REPORT] [--continue_on_error]

Command-Line Arguments:

--cases CASES_FILE      Path to a JSON file defining test cases
--env_dir ENV_DIR       Path to an existing conda environment or
                        to the env that will be built
--modes MODES           Script modes: prepare_data, build, run, all.
                        Default is 'all' (equivalent to 'prepare_data build run').
--cdsapikey KEY         CDS API key for ERA5 and GLOFAS downloads
--cmems_user USER       CMEMS username
--cmems_pass PASS       CMEMS password
--skip_download         Skip all download phases during run mode
--report FILE           Output JUnit XML file (default: junit.xml)
--continue_on_error     Continue executing remaining phases
                        even after failure

The script accepts the following modes:

  • prepare_data – Download and organize required datasets.

  • build – Build the conda environment and compile Fortran modules.

  • run – Run preprocessing phases described in a JSON cases file.

  • all – Equivalent to: prepare_data build run.

To run a case defined in ``benguela.json``, you can use the command:

./test.py --modes all --cases benguela.json

Or, if you already have the needed data and the environment activated:

./test.py --modes run --cases benguela.json --env_dir $CONDA_PREFIX

Or, if you have already done the build but do not have access to the needed data:

./test.py --modes prepare_data run --cases benguela.json --env_dir $CONDA_PREFIX

The cases JSON file maps case names to a list of entries defining:

  • The path to a .ini file

  • The list of phases to execute with this file as input

Example:

{
    "benguela": [
        {
            "file": "input_benguela.ini",
            "phases_list": ["all"]
        }
    ]
}

Supported meta-phase keywords:

  • all – All phases

  • all_run – Only non-download phases (make_*.py scripts)

  • all_dwl – Only download phases (download_*.py scripts)
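A minimal sketch of how these meta-phase keywords could expand into concrete phase lists (the helper below is illustrative, not the script's internal API):

```python
# Canonical phase order, as listed in the "Order of the phases" section.
PHASES = [
    "make_grid",
    "download_era5",
    "download_hycom",
    "download_mercator",
    "download_glofas_river",
    "make_ini",
    "make_bry",
    "make_tides",
    "make_rivers",
]

def expand_phases(requested):
    """Expand 'all', 'all_run', 'all_dwl' into explicit phase names."""
    if "all" in requested:
        return list(PHASES)
    if "all_dwl" in requested:
        return [p for p in PHASES if p.startswith("download_")]
    if "all_run" in requested:
        return [p for p in PHASES if not p.startswith("download_")]
    # Explicit phase lists are kept, but reordered canonically.
    return [p for p in PHASES if p in requested]
```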

Depending on how you have organized your configuration (everything in one file, as in Example/benguela, or one file per phase, as in Example/benguela_multifiles), you will have to specify which phases to run with each file.
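With a one-file-per-phase layout, the same case could for instance be split across several entries (the file names below are illustrative):

```json
{
    "benguela": [
        {
            "file": "input_grid.ini",
            "phases_list": ["make_grid"]
        },
        {
            "file": "input_ini.ini",
            "phases_list": ["make_ini"]
        }
    ]
}
```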

If you have already downloaded your data, you can skip the download phases with the --skip_download option. This is useful when you only want to update a parameter in your .ini files, for example:

./test.py --modes run --cases benguela.json --env_dir $CONDA_PREFIX --skip_download

2.10.4. Order of the phases#

Phases are executed in a canonical order because many of the scripts take the grid file as input: make_grid runs first, then the download phases, then the remaining processing phases.

make_grid
download_era5
download_hycom
download_mercator
download_glofas_river
make_ini
make_bry
make_tides
make_rivers

Each phase triggers a Python script located in the parent directory (e.g. make_grid.py, download_era5.py).
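The phase-to-script mapping can be sketched like this (the helper names and the phase scripts' exact command line are assumptions for illustration, not the script's actual code):

```python
import subprocess
from pathlib import Path

def phase_script(phase, parent_dir=".."):
    """Map a phase name to the script it triggers,
    e.g. make_grid -> ../make_grid.py."""
    return Path(parent_dir) / f"{phase}.py"

def run_phase(phase, ini_file, parent_dir=".."):
    """Execute one phase script with the case's .ini file.
    Passing the .ini file as a positional argument is an assumption."""
    cmd = ["python", str(phase_script(phase, parent_dir)), ini_file]
    return subprocess.run(cmd).returncode == 0
```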

2.10.5. Environment and Module Building#

If the build phase is requested, the script automatically creates a conda environment from env.yml (provided in the parent directory) and places it in --env_dir if specified, or in the default conda envs directory otherwise.

Then the script compiles the Fortran modules by running make through conda run. After compilation, the compiled shared objects are checked to verify that the build succeeded.
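The post-compilation check described above can be sketched as follows (the helper name and glob pattern are illustrative, not the script's actual code):

```python
from pathlib import Path

def check_build(modules_dir="../Modules/tools_fort_routines"):
    """Return the compiled shared objects produced by f2py;
    an empty list means the build failed."""
    return sorted(Path(modules_dir).glob("*.so"))
```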

If the build phase is not requested, the script:

  • locates an existing croco_pyenv if --env_dir is not provided

  • uses --env_dir if provided.
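The environment-resolution logic above can be sketched as follows (the default envs location is an assumption; the real script may query conda directly):

```python
from pathlib import Path

def resolve_env(env_dir=None, envs_root="~/miniconda3/envs", name="croco_pyenv"):
    """Return the conda environment path to use: --env_dir when given,
    otherwise an existing croco_pyenv under the envs directory."""
    if env_dir:
        return Path(env_dir)
    candidate = Path(envs_root).expanduser() / name
    if candidate.is_dir():
        return candidate
    raise FileNotFoundError(f"no '{name}' environment found under {envs_root}")
```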

2.10.6. Data Preparation#

prepare_data creates necessary directory structures and downloads:

  • ETOPO2 bathymetry

  • TPXO7 tidal constituents

  • GSHHG shapefiles

  • Mercator example data for 2013/01

Downloaded archives are unpacked, and symbolic links are created when required.
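The download-and-unpack pattern behind prepare_data can be sketched like this (the helper names are illustrative, and the real dataset URLs are not shown here):

```python
import tarfile
import urllib.request
from pathlib import Path

def download(url, dest_dir):
    """Fetch one archive into dest_dir (requires network access)."""
    dest = Path(dest_dir)
    dest.mkdir(parents=True, exist_ok=True)
    target = dest / Path(url).name
    urllib.request.urlretrieve(url, target)
    return target

def unpack(archive, dest_dir):
    """Unpack a downloaded archive and return the extracted member names."""
    with tarfile.open(archive) as tf:
        tf.extractall(dest_dir)
        return tf.getnames()
```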

2.10.7. JUnit Reporting#

At the end of execution, a JUnit XML document is generated (default: junit.xml). Each test case/phase records:

  • Status (passed / failed)

  • Duration

  • Failure message (when applicable)
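A minimal sketch of producing such a report with the standard library (element and attribute names follow the common JUnit convention; the script's exact output may differ):

```python
import xml.etree.ElementTree as ET

def write_junit(results, path="junit.xml"):
    """Write a JUnit XML report.

    results: iterable of (case, phase, status, seconds, message) tuples,
    matching the fields recorded per test case/phase.
    """
    results = list(results)
    suite = ET.Element("testsuite", name="croco_preprocessing",
                       tests=str(len(results)))
    for case, phase, status, seconds, message in results:
        tc = ET.SubElement(suite, "testcase", classname=case,
                           name=phase, time=f"{seconds:.2f}")
        if status == "failed":
            ET.SubElement(tc, "failure", message=message or "")
    ET.ElementTree(suite).write(path, encoding="unicode")
```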

2.10.8. Logging and Summary#

All steps are logged both to:

  • Terminal (with colorized output)

  • test.log (detailed log)

A final summary is printed:

CASE                     PHASE               STATUS
-------------------------------------------------------
benguela                 make_grid           passed
benguela                 make_ini            failed