Testing#

The testing module provides tools for testing the code; these are used in both integration tests and unit tests.

assertions#

Functions asserting that certain conditions are met (used, e.g., in integration tests).

testing.assertions.assert_expected_output(file, expected_output)[source]#

Assert that the expected output is present in the sim_telarray file.

Parameters:
file: Path

Path to the sim_telarray file.

expected_output: dict

Expected output values.
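
A minimal sketch of how this assertion might be used in a test; the import path, file name, and expected-output keys are illustrative assumptions:

    from pathlib import Path

    from testing import assertions  # import path as documented; adjust to the package layout

    # Hypothetical sim_telarray output file and expected values.
    assertions.assert_expected_output(
        file=Path("simtel_output/run000001.simtel.zst"),
        expected_output={"trigger_rate": 350.0},  # key and value are illustrative
    )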

testing.assertions.assert_file_type(file_type, file_name)[source]#

Assert that the file is of the given type.

Parameters:
file_type: str

File type (json, yaml).

file_name: str

File name.
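
For example (the import path and file names are illustrative):

    from testing import assertions  # adjust to the package layout

    assertions.assert_file_type("json", "telescope_positions.json")
    assertions.assert_file_type("yaml", "config.yaml")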

testing.assertions.assert_n_showers_and_energy_range(file)[source]#

Assert the number of showers and the energy range.

The number of showers must be consistent with the requested number (within a 1% tolerance), and the simulated energies must lie within the configured energy range.

Parameters:
file: Path

Path to the sim_telarray file.
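
A short usage sketch, assuming an illustrative file path:

    from pathlib import Path

    from testing import assertions  # adjust to the package layout

    # Fails if the shower count deviates by more than 1% from the requested
    # number, or if simulated energies fall outside the configured range.
    assertions.assert_n_showers_and_energy_range(
        Path("simtel_output/run000001.simtel.zst")
    )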

testing.assertions.check_output_from_sim_telarray(file, expected_output)[source]#

Check that the sim_telarray simulation result is reasonable and matches the expected output.

Parameters:
file: Path

Path to the sim_telarray file.

expected_output: dict

Expected output values.

Raises:
ValueError

If the file is not a zstd compressed file.
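
A usage sketch; the file name and expected-output keys are illustrative:

    from pathlib import Path

    from testing import assertions  # adjust to the package layout

    # The file must be zstd-compressed; otherwise ValueError is raised.
    assertions.check_output_from_sim_telarray(
        Path("simtel_output/run000001.simtel.zst"),
        expected_output={"n_triggered": 500},  # keys are illustrative
    )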

configuration#

Integration test configuration.

exception testing.configuration.VersionError[source]#

Raised if the requested model version is not supported.

testing.configuration.configure(config, tmp_test_directory, request)[source]#

Prepare configuration and command for integration tests.

Parameters:
config: dict

Configuration dictionary.

tmp_test_directory: str

Temporary test directory (from pytest fixture).

request: request

Pytest request object (from pytest fixture).

Returns:
str: command to run the application test.
str: config file model version.
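
A sketch of how this might be called from a pytest test, assuming tmp_test_directory and request are provided by pytest fixtures; the configuration content is illustrative:

    from testing import configuration  # adjust to the package layout

    def test_my_application(tmp_test_directory, request):
        config = {"application": "my-application"}  # contents are illustrative
        command, model_version = configuration.configure(
            config, str(tmp_test_directory), request
        )
        # command is then run and its output validated.
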
testing.configuration.create_tmp_output_path(tmp_test_directory, config)[source]#

Create temporary output path.

Parameters:
tmp_test_directory: str

Temporary directory.

config: dict

Configuration dictionary.

Returns:
str: path to the temporary output directory.
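
A short sketch (variables as in the example above; the import path is assumed):

    from testing import configuration  # adjust to the package layout

    output_path = configuration.create_tmp_output_path(str(tmp_test_directory), config)
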
testing.configuration.get_application_command(app, config_file=None, config_string=None)[source]#

Return the command to run the application with the given config file.

Parameters:
app: str

Name of the application.

config_file: str

Configuration file.

config_string: str

Configuration string (e.g., ‘--version’).

Returns:
str: command to run the application test.
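
For example (the application name and file names are illustrative):

    from testing import configuration  # adjust to the package layout

    # Run with a configuration file ...
    cmd = configuration.get_application_command("my-application", config_file="config.yml")
    # ... or with a plain option string.
    version_cmd = configuration.get_application_command("my-application", config_string="--version")
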
testing.configuration.get_list_of_test_configurations(config_files)[source]#

Return list of test configuration dictionaries or test names.

Read all configuration files for testing. Add “--help” and “--version” calls for all applications.

Parameters:
config_files: list

List of integration test configuration files.

Returns:
list

List of test names or configuration dictionaries.
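
One plausible use is to parametrize pytest over the returned configurations; the configuration file path is illustrative:

    import pytest

    from testing import configuration  # adjust to the package layout

    @pytest.mark.parametrize(
        "config",
        configuration.get_list_of_test_configurations(
            ["tests/integration_tests/config/my_app.yml"]  # illustrative path
        ),
    )
    def test_application(config):
        ...  # test body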

validate_output#

Compare application output to reference output.

testing.validate_output.compare_ecsv_files(file1, file2, tolerance=1e-05, test_columns=None)[source]#

Compare two ecsv files.

The comparison is successful if:

  • both files have the same number of rows,

  • numerical values in the columns agree within the given tolerance.

The comparison can be restricted to a subset of columns, with additional cuts applied. This is configured through the test_columns parameter, a list of dictionaries in which each dictionary contains the following key-value pairs (see the sketch after the parameter list):

  • TEST_COLUMN_NAME: column name to compare.

  • CUT_COLUMN_NAME: column used for filtering.

  • CUT_CONDITION: condition for filtering.

Parameters:
file1: str

First file to compare.

file2: str

Second file to compare.

tolerance: float

Tolerance for comparing numerical values.

test_columns: list

List of columns to compare. If None, all columns are compared.
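
A sketch with an illustrative test_columns configuration; the column names and the cut-condition syntax are assumptions, not part of the documented API:

    from testing import validate_output  # adjust to the package layout

    test_columns = [
        {
            "TEST_COLUMN_NAME": "reflectivity",  # column to compare (illustrative)
            "CUT_COLUMN_NAME": "wavelength",     # column used for filtering
            "CUT_CONDITION": "> 300",            # filter condition (syntax assumed)
        }
    ]
    validate_output.compare_ecsv_files(
        "output.ecsv", "reference.ecsv", tolerance=1e-5, test_columns=test_columns
    )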

testing.validate_output.compare_files(file1, file2, tolerance=1e-05, test_columns=None)[source]#

Compare two files of file type ecsv, json or yaml.

Parameters:
file1: str

First file to compare.

file2: str

Second file to compare.

tolerance: float

Tolerance for comparing numerical values.

test_columns: list

List of columns to compare. If None, all columns are compared.

Returns:
bool

True if the files are equal.
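
For example (file names illustrative):

    from testing import validate_output  # adjust to the package layout

    files_equal = validate_output.compare_files(
        "output.json", "reference.json", tolerance=1e-5
    )
    assert files_equal  # True if the files are equal within the tolerance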

testing.validate_output.compare_json_or_yaml_files(file1, file2, tolerance=0.01)[source]#

Compare two json or yaml files.

Floats embedded in sim_telarray strings are compared numerically (within the tolerance) rather than as text.

Parameters:
file1: str

First file to compare.

file2: str

Second file to compare.

tolerance: float

Tolerance for comparing numerical values.

Returns:
bool

True if the files are equal.
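
For example (file names illustrative):

    from testing import validate_output  # adjust to the package layout

    # Floats embedded in strings are compared numerically, not as text.
    files_equal = validate_output.compare_json_or_yaml_files(
        "output.yml", "reference.yml", tolerance=0.01
    )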

testing.validate_output.validate_application_output(config, from_command_line=None, from_config_file=None)[source]#

Validate application output against expected output.

Expected output is defined in the test configuration file. Some tests run only if the model version from the command line equals the model version from the configuration file.

Parameters:
config: dict

dictionary with the configuration for the application test.

from_command_line: str

Model version from the command line.

from_config_file: str

Model version from the configuration file.
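
A usage sketch, assuming config is a test configuration dictionary as used elsewhere in this module; the version strings are illustrative:

    from testing import validate_output  # adjust to the package layout

    validate_output.validate_application_output(
        config,
        from_command_line="6.0.0",  # illustrative version strings
        from_config_file="6.0.0",
    )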

helpers#

Helper functions for integration testing.

testing.helpers.skip_camera_efficiency(config)[source]#

Skip camera efficiency tests if the old version of testeff is used.
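
A sketch of how this helper might be called at the start of a test; the fixture name is illustrative:

    from testing import helpers  # adjust to the package layout

    def test_camera_efficiency(config):
        helpers.skip_camera_efficiency(config)  # skips if the old testeff version is in use
        ...  # test body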