e3.testsuite
============

.. py:module:: e3.testsuite

.. autoapi-nested-parse::

   Generic testsuite framework.



Submodules
----------

.. toctree::
   :maxdepth: 1

   /autoapi/e3/testsuite/_helpers/index
   /autoapi/e3/testsuite/control/index
   /autoapi/e3/testsuite/driver/index
   /autoapi/e3/testsuite/find_skipped_tests/index
   /autoapi/e3/testsuite/fragment/index
   /autoapi/e3/testsuite/main/index
   /autoapi/e3/testsuite/multiprocess_scheduler/index
   /autoapi/e3/testsuite/optfileparser/index
   /autoapi/e3/testsuite/process/index
   /autoapi/e3/testsuite/report/index
   /autoapi/e3/testsuite/result/index
   /autoapi/e3/testsuite/running_status/index
   /autoapi/e3/testsuite/testcase_finder/index
   /autoapi/e3/testsuite/utils/index


Attributes
----------

.. autoapisummary::

   e3.testsuite.logger


Exceptions
----------

.. autoapisummary::

   e3.testsuite.ProbingError
   e3.testsuite.TestAbort


Classes
-------

.. autoapisummary::

   e3.testsuite.GAIAResultFiles
   e3.testsuite.ReportIndex
   e3.testsuite.Log
   e3.testsuite.TestResult
   e3.testsuite.TestStatus
   e3.testsuite.ParsedTest
   e3.testsuite.TestFinder
   e3.testsuite.YAMLTestFinder
   e3.testsuite.CleanupMode
   e3.testsuite.ColorConfig
   e3.testsuite.TestsuiteCore
   e3.testsuite.Testsuite


Functions
---------

.. autoapisummary::

   e3.testsuite.deprecated
   e3.testsuite.dump_gaia_report
   e3.testsuite.dump_result_logs_if_needed
   e3.testsuite.generate_report
   e3.testsuite.summary_line
   e3.testsuite.dump_xunit_report
   e3.testsuite.dump_environ
   e3.testsuite.enum_to_cmdline_args_map
   e3.testsuite.isatty


Package Contents
----------------

.. py:function:: deprecated(stacklevel: int = 1) -> Callable[[F], F]

   Return a decorator to emit deprecation warnings.

   The function that the decorator returns emits the deprecation warning only
   the first time it is called.
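
The first-call-only warning behavior described above can be sketched with a
self-contained decorator. This is an illustration of the pattern, not the
actual ``e3.testsuite`` implementation:

```python
import functools
import warnings
from typing import Any, Callable, TypeVar

F = TypeVar("F", bound=Callable[..., Any])


def deprecated(stacklevel: int = 1) -> Callable[[F], F]:
    """Return a decorator that emits a DeprecationWarning on first call only."""

    def decorator(func: F) -> F:
        warned = False

        @functools.wraps(func)
        def wrapper(*args: Any, **kwargs: Any) -> Any:
            nonlocal warned
            if not warned:
                warned = True
                # +1 so the warning points at the wrapper's caller
                warnings.warn(
                    f"{func.__name__} is deprecated",
                    DeprecationWarning,
                    stacklevel=stacklevel + 1,
                )
            return func(*args, **kwargs)

        return wrapper  # type: ignore[return-value]

    return decorator


@deprecated()
def old_api() -> int:
    return 42
```

Calling ``old_api()`` twice emits a single ``DeprecationWarning``: the
``warned`` flag captured by the closure suppresses subsequent warnings.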


.. py:class:: GAIAResultFiles

   Filenames for a given result in a GAIA report.

   In a GAIA testsuite report, each result is described as one entry (line) in
   the "results" text file, plus several optional files. This object contains
   the names of those files, when present, for a given result.


   .. py:attribute:: result
      :type:  Optional[str]


   .. py:attribute:: log
      :type:  Optional[str]


   .. py:attribute:: expected
      :type:  Optional[str]


   .. py:attribute:: out
      :type:  Optional[str]


   .. py:attribute:: diff
      :type:  Optional[str]


   .. py:attribute:: time
      :type:  Optional[str]


   .. py:attribute:: info
      :type:  Optional[str]


.. py:function:: dump_gaia_report(report_index: e3.testsuite.report.index.ReportIndex, output_dir: str, discs: object = None, result_files: Optional[Dict[str, GAIAResultFiles]] = None) -> None

   Dump a GAIA-compatible testsuite report.

   :param report_index: ReportIndex instance for all the test results to
       include in the report.
   :param output_dir: Directory in which to emit the report.
   :param discs: List of discriminants associated with the testsuite report, if
       any. See "dump_discriminants" for the expected format.
   :param result_files: If the log files for each result have already been
       generated, mapping from test names to result file names. None
       otherwise.


.. py:function:: dump_result_logs_if_needed(env: e3.env.Env, result: e3.testsuite.result.TestResult, output_dir: str) -> Optional[GAIAResultFiles]

   Shortcut to call dump_result_logs if a GAIA report is requested.


.. py:function:: generate_report(output_file: IO[str], new_index: e3.testsuite.report.index.ReportIndex, old_index: Optional[e3.testsuite.report.index.ReportIndex], colors: e3.testsuite.utils.ColorConfig, show_all_logs: bool, show_xfail_logs: bool, show_error_output: bool, show_time_info: bool) -> None

   Generate a text report for testsuite results.

   :param output_file: Output file for the report.
   :param new_index: Testsuite results to display.
   :param old_index: Results from a previous testsuite run. If present, used
       to compute the new/already-detected/fixed regressions.
   :param colors: Color configuration for the output.
   :param show_all_logs: Whether to display logs for all testcases (successful
       tests are not displayed by default).
   :param show_xfail_logs: Whether to display logs for XFAIL results (hidden
       by default).
   :param show_error_output: Whether to display logs in test results.
   :param show_time_info: Whether to display time information for test
       results, if available.


.. py:function:: summary_line(result: e3.testsuite.result.TestResultSummary, colors: e3.testsuite.utils.ColorConfig, show_time_info: bool) -> str

   Format a summary line to describe the ``result`` test result.

   :param colors: Proxy to introduce (or not) colors in the result.
   :param show_time_info: Whether to include timing information in the result.


.. py:class:: ReportIndex(results_dir: str)

   Lightweight index for test results.


   .. py:attribute:: INDEX_FILENAME
      :value: '_index.json'



   .. py:attribute:: INDEX_MAGIC
      :value: 'e3.testsuite.report.index.ReportIndex:1'



   .. py:attribute:: results_dir

      Directory that contains test results (YAML files).



   .. py:attribute:: entries
      :type:  Dict[str, ReportIndexEntry]

      Map test names to their ReportIndexEntry instances.



   .. py:attribute:: status_counters

      Number of test results for each test status.



   .. py:attribute:: duration
      :type:  Optional[float]
      :value: None


      Optional number of seconds for the total duration of the testsuite run.



   .. py:method:: save_and_add_result(result: e3.testsuite.result.TestResult) -> None

      Save a test result in the results directory and add it to the index.

      :param result: Test result to save/add.



   .. py:method:: add_result(result: e3.testsuite.result.TestResultSummary, filename: str) -> None

      Add an entry to this index for the given test result.

      Note that unlike ``save_and_add_result``, this does not write the
      result data in the results dir: it is up to the caller to make sure of
      that.

      :param result: Result to add.
      :param filename: Name of the file that contains test result data.



   .. py:method:: read(results_dir: str) -> ReportIndex
      :classmethod:


      Read the index in the given results directory.



   .. py:method:: write() -> None

      Write the index on disk.



   .. py:property:: has_failures
      :type: bool


      Return whether there is at least one FAIL/ERROR test status.
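
The relationship between ``status_counters`` and ``has_failures`` can be
illustrated with a small self-contained sketch. The class and status names
below are illustrative stand-ins, not the real ``ReportIndex`` implementation:

```python
from collections import Counter
from typing import Dict


class IndexSketch:
    """Minimal stand-in for ReportIndex's per-status counters."""

    def __init__(self) -> None:
        # Map test names to status names (stand-in for ReportIndexEntry)
        self.entries: Dict[str, str] = {}
        # Number of test results for each test status
        self.status_counters: Counter = Counter()

    def add_result(self, test_name: str, status: str) -> None:
        self.entries[test_name] = status
        self.status_counters[status] += 1

    @property
    def has_failures(self) -> bool:
        # At least one FAIL/ERROR result means the testsuite run failed
        return self.status_counters["FAIL"] + self.status_counters["ERROR"] > 0
```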



.. py:function:: dump_xunit_report(name: str, index: e3.testsuite.report.index.ReportIndex, filename: str) -> None

   Dump a testsuite report to `filename` in the standard XUnit XML format.

   :param name: Name for the testsuite report.
   :param index: Report index for the testsuite results to report.
   :param filename: Name of the text file to write.


.. py:class:: Log(content: AnyStr)

   Bases: :py:obj:`yaml.YAMLObject`, :py:obj:`Generic`\ [\ :py:obj:`AnyStr`\ ]


   Object to hold long text or binary logs.

   We ensure that when dumped to YAML, the result is human readable.


   .. py:attribute:: yaml_loader


   .. py:attribute:: yaml_tag
      :value: '!e3.testsuite.result.Log'



   .. py:attribute:: log
      :type:  AnyStr


   .. py:property:: is_binary
      :type: bool


      Return whether this log contains binary data.



   .. py:property:: is_text
      :type: bool


      Return whether this log contains text data.



   .. py:method:: __iadd__(content: AnyStr) -> Log[AnyStr]

      Add additional content to the log.

      :param content: a message to log



   .. py:method:: __str__() -> str


.. py:class:: TestResult(name: str, env: Optional[dict] = None, status: Optional[TestStatus] = None, msg: str = '')

   Bases: :py:obj:`yaml.YAMLObject`


   Represent a result for a given test.


   .. py:attribute:: yaml_loader


   .. py:attribute:: yaml_tag
      :value: '!e3.testsuite.result.TestResult'



   .. py:attribute:: test_name


   .. py:attribute:: env
      :value: None



   .. py:attribute:: status


   .. py:attribute:: msg
      :type:  Optional[str]
      :value: ''



   .. py:attribute:: log


   .. py:attribute:: processes
      :type:  list
      :value: []



   .. py:attribute:: failure_reasons
      :type:  Set[FailureReason]


   .. py:attribute:: expected
      :type:  Optional[Log]
      :value: None



   .. py:attribute:: out
      :type:  Optional[Log]
      :value: None



   .. py:attribute:: diff
      :type:  Optional[Log]
      :value: None



   .. py:attribute:: time
      :type:  Optional[float]
      :value: None



   .. py:attribute:: info
      :type:  Dict[str, str]


   .. py:method:: set_status(status: TestStatus, msg: Optional[str] = '') -> None

      Update the test status.

      :param status: New status. Note that only test results with status set
          to ERROR can be changed.
      :param msg: Optional short message to describe the result.  Note that
          multiline strings are turned into single-line strings.



   .. py:method:: __str__() -> str


   .. py:method:: save(results_dir: str) -> str

      Write this test result as a YAML file.

      :param results_dir: Name of the directory in which to write the test
          result. When writing a testsuite report, this corresponds to the
          report's ``results_dir`` (see
          ``e3.testsuite.report.index.ReportIndex``).
      :return: The base filename of the file written. It is generated from
          the test name.



   .. py:property:: summary
      :type: TestResultSummary



.. py:class:: TestStatus(*args, **kwds)

   Bases: :py:obj:`enum.Enum`


   Testcase execution status.


   .. py:attribute:: PASS


   .. py:attribute:: FAIL


   .. py:attribute:: XFAIL


   .. py:attribute:: XPASS


   .. py:attribute:: VERIFY


   .. py:attribute:: SKIP


   .. py:attribute:: NOT_APPLICABLE


   .. py:attribute:: ERROR


   .. py:method:: color(colors: e3.testsuite.utils.ColorConfig) -> str

      Return the ANSI color code for this test status.

      This returns an empty string if colors are disabled.



.. py:class:: ParsedTest

   Basic information to instantiate a test driver.


   .. py:attribute:: test_name
      :type:  str

      Name for this testcase.



   .. py:attribute:: driver_cls
      :type:  Optional[Type[e3.testsuite.driver.TestDriver]]

      Test driver class to instantiate, None to use the default one.



   .. py:attribute:: test_env
      :type:  dict

      Base test environment.

      Driver instantiation will complete it with test directory, test name, etc.



   .. py:attribute:: test_dir
      :type:  str

      Directory that contains the testcase.



   .. py:attribute:: test_matcher
      :type:  Optional[str]
      :value: None


      Textual test matcher.

      If not None, string to match against the list of requested tests to run: in
      that case, the test is ignored if there is no match. This is needed to
      filter out tests in testsuites where tests don't necessarily have dedicated
      directories.



.. py:exception:: ProbingError

   Bases: :py:obj:`Exception`


   Exception raised in TestFinder.probe when a test is misformatted.


.. py:class:: TestFinder

   Interface for objects that find testcases in the tests subdirectory.


   .. py:property:: test_dedicated_directory
      :type: bool


      Return whether each test has a dedicated test directory.

      Even though e3-testsuite is primarily designed for this to be true,
      some testsuites actually host multiple tests in the same directory.
      When this is the case, we need to probe all directories and only then
      filter which tests to run using ParsedTest.test_matcher.



   .. py:method:: probe(testsuite: e3.testsuite.TestsuiteCore, dirpath: str, dirnames: List[str], filenames: List[str]) -> TestFinderResult
      :abstractmethod:


      Return a test if the "dirpath" directory contains a testcase.

      Raise a ProbingError if anything is wrong.

      :param testsuite: Testsuite instance that is looking for testcases.
      :param dirpath: Directory to probe for a testcase.
      :param dirnames: List of directories that "dirpath" contains.
      :param filenames: List of files that "dirpath" contains.



.. py:class:: YAMLTestFinder

   Bases: :py:obj:`TestFinder`


   Look for "test.yaml"-based tests.

   This considers that all directories that contain a "test.yaml" file are
   testcases. This file is parsed as YAML, the result is used as a test
   environment, and if it contains a "driver" key, it uses the testsuite
   driver whose name corresponds to the associated string value.


   .. py:method:: probe(testsuite: e3.testsuite.TestsuiteCore, dirpath: str, dirnames: List[str], filenames: List[str]) -> TestFinderResult

      Return a test if the "dirpath" directory contains a testcase.

      Raise a ProbingError if anything is wrong.

      :param testsuite: Testsuite instance that is looking for testcases.
      :param dirpath: Directory to probe for a testcase.
      :param dirnames: List of directories that "dirpath" contains.
      :param filenames: List of files that "dirpath" contains.
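
The probing convention described above (a "test.yaml" file marks a testcase,
its content becomes the test environment, and the optional "driver" key
selects the driver) can be sketched without depending on ``e3.testsuite``. The
YAML parsing step is stubbed out with a callback, and all names below are
illustrative:

```python
import os
from dataclasses import dataclass
from typing import Callable, List, Optional


@dataclass
class ParsedTestSketch:
    """Illustrative stand-in for e3.testsuite.testcase_finder.ParsedTest."""

    test_name: str
    driver_name: Optional[str]
    test_env: dict
    test_dir: str


def probe_sketch(
    dirpath: str,
    filenames: List[str],
    load_yaml: Callable[[str], dict],
) -> Optional[ParsedTestSketch]:
    """Treat directories containing "test.yaml" as testcases."""
    if "test.yaml" not in filenames:
        return None
    # The parsed file content is used as the base test environment
    env = load_yaml(os.path.join(dirpath, "test.yaml")) or {}
    return ParsedTestSketch(
        test_name=os.path.basename(dirpath),
        driver_name=env.get("driver"),  # None -> testsuite's default driver
        test_env=env,
        test_dir=dirpath,
    )
```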



.. py:class:: CleanupMode(*args, **kwds)

   Bases: :py:obj:`enum.Enum`


   Mode for working space cleanups.


   .. py:attribute:: NONE


   .. py:attribute:: PASSING


   .. py:attribute:: ALL


   .. py:method:: default() -> CleanupMode
      :classmethod:



   .. py:method:: descriptions() -> Dict[CleanupMode, str]
      :classmethod:



.. py:class:: ColorConfig(colors_enabled: Optional[bool] = None)

   Proxy for color management.

   This embeds colorama's Fore/Style, or DummyColors instances when colors are
   disabled.


   .. py:attribute:: Fore


   .. py:attribute:: Style
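
The DummyColors idea mentioned above (every color/style attribute collapses to
an empty string when colors are disabled) can be sketched as follows; this
mirrors the technique, not the exact ``e3.testsuite`` code:

```python
class DummyColors:
    """Stand-in for colorama's Fore/Style that disables all styling."""

    def __getattr__(self, name: str) -> str:
        # Any attribute (RED, BRIGHT, RESET_ALL, ...) becomes ""
        return ""


class ColorConfigSketch:
    """Illustrative proxy: colorama when enabled, DummyColors otherwise."""

    def __init__(self, colors_enabled: bool) -> None:
        if colors_enabled:
            try:
                from colorama import Fore, Style  # optional dependency
            except ImportError:
                Fore, Style = DummyColors(), DummyColors()
        else:
            Fore, Style = DummyColors(), DummyColors()
        self.Fore = Fore
        self.Style = Style
```

Code that formats output can then use ``cfg.Fore.RED`` unconditionally:
with colors disabled the codes simply vanish from the output.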


.. py:function:: dump_environ(filename: str, env: e3.env.Env) -> None

   Dump environment variables into a sourceable file.


.. py:function:: enum_to_cmdline_args_map(enum_cls: Type[EnumType]) -> Dict[str, EnumType]

   Turn enum alternatives into command-line arguments.

   This helps expose enums as command-line options: alternative names are
   turned into lower case and underscores are replaced with dashes.
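
A self-contained sketch of that transformation (the helper and the enum below
are illustrative stand-ins, not the real ``e3.testsuite`` code):

```python
import enum
from typing import Dict, Type, TypeVar

E = TypeVar("E", bound=enum.Enum)


class StatusSketch(enum.Enum):
    """Example enum with an underscore in one alternative name."""

    PASS = enum.auto()
    NOT_APPLICABLE = enum.auto()


def enum_to_cmdline_args_map_sketch(enum_cls: Type[E]) -> Dict[str, E]:
    """Map command-line spellings ("not-applicable") to enum members."""
    return {
        member.name.lower().replace("_", "-"): member
        for member in enum_cls
    }
```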


.. py:function:: isatty(stream: IO[AnyStr]) -> bool

   Return whether stream is a TTY.

   This is a safe predicate: it works if stream is None or if it does not even
   support TTY detection. In these cases, it is conservative and considers
   that the stream is not a TTY.
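
The conservative behavior described above can be sketched in a few lines;
this is an illustration of the contract, not the actual implementation:

```python
import io
from typing import IO, Optional


def isatty_sketch(stream: Optional[IO]) -> bool:
    """Safe TTY predicate: be conservative on None or odd stream objects."""
    try:
        # Streams lacking isatty() raise AttributeError; closed or exotic
        # streams may raise ValueError/UnsupportedOperation. All of these
        # conservatively count as "not a TTY".
        return stream is not None and stream.isatty()
    except (AttributeError, ValueError, io.UnsupportedOperation):
        return False
```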


.. py:data:: logger

.. py:exception:: TestAbort

   Bases: :py:obj:`Exception`


   Raise this to silently abort the execution of a test fragment.


.. py:class:: TestsuiteCore(root_dir: Optional[str] = None, testsuite_name: str = 'Untitled testsuite')

   Testsuite Core driver.

   This class is the base of the Testsuite class and should not be
   instantiated. Overriding any of the functions declared in it is not
   recommended.

   See documentation of Testsuite class for overridable methods and
   variables.


   .. py:attribute:: root_dir
      :value: b'.'


      Root directory for the testsuite, i.e. directory from which the test
      directory (see ``self.test_dir``) is looked up.



   .. py:attribute:: test_dir

      Root directory for the tree in which testcases are searched.



   .. py:attribute:: consecutive_failures
      :value: 0



   .. py:attribute:: return_values
      :type:  Dict[str, Any]


   .. py:attribute:: result_tracebacks
      :type:  Dict[str, List[str]]


   .. py:attribute:: testsuite_name
      :value: 'Untitled testsuite'



   .. py:attribute:: aborted_too_many_failures
      :value: False


      Whether the testsuite aborted because of too many consecutive test
      failures (see the --max-consecutive-failures command-line option).



   .. py:attribute:: use_multiprocessing
      :value: False


      Whether to use multi-processing for test parallelism.

      Beyond a certain level of parallelism, Python's GIL contention is too
      high to benefit from more processors. When we reach this level, it is
      more effective to use multiple processes to avoid the GIL contention.

      The actual value for this attribute is computed once the DAG is built,
      in the "compute_use_multiprocessing" method.



   .. py:method:: _test_counter() -> int


   .. py:method:: _test_status_counters() -> Dict[result.TestStatus, int]


   .. py:method:: _results() -> Dict[str, result.TestStatus]


   .. py:property:: test_counter
      :type: int


      Return the number of test results in the report.

      Warning: this property is obsolete and will be removed in the future.



   .. py:property:: test_status_counters
      :type: Dict[result.TestStatus, int]


      Return test result counts per test status.

      Warning: this property is obsolete and will be removed in the future.



   .. py:property:: results
      :type: Dict[str, result.TestStatus]


      Return a mapping from test names to results.

      Warning: this property is obsolete and will be removed in the future.



   .. py:method:: compute_use_multiprocessing() -> bool
      :abstractmethod:


      Return whether to use multi-processing for test parallelism.

      See docstring for the "use_multiprocessing" attribute. Subclasses are
      free to override this to take control of when multiprocessing is
      enabled. Note that this will disregard the "--force-multiprocessing"
      command line option.



   .. py:method:: testsuite_main(args: Optional[List[str]] = None) -> int

      Main for the main testsuite script.

      :param args: Command line arguments. If None, use `sys.argv`.
      :return: The testsuite status code (0 for success, a positive value
          for failure).



   .. py:method:: get_test_list(sublist: List[str]) -> List[testcase_finder.ParsedTest]

      Retrieve the list of tests.

      :param sublist: A list of test scenarios or patterns.



   .. py:method:: add_test(dag: e3.collection.dag.DAG, parsed_test: testcase_finder.ParsedTest) -> None

      Register a test to run.

      :param dag: The DAG of test fragments to execute for the testsuite.
      :param parsed_test: Test to instantiate.



   .. py:method:: dump_testsuite_result() -> None

      Log a summary of test results.

      Subclasses are free to override this to do whatever is suitable for
      them.



   .. py:method:: collect_result(fragment: fragment.TestFragment) -> None

      Import test results from ``fragment`` into testsuite reports.

      :param fragment: Test fragment (just completed) from which to import
          test results.



   .. py:method:: add_result(item: driver.ResultQueueItem) -> None

      Add a test result to the result index and log it.

      :param item: Result queue item for the result to add.



   .. py:method:: add_test_error(test_name: str, message: str, tb: Optional[str] = None) -> None

      Create and add an ERROR test status.

      :param test_name: Prefix for the test result to create. This adds a
          suffix to avoid clashes.
      :param message: Error message.
      :param tb: Optional traceback for the error.



   .. py:method:: setup_result_dirs() -> None

      Create the output directory in which the results are stored.



   .. py:method:: run_standard_mainloop(dag: e3.collection.dag.DAG) -> None

      Run the main loop to execute test fragments in threads.



   .. py:method:: run_multiprocess_mainloop(dag: e3.collection.dag.DAG) -> None

      Run the main loop to execute test fragments in subprocesses.



   .. py:property:: tests_subdir
      :type: str

      :abstractmethod:


      Return the subdirectory in which tests are looked for.

      The returned directory name is considered relative to the root
      testsuite directory (self.root_dir).



   .. py:property:: test_driver_map
      :type: Dict[str, Type[driver.TestDriver]]

      :abstractmethod:


      Return a map from test driver names to TestDriver subclasses.

      Test finders will be able to use this map to fetch the test drivers
      referenced in testcases.



   .. py:property:: default_driver
      :type: Optional[str]

      :abstractmethod:


      Return the name of the default driver for testcases.

      When tests do not query a specific driver, the one associated with
      this name is used instead. If this property returns None, all tests are
      required to query a driver.



   .. py:method:: test_name(test_dir: str) -> str
      :abstractmethod:


      Compute the test name given a testcase spec.

      This function can be overridden. By default it uses the name of the
      test directory. Note that the test name should be a valid filename (no
      directory separators, nor special characters such as ``:``, ...).



   .. py:property:: test_finders
      :type: List[testcase_finder.TestFinder]

      :abstractmethod:


      Return test finders to probe tests directories.



   .. py:method:: add_options(parser: argparse.ArgumentParser) -> None
      :abstractmethod:


      Add testsuite specific switches.

      Subclasses can override this method to add their own testsuite
      command-line options.

      :param parser: Parser for command-line arguments. See
          <https://docs.python.org/3/library/argparse.html> for usage.



   .. py:method:: set_up() -> None
      :abstractmethod:


      Execute operations before running the testsuite.

      Before running this, command-line arguments were parsed. After this
      returns, the testsuite will look for testcases.

      By default, this does nothing. Overriding this method allows testsuites
      to prepare the execution of the testsuite depending on their needs. For
      instance:

      * process testsuite-specific options;
      * initialize environment variables;
      * adjust self.env (object forwarded to test drivers).



   .. py:method:: tear_down() -> None
      :abstractmethod:


      Execute operations when finalizing the testsuite.

      By default, this cleans the working (temporary) directory in which the
      tests were run.



   .. py:method:: write_comment_file(comment_file: IO[str]) -> None
      :abstractmethod:


      Write the comment file's content.

      :param comment_file: File descriptor for the comment file.  Overriding
          methods should only call its "write" method (or print to it).



   .. py:property:: default_max_consecutive_failures
      :type: int

      :abstractmethod:


      Return the default maximum number of consecutive failures.

      In some cases, aborting the testsuite when there are too many failures
      saves time and costs: the software under test or its environment is too
      broken, and there is no point in continuing to run the testsuite.

      This property must return the number of test failures (FAIL or ERROR)
      that triggers the abortion of the testsuite. If zero, this behavior is
      disabled.



   .. py:property:: default_failure_exit_code
      :type: int

      :abstractmethod:


      Return the default exit code when at least one test fails.



   .. py:property:: auto_generate_text_report
      :type: bool

      :abstractmethod:


      Return whether to automatically generate a text report.

      This is disabled by default (and controlled by the
      --generate-text-report command-line option) because the generation of
      this report can add non-trivial overhead depending on results.



   .. py:method:: adjust_dag_dependencies(dag: e3.collection.dag.DAG) -> None
      :abstractmethod:


      Adjust dependencies in the DAG of all test fragments.

      :param dag: DAG to adjust.



   .. py:property:: multiprocessing_supported
      :type: bool

      :abstractmethod:


      Return whether running test fragments in subprocesses is supported.

      When multiprocessing is enabled (see the "use_multiprocessing"
      attribute), test fragments are executed in a separate process, and the
      propagation of their return values is disabled (FragmentData's
      "previous_values" argument is always an empty dict).

      This means that multiprocessing can work only if test drivers and all
      code used by test fragments can be imported by subprocesses (for
      instance, classes defined in the testsuite entry point are unavailable)
      and if test drivers don't use the "previous_values" mechanism.

      Testsuite authors can use the "--force-multiprocessing" testsuite
      option to check if this works.
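
Several of the overridable hooks above, such as ``add_options``, typically
boil down to standard ``argparse`` calls. A minimal sketch, decoupled from
``e3.testsuite`` (the option names below are invented for illustration):

```python
import argparse


def add_options_sketch(parser: argparse.ArgumentParser) -> None:
    """What a testsuite-specific add_options override might register."""
    parser.add_argument(
        "--build-mode",
        choices=["dev", "prod"],
        default="dev",
        help="Build mode forwarded to test drivers (illustrative option)",
    )
    parser.add_argument(
        "--valgrind",
        action="store_true",
        help="Run test programs under valgrind (illustrative option)",
    )


# In a real testsuite, the framework owns the parser; here we build one
# directly just to exercise the hook.
parser = argparse.ArgumentParser()
add_options_sketch(parser)
args = parser.parse_args(["--build-mode", "prod"])
```

The parsed options are then typically available to drivers through the
environment that the testsuite forwards to them.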



.. py:class:: Testsuite(root_dir: Optional[str] = None, testsuite_name: str = 'Untitled testsuite')

   Bases: :py:obj:`TestsuiteCore`


   Testsuite class.

   When implementing a new testsuite, you should create a class that
   inherits from this class.


   .. py:property:: tests_subdir
      :type: str


      Return the subdirectory in which tests are looked for.

      The returned directory name is considered relative to the root
      testsuite directory (self.root_dir).



   .. py:property:: test_driver_map
      :type: Dict[str, Type[driver.TestDriver]]

      :abstractmethod:


      Return a map from test driver names to TestDriver subclasses.

      Test finders will be able to use this map to fetch the test drivers
      referenced in testcases.



   .. py:property:: default_driver
      :type: Optional[str]


      Return the name of the default driver for testcases.

      When tests do not query a specific driver, the one associated with
      this name is used instead. If this property returns None, all tests are
      required to query a driver.



   .. py:method:: test_name(test_dir: str) -> str

      Compute the test name given a testcase spec.

      This function can be overridden. By default it uses the name of the
      test directory. Note that the test name should be a valid filename (no
      directory separators, nor special characters such as ``:``, ...).



   .. py:property:: test_finders
      :type: List[testcase_finder.TestFinder]


      Return test finders to probe tests directories.



   .. py:method:: add_options(parser: argparse.ArgumentParser) -> None

      Add testsuite specific switches.

      Subclasses can override this method to add their own testsuite
      command-line options.

      :param parser: Parser for command-line arguments. See
          <https://docs.python.org/3/library/argparse.html> for usage.



   .. py:method:: set_up() -> None

      Execute operations before running the testsuite.

      Before running this, command-line arguments were parsed. After this
      returns, the testsuite will look for testcases.

      By default, this does nothing. Overriding this method allows testsuites
      to prepare the execution of the testsuite depending on their needs. For
      instance:

      * process testsuite-specific options;
      * initialize environment variables;
      * adjust self.env (object forwarded to test drivers).



   .. py:method:: tear_down() -> None

      Execute operations when finalizing the testsuite.

      By default, this cleans the working (temporary) directory in which the
      tests were run.



   .. py:method:: write_comment_file(comment_file: IO[str]) -> None

      Write the comment file's content.

      :param comment_file: File descriptor for the comment file.  Overriding
          methods should only call its "write" method (or print to it).



   .. py:property:: default_max_consecutive_failures
      :type: int


      Return the default maximum number of consecutive failures.

      In some cases, aborting the testsuite when there are too many failures
      saves time and costs: the software under test or its environment is too
      broken, and there is no point in continuing to run the testsuite.

      This property must return the number of test failures (FAIL or ERROR)
      that triggers the abortion of the testsuite. If zero, this behavior is
      disabled.



   .. py:property:: default_failure_exit_code
      :type: int


      Return the default exit code when at least one test fails.



   .. py:property:: auto_generate_text_report
      :type: bool


      Return whether to automatically generate a text report.

      This is disabled by default (and controlled by the
      --generate-text-report command-line option) because the generation of
      this report can add non-trivial overhead depending on results.



   .. py:method:: adjust_dag_dependencies(dag: e3.collection.dag.DAG) -> None

      Adjust dependencies in the DAG of all test fragments.

      :param dag: DAG to adjust.



   .. py:property:: multiprocessing_supported
      :type: bool


      Return whether running test fragments in subprocesses is supported.

      When multiprocessing is enabled (see the "use_multiprocessing"
      attribute), test fragments are executed in a separate process, and the
      propagation of their return values is disabled (FragmentData's
      "previous_values" argument is always an empty dict).

      This means that multiprocessing can work only if test drivers and all
      code used by test fragments can be imported by subprocesses (for
      instance, classes defined in the testsuite entry point are unavailable)
      and if test drivers don't use the "previous_values" mechanism.

      Testsuite authors can use the "--force-multiprocessing" testsuite
      option to check if this works.



   .. py:method:: compute_use_multiprocessing() -> bool

      Return whether to use multi-processing for test parallelism.

      See docstring for the "use_multiprocessing" attribute. Subclasses are
      free to override this to take control of when multiprocessing is
      enabled. Note that this will disregard the "--force-multiprocessing"
      command line option.



