e3.testsuite.driver.classic
===========================

.. py:module:: e3.testsuite.driver.classic


Attributes
----------

.. autoapisummary::

   e3.testsuite.driver.classic.TIMEOUT_OUTPUT_PATTERN
   e3.testsuite.driver.classic.TIMEOUT_OUTPUT_STR_RE
   e3.testsuite.driver.classic.TIMEOUT_OUTPUT_BYTES_RE


Exceptions
----------

.. autoapisummary::

   e3.testsuite.driver.classic.TestSkip
   e3.testsuite.driver.classic.TestAbortWithError
   e3.testsuite.driver.classic.TestAbortWithFailure


Classes
-------

.. autoapisummary::

   e3.testsuite.driver.classic.ProcessResult
   e3.testsuite.driver.classic.ClassicTestDriver


Module Contents
---------------

.. py:exception:: TestSkip

   Bases: :py:obj:`Exception`


   Convenience exception to abort a testcase.

   When this exception is raised during test initialization or execution,
   consider that this testcase must be skipped (TestStatus.SKIP).


.. py:exception:: TestAbortWithError

   Bases: :py:obj:`Exception`


   Convenience exception to abort a testcase.

   When this exception is raised during test initialization or execution,
   consider that something went wrong (TestStatus.ERROR).


.. py:exception:: TestAbortWithFailure

   Bases: :py:obj:`Exception`


   Convenience exception to abort a testcase, considering it failed.

   When this exception is raised during test initialization or execution,
   consider that it failed (TestStatus.FAIL or TestStatus.XFAIL, depending on
   test control).
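
A driver raises any of these exceptions from its initialization or execution code, and the testsuite machinery maps them to a test status. A minimal sketch of that mapping, using local stand-in classes (the real exceptions live in ``e3.testsuite.driver.classic``; the sketch simplifies ``TestAbortWithFailure`` to a plain FAIL, ignoring XFAIL resolution via test control):

```python
# Stand-in exception classes mirroring the ones documented above: each is
# a plain Exception subclass, so re-creating them locally is enough for
# illustration purposes.
class TestSkip(Exception):
    pass


class TestAbortWithError(Exception):
    pass


class TestAbortWithFailure(Exception):
    pass


def run_testcase(run):
    """Map exceptions raised by ``run`` to a status name (sketch only)."""
    try:
        run()
    except TestSkip:
        return "SKIP"
    except TestAbortWithError:
        return "ERROR"
    except TestAbortWithFailure:
        return "FAIL"
    return "PASS"


def unsupported_test():
    # A test that bails out early, e.g. on an unsupported platform.
    raise TestSkip("not supported on this platform")
```

For instance, ``run_testcase(unsupported_test)`` returns ``"SKIP"``, while a ``run`` callback that returns normally yields ``"PASS"``.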


.. py:data:: TIMEOUT_OUTPUT_PATTERN
   :value: 'rlimit: Real time limit ([^\\n]+) exceeded\\n'


.. py:data:: TIMEOUT_OUTPUT_STR_RE

.. py:data:: TIMEOUT_OUTPUT_BYTES_RE
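
These regular expressions detect the message that the ``rlimit`` wrapper prints when a process exceeds its time limit. A self-contained sketch re-creating them from the pattern documented above (the sample output line is invented for illustration):

```python
import re

# Pattern re-created locally from the documented value; the "str" and
# "bytes" variants compile the same pattern for text and binary outputs.
TIMEOUT_OUTPUT_PATTERN = "rlimit: Real time limit ([^\n]+) exceeded\n"
TIMEOUT_OUTPUT_STR_RE = re.compile(TIMEOUT_OUTPUT_PATTERN)
TIMEOUT_OUTPUT_BYTES_RE = re.compile(TIMEOUT_OUTPUT_PATTERN.encode("ascii"))

# Hypothetical process output: the trailing line mimics what rlimit would
# print on timeout (the exact limit text here is made up).
output = "test is running...\nrlimit: Real time limit 10.0 exceeded\n"
match = TIMEOUT_OUTPUT_STR_RE.search(output)
```

When the pattern matches, ``match.group(1)`` holds the limit description (here ``"10.0"``).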

.. py:class:: ProcessResult(status: int, out: Union[str, bytes])

   Record results from a subprocess.


   .. py:attribute:: status


   .. py:attribute:: out
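
A ``ProcessResult`` simply bundles the exit status with the captured output. A sketch with a local stand-in (not the real class, which is the one documented above):

```python
# Minimal stand-in mirroring ProcessResult's two documented attributes.
class ProcessResult:
    def __init__(self, status, out):
        self.status = status  # process exit code
        self.out = out        # captured output, str or bytes

# Typical use: inspect the exit code, then the output.
result = ProcessResult(0, "all tests passed\n")
succeeded = result.status == 0
```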


.. py:class:: ClassicTestDriver(env: e3.env.Env, test_env: Dict[str, Any])

   Bases: :py:obj:`e3.testsuite.driver.TestDriver`


   Enhanced test driver base class for common behaviors.

   This test driver provides several facilities to automate tasks that drivers
   often duplicate in practice:

   * run subprocesses;
   * intercept subprocess failures and turn them into appropriate test
     statuses;
   * gather subprocess outputs to ``self.result.out``;
   * have support for automatic XFAIL/SKIP test results.


   .. py:attribute:: Style
      :type:  e3.testsuite.utils.DummyColors


   .. py:attribute:: Fore
      :type:  e3.testsuite.utils.DummyColors


   .. py:attribute:: output
      :type:  e3.testsuite.result.Log


   .. py:attribute:: test_control
      :type:  e3.testsuite.control.TestControl


   .. py:property:: copy_test_directory
      :type: bool


      Return whether to automatically copy test directory to working dir.

      If this returns True, the working directory is automatically
      synchronized to the test directory before running the testcase.



   .. py:method:: run() -> None
      :abstractmethod:


      Run the testcase.

      Subclasses must override this.



   .. py:property:: default_process_timeout
      :type: int


      Return the default timeout for processes spawned by the ``shell`` method.

      The result is a number of seconds.



   .. py:property:: default_encoding
      :type: str


      Return the default encoding to decode process outputs.

      If "binary", consider that process outputs are binary, so do not try to
      decode them to text.



   .. py:property:: test_control_creator
      :type: e3.testsuite.control.TestControlCreator


      Return a test control creator for this test.

      By default, this returns a YAMLTestControlCreator instance tied to this
      driver with an empty condition environment. Subclasses are free to
      override this to suit their needs: for instance, returning an
      OptfileCreater instance to process "test.opt" files.



   .. py:method:: shell(args: List[str], cwd: Optional[str] = None, env: Optional[Dict[str, str]] = None, catch_error: bool = True, analyze_output: bool = True, timeout: Optional[int] = None, encoding: Optional[str] = None, truncate_logs_threshold: Optional[int] = None, ignore_environ: bool = True) -> ProcessResult

      Run a subprocess.

      :param args: Arguments for the subprocess to run.
      :param cwd: Current working directory for the subprocess. By default
          (i.e. if None), use the test working directory.
      :param env: Environment to pass to the subprocess.
      :param catch_error: If True, consider that an error status code leads
          to a test failure. In that case, abort the testcase.
      :param analyze_output: If True, add the subprocess output to the
          ``self.output`` log.
      :param timeout: Timeout (in seconds) for the subprocess. Use
          ``self.default_process_timeout`` if left to None.
      :param encoding: Encoding to use when decoding the subprocess' output
          stream. If None, use the default encoding for this test
          (``self.default_encoding``, from the ``encoding`` entry in
          test.yaml). If "binary", leave the output undecoded as a bytes
          string.
      :param truncate_logs_threshold: Threshold to truncate the subprocess
          output in ``self.result.log``. See
          ``e3.testsuite.result.truncated``'s ``line_count`` argument. If
          left to None, use the testsuite's ``--truncate-logs`` option.
      :param ignore_environ: Applies only when ``env`` is not None.
          When True (the default), pass exactly the environment variables
          in ``env``. When False, pass a copy of ``os.environ`` that is
          augmented with the variables in ``env``.



   .. py:method:: add_test(dag: e3.collection.dag.DAG) -> None

      Create the test workflow.

      Amend a DAG with the test fragments that should be executed along with
      their dependencies. See BasicTestDriver for an example of workflow.



   .. py:method:: push_success() -> None

      Set status to consider that the test passed.



   .. py:method:: push_skip(message: Optional[str]) -> None

      Consider that we skipped the test and set the status accordingly.

      :param message: Label to explain the skipping.



   .. py:method:: push_error(message: Optional[str]) -> None

      Set status to consider that something went wrong during test execution.

      :param message: Message to explain what went wrong.



   .. py:method:: push_failure(message: Optional[str]) -> None

      Consider that the test failed and set status according to test control.

      :param message: Test failure description.



   .. py:method:: set_up() -> None

      Run initialization operations before a test runs.

      Subclasses can override this to prepare testcase execution.

      Having a callback separate from "run" is useful when dealing with
      inheritance: overriding the "set_up" method in subclasses makes it
      possible to append setup actions before the testcase execution actually
      takes place (in the "run" method).

      If everything happened in the "run" method, that would not be possible
      without re-implementing "run" in each subclass, with obvious code
      duplication issues.



   .. py:method:: cleanup_working_dir() -> None

      Remove the working directory tree.



   .. py:method:: tear_down() -> None

      Run finalization operations after a test has run.

      Subclasses can override this to run clean-ups after testcase execution.
      By default, this method removes the working directory (unless
      --disable-cleanup/--dev-temp is passed).

      See set_up's docstring for the rationale behind this API.



   .. py:method:: run_wrapper(prev: Dict[str, Any], slot: int) -> None


   .. py:method:: process_may_have_timed_out(result: ProcessResult) -> bool

      Return whether the process that yielded ``result`` may have timed out.

      This assumes that ``result`` is the value returned by a call to the
      ``shell`` method. Note that this uses simple heuristics to determine
      whether the process may have timed out, as this information is not
      preserved reliably under the hood: the process is run under the rlimit
      program, which just prints a known error message and returns a specific
      exit code in case of timeout.



   .. py:method:: compute_failures() -> List[str]

      Analyze the testcase result and return the list of reasons for failure.

      This architecture makes it possible to report multiple failure reasons,
      for instance: an unexpected computation result plus the presence of
      Valgrind diagnostics. The result is a list of short strings that
      describe the failures. This method is expected to write to
      ``self.result.log`` in order to convey more information if needed.

      By default, consider that the testcase succeeded if we reach the
      analysis step. Subclasses may override this to actually perform checks.



   .. py:method:: analyze() -> None

      Analyze the testcase result, adjust status accordingly.
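
The ``ignore_environ`` semantics of the ``shell`` method can be sketched as a standalone helper (an illustrative re-implementation of the documented behavior, not the actual code):

```python
import os
from typing import Dict, Optional


def effective_environ(
    env: Optional[Dict[str, str]], ignore_environ: bool = True
) -> Dict[str, str]:
    """Compute the environment a subprocess would receive (sketch only)."""
    if env is None:
        # No explicit environment: inherit the parent's.
        return dict(os.environ)
    if ignore_environ:
        # Pass exactly the variables in ``env``.
        return dict(env)
    # Overlay ``env`` on a copy of os.environ.
    result = dict(os.environ)
    result.update(env)
    return result
```

For instance, ``effective_environ({"LANG": "C"})`` contains only ``LANG``, while ``effective_environ({"LANG": "C"}, ignore_environ=False)`` also keeps every inherited variable.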


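Putting the pieces together, the typical ``set_up``/``run``/``analyze`` workflow of a ``ClassicTestDriver`` subclass can be sketched with a local stand-in base class (the stub below only mimics the hooks needed for illustration; the program name and expected output are invented):

```python
from typing import List


class StubClassicDriver:
    """Local stand-in mimicking a few ClassicTestDriver hooks.

    Not the real class: just enough structure to show the workflow.
    """

    def __init__(self) -> None:
        self.output = ""
        self.failures: List[str] = []

    def shell(self, args: List[str]) -> None:
        # The real method runs a subprocess and gathers its output; this
        # stub merely records the command line as fake output.
        self.output += " ".join(args) + "\n"

    def set_up(self) -> None:
        self.output += "base setup\n"

    def compute_failures(self) -> List[str]:
        return []

    def analyze(self) -> None:
        self.failures = self.compute_failures()


class MyDriver(StubClassicDriver):
    def set_up(self) -> None:
        # Chain to the parent's set_up, then add driver-specific steps:
        # the inheritance pattern described for "set_up" above.
        super().set_up()
        self.output += "extra setup\n"

    def run(self) -> None:
        # A typical driver just runs the program under test.
        self.shell(["my_program", "--check"])

    def compute_failures(self) -> List[str]:
        # Several independent checks can each contribute a failure reason.
        failures = []
        if "expected line" not in self.output:
            failures.append("unexpected output")
        if "Invalid read" in self.output:
            failures.append("Valgrind diagnostics")
        return failures


driver = MyDriver()
driver.set_up()
driver.run()
driver.analyze()
```

Here the driver reports a single ``"unexpected output"`` failure, since the stub output does not contain the expected line.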

