PyFoam.Infrastructure.CTestRun module

A wrapper to run a solver as a CTest

class PyFoam.Infrastructure.CTestRun.CTestRun[source]

Bases: object

This class runs a solver on a test case, examines the results and fails if they don’t live up to the expectations
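
In practice the class is meant to be subclassed. The following is a minimal, hypothetical sketch of such a subclass that only relies on methods documented on this page; the init()-hook (suggested by __recursiveInit below), the sizeClass parameter name and the case path are assumptions, not confirmed API.

    # Hypothetical sketch of a CTestRun subclass (hedged, see note above)
    from PyFoam.Infrastructure.CTestRun import CTestRun

    class DamBreakTest(CTestRun):
        def init(self):
            # register the solver and the case that will be cloned and run
            self.setParameters(solver="interFoam",
                               originalCase="cases/damBreak",  # illustrative path
                               minimumRunTime=0.5,
                               sizeClass="small")              # assumed parameter name

    if __name__ == "__main__":
        DamBreakTest().run()   # run the actual test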

_CTestRun__doInit(solver, originalCase, minimumRunTime=None, referenceData=None, tailLength=50, headLength=50, **kwargs)

Initialization method to be called before running the actual test (the purpose of this method is to avoid cascades of constructor calls)

Parameters:
  • solver – name of the solver to test
  • originalCase – location of the original case files (they will be copied)
  • minimumRunTime – the solver has to run at least to this time for the run to be considered successful
  • referenceData – directory with data that is used for testing
  • tailLength – output that many lines from the end of the solver output
  • headLength – output that many lines from the beginning of the solver output

_CTestRun__recursiveInit(theClass, called)

Automatically call the ‘init’-method of the whole class hierarchy

_CTestRun__setParameterAsUsed(keys)
__getitem__(key)[source]

Get a parameter
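
Presumably this returns parameters that were registered via setParameters() or the init mechanism; a small, hypothetical sketch (the callback used and the message are illustrative):

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class ReportingTest(CTestRun):
        def postprocess(self):
            # read back a previously configured parameter and report it
            self.status("finished a run of solver", self["solver"])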

__init__()[source]
__module__ = 'PyFoam.Infrastructure.CTestRun'
static __new__(*args, **kwargs)[source]
__weakref__

list of weak references to the object (if defined)

addFunctionObjects(templateFile)[source]

Add entries for libraries and functionObjects to the controlDict (if they don’t already exist)

Parameters:
  • templateFile – file with the data that should be added

addToClone(*args)[source]
autoDecompose()[source]

Decomposition used if no callback is specified

autoReconstruct()[source]

Reconstruction used if no callback is specified

casePrepare()[source]

Callback to prepare the case. Default behaviour is to do nothing

cloneData(src, dst)[source]

Copy files recursively into a case

Parameters:
  • src – the source directory the files come from
  • dst – the destination directory the files go to

compareSamples(data, reference, fields, time=None, line=None, scaleData=1, offsetData=0, scaleX=1, offsetX=0, useReferenceForComparison=False)[source]

Compare sample data and return the statistics

Parameters:
  • data – the name of the data directory
  • reference – the name of the directory with the reference data
  • fields – list of the fields to compare
  • time – the time to compare for. If empty the latest time is used
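
A hedged sketch of how this could be used, here inside the postprocess() callback (whether comparisons belong there or in a dedicated test method is an assumption); the directory names and fields are illustrative, and since the structure of the returned statistics is not documented here the result is only reported:

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class SampleComparisonTest(CTestRun):
        def postprocess(self):
            # compare sampled data against reference data shipped with the test
            stats = self.compareSamples(data="sets",              # illustrative names
                                        reference="referenceSets",
                                        fields=["U", "p"])
            self.status("sample comparison statistics:", stats)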

compareTimelines(data, reference, fields)[source]

Compare timeline data and return the statistics

Parameters:
  • data – the name of the data directory
  • reference – the name of the directory with the reference data
  • fields – list of the fields to compare

controlDict()[source]

Access a representation of the controlDict of the case

dataDir()[source]
decompose()[source]

Callback to do the decomposition (if automatic is not sufficient)

endTest()[source]
execute(*args, **kwargs)[source]

Execute the passed arguments on the case and check if everything went alright

Parameters:
  • regexps – a list of regular expressions that the output should be scanned for
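
For illustration, a hedged sketch of a casePrepare() override that runs a utility on the cloned case and scans its output via the regexps keyword described above; the utility and the expression are illustrative:

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class FieldInitTest(CTestRun):
        def casePrepare(self):
            # initialise the fields and check that the utility reported activity
            self.execute("setFields",
                         regexps=[r"Setting .* values"])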

fail(*args)[source]

To be called if the test failed but other tests should be tried

Parameters:
  • args – arbitrary number of arguments that build the fail-message

failGeneral(prefix, *args)[source]

Parameters:
  • prefix – general classification of the failure
  • args – arbitrary number of arguments that build the fail-message

fatalFail(*args)[source]

Parameters:
  • args – arbitrary number of arguments that build the fail-message

generalTest(testFunction, args, *message)[source]
isBigger(value, threshold=0, message='')[source]
isEqual(value, target=0, tolerance=1e-10, message='')[source]
isNotEqual(value, target=0, tolerance=1e-10, message='')[source]
isSmaller(value, threshold=0, message='')[source]
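
A hedged sketch of these checks in a subclass; the method name (assuming a test-name prefix picked up by runTests(), as suggested by preRunTestCheckMesh below) and the numeric values are assumptions:

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class BasicChecks(CTestRun):
        def postRunTestBasics(self):      # name prefix is an assumption
            massBalance = 0.0             # placeholder values for illustration
            maxCourant = 0.8
            self.isEqual(massBalance, target=0, tolerance=1e-8,
                         message="global mass balance")
            self.isSmaller(maxCourant, threshold=1.0,
                           message="maximum Courant number stays below 1")
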
line()[source]
meshPrepare()[source]

Callback to prepare the mesh for the case. Default behaviour is to run blockMesh on the case
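
When blockMesh alone is not sufficient, this callback can be overridden; a hypothetical sketch that also uses which() (documented below) to report the mesher being used, with an illustrative utility chain:

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class SnappyMeshTest(CTestRun):
        def meshPrepare(self):
            # replace the default blockMesh-only mesh preparation
            self.status("using snappyHexMesh from", self.which("snappyHexMesh"))
            self.execute("blockMesh")
            self.execute("snappyHexMesh", "-overwrite")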

messageGeneral(prefix, say, *args)[source]

Everything that passes through this method will be repeated in the end

Parameters:
  • prefix – general classification of the message
  • args – arbitrary number of arguments that build the message

parallelPrepare()[source]

Callback to prepare the case in parallel (after it was decomposed). Default behaviour is to do nothing

parameterValues()[source]
postprocess()[source]

Callback to run after the solver has finished. Default behaviour is to do nothing

preRunTestCheckMesh()[source]

This test is always run. If this is not desirable it has to be overridden in a child-class

processOptions()[source]

Select which phase of the test should be run

readRunInfo()[source]

read the runInfo from a file

reconstruct()[source]

Callback to do the reconstruction (if automatic is not sufficient)

run()[source]

Run the actual test

runAndCatchExceptions(func, *args, **kwargs)[source]

Run a callable and catch Python-exceptions if they occur

Parameters:
  • func – the actual thing to be run

runCommand(*args)[source]

Run a command and let it directly write to the output

runInfo()[source]

return the run information (if the solver was actually run)

runTests(namePrefix, warnSerial=False)[source]

Run all methods that fit a certain name prefix

setParameters(**kwargs)[source]

Update the parameters with a set of keyword-arguments

setTimeout(quiet=False)[source]
shell(*args)[source]

Run a command in the case directory and let it directly write to the output

Parameters:
  • workingDirectory – change to this directory
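
A small, hypothetical sketch; the command and the callback it sits in are illustrative, and the workingDirectory option is left at its default:

    from PyFoam.Infrastructure.CTestRun import CTestRun

    class InspectingTest(CTestRun):
        def parallelPrepare(self):
            # inspect the case directory before the parallel run starts
            self.shell("ls", "-l", "system")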

shortTestName()[source]
sizeClassString()[source]
solution()[source]

Access to a SolutionDirectory-object that represents the current solution

status(*args)[source]

print a status message about the test

testName()[source]

Return the full test name with which this test is identified

timeoutDefinitions = [('unknown', 60), ('tiny', 60), ('small', 300), ('medium', 1800), ('big', 7200), ('huge', 43200), ('monster', 172800), ('unlimited', 2592000)]
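
For orientation, these timeouts correspond to 1 minute ('unknown' and 'tiny'), 5 minutes ('small'), 30 minutes ('medium'), 2 hours ('big'), 12 hours ('huge'), 48 hours ('monster') and 30 days ('unlimited').
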
warn(*args)[source]

Parameters:
  • args – arbitrary number of arguments that build the warning-message

which(command)[source]

Like the regular which command - return the full path to an executable

workDir()[source]
wrapACallback(name)[source]

Has to be a separate method because the loop in wrapCallbacks didn’t work

wrapCallbacks()[source]

Wrap the callback methods with a Python exception handler. This is not done statically so that methods that the child classes have overridden will be wrapped too

writeRunInfo()[source]

write the runInfo to a file

PyFoam.Infrastructure.CTestRun.isCallback(f)[source]