dasharo-performance-concurrent: Add proof of concept for concurrent tests #1130
Conversation
Force-pushed from 5f8f666 to 1988ada
Force-pushed from 3c685d6 to 3eb3d7e
Added the option for [...]. They are available as [...]. Tested using a temp test case:

*** Settings ***
Default Tags    automated
*** Test Cases ***
TEST001.001 Test scanning args
    ${TEST_CASES}=    Evaluate    list(${TEST_CASES})
    ${TEST_TAGS}=    Evaluate    list(${TEST_TAGS})
    Log To Console    \ntags: ${TEST_TAGS}
    Log To Console    \ncases: ${TEST_CASES}
    FOR    ${case}    IN    @{TEST_CASES}
        Log To Console    \tcase: ${case}
    END
CPT001.201
    Pass Execution    1
CPF005.201
    Pass Execution    2
ABC000.000 automated test
    Pass Execution    3
ABC001.000 Not-automated case
    [Tags]    semiauto
    Pass Execution    4

==============================================================================
Test                                                                          
==============================================================================
TEST001.001 Test scanning args                                        ..
tags: ['automated']
.
cases: ['TEST001.001', 'CPT001.201', 'ABC000.000', 'ABC001.000']
.       case: TEST001.001
       case: CPT001.201
       case: ABC000.000
       case: ABC001.000
TEST001.001 Test scanning args                                        | PASS |
------------------------------------------------------------------------------
CPT001.201                                                            | PASS |
1
------------------------------------------------------------------------------
ABC000.000 automated test                                             | PASS |
3
------------------------------------------------------------------------------
Test                                                                  | PASS |
3 tests, 3 passed, 0 failed
==============================================================================

Logs for good measure: [...]

The next step is to integrate this into the proof of concept, which should become much simpler.
With 6125b76 there is no real overhead for running tests in parallel.

Other than that, there are a couple of keyword tools to check which tests should be run and to decide which steps to perform in the parallel gathering tasks. All the keywords and variables related to managing this were moved to a library file. I don't think there is a smart way of simplifying places like the [...].

I'd like to emphasize that this approach is only needed for tests which require synchronization. SSH test cases that don't reboot or otherwise affect the shared resource (the device) can simply be run in parallel using [...]. I am not sure how to keep a single definition of a test case and somehow treat it differently when it comes to parallelisation if it is performed via SSH. SSH test cases don't need the constructions suggested in this PR if they don't affect the device state, so they shouldn't be joined into large suites like the [...].

This approach would also be needed if there are a couple of test cases in a suite where at least one depends on another, or if any single one reboots or otherwise affects the device.
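To make the scope check concrete, here is a minimal sketch of the idea (not the PR's actual keywords): it assumes the ${TEST_CASES} list produced by the argument-scanning code from the earlier comment, and the test case IDs used below are placeholders.

*** Test Cases ***
GATHER000.000 Perform shared-device steps only when needed
    # Sketch only: ${TEST_CASES} is assumed to hold the selected test cases,
    # as produced by the argument-scanning code shown above.
    ${reboot_cases}=    Create List    CPT001.201    CPF005.201
    ${needed}=    Evaluate    bool(set(${reboot_cases}) & set(${TEST_CASES}))
    IF    ${needed}
        Log To Console    Running the gathering step that reboots the device
    ELSE
        Log To Console    Skipping the reboot-based gathering step
    END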
Force-pushed from 2376d06 to 3a39e7a
Added the tests under load (reusing all the gathering code) and stability tests (extending the existing gathering code to run it in parallel with temperature and frequency measurements), to show how much effort making test cases run in parallel actually takes.
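As a rough illustration of running measurements concurrently with a load step, here is a minimal sketch (not the PR's implementation): it assumes a Linux machine exposing /sys/class/thermal/thermal_zone0/temp, uses the standard Process library, and uses sleep as a stand-in for the real load.

*** Settings ***
Library    Process

*** Test Cases ***
LOAD000.000 Measure temperature while a load step runs
    # Start a background sampler (stand-in for the temp & freq gathering).
    ${sampler}=    Start Process    bash    -c
    ...    for i in 1 2 3; do cat /sys/class/thermal/thermal_zone0/temp; sleep 1; done
    # Stand-in for the actual load step performed in the foreground.
    Run Process    sleep    3
    ${result}=    Wait For Process    ${sampler}
    Log To Console    Sampled temperatures:\n${result.stdout}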
Force-pushed from 06145d2 to 373bfd3
Added checking the power source: NA/Battery/AC/USB-PD.
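A minimal sketch of one way such a check could look, assuming a Linux system exposing /sys/class/power_supply; the PR's actual check (including real USB-PD detection) likely runs on the DUT over SSH and may use different logic.

*** Settings ***
Library    OperatingSystem
Library    String

*** Test Cases ***
PWR000.000 Detect power source
    ${source}=    Set Variable    NA
    ${supplies}=    List Directories In Directory    /sys/class/power_supply
    IF    ${supplies}
        FOR    ${supply}    IN    @{supplies}
            # The "type" attribute is typically Mains, Battery or USB.
            ${type}=    Get File    /sys/class/power_supply/${supply}/type
            ${type}=    Strip String    ${type}
            IF    '${type}' == 'Mains'
                ${source}=    Set Variable    AC
            ELSE IF    '${type}' == 'USB'
                # Telling plain USB apart from USB-PD would need the usb_type
                # attribute; treated as USB-PD here only for illustration.
                ${source}=    Set Variable    USB-PD
            ELSE IF    '${type}' == 'Battery' and '${source}' == 'NA'
                ${source}=    Set Variable    Battery
            END
        END
    END
    Log To Console    Power source: ${source}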
Force-pushed from 1ab09a6 to dfd892f
@philipanda fix pre-commit and rebase
It's a proof of concept for running test cases in parallel, with common execution steps. This approach, while it might look complicated and unconventional, can allow for drastic improvements in test suite execution times. By using the supported test cases dict and the _CANARY_ and _PSEUDO_ test cases, the actual test scope can be determined; then all the tests can be performed in parallel in any way we find reasonable. The actual test cases can then access suite variables created by the _PSEUDO_ test cases to determine whether they PASS or not.

Signed-off-by: Filip Gołaś <[email protected]>
…e gather steps
Signed-off-by: Filip Gołaś <[email protected]>
It's not parallel execution. It's just concurrent.
Force-pushed from 0485a30 to ba42735
It's a proof of concept for running test cases concurrently, with common execution steps. This approach, while it might look complicated and unconventional, can allow for drastic improvements in test suite execution times.

By using the supported test cases dict and the CONCURRENT test cases, the actual test scope can be determined; then all the tests can be performed concurrently in any way we find reasonable. The actual test cases can then access suite variables created by the CONCURRENT test cases to determine whether they PASS or not.
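For illustration only, a minimal sketch of this pattern with placeholder names and dummy values (the real CONCURRENT case would run the measurements for all in-scope test cases concurrently, and the thresholds below are invented for the sketch):

*** Test Cases ***
CONCURRENT000.000 Gather all results up front
    # Placeholder: the real gathering would happen here, concurrently,
    # and publish its results as suite variables.
    Set Suite Variable    ${CPT001_RESULT}    ${45}
    Set Suite Variable    ${CPF005_RESULT}    ${1800}

CPT001.201 Example check using a gathered result
    # Consumes the value published by the CONCURRENT case above.
    Should Be True    ${CPT001_RESULT} < 90

CPF005.201 Another example check
    Should Be True    ${CPF005_RESULT} > 1000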
TODO before merging: