Have TestExecutor run multiple tests with a single instance of node by ozyx · Pull Request #1383 · microsoft/nodejstools

For communicating between the test runner node process and the C# code, you could look into some of the communications channel approaches I mentioned previously or you can keep using stdout as the communications channel.

The main difference between what the code currently does and my proposal is that you'd send results incrementally, as soon as they become available. Here's an overview of what this would look like:

  1. C# code spawns node test runner and sends it a JSON list of tests to run.
  2. In the C# code, start a loop that processes lines received from stdout of the node test runner.
  3. In the test runner JavaScript code, create a callback function called postResult that will be invoked when each test is completed.
    • This function takes a result object, and converts it to a single line JSON string, logging this string to stdout.
  4. For each test:
    • Schedule it to be run, passing in the postResult callback to be invoked when test is completed.
  5. Once all tests have been run on the JavaScript side, terminate the program.
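The steps above can be sketched on the JavaScript side roughly as follows. This is only an illustration of the shape, not the actual NTVS code; the names (postResult, runTests, and the test object fields) are hypothetical.

```javascript
// Hypothetical sketch of the node test runner side of the protocol.
// Convert a result object to a single-line JSON string and log it to stdout.
function postResult(result) {
    // JSON.stringify produces no newlines by default, so one result = one line.
    var line = JSON.stringify(result);
    process.stdout.write(line + "\n");
    return line;
}

// Run each test, invoking the callback as soon as that test completes.
function runTests(tests, post) {
    tests.forEach(function (test) {
        var passed = true;
        try {
            test.fn();
        } catch (e) {
            passed = false;
        }
        post({ title: test.title, passed: passed });
    });
    // Once all tests have run, the process simply exits.
}

// Example: the list of tests would normally arrive as JSON from the C# side.
runTests([{ title: "my test", fn: function () {} }], postResult);
```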

This would then allow you to easily add support for additional events, such as when a test starts. To support these, in the JS code, instead of sending results directly, send event objects:

{
    "type": "result",
    "result": {
        "title": "my test",
        "passed": true,
        ...
    }
}

Then you can define additional events for the C# code to handle.
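Wrapping payloads in typed events could be as simple as the helper below. Again, this is a sketch with a hypothetical name (postEvent), not existing code.

```javascript
// Hypothetical helper: wrap any payload in a typed event and emit it
// as a single line of JSON on stdout.
function postEvent(type, payload) {
    var event = Object.assign({ type: type }, payload);
    var line = JSON.stringify(event);
    process.stdout.write(line + "\n");
    return line;
}

// A completed test becomes a "result" event; new event kinds
// need no changes to the transport, only a new type string.
postEvent("result", { result: { title: "my test", passed: true } });
```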

Right before a test is run, for example, you could send a test start event:

{
    "type": "testStart",
    "title": "my test"
}

The C# code would then only need to determine which type of event it received on stdout, based on the type field, and handle it appropriately. Adding a test start event would allow you to correctly set the test start times and durations.
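The dispatch on the parent side is just parse-then-switch on the type field. The real implementation would be C# reading lines from the child's stdout; the sketch below uses JavaScript for brevity, and the handler names are hypothetical.

```javascript
// Illustrative sketch of parent-side event dispatch (the real code
// would be C# processing lines from the node test runner's stdout).
function handleLine(line, handlers) {
    var event = JSON.parse(line);
    switch (event.type) {
        case "result":
            handlers.onResult(event.result);   // record pass/fail and end time
            return "result";
        case "testStart":
            handlers.onTestStart(event.title); // record start time for duration
            return "testStart";
        default:
            // Unknown event types are ignored, keeping the protocol extensible.
            return "unknown";
    }
}

// Example usage with no-op handlers:
handleLine('{"type":"testStart","title":"my test"}', {
    onResult: function () {},
    onTestStart: function () {},
});
```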


For testing, unfortunately we do not have a good automated testing story in this area, so most of it has to be manual testing. I recommend that you set up a few projects that cover the core test runner scenarios. Make sure to hit all the test frameworks: export runner, Mocha (both v2 and v3), and Tape. Then, start thinking of other cases that should be handled properly too. This could include:

  • Running only selected tests.
  • Multiple tests in multiple files.
  • Test files in different folders.
  • Tests that write to stdout or stderr.
  • The test runner throws an exception (perhaps the test library was not found) or crashes.
  • There is a long running test.
  • Multiple test frameworks used in a single project (both the export runner and Mocha, for example).
  • Asynchronous tests (usually this should be handled by the test library itself, but it's worth checking).
  • Tests with odd names.
  • Tests with the same name.
    ...

Just make sure to document what should be tested and how it should be tested. We'll add those notes to our test plans. This doesn't have to be terribly extensive, but should give a developer some idea of what to cover on a test pass.

Once we are confident that the new test runner code handles the majority of cases well enough, we'll merge the code into master and start getting it into people's hands using devbuilds. This will give the code some real world exposure and probably reveal some bugs that will have to be addressed before shipping.