Test execution steps playback

Magnus Teekivi

Mar 28, 2021, 10:51:06 AM
to GraphWalker
Hello!

Let's say a GraphWalker-based test has finished running due to an assertion error. To find out what could have caused the error, I would like to be able to play back the steps (vertices, edges) that were taken on the test models without having to actually re-run the test (although that could be another option). I imagine that certain issues with a System Under Test (SUT) might surface only if a specific sequence of actions was previously taken, so being able to inspect which steps led to an issue can be beneficial.

I think this kind of playback could be made possible by an Observer on the TestExecutor which writes the test "trace" (e.g. one line per step, containing the active model, the vertex/edge entered, and possibly the current values of the model variables) into a file.
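
Roughly, I imagine the observer looking something like this – a minimal sketch, assuming the Observer interface and EventType from graphwalker-core; the trace line format and the getKeys() call for reading model data are my own assumptions:

    import java.io.IOException;
    import java.io.PrintWriter;
    import java.nio.file.Files;
    import java.nio.file.Path;

    import org.graphwalker.core.event.EventType;
    import org.graphwalker.core.event.Observer;
    import org.graphwalker.core.machine.Machine;
    import org.graphwalker.core.model.Element;

    // Sketch of a trace-writing observer. One line per executed step:
    // model name; element (vertex/edge) name; current model data.
    public class TraceObserver implements Observer {

        private final PrintWriter out;

        public TraceObserver(Path traceFile) throws IOException {
            // Flush after every line so the trace survives a crashed run
            this.out = new PrintWriter(Files.newBufferedWriter(traceFile), true);
        }

        @Override
        public void update(Machine machine, Element element, EventType type) {
            if (type == EventType.AFTER_ELEMENT) {
                out.printf("%s;%s;%s%n",
                    machine.getCurrentContext().getModel().getName(),
                    element.getName(),
                    machine.getCurrentContext().getKeys());
            }
        }
    }

The observer would then be registered on the machine (I believe via addObserver) before the run is started.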

Given that I have this trace file for a test run, how could I achieve this kind of playback of steps? Is this something that can be done in GraphWalker Studio? If not currently, how difficult or sensible would it be to implement something like that in Studio? The way Studio can currently visualize a test execution as it progresses (over a WebSocketServer connection) is basically what I want to achieve, but with the possibility of playing back the steps after the test has already been executed, and with the ability to step both forwards and backwards in the playback. It would also be good to see the current values of the model variables at each step.

Would it make more sense to create a separate GUI tool for this? For this option, I have thought of using the code from GraphStreamObserver in graphwalker-example/java-petclinic. I could embed the GraphStream viewer into a Java Swing GUI window together with a list of steps from a chosen trace file, along with some playback controls. This solution probably wouldn't look or work as nicely as something integrated into Studio.

Or are there other approaches to consider regarding this topic?

Thanks in advance,
Magnus.

Kristian Karl

Mar 31, 2021, 5:01:48 AM
to GraphWalker
Hi,

This is a very interesting topic!

We have the ReplayMachine; here's a test showing how it works, but it needs a Profiler object.
The Profiler object does not readily ingest a log or similar from a previous execution.
But if the Profiler could do that, it should be fairly simple to add that replay-from-earlier-execution feature to Studio.
I think we should integrate the replay feature into Studio for the visualization.

So I see it as 2 stories to implement:
1 - Make a Profiler that can ingest a previous execution.
2 - Visualize the replay in Studio.
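
For reference, the test does roughly the following – simplified from ReplayMachineTest, so details of the setup may differ slightly:

    import org.graphwalker.core.condition.EdgeCoverage;
    import org.graphwalker.core.generator.RandomPath;
    import org.graphwalker.core.machine.Context;
    import org.graphwalker.core.machine.ReplayMachine;
    import org.graphwalker.core.machine.SimpleMachine;
    import org.graphwalker.core.machine.TestExecutionContext;
    import org.graphwalker.core.model.Edge;
    import org.graphwalker.core.model.Model;
    import org.graphwalker.core.model.Vertex;

    public class ReplayExample {

        public static void main(String[] args) {
            // A trivial model: one vertex with a self-loop edge
            Vertex vertex = new Vertex().setName("v_Start");
            Model model = new Model().addEdge(new Edge()
                .setName("e_Loop")
                .setSourceVertex(vertex)
                .setTargetVertex(vertex));

            // TestExecutionContext is the Context implementation used by
            // the core tests; in a real suite this would be your own
            // Context subclass
            Context context = new TestExecutionContext(model,
                new RandomPath(new EdgeCoverage(100)));
            context.setNextElement(vertex);

            // Run the machine to completion; its Profiler records the path
            SimpleMachine machine = new SimpleMachine(context);
            while (machine.hasNextStep()) {
                machine.getNextStep();
            }

            // The ReplayMachine is built from the finished run's Profiler
            // and walks the exact same path again
            ReplayMachine replayMachine = new ReplayMachine(machine.getProfiler());
            while (replayMachine.hasNextStep()) {
                replayMachine.getNextStep();
            }
        }
    }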

Anything I missed?

/Kristian

Magnus Teekivi

Apr 3, 2021, 5:18:33 AM
to GraphWalker

Hello.

I would like to attempt to implement these stories. I've done a bit of initial experimenting.

Regarding the first story, I chose a different approach than writing a new Profiler implementation. Instead, I took a look at the source code of SimpleProfiler and noticed that it can be turned into a profiler that ingests a previous execution by making it possible to construct an instance of it from an execution path. For that, I created a static method named SimpleProfiler.createFromExecutionPath(List<Execution>). As its name implies, it takes an execution path as an argument, constructs a SimpleProfiler, goes through the Execution instances, and does things similar to what the start and end methods do. To test the method, I added two tests (in copy-paste fashion) to ReplayMachineTest that construct the ReplayMachine using a SimpleProfiler built from the original machine's execution path. Perhaps there is a better place and way to test this new functionality?
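
In outline, the method looks like this. Note that addExecution() below is a hypothetical stand-in for the per-step bookkeeping that start()/stop() normally perform during a live run; the actual PR inlines that logic inside SimpleProfiler:

    import java.util.List;

    import org.graphwalker.core.statistics.Execution;
    import org.graphwalker.core.statistics.SimpleProfiler;

    // Outline of the factory method. addExecution() is a hypothetical
    // helper standing in for the bookkeeping that start()/stop() do
    // during a live run; the real PR inlines that logic.
    public static SimpleProfiler createFromExecutionPath(List<Execution> executionPath) {
        SimpleProfiler profiler = new SimpleProfiler();
        for (Execution execution : executionPath) {
            profiler.addExecution(execution);
        }
        return profiler;
    }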

I have created a draft/WIP Pull Request for the first story that contains the code described in the previous paragraph. I'd be happy to get feedback on what I have done so far. I plan to use the same PR for further development of the story.

I also thought about extending the SimpleProfiler class, but this would have required changing the visibility of the inner executions variable to protected. There is also the possibility of creating a new implementation of Profiler altogether, but I suspect that would duplicate code already present in SimpleProfiler. My current approach can be considered an initial attempt; I'm open to suggestions.

Being able to create a Profiler from an execution path is only one part of the story. The more difficult part is to define how to save an execution path to a file and later load it back. I read in graphwalker-core/README.md, under "Design goal", that "For example, the core itself does not know how to read and parse models from file, that's handles by another module: graphwalker-io." To me, this suggests that the execution path saving and loading logic should be placed in graphwalker-io. The logic could also be implemented to deal specifically with saving and loading Profilers.
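
To make this concrete, a saver in graphwalker-io could look roughly like the sketch below. The class name and the per-step fields are entirely made up, just to show the shape I have in mind, not an existing API:

    import java.io.IOException;
    import java.nio.file.Path;
    import java.util.ArrayList;
    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import org.graphwalker.core.statistics.Execution;

    // Hypothetical sketch of an execution-path saver for graphwalker-io.
    // Class name and per-step fields are invented for illustration.
    public class ExecutionPathWriter {

        private final ObjectMapper mapper = new ObjectMapper();

        public void write(List<Execution> executionPath, Path target) throws IOException {
            List<Map<String, Object>> steps = new ArrayList<>();
            for (Execution execution : executionPath) {
                Map<String, Object> step = new LinkedHashMap<>();
                // One entry per executed step: which model, which element
                step.put("modelName", execution.getContext().getModel().getName());
                step.put("elementName", execution.getElement().getName());
                steps.add(step);
            }
            mapper.writerWithDefaultPrettyPrinter().writeValue(target.toFile(), steps);
        }
    }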

Regarding the second story about visualizing the replay in Studio: as far as I am aware, there are two versions of the Studio – one accessible at the root web path "/" and another (I assume newer) one accessible at the web path "/studio.html". Do you mean the latter one here? Also, what I think would improve the experience of replaying an execution is the ability to step backwards.

I think one more story could be added: storing model attribute/variable values in the execution path and displaying the current attribute values at each step when replaying an execution. As far as I know, this information is currently not stored as part of an execution path. This can be helpful for figuring out why a test run failed. Displaying the current model attribute values can also be useful while visualizing a test execution that is in progress.

I'm open to opinions and suggestions. I would especially like to hear your ideas on how to implement saving an execution path or Profiler to a file and loading it back.

All the best,
Magnus.

Kristian Karl

Apr 6, 2021, 3:04:46 AM
to GraphWalker
Hi Magnus,
Awesome that you started working on this!
I'll have a look at the draft/WIP Pull Request during the day and add feedback there.
Best, Kristian

Kristian Karl

Apr 6, 2021, 3:12:48 AM
to GraphWalker
1) I agree with placing the loading logic in graphwalker-io.
2) Regarding visualization, I mean the latter one (/studio.html).
3) Playback of a previous execution should include stepping back/forward. Maybe even a scroll bar to move quickly back and forth?
4) Yes, the values of the [model] data at each step should also be displayed during playback.

Magnus Teekivi

Apr 15, 2021, 9:17:26 AM
to GraphWalker

Hello.

I wanted to try out a ReplayMachine created from a multi-model machine, but I ran into the following error as soon as the execution left the first model:

org.graphwalker.core.machine.MachineException: No path generator is defined

I investigated the issue, created a test in ReplayMachineTest demonstrating it, and provided my fix in this Pull Request.


Regarding the replaying of a previous execution, I currently have some questions and thoughts:

  1. ReplayMachine (as far as I know) actually re-executes the (Java) methods from the model implementation classes. I would like to replay a previous execution only visually, without the test actually being run, or at least make re-running the actual test optional. I think this behavior is necessary to allow stepping backwards and scrolling to an arbitrary point of the execution. How could this be achieved?

  2. Regarding loading and saving an execution from/to a file – how should this file be structured? I think it could be in .json format, and it should certainly contain the list of execution steps, each step/item representing an Execution instance (see the sketch after this list).

  3. To be able to show the values of the [model] data, maybe it would make sense to add model attribute/variable data to the Execution class?

  4. It also needs to be decided whether and how contexts/models should be stored in this execution file. Earlier I thought about just referring to model file paths in the execution file, but now I think it might be better to embed the whole model .json contents into the execution file. I consider this the somewhat better solution because one might run a test and save its execution to a file, and if the model is later changed, replaying an earlier execution that refers to the changed model might not work. A self-contained format avoids this issue and also makes it easier to move an execution file from one computer to another. The downside is obviously a bigger execution file. What is your opinion on this?

  5. I think it might also be beneficial to store the error message and stack trace from the test execution, if it failed.
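
To make question 2 concrete, here is one possible shape for such a file. Every field name is just a suggestion, and it folds in points 3 to 5 as well (model data per step, embedded models, failure info); the model and element names are made up:

    {
      "models": [
        { ...embedded model .json contents... }
      ],
      "failure": {
        "message": "...",
        "stackTrace": "..."
      },
      "executionPath": [
        {
          "modelName": "PetClinic",
          "elementType": "edge",
          "elementName": "e_StartBrowser",
          "data": { "numOfPets": "1" }
        }
      ]
    }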


All the best,
Magnus.
