Hello.
I would like to take a stab at implementing these stories. I've done a bit of initial experimenting.
Regarding the first story, I chose a different approach than creating a new Profiler implementation. Looking at the source code of SimpleProfiler, I noticed it could be made to ingest a previous execution if it were possible to construct an instance from an execution path. To that end, I created a static method named SimpleProfiler.createFromExecutionPath(List<Execution>). As its name implies, it takes an execution path, constructs a SimpleProfiler, and iterates over the Execution instances, performing the same bookkeeping as the start and end methods. To test the method, I added two tests (in copy-paste fashion) to ReplayMachineTest that construct the ReplayMachine using a SimpleProfiler built from the original machine's execution path. Perhaps there is a better place and way to test this new functionality?
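To make the idea concrete, here is a minimal, self-contained sketch of what such a factory could look like. The types below are simplified stand-ins, not the real graphwalker-core classes (the actual Execution and SimpleProfiler carry more state, such as contexts and timestamps), and the method bodies are assumptions for illustration only:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Simplified stand-in for graphwalker-core's Execution; the real class holds more state.
class Execution {
    final String elementId;
    final long durationNanos;

    Execution(String elementId, long durationNanos) {
        this.elementId = elementId;
        this.durationNanos = durationNanos;
    }
}

// Sketch of a SimpleProfiler-like class with a createFromExecutionPath factory.
class SimpleProfilerSketch {
    // Per-element list of recorded step durations, keyed by element id.
    private final Map<String, List<Long>> visits = new HashMap<>();

    // Replays the bookkeeping normally done in start()/end() for each step.
    static SimpleProfilerSketch createFromExecutionPath(List<Execution> path) {
        SimpleProfilerSketch profiler = new SimpleProfilerSketch();
        for (Execution execution : path) {
            profiler.visits
                    .computeIfAbsent(execution.elementId, k -> new ArrayList<>())
                    .add(execution.durationNanos);
        }
        return profiler;
    }

    long getVisitCount(String elementId) {
        return visits.getOrDefault(elementId, List.of()).size();
    }
}
```

The point of the factory shape is that all existing profiling queries keep working unchanged, since the internal state ends up the same as if the profiler had observed the execution live.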
I have created a draft/WIP Pull Request for the first story containing the code described in the previous paragraph. I'd be happy to get feedback on what I have done so far. I plan to use the same PR for further development of the story.
I also considered extending the SimpleProfiler class, but that would have required changing the visibility of the internal executions field to protected. Another option would be a brand-new Profiler implementation, but I suspect it would duplicate much of the code already in SimpleProfiler. My current approach can be considered an initial attempt; I'm open to suggestions.
Being able to create a Profiler from an execution path is only one part of the story. The harder part is defining how to save an execution path to a file and load it back later. The "Design goal" section of graphwalker-core/README.md says: "For example, the core itself does not know how to read and parse models from file, that's handles by another module: graphwalker-io." To me, this suggests that the saving and loading logic belongs in graphwalker-io. The logic could also deal specifically with saving and loading Profilers.
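If the persistence logic lives in a graphwalker-io-style module, its public surface could be a simple save/load pair. The sketch below is purely hypothetical: the class name is made up, and a trivial one-element-id-per-line text format stands in for whatever real serialization (e.g. JSON of full Execution steps) would be chosen:

```java
import java.io.IOException;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;

// Hypothetical persistence facade; a stand-in for a real JSON serializer.
class ExecutionPathFile {

    // Writes one element id per line.
    static void save(Path file, List<String> elementIds) {
        try {
            Files.write(file, elementIds, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Reads the element ids back in order.
    static List<String> load(Path file) {
        try {
            return Files.readAllLines(file, StandardCharsets.UTF_8);
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }

    // Convenience for demonstration: save to a temp file and load it back.
    static List<String> roundTrip(List<String> elementIds) {
        try {
            Path file = Files.createTempFile("execution-path", ".txt");
            save(file, elementIds);
            List<String> loaded = load(file);
            Files.deleteIfExists(file);
            return loaded;
        } catch (IOException e) {
            throw new UncheckedIOException(e);
        }
    }
}
```

Whatever the on-disk format ends up being, a load(save(x)) round-trip test like this would be a natural first test case for the new module code.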
Regarding the second story about visualizing the replay in Studio: as far as I am aware, there are two versions of Studio – one accessible at the root web path "/" and another (I assume newer) one accessible at "/studio.html". Do you mean the latter here? Also, one thing I think would improve the replay experience is the ability to step backwards.
One more story that could be added: store the model attribute (variable) values in the execution path and display them at each step when replaying an execution. As far as I know, this information is currently not stored as part of an execution path. It would be helpful for figuring out why a test run failed, and displaying the current attribute values would also be useful while visualizing a test execution in progress.
I'm open to opinions and suggestions. I would especially like to hear your ideas on how to save an execution path or Profiler to a file and load it back.
All the best,
Magnus.
Hello.
I wanted to try out a ReplayMachine created from a multi-model machine, but I ran into the following error as soon as the execution left the first model:
org.graphwalker.core.machine.MachineException: No path generator is defined
I investigated the issue, created a test in ReplayMachineTest demonstrating it, and provided my fix in this Pull Request.
Regarding the replaying of a previous execution, I currently have some questions and thoughts:
ReplayMachine (as far as I know) actually re-executes the (Java) methods of the model implementation classes. I would like to replay a previous execution only visually, without the test actually being run, or at least make re-running the actual test optional. I think this kind of behavior is necessary to allow stepping backwards and jumping to an arbitrary point of the execution. How could this be achieved?
Regarding loading and saving an execution from/to a file – how should this file be structured? I think it can be JSON, and it should certainly contain the list of execution steps (each step representing an Execution instance).
To be able to show the values of model data, maybe it would make sense to add model attribute/variable data to the Execution class?
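One way this could look – again purely as a hypothetical sketch, not the real Execution class – is a step record that snapshots the attribute values at the moment the element is reached:

```java
import java.util.Map;

// Hypothetical Execution-like step that also snapshots the model's
// attribute (variable) values at the time the element was reached.
class ExecutionStep {
    final String elementId;
    final Map<String, Object> attributeSnapshot;

    ExecutionStep(String elementId, Map<String, Object> attributes) {
        this.elementId = elementId;
        // Defensive copy: later changes to the live attributes must not
        // rewrite the history recorded in earlier steps.
        this.attributeSnapshot = Map.copyOf(attributes);
    }
}
```

The defensive copy matters for replay: each step has to keep the values as they were then, not a reference to the live, mutating attribute map.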
We also need to decide whether, and how, contexts/models should be stored in this execution file. Earlier I thought about simply referring to model file paths from the execution file, but now I think it might be better to embed the whole model .json contents into it. That seems somewhat better: one might run a test and save its execution to a file, and if the model is later changed, replaying the earlier execution against the changed model might not work. A self-contained format avoids this problem and also makes it easier to move an execution file from one place/computer to another. The obvious downside is a bigger execution file. What is your opinion on this?
I think it might also be beneficial to store the error message and stack trace from the test execution, in case the execution failed.
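Pulling these points together, a self-contained execution file could look roughly like this. Every field name here is hypothetical, just to make the idea concrete:

```json
{
  "models": [
    { "name": "Login", "json": { "comment": "full embedded model .json contents go here" } }
  ],
  "result": {
    "failed": true,
    "errorMessage": "expected X but was Y",
    "stackTrace": "..."
  },
  "executionPath": [
    { "model": "Login", "element": "e_Init",     "data": { "loggedIn": false } },
    { "model": "Login", "element": "v_LoggedIn", "data": { "loggedIn": true } }
  ]
}
```

Embedding the models and the per-step data this way would keep the file replayable on another machine even after the original model files change.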
All the best,
Magnus.