Integration testing of micro services


Ben Houghton

Nov 3, 2015, 5:32:10 AM
to Selenium Users
Hi,

When creating end to end tests for a collection of micro services (all of which have their own respective tests to verify their functionality), do you publish and reuse the page models etc. used within each of the services, or would you look to recreate this testing infrastructure for the end to end test solution?

In either scenario, how would you ensure that the end to end tests are kept up to date with changes in the micro services within a continuous delivery framework? For example, a new widget is added to the presentation layer of one of the services that requires interaction as part of the end to end journey.

Cheers,

Ben

David

Nov 4, 2015, 1:29:47 AM
to Selenium Users
This sounds rather interesting to discuss. What is your use case or rather your system architecture, if you could share some info.

Generally microservices refer more to back end components of the system, like REST, APIs, parts generally w/o a UI. The UI is just one micro service among the rest, if it is to be considered a microservice itself (since it typically depends on the other microservices to function, so in itself it's not really one).

Do you have a system comprised of multiple front end, UI-facing micro services? If it is the typical scenario, this question of yours wouldn't really apply to Selenium and page objects, because those are only for the UI and not the micro services that the UI depends on. You would only test the end to end system at the UI level.

The only case before that is if you have a widget test framework for testing UI widgets in isolation from the rest of the UI and data/state dependencies. In that case, you should reuse the widget test infrastructure as part of the end to end test infrastructure, where feasible and applicable.

Ben Houghton

Nov 6, 2015, 11:32:03 AM
to Selenium Users
Hi David,

We have a number of services that are all responsible for their own UI.  The user journey is comprised of a user passing through these UIs to complete a journey.  Think a product catalogue service, followed by an order and payment service (simplified, but it conveys the idea).  Whilst one approach is to do away with traditional end to end tests, the testing specialists on the team won't be very keen to release something against which a traditional end to end user journey hasn't been executed.  Given that this is going to be a repetitive exercise, it would make sense to automate this.  Hence the pondering on the best place to contain the page models etc.

Hope the above gives a better insight into the architecture.

Cheers,

Ben

David

Nov 6, 2015, 5:56:56 PM
to Selenium Users
Sounds interesting. So is this an architecture that is micro service divided internally but to the end user (externally) all the micro services appear linked together as a single website? E.g. user navigates product catalog (on website I assume), then orders item off catalog, and completes payment for the order.

If my assumption is not correct, please clarify/elaborate more on how the end user sees these micro services exposed to them. If my assumption is correct, then I would model the UI test framework on the end user experience instead. Meaning treat the test approach as end to end rather than micro services, even though the components tested are micro services. So a single POM that spans across micro services, but you organize the POM pages such that they are grouped/categorized by the micro services to which they belong. And you define test suites and test cases that target a particular micro service, groups of micro services, or the whole system. And the POM pages are designed to be usable across tests (whether for single/multiple micro services or for the entire system).
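To make the "single POM grouped by micro service" idea concrete, here is a minimal sketch in Python. The directory layout in the comment and the page/locator names are all made up for illustration; the page object just wraps whatever WebDriver-like object you pass in, so any driver with `find_element(by, value)` works.

```python
# Hypothetical layout: one POM tree, pages grouped by the microservice
# they belong to, shared by per-service and end-to-end test suites.
#
#   pages/
#     catalog/search_page.py
#     orders/checkout_page.py
#     payments/payment_page.py

class SearchPage:
    """Page object for the catalog service's search screen (illustrative)."""

    SEARCH_BOX = ("id", "search")        # locator tuples, (By, value) style
    SUBMIT = ("id", "search-submit")

    def __init__(self, driver):
        self.driver = driver

    def search_for(self, term):
        # Type the term and submit; in a real suite this would return the
        # next page object in the journey rather than self.
        self.driver.find_element(*self.SEARCH_BOX).send_keys(term)
        self.driver.find_element(*self.SUBMIT).click()
        return self
```

Because the page object only depends on the driver interface, the same class can be imported by the catalog service's own tests and by the end to end journey tests.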

The tricky part of testing the UI in terms of micro service (standalone) is if you have dependencies with other micro services (UI facing or not). You would then have to handle those dependencies (stub/mock/fake the data/state dependency, etc.).

Also, while not directly related, this deck of slides of mine might be potentially useful as a reference to you in terms of thinking of testing UIs as "micro services" chained together to complete a whole system: https://speakerdeck.com/dav1d1uu/ui-atdd-with-selenium. It doesn't offer any solutions or code, just thoughts on how to approach things.

Ben Houghton

Nov 10, 2015, 11:47:35 AM
to Selenium Users
Yeah an internally divided microservice collection is a good way of describing it.

In terms of testing the individual services, any dependencies a service has on others are replaced with stub versions, so that we can be sure any failures are due to the code internal to the service, and not as a result of a dependency.

With regards to the end to end testing, if the POMs are stored in the end to end solution (organised with respect to the microservices) then there will be a disconnect between changes to a service and the required refactoring of the POM.  Let's say, for example, that an additional step has been added to the catalog search functionality, and the user now has to select whether or not they wish to filter the search by products in stock (probably not a good UX, but it illustrates the example).

Since the page models are kept in a separate solution, and as such more than likely a separate repository, two sets of changes are required to implement this: changes to the service solution, and changes to the page models in the end to end test solution.  As such our continuous delivery model will fail, since the changes will be promoted to the integration environment individually.  Either the service changes first, meaning the integration tests will fail as they are using the old page models, or the POMs first, which in turn will also cause the tests to fail as the radio buttons won't be present in the old version of the service.  Whichever way around, the tests will fail until both sets of changes have been promoted to the integration environment.

David

Nov 12, 2015, 12:56:11 AM
to Selenium Users
That's a known issue with dependent services that make up a whole system, whether microservices or not, when you split tests across separate repositories. And it's not just for testing; it can affect actual app code too, when set up the same way in terms of grouping and dependencies across distributed repos.

A possible solution to this is dependency management and constructing a central repository for (test) artifacts. How you do it is up to you and may also depend on your technology stack. You could have it as you say, with test code (POMs and test cases) stored with their respective microservice repos. On successful builds, the test artifacts (the source code if scripting-language based, or compiled binary assets such as JARs, Java class files, .NET DLLs, etc.) are archived to a central repository just like app code (as releases, versions). The integration test repo, on test runs as part of the test pipeline, will pull from the central asset repo the test artifacts needed to run the integration tests. This can be version managed as well, if the integration test needs to test with specific versions of each microservice (otherwise, it will use the latest available for each).

If the test stack is Java, you could do this with Maven and internally hosted Maven repos. There should be something similar for other languages: NuGet for .NET, Ruby gems, pip and Python packages, npm for Node.js, CPAN for Perl. Although for these languages I'm unsure whether there are options for private repo hosting or if they're all public; but if they're public, you could still package/compile the asset into a bundle that is normally distributed via a public repo, host the files internally over HTTP, wget/curl them from the integration test pipeline, and use your language platform's package installer to install the package.

In essence, treat your POMs as if they are distributable libraries to be used with your programming language, only instead of using them to build apps or as utility libraries, you're using them to test apps.
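On a Python/pip stack, "POMs as distributable libraries" could look something like the fragment below: each microservice's build publishes its POM package to an internally hosted index, and the end to end repo pins the versions it wants to test against. The package names, versions, and index URL are all hypothetical.

```
# requirements.txt in the end to end test repo: POM packages published by
# each microservice's build, resolved against an internally hosted index
# (all names, versions, and the URL are made up for illustration)
--index-url https://pypi.internal.example.com/simple
catalog-poms==1.4.0
orders-poms==2.0.1
payments-poms==0.9.3
```

Bumping a pinned version here is then the single, explicit point at which the end to end suite picks up a microservice's POM changes.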

There would be no test code change in terms of code logic with this solution. The only change you have to make is how you structure the end to end test code's file layout, e.g. a master file tree with placeholder directories that house each microservice's POMs and test cases. At test runtime, it pulls the missing/outdated files into those directories by installing the missing/dependent (test) packages. The directories also don't have to exist beforehand; they can be created on first use (say, for a newly created microservice). Only the end to end test cases need to be aware of the dependent POMs and test cases they need to call (in the file tree).

Think of this in Unix terms as symlinks from the end-to-end test repo to the microservice repo sources (or shortcuts on Windows). It could in fact be done that way, for test code that is all source code (it works for compilable tests too, if you choose to compile at end to end test runtime), with source code repo updates before running the end to end tests. Here's a sample illustration:

End-to-end Test root
 |-- end to end test suite(s)
 |-- end to end test case directories
      |-- end to end test cases
 |-- microservice A
      |-- POMs via directory structure
      |-- microservice A test cases
 |-- microservice B, and so on

In the clean state (in the source code repo), the end to end test repo's microservice directories are empty. On each run, the end to end test pipeline does this as a prerequisite:

1. update the local repo of the end to end tests, pulling from source control as needed
2. go into each microservice directory
3. run git pull origin master (of the microservice repo in source control) to fetch the latest changes, or git clone (microservice repo) if in the default clean, empty state. If using other version control, it's a similar process with SVN, etc.

Then when that's all done, you can execute the tests; all the dependent microservice test code has been fetched. And if no updates were made/needed, the git pull (or equivalent for other tools) will do nothing, so you don't waste time needlessly copying unchanged files.

FYI, version management with this type of solution should be possible for packages and assets (GitHub releases, Maven, gems, Python packages, npm), but I'm not sure how that's done with (git) source code, although I assume there is likely some method to it. So you won't have the issue of promoting the microservice test changes up to integration too soon.
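On the "(git) source code" point: git does support pinning to a release, by checking out a tag after fetching. A sketch (the tag name and path are illustrative, and the command runner is injectable only so the logic can be tested without a real repo):

```python
# Pin a microservice's fetched test code to a tagged release instead of
# the branch tip, so integration tests run against a known version.
import subprocess

def checkout_version(repo_dir, tag, run=subprocess.run):
    # Make sure the tag is known locally, then check it out (detached HEAD).
    run(["git", "-C", repo_dir, "fetch", "--tags"], check=True)
    run(["git", "-C", repo_dir, "checkout", f"tags/{tag}"], check=True)
```

With this, the end to end pipeline can keep using latest by default and only pin a service's test code when it needs to hold back a version.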

I guess in a nutshell, you could say, treat your test code the same way you treat the app code in how you run them separately and yet integrate together. Use the same techniques. Your overall test framework is just another app comprised of (test) micro services.