Reliability of integration tests (again)


t...@quarendon.net

Aug 25, 2016, 3:01:06 AM
to bndtools-users
Probably the biggest issue I have when trying to write "integration tests" is reliability: ensuring that everything that needs to be running and wired together actually is before the test starts.

So I'm writing "integration tests" using an OSGi enRoute project with a ".test" suffix, and running with the "Bnd OSGi Test Launcher (Junit)" runner.
Because of the asynchronous nature of how the SCR works relative to the launcher, there's no guarantee that, when my test starts running, all components are activated and fully wired together. Sometimes it works, sometimes it doesn't.

I have been using the Amdatu testing configurator, which works up to a point, but it always seems a bit unsatisfactory. To completely describe all of the services that the test needs to wait for before it can run would potentially require dozens of steps, and there's no guarantee that I'd capture them all anyway. It may seem like it's working, but maybe I've forgotten one, and once in a while the tests fail because that service hasn't been activated yet; who knows.

Haven't I already expressed what components I need before testing can start by listing them in the "run requirements" in the bnd file? Why would I then want to list them all over again?

Is there any way of telling that the SCR has done everything it needs to do and has nothing outstanding left to do (I'm assuming it's the SCR I'm interested in here; I'm using DS)? Is there some kind of listener I can get a notification from? I had a quick look in the OSGi compendium specification but couldn't see anything obvious, though I admit to not having read all 1600 pages :-)

It seems an obvious enough thing to want to do, and a reasonable enough thing to define, that there's nothing outstanding left to do, "activation" and "wiring" wise.

Any suggestions?
Thanks.

Timothy Ward

Aug 25, 2016, 5:03:26 AM
to bndtool...@googlegroups.com
The way this is done is independent of SCR, and it is simply to use the OSGi service registry. Tests, like any other OSGi code, need to wait for the service that they’re using to become available. The ServiceTracker is a helpful tool and lets you avoid a lot of boilerplate:


import static org.junit.Assert.assertNotNull;

import org.junit.Before;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.util.tracker.ServiceTracker;

// BundleContext of the bundle that contains this test class
private final BundleContext context =
        FrameworkUtil.getBundle(getClass()).getBundleContext();

private ServiceTracker<MyService, MyService> tracker;

private MyService serviceToTest;

@Before
public void setup() throws InterruptedException {

    // Track the service under test from the test bundle's context
    tracker = new ServiceTracker<>(context, MyService.class, null);
    tracker.open();

    // Choose a sensible timeout, e.g. 5 seconds
    serviceToTest = tracker.waitForService(5000);

    assertNotNull(serviceToTest);
}

This code will wait up to 5 seconds for the service that you want to test before failing the test. This is typically the behaviour that you want, so tests don’t hang forever when something is screwed up.

Regards,

Tim



t...@quarendon.net

Aug 25, 2016, 5:17:55 AM
to bndtools-users
Yes, I'm using the Amdatu testing configurator to do that, but my point was that there are a whole bunch of services that I need to be running before my tests will run, and it just seems the wrong way to do it.

It's no good just waiting for the services that I'm explicitly using to be up and running. Generally I'm testing REST API endpoints, but the Amdatu testing code gives you a way to wait for those to be available too (it just loops until a GET returns OK). My point was that it's impractical and error prone to list and explicitly check for all of the services I need. Some of the services that I need are internal ones, just part of the implementation of a particular provider, and not things I'm explicitly calling from the test. I would have to list those explicitly by string name, which is going to be error prone and fragile.

As I say, I've already expressed what services I want to have up and running; I've already expressed the dependencies my test case has by listing things in runrequires/runbundles. It seems annoying, impractical, error prone and fragile to have to duplicate all of that again.
Somehow I just want to say "don't start executing tests until everything is all up and running and everything is in a stable state".


Timothy Ward

Aug 25, 2016, 5:47:09 AM
to bndtool...@googlegroups.com
Hi Tom,

I think you’re fundamentally misunderstanding the way in which an OSGi system is assembled. Bundles may start/stop/install/uninstall at any time in an OSGi framework, so the concept of “everything is up and running and in a stable state” doesn’t really exist. The point of an OSGi service is that it must only be registered when it can successfully function, i.e. when all of the services that it depends on are available. Therefore it is sufficient to wait for the “one” service that you’re testing to become available. At that point the system is as ready as it needs to be for your test, which is exactly the same as it is for OSGi systems in production.

It should absolutely *never* be the case that you look for or check which internal services are available/wired in your integration test. The function that you’re testing is the public service/REST endpoint; if that is up then the provider it depends on is up. No further checking is needed (or helpful).

If your test really needs you to get lots of different services then I would suggest that you look at refactoring the test(s) into multiple test classes (or even projects), as it sounds as though you’re trying to test all the layers of your application at once in a single monolithic test. This will be hard to read, and as you’re finding, hard to maintain. As with all software, writing good tests is hard. It’s really important to focus on the behaviour of the layer that you’re testing, and not the implementation details of the other layers. If written in this way then OSGi integration tests are pretty simple, and can usually get away with one to three trackers maximum.

Regards,

Tim


t...@quarendon.net

Aug 25, 2016, 6:01:41 AM
to bndtools-users


On Thursday, 25 August 2016 10:47:09 UTC+1, Tim Ward wrote:

I think you’re fundamentally misunderstanding the way in which an OSGi system is assembled. 
 
I understand the dynamic nature of OSGi, but in a test environment (at least in the tests I'm writing) nothing is changing.

I admit though that I don't truly understand the life cycle of components.

Things are complicated by whiteboard patterns (so the component I'm calling may be up and running, but it uses a whiteboard pattern, so really what I'm interested in is whether the things that are going to register with it have been registered, even though I'm not using them directly), and configuration (I seem to see components getting activated, and then re-activated when configuration is applied -- how do I know that I get it in that final state, and not the first state) etc.

I've got individual test projects for lower level building block projects, I'm trying to write integration tests that exercise more complex behaviour, where the number of components I require to exist is getting larger, and it's beginning to get unmanageable. 

I admit though that part of my problem is not having confidence in what the minimum I need to check is, borne out of not truly understanding the life cycle and how the activation process works. If I check for X, does that guarantee that everything I want to be up and running will be? Unless it's guaranteed, the tests will be unreliable. I was hoping for a simple way of checking "no outstanding work to do".

Timothy Ward

Aug 25, 2016, 6:23:50 AM
to bndtool...@googlegroups.com
On 25 Aug 2016, at 11:01, t...@quarendon.net wrote:



On Thursday, 25 August 2016 10:47:09 UTC+1, Tim Ward wrote:

I think you’re fundamentally misunderstanding the way in which an OSGi system is assembled. 
 
I understand the dynamic nature of OSGi, but in a test environment (at least in the tests I'm writing) nothing is changing.

I admit though that I don't truly understand the life cycle of components.

Things are complicated by whiteboard patterns (so the component I’m calling may be up and running, but it uses a whiteboard pattern, so really what I'm interested in is whether the things that are going to register with it have been registered, even though I'm not using them directly), and configuration (I seem to see components getting activated, and then re-activated when configuration is applied -- how do I know that I get it in that final state, and not the first state) etc.

For config admin, when using DS it is common to add a “marker” property to the configuration. This will appear as a service property when it has been applied, and means that you can “know for certain” that your configuration has been applied. Another option is to use a configuration policy of REQUIRED for your component. In that case your component simply won’t start until the configuration is available, so there is no risk of getting an unconfigured instance.
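
For illustration only (this sketch is not from the original reply): waiting for the marker property in the test could look roughly like the following, assuming the component publishes a made-up "configured=true" service property once Config Admin has supplied its configuration.

import static org.junit.Assert.assertNotNull;

import org.junit.Before;
import org.osgi.framework.BundleContext;
import org.osgi.framework.Filter;
import org.osgi.framework.FrameworkUtil;
import org.osgi.util.tracker.ServiceTracker;

private ServiceTracker<MyService, MyService> tracker;
private MyService serviceToTest;

@Before
public void setup() throws Exception {
    BundleContext context = FrameworkUtil.getBundle(getClass()).getBundleContext();

    // Only match MyService instances that carry the (hypothetical) marker
    // property, i.e. whose configuration has already been applied.
    Filter onlyConfigured = context.createFilter(
            "(&(objectClass=" + MyService.class.getName() + ")(configured=true))");

    tracker = new ServiceTracker<>(context, onlyConfigured, null);
    tracker.open();

    // Waits for a *configured* instance, not just any instance
    serviceToTest = tracker.waitForService(5000);
    assertNotNull(serviceToTest);
}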



I've got individual test projects for lower level building block projects, I'm trying to write integration tests that exercise more complex behaviour, where the number of components I require to exist is getting larger, and it's beginning to get unmanageable. 

I admit though that part of my problem is not having confidence in what the minimum I need to check is, borne out of not truly understanding the life cycle and how the activation process works. If I check for X, does that guarantee that everything I want to be up and running will be? Unless it's guaranteed, the tests will be unreliable. I was hoping for a simple way of checking "no outstanding work to do".

For now you may see this as a leap of faith, but if your services are written properly (e.g. by using Declarative Services) then yes, checking for X does guarantee that everything used by X is also available. Checking for the dependencies separately in your test is simply not required.

I agree that the whiteboard model processing can be a little more challenging. For the HttpServiceRuntime there’s not really much choice other than to poll the service for information about your servlet. This is something that I’ve raised as a possible item for OSGi release 7.
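
As a sketch of what that polling could look like (not from the original reply; the servlet name and timeout values are assumptions), using the HttpServiceRuntime DTOs:

import static org.junit.Assert.assertNotNull;
import static org.junit.Assert.fail;

import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.service.http.runtime.HttpServiceRuntime;
import org.osgi.service.http.runtime.dto.ServletContextDTO;
import org.osgi.service.http.runtime.dto.ServletDTO;
import org.osgi.util.tracker.ServiceTracker;

// Poll the HttpServiceRuntime DTOs until the whiteboard has actually
// registered a servlet with the given name, or give up after timeoutMs.
private void waitForServlet(String servletName, long timeoutMs) throws Exception {
    BundleContext context = FrameworkUtil.getBundle(getClass()).getBundleContext();
    ServiceTracker<HttpServiceRuntime, HttpServiceRuntime> tracker =
            new ServiceTracker<>(context, HttpServiceRuntime.class, null);
    tracker.open();
    try {
        HttpServiceRuntime runtime = tracker.waitForService(timeoutMs);
        assertNotNull(runtime);

        long deadline = System.currentTimeMillis() + timeoutMs;
        while (System.currentTimeMillis() < deadline) {
            for (ServletContextDTO ctx : runtime.getRuntimeDTO().servletContextDTOs) {
                for (ServletDTO servlet : ctx.servletDTOs) {
                    if (servletName.equals(servlet.name)) {
                        return; // the whiteboard has processed our servlet
                    }
                }
            }
            Thread.sleep(100); // not processed yet, poll again
        }
        fail("Servlet " + servletName + " was not registered within " + timeoutMs + "ms");
    } finally {
        tracker.close();
    }
}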

Tim

t...@quarendon.net

Aug 26, 2016, 9:21:56 AM
to bndtools-users


On Thursday, 25 August 2016 11:23:50 UTC+1, Tim Ward wrote:
I agree that the whiteboard model processing can be a little more challenging.

The whiteboard pattern is my biggest issue in this respect.
There are a number of cases where, in order for my tests to work successfully, I need to ensure that a component has been activated and registered with another in a whiteboard fashion. There's no external service I can depend on, so the only thing I can think of doing is to have a query method on the component implementing a whiteboard that answers "has the component whose name I pass as argument to this method been registered with you yet", and then rely on my test knowing the class names of anything that needs to be registered. It's ugly, but I can't think of any other method.
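
A rough sketch of that workaround (every name here is hypothetical, invented purely for illustration):

import static org.junit.Assert.fail;

// Hypothetical query method exposed by the component that implements the whiteboard
public interface WhiteboardHost {
    // true once a participant registered under the given class name has been bound
    boolean hasParticipant(String className);
}

// In the test: obtain the host (e.g. via a ServiceTracker as earlier in the
// thread), then poll it until the expected participant has been bound.
private void waitForParticipant(WhiteboardHost host, String className, long timeoutMs)
        throws InterruptedException {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (!host.hasParticipant(className)) {
        if (System.currentTimeMillis() > deadline) {
            fail("Participant " + className + " never registered with the whiteboard");
        }
        Thread.sleep(100);
    }
}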

Bryan Hunt

Aug 26, 2016, 10:07:32 AM
to bndtools-users
FYI, I have a JUnit @Rule that does just this.  There is also a @Rule that will configure the service then wait for it.  https://github.com/BryanHunt/eUnit
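
For illustration only, a minimal ServiceTracker-based rule in the same spirit might look like the sketch below; this is an assumption-laden example, not eUnit's actual API:

import org.junit.rules.ExternalResource;
import org.osgi.framework.BundleContext;
import org.osgi.framework.FrameworkUtil;
import org.osgi.util.tracker.ServiceTracker;

public class ServiceRule<T> extends ExternalResource {

    private final Class<T> type;
    private ServiceTracker<T, T> tracker;
    private T service;

    public ServiceRule(Class<T> type) {
        this.type = type;
    }

    @Override
    protected void before() throws Throwable {
        BundleContext context =
                FrameworkUtil.getBundle(ServiceRule.class).getBundleContext();
        tracker = new ServiceTracker<>(context, type, null);
        tracker.open();
        // Wait up to 5 seconds for the service before failing the test
        service = tracker.waitForService(5000);
        if (service == null) {
            throw new AssertionError("No " + type.getName() + " service available");
        }
    }

    @Override
    protected void after() {
        tracker.close();
    }

    public T get() {
        return service;
    }
}

// Usage in a test class:
// @Rule public ServiceRule<MyService> rule = new ServiceRule<>(MyService.class);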

t...@quarendon.net

Aug 26, 2016, 11:07:18 AM
to bndtools-users


On Friday, 26 August 2016 15:07:32 UTC+1, Bryan Hunt wrote:
FYI, I have a JUnit @Rule that does just this.  There is also a @Rule that will configure the service then wait for it.  https://github.com/BryanHunt/eUnit

Thanks for the pointer.
I'm happy with the approach I've got; it gives me, for example, control over the order of things (wait for this service, then perform this action, then wait for an HTTP endpoint to be available, etc.).

Whiteboard patterns are the issue. No good solution to that presents itself, and it makes life *very* hard.

David Jencks

Aug 26, 2016, 11:48:42 AM
to bndtool...@googlegroups.com
What you are referring to as a whiteboard I think of as a multiple optional reference.  If the thing with the reference is a DS component, I usually use the <target-name>.cardinality.minimum property, set in configuration, to assure that the expected number of referenced services are actually bound before the component starts.  The configuration-policy needs to be REQUIRE to assure that this property is actually set.  The configuration management agent I use (not open source) can count the references for me, but perhaps in your test situation you can count them yourself.
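
As a sketch of how a test could set that property itself via Config Admin (the PID "com.example.host" and the reference name "listener" are made up for illustration):

import java.util.Hashtable;

import org.osgi.service.cm.Configuration;
import org.osgi.service.cm.ConfigurationAdmin;

// Require at least 3 services to be bound to the (0..n) reference named
// "listener" before the component with this PID is allowed to activate.
private void requireThreeListeners(ConfigurationAdmin cm) throws Exception {
    Configuration config = cm.getConfiguration("com.example.host", null);
    Hashtable<String, Object> props = new Hashtable<>();
    props.put("listener.cardinality.minimum", 3); // DS 1.3 minimum cardinality property
    config.update(props);
}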

thanks
david jencks

Raymond Auge

Aug 26, 2016, 12:25:11 PM
to bndtool...@googlegroups.com

I believe what Tom is referring to is a legitimate concern with real whiteboard implementations which are NOT often represented as multiple, option reference but more with a pure ServiceTracker, for instance Http whiteboard, or something like a JAX-RS whiteboard.

In this case I'm assuming something like JAX-RS:

Here's the scenario:
- I install a JAX-RS whiteboard impl (implemented using who knows what tech)
- I implement one of the expected types and register that as a service with the requisite whiteboard properties (we assume it has opt-in properties as most whiteboards do...)
- I hope the whiteboard impl picks up my service and does something interesting with it, let's assume it's a JAX-RS endpoint
- WHEN is that endpoint thing actually available???

Merely looking in the service registry tells me nothing because I have no idea when exactly the whiteboard did something (or not) with the service!
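
In practice (as with the Amdatu approach mentioned earlier in the thread) the only current recourse is to poll the endpoint itself; a rough sketch, with the URL and timeouts as assumptions:

import java.net.HttpURLConnection;
import java.net.URL;

// Poll the (assumed) endpoint URL until it answers with 200 OK, or give up.
private boolean waitForEndpoint(String url, long timeoutMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
        try {
            HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
            conn.setConnectTimeout(500);
            conn.setReadTimeout(500);
            if (conn.getResponseCode() == HttpURLConnection.HTTP_OK) {
                return true; // the whiteboard has wired up the endpoint
            }
        } catch (Exception e) {
            // not reachable yet - keep polling
        }
        Thread.sleep(100);
    }
    return false;
}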

This is precisely the scenario that we also find with the Http Whiteboard at present, and I agree with Tim that there's a legitimate gripe to be made, which is why Tim has opened RFP-182 [1].

I would actually add Tom's use case of "testing" as a very good use case to be added to the RFP.

- Ray



--
Raymond Augé (@rotty3000)
Senior Software Architect Liferay, Inc. (@Liferay)
Board Member & EEG Co-Chair, OSGi Alliance (@OSGiAlliance)

Raymond Auge

Aug 26, 2016, 12:25:49 PM
to bndtool...@googlegroups.com

On Fri, Aug 26, 2016 at 12:25 PM, Raymond Auge <raymon...@liferay.com> wrote:
multiple, option reference

... multiple, optional references ...

t...@quarendon.net

Aug 27, 2016, 1:37:25 AM
to bndtools-users


On Friday, 26 August 2016 16:48:42 UTC+1, djencks wrote:
I think what you are referring to as a whiteboard I think of as a multiple optional reference. 

See http://enroute.osgi.org/doc/218-patterns.html for what I mean by a whiteboard pattern.


 