--
You received this message because you are subscribed to the Google Groups "gallio-dev" group.
To unsubscribe from this group and stop receiving emails from it, send an email to gallio-dev+...@googlegroups.com.
To post to this group, send email to galli...@googlegroups.com.
Visit this group at http://groups.google.com/group/gallio-dev?hl=en.
For more options, visit https://groups.google.com/groups/opt_out.
What makes for a successful open source project? A company can take it under its wing for a variety of reasons: the Android OS drives consumers toward Google's advertising business, MySQL enables upselling, and Opscode makes its money from Chef consulting and hosting.
Others are backed by foundations or funded by donations from their users (Wikipedia, for example).
A few months ago, I discussed whether my company (Daptiv) could take it on, at least so that I could spend work time on it. However, it has little to do with our core business, and it is only one of several frameworks that we use.
Getting a core group of contributors who also have day jobs will be a challenge. I could offer a little time, assuming it would look good on a resume.
I agree with others who have suggested that we greatly simplify Gallio. Let's keep the core test framework and a command-line runner going first. Everything else can and should be add-ons.
Definitely Git. GitHub?
A website would be nice, but is not essential. Docs in packages, readmes, and samples. Distribution via NuGet.
I hesitate to chime in, given that everyone knows my interest is with a different framework. ;-)
However, I will... I've noticed that periodically asking what value the products I work on provide is helpful in renewing commitment and, maybe more importantly, focusing that commitment in the right direction. By asking, I mean both asking oneself and asking the users. In the case of NUnit, if I had only asked myself, I would probably be working on a different set of things than I am. Whether for good or bad, time will tell.
Regarding corporate support - as discussed in another post - it seems to me that testing frameworks are only the core business of companies that sell testing frameworks. Therefore, full sponsorship is probably out of the question. But some level of cooperative sponsorship might be sought. Of course, that ties back to the value statement. What are they supporting and why?
Charlie
On Sun, Mar 24, 2013 at 1:06 PM, Greg Young <gregor...@gmail.com> wrote:
I think a real discussion should include what is the value statement of keeping it alive (aside from maintenance). What is the mission statement of the new team?
On Sunday, March 24, 2013, Cliff Burger wrote:
We've been asking the same question ourselves in my engineering department.
Speaking of Charlie and NUnit, another approach we could take is to look at the features we love about MbUnit and stuff them into NUnit, as add-ins if not core features.
Essential - Extensibility: It is trivial to extend MbUnit using decorators inline in an assembly. NUnit's add-in model (separate files to load) is more difficult to write and complicates tooling, versioning, and team development. For example, we use decorators to enable Autofac resolution of test fixture properties.
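The inline-decorator model described above can be sketched in framework-neutral terms. The following is a hypothetical Python analogue (all names here - `Container`, `inject_properties` - are illustrative, not Gallio, NUnit, or Autofac API) of a decorator that resolves fixture properties from a DI container, the way an inline MbUnit decorator attribute might:

```python
# Hypothetical sketch of an "inline decorator" extension model, loosely
# analogous to an MbUnit decorator attribute that resolves test fixture
# properties from a container. Not real framework code.

class Container:
    """A minimal DI container standing in for something like Autofac."""
    def __init__(self):
        self._registrations = {}

    def register(self, key, factory):
        self._registrations[key] = factory

    def resolve(self, key):
        return self._registrations[key]()

def inject_properties(container):
    """Class decorator: when the fixture is constructed, replace any
    class attribute whose value is a registered string key with the
    resolved object from the container."""
    def decorate(cls):
        original_init = cls.__init__
        def __init__(self, *args, **kwargs):
            original_init(self, *args, **kwargs)
            for name, value in vars(cls).items():
                if isinstance(value, str) and value in container._registrations:
                    setattr(self, name, container.resolve(value))
        cls.__init__ = __init__
        return cls
    return decorate

container = Container()
container.register("greeter", lambda: "hello from the container")

@inject_properties(container)
class SomeFixture:
    greeter = "greeter"  # resolved from the container at construction time

print(SomeFixture().greeter)  # -> hello from the container
```

The point of the sketch is that the extension lives inline in the same source as the tests; nothing extra has to be packaged, versioned, or loaded separately, which is the tooling advantage Cliff attributes to MbUnit's model.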
Important - Reports and output: Gallio's HTML reports are beautiful and can include images and video. Images are very useful when debugging web UI tests. It should be possible to port the HTML reporting to the NUnit exe or an add-in. Image logging would probably require that the tests reference a different assembly.
Nice - Data-driven testing: MbUnit has much more powerful data-driven testing. NUnit's data-driven tests are static: all test cases are generated at assembly load time.
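The static-versus-lazy distinction above can be illustrated in a framework-neutral way. A minimal Python sketch (hypothetical helpers, not NUnit or MbUnit code) of a runner that materializes every data-driven case at load time versus one that builds cases only on demand:

```python
# Illustrative sketch (not framework code): eager case generation, where all
# data-driven test cases exist before any test runs, vs. lazy generation,
# where each case is built only when the runner reaches it.

def eager_cases(factory, n):
    """Eager model: all n cases are built up front, at 'assembly load'."""
    return [factory(i) for i in range(n)]

def lazy_cases(factory, n):
    """Lazy model: each case is built only when the runner asks for it."""
    return (factory(i) for i in range(n))

built = []

def make_case(i):
    built.append(i)          # record that this case was actually constructed
    return ("test_case", i)

# Eager: constructing the suite already builds every case.
suite = eager_cases(make_case, 3)
print(len(built))            # 3: all cases exist before any test executes

# Lazy: nothing is built until iteration begins.
built.clear()
gen = lazy_cases(make_case, 3)
print(len(built))            # 0: no case constructed yet
next(gen)
print(len(built))            # 1: exactly one case constructed on demand
```

With large generated data sets, the eager model pays the full construction cost (and memory) before a single test runs, which is the practical cost of "load all test cases on assembly load".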
We initially moved to MbUnit for its simple, native parallel testing. However, executing several threads from one test runner process turned out to have more disadvantages than advantages.
Things about MbUnit that I don't like or don't give a damn about: bloat. We love it because it can do so much (run RSpec in IronRuby?), but mixing all of this into the core distribution makes it prohibitively expensive to maintain. For example, I don't have VS 2008 installed -- that's the version you need for the VS extensions -- so I can't build MbUnit.
Charlie:
I would be very interested in your opinion (specifically) about any fundamental differences - whether in philosophy, vision, design, or implementation - between NUnit (both current and future) and MbUnit/Gallio. I know that with one or both sides continually evolving this is a perennially recurring question, but I'd like your view of it as it stands currently. Now that (as I understand it) NUnit supports many of the data-driven testing (and other) features that were once major differentiators between NUnit and MbUnit, it would be useful to re-evaluate the differences between the two.
Also, for the past several years, users' questions have usually taken the form of "which one of these should I use?". Given the aforementioned increased overlap in functionality, I'd like to see whether and how we could instead promote a strategy of "let's use both NUnit and MbUnit with Gallio as the glue, and here's the recommended strategy for when and how to use each". People don't think of using either a unit test framework OR a mocking framework; they know the two are complementary. I'd like side-by-side use of MbUnit and NUnit to be seen as a long-term-viable alternative to using either one or the other - as simply having two tools in the toolbox instead of one. But the main problem with that has always been knowing the capabilities and strengths of each tool relative to the other, and knowing when and how to use each tool.
Further, if we (the combined MbUnit and NUnit core teams) can also identify the similarities AND differences in vision and philosophy between the two frameworks, it would allow us to optimize each one for its own niche without unnecessarily duplicating features and effort.
Hi Justin,
I think it would be cool to develop a side-by-side comparison that was supported by both NUnit and MbUnit developers. Usually, comparisons between frameworks are made by proponents of one package wanting to show that they are better than another - I once did an NUnit vs MSTest comparison of that kind myself. :-) If we had something that we all agreed on as factual, it would be useful in itself as well as a great basis for whatever comes next.
We could start with a simple comparison of visible features - available attributes, for example - but should go deeper than that. For example, NUnit attempts to deal with some (but not all) concerns regarding dependencies by use of a hierarchy of before and after attributes but doesn't allow explicit ordering of tests. So simply saying that MbUnit has [Order] and NUnit doesn't is only part of the story. I'm sure there are examples that go in the opposite direction as well. So we need to not only look at attributes but at why we use them and then determine how - if at all - the corresponding need is met by the other framework.
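The ordering contrast above can be made concrete with a small sketch. This is a hedged Python example (the `order` and `run_in_order` helpers are hypothetical, not the actual [Order] attribute or any runner's implementation) of explicit per-test ordering of the kind MbUnit's [Order] provides and NUnit does not:

```python
# Hypothetical illustration: explicit per-test ordering via a marker,
# analogous in spirit to MbUnit's [Order] attribute. Not framework code.

def order(n):
    """Mark a test function with an explicit execution order, like [Order(n)]."""
    def mark(fn):
        fn._order = n
        return fn
    return mark

def run_in_order(tests):
    """Run tests sorted by the order mark; unmarked tests run last,
    in their original (definition) order, since the sort is stable."""
    ranked = sorted(tests, key=lambda fn: getattr(fn, "_order", float("inf")))
    return [fn() for fn in ranked]

@order(2)
def test_update():
    return "update"

@order(1)
def test_create():
    return "create"

def test_cleanup_check():
    return "cleanup"

print(run_in_order([test_update, test_create, test_cleanup_check]))
# create runs before update; the unmarked test runs last
```

As the surrounding paragraph argues, a feature table saying only "has [Order] / lacks [Order]" misses whether the underlying need (controlling dependencies between tests) is met another way, such as through a setup/teardown hierarchy.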
If we had such a comparison, I'd be glad to host it somewhere, along with an annotation that explained when the comparison had last been reviewed for accuracy by each team. I don't think I've ever seen a comparison like that (i.e. agreed upon by both sides) for any software product.
Fair enough, although it won't keep NUnit from continually trying to add features that compete with MbUnit's nor, I suspect, you with ours. :-)
Possibly... but I have a bit of a negative reaction that someone should decide the appropriate niche for each and expect them to keep to that niche. It's in the nature of things that we should each try to fill whatever space we can reasonably fill. If market forces or access to resources limit us, that's natural too. But deciding in advance smacks to some extent of the approach taken by certain large companies, with which we each compete, but which we don't necessarily want to emulate.
That's not to say I would never agree with such an approach AFTER we determined the existing differences of vision and philosophy, only that I'm a little hesitant about it.
Charlie
I was actually thinking of maybe approaching it sort of like a set of user stories. For example:
IF I want to get my test methods to run in a specific order...
THEN (IF I am using nUnit) I would use the ... attribute OR I would ...
THEN (IF I am using MbUnit) I would ...
That way both sides could contribute 'requirements' or features, but each side would be responsible for showing its own framework in the best light. I guess it could be kind of like a voters' pamphlet, too, with each side giving its own 'PRO' argument and the other side, if desired, adding a rebuttal. Personally, all I'm interested in (as a test framework consumer/user) is unbiased information about all the alternatives so I can make my own decision. So I'd be happy with whatever format best presents decision makers with a minimally biased comparison of features and capabilities (or, at minimum, one where bias from both sides counterbalances).
I was not suggesting that we 'divvy up' the features between the two frameworks. I was mainly suggesting that both teams be more aware of each product's mission and vision. In the past, when NUnit was semi-opposed to integration-testing features, that was a definite philosophical difference between the two products, and one that would help both developers and users make decisions if they were aware of it. As an example, if the MbUnit developers know that NUnit already has great support for a specific feature that has been requested as an MbUnit enhancement, then the decisions about 'should we do it' and 'how should we do it' may change. Just by being aware that the other product already supports such a feature, the devs might decide to simply recommend the other product in that scenario. Or they might make a different decision. But the important thing (IMO) is that both teams become more aware of the true differences (and similarities) between the products.
Also, if after attempting to determine the differences between each product's philosophy, few or zero major differences are found, then that conclusion in and of itself may be sufficient reason to further investigate a 'merge' of the two products. So we could stop 'competing' and spending time and effort to produce essentially the same (free) product twice and instead combine forces to achieve more together.