Marc, would it be possible for you to describe a bit better than I
just did what you were doing and how it worked, and perhaps provide
some code samples? I, James, and I'm sure others would greatly
appreciate it.
Cheers,
Bob
--
Bob Silverberg
www.silverwareconsulting.com
My approach is generally to set up a query, or an array of structs,
which contains the input data, the expected validation result
(pass/fail), the expected failure field(s) if I expect a failure,
and even an expected failure message if I expect a failure. Something
like:

private function createValidationFailuresDataProvider() {
	variables.validationFailuresDataProvider = [
		{ firstname="", lastname="", username="", birthday="marc",
		  expectfailure=true,
		  failurefields=["firstname", "lastname", "username", "birthday"],
		  failurefieldtypes=["required", "required", "required", "date"] }
		,{ firstname=repeatString("a", 1000), lastname=repeatString("a", 1000),
		  username=repeatString("a", 1000), birthday=5,
		  expectfailure=true,
		  failurefields=["firstname", "lastname", "username", "birthday"],
		  failurefieldtypes=["required", "required", "required", "date"] }
	];
}
I add to that failure data provider as I find or think of more
conditions to test against.
and then the test will be something like:

createValidationFailuresDataProvider();

/**
* @mxunit:dataprovider validationFailuresDataProvider
*/
function validateUser_should_validate_all_expected_conditions( condition ) {
	var user = createUser( condition.firstname, condition.lastname,
		condition.username, condition.birthday );
	var result = validateThis.validate( user, "User" );
	var property = "";
	for ( property in condition.failurefields ) {
		assertValidationFailureExists(....);
	}
}
You could even do better (I'm lame, so I didn't) and write an
assertion function that takes all the failure fields, failure field
types, and failure messages and does the looping for you, so it'd look
like:
var result = ....;
assertAllValidationFailuresExist( result.getFailures(),
condition.failurefields, condition.failurefieldtypes,
condition.failureMessages );
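If it helps to visualize, a helper like that might look something like
the sketch below. To be clear, this is my guess at a shape, not actual
VT or MXUnit API: the failure-struct keys `propertyName` and `type` are
assumptions about what getFailures() returns, and you'd adjust them to
whatever VT actually hands back.

```cfml
// Hypothetical sketch only -- the keys "propertyName" and "type" on each
// failure struct are assumptions, not documented VT output.
private function assertAllValidationFailuresExist( failures, failureFields, failureFieldTypes ) {
	var i = 0;
	var j = 0;
	var found = false;
	for ( i = 1; i lte arrayLen( arguments.failureFields ); i++ ) {
		found = false;
		// scan the actual failures for one matching the expected field and type
		for ( j = 1; j lte arrayLen( arguments.failures ); j++ ) {
			if ( arguments.failures[j].propertyName eq arguments.failureFields[i]
				and arguments.failures[j].type eq arguments.failureFieldTypes[i] ) {
				found = true;
				break;
			}
		}
		assertTrue( found, "Expected a '#arguments.failureFieldTypes[i]#' failure on '#arguments.failureFields[i]#' but none was found." );
	}
}
```

The nice part is that one call in the test covers however many expected
failures the dataprovider row declares.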
I do the same for non-failures, as well. So I'll create an array of
structs of known *good* data, call it something like
validationPassesDataProvider, and write a test that ensures those
conditions do not fail.
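For completeness, here's a rough sketch of what that "known good"
counterpart might look like. Same caveats as before: the sample data and
the assumption that getFailures() returns an array are mine, not VT's
documented behavior.

```cfml
// Sketch of the "passes" side -- the sample values are made up, and the
// getFailures()-returns-an-array assumption should be verified against VT.
private function createValidationPassesDataProvider() {
	variables.validationPassesDataProvider = [
		{ firstname="Marc", lastname="Esher", username="marc.esher", birthday="1/1/1980" },
		{ firstname="Bob", lastname="Silverberg", username="bobs", birthday="12/31/1975" }
	];
}

/**
* @mxunit:dataprovider validationPassesDataProvider
*/
function validateUser_should_pass_for_known_good_data( condition ) {
	var user = createUser( condition.firstname, condition.lastname,
		condition.username, condition.birthday );
	var result = validateThis.validate( user, "User" );
	assertTrue( arrayLen( result.getFailures() ) eq 0,
		"Expected no validation failures for known good data." );
}
```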
What I love about data-provider-driven tests is that when you think of
new conditions, or business logic changes, you simply change
or add to your dataprovider and the test just does what it does. No
need to write extra assertions.
Marc
Have you got a Matrix-style brain upgrade I could use? There's so
much good stuff to learn.
- John
I wish! I tend to repeat this mantra to myself whenever I get in a
"there's too much to learn" funk: "Step by Step, up toward the
mountaintop!"
Thanks indeed, Marc. I'll admit that I haven't peeked inside the
attachment yet, but I _can_ see it in gmail. This is something that
would be great to add to the VT wiki. I'll put that on my todo list.
> ---
> I'm just wondering though if there is a way of taking things one step
> further and actually getting the tests to read the VT XML, identify the
> rules being set against a property and then setting that property up with
> typical scenarios.
> Things like conditions and params though I'm not sure how that would be
> managed. Could get messy.
> I'm kind of "typing aloud" on that one but maybe this is an opportunity to
> create some common tests for users to run against their work? A potential
> "extra" for VT.
I'm not sure that makes sense, James. If you did that all you'd really
be testing is the framework, and it already has its own suite of
tests. I'm not claiming that it's infallible, but when you use a fw
like VT you generally assume that it works as advertised.
I think what you need to test is _your_ configuration, not the fw.
That necessarily means that you have to create the tests to indicate
what should be valid and what should not. If you wrote something to
inspect the VT xml and then generate tests you wouldn't really be
testing your configuration at all.
Does that make sense?
Cheers,
Bob
> Cheers,
> James
>
> I'm not sure that makes sense, James. If you did that all you'd really
> be testing is the framework, and it already has its own suite of
> tests. I'm not claiming that it's infallible, but when you use a fw
> like VT you generally assume that it works as advertised.
Absolutely. I was questioning myself about exactly the same thing going home on the bus tonight :-)
> I think what you need to test is _your_ configuration, not the fw.
> That necessarily means that you have to create the tests to indicate
> what should be valid and what should not. If you wrote something to
> inspect the VT xml and then generate tests you wouldn't really be
> testing your configuration at all. Does that make sense?
I have a *very strong opinion* on this one, as a guy who's written too
damn many unit tests. Normally I'm a "strong opinions, weakly held"
dude.
Here's an exception.
I believe it is absolutely wrong to automate this kind of testing by
parsing the guts of the underlying framework in order to prove that
something works correctly.
When you're testing validations on an object, you are stating to the
world -- and by 'world' I mean "the world in which your Object lives"
-- that "these conditions are what I, the test writer, declare as the
conditions under which validation MUST fail." This is a very deliberate
statement. In TDD, this would happen before you've written a single
line of production code. When using VT, this happens even before
you've written a rules file for this object.
In the real world, this means that you're writing a test, perhaps with
a dataprovider which describes a bunch of failed validations, which
can only possibly pass if your object fails validation in all of those
cases. If the test fails for any of those cases, then your validation
is incomplete.
The only possible way to do this with integrity is to write the
*expectation* manually. Or, perhaps more accurately, the thing that
describes your *expectations* MUST be independent of the underlying
mechanism for validating. If you say "these 20 conditions are failure
conditions, for these reasons", then you can only do that manually (or
with a tiny bit of code that generates those conditions). Either way,
those conditions are framework-independent.
blah blah blah. What this means in real life is that if you write a
test which parses the validation XML and then creates validation
conditions based on that, you are only testing the validation rules
you've already described in the XML. You are absolutely not testing
what you, the programmer of the unit test, have declared to be the
"validation story" for your object. Once you write your automation
thing which parses VT and "proves" that your validations work, it will
*always* be correct. This is a very bad thing.
Imagine:
You have a user. You write a "User.xml" file with a single rule, a
"required" on "firstName". And you run your test, which parses your
User.xml and creates failure cases for that file. Well, that automated
test thing only cares about "firstName", since that's the only rule in
the file. So your automated test thing creates conditions that are
guaranteed to fail for "first name required" and "first name less
than 20 chars", or whatever. But it can't possibly flag "lastName is
required", because that's not in your User.xml file yet.
What you end up with is a test which generates "fuzz" data -- username
empty, username less than minChars, username GT maxChars -- but that's
it! It can't possibly generate meaningful tests for the other
properties because those aren't described -- YET -- in the User.xml
file.
So your test, then, depends on the state of your user.xml file. Let's
say that your system expects that a user's last name is required. And
your VT user.xml file doesn't have that rule.
Your test will pass! How can it not?
BUT if you have an array of structs which describes your failure
conditions, your tests will only pass when your validation code -- in
this case your User.xml file -- completely declares your validations.
This means that the job of your tests is to prove that your validation
routine -- whatever that may be -- correctly validates your object.
But your tests cannot possibly do that job if they rely on the
underlying framework to provide them with the information they need to
perform those assertions.
Put another way: your tests prove your validation story. VT implements
your validation story. Your tests should *only ever pass* when VT
fully implements that validation story, and not before. And in this
sample case, where your validation story is "firstname is required,
lastname is required, username is required and can't be a duplicate
and can't contain the firstname or lastname and can't contain the word
'unicorns'", a user.xml file whose only rule is "username is required"
should not pass that test!
But if you're using the user.xml to drive your test -- i.e. your
implementation drives your *proof* -- then that test will pass if you
automate it by parsing the implementation. And that is clearly not
correct.
Here's an image I keep in my head when I'm writing validation tests.
And I consider validation testing extremely important. I picture my
tests as cruel taskmasters, with a whip in hand, cracking my system's
ass whenever it doesn't work. And it's gonna crack ass until I get
that user.xml file in a state that it *serves its master*. The tests
are the master, because that's the only place in the system that knows
with certainty what exactly constitutes validity.
It's a crude analogy, but it suffices: tests prove what's correct.
Implementation wants to be correct, but is only deemed so when the
tests put the whip away and say "You, implementation, have done your
job. Ye shall be spared tonight."
Don't skimp here. Don't shortcut. If it helps, create a spreadsheet
with the fields you expect to be valid. Add rows for all manner of
conditions you expect to fail. Give that to your business folks and
have them fill it in. Then use that spreadsheet as your dataprovider
-- mxunit has a "file" dataprovider born for cases like this.
When all rows in that spreadsheet pass validation, based on your
implementation, then you've done your job.
Best,
marc
Glad to contribute, gents.
One thing: a case where automation would help here is a CFBuilder
extension... in my experience probably 60-70% of validation rules
could be derived, or at least well-started, by introspecting the
database and looking at the columns. I could see a CFB extension where
you right click on a table in the RDS View, select "Create
ValidateThis file", and have it spit out a VT rules file that would
create, at a minimum, the "required" rules, maxlength rules, and
perhaps some rules around numeric data.
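Just to make the idea concrete, the generation step might look
something like the sketch below. Everything here is hypothetical: the
`columns` argument stands in for whatever column metadata RDS or
cfdbinfo would actually give you, and the XML fragments are a rough
stand-in that would need to be checked against the real VT rules
schema before use.

```cfml
// Purely illustrative sketch. "columns" is a hypothetical array of structs
// (name, nullable, type, length); the XML emitted is a stand-in and is NOT
// guaranteed to match the actual ValidateThis schema.
function generateStarterRules( columns ) {
	var xml = "";
	var col = "";
	for ( col in arguments.columns ) {
		xml &= '<property name="#col.name#">' & chr(10);
		// NOT NULL columns get a starter "required" rule
		if ( not col.nullable ) {
			xml &= '  <rule type="required" />' & chr(10);
		}
		// character columns get a starter maxlength rule from the column size
		if ( col.type eq "varchar" and col.length gt 0 ) {
			xml &= '  <rule type="maxlength"><param name="maxlength" value="#col.length#" /></rule>' & chr(10);
		}
		xml &= '</property>' & chr(10);
	}
	return xml;
}
```

That would only ever be a starting point, of course -- per the
discussion above, the generated file is an implementation detail, and
your hand-written tests still own the validation story.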
Best,
Marc
> -Jason