Thanks, José, I appreciate the answer.
For the sake of discussion, how would you propose I handle this situation? I haven't seen an alternative that makes sense to me. I have a struct that, for legacy reasons, has around 25 fields I need to validate. 20 or so of them share roughly the same validation logic, and the rest vary. There is also some overlap in the validation rules - e.g., 18 of them are required, 15 of them (not a subset of the 18 required fields) must be positive, and 3 of them may be negative.
The way I approached it was to keep lists of fields to which a particular rule applies: iterate over required_fields and assert that validation fails when that field is nil; iterate over positive_fields and assert that validation fails when the value is 0. I had a hard time thinking of a way to test this without macros that didn't get very repetitive. In my experience, repetitive code - even in tests, if not especially in tests - can be just as dangerous as the indirection introduced by metaprogramming (which I try to use judiciously, if at all).
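To make the list-driven shape concrete, here is a minimal, self-contained sketch. `Order`, `Validator`, the field names, and the `{:rule, field}` error shape are all made up for illustration - the real struct has ~25 fields and its own `validate/1` - but the point is that each failure carries a field-specific reason, so the data-driven tests can attribute it:

```elixir
ExUnit.start()

defmodule Order do
  # Stand-in for the real 25-field legacy struct
  defstruct foo: 1, bar: 2, baz: 3
end

defmodule Validator do
  @required_fields [:foo, :bar]
  @positive_fields [:foo, :baz]

  # Returns {:ok, struct} or {:error, reasons}, with one tagged
  # reason per failed field so tests can pinpoint each failure.
  def validate(struct) do
    required =
      for f <- @required_fields, Map.get(struct, f) == nil, do: {:required, f}

    # `v = Map.get(...)` as a filter skips nil values, so the
    # positivity rule only applies to fields that are present
    positive =
      for f <- @positive_fields, v = Map.get(struct, f), v <= 0, do: {:positive, f}

    case required ++ positive do
      [] -> {:ok, struct}
      reasons -> {:error, reasons}
    end
  end
end

defmodule ValidatorTest do
  use ExUnit.Case, async: true

  @valid %Order{}

  test "each required field fails validation when nil" do
    for field <- [:foo, :bar] do
      invalid = Map.put(@valid, field, nil)
      assert {:error, reasons} = Validator.validate(invalid)
      assert {:required, field} in reasons
    end
  end

  test "each positive field fails validation when zero" do
    for field <- [:foo, :baz] do
      invalid = Map.put(@valid, field, 0)
      assert {:error, reasons} = Validator.validate(invalid)
      assert {:positive, field} in reasons
    end
  end
end
```

This avoids macros entirely - it's just comprehensions over plain lists - at the cost of less granular test names when one field's assertion fails.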
I guess one idea would be to set all of the required fields to nil at once and then assert that the error set contains each error I expect; though this requires returning a specific error for each condition in order to verify that validation failed for the reason it should have.
```elixir
test "required fields are present" do
  invalid = %{@valid | foo: nil, bar: nil, ...} # really do this with Enum, but kept short here
  expected = %{invalid | errors: [:invalid_foo, :invalid_bar]}
  assert {:error, expected} == validate(invalid)
end
```
vs
```elixir
test "required fields are present" do
  invalid = %{@valid | foo: nil, bar: nil, ...}
  assert {:error, invalid} == validate(invalid) # how do I know that it failed because `bar` is nil?
end
```
```