Tup doesn't have phony targets like make's, so the 'make && make test' pattern, where 'test' is a .PHONY target, has no direct analogue in tup. You can still put tests in a Tupfile; in general such a rule has an input (the program under test) and a command (the test to run), but no outputs. Something like this:
: foreach test*.sh | prog |> sh %f |>
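For illustration, a test script used with that rule might look like the following sketch (the file name and the check are made up; here 'echo' stands in for invoking './prog' so the example is runnable on its own):

```shell
#!/bin/sh
# test1.sh - hypothetical sketch of one test case.
# In a real test you would run ./prog here; 'echo' is a stand-in.
# Exit 0 for success, non-zero for failure; tup records a non-zero
# exit as a failed command and re-runs it on the next update.
output=$(echo hello)
if [ "$output" != "hello" ]; then
    printf 'FAIL: expected hello, got %s\n' "$output" >&2
    exit 1
fi
echo PASS
```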
This tells tup to create a command for each test file (test*.sh) that depends on the program 'prog' ('prog' is in the order-only inputs section, so it isn't iterated as part of the foreach loop). The test cases are run with the Bourne shell. No files are expected to be written by these tests (although you could add an output like a log file here if you wanted). You can just have the test return 0 for success, or non-zero for failure. Tup will re-run any failed tests on a future update, even if the program itself hasn't changed.
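Putting it together, a complete Tupfile along these lines might look like the sketch below (hypothetical: it assumes a single prog.c compiled with gcc, which isn't from your setup):

```
# Build the program under test (assumed here to be one C file).
: prog.c |> gcc %f -o %o |> prog

# Run each test script against it. 'prog' is an order-only input,
# so it's a dependency but not part of the foreach iteration.
: foreach test*.sh | prog |> sh %f |>
```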
Note that with the test cases in the Tupfile, just running 'tup' will both build your program and run all the tests. If you have many tests, this may not be what you want. You could still build just the program itself (by running 'tup prog'), or build the program and run a single test by specifying the command string (e.g. tup 'sh test1.sh'). Be aware that 'tup test1.sh' would try to build only up to the test1.sh file itself, not the command that executes it. Since that is presumably a file you wrote by hand, it would do nothing.
For tup itself you can see that the tests aren't in the Tupfiles at all, which is another option to consider. The test directory just contains independently executable test files, so a single test can be run with './t4000-compile-c.sh', and a './test.sh' runs them all. I do it this way because I don't want to run tup's tests inside another instance of tup, but I also like the separation of 'tup' just doing a build rather than running a full test suite.
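A driver in the spirit of that './test.sh' could be sketched like this (a hypothetical version modeled on the description above, not tup's actual script; it assumes the tests are named t*.sh):

```shell
#!/bin/sh
# Hypothetical 'run all tests' driver: execute every t*.sh script
# in the current directory, report each result, and return the
# number of failures as the exit status.
run_all_tests() {
    failed=0
    for t in t*.sh; do
        [ -e "$t" ] || continue   # glob matched nothing; skip literal
        if sh "$t"; then
            echo "PASS: $t"
        else
            echo "FAIL: $t"
            failed=$((failed + 1))
        fi
    done
    return "$failed"
}
run_all_tests
```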
For other projects I usually put quick tests (think style checkers, linters, etc) inside the Tupfile because they are unobtrusive enough to not cause a long edit-compile-test cycle, and those sorts of errors are usually easier to fix right in the middle of editing the code rather than later (during check-in time, for example). For a longer-running suite of tests, I have them separately executable so I can skip them if I want to focus on one particular aspect of the program at a time, and then run them all before committing.
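As one way such a quick check might be wired into a Tupfile (a hypothetical sketch: it assumes a C project with cppcheck installed, neither of which is given here):

```
# Lint each source file as part of the build; a non-zero exit
# from the checker fails the update, just like a failed compile.
: foreach *.c |> cppcheck --error-exitcode=1 %f |>
```

Because the rule has no outputs, tup simply re-runs the check whenever the source file changes, which keeps the edit-compile-test loop short.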
Hope that helps,
-Mike