I'm running MXUnit-compatible tests, via:
At the top of my browser-based test run, I have this result summary:
I knew about those four failures, so figured "yep, all good". It was not until I scrolled down the list that I noticed I had an entire CFC not running due to a glitch in beforeTests(), which in itself accounted for seven failures.
These should definitely be included in this tally!
I also wonder whether they should be "failures", "errors", or "skipped"? As the tests weren't run, I think they should count as "skipped"?
Adam, this is a tricky one. Here's why:
If they are marked as skipped, this could mean that the tests are OK when in
reality they are not. I therefore decided to mark it as 7 (a random number)
until I can figure out a way to tally up all the specs defined and error them.
But I had to flag them as errors so any Ant or automated process could say
"yep, this failed".
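To illustrate the rationale above, here is a minimal sketch (in Python, not TestBox's actual CFML implementation) of a tally that counts every spec in a bundle whose setup errored. The `tally` function, the bundle dictionary shape, and the `setup_error` flag are all hypothetical names for this example; the point is only that un-run specs still appear in the totals, and as errors, so an automated process treats the run as failed:

```python
def tally(bundles):
    """Sum spec results across test bundles.

    Each (hypothetical) bundle is a dict with:
      "specs":       list of spec names defined in the bundle
      "setup_error": True if beforeTests()/beforeAll() errored
      "results":     {spec name: "pass" | "fail" | "error"} for specs that ran
    """
    totals = {"pass": 0, "fail": 0, "error": 0}
    for bundle in bundles:
        if bundle["setup_error"]:
            # Setup blew up, so none of these specs ran. They are still
            # counted, and counted as errors rather than skipped, so the
            # summary total is accurate and CI sees a failing run.
            totals["error"] += len(bundle["specs"])
            continue
        for name in bundle["specs"]:
            totals[bundle["results"][name]] += 1
    return totals


# A bundle that ran normally, plus one whose setup errored:
bundles = [
    {"specs": ["a", "b"], "setup_error": False,
     "results": {"a": "pass", "b": "fail"}},
    {"specs": ["c", "d", "e"], "setup_error": True, "results": {}},
]
print(tally(bundles))  # {'pass': 1, 'fail': 1, 'error': 3}
```

The trade-off being debated in this thread is exactly the `"error"` on the setup-failure branch: counting those specs as "skipped" instead would keep the tally complete but would not, by itself, make a build tool fail the run.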
On Thursday, January 30, 2014, Adam Cameron (JIRA) wrote:
Luis F. Majano
Ortus Solutions, Corp
ColdBox Platform: http://www.coldbox.org
Linked In: http://www.linkedin.com/pub/3/731/483
IECFUG Manager: http://www.iecfug.com
Social: twitter.com/lmajano facebook.com/lmajano
OK, actually, after further analysis I know what you mean. Solved.
Cool. Just for future reference, I think this is incorrect: "If they are marked as skipped this could mean that the tests are ok". One should draw no inference from that other than that the tests weren't run; that's what "skipped" means. If one does conclude from "skipped" that a test might be OK, one is simply mistaken. You should not need to cater for that.