[wp-testers] Automated Testing Failing

Tom Klingenberg tklingenberg at lastflood.net
Mon Feb 22 13:45:46 UTC 2010


I have the same problem; the current output, for me, is:

Tests: 489, Assertions: 2879, Failures: 128, Errors: 54, Skipped: 18.

> In general for every test that currently fails we need to look through  
> and try and understand if
>
> a) The test is wrong or out-of-date
> b) The code has been changed and a bug has been introduced

Both can be the case, because the test codebase is no longer aligned with  
the core codebase (the tests are not run automatically any more, nor is  
there any QC taking care of that). At least for the tests I peeked into,  
I can say they were out of date.

Because a) and b) are both possible, the automated testing has reached a  
state where it is useless as a whole. Comparing against someone else's  
output, as suggested, did not work for me. What you can do is run the  
test suite yourself before and after you make a change and then compare  
the two outputs. That worked for me, but it's fuzzy, and I'm totally  
unsure how much it pays off, because trust in the suite is broken by that  
huge number of failures and errors.
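Concretely, that before/after comparison can be scripted roughly like  
this. The test file path (wp-testcase/test_all.php) is an assumption  
about your local checkout, not the documented layout:

```shell
#!/bin/sh
# Sketch of the before/after comparison workflow.
run_suite() {
    # '|| true' because the suite currently exits non-zero
    # (failures and errors are expected right now).
    phpunit wp-testcase/test_all.php 2>&1 || true
}

run_suite > before.txt    # baseline, with the known failures
# ... apply your patch to core here ...
run_suite > after.txt

# Lines that appear only in after.txt are candidates for regressions
# introduced by your change. diff exits 1 when the files differ,
# so '|| true' keeps the script's exit status clean.
diff before.txt after.txt || true
```

Only new failures in the diff matter; the pre-existing ones are noise  
either way.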

Two use-cases that I can say still do their job:

  1.) When you write your own test class and tests, you can run them  
separately.
  2.) When you use an existing test class and fix it in your local  
environment so it runs properly again.

Both cases are much the same.
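For use-case 1.), PHPUnit already lets you run a single test class file,  
or even a single test method via its --filter switch. The file and  
method names below are hypothetical examples, not real suite files:

```shell
# Hypothetical file/method names -- substitute your own test class.
# '|| true' so a missing phpunit or a failing test does not abort
# the snippet.

# Run only your own test class file, not the whole suite:
phpunit wp-testcase/test_my_plugin.php || true

# Or narrow the run to a single test method:
phpunit --filter test_option_roundtrip wp-testcase/test_my_plugin.php || true
```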

We need:

-> a working test environment again.
-> automated tests running (nightly, or four times a day to cope with  
the international development)
-> a Trac for the tests repository, to provide patches for the tests and  
the test suite itself

I suggest this should get moving by the date the beta testing begins.  
That should make the MU merge safer in the long run.

My 2 cents.

Tom



