Testing is potentially endless. We cannot test until every defect has been unearthed and removed; that is simply impossible. At some point we have to stop testing and ship the software. The question is: when?
Realistically, testing is a trade-off between budget, time, and quality, and it is driven by profit models. The pessimistic, and unfortunately most common, approach is to stop testing whenever some or all of the allocated resources (time, budget, or test cases) are exhausted. The optimistic stopping rule is to stop testing when either reliability meets the requirement or the benefit of continued testing no longer justifies its cost. This usually requires reliability models to evaluate and predict the reliability of the software under test; each evaluation means repeating the cycle of failure-data gathering, modeling, and prediction. This method does not fit well for ultra-dependable systems, however, because real field failure data would take too long to accumulate.
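The gather-model-predict cycle can be sketched in code. The following is a minimal illustration, not any standard tool: it assumes a simple exponentially decaying failure intensity, lambda(t) = l0 * exp(-k * t), fitted by a log-linear least-squares regression on observed inter-failure rates, and the function names (`fit_failure_intensity`, `should_stop`) and the target intensity are invented for the example.

```python
import math

def fit_failure_intensity(failure_times):
    """Fit lambda(t) = l0 * exp(-k * t) to cumulative failure times.

    Linearised as ln(rate) = ln(l0) - k * t, where rate is the reciprocal
    of each inter-failure gap, and solved by ordinary least squares.
    Returns (l0, k); k > 0 indicates reliability growth.
    """
    gaps = [b - a for a, b in zip(failure_times, failure_times[1:])]
    ts = failure_times[1:]                      # time of each later failure
    ys = [math.log(1.0 / g) for g in gaps]      # log of observed rate
    n = len(ts)
    mean_t = sum(ts) / n
    mean_y = sum(ys) / n
    slope = (sum((t - mean_t) * (y - mean_y) for t, y in zip(ts, ys))
             / sum((t - mean_t) ** 2 for t in ts))
    intercept = mean_y - slope * mean_t
    return math.exp(intercept), -slope

def should_stop(failure_times, now, target_intensity):
    """Stop when the model predicts the failure intensity at time `now`
    has dropped to the required level."""
    l0, k = fit_failure_intensity(failure_times)
    predicted = l0 * math.exp(-k * now)
    return predicted <= target_intensity

# Synthetic data: failures arriving ever more slowly (cumulative hours).
times = [5, 13, 25, 45, 80, 140]
decision = should_stop(times, now=200, target_intensity=0.01)
```

In practice a published reliability growth model (e.g. Goel-Okumoto or Jelinski-Moranda) fitted by maximum likelihood would replace this toy regression, but the cycle is the same: collect failure data, refit the model, and predict whether the reliability requirement is met.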
The most common factors used to decide when to stop testing are:
1) Stop testing when deadlines, such as release or testing deadlines, have been reached.
2) Stop testing when the test cases have been completed with a prescribed pass percentage.
3) Stop testing when the testing budget is exhausted.
4) Stop testing when code coverage and functionality requirements reach the desired level.
5) Stop testing when the bug rate drops below a prescribed level.
6) Stop testing when the alpha or beta testing period is over.
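Several of the criteria above can be evaluated together as a simple checklist. The sketch below is purely illustrative: the function name, the threshold values, and the choice to return the list of satisfied criteria are all assumptions for the example, not prescriptions from the text.

```python
from datetime import date

def stopping_decision(passed, total, bug_rate_per_day, coverage,
                      deadline, today,
                      min_pass_pct=95.0,      # hypothetical threshold
                      max_bug_rate=0.5,       # bugs/day, hypothetical
                      min_coverage=0.85):     # hypothetical target
    """Return the list of stop criteria currently satisfied."""
    reasons = []
    if today >= deadline:
        reasons.append("deadline reached")
    if total and 100.0 * passed / total >= min_pass_pct:
        reasons.append("pass percentage met")
    if bug_rate_per_day <= max_bug_rate:
        reasons.append("bug rate below threshold")
    if coverage >= min_coverage:
        reasons.append("coverage target met")
    return reasons

# Example: pass rate, bug rate, and coverage are all satisfied,
# but the release deadline has not yet arrived.
reasons = stopping_decision(passed=96, total=100, bug_rate_per_day=0.2,
                            coverage=0.90, deadline=date(2024, 1, 1),
                            today=date(2023, 12, 1))
```

Whether one satisfied criterion suffices, or several must hold at once, is a project policy decision; the pessimistic rule from the discussion above stops on the first exhausted resource, while the optimistic rule weighs the remaining criteria against the cost of further testing.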