One of the main complaints about automated UI tests is that they stop working when you make major changes to your software. There is some myth in this: most modern testing tools work directly with controls rather than relying on screen coordinates, so they are much more robust and don't break as easily as early UI testing tools did.
This is the case with TestComplete, where the script for clicking on a button looks something like this (the form and control names here are illustrative):
CustomerForm.btnOK.ClickButton();
and entering text in a text control looks like this:
CustomerForm.edName.Text = 'John';
As you can see, the scripts will not break if the controls are simply moved around on the form where they reside. However, if a control is completely removed or a form is re-designed from scratch then the scripts will definitely need to be updated.
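To see why name-based lookups survive layout changes but not redesigns, consider this minimal sketch. It is plain Python, not TestComplete, and the form and control names are invented for illustration: a form is modeled as a map from control names to control objects, so a control's position is irrelevant to the script, but removing the control breaks the lookup.

```python
class Control:
    """A UI control whose position is just a property, separate from its name."""
    def __init__(self, x, y, text=""):
        self.x, self.y, self.text = x, y, text

def set_text(form, name, value):
    # Lookup by control name, the way CustomerForm.edName resolves in a script.
    form[name].text = value

# Version 1 of the form has a text box and a button.
form_v1 = {"edName": Control(10, 20), "btnOK": Control(10, 60)}

set_text(form_v1, "edName", "John")   # script works
form_v1["edName"].x = 300             # control moved elsewhere on the form
set_text(form_v1, "edName", "John")   # still works: the name did not change

# Version 2 was redesigned and the edName control was removed entirely.
form_v2 = {"btnOK": Control(10, 60)}
try:
    set_text(form_v2, "edName", "John")
except KeyError:
    print("test broke: control removed, script must be updated")
```

The same reasoning applies to renames: to the script, a renamed control is indistinguishable from a removed one.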
Why is this necessary? The automated tests need to be updated because the user interface of the application under test has changed: you have to re-train your tests to take the new functionality into account and to stop using commands from the previous version of the software. If this kind of learning sounds familiar, it's because it is exactly what your QA engineers, support team, documentation writers, trainers, and especially your customers in the field will face when presented with the latest and greatest version of your application. They will all have to forget what they learned about the previous version and spend time learning the new and/or enhanced functionality. In this respect, automated UI tests behave, by their nature, exactly as your users will, and you should listen to what the tests have to say.
If the enhancements in the application under test are not worth the effort of updating the test scripts, they are probably not worth the time users will spend re-learning the modified UI. Treat your UI tests, and the feedback they give you when they break due to UI changes, with the same respect as feedback from your customers. UI changes are harder to re-learn than they are to program, so every change, however neat it seems, needs to be weighed against the trouble and expense of learning it anew. The number of changes required in the test scripts can also serve as a gauge of the effort that will be needed to train users on the new functionality.
Once, while working on an update to a business application, I added several cool new reports and then slightly re-ordered the app's menus in a way that made more sense with the increased number of menu commands. As soon as the new version was released, we immediately received a call that the reports didn't work anymore. Ouch. It turned out that the user had been taught to run the reports by memorizing the exact key presses needed to launch each one (Alt, right arrow, right arrow, down arrow 5 times, Enter). My improvements to the order of the menu items "broke" his memorized keystrokes. To the user, it looked as though the reports themselves were broken. If we had been using automated UI testing, the failure of the UI test would have been like the first "user" complaining about the change: I would have had a reminder of the potential cost to users and been forced to reconsider the value of my improvements.
All this is not to say that existing functionality should never be updated; on the contrary, without change there is no innovation. Just always keep in mind that change is expensive, and updating your automated UI tests is only a small part of the total cost equation.