All the objects in our databases are maintained in code - tables, views, triggers, stored procedures, everything. If we expect to find it in the database then it should exist as DDL in code that we can run. Actual schema changes are versioned: there's a table in the database that records that this is schema version "n", and if that isn't the current version (according to the update code) then we make the necessary changes.
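As a minimal sketch of what that versioning can look like in T-SQL (assuming SQL Server; the SchemaVersion table, its VersionNumber column, and the Customer change below are illustrative names, not our actual objects):

```sql
-- Create the version table on first run (illustrative names only).
IF OBJECT_ID('dbo.SchemaVersion', 'U') IS NULL
BEGIN
    CREATE TABLE dbo.SchemaVersion (VersionNumber int NOT NULL);
    INSERT INTO dbo.SchemaVersion (VersionNumber) VALUES (0);
END;

DECLARE @CurrentVersion int = (SELECT TOP (1) VersionNumber FROM dbo.SchemaVersion);

-- Each upgrade step runs at most once and bumps the recorded version.
IF @CurrentVersion < 1
BEGIN
    ALTER TABLE dbo.Customer ADD MiddleName nvarchar(50) NULL;  -- hypothetical change
    UPDATE dbo.SchemaVersion SET VersionNumber = 1;
END;
```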
We endeavour to separate out triggers and views - we don't, although we probably should, do much with stored procedures and functions - into drop-and-recreate code that is valid for the current schema version (see the sketch below). Accordingly it should be "safe" to drop and recreate anything that isn't a table, although there will be sequencing issues with both the drop and the create if there are dependencies between objects. The nice thing about this in general is that we can take a schema from new to current and be confident that any instance of the schema is consistent.
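A view script in that style might look something like this (a sketch only; the view and table names are made up):

```sql
-- Drop and recreate the view so it always matches the current schema version.
IF OBJECT_ID('dbo.vw_ActiveCustomers', 'V') IS NOT NULL
    DROP VIEW dbo.vw_ActiveCustomers;
GO

CREATE VIEW dbo.vw_ActiveCustomers
AS
    SELECT CustomerId, FirstName, LastName
    FROM dbo.Customer
    WHERE IsActive = 1;
GO
```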
Expanding this to your case: if you have the ability to run the schema update code, including the code to recreate all the database objects according to their current definitions, then your problem should substantially go away... backup, restore, run the schema maintenance logic. This has the further benefit that you can introduce schema (table) changes on the dev servers and still keep the same update logic.
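As a rough sketch of that flow (the database, file, and script names are placeholders for whatever you actually use):

```sql
-- Restore the production backup over the dev copy...
RESTORE DATABASE DevCopy
    FROM DISK = N'C:\Backups\Production.bak'
    WITH MOVE N'Production_Data' TO N'C:\Data\DevCopy.mdf',
         MOVE N'Production_Log'  TO N'C:\Data\DevCopy.ldf',
         REPLACE;
GO

-- ...then run the schema maintenance script against it, e.g.
--   sqlcmd -S .\SQLEXPRESS -d DevCopy -i SchemaUpdate.sql
-- which applies any pending version steps and recreates the views/triggers.
```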
I know this isn't a completely generic solution, and it's worth noting that it probably works better with a database per developer (I'm an old-fashioned programmer, so I see all problems as having code-based solutions (-:), but as a general approach I think it has considerable merit because it gives you a consistent mechanism to address a number of problems.