The complexity of a problem is determined by its behavior as n tends towards infinity. This limit is fundamentally why all lower-order terms are dropped; Big O notation also drops the constant coefficients, which would change on different hardware anyway. Still, if you have the actual running times of the individual functions, you can reasonably add them to get a total time. This can be meaningful data in a time-dependent environment, such as handling large sets of data, where the functions are called more than once. It can also indicate how well a solution scales and where the performance peak of an upgrade lies on, let us say, a server. This would be a one-machine analysis, and the coefficients would not be dropped.
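As a minimal sketch of that idea in Python, assuming two hypothetical stand-in functions (load_records and sort_records) in place of the real workload, the times measured on a single machine can simply be summed:

import time

# Hypothetical stand-in workload functions; the names are assumptions
# for illustration, not functions from the original text.
def load_records(n):
    return list(range(n))

def sort_records(records):
    return sorted(records)

def measure(fn, *args):
    # Return (result, elapsed seconds) for a single call on this machine.
    start = time.perf_counter()
    result = fn(*args)
    return result, time.perf_counter() - start

n = 1_000_000
records, t_load = measure(load_records, n)
_, t_sort = measure(sort_records, records)

# On one known machine the measured times can simply be added to
# estimate the total cost of one pass over the data.
print(f"load: {t_load:.4f}s  sort: {t_sort:.4f}s  total: {t_load + t_sort:.4f}s")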
Every machine has its own running coefficients for any given function, determined by its architecture and by how the compiler generates the binary code. This means that if you are designing programs for multiple users on different computers, then such machine-specific calculations are not only extraneous but inaccurate as well.
In the case where the calculations are neither extraneous nor inaccurate:
The trick with separate structures is that the time function of one is not the time function of all the others.
T(n) = x(n) + y(n) + z(n), where
x(n) = t1 * n^2
y(n) = t2 * log n
z(n) = t3 * (n log n)
...
The times t1, t2, and t3 are the constant factors of each function as measured on a particular machine.
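As a sketch of how those machine-specific constants might be estimated and then used to project the combined running time at a larger input size, assuming three hypothetical functions x, y, and z whose growth rates match the terms above:

import math
import time

# Hypothetical functions whose growth rates match the model above:
# x is O(n^2), y is O(log n), z is O(n log n). Names and bodies are
# illustrative assumptions only.
def x(n):
    return sum(i * j for i in range(n) for j in range(n))

def y(n):
    total, i = 0, 1
    while i < n:
        total += i
        i *= 2
    return total

def z(n):
    return sorted(range(n, 0, -1))

def coefficient(fn, n, growth):
    # Time one call of fn(n) and divide by its growth term to estimate
    # the machine-specific constant (t1, t2, or t3).
    start = time.perf_counter()
    fn(n)
    return (time.perf_counter() - start) / growth

n = 2000
t1 = coefficient(x, n, n ** 2)
t2 = coefficient(y, n, math.log(n))
t3 = coefficient(z, n, n * math.log(n))

# Project the combined running time at a larger input on this same
# machine -- the scalability estimate described above.
big_n = 10_000
predicted = t1 * big_n ** 2 + t2 * math.log(big_n) + t3 * big_n * math.log(big_n)
print(f"t1={t1:.3e}  t2={t2:.3e}  t3={t3:.3e}")
print(f"predicted total time at n={big_n}: {predicted:.3f} s")

Because the coefficients are measured on this particular machine, the prediction only holds for that machine; on different hardware the constants, and therefore the totals, will differ.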