Upper and lower bounds have to do with the maximum and minimum "complexity" of an algorithm (I use that word advisedly, since it has a very specific meaning in complexity analysis).
Take, for example, our old friend, the bubble sort. In an ideal case where the data are already sorted, the time taken is f(n), a function dependent on n, the number of items in the list. That's because you only have to make one pass over the data set (with zero swaps) to confirm the list is sorted.
In a particularly bad case, where the data are sorted in the opposite order to the one you want, the time taken becomes f(n²). This is because each pass moves only one element into its correct position, each pass examines up to n items, and you need n such passes to place all the elements.
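Here is a minimal sketch (in Python, purely for illustration) of the bubble sort being described, with the usual early-exit check so that already-sorted input really does finish in a single pass while reverse-sorted input takes the full n passes:

```python
def bubble_sort(items):
    """Bubble sort with early exit: one pass over already-sorted data
    (the lower bound), up to n passes over reverse-sorted data (the
    upper bound)."""
    n = len(items)
    for i in range(n):
        swapped = False
        # Each pass bubbles the largest remaining element to the end.
        for j in range(n - 1 - i):
            if items[j] > items[j + 1]:
                items[j], items[j + 1] = items[j + 1], items[j]
                swapped = True
        if not swapped:  # no swaps means the list is already sorted
            break
    return items
```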
In that case, the upper and lower bounds are different (roughly n² and n respectively), even though the big-O complexity you usually quote for the algorithm remains the same.
As an aside, the bubble sort is much maligned (usually for good reason), but it can make sense in certain circumstances. I actually use it in an application where the bulk of the data are already sorted and only one or two items tend to be added to the end of the list at a time. For adding a single item, and with a reverse-directional bubble sort, you can guarantee the new list will be sorted in one pass (as sketched below). That illustrates the lower bound concept.
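One way that single reverse-directional pass might look (the function name `append_sorted` is mine, not from any particular library): the newly appended element is bubbled backwards towards the front until it meets a smaller neighbour.

```python
def append_sorted(items, value):
    """Append value to an already-sorted list, then bubble it backwards
    into place in a single reverse-directional pass."""
    items.append(value)
    i = len(items) - 1
    # Walk the new element towards the front until it is no longer
    # smaller than its left neighbour.
    while i > 0 and items[i - 1] > items[i]:
        items[i - 1], items[i] = items[i], items[i - 1]
        i -= 1
    return items
```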
In fact, you could optimise the bubble sort so that its lower bound becomes f(1), simply by keeping an extra datum that records whether the list is sorted. You would set it after sorting and clear it whenever an item is added to the end.
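A sketch of that extra datum, assuming the `bubble_sort` function above (the class name and structure here are hypothetical, just to show the idea):

```python
class SortedAwareList:
    """List wrapper carrying an 'is_sorted' flag, so sorting an
    untouched list costs a single check rather than a full pass."""

    def __init__(self, items=None):
        self.items = list(items or [])
        self.is_sorted = False

    def append(self, value):
        self.items.append(value)
        self.is_sorted = False  # adding to the end may break the ordering

    def sort(self):
        if self.is_sorted:      # the f(1) lower bound: nothing to do
            return
        bubble_sort(self.items)
        self.is_sorted = True
```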