There are no rules that require the output ( * ) to come from any particular function. In practice, we typically add some arithmetic at the end of the activation function implemented in a given node, in order to scale or otherwise coerce the output into a particular form.
The advantage of working with all-or-nothing outputs, or with outputs normalized to the 0.0 to 1.0 range, is that it keeps things tractable and also avoids issues such as overflow.
( * ) "Output" can be understood here as either the ouptut a given node (neuron) within the network or that of the network as a whole.
As indicated by Mark Bessey, the input [to the network as a whole] and the output [of the network] typically receive some filtering/conversion. As hinted in this response and in Mark's comment, it may be preferable to have normalized/standard nodes in the "hidden" layers of the network, and to apply normalization/conversion/discretization as required to the input and/or output of the network, as sketched below. Such a practice is a matter of practicality, however, rather than an imperative requirement of neural networks in general.
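Here is an equally minimal sketch of that kind of input/output conversion; the value ranges and the 0.5 threshold are hypothetical placeholders chosen for illustration.

```python
def normalize_input(x, x_min, x_max):
    """Min-max scale a raw input feature into [0.0, 1.0] before feeding the network."""
    return (x - x_min) / (x_max - x_min)

def discretize_output(y, threshold=0.5):
    """Convert a normalized network output into an all-or-nothing decision."""
    return 1 if y >= threshold else 0

# Hypothetical example: a temperature in [-40, 50] degrees fed to a binary classifier
net_in = normalize_input(22.0, -40.0, 50.0)  # ~0.689, ready for the input layer
net_out = 0.73                               # pretend normalized output of the network
decision = discretize_output(net_out)        # 1, i.e. the "on" class
```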