I've never had to deal with signal/stream encoding or decoding before, at least not beyond fairly basic protocols like HTTP, so forgive me if I'm making this harder than I should be.
Several of the systems we use at work rely on SMPTE timecode to operate: a biphase-mark (Manchester-style) encoded signal that occupies roughly 1 kHz of bandwidth, between 1 kHz and 2 kHz.
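(For reference, my understanding of the encoding, which may well be off, is that every bit cell starts with a transition and a '1' bit has an extra transition in the middle of the cell. The toy encoder below is just to illustrate how I read it, with the bit period left abstract.)

```c
#include <stdio.h>

/* Toy biphase-mark encoder: every bit cell begins with a level
 * transition; a '1' bit gets an extra transition mid-cell, so a '1'
 * looks like two short half-periods and a '0' like one long period. */
static void encode_biphase_mark(const int *bits, int nbits)
{
    int level = 0;                      /* current output level (0 or 1) */
    for (int i = 0; i < nbits; i++) {
        level = !level;                 /* transition at start of cell   */
        printf("%d", level);            /* first half of the bit cell    */
        if (bits[i])
            level = !level;             /* extra mid-cell transition     */
        printf("%d ", level);           /* second half of the bit cell   */
    }
    printf("\n");
}

int main(void)
{
    const int bits[] = { 0, 1, 1, 0, 1 };
    encode_biphase_mark(bits, 5);       /* prints "11 01 01 00 10"       */
    return 0;
}
```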
Because this is right in the audio spectrum, and at line level, it can be plugged straight into the soundcard's line input, accessible using the audio API of your choice (I plan on using Core Audio on a Mac).
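In case it matters, the capture side I have in mind is just a plain AudioQueue input tap along these lines (untested sketch, error checking omitted; the sample rate, buffer size and the ProcessSamples() placeholder are just stand-ins for whatever the decoder ends up needing):

```c
#include <AudioToolbox/AudioToolbox.h>
#include <CoreFoundation/CoreFoundation.h>

#define SAMPLE_RATE   48000.0   /* placeholder; anything well above 2 kHz will do */
#define NUM_BUFFERS   3
#define BUFFER_BYTES  4096

/* Placeholder for the actual timecode decoder. */
static void ProcessSamples(const SInt16 *samples, UInt32 count)
{
    (void)samples; (void)count;
}

/* The AudioQueue hands captured buffers to this callback. */
static void InputCallback(void *userData, AudioQueueRef queue,
                          AudioQueueBufferRef buffer,
                          const AudioTimeStamp *startTime,
                          UInt32 numPackets,
                          const AudioStreamPacketDescription *packetDesc)
{
    ProcessSamples((const SInt16 *)buffer->mAudioData,
                   buffer->mAudioDataByteSize / sizeof(SInt16));
    AudioQueueEnqueueBuffer(queue, buffer, 0, NULL);   /* recycle the buffer */
}

int main(void)
{
    AudioStreamBasicDescription fmt = {0};
    fmt.mSampleRate       = SAMPLE_RATE;
    fmt.mFormatID         = kAudioFormatLinearPCM;
    fmt.mFormatFlags      = kLinearPCMFormatFlagIsSignedInteger |
                            kLinearPCMFormatFlagIsPacked;
    fmt.mChannelsPerFrame = 1;              /* mono line input */
    fmt.mBitsPerChannel   = 16;
    fmt.mBytesPerFrame    = 2;
    fmt.mFramesPerPacket  = 1;
    fmt.mBytesPerPacket   = 2;

    AudioQueueRef queue;
    AudioQueueNewInput(&fmt, InputCallback, NULL, NULL, NULL, 0, &queue);

    for (int i = 0; i < NUM_BUFFERS; i++) {
        AudioQueueBufferRef buf;
        AudioQueueAllocateBuffer(queue, BUFFER_BYTES, &buf);
        AudioQueueEnqueueBuffer(queue, buf, 0, NULL);
    }

    AudioQueueStart(queue, NULL);
    CFRunLoopRun();     /* keep the process alive; callbacks arrive on the queue's thread */
    return 0;
}
```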
I'm fairly comfortable with decoding the digital bitstream itself to recover the time and parameters, but actually recovering the bitstream from the sampled analogue signal is less straightforward, and I'm not sure of the best way to approach the problem.
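(The bit-level part is what I'm less worried about: once I have an 80-bit frame aligned on the sync word, pulling the time out is just BCD fields at fixed offsets. Something like the sketch below is what I have in mind, assuming I've read the standard SMPTE 12M bit layout correctly, with frame[] holding the 80 bits one per element in arrival order.)

```c
#include <stdio.h>

/* Extract a small BCD field from an 80-bit LTC frame stored one bit
 * per array element, in arrival order (bit 0 first). Assumes the
 * standard SMPTE 12M bit layout. */
static int field(const unsigned char *frame, int start, int len)
{
    int value = 0;
    for (int i = 0; i < len; i++)
        value |= frame[start + i] << i;   /* LSB is transmitted first */
    return value;
}

static void print_timecode(const unsigned char frame[80])
{
    int frames    = field(frame,  0, 4) + 10 * field(frame,  8, 2);
    int seconds   = field(frame, 16, 4) + 10 * field(frame, 24, 3);
    int minutes   = field(frame, 32, 4) + 10 * field(frame, 40, 3);
    int hours     = field(frame, 48, 4) + 10 * field(frame, 56, 2);
    int dropframe = frame[10];            /* drop-frame flag */

    printf("%02d:%02d:%02d%c%02d\n",
           hours, minutes, seconds, dropframe ? ';' : ':', frames);
}
```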
My current plan is to allow a short settling period once a signal is detected (about 1 second, or 24-30 frames) in which to measure the maximum and minimum number of samples between zero crossings (using a moving-average filter so that spikes and dropouts don't disrupt decoding), along with the maximum and minimum recorded voltages, which should give me the zero-crossing point (the DC level).
I should then be able to use this information to construct a digital bitstream from the incoming analogue signal. Am I headed in the right direction, or is there a better way of doing it?
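To make the question more concrete, the recovery stage I'm picturing looks roughly like this (untested sketch; SMOOTH_LEN is a placeholder, and dc and threshold are the two values I'd calibrate during that initial second, from the min/max voltages and the min/max crossing-to-crossing periods respectively):

```c
#define SMOOTH_LEN 4         /* placeholder moving-average length */

typedef struct {
    float dc;                /* zero-crossing level, roughly (vmax + vmin) / 2        */
    float threshold;         /* period separating short from long, (pmin + pmax) / 2  */
    float window[SMOOTH_LEN];
    int   wpos;
    float prev;              /* previous smoothed sample                              */
    int   run;               /* samples since the last zero crossing                  */
    int   pending_short;     /* saw the first short half-period of a '1' bit          */
} Decoder;

static void decoder_init(Decoder *d, float dc, float threshold)
{
    d->dc = dc;
    d->threshold = threshold;
    for (int i = 0; i < SMOOTH_LEN; i++)
        d->window[i] = dc;
    d->wpos = 0;
    d->prev = dc;
    d->run = 0;
    d->pending_short = 0;
}

/* Feed one raw sample; returns 0 or 1 when a complete bit has been
 * recovered, or -1 otherwise. Biphase mark: every bit cell starts with
 * a transition, so one long crossing-to-crossing interval is a '0' and
 * two consecutive short intervals make a '1'. */
static int decoder_push(Decoder *d, float sample)
{
    /* Short moving average so spikes/dropouts don't fake crossings. */
    d->window[d->wpos] = sample;
    d->wpos = (d->wpos + 1) % SMOOTH_LEN;
    float smoothed = 0.0f;
    for (int i = 0; i < SMOOTH_LEN; i++)
        smoothed += d->window[i];
    smoothed /= SMOOTH_LEN;

    int bit = -1;
    int crossed = (smoothed - d->dc) * (d->prev - d->dc) < 0.0f;
    d->prev = smoothed;
    d->run++;

    if (crossed) {
        if (d->run > d->threshold) {     /* long interval: a whole '0' cell */
            d->pending_short = 0;        /* resync if we were mid-'1'       */
            bit = 0;
        } else if (d->pending_short) {   /* second short interval: a '1'    */
            d->pending_short = 0;
            bit = 1;
        } else {                         /* first short interval of a '1'   */
            d->pending_short = 1;
        }
        d->run = 0;
    }
    return bit;
}
```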
Thanks