stream media FROM iphone

I need to stream audio from the mic to an HTTP server.
These are the recording settings I need:

NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                             [NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,        
                                             [NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
                                             [NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
                                             [NSNumber numberWithInt:1],AVNumberOfChannelsKey,
                                             [NSNumber numberWithInt:64000],AVEncoderBitRateKey,
                                             nil];

The API I'm coding to states:

Send a continuous stream of audio to the currently viewed camera. Audio needs to be encoded as G.711 mu-law at 64 kbit/s for transfer to the Axis camera at the bedside. Send (this should be a POST URL over SSL to the connected server): POST /transmitaudio?id= Content-type: audio/basic Content-Length: 99999 (length is ignored)
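A quick sanity check on the spec's numbers: G.711 mu-law always runs at 8000 samples/s with one companded byte per sample, so the 64 kbit/s is a property of the codec itself, not a separate encoder setting. A sketch of the arithmetic (nothing here comes from the camera's API):

```java
public class G711Rate {
    // bit rate of G.711 mu-law: fixed 8 kHz, 8 bits per companded sample
    static int bitRate() { return 8000 * 8; }

    public static void main(String[] args) {
        System.out.println(bitRate());     // 64000 bit/s, matching the spec
        System.out.println(bitRate() / 8); // 8000 body bytes per second of audio
    }
}
```

This also means one second of correctly encoded audio is exactly 8000 bytes on the wire, which is a handy check on whatever the capture pipeline produces.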

Below is a list of the links I have tried to work with.

LINK - (SO) basic explanation that only Audio Units and Audio Queues will give you raw data as output when recording via the mic

LINK - (audio callback example) > only includes the callback

LINK - (SO) Remote IO example | doesn't have start/stop and is for saving to a file

LINK - (SO) Remote IO example : unanswered, not working

LINK - (SO) basic audio example, a good example but it records to a file

LINK - (SO) question that guided me to the InMemoryAudioFile category (couldn't get it working) > followed links to InMemoryFile (or some such) but couldn't get it to work.

LINK - (SO) more audio unit and remote IO examples/problems > got this one working, but subsequently there isn't a stop function; even when I worked out what the call was and stopped it, it still didn't seem to transmit the audio to the server.

LINK (http://atastypixel.com/blog/using-remoteio-audio-unit/comment-page-1/#comments) - decent RemoteIO and Audio Queue example; another good one, but it has some problems with the code (the compiler thinks it isn't obj-c++), and again it shows how to get the audio from a file rather than to one.

LINK - a wrapper for the audio/video frameworks; I worked through it (see the question below), but in the end couldn't get it working. In any case, I probably gave it less time than the others.

LINK - (SO) problems I had when trying to implement an audio queue/unit : not an example

LINK - (SO) another Remote IO example, another good example, but I can't figure out how to get it to produce data rather than a file.

-- also looked at some interesting ring buffers

Here is my current class attempting to stream. It seems to work, although there is static coming out of the speakers at the receiver's end (connected to the server), which seems to indicate a problem with the audio data format.

IOS VERSION (minus delegation methods for GCD socket):

@implementation MicCommunicator {
AVAssetWriter * assetWriter;
AVAssetWriterInput * assetWriterInput;
}

@synthesize captureSession = _captureSession;
@synthesize output = _output;
@synthesize restClient = _restClient;
@synthesize uploadAudio = _uploadAudio;
@synthesize outputPath = _outputPath;
@synthesize sendStream = _sendStream;
@synthesize receiveStream = _receiveStream;

@synthesize socket = _socket;
@synthesize isSocketConnected = _isSocketConnected;

-(id)init {
    if ((self = [super init])) {

        _receiveStream = [[NSStream alloc]init];
        _sendStream = [[NSStream alloc]init];
        _socket = [[GCDAsyncSocket alloc] initWithDelegate:self delegateQueue:dispatch_get_main_queue()];
        _isSocketConnected = FALSE;

        _restClient = [RestClient sharedManager];
        _uploadAudio = false;

        NSArray *searchPaths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
        _outputPath = [NSURL fileURLWithPath:[[searchPaths objectAtIndex:0] stringByAppendingPathComponent:@"micOutput.output"]];

        NSError * assetError;

        AudioChannelLayout acl;
        bzero(&acl, sizeof(acl));
        acl.mChannelLayoutTag = kAudioChannelLayoutTag_Mono; //kAudioChannelLayoutTag_Stereo;
        NSDictionary *audioOutputSettings = [NSDictionary dictionaryWithObjectsAndKeys:
                                             [NSNumber numberWithInt: kAudioFormatULaw],AVFormatIDKey,        
                                             [NSNumber numberWithFloat:8000.0],AVSampleRateKey,//was 44100.0
                                             [NSData dataWithBytes: &acl length: sizeof( AudioChannelLayout ) ], AVChannelLayoutKey,
                                             [NSNumber numberWithInt:1],AVNumberOfChannelsKey,
                                             [NSNumber numberWithInt:64000],AVEncoderBitRateKey,
                                             nil];

        assetWriterInput = [[AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeAudio outputSettings:audioOutputSettings]retain];
        [assetWriterInput setExpectsMediaDataInRealTime:YES];

        assetWriter = [[AVAssetWriter assetWriterWithURL:_outputPath fileType:AVFileTypeWAVE error:&assetError]retain]; //AVFileTypeAppleM4A

        if (assetError) {
            NSLog (@"error initing mic: %@", assetError);
            return nil;
        }
        if ([assetWriter canAddInput:assetWriterInput]) {
            [assetWriter addInput:assetWriterInput];
        } else {
            NSLog (@"can't add asset writer input...!");
            return nil;
        }

    }
    return self;
}

-(void)dealloc {
    [_output release];
    [_captureSession release];
    [assetWriter release];
    [assetWriterInput release];
    [super dealloc];
}


-(void)beginStreaming {

    NSLog(@"avassetwriter class is %@",NSStringFromClass([assetWriter class]));

    self.captureSession = [[AVCaptureSession alloc] init];
    AVCaptureDevice *audioCaptureDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeAudio];
    NSError *error = nil;
    AVCaptureDeviceInput *audioInput = [AVCaptureDeviceInput deviceInputWithDevice:audioCaptureDevice error:&error];
    if (audioInput)
        [self.captureSession addInput:audioInput];
    else {
        NSLog(@"No audio input found.");
        return;
    }

    self.output = [[AVCaptureAudioDataOutput alloc] init];

    dispatch_queue_t outputQueue = dispatch_queue_create("micOutputDispatchQueue", NULL);
    [self.output setSampleBufferDelegate:self queue:outputQueue];
    dispatch_release(outputQueue);

    self.uploadAudio = FALSE;

    [self.captureSession addOutput:self.output];
    [assetWriter startWriting];
    [self.captureSession startRunning];
}

-(void)pauseStreaming
{
    self.uploadAudio = FALSE;
}

-(void)resumeStreaming
{
    self.uploadAudio = TRUE;
}

-(void)finishAudioWork
{
    [self release]; // relinquish ownership; -dealloc must never be called directly
}

-(void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection {


    AudioBufferList audioBufferList;
    NSMutableData *data= [[NSMutableData alloc] init];
    CMBlockBufferRef blockBuffer;
    CMSampleBufferGetAudioBufferListWithRetainedBlockBuffer(sampleBuffer, NULL, &audioBufferList, sizeof(audioBufferList), NULL, NULL, 0, &blockBuffer);

    for (int y = 0; y < audioBufferList.mNumberBuffers; y++) {
        AudioBuffer audioBuffer = audioBufferList.mBuffers[y];
        Float32 *frame = (Float32*)audioBuffer.mData;

        [data appendBytes:frame length:audioBuffer.mDataByteSize];
    }

    // append [data bytes] to your NSOutputStream 

    // These two lines write to disk, you may not need this, just providing an example
    [assetWriter startSessionAtSourceTime:CMSampleBufferGetPresentationTimeStamp(sampleBuffer)];
    [assetWriterInput appendSampleBuffer:sampleBuffer];

    //start upload audio data
    if (self.uploadAudio) { 

        if (!self.isSocketConnected) {
            [self connect];
        }
            NSString *requestStr = [NSString stringWithFormat:@"POST /transmitaudio?id=%@ HTTP/1.0\r\n\r\n",self.restClient.sessionId];

            NSData *requestData = [requestStr dataUsingEncoding:NSUTF8StringEncoding];        
        [self.socket writeData:requestData withTimeout:5 tag:0];     
        [self.socket writeData:data withTimeout:5 tag:0]; 
    }
    //stop upload audio data

    CFRelease(blockBuffer);
    blockBuffer=NULL;
    [data release];
}
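One likely source of the static: the callback above appends the raw sample bytes (cast to Float32, i.e. linear PCM) straight to `data` and writes them to the socket, while the camera expects G.711 mu-law bytes. A hedged sketch of the conversion that would have to happen somewhere before the socket write, written in Java to match the Android version below; this is the textbook G.711 encoder, not code from either project, and it assumes the capture buffers really are Float32 as the cast implies:

```java
public class PcmToUlaw {
    private static final int BIAS = 0x84;  // standard G.711 encoder bias
    private static final int CLIP = 32635;

    // Textbook G.711 mu-law compression of one 16-bit linear PCM sample.
    static byte linearToUlaw(int pcm) {
        int sign = (pcm >> 8) & 0x80;
        if (sign != 0) pcm = -pcm;
        if (pcm > CLIP) pcm = CLIP;
        pcm += BIAS;
        int exponent = 7;
        for (int mask = 0x4000; (pcm & mask) == 0 && exponent > 0; mask >>= 1)
            exponent--;
        int mantissa = (pcm >> (exponent + 3)) & 0x0F;
        return (byte) ~(sign | (exponent << 4) | mantissa); // codeword is stored inverted
    }

    // Float32 samples in [-1, 1] -> one mu-law byte per sample.
    static byte[] encode(float[] samples) {
        byte[] out = new byte[samples.length];
        for (int i = 0; i < samples.length; i++) {
            float clamped = Math.max(-1f, Math.min(1f, samples[i]));
            out[i] = linearToUlaw((int) (clamped * 32767f));
        }
        return out;
    }

    public static void main(String[] args) {
        // silence -> 0xFF, -1 -> 0x7F, full scale -> 0x80, matching the segment table
        System.out.printf("%02x %02x %02x%n",
            linearToUlaw(0) & 0xff, linearToUlaw(-1) & 0xff, linearToUlaw(32767) & 0xff); // ff 7f 80
    }
}
```

The output shrinks to one byte per sample, which is also why the 16-bit PCM buffers have to be halved in size before they match the 64 kbit/s wire format.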

JAVA VERSION:

import java.io.BufferedInputStream;
import java.io.BufferedOutputStream;
import java.io.BufferedReader;
import java.io.DataInputStream;
import java.io.DataOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.io.PrintWriter;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.util.Arrays;

import javax.net.ssl.SSLContext;
import javax.net.ssl.SSLSocket;
import javax.net.ssl.SSLSocketFactory;
import javax.net.ssl.TrustManager;
import javax.net.ssl.X509TrustManager;

import android.media.AudioFormat;
import android.media.AudioManager;
import android.media.AudioRecord;
import android.media.AudioTrack;
import android.media.MediaRecorder.AudioSource;
import android.util.Log;

public class AudioWorker extends Thread
{ 
    private boolean stopped = false;

    private String host;
    private int port;
    private long id=0;
    boolean run=true;
    AudioRecord recorder;

    //ulaw encoder stuff
    private final static String TAG = "UlawEncoderInputStream";

    private final static int MAX_ULAW = 8192;
    private final static int SCALE_BITS = 16;

    private InputStream mIn;

    private int mMax = 0;

    private final byte[] mBuf = new byte[1024];
    private int mBufCount = 0; // should be 0 or 1

    private final byte[] mOneByte = new byte[1];
    ////
    /**
     * Give the thread high priority so that it's not canceled unexpectedly, and start it
     */
    public AudioWorker(String host, int port, long id)
    { 
        this.host = host;
        this.port = port;
        this.id = id;
        android.os.Process.setThreadPriority(android.os.Process.THREAD_PRIORITY_URGENT_AUDIO);
//        start();
    }

    @Override
    public void run()
    { 
        Log.i("AudioWorker", "Running AudioWorker Thread");
        recorder = null;
        AudioTrack track = null;
        short[][]   buffers  = new short[256][160];
        int ix = 0;

        /*
         * Initialize buffer to hold continuously recorded AudioWorker data, start recording, and start
         * playback.
         */
        try
        {
            int N = AudioRecord.getMinBufferSize(8000,AudioFormat.CHANNEL_IN_MONO,AudioFormat.ENCODING_PCM_16BIT);
            recorder = new AudioRecord(AudioSource.MIC, 8000, AudioFormat.CHANNEL_IN_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10);
            track = new AudioTrack(AudioManager.STREAM_MUSIC, 8000,   AudioFormat.CHANNEL_OUT_MONO, AudioFormat.ENCODING_PCM_16BIT, N*10, AudioTrack.MODE_STREAM);
            recorder.startRecording();
//            track.play();
            /*
             * Loops until something outside of this thread stops it.
             * Reads the data from the recorder and writes it to the AudioWorker track for playback.
             */


            SSLContext sc = SSLContext.getInstance("SSL");
            sc.init(null, trustAllCerts, new java.security.SecureRandom());
            SSLSocketFactory sslFact = sc.getSocketFactory();
            SSLSocket socket = (SSLSocket)sslFact.createSocket(host, port);

            socket.setSoTimeout(10000);
            InputStream inputStream = socket.getInputStream();
            DataInputStream in = new DataInputStream(new BufferedInputStream(inputStream));
            OutputStream outputStream = socket.getOutputStream();
            DataOutputStream os = new DataOutputStream(new BufferedOutputStream(outputStream));
            PrintWriter socketPrinter = new PrintWriter(os);
            BufferedReader br = new BufferedReader(new InputStreamReader(in));

//          socketPrinter.println("POST /transmitaudio?patient=1333369798370 HTTP/1.0");
            socketPrinter.println("POST /transmitaudio?id="+id+" HTTP/1.0");
            socketPrinter.println("Content-Type: audio/basic");
            socketPrinter.println("Content-Length: 99999");
            socketPrinter.println("Connection: Keep-Alive");
            socketPrinter.println("Cache-Control: no-cache");
            socketPrinter.println();
            socketPrinter.flush();


            while(!stopped)
            { 
                Log.i("Map", "Writing new data to buffer");
                short[] buffer = buffers[ix++ % buffers.length];

                N = recorder.read(buffer,0,buffer.length);
                track.write(buffer, 0, buffer.length);

                byte[] bytes2 = new byte[buffer.length * 2];
                ByteBuffer.wrap(bytes2).order(ByteOrder.LITTLE_ENDIAN).asShortBuffer().put(buffer);

                read(bytes2, 0, bytes2.length);
                os.write(bytes2,0,bytes2.length);

//
//                ByteBuffer byteBuf = ByteBuffer.allocate(2*N);
//              System.out.println("byteBuf length "+2*N);
//                int i = 0;
//                while (buffer.length > i) {
//                    byteBuf.putShort(buffer[i]);
//                    i++;
//                }         
//                byte[] b = new byte[byteBuf.remaining()];
            }
            os.close();
        }
        catch(Throwable x)
        { 
            Log.w("AudioWorker", "Error reading voice AudioWorker", x);
        }
        /*
         * Frees the thread's resources after the loop completes so that it can be run again
         */
        finally
        { 
            recorder.stop();
            recorder.release();
            track.stop();
            track.release();
        }
    }

    /**
     * Called from outside of the thread in order to stop the recording/playback loop
     */
    public void close()
    { 
         stopped = true;
    }
    public void resumeThread()
    { 
         stopped = false;
         run();
    }

    TrustManager[] trustAllCerts = new TrustManager[]{
            new X509TrustManager() {
                public java.security.cert.X509Certificate[] getAcceptedIssuers() {
                    return null;
                }
                public void checkClientTrusted(
                        java.security.cert.X509Certificate[] certs, String authType) {
                }
                public void checkServerTrusted(
                        java.security.cert.X509Certificate[] chain, String authType) {
                    for (int j=0; j<chain.length; j++)
                    {
                        System.out.println("Client certificate information:");
                        System.out.println("  Subject DN: " + chain[j].getSubjectDN());
                        System.out.println("  Issuer DN: " + chain[j].getIssuerDN());
                        System.out.println("  Serial number: " + chain[j].getSerialNumber());
                        System.out.println("");
                    }
                }
            }
    };


    public static void encode(byte[] pcmBuf, int pcmOffset,
            byte[] ulawBuf, int ulawOffset, int length, int max) {

        // from   ulaw  in wikipedia
        // +8191 to +8159                          0x80
        // +8158 to +4063 in 16 intervals of 256   0x80 + interval number
        // +4062 to +2015 in 16 intervals of 128   0x90 + interval number
        // +2014 to  +991 in 16 intervals of  64   0xA0 + interval number
        //  +990 to  +479 in 16 intervals of  32   0xB0 + interval number
        //  +478 to  +223 in 16 intervals of  16   0xC0 + interval number
        //  +222 to   +95 in 16 intervals of   8   0xD0 + interval number
        //   +94 to   +31 in 16 intervals of   4   0xE0 + interval number
        //   +30 to    +1 in 15 intervals of   2   0xF0 + interval number
        //     0                                   0xFF

        //    -1                                   0x7F
        //   -31 to    -2 in 15 intervals of   2   0x70 + interval number
        //   -95 to   -32 in 16 intervals of   4   0x60 + interval number
        //  -223 to   -96 in 16 intervals of   8   0x50 + interval number
        //  -479 to  -224 in 16 intervals of  16   0x40 + interval number
        //  -991 to  -480 in 16 intervals of  32   0x30 + interval number
        // -2015 to  -992 in 16 intervals of  64   0x20 + interval number
        // -4063 to -2016 in 16 intervals of 128   0x10 + interval number
        // -8159 to -4064 in 16 intervals of 256   0x00 + interval number
        // -8192 to -8160                          0x00

        // set scale factors
        if (max <= 0) max = MAX_ULAW;

        int coef = MAX_ULAW * (1 << SCALE_BITS) / max;

        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[pcmOffset++]) + (pcmBuf[pcmOffset++] << 8);
            pcm = (pcm * coef) >> SCALE_BITS;

            int ulaw;
            if (pcm >= 0) {
                ulaw = pcm <= 0 ? 0xff :
                        pcm <=   30 ? 0xf0 + ((  30 - pcm) >> 1) :
                        pcm <=   94 ? 0xe0 + ((  94 - pcm) >> 2) :
                        pcm <=  222 ? 0xd0 + (( 222 - pcm) >> 3) :
                        pcm <=  478 ? 0xc0 + (( 478 - pcm) >> 4) :
                        pcm <=  990 ? 0xb0 + (( 990 - pcm) >> 5) :
                        pcm <= 2014 ? 0xa0 + ((2014 - pcm) >> 6) :
                        pcm <= 4062 ? 0x90 + ((4062 - pcm) >> 7) :
                        pcm <= 8158 ? 0x80 + ((8158 - pcm) >> 8) :
                        0x80;
            } else {
                ulaw = -1 <= pcm ? 0x7f :
                          -31 <= pcm ? 0x70 + ((pcm -   -31) >> 1) :
                          -95 <= pcm ? 0x60 + ((pcm -   -95) >> 2) :
                         -223 <= pcm ? 0x50 + ((pcm -  -223) >> 3) :
                         -479 <= pcm ? 0x40 + ((pcm -  -479) >> 4) :
                         -991 <= pcm ? 0x30 + ((pcm -  -991) >> 5) :
                        -2015 <= pcm ? 0x20 + ((pcm - -2015) >> 6) :
                        -4063 <= pcm ? 0x10 + ((pcm - -4063) >> 7) :
                        -8159 <= pcm ? 0x00 + ((pcm - -8159) >> 8) :
                        0x00;
            }
            ulawBuf[ulawOffset++] = (byte)ulaw;
        }
    }
    public static int maxAbsPcm(byte[] pcmBuf, int offset, int length) {
        int max = 0;
        for (int i = 0; i < length; i++) {
            int pcm = (0xff & pcmBuf[offset++]) + (pcmBuf[offset++] << 8);
            if (pcm < 0) pcm = -pcm;
            if (pcm > max) max = pcm;
        }
        return max;
    }

    public int read(byte[] buf, int offset, int length) throws IOException {
        if (recorder == null) throw new IllegalStateException("not open");

        // return at least one byte, but try to fill 'length'
        while (mBufCount < 2) {
            int n = recorder.read(mBuf, mBufCount, Math.min(length * 2, mBuf.length - mBufCount));
            if (n == -1) return -1;
            mBufCount += n;
        }

        // compand data
        int n = Math.min(mBufCount / 2, length);
        encode(mBuf, 0, buf, offset, n, mMax);

        // move data to bottom of mBuf
        mBufCount -= n * 2;
        for (int i = 0; i < mBufCount; i++) mBuf[i] = mBuf[i + n * 2];

        return n;
    }

}
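To sanity-check bytes produced by an encoder like the one above against the segment table in its comments, the matching textbook G.711 expander (decoder) is handy. Again, this is a sketch and not part of the original code:

```java
public class UlawDecode {
    private static final int BIAS = 0x84; // same bias the standard encoder adds

    // Textbook G.711 mu-law expansion of one byte back to 16-bit linear PCM.
    static int ulawToLinear(int ulaw) {
        ulaw = ~ulaw & 0xff;               // the codeword is stored inverted
        int sign = ulaw & 0x80;
        int exponent = (ulaw >> 4) & 0x07;
        int mantissa = ulaw & 0x0F;
        int sample = (((mantissa << 3) + BIAS) << exponent) - BIAS;
        return sign != 0 ? -sample : sample;
    }

    public static void main(String[] args) {
        System.out.println(ulawToLinear(0xFF)); // 0: mu-law 0xFF is silence, per the table
        System.out.println(ulawToLinear(0x7F)); // 0: smallest negative codeword
        System.out.println(ulawToLinear(0x80)); // 32124: the largest decodable magnitude
    }
}
```

Feeding captured bytes through this and listening to (or plotting) the result is a quick way to tell whether static at the receiver comes from the companding or from the transport.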
Best answer

My work on this topic has been drawn out and long-winded. I did finally get it working, though with some compromises. For that reason I will list a few warnings before giving the answer:

  1. There is still a clicking noise between buffers

  2. I get warnings because of how I am using the obj-c classes from my obj-c++ class, so there is some error here (however, from my research on pools it appears to behave the same as a leak, so I don't believe it matters much):

    Object 0x13cd20 of class __NSCFString autoreleased with no pool in place - just leaking - break on objc_autoreleaseNoPool() to debug

  3. In order to get this working I had to comment out all AQPlayer references from SpeakHereController (see below) due to errors I couldn't fix any other way. This didn't matter for me, however, since I am only recording.

So the main answer to the above is that there is a bug in AVAssetWriter that stopped it from appending the bytes and writing the audio data. I finally found this out after contacting Apple support and having them notify me about it. As far as I know the bug is specific to ulaw and AVAssetWriter, though I haven't tried many other formats to verify.
In response to this, the only other option is/was to use Audio Queues. Something I had tried before, but which had brought a bunch of problems, the biggest being my lack of knowledge in obj-c++. The class below that got things working is from the SpeakHere example, with slight changes so that the audio is ulaw formatted. The other problems came from trying to get all the files to play nicely. However, this was easily remedied by changing all filenames in the chain to .mm. The next problem was trying to use the classes in harmony. This is still a WIP, and ties into warning number 2. But my basic solution was to use the SpeakHereController (also included in the SpeakHere example) instead of accessing AQRecorder directly.

Anyway, here it is:

Using the SpeakHereController from an obj-c class

.h

@property(nonatomic,strong) SpeakHereController * recorder;

.mm

[init method]
        //AQRecorder wrapper (SpeakHereController) allocation
        _recorder = [[SpeakHereController alloc]init];
        //AQRecorder wrapper (SpeakHereController) initialization
        //technically this class is a controller and that's why its init method is awakeFromNib
        [_recorder awakeFromNib];

[recording]
    bool buttonState = self.audioRecord.isSelected;
    [self.audioRecord setSelected:!buttonState];

    if ([self.audioRecord isSelected]) {
        [self.recorder startRecord];
    } else {
        [self.recorder stopRecord];
    }

SpeakHereController.mm

#import "SpeakHereController.h"

@implementation SpeakHereController

@synthesize player;
@synthesize recorder;

@synthesize btn_record;
@synthesize btn_play;
@synthesize fileDescription;
@synthesize lvlMeter_in;
@synthesize playbackWasInterrupted;

char *OSTypeToStr(char *buf, OSType t)
{
    char *p = buf;
    char str[4], *q = str;
    *(UInt32 *)str = CFSwapInt32(t);
    for (int i = 0; i < 4; ++i) {
        if (isprint(*q) && *q != '\\')
            *p++ = *q++;
        else {
            sprintf(p, "\\x%02x", *q++);
            p += 4;
        }
    }
    *p = '\0';
    return buf;
}

-(void)setFileDescriptionForFormat: (CAStreamBasicDescription)format withName:(NSString*)name
{
    char buf[5];
    const char *dataFormat = OSTypeToStr(buf, format.mFormatID);
    NSString* description = [[NSString alloc] initWithFormat:@"(%d ch. %s @ %g Hz)", format.NumberChannels(), dataFormat, format.mSampleRate, nil];
    fileDescription.text = description;
    [description release];
}

#pragma mark Playback routines

-(void)stopPlayQueue
{
//  player->StopQueue();
    [lvlMeter_in setAq: nil];
    btn_record.enabled = YES;
}

-(void)pausePlayQueue
{
//  player->PauseQueue();
    playbackWasPaused = YES;
}


-(void)startRecord
{
    //    recorder = new AQRecorder();

    if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
    {
        [self stopRecord];
    }
    else // If we're not recording, start.
    {
        //      btn_play.enabled = NO;

        // Set the button's state to "stop"
        //      btn_record.title = @"Stop";

        // Start the recorder
        recorder->StartRecord(CFSTR("recordedFile.caf"));

        [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

        // Hook the level meter up to the Audio Queue for the recorder
        //      [lvlMeter_in setAq: recorder->Queue()];
    }
}

- (void)stopRecord
{
    // Disconnect our level meter from the audio queue
//  [lvlMeter_in setAq: nil];

    recorder->StopRecord();

    // dispose the previous playback queue
//  player->DisposeQueue(true);

    // now create a new queue for the recorded file
    recordFilePath = (CFStringRef)[NSTemporaryDirectory() stringByAppendingPathComponent: @"recordedFile.caf"];
//  player->CreateQueueForFile(recordFilePath);

    // Set the button's state back to "record"
//  btn_record.title = @"Record";
//  btn_play.enabled = YES;
}

- (IBAction)play:(id)sender
{
    if (player->IsRunning())
    {
        if (playbackWasPaused) {
//          OSStatus result = player->StartQueue(true);
//          if (result == noErr)
//              [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
        }
        else
//          [self stopPlayQueue];
            nil;
    }
    else
    {
//      OSStatus result = player->StartQueue(false);
//      if (result == noErr)
//          [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:self];
    }
}

- (IBAction)record:(id)sender
{
    if (recorder->IsRunning()) // If we are currently recording, stop and save the file.
    {
        [self stopRecord];
    }
    else // If we're not recording, start.
    {
//      btn_play.enabled = NO;
//
//      // Set the button's state to "stop"
//      btn_record.title = @"Stop";

        // Start the recorder
        recorder->StartRecord(CFSTR("recordedFile.caf"));

        [self setFileDescriptionForFormat:recorder->DataFormat() withName:@"Recorded File"];

        // Hook the level meter up to the Audio Queue for the recorder
        [lvlMeter_in setAq: recorder->Queue()];
    }
}
#pragma mark AudioSession listeners
void interruptionListener(  void *  inClientData,
                            UInt32  inInterruptionState)
{
    SpeakHereController *THIS = (SpeakHereController*)inClientData;
    if (inInterruptionState == kAudioSessionBeginInterruption)
    {
        if (THIS->recorder->IsRunning()) {
            [THIS stopRecord];
        }
        else if (THIS->player->IsRunning()) {
            //the queue will stop itself on an interruption, we just need to update the UI
            [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
            THIS->playbackWasInterrupted = YES;
        }
    }
    else if ((inInterruptionState == kAudioSessionEndInterruption) && THIS->playbackWasInterrupted)
    {
        // we were playing back when we were interrupted, so reset and resume now
//      THIS->player->StartQueue(true);
        [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueResumed" object:THIS];
        THIS->playbackWasInterrupted = NO;
    }
}

void propListener(  void *                  inClientData,
                    AudioSessionPropertyID  inID,
                    UInt32                  inDataSize,
                    const void *            inData)
{
    SpeakHereController *THIS = (SpeakHereController*)inClientData;
    if (inID == kAudioSessionProperty_AudioRouteChange)
    {
        CFDictionaryRef routeDictionary = (CFDictionaryRef)inData;
        //CFShow(routeDictionary);
        CFNumberRef reason = (CFNumberRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_Reason));
        SInt32 reasonVal;
        CFNumberGetValue(reason, kCFNumberSInt32Type, &reasonVal);
        if (reasonVal != kAudioSessionRouteChangeReason_CategoryChange)
        {
            /*CFStringRef oldRoute = (CFStringRef)CFDictionaryGetValue(routeDictionary, CFSTR(kAudioSession_AudioRouteChangeKey_OldRoute));
            if (oldRoute)
            {
                printf("old route:\n");
                CFShow(oldRoute);
            }
            else
                printf("ERROR GETTING OLD AUDIO ROUTE!\n");

            CFStringRef newRoute;
            UInt32 size; size = sizeof(CFStringRef);
            OSStatus error = AudioSessionGetProperty(kAudioSessionProperty_AudioRoute, &size, &newRoute);
            if (error) printf("ERROR GETTING NEW AUDIO ROUTE! %d\n", error);
            else
            {
                printf("new route:\n");
                CFShow(newRoute);
            }*/

            if (reasonVal == kAudioSessionRouteChangeReason_OldDeviceUnavailable)
            {
                if (THIS->player->IsRunning()) {
                    [THIS pausePlayQueue];
                    [[NSNotificationCenter defaultCenter] postNotificationName:@"playbackQueueStopped" object:THIS];
                }
            }

            // stop the queue if we had a non-policy route change
            if (THIS->recorder->IsRunning()) {
                [THIS stopRecord];
            }
        }
    }
    else if (inID == kAudioSessionProperty_AudioInputAvailable)
    {
        if (inDataSize == sizeof(UInt32)) {
            UInt32 isAvailable = *(UInt32*)inData;
            // disable recording if input is not available
            THIS->btn_record.enabled = (isAvailable > 0) ? YES : NO;
        }
    }
}

#pragma mark Initialization routines
- (void)awakeFromNib
{
    // Allocate our singleton instance for the recorder & player object
    recorder = new AQRecorder();
    player = nil;//new AQPlayer();

    OSStatus error = AudioSessionInitialize(NULL, NULL, interruptionListener, self);
    if (error) printf("ERROR INITIALIZING AUDIO SESSION! %d\n", error);
    else
    {
        UInt32 category = kAudioSessionCategory_PlayAndRecord;
        error = AudioSessionSetProperty(kAudioSessionProperty_AudioCategory, sizeof(category), &category);
        if (error) printf("couldn't set audio category!");

        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioRouteChange, propListener, self);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);
        UInt32 inputAvailable = 0;
        UInt32 size = sizeof(inputAvailable);

        // we do not want to allow recording if input is not available
        error = AudioSessionGetProperty(kAudioSessionProperty_AudioInputAvailable, &size, &inputAvailable);
        if (error) printf("ERROR GETTING INPUT AVAILABILITY! %d\n", error);
//      btn_record.enabled = (inputAvailable) ? YES : NO;

        // we also need to listen to see if input availability changes
        error = AudioSessionAddPropertyListener(kAudioSessionProperty_AudioInputAvailable, propListener, self);
        if (error) printf("ERROR ADDING AUDIO SESSION PROP LISTENER! %d\n", error);

        error = AudioSessionSetActive(true);
        if (error) printf("AudioSessionSetActive (true) failed");
    }

//  [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueStopped:) name:@"playbackQueueStopped" object:nil];
//  [[NSNotificationCenter defaultCenter] addObserver:self selector:@selector(playbackQueueResumed:) name:@"playbackQueueResumed" object:nil];

//  UIColor *bgColor = [[UIColor alloc] initWithRed:.39 green:.44 blue:.57 alpha:.5];
//  [lvlMeter_in setBackgroundColor:bgColor];
//  [lvlMeter_in setBorderColor:bgColor];
//  [bgColor release];

    // disable the play button since we have no recording to play yet
//  btn_play.enabled = NO;
//  playbackWasInterrupted = NO;
//  playbackWasPaused = NO;
}

#pragma mark Notification routines
- (void)playbackQueueStopped:(NSNotification *)note
{
    btn_play.title = @"Play";
    [lvlMeter_in setAq: nil];
    btn_record.enabled = YES;
}

- (void)playbackQueueResumed:(NSNotification *)note
{
    btn_play.title = @"Stop";
    btn_record.enabled = NO;
    [lvlMeter_in setAq: player->Queue()];
}

#pragma mark Cleanup
- (void)dealloc
{
    [btn_record release];
    [btn_play release];
    [fileDescription release];
    [lvlMeter_in release];

//  delete player;
    delete recorder;

    [super dealloc];
}

@end

AQRecorder (the .h has 2 lines of importance:

#define kNumberRecordBuffers    3
#define kBufferDurationSeconds 5.0

)

#include "AQRecorder.h"
//#include "UploadAudioWrapperInterface.h"
//#include "RestClient.h"

RestClient * restClient;
NSData* data;

// ____________________________________________________________________________________
// Determine the size, in bytes, of a buffer necessary to represent the supplied number
// of seconds of audio data.
int AQRecorder::ComputeRecordBufferSize(const AudioStreamBasicDescription *format, float secondsiii
{
    int packets, frames, bytes = 0;
    try {
        frames = (int)ceil(seconds * format->mSampleRate);

        if (format->mBytesPerFrame > 0)
            bytes = frames * format->mBytesPerFrame;
        else {
            UInt32 maxPacketSize;
            if (format->mBytesPerPacket > 0)
                maxPacketSize = format->mBytesPerPacket;    // constant packet size
            else {
                UInt32 propertySize = sizeof(maxPacketSize);
                XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MaximumOutputPacketSize, &maxPacketSize,
                                                 &propertySize), "couldn't get queue's maximum output packet size");
            }
            if (format->mFramesPerPacket > 0)
                packets = frames / format->mFramesPerPacket;
            else
                packets = frames;   // worst-case scenario: 1 frame in a packet
            if (packets == 0)       // sanity check
                packets = 1;
            bytes = packets * maxPacketSize;
        }
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
        return 0;
    }   
    return bytes;
}
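For the µ-law format used here (8 kHz, mono, 1 byte per frame), the calculation above reduces to the constant-bytes-per-frame branch. A minimal standalone sketch of just that case (plain C++, no Core Audio, hypothetical function name):

```cpp
#include <cmath>

// Simplified buffer-size calculation mirroring the mBytesPerFrame > 0 branch
// of ComputeRecordBufferSize above. For G.711 mu-law audio there are 8000
// frames per second and 1 byte per frame, so 5 seconds needs 40000 bytes.
int ComputeConstantRateBufferSize(double sampleRate, int bytesPerFrame, double seconds)
{
    int frames = (int)ceil(seconds * sampleRate);  // frames needed for the duration
    return frames * bytesPerFrame;                 // constant size per frame
}
```

With kBufferDurationSeconds = 5.0 and the 8 kHz µ-law format, each of the three record buffers comes out to 40000 bytes.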

// ____________________________________________________________________________________
// AudioQueue callback function, called when an input buffer has been filled.
void AQRecorder::MyInputBufferHandler(  void *                              inUserData,
                                        AudioQueueRef                       inAQ,
                                        AudioQueueBufferRef                 inBuffer,
                                        const AudioTimeStamp *              inStartTime,
                                        UInt32                              inNumPackets,
                                        const AudioStreamPacketDescription* inPacketDesc)
{
    AQRecorder *aqr = (AQRecorder *)inUserData;


    try {
        if (inNumPackets > 0) {
            // write packets to file
//          XThrowIfError(AudioFileWritePackets(aqr->mRecordFile, FALSE, inBuffer->mAudioDataByteSize,
//                                           inPacketDesc, aqr->mRecordPacket, &inNumPackets, inBuffer->mAudioData),
//                     "AudioFileWritePackets failed");
            aqr->mRecordPacket += inNumPackets;



//            int numBytes = inBuffer->mAudioDataByteSize;       
//            SInt8 *testBuffer = (SInt8*)inBuffer->mAudioData;
//            
//            for (int i=0; i < numBytes; i++)
//            {
//                SInt8 currentData = testBuffer[i];
//                printf("Current data in testbuffer is %d", currentData);
//                
//                NSData * temp = [NSData dataWithBytes:currentData length:sizeof(currentData)];
//            }


            data=[[NSData dataWithBytes:inBuffer->mAudioData length:inBuffer->mAudioDataByteSize]retain];

            [restClient uploadAudioData:data url:nil];

        }


        // if we're not stopping, re-enqueue the buffer so that it gets filled again
        if (aqr->IsRunning())
            XThrowIfError(AudioQueueEnqueueBuffer(inAQ, inBuffer, 0, NULL), "AudioQueueEnqueueBuffer failed");
    } catch (CAXException e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }

}
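The `uploadAudioData:url:` call above hands each filled buffer to the REST client, which (per the camera API quoted in the question) must POST it with `Content-type: audio/basic` and an ignored `Content-Length`. A hedged sketch of the request header such an uploader might send; the function name and camera id are illustrative, not from the original code:

```cpp
#include <sstream>
#include <string>

// Hypothetical helper: build the HTTP header for streaming mu-law audio to the
// camera, following the API in the question:
//   POST /transmitaudio?id=   Content-type: audio/basic   Content-Length: 99999
std::string MakeTransmitAudioHeader(const std::string &cameraId)
{
    std::ostringstream h;
    h << "POST /transmitaudio?id=" << cameraId << " HTTP/1.1\r\n";
    h << "Content-type: audio/basic\r\n";
    h << "Content-Length: 99999\r\n";   // per the API, the length is ignored
    h << "\r\n";                        // blank line, then raw mu-law bytes follow
    return h.str();
}
```

After sending this header once over the (SSL) connection, each callback's `inBuffer->mAudioData` bytes can be written to the same socket as a continuous stream.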

AQRecorder::AQRecorder()
{
    mIsRunning = false;
    mRecordPacket = 0;

    data = [[NSData alloc]init];
    restClient = [[RestClient sharedManager]retain];
}

AQRecorder::~AQRecorder()
{
    AudioQueueDispose(mQueue, TRUE);
    AudioFileClose(mRecordFile);

    if (mFileName) {
        CFRelease(mFileName);
    }

    [restClient release];
    [data release];
}

// ____________________________________________________________________________________
// Copy a queue's encoder's magic cookie to an audio file.
void AQRecorder::CopyEncoderCookieToFile()
{
    UInt32 propertySize;
    // get the magic cookie, if any, from the converter     
    OSStatus err = AudioQueueGetPropertySize(mQueue, kAudioQueueProperty_MagicCookie, &propertySize);

    // we can get a noErr result and also a propertySize == 0
    // -- if the file format does support magic cookies, but this file doesn't have one.
    if (err == noErr && propertySize > 0) {
        Byte *magicCookie = new Byte[propertySize];
        UInt32 magicCookieSize;
        XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_MagicCookie, magicCookie, &propertySize), "get audio converter's magic cookie");
        magicCookieSize = propertySize; // the converter lies and tells us the wrong size

        // now set the magic cookie on the output file
        UInt32 willEatTheCookie = false;
        // the converter wants to give us one; will the file take it?
        err = AudioFileGetPropertyInfo(mRecordFile, kAudioFilePropertyMagicCookieData, NULL, &willEatTheCookie);
        if (err == noErr && willEatTheCookie) {
            err = AudioFileSetProperty(mRecordFile, kAudioFilePropertyMagicCookieData, magicCookieSize, magicCookie);
            XThrowIfError(err, "set audio file's magic cookie");
        }
        }
        delete[] magicCookie;
    }
}

void AQRecorder::SetupAudioFormat(UInt32 inFormatID)
{
    memset(&mRecordFormat, 0, sizeof(mRecordFormat));

    UInt32 size = sizeof(mRecordFormat.mSampleRate);
    XThrowIfError(AudioSessionGetProperty(  kAudioSessionProperty_CurrentHardwareSampleRate,
                                        &size,
                                        &mRecordFormat.mSampleRate), "couldn't get hardware sample rate");

    // override the device sample rate to 8 kHz

    mRecordFormat.mSampleRate = 8000.0;

    size = sizeof(mRecordFormat.mChannelsPerFrame);
    XThrowIfError(AudioSessionGetProperty(  kAudioSessionProperty_CurrentHardwareInputNumberChannels,
                                        &size,
                                        &mRecordFormat.mChannelsPerFrame), "couldn't get input channel count");


//    mRecordFormat.mChannelsPerFrame = 1;

    mRecordFormat.mFormatID = inFormatID;
    if (inFormatID == kAudioFormatLinearPCM)
    {
        // if we want pcm, default to signed 16-bit little-endian
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger | kLinearPCMFormatFlagIsPacked;
        mRecordFormat.mBitsPerChannel = 16;
        mRecordFormat.mBytesPerPacket = mRecordFormat.mBytesPerFrame = (mRecordFormat.mBitsPerChannel / 8) * mRecordFormat.mChannelsPerFrame;
        mRecordFormat.mFramesPerPacket = 1;
    }

    if (inFormatID == kAudioFormatULaw) {
//        NSLog(@"is ulaw");
        mRecordFormat.mFormatFlags = kLinearPCMFormatFlagIsSignedInteger;
        mRecordFormat.mSampleRate = 8000.0;
//        mRecordFormat.mFormatFlags = 0;
        mRecordFormat.mFramesPerPacket = 1;
        mRecordFormat.mChannelsPerFrame = 1;
        mRecordFormat.mBitsPerChannel = 16; // was 8
        mRecordFormat.mBytesPerPacket = 1;
        mRecordFormat.mBytesPerFrame = 1;
    }
}
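A note on the `mBitsPerChannel = 16; // was 8` line above: on the wire, G.711 µ-law carries exactly one byte per sample at 8 kHz, which is why `mBytesPerPacket` and `mBytesPerFrame` are 1 and the bit rate works out to 64 kbit/s. A reference sketch of the standard µ-law companding step (16-bit PCM in, 8-bit code out), included only to illustrate the format, not taken from the question's code:

```cpp
#include <cstdint>

// Standard G.711 mu-law encoder: one 16-bit PCM sample in, one 8-bit code out.
// 8000 samples/sec * 8 bits = the 64 kbit/s the camera API requires.
uint8_t LinearToULaw(int16_t pcm)
{
    const int kBias = 0x84;   // bias so the smallest segment starts at zero
    const int kClip = 32635;  // clip magnitude to avoid overflow after biasing

    int sign = (pcm >> 8) & 0x80;     // save the sign bit
    int mag = sign ? -pcm : pcm;      // work on the magnitude
    if (mag > kClip) mag = kClip;
    mag += kBias;

    // find the segment (exponent): position of the highest set bit
    int exponent = 7;
    for (int mask = 0x4000; (mag & mask) == 0 && exponent > 0; mask >>= 1)
        exponent--;

    int mantissa = (mag >> (exponent + 3)) & 0x0F;
    return (uint8_t)~(sign | (exponent << 4) | mantissa);  // mu-law bytes are inverted
}
```

Silence (sample 0) encodes to 0xFF, which is why a raw µ-law stream of a quiet mic is mostly 0xFF/0x7F bytes; this can be a quick sanity check when inspecting the buffers sent to the server.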

NSString * GetDocumentDirectory(void)
{
    NSArray *paths = NSSearchPathForDirectoriesInDomains(NSDocumentDirectory, NSUserDomainMask, YES);
    NSString *basePath = ([paths count] > 0) ? [paths objectAtIndex:0] : nil;
    return basePath;
}


void AQRecorder::StartRecord(CFStringRef inRecordFile)
{
    int i, bufferByteSize;
    UInt32 size;
    CFURLRef url;

    try {       
        mFileName = CFStringCreateCopy(kCFAllocatorDefault, inRecordFile);

        // specify the recording format
        SetupAudioFormat(kAudioFormatULaw /*kAudioFormatLinearPCM*/);

        // create the queue
        XThrowIfError(AudioQueueNewInput(
                                      &mRecordFormat,
                                      MyInputBufferHandler,
                                      this /* userData */,
                                      NULL /* run loop */, NULL /* run loop mode */,
                                      0 /* flags */, &mQueue), "AudioQueueNewInput failed");

        // get the record format back from the queue's audio converter --
        // the file may require a more specific stream description than was necessary to create the encoder.
        mRecordPacket = 0;

        size = sizeof(mRecordFormat);
        XThrowIfError(AudioQueueGetProperty(mQueue, kAudioQueueProperty_StreamDescription,
                                         &mRecordFormat, &size), "couldn't get queue's format");

        NSString *basePath = GetDocumentDirectory();
        NSString *recordFile = [basePath /*NSTemporaryDirectory()*/ stringByAppendingPathComponent: (NSString*)inRecordFile];

        url = CFURLCreateWithString(kCFAllocatorDefault, (CFStringRef)recordFile, NULL);

        // create the audio file
        XThrowIfError(AudioFileCreateWithURL(url, kAudioFileCAFType, &mRecordFormat, kAudioFileFlags_EraseFile,
                                          &mRecordFile), "AudioFileCreateWithURL failed");
        CFRelease(url);

        // copy the cookie first to give the file object as much info as we can about the data going in
        // not necessary for pcm, but required for some compressed audio
        CopyEncoderCookieToFile();


        // allocate and enqueue buffers
        bufferByteSize = ComputeRecordBufferSize(&mRecordFormat, kBufferDurationSeconds);   // enough bytes for kBufferDurationSeconds
        for (i = 0; i < kNumberRecordBuffers; ++i) {
            XThrowIfError(AudioQueueAllocateBuffer(mQueue, bufferByteSize, &mBuffers[i]),
                       "AudioQueueAllocateBuffer failed");
            XThrowIfError(AudioQueueEnqueueBuffer(mQueue, mBuffers[i], 0, NULL),
                       "AudioQueueEnqueueBuffer failed");
        }
        // start the queue
        mIsRunning = true;
        XThrowIfError(AudioQueueStart(mQueue, NULL), "AudioQueueStart failed");
    }
    catch (CAXException &e) {
        char buf[256];
        fprintf(stderr, "Error: %s (%s)\n", e.mOperation, e.FormatError(buf));
    }
    catch (...) {
        fprintf(stderr, "An unknown error occurred\n");
    }
    }   

}

void AQRecorder::StopRecord()
{
    // end recording
    mIsRunning = false;
//    XThrowIfError(AudioQueueReset(mQueue), "AudioQueueReset failed");
    XThrowIfError(AudioQueueStop(mQueue, true), "AudioQueueStop failed");
    // a codec may update its cookie at the end of an encoding session, so reapply it to the file now
    CopyEncoderCookieToFile();
    if (mFileName)
    {
        CFRelease(mFileName);
        mFileName = NULL;
    }
    AudioQueueDispose(mQueue, true);
    AudioFileClose(mRecordFile);
}

Feel free to comment on or improve this answer if you find a better solution. Note that this was my first attempt, and I'm sure it is not the most desirable or proper approach.

Answers

Could you use the GameKit framework and send the audio over Bluetooth instead? There are examples of this in the developer libraries.




