iPhone SDK 4 AVFoundation - How to use captureStillImageAsynchronouslyFromConnection correctly?

I am trying to use the new AVFoundation framework for taking still pictures with the iPhone.

With a button press this method is called. I can hear the shutter sound, but I never see the log output. If I call this method several times, the camera preview freezes.

Is there any tutorial out there for how to use captureStillImageAsynchronouslyFromConnection?

[[self stillImageOutput] captureStillImageAsynchronouslyFromConnection:[[self stillImageOutput].connections objectAtIndex:0]
                                                     completionHandler:^(CMSampleBufferRef imageDataSampleBuffer, NSError *error) {
                                                         NSLog(@"inside");
                                                     }];
Here is my capture setup:

- (void)initCapture {
    AVCaptureDeviceInput *captureInput = [AVCaptureDeviceInput 
                                          deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] 
                                          error:nil];

    AVCaptureVideoDataOutput *captureOutput = [[AVCaptureVideoDataOutput alloc] init];

    captureOutput.alwaysDiscardsLateVideoFrames = YES; 

    dispatch_queue_t queue;
    queue = dispatch_queue_create("cameraQueue", NULL);
    [captureOutput setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue);

    NSString* key = (NSString*)kCVPixelBufferPixelFormatTypeKey; 
    NSNumber* value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA]; 
    NSDictionary* videoSettings = [NSDictionary dictionaryWithObject:value forKey:key]; 
    [captureOutput setVideoSettings:videoSettings]; 

    self.captureSession = [[AVCaptureSession alloc] init];
    self.captureSession.sessionPreset = AVCaptureSessionPresetLow;

    [self.captureSession addInput:captureInput];
    [self.captureSession addOutput:captureOutput];

    self.prevLayer = [AVCaptureVideoPreviewLayer layerWithSession: self.captureSession];

    [self.prevLayer setOrientation:AVCaptureVideoOrientationLandscapeLeft];

    self.prevLayer.frame = CGRectMake(0.0, 0.0, 480.0, 320.0);
    self.prevLayer.videoGravity = AVLayerVideoGravityResizeAspectFill;

    [self.view.layer addSublayer: self.prevLayer];


    // Setup the default file outputs
    AVCaptureStillImageOutput *_stillImageOutput = [[[AVCaptureStillImageOutput alloc] init] autorelease];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:
                                    AVVideoCodecJPEG, AVVideoCodecKey,
                                    nil];
    [_stillImageOutput setOutputSettings:outputSettings];
    [outputSettings release];
    [self setStillImageOutput:_stillImageOutput];   

    if ([self.captureSession canAddOutput:stillImageOutput]) {
        [self.captureSession addOutput:stillImageOutput];
    }

    [self.captureSession commitConfiguration];
    [self.captureSession startRunning];

}
Best Answer

We had this problem when 4.0 was still in beta. I tried a fair bunch of things. Here goes:

  • AVCaptureStillImageOutput and AVCaptureVideoDataOutput do not appear to play nicely with each other. If the video output is running, the image output never seems to complete (until you pause the session by putting the phone to sleep; then you seem to get a single image out).
  • AVCaptureStillImageOutput only seems to work sensibly with AVCaptureSessionPresetPhoto; otherwise you effectively get JPEG-encoded video frames. Might as well use higher-quality BGRA frames (incidentally, the camera's native output appears to be BGRA; it doesn't appear to have the colour subsampling of 2vuy/420v).
  • The video (everything that isn't Photo) and Photo presets seem fundamentally different; you never get any video frames if the session is in photo mode (you don't get an error either). Maybe they changed this...
  • You can't seem to have two capture sessions (one with a video preset and a video output, one with the Photo preset and an image output). They might have fixed this.
  • You can stop the session, change the preset to photo, start the session, take the photo, and when the photo completes, stop, change the preset back, and start again (see the sketch just after this list). This takes a while, and the video preview layer stalls and looks terrible (it re-adjusts exposure levels). This also occasionally deadlocked in the beta (after calling -stopRunning, session.running was still YES).
  • You might be able to disable the AVCaptureConnection (it's supposed to work). I remember this deadlocking; they may have fixed it.
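For concreteness, the stop/re-preset/start dance from the fifth bullet looks roughly like this. It is only a minimal sketch, assuming session is your running AVCaptureSession and that Medium was the preset you started from:

// Slow, and the preview visibly stalls while the session restarts.
[session stopRunning];
session.sessionPreset = AVCaptureSessionPresetPhoto;
[session startRunning];
// ...take the still; then, in its completion handler:
[session stopRunning];
session.sessionPreset = AVCaptureSessionPresetMedium; // restore the previous preset
[session startRunning];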

I ended up just capturing video frames. The "take picture" button simply sets a flag; in the video frame callback, if the flag is set, the frame is returned as a UIImage* instead of going through the normal video path. This was sufficient for our image-processing needs; "take picture" exists largely so the user can be given a negative response (and an option to submit a bug report), and we don't actually want 2/3/5-megapixel images anyway, since they take ages to process.
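For illustration, here is a minimal sketch of that flag-based workaround, assuming a BGRA AVCaptureVideoDataOutput like the one in the question, a BOOL ivar wantsStillImage that the button sets, and a hypothetical -didCaptureStillImage: method that receives the result:

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    if (!wantsStillImage) return; // normal per-frame processing elided
    wantsStillImage = NO;

    // Wrap the BGRA pixel buffer in a bitmap context and copy it out as a CGImage.
    CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferLockBaseAddress(imageBuffer, 0);
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGContextRef context = CGBitmapContextCreate(CVPixelBufferGetBaseAddress(imageBuffer),
                                                 CVPixelBufferGetWidth(imageBuffer),
                                                 CVPixelBufferGetHeight(imageBuffer),
                                                 8,
                                                 CVPixelBufferGetBytesPerRow(imageBuffer),
                                                 colorSpace,
                                                 kCGBitmapByteOrder32Little | kCGImageAlphaPremultipliedFirst);
    CGImageRef cgImage = CGBitmapContextCreateImage(context);
    UIImage *image = [UIImage imageWithCGImage:cgImage];
    CGImageRelease(cgImage);
    CGContextRelease(context);
    CGColorSpaceRelease(colorSpace);
    CVPixelBufferUnlockBaseAddress(imageBuffer, 0);

    // UIKit is main-thread only, so hand the image over there.
    [self performSelectorOnMainThread:@selector(didCaptureStillImage:)
                           withObject:image
                        waitUntilDone:NO];
}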

If video frames are not good enough for you (i.e. you want to capture viewfinder frames between high-resolution image captures), I would first check whether they have fixed this using multiple AVCapture sessions, since that is the only way you can set both presets.

It is probably worth filing a bug. I filed one around the launch of 4.0 GM; Apple asked me for some sample code, but by then I had decided to use the video-frame workaround and had a release to ship.

Additionally, the "low" preset is very low-res indeed (and results in a low-resolution, low-framerate video preview). I would go for 640x480 if available, falling back to Medium if not.
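In code, that fallback is just a capability check on the session; a minimal sketch:

if ([session canSetSessionPreset:AVCaptureSessionPreset640x480]) {
    session.sessionPreset = AVCaptureSessionPreset640x480;
} else {
    session.sessionPreset = AVCaptureSessionPresetMedium;
}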

Other Answers

After a lot of trial and error, I worked out how to do this.

Hint: Apple's official docs are - simply - wrong. The code they give you doesn't actually work.

I wrote it up with step-by-step instructions here:

http://red-glasses.com/index.php/tutorials/ios4-take-photos-with-live-video-preview-using-avfoundation/

There is lots of code at that link, but in summary:

-(void) viewDidAppear:(BOOL)animated
{
    AVCaptureSession *session = [[AVCaptureSession alloc] init];
    session.sessionPreset = AVCaptureSessionPresetMedium;

    CALayer *viewLayer = self.vImagePreview.layer;
    NSLog(@"viewLayer = %@", viewLayer);

    AVCaptureVideoPreviewLayer *captureVideoPreviewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];

    captureVideoPreviewLayer.frame = self.vImagePreview.bounds;
    [self.vImagePreview.layer addSublayer:captureVideoPreviewLayer];

    AVCaptureDevice *device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];

    NSError *error = nil;
    AVCaptureDeviceInput *input = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
    if (!input) {
        // Handle the error appropriately.
        NSLog(@"ERROR: trying to open camera: %@", error);
    }
    [session addInput:input];

    stillImageOutput = [[AVCaptureStillImageOutput alloc] init];
    NSDictionary *outputSettings = [[NSDictionary alloc] initWithObjectsAndKeys:AVVideoCodecJPEG, AVVideoCodecKey, nil];
    [stillImageOutput setOutputSettings:outputSettings];

    [session addOutput:stillImageOutput];

    [session startRunning];
}

-(IBAction) captureNow
{
    AVCaptureConnection *videoConnection = nil;
    for (AVCaptureConnection *connection in stillImageOutput.connections)
    {
        for (AVCaptureInputPort *port in [connection inputPorts])
        {
            if ([[port mediaType] isEqual:AVMediaTypeVideo] )
            {
                videoConnection = connection;
                break;
            }
        }
        if (videoConnection) { break; }
    }

    NSLog(@"about to request a capture from: %@", stillImageOutput);
    [stillImageOutput captureStillImageAsynchronouslyFromConnection:videoConnection completionHandler: ^(CMSampleBufferRef imageSampleBuffer, NSError *error)
    {
         CFDictionaryRef exifAttachments = CMGetAttachment( imageSampleBuffer, kCGImagePropertyExifDictionary, NULL);
         if (exifAttachments)
         {
            // Do something with the attachments.
            NSLog(@"attachements: %@", exifAttachments);
         }
        else
            NSLog(@"no attachments");

        NSData *imageData = [AVCaptureStillImageOutput jpegStillImageNSDataRepresentation:imageSampleBuffer];
        UIImage *image = [[UIImage alloc] initWithData:imageData];

        self.vImage.image = image;
     }];
}
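If you want to persist the capture rather than only display it, the NSData returned by jpegStillImageNSDataRepresentation: is already JPEG-encoded, so it can be written straight out; a small sketch (the file name is just illustrative):

// Inside the completion handler, after building imageData:
NSString *path = [NSTemporaryDirectory() stringByAppendingPathComponent:@"still.jpg"];
[imageData writeToFile:path atomically:YES];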

This was a huge help; I kept getting stuck in the weeds while trying to follow the AVCam example.

Here is a complete working project with my comments that explain what is happening. This illustrates how you can use the capture manager with multiple outputs. In this example there are two outputs.

The first is the still image output used in the example above.

The second provides frame by frame access to the video coming out of the camera. You can add more code to do something interesting with the frames if you like. In this example I am just updating a frame counter on the screen from within the delegate callback.

https://github.com/tdsltm/iphoneStubs/tree/master/VideoCamCaptureExample-RedGlassesBlog/VideoCamCaptureExample-RedGlassesBlog
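The frame-counter part of that project boils down to a sample-buffer delegate callback along these lines; a rough sketch, with frameCount and frameLabel as assumed properties (the callback arrives on the capture queue, so UI updates must hop to the main thread):

- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    self.frameCount++;
    dispatch_async(dispatch_get_main_queue(), ^{
        self.frameLabel.text = [NSString stringWithFormat:@"%lu",
                                (unsigned long)self.frameCount];
    });
}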

You should use Adam's answer, but if you use Swift (as most of you probably do nowadays), here is a Swift 1.2 port of his code:

  1. Make sure you import ImageIO
  2. Add a property private var stillImageOutput: AVCaptureStillImageOutput!
  3. Instantiate stillImageOutput before captureSession.startRunning():

Like this:

stillImageOutput = AVCaptureStillImageOutput()
stillImageOutput.outputSettings = [AVVideoCodecKey: AVVideoCodecJPEG]
captureSession.addOutput(stillImageOutput)

Then use this code to capture an image:

private func captureImage() {
    var videoConnection: AVCaptureConnection?
    for connection in stillImageOutput.connections as! [AVCaptureConnection] {
        for port in connection.inputPorts {
            if port.mediaType == AVMediaTypeVideo {
                videoConnection = connection
                break
            }
        }
        if videoConnection != nil {
            break
        }
    }
    print("about to request a capture from: (stillImageOutput)")
    stillImageOutput.captureStillImageAsynchronouslyFromConnection(videoConnection) { (imageSampleBuffer: CMSampleBuffer!, error: NSError!) -> Void in
        let exifAttachments = CMGetAttachment(imageSampleBuffer, kCGImagePropertyExifDictionary, nil)
        if let attachments = exifAttachments {
            // Do something with the attachments
            print("attachments: (attachments)")
        } else {
            print("no attachments")
        }
        let imageData = AVCaptureStillImageOutput.jpegStillImageNSDataRepresentation(imageSampleBuffer)
        let image = UIImage(data: imageData)
        // Do something with the image
    }
}

All of this assumes that you already have an AVCaptureSession set up and, like me, just need to capture a still image from it.




