Rendering to CVPixelBuffer on iOS

I have a Flutter plugin in which I need to do some basic 3D rendering on iOS. I decided to go with the Metal API, because OpenGL ES is deprecated on the platform.

Before writing the plugin I implemented the rendering in a plain iOS application, where it works without problems.

When rendering to the texture in the plugin, however, I get the whole area filled with black.

import Metal
import CoreImage
import CoreVideo

//preparation
Vertices = [Vertex(x:  1, y: -1, tx: 1, ty: 1),
            Vertex(x:  1, y:  1, tx: 1, ty: 0),
            Vertex(x: -1, y:  1, tx: 0, ty: 0),
            Vertex(x: -1, y: -1, tx: 0, ty: 1)]
Indices = [0, 1, 2, 2, 3, 0] // a [UInt32], to match the .uint32 index type used below

let attrs = [
    kCVPixelBufferOpenGLCompatibilityKey: true,
    kCVPixelBufferMetalCompatibilityKey: true
]
let cvret = CVPixelBufferCreate(kCFAllocatorDefault, width, height, kCVPixelFormatType_32BGRA, attrs as CFDictionary, &pixelBuffer) // FIXME: which pixel format?
if cvret != kCVReturnSuccess {
    print("failed to create pixel buffer")
}

metalDevice = MTLCreateSystemDefaultDevice()! 

let desc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: MTLPixelFormat.rgba8Unorm, width: width, height: height, mipmapped: false)
desc.usage = [.renderTarget, .shaderRead]
targetTexture = metalDevice.makeTexture(descriptor: desc)
metalCommandQueue = metalDevice.makeCommandQueue()!  
ciCtx = CIContext.init(mtlDevice: metalDevice)

let vertexBufferSize = Vertices.size() // size() is presumably a helper extension: count * element stride
vertexBuffer = metalDevice.makeBuffer(bytes: &Vertices, length: vertexBufferSize, options: .storageModeShared)

let indicesBufferSize = Indices.size()
indicesBuffer = metalDevice.makeBuffer(bytes: &Indices, length: indicesBufferSize, options: .storageModeShared)

let defaultLibrary = metalDevice.makeDefaultLibrary()!
let txProgram = defaultLibrary.makeFunction(name: "basic_fragment")
let vertexProgram = defaultLibrary.makeFunction(name: "basic_vertex") 

let pipelineStateDescriptor = MTLRenderPipelineDescriptor()
pipelineStateDescriptor.sampleCount = 1
pipelineStateDescriptor.vertexFunction = vertexProgram
pipelineStateDescriptor.fragmentFunction = txProgram
pipelineStateDescriptor.colorAttachments[0].pixelFormat = .rgba8Unorm

pipelineState = try! metalDevice.makeRenderPipelineState(descriptor: pipelineStateDescriptor)

//drawing
let renderPassDescriptor = MTLRenderPassDescriptor() 
renderPassDescriptor.colorAttachments[0].texture = targetTexture 
renderPassDescriptor.colorAttachments[0].loadAction = .clear 
renderPassDescriptor.colorAttachments[0].clearColor = MTLClearColor(red: 0.85, green: 0.85, blue: 0.85, alpha: 0.5) 
renderPassDescriptor.colorAttachments[0].storeAction = MTLStoreAction.store
renderPassDescriptor.renderTargetWidth = width
renderPassDescriptor.renderTargetHeight = height

guard let commandBuffer = metalCommandQueue.makeCommandBuffer() else { return }

guard let renderEncoder = commandBuffer.makeRenderCommandEncoder(descriptor: renderPassDescriptor) else { return }
renderEncoder.label = "Offscreen render pass"
renderEncoder.setVertexBuffer(vertexBuffer, offset: 0, index: 0) 

renderEncoder.setRenderPipelineState(pipelineState)
renderEncoder.drawIndexedPrimitives(type: .triangle, indexCount: Indices.count, indexType: .uint32, indexBuffer: indicesBuffer, indexBufferOffset: 0)

renderEncoder.endEncoding() 
commandBuffer.commit() 

//copy to pixel buffer
guard let img = CIImage(mtlTexture: targetTexture) else { return }
ciCtx.render(img, to: pixelBuffer!)
Answers

I'm pretty sure that creating a separate MTLTexture and then blitting it into a CVPixelBuffer is not the way to go. You are basically rendering into an MTLTexture and then using that result only to write it back out through a CIImage.

Instead, you can make them share an IOSurface underneath, by creating a CVPixelBuffer with CVPixelBufferCreateWithIOSurface and a corresponding MTLTexture with makeTexture(descriptor:iosurface:plane:).
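
A minimal sketch of that first approach. It requests IOSurface backing via kCVPixelBufferIOSurfacePropertiesKey rather than calling CVPixelBufferCreateWithIOSurface directly (both end up with a shared surface), and it reuses width, height and metalDevice from the question; the names sharedPixelBuffer and sharedTexture are made up for illustration:

import Metal
import CoreVideo

let ioAttrs: [CFString: Any] = [
    kCVPixelBufferMetalCompatibilityKey: true,
    kCVPixelBufferIOSurfacePropertiesKey: [:] as CFDictionary // ask for IOSurface backing
]
var sharedPixelBuffer: CVPixelBuffer?
CVPixelBufferCreate(kCFAllocatorDefault, width, height,
                    kCVPixelFormatType_32BGRA, ioAttrs as CFDictionary, &sharedPixelBuffer)

guard let sharedPixelBuffer = sharedPixelBuffer,
      let surface = CVPixelBufferGetIOSurface(sharedPixelBuffer)?.takeUnretainedValue()
else { fatalError("pixel buffer has no IOSurface backing") }

let sharedDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: width, height: height,
                                                          mipmapped: false)
sharedDesc.usage = [.renderTarget, .shaderRead]
// Rendering into this texture writes straight into the pixel buffer's memory,
// so the CIContext round trip from the question becomes unnecessary.
let sharedTexture = metalDevice.makeTexture(descriptor: sharedDesc, iosurface: surface, plane: 0)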

Or you can create an MTLBuffer that aliases the same memory as the CVPixelBuffer, then create an MTLTexture from that MTLBuffer. If you are going to use this approach, I would suggest also using MTLBlitCommandEncoder's optimizeContentsForCPUAccess(texture:) and optimizeContentsForGPUAccess(texture:) methods. You first call optimizeContentsForGPUAccess(texture:), then use the texture on the GPU, then twiddle the pixels back into a CPU-readable format with optimizeContentsForCPUAccess(texture:). That way you don't lose performance when rendering to the texture.
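
A rough sketch of that second approach, assuming the pixel buffer's base address meets the page alignment that makeBuffer(bytesNoCopy:length:options:deallocator:) requires, and reusing pixelBuffer, width, height, metalDevice and commandBuffer from the question:

// Alias the pixel buffer's memory as an MTLBuffer (nothing is copied).
CVPixelBufferLockBaseAddress(pixelBuffer!, [])
let base = CVPixelBufferGetBaseAddress(pixelBuffer!)!
let bytesPerRow = CVPixelBufferGetBytesPerRow(pixelBuffer!)
let pageSize = Int(getpagesize())
let length = (bytesPerRow * height + pageSize - 1) / pageSize * pageSize // round up to a page multiple

let aliasBuffer = metalDevice.makeBuffer(bytesNoCopy: base, length: length,
                                         options: .storageModeShared, deallocator: nil)!

let linearDesc = MTLTextureDescriptor.texture2DDescriptor(pixelFormat: .bgra8Unorm,
                                                          width: width, height: height,
                                                          mipmapped: false)
linearDesc.usage = [.renderTarget, .shaderRead]
linearDesc.storageMode = .shared
// Note: not every GPU can render into a buffer-backed (linear) texture;
// check the device's capabilities before relying on this.
let aliasTexture = aliasBuffer.makeTexture(descriptor: linearDesc, offset: 0, bytesPerRow: bytesPerRow)!

// Re-tile for the GPU before rendering, and back into a CPU-readable layout afterwards.
if let blit = commandBuffer.makeBlitCommandEncoder() {
    blit.optimizeContentsForGPUAccess(texture: aliasTexture)
    blit.endEncoding()
}
// ... encode the render pass into aliasTexture here ...
if let blit = commandBuffer.makeBlitCommandEncoder() {
    blit.optimizeContentsForCPUAccess(texture: aliasTexture)
    blit.endEncoding()
}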

Yes, using a Texture widget requires a native implementation of the FlutterTexture protocol, i.e. of its - (CVPixelBufferRef _Nullable)copyPixelBuffer; method. There is example code at this link: https://juejin.cn/post/7264920384902234168
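
For completeness, a minimal Swift sketch of that protocol; the class name and the update method are made up for illustration, and real code would need to synchronize access across threads (the linked post has a fuller version):

import Flutter
import CoreVideo

final class PixelBufferTexture: NSObject, FlutterTexture {
    // Written by your render loop, read by Flutter's raster thread.
    private var latestBuffer: CVPixelBuffer?

    func update(_ buffer: CVPixelBuffer) {
        latestBuffer = buffer
    }

    // Flutter calls this after you invoke textureFrameAvailable(_:) on the registry.
    func copyPixelBuffer() -> Unmanaged<CVPixelBuffer>? {
        guard let buffer = latestBuffer else { return nil }
        return Unmanaged.passRetained(buffer)
    }
}

Register an instance with registrar.textures().register(texture) to obtain the Int64 id that the Dart-side Texture widget needs, and call textureFrameAvailable(_:) whenever a new frame has been rendered.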




