AVFoundation is the highest-level framework for all things audio/visual in iOS. But don't underestimate it: it is very powerful and gives you all the flexibility you could possibly want (within reason, of course). What we're interested in is Camera and Media Capture. The AVFoundation Capture subsystem provides a common high-level architecture for video, photo, and audio capture services in iOS and macOS. Use this system if you want to:

- Build a custom camera UI to integrate shooting photos or videos into your app's user experience.
- Give users more direct control over photo and video capture, such as focus, exposure, and stabilization options.
- Produce different results than the system camera UI, such as RAW format photos, depth maps, or videos with custom timed metadata.
- Get live access to pixel or audio data streaming directly from a capture device.

In this part we'll be accomplishing the first point. So what is this "Capture subsystem" and how does it work? You can think of it as a pipeline from hardware to software. You have a central AVCaptureSession that has inputs and outputs. Your inputs come from AVCaptureDevices, which are software representations of the different audio/visual hardware components of an iOS device. The AVCaptureOutputs are objects, or rather ways, to extract data out from whatever is feeding into the capture session.

Section 1: Setting Up The AVCaptureSession

The first thing we need to do is import the AVFoundation framework into our file. After that, we can create the session and store a reference to it.
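To make the pipeline concrete, here is a minimal sketch of that setup: import AVFoundation, create the session, and wire up one input (the back camera) and one output (a photo output). The `CaptureManager` class name is my own for illustration; the AVFoundation calls themselves are standard API.

```swift
import AVFoundation

final class CaptureManager {
    // Store a strong reference to the session so it stays alive.
    let session = AVCaptureSession()

    func configure() {
        // Batch configuration changes between begin/commit.
        session.beginConfiguration()
        defer { session.commitConfiguration() }

        // Input: the default wide-angle back camera, wrapped in an
        // AVCaptureDeviceInput so the session can consume it.
        guard
            let camera = AVCaptureDevice.default(.builtInWideAngleCamera,
                                                 for: .video,
                                                 position: .back),
            let input = try? AVCaptureDeviceInput(device: camera),
            session.canAddInput(input)
        else { return }
        session.addInput(input)

        // Output: an AVCapturePhotoOutput pulls still photos out of
        // whatever is feeding into the session.
        let photoOutput = AVCapturePhotoOutput()
        guard session.canAddOutput(photoOutput) else { return }
        session.addOutput(photoOutput)
    }
}
```

Note that `startRunning()` blocks while the session spins up, so in a real app you would call it from a background queue, and you need the `NSCameraUsageDescription` key in Info.plist before the device input will work.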