FFGear API Usage Examples

Check out all FFdecoder API's basic recipes here ➶ to better understand these usage examples.

After going through the following usage examples, check out more of its advanced configurations here ➶

FFGear requires the deffcode library

The FFGear API MUST have the deffcode library installed, along with a valid FFmpeg executable. Any detection failure immediately raises an ImportError or RuntimeError.

Install via pip:

pip install deffcode

For FFmpeg installation, see FFmpeg Installation ➶
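Before constructing FFGear, you can fail fast with a quick self-check. The following is a minimal sketch (not part of the FFGear API; the function name is ours) that verifies an FFmpeg executable is discoverable on your PATH:

```python
import shutil
import subprocess

def ffmpeg_available(executable="ffmpeg"):
    """Return True if an FFmpeg executable is discoverable and runs."""
    path = shutil.which(executable)
    if path is None:
        return False
    # confirm the binary actually executes
    result = subprocess.run([path, "-version"], capture_output=True)
    return result.returncode == 0
```

If this returns False, fix your FFmpeg installation first rather than debugging FFGear errors.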

Bare-Minimum Usage

Following is the bare-minimum code you need to get started with FFGear API:

# import required libraries
from vidgear.gears import FFGear
import cv2

# open any valid video file with FFGear
stream = FFGear(source="myvideo.mp4").start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Network Streams

FFGear API directly supports any network stream URL that FFmpeg supports, including HTTP(s), RTSP/RTP, RTMP, and more.

# import required libraries
from vidgear.gears import FFGear
import cv2

# open an HTTP stream
stream = FFGear(
    source="https://example.com/live/stream.mp4",
    logging=True
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

This example assumes you already have an RTSP server running at the specified address.

For creating your own RTSP server locally, see WriteGear's RTSP/RTP bonus example ➶

# import required libraries
from vidgear.gears import FFGear
import cv2

# force TCP transport for RTSP
options = {"-rtsp_transport": "tcp"}

# [WARNING] replace with your actual RTSP address
stream = FFGear(
    source="rtsp://localhost:8554/mystream",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Streaming Websites

FFGear internally implements the yt_dlp backend for seamlessly pipelining live video-frames and metadata from various streaming services like YouTube, Twitch, PeerTube, and many more ➶. Enable it by setting stream_mode=True.

Supported Streaming Websites

The complete list of all supported Streaming Websites URLs can be found here ➶

Accessing Stream Metadata

FFGear exposes a ytv_metadata attribute for accessing the stream's metadata as a JSON-like dict:

from vidgear.gears import FFGear

stream = FFGear(
    source="https://youtu.be/QDia3e12czc", stream_mode=True, logging=True
).start()

# access video metadata
video_metadata = stream.ytv_metadata
print(video_metadata.keys())
print(video_metadata["title"])

AV1 Dependency in FFmpeg

Most of the time, YouTube defaults to the AV1 video format. Therefore, you need an FFmpeg build that includes libdav1d for software AV1 decoding, which requires it to be compiled using the flag: --enable-libdav1d.

Check if you already have it: Run this command in your terminal or command prompt:

ffmpeg -decoders | grep dav1d

If you see V....D libdav1d, the library is installed and ready to use.

YouTube Playlists are not supported.

# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 720p and video decoder as `libdav1d`
options = {"STREAM_RESOLUTION": "720p", "-vcodec": "libdav1d"}

# Add YouTube Video URL as source and enable Stream Mode
stream = FFGear(
    source="https://youtu.be/QDia3e12czc", stream_mode=True, logging=True, **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

If the Twitch user is offline, FFGear will raise a ValueError.

# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 720p
options = {"STREAM_RESOLUTION": "720p"}

# Add Twitch stream URL as source and enable Stream Mode
stream = FFGear(
    source="https://www.twitch.tv/shroud",
    stream_mode=True,
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 480p
options = {"STREAM_RESOLUTION": "480p"}

# Add PeerTube stream URL as source and enable Stream Mode
stream = FFGear(
    source="https://peertube.tv/w/q4GM7HcfUnqeBAj3urTUCv",
    stream_mode=True,
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Different Pixel Formats

FFGear supports decoding frames in any FFmpeg pixel format via the frame_format parameter.

Use ffmpeg -pix_fmts terminal command to list all supported pixel formats.

Performance Mode ⚡ — Faster Decoding via YUV420p

Ingesting frames as 12-bit YUV 4:2:0 instead of 24-bit BGR halves the bytes moving through the FFmpeg pipe. Enable -enforce_cv_patch to auto-convert YUV/NV pixel-format frames inside FFGear for seamless OpenCV compatibility.
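To see where the savings come from, here is the back-of-the-envelope arithmetic as a plain Python sketch (the helper function is ours, not FFGear code):

```python
def frame_bytes(width, height, pix_fmt):
    """Approximate bytes piped per frame for a few common pixel formats."""
    bits_per_pixel = {"bgr24": 24, "rgb24": 24, "yuv420p": 12, "gray": 8}
    return width * height * bits_per_pixel[pix_fmt] // 8

# For a 1080p frame, YUV 4:2:0 moves exactly half the bytes of BGR24:
bgr = frame_bytes(1920, 1080, "bgr24")    # 6,220,800 bytes per frame
yuv = frame_bytes(1920, 1080, "yuv420p")  # 3,110,400 bytes per frame
```

At 30 fps that difference is roughly 90 MB/s less data crossing the pipe.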

# import required libraries
from vidgear.gears import FFGear
import cv2

# IMPORTANT: enable OpenCV patch for YUV420p frames
options = {"-enforce_cv_patch": True}

# stream YUV420p frames
stream = FFGear(
    source="myvideo.mp4",
    frame_format="yuv420p",
    logging=True,
    **options
).start()

# loop over
while True:
    # read OpenCV compatible YUV420p frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with frame here}

    # show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
# import required libraries
from vidgear.gears import FFGear
import cv2

# stream grayscale frames
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# loop over
while True:

    # read grayscale frames
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # Show output window
    cv2.imshow("Grayscale Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

Fastest ⚡ RAW-to-Grayscale via -extract_luma

Every YUV/NV bytestream stores the Luma (Y) plane uncompressed at the top of each frame. The exclusive -extract_luma boolean attribute makes FFGear slice that Y-plane directly and hand back a 2D (H, W) grayscale ndarray — no colorspace conversion in FFmpeg, no cv2.cvtColor in Python. This is strictly faster than frame_format="gray", which still asks FFmpeg to do a yuv→gray conversion on every frame.

Combined with the reduced pipe-bytes of YUV 4:2:0 ingest, this is the fastest grayscale pipeline the API can produce.

# import required libraries
from vidgear.gears import FFGear
import cv2

# enable direct Luma (Y-plane) extraction
options = {"-extract_luma": True}

# stream Grayscale via YUV frames
stream = FFGear(
    source="myvideo.mp4",
    frame_format="yuv420p",
    logging=True,
    **options
).start()

# loop over
while True:
    # read grayscale frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with Luma (Y-plane) extracted grayscale frame here}

    # NOTE: If you do not need previewing frames, comment following lines
    # show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Camera Devices (Indexes)

Enumerating all Camera Devices with Indexes

Using DeFFcode's Sourcer API, you can use its enumerate_devices property object to enumerate all probed Camera Devices (connected to your system) as a dictionary object, with device indexes as keys and device names as their respective values:

# import the necessary packages
from deffcode import Sourcer
import json

# stream
sourcer = Sourcer("0").probe_stream()

# enumerate probed devices as Dictionary object(`dict`)
print(sourcer.enumerate_devices)

# enumerate probed devices as JSON string(`json.dump`)
print(json.dumps(sourcer.enumerate_devices,indent=2))
After running the above Python code, the resultant terminal output will look something like the following on a Windows machine:

{0: 'Integrated Camera', 1: 'USB2.0 Camera', 2: 'DroidCam Source'}

{
  "0": "Integrated Camera",
  "1": "USB2.0 Camera",
  "2": "DroidCam Source"
}

The FFGear API supports camera devices by index, just like OpenCV. Under the hood, it uses platform-specific FFmpeg demuxers to capture the device feed; you specify the matching index value, either as an integer or a string of integer type, in its source parameter.

Requirement for Index based Camera Device Capturing in FFGear API
  • MUST have appropriate FFmpeg binaries, Drivers, and Softwares installed:

    Internally, the FFGear API achieves index-based Camera Device capturing by employing specific FFmpeg demuxers on different platforms (OSes). These platform-specific demuxers are as follows:

    Platform(OS)  Demuxer
    Windows OS    dshow (or DirectShow)
    Linux OS      video4linux2 (or its alias v4l2)
    Mac OS        avfoundation

    ⚠ Important: Kindly make sure your FFmpeg binaries support these platform-specific demuxers, and that your system has the appropriate video drivers and related software installed.

  • The source parameter value MUST be exactly the probed Camera Device index (Use DeFFcode's Sourcer API enumerate_devices to list them).

  • The source_demuxer parameter value MUST be either None (which also means empty) or "auto".

In this example we stream BGR24 video frames from Integrated Camera at index 0 on a Windows Machine:

Important Facts related to Camera Device Indexing
  • Camera Device indexes are 0-indexed. So the first device is at 0, the second at 1, and so on. If there are n devices, the last device is at index n-1.
  • Camera Device indexes can be of either integer (e.g. 0,1, etc.) or string of integer (e.g. "0","1", etc.) type.
  • Camera Device indexes can be negative (e.g. -1,-2, etc.), this means you can also start indexing from the end.
    • For example, If there are three devices:
      {0: 'Integrated Camera', 1: 'USB2.0 Camera', 2: 'DroidCam Source'}
      
    • Then, You can specify Positive Indexes and its Equivalent Negative Indexes as follows:

      Positive Indexes                Equivalent Negative Indexes
      FFGear(source="0").start()     FFGear(source="-3").start()
      FFGear(source="1").start()     FFGear(source="-2").start()
      FFGear(source="2").start()     FFGear(source="-1").start()

Out-of-range Camera Device index values will raise a ValueError in the FFGear API.
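The negative-index equivalence above is ordinary Python-style wrap-around. A small illustrative helper (hypothetical, not part of FFGear) makes the mapping explicit:

```python
def normalize_index(index, device_count):
    """Map a possibly-negative camera index to its positive equivalent."""
    idx = int(index)  # indexes may be int or string-of-int type
    if idx < 0:
        idx += device_count  # count backwards from the end
    if not 0 <= idx < device_count:
        raise ValueError(f"Out-of-range camera index: {index!r}")
    return idx

# with the 3 probed devices from the example above:
normalize_index("-1", 3)  # -> 2 ('DroidCam Source')
normalize_index("-3", 3)  # -> 0 ('Integrated Camera')
```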

# import required libraries
from vidgear.gears import FFGear
import cv2

# stream with "0" index source for BGR24 output
stream = FFGear(
    source="0",
    frame_format="bgr24",
    logging=True,
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Simple FFmpeg Filtergraphs

FFGear supports live simple filtergraph pipelines via the -vf FFmpeg parameter through the options dictionary.

Simple filtergraphs have exactly one input and one output. Use them via the -vf parameter.

# import required libraries
from vidgear.gears import FFGear
import cv2

# define the Video Filter definition
# horizontally flip and scale to half its original size
options = {
    "-vf": "hflip,scale=w=iw/2:h=ih/2"
}

stream = FFGear(
    source="myvideo.mp4",
    frame_format="bgr24",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Sequence of images

FFGear supports Image Sequences, both Sequential (img%03d.png) and Glob pattern (*.png), in real-time.

Extracting Image Sequences from a video

You can use following FFmpeg command to extract sequences of images from a video file foo.mp4:

$ ffmpeg -i foo.mp4 /path/to/image-%03d.png

This command extracts each decoded video frame to a numbered image, saved as an image sequence (image-001.png, image-002.png, image-003.png, and so on).

Note that %03d is a minimum field width, not a hard limit: once the counter passes 999, FFmpeg simply continues with four-digit names such as image-1000.png.

By default, the extracted images have the same width and height as the video.
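The %03d placeholder is ordinary printf-style zero-padded formatting. In Python terms:

```python
# zero-padded to a minimum width of 3 digits, matching FFmpeg's %03d
names = [f"image-{i:03d}.png" for i in (1, 2, 42, 999, 1000)]
# -> ['image-001.png', 'image-002.png', 'image-042.png',
#     'image-999.png', 'image-1000.png']
```

This is handy for sanity-checking that the pattern you pass to source matches the filenames actually on disk.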

How to start with a specific image number?

You can use the -start_number FFmpeg parameter if you want to start with a specific image number:

# define `-start_number` such as `5`
options = {"-ffprefixes":["-start_number", "5"]}

# initialize and start the stream with defined parameters
stream = FFGear(source="/absolute/path/to/image-%03d.png", logging=True, **options).start()

Ensure source points to an absolute image-sequence path

Furthermore, use the exact filename pattern (for example, image-%03d.png) that was used when extracting the images. An incorrect or relative pattern may cause the FFGear API to raise a RuntimeError.

# import required libraries
from vidgear.gears import FFGear
import cv2

stream = FFGear(
    source="/absolute/path/to/image-%03d.png",
    frame_format="bgr24",
    logging=True
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

Bash-style globbing (* represents any number of any characters) is useful if your images are sequential but not necessarily in a numerically sequential order.

Ensure source points to an absolute image-sequence path

The glob pattern may not be available on Windows FFmpeg builds.

# import required libraries
from vidgear.gears import FFGear
import cv2

# define `-pattern_type glob` for accepting glob pattern via ffprefixes
options = {
    "-ffprefixes":["-pattern_type", "glob"]
}

stream = FFGear(
    source="/absolute/path/to/*.png", # Glob pattern
    frame_format="bgr24",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with Video Looping

In this example we stream BGR24 video frames from looping video using FFGear API:

The -stream_loop flag is a high-level FFmpeg command that repeats the entire input file a specified number of times before any filtering occurs. The recommended way to use the -stream_loop flag with the FFGear API is through the -ffprefixes list attribute of the options dictionary parameter.

Possible -stream_loop values are integers

Values > 0 specify the number of extra loops, 0 means no looping, and -1 enables infinite looping.

Using -stream_loop 3 will loop the video 4 times in total.
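The playback count implied by a -stream_loop value is simple arithmetic, sketched here in plain Python (the helper is ours, not part of any API):

```python
def total_playbacks(stream_loop):
    """Playback count implied by FFmpeg's `-stream_loop` value."""
    if stream_loop == -1:
        return float("inf")  # infinite looping
    return stream_loop + 1   # the value counts *extra* loops

total_playbacks(3)   # 4 playbacks in total
total_playbacks(0)   # 1 playback (no looping)
```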

# import required libraries
from vidgear.gears import FFGear
import cv2

# define loop 3 times (total 4 playbacks) via ffprefixes
options = {
    "-ffprefixes": ["-stream_loop", "3"]
}

# stream with suitable source
stream = FFGear(
    source="myvideo.mp4",
    frame_format="bgr24",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

The loop filter is used within an FFmpeg filtergraph (-filter_complex) to repeat a specific segment of video frames. In the FFGear API, it can be configured as an attribute of the options dictionary parameter.

This filter places all frames into memory (RAM), so applying trim filter first is strongly recommended. Otherwise, you might run out of memory.

Using loop filter for looping video

The filter accepts the following options:

  • loop: Sets the number of loops for integer values > 0. Setting this value to -1 results in infinite loops. Default is 0 (no loops).
  • size: Sets the maximum size in number of frames. Default is 0.
  • start: Sets the first frame of the loop. Default is 0.

You must often include setpts to fix the timestamps, otherwise frames may be dropped or playback may stutter.

Using loop=3 will loop the video 4 times in total.
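For readability, the filtergraph string can be assembled programmatically before passing it to the options dictionary. This is a plain string-building sketch (the helper function is ours, not an FFGear feature):

```python
def loop_filtergraph(loops, size, start):
    """Build a `loop` filtergraph with timestamp fix-up via setpts."""
    return f"loop=loop={loops}:size={size}:start={start},setpts=N/FRAME_RATE/TB"

# same filtergraph as the example below
options = {"-filter_complex": loop_filtergraph(3, 15, 25)}
# -> "loop=loop=3:size=15:start=25,setpts=N/FRAME_RATE/TB"
```

Keeping setpts appended by default makes it harder to forget the timestamp fix-up the note above warns about.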

# import required libraries
from vidgear.gears import FFGear
import cv2

# loop a 15-frame segment (starting at frame 25) three extra times,
# i.e. 4 playbacks of that segment in total
options = {
    "-filter_complex": "loop=loop=3:size=15:start=25,setpts=N/FRAME_RATE/TB"  # Or use shorthand: `loop=3:15:25`
}

# stream with suitable source
stream = FFGear(
    source="myvideo.mp4",
    frame_format="bgr24",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Using FFGear with OpenCV VideoWriter API

FFGear integrates seamlessly with OpenCV's VideoWriter() class to encode video frames into a multimedia file. However, it lacks fine-grained control over output quality, bitrate, compression, and other advanced parameters—features that are readily available with WriteGear API's Compression Mode.

You can use the FFGear API's metadata property object, which dumps the source video's metadata information (as a JSON string), to retrieve the source framerate and resolution.
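Since the metadata dump is a JSON string, it parses directly with json.loads. Here is a sketch using the metadata keys that appear in the examples below; the sample values are made up for illustration:

```python
import json

# hypothetical metadata dump, shaped like the keys used in these examples
sample = '{"output_framerate": 25.0, "output_frames_resolution": [1280, 720]}'

metadata = json.loads(sample)
framerate = metadata["output_framerate"]                 # 25.0
framesize = tuple(metadata["output_frames_resolution"])  # (1280, 720)
```

Converting the resolution to a tuple matters because cv2.VideoWriter expects its frame size as a tuple, not a list.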

You could also use the WriteGear API's Non-compression mode, which provides flexible access to OpenCV's VideoWriter API for encoding video frames without compression. See this document for switching from VideoWriter API to WriteGear API ➶.

By default, OpenCV expects BGR format frames in its VideoWriter.write() method.

# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with BGR24 pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="bgr24", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo.avi`
writer = cv2.VideoWriter("output_foo.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # writing BGR24 frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()

Since OpenCV's VideoWriter.write() method expects BGR frames, we need to convert RGB frames to BGR before encoding, as follows:

# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with RGB24 pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="rgb24", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo.avi`
writer = cv2.VideoWriter("output_foo.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the RGB frame here}

    # converting RGB24 to BGR24 frame
    frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)

    # writing BGR24 frame to writer
    writer.write(frame_bgr)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()

OpenCV's VideoWriter.write() method also directly consumes GRAYSCALE frames.

When writing a grayscale video, don't forget to add that flag (isColor=False) to the VideoWriter.

# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with GRAYSCALE pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo_gray.avi`
writer = cv2.VideoWriter("output_foo_gray.avi", FOURCC, FRAMERATE, FRAMESIZE, isColor=False)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the GRAYSCALE frame here}

    # writing GRAYSCALE frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()

With the FFGear API, frames extracted in YUV pixel formats (yuv420p, yuv444p, nv12, nv21, etc.) are generally incompatible with OpenCV APIs. You can make them easily compatible by using the exclusive -enforce_cv_patch boolean attribute of its options dictionary parameter.

Let's encode YUV420p pixel-format frames with OpenCV's write() method:

You can also use other YUV pixel formats, such as yuv422p (4:2:2 subsampling) or yuv444p (4:4:4 subsampling), in a similar manner for higher chroma fidelity.

# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# enable OpenCV patch for YUV420p frames
options = {"-enforce_cv_patch": True}

# stream YUV420p frames
stream = FFGear(
    source="myvideo.mp4",
    frame_format="yuv420p",
    logging=True,
    **options
).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo_yuv.avi`
writer = cv2.VideoWriter("output_foo_yuv.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read OpenCV compatible YUV420 frames
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with frame here}

    # writing OpenCV-patched frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()

 

Using FFGear with WriteGear API (Compression Mode)

High CPU Usage when chaining FFGear with WriteGear

When chaining FFGear with WriteGear, both FFmpeg processes (decoding + encoding) run as fast as your hardware allows with no artificial pacing between them. This causes the pipeline to max out your CPU to process the video in the shortest time possible, which may be undesirable.

You can mitigate this in two ways depending on your use case:

Pass the -re flag via FFGear's -ffprefixes parameter to force FFmpeg to read the input at its native framerate. This naturally paces the pipeline to real-time speed and drastically reduces CPU usage:

# force input to be read at native framerate
stream = FFGear(source="foo.mp4", frame_format="bgr24", **{"-ffprefixes": ["-re"]})

Pass -threads to both FFGear and WriteGear to cap the number of CPU threads each FFmpeg process may use. This leaves headroom for other system tasks:

# limit decoder to 2 threads
stream = FFGear(source="foo.mp4", frame_format="bgr24", **{"-threads": 2})

# limit encoder to 2 threads
writer = WriteGear(output="output_foo.mp4", **{"-input_framerate": fps, "-threads": 2})

Hardware Acceleration

If your machine has a dedicated GPU, you can offload video encoding entirely to the GPU. See the CUDA-NVENC-accelerated Transcoding with WriteGear API (Compression Mode) ➶ example for reference, which demonstrates shifting the heavy encoding workload away from the CPU.

FFGear integrates seamlessly with VidGear's WriteGear API in Compression Mode for high-quality re-encoding of decoded frames:

You can use the FFGear API's metadata property object, which dumps the source video's metadata information (as a JSON string), to retrieve the source framerate.

WriteGear API by default expects BGR format frames in its write() class method.

# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and BGR frames
stream = FFGear(source="myvideo.mp4", frame_format="bgr24", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as `-input_framerate` 
# parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo.mp4`
writer = WriteGear(output="output_foo.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()

In the WriteGear API, you can use the rgb_mode parameter of the write() class method to write RGB format frames instead of the default BGR, as follows:

# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and RGB frames
stream = FFGear(source="myvideo.mp4", frame_format="rgb24", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as `-input_framerate` 
# parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_rgb.mp4`
writer = WriteGear(output="output_foo_rgb.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame, rgb_mode=True)

# safely close stream
stream.stop()

# safely close writer
writer.close()

WriteGear API also directly consumes GRAYSCALE format frames in its write() class method.

# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and GRAYSCALE frames
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as `-input_framerate` 
# parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_gray.mp4`
writer = WriteGear(output="output_foo_gray.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()

Performance Mode ⚡ — Faster Decoding via YUV420p

Ingesting frames as 12-bit YUV 4:2:0 instead of 24-bit BGR halves the bytes moving through the FFmpeg pipe. Enable -enforce_cv_patch to auto-convert YUV/NV pixel format frames inside FFGear for seamless OpenCV compatibility.
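The pipe-bytes saving is simple arithmetic: BGR occupies 3 bytes per pixel, while yuv420p stores one full-resolution luma plane plus two quarter-resolution chroma planes, i.e. 1.5 bytes per pixel. A quick sketch (the 1080p frame size is just an example):

```python
# bytes per raw frame crossing the FFmpeg pipe
# (illustrative arithmetic only; pipe overhead not included)
width, height = 1920, 1080

bgr24_bytes = width * height * 3         # 3 bytes/pixel (24-bit BGR)
yuv420p_bytes = width * height * 3 // 2  # 1.5 bytes/pixel (12-bit 4:2:0)

print(bgr24_bytes, yuv420p_bytes)  # 6220800 3110400 -- exactly half
```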

WriteGear API also directly consumes YUV (or any other supported pixel format) frames in its write() class method via its -input_pixfmt attribute in compression mode.

You can also use yuv422p (4:2:2 subsampling) or yuv444p (4:4:4 subsampling) instead for higher chroma resolution.

# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# enable the OpenCV patch so YUV420p frames are auto-converted
# for seamless OpenCV compatibility
options = {"-enforce_cv_patch": True}

# open source with FFGear stream for YUV420p output
stream = FFGear(
    source="myvideo.mp4", frame_format="yuv420p", logging=True, **options
).start()

# retrieve framerate from source JSON Metadata and pass it as 
# `-input_framerate` parameter for controlled framerate
# and also add input pixfmt as yuv420p 
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["output_framerate"],
    "-input_pixfmt": "yuv420p"
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_yuv.mp4`
writer = WriteGear(output="output_foo_yuv.mp4", logging=True, **output_params)

# loop over
while True:

    # read OpenCV compatible YUV420p frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with the YUV420p frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()

Fastest âš¡ RAW-to-Grayscale via -extract_luma

Every YUV/NV bytestream stores the Luma (Y) plane uncompressed at the top of each frame. The exclusive -extract_luma boolean attribute makes FFGear slice that Y-plane directly and hand back a 2D (H, W) grayscale ndarray — no colorspace conversion in FFmpeg, no cv2.cvtColor in Python. This is strictly faster than frame_format="gray", which still asks FFmpeg to do a yuv→gray conversion on every frame.

Combined with the reduced pipe-bytes of YUV 4:2:0 ingest, this is the fastest grayscale pipeline the API can produce.
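Conceptually, the slicing is trivial: in a raw yuv420p frame the first H×W bytes are the Y plane, followed by two quarter-size chroma planes. An illustrative NumPy sketch of the idea (not FFGear's actual internals; the frame size is made up):

```python
import numpy as np

height, width = 4, 6  # tiny made-up frame size

# a raw yuv420p frame buffer: Y plane (H*W bytes), then U and V planes
# (H*W // 4 bytes each), all packed contiguously
raw = np.arange(height * width * 3 // 2, dtype=np.uint8)

# the Luma (Y) plane is simply the leading H*W bytes, viewed as 2D
luma = raw[: height * width].reshape(height, width)

print(luma.shape)  # (4, 6)
```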

As with normal GRAYSCALE format frames, these frames can be consumed directly by WriteGear API's write() class method:

# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# enable direct Luma (Y-plane) extraction
options = {"-extract_luma": True}

# stream Grayscale via YUV frames
stream = FFGear(
    source="myvideo.mp4",
    frame_format="yuv420p",
    logging=True,
    **options
).start()

# retrieve framerate from source JSON Metadata and pass it as `-input_framerate` 
# parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_yuv_gray.mp4`
writer = WriteGear(output="output_foo_yuv_gray.mp4", **output_params)

# loop over
while True:
    # read grayscale frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with Luma (Y-plane) extracted grayscale frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()