Check out all FFdecoder API's basic recipes here ➶ to better understand these usage examples.
After going through the following usage examples, check out its advanced configurations here ➶
FFGear requires the deffcode library
The FFGear API MUST have the deffcode library installed, along with a valid FFmpeg executable. Any failure in detection will immediately raise an ImportError or RuntimeError.
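A minimal pre-flight check along these lines can confirm both requirements before constructing a stream. This is only an illustrative sketch, not part of the FFGear API; the helper name and messages are made up:

```python
# pre-flight dependency check (illustrative helper, NOT part of FFGear)
import shutil
from importlib.util import find_spec


def check_ffgear_deps():
    """Return a list of human-readable problems; an empty list means all good."""
    problems = []
    # the deffcode library must be importable
    if find_spec("deffcode") is None:
        problems.append("deffcode library not installed (pip install deffcode)")
    # a valid FFmpeg executable must be discoverable on PATH
    if shutil.which("ffmpeg") is None:
        problems.append("FFmpeg executable not found on PATH")
    return problems


if __name__ == "__main__":
    for problem in check_ffgear_deps():
        print("MISSING:", problem)
```

Running this before your pipeline turns the hard ImportError/RuntimeError into an actionable checklist.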
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# open any valid video file with FFGear
stream = FFGear(source="myvideo.mp4").start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# open an HTTP stream
stream = FFGear(
    source="https://example.com/live/stream.mp4", logging=True
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
This example assumes you already have an RTSP server running at the specified address.
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# force TCP transport for RTSP
options = {"-rtsp_transport": "tcp"}

# [WARNING] replace with your actual RTSP address
stream = FFGear(
    source="rtsp://localhost:8554/mystream", logging=True, **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
FFGear internally implements the yt_dlp backend for seamlessly pipelining live video-frames and metadata from various streaming services like YouTube, Twitch, PeerTube, and many more ➶. Enable it by setting stream_mode=True.
Supported Streaming Websites
The complete list of all supported Streaming Websites URLs can be found here ➶
Accessing Stream Metadata
FFGear exposes a ytv_metadata attribute for accessing the stream's metadata as a JSON-like dict:
```python
from vidgear.gears import FFGear

stream = FFGear(
    source="https://youtu.be/QDia3e12czc", stream_mode=True, logging=True
).start()

# access video metadata
video_metadata = stream.ytv_metadata
print(video_metadata.keys())
print(video_metadata["title"])
```
AV1 Dependency in FFmpeg
Most of the time, YouTube defaults to the AV1 video format. Therefore, you need an FFmpeg build that includes libdav1d for software AV1 decoding, i.e. one compiled with the --enable-libdav1d flag.
Check if you already have it: Run this command in your terminal or command prompt:
```sh
ffmpeg -decoders | grep dav1d
```
If you see V....D libdav1d, the library is installed and ready to use.
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 720p and video decoder as `libdav1d`
options = {"STREAM_RESOLUTION": "720p", "-vcodec": "libdav1d"}

# Add YouTube Video URL as source and enable Stream Mode
stream = FFGear(
    source="https://youtu.be/QDia3e12czc",
    stream_mode=True,
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
If the Twitch user is offline, FFGear will raise a ValueError.
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 720p
options = {"STREAM_RESOLUTION": "720p"}

# Add Twitch stream URL as source and enable Stream Mode
stream = FFGear(
    source="https://www.twitch.tv/shroud",
    stream_mode=True,
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# set desired quality as 480p
options = {"STREAM_RESOLUTION": "480p"}

# Add PeerTube stream URL as source and enable Stream Mode
stream = FFGear(
    source="https://peertube.tv/w/q4GM7HcfUnqeBAj3urTUCv",
    stream_mode=True,
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
FFGear supports decoding frames in any FFmpeg pixel format via the frame_format parameter.
Use the ffmpeg -pix_fmts terminal command to list all supported pixel formats.
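If you want to filter that listing programmatically, its plain-text layout is easy to parse: the first column is a flags field where a leading `I` marks input (decoding) support. A small sketch, using abridged sample lines rather than live FFmpeg output:

```python
# parse a few sample lines of `ffmpeg -pix_fmts` output
# (abridged for illustration; columns are FLAGS NAME NB_COMPONENTS BITS_PER_PIXEL)
sample = """\
IO... yuv420p                3             12      8-8-8
IO... rgb24                  3             24      8-8-8
IO... gray                   1              8      8
..H.. vdpau                  0              0      0
"""


def input_capable_formats(text):
    """Return pixel-format names whose flags column starts with 'I' (input-capable)."""
    names = []
    for line in text.splitlines():
        flags, name = line.split()[:2]
        if flags.startswith("I"):
            names.append(name)
    return names


print(input_capable_formats(sample))   # ['yuv420p', 'rgb24', 'gray']
```

Any name in that list is a candidate value for the frame_format parameter, provided your FFmpeg build actually supports it.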
Performance Mode — Faster Decoding via YUV420p
Ingesting frames as 12-bit YUV 4:2:0 instead of 24-bit BGR halves the bytes moving through the FFmpeg pipe. Enable -enforce_cv_patch to auto-convert YUV/NV pixel-format frames inside FFGear for seamless OpenCV compatibility.
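The saving is easy to quantify with the per-frame byte counts (plain arithmetic, not FFGear code):

```python
# per-frame byte counts for a 1920x1080 stream, illustrating the pipe savings
w, h = 1920, 1080

bgr24_bytes = w * h * 3          # 24 bits/pixel -> 3 bytes per pixel
yuv420p_bytes = w * h * 3 // 2   # 12 bits/pixel: full-res Y + quarter-res U and V

print(bgr24_bytes)     # 6220800
print(yuv420p_bytes)   # 3110400 (exactly half)
```

At 30 fps that is roughly 93 MB/s of pipe traffic saved for a 1080p stream.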
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# IMPORTANT: enable OpenCV patch for YUV420p frames
options = {"-enforce_cv_patch": True}

# stream YUV420p frames
stream = FFGear(
    source="myvideo.mp4", frame_format="yuv420p", logging=True, **options
).start()

# loop over
while True:

    # read OpenCV compatible YUV420p frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with frame here}

    # show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# stream grayscale frames
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# loop over
while True:

    # read grayscale frames
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # Show output window
    cv2.imshow("Grayscale Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
Fastest RAW-to-Grayscale via -extract_luma
Every YUV/NV bytestream stores the Luma (Y) plane uncompressed at the top of each frame. The exclusive -extract_luma boolean attribute makes FFGear slice that Y-plane directly and hand back a 2D (H, W) grayscale ndarray — no colorspace conversion in FFmpeg, no cv2.cvtColor in Python. This is strictly faster than frame_format="gray", which still asks FFmpeg to do a yuv→gray conversion on every frame.
Combined with the reduced pipe-bytes of YUV 4:2:0 ingest, this is the fastest grayscale pipeline the API can produce.
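The slicing itself is simple to picture: in a raw yuv420p buffer, the first width*height bytes are the Y plane. A NumPy sketch of the idea (an illustration of the layout, not FFGear's actual internals):

```python
import numpy as np

# fabricate a raw yuv420p frame buffer: the Y plane (h*w bytes) is followed by
# the quarter-resolution U and V planes (h*w//4 bytes each)
w, h = 64, 48
raw = np.arange(h * w * 3 // 2, dtype=np.uint8)

# -extract_luma conceptually slices the leading Y plane and reshapes it,
# yielding a ready-to-use 2D grayscale ndarray with no colorspace math
luma = raw[: h * w].reshape(h, w)

print(luma.shape)   # (48, 64)
```

Because this is a pure slice-and-reshape over bytes FFmpeg already produced, no per-pixel conversion work happens anywhere in the pipeline.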
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# enable direct Luma (Y-plane) extraction
options = {"-extract_luma": True}

# stream Grayscale via YUV frames
stream = FFGear(
    source="myvideo.mp4", frame_format="yuv420p", logging=True, **options
).start()

# loop over
while True:

    # read grayscale frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with Luma (Y-plane) extracted grayscale frame here}

    # NOTE: If you do not need previewing frames, comment following lines
    # show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
Using DeFFcode's Sourcer API, you can easily use its enumerate_devices property object to enumerate all probed camera devices (connected to your system) as a dictionary object, with device indexes as keys and device names as their respective values:
```python
# import the necessary packages
from deffcode import Sourcer
import json

# probe the stream
sourcer = Sourcer("0").probe_stream()

# enumerate probed devices as Dictionary object (`dict`)
print(sourcer.enumerate_devices)

# enumerate probed devices as JSON string (`json.dumps`)
print(json.dumps(sourcer.enumerate_devices, indent=2))
```
After running the above Python code, the resultant terminal output will look something like the following on a Windows machine:
The FFGear API supports camera devices by index, just like OpenCV. Under the hood, it uses platform-specific FFmpeg demuxers to capture the device feed, with the matching index value specified either as an integer or a string of integer type in its source parameter.
Requirement for Index based Camera Device Capturing in FFGear API
You MUST have the appropriate FFmpeg binaries, drivers, and software installed:
Internally, the FFGear API achieves index-based camera device capturing by employing specific FFmpeg demuxers on different platforms (OSes). These platform-specific demuxers are as follows:
Important: Kindly make sure your FFmpeg binaries support these platform-specific demuxers, and that your system has the appropriate video drivers and related software installed.
The source parameter value MUST be exactly the probed camera device index (use DeFFcode's Sourcer API's enumerate_devices property to list them).
The source_demuxer parameter value MUST be either None (also means empty) or "auto".
In this example, we stream BGR24 video frames from the integrated camera at index 0 on a Windows machine:
Important Facts related to Camera Device Indexing
Camera device indexes are 0-indexed: the first device is at 0, the second at 1, and so on. If there are n devices, the last device is at n-1.
Camera device indexes can be of either integer (e.g. 0, 1, etc.) or string of integer (e.g. "0", "1", etc.) type.
Camera device indexes can be negative (e.g. -1, -2, etc.), which means you can also start indexing from the end.
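These three facts can be illustrated with plain Python index arithmetic (a hypothetical helper for illustration only; FFGear does this resolution internally): with n probed devices, a negative index i maps to n + i.

```python
# resolve a camera device index against n probed devices (illustrative only)
def resolve_index(index, n_devices):
    i = int(index)          # accepts 0, "0", -1, "-1", ... (int or string of integer)
    if i < 0:
        i += n_devices      # -1 -> last device, -2 -> second-to-last, etc.
    if not 0 <= i < n_devices:
        raise ValueError(f"no device at index {index!r}")
    return i


print(resolve_index("-1", 3))   # 2 (last of three devices)
print(resolve_index(0, 3))      # 0 (first device)
```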
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# stream with "0" index source for BGR24 output
stream = FFGear(source="0", frame_format="bgr24", logging=True).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# define the Video Filter definition:
# horizontally flip and scale to half its original size
options = {"-vf": "hflip,scale=w=iw/2:h=ih/2"}

stream = FFGear(
    source="myvideo.mp4", frame_format="bgr24", logging=True, **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
FFGear supports image sequences, both sequential (img%03d.png) and glob patterns (*.png), in real-time.
Extracting Image Sequences from a video
You can use the following FFmpeg command to extract a sequence of images from a video file foo.mp4:
```sh
$ ffmpeg -i foo.mp4 /path/to/image-%03d.png
```
The default framerate is 25 fps; therefore, this command will extract 25 images per second from the video file and save them as a sequence of images (starting from image-001.png, then image-002.png, image-003.png, and so on).
Since %03d specifies a minimum field width, the numbering simply continues with four digits (image-1000.png onwards) if there are more than 999 frames.
By default, the extracted images have the same width and height as the source video.
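The %03d placeholder is ordinary printf-style numbering, which Python can reproduce for sanity-checking the filenames FFmpeg will write (assuming the default start number of 1):

```python
# expand the sequential pattern the same way FFmpeg's image2 muxer does
pattern = "image-%03d.png"
filenames = [pattern % i for i in range(1, 5)]

print(filenames)
# ['image-001.png', 'image-002.png', 'image-003.png', 'image-004.png']
```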
How to start with specific number image?
You can use the -start_number FFmpeg parameter if you want to start from a specific image number:
```python
# define `-start_number` such as `5`
options = {"-ffprefixes": ["-start_number", "5"]}

# initialize the stream with defined parameters
stream = FFGear(
    source="/absolute/path/to/image-%03d.png", logging=True, **options
).start()
```
Ensure source points to an absolute path containing the image sequence filename pattern.
Furthermore, use the exact filename pattern (for example, image-%03d.png) that was used when extracting the images. An incorrect or relative pattern may cause the FFGear API to raise a RuntimeError.
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

stream = FFGear(
    source="/absolute/path/to/image-%03d.png", frame_format="bgr24", logging=True
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
Bash-style globbing (* represents any number of any characters) is useful if your images are sequential but not necessarily in a numerically sequential order.
Ensure source points to an absolute path containing the image sequence glob pattern.
The glob pattern may not be available on Windows FFmpeg builds.
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# define `-pattern_type glob` for accepting glob pattern via ffprefixes
options = {"-ffprefixes": ["-pattern_type", "glob"]}

stream = FFGear(
    source="/absolute/path/to/*.png",  # Glob pattern
    frame_format="bgr24",
    logging=True,
    **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
In this example, we stream BGR24 video frames from a looping video using the FFGear API:
The -stream_loop flag is an FFmpeg input option that repeats the entire input file a specified number of times before any filtering occurs. The recommended way to use the -stream_loop flag with the FFGear API is through the -ffprefixes list attribute of the options dictionary parameter.
Possible -stream_loop values are integers:
Integer values > 0 specify the number of loops, 0 means no looping, and -1 enables infinite looping.
Using -stream_loop 3 will loop the video 4 times in total.
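The relationship between the flag's value and the total playback count can be expressed as a tiny helper (illustrative only, not part of any API):

```python
# total playbacks implied by a `-stream_loop` value (illustrative helper)
def total_playbacks(stream_loop):
    if stream_loop == -1:
        return float("inf")   # infinite looping
    return stream_loop + 1    # the original playback plus N repeats


print(total_playbacks(3))    # 4
print(total_playbacks(0))    # 1 (no looping)
```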
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# define loop 3 times (total 4 playbacks) via ffprefixes
options = {"-ffprefixes": ["-stream_loop", "3"]}

# stream with suitable source
stream = FFGear(
    source="myvideo.mp4", frame_format="bgr24", logging=True, **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
The loop filter is used within an FFmpeg filtergraph (-filter_complex) to repeat a specific segment of video frames. In the FFGear API, it can be configured as an attribute of the options dictionary parameter.
This filter places all frames into memory (RAM), so applying the trim filter first is strongly recommended; otherwise, you might run out of memory.
Using loop filter for looping video
The filter accepts the following options:
loop: Sets the number of loops for integer values > 0. Setting this value to -1 results in infinite loops. Default is 0 (no loops).
size: Sets the maximum size in number of frames. Default is 0.
start: Sets the first frame of the loop. Default is 0.
You must often include setpts to fix the timestamps; otherwise, frames may be dropped or playback may stutter.
Using loop=3 will loop the video 4 times in total.
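If you build filtergraphs dynamically, the options above can be assembled into the required string with a small helper (the function name is illustrative, not part of any API):

```python
# build a `loop` filtergraph string from its options (illustrative helper)
def build_loop_filter(loop, size, start, fix_timestamps=True):
    graph = f"loop=loop={loop}:size={size}:start={start}"
    if fix_timestamps:
        # regenerate presentation timestamps so looped frames play smoothly
        graph += ",setpts=N/FRAME_RATE/TB"
    return graph


print(build_loop_filter(3, 15, 25))
# loop=loop=3:size=15:start=25,setpts=N/FRAME_RATE/TB
```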
```python
# import required libraries
from vidgear.gears import FFGear
import cv2

# loop 4 times in total, each loop is 15 frames, each loop skips the first 25 frames
options = {
    "-filter_complex": "loop=loop=3:size=15:start=25,setpts=N/FRAME_RATE/TB"  # Or use: `loop=3:15:25`
}

# stream with suitable source
stream = FFGear(
    source="myvideo.mp4", frame_format="bgr24", logging=True, **options
).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
```
FFGear integrates seamlessly with OpenCV's VideoWriter() class to encode video frames into a multimedia file. However, it lacks fine-grained control over output quality, bitrate, compression, and other advanced parameters—features that are readily available with WriteGear API's Compression Mode.
You can use FFGear API's stream.metadata property object, which dumps the source video's metadata information (as a JSON string), to retrieve the source framerate and resolution.
```python
# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with BGR24 pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="bgr24", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo.avi`
writer = cv2.VideoWriter("output_foo.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # writing BGR24 frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()
```
Since OpenCV's VideoWriter.write() method expects BGR format frames, we need to convert RGB frames to BGR before encoding, as follows:
```python
# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with RGB24 pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="rgb24", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo.avi`
writer = cv2.VideoWriter("output_foo.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the RGB frame here}

    # converting RGB24 to BGR24 frame
    frame_bgr = cv2.cvtColor(frame, cv2.COLOR_RGB2BGR)

    # writing BGR24 frame to writer
    writer.write(frame_bgr)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()
```
OpenCV's VideoWriter.write() method also directly consumes GRAYSCALE frames.
When writing a grayscale video, don't forget to pass the isColor=False flag to the VideoWriter.
```python
# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# stream with GRAYSCALE pixel format output
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters (note `isColor=False` for grayscale) and
# suitable output filename for e.g. `output_foo_gray.avi`
writer = cv2.VideoWriter(
    "output_foo_gray.avi", FOURCC, FRAMERATE, FRAMESIZE, isColor=False
)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the GRAYSCALE frame here}

    # writing GRAYSCALE frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()
```
With the FFGear API, frames extracted in YUV pixel formats (yuv420p, yuv444p, nv12, nv21, etc.) are generally incompatible with OpenCV APIs. But you can easily make them compatible by using the exclusive -enforce_cv_patch boolean attribute of its options dictionary parameter.
Let's encode YUV420p pixel-format frames with OpenCV's VideoWriter.write() method:
You can also use other YUV pixel formats, such as yuv422p (4:2:2 subsampling) or yuv444p (4:4:4 subsampling), in a similar manner for higher chroma fidelity.
```python
# import the necessary packages
from vidgear.gears import FFGear
import json, cv2

# enable OpenCV patch for YUV420p frames
options = {"-enforce_cv_patch": True}

# stream YUV420p frames
stream = FFGear(
    source="myvideo.mp4", frame_format="yuv420p", logging=True, **options
).start()

# retrieve JSON Metadata and convert it to dict
metadata_dict = json.loads(stream.stream.metadata)

# prepare OpenCV parameters
FOURCC = cv2.VideoWriter_fourcc("M", "J", "P", "G")
FRAMERATE = metadata_dict["output_framerate"]
FRAMESIZE = tuple(metadata_dict["output_frames_resolution"])

# Define writer with parameters and suitable output filename for e.g. `output_foo_yuv.avi`
writer = cv2.VideoWriter("output_foo_yuv.avi", FOURCC, FRAMERATE, FRAMESIZE)

# loop over
while True:

    # read OpenCV compatible YUV420p frames
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with frame here}

    # writing patched frame to writer
    writer.write(frame)

    # let's also show output window
    cv2.imshow("Output", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close stream
stream.stop()

# safely close writer
writer.release()
```
Using FFGear with WriteGear API (Compression Mode)
High CPU Usage when chaining FFGear with WriteGear
When chaining FFGear with WriteGear, both FFmpeg processes (decoding + encoding) run as fast as your hardware allows with no artificial pacing between them. This causes the pipeline to max out your CPU to process the video in the shortest time possible, which may be undesirable.
You can mitigate this in two ways depending on your use case:
Pass the -re flag via FFGear's -ffprefixes parameter to force FFmpeg to read the input at its native framerate. This naturally paces the pipeline to real-time speed and drastically reduces CPU usage:
```python
# force input to be read at native framerate
stream = FFGear(source="foo.mp4", frame_format="bgr24", **{"-ffprefixes": ["-re"]})
```
Pass -threads to both FFGear and WriteGear to cap the number of CPU threads each FFmpeg process may use. This leaves headroom for other system tasks:
```python
# limit decoder to 2 threads
stream = FFGear(source="foo.mp4", frame_format="bgr24", **{"-threads": 2})

# limit encoder to 2 threads
writer = WriteGear(output="output_foo.mp4", **{"-input_framerate": fps, "-threads": 2})
```
```python
# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and BGR frames
stream = FFGear(source="myvideo.mp4", frame_format="bgr24", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as
# `-input_framerate` parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo.mp4`
writer = WriteGear(output="output_foo.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()
```
In the WriteGear API, you can use the rgb_mode parameter of its write() class method to write RGB format frames instead of the default BGR, as follows:
```python
# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and RGB frames
stream = FFGear(source="myvideo.mp4", frame_format="rgb24", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as
# `-input_framerate` parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_rgb.mp4`
writer = WriteGear(output="output_foo_rgb.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame, rgb_mode=True)

# safely close stream
stream.stop()

# safely close writer
writer.close()
```
The WriteGear API also directly consumes GRAYSCALE format frames in its write() class method.
```python
# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear and GRAYSCALE frames
stream = FFGear(source="myvideo.mp4", frame_format="gray", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as
# `-input_framerate` parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_gray.mp4`
writer = WriteGear(output="output_foo_gray.mp4", **output_params)

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()
```
Performance Mode — Faster Decoding via YUV420p
Ingesting frames as 12-bit YUV 4:2:0 instead of 24-bit BGR halves the bytes moving through the FFmpeg pipe. Enable -enforce_cv_patch to auto-convert YUV/NV pixel-format frames inside FFGear for seamless OpenCV compatibility.
The WriteGear API can also directly consume YUV frames (or frames in basically any other supported pixel format) in its write() class method, using its -input_pixfmt attribute in Compression Mode.
You can also use yuv422p (4:2:2 subsampling) or yuv444p (4:4:4 subsampling) instead for higher chroma fidelity.
```python
# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# open source with FFGear stream for YUV420 output
stream = FFGear(source="myvideo.mp4", frame_format="yuv420p", logging=True).start()

# retrieve framerate from source JSON Metadata and pass it as
# `-input_framerate` parameter for controlled framerate,
# and also add input pixfmt as yuv420p
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["output_framerate"],
    "-input_pixfmt": "yuv420p",
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_yuv.mp4`
writer = WriteGear(output="output_foo_yuv.mp4", logging=True, **output_params)

# loop over
while True:

    # read OpenCV compatible YUV420p frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with the YUV420p frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()
```
Fastest RAW-to-Grayscale via -extract_luma
Every YUV/NV bytestream stores the Luma (Y) plane uncompressed at the top of each frame. The exclusive -extract_luma boolean attribute makes FFGear slice that Y-plane directly and hand back a 2D (H, W) grayscale ndarray — no colorspace conversion in FFmpeg, no cv2.cvtColor in Python. This is strictly faster than frame_format="gray", which still asks FFmpeg to do a yuv→gray conversion on every frame.
Combined with the reduced pipe-bytes of YUV 4:2:0 ingest, this is the fastest grayscale pipeline the API can produce.
Similar to normal GRAYSCALE format frames, you can directly consume these frames in WriteGear API’s write() class method:
```python
# import required libraries
from vidgear.gears import FFGear
from vidgear.gears import WriteGear
import json

# enable direct Luma (Y-plane) extraction
options = {"-extract_luma": True}

# stream Grayscale via YUV frames
stream = FFGear(
    source="myvideo.mp4", frame_format="yuv420p", logging=True, **options
).start()

# retrieve framerate from source JSON Metadata and pass it as
# `-input_framerate` parameter for controlled framerate
output_params = {
    "-input_framerate": json.loads(stream.stream.metadata)["source_video_framerate"]
}

# Define writer with default parameters and suitable
# output filename for e.g. `output_foo_yuv_gray.mp4`
writer = WriteGear(output="output_foo_yuv_gray.mp4", **output_params)

# loop over
while True:

    # read grayscale frames
    frame = stream.read()

    # check for frame if NoneType
    if frame is None:
        break

    # {do something with Luma (Y-plane) extracted grayscale frame here}

    # write frame to output
    writer.write(frame)

# safely close stream
stream.stop()

# safely close writer
writer.close()
```