StreamGear API Usage Examples: Real-time Frames Mode ¶
Real-time Frames Mode itself is NOT Live-Streaming
To enable live-streaming in Real-time Frames Mode, use the exclusive `-livestream` attribute of the `stream_params` dictionary parameter in the StreamGear API. Check out the following usage example ➶ for more information.
Important Information
- StreamGear API requires FFmpeg executables for its core operations. Follow these dedicated Platform specific Installation Instructions ➶ for its installation. The API will throw a `RuntimeError` if it fails to detect valid FFmpeg executables on your system.
- In this mode, the API by default generates a primary stream (at index `0`) of the same resolution as the input frames and at the default framerate¹.
- In this mode, the API DOES NOT automatically map video-source audio to the generated streams. You need to manually assign a separate audio source through the `-audio` attribute of the `stream_params` dictionary parameter.
- In this mode, the Stream copy (`-vcodec copy`) encoder is unsupported, as this mode requires re-encoding of incoming frames.
- Always use the `close()` function at the very end of the main code.
DEPRECATION NOTICES for v0.3.3 and above

- The `terminate()` method in StreamGear is now deprecated and will be removed in a future release. Developers should use the new `close()` method instead, as it offers a more descriptive name, similar to the WriteGear API, for safely terminating StreamGear processes.
- The `rgb_mode` parameter in the `stream()` method, which earlier supported RGB frames in Real-time Frames Mode, is now deprecated and will be removed in a future version. Only BGR format frames will be supported going forward. Please update your code to handle BGR format frames.
After going through the following usage examples, check out more of its advanced configurations here ➶
Bare-Minimum Usage¶
Following is the bare-minimum code you need to get started with the StreamGear API in Real-time Frames Mode:
We are using CamGear in this bare-minimum example, but any VideoCapture Gear will work in a similar manner.
In this mode, StreamGear DOES NOT automatically map video-source audio to the generated streams. You need to manually assign a separate audio source through the `-audio` attribute of the `stream_params` dictionary parameter.
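A minimal sketch of this pipeline is given below, assuming a placeholder `foo1.mp4` video file as the source:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. a placeholder `foo1.mp4` file)
stream = CamGear(source="foo1.mp4").start()

# describe a suitable manifest-file location/name
streamer = StreamGear(output="dash.mpd")

# loop over incoming frames
while True:

    # read frames from stream
    frame = stream.read()

    # check if frame is None-type
    if frame is None:
        break

    # {do something with the frame here}

    # send frame to streamer
    streamer.stream(frame)

# safely close video stream
stream.stop()

# safely close streamer
streamer.close()
```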
After running this bare-minimum example, StreamGear will produce a manifest file (`dash.mpd`) with streamable chunks that contain information about a Primary Stream of the same resolution and framerate¹ as the input (without any audio).
Bare-Minimum Usage with controlled Input-framerate¶
In Real-time Frames Mode, the StreamGear API provides the exclusive `-input_framerate` attribute for the `stream_params` dictionary parameter, which allows you to set the assumed constant framerate for incoming frames.
In this example, we will retrieve the framerate from a webcam video stream and set it as the value for the `-input_framerate` attribute in StreamGear, as sketched after the note below.
Remember, the input framerate defaults to 25.0 fps if the `-input_framerate` attribute value is not defined in Real-time Frames Mode.
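A minimal sketch under these assumptions (webcam at device index `0`, DASH output) follows:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. the webcam at first index)
stream = CamGear(source=0).start()

# retrieve the framerate from the webcam stream and pass it
# through the exclusive `-input_framerate` attribute
stream_params = {"-input_framerate": stream.framerate}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```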
Bare-Minimum Usage with Live-Streaming¶
You can easily activate Low-latency Live-Streaming in Real-time Frames Mode, where chunks will contain information for new frames only and forget previous ones, using the exclusive `-livestream` attribute of the `stream_params` dictionary parameter. The complete example follows the notes below.
In this mode, StreamGear DOES NOT automatically map video-source audio to the generated streams. You need to manually assign a separate audio source through the `-audio` attribute of the `stream_params` dictionary parameter.
Controlling chunk size in DASH
To control the number of frames kept in chunks for the DASH stream (controlling latency), you can use the `-window_size` and `-extra_window_size` FFmpeg parameters. Lower values for these parameters will result in lower latency.

After every few chunks (equal to the sum of the `-window_size` and `-extra_window_size` values), all chunks will be overwritten while live-streaming. This means that newer chunks in the manifest will contain NO information from older chunks, and the resulting DASH stream will only play the most recent frames, reducing latency.
Controlling chunk size in HLS
To control the number of frames kept in chunks for the HLS stream (controlling latency), you can use the `-hls_init_time` and `-hls_time` FFmpeg parameters. Lower values for these parameters will result in lower latency.

After every few chunks (equal to the sum of the `-hls_init_time` and `-hls_time` values), all chunks will be overwritten while live-streaming. This means that newer chunks in the master playlist will contain NO information from older chunks, and the resulting HLS stream will only play the most recent frames, reducing latency.
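The complete example, as a minimal sketch assuming a webcam at device index `0`:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. the webcam at first index)
stream = CamGear(source=0).start()

# enable livestreaming and retrieve the framerate from the webcam stream
stream_params = {"-input_framerate": stream.framerate, "-livestream": True}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```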
Bare-Minimum Usage with OpenCV¶
You can easily use the StreamGear API directly with any other video processing library (e.g. OpenCV) in Real-time Frames Mode.
The following is a complete StreamGear API usage example with OpenCV:
This is a bare-minimum example with OpenCV, but any other Real-time Frames Mode feature or example will work in a similar manner.
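A minimal sketch with OpenCV's `VideoCapture`, again assuming a placeholder `foo1.mp4` source file:

```python
# import required libraries
from vidgear.gears import StreamGear
import cv2

# open any valid video stream (e.g. a placeholder `foo1.mp4` file)
stream = cv2.VideoCapture("foo1.mp4")

# describe a suitable manifest-file location/name
streamer = StreamGear(output="dash.mpd")

# loop over incoming frames
while True:

    # read frames from stream
    (grabbed, frame) = stream.read()

    # check if frame was grabbed successfully
    if not grabbed:
        break

    # {do something with the frame here}

    # send frame to streamer
    streamer.stream(frame)

# safely close video stream
stream.release()

# safely close streamer
streamer.close()
```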
Usage with Additional Streams¶
Similar to Single-Source Mode, in addition to the Primary Stream, you can easily generate any number of additional Secondary Streams with variable bitrate or spatial resolution, using the exclusive `-streams` attribute of the `stream_params` dictionary parameter.
To generate Secondary Streams, add each desired resolution and bitrate/framerate as a list of dictionaries to the `-streams` attribute. StreamGear will handle the rest automatically. The complete example follows the notes below.
More detailed information on the `-streams` attribute can be found here ➶
In this mode, StreamGear DOES NOT automatically map video-source audio to the generated streams. You need to manually assign a separate audio source through the `-audio` attribute of the `stream_params` dictionary parameter.
Important Information about the `-streams` attribute
- In addition to the user-defined Secondary Streams, StreamGear automatically generates a Primary Stream (at index `0`) with the same resolution as the input frames and at the default framerate¹.
- Ensure that your system, machine, server, or network can handle the additional resource requirements of the Secondary Streams. Exercise discretion when configuring multiple streams.
- You MUST define the `-resolution` value for each stream; otherwise, the stream will be discarded.
- You only need to define either the `-video_bitrate` or the `-framerate` for a valid stream:
    - If you specify the `-framerate`, the video bitrate will be calculated automatically.
    - If you define both the `-video_bitrate` and the `-framerate`, the `-framerate` will be discarded automatically.
Always use the `-streams` attribute to define additional streams safely. Duplicate or incorrect definitions can break the transcoding pipeline and corrupt the output chunks.
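Here is the complete example promised above, as a minimal sketch; the chosen resolutions and bitrates are illustrative only:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. a placeholder `foo1.mp4` file)
stream = CamGear(source="foo1.mp4").start()

# define various Secondary Streams with distinct resolutions
# and bitrates/framerates, in addition to the Primary Stream
stream_params = {
    "-streams": [
        {"-resolution": "1280x720", "-framerate": 30.0},       # Stream1: 1280x720 at 30fps
        {"-resolution": "640x360", "-framerate": 60.0},        # Stream2: 640x360 at 60fps
        {"-resolution": "320x240", "-video_bitrate": "500k"},  # Stream3: 320x240 at 500kbps
    ],
}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```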
Usage with File Audio-Input¶
In Real-time Frames Mode, if you want to add audio to your streams, you need to use the exclusive `-audio` attribute of the `stream_params` dictionary parameter.
To add an audio source, provide the path to your audio file as a string to the `-audio` attribute. The API will automatically validate and map the audio to all generated streams. The complete example follows the notes below.
Ensure the provided `-audio` audio source is compatible with the input video source. Incompatibility can cause multiple errors or result in no output at all.
You MUST use the `-input_framerate` attribute to set the exact value of the input framerate when using an external audio source in Real-time Frames Mode; otherwise, audio delay will occur in the output streams.
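The complete example, as a minimal sketch; the `/home/foo/foo1.aac` audio path is a placeholder:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. a placeholder `foo1.mp4` file)
stream = CamGear(source="foo1.mp4").start()

# set the exact input framerate (required for audio-video sync)
# and assign the external audio file (a placeholder path)
stream_params = {
    "-input_framerate": stream.framerate,
    "-audio": "/home/foo/foo1.aac",
}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```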
You can also assign a valid audio URL as input instead of a file path. More details can be found here ➶
Usage with Device Audio-Input¶
In Real-time Frames Mode, you can also use the exclusive `-audio` attribute of the `stream_params` dictionary parameter for streaming live audio from an external device.
To stream live audio, format your audio device name followed by a suitable demuxer as a list, and assign it to the `-audio` attribute. The API will automatically validate and map the audio to all generated streams. The complete example follows the instructions and notes below.
Example Assumptions
- You're running a Windows machine with all necessary audio drivers and software installed.
- There's an audio device named "Microphone (USB2.0 Camera)" connected to your Windows machine. Check the instructions below to use device sources with the `-audio` attribute on different OS platforms.
Using device sources with the `-audio` attribute on different OS platforms
To use device sources with the `-audio` attribute on different OS platforms, follow these instructions:
Windows OS users can use the dshow (DirectShow) demuxer to list audio input devices, which is the preferred option for Windows users. You can refer to the following steps to identify and specify your sound card:
- [OPTIONAL] Enable sound card (if disabled): First enable your Stereo Mix by opening the "Sound" window and selecting the "Recording" tab, then right-click on the window and select "Show Disabled Devices" to toggle the Stereo Mix device visibility. Follow this post ➶ for more details.
- Identify Sound Card: Then, you can locate your sound card using the `dshow` demuxer as follows:

```sh
c:\> ffmpeg -list_devices true -f dshow -i dummy
ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
  libavutil      51. 74.100 / 51. 74.100
  libavcodec     54. 65.100 / 54. 65.100
  libavformat    54. 31.100 / 54. 31.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 19.102 /  3. 19.102
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 16.100 /  0. 16.100
[dshow @ 03ACF580] DirectShow video devices
[dshow @ 03ACF580]  "Integrated Camera"
[dshow @ 03ACF580]  "USB2.0 Camera"
[dshow @ 03ACF580] DirectShow audio devices
[dshow @ 03ACF580]  "Microphone (Realtek High Definition Audio)"
[dshow @ 03ACF580]  "Microphone (USB2.0 Camera)"
dummy: Immediate exit requested
```
- Specify Sound Card: Then, you can specify your located sound card in StreamGear as follows:
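For instance, a minimal sketch of the relevant `stream_params` fragment, using the "Microphone (USB2.0 Camera)" device located above (plug it into the complete example at the end of this section):

```python
# assign the located DirectShow audio device, paired with the
# `dshow` demuxer, as a list to the exclusive `-audio` attribute
stream_params = {
    "-audio": [
        "-f", "dshow",
        "-i", "audio=Microphone (USB2.0 Camera)",
    ],
}
```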
If audio still doesn't work, then check out this troubleshooting guide ➶ or reach out to us on the Gitter ➶ community channel.
Linux OS users can use the alsa demuxer to list input devices for capturing live audio input, such as from a webcam. You can refer to the following steps to identify and specify your sound card:
- Identify Sound Card: To get the list of all installed capture cards on your machine, you can type `arecord -l` or `arecord -L` (longer output):

```sh
arecord -l
**** List of CAPTURE Hardware Devices ****
card 0: ICH5 [Intel ICH5], device 0: Intel ICH [Intel ICH5]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: ICH5 [Intel ICH5], device 1: Intel ICH - MIC ADC [Intel ICH5 - MIC ADC]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: ICH5 [Intel ICH5], device 2: Intel ICH - MIC2 ADC [Intel ICH5 - MIC2 ADC]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 0: ICH5 [Intel ICH5], device 3: Intel ICH - ADC2 [Intel ICH5 - ADC2]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
card 1: U0x46d0x809 [USB Device 0x46d:0x809], device 0: USB Audio [USB Audio]
  Subdevices: 1/1
  Subdevice #0: subdevice #0
```
- Specify Sound Card: Then, you can specify your located sound card in StreamGear as follows. The easiest thing to do is to reference the sound card directly, namely "card 0" (Intel ICH5) and "card 1" (the microphone on the USB webcam), as `hw:0` or `hw:1`:
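For instance, a minimal sketch of the relevant `stream_params` fragment, assuming the USB webcam's microphone on "card 1" (`hw:1`); the exact card index depends on your machine:

```python
# assign the located ALSA sound card, paired with the
# `alsa` demuxer, as a list to the exclusive `-audio` attribute
stream_params = {
    "-audio": [
        "-f", "alsa",
        "-i", "hw:1",
    ],
}
```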
If audio still doesn't work, then reach out to us on the Gitter ➶ community channel.
macOS users can use the avfoundation demuxer to list input devices for grabbing audio from integrated iSight cameras, as well as cameras connected via USB or FireWire. You can refer to the following steps to identify and specify your sound card on macOS/OSX machines:
- Identify Sound Card: Then, you can locate your sound card using `avfoundation` as follows:

```sh
ffmpeg -f avfoundation -list_devices true -i ""
ffmpeg version N-45279-g6b86dd5... --enable-runtime-cpudetect
  libavutil      51. 74.100 / 51. 74.100
  libavcodec     54. 65.100 / 54. 65.100
  libavformat    54. 31.100 / 54. 31.100
  libavdevice    54.  3.100 / 54.  3.100
  libavfilter     3. 19.102 /  3. 19.102
  libswscale      2.  1.101 /  2.  1.101
  libswresample   0. 16.100 /  0. 16.100
[AVFoundation input device @ 0x7f8e2540ef20] AVFoundation video devices:
[AVFoundation input device @ 0x7f8e2540ef20] [0] FaceTime HD camera (built-in)
[AVFoundation input device @ 0x7f8e2540ef20] [1] Capture screen 0
[AVFoundation input device @ 0x7f8e2540ef20] AVFoundation audio devices:
[AVFoundation input device @ 0x7f8e2540ef20] [0] Blackmagic Audio
[AVFoundation input device @ 0x7f8e2540ef20] [1] Built-in Microphone
```
- Specify Sound Card: Then, you can specify your located sound card in StreamGear as follows:
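For instance, a minimal sketch of the relevant `stream_params` fragment, assuming the Built-in Microphone at audio device index `1`; the exact `-audio` list format shown here (standard FFmpeg avfoundation syntax) is an assumption and may need adjusting for your setup:

```python
# assign the located AVFoundation audio device (audio device index `1`),
# paired with the `avfoundation` demuxer, as a list to `-audio`
stream_params = {
    "-audio": [
        "-f", "avfoundation",
        "-i", ":1",
    ],
}
```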
If audio still doesn't work, then reach out to us on the Gitter ➶ community channel.
It is advised to use this example with live-streaming enabled (`True`) via the StreamGear API's exclusive `-livestream` attribute of the `stream_params` dictionary parameter.
Ensure the provided `-audio` audio source is compatible with the video source device. Incompatibility can cause multiple errors or result in no output at all.
You MUST use the `-input_framerate` attribute to set the exact value of the input framerate when using an external audio source in Real-time Frames Mode; otherwise, audio delay will occur in the output streams.
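The complete example, as a minimal sketch under the Windows assumptions stated above:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. the USB camera at first index)
stream = CamGear(source=0).start()

# enable live-streaming, set the exact input framerate (required
# for audio-video sync), and assign the audio device from the
# example assumptions above, paired with its `dshow` demuxer
stream_params = {
    "-livestream": True,
    "-input_framerate": stream.framerate,
    "-audio": [
        "-f", "dshow",
        "-i", "audio=Microphone (USB2.0 Camera)",
    ],
}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```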
Usage with Hardware Video-Encoder¶
In Real-time Frames Mode, you can easily change the video encoder according to your requirements by passing the `-vcodec` FFmpeg parameter as an attribute in the `stream_params` dictionary parameter. Additionally, you can specify additional properties, features, and optimizations for your system's GPU.
In this example, we will be using `h264_vaapi` as our hardware encoder, and specifying the device hardware's location and compatible video filters by formatting them as attributes in the `stream_params` dictionary parameter.
This example just conveys the idea of how to use FFmpeg's hardware encoders with the StreamGear API in Real-time Frames Mode, which MAY OR MAY NOT suit your system. Please use suitable parameters based on your supported system and FFmpeg configurations only.
Checking VAAPI Support for Hardware Encoding
To use VAAPI (Video Acceleration API) as a hardware encoder in this example, follow these steps to ensure your FFmpeg supports VAAPI:
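For instance, one quick way to check is to list the VAAPI-accelerated encoders exposed by your FFmpeg build:

```sh
# list VAAPI-accelerated encoders available in your FFmpeg build;
# `h264_vaapi` should appear in the output for this example to work
ffmpeg -hide_banner -encoders | grep vaapi
```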
Please read the FFmpeg documentation carefully before passing any additional values to the `stream_params` parameter. Incorrect values may cause errors or result in no output.
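A minimal sketch is given below; the `/dev/dri/renderD128` render-device path is an assumption and may differ on your system:

```python
# import required libraries
from vidgear.gears import CamGear
from vidgear.gears import StreamGear

# open any valid video stream (e.g. a placeholder `foo1.mp4` file)
stream = CamGear(source="foo1.mp4").start()

# set controlled framerate, select the `h264_vaapi` hardware encoder,
# point FFmpeg at the GPU render device (an assumed path), and upload
# frames to GPU surfaces via compatible video filters
stream_params = {
    "-input_framerate": stream.framerate,
    "-vcodec": "h264_vaapi",
    "-vaapi_device": "/dev/dri/renderD128",
    "-vf": "format=nv12,hwupload",
}

# describe a suitable manifest-file location/name and assign params
streamer = StreamGear(output="dash.mpd", **stream_params)

# loop over incoming frames
while True:
    # read frames from stream
    frame = stream.read()
    # check if frame is None-type
    if frame is None:
        break
    # send frame to streamer
    streamer.stream(frame)

# safely close video stream and streamer
stream.stop()
streamer.close()
```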
¹ In Real-time Frames Mode, the Primary Stream's framerate defaults to the value of the `-input_framerate` attribute, if defined. Otherwise, it is set to 25 fps. ↩