
PiGear Examples

Changing Output Pixel Format in PiGear API with Picamera2 Backend

With the Picamera2 backend, you can also define a custom format (the pixel format of the output frames) in the PiGear API.

Handling output frames with a custom pixel format correctly

While defining a custom format as an optional parameter, it is advised to also define the colorspace parameter in the PiGear API. This is required only under TWO conditions:

  • If format value is not MPEG for USB cameras.
  • If format value is not BGR (i.e., RGB888) or BGRA (i.e., XRGB8888) for Raspberry Pi camera modules.

⚠ Otherwise, output frames might NOT be compatible with OpenCV functions, and you need to handle these frames manually!

Picamera2 library has an unconventional naming convention for its pixel formats.

Please note that Picamera2 takes its pixel format naming from libcamera, which in turn takes them from certain underlying Linux components. The results are not always the most intuitive. For example, OpenCV users will typically want each pixel to be a (B, G, R) triple, for which the RGB888 format should be chosen, and not BGR888. Similarly, OpenCV users wanting an alpha channel should select XRGB8888.

For more information, refer to the Picamera2 docs ➶
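For instance, a minimal sketch of choosing an OpenCV-friendly format based on this naming (the format values below follow the naming rules described above):

# "RGB888" yields (B, G, R) pixel triples, which is what OpenCV expects
options = {"format": "RGB888"}

# "XRGB8888" yields 4-channel BGRA frames with an alpha channel
# options = {"format": "XRGB8888"}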

To reduce the size of frames in memory, it is advised to use the YUV420 pixel format.

In this example, we will define a custom YUV420 (or YVU420) pixel format for the output frames, and convert them back to BGR so they can be displayed with OpenCV.

You could also instead define the colorspace="COLOR_YUV420p2BGR" parameter in the PiGear API to convert frames back to BGR similarly.
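Here is a minimal sketch of that alternative, assuming PiGear's colorspace parameter simply applies the named OpenCV conversion flag to every frame before returning it; the manual-conversion example below does the same thing explicitly with cv2.cvtColor:

# import required libraries
from vidgear.gears import PiGear

# request YUV420 frames from the Picamera2 backend
options = {"format": "YUV420"}

# let PiGear convert each frame to BGR internally
stream = PiGear(
    resolution=(640, 480),
    framerate=60,
    colorspace="COLOR_YUV420p2BGR",
    logging=True,
    **options
).start()

# frames returned by `stream.read()` are now regular BGR arrays

# safely close video stream
stream.stop()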

# import required libraries
from vidgear.gears import PiGear
import cv2

# formulate `format` Picamera2 API 
# configurational parameters
options = {
    "format": "YUV420" # or use `YVU420`
}

# open pi video stream with defined parameters
stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()

# loop over
while True:

    # read frames from stream
    yuv420_frame = stream.read()

    # check for frame if Nonetype
    if yuv420_frame is None:
        break

    # {do something with the `YUV420` frame here}

    # convert `YUV420` to `BGR`
    bgr = cv2.cvtColor(yuv420_frame, cv2.COLOR_YUV420p2BGR)

    # {do something with the `BGR` frame here}

    # Show output window
    cv2.imshow("Output Frame", bgr)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()
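Note that planar YUV420 frames are not the usual (height, width, 3) arrays. Assuming Picamera2 packs the Y, U, and V planes into a single array of shape (height * 3 // 2, width), you could sanity-check the frames inside the loop above before converting them:

    # hypothetical sanity check for the 640x480 stream above: planar YUV420
    # is expected as a single (480 * 3 // 2, 640) = (720, 640) uint8 array
    print(yuv420_frame.shape, yuv420_frame.dtype)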

YUYV is a packed 4:2:2 YUV format that is popularly used by USB cameras.

Make sure YUYV pixel format is supported by your USB camera.

In this example, we will define a custom YUYV pixel format for the output frames, and convert them back to BGR so they can be displayed with OpenCV.

You could also instead define the colorspace="COLOR_YUV2BGR_YUYV" parameter in the PiGear API to convert frames back to BGR similarly.

# import required libraries
from vidgear.gears import PiGear
import cv2

# formulate `format` Picamera2 API 
# configurational parameters
options = {
    "format": "YUYV"
}

# open pi video stream with defined parameters
stream = PiGear(resolution=(640, 480), framerate=60, logging=True, **options).start()

# loop over
while True:

    # read frames from stream
    yuyv_frame = stream.read()

    # check for frame if Nonetype
    if yuyv_frame is None:
        break

    # {do something with the `YUYV` frame here}

    # convert `YUYV` to `BGR`
    bgr = cv2.cvtColor(yuyv_frame, cv2.COLOR_YUV2BGR_YUYV)

    # {do something with the `BGR` frame here}

    # Show output window
    cv2.imshow("Output Frame", bgr)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

 

Dynamically Adjusting Raspberry Pi Camera Parameters at Runtime in PiGear API

With the picamera2 backend, using the stream global parameter in the PiGear API, you can change all camera controls (except output resolution and format) at runtime, after the camera has started.

Accessing all available camera controls

A complete list of all the available camera controls can be found in the picamera2 docs ➶, and also by inspecting the camera_controls property of the Picamera2 object, available via the stream global parameter in the PiGear API:

# import required libraries
from vidgear.gears import PiGear

# open any pi video stream
stream = PiGear()

#display all available camera controls
print(stream.stream.camera_controls)

# safely close video stream
stream.stop()

This returns a dictionary with the control names as keys, and each value being a tuple of (min, max, default) values for that control. ⚠ The default value should be interpreted with some caution, as in many cases libcamera's default value will be overwritten by the camera tuning as soon as the camera is started.
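For instance, a minimal sketch of reading the advertised range of a single control within the snippet above (before calling stream.stop(), and assuming your camera exposes an ExposureTime control):

# unpack the (min, max, default) tuple for one control
min_exp, max_exp, default_exp = stream.stream.camera_controls["ExposureTime"]
print("ExposureTime range:", min_exp, "to", max_exp, "default:", default_exp)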

In this example, we will set the initial Camera Module's brightness value to -0.5 (dark), and will change it to 0.5 (bright) when the Z key is pressed at runtime:

Delay in setting runtime controls

There will be a delay of several frames before the controls take effect. This is because there is perhaps quite a large number of requests for camera frames already in flight, and for some controls (exposure time and analogue gain specifically), the camera may actually take several frames to apply the updates.

Using with construct for Guaranteed Camera Control Updates at Runtime

While directly modifying controls using the set_controls method might seem convenient, it doesn't guarantee that all camera control settings are applied within the same frame at runtime. The with construct provides a structured approach to managing camera control updates in real time. Here's how to use it:

# import required libraries
from vidgear.gears import PiGear

# formulate initial configurational parameters
options = "controls": {"ExposureTime": 5000, "AnalogueGain": 0.5}

# open pi video stream with these parameters
stream = PiGear(logging=True, **options).start() 

# Enter context manager and set runtime controls
# Within this block, the controls are guaranteed to be applied atomically
with stream.stream.controls as controls:  
    controls.ExposureTime = 10000  # Set new exposure time
    controls.AnalogueGain = 1.0     # Set new analogue gain

# ...rest of code goes here...

# safely close video stream
stream.stop()

# import required libraries
from vidgear.gears import PiGear
import cv2

# formulate initial configurational parameters
# set brightness to -0.5 (dark)
options = {"controls": {"Brightness": -0.5}}

# open pi video stream with these parameters
stream = PiGear(logging=True, **options).start() 

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break


    # {do something with the frame here}


    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

    # check for 'z' key if pressed
    if key == ord("z"):
        # change brightness to 0.5 (bright)
        stream.stream.set_controls({"Brightness": 0.5})

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

You can also use the stream global parameter in PiGear with the picamera backend to feed any picamera parameters at runtime after the camera has started.

PiGear API switches to the legacy picamera backend if the picamera2 library is unavailable.

It is advised to enable logging (logging=True) to see which backend is being used.

The picamera library is built on the legacy camera stack that is NOT (and never has been) supported on 64-bit OS builds.

You could also enforce the legacy picamera API backend in PiGear by setting the enforce_legacy_picamera boolean attribute in its options parameter.
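For example, a minimal sketch of enforcing the legacy backend through this attribute (assuming it is passed as a boolean attribute in PiGear's options dictionary, the picamera library is installed, and you are on a 32-bit OS build):

# import required libraries
from vidgear.gears import PiGear

# enforce the legacy picamera backend explicitly
options = {"enforce_legacy_picamera": True}

# open pi video stream with these parameters
stream = PiGear(logging=True, **options).start()

# ...rest of code goes here...

# safely close video stream
stream.stop()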

In this example, we will set the initial Camera Module's brightness value to 80 (bright), and will change it to 30 (dark) when the Z key is pressed at runtime:

# import required libraries
from vidgear.gears import PiGear
import cv2

# formulate initial configurational parameters 
# set brightness to `80` (bright)
options = {"brightness": 80} 

# open pi video stream with these parameters
stream = PiGear(logging=True, **options).start() 

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break


    # {do something with the frame here}


    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

    # check for 'z' key if pressed
    if key == ord("z"):
        # change brightness to `30` (darker)
        stream.stream.brightness = 30

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

Accessing Multiple Cameras through their Index in PiGear API

With the camera_num parameter in the PiGear API, you can easily select the camera index to be used as the source, allowing you to drive multiple cameras simultaneously from within a single Python session.

The camera_num value must be zero or greater; the PiGear API will throw a ValueError for any negative value.
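For example, a minimal sketch of driving two cameras at once from the same Python session (assuming devices are attached at indices 0 and 1):

# import required libraries
from vidgear.gears import PiGear

# open two independent pi video streams, one per camera index
stream0 = PiGear(camera_num=0, logging=True).start()
stream1 = PiGear(camera_num=1, logging=True).start()

# read one frame from each camera
frame0 = stream0.read()
frame1 = stream1.read()

# {do something with both frames here}

# safely close both video streams
stream0.stop()
stream1.stop()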

With the picamera2 backend, you can use the camera_num parameter in PiGear to select the camera index to be used as the source if you have multiple Raspberry Pi camera modules (e.g., on a Compute Module 4 board) and/or USB cameras connected simultaneously to your Raspberry Pi.

Accessing metadata about connected cameras

You can call the global_camera_info() method of the Picamera2 object, available via the stream global parameter in the PiGear API, to find out what cameras are attached. This returns a list containing one dictionary for each camera, ordered according to the camera number you would pass to the camera_num parameter in the PiGear API to open that device. The dictionary contains:

  • Model : the model name of the camera, as advertised by the camera driver.
  • Location : a number reporting how the camera is mounted, as reported by libcamera.
  • Rotation : how the camera is rotated for normal operation, as reported by libcamera.
  • Id : an identifier string for the camera, indicating how the camera is connected.

You should always check this list to discover which camera is which, as the order can change when the system boots or when USB cameras are re-connected. You can do this as follows:

# import required libraries
from vidgear.gears import PiGear

# open any pi video stream
stream = PiGear()

#display all available cameras metadata
print(stream.stream.global_camera_info())

# safely close video stream
stream.stop()

The PiGear API can accurately differentiate between USB and Raspberry Pi camera modules by utilizing the camera's metadata.
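For example, a minimal sketch of locating a USB camera's index from this metadata (the "usb" substring check on the Id string is a hypothetical heuristic, not part of the PiGear API):

# import required libraries
from vidgear.gears import PiGear

# open any pi video stream to query metadata
stream = PiGear()

# pick the first camera whose `Id` string looks like a USB device
usb_index = next(
    (num for num, info in enumerate(stream.stream.global_camera_info()) if "usb" in info["Id"].lower()),
    None,
)
print("USB camera found at index:", usb_index)

# safely close video stream
stream.stop()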

In this example, we will select the USB Camera connected at index 1 on the Raspberry Pi as the primary source for extracting frames in PiGear API:

Limited support for USB Cameras

This example also works with USB Cameras. However:

  • Users should assume that features supported on Raspberry Pi camera modules, such as Camera controls ("controls"), Transformations ("transform"), Queue ("queue"), and Buffer Count ("buffer_count"), are not available on USB Cameras.
  • Hot-plugging of USB cameras is also NOT supported - PiGear API should be completely shut down and restarted when cameras are added or removed.

This example assumes a USB Camera is connected at index 1, and some other camera connected at index 0 on your Raspberry Pi.

# import required libraries
from vidgear.gears import PiGear
from libcamera import Transform
import cv2

# formulate various Picamera2 API 
# configurational parameters for USB camera
options = {
    "sensor": {"output_size": (480, 320)},  # will override `resolution`
    "format": "RGB888" # BGR format for this example
    "auto_align_output_config": True,  # auto-align camera configuration
}

# open pi video stream at index `1` with defined parameters
stream = PiGear(camera_num=1, resolution=(640, 480), framerate=60, logging=True, **options).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()

With the picamera backend, you should not change the camera_num parameter unless you are using the Raspberry Pi 3/3+/4 Compute Module IO Boards or third-party Arducam Camarray Multiple Camera Solutions, which support attaching multiple camera modules to the same Raspberry Pi board using appropriate I/O connections.

You can use the camera_num parameter in PiGear with the picamera backend to select the camera index to be used as the source if you have multiple Raspberry Pi camera modules connected.

PiGear API switches to the legacy picamera backend if the picamera2 library is unavailable.

It is advised to enable logging (logging=True) to see which backend is being used.

The picamera library is built on the legacy camera stack that is NOT (and never has been) supported on 64-bit OS builds.

You could also enforce the legacy picamera API backend in PiGear by setting the enforce_legacy_picamera boolean attribute in its options parameter.

In this example, we will select the Camera Module connected at index 1 on the Raspberry Pi as the primary source for extracting frames in PiGear API:

This example assumes a Camera Module is connected at index 1 on your Raspberry Pi.

# import required libraries
from vidgear.gears import PiGear
import cv2

# formulate various Picamera API 
# configurational parameters
options = {
    "hflip": True,
    "exposure_mode": "auto",
    "iso": 800,
    "exposure_compensation": 15,
    "awb_mode": "horizon",
    "sensor_mode": 0,
}

# open pi video stream at index `1` with defined parameters
stream = PiGear(camera_num=1, resolution=(640, 480), framerate=60, logging=True, **options).start()

# loop over
while True:

    # read frames from stream
    frame = stream.read()

    # check for frame if Nonetype
    if frame is None:
        break

    # {do something with the frame here}

    # Show output window
    cv2.imshow("Output Frame", frame)

    # check for 'q' key if pressed
    key = cv2.waitKey(1) & 0xFF
    if key == ord("q"):
        break

# close output window
cv2.destroyAllWindows()

# safely close video stream
stream.stop()