When running the demo program, you can pass options for DepthAI on the command line. The available options and their usage are summarized below for reference.



python -s disparity_color,12
  • The -s option caps the disparity_color stream frame rate at 12 fps.


python -s metaout previewout depth -bb -ff
  • The options overlay bounding boxes on the depth stream as well (-bb) and display the RGB camera feed with its full field of view (-ff).
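The invocations above can also be built programmatically, e.g. when launching the demo from a wrapper script. A minimal sketch; `depthai_demo.py` and the `build_demo_cmd` helper are assumptions, so adjust the script name to match your local checkout:

```python
def build_demo_cmd(streams, extra_flags=()):
    # Build an argv list for the DepthAI demo script.
    # 'depthai_demo.py' is an assumed name - adjust it to match
    # the demo script in your local checkout.
    return ["python3", "depthai_demo.py", "-s", *streams, *extra_flags]

# Second example above: metaout/previewout/depth streams, with
# bounding boxes drawn on the depth stream (-bb) and the full
# RGB field of view used for the NN (-ff).
cmd = build_demo_cmd(["metaout", "previewout", "depth"], ["-bb", "-ff"])
print(cmd)
# To actually launch it, pass the list to subprocess.run(cmd).
```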


  • [-h]
  • [-brd BOARD]
  • [-sh [1-14]]
  • [-cmx [1-14]]
  • [-nce [1-2]]
  • [-mct {auto,local,cloud}]
  • [-rgbr {1080,2160,3040}]
  • [-rgbf RGB_FPS]
  • [-cs COLOR_SCALE]
  • [-monor {400,720,800}]
  • [-monof MONO_FPS]
  • [-dct [0-255]]
  • [-med {0,3,5,7}]
  • [-lrc]
  • [-fv FIELD_OF_VIEW]
  • [-rfv RGB_FIELD_OF_VIEW]
  • [-b BASELINE]
  • [-w]
  • [-e]
  • [--clear-eeprom]
  • [-o]
  • [-dev DEVICE_ID]
  • [-debug [DEV_DEBUG]]
  • [-fusb2]
  • [-cnn {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}]
  • [-cnn2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}]
  • [-cam {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}]
  • [-dd]
  • [-bb]
  • [-ff]
  • [-sync]
  • [-seq]
  • [-u USB_CHUNK_KIB]
  • [-fw FIRMWARE]
  • [-vv]
  • [-s STREAMS [STREAMS ...]]
  • [-v VIDEO]
  • [-pcl]
  • [-mesh]
  • [-mirror_rectified {true,false}]



-h

show this help message and exit

--config_overwrite CONFIG_OVERWRITE

JSON-formatted pipeline config object.
This will override the defaults used in this script.

-brd BOARD
--board BOARD

BW1097, BW1098OBC - Board type from resources/boards/ (not case-sensitive).
Or path to a custom .json board config.
Mutually exclusive with [-fv -rfv -b -r -w]

-sh [1-14]
--shaves [1-14]

Number of shaves used by NN.

-cmx [1-14]
--cmx_slices [1-14]

Number of cmx slices used by NN.

-nce [1-2]
--NN_engines [1-2]

Number of NN_engines used by NN.

-mct {auto,local,cloud}
--model-compilation-target {auto,local,cloud}

Compile model locally or in the cloud?

-rgbr {1080,2160,3040}
--rgb_resolution {1080,2160,3040}

RGB cam res height: (1920x)1080, (3840x)2160 or (4056x)3040.
Default: 1080

-rgbf RGB_FPS
--rgb_fps RGB_FPS

RGB cam fps: max 118.0 for H:1080, max 42.0 for H:2160.
Default: 30.0

-cs COLOR_SCALE
--color_scale COLOR_SCALE

Scale factor for 'color' stream preview window.
Default: 1.0

-monor {400,720,800}
--mono_resolution {400,720,800}

Mono cam res height: (1280x)720, (1280x)800 or (640x)400 - binning.
Default: 720

-monof MONO_FPS
--mono_fps MONO_FPS

Mono cam fps: max 60.0 for H:720 or H:800, max 120.0 for H:400.
Default: 30.0

-dct [0-255]
--disparity_confidence_threshold [0-255]

Disparity confidence threshold, used for depth calculation.

-med {0,3,5,7}
--stereo_median_size {0,3,5,7}

Disparity / depth median filter kernel size (N x N). 0 = filtering disabled.
Default: 7


-lrc

Enable stereo 'Left-Right check' feature.

-fv FIELD_OF_VIEW
--field-of-view FIELD_OF_VIEW

Horizontal field of view (HFOV) for the stereo cameras in [deg].
Default: 71.86deg.

-rfv RGB_FIELD_OF_VIEW
--rgb-field-of-view RGB_FIELD_OF_VIEW

Horizontal field of view (HFOV) for the RGB camera in [deg].
Default: 68.7938deg.

-b BASELINE
--baseline BASELINE

Left/Right camera baseline in [cm].
Default: 9.0cm.

-r RGB_BASELINE
--rgb-baseline RGB_BASELINE

Distance the RGB camera is from the Left camera.
Default: 2.0cm.


-w

Do not swap the Left and Right cameras.


-e

Store the calibration and board_config (fov, baselines, swap-lr) in the EEPROM onboard.


--clear-eeprom

Invalidate the calib and board_config from EEPROM.


-o

Use the calib and board_config from host, ignoring the EEPROM data if programmed.

-dev DEVICE_ID
--device-id DEVICE_ID

USB port number for the device to connect to.
Use the word 'list' to show all devices and exit.

-debug [DEV_DEBUG]
--dev_debug [DEV_DEBUG]

Used by board developers for debugging.
Can take parameter to device binary


-fusb2

Force usb2 connection.

-cnn {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}
--cnn_model {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}

Cnn model to run on DepthAI

-cnn2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}
--cnn_model2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}

Cnn model to run on DepthAI for second-stage inference

-cam {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}
--cnn_camera {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}

Choose camera input for CNN (default: rgb)


-dd

Disable depth calculation on CNN models with bounding box output.


-bb

Draw the bounding boxes over the left/right/depth* streams.


-ff

Full RGB FOV for NN, not keeping the aspect ratio.


-sync

Synchronize 'previewout' and 'metaout' streams.


-seq

Synchronize sequence numbers for all packets.

-u USB_CHUNK_KIB
--usb-chunk-KiB USB_CHUNK_KIB

USB transfer chunk on device.
A higher value (up to megabytes) may improve throughput; use 0 to disable chunking.
Default: 64

-fw FIRMWARE
--firmware FIRMWARE

Commit hash for custom FW, downloaded from Artifactory


-vv

Verbose, print info about received packets.

-s STREAMS [STREAMS ...]
--streams STREAMS [STREAMS ...]

Define which streams to enable. Format: stream_name or stream_name,max_fps
Example: -s metaout previewout
Example: -s metaout previewout,10 depth_sipp,10
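The stream_name,max_fps format above splits on a single comma. As an illustration of that format, here is a minimal sketch; `parse_stream_spec` is a hypothetical helper, not part of the demo script:

```python
def parse_stream_spec(spec):
    # Split 'name' or 'name,max_fps' into its parts.
    name, sep, fps = spec.partition(",")
    if sep:
        return {"name": name, "max_fps": float(fps)}
    return {"name": name}

print(parse_stream_spec("metaout"))        # {'name': 'metaout'}
print(parse_stream_spec("previewout,10"))  # {'name': 'previewout', 'max_fps': 10.0}
```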

-v VIDEO
--video VIDEO

Path where to save video stream (existing file will be overwritten)


-pcl

Convert Depth map image to point clouds.


-mesh

Use mesh for rectification of the stereo images.

-mirror_rectified {true,false}
--mirror_rectified {true,false}

Normally, rectified_left/_right are mirrored for Stereo engine constraints. If false, disparity/depth will be mirrored instead. Default: true