
When running the demo program, depthai_demo.py, you can pass command-line options that control how DepthAI is used. The available options and their usage are summarized below for reference.

Example commands

Displaying the depth stream in color mode

python depthai_demo.py -s disparity_color,12
  • The option sets the frame rate to 12 fps.

Displaying inference results and the depth stream

python depthai_demo.py -s metaout previewout depth -bb -ff
  • The options overlay bounding boxes on the depth stream as well, and select the mode that displays the RGB camera's full field of view.

Option list

  • [-h]
  • [-co CONFIG_OVERWRITE]
  • [-brd BOARD]
  • [-sh [1-14]]
  • [-cmx [1-14]]
  • [-nce [1-2]]
  • [-mct {auto,local,cloud}]
  • [-rgbr {1080,2160,3040}]
  • [-rgbf RGB_FPS]
  • [-cs COLOR_SCALE]
  • [-monor {400,720,800}]
  • [-monof MONO_FPS]
  • [-dct [0-255]]
  • [-med {0,3,5,7}]
  • [-lrc]
  • [-fv FIELD_OF_VIEW]
  • [-rfv RGB_FIELD_OF_VIEW]
  • [-b BASELINE]
  • [-r RGB_BASELINE]
  • [-w]
  • [-e]
  • [--clear-eeprom]
  • [-o]
  • [-dev DEVICE_ID]
  • [-debug [DEV_DEBUG]]
  • [-fusb2]
  • [-cnn {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}]
  • [-cnn2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}]
  • [-cam {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}]
  • [-dd]
  • [-bb]
  • [-ff]
  • [-sync]
  • [-seq]
  • [-u USB_CHUNK_KIB]
  • [-fw FIRMWARE]
  • [-vv]
  • [-s STREAMS [STREAMS ...]]
  • [-v VIDEO]
  • [-pcl]
  • [-mesh]
  • [-mirror_rectified {true,false}]

How to use the options

-h
--help

show this help message and exit

-co CONFIG_OVERWRITE
--config_overwrite CONFIG_OVERWRITE

JSON-formatted pipeline config object.
This will override the defaults used in this script.
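As a sketch, an override can be passed as a JSON string. The "streams" key below is an assumption based on the gen1 DepthAI pipeline config; check it against the defaults the script actually uses before relying on it.

```shell
# Hypothetical override enabling only the 'previewout' stream.
# The "streams" key is an assumption; verify against depthai_demo.py's defaults.
CONFIG='{"streams": ["previewout"]}'

# Validate the JSON locally before handing it to the demo:
echo "$CONFIG" | python3 -m json.tool > /dev/null && echo "valid JSON"

# python depthai_demo.py -co "$CONFIG"
```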

-brd BOARD
--board BOARD

BW1097, BW1098OBC - Board type from resources/boards/ (not case-sensitive).
Or path to a custom .json board config.
Mutually exclusive with [-fv -rfv -b -r -w]
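A custom board config is a small .json file. The sketch below follows the general shape of the configs shipped in resources/boards/, using the default values listed elsewhere on this page; the field names are assumptions, so compare with an existing file such as BW1098OBC.json before use.

```json
{
    "board_config": {
        "name": "MY_CUSTOM_BOARD",
        "swap_left_and_right_cameras": true,
        "left_fov_deg": 71.86,
        "left_to_right_distance_cm": 9.0,
        "left_to_rgb_distance_cm": 2.0
    }
}
```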

-sh [1-14]
--shaves [1-14]

Number of shaves used by NN.

-cmx [1-14]
--cmx_slices [1-14]

Number of cmx slices used by NN.

-nce [1-2]
--NN_engines [1-2]

Number of NN_engines used by NN.

-mct {auto,local,cloud}
--model-compilation-target {auto,local,cloud}

Compile the model locally or in the cloud?

-rgbr {1080,2160,3040}
--rgb_resolution {1080,2160,3040}

RGB cam res height: (1920x)1080, (3840x)2160 or (4056x)3040.
Default: 1080

-rgbf RGB_FPS
--rgb_fps RGB_FPS

RGB cam fps: max 118.0 for H:1080, max 42.0 for H:2160.
Default: 30.0

-cs COLOR_SCALE
--color_scale COLOR_SCALE

Scale factor for 'color' stream preview window.
Default: 1.0

-monor {400,720,800}
--mono_resolution {400,720,800}

Mono cam res height: (1280x)720, (1280x)800 or (640x)400 - binning.
Default: 720

-monof MONO_FPS
--mono_fps MONO_FPS

Mono cam fps: max 60.0 for H:720 or H:800, max 120.0 for H:400.
Default: 30.0

-dct [0-255]
--disparity_confidence_threshold [0-255]

Disparity confidence threshold.

-med {0,3,5,7}
--stereo_median_size {0,3,5,7}

Disparity / depth median filter kernel size (N x N). 0 = filtering disabled.
Default: 7

-lrc
--stereo_lr_check

Enable stereo 'Left-Right check' feature.

-fv FIELD_OF_VIEW
--field-of-view FIELD_OF_VIEW

Horizontal field of view (HFOV) for the stereo cameras in [deg].
Default: 71.86deg.

-rfv RGB_FIELD_OF_VIEW
--rgb-field-of-view RGB_FIELD_OF_VIEW

Horizontal field of view (HFOV) for the RGB camera in [deg].
Default: 68.7938deg.

-b BASELINE
--baseline BASELINE

Left/Right camera baseline in [cm].
Default: 9.0cm.
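The baseline and HFOV together determine how disparity maps to depth via the standard stereo relation depth = baseline × focal_px / disparity. This is the usual geometry, not depthai_demo.py's internal code; the sketch below plugs in this page's default values.

```shell
# Standard stereo depth relation using this page's defaults (a sketch,
# not the demo program's actual implementation):
python3 - <<'EOF'
import math
hfov_deg = 71.86    # stereo HFOV, default of -fv
width_px = 1280     # mono cam width at 720/800 resolutions
baseline_cm = 9.0   # default of -b
# Focal length in pixels, derived from the horizontal FOV:
focal_px = width_px / (2 * math.tan(math.radians(hfov_deg) / 2))
disparity_px = 30   # example disparity value
depth_cm = baseline_cm * focal_px / disparity_px
print(f"focal={focal_px:.0f}px depth={depth_cm:.0f}cm")
EOF
```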

-r RGB_BASELINE
--rgb-baseline RGB_BASELINE

Distance the RGB camera is from the Left camera.
Default: 2.0cm.

-w
--no-swap-lr

Do not swap the Left and Right cameras.

-e
--store-eeprom

Store the calibration and board_config (fov, baselines, swap-lr) in the EEPROM onboard

--clear-eeprom

Invalidate the calib and board_config from EEPROM

-o
--override-eeprom

Use the calib and board_config from host, ignoring the EEPROM data if programmed

-dev DEVICE_ID
--device-id DEVICE_ID

USB port number for the device to connect to.
Use the word 'list' to show all devices and exit.

-debug [DEV_DEBUG]
--dev_debug [DEV_DEBUG]

Used by board developers for debugging.
Can take parameter to device binary

-fusb2
--force_usb2

Force usb2 connection

-cnn {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}
--cnn_model {age-gender-recognition-retail-0013,deeplabv3p_person,emotions-recognition-retail-0003,face-detection-adas-0001,face-detection-retail-0004,facial-landmarks-35-adas-0002,human-pose-estimation-0001,landmarks-regression-retail-0009,mobilenet-ssd,mobileNetV2-PoseEstimation,pedestrian-detection-adas-0002,person-detection-retail-0013,person-vehicle-bike-detection-crossroad-1016,tiny-yolo-v3,vehicle-detection-adas-0002,vehicle-license-plate-detection-barrier-0106,yolo-v3}

CNN model to run on DepthAI

-cnn2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}
--cnn_model2 {landmarks-regression-retail-0009,facial-landmarks-35-adas-0002,emotions-recognition-retail-0003}

CNN model to run on DepthAI for second-stage inference

-cam {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}
--cnn_camera {rgb,left,right,left_right,rectified_left,rectified_right,rectified_left_right}

Choose camera input for CNN (default: rgb)

-dd
--disable_depth

Disable depth calculation on CNN models with bounding box output

-bb
--draw-bb-depth

Draw the bounding boxes over the left/right/depth streams

-ff
--full-fov-nn

Full RGB FOV for NN, not keeping the aspect ratio

-sync
--sync-video-meta

Synchronize 'previewout' and 'metaout' streams

-seq
--sync-sequence-numbers

Synchronize sequence numbers for all packets.
Experimental

-u USB_CHUNK_KIB
--usb-chunk-KiB USB_CHUNK_KIB

USB transfer chunk on device.
Higher values (up to megabytes) may improve throughput; 0 disables chunking.
Default: 64

-fw FIRMWARE
--firmware FIRMWARE

Commit hash for custom FW, downloaded from Artifactory

-vv
--verbose

Verbose, print info about received packets.

-s STREAMS [STREAMS ...]
--streams STREAMS [STREAMS ...]

Define which streams to enable. Format: stream_name or stream_name,max_fps
Example: -s metaout previewout
Example: -s metaout previewout,10 depth_sipp,10
※ Note: the stream names usable with the demo program are limited; the "depth_sipp" stream in the example above could not be used.
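The stream_name,max_fps form can be split the way the demo does internally. This is a sketch of the argument format only, not the actual parsing code:

```shell
# Split a "stream_name,max_fps" argument into its two parts:
arg="previewout,10"
name="${arg%%,*}"   # part before the comma
fps="${arg#*,}"     # part after the comma
echo "$name at max $fps fps"
```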

-v VIDEO
--video VIDEO

Path where to save video stream (existing file will be overwritten)

-pcl
--pointcloud

Convert Depth map image to point clouds

-mesh
--use_mesh

Use mesh for rectification of the stereo images

-mirror_rectified {true,false}
--mirror_rectified {true,false}

Normally, rectified_left/_right are mirrored for Stereo engine constraints. If false, disparity/depth will be mirrored instead. Default: true
