Inference

Runs inference using a machine-learning model loaded from an ONNX file.

user

Container for user-defined plugs. Nodes should never make their own plugs here, so users are free to do as they wish.

model

Path to the model file, which must be in the ONNX (.onnx) format. Call loadModel() or press the reload button to configure the in and out plugs to match the model's inputs and outputs.

Tip

If a relative path is used, it is searched for in each of the filesystem locations specified by the GAFFERML_MODEL_PATHS environment variable.

Supported file extensions: onnx
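The search behaviour described in the tip can be sketched as follows. This is an illustrative helper, not part of the GafferML API: `find_model` is a hypothetical name, and it assumes the environment variable holds a list of directories joined with the platform's path separator.

```python
import os

def find_model(path, env_var="GAFFERML_MODEL_PATHS"):
    """Return the first existing match for `path`, or None.

    Absolute paths are used as-is; relative paths are resolved
    against each location listed in `env_var`, in order.
    """
    if os.path.isabs(path):
        return path if os.path.exists(path) else None
    # os.pathsep is an assumption about the separator (":" on
    # POSIX, ";" on Windows).
    for location in os.environ.get(env_var, "").split(os.pathsep):
        if not location:
            continue
        candidate = os.path.join(location, path)
        if os.path.exists(candidate):
            return candidate
    return None
```

Because locations are tried in order, placing a directory earlier in GAFFERML_MODEL_PATHS lets it shadow models of the same name in later directories.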

in

The inputs to the model.

out

The outputs from the model.