FFmpeg
Data Structures
    struct TaskItem
    struct LastLevelTaskItem
    struct DNNAsyncExecModule
        Common Async Execution Mechanism for the DNN Backends.

Macros
    #define DNN_BACKEND_COMMON_OPTIONS

Functions
    int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType func_type, DNNExecBaseParams *exec_params)
    DNNReturnType ff_dnn_fill_task(TaskItem *task, DNNExecBaseParams *exec_params, void *backend_model, int async, int do_ioproc)
        Fill the Task for Backend Execution.
    DNNReturnType ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
        Join the Async Execution thread and set module pointers to NULL.
    DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_module)
        Start asynchronous inference routine for the TensorFlow model on a detached thread.
    DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
        Extract input and output frame from the Task Queue after asynchronous inference.
    DNNReturnType ff_dnn_fill_gettingoutput_task(TaskItem *task, DNNExecBaseParams *exec_params, void *backend_model, int input_height, int input_width, void *ctx)
        Allocate input and output frames and fill the Task with execution parameters.
DNN common functions for different backends.
Definition in file dnn_backend_common.h.
#define DNN_BACKEND_COMMON_OPTIONS |
Definition at line 31 of file dnn_backend_common.h.
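As used by the FFmpeg DNN backends, this macro expands to the AVOption entries shared by all of them, so a backend's option table only has to add its backend-specific entries. Below is a minimal sketch of how a backend might splice it in; XYZContext, XYZOptions, OFFSET() and the model_filename entry are illustrative assumptions, not part of this header, and the context's options struct must carry the fields the macro expansion references (the existing backends keep their nireq/async options there).

    /* Sketch: splicing the shared options into a hypothetical backend's
     * AVOption table. OFFSET() and FLAGS must be defined before the macro
     * is expanded, as in the existing backends. */
    #include <stddef.h>
    #include <stdint.h>
    #include "libavutil/opt.h"
    #include "dnn_backend_common.h"

    typedef struct XYZOptions {
        char *model_filename;   /* backend-specific option (illustrative) */
        uint32_t nireq;         /* assumed fields referenced by the shared options */
        int async;
    } XYZOptions;

    typedef struct XYZContext {
        const AVClass *class;
        XYZOptions options;
    } XYZContext;

    #define OFFSET(x) offsetof(XYZContext, x)
    #define FLAGS AV_OPT_FLAG_FILTERING_PARAM

    static const AVOption dnn_xyz_options[] = {
        { "model_filename", "path to model file", OFFSET(options.model_filename),
          AV_OPT_TYPE_STRING, { .str = NULL }, 0, 0, FLAGS },
        DNN_BACKEND_COMMON_OPTIONS
        { NULL }
    };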
int ff_check_exec_params(void *ctx, DNNBackendType backend, DNNFunctionType func_type, DNNExecBaseParams *exec_params)
Definition at line 29 of file dnn_backend_common.c.
Referenced by ff_dnn_execute_model_native(), ff_dnn_execute_model_ov(), and ff_dnn_execute_model_tf().
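A sketch of where the check typically sits: at the top of a backend's execute-model entry point, before any task is created. Only the ff_check_exec_params() call comes from this header; the backend name, model wrapper and context types are hypothetical stand-ins, and the model->model / model->func_type accesses assume the DNNModel definition in dnn_interface.h.

    /* Hypothetical backend entry point; XYZModel/XYZContext are stand-ins. */
    static DNNReturnType xyz_execute_model(const DNNModel *model,
                                           DNNExecBaseParams *exec_params)
    {
        XYZModel   *xyz_model = model->model;   /* backend-private model data */
        XYZContext *ctx       = &xyz_model->ctx;

        /* reject NULL frames, bad output counts, unsupported function types,
         * etc. before any task is queued */
        if (ff_check_exec_params(ctx, DNN_TF, model->func_type, exec_params) != 0)
            return DNN_ERROR;

        /* ... fill a TaskItem and hand it to the execution queue ... */
        return DNN_SUCCESS;
    }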
DNNReturnType ff_dnn_fill_task(TaskItem *task, DNNExecBaseParams *exec_params, void *backend_model, int async, int do_ioproc)
Fill the Task for Backend Execution.
It should be called after checking execution parameters using ff_check_exec_params.
Parameters:
    task            pointer to the allocated task
    exec_params     pointer to execution parameters
    backend_model   void pointer to the backend model
    async           flag for async execution. Must be 0 or 1
    do_ioproc       flag for IO processing. Must be 0 or 1

Return values:
    DNN_SUCCESS     if successful
    DNN_ERROR       if flags are invalid or any parameter is NULL
Definition at line 56 of file dnn_backend_common.c.
Referenced by ff_dnn_execute_model_native(), ff_dnn_execute_model_ov(), ff_dnn_execute_model_tf(), and ff_dnn_fill_gettingoutput_task().
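A sketch of the typical call site inside such an execute function: allocate a task, fill it from the caller's parameters, and only then hand it over to the backend's queue. The allocation, the ctx->options.async flag and the queueing step are assumptions about the surrounding backend code.

    /* Sketch: build one task for the incoming request. */
    TaskItem *task = av_malloc(sizeof(*task));
    if (!task) {
        av_log(ctx, AV_LOG_ERROR, "unable to allocate task.\n");
        return DNN_ERROR;
    }

    if (ff_dnn_fill_task(task, exec_params, backend_model,
                         ctx->options.async, /*do_ioproc*/ 1) != DNN_SUCCESS) {
        av_freep(&task);
        return DNN_ERROR;
    }

    /* push the filled task onto the backend's task queue and start inference */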
DNNReturnType ff_dnn_async_module_cleanup(DNNAsyncExecModule *async_module)
Join the Async Execution thread and set module pointers to NULL.
Parameters:
    async_module    pointer to DNNAsyncExecModule module

Return values:
    DNN_SUCCESS     if successful
    DNN_ERROR       if async_module is NULL
Definition at line 92 of file dnn_backend_common.c.
Referenced by destroy_request_item().
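A sketch of how a per-request destructor, in the spirit of the destroy_request_item() caller listed above, might use it; the request structure here is a hypothetical stand-in for a backend's per-request item.

    /* Hypothetical per-request structure embedding the async module. */
    typedef struct XYZRequestItem {
        DNNAsyncExecModule exec_module;
        /* backend-specific inference handles ... */
    } XYZRequestItem;

    static void destroy_request_item(XYZRequestItem **ptr)
    {
        XYZRequestItem *item;
        if (!ptr || !*ptr)
            return;
        item = *ptr;
        /* joins the async execution thread (if one is running) and clears
         * the module's pointers before the memory is released */
        ff_dnn_async_module_cleanup(&item->exec_module);
        av_freep(ptr);
    }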
DNNReturnType ff_dnn_start_inference_async(void *ctx, DNNAsyncExecModule *async_module)
Start asynchronous inference routine for the TensorFlow model on a detached thread.
It calls the completion callback after the inference completes. Completion callback and inference function must be set before calling this function.
If POSIX threads aren't supported, the execution rolls back to synchronous mode, calling completion callback after inference.
Parameters:
    ctx             pointer to the backend context
    async_module    pointer to DNNAsyncExecModule module

Return values:
    DNN_SUCCESS     on the start of async inference
    DNN_ERROR       in case async inference cannot be started
Definition at line 111 of file dnn_backend_common.c.
Referenced by execute_model_tf(), and ff_dnn_flush_tf().
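A sketch of the setup the paragraph above requires: both the inference function and the completion callback must be set on the module before the call. The field names follow the DNNAsyncExecModule definition in this header; the request item and the two static functions are hypothetical backend code.

    /* Sketch: wire up a request's async module, then start inference.
     * xyz_start_inference() runs the model and xyz_infer_completion_callback()
     * post-processes its output; both are hypothetical backend functions. */
    request->exec_module.start_inference = &xyz_start_inference;
    request->exec_module.callback        = &xyz_infer_completion_callback;
    request->exec_module.args            = request;

    if (ff_dnn_start_inference_async(ctx, &request->exec_module) != DNN_SUCCESS) {
        /* the request is still owned by the caller: release or re-queue it */
        return DNN_ERROR;
    }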
DNNAsyncStatusType ff_dnn_get_result_common(Queue *task_queue, AVFrame **in, AVFrame **out)
Extract input and output frame from the Task Queue after asynchronous inference.
Parameters:
    task_queue      pointer to the task queue of the backend
    in              double pointer to the input frame
    out             double pointer to the output frame

Return values:
    DAST_EMPTY_QUEUE    if task queue is empty
    DAST_NOT_READY      if inference not completed yet
    DAST_SUCCESS        if result successfully extracted
Definition at line 141 of file dnn_backend_common.c.
Referenced by ff_dnn_get_result_native(), ff_dnn_get_result_ov(), and ff_dnn_get_result_tf().
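A sketch of the draining loop a backend's ff_dnn_get_result_*() wrapper (or the filter calling it) ends up in; the task_queue handle and what is done with the extracted frames are assumptions about the calling code.

    /* Sketch: pull completed tasks off the queue until none are ready. */
    AVFrame *in_frame  = NULL;
    AVFrame *out_frame = NULL;
    DNNAsyncStatusType status;

    do {
        status = ff_dnn_get_result_common(task_queue, &in_frame, &out_frame);
        if (status == DAST_SUCCESS) {
            /* forward out_frame downstream; in_frame is the matching input */
        }
    } while (status == DAST_SUCCESS);

    if (status == DAST_NOT_READY) {
        /* inference still in flight: poll again later */
    }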
DNNReturnType ff_dnn_fill_gettingoutput_task(TaskItem *task, DNNExecBaseParams *exec_params, void *backend_model, int input_height, int input_width, void *ctx)
Allocate input and output frames and fill the Task with execution parameters.
Parameters:
    task            pointer to the allocated task
    exec_params     pointer to execution parameters
    backend_model   void pointer to the backend model
    input_height    height of input frame
    input_width     width of input frame
    ctx             pointer to the backend context

Return values:
    DNN_SUCCESS     if successful
    DNN_ERROR       if allocation fails
Definition at line 161 of file dnn_backend_common.c.
Referenced by get_output_native(), get_output_ov(), and get_output_tf().
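A sketch of the get_output_*() pattern used by the callers above: probe the model's output size by running it once on internally allocated frames of the requested input size. The DNNExecBaseParams and TaskItem field names are assumptions based on the surrounding DNN interface, and the synchronous execution step in the middle is elided.

    /* Sketch: probing output dimensions inside a backend's get_output callback.
     * Error paths are shortened; field names are assumptions. */
    TaskItem task;
    DNNExecBaseParams exec_params = {
        .input_name   = input_name,
        .output_names = &output_name,
        .nb_output    = 1,
        .in_frame     = NULL,   /* allocated by ff_dnn_fill_gettingoutput_task() */
        .out_frame    = NULL,
    };

    if (ff_dnn_fill_gettingoutput_task(&task, &exec_params, backend_model,
                                       input_height, input_width, ctx) != DNN_SUCCESS)
        return DNN_ERROR;

    /* ... execute the task synchronously, then read back the probed size ... */
    *output_width  = task.out_frame->width;
    *output_height = task.out_frame->height;

    av_frame_free(&task.out_frame);
    av_frame_free(&task.in_frame);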