In programming languages, control flow determines the order in which statements are executed. Common control flows include sequential execution, branching, and looping. PaddlePaddle Fluid inherits this concept and provides a variety of control flow APIs to control the execution logic of a deep learning model during training or prediction.
Conditional branch: for a batch of inputs, IfElse selects, according to the given condition, the logic in either true_block or false_block to execute for each input, and then merges the outputs of the two branches into one after execution. In general, the conditional expression can be generated by a logical comparison API such as less_than or equal.
Please refer to IfElse
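The split-compute-merge behavior described above can be sketched in plain Python. This is a conceptual illustration of IfElse semantics only, not Fluid code; the helper name if_else_merge is hypothetical:

```python
def if_else_merge(batch, cond, true_fn, false_fn):
    """Emulate IfElse semantics: route each element of the batch to the
    true or false branch according to the condition, then merge the
    branch outputs back into a single batch in the original order."""
    out = []
    for x in batch:
        # cond plays the role of a comparison op such as less_than
        out.append(true_fn(x) if cond(x) else false_fn(x))
    return out

# Elements less than 3 are negated; the others are doubled.
result = if_else_merge([1, 2, 3, 4],
                       cond=lambda x: x < 3,
                       true_fn=lambda x: -x,
                       false_fn=lambda x: 2 * x)
# result is [-1, -2, 6, 8]
```

Note that, as in Fluid's IfElse, every element passes through exactly one branch and the merged output preserves the batch order.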
Switch, like the switch-case statement commonly found in programming languages, selects a different branch to execute depending on the value of the input expression. Specifically, the Switch control flow defined by Fluid has the following characteristics:
- The condition of the case is a bool type value, which is a tensor type Variable in the Program;
- It checks each case one by one, selects the first case that satisfies the condition, and exits the block after completion of the execution;
- If no case condition is satisfied, the default case will be selected for execution.
Please refer to Switch
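The first-match-then-exit rule of the three characteristics above can be sketched in plain Python (a conceptual illustration, not Fluid code; the helper name switch is hypothetical):

```python
def switch(cases, default):
    """Emulate Fluid's Switch: check each (condition, fn) pair in order,
    execute the first case whose boolean condition holds and exit; fall
    back to the default case when no condition is satisfied."""
    for cond, fn in cases:
        if cond:
            return fn()      # first satisfied case wins; later cases skipped
    return default()

# A typical use: picking a learning rate by training stage.
lr = switch(
    [(False, lambda: 0.1),    # condition not met, skipped
     (True,  lambda: 0.01),   # first satisfied case: selected
     (True,  lambda: 0.001)], # never checked once a case matched
    default=lambda: 1.0)
# lr is 0.01
```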
While: when the condition is true, the logic within the While control flow is executed repeatedly, until the condition is judged to be false and the loop ends. The related APIs are as follows:
- increment : It is usually used to count the number of loops;
- array_read : Reads a Variable from the specified location in LOD_TENSOR_ARRAY to perform calculations;
- array_write : Writes a Variable back to the specified location in LOD_TENSOR_ARRAY, storing the result of the calculation.
Please refer to While
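How the three APIs above cooperate inside a While loop can be sketched in plain Python (a conceptual illustration of the semantics, not Fluid code; the helper name run_while is hypothetical):

```python
def run_while(limit):
    """Emulate a While loop driven by a counter (cf. increment) that
    reads from and writes to an array (cf. array_read / array_write)."""
    i = 0                    # loop counter, advanced by increment
    array = [1.0] * limit    # stands in for a LOD_TENSOR_ARRAY
    while i < limit:         # loop body runs while the condition is true
        x = array[i]         # array_read: fetch the entry at position i
        array[i] = x * 2     # array_write: store the computed result
        i += 1               # increment: count this loop iteration
    return array

# run_while(3) doubles each of the three entries: [2.0, 2.0, 2.0]
```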
Dynamic RNN can process a batch of unequal (variable)-length sequence data, accepting a variable with lod_level=1 as input. In DynamicRNN, the user needs to customize the RNN's single-step calculation logic. At each time step, the user can write the state to be remembered to the DynamicRNN's memory and write the required output to its output; sequence_last_step gets the output of the last time step of the DynamicRNN.
Please refer to DynamicRNN
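The per-step state update and last-step output described above can be sketched in plain Python (a conceptual illustration of DynamicRNN semantics over variable-length sequences, not Fluid code; the helper name dynamic_rnn is hypothetical):

```python
def dynamic_rnn(sequences, step):
    """Emulate DynamicRNN over a batch of unequal-length sequences:
    apply a user-defined single-step function that updates a remembered
    state, and collect each sequence's last time-step output
    (cf. sequence_last_step)."""
    last_outputs = []
    for seq in sequences:
        state = 0.0                      # initial memory
        out = state
        for x in seq:                    # one iteration per time step
            state, out = step(x, state)  # user-defined single-step logic
        last_outputs.append(out)         # keep only the last step's output
    return last_outputs

# Step logic: accumulate a running sum as state and emit it as output.
# The two sequences have different lengths (3 and 2), as DynamicRNN allows.
last = dynamic_rnn([[1, 2, 3], [4, 5]],
                   step=lambda x, h: (h + x, h + x))
# last is [6, 9]
```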