Overview of Data Processing Framework
Advantages
- Computation efficiency
- DPF is a modern framework that has been developed to take advantage of new hardware architectures. Because development is ongoing, new capabilities are frequently added.
- Genericity
- DPF is physics agnostic, so its use is not limited to a particular field.
- Extensibility and Customization
- DPF is developed around two entities: one for the data (the field) and one for the operation (the operator). Each DPF capability is developed through operators, allowing componentization of the framework. DPF is also plugin-based, so adding new features or handling new formats is fast and easy. With componentization, plugins, and DPF scripting, you can add your own capabilities and link your existing work with DPF, as sketched in the example after this list.
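To make the field/operator pairing concrete, here is a minimal sketch using the PyDPF-Core Python client (ansys.dpf.core). The package, the "add" operator name, and the pin numbers are assumptions drawn from that client rather than from this overview, and running the snippet requires a reachable DPF server.

```python
# Minimal sketch of the two DPF entities: a field that holds data and an
# operator that transforms it. Assumes the ansys-dpf-core Python client
# and a reachable DPF server; names and pin numbers come from that client.
from ansys.dpf import core as dpf

# Data entity: a small nodal vector field with two entities.
field = dpf.Field(nentities=2)
field.data = [[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]
field.scoping.ids = [1, 2]

# Operation entity: the "add" operator, instantiated by its internal name.
add = dpf.Operator("add")
add.connect(0, field)  # pin 0: first field to sum (assumed pin layout)
add.connect(1, field)  # pin 1: second field to sum (assumed pin layout)

# Evaluating the operator produces a new field holding the summed data.
summed = add.get_output(0, dpf.types.field)
print(summed.data)
```

Every DPF capability, from reading result files to custom plugins, is exposed through operators acting on data containers in this way.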
Operator
The Operator is the main object used to create, transform, and stream data. It can be seen as an integrated circuit in electronics, with a range of input and output pins. When the operator is evaluated, it processes the input information to compute its outputs according to its description. The operator is made of:
Inputs: the input pins let you pass your data to the operator. DPF data container types, standard types, or other operators' outputs can be connected to the input pins (connecting an operator output to another operator's input does not evaluate that upstream operator). Inputs let you choose the times or frequencies at which to evaluate a result, specify the files in which a result is to be found, provide a field on which an operation is to be computed, and so on. Optional input pins allow you to customize the operator outputs even further. Several of these pins, such as the time/frequency scoping and the data sources pins, are common to many operators.
Configurations: with configurations, you can optionally choose how the operator runs. This is an advanced feature used for deep customization. The available options can, for example, change the way loops are performed or whether the operator checks its inputs. A set of common configuration options is shared by many operators.
Data transformation: this is the internal operation that occurs when an operator is evaluated. The operation returns outputs that depend on the inputs and configurations you provide. The operation applied by each operator is documented in its description.
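The following hedged sketch, again with the PyDPF-Core Python client, shows these pieces together: data sources and a time scoping connected to input pins, and the data transformation triggered when an output is requested. The result file path is a placeholder, and the named pins (data_sources, time_scoping) and pin number 4 follow that client's conventions for result operators.

```python
# Sketch: wiring inputs into a result operator and reading its output.
# Assumes the ansys-dpf-core Python client; "model.rst" is a placeholder file.
from ansys.dpf import core as dpf

data_sources = dpf.DataSources("model.rst")     # where to find the result files

disp = dpf.operators.result.displacement()      # instantiated, not yet evaluated
disp.inputs.data_sources.connect(data_sources)  # named input pin
disp.inputs.time_scoping.connect([1])           # evaluate at the first time step only

# The same connection can be made by pin number; pin 4 is conventionally
# the data sources pin on result operators:
# disp.connect(4, data_sources)

# Requesting an output runs the operator's data transformation.
fields = disp.outputs.fields_container()
print(fields)
```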
Workflow
The workflow is built by chaining operators: it evaluates the data processing defined by the operators it contains. It needs input information and computes the requested output information. A workflow is used as a black box that performs simple or complex transformations of the data. The operators contained in a workflow can be connected together internally, so the end user does not need to be aware of its complexity; the workflow only needs to expose the necessary input and output pins. For example, a workflow could expose a "time scoping" input pin and a "data sources" input pin, expose a "result" output pin, and run very complex routines internally. See workflow examples in the APIs tab.
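As an illustration of a workflow that hides its internals behind a few exposed pins, here is a hedged sketch with the PyDPF-Core Python client: a displacement operator chained to a norm operator, wrapped so that only a data_sources input and a result output are visible. Operator and pin names follow that client, and the file path is a placeholder.

```python
# Sketch: a two-operator workflow exposing only two pins.
# Assumes the ansys-dpf-core Python client; "model.rst" is a placeholder file.
from ansys.dpf import core as dpf

disp = dpf.operators.result.displacement()
norm = dpf.operators.math.norm_fc()             # norm of each field in the container
norm.inputs.fields_container.connect(disp.outputs.fields_container)

workflow = dpf.Workflow()
workflow.add_operators([disp, norm])

# Only these two pins are exposed; the internal chaining stays hidden.
workflow.set_input_name("data_sources", disp.inputs.data_sources)
workflow.set_output_name("result", norm.outputs.fields_container)

# Using the workflow as a black box:
workflow.connect("data_sources", dpf.DataSources("model.rst"))
result = workflow.get_output("result", dpf.types.fields_container)
print(result)
```

The end user only deals with the "data_sources" and "result" names; the displacement/norm chain could be replaced by much more complex routines without changing how the workflow is driven.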
Available Operators