
Predictive Evaluation
Tests
In order to decide whether the method is appropriate for a particular problem, numerous tests are performed, i.e. numerous tree-generating rules corresponding to different fixed training sets are created and applied to the corresponding test sets.
If they all perform above the desired precision level, the method is deemed satisfactory for the problem under consideration and can be used for predictive purposes.
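The repeated train/test procedure can be sketched as follows. This is a minimal illustration, not the platform's implementation: a one-nearest-neighbour rule stands in for the tree-generating rule, and the function names (`evaluate`, `one_nn`) and the toy data are assumptions made for the example.

```python
import random

def one_nn(train):
    # Hypothetical stand-in for a tree-generating rule:
    # predict the label of the nearest training point.
    def rule(x):
        return min(train, key=lambda point: abs(point[0] - x))[1]
    return rule

def accuracy(rule, test):
    return sum(1 for x, y in test if rule(x) == y) / len(test)

def evaluate(data, n_splits=10, train_frac=0.7, threshold=0.8, seed=0):
    # Repeat the train/test split several times; the method is deemed
    # satisfactory only if every split scores above the threshold.
    rng = random.Random(seed)
    scores = []
    for _ in range(n_splits):
        shuffled = data[:]
        rng.shuffle(shuffled)
        cut = int(train_frac * len(shuffled))
        train, test = shuffled[:cut], shuffled[cut:]
        scores.append(accuracy(one_nn(train), test))
    return scores, all(s >= threshold for s in scores)

# Toy data: two well-separated classes, so every split should pass.
data = [(i, "low") for i in range(10)] + [(100 + i, "high") for i in range(10)]
scores, satisfactory = evaluate(data)
```

The key point is the final check: a single good split is not enough; the rule must clear the precision threshold on every test set before it is used for prediction.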
Various methods for calculating similarity may be employed – e.g. Euclidean distance, Manhattan distance, Jaccard distance, cosine distance, etc.
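For reference, the four distance measures mentioned above can be computed as below; this is a plain-Python sketch of the standard definitions, not code from the platform.

```python
import math

def euclidean(a, b):
    # Straight-line distance between two numeric vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    # Sum of absolute coordinate differences.
    return sum(abs(x - y) for x, y in zip(a, b))

def jaccard_distance(a, b):
    # For collections of items: 1 minus the Jaccard similarity.
    sa, sb = set(a), set(b)
    if not (sa | sb):
        return 0.0
    return 1 - len(sa & sb) / len(sa | sb)

def cosine_distance(a, b):
    # 1 minus the cosine of the angle between two numeric vectors.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return 1 - dot / (norm_a * norm_b)
```

Note that Euclidean, Manhattan, and cosine distances apply to numeric vectors, while the Jaccard distance compares sets of items.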
Evaluation
The platform allows for the visualization of the actual and the predicted values for comparison.
The magnitude of the prediction error can be visualized as a line chart when both variables involved are numerical, or as a confusion matrix when the predicted variable is categorical.
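For the numerical case, the error series plotted by such a line chart and its overall magnitude can be computed as below; the function names are illustrative, not taken from the platform.

```python
def prediction_errors(actual, predicted):
    # Signed error for each observation (predicted minus actual);
    # this is the series a line chart of the prediction error would plot.
    return [p - a for a, p in zip(actual, predicted)]

def mean_absolute_error(actual, predicted):
    # Single-number summary of the error magnitude.
    errors = prediction_errors(actual, predicted)
    return sum(abs(e) for e in errors) / len(errors)
```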
A confusion matrix provides information on the distribution of the prediction error across conditions; this allows a model to be evaluated based on its performance in avoiding false negatives, which, for the insurance domain, is the most important type of error.
The model derived from the training set in the stages described above can be evaluated using the commands and options under the “Evaluation” step. The node structure of the test set is compared to that of the training set, and the information tooltip shows the predicted values for the relevant variables.