Algorithm Development Steps

Design of Experiments

The Rapid Assessment System enables a design of experiments approach in which the effects of data pretreatment, feature selection, regression methodology, and other factors can be evaluated in terms of their impact on the overall performance of the predictive process. The design of experiments approach also enables efficient mapping of the domain space through the selection of intelligently determined test points.
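The document does not specify a particular design type; as one illustration, a small full-factorial design over candidate algorithm options can be enumerated directly (the factors and levels below are hypothetical):

```python
from itertools import product

# Hypothetical option space; the actual factors and levels are
# application-specific and would come from the development team.
pretreatments = ["none", "snv", "savgol_derivative"]
feature_selectors = ["all", "variance_threshold", "top_k_correlation"]
regressors = ["pls", "ridge", "svr"]

# Full-factorial design: every combination of factor levels is a test point
# to be run through the Rapid Assessment System.
design = list(product(pretreatments, feature_selectors, regressors))

# 3 x 3 x 3 = 27 test points; a fractional or space-filling design
# would reduce this count for larger option spaces.
n_test_points = len(design)
```

For large option spaces, a space-filling design (e.g., Latin hypercube sampling) would replace the exhaustive enumeration shown here.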

Surrogate Model and Optimization

The Rapid Assessment System, in combination with a design of experiments approach, effectively quantifies the influence of each parameter on overall model performance, as well as interactions between parameters. The output of this process can be used to create a surrogate, or response surface, model: a response surface fit to the test points intelligently selected during the design of experiments investigation. The resulting surrogate model can then be used to estimate future performance for a given selection of algorithms and parameter settings.
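As a minimal sketch of a response-surface surrogate, a second-order polynomial can be fit to the performance observed at the design test points and then queried at untried settings. The data below are illustrative, not taken from any real study:

```python
import numpy as np

# Illustrative test points: a single tuning parameter and the model
# performance (e.g., R^2) observed at each design point.
param_values = np.array([0.1, 0.3, 0.5, 0.7, 0.9])
performance = np.array([0.62, 0.78, 0.85, 0.80, 0.66])

# Second-order response surface: performance ~ c2*x^2 + c1*x + c0.
coeffs = np.polyfit(param_values, performance, deg=2)
surrogate = np.poly1d(coeffs)

# Estimate performance at an untried parameter setting.
estimate = float(surrogate(0.55))
```

In practice the surface would span several parameters at once; higher-dimensional polynomial or Gaussian-process surrogates follow the same fit-then-query pattern.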


The general parameter settings (pretreatment methodology, algorithm sequence, regression model, etc.) for optimal model performance have now been reasonably identified via the design of experiments in combination with the surrogate model. Further optimization within this narrowed option space can be completed through additional designs of experiments and optimization algorithms. A leaderboard of overall model performance is established, and a multitude of performance parameters are recorded and stored in the performance database.
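The leaderboard and performance database can be as simple as a sorted record of each candidate configuration and its metrics; a minimal sketch, with all model names and numbers illustrative:

```python
# Illustrative performance records; in practice these come from the
# optimization runs, and many more metrics would be stored per model.
performance_db = [
    {"model": "pls_snv", "rmse": 0.42, "r2": 0.91, "runtime_s": 1.2},
    {"model": "svr_savgol", "rmse": 0.38, "r2": 0.93, "runtime_s": 8.7},
    {"model": "ridge_raw", "rmse": 0.51, "r2": 0.87, "runtime_s": 0.4},
]

# Leaderboard: rank candidates by prediction error (lowest RMSE first).
leaderboard = sorted(performance_db, key=lambda rec: rec["rmse"])

best = leaderboard[0]["model"]
```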

Model Robustness Testing

The overall objective of the robustness testing phase is to identify the prediction process that best balances overall robustness with prediction performance. Algorithm robustness is typically application-specific but involves testing algorithm performance at the extremes anticipated in the operational environment, as well as more detailed statistical analysis and “stress testing” of models that is not practical during the large-scale rapid-assessment phase.
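One simple form of stress testing is to compare a model's error on nominal inputs against its error at the operational extremes. The sketch below uses a stand-in model and synthetic reference behavior; the robustness criterion is purely illustrative:

```python
import math

def predict(x):
    # Stand-in for a trained model: a simple linear response.
    return 2.0 * x + 1.0

def reference(x):
    # Stand-in for the true behavior, which deviates at the extremes.
    return 2.0 * x + 1.0 + 0.05 * x ** 2

def rmse(points):
    return math.sqrt(sum((predict(x) - reference(x)) ** 2 for x in points) / len(points))

nominal = [4.0, 5.0, 6.0]   # typical operating range
extremes = [0.0, 10.0]      # limits anticipated in deployment

nominal_err = rmse(nominal)
extreme_err = rmse(extremes)

# Illustrative robustness criterion: error at the extremes should not
# exceed twice the nominal error.
robust = extreme_err <= 2.0 * nominal_err
```

Here the stand-in model fails the criterion, illustrating how stress testing can flag a model whose rapid-assessment performance masks poor behavior at the edges of the domain.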

The results of the optimization phase are carefully examined, and a selected number of models are taken through the robustness testing phase. Typically, approximately 10 different predictive methodologies are examined during robustness testing. These models are selected based upon overall performance, with additional consideration given to predictive processes that use different methods or approaches, thereby creating model diversity.
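Selecting candidates while preserving method diversity can be sketched as a greedy pass over the leaderboard that caps how many models share a method family. The names, cap, and target count below are illustrative:

```python
# Leaderboard entries, best first; "family" is the underlying method class.
candidates = [
    {"model": "pls_a", "family": "pls"},
    {"model": "pls_b", "family": "pls"},
    {"model": "pls_c", "family": "pls"},
    {"model": "svr_a", "family": "svr"},
    {"model": "nn_a", "family": "neural"},
    {"model": "ridge_a", "family": "linear"},
]

MAX_PER_FAMILY = 2   # illustrative diversity constraint
TARGET = 4           # in practice, roughly 10 models are carried forward

selected, family_counts = [], {}
for cand in candidates:
    fam = cand["family"]
    if family_counts.get(fam, 0) < MAX_PER_FAMILY:
        selected.append(cand["model"])
        family_counts[fam] = family_counts.get(fam, 0) + 1
    if len(selected) == TARGET:
        break
```

The cap forces the third PLS model to be skipped in favor of an SVR and a neural model, trading a small amount of leaderboard rank for diversity.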

Model Selection & Results Aggregation

The model selection process involves a detailed analysis of each predictive approach based upon defined customer criteria. Criteria can include overall processing complexity, time to a reportable result, number of non-results generated, overall prediction performance, and overall algorithm robustness.
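Ranking candidates against such criteria is often done as a weighted score over normalized metrics; a minimal sketch in which the criteria, weights, and scores are all illustrative:

```python
# Illustrative criteria scores in [0, 1], higher is better
# (e.g., "speed" would be derived from time to a reportable result).
models = {
    "model_a": {"accuracy": 0.92, "speed": 0.60, "robustness": 0.85, "simplicity": 0.70},
    "model_b": {"accuracy": 0.89, "speed": 0.95, "robustness": 0.80, "simplicity": 0.90},
}

# Customer-defined weights reflecting the application's priorities.
weights = {"accuracy": 0.4, "speed": 0.2, "robustness": 0.3, "simplicity": 0.1}

def weighted_score(scores):
    return sum(weights[k] * scores[k] for k in weights)

ranked = sorted(models, key=lambda m: weighted_score(models[m]), reverse=True)
```

With these particular weights, the slightly less accurate but faster and simpler model wins, which is exactly the kind of trade-off the customer criteria are meant to surface.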

During model robustness testing, models built on different assumptions are deliberately retained in the selection process, and the ability to aggregate results across these models is specifically examined. Agreement among the predictions of diverse models has been demonstrated to be a strong indicator of measurement accuracy.
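A minimal sketch of aggregation and agreement: the reported result is the mean across the diverse models, and the spread among their predictions serves as the agreement measure (values are illustrative):

```python
import statistics

# Predictions for one sample from several diverse models (illustrative).
predictions = [10.2, 10.4, 10.1, 10.3]

aggregate = statistics.mean(predictions)   # reported result
agreement = statistics.stdev(predictions)  # low spread = strong agreement
```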

The ability to determine a confidence metric associated with a given prediction result can be very valuable in practical application. Therefore, a final step in the model selection process is to define outlier or anomaly detection methods and their associated thresholds so that application-specific confidence metrics are calculated and implemented.
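One common outlier-detection approach is a z-score check of each incoming sample against the training population, with an application-specific threshold; the statistics and threshold below are illustrative:

```python
import statistics

# Illustrative training-population statistics for one input feature;
# in practice these come from the development data set.
train_values = [9.8, 10.1, 10.0, 10.3, 9.9, 10.2]
train_mean = statistics.mean(train_values)
train_std = statistics.stdev(train_values)

Z_THRESHOLD = 3.0  # application-specific outlier threshold

def prediction_confidence(x):
    """Flag low confidence when the input is anomalous relative to training data."""
    z = abs(x - train_mean) / train_std
    return "low" if z > Z_THRESHOLD else "high"
```

A sample near the training distribution (e.g., `prediction_confidence(10.0)`) is flagged high confidence, while one far outside it (e.g., `prediction_confidence(15.0)`) is flagged low, allowing the system to withhold or qualify the prediction.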



Prospective Validation and Confidence Assessment

The final step in algorithm development is a true validation of performance, conducted on new or novel data. The predicted results versus reference values, along with the confidence level associated with each prediction, are examined in detail. The overall performance results are then evaluated relative to product needs. This final step establishes confidence in the overall prediction methodology prior to deployment of the product.
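The prospective check can be sketched as computing prediction error on never-before-seen samples and comparing it against the product requirement; all values below are illustrative:

```python
import math

# Predictions on new, never-before-seen samples vs. reference values.
predicted = [10.1, 9.8, 10.4, 10.0]
reference = [10.0, 10.0, 10.2, 10.1]

rmse = math.sqrt(
    sum((p - r) ** 2 for p, r in zip(predicted, reference)) / len(predicted)
)

REQUIRED_RMSE = 0.5  # illustrative product specification
meets_spec = rmse <= REQUIRED_RMSE
```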