Many fields of computational science advance by improving the algorithms for solving key problems, such as SAT solving or binary classification. When comparing these algorithms, they are usually ranked based on some performance metric, with the algorithm that performs best over a set of problem instances getting the highest score. In other words, the generalist is considered to be the best algorithm.
Meta-algorithms
Meta-algorithmic techniques, such as automated algorithm selection, can provide substantial performance improvements over such a generalist approach. They do this by exploiting the complementary strengths of different algorithms or, in the case of algorithm configuration, of configurable or tunable algorithm components. Algorithms that are specifically selected or configured for a subset of problem instances usually perform better on that subset than a generalist algorithm.
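The gap between a generalist and a per-instance selector can be made concrete with a small sketch. The runtimes below are made-up illustrative numbers, and the "oracle" selector shown here (pick the fastest solver for each instance) is only an upper bound on what a real selector can achieve, not Sparkle's actual selection mechanism:

```python
# Hypothetical runtimes (seconds) of three solvers on four instances;
# each solver is strong on some instances and weak on others.
runtimes = {
    "solver_A": [10.0, 90.0, 12.0, 80.0],
    "solver_B": [85.0, 8.0, 70.0, 9.0],
    "solver_C": [40.0, 40.0, 40.0, 40.0],
}
n = len(next(iter(runtimes.values())))

# Generalist: the single solver with the lowest total runtime overall.
single_best = min(runtimes, key=lambda s: sum(runtimes[s]))

# Oracle (virtual best) selector: the fastest solver per instance.
vbs_total = sum(min(runtimes[s][i] for s in runtimes) for i in range(n))

print(f"single best: {single_best}, total {sum(runtimes[single_best]):.0f} s")
print(f"per-instance selection total: {vbs_total:.0f} s")
# → single best: solver_C, total 160 s
# → per-instance selection total: 39 s
```

Even on this toy example, exploiting complementarity reduces the total runtime by roughly a factor of four compared to the best single solver.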
However, meta-algorithms are complex and difficult to use correctly. When used incorrectly, they may not deliver their full potential; in extreme cases, they can even yield worse performance than not using them at all.
Sparkle
The Sparkle platform aims to make meta-algorithms more accessible to novice users. To achieve this, Sparkle implements standard protocols for algorithm selection and algorithm configuration that support their correct use. After an experiment, the results, as well as the problem instances, algorithms and protocol used, are presented in an automatically generated LaTeX/PDF document, parts of which can be easily copied and pasted into scientific papers.
As an example, we outline below how algorithm selection can be used in Sparkle. Once the platform is initialised, the components we need (problem instances, solvers and a feature extractor) are added to Sparkle. Using these, the feature values for each instance and the performance of each solver on each instance can be computed. From these values, a portfolio selector is constructed, which can then automatically select a solver for new problem instances. Finally, we generate a report that describes which components were used, the process Sparkle followed to construct the portfolio selector from those components, and how large the marginal contribution of each solver is to the selector.
initialise.py
add_instances.py --run-solver-later --run-extractor-later PTN/
add_solver.py --run-solver-later --deterministic 0 PbO-CCSAT-Generic/
add_solver.py --run-solver-later --deterministic 0 CSCCSat/
add_solver.py --run-solver-later --deterministic 0 MiniSAT/
add_feature_extractor.py --run-extractor-later SAT-features-competition2012_revised_without_SatELite_sparkle/
compute_features.py
run_solvers.py
construct_sparkle_portfolio_selector.py
generate_report.py
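The marginal contribution reported in the last step can be understood as how much the portfolio loses when one solver is removed. The sketch below illustrates this idea against an oracle-style (virtual best) portfolio; the runtimes are made-up numbers, and Sparkle's actual computation over a learned selector may differ in detail:

```python
# Hedged illustration of marginal contribution: how much worse does the
# oracle portfolio get when a solver is left out? Runtimes are invented.
runtimes = {
    "PbO-CCSAT-Generic": [10.0, 90.0, 12.0],
    "CSCCSat": [85.0, 8.0, 70.0],
    "MiniSAT": [40.0, 40.0, 40.0],
}

def vbs_total(solvers):
    """Total runtime of an oracle picking the fastest listed solver per instance."""
    n = len(next(iter(runtimes.values())))
    return sum(min(runtimes[s][i] for s in solvers) for i in range(n))

full = vbs_total(list(runtimes))
for solver in runtimes:
    without = vbs_total([s for s in runtimes if s != solver])
    print(f"{solver}: marginal contribution {without - full:.1f} s")
```

A solver that is never the best choice for any instance (here, MiniSAT) has a marginal contribution of zero: removing it does not hurt the portfolio at all.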
Team
With the first version of Sparkle ready, we are working to improve it further by making it easier to use and by providing more meta-algorithmic techniques. In this, we are supported by the software lab at LIACS, headed by Prof. Joost Visser. From this lab, Jérémie Gobeil helps to improve both Sparkle itself and our development processes. Additionally, several students, including Jeroen Rook and Richard Middelkoop, contribute to Sparkle as part of their studies.
Interested?
You can explore and try the Sparkle platform by getting it from the Bitbucket repository at https://bitbucket.org/sparkle-ai/sparkle/.