The ADA research group welcomes Annelot Bosman as a group member

Annelot recently joined the ADA group as a PhD candidate. Before coming to Leiden, she completed both her bachelor’s and master’s degrees in Econometrics at Erasmus University Rotterdam. During her studies, she specialised in Operations Research and Quantitative Logistics.

Her PhD research focuses on (complete) robustness verification of deep neural networks, in particular MIP-based verification. The topic is especially important as neural networks are used in more and more practical applications. Verifying the safety of neural networks is an important step towards being able to trust the decisions they make under arbitrary circumstances.
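To give a flavour of what MIP-based verification involves: a complete verifier encodes every unit of a trained network as mixed-integer constraints and then asks a solver whether any admissible input violates the property of interest. Below is a minimal, generic sketch of the standard big-M encoding of a single ReLU unit y = max(0, x), written with the PuLP library; it illustrates the general idea only and is not Annelot’s actual formulation.

# Big-M encoding of y = max(0, x) for a single ReLU unit (illustrative only).
import pulp

lo, up = -1.0, 2.0                                    # assumed bounds on the pre-activation x
model = pulp.LpProblem("relu_encoding", pulp.LpMinimize)
x = pulp.LpVariable("x", lowBound=lo, upBound=up)
y = pulp.LpVariable("y", lowBound=0)                  # ReLU output, y >= 0
z = pulp.LpVariable("z", cat="Binary")                # z = 1 iff the ReLU is active

model += y >= x                                       # the output is at least the input
model += y <= x - lo * (1 - z)                        # if z = 1, together with y >= x this forces y = x
model += y <= up * z                                  # if z = 0, together with y >= 0 this forces y = 0
model += x + y                                        # placeholder objective; a verifier would encode a robustness query here
model.solve(pulp.PULP_CBC_CMD(msg=False))
print(pulp.LpStatus[model.status], x.value(), y.value())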

Annelot will be supervised by prof.dr. Holger Hoos and dr. Jan van Rijn. Holger Hoos holds the recently established Alexander-von-Humboldt Chair in AI Methodology (AIM) at RWTH Aachen University, which is an integral part of the multi-institutional ADA Research Group.

The ADA research group welcomes Mike Huisman as a group member

Author: Mike Huisman

After obtaining his BSc in Artificial Intelligence and MSc in Computer Science, Mike Huisman joined LIACS in March 2021 as a teaching PhD candidate. He is part of the Reinforcement Learning Group at LIACS and also joined the ADA group in November 2021.

His research is in the field of deep meta-learning, where the goal is to learn an efficient deep learning algorithm that can learn new tasks quickly, from a limited amount of data. This can democratise the use of deep learning by lowering the threshold, in terms of required compute resources and data, for using powerful deep learning technologies. He conducts his research under the supervision of Prof. Dr. A. Plaat and Dr. J.N. van Rijn.
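As a rough illustration of the idea, the sketch below shows a generic first-order meta-learning loop in the style of Reptile; it illustrates the field in miniature and is not Mike’s actual method. The meta-learner repeatedly adapts to a small task with a few gradient steps, then nudges its initialisation towards the adapted weights, so that future tasks can be learned quickly from little data.

# First-order meta-learning toy example (Reptile-style, illustrative only).
import numpy as np

rng = np.random.default_rng(0)
meta_w = np.zeros(1)                     # meta-learned initialisation of a 1-D linear model

def sample_task():
    """A task: fit y = a * x from only five noisy examples."""
    a = rng.uniform(0.5, 1.5)
    x = rng.uniform(-1, 1, size=5)
    return x, a * x + 0.01 * rng.standard_normal(5)

for _ in range(2000):                    # outer loop over many small tasks
    x, y = sample_task()
    w = meta_w.copy()
    for _ in range(5):                   # inner loop: a few gradient steps on this task
        w -= 0.1 * 2 * np.mean((w * x - y) * x)
    meta_w += 0.05 * (w - meta_w)        # move the initialisation towards the adapted weights

print(meta_w)                            # converges roughly to the centre of the task distribution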

Thesis: Parallel Algorithm Portfolios in Sparkle

Author: Richard Middelkoop

On 29 July, I defended my bachelor thesis, entitled “Parallel Algorithm Portfolios in Sparkle”, after half a year of work, having started the project in February 2021. For this project I was part of the ADA group and was supervised by Koen van der Blom and Holger Hoos.

In a parallel portfolio, a set of independent, non-communicating solvers solve problem instances in parallel. The thesis gives insight into the design and implementation of parallel portfolios in Sparkle. Additionally, experiments demonstrate the validity of the implementation and the practical capabilities of parallel portfolios as implemented in Sparkle.

My work built on the existing platform created by the Sparkle development team. The thesis is of value to the platform because, in many problem domains, the state of the art is represented by a set of algorithms rather than a single one, and parallel algorithm portfolios provide an easily accessible way to conduct experiments with such sets of algorithms.

Below, an example is outlined showing how parallel algorithm portfolios can be used in Sparkle. Once the platform is initialised, the needed components (problem instances and solvers) are added to Sparkle. Using these, the portfolio can be constructed. With the constructed portfolio and a selection of instances, an experiment can be conducted. Finally, we generate a report that describes the components used and provides an empirical evaluation of the results of the experiment.

# initialise the Sparkle platform
initialise.py
# add the problem instances (PTN) and the solvers that make up the portfolio
add_instances.py --run-solver-later --run-extractor-later PTN/
add_solver.py --run-solver-later --deterministic 0 CSCCSat/
add_solver.py --run-solver-later --deterministic 0 MiniSAT/
add_solver.py --run-solver-later --deterministic 0 PbO-CCSAT-Generic/
# construct the parallel portfolio and run it on the selected instances
construct_sparkle_parallel_portfolio.py --nickname runtime_experiment
run_sparkle_parallel_portfolio.py --instance-paths Instances/PTN/ --portfolio-name runtime_experiment
# generate a LaTeX/PDF report describing the experimental setup and results
generate_report.py

First results in the thesis suggest that running up to 64 instances of a nondeterministic solver with different random seeds in parallel yields performance gains compared to lower numbers of solver instances. With 128 solver instances, performance is similar to that with 64, so there no longer seems to be a benefit to using more. With 256 or more solver instances, the overhead appears to increase and performance starts to drop again in this practical implementation.

If you are interested in the project, feel free to take a look at the Bitbucket project page and try Sparkle for yourself.

AutoML adoption is low: what is stopping people?

Machine learning (ML) is used more and more in a wide range of applications, but there are not enough machine learning experts to properly support this growth. Automated machine learning (AutoML) aims to make ML easier to use. As a result, experts should be able to deploy more ML systems, and less expertise should be needed to work with AutoML than to work with ML directly.

AutoML adoption

For AutoML to have this effect, however, it first needs to be adopted in practice. To study adoption levels in practice, we held a survey among teams that engineer software with ML components. As it turns out, adoption is not actually very high. In the figure below, we can see that, depending on the AutoML technique, more than 20 or even 30% of the respondents did not adopt AutoML at all. In addition, another 50 to 60% adopt AutoML techniques only partially.

Adoption per technique. The first bar shows adoption of the feature technique including ‘Implicit, e.g. deep learning’ answers, while in the second bar they are excluded.

What holds back adoption?

Although we were able to assess adoption levels through the survey, it does not tell us why people do or do not adopt these AutoML techniques. To find out, we started holding interviews, and we have already gained some insights from two initial interviews.

The people we talked to raised a number of usability concerns. Firstly, there is a perception that AutoML techniques are hard to use and require a significant initial investment to learn. Secondly, one participant indicated that in practice AutoML systems sometimes fail to work on the data at hand, but do not always make clear what the problem is. Thirdly, a participant noted that it is difficult to predict a good run length for AutoML systems. Related to this, there were also concerns about the computational resources required to use AutoML.

Looking forward

Clearly, there is room to improve AutoML adoption, and there are still some issues to resolve that could help to increase it. To get a crisper view of which issues are common and which are more atypical, we plan to hold more interviews. In addition to learning about common issues, we also hope to find out whether there are differences between organisation types. Government organisations dealing with citizen data may, for instance, be more hesitant to adopt AutoML for privacy- or bias-sensitive applications.

Paper

If you are interested in the details, you can learn more by reading our paper “AutoML Adoption in ML Software”, which was published at the AutoML workshop at ICML 2021.
Authors: Koen van der Blom, Alex Serban, Holger Hoos, Joost Visser.
Link: https://ada.liacs.nl/papers/BloEtAl21.pdf

Sparkle: Accessible Meta-Algorithmics

Many fields of computational science advance by improving the algorithms for solving key problems, such as SAT solving or binary classification. When these algorithms are compared, they are usually ranked based on some performance metric, with the algorithm that performs best over a set of problem instances getting the highest score. In other words, the generalist is considered to be the best algorithm.

Meta-algorithms

Meta-algorithmic techniques, such as automated algorithm selection, are able to provide great performance improvements over such a generalist approach. They do this by taking advantage of the complementary strengths of different algorithms or, in the case of algorithm configuration, of configurable or tunable algorithm components. Algorithms that are specially selected or configured for a specific subset of problem instances usually perform better, on this subset, than a generalist algorithm.
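As a small, purely illustrative example (with hypothetical runtimes): a solver that is best on average may be best on no single instance, and selecting a solver per instance can then be far faster than always running the single best solver.

# Toy illustration of per-instance algorithm selection vs. the single best solver.
import numpy as np

# rows: solvers A, B, C; columns: problem instances 1..4; entries: runtime in seconds
runtimes = np.array([
    [ 10, 200,  15, 300],   # solver A
    [250,  12, 240,  20],   # solver B
    [ 60,  80,  70,  90],   # solver C: best on average, yet best on no single instance
])

single_best = runtimes.mean(axis=1).argmin()
print("single best solver:", "ABC"[single_best],
      "with mean runtime:", runtimes[single_best].mean())          # solver C, 75 seconds
print("per-instance selection (oracle) mean runtime:",
      runtimes.min(axis=0).mean())                                 # 14.25 seconds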

However, meta-algorithms are complex and difficult to use correctly. At the same time, when using them incorrectly, you may not benefit from their full potential, or in the extreme case, you may even obtain worse performance than without using them.

Sparkle

The Sparkle platform aims to make meta-algorithms more accessible to novice users. To achieve this, Sparkle implements standard protocols for algorithm selection and algorithm configuration that support their correct use. After an experiment, the results, as well as the problem instances, algorithms and protocol used, are presented in an automatically generated LaTeX/PDF document, parts of which can be easily copied and pasted into scientific papers.

As an example, we outline below how algorithm selection can be used in Sparkle. Once the platform is initialised, the components that we need (problem instances, solvers and a feature extractor) are added to Sparkle. Using these, the feature values for each instance and the performance of each solver on each instance can be computed. With those values, a portfolio selector is constructed, which can be used to automatically select a solver for new problem instances. Finally, we generate a report that describes which components were used, which process Sparkle used to construct a portfolio selector from those components, and how large the marginal contribution of each solver is to this portfolio selector.

# initialise the Sparkle platform
initialise.py
# add the problem instances (PTN), the solvers and a feature extractor
add_instances.py --run-solver-later --run-extractor-later PTN/
add_solver.py --run-solver-later --deterministic 0 PbO-CCSAT-Generic/
add_solver.py --run-solver-later --deterministic 0 CSCCSat/
add_solver.py --run-solver-later --deterministic 0 MiniSAT/
add_feature_extractor.py --run-extractor-later SAT-features-competition2012_revised_without_SatELite_sparkle/
# compute instance features and run every solver on every instance
compute_features.py
run_solvers.py
# construct the portfolio selector from the collected data and generate a report
construct_sparkle_portfolio_selector.py
generate_report.py

Team

With the first version of Sparkle ready, we are working to improve it further by making things easier to use and by providing more meta-algorithmic techniques. In this we are supported by the software lab at LIACS, headed by Prof. Joost Visser. From this lab, Jérémie Gobeil helps to improve both Sparkle itself and the development processes around it. Additionally, several students, including Jeroen Rook and Richard Middelkoop, contribute to Sparkle as part of their studies.

Interested?

You can take a look, explore and try the Sparkle platform by getting it from the Bitbucket repository at https://bitbucket.org/sparkle-ai/sparkle/.

AutoML for Dynamic Recommender Systems

What can we do when the performance of an AI system, such as a recommender system, degrades due to changes in the data that it is processing? This is the central question we seek to answer in our recent project: “Hyper-Parameter Optimization for Latent Spaces in Dynamic Recommender Systems”.

When an AI system is deployed, it is usually assumed that the data the system is processing is similar to the data it has been trained on. However, there are many scenarios in which this assumption is likely to be violated. Think, for example, of a recommender system. These are systems designed to predict user preferences for items, such as movies, songs or books, and are commonly used by technology companies like Amazon, Netflix or Spotify to recommend new items or content to their users. They are a good example to illustrate the relevance of the problem studied in this work: user preferences are based on taste, and taste most definitely changes over time; likewise, the collection of items is constantly updated, with new items added and old items removed. Overall, we are dealing with a dynamic rather than a static system, which means we have to rethink the way a learning algorithm is deployed.

Overview of the proposed method. The Nelder-Mead algorithm produces a set of configurations, from which the best one is selected and deployed.

Recommender systems are often based on collaborative filtering algorithms, and these algorithms have associated hyper-parameters, such as embedding size or learning rate, that affect the behaviour and performance of the system. When the incoming data changes, for example because the collection of items is updated, there might be a new, unknown set of hyper-parameters that works more effectively on this new data than the previous setting. Our approach therefore involves i) detecting when system performance degrades (concept drift) and ii) initiating a re-configuration of the matrix factorization algorithm. For this, we employ an adaptation of the Nelder-Mead algorithm, which uses a set of heuristics to produce a number of potentially good configurations, from which the best one is selected and deployed.
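To make this loop more concrete, here is a generic sketch of the detect-then-reconfigure pattern. The project uses an adapted Nelder-Mead procedure; this sketch simply calls SciPy’s standard Nelder-Mead, and train_and_evaluate is a hypothetical stand-in for fitting a matrix factorization model on a recent window of the stream.

# Illustrative drift-detection and re-configuration loop (not the paper's implementation).
import numpy as np
from scipy.optimize import minimize

def train_and_evaluate(hyperparams, window):
    """Stand-in for fitting a matrix factorization model with the given
    (learning rate, regularisation) on `window` and returning its RMSE.
    A simple quadratic surrogate keeps the sketch runnable."""
    learning_rate, regularisation = hyperparams
    return (learning_rate - 0.05) ** 2 + (regularisation - 0.02) ** 2

def maybe_reconfigure(current_rmse, baseline_rmse, window, current_config,
                      drift_threshold=1.2):
    # i) drift detection: has the error grown noticeably relative to the baseline?
    if current_rmse <= drift_threshold * baseline_rmse:
        return current_config                      # no drift detected; keep the current configuration
    # ii) re-configuration: search for better hyper-parameters on the recent window
    result = minimize(train_and_evaluate, x0=np.asarray(current_config),
                      args=(window,), method="Nelder-Mead", options={"maxiter": 30})
    return result.x

print(maybe_reconfigure(current_rmse=1.0, baseline_rmse=0.7,
                        window=None, current_config=[0.1, 0.1]))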

To optimally control the characteristics of the stream, we also built a data generator, which produces realistic streams of preference data and injects drift into the data by expanding or shrinking the latent feature space, in other words, by changing the user preferences.
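A hypothetical, much-simplified sketch of such a generator is shown below: ratings come from the inner product of latent user and item factors, and drift is injected by re-drawing the user factors at a fixed point in the stream (the project’s actual generator instead expands or shrinks the latent space).

# Toy generator of a drifting preference-data stream (illustrative only).
import numpy as np

def preference_stream(n_users=100, n_items=50, rank=5, n_samples=10_000,
                      drift_at=5_000, seed=0):
    rng = np.random.default_rng(seed)
    users = rng.normal(size=(n_users, rank))
    items = rng.normal(size=(n_items, rank))
    for t in range(n_samples):
        if t == drift_at:                          # inject drift: user tastes change
            users = rng.normal(size=(n_users, rank))
        u, i = rng.integers(n_users), rng.integers(n_items)
        rating = users[u] @ items[i] + 0.1 * rng.standard_normal()
        yield t, u, i, rating

for t, user, item, rating in preference_stream(n_samples=3):
    print(t, user, item, round(rating, 2))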

Results on synthetic data stream. The y-axis shows RMSE, whereas the x-axis shows the number of samples processed.

We consider multiple baselines, including an online adaptation of the well-known configurator SMAC as well as a carefully tuned static model, and obtain promising results. In future work, we want to apply this method to situations in which the parameter space is more complex, for example when using deep learning models that consist of multiple components requiring specific tuning, such as convolutional architectures, recurrent layers or even attention models based on transformers.

This project is joint work by Matthias König and Holger H. Hoos (ADA group at LIACS, Leiden University), together with Bruno Veloso (Portucalense University, Portugal), Luciano Caroprese (ICAR-CNR, Italy), Sónia Teixeira (University of Porto, Portugal & LIAAD-INESC TEC, Portugal), Giuseppe Manco (ICAR-CNR, Italy) and João Gama (University of Porto, Portugal & LIAAD-INESC TEC, Portugal). It has been carried out in the context of the HumanE-AI-Net, one of the ICT-48 Networks of Centres of Excellence recently established by the European Commission, in which ADA plays a key role. The accompanying paper will be presented at ECML PKDD 2021 and can be found here, along with the code and other materials.

AutoML for Earth Observation data

In March 2020, I joined the ADA group to work on my introductory research project, as part of my master’s programme in Computer Science, under the supervision of Dr. Mitra Baratchi and Dr. Jan van Rijn. As my research topic was related to satellite data, I had the opportunity to collaborate with Dr. Andreas Vollrath, who was working at the ESA Phi-lab. My project was titled “Building Deep Learning Models for Remote Sensing Applications Through Automated Machine Learning”. Based on the results of our assumption test and preliminary experiments, we presented an online poster at ESA Phi-week 2020. The presentation video can be watched here. We demonstrated the potential of applying the most recent advances in AutoML, specifically hyperparameter optimisation, to a remote sensing dataset. Starting from that, we proposed to build an AutoML system focused on Earth Observation data. This project proposal led to my thesis work, “Automated Machine Learning for Satellite Data: Integrating Remote Sensing Pre-trained Models into AutoML Systems”.

Cover image showing AutoML being used for satellite data and representing the collaboration between Earth Observation and Machine Learning fields to achieve the best results.

Application areas that can benefit from machine learning models learned from satellite data are numerous; environmental mapping, crop monitoring, urban planning and emergency response are some examples. However, in order to leverage the latest advances in machine learning, a great deal of expert knowledge and hands-on experience is required. Automated machine learning (AutoML) aims to automate the different stages of a machine learning pipeline, making crucial decisions in a data-driven and objective way. To reach this goal, AutoML systems have been developed.

Open-source AutoML systems are usually benchmarked on natural image datasets. Satellite images differ from natural images in various respects. One of the most important differences concerns the number and type of spectral bands composing the image: natural colour images have three bands (red, green and blue), whereas satellite images can provide many more spectral bands. Therefore, the input data can be composed of more, and different types of, spectral bands (a common practice in real-world applications). These differences made us wonder how well current AutoML systems work for satellite datasets. To come up with an answer, the first part of our project focused on testing the image classification task of the AutoKeras AutoML system on a varied remote sensing benchmark suite of 7 datasets that we compiled by reviewing the remote sensing literature. We found good performance on almost all of these datasets, with impressive results for the RGB versions of two of them (EuroSAT and So2Sat). However, we did not achieve better results than manually designed models on the datasets closer to real-world applications. This showed the importance of customising these systems to achieve better performance on remote sensing data.
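For readers unfamiliar with AutoKeras, a run of the kind described above looks roughly as follows. The dataset below is only a stand-in for one of the remote sensing benchmark datasets, and the search budget and number of epochs are illustrative rather than the settings used in our experiments.

# Minimal AutoKeras image-classification run (illustrative settings and data).
import autokeras as ak
from tensorflow.keras.datasets import cifar10     # stand-in for a remote sensing dataset

(x_train, y_train), (x_test, y_test) = cifar10.load_data()
# Searches over candidate architectures and hyperparameters for `max_trials` pipelines
clf = ak.ImageClassifier(max_trials=10, overwrite=True)
clf.fit(x_train, y_train, epochs=20)
print("test loss and accuracy:", clf.evaluate(x_test, y_test))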

Furthermore, as acquiring ground-truth labels is quite expensive, a key problem in the field of remote sensing is designing high-performing models in the absence of large labelled datasets. The use of pre-trained models has given promising results in addressing this challenge. As a next step, we therefore added remote sensing pre-trained models to AutoKeras. This helped to further improve the accuracy on non-RGB datasets. In our experiments, we compared three AutoML approaches (initialising architectures (i) without pre-training, (ii) pre-trained on ImageNet and (iii) pre-trained on remote sensing datasets) against manually designed architectures on our benchmark set. AutoML architectures outperformed the manually designed architectures on 5 of these datasets. We showed that the best AutoML approach depends on the properties of the dataset at hand, and these results further highlighted the importance of having customised Earth Observation AutoML systems. To know more about this project, please take a look at the project page.
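The pre-training idea itself follows the usual transfer-learning pattern. Below is a generic Keras sketch of this pattern, with ImageNet weights standing in for remote sensing weights and with a placeholder input shape and class count; it illustrates the concept rather than the actual AutoKeras integration developed in the thesis.

# Generic fine-tuning of a pre-trained backbone on a small labelled dataset (illustrative only).
import tensorflow as tf

backbone = tf.keras.applications.ResNet50(weights="imagenet", include_top=False,
                                           pooling="avg", input_shape=(64, 64, 3))
backbone.trainable = False                              # keep the pre-trained features fixed at first
model = tf.keras.Sequential([
    backbone,
    tf.keras.layers.Dense(10, activation="softmax"),    # e.g. 10 land-cover classes
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=5)                 # fine-tune on the labelled satellite images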

By using AutoML methods to reduce the gap between state-of-the-art machine learning research and applied machine learning, advanced AI can become more accessible for remote sensing applications. One of my personal goals is to keep updating and documenting our project repository so that people from different backgrounds can use it.
We believe that making this benchmark publicly available will enable the community to further experiment with relevant remote sensing datasets and expand the AutoML systems for different application goals.

Gilles Ottervanger finishes successful Master’s project at ADA

Last month, I successfully defended my Master’s thesis titled “MultiETSC: Automated Machine Learning for Time Series Classification”. I have been working on my MultiETSC project for a little over a year as a part of the ADA group, supervised by Can Wang, Mitra Baratchi and Holger H. Hoos. In this project, we developed an AutoML approach to the problem of early time series classification (ETSC).

ETSC is relevant for time-critical applications of time series classification. It arises from the desire to obtain classification output before the full time series has been observed. ETSC requires a classifier to decide at what point to classify, based on the partially observed time series. This decision trades off the earliness of the classification against its accuracy. Therefore, any ETSC algorithm is faced with two competing objectives: earliness and accuracy.

For MultiETSC, we took eight existing ETSC algorithms and one naive algorithm of our own, and partly rewrote them to share a common interface. To this set of algorithms we applied the MO-ParamILS configurator, which can optimise multiple objectives simultaneously. MultiETSC produces a set of configurations offering multiple options for trading off earliness against accuracy, while none of these configurations is strictly better than another. This allows a user to pick the configuration that best fits the ETSC problem at hand.
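The output of MultiETSC can thus be thought of as a Pareto front over the two objectives. The toy sketch below (with made-up numbers, not results from the thesis) shows how the non-dominated configurations would be identified from a set of evaluated configurations.

# Identifying non-dominated (Pareto-optimal) configurations; lower is better for both objectives.
# Each configuration: (earliness = fraction of the series observed, error = 1 - accuracy)
configs = {
    "cfg_a": (0.20, 0.35),
    "cfg_b": (0.40, 0.20),
    "cfg_c": (0.50, 0.22),   # dominated by cfg_b: classifies later *and* less accurately
    "cfg_d": (0.80, 0.10),
}

def pareto_front(points):
    """Return the configurations that no other configuration beats in both objectives."""
    front = {}
    for name, (e1, a1) in points.items():
        dominated = any(e2 <= e1 and a2 <= a1 and (e2, a2) != (e1, a1)
                        for e2, a2 in points.values())
        if not dominated:
            front[name] = (e1, a1)
    return front

print(pareto_front(configs))   # cfg_a, cfg_b and cfg_d survive; cfg_c is dropped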

In our research, we show that leaving out either the algorithm selection or the multi-objective optimisation component leads to inferior results. This means that both of these aspects contribute significantly to the quality of MultiETSC’s output.

If you want to know more about this project, have a look at the project page.

Finishing this project sadly means I will be leaving the ADA group, but I am happy to have had the opportunity to work in such a great environment with such smart and motivated people.

Another familiar student rejoins the ADA group for his PhD

It would seem that former Master’s student Laurens Arp will not be leaving the ADA group just yet, as he has just renewed his membership of the group, this time as a PhD student.

Prior to starting his PhD studies, Laurens worked on his Master’s thesis on spatial interpolation within the ADA group, supervised by dr. Mitra Baratchi and prof.dr. Holger Hoos, culminating in his MSc degree in 2020.

The research Laurens will be carrying out now is part of dr. Mitra Baratchi’s “Physics-aware Spatio-temporal Machine Learning for Earth Observation Data” project, which involves a collaboration with the European Space Agency. In this project, the goal is to estimate environmental parameters (leaf density, CO2 levels in the atmosphere, etc.) from optical Earth observation data acquired by satellites. Generally speaking, there are two approaches one might take to this end. The first of those, traditionally favoured by the ecology community, would be to create inverted versions of deterministic radiative transfer models, which model how radiation (captured in Earth observation data) is affected by environmental parameters (ground truth). The other approach, perhaps more familiar to the AI and machine learning communities, would be to use spatio-temporal machine learning methods (e.g., temporal CNNs) to model the correlation between data (spectral images of radiation) and ground truth (environmental parameters). Both approaches have their own strengths and weaknesses, which motivates the development of hybrid models benefiting from the strengths of both. Thus, the creation of such hybrid models is what the project will aim to achieve.

Between global warming, the loss of biodiversity through mass extinction and the other ecological and environmental perils the Earth faces these days, the challenges confronting us, as residents of the Earth, can be daunting indeed. It is our hope that the models we will create will provide a meaningful contribution to the global efforts to address these issues.

The ADA research group welcomes Richard Middelkoop as a new Bachelor student

Besides working in the ADA research group, Richard studies Computer Science & Economics; his main interest lies in applying code to real-life applications. He therefore found the ADA group an excellent environment in which to work on his bachelor project.

Richard’s project is concerned with parallel algorithm portfolios and with adding this new functionality to the Sparkle platform. The problems that can be solved with this functionality encompass decision problems, optimisation problems with a single solution and optimisation problems with several possible solutions. All being well, the project will showcase a practical application of parallel portfolios.