AutoAI4EO: NAS with AutoKeras for Earth Observation (Part 2)

This is the second blog post in the series about our research on Neural Architecture Search (NAS) for Earth Observation (EO). In Part 1 we introduced NAS and how it can be applied to EO. We talked about a NAS framework: AutoKeras [1], and briefly discussed how it can be customized for EO tasks.

In this blog post, we talk about how AutoKeras can be used to create methods for EO imagery. More specifically, we are going to tackle the task of classification of EO imagery through the work “Automated Machine Learning for Satellite Data: Integrating Remote Sensing Pre-trained Models into AutoML Systems” by Nelly R. Palacios Salinas, Mitra Baratchi, Jan N. van Rijn and Andreas Vollrath [2]. It was the first work focused on developing AutoML methods for EO.

Even though we will discuss how you could create methods like this yourself, Palacios Salinas et al. already present a framework for the classification of EO imagery that can be used with just a few lines of code and without requiring in-depth prior knowledge of AutoML or deep learning in general. The links to the code, as well as the full Python notebook for this blog, can be found at the end of the post.

Figure 1: Examples of images and labels in scene classification. Source: UC Merced [13]

Classification

Image classification is the task of assigning a label to an image as a whole. For instance, you’d like to know whether your image shows a desert, a forest, or some other type of land cover. Classification has applications in, among others, urban planning, hazard detection, and environmental monitoring [3]. In EO, the term “image classification” is sometimes used to refer to what computer science calls segmentation: the task of assigning labels to individual pixels, for instance classifying each pixel as “building” or “not building” in a building segmentation task. We, however, will only talk about classification in the traditional sense, where you assign a label to the complete image.

Figure 2: Example of segmentation or pixel classification: building footprint segmentation. The image on the left shows the input image, on the right you see the segmentation map where “building” pixels are white, and background pixels are black. Source

EO imagery

The task of classifying natural images is very different from that of classifying EO images. In EO images, areas of interest can be very small. For instance, if you want to differentiate between “savanna” and “bare ground”, individual plants may cover only a few pixels, depending on the resolution of the image. In some satellite images, like those obtained from Sentinel-2 with a resolution of 10 m per pixel, a plant can even be much smaller than a single pixel. Additionally, an EO image can cover a large area that includes multiple land cover types, and can contain image features at different scales, from a tree covering a few pixels to larger patterns like a lake. These properties need to be taken into account when designing NAS frameworks for the classification of EO imagery.

NAS for classification of EO imagery

Many NAS frameworks exist for the classification of natural images, including NASNet [4], AutoGAN [5], and ENAS [6]. While these frameworks work exceptionally well for natural images, they have not been designed with the complexity of satellite imagery in mind, and their performance on and applicability to EO data are therefore limited. One of the main reasons for these limitations lies in the use of datasets with simpler images, like CIFAR-10 [6], for training and evaluation.

Currently, NAS methods are being developed specifically for classifying EO images. For instance, the work of our research group members Palacios Salinas et al. [2] shows that their classifier developed in AutoKeras is able to outperform 71% of the baseline methods created for natural images. These results were achieved by customizing AutoKeras’ ImageClassifier.

Classification in AutoKeras

AutoKeras has a ready-to-use NAS system for image classification, called the ImageClassifier. The search space of the ImageClassifier consists of various code modules that can be morphed, repeated, and combined to form a neural network. An AutoKeras block that can combine multiple types of blocks like this is called a hyperblock. The options include ResNet [7], Xception [8], and a plain convolutional block. It is also possible to use weights pre-trained on ImageNet, a natural image dataset. If you need a refresher on AutoKeras, read our previous blog post on this topic.
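
Using it takes only a few lines; here is a minimal sketch (x_train, y_train, and x_test stand in for your own image arrays and labels):

import autokeras as ak

# Ready-made NAS for image classification
clf = ak.ImageClassifier(max_trials=10)  # number of candidate architectures to try
clf.fit(x_train, y_train, epochs=10)
predictions = clf.predict(x_test)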

Tutorial: Hyperblock

AutoKeras has documentation on how to implement your own blocks, but let’s take it a step further and look at how to implement a hyperblock that allows the framework to choose from different blocks: ours will let the NAS framework choose between ResNet and Xception.

import autokeras as ak
import tensorflow as tf

class HyperBlock(ak.Block):
    def build(self, hp, inputs):
        # AutoKeras passes inputs as a nested structure; take the first node
        inputs = tf.nest.flatten(inputs)[0]

        # Let the search strategy pick one of the two architectures
        if hp.Choice("model_type", ["resnet", "xception"]) == "resnet":
            outputs = ak.ResNetBlock().build(hp, inputs)
        else:
            outputs = ak.XceptionBlock().build(hp, inputs)
        return outputs

Here we define a new class that builds either a ResNet or an Xception block. You can modify the HyperBlock to make it possible to choose which model you want beforehand:

from typing import Optional

class HyperBlock(ak.Block):
    def __init__(self, model_type: Optional[str] = None, **kwargs):
        super().__init__(**kwargs)

        # Check whether a valid model_type has been passed
        if model_type is not None and model_type not in ("resnet", "xception"):
            raise ValueError(f"invalid model_type {model_type}")

        self.model_type = model_type

    def get_config(self):
        config = super().get_config()
        config.update({"model_type": self.model_type})
        return config

    def _build_model(self, hp, output_node, model_type: str):
        # Helper to avoid code repetition
        if model_type == "resnet":
            return ak.ResNetBlock().build(hp, output_node)
        elif model_type == "xception":
            return ak.XceptionBlock().build(hp, output_node)

    def build(self, hp, inputs):
        inputs = tf.nest.flatten(inputs)[0]

        # Let AutoKeras choose a model
        if self.model_type is None:
            model_type = hp.Choice("model_type", ["resnet", "xception"])
            with hp.conditional_scope("model_type", [model_type]):
                outputs = self._build_model(hp, inputs, model_type)
        # Select the model yourself
        else:
            outputs = self._build_model(hp, inputs, self.model_type)
        return outputs

As you can see, this becomes considerably more complicated. Let’s break down the changes:

  • We added an __init__ method so you can specify the model_type parameter.
  • Error handling is important: we check whether a valid model_type has been passed.
  • get_config: adds the new block parameter to the config.
  • build: because there are now two scenarios (let AutoKeras choose a model, or select one yourself), we need a conditional scope. The value of model_type selected by AutoKeras is only active within that scope.
  • _build_model: a helper function to avoid code repetition. This becomes especially helpful when you have many options.

Here you go, your first hyperblock! You can also make your own block based on a specific neural network architecture and include it in your hyperblock. We will show you how to do this in Part 3 of this series.
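
To use the block, wire it into an AutoModel graph like any other AutoKeras block. A minimal sketch (the full training setup follows later in this post):

input_node = ak.ImageInput()
# Let AutoKeras choose between ResNet and Xception...
output_node = HyperBlock()(input_node)
# ...or fix the choice yourself:
# output_node = HyperBlock(model_type="xception")(input_node)
output_node = ak.ClassificationHead()(output_node)
auto_model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=3)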

Transfer learning

The ImageClassifier also gives you the option to use models that are pre-trained on ImageNet. However, as we discussed, this is not very useful for EO imagery (or even natural images, see: Rethinking Pre-training and Self-training [9]). Palacios Salinas et al. solved this by loading weights obtained from pre-training on several common EO datasets: RESISC45 [10], EuroSAT [11], So2Sat [12], and UC Merced [13].

Tutorial: loading weights

We’re going to look at how we can load these weights in our HyperBlock. Luckily for us, the weights can be downloaded from https://tfhub.dev.

EO_VERSIONS = {
    "resisc45": "https://tfhub.dev/google/remote_sensing/resisc45-resnet50/1",
    "eurosat": "https://tfhub.dev/google/remote_sensing/eurosat-resnet50/1",
    "so2sat": "https://tfhub.dev/google/remote_sensing/so2sat-resnet50/1",
    "ucmerced": "https://tfhub.dev/google/remote_sensing/uc_merced-resnet50/1",
}
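
As a quick sanity check (a minimal sketch, not part of the original tutorial), you can load one of these modules directly and run a dummy image through it; the modules expect 224 × 224 RGB input, as we will see below:

import tensorflow as tf
import tensorflow_hub as hub

# Load the EuroSAT ResNet50 module and apply it to a dummy RGB image
layer = hub.KerasLayer(EO_VERSIONS["eurosat"], tags="train", trainable=False)
features = layer(tf.zeros((1, 224, 224, 3)))
print(features.shape)  # one feature vector per image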

Before we can use these weights in our HyperBlock, we need to make a ResNet block that can load the weights.

from typing import Optional, Union

import tensorflow_hub as hub

class EOResNetBlock(ak.Block):
    # Remote sensing pre-trained modules, based on:
    # https://github.com/palaciosnrps/automl-rs-project/blob/714bbe36c68fd0f2b989bfee89eac9497d7acf45/autokeras/blocks/basic.py
    def __init__(
        self,
        version: Optional[Union[str, list]] = None,
        **kwargs,
    ):
        super().__init__(**kwargs)

        # version may be None, a single dataset name, or a list of names
        if version is not None:
            versions = version if isinstance(version, list) else [version]
            if not set(versions) <= set(EO_VERSIONS):
                raise ValueError(f"invalid version {version}")

        self.version = version

    def get_config(self):
        config = super().get_config()
        config.update({"version": self.version})
        return config

    def build(self, hp, inputs=None):
        input_node = tf.nest.flatten(inputs)[0]

        if self.version is None:
            # Let AutoKeras choose among all available weights
            version = hp.Choice("version", list(EO_VERSIONS.keys()))
        elif isinstance(self.version, list):
            # Let AutoKeras choose among a user-defined subset
            version = hp.Choice("version", self.version)
        else:
            version = self.version

        # Interface to the pre-trained TensorFlow Hub module
        module = hub.KerasLayer(EO_VERSIONS[version], tags="train", trainable=False)

        # The pre-trained weights expect RGB input of at least 224 x 224 pixels
        if input_node.shape[3] not in [1, 3]:
            raise ValueError(
                "Pre-trained weights expect input to have 1 or 3 channels, "
                "but got {channels}.".format(channels=input_node.shape[3])
            )

        min_size = 224
        if input_node.shape[1] < min_size or input_node.shape[2] < min_size:
            input_node = tf.keras.layers.experimental.preprocessing.Resizing(
                max(min_size, input_node.shape[1]),
                max(min_size, input_node.shape[2]),
            )(input_node)
        if input_node.shape[3] == 1:
            # Repeat a single grayscale channel to get 3-channel input
            input_node = tf.keras.layers.Concatenate()([input_node] * 3)

        output_node = module(input_node)
        return output_node

In fact, this block is very similar to the HyperBlock, but instead of model_type it has a version parameter that specifies which pre-trained weights to use. Once again, we check whether the user has specified a valid version. The build function is a bit different:

  • We do not need a conditional scope here, because we don’t build different blocks in a conditional statement.
  • A module is created that serves as an interface to pre-trained TensorFlow models.
  • The number of channels of the input data is checked to make sure it agrees with the pre-trained model.

Now we can add our pre-trained block to the HyperBlock:

class HyperBlock(ak.Block):
    def __init__(self, model_type: Optional[str] = None, version: Optional[Union[str, list]] = None, **kwargs):
        super().__init__(**kwargs)

        if model_type is not None and model_type not in ("resnet", "xception", "eo_resnet"):
            raise ValueError(f"invalid model_type {model_type}")

        self.model_type = model_type
        self.version = version

    def get_config(self):
        config = super().get_config()
        config.update({"model_type": self.model_type, "version": self.version})
        return config

    def _build_model(self, hp, output_node, model_type: str):
        if model_type == "resnet":
            return ak.ResNetBlock().build(hp, output_node)
        elif model_type == "xception":
            return ak.XceptionBlock().build(hp, output_node)
        elif model_type == "eo_resnet":
            # Pass the version on to the pre-trained block
            return EOResNetBlock(version=self.version).build(hp, output_node)

    def build(self, hp, inputs):
        inputs = tf.nest.flatten(inputs)[0]

        # Let AutoKeras choose a model
        if self.model_type is None:
            model_type = hp.Choice("model_type", ["resnet", "xception", "eo_resnet"])
            with hp.conditional_scope("model_type", [model_type]):
                outputs = self._build_model(hp, inputs, model_type)
        # Select the model yourself
        else:
            outputs = self._build_model(hp, inputs, self.model_type)
        return outputs

We can use our block to create a custom NAS with the AutoModel class of AutoKeras and train it on the UC Merced dataset. Note that we exclude the weights pre-trained on UC Merced itself, since that is the dataset we are training and evaluating on. The number of trials determines how many networks are sampled by AutoKeras; we set it to 1 for now, just to test whether everything works.

import tensorflow_datasets as tfds

# Load the dataset (317 MiB); 80% train, 20% test
train_set = tfds.as_numpy(tfds.load("uc_merced", download=True, as_supervised=False, batch_size=-1, split="train[:80%]"))
test_set = tfds.as_numpy(tfds.load("uc_merced", download=True, as_supervised=False, batch_size=-1, split="train[80%:]"))

# Exclude the weights pre-trained on UC Merced itself
weights_versions = [k for k in EO_VERSIONS if k != "ucmerced"]

input_node = ak.ImageInput()
output_node = HyperBlock(model_type="eo_resnet", version=weights_versions)(input_node)
output_node = ak.ClassificationHead()(output_node)
eo_nas_model = ak.AutoModel(input_node, output_node, max_trials=1, overwrite=True)

eo_nas_model.fit(x=train_set["image"], y=train_set["label"], epochs=10)

When you run this, you’ll find that the model compiles and the neural architecture search starts. Success!
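
To see how well the found architecture actually performs, you can evaluate it on the held-out split we loaded earlier and export the best model as a plain Keras model. A minimal sketch:

# Evaluate the best model on the held-out 20% of UC Merced
print(eo_nas_model.evaluate(x=test_set["image"], y=test_set["label"]))

# Export and save the best model as a regular Keras model
best_model = eo_nas_model.export_model()
best_model.save("eo_nas_model", save_format="tf")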

Conclusion

In this blog post, we discussed NAS for the classification of EO images. Using the work by Palacios Salinas et al. as an example, you have learned how to customize the AutoKeras search space by creating a hyperblock and how to apply transfer learning by including pre-trained weights obtained from training on EO datasets.

The Python notebook for this blog post can be found at https://github.com/JuliaWasala/NAS_with_AutoKeras_EO. The original code by Nelly R. Palacios Salinas can be found on GitHub: https://github.com/palaciosnrps/automl-rs-project. This code allows you to use her methods for NAS for the classification of EO imagery in just a few lines of code.

In the next post, we will go even further into the customization of AutoKeras with another example of our research: NAS for super-resolution. We’re going to cover how to add our own custom metrics, add new model architectures to the search space, and extend AutoKeras to other tasks. Stay tuned for more advanced tutorials on AutoKeras for EO!

References

[1] Jin, H., Song, Q. and Hu, X., 2019, July. Auto-keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD international conference on knowledge discovery & data mining (pp. 1946-1956).

[2] Palacios Salinas, N.R., Baratchi, M., Rijn, J.N.V. and Vollrath, A., 2021, September. Automated machine learning for satellite data: integrating remote sensing pre-trained models into AutoML systems. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases (pp. 447-462). Springer, Cham.

[3] Cheng, G. et al., 2020. Remote sensing image scene classification meets deep learning: Challenges, methods, benchmarks, and opportunities. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 13, pp.3735-3756.

[4] Zoph, B., Vasudevan, V., Shlens, J. and Le, Q.V., 2018. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 8697-8710).

[5] Gong, X., Chang, S., Jiang, Y. and Wang, Z., 2019. Autogan: Neural architecture search for generative adversarial networks. In Proceedings of the IEEE/CVF International Conference on Computer Vision (pp. 3224-3234).

[6] Pham, H., Guan, M., Zoph, B., Le, Q. and Dean, J., 2018, July. Efficient neural architecture search via parameters sharing. In International conference on machine learning (pp. 4095-4104). PMLR.

[7] He, K., Zhang, X., Ren, S. and Sun, J., 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 770-778).

[8] Chollet, F., 2017. Xception: Deep learning with depthwise separable convolutions. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 1251-1258).

[9] Zoph, B., Ghiasi, G., Lin, T.Y., Cui, Y., Liu, H., Cubuk, E.D. and Le, Q., 2020. Rethinking pre-training and self-training. Advances in neural information processing systems, 33, pp.3833-3845.

[10] Cheng, G., Han, J. and Lu, X., 2017. Remote sensing image scene classification: Benchmark and state of the art. Proceedings of the IEEE, 105(10), pp.1865-1883.

[11] Helber, P., Bischke, B., Dengel, A. and Borth, D., 2019. EuroSAT: A novel dataset and deep learning benchmark for land use and land cover classification. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 12(7), pp.2217-2226.

[12] Zhu, X.X., Hu, J., Qiu, C., Shi, Y., Kang, J., Mou, L., Bagheri, H., Häberle, M., Hua, Y., Huang, R. and Hughes, L., 2019. So2Sat LCZ42: A benchmark dataset for global local climate zones classification. arXiv preprint arXiv:1912.12171.

[13] Yang, Y. and Newsam, S., 2010, November. Bag-of-visual-words and spatial extensions for land-use classification. In Proceedings of the 18th SIGSPATIAL international conference on advances in geographic information systems (pp. 270-279).


AutoAI4EO: NAS with AutoKeras for Earth Observation (Part 1)

This is the first post in a series about our research on Neural Architecture Search (NAS) for Earth Observation (EO). This blog post is an introduction to NAS for EO with AutoKeras. In Part 2 we will talk about NAS for the classification of satellite imagery and transfer learning in AutoKeras. Part 3 will cover NAS for super-resolution of satellite imagery and advanced search space modifications in AutoKeras.

Today we will talk about what NAS is, and why it is so useful for the analysis of EO imagery. We’ll discuss AutoKeras, and how we have used it in our research to create methods customized for EO. 

What is Neural Architecture Search? 

Deep neural networks have become ubiquitous algorithms for automating different tasks, ranging from language processing to facial recognition. These powerful methods can be used to model complex relationships in data without manually engineered features. However, designing the neural networks themselves can be a tedious and time-consuming process, during which deep learning experts examine many different architectures. Neural network architectures are controlled by hyperparameters that define the types and number of layers, but also training parameters like the choice of optimizer or learning rate. The number of choices that have to be made makes it hard for scientists from other fields to use these tools in their research: as a consequence, they often use simpler, readily available options like plain CNNs and miss out on the full capabilities of state-of-the-art neural network architectures (think: GANs, transformers, etc.).

Figure 1: Diagram of a standard NAS framework. A search strategy s samples candidate models from the search space S. The candidate model is evaluated. The search strategy is updated with the evaluation result. 

There is currently a need for these state-of-the-art architectures to become more accessible to researchers from other fields; this is where NAS comes in. NAS frameworks can automatically design and optimize neural networks based on the input data, thus circumventing the need for a human design expert. To do this, a framework needs 3 components (sketched in code after the list):

  1. A search space populated with possible hyperparameters (e.g., layer type, activation functions). Candidate neural networks are built from components in the search space.
  2. A search strategy: you need an algorithm that will traverse the search space and intelligently design neural networks. The total number of possible networks that can be constructed from a given search space is often much larger than the number that can reasonably be evaluated. For example, there are approximately 10¹⁵ architectures in the ENAS search space: there is a choice of 4 activation functions for each of the 12 nodes in the base module that is repeated to create the network, plus a choice of which earlier node each node connects to, which works out to approximately 10¹⁵ options [1]. You therefore need a way to decide on the next candidate architecture to consider.
  3. An evaluation metric: finally, you want to be able to evaluate and compare the architectures that your framework has generated, so you can find the best one.
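
Putting these three components together, the loop from Figure 1 looks roughly like this (Python pseudocode; sample, evaluate, and update are hypothetical helpers standing in for a concrete search strategy and metric):

def neural_architecture_search(search_space, strategy, metric, n_trials):
    best_model, best_score = None, float("-inf")
    for _ in range(n_trials):
        candidate = strategy.sample(search_space)  # search strategy s samples a model from S
        score = metric.evaluate(candidate)         # evaluate the candidate
        strategy.update(candidate, score)          # feed the result back into the strategy
        if score > best_score:
            best_model, best_score = candidate, score
    return best_model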

The search space and the evaluation metric are often task-specific and shared across many NAS frameworks. The search strategy, however, can vary strongly from framework to framework. There are many search strategies to choose from, including ones using Reinforcement Learning (e.g., NAS-RL [2]), Evolutionary Algorithms (e.g., Large-scale Evolution [3]), or even stochastic gradient descent (e.g., DARTS [4]). NAS libraries often allow you to use these strategies as well as others like random search and Bayesian optimization. So far, our research has focused on designing the search space, and we have thus used the standard search strategy offered by the NAS library.
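
In AutoKeras, for example, the search strategy is selected with the tuner argument; a minimal sketch:

import autokeras as ak

# Pick the search strategy: "greedy", "bayesian", "hyperband" or "random"
clf = ak.ImageClassifier(tuner="bayesian", max_trials=10)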

Why use NAS for EO? 

We’ve established that NAS has great potential to make state-of-the-art neural networks more accessible to scientists from all domains. But why is it especially well-suited to address EO problems? 

Let’s think about a typical EO analysis pipeline for the task of image classification. You want to classify your images based on land cover: whether it is a city, a forest, a desert, etc. First, the raw data is obtained by a measurement instrument, like Sentinel-2. The images then need to be preprocessed: for example, you want to calibrate the colors in the image to account for different lighting conditions, remove artifacts, and filter your data for clouds. Then, you could consider using techniques like super-resolution to increase the quality of the data. Finally, you can classify your images using either a pre-trained classifier, or take the extra step of labelling your data and training a classifier specifically for your dataset.

Usually, you would manually select the methods and procedures to use at each step. This is a time-consuming process and makes it harder to automatically process the vast amounts of data generated by EO instruments. An additional challenge is that each step requires different expertise, and thus often different people to carry it out. This process could be automated with the help of AutoML techniques, saving researchers valuable hours. Moreover, it is simply not possible for humans to find the best pipeline by (informed) trial and error when the pipeline is very complex and there are many design choices to be made. AutoML can make it possible to automate tasks based on EO data and, as a result, analyze more data.

Figure 2: Left: Sentinel-2 image. 27 January 2019. European Space Agency. Right: Sample of a frog from the CIFAR-10 dataset [5]. This dataset is often used for image classification.

Interest in AI4EO is rising: artificial intelligence techniques for Earth Observation that are not simply direct applications of existing machine learning methods, but take the unique properties of EO data into account. EO data exists in many forms, from measurements of wind direction to optical satellite images. EO imagery can contain many more features at different scales than natural images (for instance, of faces). Additionally, the tasks performed with EO images can be very different from the tasks performed with natural images. For example, in deforestation mapping, small differences between individual images can be of great importance. Therefore, we cannot simply use techniques designed for natural images. Besides, the performance of ML methods for EO problems can be greatly increased by using available knowledge and theory from physical models, which has the added benefit of making these ML models more explainable.

In the case of NAS, there are examples of adaptations of existing NAS frameworks for related domains such as spatio-temporal forecasting. For instance, AutoST [6] modifies the DARTS framework with knowledge of spatio-temporal systems. The resulting framework can generate networks that outperform state-of-the-art approaches on various forecasting problems.

AutoKeras’ role 

As mentioned in the previous section, there are many options in terms of NAS frameworks. In this section, we will describe one of those, AutoKeras [7], which we have used for our research on NAS for EO. AutoKeras is a Python library that enables users to implement NAS in Keras. It offers some ready-made options, like NAS for image classification and NAS for regression. There are many options to populate the search space, including existing models like ResNet [8] and Transformers, as well as different search strategies. AutoKeras generates candidate architectures by mutating and repeating so-called blocks: these blocks are sub-networks, like a stack of one or more CNN layers, or even complete models. Block parameters like the number of layers or the kernel size are automatically configured by AutoKeras, but the framework can also select different types of blocks and stack them.

AutoKeras can be used to create customized search spaces by changing the block parameters, but users can also create custom blocks. Additionally, it is possible to load pre-trained weights to speed up training. We can use this functionality to customize our methods for EO. We need to do this because we want to exploit the characteristics of EO data to achieve better results on EO tasks than we could by simply using techniques developed for natural images. Additionally, customising the search space to our task helps keep it small. Though an infinite search space can, in theory, help us discover neural networks that a human would not think of, in practice it can result in prohibitively long running times before a good architecture is found.
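
For instance, restricting the search space to one family of architectures is a one-line change with the built-in ImageBlock (a minimal sketch):

import autokeras as ak

input_node = ak.ImageInput()
# Only search over ResNet variants instead of the full image search space
output_node = ak.ImageBlock(block_type="resnet")(input_node)
output_node = ak.ClassificationHead()(output_node)
model = ak.AutoModel(inputs=input_node, outputs=output_node, max_trials=5)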

In the coming blog posts, we will discuss two examples of how we have used AutoKeras in our research to create NAS methods specifically for EO. Next up will be the classification of satellite imagery, where we have used custom blocks and the power of transfer learning to achieve state-of-the-art results in image classification.  

References

[1] Pham, H., Guan, M., Zoph, B., Le, Q. and Dean, J., 2018. Efficient neural architecture search via parameter sharing. In Proceedings of the 35th International Conference on Machine Learning (pp. 4095-4104). PMLR.

[2] Zoph, B. and Le, Q.V., 2017. Neural architecture search with reinforcement learning. In Proceedings of the International Conference on Learning Representations (ICLR).

[3] Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y.L., Tan, J. and Kurakin, A., 2017. Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning, Volume 70 (pp. 2902-2911). JMLR.org.

[4] Liu, H., Simonyan, K., & Yang, Y. (2018). Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055. 

[5] Krizhevsky, A., 2009. Learning multiple layers of features from tiny images. Technical report.

[6] Li, T., Zhang, J., Bao, K., Liang, Y., Li, Y. and Zheng, Y., 2020. AutoST: Efficient neural architecture search for spatio-temporal prediction. In KDD ’20. https://doi.org/10.1145/3394486.3403122

[7] Jin, H., Song, Q. and Hu, X., 2019. Auto-Keras: An efficient neural architecture search system. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’19) (pp. 1946-1956). Association for Computing Machinery, New York, NY, USA. https://doi.org/10.1145/3292500.3330648

[8] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90. 

AutoML for Earth Observation data

In March of 2020, I joined the ADA group to work on my introductory research project, as part of my master’s programme in Computer Science, under the supervision of Dr. Mitra Baratchi and Dr. Jan van Rijn. As my research topic was related to satellite data, I had the opportunity to collaborate with Dr. Andreas Vollrath, who was working at the ESA Phi-lab. My project was titled “Building Deep Learning Models for Remote Sensing Applications Through Automated Machine Learning”. Based on the results of our assumption tests and preliminary experiments, we presented an online poster at the ESA Phi-week 2020. The presentation video can be watched here. We demonstrated the potential of applying the most recent advances in AutoML, specifically hyperparameter optimisation, to a remote sensing dataset. Starting from that, we proposed to build an AutoML system focused on Earth Observation data. This project proposal led to my thesis work “Automated Machine Learning for Satellite Data: Integrating Remote Sensing Pre-trained Models into AutoML Systems”.

Cover image showing AutoML being used for satellite data, representing the collaboration between the Earth Observation and Machine Learning fields to achieve the best results.

Application areas that can benefit from machine learning models extracted from satellite data are numerous. Environmental mapping, crop monitoring, urban planning, and emergency response are some examples. However, in order to leverage the latest advancements in machine learning, much expert knowledge and hands-on experience is required. Automated Machine Learning (AutoML) aims to automate the different stages of a machine learning pipeline, making crucial decisions in a data-driven and objective way. To reach this goal, AutoML systems have been developed.

Open-source AutoML systems are usually benchmarked on natural image datasets. Satellite images differ from natural images in various aspects. One of the most important differences is the number and type of spectral bands composing the image. Natural colour images have three bands (red, green and blue), but in satellite images more spectral bands are available. The input data can therefore be composed of more and different types of spectral bands (a common practice in real-world applications). These differences made us wonder how current AutoML systems perform on satellite datasets. To find an answer, the first part of our project focused on testing the image classification task of the AutoKeras AutoML system on a varied remote sensing benchmark suite of 7 datasets that we compiled by reviewing the remote sensing literature. We found good performance on almost all of these datasets, with impressive results for the RGB versions of two of them (EuroSAT and So2Sat). However, we did not achieve better results than manually designed models on the more real-world application datasets. This showed the importance of customising these systems to achieve better performance on remote sensing data.

Furthermore, as acquiring ground-truth label data is quite expensive, a key problem in the field of remote sensing is designing high-performing models in the absence of large labelled datasets. The use of pre-trained models has given promising results in addressing this challenge. As a next step, we added remote sensing pre-trained models to AutoKeras. This helped to further improve the accuracy on non-RGB datasets. In our experiments, we compared 3 AutoML approaches (initializing architectures (i) without pre-training, (ii) pre-trained on ImageNet, and (iii) pre-trained on remote sensing datasets) against manually designed architectures on our benchmark set. AutoML architectures outperformed the manually designed architectures on 5 of these datasets. We showed that the best AutoML approach depends on the properties of the dataset at hand, and these results further highlighted the importance of having customised Earth Observation AutoML systems. To learn more about this project, please take a look at the project page.

By using AutoML methods to reduce the gap between state-of-the-art machine learning research and applied machine learning, advanced AI can become more accessible for remote sensing applications. One of my personal goals is to keep updating and documenting our project repository so that people of different backgrounds can use it. We believe that making this benchmark publicly available will enable the community to further experiment with relevant remote sensing datasets and expand the AutoML systems for different application goals.

Gilles Ottervanger finishes successful Master’s project at ADA

Last month, I successfully defended my Master’s thesis titled “MultiETSC: Automated Machine Learning for Time Series Classification”. I have been working on my MultiETSC project for a little over a year as a part of the ADA group, supervised by Can Wang, Mitra Baratchi and Holger H. Hoos. In this project, we developed an AutoML approach to the problem of early time series classification (ETSC).

ETSC is a problem that is relevant for time-critical applications of time series classification. It arises from the desire to obtain a classification before the full time series has been observed. ETSC requires a classifier to decide at what point to classify, based on the partially observed time series. This decision trades off the earliness of the classification against its accuracy. Therefore, any ETSC algorithm faces two competing objectives: earliness and accuracy.

For MultiETSC, we took eight existing ETSC algorithms and one naive algorithm of our own, and partly rewrote them to share a common interface. To this set of algorithms, we applied the MO-ParamILS configurator, which can optimise multiple objectives simultaneously. MultiETSC produces a set of configurations that offer multiple options for trading off earliness against accuracy, while none of these configurations is strictly better than another. This allows a user to pick a configuration that fits the ETSC problem at hand.

In our research, we show that leaving out either the algorithm selection or the multi-objective optimisation part results in inferior performance. This means that both aspects contribute significantly to the quality of MultiETSC’s output.

If you want to know more about this project, have a look at the project page.

Finishing this project sadly means I will be leaving the ADA group, but I am happy to have had the opportunity to work in such a great environment, with people as smart and motivated as those in the ADA group.