This is the first post in a series about our research on Neural Architecture Search (NAS) for Earth Observation (EO). This blog post is an introduction to NAS for EO with AutoKeras. In Part 2 we will talk about NAS for the classification of satellite imagery and transfer learning in AutoKeras. Part 3 will cover NAS for super-resolution for satellite imagery and advanced search space modifications in AutoKeras.
Today we will talk about what NAS is, and why it is so useful for the analysis of EO imagery. We’ll discuss AutoKeras, and how we have used it in our research to create methods customized for EO.
What is Neural Architecture Search?
Deep neural networks have become ubiquitous algorithms for automating tasks ranging from language processing to facial recognition. These powerful methods can model complex relationships in data without manually engineered features. However, designing the neural networks themselves can be a tedious and time-consuming process, during which deep learning experts examine many different architectures. Neural network architectures are controlled by hyperparameters that define not only the types and number of layers used, but also training parameters like the choice of optimizer or learning rate. The sheer number of choices that have to be made makes it hard for scientists from other fields to use these tools in their research: as a consequence, they often fall back on simpler, readily available options like plain CNNs and miss out on the full capabilities of state-of-the-art neural network architectures (think: GANs, transformers, etc.).

There is currently a need to make these state-of-the-art architectures more accessible to researchers from other fields, and this is where NAS comes in. NAS frameworks can automatically design and optimize neural networks based on the input data, removing the need for a human design expert. To do this, a framework needs three components:
- A search space populated with possible hyperparameters (e.g., layer type, activation functions). Candidate neural networks are built from components in the search space.
- A search strategy: you need an algorithm that will traverse the search space and intelligently design neural networks. The total number of possible networks that can be constructed from a given search space is often much larger than the number that can reasonably be evaluated. For example, there are approximately 10¹⁵ architectures in the ENAS search space: each of the 12 layers in the base module that is repeated to create the network can use one of 4 activation functions and can be connected to any earlier layer, which multiplies out to roughly 10¹⁵ options [1]. You therefore need a way to decide on the next candidate architecture to consider.
- An evaluation metric: finally, you want to be able to evaluate and compare the architectures that your framework has generated so you can find the best one.
The search space and the evaluation metric are often task-specific and are reused across many NAS frameworks. The search strategy, however, can vary strongly from framework to framework. There are many search strategies to choose from, including ones based on Reinforcement Learning (e.g., NAS-RL [2]), Evolutionary Algorithms (e.g., Large-scale Evolution [3]), or even stochastic gradient descent (e.g., DARTS [4]). NAS libraries often allow you to use these strategies as well as others, like random search and Bayesian optimization. So far, our research has focused on designing the search space, so we have used the default search strategy offered by the NAS library.
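To make these three components concrete, here is a minimal, hypothetical sketch of a NAS loop in Keras: a toy search space of hyperparameters, random search as a deliberately simple search strategy, and validation accuracy as the evaluation metric. The data and all parameter values are placeholders rather than parts of any real framework.

```python
import numpy as np
from tensorflow import keras

# Toy stand-in data: 200 small image patches with 4 classes.
x = np.random.rand(200, 32, 32, 3).astype("float32")
y = np.random.randint(0, 4, size=(200,))

# 1) Search space: the hyperparameters candidate networks are built from.
search_space = {
    "num_conv_layers": [1, 2, 3],
    "filters": [16, 32, 64],
    "activation": ["relu", "tanh"],
    "learning_rate": [1e-2, 1e-3],
}

def build_model(cfg):
    """Construct one candidate network from a sampled configuration."""
    model = keras.Sequential([keras.Input(shape=(32, 32, 3))])
    for _ in range(int(cfg["num_conv_layers"])):
        model.add(keras.layers.Conv2D(int(cfg["filters"]), 3, padding="same",
                                      activation=str(cfg["activation"])))
        model.add(keras.layers.MaxPooling2D())
    model.add(keras.layers.GlobalAveragePooling2D())
    model.add(keras.layers.Dense(4, activation="softmax"))
    model.compile(optimizer=keras.optimizers.Adam(float(cfg["learning_rate"])),
                  loss="sparse_categorical_crossentropy", metrics=["accuracy"])
    return model

# 2) Search strategy: plain random sampling from the search space.
rng = np.random.default_rng(seed=0)
def sample_config():
    return {name: rng.choice(options) for name, options in search_space.items()}

# 3) Evaluation metric: validation accuracy of each candidate.
best_acc, best_cfg = 0.0, None
for trial in range(5):  # a real search would run many more trials
    cfg = sample_config()
    history = build_model(cfg).fit(x, y, validation_split=0.2, epochs=2, verbose=0)
    acc = history.history["val_accuracy"][-1]
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print("Best configuration:", best_cfg, "with validation accuracy", best_acc)
```

Real NAS frameworks replace the random sampler with smarter strategies (reinforcement learning, evolution, gradient-based relaxation), but the division of labor between the three components stays the same.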
Why use NAS for EO?
We’ve established that NAS has great potential to make state-of-the-art neural networks more accessible to scientists from all domains. But why is it especially well-suited to address EO problems?
Let’s think about a typical EO analysis pipeline for the task of image classification. You want to classify your images based on land cover: whether it is a city, a forest, a desert, etc. First, the raw data is obtained by a measurement instrument, like Sentinel-2. These images then need to be preprocessed: for example, you want to calibrate the colors in the image to account for different lighting conditions, remove artifacts, and filter your data for clouds. Next, you could use techniques like super-resolution to increase the quality of the data. Finally, you can classify your images with an existing trained classifier, or take the extra step of labeling your data and training a classifier specifically for your dataset.
Usually, you would manually select the methods and procedures you would use at each step. This is a time-consuming process and makes it harder to automatically process the vast amounts of data that are generated by EO instruments. An additional challenge is that each step requires different expertise, and thus often different people to carry out these steps. This process could be automated with the help of AutoML techniques, saving researchers valuable hours. Additionally, it is really not possible for humans to find the best pipeline by (informed) trial and error if the pipeline is very complex and there are many design choices to be made. AutoML can make it possible to automate tasks based on EO data and as a result analyze more data.


Figure 2: Left: Sentinel-2 image. 27 January 2019. European Space Agency. Right: Sample of a frog from the CIFAR-10 dataset [5]. This dataset is often used for image classification.
Interest is rising in AI4EO: artificial intelligence techniques for Earth Observation that are not simply direct applications of existing machine learning methods, but take the unique properties of EO data into account. EO data exists in many forms, from measurements of wind direction to optical satellite images. EO imagery can contain many more features, at many more scales, than natural images (for instance, of faces). Additionally, the tasks performed with EO images can be very different from those performed with natural images. For example, in deforestation mapping, small differences between individual images can be of great importance. Therefore, we cannot simply reuse techniques developed for natural images. Moreover, the performance of ML methods on EO problems can be greatly increased by incorporating available domain knowledge and physical models, which has the added benefit of making these models more explainable.
In the case of NAS, there are examples of adaptations of existing NAS frameworks for related domains such as spatio-temporal forecasting. For instance, AutoST [6] modifies the DARTS framework with knowledge of spatio-temporal systems. The resulting framework can generate networks that outperform state-of-the-art approaches on various forecasting problems.
AutoKeras’ role
As mentioned in the previous section, there are many NAS frameworks to choose from. In this section, we describe one of them, AutoKeras [7], which we have used for our research on NAS for EO. AutoKeras is a Python library that enables users to implement NAS in Keras. It offers ready-made options like NAS for image classification and NAS for regression, many components to populate the search space, including existing models like ResNet [8] and Transformers, and several search strategies. AutoKeras generates candidate architectures by mutating and repeating so-called blocks: these blocks are sub-networks, like a stack of one or more CNN layers or even complete models. Block parameters like the number of layers or the kernel size are automatically configured by AutoKeras, but the framework can also select different types of blocks and stack them.
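As a minimal sketch of the ready-made route, assuming AutoKeras 1.x and placeholder data in place of a real satellite dataset:

```python
import numpy as np
import autokeras as ak

# Placeholder data standing in for labeled satellite image patches:
# 100 RGB patches of 64x64 pixels with 4 land-cover classes.
x_train = np.random.rand(100, 64, 64, 3).astype("float32")
y_train = np.random.randint(0, 4, size=(100,))

# ImageClassifier runs the whole search; max_trials caps how many
# candidate architectures AutoKeras builds and evaluates.
clf = ak.ImageClassifier(max_trials=3, overwrite=True)
clf.fit(x_train, y_train, epochs=2)

# The best architecture found is exported as an ordinary Keras model.
best_model = clf.export_model()
best_model.summary()
```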
AutoKeras can be used to create customized search spaces by changing the block parameters, and users can also create custom blocks. Additionally, it is possible to load pre-trained weights to speed up training. We use this functionality to customize our methods for EO, because we want to exploit the characteristics of EO data to achieve better results on EO tasks than we could by simply using techniques developed for natural images. Tailoring the search space to our task also helps reduce its size: though an infinite search space can, in theory, help us discover neural networks that a human would not think of, in practice it can result in prohibitively long running times before a good architecture is found.
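As a hedged sketch of both mechanisms, again assuming AutoKeras 1.x: fixing some block parameters narrows the search space, while a custom block (here a single tunable dense layer, adapted from the AutoKeras customization documentation) adds components of our own design. The block and parameter choices below are illustrative, not our actual EO search space.

```python
import tensorflow as tf
import autokeras as ak

# A custom block: anything declared through `hp` is tuned during the search.
class SingleDenseLayerBlock(ak.Block):
    def build(self, hp, inputs=None):
        input_node = tf.nest.flatten(inputs)[0]
        units = hp.Int("units", min_value=32, max_value=512, step=32)
        return tf.keras.layers.Dense(units, activation="relu")(input_node)

# A restricted search space: some parameters fixed, the rest left searchable.
input_node = ak.ImageInput()
x = ak.Normalization()(input_node)
x = ak.ConvBlock(num_blocks=1, num_layers=2)(x)  # kernel size etc. still searched
# x = ak.ResNetBlock(pretrained=True)(x)  # alternative: start from pre-trained weights
x = ak.SpatialReduction()(x)
x = SingleDenseLayerBlock()(x)
output_node = ak.ClassificationHead()(x)

auto_model = ak.AutoModel(inputs=input_node, outputs=output_node,
                          max_trials=3, overwrite=True)
# auto_model.fit(x_train, y_train, epochs=2)  # same placeholder data as above
```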
In the coming blog posts, we will discuss two examples of how we have used AutoKeras in our research to create NAS methods specifically for EO. Next up will be the classification of satellite imagery, where we have used custom blocks and the power of transfer learning to achieve state-of-the-art results in image classification.
References
[1] Hieu Pham, Melody Guan, Barret Zoph, Quoc V. Le, Jeff Dean. Efficient Neural Architecture Search via Parameters Sharing. Proceedings of the 35th International Conference on Machine Learning, PMLR 80:4095-4104, 2018.
[2] B. Zoph, Q.V. Le, Neural Architecture Search with reinforcement learning, Proceedings of the International Conference on Learning Representations (ICLR), 2017.
[3] Real, E., Moore, S., Selle, A., Saxena, S., Suematsu, Y. L., Tan, J., … & Kurakin, A. (2017). Large-scale evolution of image classifiers. In Proceedings of the 34th International Conference on Machine Learning - Volume 70 (pp. 2902-2911). JMLR.org.
[4] Liu, H., Simonyan, K., & Yang, Y. (2018). Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055.
[5] A. Krizhevsky, “Learning Multiple Layers of Features from Tiny Images,” Technical report, 2009.
[6] Li, T., Zhang, J., Bao, K., Liang, Y., Li, Y., & Zheng, Y. (2020). AutoST: Efficient Neural Architecture Search for Spatio-Temporal Prediction. In Proceedings of the 26th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’20). https://doi.org/10.1145/3394486.3403122
[7] Haifeng Jin, Qingquan Song, and Xia Hu. 2019. Auto-Keras: An Efficient Neural Architecture Search System. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining (KDD ’19). Association for Computing Machinery, New York, NY, USA, 1946–1956. https://doi.org/10.1145/3292500.3330648
[8] K. He, X. Zhang, S. Ren and J. Sun, “Deep Residual Learning for Image Recognition,” 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016, pp. 770-778, doi: 10.1109/CVPR.2016.90.