Task2Vec: Task Embedding for Meta-Learning
Abstract
We introduce a method to provide vectorial representations of visual classification tasks which can be used to reason about the nature of those tasks and their relations. Given a dataset with ground-truth labels and a loss function defined over those labels, we process images through a “probe network” and compute an embedding based on estimates of the Fisher information matrix associated with the probe network parameters. This provides a fixed-dimensional embedding of the task that is independent of details such as the number of classes and does not require any understanding of the class label semantics. We demonstrate that this embedding is capable of predicting task similarities that match our intuition about semantic and taxonomic relations between different visual tasks (e.g., tasks based on classifying different types of plants are similar). We also demonstrate the practical value of this framework for the meta-task of selecting a pre-trained feature extractor for a new task. We present a simple meta-learning framework for learning a metric on embeddings that is capable of predicting which feature extractors will perform well. Selecting a feature extractor with task embedding obtains a performance close to the best available feature extractor, while costing substantially less than exhaustively training and evaluating on all available feature extractors.
1 Introduction
The success of Deep Learning hinges in part on the fact that models learned for one task can be used on other related tasks. Yet, no general framework exists to describe and learn relations between tasks. We introduce the task2vec embedding, a technique to represent tasks as elements of a vector space based on the Fisher Information Matrix. The norm of the embedding correlates with the complexity of the task, while the distance between embeddings captures semantic similarities between tasks (Fig. 1). When other natural distances are available, such as the taxonomical distance in biological classification, we find that the embedding distance correlates positively with it (Fig. 2). Moreover, we introduce an asymmetric distance on tasks which correlates with the transferability between tasks.
Computation of the embedding leverages a duality between network parameters (weights) and outputs (activations) in a deep neural network (DNN): Just as the activations of a DNN trained on a complex visual recognition task are a rich representation of the input images, we show that the gradients of the weights relative to a task-specific loss are a rich representation of the task itself. Specifically, given a task defined by a dataset of labeled samples, we feed the data through a pre-trained reference convolutional neural network which we call “probe network”, and compute the diagonal Fisher Information Matrix (FIM) of the network filter parameters to capture the structure of the task (Sect. 2). Since the architecture and weights of the probe network are fixed, the FIM provides a fixed-dimensional representation of the task. We show this embedding encodes the “difficulty” of the task, characteristics of the input domain, and which features of the probe network are useful to solve it (Sect. 2.1).
Our task embedding can be used to reason about the space of tasks and solve meta-tasks. As a motivating example, we study the problem of selecting the best pre-trained feature extractor to solve a new task. This can be particularly valuable when there is insufficient data to train or fine-tune a generic model, and transfer of knowledge is essential. task2vec depends solely on the task, and ignores interactions with the model which may however play an important role. To address this, we learn a joint task and model embedding, called model2vec, in such a way that models whose embeddings are close to a task exhibit good performance on the task. We use this to select an expert from a given collection, improving performance relative to fine-tuning a generic model trained on ImageNet and obtaining close to ground-truth optimal selection. We discuss our contribution in relation to prior literature in Sect. 6, after presenting our empirical results in Sect. 5.
2 Task Embeddings via Fisher Information
Given an observed input $x$ (e.g., an image) and a hidden task variable $y$ (e.g., a label), a deep network is a family of functions $p_w(y \mid x)$ parametrized by weights $w$, trained to approximate the posterior $p(y \mid x)$ by minimizing the (possibly regularized) cross-entropy loss $H_{\hat p, p_w}(y \mid x) = \mathbb{E}_{x, y \sim \hat p}\big[-\log p_w(y \mid x)\big]$, where $\hat p$ is the empirical distribution defined by the training set $\mathcal{D} = \{(x_i, y_i)\}_{i=1}^{N}$. It is useful, especially in transfer learning, to think of the network as composed of two parts: a feature extractor which computes some representation $z = \phi_w(x)$ of the input data, and a "head," or classifier, which encodes the distribution $p(y \mid z)$ given the representation $z$.
Not all network weights are equally useful in predicting the task variable: the importance, or "informative content," of a weight for the task can be quantified by considering a perturbation $w' = w + \delta w$ of the weights, and measuring the average Kullback-Leibler (KL) divergence between the original output distribution $p_w(y \mid x)$ and the perturbed one $p_{w'}(y \mid x)$. To second-order approximation, this is

$$\mathbb{E}_{x \sim \hat p}\, KL\big(\, p_{w'}(y \mid x)\; \big\|\; p_w(y \mid x)\,\big) = \tfrac{1}{2}\, \delta w \cdot F\, \delta w + o(\delta w^2),$$

where $F$ is the Fisher information matrix (FIM):

$$F = \mathbb{E}_{x, y \sim \hat p(x)\, p_w(y \mid x)}\big[\, \nabla_w \log p_w(y \mid x)\; \nabla_w \log p_w(y \mid x)^T \,\big],$$

that is, the expected covariance of the scores (gradients of the log-likelihood) with respect to the model parameters.
The FIM is a Riemannian metric on the space of probability distributions [7], and provides a measure of the information that a particular parameter (weight or feature) contains about the joint distribution $p_w(x, y) = \hat p(x)\, p_w(y \mid x)$: if the classification performance for a given task does not depend strongly on a parameter, the corresponding entry in the FIM will be small. The FIM is also related to the (Kolmogorov) complexity of a task, a property that can be used to define a computable metric of the learning distance between tasks [3]. Finally, the FIM can be interpreted as an easy-to-compute positive semidefinite upper-bound to the Hessian of the cross-entropy loss, and coincides with it at local minima [24]. In particular, "flat minima" correspond to weights that have, on average, low (Fisher) information [5, 13].
2.1 task2vec embedding using a probe network
While the network activations capture the information in the input image that is needed to infer the image label, the FIM indicates which feature maps are more informative for solving the current task. Following this intuition, we use the FIM to represent the task itself. However, FIMs computed on different networks are not directly comparable. To address this, we use a single "probe" network pre-trained on ImageNet as a feature extractor, and re-train only the classifier layer on any given task, which usually can be done efficiently. After training is complete, we compute the FIM for the feature extractor parameters.
Since the full FIM is unmanageably large for rich probe networks based on CNNs, we make two additional approximations. First, we only consider the diagonal entries, which implicitly assumes that correlations between different filters in the probe network are not important. Second, since the weights in each filter are usually not independent, we average the Fisher Information for all weights in the same filter. The resulting representation thus has fixed size, equal to the number of filters in the probe network. We call this embedding method task2vec.
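As a rough sketch of this computation (not the authors' released code; `probe`, `loader`, and the per-sample loop below are illustrative assumptions), the diagonal, filter-averaged FIM of a PyTorch probe network whose classifier head has already been re-trained on the task could be estimated along these lines:

```python
import torch
import torch.nn.functional as F_nn

def task2vec_embedding(probe, loader, device="cuda"):
    """Sketch: diagonal FIM of a fixed probe network, averaged per filter.

    `probe` is assumed to be a CNN (e.g. a torchvision ResNet-34) whose final
    classifier has already been re-trained on the target task, and `loader`
    yields (image, label) batches for that task.
    """
    probe = probe.to(device).eval()
    fisher = {n: torch.zeros_like(p) for n, p in probe.named_parameters()}
    n_samples = 0

    for x, _ in loader:
        x = x.to(device)
        for i in range(x.size(0)):                        # per-sample scores
            logits = probe(x[i:i + 1])
            # Sample y from the model's own predictive distribution p_w(y|x).
            y = torch.distributions.Categorical(logits=logits).sample()
            loss = F_nn.cross_entropy(logits, y)
            probe.zero_grad()
            loss.backward()
            for n, p in probe.named_parameters():
                if p.grad is not None:
                    fisher[n] += p.grad.detach() ** 2     # diagonal of the score outer product
            n_samples += 1

    # Average the diagonal FIM over all weights of the same filter: a conv
    # kernel of shape (out_ch, in_ch, kh, kw) contributes one value per out_ch.
    parts = [(f / n_samples).mean(dim=(1, 2, 3))
             for f in fisher.values() if f.dim() == 4]
    return torch.cat(parts)
```

The per-sample loop makes the score computation explicit at the cost of speed; a practical implementation would batch it, but the resulting fixed-length vector of per-filter Fisher values is the same.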
Robust Fisher computation
Since the FIM is a local quantity, it is affected by the local geometry of the training loss landscape, which is highly irregular in many deep network architectures [21], and may be too noisy when trained with few samples. To avoid this problem, instead of a direct computation, we use a more robust estimator that leverages connections to variational inference. Assume we perturb the weights $\hat w$ of the network with Gaussian noise $N(0, \Lambda^{-1})$ with precision matrix $\Lambda$, and we want to find the optimal $\Lambda$ which yields a good expected error, while remaining close to an isotropic prior $N(\hat w, \lambda^{-2} I)$ of precision $\lambda^2$. That is, we want to find the $\Lambda$ that minimizes:

$$L(\hat w; \Lambda) = \mathbb{E}_{w \sim N(\hat w, \Lambda^{-1})}\big[ H_{p, p_w}(y \mid x) \big] + \beta\, KL\big( N(\hat w, \Lambda^{-1})\, \big\|\, N(\hat w, \lambda^{-2} I) \big),$$

where $H_{p, p_w}$ is the cross-entropy loss on the $N$ training samples and $\beta$ controls the weight of the prior. Notice that for $\beta = 1$ this reduces to the Evidence Lower-Bound (ELBO) commonly used in variational inference. Approximating to the second order, the optimal value of $\Lambda$ satisfies (see Supplementary Material):

$$\frac{\beta}{N}\, \Lambda = F + \frac{\beta \lambda^2}{N}\, I.$$

Therefore, $\frac{\beta}{N} \Lambda$ can be considered as an estimator of the FIM $F$, biased towards the prior $\lambda^2 I$ in the low-data regime instead of being degenerate. In case the task is trivial (the loss is constant or there are too few samples) the embedding will coincide with the prior $\lambda^2 I$, which we will refer to as the trivial embedding. This estimator has the advantage of being easy to compute by directly minimizing the loss $L(\hat w; \Lambda)$ through Stochastic Gradient Variational Bayes [18], while being less sensitive to irregularities of the loss landscape than direct computation, since the value of the loss depends on the cross-entropy in a whole neighborhood of $\hat w$ of size $\Lambda^{-1}$. As in the standard Fisher computation, we estimate one parameter per filter, rather than per weight, which in practice means that we constrain $\Lambda_{ii} = \Lambda_{jj}$ whenever parameters $i$ and $j$ belong to the same filter. In this case, optimization of $L(\hat w; \Lambda)$ can be done efficiently using the local reparametrization trick of [18].
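The following is a minimal sketch of this estimator under simplifying assumptions: it uses the plain reparametrization trick instead of the local reparametrization of [18], one precision parameter per convolutional filter, and a recent PyTorch with `torch.func.functional_call`; all names and default values are our own illustration, not the paper's implementation.

```python
import math

import torch
import torch.nn.functional as F_nn
from torch.func import functional_call

def robust_fisher_embedding(probe, loader, beta=1.0, lambda2=1.0,
                            steps=500, lr=1e-2, device="cuda"):
    """Sketch: estimate one precision value per conv filter by minimizing
    E_{w ~ N(w_hat, Lambda^-1)}[cross-entropy] + beta * KL(posterior || prior)."""
    probe = probe.to(device).eval()
    for p in probe.parameters():
        p.requires_grad_(False)
    w_hat = dict(probe.named_parameters())
    # One log-precision per output filter of each conv layer (shared within a filter).
    log_prec = {n: torch.zeros(p.shape[0], device=device, requires_grad=True)
                for n, p in w_hat.items() if p.dim() == 4}
    opt = torch.optim.Adam(list(log_prec.values()), lr=lr)
    n_data = len(loader.dataset)

    it = iter(loader)
    for _ in range(steps):
        try:
            x, y = next(it)
        except StopIteration:
            it = iter(loader)
            x, y = next(it)
        x, y = x.to(device), y.to(device)

        params, kl = {}, 0.0
        for n, p in w_hat.items():
            if n in log_prec:
                prec = log_prec[n].exp()                               # per-filter precision
                params[n] = p + torch.randn_like(p) / prec.sqrt().view(-1, 1, 1, 1)
                k_f = p[0].numel()                                     # weights per filter
                # KL( N(w_hat, 1/prec) || N(w_hat, 1/lambda2) ), summed over filters.
                kl = kl + 0.5 * k_f * (lambda2 / prec - 1
                                       + prec.log() - math.log(lambda2)).sum()
            else:
                params[n] = p
        logits = functional_call(probe, params, (x,))
        loss = F_nn.cross_entropy(logits, y) + beta * kl / n_data
        opt.zero_grad()
        loss.backward()
        opt.step()

    # The optimized per-filter precisions (suitably rescaled) play the role of
    # the diagonal, per-filter Fisher values used as the task2vec embedding.
    return torch.cat([lp.exp().detach() for lp in log_prec.values()])
```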
2.2 Properties of the task2vec embedding
The task embedding we just defined has a number of useful properties. For illustrative purposes, consider a two-layer sigmoidal network for which an analytic expression can be derived (see Supplementary Materials). The FIM of the feature extractor parameters can be written using the Kronecker product as

$$F = \mathbb{E}_{x \sim \hat p}\big[\, S \otimes x x^T \,\big],$$

where $S = p(1-p)\,\big(w \odot z \odot (1-z)\big)\big(w \odot z \odot (1-z)\big)^T$, $p = p_w(y = 1 \mid x)$, and the vector $w \odot z \odot (1-z)$ is an element-wise product of the classifier weights $w$ and the first-layer feature activations $z$. It is informative to compare this expression to an embedding based only on the dataset domain statistics, such as the (non-centered) covariance $C_0 = \mathbb{E}[x x^T]$ of the input data or the covariance $C_1 = \mathbb{E}[z z^T]$ of the feature activations. One could take such statistics as a representative domain embedding since they only depend on the marginal distribution $p(x)$, in contrast to the FIM task embedding, which depends on the joint distribution $p(x, y)$. These simple expressions highlight some important (and more general) properties of the Fisher embedding we now describe.
Invariance to the label space: The task embedding does not directly depend on the task labels, but only on the predicted distribution $p_w(y \mid x)$ of the trained model. Information about the ground-truth labels is encoded in the weights $w$, which are a sufficient statistic of the task [5]. In particular, the task embedding is invariant to permutations of the labels $y$, and has fixed dimension (the number of filters of the feature extractor) regardless of the output space (e.g., k-way classification with varying k).
Encoding task difficulty: As we can see from the expressions above, if the fitted model is very confident in its predictions, $\mathbb{E}_x[p(1-p)]$ goes to zero. Hence, the norm of the task embedding $\|F\|$ scales with the difficulty of the task for a given feature extractor $\phi$. Figure 2 (Right) shows that even for more complex models trained on real data, the FIM norm correlates with test performance.
Encoding task domain: Data points $x$ that are classified with high confidence, i.e., $p$ close to 0 or 1, will have a lower contribution to the task embedding than points near the decision boundary, since $p(1-p)$ is maximized at $p = 1/2$. Compare this to the covariance matrix of the data, $C_0$, to which all data points contribute equally. Instead, in task2vec information on the domain is based on data near the decision boundary (a task-weighted domain embedding).
Encoding useful features for the task: The FIM depends on the curvature of the loss function, with the diagonal entries capturing the sensitivity of the loss to model parameters. Specifically, in the two-layer model one can see that, if a given feature is uncorrelated with $y$, the corresponding blocks of $F$ are zero. In contrast, a domain embedding based on feature activations of the probe network (e.g., $C_1$) only reflects which features vary over the dataset, without indicating whether they are relevant to the task.
3 Similarity Measures on the Space of Tasks
What metric should be used on the space of tasks? This depends critically on the meta-task we are considering. As a motivation, we concentrate on the meta-task of selecting the pre-trained feature extractor from a set in order to obtain the best performance on a new training task. There are several natural metrics that may be considered for this meta-task. In this work, we mainly consider:
Taxonomic distance
For some tasks, there is a natural notion of semantic similarity, for instance defined by sets of categories organized in a taxonomic hierarchy where each task is classification inside a subtree of the hierarchy (e.g., we may say that classifying breeds of dogs is closer to classification of cats than it is to classification of species of plants). In this setting, we can define

$$D_{tax}(t_a, t_b) = \min_{i \in S_a,\, j \in S_b} d(i, j),$$

where $S_a$, $S_b$ are the sets of categories in tasks $t_a$, $t_b$ and $d(i, j)$ is an ultrametric or graph distance in the taxonomy tree. Notice that this is a proper distance, and in particular it is symmetric.
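For illustration, a small sketch of this distance, assuming each category is represented by its root-to-leaf path of taxonomic ranks (this representation and the particular tree distance are our own choices, not the paper's):

```python
def taxonomy_distance(path_i: tuple, path_j: tuple) -> int:
    """Distance between two categories given their root-to-leaf paths in the
    taxonomy: number of levels below their deepest common ancestor."""
    common = 0
    for a, b in zip(path_i, path_j):
        if a != b:
            break
        common += 1
    return max(len(path_i), len(path_j)) - common

def d_tax(cats_a: list, cats_b: list) -> int:
    """D_tax(t_a, t_b): minimum taxonomy distance over pairs of categories
    drawn from the two tasks."""
    return min(taxonomy_distance(i, j) for i in cats_a for j in cats_b)
```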
Transfer distance.
We define the transfer (or fine-tuning) gain from a task $t_a$ to a task $t_b$ (which we improperly call distance, although it is not necessarily symmetric or positive) as the difference in expected performance between a model trained for task $t_b$ from a fixed initialization (random or pre-trained), and the performance of a model fine-tuned for task $t_b$ starting from a solution of task $t_a$:

$$D_{ft}(t_a \to t_b) = \frac{\mathbb{E}[\ell_b] - \mathbb{E}[\ell_{a \to b}]}{\mathbb{E}[\ell_b]},$$

where the expectations are taken over all trainings with the selected architecture, training procedure and network initialization, $\ell_b$ is the final test error obtained by training on task $t_b$ from the chosen initialization, and $\ell_{a \to b}$ is the error obtained instead when starting from a solution to task $t_a$ and then fine-tuning (with the selected procedure) on task $t_b$.
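For example (with made-up numbers): if training on $t_b$ from the reference initialization yields a 40% test error while fine-tuning from a solution of $t_a$ yields a 30% error, then $D_{ft}(t_a \to t_b) = (0.40 - 0.30)/0.40 = 0.25$, i.e., a 25% relative gain from transferring.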
3.1 Symmetric and asymmetric task2vec metrics
By construction, the Fisher embedding on which task2vec is based captures fundamental information about the structure of the task. We may therefore expect that the distance between two embeddings correlates positively with natural metrics on the space of tasks. However, there are two problems in using the Euclidean distance between embeddings: the parameters of the network have different scales, and the norm of the embedding is affected by the complexity of the task and the number of samples used to compute the embedding.
Symmetric task2vec distance
To make the distance computation robust, we propose to use the cosine distance between normalized embeddings:

$$d_{sym}(F_a, F_b) = d_{cos}\!\left( \frac{F_a}{F_a + F_b},\; \frac{F_b}{F_a + F_b} \right),$$

where $d_{cos}$ is the cosine distance, $F_a$ and $F_b$ are the two task embeddings (i.e., the diagonal of the Fisher Information Matrix computed on the same probe network), and the division is element-wise. This is a symmetric distance which we expect to capture semantic similarity between two tasks. For example, we show in Fig. 2 that it correlates well with the taxonomical distance between species on iNaturalist.
On the other hand, precisely for this reason, this distance is ill-suited for tasks such as model selection, where the (intrinsically asymmetric) transfer distance is more relevant.
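In code, the normalized cosine distance just defined can be sketched as follows (illustrative only; embeddings are assumed to be 1-D tensors of per-filter Fisher values):

```python
import torch

def d_sym(fa: torch.Tensor, fb: torch.Tensor) -> float:
    """Symmetric task2vec distance: cosine distance between the two task
    embeddings after element-wise normalization by their sum."""
    na, nb = fa / (fa + fb), fb / (fa + fb)
    return 1.0 - torch.nn.functional.cosine_similarity(na, nb, dim=0).item()
```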
Asymmetric task2vec distance
To a first approximation, which does not consider either the model or the training procedure used, positive transfer between two tasks depends both on the similarity between the two tasks and on the complexity of the first. Indeed, pre-training on a general but complex task such as ImageNet often yields a better result than fine-tuning from a close dataset of comparable complexity. In our case, complexity can be measured as the distance from the trivial embedding. This suggests the following asymmetric score, again improperly called a "distance" despite being asymmetric and possibly negative:

$$d_{asym}(t_a \to t_b) = d_{sym}(t_a, t_b) - \alpha\, d_{sym}(t_a, t_0),$$

where $t_0$ is the trivial embedding and $\alpha$ is a hyperparameter. This has the effect of bringing more complex models closer. The hyperparameter $\alpha$ can be selected based on the meta-task. In our experiments, we found that the best value of $\alpha$ (selected using a ResNet-34 pretrained on ImageNet as the probe network) is robust to the choice of meta-task.
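A corresponding sketch of the asymmetric score, with the source task as the first argument (function and argument names are our own):

```python
import torch

def d_asym(f_src: torch.Tensor, f_tgt: torch.Tensor,
           f_triv: torch.Tensor, alpha: float) -> float:
    """Asymmetric score d_asym(src -> tgt): the symmetric distance minus a
    term rewarding the complexity of the source (its distance from the
    trivial embedding), so general/complex source tasks rank closer."""
    def d_sym(a, b):
        na, nb = a / (a + b), b / (a + b)
        return 1.0 - torch.nn.functional.cosine_similarity(na, nb, dim=0).item()
    return d_sym(f_src, f_tgt) - alpha * d_sym(f_src, f_triv)
```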
4 model2vec: task/model co-embedding
By construction, the task2vec distance ignores details of the model and only relies on the task. If we know what task a model was trained on, we can represent the model by the embedding of that task. However, in general we may not have such information (e.g., black-box models or hand-constructed feature extractors). We may also have multiple models trained on the same task with different performance characteristics. To model the joint interaction between task and model (i.e., architecture and training algorithm), we aim to learn a joint embedding of the two.
We consider for concreteness the problem of learning a joint embedding for model selection. In order to embed models in the task space so that those near a task are likely to perform well on that task, we formulate the following meta-learning problem: Given $k$ models, their model2vec embeddings are the vectors $m_i = F_i + b_i$, where $F_i$ is the task embedding of the task used to train model $i$ (if available, else we set it to zero), and $b_i$ is a learned "model bias" that perturbs the task embedding to account for particularities of the model. We learn $b_i$ by optimizing a $k$-way cross-entropy loss to predict the best model given the task distance (see Supplementary Material):

$$\mathcal{L} = \mathbb{E}_t\big[ -\log p\big( m^*(t) \mid d_{asym}(t, m_1), \ldots, d_{asym}(t, m_k) \big) \big],$$

where $m^*(t)$ denotes the best-performing model for task $t$.
After training, given a novel query task $t$, we can then predict the best model for it as $\arg\min_i d_{asym}(t, m_i)$, that is, the model embedded closest to the query task.
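Putting the pieces together, expert selection with model2vec embeddings reduces to an argmin over asymmetric scores; a minimal illustrative sketch:

```python
import torch

def select_expert(f_task: torch.Tensor, model_embs: list,
                  f_triv: torch.Tensor, alpha: float) -> int:
    """Return the index of the model2vec embedding closest to the query task
    under the asymmetric task2vec score."""
    def d_sym(a, b):
        na, nb = a / (a + b), b / (a + b)
        return 1.0 - torch.nn.functional.cosine_similarity(na, nb, dim=0)
    scores = [d_sym(m, f_task) - alpha * d_sym(m, f_triv) for m in model_embs]
    return int(torch.stack(scores).argmin())
```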
5 Experiments
We test task2vec on a large collection of tasks and models, related to different degrees. Our experiments aim to test both qualitative properties of the embedding and its performance on meta-learning tasks. We use an off-the-shelf ResNet-34 pretrained on ImageNet as our probe network, which we found to give the best overall performance (see Sect. 5.2). The collection of tasks is generated starting from the following four main datasets. iNaturalist [36]: Each task extracted corresponds to species classification within a given taxonomical order. For instance, the "Rodentia task" is to classify species of rodents. Notice that each task is defined on a separate subset of the images in the original dataset; that is, the domains of the tasks are disjoint. CUB-200 [37]: We use the same procedure as iNaturalist to create tasks. In this case, all tasks are classifications inside orders of birds (the Aves taxonomical class), and generally have far fewer training samples than the corresponding tasks in iNaturalist. iMaterialist [1] and DeepFashion [23]: Each image in both datasets is associated with several binary attributes (e.g., style attributes) and categorical attributes (e.g., color, type of dress, material). We binarize the categorical attributes, and consider each attribute as a separate task. Notice that, in this case, all tasks share the same domain and are naturally correlated.
In total, our collection comprises 1,460 tasks (207 iNaturalist, 25 CUB, 228 iMaterialist, 1,000 DeepFashion). While a few tasks have many training examples (e.g., hundreds of thousands), most have just hundreds or thousands of samples. This simulates the heavy-tail distribution of data in real-world applications.
Together with the collection of tasks, we collect several "expert" feature extractors. These are ResNet-34 models pre-trained on ImageNet and then fine-tuned on a specific task or collection of related tasks (see Supplementary Materials for details). We also consider a "generic" expert pre-trained on ImageNet without any fine-tuning. Finally, for each combination of expert feature extractor and task, we trained a linear classifier on top of the expert in order to solve the selected task using the expert.
In total, we trained 4,100 classifiers, 156 feature extractors and 1,460 embeddings. The total effort to generate the final results was about 1,300 GPU hours.
Meta-tasks.
In Sect. 5.2, for a given task we aim to predict, using task2vec , which expert feature extractor will yield the best classification performance. In particular, we formulate two model selection meta-tasks: iNat + CUB and Mixed. The first consists of 50 tasks and experts from iNaturalist and CUB, and aims to test fine-grained expert selection in a restricted domain. The second contains a mix of 26 curated experts and 50 random tasks extracted from all datasets, and aims to test model selection between different domains and tasks (see Supplementary Material for details).
5.1 Task Embedding Results
Task Embedding qualitatively reflects taxonomic distance for iNaturalist
For tasks extracted from the iNaturalist dataset (classification of species), the taxonomical distance between orders provides a natural metric of the semantic similarity between tasks. In Figure 2 we compare the symmetric task2vec distance with the taxonomical distance, showing strong agreement.
Task embedding for iMaterialist
In Fig. 1 we show a t-SNE visualization of the embedding for iMaterialist and iNaturalist tasks. Task embedding yields interpretable results: Tasks that are correlated in the dataset, such as binary classes corresponding to the same categorical attribute, may end up far away from each other and close to other tasks that are semantically more similar (e.g., the jeans category task is close to the ripped attribute and the denim material). This is reflected in the mixture of colors of semantically related nearby tasks, showing non-trivial grouping.
We also compare the task2vec embedding with a domain embedding baseline, which only exploits the input distribution $p(x)$ rather than the task distribution $p(y \mid x)$. While some tasks are highly correlated with their domain (e.g., tasks from iNaturalist), other tasks differ only in the labels (e.g., all the attribute tasks of iMaterialist, which share the same clothes domain). Accordingly, the domain embedding recovers similar clusters on iNaturalist. However, on iMaterialist the domain embedding collapses all tasks to a single uninformative cluster (not a single point, due to slight noise in the embedding computation).
Task Embedding encodes task difficulty
The scatter-plot in Fig. 3 compares the norm of the embedding vectors vs. the performance of the best expert (or of a task-specific model, for the cases where we computed it). As shown analytically for the two-layer model, the norm of the task embedding correlates with the complexity of the task also on real tasks and architectures.
Probe network | Top-10 | All
---|---|---
Chance | +13.95% | +59.52%
VGG-13 | +4.82% | +38.03%
DenseNet-121 | +0.30% | +10.63%
ResNet-34 | +0.00% | +9.97%
5.2 Model Selection
Given a task, our aim is to select an expert feature extractor that maximizes the classification performance on that task. We propose two strategies: (1) embed the task and select the feature extractor trained on the most similar task, and (2) jointly embed the models and tasks, and select a model using the learned metric (see Section 4). Notice that (1) does not use knowledge of the model performance on various tasks, which makes it more widely applicable but requires that we know what task a model was trained for, and it may ignore the fact that models trained on slightly different tasks may still provide an overall better feature extractor (for example by over-fitting less to the task they were trained on).
In Table 2 we compare the overall results of the various proposed metrics on the model selection meta-tasks. On both the iNat+CUB and Mixed meta-tasks, the Asymmetric task2vec model selection is close to the ground-truth optimal, and significantly improves both over chance and over using a generic ImageNet expert. Notice that our method has $O(1)$ complexity in the number of available experts, while exhaustively searching over a collection of $N$ experts is $O(N)$.
Error distribution
In Fig. 3 we show in detail the error distribution of the experts on multiple tasks. It is interesting to notice that the classification error obtained using most experts clusters around some mean value, and little improvement is observed over using a generic expert. On the other hand, a few optimal experts can obtain a much better performance on the task than a generic expert. This confirms the importance of having access to a large collection of experts when solving a new task, especially if little training data is available. But this collection can only be exploited if we can efficiently find one of the few experts suited to the task, which is what our method provides.
Meta-task | Optimal | Chance | ImageNet | task2vec | Asymmetric task2vec | model2vec
---|---|---|---|---|---|---
iNat + CUB | 31.24 | +59.52% | +30.18% | +42.54% | +9.97% | +6.81%
Mixed | 22.90 | +112.49% | +75.73% | +40.30% | +29.23% | +27.81%
Dependence on task dataset size
Finding experts is especially important when the task we are interested in has relatively few samples. In Fig. 4 we show how the performance of task2vec varies on a model selection task as the number of samples varies. At all sample sizes task2vec is close to the optimum, and improves over selecting a generic expert (ImageNet), both when fine-tuning and when training only a classifier. We observe that the best choice of experts is not affected by the dataset size, and that even with few examples task2vec is able to find the optimal experts.
Choice of probe network
6 Related Work
Task and Domain embedding.
Tasks distinguished by their domain can be understood simply in terms of image statistics. Due to the bias of different datasets, sometimes a benchmark task may be identified just by looking at a few images [34]. The question of determining what summary statistics are useful (analogous to our choice of probe network) has also been considered; for example, [9] trains an autoencoder that learns to extract fixed-dimensional summary statistics that can reproduce many different datasets accurately. However, for general vision tasks which apply to all natural images, the domain is the same across tasks.
Taskonomy [39] explores the structure of the space of tasks, focusing on the question of effective knowledge transfer in a curated collection of 26 visual tasks, ranging from classification to 3D reconstruction, defined on a common domain. They compute transfer distances between pairs of tasks and use the results to compute a directed hierarchy. Introducing novel tasks requires computing the pairwise distance with tasks in the library. In contrast, we focus on a larger library of 1,460 fine-grained classification tasks, both on the same and on different domains, and show that it is possible to represent tasks in a topological space with a constant-time embedding. The large task collection and cheap embedding costs allow us to tackle new meta-learning problems.
Fisher kernels
Our work takes inspiration from Jaakkola and Haussler [16]. They propose the "Fisher Kernel", which uses the gradients of a generative model score function as a representation of similarity between data items:

$$K(x^{(1)}, x^{(2)}) = \nabla_\theta \log P(x^{(1)} \mid \theta)^T\, F^{-1}\, \nabla_\theta \log P(x^{(2)} \mid \theta).$$

Here $P(x \mid \theta)$ is a parameterized generative model and $F$ is the Fisher information matrix. This provides a way to utilize generative models in the context of discriminative learning. Variants of the Fisher kernel have found wide use as a representation of images [28, 29], and of other structured data such as protein molecules [17] and text [30]. Since the generative model can be learned on unlabelled data, several works have investigated the use of the Fisher kernel for unsupervised learning [14, 31]. [35] learns a metric on the Fisher kernel representation, similar to our metric learning approach. Our approach differs in that we use the FIM as a representation of a whole dataset (task) rather than using model gradients as representations of individual data items.
Fisher Information for CNNs
Our approach to task embedding makes use of the Fisher Information matrix of a neural network as a characterization of the task. The use of Fisher information for neural networks was popularized by Amari [6], who advocated optimization using natural gradient descent, which leverages the fact that the FIM is an appropriate parametrization-independent metric on statistical models. Recent work has focused on approximations of the FIM appropriate in this setting (see e.g., [12, 10, 25]). The FIM has also been used in various regularization schemes [5, 8, 22, 27], to analyze the learning dynamics of deep networks [4], and to overcome catastrophic forgetting [19].
Meta-learning and Model Selection
The general problem of meta-learning has a long history, with much recent work dedicated to problems such as neural architecture search and hyper-parameter estimation. Closely related to our problem is work on selecting from a library of classifiers to solve a new task [33, 2, 20]. Unlike our approach, these usually address the question via landmarking or active testing, in which a few different models are evaluated and the performance of the remainder is estimated by extrapolation. This can be viewed as a problem of completing a matrix defined by the performance of each model on each task.
7 Discussion
task2vec is an efficient way to represent a task, or the corresponding dataset, as a fixed-dimensional vector. It has several appealing properties; in particular, its norm correlates with the test error obtained on the task, and the cosine distance between embeddings correlates with natural distances between tasks, when available, such as the taxonomic distance for species classification and the fine-tuning distance for transfer learning. Having a representation of tasks paves the way for a wide variety of meta-learning tasks. In this work, we focused on selection of an expert feature extractor in order to solve a new task, especially when little training data is present, and showed that using task2vec to select an expert from a collection can significantly improve test performance while adding only a small overhead to the training process.
Meta-learning on the space of tasks is an important step toward general artificial intelligence. In this work, we introduce a way of dealing with thousands of tasks, enough to enable reconstructing a topology on the task space, and to test meta-learning solutions. The current experiments highlight the usefulness of our methods. Even so, our collection does not capture the full complexity and variety of tasks that one may encounter in real-world situations. Future work should further test the effectiveness, robustness, and limitations of the embedding on larger and more diverse collections.
References
- [1] iMaterialist Challenge (Fashion) at FGVC5 workshop, CVPR 2018. https://www.kaggle.com/c/imaterialist-challenge-fashion-2018.
- [2] S. M. Abdulrahman, P. Brazdil, J. N. van Rijn, and J. Vanschoren. Speeding up algorithm selection using average ranking and active testing by introducing runtime. Machine learning, 107(1):79–108, 2018.
- [3] A. Achille, G. Mbeng, G. Paolini, and S. Soatto. The dynamic distance between learning tasks: From Kolmogorov complexity to transfer learning via quantum physics and the information bottleneck of the weights of deep networks. Proc. of the NIPS Workshop on Integration of Deep Learning Theories (ArXiv: 1810.02440), October 2018.
- [4] A. Achille, M. Rovere, and S. Soatto. Critical learning periods in deep neural networks. Proc. of the Intl. Conf. on Learning Representations (ICLR). ArXiv:1711.08856, 2019.
- [5] A. Achille and S. Soatto. Emergence of invariance and disentanglement in deep representations. Journal of Machine Learning Research (ArXiv 1706.01350), 19(50):1–34, 2018.
- [6] S.-I. Amari. Natural gradient works efficiently in learning. Neural computation, 10(2):251–276, 1998.
- [7] S.-I. Amari and H. Nagaoka. Methods of information geometry, volume 191 of translations of mathematical monographs. American Mathematical Society, 13, 2000.
- [8] S. Arora, R. Ge, B. Neyshabur, and Y. Zhang. Stronger generalization bounds for deep nets via a compression approach. arXiv preprint arXiv:1802.05296, 2018.
- [9] H. Edwards and A. Storkey. Towards a neural statistician. arXiv preprint arXiv:1606.02185, 2016.
- [10] C. Finn, P. Abbeel, and S. Levine. Model-agnostic meta-learning for fast adaptation of deep networks. arXiv preprint arXiv:1703.03400, 2017.
- [11] K. He, X. Zhang, S. Ren, and J. Sun. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 770–778, 2016.
- [12] T. Heskes. On "natural" learning and pruning in multilayered perceptrons. Neural Computation, 12(4):881–901, 2000.
- [13] S. Hochreiter and J. Schmidhuber. Flat minima. Neural Computation, 9(1):1–42, 1997.
- [14] A. D. Holub, M. Welling, and P. Perona. Combining generative models and fisher kernels for object recognition. In IEEE International Conference on Computer Vision, volume 1, pages 136–143. IEEE, 2005.
- [15] G. Huang, Z. Liu, L. Van Der Maaten, and K. Q. Weinberger. Densely connected convolutional networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2017.
- [16] T. Jaakkola and D. Haussler. Exploiting generative models in discriminative classifiers. In Advances in neural information processing systems, pages 487–493, 1999.
- [17] T. S. Jaakkola, M. Diekhans, and D. Haussler. Using the fisher kernel method to detect remote protein homologies. In ISMB, volume 99, pages 149–158, 1999.
- [18] D. P. Kingma, T. Salimans, and M. Welling. Variational dropout and the local reparameterization trick. In Advances in Neural Information Processing Systems, pages 2575–2583, 2015.
- [19] J. Kirkpatrick, R. Pascanu, N. Rabinowitz, J. Veness, G. Desjardins, A. A. Rusu, K. Milan, J. Quan, T. Ramalho, A. Grabska-Barwinska, et al. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, page 201611835, 2017.
- [20] R. Leite, P. Brazdil, and J. Vanschoren. Selecting classification algorithms with active testing. In International workshop on machine learning and data mining in pattern recognition, pages 117–131. Springer, 2012.
- [21] H. Li, Z. Xu, G. Taylor, and T. Goldstein. Visualizing the loss landscape of neural nets. arXiv preprint arXiv:1712.09913, 2017.
- [22] T. Liang, T. Poggio, A. Rakhlin, and J. Stokes. Fisher-rao metric, geometry, and complexity of neural networks. arXiv preprint arXiv:1711.01530, 2017.
- [23] Z. Liu, P. Luo, S. Qiu, X. Wang, and X. Tang. Deepfashion: Powering robust clothes recognition and retrieval with rich annotations. In Proceedings of the IEEE conference on computer vision and pattern recognition, pages 1096–1104, 2016.
- [24] J. Martens. New perspectives on the natural gradient method. CoRR, abs/1412.1193, 2014.
- [25] J. Martens and R. Grosse. Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pages 2408–2417, 2015.
- [26] P. Matikainen, R. Sukthankar, and M. Hebert. Model recommendation for action recognition. In Computer Vision and Pattern Recognition (CVPR), 2012 IEEE Conference on, pages 2256–2263. IEEE, 2012.
- [27] Y. Mroueh and T. Sercu. Fisher gan. In Advances in Neural Information Processing Systems, pages 2513–2523, 2017.
- [28] F. Perronnin, J. Sánchez, and T. Mensink. Improving the fisher kernel for large-scale image classification. In European conference on computer vision, pages 143–156. Springer, 2010.
- [29] J. Sánchez, F. Perronnin, T. Mensink, and J. Verbeek. Image classification with the fisher vector: Theory and practice. International journal of computer vision, 105(3):222–245, 2013.
- [30] C. Saunders, A. Vinokourov, and J. S. Shawe-taylor. String kernels, fisher kernels and finite state automata. In Advances in Neural Information Processing Systems, pages 649–656, 2003.
- [31] M. Seeger. Learning with labeled and unlabeled data. Technical Report EPFL-REPORT-161327, Institute for Adaptive and Neural Computation, University of Edinburgh, 2000.
- [32] K. Simonyan and A. Zisserman. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556, 2014.
- [33] M. R. Smith, L. Mitchell, C. Giraud-Carrier, and T. Martinez. Recommending learning algorithms and their associated hyperparameters. arXiv preprint arXiv:1407.1890, 2014.
- [34] A. Torralba and A. A. Efros. Unbiased look at dataset bias. In Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pages 1521–1528. IEEE, 2011.
- [35] L. Van Der Maaten. Learning discriminative fisher kernels. In ICML, volume 11, pages 217–224, 2011.
- [36] G. Van Horn, O. Mac Aodha, Y. Song, Y. Cui, C. Sun, A. Shepard, H. Adam, P. Perona, and S. Belongie. The inaturalist species classification and detection dataset. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2018.
- [37] C. Wah, S. Branson, P. Welinder, P. Perona, and S. Belongie. The Caltech-UCSD Birds-200-2011 Dataset. Technical Report CNS-TR-2011-001, California Institute of Technology, 2011.
- [38] Y.-X. Wang and M. Hebert. Model recommendation: Generating object detectors from few samples. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 1619–1628, 2015.
- [39] A. R. Zamir, A. Sax, W. Shen, L. Guibas, J. Malik, and S. Savarese. Taskonomy: Disentangling task transfer learning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3712–3722, 2018.
- [40] P. Zhang, J. Wang, A. Farhadi, M. Hebert, and D. Parikh. Predicting failures of vision systems. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pages 3566–3573, 2014.
Appendix A Analytic FIM for two-layer model
Assume we have data points $x_i$ and binary labels $y_i \in \{0, 1\}$, $i = 1, \ldots, N$. Assume that a fixed feature extractor applied to data points yields features $z_i = \phi(x_i) \in \mathbb{R}^k$ and a linear model with parameters $w$ is trained to model the conditional distribution $p_i = P(y_i = 1 \mid x_i) = \sigma(w^T \phi(x_i))$, where $\sigma$ is the sigmoid function. The gradient of the cross-entropy loss $\ell$ with respect to the linear model parameters is:

$$\frac{\partial \ell}{\partial w} = \frac{1}{N} \sum_i (p_i - y_i)\, \phi(x_i),$$

and the empirical estimate of the Fisher information matrix is:

$$F = \frac{1}{N} \sum_i \mathbb{E}_{y \sim p_w(y \mid x_i)}\big[(p_i - y)^2\big]\, \phi(x_i)\, \phi(x_i)^T = \frac{1}{N} \sum_i p_i (1 - p_i)\, \phi(x_i)\, \phi(x_i)^T.$$

In general, we are also interested in the Fisher information of the parameters of the feature extractor since this is independent of the specifics of the output space (e.g., for k-way classification). Consider a 2-layer network where the feature extractor uses a sigmoid non-linearity:

$$p = P(y = 1 \mid x) = \sigma(w^T z), \qquad z = \phi(x) = \sigma(U x),$$

where the matrix $U$ specifies the feature extractor parameters and $w$ are the parameters of the task-specific classifier. Taking the gradient w.r.t. the parameters we have:

$$\frac{\partial \ell}{\partial w_j} = (p - y)\, z_j, \qquad \frac{\partial \ell}{\partial U_{kl}} = (p - y)\, w_k\, z_k (1 - z_k)\, x_l.$$

The Fisher Information Matrix (FIM) consists of blocks:

$$\mathbb{E}_y\Big[\frac{\partial \ell}{\partial w_i}\frac{\partial \ell}{\partial w_j}\Big] = p(1-p)\, z_i z_j, \qquad
\mathbb{E}_y\Big[\frac{\partial \ell}{\partial w_i}\frac{\partial \ell}{\partial U_{kl}}\Big] = p(1-p)\, z_i\, w_k z_k (1-z_k)\, x_l,$$
$$\mathbb{E}_y\Big[\frac{\partial \ell}{\partial U_{kl}}\frac{\partial \ell}{\partial U_{mn}}\Big] = p(1-p)\, w_k z_k (1-z_k)\, w_m z_m (1-z_m)\, x_l x_n,$$

where the expectation is over $y \sim p_w(y \mid x)$. We focus on the FIM of the probe network parameters $U$, which is independent of the dimensionality of the output layer, and write it in matrix form: the $(k, m)$-th block is

$$F_{km}(x) = p(1-p)\, w_k z_k (1-z_k)\, w_m z_m (1-z_m)\; x x^T.$$

Note that each block consists of the same matrix $x x^T$ multiplied by the scalar $S_{km} = p(1-p)\, w_k z_k (1-z_k)\, w_m z_m (1-z_m)$. We can thus write the whole FIM as the expectation of a Kronecker product:

$$F = \mathbb{E}_{x, y}\big[\, S \otimes x x^T \,\big],$$

where the matrix $S$ can be written as

$$S = p(1-p)\,\big(w \odot z \odot (1-z)\big)\big(w \odot z \odot (1-z)\big)^T.$$

Given a task described by $N$ training samples $\{(x_i, y_i)\}_{i=1}^N$, the FIM can be estimated empirically as

$$\hat F = \frac{1}{N} \sum_{i=1}^N S_i \otimes x_i x_i^T, \qquad S_i = p_i (1 - p_i)\,\big(w \odot z_i \odot (1 - z_i)\big)\big(w \odot z_i \odot (1 - z_i)\big)^T,$$

where we take the expectation over $y$ w.r.t. the predictive distribution $p_w(y \mid x_i)$, which yields the factor $p_i(1 - p_i)$.
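As a sanity check of the expressions above, the following short script (our own illustration, not part of the paper) compares the analytic Kronecker form of the FIM with a brute-force autograd estimate on a random two-layer model:

```python
import torch

torch.manual_seed(0)
n, k = 3, 4                                    # input and feature dimensions
U, w = torch.randn(k, n), torch.randn(k)       # feature-extractor and classifier weights
x = torch.randn(n)

def fim_analytic(x):
    z = torch.sigmoid(U @ x)
    p = torch.sigmoid(w @ z)
    g = w * z * (1 - z)                        # w ⊙ z ⊙ (1 - z)
    S = p * (1 - p) * torch.outer(g, g)
    return torch.kron(S, torch.outer(x, x))    # block (k, m) equals S_km · x xᵀ

def fim_autograd(x):
    F = torch.zeros(k * n, k * n)
    for y in (0.0, 1.0):                       # expectation over y ~ p_w(y | x)
        Uv = U.clone().requires_grad_(True)
        z = torch.sigmoid(Uv @ x)
        p = torch.sigmoid(w @ z)
        logp = y * torch.log(p) + (1 - y) * torch.log(1 - p)
        (grad,) = torch.autograd.grad(logp, Uv)
        weight = p.detach() if y == 1.0 else 1 - p.detach()
        F += weight * torch.outer(grad.reshape(-1), grad.reshape(-1))
    return F

assert torch.allclose(fim_analytic(x), fim_autograd(x), atol=1e-4)
```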
Figure 5: Toy task embeddings obtained with (a) random linear + ReLU probe features and (b) polynomial probe features of degree three.
Example toy task embedding
As noted in the main text, the FIM depends on the domain, the particular task, and its complexity. We illustrate these properties of the task embedding using a "toy" task space, illustrated in Figure 5. We generate 64 binary classification tasks by clustering a uniform grid of points in the XY plane into clusters using k-means and assigning half of the clusters to one category. We consider two different feature extractors, which play the role of the "probe network": one is a collection of polynomial functions of degree three, the other is a set of random linear features followed by a ReLU non-linearity, whose weights and biases are sampled uniformly at random in fixed ranges.
Appendix B Robust Fisher Computation
Consider again the loss function (parametrized with the covariance matrix $\Sigma = \Lambda^{-1}$ instead of the precision matrix $\Lambda$, for convenience of notation):

$$L(\hat w; \Sigma) = \mathbb{E}_{w \sim N(\hat w, \Sigma)}\big[ H_{p, p_w}(y \mid x) \big] + \beta\, KL\big( N(\hat w, \Sigma)\, \big\|\, N(\hat w, \lambda^{-2} I) \big).$$

We will make use of the fact that the Fisher Information matrix is a positive semidefinite approximation of the Hessian of the cross-entropy loss, and coincides with it at local minima [24]. Writing $H(w) := H_{p, p_w}(y \mid x)$ for the cross-entropy on the training set, and expanding it to the second order around $\hat w$, we have:

$$L(\hat w; \Sigma) \approx \mathbb{E}_{w \sim N(\hat w, \Sigma)}\Big[ H(\hat w) + \nabla_w H(\hat w) \cdot (w - \hat w) + \tfrac{1}{2}(w - \hat w)^T \nabla_w^2 H(\hat w)\, (w - \hat w) \Big] + \beta\, KL\big( N(\hat w, \Sigma)\, \big\|\, N(\hat w, \lambda^{-2} I) \big)$$
$$= H(\hat w) + \tfrac{1}{2} \operatorname{tr}\big(\Sigma\, \nabla_w^2 H(\hat w)\big) + \tfrac{\beta}{2}\big[ \lambda^2 \operatorname{tr}(\Sigma) - \log\det(\Sigma) - k \log \lambda^2 - k \big],$$

where in the last line we used the known expression for the KL divergence of two Gaussians ($k$ is the number of weights). Taking the derivative with respect to $\Sigma$ and setting it to zero, we obtain that the loss is minimized when

$$\Sigma = \beta\, \big( \nabla_w^2 H(\hat w) + \beta \lambda^2 I \big)^{-1},$$

or, rewritten in terms of the precision matrices, when

$$\Lambda = \frac{1}{\beta}\, \nabla_w^2 H(\hat w) + \lambda^2 I,$$

where we have introduced the precision matrix $\Lambda = \Sigma^{-1}$ of the perturbation and the precision $\lambda^2 I$ of the prior. Since at a local minimum the Hessian of the cross-entropy on the $N$ training samples is approximately $N F$, this recovers the expression given in the main text.

We can then obtain an estimate of the Hessian of the cross-entropy loss at the point $\hat w$, and hence of the FIM, by minimizing the loss $L(\hat w; \Lambda)$ with respect to $\Lambda$. This is a more robust approximation than the standard definition, as it depends on the loss in a whole neighborhood of $\hat w$ of size $\Lambda^{-1}$, rather than on the derivatives of the loss at a single point. To further make the estimation more robust, and to reduce the number of parameters, we constrain $\Lambda$ to be diagonal, and constrain weights belonging to the same filter to have the same precision. Optimization of this loss can be performed easily using Stochastic Gradient Variational Bayes, and in particular using the local reparametrization trick of [18].
The prior precision $\lambda^2$ should be picked according to the scale of the weights of each layer. In practice, since the weights of each layer have a different scale, we found it useful to select a different $\lambda^2$ for each layer, and to train it together with $\Lambda$.
Appendix C Details of the experiments
C.1 Training of experts and classifiers
Given a task, we train an expert on it by fine-tuning an off-the-shelf ResNet-34 pretrained on ImageNet (https://pytorch.org/docs/stable/torchvision/models.html). Fine-tuning is performed by first fixing the weights of the network and retraining from scratch only the final classifier for 10 epochs using Adam, and then fine-tuning the whole network with SGD for 60 epochs with weight decay 5e-4, starting from learning rate 0.001 and decreasing it by a factor of 0.1 at epoch 40.
Given an expert, we train a classifier on top of it by replacing the final classification layer and training it with Adam for 16 epochs. We use weight decay 5e-4 and learning rate 1e-4.
The tasks we train on generally have different number of samples and unbalanced classes. To limit the impact of this imbalance on the training procedure, regardless of the total size of the dataset, in each epoch we always sample 10,000 images with replacement, uniformly between classes. In this way, all epochs have the same length and see approximately the same number of examples for each class. We use this balanced sampling in all experiments, unless noted otherwise.
C.2 Computation of the task2vec embedding
As described in the main text, the task2vec embedding is obtained by choosing a probe network, retraining the final classifier on the given task, and then computing the Fisher Information Matrix for the weights of the probe network.
Unless specified otherwise, we use an off-the-shelf ResNet-34 pretrained on ImageNet as the probe network. The Fisher Information Matrix is computed in a robust way by minimizing the loss function $L(\hat w; \Lambda)$ with respect to the precision matrix $\Lambda$, as described before. To make computation of the embedding faster, instead of waiting for the convergence of the classifier, we train the final classifier for 2 epochs using Adam and then continue to train it jointly with the precision matrix $\Lambda$ using the loss $L(\hat w; \Lambda)$. We constrain $\Lambda$ to be positive by parametrizing it through an unconstrained variable. While for the classifier we use a low learning rate (1e-4), we found it useful to use a higher learning rate (1e-2) to train $\Lambda$.
C.3 Training the model2vec embedding
As described in the main text, in the model2vec embedding we aim to learn a vector representation $m_i = F_i + b_i$ of the $i$-th model in the collection, which represents both the task the model was trained on (through the task2vec embedding $F_i$), and the particularities of the model (through the learned parameter $b_i$).
We learn $b_i$ by minimizing a $k$-way classification loss which, given a task $t$, aims to select the model that performs best on the task among a collection of $k$ models. Multiple models may perform similarly and close to optimal: to preserve this information, instead of using a one-hot encoding for the best model, we train using soft-labels obtained as follows:

$$\hat p(y_i \mid t) = \operatorname{Softmax}_i\big( -\gamma\, \mathrm{error}_i(t) \big),$$

where $\mathrm{error}_i(t)$ is the ground-truth test error obtained by training a classifier for task $t$ on top of the $i$-th model, and $\gamma$ is a fixed scale parameter. Notice that for $\gamma \to +\infty$, the soft-label $\hat p(y_i \mid t)$ reduces to the one-hot encoding of the index of the best performing model. However, for lower $\gamma$'s, the vector contains richer information about the relative performance of the models.

We obtain our prediction in a similar way: let $d_i = d_{asym}(t, m_i)$; then we set our model prediction to be

$$p(y_i \mid t) = \operatorname{Softmax}_i\big( -\eta\, d_i \big),$$

where the scalar $\eta$ is a learned parameter. Finally, we learn both the $b_i$'s and $\eta$ using a cross-entropy loss:

$$\mathcal{L} = \mathbb{E}_{t}\Big[ -\sum_{i=1}^{k} \hat p(y_i \mid t)\, \log p(y_i \mid t) \Big],$$

which is minimized precisely when $p(y_i \mid t) = \hat p(y_i \mid t)$.
In our experiments we keep the soft-label scale $\gamma$ fixed, minimize the loss using Adam with weight decay and early stopping after 81 epochs, and report the leave-one-out error (that is, for each task we train using the ground truth of all other tasks, test on that task alone, and report the average of the test errors obtained in this way).
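For concreteness, the whole meta-learning step can be sketched as follows (tensor layouts, function names, and the hyperparameter defaults `alpha` and `gamma` are our own illustration, not the values used in the paper):

```python
import torch

def d_asym_matrix(task_embs, model_embs, f_triv, alpha):
    """Pairwise asymmetric scores d_asym(model_j -> task_i), for task embeddings
    stacked in a (T, D) tensor and model embeddings in an (M, D) tensor."""
    def d_sym(a, b):                                       # broadcastable cosine distance
        na, nb = a / (a + b), b / (a + b)
        return 1 - torch.nn.functional.cosine_similarity(na, nb, dim=-1)
    t = task_embs[:, None, :]                              # (T, 1, D)
    m = model_embs[None, :, :]                             # (1, M, D)
    return d_sym(m, t) - alpha * d_sym(m, f_triv)          # (T, M)

def train_model2vec(task_embs, expert_task_embs, test_errors, f_triv,
                    alpha=0.1, gamma=20.0, epochs=200, lr=1e-2):
    """Sketch of the model2vec meta-learning step. test_errors[i, j] is the
    ground-truth test error of expert j on task i."""
    b = torch.zeros_like(expert_task_embs, requires_grad=True)   # learned model biases b_j
    log_eta = torch.zeros(1, requires_grad=True)                 # learned scale (eta > 0)
    opt = torch.optim.Adam([b, log_eta], lr=lr)
    soft_labels = torch.softmax(-gamma * test_errors, dim=1)     # richer than one-hot labels
    for _ in range(epochs):
        model_embs = expert_task_embs + b                        # m_j = F_j + b_j
        logits = -log_eta.exp() * d_asym_matrix(task_embs, model_embs, f_triv, alpha)
        loss = -(soft_labels * torch.log_softmax(logits, dim=1)).sum(dim=1).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
    return (expert_task_embs + b).detach(), log_eta.exp().item()

# At test time, the best expert for a new task with embedding f_t is simply the
# one minimizing the asymmetric score between its model2vec embedding and f_t.
```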
Appendix D Datasets, tasks and meta-tasks
Our two model selection meta-tasks, iNat+CUB and Mixed, are curated as follows. For iNat+CUB, we generated 50 tasks and (the same) experts from iNaturalist and CUB. The 50 tasks consist of 25 iNaturalist tasks and 25 CUB tasks to provide a balanced mix from two datasets of the same domain. We generated the 25 iNaturalist tasks by grouping species into orders and then choosing the top 25 orders with the most samples. The number of samples per task shows the heavy-tailed distribution typical of real data, with the top task having 64,100 samples (the Passeriformes order classification task), while most tasks have far fewer.
The 25 CUB tasks were similarly generated, with 10 order tasks and additionally 15 Passeriformes family tasks: after grouping CUB into orders, we determined 11 usable order tasks (the only unusable order task, Gaviiformes, has only one species, so it makes no sense to train on it). However, one of the orders, Passeriformes, dominated all other orders with 134 species, compared to 3-24 species for the other orders. Therefore, we decided to further subdivide the Passeriformes order task into family tasks (i.e., grouping species into families) to provide a more balanced partition. This resulted in 15 usable family tasks (i.e., those with more than one species) out of 22 family tasks. Unlike iNaturalist, tasks from CUB have only a few hundred samples each and hence benefit more from carefully selecting an expert.
In the iNat+CUB meta-task the classification tasks are the same tasks used to train the experts. To avoid trivial solutions (always selecting the expert trained on the task we are trying to solve) we test in a leave-one-out fashion: given a classification task, we aim to select the best expert that was not trained on the same data.
For the Mixed meta-task, we chose 40 random tasks and 25 curated experts from all datasets. The 25 experts were generated from iNaturalist, iMaterialist and DeepFashion (CUB, having fewer samples than iNaturalist, is more appropriate as a source of tasks than of experts). For iNaturalist, we trained 15 experts: 8 order tasks and 7 class tasks (species grouped by class), both with more than 10,000 samples. For DeepFashion, we trained 3 category experts (upper-body, lower-body, full-body). For iMaterialist, we trained 2 category experts (pants, shoes) and 5 multi-label experts by grouping attributes (color, gender, neckline, sleeve, style). For the purposes of clustering attributes into larger groups for training experts (and color-coding the dots in Figure 1), we obtained a de-anonymized list of the iMaterialist Fashion attribute names from the FGVC contest organizers.
The 40 random tasks were generated as follows. In order to balance tasks among all datasets, we selected 5 CUB, 15 iNaturalist, 15 iMaterialist and 5 DeepFashion tasks. Within those datasets, we randomly pick tasks with a sufficient number of validation samples and maximum variety. For the iNaturalist tasks, we group the order tasks into class tasks, filter out tasks with fewer than 100 validation samples, and randomly pick order tasks within each class. For the iMaterialist tasks, we similarly group the tasks (e.g., category, style, pattern), filter out tasks with fewer than 1,000 validation samples, and randomly pick tasks within each group. For CUB, we randomly select 2 order tasks and 3 Passeriformes family tasks, and for DeepFashion, we randomly select the tasks uniformly. All this ensures that we have a balanced variety of tasks.
For the data efficiency experiment, we trained on a subset of the tasks and experts in the Mixed meta-task: we picked Accipitriformes, Asparagales, Upper-body, and Short Sleeves for the tasks, and Color, Lepidoptera, Upper-body, Passeriformes, and Asterales for the experts. Tasks were selected among those that have more than 30,000 training samples in order to represent all datasets. The experts were also selected to be representative of all datasets, and contain both strong and very weak experts (such as the Color expert).