# Merging Models with Fisher-Weighted Averaging

###### Abstract

Transfer learning provides a way of leveraging knowledge from one task when learning another task. Performing transfer learning typically involves iteratively updating a model’s parameters through gradient descent on a training dataset. In this paper, we introduce a fundamentally different method for transferring knowledge across models that amounts to “merging” multiple models into one. Our approach effectively involves computing a weighted average of the models’ parameters. We show that this averaging is equivalent to approximately sampling from the posteriors of the model weights. While using an isotropic Gaussian approximation works well in some cases, we also demonstrate benefits by approximating the precision matrix via the Fisher information. In sum, our approach makes it possible to combine the “knowledge” in multiple models at an extremely low computational cost compared to standard gradient-based training. We demonstrate that model merging achieves comparable performance to gradient descent-based transfer learning on intermediate-task training and domain adaptation problems. We also show that our merging procedure makes it possible to combine models in previously unexplored ways. To measure the robustness of our approach, we perform an extensive ablation on the design of our algorithm.

## 1 Introduction

The paradigm of transfer learning Pan and Yang (2009), which involves pre-training a model before fine-tuning it on a target task, has become pervasive in many applications of machine learning. The preparatory step of pre-training a model on a data-rich task ideally instills useful “knowledge” into the network’s weights, which allows the model to learn more rapidly and effectively when fine-tuned on a downstream task of interest. Transfer learning has therefore become a particularly important and omnipresent tool across many fields, including natural language processing Ruder et al. (2019); Devlin et al. (2018b); Dai and Le (2015); Radford et al. (2018); Raffel et al. (2019); Peters et al. (2018) and computer vision Oquab et al. (2014); Jia et al. (2014); Yosinski et al. (2014). In computer vision, pre-training is typically done on a large labeled dataset like ImageNet Deng et al. (2009); Russakovsky et al. (2015), whereas applications of transfer learning to natural language processing typically pre-train through self-supervised training on a large unlabeled text corpus. Recently, it has been shown that training on an “intermediate” task between pre-training and fine-tuning can further boost performance Phang et al. (2018); Vu et al. (2020); Pruksachatkun et al. (2020); Phang et al. (2020). Alternatively, continued self-supervised training on unlabelled domain-specialized data can serve as a form of domain adaptation Gururangan et al. (2020).

All of the aforementioned transfer learning methods transfer knowledge by using a trained network to initialize another network followed by iterative gradient descent. While demonstrably powerful, several drawbacks arise from this: First, improvements to ancestor models cannot be passed down to descendants; instead, we must restart the whole process from the improved ancestor model, throwing away our previous work. For example, if we fine-tune a pre-trained model on a downstream task, but then the pre-trained model is improved through additional training, we must re-fine-tune the new model on our downstream task if we want to confer benefits from this additional pre-training. Furthermore, if we gain access to a checkpoint that has been fine-tuned on a useful intermediate task, we must again throw away our previous work and fine-tune from the intermediate task checkpoint. Existing methods for transfer learning also have the disadvantage of only being able to transfer information from a single model. While it may be possible to train on multiple intermediate tasks sequentially, one quickly either runs into a combinatorial explosion of saved checkpoints or faces the issue of “catastrophic forgetting” in continual learning Kirkpatrick et al. (2017). In addition to slowing down experimentation by preventing reuse of work, these drawbacks impose limitations on the types of transfer that can occur.

In this paper, we introduce model merging, a fundamentally different transfer learning method that does not suffer from these deficiencies. Instead of transferring knowledge through gradient-based training, our method directly combines the weights of trained models. This is accomplished by computing a weighted average of the models’ checkpoints. We explore both using a single scalar weight per model and using parameter-wise weights given by the Fisher information computed over the models’ respective datasets. The Fisher information can be estimated efficiently using existing algorithms and can be computed once for a given model and reused. Merging has an extremely low amortized computational cost, with individual merges typically taking less than a second on a GPU. We visualize a comparison of our proposed merging procedure to traditional gradient-based transfer learning in fig. 1.

Empirically, we demonstrate that merging models fine-tuned on individual tasks achieves comparable performance to sequential intermediate-task training on the GLUE benchmark. We also show that model merging can provide an additional boost to models created via traditional intermediate-task training without having to worry about catastrophic forgetting. This provides a concrete example of transfer that is fast and easy with merging but onerous or impossible to do with existing methods. We also show that model merging compares favorably to sequential domain adaptation on NLP tasks in the biomedical and computer science domains. Finally, we perform an extensive ablation study on hyperparameters relevant to model merging.

The rest of our paper is structured as follows: In section 2, we provide the necessary background before detailing our model merging procedure. Section 3 consists of experimental results on intermediate-task training and domain adaptation, in addition to our ablation study. We explore related works in section 4 and provide conclusions and thoughts on future work in section 5.

## 2 Weighted Parameter Averaging for Model Merging

We introduce two related approaches for merging models that we dub “isotropic merging” and “Fisher merging”. As a high-level summary, our approach effectively creates a posterior probability distribution over the parameters for each model and then selects the parameters with the highest joint likelihood. These posterior distributions each take the form of a Gaussian centered around the model’s parameters. Isotropic merging uses the identity matrix as the precision matrix while Fisher merging uses the model’s Fisher information matrix. We always use a diagonal approximation of the Fisher matrix in practice, so the precision matrix is always diagonal. We thus set each merged parameter value to a weighted average of the corresponding parameter values from the original models. We add model-level weightings as additional hyperparameters to set the relative importance of each model.

In this section, we first provide background on the Laplace approximation, which motivates the form of the posterior and our use of the Fisher information matrix. We then provide a review of Fisher information before describing how it is used in Fisher merging.

### 2.1 Laplace Approximation

Since common neural network training algorithms produce a point estimate of the parameter values, we require a means of producing a posterior probability distribution from a point estimate. Isotropic merging uses an isotropic Gaussian centered at the parameter values as a zero-order approximation to the posterior, i.e. we assume the Gaussian posterior has identity covariance.

Alternatively, Fisher merging uses the Laplace approximation to the posterior, which corresponds to a second-order Taylor expansion of the log density around a mode (MacKay, 1992). This leads to a Gaussian approximation of the posterior, $p(\theta) \approx \mathcal{N}(\theta; \theta^{*}, H^{-1})$, where $H$ is the Hessian matrix of the negative log posterior and $\theta^{*}$ are the model’s trained parameter values. More precisely, we assume that the parameter values of a trained neural network are a local maximum of the posterior. It can then be shown that the precision matrix of the Laplace approximation is given by the Fisher information matrix of the network at $\theta^{*}$, which we discuss in the next subsection.

### 2.2 Fisher Information Matrix

The Fisher information matrix (Fisher, 1922; Amari, 1997) of a neural network $p_\theta(y \mid x)$ with parameters $\theta$ is the positive semidefinite matrix given by the formula

$$ F_\theta = \mathbb{E}_{x \sim p(x)} \, \mathbb{E}_{y \sim p_\theta(y \mid x)} \left[ \nabla_\theta \log p_\theta(y \mid x) \, \nabla_\theta \log p_\theta(y \mid x)^{\top} \right]. \tag{1} $$

It can be shown that the Fisher information matrix coincides with the Hessian at modes of the distribution (Pascanu and Bengio, 2013), explaining its use in the Laplace approximation.

The Fisher information matrix can also be used to relate changes in the model parameters to changes in the model output. It can be shown that

$$ D_{\mathrm{KL}}\big(p_\theta(y \mid x) \,\big\|\, p_{\theta + \delta}(y \mid x)\big) \approx \frac{1}{2} \delta^{\top} F_\theta \, \delta \tag{2} $$

as $\delta \to 0$, where $D_{\mathrm{KL}}$ denotes the KL-divergence (Pascanu and Bengio, 2013). The field of information geometry explores this interpretation in depth, where the Fisher information matrix plays the role of a metric on a Riemannian manifold (Nielsen, 2020).

As the full Fisher matrix takes $O(|\theta|^2)$ memory to store, it quickly becomes impractical for all but the smallest models. We are thus forced to use an approximation to the full Fisher in practice. In this paper, we follow the common practice of using the diagonal of the Fisher matrix Kirkpatrick et al. (2017) and leave exploration of alternative Fisher approximations to future work. In our experiments, we estimated the diagonal of the Fisher matrix via

$$ \hat{F}_\theta = \frac{1}{N} \sum_{i=1}^{N} \mathbb{E}_{y \sim p_\theta(y \mid x_i)} \big( \nabla_\theta \log p_\theta(y \mid x_i) \big)^2 \tag{3} $$

where $x_1, \ldots, x_N$ are drawn i.i.d. from the dataset that was used to train the model and the square is taken elementwise. The expectation over $y$ can be estimated via sampling from $p_\theta(y \mid x_i)$ or computed exactly when the number of classes is small. While other methods (e.g. (Achille et al., 2019a)) exist for estimating the Fisher, we leave their exploration for future work.
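As an illustrative sketch (not the authors' implementation), the diagonal estimate in (3) can be computed for a toy linear softmax classifier in a few lines; the helper `diagonal_fisher` is an assumption for this small-output-space setting, where the expectation over classes can be taken exactly:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def diagonal_fisher(W, xs):
    """Diagonal Fisher of a linear softmax classifier p_W(y|x) = softmax(W @ x),
    taking the expectation over y exactly (feasible when the number of
    classes is small), averaged over the given inputs as in eq. (3)."""
    fisher = np.zeros_like(W)
    for x in xs:
        p = softmax(W @ x)
        for y in range(len(p)):
            # gradient of log p(y|x) w.r.t. W is (one_hot(y) - p) outer x
            g = np.outer(np.eye(len(p))[y] - p, x)
            fisher += p[y] * g ** 2  # exact expectation over y
    return fisher / len(xs)
```

For large output spaces (e.g. extractive question answering), the inner loop over classes would be replaced by sampling a single `y` from the model's predictive distribution per example.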

### 2.3 Model Merging

We start with a set of $M$ trained neural networks with weights $\theta_1, \ldots, \theta_M$ and accompanying matrices $F_1, \ldots, F_M$. For Fisher merging, $F_1, \ldots, F_M$ correspond to the diagonal approximate Fisher matrices, whereas for isotropic merging they are identity matrices. As discussed in section 2.1, we then construct $p(\theta \mid \theta_m, F_m)$ as a Gaussian-distributed posterior over the weights of the merged model with mean $\theta_m$ and precision $F_m$. To obtain the merged model, we find a single set of parameters that is given a high probability under each posterior. Formally, we have

$$ \theta^{*} = \arg\max_{\theta} \sum_{m=1}^{M} \lambda_m \log p(\theta \mid \theta_m, F_m) \tag{4} $$

where the $\lambda_m$ are hyperparameters corresponding to the relative weighting of each constituent model, whose optimization we will address below. For both isotropic and Fisher merging, (4) has the closed-form solution

$$ \theta^{*(j)} = \frac{\sum_{m=1}^{M} \lambda_m F_m^{(j)} \theta_m^{(j)}}{\sum_{m=1}^{M} \lambda_m F_m^{(j)}} \tag{5} $$

where $j = 1, \ldots, |\theta|$ indexes individual parameters.
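The closed form in (5) is just a per-parameter weighted average and can be sketched directly; the helper name `fisher_merge` is illustrative, not from the paper:

```python
import numpy as np

def fisher_merge(thetas, fishers, lams):
    """Per-parameter weighted average (eq. 5): each merged parameter is
    the lambda- and Fisher-weighted mean of the constituent models'
    values. Passing all-ones `fishers` recovers isotropic merging."""
    num = sum(l * f * t for l, f, t in zip(lams, fishers, thetas))
    den = sum(l * f for l, f in zip(lams, fishers))
    return num / den
```

With identity Fishers and equal merging coefficients this reduces to a plain parameter average; unequal Fishers pull each merged parameter toward the model for which it matters most.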

Merging then becomes the optimization problem of finding the merging coefficients $\lambda_1, \ldots, \lambda_M$. Since (5) is invariant to rescaling all of the $\lambda_m$ by a common factor, we can restrict our search to the $(M-1)$-simplex, i.e. $\lambda_m \geq 0$ with $\sum_m \lambda_m = 1$. In practice, due to the computational efficiency of evaluating one particular setting of the merging coefficients, we perform this optimization through a simple grid search. We note that the search could be made more efficient via methods such as Bayesian hyperparameter optimization (Snoek et al., 2015), and that our merging formula (4) is differentiable with respect to the merging coefficients. We leave further exploration of search algorithms for merging coefficients to future work.
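A minimal sketch of this grid search for the two-model case, with a hypothetical `score_fn` standing in for validation-set evaluation of a merged candidate:

```python
import numpy as np

def grid_search_merge(theta_a, theta_b, fisher_a, fisher_b,
                      score_fn, num_points=50):
    """Search lambda on an even grid over [0, 1] (the 1-simplex for two
    models), scoring each merged candidate with a user-supplied metric
    and returning the best coefficient and merged parameters."""
    best_lam, best_score, best_theta = None, -np.inf, None
    for lam in np.linspace(0.0, 1.0, num_points):
        num = lam * fisher_a * theta_a + (1 - lam) * fisher_b * theta_b
        den = lam * fisher_a + (1 - lam) * fisher_b
        theta = num / np.maximum(den, 1e-12)  # guard tiny denominators
        s = score_fn(theta)
        if s > best_score:
            best_lam, best_score, best_theta = lam, s, theta
    return best_lam, best_theta
```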

In addition to the Bayesian motivation behind merging described above, we can also motivate Fisher merging based on the information-geometric interpretation of the Fisher information matrix: We seek a point in parameter space that minimizes the expected KL-divergence between the original and merged models on their corresponding datasets. After approximating the divergences via (2), model merging then can be seen as finding such a point.

### 2.4 Caveats

Numerical Issues. Note that (5) can run into numerical issues when the Fisher is close to zero across all models for a given parameter. Since we have a privileged “target model” in all of our experiments (i.e. the model that has been fine-tuned on the final task of interest), we address this potential issue by “defaulting” to the parameter’s value in the target model in these cases. An alternative would be to take an average weighted only by the merging coefficients (i.e., pretend the Fisher is the same across all models). Preliminary experiments found that the choice of a “default” value for these parameters had little impact on merging performance, which makes sense given that a small Fisher value ultimately means that the parameter is relatively unimportant to the model’s behavior.
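This "defaulting" behavior can be sketched as follows; `merge_with_default` and the threshold `eps` are illustrative stand-ins, not the exact implementation:

```python
import numpy as np

def merge_with_default(thetas, fishers, lams, target_theta, eps=1e-6):
    """Where the total Fisher-weighted mass for a parameter falls below
    `eps` across all models, fall back to that parameter's value in the
    privileged target model instead of dividing by a near-zero sum."""
    num = sum(l * f * t for l, f, t in zip(lams, fishers, thetas))
    den = sum(l * f for l, f in zip(lams, fishers))
    return np.where(den > eps, num / np.maximum(den, eps), target_theta)
```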

Ensuring Closeness of Checkpoints. Since the Fisher information is a local property of a single parameter value, in theory all of the models we are merging must be close to each other in parameter space. If the parameters are far apart, the merged parameters could end up in low-probability regions of at least one posterior. A similar argument holds for the isotropic case. Furthermore, the Fisher matrix becomes a poorer approximation of the model’s geometry as the distance between checkpoints grows. Thus, we only experiment with merging models that were fine-tuned from the same pre-trained checkpoint. We also explored regularizing the squared L2 distance between the fine-tuned and pre-trained weights during fine-tuning.

Unmergeable Parameters. In many cases, we have some parameters from each model that do not appear in all of the models we are merging. In particular, this includes having task-specific classification heads on top of a common body architecture. We handle this by only applying the merging procedure (4) to the shared body parameters and keeping the task-specific heads unchanged. Although this may lead to a distribution shift in the classification head inputs, we found it to work well in practice for the datasets and tasks we consider.
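A sketch of this handling, assuming checkpoints are represented as simple name-to-array dictionaries (the helper `merge_shared` is hypothetical):

```python
import numpy as np

def merge_shared(state_dicts, fishers, lams, target_idx=0):
    """Merge only the keys common to all checkpoints (the shared body);
    any other key (e.g. a task-specific classification head) keeps the
    target model's value unchanged. Fishers are dicts over shared keys."""
    shared = set.intersection(*(set(sd) for sd in state_dicts))
    merged = dict(state_dicts[target_idx])  # start from the target model
    for key in shared:
        num = sum(l * f[key] * sd[key]
                  for l, f, sd in zip(lams, fishers, state_dicts))
        den = sum(l * f[key] for l, f in zip(lams, fishers))
        merged[key] = num / den
    return merged
```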

## 3 Experiments

Since we are not aware of any existing methods for merging models, we run experiments emulating existing transfer learning pipelines that involve creating an intermediate model from a pre-trained checkpoint before creating the final fine-tuned model. Intermediate-task training has mainly been considered in the NLP domain; as such, we limit our experiments to the BERT Devlin et al. (2018b) and RoBERTa Liu et al. (2019) pre-trained language models. We stress that the purpose of these experiments is not to outperform existing methods but rather to evaluate our merging method against existing standard practices. We also show that in some cases our merging method can provide an additional boost when applied on top of existing gradient-based techniques for transfer learning. To enable comparison to past work, we mostly explored merging pairs of models, but we include results on 3- and 4-way merges in appendix B. For high-resource tasks, we made use of fine-tuned BERT and RoBERTa checkpoints from the Hugging Face repository (Wolf et al., 2019). The Hugging Face model repository collects fine-tuned checkpoints derived from standard, publicly-released models like BERT and RoBERTa. We used checkpoints from Hugging Face to reduce the computational cost of our experiments and to provide an example of how merging can be used to draw benefit from existing publicly-released fine-tuned models. We fine-tuned our own models for low-resource tasks since their higher variance required us to run multiple trials.

### 3.1 Intermediate-Task Training

Within the context of transfer learning, intermediate-task training refers to fine-tuning a pre-trained model on an intermediate task before fine-tuning it on a final target task. This has been found to provide an additional improvement to target task performance compared to using the pre-trained model alone (Vu et al., 2020; Phang et al., 2018). We provide an analog of this in our merging framework by merging a model fine-tuned on the target task with a model fine-tuned on the intermediate task.

#### 3.1.1 Emulating Existing Methods

Following previous work, we ran experiments using BERT-base on the GLUE benchmark (Wang et al., 2018). The GLUE benchmark consists of the sentence acceptability task CoLA (Warstadt et al., 2019), the sentiment detection task SST-2 (Socher et al., 2013), the paraphrase detection tasks MRPC and QQP (Dolan and Brockett, 2005; Iyer et al., 2017), the sentence similarity task STS-B (Cer et al., 2017), and the natural language inference (NLI) tasks MNLI, QNLI, RTE, and WNLI (Bowman et al., 2015; Rajpurkar et al., 2016a; Dagan et al., 2005; Levesque et al., 2012). All of the GLUE tasks are classification tasks except for STS-B, which is a regression task with a score ranging from 0 to 5. Following common practice, we do not run experiments on WNLI due to the tricks required to get a good score (Devlin et al., 2018a; Kocijan et al., 2019). We turn STS-B into a classification task by partitioning the continuous label into 25 equally-sized buckets (Raffel et al., 2019). We emphasize that this was done for the sake of convenience rather than due to a limitation of our methods. When a single task has multiple metrics, we report the average over metrics. See Wang et al. (2018) for more details on these tasks and their associated metrics. We also consider the SQuAD reading comprehension benchmark as an intermediate task Rajpurkar et al. (2016b). We detail how we obtained fine-tuned checkpoints on these tasks in appendix A.

We computed a diagonal Fisher approximation for each checkpoint using up to 4096 examples from the corresponding train set. We computed the expectation with respect to $y$ in (3) exactly for all tasks except SQuAD; being an extractive question-answering task, its output space is quite large, so we sampled a single output per example. As in past work Pruksachatkun et al. (2020); Vu et al. (2020), we merged checkpoints from every possible pair of tasks. We chose our merging coefficients via a grid search with 50 points, using the score on the first 2048 validation examples as the selection metric. We used a Fisher information threshold of 1e-6 to resolve numerical stability issues through the method outlined in section 2.4.

We present our results using Fisher merging in fig. 2 and isotropic merging in table A3. In line with existing intermediate-task training results, we find that merging provided a substantial boost on the RTE task, with MNLI providing a particularly large boost of 9.5 points from Fisher merging. The low-resource tasks of CoLA and MRPC also received a modest boost. We found that Fisher merging provided a slightly larger boost than isotropic merging in all cases except MRPC. Interestingly, we see that some high-resource tasks benefited from merging as well, with QQP getting a 0.6-point boost from Fisher merging with QNLI. In this case, Fisher merging provided a significantly larger boost than isotropic merging.

As a baseline, we ran traditional gradient-based intermediate-task training on CoLA, MRPC, STS-B, and RTE using the same donor checkpoints as used for merging. We present our results in table A4. Overall, we find the boosts attained through merging to typically be somewhat smaller than those from traditional gradient-based fine-tuning. The cost of merging with a given checkpoint, however, is far less than performing a completely new fine-tuning run; see section 3.3.4 for a quantification of the difference in FLOPs. Furthermore, we need not have access to the donor checkpoint at the time of fine-tuning; we can keep benefiting from new checkpoints as they become available. Note that merging will never result in a degradation in performance on the validation set of the original task because it is always possible to set $\lambda = 1$ for the original model and $\lambda = 0$ otherwise, which amounts to just keeping the original checkpoint.

#### 3.1.2 Enabling New Paths for Knowledge Transfer

We now explore whether checkpoints derived from intermediate-task training can benefit from merging. This setting does not have an existing direct analog in traditional gradient-based intermediate-task training, where the final model is only able to bootstrap knowledge from the lineage of tasks used to train the checkpoint used for initialization before final fine-tuning (as shown in fig. 1). Merging allows us to transfer additional knowledge to the would-be final model that can complement the knowledge transferred from the checkpoint used for initialization.

Following the set of tasks considered in Liu et al. (2019), we fine-tuned the BERT-base MNLI checkpoint on MRPC, STS-B, and RTE using the same hyperparameters as the previous section. We then merged all pairs of checkpoints from the union of these intermediate task checkpoints and the high-resource task checkpoints from Hugging Face. We present our results using Fisher merging in fig. 2 and isotropic merging in table A6. The three target tasks all benefited from intermediate-task training. MRPC and RTE further benefited from merging. The boost was typically larger for Fisher merged models, especially for RTE.

We see that merging with the MNLI checkpoint still provided a benefit even to the models fine-tuned from it, which we hypothesize came from reintroducing information forgotten during fine-tuning. However, the boost from MNLI was smaller than merging with checkpoints fine-tuned on other tasks. The biggest boost came from Fisher merging with the SST-2 checkpoint with a gain of 2.3 points. Being a non-NLI task with many training examples, we speculate that SST-2 may have provided the biggest boost since it provides the most new information to the model.

Again, we ran a sequential fine-tuning baseline for some task pairs in this setup. Specifically, we started with the MNLI checkpoint, fine-tuned on the donor task, and then fine-tuned on the target task. From our results in table A7, we see that this actually performed worse than directly fine-tuning on only the target task from the MNLI checkpoint. We hypothesize this is related to the phenomenon of catastrophic forgetting Goodfellow et al. (2013), so continual learning methods such as Elastic Weight Consolidation Kirkpatrick et al. (2017) would likely improve upon this baseline. Nevertheless, this illustrates model merging’s ability to sidestep the issue of catastrophic forgetting and enable exploration of novel transfer learning strategies.

#### 3.1.3 Scaling

Seeing that merging can provide a boost on top of intermediate-task training, we explored whether this boost could still be obtained for models with near-state-of-the-art performance. We sought to improve the performance of a RoBERTa-large RTE model that had been fine-tuned from an MNLI intermediate checkpoint. Our fine-tuning and Fisher computation procedure was the same as for BERT-base with the exception of using a batch size of 8 and doing 4 rather than 5 trials. Our donor models were the original RoBERTa-large checkpoint (i.e., not fine-tuned on MNLI) fine-tuned on MRPC, RTE, STS-B, and SST-2. The SST-2 checkpoint was fine-tuned for 3 epochs rather than 10.

We present our results for Fisher merging in fig. 4 with full numerical results in table A8. Even on well-performing models, merging can boost performance. As in section 3.1.2, we again found the sequential fine-tuning baseline to perform poorly. Interestingly, the largest boost of 2.2 points came from Fisher merging with another RTE checkpoint, which is reminiscent of ensembling (Opitz and Shavlik, 1996). We leave further exploration of the application of merging to ensembling for future work.

### 3.2 Domain Adaptation

We now turn our attention to the “domain-adaptive pre-training” (DAPT) approach for domain adaptation advocated by Gururangan et al. (2020), which is methodologically similar to intermediate-task training. DAPT consists of additional pre-training of an original general-purpose pre-trained checkpoint on domain-specific unlabeled data.

We explore the benefits of merging in an experimental setup similar to Gururangan et al. (2020). We focus on the biomedical (BioMed) and computer science (CS) domains because they correspond to the classification tasks that saw the largest gains from domain-adaptive pre-training in Gururangan et al. (2020). Namely, we experimented with the ChemProt (Kringelum et al., 2016) relation classification task in the BioMed domain. In the CS domain, we used the citation intent task of ACL-ARC (Jurgens et al., 2018) and the relation classification task of SciERC (Luan et al., 2018). Following Gururangan et al. (2020), we report macro-$F_1$ for ACL-ARC and SciERC, and we report micro-$F_1$ for ChemProt.

We used RoBERTa-base (Liu et al., 2019) as our baseline model. We performed our own domain-adaptive pre-training to be able to experiment with the number of updates used for DAPT and experiment with applying regularization. Please see appendix C for details on our pre-training. Our fine-tuning and target task Fisher computation procedures were the same as in our GLUE experiments with the exception of using a batch size of 8 when fine-tuning. Fine-tuning for 10 epochs, we saved a checkpoint at the end of each epoch. We computed the Fisher for the DAPT checkpoints on 131,072 examples, using one sample from the logits per example. We merged each checkpoint saved during fine-tuning with the DAPT checkpoint from the task’s domain. We performed a grid search of 75 merging coefficients and used the score on the first 2048 test examples as the selection criterion. We used a Fisher information threshold of 1e-20 to resolve numerical stability issues through the method outlined in section 2.4. We report the scores of the best unmerged and the best merged checkpoint from each fine-tuning run.

| Task | Unmerged | Fisher | Isotropic | Fine-tuned |
|---|---|---|---|---|
| ChemProt | | | | |
| ACL-ARC | | | | |
| SciERC | | | | |

Table 1: Target-task scores for domain adaptation (numerical values were not recoverable from this copy).

We present our results in table 1. Merging provided the largest boost on ACL-ARC and outperformed traditional fine-tuning in this setting. We only observed a minor improvement in performance on ChemProt and SciERC. We note that our boosts from gradient-based fine-tuning were smaller than those reported in Gururangan et al. (2020), likely because we were only able to train on public data and we applied domain-adaptive pre-training for fewer steps. However, our results are consistent in the sense that ACL-ARC received the largest boost and ChemProt received the smallest boost.

### 3.3 Ablations

Having demonstrated the benefits of merging, we now perform ablation studies on relevant hyperparameters. Seeing as it frequently outperforms isotropic merging, we only perform ablations using the Fisher merging approach. We chose to focus on MNLI-to-RTE intermediate-task transfer since merging provided the largest boost on that pair of tasks. All experiments in this section start from the BERT-base pre-trained checkpoint. We always used a batch size of 16 and the Adam optimizer with a learning rate of 1e-5. We used a Fisher information threshold of 1e-6 to resolve numerical stability issues when merging through the method outlined in section 2.4.

#### 3.3.1 Regularization During Fine-tuning

Since our methods depend on constructing a local approximation to the posterior over model weights, they implicitly require checkpoints to be “close enough” to work properly. While merging checkpoints fine-tuned from a common pre-trained model has a favorable inductive bias, penalizing distance from that pre-trained checkpoint during fine-tuning can strengthen that bias. We explored regularizing the squared L2 distance from the fine-tuned parameter values to the pre-trained parameter values Kirkpatrick et al. (2017) during fine-tuning. We fine-tuned separate BERT-base models on RTE and MNLI with varying regularization strengths. We used 4096 examples to compute the Fisher and merged using a grid search on the merging coefficient with 50 evenly spaced values.
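This penalty can be sketched as a toy gradient-descent loop; `finetune_with_l2_to_init` and `grad_fn` are illustrative stand-ins for an actual fine-tuning setup:

```python
import numpy as np

def finetune_with_l2_to_init(theta0, grad_fn, reg_coeff, lr=0.1, steps=200):
    """Toy fine-tuning loop adding the gradient of
    (reg_coeff / 2) * ||theta - theta0||^2 to the task gradient, pulling
    the fine-tuned weights back toward the pre-trained values theta0.
    `grad_fn` stands in for the gradient of the actual task loss."""
    theta = theta0.copy()
    for _ in range(steps):
        g = grad_fn(theta) + reg_coeff * (theta - theta0)
        theta -= lr * g
    return theta
```

For a quadratic task loss centered away from the initialization, increasing `reg_coeff` yields a solution strictly closer to `theta0`, which is exactly the closeness property merging relies on.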

We present our results in table A9. It appears that the regularization slightly hurts the performance of unmerged RTE models. For merging, we obtained our best results with the unregularized RTE models both in terms of absolute performance and the size of the boost. While weak regularization during MNLI fine-tuning led to a slightly larger merging boost, too-strong regularization hurt performance. We hypothesize that the different number of steps that the RTE and MNLI models were trained for explains the difference in optimal regularization for the two tasks. Ten epochs of RTE with a batch size of 16 corresponds to roughly 1.5k steps while two epochs of MNLI with the same batch size corresponds to about 50k steps. Models fine-tuned for relatively few steps probably do not change enough from their initialization to affect merging. However, models fine-tuned for many steps can stray far enough to negatively impact merging without additional regularization. Based on these results, we suspect that we could improve our results from section 3.1.1 by applying weak regularization when fine-tuning on high resource tasks.

#### 3.3.2 Checkpoint Choice

Since the number of fine-tuning steps can change the closeness to the pre-trained checkpoint, we ran ablations on the number of epochs used for fine-tuning. We explored fine-tuning for between 1 and 10 epochs on RTE and between 0.5 and 4 epochs for MNLI. We used the same hyperparameters for fine-tuning as the previous section, using 3e-4 as the regularization coefficient. We also used the same Fisher computation and merging procedure.

We present our results in table A10. Without merging, the RTE score was roughly independent of the number of fine-tuning epochs. The boost from merging tended to increase with the number of RTE epochs. Similarly, MNLI models fine-tuned for too few steps produced smaller boosts. The boost leveled off but did not decrease as we fine-tuned for longer on MNLI. These results suggest that fine-tuning for longer improves the efficacy of merging. We stress, however, that this may be a suboptimal strategy when the validation score of the target model drops with continued training due to overfitting. Taken together with the results from the previous section, merging appears robust to fine-tuning methods with a weak inductive bias of closeness to the pre-trained checkpoint.

#### 3.3.3 Fisher Examples

We now turn our attention to the estimation of the Fisher matrix, namely the number of examples used to estimate it. We reused the RTE checkpoints from our intermediate-task training experiments in section 3.1.1. We used the textattack/bert-base-uncased-MNLI model from Hugging Face as the MNLI model. We used the same Fisher computation and merging procedure as in section 3.3.1.

We present our results in table A11, where we see a clear trend of increasing performance as the number of Fisher examples increases for both the target and donor models. Notably, even using only 256 examples to compute the Fisher outperformed the isotropic merging baseline.

#### 3.3.4 Computational Cost

We had previously noted that our merging procedure can be substantially more efficient than standard gradient-based fine-tuning. To measure this claim concretely, we computed the FLOPs required for fine-tuning and merging an RTE checkpoint based on the heuristics described in Kaplan et al. (2020). Fine-tuning BERT-base on RTE for 10 epochs would require about 5.5e14 FLOPs. Our merging procedures require computing the merged checkpoint (eq. 5) and then evaluating it on the validation set, with Fisher merging also requiring the estimation of the Fisher matrix (eq. 3) beforehand. These steps require about 4.0e8, 2.0e12, and 9.1e13 FLOPs respectively, resulting in a roughly 6× lower total cost compared to fine-tuning for Fisher merging and a 275× lower cost for isotropic merging. Further, we reiterate that the Fisher matrix only needs to be computed once and can be reused for subsequent merges, which amortizes the most expensive step in Fisher merging.
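The arithmetic behind these ratios can be checked directly from the per-step estimates above:

```python
# Back-of-the-envelope check of the cost comparison; the figures are the
# paper's FLOP estimates, reproduced here for the arithmetic only.
finetune_flops = 5.5e14  # 10 epochs of fine-tuning BERT-base on RTE
merge_flops = 4.0e8      # computing the merged checkpoint (eq. 5)
eval_flops = 2.0e12      # evaluating the merge on the validation set
fisher_flops = 9.1e13    # one-time diagonal Fisher estimation (eq. 3)

fisher_total = merge_flops + eval_flops + fisher_flops
isotropic_total = merge_flops + eval_flops

print(round(finetune_flops / fisher_total))     # Fisher merging savings
print(round(finetune_flops / isotropic_total))  # isotropic merging savings
```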

## 4 Related Work

Like our work, elastic weight consolidation (EWC) (Kirkpatrick et al., 2017) uses the Laplace approximation to the posterior over model weights to create a regularizer to prevent catastrophic forgetting in the context of continual learning. While their framework supports the use of posteriors from multiple models as well, they restrict such models to be previous checkpoints of a continually trained model. EWC keeps the model from losing previously acquired knowledge while merging provides a means of directly adding new knowledge to a model.

Some other existing procedures such as distillation (Hinton et al., 2015) and ensembling (Opitz and Shavlik, 1996) can also be thought of as combining or transferring knowledge between neural networks. However, those methods represent knowledge solely through the outputs of models. The knowledge contained within the weights of a network will necessarily be greater than the knowledge contained in its output (Achille et al., 2019b). Hence, methods like merging that directly combine model weights have the potential to be more powerful. Furthermore, our merging procedure has an efficient closed-form solution (eq. 5), while distillation requires iterative gradient descent-based training.

Isotropic checkpoint averaging is used in federated learning (McMahan et al., 2017) and in Polyak averaging (Polyak and Juditsky, 1992). However, the checkpoints merged by those methods can be thought of as coming from the same training run of a single model. We believe we are the first to demonstrate cross-task transfer via checkpoint averaging and to explore it in the context of transfer learning. Nevertheless, adapting ideas from federated learning such as those of Liu et al. (2018) and Wang et al. (2020) could provide a fruitful avenue for future model merging research.
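For reference, FedAvg-style isotropic averaging is just a convex combination of parameter vectors; this is also the isotropic special case of our merging procedure, in which every parameter is given the same precision. A minimal sketch (names ours):

```python
import numpy as np

def isotropic_merge(checkpoints, coeffs):
    """Isotropic merging: a simple weighted average of parameter lists.

    `checkpoints` is a list of models, each given as a list of parameter
    arrays; `coeffs` is one merging coefficient per model, summing to 1."""
    assert abs(sum(coeffs) - 1.0) < 1e-8
    return [sum(c * p for c, p in zip(coeffs, params))
            for params in zip(*checkpoints)]
```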

Natural gradient descent is an optimization procedure that uses the KL divergence between model predictions as its distance measure, rather than the Euclidean distance in parameter space employed by regular gradient descent (Amari, 1997). It does this by performing gradient descent on a Riemannian manifold with the Fisher information matrix as its metric (Pascanu and Bengio, 2013). In practice, this amounts to using the Fisher as a preconditioner during gradient descent. Some work on natural gradient descent may prove relevant to model merging, such as the use of Kronecker-factored Fisher matrices as an alternative to the diagonal approximation employed in this paper (Martens and Grosse, 2015; Grosse and Martens, 2016; Martens et al., 2018).
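With a diagonal Fisher approximation, this preconditioning reduces to an elementwise rescaling of the gradient. A minimal sketch (the damping term is our own addition for numerical stability where the Fisher estimate is near zero):

```python
import numpy as np

def natural_gradient_step(theta, grad, fisher_diag, lr, damping=1e-8):
    """One diagonal natural gradient step: theta <- theta - lr * F^{-1} grad.

    Directions in which the Fisher is large (the model's predictions are
    sensitive) receive proportionally smaller parameter updates."""
    return theta - lr * grad / (fisher_diag + damping)
```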

## 5 Conclusion

In this paper, we introduced model merging, a fundamentally new way of transferring knowledge across models. Our proposed algorithm merges models by computing a weighted average of the models’ parameters, with the option of introducing a weighting based on an approximation of each parameter’s Fisher information. We demonstrated that merging is an efficient alternative to traditional gradient-based fine-tuning for intermediate-task training and domain adaptation, and also demonstrated that it enables forms of knowledge transfer that would be onerous with traditional transfer learning methods. In future work, we plan to investigate different methods for approximating the Fisher information as well as more unusual combinations of models.

## References

- Task2vec: task embedding for meta-learning. In Proceedings of the IEEE/CVF International Conference on Computer Vision, pp. 6430–6439. Cited by: §2.2.
- Where is the information in a deep neural network?. arXiv preprint arXiv:1905.12213. Cited by: §4.
- Neural learning in structured parameter spaces-natural riemannian gradient. Advances in neural information processing systems, pp. 127–133. Cited by: §2.2, §4.
- A large annotated corpus for learning natural language inference. arXiv preprint arXiv:1508.05326. Cited by: §3.1.1.
- Semeval-2017 task 1: semantic textual similarity-multilingual and cross-lingual focused evaluation. arXiv preprint arXiv:1708.00055. Cited by: §3.1.1.
- The pascal recognising textual entailment challenge. In Machine Learning Challenges Workshop, pp. 177–190. Cited by: §3.1.1.
- Semi-supervised sequence learning. In Advances in neural information processing systems, Cited by: §1.
- ImageNet: a large-scale hierarchical image database. In 2009 IEEE conference on computer vision and pattern recognition, Cited by: §1.
- Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §3.1.1.
- Bert: pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805. Cited by: §1, §3.
- Automatically constructing a corpus of sentential paraphrases. In Proceedings of the Third International Workshop on Paraphrasing (IWP2005), Cited by: §3.1.1.
- On the mathematical foundations of theoretical statistics. Philosophical Transactions of the Royal Society of London. Series A, Containing Papers of a Mathematical or Physical Character 222 (594-604), pp. 309–368. Cited by: §2.2.
- An empirical investigation of catastrophic forgetting in gradient-based neural networks. arXiv preprint arXiv:1312.6211. Cited by: §3.1.2.
- A kronecker-factored approximate fisher matrix for convolution layers. In International Conference on Machine Learning, pp. 573–582. Cited by: §4.
- Don’t stop pretraining: adapt language models to domains and tasks. arXiv preprint arXiv:2004.10964. Cited by: Appendix C, §1, §3.2, §3.2, §3.2.
- Distilling the knowledge in a neural network. arXiv preprint arXiv:1503.02531. Cited by: §4.
- Caffe: convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM international conference on Multimedia, Cited by: §1.
- Measuring the evolution of a scientific field through citation frames. Transactions of the Association for Computational Linguistics 6, pp. 391–406. Cited by: §3.2.
- Scaling laws for neural language models. arXiv preprint arXiv:2001.08361. Cited by: §3.3.4.
- Adam: a method for stochastic optimization. arXiv preprint arXiv:1412.6980. Cited by: Appendix A.
- Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences 114 (13), pp. 3521–3526. Cited by: §1, §2.2, §3.1.2, §3.3.1, §4.
- A surprisingly robust trick for winograd schema challenge. arXiv preprint arXiv:1905.06290. Cited by: §3.1.1.
- ChemProt-3.0: a global chemical biology diseases mapping. Database 2016. Cited by: §3.2.
- The winograd schema challenge. In Thirteenth International Conference on the Principles of Knowledge Representation and Reasoning, Cited by: §3.1.1.
- Rotate your networks: better weight consolidation and less catastrophic forgetting. In 2018 24th International Conference on Pattern Recognition (ICPR), pp. 2262–2268. Cited by: §4.
- Roberta: a robustly optimized bert pretraining approach. arXiv preprint arXiv:1907.11692. Cited by: §3.1.2, §3.2, §3.
- S2orc: the semantic scholar open research corpus. arXiv preprint arXiv:1911.02782. Cited by: Appendix C.
- Multi-task identification of entities, relations, and coreference for scientific knowledge graph construction. arXiv preprint arXiv:1808.09602. Cited by: §3.2.
- A practical bayesian framework for backpropagation networks. Neural computation 4 (3), pp. 448–472. Cited by: §2.1.
- Kronecker-factored curvature approximations for recurrent neural networks. In International Conference on Learning Representations, Cited by: §4.
- Optimizing neural networks with kronecker-factored approximate curvature. In International conference on machine learning, pp. 2408–2417. Cited by: §4.
- Communication-efficient learning of deep networks from decentralized data. In Artificial Intelligence and Statistics, pp. 1273–1282. Cited by: §4.
- An elementary introduction to information geometry. Entropy 22 (10), pp. 1100. Cited by: §2.2.
- Actively searching for an effective neural network ensemble. Connection Science 8 (3-4), pp. 337–354. Cited by: §3.1.3, §4.
- Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, Cited by: §1.
- A survey on transfer learning. IEEE Transactions on knowledge and data engineering 22 (10), pp. 1345–1359. Cited by: §1.
- Revisiting natural gradient for deep networks. arXiv preprint arXiv:1301.3584. Cited by: §2.2, §2.2, §4.
- Deep contextualized word representations. arXiv preprint arXiv:1802.05365. Cited by: §1.
- Sentence encoders on stilts: supplementary training on intermediate labeled-data tasks. arXiv preprint arXiv:1811.01088. Cited by: §1, §3.1.
- English intermediate-task training improves zero-shot cross-lingual transfer too. arXiv preprint arXiv:2005.13013. Cited by: §1.
- Acceleration of stochastic approximation by averaging. SIAM journal on control and optimization 30 (4), pp. 838–855. Cited by: §4.
- Intermediate-task transfer learning with pretrained models for natural language understanding: when and why does it work?. arXiv preprint arXiv:2005.00628. Cited by: §1, §3.1.1.
- Improving language understanding by generative pre-training. Cited by: §1.
- Exploring the limits of transfer learning with a unified text-to-text transformer. arXiv preprint arXiv:1910.10683. Cited by: §1, §3.1.1.
- Know what you don’t know: unanswerable questions for squad. arXiv preprint arXiv:1806.03822. Cited by: Appendix A.
- Squad: 100,000+ questions for machine comprehension of text. arXiv preprint arXiv:1606.05250. Cited by: §3.1.1.
- SQuAD: 100,000+ questions for machine comprehension of text. In Proceedings of the 2016 Conference on Empirical Methods in Natural Language Processing, Cited by: §3.1.1.
- Transfer learning in natural language processing. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Tutorials, pp. 15–18. Cited by: §1.
- ImageNet large scale visual recognition challenge. International journal of computer vision. Cited by: §1.
- Scalable bayesian optimization using deep neural networks. In International conference on machine learning, pp. 2171–2180. Cited by: Appendix B, §2.3.
- Recursive deep models for semantic compositionality over a sentiment treebank. In Proceedings of the 2013 conference on empirical methods in natural language processing, pp. 1631–1642. Cited by: §3.1.1.
- Exploring and predicting transferability across nlp tasks. arXiv preprint arXiv:2005.00770. Cited by: §1, §3.1.1, §3.1.
- GLUE: a multi-task benchmark and analysis platform for natural language understanding. arXiv preprint arXiv:1804.07461. Cited by: §3.1.1.
- Federated learning with matched averaging. arXiv preprint arXiv:2002.06440. Cited by: §4.
- Neural network acceptability judgments. Transactions of the Association for Computational Linguistics 7, pp. 625–641. Cited by: §3.1.1.
- HuggingFace’s transformers: state-of-the-art natural language processing. arXiv preprint arXiv:1910.03771. Cited by: §3.
- How transferable are features in deep neural networks?. In Advances in neural information processing systems, Cited by: §1.

## Appendix A GLUE Fine-tuning Details

For the high-resource tasks QNLI, QQP, SST-2, and MNLI, we used checkpoints downloaded from Hugging Face. We also used a checkpoint from Hugging Face that was fine-tuned on the extractive question answering task SQuAD 2.0 (Rajpurkar et al., 2018) as an alternative intermediate-task checkpoint. For the low-resource tasks CoLA, MRPC, RTE, and STS-B, we fine-tuned for 10 epochs using a batch size of 16 and the Adam optimizer (Kingma and Ba, 2014) with a learning rate of 1e-5. We ran 5 independent fine-tuning runs for the low-resource tasks, discarding runs with poor performance. In our own fine-tuning runs, we added an L2 regularization term with strength 3e-4 on the distance between the pre-trained checkpoint and the fine-tuned body. Note that the high-resource task checkpoints did not use such a regularization term.
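The regularization term described above penalizes distance to the pre-trained checkpoint rather than to zero; a minimal sketch (function name ours, applied here to a generic list of parameter arrays):

```python
import numpy as np

def l2_to_init_penalty(params, init_params, strength=3e-4):
    """L2 regularization toward the pre-trained checkpoint:

        strength * sum_i ||theta_i - theta0_i||^2

    where theta0 are the pre-trained values. With strength 0 this recovers
    unregularized fine-tuning; penalizing distance to zero instead would pull
    the model away from its pre-trained solution."""
    return strength * sum(np.sum((p - p0) ** 2)
                          for p, p0 in zip(params, init_params))
```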

## Appendix B Merging 3+ Models

While we have only explored merging two checkpoints throughout this work, our merging procedure (eq. 5) can handle an arbitrary number of checkpoints. In this section, we provide some preliminary results from merging three and four checkpoints.

We explored merging models fine-tuned on GLUE tasks directly from BERT-base. The experiments in this section are essentially the same as in section 3.1.1 but merge more than two checkpoints. Due to the increased dimensionality of the merging-coefficient space, we opted to try 2500 random samples from the Dirichlet distribution rather than perform a grid search over the merging coefficients. We plan to explore methods such as Bayesian hyperparameter optimization (Snoek et al., 2015) to more efficiently find merging coefficients in the future.
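This search procedure can be sketched as follows: sample candidate coefficient vectors from a flat Dirichlet (which is uniform on the simplex) and merge per eq. 5 with each candidate. Function names are ours, and the per-parameter weighted average below is the diagonal Fisher-weighted form used throughout the paper.

```python
import numpy as np

def fisher_merge(checkpoints, fishers, coeffs, eps=1e-12):
    """N-way Fisher-weighted merge (eq. 5): for each parameter, a weighted
    average of the models' values with weights coeff_j * F_j, normalized
    across models. `checkpoints` and `fishers` are lists of per-model
    parameter lists; eps guards against all-zero Fisher entries."""
    merged = []
    for params, fs in zip(zip(*checkpoints), zip(*fishers)):
        num = sum(c * f * p for c, f, p in zip(coeffs, fs, params))
        den = sum(c * f for c, f in zip(coeffs, fs))
        merged.append(num / (den + eps))
    return merged

# Random search over merging coefficients: 2500 draws from a flat Dirichlet
# over 3 models; each row is a candidate coefficient vector summing to 1.
rng = np.random.default_rng(0)
candidate_coeffs = rng.dirichlet(alpha=np.ones(3), size=2500)
```

Each candidate merge is then evaluated on the target task's validation set, and the best-scoring coefficients are kept.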

We present our results in table A1. The three-way merges we explored led to slightly better results than the best two-way merges for their respective target tasks. The four-way merge we performed did slightly worse than the two-way merge baseline, which we suspect is due to the larger merging coefficient search space.

Target Task | Donor Tasks | Score | Best Single Donor
---|---|---|---
RTE | MNLI, QNLI | | (MNLI)
QQP | QNLI, CoLA | | (QNLI)
QQP | QNLI, CoLA, SQuAD | | (QNLI)

## Appendix C Domain-Adaptive Pre-training Details

We performed additional domain-adaptive pre-training on RoBERTa-base for 32,768 steps with a batch size of 32 using the Adam optimizer with a learning rate of 1e-5. We used the BioMed and CS splits of the public S2ORC dataset of abstracts and full-length papers (Lo et al., 2019). We note that Gururangan et al. (2020) used an internal version of S2ORC that includes additional papers that could not be released due to copyright issues.

## Appendix D Additional Tables

Task | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE
---|---|---|---|---|---|---|---|---
CoLA | | | | | | | |
SST-2 | | | | | | | |
MRPC | | | | | | | |
STS-B | | | | | | | |
QQP | | | | | | | |
MNLI | | | | | | | |
QNLI | | | | | | | |
RTE | | | | | | | |
SQuAD | | | | | | | |

Task | CoLA | SST-2 | MRPC | STS-B | QQP | MNLI | QNLI | RTE
---|---|---|---|---|---|---|---|---
CoLA | | | | | | | |
SST-2 | | | | | | | |
MRPC | | | | | | | |
STS-B | | | | | | | |
QQP | | | | | | | |
MNLI | | | | | | | |
QNLI | | | | | | | |
RTE | | | | | | | |

Task | CoLA | MRPC | STS-B | RTE
---|---|---|---|---
CoLA | | | |
SST-2 | | | |
MRPC | | | |
STS-B | | | |
QQP | | | |
MNLI | | | |
QNLI | | | |
RTE | | | |

Task | MRPC | STS-B | RTE
---|---|---|---
SST-2 | | |
MRPC | | |
STS-B | | |
QQP | | |
MNLI | | |
QNLI | | |
RTE | | |
SQuAD | | |

Task | MRPC | STS-B | RTE
---|---|---|---
SST-2 | | |
MRPC | | |
STS-B | | |
QQP | | |
MNLI | | |
QNLI | | |
RTE | | |

Task | MRPC | STS-B | RTE
---|---|---|---
MRPC | | |
STS-B | | |
RTE | | |

Task | Fisher | Isotropic | Sequential
---|---|---|---
SST-2 | | |
MRPC | | |
STS-B | | |
RTE | – | |

| 0 | 1e-6 | 3e-4 | 1e-2 | 1e-1
---|---|---|---|---|---
Orig. | | | | |
0 | | | | |
1e-6 | | | | |
3e-4 | | | | |
1e-2 | | | | |
1e-1 | | | | |

Epoch | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10
---|---|---|---|---|---|---|---|---|---|---
Unmerged | | | | | | | | | |
0.5 | | | | | | | | | |
1.0 | | | | | | | | | |
1.5 | | | | | | | | | |
2.0 | | | | | | | | | |
2.5 | | | | | | | | | |
3.0 | | | | | | | | | |
3.5 | | | | | | | | | |
4.0 | | | | | | | | | |

Examples | 256 | 1024 | 2490
---|---|---|---
256 | | |
1024 | | |
4096 | | |
32768 | | |
392702 | | |