FAMO: A Fast Optimization Method for Multitask Learning (MTL) that Mitigates Conflicting Gradients Using O(1) Space and Time


Summary

● Multitask learning (MTL) involves training a single model to perform multiple tasks simultaneously, which raises challenges in managing large models and in optimizing a shared set of parameters across tasks.

● Existing solutions mitigate the under-optimization of individual tasks through gradient manipulation techniques, which require computing and storing one gradient per task and can therefore be computationally expensive.

● The Fast Adaptive Multitask Optimization (FAMO) method dynamically adjusts task weights so that the loss decreases at a balanced rate across tasks, offering a computationally efficient approach to multitask optimization that avoids extensive per-task gradient computations.

● FAMO achieves this balanced loss decrease by updating the task weights from the observed change in the per-task log losses, amortizing the computation over time, and adding a regularization term that emphasizes recent updates; see the sketch after this list.

● Empirical experiments demonstrate FAMO’s ability to efficiently mitigate conflicting gradients, showing consistent performance improvements across diverse multitask learning scenarios.

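The weight-update mechanism described in the last two bullets can be summarized in a short PyTorch-style sketch. It is illustrative only: the class name `FAMOWeighter`, the hyperparameters `w_lr` and `gamma`, and the exact normalization are assumptions for this post, not the authors' reference implementation.

```python
import torch
import torch.nn.functional as F


class FAMOWeighter:
    """Illustrative FAMO-style task weighting.

    Keeps one logit per task and turns the observed change in the per-task
    log losses into an update of those logits, so no per-task gradients
    through the model are ever computed or stored.
    """

    def __init__(self, n_tasks: int, w_lr: float = 0.025, gamma: float = 0.001):
        # Task logits; softmax(w) gives the task weights.
        self.w = torch.zeros(n_tasks, requires_grad=True)
        # weight_decay plays the role of the regularizer that biases the
        # weights toward recent updates.
        self.w_opt = torch.optim.Adam([self.w], lr=w_lr, weight_decay=gamma)
        self.prev_losses = None

    def weighted_loss(self, losses: torch.Tensor) -> torch.Tensor:
        """losses: 1-D tensor of current per-task losses (assumed positive)."""
        self.prev_losses = losses.detach()
        z = F.softmax(self.w, dim=-1).detach()   # current task weights
        c = (z / self.prev_losses).sum()         # normalizer
        # Minimizing this surrogate pushes all log losses down at a
        # comparable rate (a balanced loss decrease across tasks).
        return (z * losses.log()).sum() / c

    def update(self, new_losses: torch.Tensor) -> None:
        """Call after the model step, with the re-evaluated per-task losses."""
        # The observed change in log losses drives the logit update.
        delta = self.prev_losses.log() - new_losses.detach().log()
        z = F.softmax(self.w, dim=-1)
        grad = torch.autograd.grad(z, self.w, grad_outputs=delta)[0]
        self.w_opt.zero_grad()
        self.w.grad = grad
        self.w_opt.step()
```

A training step under this sketch would compute the per-task losses, backpropagate `weighted_loss(losses)`, step the model optimizer, re-evaluate the losses under `torch.no_grad()`, and call `update(new_losses)`. Only two vectors of scalar losses are kept between steps, which is where the O(1) space and time claim comes from relative to methods that store one gradient per task.
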
Author: Mahmoud Ghorbel
Source: link
