Commit d2cba5e: documents update
1 parent f9281b0

8 files changed (+150 / -25 lines)


README.md (3 additions, 3 deletions)
@@ -4,7 +4,7 @@
 </h1>

 <h4 align="center">Official implementation of Conflict-Free Inverse Gradients Method</h4>
-<h6 align="center">Towards Conflict-free Training for everything!</h6>
+<h6 align="center">Towards Conflict-free Training for Everything and Everyone!</h6>

 <p align="center">
 [<a href="https://arxiv.org/abs/2312.05320">📄 Research Paper</a>][<a href="https://tum-pbs.github.io/ConFIG/">📖 Documentation & Examples</a>]

@@ -14,7 +14,7 @@

 * **What is the ConFIG method?**

-  The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimal of all losses** by providing a **conflict-free update direction.**
+  The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimum of all losses** by providing a **conflict-free update direction.**

 <p align="center">
 <img src="docs/assets/config_illustration.png" style="zoom: 33%;" />

@@ -35,7 +35,7 @@

 Then the dot product between $\boldsymbol{g}_{ConFIG}$ and each loss-specific gradient is always positive and equal, i.e., $`\boldsymbol{g}_{i}^{\top}\boldsymbol{g}_{ConFIG}=\boldsymbol{g}_{j}^{\top}\boldsymbol{g}_{ConFIG} > 0 \quad \forall i,j \in [1,m]`$.

-* **Is the ConFIG Computationally expensive?**
+* **Is the ConFIG computationally expensive?**

   Like many other gradient-based methods, ConFIG needs to calculate each loss's gradient in every optimization iteration, which could be computationally expensive when the number of losses increases. However, we also introduce a **momentum-based method** where we can reduce the computational cost **close to or even lower than a standard optimization procedure** with a slight degeneration in accuracy. This momentum-based method is also applied to another gradient-based method.
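The conflict-free property quoted in the diff above (equal, positive dot products between the update and every loss-specific gradient) can be sketched in a few lines of NumPy. This is a reconstruction from that stated property alone, not the ConFIG package's actual API; the function name `conflict_free_direction` is hypothetical:

```python
import numpy as np

def conflict_free_direction(grads):
    """Hypothetical sketch: a direction with equal, positive alignment
    to every row of `grads` (one loss-specific gradient per row)."""
    units = grads / np.linalg.norm(grads, axis=1, keepdims=True)
    # Solve units @ d = 1 via the pseudoinverse, so every unit gradient
    # has the same (positive) dot product with d.
    d = np.linalg.pinv(units) @ np.ones(grads.shape[0])
    d /= np.linalg.norm(d)
    # Scale by the summed projections so the step length adapts to the
    # magnitudes of the individual gradients.
    return (grads @ d).sum() * d

g1 = np.array([1.0, 0.2])
g2 = np.array([-0.5, 1.0])  # conflicts with g1 along the first axis
d = conflict_free_direction(np.stack([g1, g2]))
print(g1 @ d > 0, g2 @ d > 0)  # both projections are positive
```

A plain sum `g1 + g2` would still make progress here, but once gradients conflict more strongly the summed direction can have a negative dot product with one of them; the pseudoinverse construction above rules that out by design.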

docs/assets/config_white.png (4.2 KB)

docs/assets/config_white.svg (10 additions, 10 deletions)

docs/assets/download.svg (122 additions, 0 deletions)

docs/examples/mtl_toy.ipynb (5 additions, 4 deletions)
@@ -8,9 +8,8 @@
     "\n",
     "Here, we would like to show a classic and interesting toy example of multi-task learning (MTL). \n",
     "\n",
-    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/mtl_toy.ipynb\">\n",
-    "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
-    "</a>\n",
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/mtl_toy.ipynb)\n",
+    "[![Open Locally](../assets/download.svg)](https://github.com/tum-pbs/ConFIG/blob/main/docs/examples/mtl_toy.ipynb)\n",
     "\n",
     "In this example, there are two tasks represented by two loss functions, which are"

@@ -439,7 +438,9 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "The results are similar to ConFIG, but it needs more iterations to converge. You may notice that we give an additional 1000 optimization iterations for the momentum version. This is because we only update a single gradient direction every iteration, so it usually requires more iterations to get a similar or better performance than the ConFIG method. You can have a try by yourself to see the optimization trajectory. The acceleration of the momentum version is not so significant in this case since the backpropagation of gradients is not the main bottleneck of the optimization."
+    "The results are similar to ConFIG, but it needs more iterations to converge. You may notice that we give an additional 1000 optimization iterations for the momentum version. This is because we only update a single gradient direction every iteration, so it usually requires more iterations to get a similar or better performance than the ConFIG method. You can have a try by yourself to see the optimization trajectory. The acceleration of the momentum version is not so significant in this case since the backpropagation of gradients is not the main bottleneck of the optimization.\n",
+    "\n",
+    "Click [here](https://github.com/tum-pbs/ConFIG/tree/main/experiments/MTL) to have a check in the MTL experiment in our research paper."
    ]
   }
  ],

docs/examples/pinn_burgers.ipynb (5 additions, 4 deletions)
@@ -9,9 +9,8 @@
     "\n",
     "In this example, we would like to show you another example of how to use ConFIG method to train a physics informed neural network (PINN) for solving a PDE. \n",
     "\n",
-    "<a target=\"_blank\" href=\"https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb\">\n",
-    "  <img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/>\n",
-    "</a>\n",
+    "[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb)\n",
+    "[![Open Locally](../assets/download.svg)](https://github.com/tum-pbs/ConFIG/blob/main/docs/examples/pinn_burgers.ipynb)\n",
     "\n",
     "In this example, we will solve the 1D Burgers' equation:\n",
     "\n",

@@ -471,7 +470,9 @@
    "id": "bb56fffe",
    "metadata": {},
    "source": [
-    "As the result shows, both the training speed and test accuracy are improved by using the momentum version of the ConFIG method. Please note that the momentum version does not always guarantee a better performance than the non-momentum version. The main feature of the momentum version is the acceleration, as it only requires a single gradient update in each iteration. We usually will just give the momentum version more training epochs to improve the performance further."
+    "As the result shows, both the training speed and test accuracy are improved by using the momentum version of the ConFIG method. Please note that the momentum version does not always guarantee a better performance than the non-momentum version. The main feature of the momentum version is the acceleration, as it only requires a single gradient update in each iteration. We usually will just give the momentum version more training epochs to improve the performance further.\n",
+    "\n",
+    "Click [here](https://github.com/tum-pbs/ConFIG/tree/main/experiments/PINN) to have a check in the PINN experiment in our research paper."
    ]
   }
  ],

docs/index.md (3 additions, 3 deletions)
@@ -8,7 +8,7 @@ hide:
 <p align="center">
 <img src="./assets/config.png" width="400"/>
 </p>
-<h4 align="center">Towards Conflict-free Training for everything!</h4>
+<h4 align="center">Towards Conflict-free Training for Everything and Everyone!</h4>

 <p align="center">
 [ <a href="https://arxiv.org/abs/2312.05320">📄 Research Paper</a> ][ <a href="https://github.com/tum-pbs/ConFIG"><img src="./assets/github.svg" width="16"> GitHub Repository</a> ]

@@ -20,7 +20,7 @@ hide:

 * **What is the ConFIG method?**

-  The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimal of all losses** by providing a **conflict-free update direction.**
+  The conFIG method is a generic method for optimization problems involving **multiple loss terms** (e.g., Multi-task Learning, Continuous Learning, and Physics Informed Neural Networks). It prevents the optimization from getting stuck into a local minimum of a specific loss term due to the conflict between losses. On the contrary, it leads the optimization to the **shared minimum of all losses** by providing a **conflict-free update direction.**

 <p align="center">
 <img src="./assets/config_illustration.png" style="zoom: 33%;" />

@@ -41,7 +41,7 @@ $$

 Then the dot product between $\mathbf{g}_{ConFIG}$ and each loss-specific gradient is always positive and equal, i.e., $\mathbf{g}_{i}^{\top}\mathbf{g}_{ConFIG}=\mathbf{g}_{j}^{\top}\mathbf{g}_{ConFIG} > 0 \quad \forall i,j \in [1,m]$.

-* **Is the ConFIG Computationally expensive?**
+* **Is the ConFIG computationally expensive?**

   Like many other gradient-based methods, ConFIG needs to calculate each loss's gradient in every optimization iteration, which could be computationally expensive when the number of losses increases. However, we also introduce a **momentum-based method** where we can reduce the computational cost **close to or even lower than a standard optimization procedure** with a slight degeneration in accuracy. This momentum-based method is also applied to another gradient-based method.
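The momentum-based variant mentioned in the paragraph above can be illustrated with a minimal sketch. This is only an illustration of the stated idea, not the package's implementation, and `momentum_update` is a hypothetical name: keep a running estimate of each loss-specific gradient and refresh just one estimate with a fresh backward pass per iteration, so the per-iteration cost approaches that of single-loss training.

```python
import numpy as np

def momentum_update(ema_grads, fresh_grad, idx, beta=0.9):
    """Hypothetical sketch: refresh only loss `idx`'s gradient estimate.

    ema_grads: (m, n) array with one EMA gradient estimate per loss.
    fresh_grad: newly back-propagated gradient for loss `idx`.
    """
    out = ema_grads.copy()
    # Exponential moving average keeps stale information for the
    # m - 1 losses that were not re-evaluated this iteration.
    out[idx] = beta * out[idx] + (1.0 - beta) * fresh_grad
    return out

# Round-robin refresh: each iteration pays for a single backward pass,
# while an update direction can still be formed from all m estimates.
estimates = np.zeros((3, 4))
estimates = momentum_update(estimates, np.ones(4), idx=0)
```

The stale estimates explain the "slight degeneration in accuracy" noted above: the update direction is built from slightly outdated gradients in exchange for the cheaper iterations.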

mkdocs.yml (2 additions, 1 deletion)
@@ -15,6 +15,7 @@ theme:
     - toc.integrate # Table of contents is integrated on the left; does not appear separately on the right.
     - header.autohide # header disappears as you scroll
     - navigation.top
+    - navigation.footer
   palette:
     - scheme: default
       primary: brown

@@ -30,7 +31,7 @@ theme:
       name: Switch to light mode
   icon:
     repo: fontawesome/brands/github # GitHub logo in top right
-  logo: assets/config_white.svg
+  logo: assets/config_white.png
   favicon: assets/config_colorful.svg

 extra:

0 commit comments