Commit 9f2ad57

fix README and doc website
1 parent bc7592f commit 9f2ad57

2 files changed: +5 −4 lines changed

.github/workflows/build_docs.yml

Lines changed: 1 addition & 1 deletion

@@ -32,4 +32,4 @@ jobs:
           restore-keys: |
             mkdocs-material-
       - run: pip install mkdocs-material
-      - run: mkdocs gh-deploy --force
+      - run: mkdocs gh-deploy --force --no-strict
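For reference, the updated deploy step can be reproduced outside CI; this is a sketch assuming Python and pip are available, mirroring the two workflow commands above:

```shell
# Install the docs theme the workflow depends on
pip install mkdocs-material
# Build and push the site to the gh-pages branch; --no-strict disables
# strict mode, so MkDocs warnings no longer abort the deploy
mkdocs gh-deploy --force --no-strict
```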

README.md

Lines changed: 4 additions & 3 deletions

@@ -19,18 +19,18 @@
 
 * **How does the ConFIG work?**
 
-The ConFIG method obtains the conflict-free direction by calculating the inverse of the loss-specific gradients matrix:
+The ConFIG method obtains the conflict-free direction by calculating the inverse of the loss-specific gradients matrix:
 
 $$
-\boldsymbol{g}_{\text{ConFIG}}=\left(\sum_{i=1}^m \boldsymbol{g}_i^\top\boldsymbol{g}_u\right)\boldsymbol{g}_u,
+\boldsymbol{g}_{ConFIG}=\left(\sum_{i=1}^m \boldsymbol{g}_i^\top\boldsymbol{g}_u\right)\boldsymbol{g}_u,
 $$
 
 $$
 \boldsymbol{g}_u = \mathcal{U}\left[
 [\mathcal{U}(\boldsymbol{g}_1),\mathcal{U}(\boldsymbol{g}_2),\cdots, \mathcal{U}(\boldsymbol{g}_m)]^{-\top} \mathbf{1}_m\right].
 $$
 
-Then the dot product between $\boldsymbol{g}_{\text{ConFIG}}$ and each loss-specific gradient is always positive and equal, i.e.. $\boldsymbol{g}_i^\top\boldsymbol{g}_{\text{ConFIG}}=\boldsymbol{g}_i^\top\boldsymbol{g}_{\text{ConFIG}} \quad \forall i,j \in [1,m]$.
+Then the dot product between $\boldsymbol{g}_{ConFIG}$ and each loss-specific gradient is always positive and equal, i.e., $\boldsymbol{g}_i^\top\boldsymbol{g}_{ConFIG}=\boldsymbol{g}_i^\top\boldsymbol{g}_{ConFIG} \quad \forall i,j \in [1,m]$.
 
 * **Is the ConFIG Computationally expensive?**
 
@@ -45,6 +45,7 @@ Then the dot product between $\boldsymbol{g}_{\text{ConFIG}}$ and each loss-spec
 <img src="./docs/assets/TUM.svg" width="16"> Technical University of Munich
 <img src="./docs/assets/PKU.svg" width="16"> Peking University
 </h6>
+
 ***Abstract:*** The loss functions of many learning problems contain multiple additive terms that can disagree and yield conflicting update directions. For Physics-Informed Neural Networks (PINNs), loss terms on initial/boundary conditions and physics equations are particularly interesting as they are well-established as highly difficult tasks. To improve learning the challenging multi-objective task posed by PINNs, we propose the ConFIG method, which provides conflict-free updates by ensuring a positive dot product between the final update and each loss-specific gradient. It also maintains consistent optimization rates for all loss terms and dynamically adjusts gradient magnitudes based on conflict levels. We additionally leverage momentum to accelerate optimizations by alternating the back-propagation of different loss terms. The proposed method is evaluated across a range of challenging PINN scenarios, consistently showing superior performance and runtime compared to baseline methods. We also test the proposed method in a classic multi-task benchmark, where the ConFIG method likewise exhibits a highly promising performance.
 
 ***Read from:*** [[Arxiv](https://arxiv.org/abs/2312.05320)]
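The conflict-free direction defined by the two README formulas above can be sketched in a few lines of NumPy. This is an illustrative implementation, not the ConFIG library's actual API; the names `unit` and `config_direction` are made up for this sketch:

```python
import numpy as np

def unit(x):
    """U(.) from the README: rescale a vector to unit length."""
    return x / np.linalg.norm(x)

def config_direction(grads):
    """Conflict-free update direction from a list of loss-specific gradients.

    Illustrative sketch of the README formulas; `grads` is a list of 1-D
    arrays, one gradient per loss term.
    """
    # m x n matrix whose rows are the unit loss-specific gradients
    G = np.stack([unit(g) for g in grads])
    # Solve G v = 1_m via the pseudoinverse: every unit gradient then has
    # the same (positive) dot product with v, so no term is in conflict.
    v = np.linalg.pinv(G) @ np.ones(len(grads))
    g_u = unit(v)
    # Rescale by the summed projections of the raw gradients onto g_u.
    scale = sum(float(g @ g_u) for g in grads)
    return scale * g_u

# Two mildly conflicting 2-D gradients
g1 = np.array([1.0, 0.2])
g2 = np.array([0.2, 1.0])
g = config_direction([g1, g2])
assert g @ g1 > 0 and g @ g2 > 0  # positive alignment with both losses
assert np.isclose(unit(g) @ unit(g1), unit(g) @ unit(g2))  # equal alignment
```

The final asserts check exactly the property the README sentence states: the update has a positive and equal dot product with each (unit) loss-specific gradient.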
