[opt_transport] Replace np.sum(a * b) with np.vdot(a, b) (#475)
mmcky merged 2 commits into QuantEcon:main
Conversation
Hi @suda-yuga, thanks so much for your PR! This is a nice application of
Thank you for your comment. You're absolutely right — in this case, np.sum(a * b) is arguably more transparent and easier to read when working with 2-D matrices. The motivation for this change came from prior discussions (#463) suggesting that a @ b or np.vdot(a, b) can be more efficient than np.sum(a * b), since they avoid materializing the intermediate array and can leverage BLAS-level optimizations. I applied the same reasoning to the Frobenius inner product of matrices here. That said, I agree with you: clarity matters, and np.sum(a * b) may better communicate the intent in this specific context. I'm happy to revert or revise the change depending on what's preferred for readability.
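For readers following along, the equivalence being discussed can be sketched as below. The arrays here are illustrative placeholders, not the lecture's actual data; the point is that all three expressions compute the same Frobenius inner product, but np.vdot and the flattened @ avoid forming the intermediate array A * B.

```python
import numpy as np

# Illustrative 2-D arrays (hypothetical, standing in for the lecture's matrices)
rng = np.random.default_rng(0)
A = rng.random((3, 4))
B = rng.random((3, 4))

# Frobenius inner product, three equivalent ways:
s1 = np.sum(A * B)          # explicit: forms the intermediate array A * B, then sums
s2 = np.vdot(A, B)          # flattens both arrays and takes a dot product, no intermediate
s3 = A.ravel() @ B.ravel()  # same dot product, spelled with @ on flattened views

print(np.allclose(s1, s2), np.allclose(s1, s3))
```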
Thanks @suda-yuga! I'm happy either way. I just feel that it's less obvious to readers what we're trying to compute here, unlike the more straightforward cases where we have two 1-D arrays. For the sake of presentation, I'm inclined to close this PR, though I really like what you've done!

Hi @mmcky, I will leave it to you to make the call!
Maybe write something like "Here we use

Or "Here we use
Thanks @oyamad for the suggestion. I think being explicit around the use of
Thanks @HumphreyYang and @oyamad for your valuable feedback and review. Thanks @suda-yuga for the PR.
Referenced: Replace np.sum(a * b) with np.vdot(a, b) (#463)