No trailing return type cpp-linter suggestions #32

Merged
ewanwm merged 1 commit into main from workflow_no_trailing_return_type on Jul 25, 2024

Conversation

@ewanwm (Owner) commented on Jul 25, 2024

No description provided.

@ewanwm ewanwm merged commit 59b5379 into main Jul 25, 2024
3 checks passed
@ewanwm ewanwm deleted the workflow_no_trailing_return_type branch July 25, 2024 22:17
Contributor

Cpp-Linter Report ⚠️

Some files did not pass the configured checks!

clang-tidy reports: 106 concern(s)
  • tests/two-flavour-const-matter.cpp:9:5: warning: [readability-isolate-declaration]

    multiple declarations in a single statement reduces readability

        float m1 = 1.0, m2 = 2.0;
        ^~~~~~~~~~~~~~~~~~~~~~~~~
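
    A minimal sketch of the isolated form the check suggests, reusing the names and values from the snippet above:

        float m1 = 1.0; // one declaration per statement
        float m2 = 2.0;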
  • tests/two-flavour-const-matter.cpp:31:5: warning: [cppcoreguidelines-pro-type-member-init]

    uninitialized record type: 'bargerProp'

        TwoFlavourBarger bargerProp;
        ^
                                   {}
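
    Applied with the suggested braces, assuming TwoFlavourBarger has no user-provided default constructor that already sets its members:

        TwoFlavourBarger bargerProp{}; // value-initialisation gives the members a determinate starting state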
  • tests/two-flavour-const-matter.cpp:53:9: warning: [readability-isolate-declaration]

    multiple declarations in a single statement reduces readability

            Tensor eigenVals, eigenVecs;
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
  • tests/two-flavour-const-matter.cpp:62:9: warning: [modernize-use-auto]

    use auto when initializing with a template cast to avoid duplicating the type name

            float calcV1 = eigenVals.getValue<float>({0, 0});
            ^~~~~
            auto
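
    Applied, the initialisation reads:

            auto calcV1 = eigenVals.getValue<float>({0, 0}); // float, deduced from the explicit template argument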
  • tests/two-flavour-const-matter.cpp:63:9: warning: [modernize-use-auto]

    use auto when initializing with a template cast to avoid duplicating the type name

            float calcV2 = eigenVals.getValue<float>({0, 1});
            ^~~~~
            auto
  • tests/barger.cpp:13:5: warning: [cppcoreguidelines-pro-type-member-init]

    uninitialized record type: 'bargerProp'

        TwoFlavourBarger bargerProp;
        ^
                                   {}
  • tests/tensor-basic.cpp:42:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({0, 0}, std::complex<float>(0.0j));
                                                           ^  ~
                                                              J
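
    The autofix only uppercases the suffix; since 'j' is a compiler imaginary-literal extension, an alternative sketch is to pass the real and imaginary parts explicitly, which needs no suffix at all (assuming a purely imaginary value is the intent):

        tensorComplex.setValue({0, 0}, std::complex<float>(0.0F, 0.0F)); // (real, imaginary)
        tensorComplex.setValue({0, 1}, std::complex<float>(0.0F, 1.0F));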
  • tests/tensor-basic.cpp:43:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({0, 1}, std::complex<float>(1.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:44:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({0, 2}, std::complex<float>(2.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:46:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({1, 0}, std::complex<float>(3.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:47:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({1, 1}, std::complex<float>(4.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:48:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({1, 2}, std::complex<float>(5.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:50:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({2, 0}, std::complex<float>(6.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:51:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({2, 1}, std::complex<float>(7.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:52:56: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        tensorComplex.setValue({2, 2}, std::complex<float>(8.0j));
                                                           ^  ~
                                                              J
  • tests/tensor-basic.cpp:122:64: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        complexGradTest.setValue({0, 0}, std::complex<float>(0.0 + 0.0j));
                                                                   ^  ~
                                                                      J
  • tests/tensor-basic.cpp:123:64: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        complexGradTest.setValue({0, 1}, std::complex<float>(0.0 + 1.0j));
                                                                   ^  ~
                                                                      J
  • tests/tensor-basic.cpp:124:64: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        complexGradTest.setValue({1, 0}, std::complex<float>(1.0 + 0.0j));
                                                                   ^  ~
                                                                      J
  • tests/tensor-basic.cpp:125:64: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

        complexGradTest.setValue({1, 1}, std::complex<float>(1.0 + 1.0j));
                                                                   ^  ~
                                                                      J
  • tests/two-flavour-vacuum.cpp:9:5: warning: [readability-isolate-declaration]

    multiple declarations in a single statement reduces readability

        float m1 = 0.1, m2 = 0.5;
        ^~~~~~~~~~~~~~~~~~~~~~~~~
  • tests/two-flavour-vacuum.cpp:28:5: warning: [cppcoreguidelines-pro-type-member-init]

    uninitialized record type: 'bargerProp'

        TwoFlavourBarger bargerProp;
        ^
                                   {}
  • nuTens/tensors/torch-tensor.cpp:5:49: warning: [cppcoreguidelines-avoid-non-const-global-variables]

    variable 'scalarTypeMap' is non-const and globally accessible, consider making it const

    std::map<NTdtypes::scalarType, c10::ScalarType> scalarTypeMap = {{NTdtypes::kFloat, torch::kFloat},
                                                    ^
  • nuTens/tensors/torch-tensor.cpp:11:49: warning: [cppcoreguidelines-avoid-non-const-global-variables]

    variable 'deviceTypeMap' is non-const and globally accessible, consider making it const

    std::map<NTdtypes::deviceType, c10::DeviceType> deviceTypeMap = {{NTdtypes::kCPU, torch::kCPU},
                                                    ^
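
    A sketch of the const form suggested for these lookup tables, showing only the first entry from the snippet; lookups would then go through .at() rather than operator[], since operator[] is non-const:

    const std::map<NTdtypes::scalarType, c10::ScalarType> scalarTypeMap = {{NTdtypes::kFloat, torch::kFloat}};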
  • nuTens/tensors/torch-tensor.cpp:71:5: warning: [modernize-loop-convert]

    use range-based for loop instead

        for (size_t i = 0; i < indices.size(); i++)
        ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            (const auto & indice : indices)
  • nuTens/tensors/torch-tensor.cpp:73:62: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (const int *index = std::get_if<int>(&indices[i]))
                                                                 ^
                                                                  {
  • nuTens/tensors/torch-tensor.cpp:74:24: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

                indicesVec.push_back(at::indexing::TensorIndex(*index));
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~      ~
                           emplace_back(
  • nuTens/tensors/torch-tensor.cpp:75:83: warning: [readability-braces-around-statements]

    statement should be inside braces

            else if (const std::string *index = std::get_if<std::string>(&indices[i]))
                                                                                      ^
                                                                                       {
  • nuTens/tensors/torch-tensor.cpp:76:24: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

                indicesVec.push_back(at::indexing::TensorIndex((*index).c_str()));
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                ~
                           emplace_back(
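
    Taken together, the range-for, braces, and emplace_back suggestions for this loop come out roughly as below. Treating `indices` as a container of std::variant<int, std::string> is inferred from the std::get_if calls, and `indicesVec` as a vector of at::indexing::TensorIndex from the push_back calls; both are assumptions:

        for (const auto &indexVariant : indices)
        {
            if (const int *index = std::get_if<int>(&indexVariant))
            {
                indicesVec.emplace_back(*index); // constructs the TensorIndex in place
            }
            else if (const std::string *index = std::get_if<std::string>(&indexVariant))
            {
                indicesVec.emplace_back(index->c_str());
            }
        }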
  • nuTens/tensors/torch-tensor.cpp:97:5: warning: [modernize-loop-convert]

    use range-based for loop instead

        for (size_t i = 0; i < indices.size(); i++)
        ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            (const auto & indice : indices)
  • nuTens/tensors/torch-tensor.cpp:99:62: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (const int *index = std::get_if<int>(&indices[i]))
                                                                 ^
                                                                  {
  • nuTens/tensors/torch-tensor.cpp:100:24: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

                indicesVec.push_back(at::indexing::TensorIndex(*index));
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~      ~
                           emplace_back(
  • nuTens/tensors/torch-tensor.cpp:101:83: warning: [readability-braces-around-statements]

    statement should be inside braces

            else if (const std::string *index = std::get_if<std::string>(&indices[i]))
                                                                                      ^
                                                                                       {
  • nuTens/tensors/torch-tensor.cpp:102:24: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

                indicesVec.push_back(at::indexing::TensorIndex((*index).c_str()));
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~                ~
                           emplace_back(
  • nuTens/tensors/torch-tensor.cpp:116:5: warning: [modernize-loop-convert]

    use range-based for loop instead

        for (size_t i = 0; i < indices.size(); i++)
        ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            (int indice : indices)
  • nuTens/tensors/torch-tensor.cpp:118:20: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

            indicesVec.push_back(at::indexing::TensorIndex(indices[i]));
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~          ~
                       emplace_back(
  • nuTens/tensors/torch-tensor.cpp:127:5: warning: [modernize-loop-convert]

    use range-based for loop instead

        for (size_t i = 0; i < indices.size(); i++)
        ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
            (int indice : indices)
  • nuTens/tensors/torch-tensor.cpp:129:20: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

            indicesVec.push_back(at::indexing::TensorIndex(indices[i]));
                       ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~          ~
                       emplace_back(
  • nuTens/propagator/propagator.cpp:9:9: warning: [readability-isolate-declaration]

    multiple declarations in a single statement reduces readability

            Tensor eigenVals, eigenVecs;
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~
  • nuTens/propagator/propagator.cpp:17:5: warning: [readability-else-after-return]

    do not use 'else' after 'return'

        else
        ^~~~
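
    A standalone illustration of the fix pattern (placeholder function, not the project's code, which the report does not show in full):

    inline float signOf(float x)
    {
        if (x < 0.0F)
        {
            return -1.0F;
        }
        return 1.0F; // no 'else' needed once the branch above has returned
    }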
  • nuTens/propagator/propagator.cpp:30:58: warning: [readability-uppercase-literal-suffix]

    floating point literal has suffix 'j', which is not uppercase

            Tensor::exp(Tensor::div(Tensor::scale(massesSq, -1.0j * _baseline), Tensor::scale(energies, 2.0)));
                                                             ^  ~
                                                                J
  • nuTens/propagator/propagator.cpp:30:63: error: [clang-diagnostic-error]

    implicit conversion from '_Complex double' to 'float' is not permitted in C++

            Tensor::exp(Tensor::div(Tensor::scale(massesSq, -1.0j * _baseline), Tensor::scale(energies, 2.0)));
                                                                  ^
  • nuTens/propagator/propagator.cpp:30:101: warning: [cppcoreguidelines-avoid-magic-numbers,readability-magic-numbers]

    2.0 is a magic number; consider replacing it with a named constant

            Tensor::exp(Tensor::div(Tensor::scale(massesSq, -1.0j * _baseline), Tensor::scale(energies, 2.0)));
                                                                                                        ^
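
    A sketch of constants that would address both the non-standard imaginary literal behind the error and the bare 2.0 flagged here; the names are hypothetical, and whether Tensor::scale accepts a std::complex<float> factor is an assumption the report does not confirm:

            const std::complex<float> minusI(0.0F, -1.0F); // replaces the GNU '-1.0j' imaginary literal
            constexpr float energyDenominator = 2.0F;      // named constant for the flagged 2.0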
  • tests/barger-propagator.hpp:3:10: warning: [modernize-deprecated-headers]

    inclusion of deprecated C++ header 'math.h'; consider using 'cmath' instead

    #include <math.h>
             ^~~~~~~~
             <cmath>
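
    Applied:

    #include <cmath> // standard C++ header; the math functions live in namespace std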
  • tests/barger-propagator.hpp:38:18: warning: [readability-make-member-function-const]

    method 'lv' can be made const

        inline float lv(float energy)
                     ^
                                      const
  • tests/barger-propagator.hpp:44:18: warning: [readability-make-member-function-const]

    method 'lm' can be made const

        inline float lm()
                     ^
                          const
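
    The same two signatures with the suggested trailing const (shown as declarations; the existing bodies, not included in the report, would be unchanged):

        inline float lv(float energy) const;
        inline float lm() const;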
  • tests/barger-propagator.hpp:52:28: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (_density > 0.0)
                               ^
                                {
  • tests/barger-propagator.hpp:54:9: warning: [readability-else-after-return]

    do not use 'else' after 'return'

            else
            ^~~~
                        return _theta
  • tests/barger-propagator.hpp:54:13: warning: [readability-braces-around-statements]

    statement should be inside braces

            else
                ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/barger-propagator.hpp:61:28: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (_density > 0.0)
                               ^
                                {
  • tests/barger-propagator.hpp:64:9: warning: [readability-else-after-return]

    do not use 'else' after 'return'

            else
            ^~~~
                        return (_m1 * _m1 - _m2 * _m2)
  • tests/barger-propagator.hpp:64:13: warning: [readability-braces-around-statements]

    statement should be inside braces

            else
                ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/barger-propagator.hpp:83:37: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (alpha == 0 && beta == 0)
                                        ^
                                         {
  • tests/barger-propagator.hpp:84:13: warning: [bugprone-branch-clone]

    repeated branch in conditional chain

                return std::cos(gamma);
                ^
    /home/runner/work/nuTens/nuTens/tests/barger-propagator.hpp:84:35: note: end of the original
                return std::cos(gamma);
                                      ^
    /home/runner/work/nuTens/nuTens/tests/barger-propagator.hpp:86:13: note: clone 1 starts here
                return std::cos(gamma);
                ^
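
    Since both flagged branches return the same expression, the usual fix is to merge their conditions; a sketch using the names from the snippets, with the remaining else-if branches elided:

            if ((alpha == 0 && beta == 0) || (alpha == 1 && beta == 1))
            {
                return std::cos(gamma);
            }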
  • tests/barger-propagator.hpp:85:9: warning: [readability-else-after-return]

    do not use 'else' after 'return'

            else if (alpha == 1 && beta == 1)
            ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
  • tests/barger-propagator.hpp:85:42: warning: [readability-braces-around-statements]

    statement should be inside braces

            else if (alpha == 1 && beta == 1)
                                             ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/barger-propagator.hpp:87:42: warning: [readability-braces-around-statements]

    statement should be inside braces

            else if (alpha == 0 && beta == 1)
                                             ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/barger-propagator.hpp:89:42: warning: [readability-braces-around-statements]

    statement should be inside braces

            else if (alpha == 1 && beta == 0)
                                             ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/barger-propagator.hpp:120:27: warning: [bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions]

    narrowing conversion from 'double' to 'float'

            float sin2Gamma = std::sin(2.0 * gamma);
                              ^
  • tests/barger-propagator.hpp:120:36: warning: [cppcoreguidelines-avoid-magic-numbers,readability-magic-numbers]

    2.0 is a magic number; consider replacing it with a named constant

            float sin2Gamma = std::sin(2.0 * gamma);
                                       ^
  • tests/barger-propagator.hpp:121:24: warning: [bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions]

    narrowing conversion from 'double' to 'float'

            float sinPhi = std::sin(dM2 * _baseline / (4.0 * energy));
                           ^
  • tests/barger-propagator.hpp:121:52: warning: [cppcoreguidelines-avoid-magic-numbers,readability-magic-numbers]

    4.0 is a magic number; consider replacing it with a named constant

            float sinPhi = std::sin(dM2 * _baseline / (4.0 * energy));
                                                       ^
  • tests/barger-propagator.hpp:124:24: warning: [bugprone-narrowing-conversions,cppcoreguidelines-narrowing-conversions]

    narrowing conversion from 'double' to 'float'

            float onAxis = 1.0 - offAxis;
                           ^
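
    For the narrowing warnings the usual fix is to keep the arithmetic in float (or accept double throughout); a sketch with float literals and a named constant, where the constant name is hypothetical and assumes dM2, _baseline, and energy are floats:

            constexpr float oscillationPhaseDenominator = 4.0F;
            float sinPhi = std::sin(dM2 * _baseline / (oscillationPhaseDenominator * energy)); // all-float arithmetic, no double-to-float narrowing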
  • tests/barger-propagator.hpp:126:27: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (alpha == beta)
                              ^
                               {
  • tests/barger-propagator.hpp:128:9: warning: [readability-else-after-return]

    do not use 'else' after 'return'

            else
            ^~~~
                        return offAxis
  • tests/barger-propagator.hpp:128:13: warning: [readability-braces-around-statements]

    statement should be inside braces

            else
                ^
    note: this fix will not be applied because it overlaps with another fix
  • tests/test-utils.hpp:3:10: warning: [modernize-deprecated-headers]

    inclusion of deprecated C++ header 'math.h'; consider using 'cmath' instead

    #include <math.h>
             ^~~~~~~~
             <cmath>
  • tests/test-utils.hpp:28:9: warning: [cppcoreguidelines-macro-usage]

    function-like macro 'TEST_EXPECTED' used; consider a 'constexpr' template function

    #define TEST_EXPECTED(value, expectation, varName, threshold)                                                          \
            ^
  • tests/test-utils.hpp:30:57: warning: [bugprone-macro-parentheses]

    macro argument should be enclosed in parentheses

            if (Testing::relativeDiff(value, expectation) > threshold)                                                     \
                                                            ^
                                                            (        )
  • tests/test-utils.hpp:32:36: warning: [bugprone-macro-parentheses]

    macro argument should be enclosed in parentheses

                std::cerr << "bad " << varName << std::endl;                                                               \
                                       ^
                                       (      )
  • tests/test-utils.hpp:33:37: warning: [bugprone-macro-parentheses]

    macro argument should be enclosed in parentheses

                std::cerr << "Got: " << value;                                                                             \
                                        ^
                                        (    )
  • tests/test-utils.hpp:34:44: warning: [bugprone-macro-parentheses]

    macro argument should be enclosed in parentheses

                std::cerr << "; Expected: " << expectation;                                                                \
                                               ^
                                               (          )
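
    One way to follow the macro-usage suggestion is a small function template; this sketch keeps the same inputs (value, expectation, a name for messages, and a threshold) but is illustrative rather than a drop-in replacement, since the macro's full body is not shown in the report and the header is assumed to already include <iostream> and <string>:

    template <typename T>
    bool testExpected(T value, T expectation, const std::string &varName, T threshold)
    {
        if (Testing::relativeDiff(value, expectation) > threshold)
        {
            std::cerr << "bad " << varName << "; Got: " << value << "; Expected: " << expectation << std::endl;
            return false;
        }
        return true;
    }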
  • nuTens/logging.hpp:12:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_TRACE' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_TRACE 0
            ^
  • nuTens/logging.hpp:13:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_DEBUG' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_DEBUG 1
            ^
  • nuTens/logging.hpp:14:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_INFO' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_INFO 2
            ^
  • nuTens/logging.hpp:15:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_WARNING' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_WARNING 3
            ^
  • nuTens/logging.hpp:16:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_ERROR' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_ERROR 4
            ^
  • nuTens/logging.hpp:17:9: warning: [cppcoreguidelines-macro-usage]

    macro 'NT_LOG_LEVEL_SILENT' used to declare a constant; consider using a 'constexpr' constant

    #define NT_LOG_LEVEL_SILENT 5
            ^
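
    For the plain numeric log levels the constexpr alternative is direct, assuming the values are not consumed by #if preprocessor checks (if they are, they have to stay macros); SPDLOG_ACTIVE_LEVEL in the next entry is different, since spdlog reads it as a preprocessor macro:

    constexpr int NT_LOG_LEVEL_TRACE = 0; // same values as the macros above
    constexpr int NT_LOG_LEVEL_DEBUG = 1;
    constexpr int NT_LOG_LEVEL_INFO = 2;
    constexpr int NT_LOG_LEVEL_WARNING = 3;
    constexpr int NT_LOG_LEVEL_ERROR = 4;
    constexpr int NT_LOG_LEVEL_SILENT = 5;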
  • nuTens/logging.hpp:21:9: warning: [cppcoreguidelines-macro-usage]

    macro 'SPDLOG_ACTIVE_LEVEL' used to declare a constant; consider using a 'constexpr' constant

    #define SPDLOG_ACTIVE_LEVEL SPDLOG_LEVEL_TRACE
            ^
  • nuTens/logging.hpp:41:10: error: [clang-diagnostic-error]

    'spdlog/spdlog.h' file not found

    #include "spdlog/spdlog.h"
             ^
  • nuTens/logging.hpp:48:34: warning: [cppcoreguidelines-avoid-non-const-global-variables]

    variable 'runtimeLogLevel' is non-const and globally accessible, consider making it const

    static spdlog::level::level_enum runtimeLogLevel = spdlog::level::trace;
                                     ^
  • nuTens/logging.hpp:67:23: warning: [cppcoreguidelines-avoid-non-const-global-variables]

    variable 'once' is non-const and globally accessible, consider making it const

    static std::once_flag once;
                          ^
  • nuTens/logging.hpp:84:9: warning: [cppcoreguidelines-macro-usage]

    variadic macro 'NT_TRACE' used; consider using a 'constexpr' variadic template function

    #define NT_TRACE(...)                                                                                                  \
            ^
  • nuTens/logging.hpp:92:9: warning: [cppcoreguidelines-macro-usage]

    variadic macro 'NT_DEBUG' used; consider using a 'constexpr' variadic template function

    #define NT_DEBUG(...)                                                                                                  \
            ^
  • nuTens/logging.hpp:100:9: warning: [cppcoreguidelines-macro-usage]

    variadic macro 'NT_INFO' used; consider using a 'constexpr' variadic template function

    #define NT_INFO(...)                                                                                                   \
            ^
  • nuTens/logging.hpp:108:9: warning: [cppcoreguidelines-macro-usage]

    variadic macro 'NT_WARN' used; consider using a 'constexpr' variadic template function

    #define NT_WARN(...)                                                                                                   \
            ^
  • nuTens/logging.hpp:116:9: warning: [cppcoreguidelines-macro-usage]

    variadic macro 'NT_ERROR' used; consider using a 'constexpr' variadic template function

    #define NT_ERROR(...)                                                                                                  \
            ^
  • nuTens/nuTens-pch.hpp:3:10: warning: [modernize-deprecated-headers]

    inclusion of deprecated C++ header 'math.h'; consider using 'cmath' instead

    #include <math.h>
             ^~~~~~~~
             <cmath>
  • nuTens/tensors/tensor.hpp:195:5: warning: [modernize-use-nodiscard]

    function 'real' should be marked [[nodiscard]]

        Tensor real() const;
        ^
        [[nodiscard]] 
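
    Applied, the declaration reads (the same one-word fix covers the other nodiscard entries below):

        [[nodiscard]] Tensor real() const; // the caller must not silently discard the returned Tensor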
  • nuTens/tensors/tensor.hpp:197:5: warning: [modernize-use-nodiscard]

    function 'imag' should be marked [[nodiscard]]

        Tensor imag() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:200:5: warning: [modernize-use-nodiscard]

    function 'conj' should be marked [[nodiscard]]

        Tensor conj() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:202:5: warning: [modernize-use-nodiscard]

    function 'abs' should be marked [[nodiscard]]

        Tensor abs() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:204:5: warning: [modernize-use-nodiscard]

    function 'angle' should be marked [[nodiscard]]

        Tensor angle() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:208:5: warning: [modernize-use-nodiscard]

    function 'cumsum' should be marked [[nodiscard]]

        Tensor cumsum(int dim) const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:211:5: warning: [modernize-use-nodiscard]

    function 'sum' should be marked [[nodiscard]]

        Tensor sum() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:222:5: warning: [modernize-use-nodiscard]

    function 'grad' should be marked [[nodiscard]]

        Tensor grad() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:247:5: warning: [modernize-use-nodiscard]

    function 'toString' should be marked [[nodiscard]]

        std::string toString() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:259:5: warning: [modernize-use-nodiscard]

    function 'getValue' should be marked [[nodiscard]]

        Tensor getValue(const std::vector<std::variant<int, std::string>> &indices) const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:262:5: warning: [modernize-use-nodiscard]

    function 'getNdim' should be marked [[nodiscard]]

        size_t getNdim() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:265:5: warning: [modernize-use-nodiscard]

    function 'getBatchDim' should be marked [[nodiscard]]

        int getBatchDim() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:268:5: warning: [modernize-use-nodiscard]

    function 'getShape' should be marked [[nodiscard]]

        std::vector<int> getShape() const;
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:277:9: warning: [modernize-loop-convert]

    use range-based for loop instead

            for (size_t i = 0; i < indices.size(); i++)
            ^   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
                (int indice : indices)
  • nuTens/tensors/tensor.hpp:279:24: warning: [modernize-use-emplace]

    use emplace_back instead of push_back

                indicesVec.push_back(at::indexing::TensorIndex(indices[i]));
                           ^~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~          ~
                           emplace_back(
  • nuTens/tensors/tensor.hpp:296:3: warning: [readability-redundant-access-specifiers]

    redundant access specifier has the same accessibility as the previous access specifier

      public:
      ^~~~~~~
  • nuTens/tensors/tensor.hpp:297:5: warning: [modernize-use-nodiscard]

    function 'getTensor' should be marked [[nodiscard]]

        inline const torch::Tensor &getTensor() const
        ^
        [[nodiscard]] 
  • nuTens/tensors/tensor.hpp:303:19: warning: [cppcoreguidelines-non-private-member-variables-in-classes]

    member variable '_tensor' has protected visibility

        torch::Tensor _tensor;
                      ^
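
    A sketch of the usual fix, assuming derived classes only need read access (which the report does not confirm) and can use the getTensor() accessor from the previous entry:

      private:
        torch::Tensor _tensor; // accessed via getTensor() instead of directly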
  • nuTens/propagator/propagator.hpp:34:5: warning: [modernize-use-nodiscard]

    function 'calculateProbs' should be marked [[nodiscard]]

        Tensor calculateProbs(const Tensor &energies) const;
        ^
        [[nodiscard]] 
  • nuTens/propagator/propagator.hpp:60:38: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (_matterSolver != nullptr)
                                         ^
                                          {
  • nuTens/propagator/propagator.hpp:69:38: warning: [readability-braces-around-statements]

    statement should be inside braces

            if (_matterSolver != nullptr)
                                         ^
                                          {
  • nuTens/propagator/propagator.hpp:98:5: warning: [modernize-use-nodiscard]

    function '_calculateProbs' should be marked [[nodiscard]]

        Tensor _calculateProbs(const Tensor &energies, const Tensor &masses, const Tensor &PMNS) const;
        ^
        [[nodiscard]] 
  • nuTens/propagator/propagator.hpp:100:3: warning: [readability-redundant-access-specifiers]

    redundant access specifier has the same accessibility as the previous access specifier

      private:
      ^~~~~~~~
  • nuTens/propagator/propagator.hpp:101:12: warning: [bugprone-reserved-identifier]

    declaration uses identifier '_PMNSmatrix', which is a reserved identifier

        Tensor _PMNSmatrix;
               ^~~~~~~~~~~
               PMNSmatrix
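
    Identifiers that begin with an underscore followed by an uppercase letter are reserved for the implementation, so the fix is the rename the check itself proposes:

        Tensor PMNSmatrix; // renamed from _PMNSmatrix; uses of the old name would be updated to match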
  • nuTens/propagator/const-density-solver.hpp:65:13: warning: [modernize-use-auto]

    use auto when initializing with a template cast to avoid duplicating the type name

                float m_i = masses.getValue<float>({0, i});
                ^~~~~
                auto
