I wrote an article taking PentestGPT for a spin and gave my thoughts on it. #75
-
Thanks for the feedback! For the three key points you mentioned:
1. I agree. That's why people are combining embeddings with LLMs to address this, and I'm running some tests on that approach. I can't say it's promising yet, but memory issues might be resolved in the near future (see the sketch after this list). You may also want to try the 32K-token GPT-4, which works amazingly well on some complex tasks.
2. Agreed. Unfortunately, even fine-tuned open-source LLMs cannot outperform GPT-4 right now. I did some testing on GLM-130B, but it didn't work well. This may remain a long-standing issue until some solid open-source models come out.
3. I still hold the opposite opinion and believe this can be a solid solution for more complex tasks. With the continued improvement of LLMs, those issues can be solved one day. Let's see how Google's Sec-PaLM performs.

Lastly, thanks so much for the feedback and the criticisms. I'll do more testing and try to make it better :)
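On point 1, here is a minimal sketch of the embedding-memory idea. Everything in it is hypothetical (the `Memory` class, the toy `embed()` stand-in, the example chunks); a real setup would use an actual embedding model and a vector store, but the retrieval mechanics are the same: embed past conversation chunks, then pull back only the most relevant ones into the prompt instead of the whole history.

```python
import numpy as np

DIM = 256

def embed(text: str) -> np.ndarray:
    # Toy stand-in for a real embedding model: a hashed bag-of-words
    # vector. Only meant to make the retrieval loop below runnable.
    vec = np.zeros(DIM)
    for token in text.lower().split():
        vec[hash(token) % DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

class Memory:
    """Store past conversation chunks; recall the most relevant ones."""

    def __init__(self):
        self.chunks: list[str] = []
        self.vectors: list[np.ndarray] = []

    def add(self, chunk: str) -> None:
        self.chunks.append(chunk)
        self.vectors.append(embed(chunk))

    def recall(self, query: str, k: int = 3) -> list[str]:
        q = embed(query)
        # Vectors are unit-normalized, so the dot product is cosine similarity.
        scores = [float(v @ q) for v in self.vectors]
        top = np.argsort(scores)[::-1][:k]
        return [self.chunks[i] for i in top]

# Hypothetical pentest-session notes, stored as they accumulate.
memory = Memory()
memory.add("nmap scan found ports 22 and 80 open on 10.0.0.5")
memory.add("the web app on port 80 runs WordPress 5.8")
memory.add("ssh allows password auth; user list not enumerated yet")

# Only the relevant chunks get prepended to the next prompt, so the
# context window holds what matters rather than the full transcript.
print(memory.recall("what do we know about the web server?"))
```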
-
Just picked up a job where my duties involve trying to solve this very problem. I'll be in touch; maybe we can chat later.
-
https://mantisek.com/taking-pentestgpt-for-a-spin
I thought it fair to include this, as I offer quite a few criticisms.