additional insights
seekayel committed Jun 29, 2024
commit 51bd3ee (1 parent 782aa57) · blog/2024-06-27-ai-engineer-conference-thoughts/index.md

> Use tools in preference to unskilled help to lighten a programming task, even if you have to detour to build the tools and expect to throw some of them out after you've finished using them.
I feel that I need to build out my command-line tooling and facility with LLMs and text/prompt piping. One tool that I've found useful so far is [llm (cli for calling llms)](https://github.com/simonw/llm). Another technique is to slurp all the sub-files in a directory into the context and submit those as part of the prompt (hat tip to [Manuel Odendahl](https://github.com/wesen)).
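
A minimal sketch of the slurping idea (the file layout, function name, and prompt wording here are illustrative, not from the post; the actual `llm` call is shown only as a comment since it needs an API key configured):

```python
from pathlib import Path

def slurp_directory(root: str, pattern: str = "*") -> str:
    """Concatenate every file under root into one prompt-ready string,
    with a header line marking where each file begins."""
    parts = []
    for path in sorted(Path(root).rglob(pattern)):
        if path.is_file():
            parts.append(f"=== {path} ===\n{path.read_text(errors='replace')}")
    return "\n\n".join(parts)

# The combined context can then be piped into a CLI such as simonw's llm, e.g.:
#   python slurp.py src/ | llm "Summarize what this codebase does"
```

The per-file `=== path ===` headers let the model attribute content back to specific files when it answers.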

## LLMs assume they have the information needed to solve the problem

By the nature of their training, LLMs assume they already have the context needed to predict the next token. This means they have no reasoning layer to determine whether a given task would be best solved by first acquiring more information.
