Deeper explanation #2

Open
JpEncausse opened this issue May 20, 2024 · 0 comments
JpEncausse commented May 20, 2024

Hello,
I like the way you did it! This project is a year old; is it still relevant, or has your usage changed? AI is moving so fast.

I was wondering if I correctly understand your process:

  1. Create a column for embeddings
  2. Convert some fields of each row into an embedding and store it in the new column
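To make sure I have the picture right, here is a rough sketch of how I imagine these two indexing steps. The table name `items`, its columns, and the `embed()` helper are placeholders I made up, not anything from your code:

```python
# Hypothetical sketch of the indexing side: add an embedding column,
# then embed a few fields of every row and store the vector as JSON.
import json
import sqlite3

def embed(text: str) -> list[float]:
    # Placeholder: in practice this would call whatever embedding model/API you use.
    return [0.0] * 1536

conn = sqlite3.connect("data.db")
conn.execute("ALTER TABLE items ADD COLUMN embedding TEXT")   # step 1: new column

for row_id, title, body in conn.execute("SELECT id, title, body FROM items"):
    vector = embed(f"{title}\n{body}")                         # step 2: embed some fields
    conn.execute("UPDATE items SET embedding = ? WHERE id = ?",
                 (json.dumps(vector), row_id))
conn.commit()
```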

Then, when the user asks a question:

  1. Ask an LLM to create a "fake row"
  2. Create an embedding for this fake row
  3. Compare the distance between this embedding and each row's embedding in the table
  4. Build an answer based on the top X matching rows.

I didn't understand step 3. Can you explain where/how this cosine similarity works in the code?
If I have 3000 rows, do you iterate over all of them to run this algorithm? I couldn't find where this happens in the code. Is it efficient?
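For context, here is how I imagine step 3 could be done with a brute-force loop. Again, the `items` table and `embedding` column are just my placeholders; I'm only guessing this is roughly the approach:

```python
# Hypothetical sketch of steps 3-4: compute cosine similarity between the
# question embedding and every stored row embedding, then keep the top X.
import json
import sqlite3
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def top_matches(conn: sqlite3.Connection, question_vector: list[float], k: int = 5):
    q = np.array(question_vector)
    scored = []
    for row_id, embedding_json in conn.execute(
            "SELECT id, embedding FROM items WHERE embedding IS NOT NULL"):
        v = np.array(json.loads(embedding_json))
        scored.append((cosine_similarity(q, v), row_id))  # compare against every row
    scored.sort(reverse=True)                             # most similar first
    return scored[:k]                                     # top X rows used to build the answer
```

With only 3000 rows a full scan like this is still just a few thousand dot products, so maybe it is fast enough without an index, but I'd like to know if that is really what your code does.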

Thanks!
