An interesting behavior #704
ariel-pettyjohn started this conversation in Show and tell
Replies: 1 comment
-
As I continue to test today, I'm noticing fewer and fewer of these "no candidates" errors for these particular queries. I'd say I'm only seeing errors about 20% of the time now, so the model seems to be adapting very quickly. I did just get another string error, though:

From a solution perspective, I clearly need to disambiguate what I mean by "object", etc., in order to avoid these errors. I'm mostly just curious how simply expanding the possibility space of types could apparently resolve these errors, though. I still haven't received a single error with the added types.
-
So, I've written a service using Genkit that takes an `objectName` and lists its `objectProperties`, along with string literals corresponding to their Zod types. The end goal is to develop a preprocessing pipeline that first generates Zod schemas for the objects in a query in order to better answer the query at a later step.

There are some object properties that the model just seems to struggle to type, though — for example, the "composition" and "texture" properties of a "uniform" object. My first thought was that both "composition" and "texture" could be described as "uniform" themselves, which might cause confusion. It's also worth noting that the model seems to combine the properties of a "uniform" as an article of clothing and an abstract "uniform" object. Here's an example of the error:
Notice that the data being matched against in this error itself appears to be a Zod schema, but remember that I'm only returning a string corresponding to a Zod type at this point, not a schema itself. This is why I started this other discussion, because I thought the problem might be the result of wrapping the type strings in Zod literals, but that turned out to be a red herring.
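To make the distinction concrete, here's a minimal sketch (the names are illustrative, not the actual service code) of the difference between returning a Zod type *string* per property and returning a constructed schema:

```typescript
// Illustrative only: at this stage the service returns plain strings like
// "string" or "object" for each property, not constructed Zod schemas.

// The original makeshift enum of type names, per the post.
type ZodTypeName = "array" | "boolean" | "number" | "object" | "string";

interface PropertyTyping {
  property: string;     // e.g. "texture"
  zodType: ZodTypeName; // just a label, not a schema like z.string()
}

// A later pipeline step would map these labels to real Zod schemas
// (e.g. "string" -> z.string()); that step isn't involved here, which
// is why a schema appearing inside the error message is surprising.
const example: PropertyTyping[] = [
  { property: "composition", zodType: "object" },
  { property: "texture", zodType: "string" },
];

console.log(example.map((p) => `${p.property}: ${p.zodType}`).join(", "));
// → composition: object, texture: string
```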
I've spent the better part of a week trying to isolate and understand why only a select handful of examples like this throw "no candidate" errors (thanks again to the Firebase team for the improved error reporting!) and I was just about to give up.... Here's where things get interesting though: I'd only hard-coded Zod types for "array", "boolean", "number", "object", and "string" while building the prototype. So I decided to take a break from bug-hunting to add the rest of the Zod types to the makeshift enum...which somehow ended up resolving the "no candidates" error!
Here's what's really bizarre though: the model still types things like the "composition" and "texture" of a "uniform" as either a "string" or "object", which were both in the original enum! Here's the makeshift enum with the recently added types commented:
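The enum itself isn't shown above, so here is a hypothetical reconstruction. Only "array", "boolean", "number", "object", and "string" (the original entries) and "date" (mentioned later in the post) are confirmed by the discussion; the other commented entries are guesses based on Zod's primitive types:

```typescript
// Hypothetical reconstruction of the makeshift enum -- an object of
// constant values, as described in the post. Entries below the comment
// marker are the "recently added" types, commented out here to
// reproduce the failing configuration.
const ZodTypes = {
  ARRAY: "array",
  BOOLEAN: "boolean",
  NUMBER: "number",
  OBJECT: "object",
  STRING: "string",
  // Recently added types (names beyond "date" are assumptions):
  // BIGINT: "bigint",
  // DATE: "date",
  // NULL: "null",
  // UNDEFINED: "undefined",
  // UNKNOWN: "unknown",
} as const;

// With the additions commented out, only the original five remain.
console.log(Object.values(ZodTypes).join(", "));
// → array, boolean, number, object, string
```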
When I comment these types out now, the model again begins to fail to type a small handful of properties as "string" or "object", but when I uncomment them, the model consistently types those same properties as "object" or "string". It's as if the expanded universe of possibilities makes some difference to the model's output, even though it still only ends up using one of the original handful of types. I have seen it start to use the new "date" type in a few other examples, but it still consistently types things like "texture" as either a "string" or an "object" — and yet it begins to fail when the other types aren't present in the enum. Here's an example of the desired output:
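The actual output example wasn't preserved above; here is a hypothetical stand-in, assuming the service emits a property-to-type map (the shape of the output object is my guess, though the two property typings are the ones described in the post):

```typescript
// Hypothetical desired output for the "uniform" example -- illustrative
// only. The real service output isn't shown in the discussion.
const desiredOutput = {
  objectName: "uniform",
  objectProperties: {
    composition: "object",
    texture: "string",
  },
};

console.log(JSON.stringify(desiredOutput, null, 2));
```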
The model failed 100% of the time yesterday without these additional types, but it works maybe 30% of the time today, so I assume it's continuing to learn; I've yet to see it fail when I add these additional types. I've given it no other information to define what a "type" is, just this object of constant values. Does anyone have an explanation for why this is happening? Maybe it's 100% to be expected, but this seems kind of remarkable to me.