# Large Action Model (LAM) - A Transparent Exploration (Because, Let's Be Honest, This Is What Rabbit Should've Made)
This repository explores the true potential of Large Action Models (LAMs), unlike a certain recent device... well, let's just say the Rabbit R1 wasn't exactly hopping with functionality.
Imagine a LAM that's less like a glorified Alexa in a bunny suit and more like your personal robot butler. We're talking:
- Actually getting you food
- Actually getting you a ride somewhere
- Actually identifying that plant you spotted in the forest
But if I'm being honest, LAM should really be called ALLM or LLM-A, for Actionable LLM, because that's essentially what it is; at least, that's what's being presented and discovered so far.
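To make that "Actionable LLM" idea concrete, here's a minimal sketch in Python. Everything here is illustrative: `call_llm` is a stand-in for a real model call, and the action names (`order_food`, `book_ride`, `identify_plant`) are assumptions for the demo, not anyone's actual API. The point is the shape of the thing: the LLM picks a structured action, and plain old code does the doing.

```python
import json

# Hypothetical action handlers. In a real system these would hit
# delivery, ride-hailing, and vision APIs; here they just pretend.
def order_food(restaurant: str, item: str) -> str:
    return f"Ordered {item} from {restaurant}."

def book_ride(destination: str) -> str:
    return f"Ride booked to {destination}."

def identify_plant(image_path: str) -> str:
    return f"Analyzing {image_path}... looks like a fern (probably)."

ACTIONS = {
    "order_food": order_food,
    "book_ride": book_ride,
    "identify_plant": identify_plant,
}

def call_llm(prompt: str) -> str:
    """Stand-in for a real LLM call (hosted API, local model, etc.).
    We fake the structured 'action' JSON a model would be prompted to emit."""
    return json.dumps({"action": "book_ride", "args": {"destination": "the airport"}})

def run_actionable_llm(user_request: str) -> str:
    """The whole 'LAM' trick: ask the LLM to choose an action plus
    arguments, parse its structured output, then execute real code."""
    raw = call_llm(f"Pick one action for this request: {user_request}")
    decision = json.loads(raw)
    handler = ACTIONS.get(decision["action"])
    if handler is None:
        return "Model asked for an action we don't support."
    return handler(**decision["args"])

if __name__ == "__main__":
    print(run_actionable_llm("Get me to the airport"))
    # -> Ride booked to the airport.
```

That's the whole magic trick: an LLM emitting structured output, a dispatcher, and some handlers. No bunny suit required.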
The R1 sparked a lot of discussion, but some might say it was more like a baby bunny learning to walk – a bit wobbly and unsure of itself. Here's why the R1 might not be the LAM champion:
- Limited Action Moves: More like a "Slightly More Animated Paperweight" Model.
- Privacy Concerns: Is the R1 phoning home a little too much? Maybe it just misses its cardboard box origins.
- Too Ambitious: While the tests yielded interesting findings, for something aimed at the mass market, getting around CAPTCHAs shouldn't be an afterthought.
So why does this repo exist? A few reasons:
- To dream about what LAMs can truly do.
- To champion responsible AI development – because with great power comes great responsibility, even for bunnies.
- To (hopefully) temper some expectations regarding these AI hardware assistants.
What you'll find here:
- Code
- Docs (when I get to it)
- Comparisons of the R1 to the LAM we all know it could be. (Don't worry, we'll be gentle...ish. Also when I get to it.)
This repository is open for collaboration. We encourage you to join the conversation and help us build a future filled with awesome LAMs!
Disclaimer: This project is not affiliated with Rabbit Inc. or any other company. (But hey, if they're looking for some pointers on real LAM development, we're happy to chat.)