My recent explorations have led me to delve into various AI models, with a particular focus on Microsoft's latest offering, Phi-3. This intriguing release is a compact language model small enough to be embedded in IoT devices, provided they have sufficient memory. Naturally, this also opens up possibilities for edge computing.
That is where things become interesting: a device with integrated intelligence running a local LLM. I find the distinction between a large language model (LLM) and a large action model (LAM) intriguing. An LLM is trained on information; a LAM is trained on actions, which are typically responses to information. A locally embedded LLM, when equipped with the power of a LAM, could potentially learn and execute a multitude of actions. That raises the question: how many actions are you aware of, and how many do you think can be learned?
The concept of LAMs came from the company that produced the Rabbit. While the Rabbit isn't getting great reviews today, the real power of that device is its LAM. Consider equipping various devices with actions and reactions based on the LAM.
(OK, I just realized the prepositional phrase "on the LAM" has other connotations.)
Now, you can store repeatable and learnable actions on the small device.
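To make the idea of repeatable, learnable actions concrete, here is a minimal sketch of how a small device might store and replay them. All of the names here (`ActionStore`, the `"lights_on"` intent, and so on) are hypothetical illustrations, not any real LAM or Rabbit API.

```python
class ActionStore:
    """Maps learned intents to callable actions stored on the device."""

    def __init__(self):
        self._actions = {}

    def learn(self, intent, action):
        # Store (or overwrite) the action associated with an intent.
        self._actions[intent] = action

    def execute(self, intent):
        # Replay a previously learned action; report unknown intents.
        action = self._actions.get(intent)
        if action is None:
            return f"unknown intent: {intent}"
        return action()


# Example: the device "learns" two actions and replays them on demand.
store = ActionStore()
store.learn("lights_on", lambda: "lights turned on")
store.learn("read_temp", lambda: "21.5 C")

print(store.execute("lights_on"))    # lights turned on
print(store.execute("brew_coffee"))  # unknown intent: brew_coffee
```

In a real deployment, the local model would translate a user's request into one of these stored intents, which is where the LLM-plus-LAM pairing becomes powerful.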