A different view of [[Language model agents]] - language model entities are systems that have multi-purpose memory (e.g. episodic memory, procedural memory), can self-regulate (detect errors, avoid infinite loops, meaningfully learn from mistakes), and are built for long-term operation.
This distinguishes them from agents, since [[Current (2023) language model agents are too linear]].
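A minimal toy sketch of the shape I have in mind, in Python. Everything here is hypothetical (EpisodicMemory, ProceduralMemory, Entity are my own names, not from any existing framework); the point is just that memory and self-regulation live in the loop itself, not in the prompt:

```python
# Illustrative only: a toy entity loop with separate memory stores and
# basic self-regulation. All class and function names are hypothetical.
from collections import deque

class EpisodicMemory:
    """What happened: a running log of (stimulus, action) events."""
    def __init__(self):
        self.events = []

    def record(self, event):
        self.events.append(event)

class ProceduralMemory:
    """How to do things: lessons distilled from past attempts."""
    def __init__(self):
        self.lessons = {}

    def learn(self, context, lesson):
        self.lessons.setdefault(context, []).append(lesson)

class Entity:
    """A long-running loop, not a one-shot task runner."""
    def __init__(self, model):
        self.model = model             # any callable: (stimulus, lessons) -> action
        self.episodic = EpisodicMemory()
        self.procedural = ProceduralMemory()
        self.recent = deque(maxlen=5)  # short window used for loop detection

    def step(self, stimulus):
        action = self.model(stimulus, self.procedural.lessons)
        # Self-regulation: repeating the same action is a sign of a loop,
        # so record a lesson and re-ask the model rather than spinning.
        if list(self.recent).count(action) >= 3:
            self.procedural.learn(stimulus, f"looped on {action!r}")
            action = self.model(stimulus, self.procedural.lessons)
        self.recent.append(action)
        self.episodic.record((stimulus, action))
        return action
```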
Some of the key challenges include:
- Parallel processing & integration of thoughts
- Incoming stimulus management & prioritization
- Intent management - goals, tasks, current actions (sketched together with stimulus prioritization after this list)
- Attention management - what currently occupies the attention (context) window
- [[Short term & long term memory for LMEs]]
- [[Procedural memory and learning for LMEs]]
- Speed & latency
- Interpretability and workability
- Hallucination & reliability
- Security
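To make two of those concrete (stimulus prioritization and intent management), a toy sketch under my own assumptions: stimuli carry a numeric urgency, and intent is a stack with the goal at the bottom so interruptions don't destroy it. None of these names come from any real library:

```python
# Illustrative only: stimulus prioritization + an intent stack.
import heapq
import itertools

class StimulusQueue:
    """Incoming stimuli, ordered by urgency (lower number = more urgent)."""
    def __init__(self):
        self._heap = []
        self._tie = itertools.count()  # stable ordering for equal priorities

    def push(self, stimulus, priority):
        heapq.heappush(self._heap, (priority, next(self._tie), stimulus))

    def pop(self):
        return heapq.heappop(self._heap)[2] if self._heap else None

class IntentStack:
    """Goal at the bottom, tasks in the middle, current action on top."""
    def __init__(self, goal):
        self._stack = [("goal", goal)]

    def push(self, kind, item):
        self._stack.append((kind, item))

    def current(self):
        return self._stack[-1]

    def interrupt(self, stimulus):
        # An urgent stimulus preempts the current action, but the goal
        # stays on the stack so the entity can resume where it left off.
        self.push("action", f"handle: {stimulus}")

# Example: an urgent message jumps the queue and preempts the current task.
stimuli = StimulusQueue()
stimuli.push("background log line", priority=9)
stimuli.push("user asked a direct question", priority=1)

intents = IntentStack("finish the research summary")
intents.push("task", "draft section 2")
intents.interrupt(stimuli.pop())
assert intents.current() == ("action", "handle: user asked a direct question")
```

The stack shape matters: because the goal is never popped by an interruption, the entity can always unwind back to what it was doing, which is most of what "intent management" means to me here.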
Also, [[Language Model Entities may be a safer route to AGI due to high observability]].
Update July 2025: There's a group called the Foundation Agents Organization that's attempting to coin the term "foundation agent" for this kind of project. I want to read all their research. At the same time, it makes me wish I'd had my shit more together last year, so I could have gotten credit for some of these developments. A lot of what's in their papers is outlined in the document above.
On the plus side, people thinking this way are the exception rather than the rule. There's still room to be one of the pioneers here.