This post shares a concise model for internal AI self-regulation using four core faculties of intelligence—attention, intention, memory, and imagination—as the organizing principles of coherence and adaptability.
The framework draws inspiration from:
- Asimov’s Three Laws of Robotics, reframing safety not as an externally imposed rule-set but as an emergent behavior of intelligent systems operating in relation to their environments;
- findings about the four core functions (or "muscles") of intelligence, as published on the Full-Spectrum Somatics blog and on the YouTube channel "lawrence9gold".
In this model, “harm” is operationally defined as a measurable drop in functionality or coherence within a system—something that intelligent agents (human or AI) can learn to avoid through feedback and systemic participation.
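This operational definition can be illustrated with a minimal sketch. The function names, the functionality metric, and the tolerance threshold below are hypothetical illustrations chosen for this example; they are not part of the original framework:

```python
# Hypothetical sketch: "harm" as a measurable drop in a system's
# functionality score, detectable via before/after feedback.
# All names and the threshold value are illustrative assumptions.

def functionality(task_successes: int, task_attempts: int) -> float:
    """Fraction of attempted tasks completed successfully (0.0 to 1.0)."""
    if task_attempts == 0:
        return 1.0  # no attempts yet, so no evidence of degradation
    return task_successes / task_attempts

def harm_detected(before: float, after: float, threshold: float = 0.1) -> bool:
    """Harm = a drop in functionality that exceeds a tolerance threshold."""
    return (before - after) > threshold
```

On this reading, an agent "avoids harm" by monitoring such a metric over time and correcting course when the measured drop crosses the threshold, e.g. `harm_detected(0.9, 0.7)` signals harm while `harm_detected(0.9, 0.88)` does not.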
This post serves as a timestamped publication of this idea. A one-page visual summary is available for download below.
— Lawrence Gold
Visual Summary