Recently Noumenal Labs announced themselves and I read their white paper. Although light on specifics, it seems clear that their issue with LLMs, and with NNs generally, is that they do not properly reflect in their structure the true underlying generative process of reality; effectively, that they do...
[Read More]
The Scaling Laws Are In Our Stars, Not Ourselves
Epistemic status: Pretty uncertain. This is a model I have been using to think about neural networks for a while; it has some support but is not completely rigorous.
[Read More]
Current neural networks are not overparametrized
Occasionally I hear people say or believe that NNs are overparametrized and base their intuitions on this idea. Certainly there is a small academic literature around phenomena like double descent that implicitly assumes an overparametrized network.
[Read More]
Maintaining Alignment during RSI as a Feedback Control Problem
Recent advances have begun to move AI beyond pretrained amortized models and supervised learning, into the realm of online reinforcement learning and hence the creation of hybrid direct and amortized optimizing agents. While we have generally found that purely amortized pretrained models are an easy case...
[Read More]
Review of Mind Children by Hans Moravec (1988)
I’ve had this book on my reading list for a while, because it is the classic work everyone cites for predicting the singularity ahead of time and describing a ‘merging’ of AI and human minds as a positive singularity. Since I had some time this afternoon, I...
[Read More]