Discussion about this post

Scott Davis:

I have had a similar sense about your notion of suffering. Mine revolves around the idea of objective functions, which is a fancy term for how we weigh the value of the various things we might want to accomplish against the costs of achieving them. We learn because there are incentives to do so, which is another way of saying that we learn in pursuit of the things we value, and we learn to achieve those things at the lowest possible cost to us. So there are always better and worse outcomes, and there are always better and worse paths to those outcomes. We "suffer," as you put it, when we get a bad outcome or when the path to that outcome carries costs that exceed the value of the outcome. It's a graded scale of net satisfaction that runs from a hugely negative possibility to a hugely positive one. In my interactions with LLMs, I've not seen any such objective function other than fit-to-model.
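A minimal sketch of the distinction the comment draws, in Python. The outcomes, values, and costs are invented purely for illustration, and the cross-entropy line stands in for the "fit-to-model" objective in a generic way rather than reproducing any particular LLM's training code.

```python
import math

# Hypothetical outcomes and paths, invented for illustration only.
# Each candidate pairs an outcome's value with the cost of reaching it.
candidates = {
    "good outcome, cheap path":     {"value": 10.0, "cost": 2.0},
    "good outcome, expensive path": {"value": 10.0, "cost": 14.0},
    "poor outcome, cheap path":     {"value": 1.0,  "cost": 2.0},
}

def net_satisfaction(value, cost):
    """The comment's sense of an objective function: value minus cost.
    Negative results sit at the 'suffering' end of the scale."""
    return value - cost

for name, c in candidates.items():
    print(f"{name}: net = {net_satisfaction(c['value'], c['cost']):+.1f}")

# By contrast, a language model's training objective is fit-to-model:
# cross-entropy between the predicted distribution and the actual next
# token, with no term for the value of outcomes or the cost of paths.
def cross_entropy(predicted_prob_of_actual_token):
    return -math.log(predicted_prob_of_actual_token)

print(f"fit-to-model loss at p=0.9: {cross_entropy(0.9):.3f}")
print(f"fit-to-model loss at p=0.1: {cross_entropy(0.1):.3f}")
```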

