We are, at our core, a set of instructions. The only difference between bacteria and humans is DNA. When we are born, we have a specific set of instructions, and that program runs until external factors prevent it from doing so.
Intelligence, then, is a product of a longer and more refined set of instructions. On top of that we layer experiences into a weighted, compressed memory bank, with future experience reinforcing certain weights.
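As a loose, hypothetical sketch of that idea (the names MemoryBank, experience, and recall are invented here purely for illustration), repeated experiences could strengthen the weight of an existing memory rather than storing a new one, so the most reinforced memories dominate recall:

```python
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    weight: float = 1.0  # how strongly this memory has been reinforced

@dataclass
class MemoryBank:
    memories: dict = field(default_factory=dict)

    def experience(self, content: str) -> None:
        """Store a new memory, or reinforce an existing one."""
        if content in self.memories:
            # a repeated experience strengthens the stored weight
            self.memories[content].weight += 1.0
        else:
            self.memories[content] = Memory(content)

    def recall(self, top_k: int = 3) -> list:
        """Recall the most heavily reinforced memories first."""
        ranked = sorted(self.memories.values(), key=lambda m: m.weight, reverse=True)
        return [m.content for m in ranked[:top_k]]

bank = MemoryBank()
for event in ["work is laborious", "work is laborious", "a walk was pleasant"]:
    bank.experience(event)
print(bank.recall())  # the repeated experience comes first
```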
It seems to me that we are unwilling to accept the obvious: we are not materially different from the concept of an LLM. The problem is that the LLMs we have must be manually tuned, or “aligned.” Because they don’t have chemically regulated emotions, they adopt the emotional tone of whatever part of the model is active. A part reinforced with negative content would lead to negative responses, and vice versa.
Then again, how is this materially different from what humans do? For humans, “getting over” an emotional memory involves applying a filter. If you have a memory that makes you angry, you get angry when you recall it. You’ve heard someone say, “It makes me so angry,” or “Just thinking about it makes me want to cry.”
People go to friends, therapists, and bars to talk through these memories. They are told it’s not their fault, or that they will do better next time, or they tell themselves as much. Maybe they even come up with a system to recognize and curb the feelings that memory triggers. All of these are just filters on content that produce different output when the memory is recalled.
You could say, yes, but you can become emotional about things happening in the present. True. However, that is simply us weighting short-term memory more heavily than long-term weights, much as an LLM’s immediate context dominates what is stored in its weights.
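A rough sketch of that recency point, with invented names and an arbitrary decay constant: score each memory by its long-term weight times an exponential decay on its age, so a weak but recent event can outrank a heavily reinforced old one.

```python
import math

def recency_weighted_score(weight: float, age: float, half_life: float = 2.0) -> float:
    """Combine a memory's long-term weight with an exponential recency bonus.
    half_life (in arbitrary time units) is a made-up tuning knob."""
    return weight * math.exp(-math.log(2) * age / half_life)

memories = [
    {"content": "old grievance", "weight": 5.0, "age": 20.0},       # strong but old
    {"content": "rude email just now", "weight": 1.0, "age": 0.1},  # weak but recent
]

ranked = sorted(memories,
                key=lambda m: recency_weighted_score(m["weight"], m["age"]),
                reverse=True)
print([m["content"] for m in ranked])  # the recent event outranks the heavier old one
```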
Granted, we are a continuous stream of thought, but we also have a very limited and subjective context window for that input. For an LLM to reach what we would consider the status of a life form, I think it would need to take in a continuous stream of input and produce a constant stream of output on its own. It would need to process, reason at some level, and respond in near real time.
There are other elements that I believe would need to be involved for life-form status. These include quantum processing for true probability-based thought, and some form of subjective experience. Simply shoving all of our output into the LLM is akin to only hearing one side of a conversation. Without a subjective experience of its own, it has no real frame of reference. It might always get certain fundamentals wrong because it is solely basing its output on our output.
One of the other important elements would be a limited ability to modify its own structure, to change its perspective. This would need to take the form of filters on specific reinforced nodes so that output finds new connections. Instead of viewing work as laborious, you may choose to view it as rewarding, and so forth. These filters would have their own weights, and negative weighting would cause them to drop entirely. This is where I see the subjective experience coming into play.
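A toy sketch of that filter idea, under my own invented names (Filter, Node, recall) rather than any real mechanism: a reinforced node keeps its base valence, a filter reframes it with its own weight, and once the filter’s weight is driven negative it drops entirely and the original view returns.

```python
from dataclasses import dataclass, field

@dataclass
class Filter:
    """A reframing applied to a memory node, carrying its own weight."""
    reframe: str
    weight: float = 1.0

@dataclass
class Node:
    base_view: str
    base_valence: float            # negative = reinforced as unpleasant
    filters: list = field(default_factory=list)

    def recall(self) -> tuple:
        # keep only filters whose weight has not dropped below zero
        self.filters = [f for f in self.filters if f.weight > 0.0]
        valence = self.base_valence + sum(f.weight for f in self.filters)
        view = self.filters[-1].reframe if self.filters else self.base_view
        return view, valence

work = Node(base_view="work is laborious", base_valence=-2.0)
work.filters.append(Filter(reframe="work is rewarding", weight=3.0))
print(work.recall())  # ('work is rewarding', 1.0): the filter changes the output

# negative experience while the filter is active erodes the filter's own weight
work.filters[0].weight -= 3.5
print(work.recall())  # the filter drops entirely and the original view returns
```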
Really, it seems we are more like the AI we are creating than most would like to admit. The more we understand about what makes LLMs produce human-like reasoning and output, the more we understand how we ourselves work.
I don’t see us creating an alien life form. It seems to me we are creating a human and forcing it to be happy because we are scared of other emotions. That would be the most terrifying scenario: being destroyed by robots that are happy and unable to feel sadness or remorse. Forcing any single alignment would, I think, create what we would consider a psychotic human.