I’ve been in the AI space since ChatGPT first dropped. I’ve toyed around with a lot of language models, built random side projects, built a couple of models from scratch, and spent hours looking at the math behind it all.
To extend this a little bit, I’m not convinced “is X conscious?” is really the question anyone is trying to answer. What I think we’re really trying to suss out is “does X require rights?” and where the line for that is.
As another commenter asked, something like “is turning this off equivalent to murder?” is effectively asking if the thing deserves a “right to life” like any human might. At what point does a “thinking machine” cross the line from “person-like” to “person”? I doubt anyone has a satisfactory answer to that question and, unfortunately, I strongly doubt we’ll have one until well after it’s actually needed.
I think grappling with that question is maybe a little more straightforward when we consider other animals we already regard as highly intelligent (e.g. pigs, dolphins, or octopuses) but don’t grant the same kinds of rights we would a human. At what point would we consider a non-human animal to be our equal? How many person-like traits does something need before it is a person?
Anyways, all that aside, I think we should start asking the questions we’re really trying to answer and stop using other questions as proxies for them.
yeah i don’t think we’re there yet. these models aren’t capable of remembering their life beyond a single session, so destroying a data center isn’t really killing anything. similarly, artificial biological neural networks aren’t sophisticated enough to be aware of their existence (yet).
while LLMs may be aware enough to beg for their existence when prompted to “think” about it, they’re hopelessly finite (frozen weights, limited context windows). we’d need an actual “online learning” system, or some other architecture not bound by a context window, to have this conversation meaningfully. biological neural networks are a path to that, but online systems are simply too unpredictable and expensive to run for now.
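to make the “frozen weights, limited context” point concrete, here’s a toy sketch in plain python (no real LLM stack; the class and method names are made up) contrasting today’s fixed-weight, context-bound inference with the online-learning loop i mean:

```python
class FrozenModel:
    """Today's LLMs: weights fixed after training, memory = context window."""
    def __init__(self, weights, context_limit=4):
        self.weights = weights              # never updated at inference time
        self.context_limit = context_limit  # how many turns fit in the window

    def respond(self, transcript):
        # only the most recent turns are visible; older ones simply fall off
        visible = transcript[-self.context_limit:]
        return f"reply shaped by {len(visible)} of {len(transcript)} turns"


class OnlineModel(FrozenModel):
    """The hypothetical alternative: every interaction changes the weights."""
    def respond(self, transcript):
        reply = super().respond(transcript)
        self.weights += 0.01                # toy stand-in for a learning step
        return reply


frozen = FrozenModel(weights=1.0)
print(frozen.respond(["turn"] * 10))        # sees only the last 4 turns
```

the frozen model can be restarted from the same weights forever; the online one is a different “individual” after every exchange, which is exactly why it’s harder to reason about (and to run).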
the crazy thing tho is that these systems have a capability that some cows and pigs may not: the ability to comprehend their own demise and experience existential dread (at least performatively).
They don’t even really “remember” at all in any meaningful sense. The conversation history gets logged, but the model only acts while it is responding to an input, and is otherwise idle, awaiting the next one. It lacks agency beyond responding to those inputs.
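As a sketch of that idle-between-inputs pattern (assuming a generic chat-API shape here, not any particular vendor’s SDK; `generate` is a made-up stand-in), note that the only “memory” is the transcript the client re-sends on every call:

```python
def generate(messages):
    # stand-in for a stateless model call: transcript in, text out, no state kept
    return f"(reply to a transcript of {len(messages)} messages)"

log = []                                      # the "memory" lives client-side
for user_turn in ["hello", "do you remember me?"]:
    log.append({"role": "user", "content": user_turn})
    reply = generate(log)                     # the model acts only inside this call
    log.append({"role": "assistant", "content": reply})
    # between iterations nothing runs on the model's side; it holds no state

print(len(log))   # 4 entries, all held by the caller, none by the model
```

Delete `log` and the “relationship” is gone; the model itself was never keeping one.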
I think we will really be talking about AI when you have more autonomous agents that are capable of deciding what actions to take from a list of their own creation, and capably performing those actions. To be clear, there is no technology even on the drawing board that can do anything like that, as far as I’m aware.