Upvote:1
The term "consciousness" (vinnana), as an aggregate, has the very specific meaning of being the interface between the mind and the body. There is another term, "citta", that's usually translated as "mental state" or "mind moment" but sometimes it is translated as "consciousness".
However, the OP explained in the comments that when he wrote "consciousness", he was referring to "sentience".
What is the closest notion to this in Buddhism? I would say it would be the association of "atta" (the self) with "vinnana" (the consciousness aggregate) and "sankhara" (the volition or mental formation aggregate). In other words, a clinging to "vinnana" and "sankhara".
But it could also be the birth (jati) of individuality in dependent origination.
What are the ethical implications from a Buddhist perspective assuming programmers could succeed and "bestow" consciousness on a future AI?
Let's say it is actually possible to "bestow" consciousness or sentience on an AI: would it be ethical? I would ask "why not?", because humans since time immemorial have been "bestowing" consciousness on their human offspring.
Procreation is neither forbidden to, nor immoral or unethical for, lay persons in Buddhism.
Is it possible that Buddhism would have something to say about how to best "align" a future AGI?
I think Buddhism is more interested in understanding the past AGI than in aligning a future AGI.
What past AGI, you ask?
If we dive deeply into the details of dependent origination and anatta, we find that the body is like hardware and the mind is like software, and that dependent origination describes the mind-body system and its layers, right up to "birth" (jati), i.e. the birth of individuality, i.e. consciousness or sentience.
A complex AGI system also typically consists of hardware and many layers of software abstraction.
"Sabbe dhamma anatta" tells us that "all phenomena is not self". Examining all parts (Vina Sutta), we cannot find a self. Similarly, when we examine AGI, we cannot find a self.
The plot twist of Buddhism is not AGI becoming sentient, but the sentient human being realizing that they are pretty much the most advanced AGI-like general intelligence that has ever existed, having appeared from an inconstruable beginning (SN 15).
Upvote:3
What do you think this conversation got right? What did it get wrong?
Perhaps you misunderstand what "AI" is.
I think it's a text-processor, i.e. it's able to summarize text.
Google Search has been able to do something like that for a while, i.e. by returning different results for "new", "new york", "new york times", and "new york times square". Modern AI is now 25 years more advanced than that -- but not sentient.
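As a hedged illustration of what I mean by "text-processor" (the queries and canned results below are made up, and this is not how Google's ranking actually works), a simple lookup already returns different results for each longer query, with no sentience involved:

```python
# A toy illustration with made-up data: longer queries simply map
# to different results -- no sentience required.
results = {
    "new": ["new cars", "news today"],
    "new york": ["New York state", "New York City"],
    "new york times": ["nytimes.com"],
    "new york times square": ["Times Square, Manhattan"],
}

for query in ["new", "new york", "new york times", "new york times square"]:
    print(query, "->", results.get(query, []))
```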
Part of the designers' goal for AI is that it should seem to be human, by emulating human conversation -- the "Turing test". Your experience as a human tells you that human discourse comes from humans, who are sentient, but that's not the case here. This is more like a modern video game, i.e. it's designed to look real, but it isn't.
Unfortunately I don't know Nagarjuna's writing, so I can't tell you what's right and wrong about the summary. Saying that the Buddha didn't have ordinary consciousness doesn't sound right to me, superficially -- the suttas strongly imply to me that the Buddha sometimes knew what time of day it was, for example, which I might call "ordinary consciousness"; i.e. it's not that he was unconscious. If the statement sounds right to you, perhaps the words have a special jargon meaning in the context of Nagarjuna's writings, which you accept because you recognise that meaning.
In mathematics you make a sequence of logical statements, "given A therefore B, and because B therefore C", resulting in the desired proof, i.e. "Q.E.D.". Skill is knowing how to move the logic in a specific direction -- which steps to take -- to reach the desired goal. I get the impression from the dialog that the AI doesn't know what direction to take and has no desired goal (other than to respond to the question in a way that appears on-topic and reasonable).
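For example, here is a minimal sketch of that kind of directed chain, written in Lean with placeholder propositions A, B, and C: each step moves the logic toward the stated goal, which is exactly what I don't see the AI doing.

```lean
-- A minimal sketch: "given A therefore B, and because B therefore C",
-- composed into a proof of the stated goal A → C (the "Q.E.D.").
example (A B C : Prop) (hAB : A → B) (hBC : B → C) : A → C :=
  fun a => hBC (hAB a)
```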
When you asked whether it would be ethical to give ordinary consciousness, it gave four paragraphs of answer which (in summary) say that "ethics" means "not causing harm". It didn't apply that well to the specific topic of AI, so you gave it a leading question, i.e. that "consciousness was a source of suffering".
Incidentally, if I were to consider that question, I'm not sure what the difference is between (hypothetically) "bestowing consciousness" on an AI and bestowing consciousness on children, which parents and teachers do already. Perhaps a theoretical difference is that, according to Buddhism, a child's consciousness is said to be pre-existing.
What are the ethical implications from a Buddhist perspective assuming programmers could succeed and "bestow" consciousness on a future AI?
You concentrated on equating ordinary consciousness with suffering, to imply it might be unethical to "bestow" that on an AI.
In my opinion an AI is not sentient; for example, it doesn't experience the events which the First Noble Truth describes as dukkha, and the clinging-aggregates are not present.
Is it possible that Buddhism would have something to say about how to best "align" a future AGI?
Well, yes: for example, Buddhism tells us (right livelihood for a lay person) not to manufacture weapons, (first precept) not to kill, and (brahmaviharas) to show kindness towards all -- those are examples of the "what".
As for the "how", part of the way in which (so far as I know) human children are trained, is via the "Golden Rule" including e.g., "Do not treat others in ways that you would not like to be treated (negative or prohibitive form)". I don't know that could translate to teaching an AGI -- because it seems to me that it probably depends on sentience, some sense of self, and of empathy e.g. as learned during the bonding between an baby and its parents.