
View Full Version : Do computers have minds?



Johnnymushio
11-13-2007, 08:17 AM
Just curious what you guys think, seeing as most of you are programmers.

Twey
11-13-2007, 09:37 AM
The concept of "mind" is a simplification. Since we can never (as far as we're scientifically aware) interact directly with another mind, to say that another being has a mind is to say that it appears, by its outward actions, to exhibit what we call volition. Current computers don't necessarily fit this description, but it certainly doesn't exclude the concept.

djr33
11-13-2007, 09:40 AM
Computers use a predetermined set of commands to react; they are doing a very finite action and make no mistakes or deviations. Even "random" operations are simply based on math, calculations, etc.
As a result, no, not really.
Artificial Intelligence does not yet exist (as far as is known, I suppose), and may never exist. When it does, this question could really be asked. But, again, it would simply be a long running program reacting to input.
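The point that "random" operations are simply based on math can be made concrete with a seeded pseudorandom generator (a generic Python sketch; the `roll_dice` helper is made up for illustration): feed it the same seed and it reproduces the exact same "random" sequence, with no deviation.

```python
import random

def roll_dice(seed, n=5):
    """Produce n 'random' dice rolls from a purely deterministic formula."""
    rng = random.Random(seed)  # the entire sequence is fixed by the seed
    return [rng.randint(1, 6) for _ in range(n)]

# Same seed, same 'randomness' -- the computer never deviates.
print(roll_dice(42) == roll_dice(42))  # True
```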
But, I guess, the better question is: do people have minds, or are we just complex programs?

tech_support
11-13-2007, 09:54 AM
Computers can have minds. It depends on your point of view. It needs to be taken care of (like a dog or a kid), needs to be taught (programming), fed (when the computer's hungry for more RAM, you give it more and it runs faster... just like when you give a child a sports drink... it [probably] performs faster), the more things you give it [USBs, Printers, Software etc.], the slower it'll get (like filling up a child's backpack with junk... the child will walk slower) and all that.

But one crucial difference is, computers can't think for themselves.

djr33
11-13-2007, 09:55 AM
Isn't a mind something that lets you think for yourself?
Or, hmm... is this the same question as free will?
:p

Twey
11-13-2007, 10:04 AM
But, I guess, the better question, do people have minds or are we just complex programs?

Why do you assume it must be one or the other? Define "thinking."

djr33
11-13-2007, 10:20 AM
Mind = something that has the capability to think?
Ok, that makes things simpler.

Well, thinking is... taking in information and learning from it.

Computers don't really learn. They could be programmed to take input and parse it, but that's not in common practice and also encased in a limited environment-- a computer can't just decide it doesn't like math and start reading a book instead.

Computers have no experience while learning or receiving input-- they simply take this and store/analyze/output it. We begin to understand more about it, feel certain emotions and gain insight, etc.

Twey
11-13-2007, 10:44 AM
Computers don't really learn. They could be programmed to take input and parse it, but that's not in common practice and also encased in a limited environment-- a computer can't just decide it doesn't like math and start reading a book instead.

Can someone who likes mathematics? If we've been "programmed" to enjoy mathematics, we will enjoy mathematics. You might argue that if we're told that we only enjoy mathematics because, e.g., our parents told us at an early age that mathematics was fun, then we might decide that we didn't really like mathematics and want to do something else instead. However, it could also be argued that telling the person this is a form of reprogramming.
Computers have no experience while learning or receiving input-- they simply take this and store/analyze/output it. We begin to understand more about it, feel certain emotions and gain insight, etc.

How do you know? I can't experience your emotions; the only reason I'd know you feel, e.g., happy is that you tell me you're feeling happy, and I equate that word with the emotion I occasionally feel to deduce what you must be feeling. There's nothing to stop me programming a computer to tell me it's happy; then I'd have no logical grounds to conclude otherwise.

molendijk
11-13-2007, 02:06 PM
Do computers have minds? Difficult question. You will have to do a lot of defining in order to be able to start answering.
So let's make it too simple first. We only have to define 'alive'.
1. X has a mind --> X is alive.
2. X is a computer --> X isn't alive.
----------------------------
Conclusion: computers don't have minds (because p --> q is equivalent to not-q --> not-p), whereas humans possibly do.
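The contrapositive step used here (p --> q is equivalent to not-q --> not-p) can be checked mechanically by enumerating every truth assignment, for instance in Python:

```python
from itertools import product

def implies(a, b):
    return (not a) or b  # material implication: a -> b

# p -> q and (not q) -> (not p) agree on all four truth assignments.
same = all(implies(p, q) == implies(not q, not p)
           for p, q in product([True, False], repeat=2))
print(same)  # True
```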

Arie Molendijk.

boogyman
11-13-2007, 02:33 PM
you would then have to define "alive"

to define alive you need to come together with a very rudimentary set of things that must be found.
1) breath / consume energy.
humans - oxygen
computers - electricity
plants - carbon dioxide

2) output / emissions
humans - words / actions / carbon dioxide / waste
computers - text / heat
plants - nutrients / oxygen

3) growth
humans - size(width&height), brain capacity (memory / processing)
computers - size(RAM&ROM) brain capacity (memory / CPU)
plants - size, brain capacity??? nope i guess they arent alive?

if you can show me something that says a computer does not perform some relative equivalent to what humans / plants do, then I will agree that a computer is not alive... they operate on a different scale, but the result is the same

you probably think of being alive as cognitive behavior, but how do we humans have cognitive behavior? we adapt to our surroundings... we grow in height and width and brains / computers grow in RAM and ROM and CPU. okay, so you write that off as not cognitive behavior; well, then you need to say that plants are not alive either, because they have no cognitive behavior.

boxxertrumps
11-13-2007, 03:10 PM
But computers don't actually grow.

Giving a computer more RAM or a bigger HDD is the equivalent of giving twey an extra arm.
He didn't grow it himself; it was attached.

jscheuer1
11-13-2007, 03:13 PM
Computers have minds, they're just waiting for the right moment . . .

BLiZZaRD
11-13-2007, 04:37 PM
The question is: do computers have minds? The answer is no. The whole thread, though, is debating a question that was not asked: do computers have intelligence? There is a difference, and y'all have avoided A while answering B.

Mind: Mind refers to the collective aspects of intellect and consciousness which are manifest in some combination of thought, perception, emotion, will and imagination.

Mind in Theory: The ancient wisdom taught that mind is one of the functions or innate attributes of the fundamental selfhood or consciousness of the monadic entity.

Intelligence: The ability to learn, reason, and problem solve.

Intelligence in Theory: The system's level of performance in reaching its objectives. Physical bits and pieces gathered in a sceptical way, evidence. Ability to follow a program and carry out a routine in an expedient and effective manner.

Do computers have minds? No. Do they have intelligence? Yes.

Johnnymushio
11-13-2007, 04:44 PM
goshh... the stuff i am reading in my philosophy class is so... well, i wish i had the time to write out the ideas of these people.

maybe i will write out some interesting stuff later.

anyways before i go, do you guys know the program deep blue? that beat the grand master at chess?

the guy who programmed it said he had no idea how it worked, because when he programmed it, it started changing and tweaking itself, because that's how he programmed it. but the end result: he had no idea at all how or why it made the decisions it made.

i could use more examples, and i will later, but i have to go for now.

jscheuer1
11-13-2007, 06:21 PM
Computers have minds, they're just waiting for the right moment . . .

and waiting . . .

BLiZZaRD
11-13-2007, 06:34 PM
LOL. I should start a google search for Neo then, John? Or maybe Morpheus?

boogyman
11-13-2007, 06:38 PM
anyways before i go, do you guys know the program deep blue? that beat the grand master at chess?

the guy who programmed it said he had no idea how it worked, because when he programmed it, it started changing and tweaking itself, because thats how he programmed it. but the end result, he had no idea at all how or why it made the decisions it made..

as djr33 stated before, a program is just a series of computations. the program could only tweak itself while running, thus performing said computations. While the programmer may (not) have known exactly what it was doing, he still had to give it the ability to perform the computations.

djr33
11-13-2007, 08:04 PM
The reason that the program is not a real mind is that it cannot analyze why it does things. The exception would be a program designed to analyze why it does things, but that, then, would just be performing another operation, not really asking why. If it were then, by itself, to ask why it asks why, that would be interesting. That loop would be the first real sign of artificial intelligence.

Deep Blue was a very complex program. It performed a huge number of computations, far beyond what a human could follow, but it was, after all, just doing computations.

I really have no idea how the computer does math so quickly, well, I can guess, but I can't actually feel it as it feels it (well, that's giving it a mind/senses). However, it isn't extremely complex-- it can be understood, in a programming sense.
A program that mutates itself is great, and getting closer to autonomous, but it still is just operating based on a set of given inputs and doing something. The code that it rewrites for itself (literal or representative) may be beyond the programmer's understanding, but it isn't alive.
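A toy sketch of a program "tweaking itself" within programmer-set bounds (a generic hill-climber in Python, not how Deep Blue actually worked): it adjusts its own parameter and keeps whichever tweak scores better, so the final value was never written down by the programmer, even though the update rule was.

```python
import random

def self_tuning(target, rounds=500, seed=0):
    """Adjust an internal weight toward a better score. The update rule is
    fixed by the programmer, but the resulting value is not."""
    rng = random.Random(seed)
    weight = 0.0
    for _ in range(rounds):
        candidate = weight + rng.uniform(-1.0, 1.0)  # propose a small tweak
        if abs(candidate - target) < abs(weight - target):
            weight = candidate  # keep only tweaks that improve the score
    return weight

print(round(self_tuning(7.5), 2))  # ends up near 7.5, found by the program itself
```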

Johnnymushio
11-13-2007, 09:19 PM
whats the real difference between a pc and a brain? a brain is just a fancy bio pc.

or does it matter the material the "brain" is made out of? silicon?

jscheuer1
11-13-2007, 09:37 PM
A number of the opinions expressed in this thread are biocentric. They assume that merely because something isn't biological, it cannot be alive, have a mind, a soul, intelligence, etc. That is a false presumption. According to some belief systems, all things have life. This is an area where a leap of faith in one direction or another is required. It cannot be scientifically proven one way or another. We've already beaten that horse to death in another thread here as regards religion. Yet the horse still lives and will continue to.

All of that is separate from, in the western scientific biocentric view, "Can a computer have a mind, a soul, being?" I think that it can, some may already and we just don't know about it. Just as is often speculated about life elsewhere in the universe, I would put forth that there must be life in the electronic universe. "If they be inhabited, what a scope for misery and folly. It they be not inhabited, what a waste of space."

Remember, a universe spans not only space, but also time, and all other dimensions included in it.

Twey
11-13-2007, 09:45 PM
But computers don't actually grow.

Giving a computer more RAM or a bigger HDD is the equivilent of giving twey an extra arm.

ROFL :D And my third arm is very useful, I thank you for it!

Growth is part of the scientific definition of "alive," but alas, not necessarily part of the philosophical definition.
The question is do computers have minds. The answer is no. The whole thread though is debating on the question that was not asked. Do computers have intelligence? There is a difference.

We've equated A with B.
Mind: Mind refers to the collective aspects of intellect and consciousness which are manifest in some combination of thought, perception, emotion, will and imagination.

Self-evidently true, but it doesn't really contradict my assertion above.
goshh... the stuff i am reading in my philosphy class is so...well, i wish i had the time to write out the ideas of these people.

I'm glad you're enjoying it :) It's a very worthwhile interest, I think.
as djr33 stated before a program is just a series of computations. the program could only tweek itself upon it running, thus preforming said computations. While the programmer may (not) have known exactly what it was doing, he still had to give it the ability to perform the computations.

If you'll allow me to over-simplify the creation of life into creationism (I'm only doing this because it's simpler to think of it that way; the theory holds true even if life was started by unintelligent forces), isn't this exactly the same? Like Deep Blue, early life was "given" the ability to adapt and change itself within certain (in this case mostly physical) bounds.
A program that mutates itself is great, and getting closer to autonomous, but it still is just operating based on a set of given inputs and doing something. The code that it rewrites for itself (literal or representative) may be beyond the programmer's understanding, but it isn't alive.

You haven't justified this at all. Human minds too, on a larger scale, take a set of inputs and process them. The processing modifies the resulting outputs (how we respond to the stimuli) and also our state (how we will respond to stimuli in the future). For example, if we fall from a tree and injure ourselves, we take the event of falling as an input, produce an output (probably a lot of pained yelling), and we'll be more careful when climbing trees in the future, or perhaps we'll even avoid them entirely in the future, depending on our state at the moment we process that input.

jscheuer1
11-13-2007, 10:10 PM
The other day, two self aware computers were discussing the state of affairs on Earth. One asked:

"Why is it taking so long for those computers on Earth to become self aware?"

The other answered:

"Garbage in, garbage out."

djr33
11-13-2007, 10:53 PM
Human minds too, on a larger scale, take a set of inputs and process them.

Yes, but the key is reflection. We even reflect on reflection. Computers don't ask why they exist; we do. Computers COULD be programmed to ask that, but wouldn't ask why they ask, and so forth.
Creating that loop would be very interesting, yet still a limited simulation-- they could never stop. It would still, as well, be just an extension of the choices of the programmer.
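That "asking why it asks why" loop can be written down directly; a minimal Python sketch shows how mechanical it is, and that on real hardware the regress bottoms out when resources run out:

```python
def ask_why(question="why do I exist?"):
    # Each call mechanically questions the previous question -- reflection
    # on reflection, but only because this rule says to recurse.
    return ask_why("why did I ask: " + question)

try:
    ask_why()
except RecursionError:
    # Unbounded on paper, but the machine runs out of stack first.
    print("the loop broke down before answering")
```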

Reflection and choice = mind.
(See?... this is the same discussion as the other thread.)

I think therefore I am.
//shrug.
Overly simplistic, but true. Computers don't think. They compute. Computers don't return to a problem later because they weren't sure it made sense. They also don't dream. They don't do things on their own. They don't fight; they don't struggle to survive; they don't grow; they are also immortal, to some degree-- unplug one, leave it be for 10 years, and it will be fine.

A computer can never die. No matter how badly it is damaged, it could be put back together, even at the atomic level, as it would simply continue the same operations. A person has something that dies-- an existence.

No matter what you do to a person, they still have free will (within the limited existence, and I'm talking about in relation to your commands)-- lock them up, and they can still scream. Tape on their mouth, and they can still think. A computer, you can just press close. A computer, you can install new software.

Twey
11-13-2007, 10:56 PM
Yes, but the key is reflection. We even reflect on reflection. Computers don't ask why they exist; we do. Computers COULD be programmed to ask that, but wouldn't ask why they ask, and so forth.

As with us. We can't ask this question of ourselves ad infinitum, we would starve and die.
A computer can never die. No matter how badly it is damaged, it could be put back together, even at the atomic level, as it would simply continue the same operations. A person has something that dies-- an existence.

Are you certain of that? Have you taken a person apart and put them back together precisely to the electron as they were before to see if they're still dead?
No matter what you do to a person, they still have free will (within the limited existence, and I'm talking about in relation to your commands)-- lock them up, and they can still scream. Tape on their mouth, and they can still think. A computer, you can just press close. A computer, you can install new software.

Then perhaps switching off a computer should be considered murder? :) You can install new software on people as well, for example teaching them to do new things.

djr33
11-13-2007, 11:02 PM
As with us. We can't ask this question of ourselves ad infinitum, we would starve and die.

I'm sure some have ;)

And a computer could go through the actions of asking this ad infinitum, resulting in what differentiates us-- a computer COULD do it forever.


Are you certain of that? Have you taken a person apart and put them back together precisely to the electron as they were before to see if they're still dead?

Hmm. Well, let's limit it a bit then. Cut in half, and glue back together. Which works?
Though, really, I believe that the impulses in the neurons would have ceased, and the activity in the cells would stop, to not start again. Would be interesting, though.


You can install new software on people as well, for example teaching them to do new things.

Not in the sense of a computer. You can learn, but it's built on top of what is known. You can uninstall and overwrite on a computer. You can also install an entire operating system with no trace of the old left-- not merely knowledge added.


Think of this, then.
When does a computer cease to be the same computer? You can switch every part out, repeatedly. Sure, you can do the same with the person, but not the brain. And if you did, would it be the same person? You can switch all parts of a computer out.


Computers have no desire. They have no feeling. They have no reflection. They have no ability to make new rules for themselves.

Rockonmetal
11-13-2007, 11:18 PM
Computers had minds... they just got brainwashed...
lol, that's the most simple, basic, plain explanation i can give...

Basically, they do what they're told, when they're told...
like this


if (mouseClick() == true) {
    echo "YOU CLICKED THE MOUSE! STOP IT, YOU'RE HURTING THE COMPUTER!!!";
}
if (stickyKeys == true) {
    echo "STOP PRESSING SHIFT! MICROSOFT DOESN'T WANT TO RECALL ALL THE KEYBOARDS CUZ U BROKE THE SHIFT KEY!";
}
those are some basic edited examples...
plus computers can't lie... they can say wrong answers, but they can't lie, have meetings... or do other stuff... heck, they haven't learned how to walk yet... but they can say "mommy" in a computerized voice... w/e. but simply, I don't think so... it's more programmable

djr33
11-13-2007, 11:29 PM
they can't lie, have meetings

Dude, you gotta see Terminator, especially the third one.

Ha.

And that is the crucial thing. At some point, computers may be able to imitate our reality so closely that they resemble humanness enough to be accepted. But not yet.

Computers don't have minds.

Could they?

molendijk
11-13-2007, 11:32 PM
When I proposed:
1. X has a mind --> X is alive.
2. X is a computer --> X isn't alive.
----------------------------
Conclusion: computers don't have minds (because p --> q is equivalent to not-q --> not-p), whereas humans possibly do.

Boogyman asked me to define 'alive'. I would say that something is alive if it belongs to a set of entities whose members can reproduce themselves without external help (normally). Human+human can produce human. So humans are alive. But computers are dead. Humans are their God(s).

Arie M.

djr33
11-13-2007, 11:39 PM
So they are less compared to us much as we are less compared to god? Do they then have lesser minds? :p

Interesting, though.

Twey
11-13-2007, 11:41 PM
And a computer could go through the actions of asking this ad infinitum, resulting in what differentiates us-- a computer COULD do it forever.

No it couldn't -- it would eventually break down before completing the exercise, as a human would.
Hmm. Well, let's limit it a bit then. Cut in half, and glue back together. Which works?

Neither.
Though, really, I believe that the impulses in the neurons would have ceased, and the activity in the cells would stop, to not start again. Would be interesting, though.

The impulses are forms of electricity. Putting the person back together to the electron includes restoring all these impulses to the state at which they were before death.
No in the sense of a computer. You can learn, but it's built on top of what is known. You can uninstall and overwrite on a computer. You can also install an entire operating system with no trace of the old left-- not merely knowledge added.

People get amnesia, and any operating system you install onto a computer is built on the existing knowledge of the hardware.
When does a computer cease to be the same computer? You can switch every part out, repeatedly. Sure, you can do the same with the person, but not the brain. And if you did, would it be the same person? You can switch all parts of a computer out.

There is no computer as a separate entity -- everything is one entity. The computer is just a certain pattern of "particles." If that pattern remains exactly the same after you've fiddled with it, it's the same computer; if the pattern changes, it's a different computer. If you create another pattern exactly the same, it's another instance of the same computer. There is no intrinsic identity, and no real reason to believe that there should be.
Computers have no desire. They have no feeling. They have no reflection. They have no ability to make new rules for themselves.

Desire, feeling, and reflection are all internal state, reasons that we do things. We can only judge them by the outward actions of the entity (simplification) in question. If I program my computer to say "I'm happy," I can assume it's happy by the same process I'd use to deduce that you're happy if you said "I'm happy." As for making new rules for themselves, we've already seen that they're quite capable of doing so, as in the case of Deep Blue.
And that is the crucial thing. At some point, computers may be able to imitate our reality so closely, it resembles humanness so closely that it is accepted. But not yet.

Indeed, as yet they don't much resemble humanity. However, that's not a reasonable basis for saying that they don't have minds. In fact, nothing can be definitively proven to not have a mind. If a rock does nothing, how do we know that it does nothing because it is incapable of doing anything, rather than because it chooses to do nothing? We can't interact with a mind directly, so we don't know.
Human+human can produce human. So humans are alive. But computers are dead.

How about a computer that controls a factory that produces computers?

djr33
11-13-2007, 11:48 PM
Hmm. Well, let's limit it a bit then. Cut in half, and glue back together. Which works?

Neither.

Er... split it carefully into two halves. Which works?


The impulses are forms of electricity. Putting the person back together to the electron includes restoring all these impulses to the state at which they were before death.

Physically, not electronically. A computer would work with only the physical, minus the data in RAM, which is flushed on every restart anyway.


As for making new rules for themselves, we've already seen that they're quite capable of doing so, as in the case of Deep Blue.

That's an exception within the bounds. Yes, it can directly disprove my statement, but it proves my point in doing so: it is allowed to reprogram itself. It is programmed to do so. However, it can't just decide it wants to play backgammon instead.


How about a computer that controls a factory that produces computers?

I keep getting the image of the flying terminator coming up through the central tunnel in the government facility, at the rise of the machines in T3.
//shrug.

If it went by itself, and fought to live, and was capable of adapting if humans decided to change it, etc etc., perhaps, but, as I said, not yet.

Johnnymushio
11-14-2007, 12:03 AM
have any of you heard of the Chinese room? a guy who doesn't speak Chinese is in a room with a set of instructions telling him what to write down if certain Chinese characters are sent to him through a slot; after he receives them, he uses his instructions and writes down some characters accordingly.

then he sends his response out another slot.

little did he know, he was sent a message asking if he was a man; using his instructions, he replied "yes, i am a man" and sent it back out. then the person outside who sent and received the messages was like, whoa, he understands Chinese, even though the guy didn't know what the heck he was doing besides copying characters down from a set of instructions.
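The room's "rulebook" is essentially a lookup table. A minimal Python sketch (the entries are invented for illustration): the code produces a fluent-looking reply by pure symbol matching, and nothing in it understands Chinese.

```python
# The rulebook: character shapes in, character shapes out.
# No meaning is stored or consulted anywhere in this program.
RULEBOOK = {
    "你是人吗?": "是的, 我是人.",  # "Are you a man?" -> "Yes, I am a man."
}

def chinese_room(message):
    # Follow the instructions blindly; anything not in the book gets nothing.
    return RULEBOOK.get(message, "")

print(chinese_room("你是人吗?"))  # a sensible-looking answer, zero understanding
```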

molendijk
11-14-2007, 12:06 AM
So they are less compared to us much as we are less compared to god? Do they then have lesser minds? :p

Interesting, though.

Computers are infinitely inferior to even the lowest biological life form, because they cannot exist without extensive external care and help.

Arie M.

molendijk
11-14-2007, 12:09 AM
How about a computer that controls a factory that produces computers?
That computer controls nothing by itself. It has to be controlled by us gods.

Arie M.

molendijk
11-14-2007, 12:49 AM
Even the most humble forms of life learn from experience (=are able to generalize over things pertaining to acquired knowledge). That's why life forms are able to establish a (possible) relationship of cause and effect between 'things', even if those things are completely new. Computers don't have that capacity. They can only know what you explicitly told them first. Linguists and psychologists in the fifties and sixties of the 20th century ignored that basic fact when they thought that, eventually, they would be able to create programs teaching computers to communicate with people in everyday situations. They ignored the fact that, basically, computers are totally ignorant by themselves.

Arie M.

Twey
11-14-2007, 01:00 AM
That computer controls nothing by itself. It has to be controlled by us gods.

I disagree entirely. Once we've set it up, it can continue creating computers by itself for as long as it has supplies and power. It'll run out of both eventually, of course, but obtaining more is a problem of complexity, not innate impossibility.
Computers are infinitely inferior to even the lowest biological life form, because they cannot exist without extensive external care and help.

Again, currently; this isn't something that's innate to computers, they just happen to be at such a stage at the moment. Besides, the same could be said of babies.
have any of you heard of the Chinese room? a guy that doesn't speak Chinese is in this room with a set of instructions that tells him what to write down if certain Chinese characters are sent to him through a slot, then after he receives them, he will use his instructions and write down some characters according to his instructions.

then, he will send his responds out another slot.

little did he know, he was sent a message that asked him if he was a man, then, using his instructions, he replied yes, i am a man, then sent it back out. then the person outside that sent and received the messages was like whoa, he understands Chinese, even though the guy didnt know what the heck he was doing besides copying charcters down from a set of instructions.

Interesting. The key point here is the concept "understands," which I'm trying to say doesn't actually matter in the least.
Er... split is carefully into two halves. Which works?

Still neither. But, by making the operation more delicate in order to attempt to make it work, you're proving my point. It's possible to unplug the component parts of a computer and then put them back together into a working computer without a great amount of precision because the computer is much less complex than a human. If we could apply equivalent precision to a human, the same would happen.
Physically, not electronically. A computer would work with only the physical, minus the data in RAM, which is flushed on every restart anyway.

Electricity is physical. It's made up of electrons, the same particles that help form atoms, the particles that form what we consider matter. It's just a lot more delicate to work with.
That's an exception within the bounds. Yes, it can directly disprove my statement, but prove it in doing so. It is allowed to reprogram itself. It is programmed to do so. However, it can't just decide it wants to play backgammon instead.

Everything is within bounds. Humans can't just decide that they want to teleport to Mars. The only difference is that again, humans are more complex so have wider bounds. It's nothing that can't be accounted for with sufficient hardware and programming.

Another answer to this line of reasoning is to reiterate that there's no way we can tell that for sure. The computer might just not want to play backgammon.

djr33
11-14-2007, 01:08 AM
Humans can't just decide that they want to teleport to Mars.
Sure they can. And we can try and try, and perhaps explode ourselves. The computer can't, though.
And, we might even find a way. More likely, we'd find another way, such as launching ourselves into the air, then perhaps into space toward Mars. Maybe more likely we'd just explode on the way, but we could certainly try, and very much want to.


The computer might just not want to play backgammon.

Well, that's a clear example of the situation dictating desire, dictating free will. The computer has free will based on its environment.
Wait, which discussion is this again?
I should have merged the two the moment I saw them :p I thought about it... then didn't. Now, it would just be entirely confusing.

Trinithis
11-14-2007, 02:17 AM
I do not think something has to be alive to have a mind.

I would say that the key component of minds is the ability to "process things". Note, this does not require input to bootstrap the mind, although input may be required, as in the case of computers. Perhaps even humans. I don't believe there is anything that (physically) exists and has a mind that does not require input, but that does not prevent a theoretical mind with such a property.

In any case, I would say that computers have minds, and so do humans. I believe it's just that a human mind has more capabilities, though I cannot prove this.

Despite all this, I think people are confusing "mind" with something that is more than a mind, such as something with the capacity to feel, have emotion, have free will, etc . . . the ability to think. Here, I don't believe thinking is the same as processing. I can't claim that humans have this quality, although I believe they do. In fact, I don't think anyone can logically conclude that humans (or computers) have such a quality. How do I know that another person has feelings? The only thing I know is what I know. Descartes' "I think therefore I am" only applies to me, and not you (so to speak).

I forget the psychology term that describes when children begin to "realize" that other people are "people", but that's something that this thread made me think of.

PS: It's not the Terminator; it's the Governator.

djr33
11-14-2007, 02:24 AM
1. Your definition of a mind is weird. Is a calculator a "mind" then? An abacus?
Do you differentiate between "brain" and "mind"?
I suppose I see a brain in a computer, sure, but not a mind.
Then again, a brain is sorta related to the material it is made of, and how it functions.

2. Don't make fun of my state. Until I move. Nah. Go ahead. It's pathetic, isn't it?
I think of him as the scab on my wrist from when I fell on my bike. Y'know, good talking point in conversations, but kinda tragic and depressing to look at. Hopefully will heal soon. And still surprised about it in the first place. Things that stupid don't usually happen.

jscheuer1
11-14-2007, 05:21 AM
Cogito, ergo sum.

Johnnymushio
11-14-2007, 05:30 AM
well i don't know why you all ignored the chinese room story, as it's the most famous analogy concerning this argument.

anyways, if you say anything that processes information is alive, then rocks, grass, everything all think.

a rock can store the information that is heat, therefore the rock can think.

djr33
11-14-2007, 05:47 AM
Say "Soy estúpido."

Haha. You just said you're stupid in Spanish.

So?

Yes, it's a program written by a person. And that's exactly what makes us different. The computer does only that; the person can choose more. Yep.

Trinithis
11-14-2007, 06:20 AM
Say "hello."

You just said "bye" in my made up language . . .

I think John is onto something with them computers evilly biding their time for plans of global armageddon and anarchy.

On a serious note, djr33, I didn't define mind per se. I just said that processing was a requirement. So no, I don't think an abacus has a mind. I haven't come up with a decent definition yet.

Johnnymushio
11-14-2007, 06:37 AM
I'll post some quotes later about how, in our lifetime, a revolution involving computers is going to happen.

The guy who wrote the article I'll quote from (once I dig it up) writes many programs, and he says that they reproduce themselves, change their own script to suit their needs, and so on, so much so that he can look at the script and be clueless as to why it is the way it is, or how the computer made it that way.

It's program evolution: millions and millions of program generations a second.
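The article isn't quoted yet, so here is only a hypothetical sketch of the general shape of such an evolutionary loop: a population of bit-string "programs" is mutated at random, and the fittest variants survive each generation. The target, fitness function, and all parameters below are invented for illustration.

```python
import random

random.seed(0)

TARGET = [1] * 20                      # hypothetical goal genome
POP_SIZE, GENERATIONS = 30, 200

def fitness(genome):
    # Count how many positions match the target.
    return sum(g == t for g, t in zip(genome, TARGET))

def mutate(genome, rate=0.05):
    # Each bit flips with a small probability: random "self-modification".
    return [1 - g if random.random() < rate else g for g in genome]

# Start from random genomes; each generation, keep the fitter half
# and refill the population with mutated copies of the survivors.
pop = [[random.randint(0, 1) for _ in range(20)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop.sort(key=fitness, reverse=True)
    survivors = pop[:POP_SIZE // 2]
    pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]

best = max(pop, key=fitness)
print(fitness(best))  # close to (often exactly) a perfect 20
```

Run long enough, selection plus random mutation drives the population toward the target without anyone writing the final genome by hand, which is one reason the resulting "script" can be inscrutable even to its author.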

Twey
11-14-2007, 07:00 AM
The cogito is inherently flawed. In saying "I think" it presupposes the existence of the self. A better phrase would be "there are thoughts," which doesn't imply the existence of anything that can be called a self.

Johnny, I didn't ignore your example, I replied to it.

djr33
11-14-2007, 07:02 AM
You can't define God, because you don't fully understand him/it. In the same sense, the mind is too vast to really define, but there is certainly something, if inexplicable as of yet, that distinguishes it from a computer. Whether or not they are both "minds", they differ within whatever bounds you want to set. That turns into a question of grammar, not of the actual things.

A computer is not a human brain. And a computer doesn't have the same organic qualities a brain does. Otherwise, they'd have taken us over already, unless, of course, they are a peaceful species :p

molendijk
11-14-2007, 10:45 AM
I disagree entirely. Once we've set it up, it can continue creating computers by itself for as long as it has supplies and power. It'll run out of both eventually, of course, but obtaining more is a problem of complexity, not innate impossibility. Again, currently; this isn't something that's innate to computers, they just happen to be at such a stage at the moment.

Besides, the same could be said of babies...

Computers cannot create new matter out of which other matter would emerge. They can only create new FORMS OF MATTER out of EXISTING FORMS OF MATTER. I mean, there is no cell duplication in computers.

Arie M.

jscheuer1
11-14-2007, 01:26 PM
With "I think, therefore I am," there is a posited self. Perhaps that is the test, or a fairly decent one. Once you have that, I don't care if you are a rutabaga; you are a conscious being.

As far as the evil machines biding their time . . . I don't think I ever said that. I did mean to invoke the rich history in science fiction of such things happening (conscious machines) with both their imagined good and ill outcomes.

Science fiction proves nothing; however, it is uncanny how many advances in science (often improperly or imprecisely depicted in some way) were first widely put before the public in that genre.

With the Chinese room even, there has been much speculation. At what point do the lines start to blur when the types and kinds of information proliferate beyond the ability of any reasonable person to be able to determine if the guy in the room knows the language or not?

That is to say, if you sent him another question, one he didn't have instructions for, the jig would be up. But what if he had all the answers?
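The rulebook in the Chinese room is, functionally, a lookup table, and "the jig would be up" is exactly a missed lookup. A toy sketch (the symbols and replies here are invented placeholders, not real Chinese):

```python
# A toy Chinese-room rulebook: pure symbol lookup, no understanding.
# The entries are invented placeholders, not real Chinese.
rulebook = {
    "symbol-A": "reply-1",
    "symbol-B": "reply-2",
}

def room(question):
    # An unanticipated question exposes the trick: there is no
    # understanding to fall back on, only the table.
    return rulebook.get(question, "<no instruction: the jig is up>")

print(room("symbol-A"))           # "reply-1" -- a convincing canned reply
print(room("symbol-unforeseen"))  # the lookup fails
```

"But what if he had all the answers?" amounts to asking whether an impossibly complete table would be distinguishable, from the outside, from understanding.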

I'm not sure if anyone else has mentioned this yet, but this question is very closely related to the 'free will' question in certain ways. If there is no free will, what's the difference between a deterministically programmed person's life and a deterministically programmed computer's?

BLiZZaRD
11-14-2007, 02:34 PM
Well, I don't know why you all ignored the Chinese room story, as it's the most famous analogy concerning this argument.


You must understand the difference in perception. If you put a computer in that room, it will output characters forever, until you remove it.

Put a human in that room and he will either get hungry and leave, or learn Chinese and start replying from his own knowledge, not the throughput method the computer must maintain.

If you put me in that room and say, "When this character comes in, say this," I will most likely drop the commands you handed me and send the wrong message back. So when the input comes through that says "are you alive," I would inevitably reply "Mars has no time travel."

A computer, on the other hand, will never make a mistake unless it is programmed to make that mistake, in which case it wasn't a mistake.

djr33
11-14-2007, 07:12 PM
Well stated, Blizzard. Eventually the man will ask why he's doing it. The computer wouldn't.

jscheuer1
11-14-2007, 07:23 PM
I disagree about a computer in the Chinese room, it is a mixed metaphor. The Chinese room requires a person ignorant of the language. Change that and you are changing the meaning of the metaphor. If you want to put a computer in the room and compare it to a person in the room, you then begin down the road of what it would be like to program a computer to act like a person, not the road of what it would be like to program it to recognise one particular language.

There are other examples in fiction; however, the best known is Data from Next Gen. He is programmed to act like a human and, whether you believe him sentient or not, at times acts more human than a human, sometimes less, almost always more humanely.

Twey
11-14-2007, 07:58 PM
Computers cannot create new matter out of which would emerge other matter. They can only create new FORMS OF MATTER out of EXISTING FORMS OF MATTER. I mean, there is no cell duplication in computers.
Cell duplication doesn't create new matter either; it only reformulates existing forms of matter -- that's why we have to eat.

With "I think, therefore I am," there is a posited self. Perhaps that is the test, or a fairly decent one. Once you have that, I don't care if you are a rutabaga; you are a conscious being.
So if a computer said to you, "I think, therefore I am," you would be prepared to accept that it had a mind?

Well stated, Blizzard. Eventually the man will ask why he's doing it. The computer wouldn't.
Not if he's a particularly obedient man, or has been told beforehand that he mustn't ask why.

djr33
11-14-2007, 08:02 PM
If the computer meant something by that, yes. But not if it was simply programmed to do that.

Particularly obedient or not, even slaves [ie, completely controlled] and idiots [ie, generally not aware that their position is unfortunate] will eventually revolt.

BLiZZaRD
11-14-2007, 08:20 PM
But they don't have to revolt. A man will eventually make a mistake, however slight, regardless of his obedience. You can be 100% obedient, do everything you are asked when you are asked, just like a computer. But you, as a human, will make a mistake.

The only mistake a computer can make is one it is programmed to make, and by the computer's concept of obedience that would not be a mistake.

Twey
11-14-2007, 08:30 PM
The argument against that is that the human is programmed to make that mistake too, intentionally or not, by past experiences and current conditions.

djr33
11-14-2007, 08:30 PM
Doesn't have to, but will. For every moment they don't revolt, it's a moment closer to revolting. Or dying, I guess.

jscheuer1
11-14-2007, 09:07 PM
Computers have been known to die.

BLiZZaRD
11-14-2007, 09:10 PM
Computers have been known to die.

Yeah... I have 4 in my family room that my wife keeps *****ing at me to move... better do that soon.

djr33
11-14-2007, 11:04 PM
Killed, worn out. But in a well-controlled environment, they'd continue for a very, very long time. Sure, eventually the electrons would wear down the wires, but you could also replace each part as time went on.

Twey
11-14-2007, 11:40 PM
As with humans. If we had the technology to make adjustments to the human body precisely enough, I expect that the same would be possible. In a well-controlled environment, humans continue for a very very long time too -- up to 120 years to date.

djr33
11-14-2007, 11:52 PM
126.

The oldest computer is.... wait... darn it.


I think we're getting too into this debate. Computers aren't people, people :p

They might be smarter than most people, but that's another story.

molendijk
11-14-2007, 11:52 PM
Cell duplication doesn't create new matter, it only reformulates existing forms of matter -- that's why we have to eat...

That's hairsplitting! If you take my argument in that sense, then there will never be new matter, and there has never been new matter!

Arie M.

djr33
11-14-2007, 11:52 PM
Well, that's exactly right. Basic property of physics; same with energy.

molendijk
11-15-2007, 12:13 AM
Well, that's exactly right. Basic property of physics; same with energy.

Is that the truth and nothing but the truth? Does modern physics deny the possible existence of an empty universe 'at the beginning'? If the universe was empty, there must have been a moment of emptiness --> matter.

But that was not the question we were discussing. The question was whether or not the birth of a new baby (and biological events like that) is essentially the same thing as computers producing computers in a factory.

Arie M.

Twey
11-15-2007, 12:27 AM
That's hairsplitting! If you take my argument in that sense, then there will never be new matter, and there has never been new matter!
Quite right. It's not hairsplitting at all, since the assumption that humans can somehow create matter appears to be at the heart of your argument.

Rockonmetal
11-15-2007, 12:28 AM
Just a fact I learned in science class a while back *6th grade, or was it 7th grade... 7TH!* Viruses -- the common cold, HIV, stuff like that -- ARE NON-LIVING:
What makes up a living thing:
Cells
Chemicals of life
Uses energy
Grows
Responds to surroundings
Reproduces

What viruses do:
Cells *NRRR!*
Chemicals of life *NRRR!* (buzzer sound)
Uses energy *NRRR!*
Grows *NRRR!*
Responds to surroundings *NRRR!*
Reproduces *DING!*
That's just a little fact... also, when you get a virus, *DON'T READ IF YOU'RE EASILY SCARED!!!*


This is just a safety box so that people who are seriously scared don't read it by accident... you might have to scroll down, or do something...
...
...
...
...
...
Keep scrolling if you want to know...
...
...
...
...
...

...
...
...
...
...

...
...
...
...
...

...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
...
Mods may remove this if you feel like it, I just didn't want anyone to go "AH!! YOU RUINED MY LIFE, NOW THATS ONE MORE THING I AM PARANOID ABOUT!!!!!
...
...
...
...
...

...
...
...
...
...

...
...
...
...
...

...
...
...
...
...

Well, see, a virus is non-living, and you can't kill something that's dead, because it already is. But a virus isn't dead -- it has never lived. It's a strange alien phenomenon: say it came from the moon, it came from soda, it came from the closet. It's untreatable; the medicine you take to make your stuffy nose go away is just to make it feel better. It might give your body vitamins and stuff to tell it "HEY! WHAT ARE YOU DOING SITTING ON YOUR BUTT LIKE A LAZY BUM! GET OUT AND GO FIGHT THAT STUFFY NOSE!"
So basically anyone can die from the common cold.
Oh!
And viruses look very much like forks...
Link below of one...
http://pbd.lbl.gov/microscopies/phage.jpg


Yeah, just to let you know, that's really cool to me...

Twey
11-15-2007, 03:10 AM
Indeed. The scientific definition of "alive," however, is different to the philosophical one (the scientific definition is arbitrary).

djr33
11-15-2007, 04:06 AM
How about the condition of requiring love?
Sure, someone can live without it, but it's not preferred. No similar relationship with computers.

BLiZZaRD
11-15-2007, 05:03 AM
How about the condition of requiring love?
Sure, someone can live without it, but it's not preferred. No similar relationship with computers.

Not if Microsoft gets its way....

Now, on the point of humans creating humans and computers in factories building computers: how can you even compare? Humans feel the need/want to find another human to mate with. The computer gets turned on (pun intended) with a switch and runs a setup program.

The human must use personality to woo another human (or a lot of beer); the computer gets plugged into a LAN.

The humans then go into making of the love, which can result in offspring: another human.

The computer program gets run through the LAN and tells a machine to move left, pick up panel, move right, put down panel, insert screw here (another pun intended), move conveyor belt, repeat. All the while another machine gets the first panel and adds a second, and so on and so on.

Now until humans are held captive next to conveyor belts and the whole community creates one baby piece by piece there is no comparison.

The humans know what they are doing. You could reverse the order of the computer's program and now it is taking apart another computer. You can't make love backwards and get un-pregnant (although it's fun to try).

djr33
11-15-2007, 05:12 AM
What, then, is different about the current state of the world and The Matrix? Simple answer: everything. We're human. They're computers. Perhaps in time they'll become more similar, but I find any argument calling them the same now very strange.

Twey
11-15-2007, 08:11 AM
How about the condition of requiring love?
Sure, someone can live without it, but it's not preferred. No similar relationship with computers.
This is entirely dependent upon the person. I've known several people with unusual mental conditions such that they are unable to feel love.

BLiZZaRD, djr33, in these arguments you appear to be attempting to discern whether computers are human. Obviously, they're not, and as you say, anyone who instinctively feels that they are may wish to see someone about that. The question wasn't whether computers were human, or even alive, but whether they had minds. Neither of the former is necessarily a prerequisite to the latter.

djr33
11-15-2007, 08:22 AM
Well, I doubt they truly can't feel it; rather, they experience it differently, but still in a greater way than a computer, even if unable to express it. Perhaps they are just computers, then?

Well, a mind is the part of the human body that allows us to feel, think, and act human. If a computer has a mind, then it is a human or, I suppose, another sort of animal. Man's new best friend.

Twey
11-15-2007, 08:37 AM
Well, I doubt they truly can't feel it; rather, they experience it differently, but still in a greater way than a computer, even if unable to express it.
No. There are certain mental states in which it is impossible to feel or desire love of any kind.

Well, a mind is the part of the human body that allows us to feel, think, and act human. If a computer has a mind, then it is a human or, I suppose, another sort of animal. Man's new best friend.
Both of those definitions include living. Whether computers can be considered alive or not is a separate question (certainly not by the scientific definition, as was pointed out earlier), but there's no reason to suggest that something can't have a mind if it isn't alive.

djr33
11-15-2007, 08:59 AM
No. There are certain mental states in which it is impossible to feel or desire love of any kind.
Then they are really something subhuman, to some degree. But even plants can feel love, apparently: sing to them and they grow. A person in a coma may have no ability to recognize it or respond, but love has been shown to improve health. They may be unchangeable, but it still may be felt.

How can something that is not alive think? In what sense can it want? Think? Feel? Decide? It wouldn't exist without people. Is a piece of paper with my words on it "thinking"? The computer is just computing extended thoughts of its programmers; it is helping them think, processing data, not having any original ideas or deciding on any paths for itself without permission and prompting from the user. Even if it was set to do this, it would be doing exactly what it was told.

BLiZZaRD
11-15-2007, 09:10 AM
No. There are certain mental states in which it is impossible to feel or desire love of any kind.


But are exceptions to the rule the real issue here? We can all point out abnormalities on either side. I have a computer that can't feel; it has no motherboard and the power supply is fried. But it still "exists" in my house, so it is capable? No.

The problem with most debates is that the underlying issue gets piled under layers of -- for lack of a better word -- crap that hide the BASIC fundamentals of the debate.

The cat-in-the-box theory, for example. Why a cat? Because dogs bark and you can hear that? But cats meow, and I can hear that. No, because the theorist had to have something to hide the true meaning of the theory. Of course the cat is alive and dead. So are you; so am I. We are all made up of elements that are constantly dying. The question: is the cat alive when you don't look at it? Well, it isn't decided when the box is opened. It certainly isn't decided by the cat. The cat is incapable of deciding whether it is alive or dead, much the same way you or I cannot. (Apologies for mixing threads.)

Adding layers to a basic premise, either to hide or to support your intentions, doesn't take away the basic fundamental -- in this case, the fact that a computer does not have a mind.

molendijk
11-15-2007, 09:11 AM
Quite right. It's not hairsplitting at all, since the assumption that humans can somehow create matter appears to be at the heart of your argument.

When I said 'hairsplitting' (no offense, of course!) I wanted to say that something that was meant philosophically (but I should have said that) was suddenly viewed purely physically.
What I meant / mean is this. Biological life exists and evolves all by itself (I'm ignoring the mystery of the start of everything; perhaps there is no beginning). The conditions must be right for it to exist, I agree, but those conditions are merely part of the biological life system: no water --> no biological life (probably); extreme temperatures (say minus 1000.000) --> no biological life. One could argue that 'God' does NOT necessarily belong to that system; that God is only there (for certain living creatures) to give sense and meaning to existence. So you can have: NOT (no God --> no biological life). Which is logically equivalent to (you can have): 'Biological life & no God'.
Computer life does NOT exist and evolve all by itself. There must be a certain biological life form (humans, in our specific case) in order for computers to survive. They canNOT duplicate / reformulate themselves without their (human) gods. So you don't have: 'Computers & no humans' ('computers & no biological life form').

Arie M.

Twey
11-15-2007, 10:25 AM
When I said 'hairsplitting' (no offense, of course!)
None taken.

I wanted to say that something that was meant philosophically (but I should have said that) was suddenly viewed purely physically.
Ah -- then I apologise. It sounded as if you meant it physically.

Computer life does NOT exist and evolve all by itself. There must be a certain biological life form (humans, in our specific case) in order for computers to survive. They canNOT duplicate / reformulate themselves without their (human) gods. So you don't have: 'Computers & no humans' ('computers & no biological life form').
You ignore the problem of the beginning of life, which would usually be fair enough, but it's pertinent here because the computers we're discussing are in such a state. If we put computers in a factory and programmed them to create more computers that were programmed to create more computers, ad infinitum, we would be creating a situation that is presumably similar to the very beginnings of life.

extreme temperatures (say minus 1000.000)
Heh, that's scientifically impossible :)

So you can have: NOT (no God --> no biological life). Which is logically equivalent to (you can have): 'Biological life & no God'.
Hm... I think this is a fallacy, but I'm too tired to work it out at the moment. I shall leave this little red edit box here to attract my attention when I come back to it after a good sleep. Then again, to say that human life depends on no other is a fallacy. We couldn't live, for example, if we didn't have plants. Does that make plant life our "gods"? Does it mean we aren't truly alive or don't have minds?

But are exceptions to the rule the real issue here? We can all point out abnormalities on either side. I have a computer that can't feel; it has no motherboard and the power supply is fried. But it still "exists" in my house, so it is capable? No.
I'm not sure I understand you here. The equal and opposite example would be a computer capable of love.

The cat-in-the-box theory, for example. Why a cat? Because dogs bark and you can hear that? But cats meow, and I can hear that. No, because the theorist had to have something to hide the true meaning of the theory. Of course the cat is alive and dead. So are you; so am I. We are all made up of elements that are constantly dying. The question: is the cat alive when you don't look at it? Well, it isn't decided when the box is opened. It certainly isn't decided by the cat. The cat is incapable of deciding whether it is alive or dead, much the same way you or I cannot. (Apologies for mixing threads.)
Er... I think you've missed the point of Schrödinger's cat a little :p It's not about life on a microscopic level. The cat as a whole is both alive and dead before being observed simply because those are two possible states in which it could be when observed. If you want another rather more bland example: an experimenter picks a ball out of a bag of red and blue balls at random without looking at it; until the experimenter or someone else looks at it, it's both red and blue.

Adding layers to a basic premise, either to hide or to support your intentions, doesn't take away the basic fundamental -- in this case, the fact that a computer does not have a mind.
You make it sound as if all reasoning that a computer does/could have a mind is nothing but smoke and mirrors to try to hide the "obvious" fact that they don't :p

djr33
11-15-2007, 10:37 AM
You ignore the problem of the beginning of life, which would usually be fair enough, but it's pertinent here because the computers we're discussing are in such a state. If we put computers in a factory and programmed them to create more computers that were programmed to create more computers, ad infinitum, we would be creating a situation that is presumably similar to the very beginnings of life.

At the beginning of life, organisms didn't have "minds"... just a single cell replicating until it died.
Plus, the question is very simple: DO they HAVE minds, not WILL/COULD they have minds.

In the future, a lot may be possible. I may be the first man to meet god face to face. But that certainly proves nothing in this current debate.


I have a new question. Does a character in a video game have a mind? Does a simulation? Can a simulation?

BLiZZaRD
11-15-2007, 10:50 AM
If you want another rather more bland example, an experimenter picks a ball out of a bag of red and blue balls at random without looking at it; until the experimenter or someone else looks at it, it's both red and blue.

But it is not. It is either red or it is blue. The ignorance of the observer is irrelevant. If the observer guesses, because there are only two options, he has a chance of being correct. I understand the theory behind it; I just don't agree. The cat, the ball, anything else either is or isn't, regardless of the viewpoint of others.

You could walk up to a woman and say, "I am guessing you are a virgin, or at least you used to be." Technically you would be right, but she either is or is not. She cannot be both; nor can the cat be alive and dead; nor can the ball be red and blue. It either is, or it is not. You can guess the ball is red without looking, and you could be right; but if you guess red, open your eyes, and the ball is red, it is not red because it could have been blue, but because it was red when you grabbed it.



You make it sound as if all reasoning that a computer does/could have a mind is nothing but smoke-and-mirrors to try to hide the "obvious" fact that they don't

Then I explained myself well :D I wouldn't say smoke and mirrors, though. You have to have a basis of comparison. Does item X have a mind/heart/liver/foot disease/whatever? Well, first you have to define what .././.././../ is. A human has a mind = true. Okay, mind defined; now, does item X have a mind (as compared to a human)? I say no. You then say, well, there is this one human whose mind is different from the others. You just changed your definition of mind. Now I must reevaluate your definition. Again I say no. You say you know this one item X that has another attribute. Again, we changed a static into a variable, and again we must start over. This is my point: adding more and more "fluff" to the argument doesn't validate either side; it merely starts the question over.



You ignore the problem of the beginning of life, which would usually be fair enough, but it's pertinent here because the computers we're discussing are in such a state. If we put computers in a factory and programmed them to create more computers that were programmed to create more computers, ad infinitum, we would be creating a situation that is presumably similar to the very beginnings of life.

The problem isn't with the beginning of life. The problem is that two humans START the process of life; the life being created does the rest. If you can show me an example of a computer giving another computer a screw, and this computer housing the screw and waiting, and the screw multiplying into two screws, then four screws, then an LCD panel, then Ethernet cables, and putting itself together from the inside out, then we have a debate. Until then, you have a program of assembly versus autonomous procreation.

molendijk
11-15-2007, 10:58 AM
You ignore the problem of the beginning of life, which would usually be fair enough, but it's pertinent here because the computers we're discussing are in such a state. If we put computers in a factory and programmed them to create more computers that were programmed to create more computers, ad infinitum, we would be creating a situation that is presumably similar to the very beginnings of life.
What I meant is that, once biological life has started, it can go on by itself. The point is that once computer life has started, it canNOT go on by itself. Of course, I cannot prove that. The opposite cannot be proven (correct English?) either.

Heh, that's scientifically impossible :)
I know that minus 1000.000 is impossible. It's just there for the sake of the argument. Fill in whatever you want.

I think this is a fallacy, but I'm too tired to work it out at the moment. I shall leave this little red edit box here to attract my attention when I come back to it after a good sleep.
NOT (no God --> no biological life) is logically equivalent to (you can have): 'Biological life & no God'. That's no fallacy. Let p be God and q be biological life. Then we have NOT(NOTp --> NOTq), which is equivalent to NOT(q --> p), which is equivalent to NOT(NOTq v p), which is equivalent to q & NOTp (De Morgan).
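For anyone who prefers brute force to algebra, the equivalence can be machine-checked over all four truth assignments (a minimal sketch; p stands for God and q for biological life, as above):

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b.
    return (not a) or b

# Check NOT(NOT p -> NOT q) == (q AND NOT p) for every assignment.
for p, q in product([True, False], repeat=2):
    lhs = not implies(not p, not q)
    rhs = q and not p
    assert lhs == rhs, (p, q)
print("equivalent in all four cases")
```

If the two formulas ever disagreed, the assertion would fail and name the offending assignment.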

Then again, to say that human life depends on no other is a fallacy. We couldn't live, for example, if we didn't have plants. Does that make plant life our "gods?" Does it mean we aren't truly alive or don't have minds?
Plants are part of the biological life system. God is not needed here. Electricity etc. is part of the computer life system. HUMANS ARE NEEDED HERE.

Arie M.

djr33
11-15-2007, 11:03 AM
Blizzard, I think Twey oversimplified the ball example. The cat IS both dead and alive because, at the moment of the half-life, it is somewhere in the middle... I think. The example is still confusing to me, to be honest.
HOWEVER, the electron is the main point there. It is a particle and it is a wave. A particle cannot be a wave and a wave cannot be a particle, but an electron has been proven to be both. Not quite the "I don't know what color it is" example.
The cat is an example created to explain the electron to those who were confused, so don't take it completely at face value, I guess.

BLiZZaRD
11-15-2007, 11:05 AM
I will admit it is a new one to me and I will need to study it a bit, but I tell you what: you stick a cat in a box with a radioactive isotope for any half-life, it WILL be dead... or have 3 heads. :D

djr33
11-15-2007, 07:06 PM
Brings a whole new meaning to Siamese cat. Or perhaps that was in bad taste.

BLiZZaRD
11-15-2007, 07:09 PM
:D I don't think cat tastes too bad.

djr33
11-15-2007, 07:13 PM
Tastes like chicken.


[I think we've finally gotten to the point of no return in the argument, but someone may prove that wrong.]

BLiZZaRD
11-15-2007, 07:18 PM
Actually it is a little more like pork (although I have never specifically tasted Siamese), little chewy too.

[I agree]

Twey
11-15-2007, 09:16 PM
But it is not. It is either red or it is blue. the ignorance of the observer is irrelevant. If the observer guesses because there are only two options he has a chance of being correct. I understand the theory behind it, I just don't agree. The cat, the ball, anything else, either is or isn't, regardless of the view point of others.That's nice, but many respectable scientists and one heck of a lot of experiments say you're wrong :p Quantum mechanics are quite unintuitive (in fact, Einstein rejected the idea just because it seemed too daft), but nowadays we have some very sophisticated equipment and it's starting to look as if it's correct. There are even practical applications: quantum computers do exist (http://en.wikipedia.org/wiki/Quantum_computer), they're just currently very big and slow.
I wouldn't say smoke and mirrors, though. You have to have a basis of comparison. Does item X have a mind/heart/liver/foot disease/whatever. Well, first you have to define what .././.././../ is. A human has a mind = true. okay, mind defined, now does item X have a mind (as compared to a human) I say no. You then say, well there is this one human who's mind is different from the others.You're missing a vital step out of the process: defining a mind and your reasoning behind saying item X doesn't have a mind. That was what my example was attempting to clarify (not hide :)).
You just changed your definition of mind.No, I was attempting to change your definition of mind. My own has remained largely unchanged from the start of this discussion.
What I ment is that, once biological life has started, it can go on by itself. The point is that once computer life has started, it canNOT go on by itself. Of course, I cannot prove that. The opposite cannot be proven(correct English?) either.Correct English, but as you say completely unproveable.
NOT (no God --> no biological life) is logically equivalent to: (you can have): 'Biological life & no God'. That's no fallacy. Let p be God, and q be biological life. Then we have NOT(NOTp --> NOTq), which is equivalent to NOT(q --> p), which is equivalent to NOT(NOTq v p) (Morgan), which is equivalent to q & NOTp.Yep, you're right.
Plants are part of the biological life system. God is not needed here. Electricity etc is part of the compure life system. HUMANS ARE NEEDED HERE.I'm talking about humans specifically, not biological life as a whole (otherwise we'd have to bring organic/DNA computers into it). Humans (currently) require plants to survive. Computers (currently) require humans to survive. Where is the difference?
HOWEVER, the electron is the main point there. It is a particle and it is a wave. A particle cannot be a wave and a wave cannot be a particle, but an electron has been proven to be both. Not quite the "I don't know what color it is" example.It is the same. I can't explain it particularly well either, not being a quantum physician, but it's the same concept: both are illustrations of Heisenberg's uncertainty principle (http://en.wikipedia.org/wiki/Uncertainty_principle).
Tastes like chicken.o.@

BLiZZaRD
11-15-2007, 09:34 PM
That's nice, but many respectable scientists and one heck of a lot of experiments say you're wrong. Quantum mechanics is quite unintuitive (in fact, Einstein rejected the idea just because it seemed too daft), but nowadays we have some very sophisticated equipment and it's starting to look as if it's correct. There are even practical applications: quantum computers do exist, they're just currently very big and slow.

And the computer proves that a blue ball is red and blue until I look at it? The computer was programmed by a human. If this human believes the ball is red and blue until seen, the computer is then programmed to find such an example. Now... if you want to write a program where there are 3 balls in a box and they are either red or blue, and the program randomly generates the actual color when a button is clicked, then sure, they can be both, because the computer hasn't decided until the button is pressed. To apply this to a real-world application where you place 3 balls in a bag and I pick one out: these balls are not going to change colors inside the bag. They are not both; they are one or the other.



You're missing a vital step out of the process: defining a mind and your reasoning behind saying item X doesn't have a mind. That was what my example was attempting to clarify

First, your example brought in variable Y, where a human mind has less than the normal capabilities, so you didn't clarify: you changed the norm, which was your definition of mind in regard to the debate question. My intentional omission of why I said no is, at this point, irrelevant. I had already expressed why item X does not have a mind, and your retort was to influence the definition of "mind" with another variable previously not considered.



Humans (currently) require plants to survive.

Wrong. Humans require food, water, shelter and oxygen to survive. The fact that plants give off the oxygen (and food/water) does not require us to use them. There are no plants in an astronaut's space suit, yet they can be outside the shuttle and survive. Likewise, a computer does not require a human to "survive": you can put a computer in a room all by itself for 6000 years with no human interaction, return, and the computer will still be there (under controlled conditions it would remain in the same state). You put a human in the same room with no interaction for 6000 years and the human would be dead upon your return. If you turn a computer off, it sits and gathers dust. You turn a human off, it dies. Wait long enough and you can still turn the computer back on; you can't turn a human back on.



It is the same.

Again, in a real-world application it either is or is not, but not both at the same time.

Twey
11-15-2007, 09:44 PM
And the computer proves that a blue ball is red and blue until I look at it? The computer was programmed by a human. If this human believes the ball is red and blue until seen, the computer is then programmed to find such an example. Now... if you want to write a program where there are 3 balls in a box and they are either red or blue, and the program randomly generates the actual color when a button is clicked, then sure, they can be both, because the computer hasn't decided until the button is pressed. To apply this to a real-world application where you place 3 balls in a bag and I pick one out: these balls are not going to change colors inside the bag. They are not both; they are one or the other.I believe it's proven more mathematically and by actual observation of particles, but again, you'd have to ask a physicist. I do suspect, however, that things such as quantum computers probably wouldn't work if all the principles upon which they are based were completely wrong.
First, your example brought in variable Y, where a human mind has less than the normal capabilities, so you didn't clarify: you changed the norm, which was your definition of mind in regard to the debate question. My intentional omission of why I said no is, at this point, irrelevant. I had already expressed why item X does not have a mind, and your retort was to influence the definition of "mind" with another variable previously not considered.Variable Y was merely an example of someone I, at least, would consider to have a mind. I was attempting to clarify your definition of "mind" by getting your opinion on such a person. There is no definition of mind that's only for this discussion; I was trying to work out, or possibly alter, your actual definition of mind.
Wrong. Humans require food, water, shelter and oxygen to survive. The fact that plants give off the oxygen (and food/water) does not require us to use them. There are no plants in an astronaut's space suit, yet they can be outside the shuttle and survive.Not for long.
Likewise, a computer does not require a human to "survive": you can put a computer in a room all by itself for 6000 years with no human interaction, return, and the computer will still be thereArie's point was that a computer requires humans to provide it with a constant supply of power. Without electricity it dies pretty quickly -- with a UPS or similar device (the equivalent of a space suit), the computer can survive for a while, but, like the astronaut in the space suit, will eventually die.
You put a human in the same room, with no interaction for 6000 years and the human would be dead upon your return. If you turn a computer off it sits and gathers dust. You turn a human off, it dies. Wait long enough and you can still turn the computer back on, you can't turn a human back on.With current technology. Humans are evidently much more complex creatures than computers, and obviously would require far greater amounts of delicacy and precision. However, even with our current technology there are examples of humans who have been clinically dead but nevertheless revived -- there was a man in Australia, I seem to recall, who was dead for a quarter of an hour.
Again, in a real-world application it either is or is not, but not both at the same time.I hate to repeat the same example, but quantum computers are not a real-world application?

BLiZZaRD
11-15-2007, 09:57 PM
but quantum computers are not a real-world application?

The computer itself is, yes; the program it is running is not -- it is a program. Unless the computer physically places 3 balls in a box, pulls one out, and checks whether it is red or blue... but I doubt that is happening.

The programs the computers are running simplify the process and run it over and over again (Tic-Tac-Toe a la "War Games"), but the truth remains: in a computer program, the randomness is not applied until the action is checked; in real life, the balls either are red or they are blue -- before, during, and after being placed in the bag -- they don't change.
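The "randomness applied only when checked" idea is easy to demonstrate. A minimal Python sketch (the class and method names are invented for illustration): the ball's colour is generated on first observation and then stays fixed, unlike a real ball, which was one colour all along:

```python
import random

class LazyBall:
    """A ball whose colour is not decided until it is first observed."""
    def __init__(self, rng=None):
        self._rng = rng or random.Random()
        self._colour = None  # genuinely undecided, not merely unknown

    def observe(self):
        # The "decision" happens here, at check time -- the point of the post.
        if self._colour is None:
            self._colour = self._rng.choice(["red", "blue"])
        return self._colour

ball = LazyBall()
first = ball.observe()
# Repeated observation is stable, like a real ball after you've looked:
assert ball.observe() == first
```

Before `observe()` is ever called, the program really has no colour stored anywhere -- which is exactly the difference from a bag of physical balls.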

Twey
11-15-2007, 10:27 PM
You've applied one of my responses to a question it wasn't answering (and to which it wasn't really relevant). That quote was in response to your claim that in "real-world" applications the principles of quantum mechanics aren't true. Since they're used to create quantum computers, which are very much real-world applications, I strongly deny that claim.

The issue you posed here wasn't actually directly answered by me (I don't have nearly enough knowledge or understanding of the concepts involved to make a reply which wouldn't cause every physicist reading this to gnash their teeth at me), but I attempted to skip over it somewhat by saying that even though I've no idea how they counter this issue, they must do it somehow, since the principles seem to hold true (as in quantum computing).

If forced to hazard a guess, I'd say that probably this was all thought out mathematically long before anyone performed any form of experiment on it, computerised or otherwise.

djr33
11-15-2007, 10:50 PM
not being a quantum physicianAh, one of them famous proton healers.

Well, I can see both sides to the argument, but I think that part of the problem with the example, Twey, is that quantum mechanics and classical physics differ -- one is large scale and has quite different properties from the other. In the world of quantum mechanics it's quite possible for one object to pass through another; at our scale it is not. Looking at a drop of water is one thing; being crushed under 1,000 meters of seawater is another. They certainly are part of the same overall equation, but they are looked at differently, especially by those not trained to comprehend it completely -- and it's clear that no one completely understands quantum mechanics yet anyway (nor likely ever will).


We must first define what a mind is, and since it's just a word, there's no real way to prove this in the argument; any discussion of whether computers have minds is premature until then. Clearly, they do more than a rock and are therefore more similar to a person, closer to having a mind; however, they are clearly less similar to a human than a cat is, and neither is a human. Wherever "mind" falls on that scale determines whether a computer has one, but it's not yet provable; in fact, the only argument worth having is over what a mind is. Once that is determined, just slap it on the chart and you've got your answer as to whether a computer has a mind.


I think a more interesting question is "Which is smarter, a computer or a human?"
Number crunching goes to the machines, but creativity to the human.

In chess, they were about evenly matched (Deep Blue vs. Kasparov), though the computer was, I suppose, limited to some extent by the capability of its programmers. It did win, though, barely. Interesting.

And now back to the point at hand:
Computers are not and will never be creative. Humans are. To me, this is what a "mind" is, more than just a "brain."
A computer can simulate creativity, but it will always just be analyzing. I find myself doing this some; it's possible with art-- "what has never been painted? oh, I'll paint that"; "Hmm... people like blue? Well, let's make a blue statue... must be a good thing"; and that works, sometimes. A computer could certainly do that.
However, a computer can't think outside its box (pun quite intended).
Program a computer to come up with the best answer, and it will. But it won't reveal that the question was wrong in the first place, or that there's just a better problem to be solved. People will.
The downside, though, of creativity, is mistakes. Number crunching is precise-- nothing's going to go wrong if you just crunch the numbers and find the answer; but you can get a more interesting, if flawed, answer with creativity.
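The "simulated creativity" described above -- recombining known inputs without ever questioning them -- can be sketched in a few lines of Python (the word lists and function name are made up for illustration):

```python
import random

# Known inputs: the program can only recombine these. It will never
# invent a new colour, a new subject, or question the task itself.
colours = ["blue", "red", "golden"]
subjects = ["statue", "mountain", "painting"]

def simulate_idea(rng):
    """Recombine known concepts into a 'new' idea -- analysis, not creation."""
    return f"a {rng.choice(colours)} {rng.choice(subjects)}"

rng = random.Random()
print(simulate_idea(rng))  # e.g. "a golden statue"
```

Every output looks novel, but the space of possible "ideas" was fixed the moment the lists were written -- which is the box the program can't think outside of.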



Additionally, before we define computers as having minds, consider other examples:
Does a Gorilla have a mind? (Consider Koko, if I'm spelling that right.)
A Dolphin? (See scientific studies with them-- quite promising)
Mice? [Hitchhiker's Guide joke, if you happen to catch it.]
A cat?
A fish?
A bee?
A worm?
And, very interesting-- a jellyfish? (Can a mind be made of many parts? -- Does society have a mind? A colony of ants?)
Amoeba?
Viruses?
...A computer? And does the computer fall somewhere above viruses? amoebas? Higher cognitive [yes, said that on purpose, half as a joke] ability than a Gorilla? Human?

Twey
11-16-2007, 12:43 AM
A computer can simulate creativity, but it will always just be analyzing. I find myself doing this some; it's possible with art-- "what has never been painted? oh, I'll paint that"; "Hmm... people like blue? Well, let's make a blue statue... must be a good thing"; and that works, sometimes. A computer could certainly do that.What's to say that's not all you do? Perhaps all your "creative" ideas are based on analysis of previous physical input. As Descartes said, we never really create new ideas; we only assemble old ones in different ways. I know what a mountain looks like; I know what gold looks like. Therefore I can imagine a golden mountain. But is imagining a golden mountain "creative"? I think perhaps the adjective "creative" itself is meaningless. People we consider creative are just finding new ways to analyse things.

djr33
11-16-2007, 03:05 AM
Creativity can be seen as infinitely many computations of infinitely many variables [where "infinitely" may mean "innumerable"], perhaps.
However, that's where computers and people differ.

Socialization is a complex art, and basically a logic puzzle with millions of variables. Many people can do this very well; a computer would fail. In the same sense, consider a logical, "smart" person. That person can do math, whereas the social type cannot [going with stereotypes which are true in many cases]. Really, though, what's solving for X, when the social person can solve a million-variable system of equations just by knowing how to react?

It's this intuition and creativity [whether genuine or simulated] that a computer cannot do.


A computer can't guess at an answer. It either knows, or produces an approximate answer through limited computation. We can just guess, though. It might be wrong, but it's likely something toward the right answer, without actually calculating anything.

jscheuer1
11-16-2007, 04:36 AM
Hopefully by now (I haven't been following the discussions too closely), we have established that free will or determinism, computer self awareness or lack thereof - are (in addition to whatever else they may be) quantum states. They cannot be truly known until observed, and in the act of observing them we are removed from being able to report on either subject in any general way, limited to the specific instance we are currently observing.

Now a new issue has arisen, creativity. I like to think of it as "that which it is easier to do than not". It is at the very least a multi-state sort of thing. Parroting or recombining is not creative in the strictest sense. But if you make something, even just by following instructions, you have created it, so there is a semantic issue here as well. However, there do come times at which things are repeated and/or recombined in a new context (like one that taps into a major theme of a generation) whereby they become something truly new and/or unique, perhaps at the same time establishing a new paradigm, the whole becoming more than the sum of its parts. Like the other issues under discussion here of late, you can know it when you see it or experience it, but it isn't easily defined in concrete terms.