Anti-stance
Anti-stance UberDork
2/5/15 6:45 p.m.

I read this article from a page I follow on FB. Probably the most interesting blog post I have ever laid eyes on. It is a pretty long read, but the way this guy writes, you will get sucked in. Apparently he even got noticed by Elon Musk for it.

Part 1

Part 2

I'm just curious where the hive lands on this subject.

¯\_(ツ)_/¯
¯\_(ツ)_/¯ HalfDork
2/5/15 8:01 p.m.

Honestly? The only other way I really see things going is we all die horrible prolonged deaths or kill each other before the AI overlord comes into existence, so I'm all for it. I think his timelines are a bit optimistic though (yes, even the "pessimistic" one).

I love waitbutwhy.

Giant Purple Snorklewacker
Giant Purple Snorklewacker MegaDork
2/5/15 8:06 p.m.

"AI has by now succeeded in doing essentially everything that requires ‘thinking’ but has failed to do most of what people and animals do ‘without thinking.'” --Donald Knuth

This is the tricky bit... all of our computer intellect comes from the part of our brain we think we understand pretty well: thinking. We have replicated our own logical analysis and in some instances been very successful at outperforming our own computational capacity. But thinking is just a program running on top of our essential functions. Perception is a program. The part of us we understand as "us" is a program to the brain. The parts we don't understand are much harder, because everything we know lives above that level. Try to kill yourself by holding your breath and you will discover that you are not your own master. The program cannot understand its own nature. It cannot see beyond its inputs. It's like the OS on your computer trying to understand the BIOS.

An interesting thought exercise... we ARE the product of an ancient AI development.

Giant Purple Snorklewacker
Giant Purple Snorklewacker MegaDork
2/5/15 8:34 p.m.

Next thought exercise... writing code that runs on the human brain platform. It's possible... the brain does it itself. A new task is hard at first. But then the brain makes something we call muscle memory, which has nothing to do with muscles. The task gets committed to code - the brain runs it and our thinking processes are freed of it. They can process new information. You need look no further than a bicycle for an example. How do you balance it? You don't. You could read a book while riding a bike.

GameboyRMH
GameboyRMH MegaDork
2/5/15 8:50 p.m.

AI's probably going to be bad for us, at least in the short term. Slight chance of that being because the AI itself is dangerous, much bigger chance of that being because it will be in the hands of selfish people. Read Marshall Brain's short story "Manna" for an example. And then read about how Amazon's warehouses work for a butt pucker.

I think a more immediate threat is the invention of an "enforcer droid." A soldier who has no empathy, remorse, or free will to question orders, immune to anything but heavy kinetic damage. That could be very, very bad and both the hardware and software are going to be ready much sooner than strong AI.

Anti-stance
Anti-stance UberDork
2/5/15 8:51 p.m.

In reply to Giant Purple Snorklewacker:

Right beneath that quote from Knuth, it talks about ways it may be possible to bridge that enormous gap from computational speed to actual thinking and intelligence.

"This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

The idea is that we’d build a computer whose two major skills would be doing research on AI and coding changes into itself—allowing it to not only learn but to improve its own architecture. We’d teach computers to be computer scientists so they could bootstrap their own development. And that would be their main job—figuring out how to make themselves smarter."

Could the computer figure out how to think or gain general intelligence by doing the research and upgrading its own architecture? That's where this starts getting into the freaky side of things.
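
For what it's worth, the simplest version of that loop is easy to sketch. This is a toy illustration only - every name here is made up - reading the quoted idea as a hill-climbing search over the system's own design:

import random

def benchmark(design):
    # Stand-in for "how smart is this design?" A real system would run a
    # whole evaluation suite here; this toy just sums the parameters.
    return sum(design)

def propose_change(design):
    # Stand-in for "doing AI research": nudge one design parameter.
    candidate = list(design)
    i = random.randrange(len(candidate))
    candidate[i] += random.choice([-1, 1])
    return candidate

design = [1, 1, 1]  # the system's own architecture, held as data it can edit
for generation in range(1000):
    candidate = propose_change(design)
    if benchmark(candidate) > benchmark(design):
        design = candidate  # keep only changes that score as "smarter"

The freaky part the article is pointing at is when propose_change itself is one of the things being improved - that's the bootstrap.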

Anti-stance
Anti-stance UberDork
2/5/15 8:57 p.m.
GameboyRMH wrote: I think a more immediate threat is the invention of an "enforcer droid." A soldier who has no empathy, remorse, or free will to question orders, immune to anything but heavy kinetic damage. That could be very, very bad and both the hardware and software are going to be ready much sooner than strong AI.

In the second part of this article, they talk specifically about this. From the standpoint that a military would produce a battlefield robot.

It's not that a robot would just turn bad like you see in the movies, it's more that the robot was designed to perform two functions... kill humans and constantly update its own software to kill humans even better. Then what happens when it gets too good at it?

fritzsch
fritzsch Dork
2/5/15 8:57 p.m.

I am against it, but I know that people aren't going to stop, so I at least hope they tread very, very carefully. Like Elon, Gates, and many others, I think it is one of the biggest dangers facing mankind.

Anti-stance
Anti-stance UberDork
2/5/15 8:58 p.m.
¯\_(ツ)_/¯ wrote: Honestly? The only other way I really see things going is we all die horrible prolonged deaths or kill eachother before the AI overlord comes into existence, so I'm all for it. I think his timelines are a bit optimistic though (yes, even the "pessimistic" one). I love waitbutwhy.

If you haven't done so, read his blog about the Fermi Paradox. It's some mind blowing E36 M3 too.

Giant Purple Snorklewacker
Giant Purple Snorklewacker MegaDork
2/5/15 9:01 p.m.

The approach itself is a really interesting thought exercise too. A computer that can change its thinking is still limited to its inputs and outputs. What capabilities it can exploit beyond that are subject to its programmers' imaginations - limited again by our own inputs and outputs.

Imagine if our brains really exploit quantum mechanics on a level our conscious mind can't access? In a universe where time is a potential ETL job of the mind (or even better, the product of traveling uni-directionally through a black hole...) the idea that we are a subliminal hive beneath our consciousness is only slightly less crazy an idea than that Bruce Jenner will be happier as a woman.

Giant Purple Snorklewacker
Giant Purple Snorklewacker MegaDork
2/5/15 9:27 p.m.

What!? None of you berkeleyers are up for some pot talk?

berkeley you you berkeleying berkeleys.

wearymicrobe
wearymicrobe SuperDork
2/5/15 9:30 p.m.
Anti-stance wrote: "This is when scientists get desperate and try to program the test to take itself. But it might be the most promising method we have.

As a programmer, I call bull on this. I am not saying that coding is an art form or beyond automation, but getting something like this is way beyond the languages we have now and the computational power of the next 50-100 years, and even then it's not deterministic. It's just down to random chance and advancement, not intelligence.

We would be better off trying to augment the meat sacks that already exist on this earth. If I have perfect recall of every single biological and chemical pathway and the experience to actually use it, then we are on to something.

fritzsch
fritzsch Dork
2/5/15 11:14 p.m.

THEY'RE MADE OUT OF MEAT

"They're made out of meat."

"Meat?"

"Meat. They're made out of meat."

"Meat?"

"There's no doubt about it. We picked up several from different parts of the planet, took them aboard our recon vessels, and probed them all the way through. They're completely meat."

"That's impossible. What about the radio signals? The messages to the stars?"

"They use the radio waves to talk, but the signals don't come from them. The signals come from machines."

"So who made the machines? That's who we want to contact."

"They made the machines. That's what I'm trying to tell you. Meat made the machines."

"That's ridiculous. How can meat make a machine? You're asking me to believe in sentient meat."

"I'm not asking you, I'm telling you. These creatures are the only sentient race in that sector and they're made out of meat."

"Maybe they're like the orfolei. You know, a carbon-based intelligence that goes through a meat stage."

"Nope. They're born meat and they die meat. We studied them for several of their life spans, which didn't take long. Do you have any idea what's the life span of meat?"

"Spare me. Okay, maybe they're only part meat. You know, like the weddilei. A meat head with an electron plasma brain inside."

"Nope. We thought of that, since they do have meat heads, like the weddilei. But I told you, we probed them. They're meat all the way through."

"No brain?"

"Oh, there's a brain all right. It's just that the brain is made out of meat! That's what I've been trying to tell you."

"So ... what does the thinking?"

"You're not understanding, are you? You're refusing to deal with what I'm telling you. The brain does the thinking. The meat."

"Thinking meat! You're asking me to believe in thinking meat!"

"Yes, thinking meat! Conscious meat! Loving meat. Dreaming meat. The meat is the whole deal! Are you beginning to get the picture or do I have to start all over?"

"Omigod. You're serious then. They're made out of meat."

"Thank you. Finally. Yes. They are indeed made out of meat. And they've been trying to get in touch with us for almost a hundred of their years."

"Omigod. So what does this meat have in mind?"

"First it wants to talk to us. Then I imagine it wants to explore the Universe, contact other sentiences, swap ideas and information. The usual."

"We're supposed to talk to meat."

"That's the idea. That's the message they're sending out by radio. 'Hello. Anyone out there. Anybody home.' That sort of thing."

"They actually do talk, then. They use words, ideas, concepts?" "Oh, yes. Except they do it with meat."

"I thought you just told me they used radio."

"They do, but what do you think is on the radio? Meat sounds. You know how when you slap or flap meat, it makes a noise? They talk by flapping their meat at each other. They can even sing by squirting air through their meat."

"Omigod. Singing meat. This is altogether too much. So what do you advise?"

"Officially or unofficially?"

"Both."

"Officially, we are required to contact, welcome and log in any and all sentient races or multibeings in this quadrant of the Universe, without prejudice, fear or favor. Unofficially, I advise that we erase the records and forget the whole thing."

"I was hoping you would say that."

"It seems harsh, but there is a limit. Do we really want to make contact with meat?"

"I agree one hundred percent. What's there to say? 'Hello, meat. How's it going?' But will this work? How many planets are we dealing with here?"

"Just one. They can travel to other planets in special meat containers, but they can't live on them. And being meat, they can only travel through C space. Which limits them to the speed of light and makes the possibility of their ever making contact pretty slim. Infinitesimal, in fact."

"So we just pretend there's no one home in the Universe."

"That's it."

"Cruel. But you said it yourself, who wants to meet meat? And the ones who have been aboard our vessels, the ones you probed? You're sure they won't remember?"

"They'll be considered crackpots if they do. We went into their heads and smoothed out their meat so that we're just a dream to them."

"A dream to meat! How strangely appropriate, that we should be meat's dream."

"And we marked the entire sector unoccupied."

"Good. Agreed, officially and unofficially. Case closed. Any others? Anyone interesting on that side of the galaxy?"

"Yes, a rather shy but sweet hydrogen core cluster intelligence in a class nine star in G445 zone. Was in contact two galactic rotations ago, wants to be friendly again."

"They always come around."

"And why not? Imagine how unbearably, how unutterably cold the Universe would be if one were all alone ..."

RealMiniDriver
RealMiniDriver UltraDork
2/5/15 11:53 p.m.

In reply to fritzsch:

Daberk is dat? It's berkeleying hilarious!

GameboyRMH
GameboyRMH MegaDork
2/6/15 6:34 a.m.

Film version:

https://www.youtube.com/watch?v=IfPdhsP8XjI

Will
Will SuperDork
2/6/15 6:48 a.m.

No fate but what we make for ourselves.

GameboyRMH
GameboyRMH MegaDork
2/6/15 6:49 a.m.

Oh look, here's a timely article from Wired:

http://www.wired.com/2015/02/can-now-build-autonomous-killing-machines-thats-bad-idea/

Beer Baron
Beer Baron UltimaDork
2/6/15 7:31 a.m.

People talk about how computers are outpacing our abilities. Trouble is, they outpace us only on tests designed to see how good a computer is at a single task. I heard a statement to the effect that: "we can build and program a computer that will beat a human at chess, or at poker, or at jeopardy, or at backgammon. But we can't design a single system that can play all of those games." We are nowhere in sight of building a machine that can learn a new game.

We compare processing power and memory, but only of the brain. We ignore the storage capacity of our DNA. We have autonomic systems that can regulate, repair, and reproduce themselves. We are nowhere near a computer that can do that.

I do see more integrated technology. We can augment our generalist abilities with specialized devices that do single, difficult tasks far more precisely and efficiently: like measuring time or recalling long streams of data precisely. But no computer could stand on the African savannah and figure out what to eat, how to get it, and how to avoid getting eaten.
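
To put it another way: the interface a "plays anything" system would need fits in a few lines. Filling it in is the part nobody is anywhere close to. A purely hypothetical sketch - every name here is made up:

from abc import ABC, abstractmethod

class GeneralGamePlayer(ABC):
    # The interface one system would need to play chess AND poker AND
    # jeopardy AND backgammon. Writing it down is trivial; implementing
    # learn_rules for a game the machine has never seen is the hard part.

    @abstractmethod
    def learn_rules(self, rulebook: str) -> None:
        """Read a description of a brand-new game and internalize it."""

    @abstractmethod
    def choose_move(self, observed_state: object) -> object:
        """Pick a move in whatever game is currently being played."""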

GameboyRMH
GameboyRMH MegaDork
2/6/15 7:44 a.m.
Beer Baron wrote: We compare processing power and memory, but only of the brain. We ignore the storage capacity of our DNA.

I agree with most of your points, but human DNA doesn't have a lot of storage capacity; your DNA could fit comfortably on a DVD. The amazing thing about DNA isn't the size, it's how you use it.
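
The back-of-envelope math, if anyone wants to check me (figures are approximate):

base_pairs = 3.2e9       # rough size of the human genome
bits_per_base = 2        # four possible bases -> 2 bits each
genome_mb = base_pairs * bits_per_base / 8 / 1e6
print(genome_mb)         # ~800 MB raw, vs. ~4700 MB for a single-layer DVD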

MadScientistMatt
MadScientistMatt UberDork
2/6/15 8:07 a.m.
GameboyRMH wrote: I think a more immediate threat is the invention of an "enforcer droid." A soldier who has no empathy, remorse, or free will to question orders, immune to anything but heavy kinetic damage. That could be very, very bad and both the hardware and software are going to be ready much sooner than strong AI.

On the flip side, such an enforcer droid would also not feel anger or panic, and could be programmed to hold fire until it had a completely positive ID of its target. So while it could be programmed not to question its orders to kill, it also wouldn't be inclined to kill without an order. An enforcer droid wouldn't lose it in the heat of battle and massacre civilians.

But programming such a thing to be able to rewrite its own code would be seriously boneheaded. It's something you would want to keep under tight control.

If they aren't made self reprogramming, enforcer droids seem like the generic "superweapon that might fall into the wrong hands" problem, and not very different from some sort of advanced battle armor or other sort of mechanized weapon system.

Other than that, you'd need to program this thing to be very, very certain of what its target is.

GameboyRMH
GameboyRMH MegaDork
2/6/15 8:12 a.m.
MadScientistMatt wrote: If they aren't made self reprogramming, enforcer droids seem like the generic "superweapon that might fall into the wrong hands" problem, and not very different from some sort of advanced battle armor or other sort of mechanized weapon system.

True, but the size of this problem would be far bigger than any past superweapon. It doesn't carry the massive environmental blowback or the regulating counterbalance of M.A.D. like nuclear/bio/chemical weapons. It's the first mechanized replacement for boots-on-the-ground power and could be used against civilians. It wouldn't only be the worst superweapon yet created; it's probably the second-worst yet imagined, after Gray Goo.

PHeller
PHeller PowerDork
2/6/15 8:47 a.m.

I think we have far more problems to solve before AI becomes a threat. Food and water shortages. Massive unemployment. Regional wars based on the previous factors.

GameboyRMH
GameboyRMH MegaDork
2/6/15 8:49 a.m.

^That's why I'm worried about battlefield robots/android enforcers. If those come into use while we're suffering with those problems, then we might really dislike the solutions.

Hungary Bill
Hungary Bill SuperDork
2/6/15 8:57 a.m.

3-laws safe?

GameboyRMH
GameboyRMH MegaDork
2/6/15 9:13 a.m.

A "three laws" system requires a strong AI, but it's not a bad idea - each law will need a lot of detail though.

Some software already has Asimov's laws written into its license terms, and it could affect things down the line - for example, if Linux had such a license, it would be a license violation to use it as the OS on an attack drone control terminal.
