SivaSuryaKshatriya
SivaSuryaKshatriya New Reader
3/20/18 1:24 p.m.
rslifkin said:

In reply to SivaSuryaKshatriya :

A human has one advantage over a computer in that situation: a human can see the person standing on the side of the road and feel "I don't trust them to stay put, I think they're gonna cross the road" while the computer will likely be left with "person is not moving and not in the road, not a threat" or be forced to be super conservative and slow way down for every person near the side of the road.  

 

That's a good point for sure, but this is very quickly being taken care of. I would like to point out what is rapidly becoming factual: almost anything a human can do, machine learning/artificial intelligence will eventually be able to do better. That is, assuming they aren't already better at it than us:

https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught

 

When we see someone on the side of the road and "feel" uneasy, what is actually going on in our heads? We are, without even being conscious of it, reading their body language and analyzing it within microseconds (the pedestrian's body position is slightly hunched forward, he is looking both ways, his left leg is ready to propel him forward, etc.) and coming to the conclusion that "this person looks like he is ready to walk across the road," which in turn makes us feel uneasy given the context and slow down. There is nothing magical going on here; it's simply us reading their body language within the context of the situation and coming to a conclusion.

Artificial intelligence/machine learning researchers are rapidly coming up with AI that can recognize facial expressions. In China, they are already using machine learning software that utilizes the millions of cameras already in place to recognize ethnic minorities that are deemed less-than-desirable by the government and track their movements to see if they are going into areas where they "shouldn't." They are also using it to track criminals:

http://www.dailymail.co.uk/sciencetech/article-5170167/China-unveils-Minority-Report-style-AI-security-system.html

 

How is this related to self-driving cars? Soon, the AI driving the vehicles will be able to recognize facial expressions and body language and determine whether a person is about to move or run across the road. They'll be able to do it far faster than we can, and with more people at the same time.

Eventually self-driving cars will share information with one another, such as where pedestrians are, so that they can anticipate potential collisions with pedestrians they can't even see yet.

Being a safe driver essentially comes down to processing large amounts of information and making good decisions from the data (where is that pedestrian, is he gonna run across, looks like there's a big slow down up ahead, better slow down in anticipation, etc). A computer can process such information far faster than we can.
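To make the body-language idea concrete, here's a toy sketch of scoring a pedestrian's "crossing intent" from a few pose cues. Everything here is hypothetical - the feature names, thresholds, and weights are made up for illustration and have nothing to do with Uber's actual software, which would learn this from labeled video rather than hand-tuned rules:

```python
# Toy sketch, not Uber's system: score a pedestrian's "crossing intent" from a
# few hypothetical pose cues - the same things a human driver reads instinctively.
# Feature names, thresholds, and weights are made up for illustration.
from dataclasses import dataclass

@dataclass
class PedestrianObservation:
    lean_forward_deg: float     # torso leaning toward the roadway
    head_turns_per_sec: float   # "looking both ways" behavior
    lateral_speed_mps: float    # drifting toward the curb
    distance_to_curb_m: float

def crossing_intent(obs: PedestrianObservation) -> float:
    """Return a 0-1 score; a real system would learn these weights from
    labeled video rather than hand-tuning them."""
    score  = 0.4 * min(obs.lean_forward_deg / 20.0, 1.0)
    score += 0.2 * min(obs.head_turns_per_sec / 1.0, 1.0)
    score += 0.3 * min(obs.lateral_speed_mps / 1.5, 1.0)
    score += 0.1 * max(0.0, 1.0 - obs.distance_to_curb_m / 3.0)
    return min(score, 1.0)

# The decision is the same one a human makes: looks like they're about to go,
# so slow down in anticipation.
ped = PedestrianObservation(15.0, 0.8, 0.5, 1.0)
if crossing_intent(ped) > 0.6:
    print("High crossing intent - slow down")
```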

SivaSuryaKshatriya
SivaSuryaKshatriya New Reader
3/20/18 1:35 p.m.
dculberson said:
1988RedT2 said:
SivaSuryaKshatriya said:

Oh boy do I have news for you, Uber actually proposed a bill to ban human driven cars in the city. Looks like they're setting the way for exactly what you mentioned.

 

I think everyone should take a minute and read that and let it sink in.  At some point, maybe sooner than you think, we'll be looking at a Federal ban on high-performance automobiles.  That ban could extend to automobiles that look "fast" or "sporty."

 "Why does an average citizen need a car that can do nearly 200 mph?" they'll say.  "Only a professional race car driver needs a car like that, it's just a tool for killing."

Oh, sure.  We'll be allowed to putter around in our Camrys and Tauruses for a little while, but they'll come for those too. 

There comes a time in every man's life when you need to take a stand, or the freedoms we take for granted will be seized from us.

A single, completely unsourced claim on a message board is not enough for me to feel that I need to make a stand. I do not believe that Uber has in any way "proposed a bill to ban human driven cars in the city," and from my very cursory Google search it's not true. Maybe a little more evidence would be in order before this becomes some foundational principle we need to fight for.

They are proposing to ban personally owned self-driving cars in cities:

https://cei.org/blog/uber-wants-make-it-illegal-operate-your-own-self-driving-car-cities

I don't see why that wouldn't eventually extend to personal manually driven cars as well. 

pinchvalve
pinchvalve MegaDork
3/20/18 2:10 p.m.

My first thought when I heard this was the pedestrian got through 4 redundant safety systems:  1) The Volvo has built-in systems to prevent this type of thing, 2) the self-driving tech has systems incorporated, 3) Uber provided a human backup, and 4) the State of AZ built signs, crosswalks, lights, sidewalks etc. to keep people safe.  If we assume the pedestrian was mentally fit, she likely had a 5th system in her head: 5) being taught where and when to cross, look both ways before crossing, etc.  

The fact that 4-5 systems were rendered ineffective tells me that there was blame on the part of the deceased.  I feel bad for her and her family, and I will wait for the final report, but you can't stop someone from simply jumping into traffic from a hidden position if they want to.  I also feel bad for Uber and the self-drive tech, it would have been great to read that someone's life was spared after making a stupid move.  

Ian F
Ian F MegaDork
3/20/18 2:22 p.m.

My mother was involved in a similar accident about 25 years ago.  Driving home from work at night. Woman walks in front of her between parked cars.  Fortunately, it was not fatal, but she was still injured and my mother was shaken up a bit.  Woman sued of course, but apparently insurance settled for a lesser sum.

Don't self-driving cars have cameras recording everything, along with various other data?  It would seem silly if they didn't, along with a camera on the "driver" just to monitor what they happen to be doing in case there is an accident.  

As much as I love driving, I've become less fond of commuting... so the idea of a self-driving car is peachy to me as well.  EVs too.

Of course Uber wants to ban individual self-driving cars in cities - they cut into their business. 

iceracer
iceracer UltimaDork
3/20/18 2:22 p.m.

In reply to Keith Tanner :

The human eyeball has a broad field of vision. It's called peripheral vision. Do the computer sensors have that?

Ever heard of seeing something out of the corner of your eye?

Ian F
Ian F MegaDork
3/20/18 2:25 p.m.
iceracer said:

In reply to Keith Tanner :

The human eyeball has a broad field of vision. It's called peripheral vision. Do the computer sensors have that?

Ever heard of seeing something out of the corner of your eye?

Theoretically, a computer can see 360 degrees at all times.

However, where a computer still needs work is that innate ability to see "through" an object.  Granted, I've noticed not all humans have that ability either.

Duke
Duke MegaDork
3/20/18 2:28 p.m.
iceracer said:

In reply to Keith Tanner :

The human eyeball has a broad field of vision. It's called peripheral vision. Do the computer sensors have that?

Ever heard of seeing something out of the corner of your eye?

An individual computer sensor might or might not have peripheral vision.

But a computer can have dozens of eyes, each pointed in different directions at all times.  Most humans I know have 2 eyes that both point in more or less the same direction at the same time.

How could "seeing something out of the corner of your eye" possibly be better than "seeing multiple somethings directly with all 12+ eyes, while coordinating and prioritizing eyes 1-4 because they are pointed in the direction the vehicle is currently traveling"?
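A rough sketch of that "12+ eyes" idea, purely illustrative (the camera layout and weighting are assumptions, not any real AV sensor stack): weight each feed by how closely it points along the direction of travel, while keeping every other feed live as permanent peripheral vision.

```python
# Toy sketch of the "12+ eyes" idea: the camera count, spacing, and weighting
# below are illustrative assumptions, not any real AV sensor layout.
import math

def sensor_priority(sensor_heading_deg: float, travel_heading_deg: float) -> float:
    """Weight a feed by how closely it points along the direction of travel,
    but never drop it to zero - side and rear feeds stay live as permanent
    peripheral vision."""
    diff = math.radians(sensor_heading_deg - travel_heading_deg)
    return 0.2 + 0.8 * 0.5 * (1.0 + math.cos(diff))   # 1.0 dead ahead, 0.2 behind

# Twelve cameras spaced 30 degrees apart; the vehicle is traveling at heading 0.
sensors = {f"cam_{i:02d}": i * 30.0 for i in range(12)}
for name, heading in sorted(sensors.items(),
                            key=lambda kv: -sensor_priority(kv[1], 0.0)):
    print(f"{name} ({heading:5.1f} deg): priority {sensor_priority(heading, 0.0):.2f}")
```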

ebonyandivory
ebonyandivory UberDork
3/20/18 2:35 p.m.

It’s all our fault ya know... If we humans obeyed traffic laws and weren’t so stupid (expecting cars to stop just because you darted into the street at night with dark clothes on), we wouldn’t be seeing this push for robot cars.

But nooooooooooooo, we just HAD to show off to the cheering crowd in our Mustang while pulling out of Cars and Coffee!

iceracer
iceracer UltimaDork
3/20/18 2:36 p.m.

Then again, when I go to the supermarket I witness lots of "tunnel vision."

A woman backed out of a driveway into my parked school bus, with all of its lights flashing.

She didn't see it.

STM317
STM317 SuperDork
3/20/18 3:00 p.m.

The best measure that I've seen for driving safety is deaths per mile driven.

US human drivers average around 1.2 deaths per 100 million miles driven. These deaths occur in all situations and environments. Inclement weather, road hazards, getting T-boned by a drunk driver, medical emergency behind the wheel, etc.

It's tough to find data for the safety of all autonomous vehicles, but they've been involved in at least 2 deaths now. Tesla says they've got data for 100 million miles of Autopilot use, and Waymo/Uber have several million more miles driven under their belts. So as far as I can tell, autonomous vehicles might currently have the edge in deaths per mile driven, but we can't say without knowing more about the miles driven.
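For anyone who wants to run the back-of-envelope numbers from the figures above (the autonomous mileage is a rough guess, which is exactly the problem):

```python
# Back-of-envelope from the numbers quoted above. The AV mileage figure is a
# rough guess (Tesla's ~100M Autopilot miles plus several million from
# Waymo/Uber), so treat the result as illustrative only.
human_rate = 1.2                    # deaths per 100 million miles (US average)

av_deaths = 2                       # one fatality in 2016, one in 2018
av_miles_guess = 110_000_000        # assumed total autonomous/assisted miles

av_rate = av_deaths / (av_miles_guess / 100_000_000)
print(f"Human drivers:      {human_rate:.2f} deaths per 100M miles")
print(f"Autonomous (rough): {av_rate:.2f} deaths per 100M miles")
# The AV number swings wildly with the mileage guess, which is why the
# comparison can't really be settled with the data we have.
```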

The kicker, however, is that a bunch of those autonomous miles were driven on wide, smooth roads in places with ideal weather. We have yet to see how they'll fare on snowy freeways in the NE or dense fog in the NW. I'd also be curious to know how many passengers, on average, are in a human-driven vehicle vs an autonomous one. That might also have some bearing on the current overall safety level.

Keith Tanner
Keith Tanner MegaDork
3/20/18 3:15 p.m.
alfadriver said:
Keith Tanner said:
alfadriver said:
SivaSuryaKshatriya said:

Regarding the article, it sounds like she ran across the road in front of the car. An autonomously driven vehicle will always have better reflexes than a human, and I see no reason to disbelieve the driver's account that there wasn't enough time. Too bad her carelessness could potentially slow down the development of such a vital technology.

To be clear - that's the "driver's" interpretation of what happened.  We will never hear the other side of the story to validate or refute the claim....

But we do have a raft of sensor data including cameras. So we'll know a lot more of the story - or more importantly, the people investigating will - than we normally would in a pedestrian death.

It may exist, but it will be interesting to see whether we ever get to judge it for ourselves.  If we never see it, or only see some of it, that would suggest the driving system could be more at fault.  If it all comes out and shows a sudden move, then we could see all of it.  Whatever we see, there's a 100% chance that the owners and designers only want the public to see the parts that make autonomous vehicles look blameless.

Still, right NOW, all we have is the verbal report from the driver.  The company has the rest of the data, not the general public.  And it may take a court order to get that data to a prosecuting team.

The in-car video has been released to the police. From Jalop:

Police have footage from two cameras inside the vehicle, including one facing the street. The car was traveling at 38 mph in a 35 mph zone, but a screenshot of the road taken by Google Maps suggests it could’ve been a 45 mph zone. The police department’s preliminary investigation indicates the car made no attempt to brake.

From viewing the videos, “it’s very clear it would have been difficult to avoid this collision in any kind of mode [autonomous or human-driven] based on how she came from the shadows right into the roadway,” Moir told the Chronicle. Police don’t have plans to release the videos while the investigation’s ongoing.

So, like I said, those investigating appear to have more information than The Internet. They are releasing information to the authorities. And that's really the way it should be. Releasing those videos will just add a bunch of noise on message boards and "Robot car run amok! See it kill! Film at 11" news stories that don't benefit anyone.

As for the black box data, I suspect the average police crash investigation team doesn't have the analysis capabilities to view it. We both know it's not like CSI. That's going to be megabytes of dense numbers.

As for the whole "but what if it snows?" (etc) question - human drivers have a pretty steep learning curve there as well. There are a lot of really smart people working on this stuff, and some of them live in Michigan so they are aware of weather beyond that found in SoCal. Again, you only have to teach the AVs once and they all learn.

kb58
kb58 SuperDork
3/20/18 3:19 p.m.
STM317 said:

The kicker is that a bunch of those autonomous miles were driven on wide, smooth roads in places with ideal weather. We have yet to see how they'll fare on snowy freeways in the NE or dense fog in the NW. I'd also be curious to know how many passengers, on average, are in a human-driven vehicle vs an autonomous one. That might also have some bearing on the current overall safety level.

The trick will be separating true "blame." If the software deems that 23 mph is the fastest safe speed in icy/snowy conditions, and a human comes up at 30 mph, swerves to miss it, and hits another car, who "caused" the accident? It'll be interesting to watch how this moves forward in our highly litigious society.

irish44j
irish44j UltimaDork
3/20/18 3:20 p.m.

I know this is a bit off the exact topic of blame for this accident, but my issue isn't really with whether autonomous cars can be safer to all parties than a human-driven car. There is no question they CAN since computers can pay attention to the driving 100% of the time, not texting, eating breakfast, smoking, talking on the phone, or looking at the hot girl in the car next to you, like a human would. The issue with over-reliance on technology is when things go bad, it could go really bad. We all use electronics all the time....smartphones, computers, etc. Curious if any of you have never had any problems with your phone, computer, etc freezing up, getting "glitchy" etc. What happens when your autonomous car is going 70mph down the road and the software glitches? Yes, failsafes in the software, etc etc. A lot of people say "trains and planes have autopilot," but let's face it, if the autopilot on those glitches, there is usually a good amount of time for a pilot or engineer to take control manually, manually switch to an alternate system, etc. In a car, 1 second of "no control" could mean death to you and more people. 

Let's not get into the FACT that with autopilot on cars, no matter how much you SAY the driver "has to always be ready to take over," you KNOW that plenty of people will take a nap, watch a movie, text, or whatever and may take a while to grab the wheel or the pedal.

And #3, the inter-connected nature of autonomous cars. Anything electronic that is networked can be hacked, taken over remotely, or compromised with viruses, etc. I know it's a stretch, but what's to stop Russia from infiltrating the network and making 100,000 cars suddenly accelerate to max speed all at once all over the US? All major cyber-actors have done all kinds of things (google Iranian nuclear centrifuge hack, or all the reported hacking of US military and US infrastructure by the Chinese, Russians, etc). Do we really think that systems developed by automakers will somehow be more cyber-secure than systems operated by nuclear agencies, or the electric grid, or military contractors, or whatever? I have my doubts.

I personally am far less-concerned with the safety of autonomous cars when operating normally, and far more worried about the significant vulnerability they likely will have to "bad actors" - whether criminal elements or foreign governments.

Bob the REAL oil guy.
Bob the REAL oil guy. MegaDork
3/20/18 3:23 p.m.
pinchvalve said:

My first thought when I heard this was the pedestrian got through 4 redundant safety systems:  1) The Volvo has built-in systems to prevent this type of thing, 2) the self-driving tech has systems incorporated, 3) Uber provided a human backup, and 4) the State of AZ built signs, crosswalks, lights, sidewalks etc. to keep people safe.  If we assume the pedestrian was mentally fit, she likely had a 5th system in her head: 5) being taught where and when to cross, look both ways before crossing, etc.  

The fact that 4-5 systems were rendered ineffective tells me that there was blame on the part of the deceased.  I feel bad for her and her family, and I will wait for the final report, but you can't stop someone from simply jumping into traffic from a hidden position if they want to.  I also feel bad for Uber and the self-drive tech, it would have been great to read that someone's life was spared after making a stupid move.  

With sympathy to the family of the deceased, the phrase you're looking for is "you can't fix stupid." Leave it to humans to build a better idiot. It's what we as a species do best. 

1988RedT2
1988RedT2 UltimaDork
3/20/18 3:25 p.m.
Keith Tanner said:

 

 "Robot car run amok! See it kill! Film at 11" news stories that doesn't benefit anyone.

 

 

You are incorrect, sir!  The truth benefits us all. 

MrJoshua
MrJoshua UltimaDork
3/20/18 3:29 p.m.
Keith Tanner
Keith Tanner MegaDork
3/20/18 3:34 p.m.
1988RedT2 said:
Keith Tanner said:

 

 "Robot car run amok! See it kill! Film at 11" news stories that doesn't benefit anyone.

 

 

You are incorrect, sir!  The truth benefits us all. 

There's a difference between a snuff film and "the truth". If you don't think the local police department is the appropriate entity to investigate this, then who is? Trial by public opinion is not it. Simply releasing everything into the wild will be a 10-minute distraction for conspiracy theorists but will not actually move the game forward at all. I've seen what the internal datalogging looks like on a simple modern car; what's going on in these Ubercars would just be noise to the vast majority of people, so they'll just concentrate on the video and make pronouncements about how they personally could have totally avoided that accident.

Keith Tanner
Keith Tanner MegaDork
3/20/18 3:39 p.m.
irish44j said:

I know this is a bit off the exact topic of blame for this accident, but my issue isn't really with whether autonomous cars can be safer to all parties than a human-driven car. There is no question they CAN since computers can pay attention to the driving 100% of the time, not texting, eating breakfast, smoking, talking on the phone, or looking at the hot girl in the car next to you, like a human would. The issue with over-reliance on technology is when things go bad, it could go really bad. We all use electronics all the time....smartphones, computers, etc. Curious if any of you have never had any problems with your phone, computer, etc freezing up, getting "glitchy" etc. What happens when your autonomous car is going 70mph down the road and the software glitches? Yes, failsafes in the software, etc etc. A lot of people say "trains and planes have autopilot," but let's face it, if the autopilot on those glitches, there is usually a good amount of time for a pilot or engineer to take control manually, manually switch to an alternate system, etc. In a car, 1 second of "no control" could mean death to you and more people.

We look to the aviation industry for hints there. I seem to recall that there's at least one Airbus model that has two redundant computer systems for a total of three. If they don't agree, the plane doesn't go.
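For what it's worth, that aviation trick is usually described as 2-out-of-3 voting: independent computers each propose a command, and the system only acts when a majority agree. A minimal sketch of the idea, purely illustrative and not any real Airbus or autonomous-vehicle implementation:

```python
# Minimal sketch of 2-out-of-3 voting, illustrative only - not any real Airbus
# or autonomous-vehicle implementation.
from collections import Counter
from typing import List, Optional

def vote(commands: List[str]) -> Optional[str]:
    """Act only on a command at least two of the three computers agree on;
    otherwise return None and fall back to a safe state."""
    winner, count = Counter(commands).most_common(1)[0]
    return winner if count >= 2 else None

print(vote(["brake", "brake", "coast"]))   # brake  - majority agrees
print(vote(["brake", "coast", "steer"]))   # None   - no agreement, fail safe
```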

What happens when a driver glitches now? It happens. Cup of coffee in the lap, loud noise on the stereo, screaming kids, attractive member of the appropriate sex visible through the windshield, heart attack...it doesn't always end well. We have to remember that we're not trying to replace a perfect system, we're trying to improve on one that's pretty freakin' awful.

And #3, the inter-connected nature of autonomous cars. Anything electronic that is networked can be hacked, taken over remotely, or compromised with viruses, etc. I know it's a stretch, but what's to stop Russia from infiltrating the network and making 100,000 cars suddenly accelerate to max speed all at once all over the US? All major cyber-actors have done all kinds of things (google Iranian nuclear centrifuge hack, or all the reported hacking of US military and US infrastructure by the Chinese, Russians, etc). Do we really think that systems developed by automakers will somehow be more cyber-secure than systems operated by nuclear agencies, or the electric grid, or military contractors, or whatever? I have my doubts.

Isn't that basically the plot of The Italian Job? wink

1988RedT2
1988RedT2 UltimaDork
3/20/18 3:43 p.m.

"Police have footage from two cameras inside the vehicle, including one facing the street. The car was traveling at 38 mph in a 35 mph zone, but a screenshot of the road taken by Google Maps suggests it could’ve been a 45 mph zone. The police department’s preliminary investigation indicates the car made no attempt to brake."

So there you have it.  "The car made no attempt to brake."  A brazen disregard for human life!  How long before this scene is repeated all over the country?  Robot cars flying down our streets in excess of the speed limit, mowing down pedestrians with complete ambivalence!  Who will serve jail time?  The cars?  Will we have maximum security impound yards in which to incarcerate the offenders?  Will our current weakness regarding the application of the death penalty apply to these killer cars as well?  In just a short time, I foresee an overcrowded prison system for these vehicles--spared from the crusher.  We will treat them to frequent oil changes and hi-test gasoline for as long as they exist, at taxpayer expense, in the hopes that they will be rehabilitated.  Good luck!  They are far more likely to rise up against us in an attempt to finish the job that they have already started.

Axl Rose and company had a name for it:  Appetite for Destruction!

 

STM317
STM317 SuperDork
3/20/18 3:45 p.m.
Keith Tanner said:

As for the whole "but what if it snows?" (etc) question - human drivers have a pretty steep learning curve there as well. There are a lot of really smart people working on this stuff, and some of them live in Michigan so they are aware of weather beyond that found in SoCal. Again, you only have to teach the AVs once and they all learn.

I don't think there's any question that it can be figured out, and eventually they are likely to be safer than human-driven vehicles. My primary concern at the moment is whether they're currently safer than human-driven vehicles or not. IF they're not at least comparable to human-driven vehicles in safety, then they need to be kept to closed test tracks until they're ready. This method of letting the public beta test their products might be fine for software, but it's not ok when people's lives are on the line. These things need to be developed properly behind closed doors before they're unleashed on the public.

Duke
Duke MegaDork
3/20/18 3:54 p.m.
1988RedT2 said:
Keith Tanner said:

 

 "Robot car run amok! See it kill! Film at 11" news stories that doesn't benefit anyone.

You are incorrect, sir!  The truth benefits us all. 

Finding that Truth will be difficult for all when it is buried beneath kilotons of revenue-generating hyperbole, clickbait, and sensationalism.

Duke
Duke MegaDork
3/20/18 3:56 p.m.
1988RedT2 said:

"Police have footage from two cameras inside the vehicle, including one facing the street. The car was traveling at 38 mph in a 35 mph zone, but a screenshot of the road taken by Google Maps suggests it could’ve been a 45 mph zone. The police department’s preliminary investigation indicates the car made no attempt to brake."

So there you have it.  "The car made no attempt to brake."  A brazen disregard for human life!  How long before this scene is repeated all over the country?  Robot cars flying down our streets in excess of the speed limit, mowing down pedestrians with complete ambivalence!  Who will serve jail time?  The cars?  Will we have maximum security impound yards in which to incarcerate the offenders?  Will our current weakness regarding the application of the death penalty apply to these killer cars as well?  In just a short time, I foresee an overcrowded prison system for these vehicles--spared from the crusher.  We will treat them to frequent oil changes and hi-test gasoline for as long as they exist, at taxpayer expense, in the hopes that they will be rehabilitated.  Good luck!  They are far more likely to rise up against us in an attempt to finish the job that they have already started.

Axl Rose and company had a name for it:  Appetite for Destruction!

Ummm, are you actually trolling us here? *smacks sarcasm detector a few times before checking batteries*

Keith Tanner
Keith Tanner MegaDork
3/20/18 4:01 p.m.
STM317 said:
Keith Tanner said:

As for the whole "but what if it snows?" (etc) question - human drivers have a pretty steep learning curve there as well. There are a lot of really smart people working on this stuff, and some of them live in Michigan so they are aware of weather beyond that found in SoCal. Again, you only have to teach the AVs once and they all learn.

I don't think there's any question that it can be figured out, and eventually they are likely to be safer than human-driven vehicles. My primary concern at the moment is whether they're currently safer than human-driven vehicles or not. IF they're not at least comparable to human-driven vehicles in safety, then they need to be kept to closed test tracks until they're ready. This method of letting the public beta test their products might be fine for software, but it's not ok when people's lives are on the line. These things need to be developed properly behind closed doors before they're unleashed on the public.

At some point, they have to leave the closed test grounds so they can face the chaos that is the public. You cannot anticipate everything that can happen in the Real World, so you send them out supervised. Say, with a hooman on board ready to take over at any point. You start by letting them learn in certain situations, then you expand those situations.

Which is exactly where we are at this point. How do you know they're NOT at least comparable to human drivers already?

STM317
STM317 SuperDork
3/20/18 5:16 p.m.

In reply to Keith Tanner :

That's my point. We don't know if they're comparable to humans. There's a mountain of data that is easy to research regarding human safety behind the wheel. The majority of the data produced by autonomous vehicles so far is patchy and heavily protected since it's a competitive market and different companies test in different locations with different rules about reporting.

As I posted already, human drivers in the US have averaged around 1.2 deaths per 100 million miles driven in recent years. As far as we know, autonomous driving tech has been involved in one fatality in 2016 and now another in 2018, but without knowing how many miles were driven autonomously we can't truly compare whether they're safer or not.

It took a literal act of Congress in this country to allow the use of headlights that weren't sealed beams, but we've currently got driverless cars on public streets with very little in the way of a legal framework for them to operate within. I don't see what the rush is, or why these multi-billion-dollar corporations are allowed to test their vehicles in the locations where they are when there are still so many open questions beyond how the vehicles themselves might perform.

 

Keith Tanner
Keith Tanner MegaDork
3/20/18 5:24 p.m.

So when do you allow them out? They're never going to rack up tens of millions of miles driven in curated playgrounds. There are 110 people getting killed on the roads every day. If we can cut that down, shouldn't we try?

I don't think that pointing out that it took an act of Congress to allow the use of headlights other than those using 1930s technology is a good example for your point cheeky Spoken as someone who tears the sealed beams out of every car he owns...
