rslifkin said: In reply to SivaSuryaKshatriya:
A human has one advantage over a computer in that situation: a human can see the person standing on the side of the road and feel "I don't trust them to stay put, I think they're gonna cross the road" while the computer will likely be left with "person is not moving and not in the road, not a threat" or be forced to be super conservative and slow way down for every person near the side of the road.
That's a good point for sure, but that gap is closing quickly. I'd like to point out something that is rapidly becoming clear: almost anything a human can do, machine learning/artificial intelligence will eventually be able to do better, assuming it isn't already better at it than we are:
https://www.theverge.com/2017/10/18/16495548/deepmind-ai-go-alphago-zero-self-taught
When we see someone on the side of the road and "feel" uneasy, what is actually going on in our heads? Without even being conscious of it, we are reading their body language and analyzing it in a split second (the pedestrian's body is slightly hunched forward, he is looking both ways, his left leg is ready to propel him forward, etc.) and concluding that "this person looks like he is ready to walk across the road," which in turn makes us uneasy given the context and prompts us to slow down. There is nothing magical going on here; we are simply reading body language within the context of the situation and drawing a conclusion.
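To make that concrete, here is a minimal toy sketch of how those same cues could be turned into a rough "about to cross" score. Everything in it (the cue names, the weights, the threshold) is invented for illustration; a real self-driving stack would learn this from camera data rather than hand-code it.

```python
# Toy illustration only: the cues, weights, and threshold below are invented,
# not taken from any real pedestrian-intent system.
from dataclasses import dataclass


@dataclass
class PedestrianCues:
    lean_forward_deg: float   # torso lean toward the road, in degrees
    looked_both_ways: bool    # head-turn pattern of someone checking traffic
    lead_foot_planted: bool   # weight shifted onto the leg that would start a step
    facing_road: bool         # body oriented toward the roadway


def crossing_score(cues: PedestrianCues) -> float:
    """Combine the cues a human reads subconsciously into a rough 0-1 score."""
    score = min(cues.lean_forward_deg, 20.0) / 20.0 * 0.40  # leaning in is weighted heaviest
    score += 0.25 if cues.looked_both_ways else 0.0
    score += 0.20 if cues.lead_foot_planted else 0.0
    score += 0.15 if cues.facing_road else 0.0
    return score


if __name__ == "__main__":
    hunched_pedestrian = PedestrianCues(lean_forward_deg=12.0, looked_both_ways=True,
                                        lead_foot_planted=True, facing_road=True)
    print(crossing_score(hunched_pedestrian))  # ~0.84 -> likely to cross, so slow down
```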
Artificial intelligence and machine learning researchers are rapidly developing systems that can recognize facial expressions. In China, the government is already using machine-learning software, fed by the millions of cameras already in place, to identify ethnic minorities it deems less than desirable and track their movements to see whether they are entering areas where they "shouldn't" be. The same technology is also being used to track criminals:
http://www.dailymail.co.uk/sciencetech/article-5170167/China-unveils-Minority-Report-style-AI-security-system.html
How does this relate to self-driving cars? Soon the AI driving the vehicles will be able to recognize facial expressions and body language and determine whether a person is about to step or run into the road. It will be able to do so far faster than we can, and for many people at once.
Eventually self-driving cars will share information with one another, such as where pedestrians are, so that they can anticipate potential collisions with pedestrians the car can't even see yet.
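Here is a hedged sketch of what that sharing might look like, assuming a simple JSON broadcast between nearby cars. The message fields and the relevance check are invented for the example; real vehicle-to-vehicle standards such as DSRC/C-V2X define their own formats.

```python
# Hypothetical message format for sharing pedestrian sightings between cars.
# Fields and thresholds are invented; this is not a real V2V protocol.
import json
import time
from dataclasses import dataclass, asdict


@dataclass
class PedestrianSighting:
    sender_id: str          # which vehicle spotted the pedestrian
    latitude: float
    longitude: float
    crossing_score: float   # intent estimate, e.g. from the sketch above
    timestamp: float        # seconds since epoch


def encode(sighting: PedestrianSighting) -> str:
    """Serialize a sighting for broadcast to nearby vehicles."""
    return json.dumps(asdict(sighting))


def matters_to_me(raw_message: str, my_route, radius_deg=0.0005) -> bool:
    """Return True if another car's sighting lies near any point on my planned route."""
    msg = json.loads(raw_message)
    return any(abs(msg["latitude"] - lat) < radius_deg and
               abs(msg["longitude"] - lon) < radius_deg
               for lat, lon in my_route)


if __name__ == "__main__":
    sighting = PedestrianSighting("car_42", 40.4406, -79.9959, 0.84, time.time())
    message = encode(sighting)
    print(matters_to_me(message, my_route=[(40.4405, -79.9958), (40.4410, -79.9950)]))  # True
```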
Being a safe driver essentially comes down to processing large amounts of information and making good decisions from the data (where is that pedestrian, is he going to run across, looks like there's a big slowdown up ahead, better slow down in anticipation, etc.). A computer can process that information far faster than we can.
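As a final toy illustration of "process the data, make a decision," here is how those fused inputs might collapse into a single target speed. The numbers are invented for the example and far simpler than anything a real planner would do.

```python
# Toy decision rule: fuse a couple of inputs into a conservative target speed.
# The thresholds and speeds are invented for illustration.
def choose_target_speed(speed_limit_kph: float,
                        pedestrian_crossing_score: float,
                        slowdown_ahead: bool) -> float:
    """Pick a cautious speed from the fused inputs, the way a careful human would."""
    speed = speed_limit_kph
    if slowdown_ahead:
        speed = min(speed, 40.0)   # ease off before reaching the queue
    if pedestrian_crossing_score > 0.5:
        speed = min(speed, 25.0)   # someone looks ready to step into the road
    return speed


print(choose_target_speed(60.0, pedestrian_crossing_score=0.84, slowdown_ahead=False))  # 25.0
```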