
Cyclist doing trackstand leaves Google's self-driving car confused

Tech giant welcomes incident as it continues to refine software to predict bike riders' behaviour

A cyclist has told how he confused Google’s self-driving car – by trackstanding at a junction. Google says it welcomes the incident, as it helps the software behind the technology learn about bike riders' behaviour.

Posting on the Road Bike Review forum, site user Oxtox described his encounter with the vehicle as he rode his fixed-gear bike.

He wrote:

A Google self-driving Lexus has been in my neighbourhood for the last couple of weeks doing some road testing.

Near the end of my ride today, we both stopped at an intersection with 4-way stop signs.

The car got to the stop line a fraction of a second before I did, so it had the ROW [right of way]. I did a trackstand and waited for it to continue on through.

It apparently detected my presence (it's covered in Go-Pros) and stayed stationary for several seconds. It finally began to proceed, but as it did, I rolled forward an inch while still standing. The car immediately stopped ...

I continued to stand, it continued to stay stopped. Then as it began to move again, I had to rock the bike to maintain balance. It stopped abruptly.

We repeated this little dance for about 2 full minutes and the car never made it past the middle of the intersection. The two guys inside were laughing and punching stuff into a laptop, I guess trying to modify some code to 'teach' the car something about how to deal with the situation.

Google has been testing driverless technology in a variety of vehicles including its own prototype ‘pods’ for several years, with one of its features being the ability to predict the behaviour of road users including cyclists and pedestrians.

The vehicles - or rather, the tech and software behind them - are, understandably, cautious in the extreme. The firm has said that there have been 11 minor road incidents in the six years in which they have been tested, and that all bar one were caused by the drivers of other vehicles. In the sole incident caused by a Google car, it was being driven in manual mode by a member of staff.

Dmitri Dolgov, the head of software for the self-driving car project, has said Google's software is getting better at predicting the behaviour of pedestrians and other road users and cited one example in which a Google car paused when a cyclist ran a red light, while another car, driven by a human, continued and nearly hit them. The firm’s co-founder, Sergey Brin, says the goal is to create something that is safer than human drivers.

– Google to test purpose-built driverless vehicle on California roads

Earlier this year, it emerged that the company had patented technology that recognises cyclists’ hand signals, as the technology evolves to assess and predict the behaviour of vulnerable road users with the overarching goal of improving safety for all.

– Google patent reveals how driverless cars recognise hand signals

The learning curve has now apparently expanded to include cyclists who are trackstanding.

A spokeswoman for Google told the Washington Post that the incident provides a good example of the feedback the company wants to get as it trials the concept in areas such as Austin, Texas, as well as near its headquarters in Mountain View, California.

As for the trackstanding cyclist’s opinion of the encounter, Oxtox added:

The odd thing is that even tho it was a bit of a CF, I felt safer dealing with a self-driving car than a human-operated one.

Simon joined road.cc as news editor in 2009 and is now the site’s community editor, acting as a link between the team producing the content and our readers. A law and languages graduate, published translator and former retail analyst, he has reported on issues as diverse as cycling-related court cases, anti-doping investigations, the latest developments in the bike industry and the sport’s biggest races. Now back in London full-time after 15 years living in Oxford and Cambridge, he loves cycling along the Thames but misses having his former riding buddy, Elodie the miniature schnauzer, in the basket in front of him.


41 comments

blinddrew | 8 years ago
0 likes

I'd rather deal with the 1 in a billion ethical choice that the driverless car is programmed to make than the everyday ethical choices you see drivers get wrong when they speed through a built-up area / drive too fast on a wet road / allow themselves to be distracted etc etc etc

Podc | 8 years ago
0 likes

The fundamental flaw will be relevant unless all motorised vehicles are driverless. Whilst there is a mix of driven and driverless, the driven will always be able to generate circumstances where a crash for a driverless vehicle cannot be avoided.

Carton replied to Podc | 8 years ago
0 likes
Podc wrote:

The fundamental flaw will be relevant unless all motorised vehicles are driverless. Whilst there is a mix of driven and driverless, the driven will always be able to generate circumstances where a crash for a driverless vehicle cannot be avoided.

True.

However, there's no need to go that far. Mechanical malfunctions happen all the time. So do computing glitches in complex software. People chuck glass bottles out of their cars, oblivious to the cars trailing them.

Since some people seem to have absolutely no imagination (not you, Pod), here's a scenario: a car can suddenly swerve headfirst into your lane (given a tyre blowout or a sleepy driver) a few metres away, putting a driver-less car in the position of either swerving left and potentially killing unseen pedestrians, or trying to brake and potentially killing everyone aboard both cars as well as pedestrians as a result of shrapnel from a massive collision. And no, these aren't overly rare or easily fixable issues. About 1,240,000 people die on the roads every year, and they're not all due to driver error.

alansmurphy | 8 years ago
0 likes

It's a bloody good point and worthy of discussion - and suggesting stopping dead is just stupid; there will always be a situation that is outside the parameters of 'normal'.

I would imagine that whatever decision it makes would be based on logical algorithms, but how much information would it have to process, how quickly could it do it, and who would decide the criteria? If the mother pushing the child was the best heart surgeon in the country, would it be a different outcome to a pramface scrounging off the state? I suppose whatever decision it makes is likely to be better than the human one, which is simply self preservation...

Canyon48 | 8 years ago
0 likes

That must be one hell of a tough programme to write to allow for track-standing cyclists!

I don't know where I track-stand (sorry, excuse the pun) on driver-less cars. Being interested in aviation and learning to fly, the idea of autonomous planes is one I'm not at all happy with. After all, a "robot" cannot think nor react to something outside of what it has been programmed to do, i.e. an autonomous vehicle is only as good as its programmer.

Moreover, if you give a computer system two conflicting inputs (of equal priority) the system will be able to respond to neither and therefore may crash or switch itself off. I.e., for argument's sake: an autonomous car is travelling along but recognises an obstacle ahead for which it must stop, and at the same time recognises that it will cause a collision if it stops. This type of situation has happened in aviation and will happen in driver-less cars.

danthomascyclist replied to Canyon48 | 8 years ago
0 likes
wellsprop wrote:

Moreover, if you give a computer system two conflicting inputs (of equal priority) the system will be able to respond to neither and therefore may crash or switch itself off. I.e., for argument's sake: an autonomous car is travelling along but recognises an obstacle ahead for which it must stop, and at the same time recognises that it will cause a collision if it stops. This type of situation has happened in aviation and will happen in driver-less cars.

Oh wow. You must have gone to the same programming school that my grandparents probably went to in order to get their current wealth of programming knowledge. In computing, no two things happen at the same time.

How would roads full of self-driving cars end up in such situations? They'd be long averted. Seriously, do you believe what you're typing?

Drawing parallels with aviation is silly. If you're 30,000 feet in the air, coming to a safe stop isn't an option (as it was with the Google car in this article).

portec replied to danthomascyclist | 8 years ago
0 likes
danthomascyclist wrote:
wellsprop wrote:

Moreover, if you give a computer system two conflicting inputs (of equal priority) the system will be able to respond to neither and therefore may crash or switch itself off. I.e., for argument's sake: an autonomous car is travelling along but recognises an obstacle ahead for which it must stop, and at the same time recognises that it will cause a collision if it stops. This type of situation has happened in aviation and will happen in driver-less cars.

Oh wow. You must have gone to the same programming school that my grandparents probably went to in order to get their current wealth of programming knowledge. In computing, no two things happen at the same time.

I may be accused of being pedantic here but 2 things can happen simultaneously. It's called a race condition. But the important point is that it's a known issue, a very rare one, and can be programmed for. I agree with your post.
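For readers who haven't met the term: a race condition can arise when two threads each perform an unsynchronised read-modify-write on shared state, so one update can overwrite the other. A minimal Python sketch (the function and parameter names are illustrative, not from any comment in this thread):

```python
import threading

def run(n_threads, n_iters, use_lock):
    """Have several threads increment a shared counter; optionally guard
    the read-modify-write with a lock so no update can be lost."""
    counter = 0
    lock = threading.Lock()

    def worker():
        nonlocal counter
        for _ in range(n_iters):
            if use_lock:
                with lock:        # serialise the read-modify-write
                    counter += 1
            else:
                counter += 1      # unsynchronised: two threads can read the
                                  # same value, and one increment is lost

    threads = [threading.Thread(target=worker) for _ in range(n_threads)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return counter

# With the lock, the result is deterministic: n_threads * n_iters.
# Without it, the total can come up short if the race actually bites.
```

Whether the unlocked version actually loses updates on a given run depends on how the interpreter happens to schedule the threads, which is exactly why such bugs are rare and hard to reproduce - and, as the comment above notes, why they are treated as a known issue to be designed around.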

danthomascyclist replied to portec | 8 years ago
0 likes
portec wrote:

I may be accused of being pedantic here but 2 things can happen simultaneously. It's called a race condition. But the important point is that it's a known issue, a very rare one, and can be programmed for. I agree with your post.

(Going off topic but what the hell).

No, a race condition is when a device attempts to do two things at the same time. But it can't and it must be done in sequence. Because as I say, in computing two things don't happen at the same time.

And yes, it can be mitigated so I'm not entirely sure why you brought it up.

portec replied to danthomascyclist | 8 years ago
0 likes
danthomascyclist wrote:
portec wrote:

I may be accused of being pedantic here but 2 things can happen simultaneously. It's called a race condition. But the important point is that it's a known issue, a very rare one, and can be programmed for. I agree with your post.

(Going off topic but what the hell).

No, a race condition is when a device attempts to do two things at the same time. But it can't and it must be done in sequence. Because as I say, in computing two things don't happen at the same time.

And yes, it can be mitigated so I'm not entirely sure why you brought it up.

A slightly confusing and contradictory response (disagreeing, then repeating what I said in different words), but ok. A little aggressive too, considering I tried to be friendly and agreed with the general sentiment of your post.

Canyon48 replied to danthomascyclist | 8 years ago
0 likes
danthomascyclist wrote:
wellsprop wrote:

Moreover, if you give a computer system two conflicting inputs (of equal priority) the system will be able to respond to neither and therefore may crash or switch itself off. I.e., for argument's sake: an autonomous car is travelling along but recognises an obstacle ahead for which it must stop, and at the same time recognises that it will cause a collision if it stops. This type of situation has happened in aviation and will happen in driver-less cars.

Oh wow. You must have gone to the same programming school that my grandparents probably went to in order to get their current wealth of programming knowledge. In computing, no two things happen at the same time.

How would roads full of self-driving cars end up in such situations? They'd be long averted. Seriously, do you believe what you're typing?

Drawing parallels with aviation is silly. If you're 30,000 feet in the air, coming to a safe stop isn't an option (as it was with the Google car in this article).

I think it would be very naive to place full trust in driverless cars and assume they would avoid any conflict. I didn't mean that flying and driverless tech are the same, merely that aviation is currently the biggest user of autonomous technology, in the form of autopilot, and there have been several instances where it has been unable to cope due to a system error, with very bad outcomes.

It is not much of a stretch of the imagination to believe that this could happen on a road.

I will just point out that my programming knowledge is very limited; I was just suggesting a theoretical scenario where a computer may not be able to cope (it has happened even on the most advanced autonomous vehicles).

jollygoodvelo replied to Canyon48 | 8 years ago
0 likes
wellsprop wrote:

That must be one hell of a tough programme to write to allow for track-standing cyclists!

Pretty simple if you have an IR sensor.

joules1975 | 8 years ago
0 likes

There is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? E.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

There was recently a huge conference to discuss such scenarios.

There is no doubt that driverless cars will at some point take over, but there are a lot of things that need sorting before that can happen.

dafyddp replied to joules1975 | 8 years ago
0 likes

Maybe it stops dead? That way the only collision possible would be another vehicle going INTO it. Obviously this might put the occupant in peril, but it would remove the dilemma of who to hit...

hectorhtaylor replied to joules1975 | 8 years ago
0 likes
joules1975 wrote:

there is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

There was recently a huge conference to discuss such scenarios.

There is no doubt that driverless cars will at some point take over, but there are a lot of things that need sorting before that can happen.

Speaking of ethics - patenting technology that recognises hand signals makes the technology licence expensive for other manufacturers, so will they bother? Google just don't make enough money on an hourly basis to risk altruism and save lives...

vonhelmet replied to hectorhtaylor | 8 years ago
0 likes
hectorhtaylor wrote:

Speaking of ethics - patenting technology that recognises hand signals makes the technology licence expensive for other manufacturers, so will they bother? Google just don't make enough money on an hourly basis to risk altruism and save lives...

Patents don't apply forever, and in any case I'm quite certain that once these cars are actually coming to market other companies will be queueing up to license the tech.

ConcordeCX replied to hectorhtaylor | 8 years ago
0 likes
hectorhtaylor wrote:

Speaking of ethics - patenting technology that recognises hand signals makes the technology licence expensive for other manufacturers, so will they bother? Google just don't make enough money on an hourly basis to risk altruism and save lives...

If the inventor doesn't patent it someone else will, and the inventor would have to pay to use their own invention.

What makes you think Google will make the licence expensive?

vonhelmet replied to joules1975 | 8 years ago
0 likes
joules1975 wrote:

For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

Can you describe the circumstances that lead up to such a crash?

Given how observant and cautious these cars are programmed to be, I'm sure it would see things coming a long way off, would be mitigating things way before they became an issue and - in the event that a crash did occur - it would be massively endowed with evidence to show whose fault it was.

danthomascyclist replied to joules1975 | 8 years ago
0 likes
joules1975 wrote:

there is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

There was recently a huge conference to discuss such scenarios.

There is no doubt that driverless cars will at some point take over, but there are a lot of things that need sorting before that can happen.

Oh give it a rest. These woolly bullshit arguments with some absurd scenario have been countered so many times. It's not a "fundamental flaw", at best it's a consideration.

1) These scenarios won't exist anywhere near as often as they do now without humans in the mix
2) When they do exist, they'll be so rare it'll make international news and the technology will be improved to prevent it
3) What difference does it make? I'd rather a computer make this decision than a screaming motorist who has locked up with their eyes closed and has no control of the car.

jollygoodvelo replied to danthomascyclist | 8 years ago
0 likes
danthomascyclist wrote:
joules1975 wrote:

there is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

There was recently a huge conference to discuss such scenarios.

There is no doubt that driverless cars will at some point take over, but there are a lot of things that need sorting before that can happen.

Oh give it a rest. These woolly bullshit arguments with some absurd scenario have been countered so many times. It's not a "fundamental flaw", at best it's a consideration.

1) These scenarios won't exist anywhere near as often as they do now without humans in the mix
2) When they do exist, they'll be so rare it'll make international news and the technology will be improved to prevent it
3) What difference does it make? I'd rather a computer make this decision than a screaming motorist who has locked up with their eyes closed and has no control of the car.

Quite so: yes, it's an ethical question. It's being debated widely all over the internet. And yes, I'd definitely be a little uneasy about a car that would drive itself off a bridge to kill me in preference to causing a full school bus coming the other way (for example) to do the same. But, where do you draw the line? If driverless cars "cause" 10 deaths a year, but save 1000 deaths a year (of different people) - cyclists, pedestrians, drunk drivers, etc, it's a net win for the human race. Is that OK?

danthomascyclist replied to jollygoodvelo | 8 years ago
0 likes
Gizmo_ wrote:

And yes, I'd definitely be a little uneasy about a car that would drive itself off a bridge to kill me in preference to causing a full school bus coming the other way (for example) to do the same.

If driverless cars "cause" 10 deaths a year, but save 1000 deaths a year (of different people) - cyclists, pedestrians, drunk drivers, etc, it's a net win for the human race. Is that OK?

No, that's not "OK", that's incredible.

There have been cases where airbags have killed people who would have otherwise survived. But we don't piss and moan about that. We accept a small loss as part of the solution.

Your bus over bridge analogy has so many flaws it's untrue. These analogies aren't thought-up by engineers, technologists or smart people, they're dreamed up by naysayers who are miles behind in terms of their understanding and are wholly irrelevant. Here's a clue: self-driving cars and buses won't be programmed to put themselves in such a scenario in the first place.

I've never seen anyone think up an analogy and thought "yeh, I can see that happening".

vonhelmet replied to jollygoodvelo | 8 years ago
0 likes
Gizmo_ wrote:

And yes, I'd definitely be a little uneasy about a car that would drive itself off a bridge to kill me in preference to causing a full school bus coming the other way (for example) to do the same.

Describe the circumstances where such a choice has to be made, whereby the driverless car has not taken itself out of the situation 100 yards back?

Carton replied to vonhelmet | 8 years ago
0 likes
vonhelmet wrote:
Gizmo_ wrote:

And yes, I'd definitely be a little uneasy about a car that would drive itself off a bridge to kill me in preference to causing a full school bus coming the other way (for example) to do the same.

Describe the circumstances where such a choice has to be made, whereby the driver-less car has not taken itself out of the situation 100 yards back?

There are literally an infinite number of permutations in which the driver-less car has not taken itself out of the situation when a potentially lethal incident occurs. And there are many people who are uneasy with entrusting control of their car to the decision-making of a giant multinational's hired ethicists, mechanical engineers and traffic experts, and the observational and response capabilities of their bodies to cameras, radars and computers. However, I have enough faith in humanity to believe that even the majority of those people are reasonable enough to conclude that they're better off trusting said experts and equipment than relying on the capabilities of a group in which 49% of US traffic deaths were caused by alcohol- or drug-impaired drivers. That is why I think the vast, vast majority of sensible people will soon come to trust driver-less cars over actual drivers.

However, given how many people seem to believe that no cases will present themselves where the ethical decisions are unclear, I'm not really sure how many sensible people are actually out there. The argument that driver-less cars would be exponentially better than actual drivers overall does not in any way imply that they are unerring; they could cause comparatively minimal yet indubitably tragic outcomes as well. Cyclist to cyclist, it truly boggles me that you could be unaware of the potential for harm inherent in thousand-pound steel boxes propelled at any significant speed interplaying with each other, as well as with cyclists and pedestrians, while carrying humans, however expertly they are being driven.

jollygoodvelo replied to vonhelmet | 8 years ago
0 likes
vonhelmet wrote:
Gizmo_ wrote:

And yes, I'd definitely be a little uneasy about a car that would drive itself off a bridge to kill me in preference to causing a full school bus coming the other way (for example) to do the same.

Describe the circumstances where such a choice has to be made, whereby the driverless car has not taken itself out of the situation 100 yards back?

1. Malfunction.
2. Malfunction.
3. etc

I'm not saying it's likely and I know it's flawed. The trolley problem is an excellent example though.

Chuck replied to joules1975 | 8 years ago
0 likes
joules1975 wrote:

there is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

It really surprises me that this keeps being held up as some sort of obstacle. As an academic exercise in adapting the trolley problem (https://en.wikipedia.org/wiki/Trolley_problem) for self-driving cars then fine.

But in the context of the hundreds of deaths and serious injuries currently caused by human drivers every single year, dreaming up astronomically unlikely scenarios in which the worst outcome is, at the very least, no worse than you'd get with a person behind the wheel seems a bit redundant. And that's if you can even decide what the worst outcome is in the first place; the whole point of these thought experiments is that it's very difficult for people to do that.

ConcordeCX replied to joules1975 | 8 years ago
0 likes
joules1975 wrote:

there is one fundamental flaw with driverless cars, and that is the ethics programmed into them. For example, if the car is going to have a crash, a crash that cannot be avoided but simply altered, what does it do? e.g. if the choice it has to make is between ploughing into a lorry, killing its occupants, or turning onto the pavement and mowing down a mother and child, what does it do? And who is then considered to be responsible?

There was recently a huge conference to discuss such scenarios.

There is no doubt that driverless cars will at some point take over, but there are a lot of things that need sorting before that can happen.

That's not a flaw with driverless cars. The cars are not moral agents and they're not faced with the choice. It's the designers of the software who have to make the choice. So they are in a similar, but probably easier, position than a human who has to make the choice in a fraction of a second with no warning. The software designers have plenty of time to think about how to program it, and they have time to consult philosophers of ethics and the general public.

Of course one option is not to make a reasoned choice but leave it to a randomiser...

In any case, if and when these cars take to the road for real it is likely that they will overall be safer than human drivers and we will be faced with another choice - should it still be allowable for a human to drive?

simonsays replied to joules1975 | 8 years ago
0 likes

I am not sure that I would pass that test scenario

awjr | 8 years ago
0 likes

There will come a time where people will not want to share roads with non-robotic cars.

Some people are predicting this will massively reduce car ownership as well. I kinda like the idea of driving to work then sending my car out to work as a taxi during the day.

earth replied to awjr | 8 years ago
0 likes
awjr wrote:

There will come a time where people will not want to share roads with non-robotic cars.

Some people are predicting this will massively reduce car ownership as well. I kinda like the idea of driving to work then sending my car out to work as a taxi during the day.

I think you're onto something there.

Simon_MacMichael replied to awjr | 8 years ago
0 likes
awjr wrote:

Some people are predicting this will massively reduce car ownership as well. I kinda like the idea of driving to work then sending my car out to work as a taxi during the day.

You'd still need a human taxi driver there just in case the software crashed.

I understand Bruce Willis is between projects

eddie11 | 8 years ago
0 likes

Is this guy just finding a roundabout way of boasting that he can trackstand for 2 minutes?
