Tuesday, February 27, 2018

Ethical Conundrums in Driverless Cars

Driverless cars used to be the stuff of science fiction. But more and more it seems they are becoming the reality of today. Most major car manufacturers are working on the technology, as are tech companies such as Google. Several luxury car companies already have early generations of self-driving tech in their showrooms. For example, Tesla currently offers an "Autopilot" feature that provides limited self-driving capability and is said to enhance safety even while the human driver remains in control. Likewise, Mercedes-Benz, Volvo, Audi, and Cadillac all offer systems with similar capabilities. Many other car companies currently sell more limited versions of the technology, such as automated emergency braking or automatic parallel parking. It almost seems that the manufacturers and developers are barrelling forward - "pedal to the floor" - towards their fully-automatic goal, citing claims that these systems have the potential to be far safer. Given that studies show the vast majority of crashes are caused by human error (94%, according to NHTSA), safety is an admirable goal. But while the aim of this push toward fully-automatic driving is to reduce the "human error" factor, there are still a lot of unanswered questions and serious reliability (and liability) issues that have yet to be addressed.

As cyclists, we have very good reasons to be skeptical.

Joshua Brown was killed when he handed over control of his Tesla to the Autopilot system. The system was unable to detect a tractor trailer that was crossing the roadway.
First off, how reliable is the technology? So far, not reliable enough to put our full faith in it. A well-publicized incident in 2016 centered on the driver of a Tesla Model S who was killed when his car drove under a tractor trailer while the Autopilot system was engaged. It is believed that the car's sensors were unable to detect the truck's trailer, which may have blended in somehow with the color and/or brightness of the sky. In Tesla's defense, the driver of the car, Joshua Brown, was not using the system as it was designed to be used - that is to say, the system is intended to assist the driver during momentary lapses in attention, not to completely take over for a driver who fully relinquishes control of the car for an extended period. Though Brown's family and lawyers have disputed it, some of the first people on the scene after the crash reported that he had been watching a movie when the incident occurred. A National Transportation Safety Board (NTSB) study of the crash determined that the driver was essentially misusing the Autopilot system by over-relying on it (the NTSB found that he had his hands on the wheel for only about 25 seconds out of a 39-minute period of driving).

But what is there to keep other drivers from doing the same thing? Considering how many people without autonomous technology think nothing of taking their eyes off the road for extended moments to send or read text messages, etc., it should not be surprising that those with such technology would be even more inclined to put their faith in the autonomous driving functions for even longer periods.

Another incident, this time non-fatal, happened just last month when another Tesla Model S slammed into the back of a stopped fire truck. It couldn't see a big red fire truck in its path? Seriously?

Yet Tesla's owner's manual even acknowledges that this exact situation can be a problem for the automatic driving system. It states: "Traffic-Aware Cruise Control cannot detect all objects and may not brake/decelerate for stationary vehicles, especially in situations when you are driving over 50 mph (80 km/h) and a vehicle you are following moves out of your driving path and a stationary vehicle or object is in front of you instead." Basically, the system gets confused by a changing and unpredictable traffic situation.
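Why would a system fail to brake for something as big as a fire truck? One commonly cited explanation (a general point about radar-based cruise control, not a description of Tesla's actual code) is that these systems deliberately filter out stationary radar returns, since most stationary objects - signs, bridges, parked cars - are harmless roadside clutter. Here's a toy sketch, in Python with made-up names and numbers, of how that kind of filter can discard exactly the wrong object:

```python
# Toy illustration (not any manufacturer's actual code) of why a
# radar-based cruise control can "lose" a stopped vehicle.

EGO_SPEED_MPS = 25.0  # our own speed, roughly 56 mph

def relevant_targets(radar_returns, ego_speed):
    """Keep only moving targets; stationary returns are discarded
    on the assumption that they are signs, bridges, or parked cars."""
    kept = []
    for target in radar_returns:
        # ground speed = our speed + the target's relative (Doppler) speed
        ground_speed = ego_speed + target["relative_speed"]
        if abs(ground_speed) > 1.0:  # moving faster than ~1 m/s
            kept.append(target)
    return kept

# The car ahead changes lanes, revealing a stopped fire truck:
returns = [
    {"name": "lead car (cutting out)", "relative_speed": -2.0},
    {"name": "stopped fire truck", "relative_speed": -EGO_SPEED_MPS},
]

for t in relevant_targets(returns, EGO_SPEED_MPS):
    print("tracking:", t["name"])
# Only the departing lead car is tracked; the truck reads as
# "stationary clutter" and is thrown out - exactly the scenario
# the owner's manual warns about.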

When these systems - which rely on a combination of radar, lidar, cameras, and GPS, backed by enormous amounts of computer processing - can become "confused" by large, solid, and relatively predictable vehicles, what chance do we as cyclists have?

All the developers of automated "driverless" technology admit that recognizing bicycles is a particularly difficult challenge. Bicycles are small, and it can be difficult for computers to tell which direction they're heading. They tend to (though not always) move more slowly than surrounding traffic, but they can change direction very quickly. All the things that make cyclists "consternating" to human drivers make us a total puzzle for computers. These issues are exacerbated by the fact that many cyclists simply don't follow traditional traffic rules.
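For a glimpse of why quick direction changes are such a headache, consider that many trackers predict where an object will be next by extrapolating its recent motion. This little sketch (purely illustrative, with invented numbers - real trackers are far more sophisticated) shows how badly a simple constant-velocity predictor misses the moment a bicycle swerves:

```python
# Minimal sketch of a constant-velocity predictor, a common building
# block of object tracking. All coordinates are made up for illustration.

def predict_next(positions):
    """Predict the next (x, y) by extrapolating the last observed velocity."""
    (x0, y0), (x1, y1) = positions[-2], positions[-1]
    return (x1 + (x1 - x0), y1 + (y1 - y0))

# A car-sized object moving steadily is easy to predict:
car_track = [(0, 0), (10, 0), (20, 0)]
print(predict_next(car_track))   # (30, 0) - close to reality

# A cyclist riding straight, then swerving to dodge a pothole:
bike_track = [(0, 0), (2, 0), (4, 0)]
print(predict_next(bike_track))  # predicts (6, 0)...
actual_next = (5, 2)             # ...but the rider swerved left
# The prediction is worst at exactly the moment the car's planner
# most needs it to be right.
```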

Uber is another company that has been developing self-driving technology, and it was revealed last year that their self-driving cars seemed unable to distinguish bike lanes from car lanes - and as a result had difficulty spotting cyclists and, potentially worse, difficulty keeping the cars out of the bike lanes. Uber is still doing small-scale, localized testing of its tech, so it's unlikely that we'll be run down by a self-driving Uber (unless you live in Pittsburgh) - at least for now. And hopefully they'll clear that hurdle before they go nationwide with it.

Some developers, acknowledging the weaknesses in their systems, seem to be trying to "share" the responsibility for safety by putting compatible technology onto bicycles, or onto the cyclists themselves, to help the cars' systems "see" them better. These solutions include putting chips or transmitters into helmets or embedding them in bikes, or creating special apps for riders' cell phones. All of that sounds great for those cyclists who can afford (and desire) to equip themselves with the latest "smart" technology to keep them from being run down by self-driving cars. But it leaves a huge segment of cyclists on the road completely vulnerable. Are those riders expendable? Living in an urban area, I regularly see riders who are poorer, riding beat-up old bikes because they can't afford better - riders who are on bikes in the first place because they can't afford cars. Cheap bikes are their sole source of transportation. What are the car and tech companies doing for them? These cyclist-centered solutions seem to me to place the burden on cyclists, rather than on the drivers and the companies pushing the technology. It's like saying, "You don't want our automated cars to hit you? Then you need to wear this special 'smart suit' or 'smart helmet,' ride a special 'smart bike,' or strap on some other kind of 'smart sensors' every time you ride. Oh yeah, and it's up to you to pay for it all."

Being the somewhat cynical and pessimistic person I am, I wouldn't dismiss the possibility that the automakers and tech companies could get together and pressure lawmakers to legally put the burden on the cyclists in the form of some kind of mandate. As this technology becomes more popular and profitable, if they can't figure out a way to make the systems more reliable as far as recognizing and reacting to cyclists, they could lobby to mandate that all cyclists strap on some variation of "smart" devices before taking to the roads, or else be held responsible for their own injuries when they get hit. Don't think that's likely? It's happened before - remember that the concept of "jaywalking" wasn't even a thing until the auto interests came up with it and got it written into the law books.

Another issue that comes up relates to a type of moral or ethical dilemma, sometimes referred to as the Trolley Problem, wherein a person must choose between two potentially deadly outcomes. In this case the question is this: if an automated car has to choose between hitting another car and hitting a cyclist or a pedestrian, which course will it take? It isn't difficult to imagine a scenario where this could present itself. Picture an automated car overtaking a cyclist just as an oncoming car suddenly moves left-of-center. Does the automated car remain in its determined path and take a head-on collision with the other car? Or does it swerve right to avoid the car, but hit the cyclist?
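In software terms, that dilemma boils down to an explicit ranking of outcomes. The sketch below is entirely hypothetical - no manufacturer publishes its planning logic - but it illustrates how a program choosing the "least bad" trajectory has to encode, somewhere, whose harm counts for more:

```python
# Purely hypothetical sketch of a "least bad trajectory" chooser.
# Real planners score thousands of candidate paths; the point here
# is only that some weighting of harms has to be chosen in advance.

def choose_trajectory(options, weights):
    """Pick the option with the lowest weighted expected harm."""
    def cost(option):
        return sum(weights[party] * risk
                   for party, risk in option["risks"].items())
    return min(options, key=cost)

options = [
    {"name": "hold lane (head-on with oncoming car)",
     "risks": {"occupants": 0.8, "cyclist": 0.0}},
    {"name": "swerve right (strike cyclist)",
     "risks": {"occupants": 0.1, "cyclist": 0.9}},
]

# Weighting every life equally vs. prioritizing the occupants:
print(choose_trajectory(options, {"occupants": 1.0, "cyclist": 1.0})["name"])
# -> hold lane (head-on with oncoming car)
print(choose_trajectory(options, {"occupants": 1.0, "cyclist": 0.2})["name"])
# -> swerve right (strike cyclist)
```

Identical sensors, identical scenario - the only difference is a weighting that someone chose ahead of time.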

Shockingly (or perhaps not-so-shockingly, depending on your level of cynicism), one car company has already made that determination, and it doesn't bode well for cyclists. According to an article in Car and Driver, Mercedes-Benz has already decided to program its next-level autonomous cars to prioritize the protection of the people inside the car -- you know, the very people who shelled out big bucks for the self-driving technology with the expectation that it would keep them safer. Obviously, M-B wants to make sure its drivers live to buy another M-B. According to Christoph von Hugo, M-B's manager of driver assistance systems: "If you know you can save at least one person, save the one in the car. If all you know for sure is that one death can be prevented, then that's your first priority."

Apparently, Mercedes has decided that if the car kills a cyclist or pedestrian, that victim's family will sue them. And if their car takes an action that "saves" the cyclist, but results in the death of the Mercedes driver or other occupants in the car, then they will still get sued. I suppose they figure that if they're going to get sued either way, they're better off protecting the M-B owners (who can probably afford better lawyers). The only possible bright side is that ultimately, the goal of the developers of self-driving cars is to program these systems not to get into situations where they have to make a "trolley problem" choice in the first place. Is that possible? Or practical? I don't have that answer. There are so many potential variables in a typical driving scenario, I wonder if it would be possible to calculate them all.

The legal questions of regulation and liability are still totally up in the air, both in the U.S. and abroad. Here in the U.S., Congress has only just begun to look at the issues of autonomous cars. Different states are tackling the issues separately, which could lead to a totally disconnected patchwork of laws nationwide. But in some states, it seems that officials are willing to go full-throttle with robots in the driver's seat. Just this week, California regulators eliminated the requirement that autonomous vehicles have a person in the driver's seat to take over in case of emergency. The new rules also grant 50 companies licenses to test self-driving cars in that state.

Is there a good side to all this? It's hard to say.

Currently, I believe one of the biggest threats to cyclists is the distracted driver - a problem that grows worse every year, with every new app or gadget. I'm still convinced that the "smarter" our phones get, the "dumber" people get. Add to that the natural tendency toward self-indulgent and self-centered behavior that the phones seem to exacerbate, and the sense of anonymity, power, and entitlement that seems to infect many drivers anyhow, and you have a recipe that can be deadly for cyclists and pedestrians. Unfortunately, legislators seem almost as reluctant to cross the telecom industry as they are to cross the gun and auto industries - so real and effective bans on phone use while driving are few and far between. The development of autonomous vehicles almost seems to say, "We can't (or won't) put a stop to it, so let's just enable it. If people won't put their phones away, let's find a way that they'll never have to."

Ultimately, I suppose it would be fair to ask the question: if I'm cycling home from work, would I rather the car behind me be driven by a texting teenager, or by a computer? And honestly, I just don't know the answer. On one hand, the noble idea of the autonomous car is that it doesn't get distracted. That sounds great. On the other hand, so far the technology seems to leave a lot to be desired. It would be difficult for me to put my faith in the robots until I get some reassurance that they can actually see me and respond appropriately, and that they won't be predisposed to sacrifice my life to protect the car's occupants. I might also feel better if our laws favored the more vulnerable road users over the industries' interests. So far, none of that seems truly certain.

I also wonder why it should seem like the only choice is between distracted drivers and robot cars. I mean, if I actually had a choice in the matter, I think I'd choose a human driver who's actually paying attention. Shouldn't we be able to reasonably expect that drivers not be distracted? Now I guess that would truly be the stuff of science fiction.

18 comments:

  1. A copy of this post should be on every single legislator's and government regulator's desk. Thank you.

    ReplyDelete
  2. There is an ad on TV for an auto company's "driver assist features". A mother is driving with a child in the back seat. In the course of the ad, maybe 20 or 30 seconds, the car 'saves' the situation twice. First by hauling the car back in lane when the driver is distracted. Then by emergency braking when the driver is looking down as the car approaches a child in a crosswalk. Every time I see that ad I just think, "Take her damn license! She doesn't belong on the road." How many others have that reaction when watching?

    ReplyDelete
  3. I cycle around Waymo driverless cars all the time in these parts. I'll trust them much more readily than a person. They have so many orders of magnitude more driving experience than a human it isn't even a funny comparison.

    As a road user, what I am concerned with is "helpful" automation that encourages people to pay less attention while still being held responsible. We've had that for decades in cruise control. Today we're seeing much more advanced cruise controls (of which Tesla's is but one - most luxury vehicle manufacturers have a variety of these that control acceleration, braking, and steering). They have incomplete information. They can and do make mistakes. But the driver is even less engaged and thus less likely to be situationally aware, assess the situation, and respond in an appropriate timely manner. That kills.

    Full autonomy doesn't bother me. It truly is better. Distracted semi-autonomy does.

    ReplyDelete
  4. What would you do if you were driving a car while passing a cyclist and an oncoming car suddenly swerved over the centre line? My guess is that you'd instinctively swerve right to avoid a head-on collision and you'd hit the cyclist. (BTW, most of the "trolley problems" are bogus because they describe a scenario that a self-driving car would probably be able to avoid in the first place. But, yes, some will happen, and the only consolation will be the overall lower fatality and injury statistics.)

    There are many "self driving" systems out there, and the Tesla crash shows the problem with anything less than Level 5. Personally, I wouldn't trust a self-driving system built by an auto company -- how many good software engineers would prefer working for GM in instead of for Google?

    FWIW, this 2-year-old video about Google's system mentions cyclists, pedestrians, cones on the road, police flashing lights, etc.: https://youtu.be/tiwVMrTLUWg?t=464

    ReplyDelete
  5. Will a self-driving car still drive 10+ mph over the posted speed limit - just like most human drivers do?

    If not, they'll probably never catch on.

    ReplyDelete
  6. Lots of very difficult questions. Thanks for this article. It's one more example of where technology is taking us, for better or worse. Every year, I ride more and more on segregated paths and feel this is the only answer for keeping us safe. Segregated paths are not financially feasible everywhere we need to ride, but they're where I'm putting my money and where it's the safest for our children.

    ReplyDelete
  7. It seems to me that the question is whether the "driverless" technology is there to "assist" the driver, or to eliminate him or her altogether.

    If the latter is the answer, then the system really has to be perfect--or, at least, better-developed than it is now. It has to be able to detect, essentially, everything and make judgments. Will anyone ever design such a system?

    Now, if the idea is to "assist" the driver, it will need some way to ensure that the driver is always engaged. The problem, it seems to me, is that "driverless" technology mentally detaches or distracts the driver, as apps and other such technologies seem to do.

    ReplyDelete
    Replies
    1. One way to keep the driver engaged: send (mild) electric shocks through the seat if their hands aren't on the wheel and their eyes aren't looking through the windshield. I kid, of course, but I can dream...

      I've read up on the self-driving cars through news articles and papers posted on other sites. From what I can see, the self-driving variety is very safe, much more so than any human driver, and the biggest names in that business factored cyclist detection, pedestrian detection, small pets, children, etc., into their designs early in the process. I remember a story from several years ago where a Google car stopped behind a cyclist waiting to cross a road. There was room for the car to pass, but the software played it safe and stayed stopped behind the cyclist (much to the consternation of the driver/passenger; drivenger?). Now, weighing that against the number of close passes I've had from human-piloted cars, and the real danger of distracted drivers, I'll trust a fully self-driving car any day.

      Delete
  8. Do a web search for "crushed in elevator" and you'll find news items where a human operator would have prevented the accident. Yet nobody worries about automated elevators failing -- the benefits outweigh the costs because human operators also made mistakes and overall automated elevators are safer than human-operated elevators (they're also cheaper to operate, but that's a separate issue).

    ReplyDelete
    Replies
    1. One could hardly compare an elevator shaft to a city street - or any other road, avenue, or highway.

      Delete
  9. A local body-shop owner reported he sees lots of Teslas that have rear-ended other vehicles while the owners were distracted by the huge and complex computer display. This problem is hardly unique to Teslas, and I see plenty of drivers on the freeway with their phones in front of them, reading. It's likely that autonomous vehicles will be safer than most drivers (though most drivers will continue to believe that they are safer than almost everybody else), but the transition will be difficult and perhaps rather slow. Bicycles, difficulties interacting with human drivers, and many other issues need to be addressed. A recent review in Science suggested it will be several decades before this technology is ready for widespread use. Meanwhile, half a loaf (driver assist) will only be better than none if we don't put too much faith in it. All that said, I would welcome a technology that would avoid the many near-misses I've seen due to morons passing on blind curves in the mountains or driving on the shoulder because they have a turn 800 yards ahead.

    ReplyDelete
  10. I don't want machines running everything.

    ReplyDelete
  11. Have a look at Rodney Brooks' writing...
    For example, ...

    https://spectrum.ieee.org/transportation/self-driving/the-big-problem-with-selfdriving-cars-is-people

    Brooks goes beyond saddles - he's also a respected name in robotics and AI.

    ReplyDelete
  12. The problems with driverless cars and distracted drivers might be minimised if the cars were larger, connected together, and running on some sort of contained road - perhaps guided by lengths of steel somehow embedded in the ground. The cars could use a common power source (electricity is gaining support these days) and be driven by a single driver connected to a traffic control system at a central hub. They could stop at designated places along the road to allow passengers to embark and disembark; passengers could stand, sit, read, or talk together while facing each other in the cars while traveling. The roadways would be completely separated from bike paths & other, small roadways designed for low speed vehicles delivering goods to shops &c. Goods that need to travel long distances could be placed in trucks connected together and running on the same system of contained roadways. I believe there have been preliminary trials of such a system in many countries over the last 200 years.

    ReplyDelete
    Replies
    1. That sounds just crazy enough to be a real thing. But I'm sure you just made that up.

      Delete
  13. "I regularly see riders who are poorer, and riding beat up old bikes because they can't afford better, and they are on bikes in the first place because they can't afford cars. Cheap bikes are their sole source of transportation"

    --is this América?

    ReplyDelete