According to a recent report, Google's self-driving Lexus SUV was hit by a van driver who ran a red light. While the crash was severe, no injuries were reported. Reports indicate that this is the worst incident yet involving driverless cars, as the side of the vehicle was badly damaged, unlike previous incidents, which mostly involved human drivers rear-ending autonomous cars at low speeds.

A Google spokesperson told 9to5Google: "Our light was green for at least six seconds before our car entered the intersection. Thousands of crashes happen every day on US roads, and red-light running is the leading cause of urban crashes in the US. Human error plays a role in 94% of these crashes, which is why we're developing fully self-driving technology to make our roads safer."

This is not the first time Google's self-driving cars have been in an accident. In most cases, however, they were rear-ended at low speed or struck while stationary. The side impact in this recent collision is one of the very few in which an expensive test vehicle was damaged so badly.

Google also explained that even though the car was in self-driving mode, a person was sitting behind the steering wheel. The system applies the brakes automatically when it detects another car running a red light, and the human driver behind the wheel can do the same, but in this case neither was enough to avert the accident.

This incident makes it apparent that the problem with driverless cars does not lie only in faulty systems, as Tesla Motors' test runs have highlighted. The real challenge lies in sharing road space with fallible human drivers, whose careless mistakes can jeopardize an entire system of self-driving cars.

According to a report by Goldman Sachs, based on current replacement rates and existing models, it could take until 2060 for North America to reach an optimal level of autonomous-vehicle adoption. Until then, how well will these autonomous systems anticipate and dodge the errors made by human drivers?