Autonomous vehicles are one of the most talked-about technological breakthroughs of the past decade. But as we approach the end of another year of testing issues and minimal developments within the industry, is it fair to say that the self-driving car’s rollout is stalling?
In October 2020, Tesla grabbed the world’s attention with the beta release of its ‘Full Self Driving’ software, which allows cars to park themselves, hold a lane at cruising speed, and stop at red lights. The unveiling was supposed to represent a significant step towards fully self-driving technology.
However, according to Fortune, there are fears that the name ‘Full Self Driving’ may be misleading to some drivers, who could mistakenly assume that their car needs no supervision. Regulators have warned users to monitor the technology closely and “take action to protect the public against unreasonable safety risks”.
According to a chart published on Medium in 2018, 2020 was predicted to be the year in which fully autonomous vehicles began to enter the market. However, by industry standards, even Tesla’s Full Self Driving software can only be considered an advanced driver assistance system (ADAS).
So what’s holding autonomous vehicles back? Let’s take a deeper look at some of the more significant hurdles that self-driving cars need to clear before entering the market.
The Fallacy of Machines Thinking Like Humans
One of the biggest issues to overcome is that, despite huge advancements in IoT technology, computers are a long way from possessing human levels of intelligence.
Of course, when it comes to individual tasks like identifying objects in pictures or following simple commands in a static environment, machines can far outperform humans. Still, many of these skills aren’t applicable in a more general situation.
While recognising a red light and keeping a car on the road can be achieved by relatively simple systems, the sheer volume of variables that arise during driving demands far greater intelligence and adaptability than machines currently possess.
In a 2017 essay, leading robotics and artificial intelligence researcher Rodney Brooks argued that autonomous vehicles were not viable in the short term due to the number of ‘edge cases’ that present themselves while driving – unusual events such as the poor driving of others, obstacles on the road, and misleading road markings.
While humans are naturally capable of reacting to unusual events and making swift adjustments, machines can struggle to work out how to adapt their actions and respond appropriately to the road in front of them.
Obstacles to Sensors
Autonomous vehicles use multiple sensors to interact with the environment around them. These help to detect objects like pedestrians, other vehicles, and road signs. Cameras help the car identify objects and gauge the distance between the vehicle and anything in its path, while radar charts the speed and direction of other vehicles and objects.
These sensors provide feedback to the car’s control system to advise where to steer or stop. Fully autonomous cars require a wide range of sensors that can detect objects with accuracy while quickly determining their distance and speed in every possible condition to ensure safety at all times.
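To make the idea concrete, here is a minimal, hypothetical sketch of the kind of decision loop described above: camera-derived distances and radar-derived closing speeds are fused into a time-to-collision estimate that tells the control system whether to brake. All names and the 2-second threshold are illustrative assumptions, not any manufacturer’s actual logic.

```python
from dataclasses import dataclass


@dataclass
class Detection:
    """One fused sensor track (hypothetical example, not a real AV stack)."""
    distance_m: float        # camera/lidar-estimated distance to the object (metres)
    closing_speed_ms: float  # radar-estimated closing speed (m/s; positive = approaching)


def time_to_collision(d: Detection) -> float:
    """Seconds until impact if neither party changes course; infinite if not closing."""
    if d.closing_speed_ms <= 0:
        return float("inf")
    return d.distance_m / d.closing_speed_ms


def control_decision(detections: list[Detection], brake_threshold_s: float = 2.0) -> str:
    """Advise the control system: brake if any object would be reached too soon."""
    if any(time_to_collision(d) < brake_threshold_s for d in detections):
        return "brake"
    return "cruise"


# A car 30 m ahead closing at 20 m/s gives 1.5 s to collision -> brake.
print(control_decision([Detection(distance_m=30.0, closing_speed_ms=20.0)]))
```

Even this toy version hints at the fragility the article describes: if bad weather corrupts the distance or speed estimates feeding in, the decision that comes out is only as safe as the sensors behind it.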
However, factors like bad weather, dense traffic, weathered road signs, or damaged markings can significantly impact sensors’ capabilities.
For truly autonomous vehicles to work, their sensors need to be highly adaptive and robust enough to interpret signals in the worst conditions across a range of environments. Accidents in which Tesla models have hit parked vehicles show that there’s still some way to go in developing reliable autonomous sensors.
The Threat of Cyberattacks
Much of the concern about autonomous vehicles centres on their technical ability to understand and react to obstacles. Still, there’s also a significant danger that these interconnected vehicles could become the victims of cyberattacks.
Transportation reporter Christian Wolmar has warned that widespread hacking has happened “in other areas of computing, such as the big-data hacks and security lapses, and it will happen in relation to autonomous cars.”
Given the dangers posed by a vehicle that’s been compromised by a hacker, any security flaw in a car’s firmware could cause widespread disruption, making it extremely difficult to imagine a rollout of autonomous vehicles before rigorous security testing is conducted.
Complexity of Insurance
There’s a whole host of financial issues attached to the rollout of autonomous vehicles. The matter of insurance, and where responsibility lies should accidents occur, remains a big hurdle to overcome.
Traditionally, drivers pay insurance on their cars to cover any accidents that take place. Still, when a vehicle is entirely autonomous, there’s little reason for an owner to pay for cover when any accident is likely to be the fault of the vehicle itself.
The arrival of self-driving cars will spark a rethink of how owners pay for their cars. If liability falls on the manufacturer, the cost of vehicles could rise further. In a world that’s become accustomed to purchasing vehicles on hire purchase (HP) or personal contract purchase (PCP) finance, the added complexity of ownership could be wholly off-putting for prospective buyers.
That said, there’s little doubt that autonomous vehicles will have a role to play in the future of motoring. Whether the technology and its complications develop to the point where private self-driving vehicles hit the roads by the end of the decade, or, more likely, we first see some form of light delivery vehicle rolled out, it’s a testament to interconnectivity and the potential for innovation in the industry.