Although the dream of fully autonomous cars is still in the future, autonomous vehicles (AVs) are already part of our world. Like other forms of artificial intelligence, incorporating this technology into everyday life requires weighing its pros and cons. One of the main benefits of AVs is their potential to support sustainable transportation: they can reduce traffic congestion and dependence on fossil fuels. AVs can also improve road safety and provide accessible transportation to communities that lack it, including people without driver’s licenses.
However, despite these advantages, many people are still afraid of fully automated AV systems.
An Australian study by Sjaan Koppel of Monash University found that 42% of participants would “never” use an automated vehicle to transport unaccompanied children. In contrast, just 7% said they would “definitely” use such a vehicle.
Lack of trust in artificial intelligence appears to stem from fear that machines may make mistakes or make decisions that are inconsistent with human values. This fear is reminiscent of the 1983 film adaptation of Stephen King’s horror novel “Christine,” in which a car turns murderous. People worry about being increasingly excluded from machines’ decision-making loops.
Vehicle automation is divided into levels, with level 0 meaning “no automation” and level 5 meaning “fully automated driving”, in which humans are only passengers.
Currently, consumers have access to levels 0 to 2, while level 3, which provides “conditional automation,” is available on a limited basis. The second-highest level, Level 4, or “high automation,” is being tested. Today’s AV systems require drivers to supervise and intervene when automation is not sufficient.
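To make the taxonomy concrete, here is a minimal sketch in Python, assuming the levels described correspond to the commonly cited SAE J3016 scale (the article itself does not name a standard):

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """Driving-automation levels as summarized in the text (SAE J3016-style)."""
    NO_AUTOMATION = 0           # human does all driving
    DRIVER_ASSISTANCE = 1       # e.g. adaptive cruise control or lane keeping
    PARTIAL_AUTOMATION = 2      # combined assistance; driver must supervise
    CONDITIONAL_AUTOMATION = 3  # system drives in limited conditions
    HIGH_AUTOMATION = 4         # no driver needed within a defined domain
    FULL_AUTOMATION = 5         # humans are only passengers

def driver_must_supervise(level: AutomationLevel) -> bool:
    # At levels 0-2, a human must monitor and be ready to intervene.
    return level <= AutomationLevel.PARTIAL_AUTOMATION

assert driver_must_supervise(AutomationLevel.PARTIAL_AUTOMATION)
assert not driver_must_supervise(AutomationLevel.FULL_AUTOMATION)
```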
To prevent loss of control over the AV, AI developers use a method known as value alignment. This approach becomes particularly vital as vehicles with higher levels of autonomy are developed and tested.
Value alignment involves programming artificial intelligence to behave in a way that is consistent with human goals, which can be done explicitly in the case of knowledge-based systems or implicitly through learning in neural networks.
For AVs, value alignment will vary depending on the vehicle’s purpose and location. It would likely take into account cultural values and comply with local laws and regulations, such as stopping for an ambulance.
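As an illustration of the “explicit” approach, the sketch below hand-codes two of the values mentioned above (yielding to an ambulance, obeying local speed limits) as rules. All names here (Observation, explicit_value_rules) are hypothetical, not drawn from any real AV software stack:

```python
from dataclasses import dataclass

@dataclass
class Observation:
    ambulance_approaching: bool
    local_speed_limit_kmh: float
    current_speed_kmh: float

def explicit_value_rules(obs: Observation) -> list[str]:
    """Knowledge-based ('explicit') alignment: human values encoded
    as hand-written rules reflecting local laws and norms."""
    actions = []
    if obs.ambulance_approaching:
        actions.append("pull_over_and_stop")   # yield to emergency vehicles
    if obs.current_speed_kmh > obs.local_speed_limit_kmh:
        actions.append("reduce_speed")         # comply with local regulations
    return actions

print(explicit_value_rules(Observation(True, 50.0, 62.0)))
# ['pull_over_and_stop', 'reduce_speed']
```

The implicit alternative would instead let a neural network learn such behavior from examples, which makes the encoded values much harder to inspect.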
The “trolley problem” poses a significant challenge to AV value alignment.
The trolley problem, first introduced by philosopher Philippa Foot in 1967, examines human morality and ethics. Applied to AVs, it can help us understand the complexity of aligning AI with human values.
Imagine an automated vehicle heading towards an accident. It can swerve right to avoid hitting five people but endanger one person instead, or swerve left to avoid hitting one person but endanger five.
What should an AV do? Which choice best reflects human values?
Now consider a scenario where the AV is a Level 1 or Level 2 vehicle, so the driver can take control. When the AV issues a warning, which way would you turn?
Would your decision change if the choice was between five adults and one child?
What if that one person was a close family member, like your mom or dad?
These questions emphasize that the trolley problem was never intended to have a final answer.
This dilemma shows that aligning AVs with human values is complicated.
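A deliberately naive utilitarian sketch shows why this is so hard to encode: with equal weights, minimizing the casualty count gives one answer, but any contested value judgment, such as weighting a child or a family member more heavily, can flip it. The function and weights below are illustrative assumptions, not how any real AV decides:

```python
def choose_swerve(harm_if_right: int, harm_if_left: int,
                  weight_right: float = 1.0, weight_left: float = 1.0) -> str:
    """Naive utilitarian rule: swerve toward the lower weighted harm.
    The weights stand in for contested value judgments (adult vs. child,
    stranger vs. family member) that change the 'right' answer."""
    if harm_if_right * weight_right < harm_if_left * weight_left:
        return "swerve_right"
    return "swerve_left"

# With equal weights, the headcount alone decides:
print(choose_swerve(harm_if_right=1, harm_if_left=5))   # swerve_right
# Weighting the single person more heavily (e.g. a child) flips the choice:
print(choose_swerve(1, 5, weight_right=6.0))            # swerve_left
```

The point is not that either output is correct, but that someone must pick the weights, which is precisely the alignment problem the article describes.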
Consider Google’s mishap with its Gemini model. Its attempt to curb racist and gender stereotypes resulted in misinformation and absurd outputs, such as depicting Nazi-era soldiers as people of color. Achieving alignment is intricate, and deciding whose values to reflect is equally hard.
Despite these complications, the effort to align AVs with human values holds promise.
Value-aligned AVs can make driving safer. Drivers often overestimate their own skills, and most car accidents result from human errors such as speeding, distraction or fatigue.
Can AV systems help us drive more safely and reliably? Technologies like lane-keeping assist and adaptive cruise control, found in Level 1 AVs, are already helping drivers stay safer.
As AVs increasingly appear on our roads, it becomes vital to encourage responsible driving alongside this technology.
Our ability to make effective decisions and drive safely, even with AV support, is crucial. Research shows that people often over-rely on automated systems, a phenomenon known as automation bias. We tend to think of technology as infallible.
The term “death by GPS” has gained popularity due to cases in which people blindly follow navigation systems, even in the face of clear evidence that the technology is incorrect.
A notable example was when tourists from Queensland drove into the bay while trying to reach North Stradbroke Island using GPS.
The trolley problem illustrates that technology can be as unreliable as humans, perhaps more so, given its lack of embodied consciousness.
The dystopian fear of artificial intelligence taking over may not be as dramatic as imagined. A more immediate threat to AV safety may be humans’ willingness to relinquish control to artificial intelligence.
The indiscriminate use of artificial intelligence affects our cognitive functions, including our sense of direction. This means our driving skills may deteriorate as we become more dependent on the technology.
While we may see Level 5 AVs in the future, the present depends on human decision-making and our inherent skepticism.
Exposure to AV failures may counteract automation bias. Demanding greater transparency in AI decision-making could help AVs match, and even improve on, human-led road safety.
(Source – PTI)