Ensuring the Safety of Tomorrow's Autonomous Vehicles


As autonomous technology becomes a reality in our daily lives, the pressing question of safety remains. Researchers work tirelessly to develop failsafe systems that can handle the unpredictable nature of the real world.


Published on 12/02/2024 21:33


  • Autonomous taxis have clocked 8 million miles in San Francisco, and more than 850,000 autonomous drones operate in U.S. civilian airspace, raising safety concerns.
  • Nearly 400 crashes involving autonomous vehicles reported over a 10-month period, with six fatalities, highlighting the need for robust safety measures.
  • Traditional testing methods are inadequate; Sayan Mitra's team at the University of Illinois is working to provide safety guarantees for autonomous systems.
  • Mitra's concept of a perception contract aims to maintain safety despite uncertainties in machine learning algorithm outputs.
  • Practical applications of these safety guarantees are underway, including Sierra Nevada's drone landing tests and Boeing's experimental aircraft plans.

    Autonomous vehicles, once a futuristic dream, are now navigating our roads and skies, reshaping transportation. In San Francisco, two trailblazing taxi companies have clocked an astonishing 8 million miles of self-driving navigation to date. Additionally, the skies are frequented by over 850,000 autonomous drones in U.S. civilian airspace, showcasing the advancement and adoption of autonomous flight technology. These developments, however, bring concerns regarding the safety and reliability of these autonomous systems.

    The issue of safety is further compounded by statistics from the National Highway Traffic Safety Administration, which reported nearly 400 crashes involving autonomous vehicles over a 10-month period ending in May 2022. Tragically, six lives were lost, and another five individuals sustained serious injuries in these events. These figures not only emphasize the potential danger associated with intelligent vehicles but also indicate the necessity for rigorous testing and safety assurances.

    Traditional safety validation, known as "testing by exhaustion," demands countless hours of operational testing in hopes of encountering every possible scenario that an autonomous system might face. However, according to Sayan Mitra, a computer scientist at the University of Illinois, Urbana-Champaign, this method is limited and cannot offer absolute guarantees of safety. Mitra and his team aim to move beyond the confines of conventional testing by providing proofs of safety for critical functions like lane-tracking in cars and landing systems in aircraft.

    Their approach involves a blend of machine learning algorithms and rigorous guarantees for the perception systems of autonomous vehicles. These perception systems are vital as they interpret environmental data from various sensors to understand vehicle positioning and identify obstacles. Despite the sophistication of these systems, the underlying machine learning algorithms, based on neural networks, can yield incorrect interpretations, jeopardizing the control systems that rely on their outputs.

    To counter this risk, Mitra's group introduced what they call a perception contract. The concept, borrowed from software engineering, is a promise that the output of a program will stay within a defined range for a given input. The challenge lies in determining that range. By calculating an error band from known uncertainties, such as sensor inaccuracy and environmental conditions like fog or glare, the team can determine the safety margin the vehicle must maintain. If the uncertainty can be quantified and the vehicle kept operating within that margin, then, according to their research, a safety guarantee can be established.
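
    The worst-case reasoning behind a perception contract can be sketched in a few lines of code. The following is only an illustration of the idea, not the team's implementation; the names, the clearance quantity, and the numeric bounds are assumptions chosen for the example.

        # Minimal sketch of a perception contract check (illustrative only, not the
        # researchers' actual method). The contract promises that the perceived value
        # stays within a known error band of the true value; the controller then acts
        # only if safety holds even in the worst case the contract allows.

        from dataclasses import dataclass

        @dataclass
        class PerceptionContract:
            # Maximum deviation of the perceived value from the true value,
            # e.g., meters of lateral-position error under stated conditions.
            max_error: float

        def safe_to_proceed(perceived_clearance: float,
                            contract: PerceptionContract,
                            required_clearance: float) -> bool:
            # perceived_clearance: distance the perception system reports (e.g., to a lane edge).
            # required_clearance: minimum true distance the controller must preserve.
            worst_case_clearance = perceived_clearance - contract.max_error
            return worst_case_clearance >= required_clearance

        # Example: perception may be off by up to 0.3 m in fog; 0.5 m of true clearance is required.
        contract = PerceptionContract(max_error=0.3)
        print(safe_to_proceed(0.9, contract, 0.5))  # True:  0.9 - 0.3 >= 0.5
        print(safe_to_proceed(0.7, contract, 0.5))  # False: 0.7 - 0.3 <  0.5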

    The benefit of a perception contract is akin to knowing the inaccuracy of a faulty speedometer: if the potential error is within 5 mph, then driving 5 mph below the speed limit ensures no speeding occurs. This strategy offers a way to deal with an imperfect system, like those dependent on machine learning, without necessitating perfection.
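
    The speedometer analogy is the same check with different numbers: subtract the error bound from the limit and treat the result as the target. The 5 mph bound comes from the analogy above; the 65 mph limit is an illustrative assumption.

        speed_limit_mph = 65                 # assumed posted limit for the example
        speedometer_error_mph = 5            # the "contract": reading is within +/- 5 mph of true speed
        target_indicated_mph = speed_limit_mph - speedometer_error_mph  # drive at 60 indicated
        # Even if the speedometer under-reads by the full 5 mph, the true speed stays at or below 65.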

    Mitra's paradigm is gaining traction in practical applications. For instance, Sierra Nevada is testing these safety guarantees for drone landings on aircraft carriers, a task that adds complexity due to the aerial dimension. Boeing, too, intends to test these methods on an experimental aircraft later in the year. Incorporating such safety guarantees demands understanding the unknowns—the uncertainties in our estimations—and how they might affect safety outcomes. It's a venture into mitigating errors not just from known risks but also from those that are unforeseen.

    In summary, the arrival of autonomous vehicles in our modern landscape is an extraordinary leap forward in technology. Yet, with this advance, the imperative of safety looms ever larger. Mitra's work represents a significant stride in the quest for reliable autonomy, promising a future where the advent of driverless cars and pilotless planes is not only innovative but also secure. For readers eager to delve deeper into these topics, the original story offers extensive details and further underscores the critical nature of this research in ensuring the safety of autonomous vehicles.


    The article presents an exploration of the challenges and innovations in the field of autonomous vehicle safety. It highlights the efforts of researchers to go beyond traditional testing methods and develop new strategies that offer robust safety guarantees for autonomous systems. The text describes the statistical context of autonomous miles driven and crashes reported, underscoring the real-world urgency for improved safety measures. It also delves into the technical aspects, such as the perception contract concept and formal verification methods, and details practical applications of these technologies in both aerial and ground vehicles.


    • Subjectivity: Objective with some descriptive language
    • Polarity: Largely neutral with a slightly positive tone towards scientific progress

      Sayan Mitra: A computer scientist at the University of Illinois, Urbana-Champaign, known for his work on providing safety guarantees for autonomous systems such as cars and aircraft.

      AI chief technologist at Boeing, involved in operational testing of safety guarantees for landing autonomous drones on aircraft carriers.

      An associate professor at the Massachusetts Institute of Technology focusing on robotic perception and machine learning in autonomous systems.

      A research scientist at Carnegie Mellon University and NASA’s Ames Research Center, interested in safety and reliability of software systems.

      Autonomous vehicles: Vehicles capable of sensing their environment and operating without human intervention using a variety of technologies including radar, lidar, GPS, and computer vision.

      Machine learning: A subset of artificial intelligence involving the use of data and algorithms to imitate the way that humans learn, gradually improving accuracy over time.

      Neural networks: Computational models inspired by the human brain's interconnected network of neurons, used to recognize patterns and solve complex problems.

      Sensors: Sensory components in autonomous vehicles that gather data from the environment to inform navigation and other operational decisions.

      Testing by exhaustion: A testing methodology where a system is subjected to extensive trials in an attempt to uncover every possible error or failure scenario.

      Perception contract: A commitment, borrowed from software engineering, that the output of a program remains within a specified uncertainty range for a known input.

      Error band: The range of uncertainty in the measurements or predictions of a system, such as the inaccuracies of sensors or algorithms.

      Formal verification: The process of using mathematical techniques to prove the correctness of algorithms or systems relative to a certain formal specification or property.

    8 million

    Autonomous Miles Driven

    The statistic denotes the total distance covered by autonomous taxis in San Francisco up until August 2023, providing insight into the practical, real-world application and testing of autonomous driving technologies.

    850,000

    Registered Autonomous Aerial Vehicles

    This is the number of registered civilian drones in the United States as of the article's publication, which highlights the prevalence of autonomous technology in aerial systems and suggests a significant level of integration into the airspace.

    Nearly 400

    Reported Autonomous Vehicle Crashes

    Reported by the National Highway Traffic Safety Administration over a 10-month period, this figure illustrates the potential safety risks associated with autonomous vehicles and underscores the importance of effective safety measures and regulations.

    6

    Fatalities from Autonomous Vehicle Crashes

    In the referenced timeframe, this number signifies the human cost of incidents involving vehicles with autonomous control systems, underlining the real-world consequences beyond technical and statistical analyses.