Missy Cummings, an expert in automation, is optimistic that a dedicated course aimed at educating policymakers about artificial intelligence (AI) can provide a valuable solution in addressing the historical disappointments surrounding tech regulation.

During a recent hearing, US senators expressed concerns over the potential negative impacts of unchecked AI, such as job displacement and the spread of misinformation. OpenAI CEO Sam Altman emphasized the need for a new federal agency to oversee AI development. However, there was also a consensus among lawmakers that stifling AI's potential would not be beneficial, as it has the capacity to boost productivity and position the US at the forefront of a technological revolution.

To address these concerns, it would be worthwhile for senators to speak with Missy Cummings, a former fighter pilot who is now a professor of engineering and robotics at George Mason University. Cummings specializes in the use of AI and automation in safety-critical systems, including automobiles and aircraft. With her expertise and her experience at the National Highway Traffic Safety Administration, which oversees automotive technologies like Tesla's Autopilot and self-driving cars, Cummings can offer valuable insight to politicians and policymakers grappling with the balance between the promise of AI advances and the risks they entail.

In a recent conversation, Cummings expressed deep concern about the autonomous systems car manufacturers have already deployed. She emphasized that these systems are far less capable than commonly believed, and that the gap between perception and reality is getting them into serious trouble.

The situation reminded me of ChatGPT and similar chatbots, which generate both excitement and concern about the power of AI. Automated driving features have been around longer, but, like large language models, they rely on machine learning algorithms that are inherently unpredictable, hard to inspect, and demand a different kind of engineering thinking than conventional software.

Similar to ChatGPT, Tesla's Autopilot and other autonomous driving projects have been subject to excessive hype. The lofty aspirations for a transportation revolution led to significant investments by automakers, startups, and investors in a technology that still faces numerous unresolved issues. During the mid-2010s, the regulatory environment surrounding autonomous cars was permissive, as government officials were hesitant to impede a technology projected to be worth billions for US businesses.

Despite the billions spent on self-driving technology, it continues to face challenges, with some auto companies even discontinuing major autonomy projects. Furthermore, as Cummings points out, the general public often lacks clarity regarding the true capabilities of semi-autonomous technology.

In one sense, it is encouraging to see governments and lawmakers move quickly to discuss regulation of generative AI tools. The concern of the moment centers on large language models and tools like ChatGPT, which are remarkably good at answering questions and solving problems despite significant shortcomings, including a tendency to confidently fabricate facts.

During the Senate hearing, Altman, whose company OpenAI created ChatGPT, went so far as to propose a licensing system to govern whether companies like his should be permitted to work on advanced AI. He said his worst fear is that the field, the technology, and the industry cause significant harm to the world.

Cummings, drawing on her time at the NHTSA, warns of a significant risk of regulatory capture if major AI and tech companies gain too much influence over the rules meant to govern them. She points to instances at the agency where regulators held back for fear of disrupting markets, which she sees as a clear sign of capture. In her view, if a market is affected by necessary oversight, so be it.

Cummings also emphasizes the urgent need for politicians and regulators to familiarize themselves with AI advancements and gain a deeper understanding of the technology and its workings. To address this, she is exploring ways to train individuals in investigating accidents involving AI and is developing a course at George Mason University aimed at informing policymakers and regulators about the technology.

While some respected AI experts believe that advanced AI systems may not only replace office workers and propagate disinformation but also pose existential risks if they become excessively intelligent and willful, Cummings is more concerned about the danger of once again placing too much faith in a technology that is not fully mature.

As an example of the likely consequences of the current AI boom, Cummings points out that people everywhere are going to start writing code with generative tools. She expects that trend to create an entirely new field of computer science devoted to finding and fixing faulty code, which should keep people like her employed for some time.


