Our Lawless AI Era: Technology Once Again Outpaces Wisdom

The period between 1886 (when the first automobile was patented) and 1899 (when the first driving school was established) is a fascinating case study in how society gradually recognizes and responds to new technological dangers. It reveals a troubling progression: it took 13 years for society to realize that formal driving instruction was necessary, and the intervening years were marked by escalating chaos, accidents, deaths, and economic damage. Today's AI development mirrors the chaotic early automobile era in several critical ways.

The History

The first documented automobile fatality occurred surprisingly early, well before the gasoline-powered car existed. In 1869, Mary Ward became the world's first automobile death when she was thrown from an experimental steam-powered vehicle in Ireland and broke her neck under its wheels. The real surge in accidents, however, began once gasoline-powered vehicles hit public roads.

The progression was alarming:

  • 1891: The first car accident in the United States occurred when inventor John William Lambert hit a tree root in Ohio, causing his vehicle to crash into a hitching post

  • 1896: The first car-versus-cyclist accident happened in New York's Central Park when Henry Wells struck Evelyn Thomas at 8 mph, breaking her leg

  • 1896: Bridget Driscoll became the first pedestrian killed by an automobile in Great Britain, struck by a car traveling only 4 mph

  • 1899: Henry Hale Bliss became the first automobile fatality in the United States when struck by an electric taxi in Manhattan

The statistics reveal how quickly automobile accidents became a public health crisis. According to official U.S. government data, automobile deaths escalated dramatically:

  • 1899: 26 deaths

  • 1900: 36 deaths

  • 1901: 54 deaths

  • 1902: 79 deaths

  • 1903: 117 deaths

By 1910, automobile deaths had reached 1,599 annually. The fatality rate was staggering: in 1913, there were 33.38 deaths per 10,000 vehicles on the road, a rate that would be considered catastrophic by today's standards.

The period between 1886 and 1899 was characterized by complete regulatory chaos. There were no traffic laws, no speed limits, no driver licensing requirements, and no training standards. Early newspaper accounts described vehicles "tearing along the street at a lively rate, dodging people and teams". Detroit, which became the heart of the automotive industry, exemplified this chaos. As early as 1908, auto accidents were recognized as a "menacing problem": in just two months that summer, 31 people were killed in car crashes, with countless more injured. The main cause was identified as excessive speeding, but until 1909 there was no regulation of street traffic whatsoever.

The early automobile era generated significant public anxiety and media attention. Car crashes were so unusual that they "often ended up making newspaper headlines when they did occur". The public perception was mixed—while some saw automobiles as symbols of progress, others viewed them as dangerous nuisances.

A 1906 New York newspaper editorial warned that cars "stir up primitive emotions", while British Prime Minister Herbert Asquith called automobiles "a luxury which is apt to degenerate into a nuisance". The contrast with horse-drawn transportation was stark: London had experienced 124 deaths and 1,919 injuries from horse-drawn vehicles in 1870 alone, but the speed and unpredictability of automobiles created new types of dangers that society was unprepared to handle.

Economic Impact and Property Damage

The economic toll was substantial even in these early years. The first automobile liability insurance policy wasn't issued until 1897—eleven years after Benz's patent—to Gilbert J. Loomis in Dayton, Ohio. This policy covered property damage and potential fatalities, indicating that significant property damage was already recognized as a major concern.

Early automobiles were expensive to repair and maintain. The original Benz Patent-Motorwagen cost 600 German marks (approximately $150, or $5,200 in 2024 dollars), making any damage to these vehicles a significant financial burden. The lack of standardized parts and the artisanal nature of early automobile construction meant that repairs were costly and time-consuming.

Still, what's particularly striking is how long it took society to recognize that the problem wasn't the technology itself but the lack of proper operator training. For 13 years, authorities tried various approaches:

  • Setting extremely low speed limits (5 mph to match horse-drawn wagons)

  • Requiring warning devices (Britain's "Red Flag Act" required a person to walk ahead of each vehicle waving a red flag)

  • Increasing penalties and enforcement

  • Public awareness campaigns

It wasn't until 1899 that the first driving school was established in Paris, France, followed by schools in London (1900) and elsewhere in Britain (1901). The realization that systematic training was necessary came only after more than a decade of mounting casualties and chaos.

This 13-year gap illustrates a recurring pattern in technological adoption: society often underestimates the need for systematic training and regulation when introducing transformative technologies. The early automobile era demonstrates how the absence of proper training protocols can lead to:

  • Exponentially increasing casualty rates

  • Significant property damage and economic costs

  • Public fear and resistance to new technology

  • Ineffective regulatory responses that address symptoms rather than root causes

The period between 1886 and 1899 serves as a sobering reminder that technological progress without corresponding educational infrastructure can exact a heavy human and economic toll. It took 13 years, hundreds of deaths, thousands of injuries, and substantial economic damage before society recognized that teaching people to drive was not optional; it was essential for public safety. Apparently, we have not yet had enough critical AI accidents to prompt an expedited call for clear-cut AI frameworks.

Our AI-Driven Present

Companies are deploying AI systems at breakneck speed, driven primarily by competitive pressure rather than safety considerations. According to recent research, 58% of businesses using AI started doing so because of "pressure from competitors", creating the same "who blinks first" environment that characterized the early automotive industry.

The pace of deployment is staggering. AI incidents reported in the media increased 21.8-fold between 2022 and 2024. More alarmingly, AI incidents directly related to safety and security grew by approximately 83.7% from 2023 to 2024, echoing the exponential rise in automobile fatalities during the 1890s. Just as early automobiles caused immediate harm, AI systems are already producing significant casualties:

High-Profile AI Failures, 2023-2025:

  • Samsung's ChatGPT data leak (2023): Employees accidentally uploaded confidential semiconductor code to ChatGPT, leading to a company-wide ban on generative AI tools

  • Waymo robotaxi recall (2024): Over 1,200 self-driving cars were recalled due to software flaws causing collisions with stationary objects

  • Korean industrial robot fatality (2023): An AI-guided robot killed a worker after misidentifying him as a box of vegetables

  • DeepSeek cyberattack (2025): The Chinese AI chatbot suffered major service failures and cyberattacks just as it gained global prominence

These incidents demonstrate that AI systems, like early automobiles, can cause death, economic damage, and security breaches when deployed without adequate safety measures.

The parallel to the pre-driving school era becomes even more striking when examining current AI governance. Only 5% of organizations have implemented any AI governance framework, despite 95% expressing confidence in their AI risk management practices. This mirrors the overconfidence of early automobile operators who believed they could safely operate vehicles without formal training.

Furthermore, only 40% of company boards have a director with expertise in AI ethics, and few companies have public AI policies. This governance vacuum is remarkably similar to the regulatory void that existed between 1886 and 1899, when there were no traffic laws, licensing requirements, or safety standards for automobiles.

The competitive pressures driving AI deployment today mirror those that prevented proper safety measures in the early automotive era. AI engineers across major tech companies report burnout from competitive pressure, shorter timelines, and lack of resources. At Google, employees criticized leadership for "rushed" and "botched" AI announcements, while AI workers describe their jobs as being "frequently assigned to placate investors rather than address user issues".

The AI Safety Index 2025 evaluated seven leading AI companies developing artificial general intelligence (AGI) and found alarming results: the best performer received only a C+, while others scored even lower. The report concluded that companies are "fundamentally unprepared" to manage AGI risks, with OpenAI specifically criticized for "lost safety team capability" and "mission drift" away from its original safety-focused goals.

And just as early automobile manufacturers prioritized market share over safety, today's AI companies face similar pressures. Studies show that up to 85% of AI projects fail, primarily due to poor data quality and rushed deployment. The racing dynamics create what experts call "corner cutting on safety standards", where adherence to safety protocols becomes secondary to being first to market.

Investment pressure is driving this race: roughly $8 billion in funding went to AI chip startups in both 2021 and 2022, creating intense pressure to deliver results quickly. This echoes the early automotive industry's rush to commercialize vehicles before understanding their full safety implications.

The current regulatory response to AI deployment parallels the delayed reaction to early automobile dangers. While some frameworks are emerging, they're consistently behind the pace of technology deployment:

  • EU AI Act: Officially became law in August 2024, but full implementation won't occur until 2026-2027

  • U.S. Federal Response: Only 59 AI-related regulations were introduced in 2024, doubling from 2023 but still minimal compared to deployment pace

  • Standards Development: AI safety standards are "behind schedule" and facing delays, with some key deliverables postponed indefinitely

This regulatory lag mirrors the 13-year gap between the first automobile and the first driving school, suggesting we may be repeating historical patterns of reactive rather than proactive safety measures.

Perhaps most tellingly, there's a notable absence of systematic AI safety training equivalent to driving schools. While some organizations are developing AI sandboxes and testing environments, these remain limited and voluntary. 95% of organizations lack AI governance frameworks, and 57% of employees hide their AI use, presenting AI-generated work as their own without oversight.

The industry acknowledges this gap: 82% of executives state that implementing AI governance solutions is "extremely pressing", and 85% plan to implement such solutions by summer 2025. However, this reactive approach mirrors the early automobile era's delayed recognition that systematic training was necessary. The economic toll is already substantial. AI failures are causing:

  • Corporate data breaches and intellectual property theft

  • Financial losses from AI-generated errors and misinformation

  • Reputational damage requiring expensive remediation

  • Regulatory fines and compliance costs

These costs mirror the property damage, insurance claims, and economic disruption caused by early automobiles before proper training and regulation were established.

The AI race demonstrates the same fundamental pattern as the early automobile era: transformative technology deployed without corresponding safety infrastructure. Just as it took 13 years, hundreds of deaths, and mounting economic damage before society recognized the need for systematic driver training, we appear to be following a similar trajectory with AI.

The key difference is that AI systems can potentially cause harm at a much larger scale and faster pace than early automobiles. While a runaway car in 1895 might injure a few people, a poorly designed AI system can affect millions of users simultaneously, manipulate financial markets, or compromise critical infrastructure.

The historical parallel suggests we're currently in the equivalent of the 1890s automobile era, experiencing mounting incidents while racing toward deployment without adequate safety measures. The question is whether we'll require our own 13-year period of escalating AI "accidents" before establishing systematic safety training and governance, or whether we can learn from history and act proactively.

Current trends suggest we may be heading toward the same delayed response that characterized the early automotive era, where competitive pressures and technological enthusiasm override safety considerations until the costs become undeniable. The challenge is recognizing this pattern now and implementing comprehensive AI safety training and governance before, rather than after, the casualties mount to crisis levels.

Joziane El Hawi

Joziane bridges the gap between complex technology and real-world impact. With 14+ years’ experience in humanitarian research and policy analysis, including work as a lawyer, UN Protection expert, published author and Gen AI prompt engineer, she brings a unique perspective to the AI landscape. As co-founder of WeCan AI, she empowers organizations and individuals, particularly those without a technical background, with the skills to correctly and efficiently use AI Tools.

https://www.linkedin.com/in/joziane-el-hawi-/