
I’m 22 and dread the ways AI will affect our future—and worse, that of my children


When I was growing up, artificial intelligence lived in the realm of science fiction. I remember being in awe of Iron Man’s AI system Jarvis as it helped fight off aliens, laughing at dumb NPCs (non-player characters) in video games, and joking with my dad about how scratchy and robotic virtual assistants like Siri sounded. The “real” AIs existed only as Star Wars’ C-3PO and the like, and were discussed mainly by nerds like me. More punchline than reality, AI was nowhere near the top of political agendas. But today, as a 22-year-old recent college graduate, I’m watching the AI revolution happen in real time, and I’m terrified that world leaders aren’t keeping pace.

In 2024, my generation is already seeing AI disrupt our lives. Gen Z classmates routinely use ChatGPT to breeze through advanced calculus classes, write political essays, and conduct literary analysis. Young voters are forced to contend with a growing wave of AI-driven political disinformation, and teen girls are targeted by convincing deepfake pornography, with no disclaimers and little recourse. Even in prestigious fields like investment banking, entry-level jobs are beginning to feel squeezed. And tech companies are making ethically dubious plans to bring intimate, humanlike AI companions into our lives.

Responding to AI’s rapid rise

The speed of change is dizzying. If today’s narrow AI tools can supercharge academic dishonesty, sexual harassment, workforce disruptions, and addictive relationships, imagine the impact the technology will have as it scales in access and power in the coming years. My fear is that today’s challenges are just a small preview of the AI-driven turbulence that will come to define Gen Z’s future.

This fear led me to join, and help lead, Encode Justice, a youth advocacy movement focused on making AI safer and more equitable. Our organization includes hundreds of young people across the world who often feel as if we are shouting into the void about AI risks, even as technological titans and competition-focused politicians push a hasty, unregulated rollout. It’s difficult to express the frustration of watching powerful lawmakers like Senate Majority Leader Chuck Schumer constantly kick the can down the road on regulation, as he did last week.

We’re done waiting on the sidelines. On Thursday, Encode Justice launched AI 2030, a sweeping call to action for global leaders to prioritize AI governance in this decade. It outlines concrete steps that policymakers and corporations should take by 2030 to help protect our generation’s lives, rights, and livelihoods as AI continues to scale.

Our framework is backed by powerful allies, from former Irish President Mary Robinson to civil rights trailblazers such as Maya Wiley, as well as over 15,000 young people in student organizations around the world. We aim to insert youth voices into AI governance discussions that will disproportionately affect us, not to mention our kids.

Right now, the global policymaking community lags behind AI risks. As of last December, only 34 of the world’s 190-plus countries had a national AI strategy. The United States made a start with President Biden’s Executive Order on AI, but it lacks teeth. Across the Atlantic, the EU’s AI Act will not take effect until 2026 at the earliest.

Meanwhile, AI capabilities will continue to evolve at an exponential rate.

Not all of this is bad. AI holds immense potential. It has been shown to enhance health care diagnoses, revolutionize renewable energy technology, and help personalize tutoring. It may well drive transformative progress for humanity. AI models are already being trained to predict disease outbreaks, provide real-time mental health support, and reduce carbon emissions. These innovations form the basis for my generation’s cautious optimism. However, fully unlocking AI’s benefits requires being proactive in mitigating risks. Only by developing AI responsibly and equitably will we ensure that its benefits are shared.

As algorithms become more humanlike, my generation’s formative years may be shaped by parasocial AI relationships. Imagine kids growing up with an always “happy” Alexa-like friend that can mimic empathy, knows what kinds of jokes they enjoy, and is there for them 24/7. How might that influence our youth’s social development or their ability to build real human connections?

AI in the long run

Longer term, the economic implications are terrifying. McKinsey estimates that up to 800 million people worldwide could be displaced by automation and need to find new jobs by 2030. AI can already write code, diagnose complex illnesses, and analyze legal briefs faster and cheaper than humans can. (I helped code an AI tool to do the latter while in college.)

Without proper safeguards, these disruptions will disproportionately affect already marginalized groups. A landmark MIT study a few years ago showed that over half of the growth in wage disparity since 1980 between workers with higher and lower education levels can be attributed to automation. Young workers in the Global South in particular, whose economies are more vulnerable to AI disruption, could face nearly insurmountable obstacles to economic mobility.

We are effectively being asked to trust that big technology firms such as OpenAI and Google will properly self-regulate as they roll out products with world-altering potential and little to no transparency. To complicate matters, tech companies often say the right thing when in the public eye. OpenAI CEO Sam Altman famously testified before the U.S. Congress pleading for regulation. In private, OpenAI lobbied hard to dilute regulatory provisions in the EU AI Act.

With billions of dollars on the line, competitive industry dynamics can create perverse incentives, like those that defined the social media revolution, to win the AI race at any cost. Trusting in corporate altruism is a reckless gamble with our collective future.

Critics have argued that calls for regulation will simply result in regulatory capture, where a company influences rules to benefit its own interests. The concern is understandable, but there is no legitimate alternative if we want secure AI systems. The technology is advancing so rapidly that traditional regulatory processes will struggle to keep pace, which makes acting early all the more essential.

Regulating AI

So where do we go from here? To start, we need better government regulation and clear red lines around AI development and deployment. We have been working tirelessly with state senator Scott Wiener on a bill we cosponsored in the California legislature, SB 1047, which would implement these kinds of guardrails for the highest-risk AI systems.

However, AI 2030 lays out a larger roadmap:

We call for independent audits that would test the discriminatory impacts of AI systems. We demand legal recourse for citizens to seek redress if AI violates their rights. We push for companies to develop technology that would clearly label AI-generated content and equip users with the ability to opt out of engaging with AI systems. We ask for enhanced protections of personal data and restrictions on deploying biased models. At the international level, we call on world leaders to come together and write treaties to ban lethal autonomous weapons and boost funding for technical AI safety research.

We recognize that these are complex issues. AI 2030 was developed over months of research, discussion, and constant consultation with civil society leaders, computer scientists, and policymakers. In conversations, we would often hear that youth activists are naive to demand ambitious action, that we should settle for incremental policy changes.

We reject that narrative. Incrementalism is untenable in the face of exponential timelines. Focusing on narrow AI challenges does nothing about the frontier models hurtling forward. What happens when AI can perfectly manipulate critical video footage, imitate our politicians, author important legislation with hidden biases, or conduct military strikes? Or when it begins to display eerie new capabilities in reasoning, strategy, and emotional manipulation?

We are talking about months and years, not decades, until we reach these milestones.

Gen Z came of age with social media algorithms subtly pushing suicidal content to the most vulnerable among us and climate disasters wreaking havoc on our planet. We know firsthand the dangers of “moving fast and breaking things,” of letting technologies jump ahead of enforceable rules. AI threatens to repeat all of those harms, potentially on a far more catastrophic scale. We must get this right.

To do so, world leaders must stop simply reacting to scandals after the damage is done and be more proactive in addressing AI’s long-term implications. These challenges will define the 21st century—short-term solutions will not work.

Critically, we need global cooperation to match threats that are not constrained by nation-state borders. Autocracies such as China have already begun to use AI for surveillance and social control. These same regimes are attempting to use AI to supercharge online censorship and discriminate against minorities. They are (unsurprisingly) beginning to exploit the United States’ weak regulations to their advantage, pushing our kids toward ever greater polarization.

Even well-intentioned developers can accidentally unleash catastrophic harms.

Consider a simple thought experiment: Google DeepMind’s AlphaGo, an AI system trained to play the complex strategy game Go at an expert level, competed against human champions and made moves never before seen in the game’s 4,000-year history. Its strategies were so alien that even its own creators did not understand its reasoning, and yet it beat top players repeatedly. Now imagine a similar system tasked with biological design or molecular engineering. It could devise new biochemical processes that are entirely foreign to human understanding, and a bad actor could use them to develop unprecedented weapons of mass destruction.

These risks extend beyond the biological. AI systems will become more sophisticated in areas such as chemical synthesis, nuclear engineering, and cybersecurity. These tools could be used to create new chemical weapons, design more destructive nuclear devices, or mount targeted cyberattacks on critical infrastructure. If these powerful capabilities are not safeguarded, the fallout could be devastating.

These are not abstract or distant scenarios. They are genuine global challenges crying out for new governance models. Make no mistake: The next several years will be critical. That’s why AI 2030 calls for establishing an international AI safety institute to coordinate technical research, as well as creating a global authority that would set AI development standards and monitor for misuse. Key global powers like the U.S., EU, U.K., China, and India must all be involved.

A global call to action

Is AI 2030 an ambitious agenda? We’re counting on it. My generation has no choice but to dream big. We cannot simply sit back and hope that big tech companies will act against their bottom-line interests, and we must not wait until AI systems cause societal harms we cannot come back from. We must be proactive and fight for a future where AI development is safe and secure.

Our world is at an inflection point. We can stay as we are, sleepwalking into a dangerous AI future where algorithms exacerbate inequality, erode democratic institutions, and spark conflict. Or we can wake up and take the path to a thriving, equitable digital age.

We need genuine international cooperation, not photo-op summits. We need lawmakers willing to spend political capital, not corporate mouthpieces. We need companies to radically redefine transparency, not announce shiny appointments to ethics boards with no power. More than anything, we need leaders thinking in terms of civilizational legacies, not just winning reelection.

As a young voter, I demand to see such commitments ahead of the November elections. I know I’m not alone. Millions of young people across the world are watching this AI age unfold with a mix of awe and anxiety. We don’t have all the answers. But we know this: Our generation deserves a voice in shaping the technologies that will come to define our lives and transform the very fabric of society.

What I ask now is: Will the leaders of today listen? Will they step up and dare to make a change? Or will they fail and force my generation to shoulder the fallout, as they have repeatedly on other critical issues?

As young leaders of tomorrow, we are making the choice to stand up and speak out while there’s still time to act. The future is not yet written. In 2030, let history show that we bent the arc of artificial intelligence for the betterment of humanity when it mattered most.

Sunny Gandhi is vice president of political affairs at Encode Justice. Originally from Chicago, he graduated from Indiana University this month with a bachelor’s degree in computer science.

The opinions expressed in Fortune.com commentary pieces are solely the views of their authors and do not necessarily reflect the opinions and beliefs of Fortune.
