
I’ve listened to and interviewed more than 50 tech leaders this year, from executives running trillion-dollar firms to young founders betting their futures on AI.
Across boardrooms, conferences, and podcast interviews, the people building our AI future kept returning to the same four themes:
1. Use AI, because someone who understands AI better might replace you
This is the line I heard most often. Nvidia CEO Jensen Huang has said it multiple times this year.
“Every job will be affected, and immediately. It is unquestionable. You’re not going to lose your job to an AI, but you’re going to lose your job to someone who uses AI,” he said at the Milken Institute’s Global Conference in May.
Other tech leaders echoed his view, with some saying that younger workers may actually have an edge because they are already comfortable using AI tools.
OpenAI CEO Sam Altman said on Cleo Abram’s “Huge Conversations” YouTube show in August that while AI will inevitably wipe out some roles, college graduates are better equipped to adjust.
“If I were 22 right now and graduating college, I would feel like the luckiest kid in all of history,” Altman said, adding that his bigger concern is how older workers will cope as AI reshapes work.
Fei-Fei Li, the Stanford professor known as the “godmother of AI,” said in an interview on “The Tim Ferriss Show” published earlier this month that resistance to AI is a dealbreaker: she won’t hire engineers who refuse to use AI tools at her startup, World Labs.
This shift is already showing up in everyday roles. An accountant and an HR professional told me they’re using AI tools, including vibe coding, to level up their skills and stay relevant.
2. Soft skills matter more in the AI era
Another point of consensus among tech leaders is that AI makes soft skills more valuable.
Salesforce’s chief futures officer, Peter Schwartz, told me in an interview in May that “the most important skill is empathy, working with other people,” not coding knowledge.
“Parents ask me what should my kids study, shall they be coders? I said, ‘Learn how to work with others,’” he said.

LinkedIn’s head economist for Asia Pacific, Chua Pei Ying, also told me in July that she sees soft skills like communication and collaboration becoming increasingly important for experienced workers and fresh graduates.
As AI automates parts of our jobs and makes teams leaner, the human side of the work is starting to matter more.
3. AI is evolving fast — and superintelligence is coming
As the year went on, the stakes around AI’s future began to feel bigger and more real. Tech leaders increasingly spoke about chasing artificial general intelligence, or AGI, and eventually superintelligence.
AGI refers to AI systems that can match human intelligence across a range of tasks, while superintelligence describes systems that surpass human capabilities.
Altman said in September that society needs to be prepared for superintelligence, which could arrive by 2030. Mark Zuckerberg established Meta’s Superintelligence Labs in June and said that the company is pushing toward superintelligence.
These leaders don’t want to miss the AI moment. Zuckerberg underscored that urgency in September, saying he would rather risk “misspending a couple of hundred billion dollars” than be late to superintelligence.
Some tech leaders, such as Databricks CEO Ali Ghodsi, argued that the industry has already achieved AGI. Others are more cautious. Google DeepMind’s cofounder, Demis Hassabis, said in April that AGI could arrive “in the next five to 10 years.”
Even when tech leaders disagree on timelines, they tend to agree on one thing: AI progress is compounding.
I saw this acceleration from the outside as a user. New tools are rolling out at a dizzying pace, from ChatGPT adding shopping features and image generation to China’s “AGI cameras.”
Things that would have felt magical in January now feel normal.

4. Humans need to be at the center of AI
Many leaders also circled back to the need for human control as AI accelerates.
Microsoft AI chief Mustafa Suleyman said superintelligence must support human agency, not override it. He said on an episode of the “Silicon Valley Girl Podcast” published in November that his team is “trying to build a humanist superintelligence,” warning that systems smarter than humans will be difficult to contain or align with human interests.
Anthropic CEO Dario Amodei has been blunt about the risks AI poses if it’s misused.
While advanced AI can lower the barrier to knowledge work, the risks scale alongside the rewards, Amodei said on an episode of the New York Times’ “Hard Fork” published in February.
“If you look at our responsible scaling policy, it’s nothing but AI, autonomy, and CBRN — chemical, biological, radiological, nuclear,” Amodei said.
“It is about hardcore misuse in AI autonomy that could be threats to the lives of millions of people,” he added.
Geoffrey Hinton, often referred to as the “godfather of AI,” said in August that as AI systems surpass human intelligence, safeguarding humanity becomes the central challenge.
“We have to make it so that when they’re more powerful than us and smarter than us, they still care about us,” Hinton said at the Ai4 conference in Las Vegas.