Alexandr Wang isn’t your typical tech entrepreneur. At just 27, he’s already achieved billionaire status, having dropped out of MIT to co-found Scale AI—an artificial intelligence startup now valued at $14 billion. Backed by Amazon and other major players, Wang’s company powers the development of cutting-edge AI through data labeling and infrastructure support. With a personal net worth of $2 billion, he holds the title of the world’s youngest self-made billionaire—and he’s got strong opinions on where the future of work and technology is headed.
When asked about the skills young people should develop for the coming AI-driven economy, Wang emphasized one that might sound surprising: prompt engineering. It’s the art and science of crafting inputs, or instructions, for AI models like chatbots so they deliver more accurate and useful results. As more businesses integrate generative AI into their operations, the ability to communicate effectively with these systems will become a valuable, even essential, skill.
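Prompt engineering is easier to see than to describe. Here is a minimal sketch in Python—the helper function and template below are illustrative, not part of any particular AI library—showing how the same question, rephrased with an explicit role, constraints, and output format, gives a chatbot far more to work with than a bare request:

```python
def build_prompt(task: str, role: str, constraints: list[str], output_format: str) -> str:
    """Assemble a structured prompt: an explicit role, the task,
    a list of constraints, and a required output format."""
    lines = [
        f"You are {role}.",
        f"Task: {task}",
        "Constraints:",
    ]
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Respond as: {output_format}")
    return "\n".join(lines)

# A vague prompt leaves the model guessing about audience and depth.
vague = "Explain neural networks."

# A structured prompt pins down role, scope, and format up front.
structured = build_prompt(
    task="Explain how a neural network learns from data.",
    role="a tutor for high-school students with no calculus background",
    constraints=["Use one everyday analogy", "Keep it under 150 words"],
    output_format="three short paragraphs",
)

print(structured)
```

The specific template is a matter of taste; the underlying habit—stating who the model should be, what exactly it should do, and what the answer should look like—is what practitioners mean by prompt engineering.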
But prompt writing is just the surface. According to Wang, deeper understanding is needed to tackle one of AI’s most significant limitations—reasoning across longer time horizons. “I think this is something that humans will always be differentiated in,” he explained. “We’re very good at long-form thinking and very good at thinking over very long time horizons.”
While today’s AI excels at short-term prediction—guessing the next word in a sentence or generating a fast answer—Wang says it still stumbles when complex reasoning is required over multiple steps. “They usually make a mistake on the third, or fourth, or fifth reasoning step or chain of thought,” he said. This gap presents a clear opportunity for humans to maintain a critical advantage: deep, logical, long-range thinking.
So how do you train your brain for that kind of challenge? Wang points to the “classic stuff” like math and physics—technical disciplines that demand rigor, abstract thinking, and the ability to consider consequences far down the line. “There are a lot of fields like economics which force you to think very long-term and think through implications many steps down,” he noted.
These fields help develop mental endurance and structured reasoning, which are critical when working with or designing AI systems. They also train people to see patterns, solve puzzles, and understand systems—all necessary skills in a world increasingly shaped by intelligent machines.
Intel’s Lama Nachman, who shared the stage with Wang during this discussion, echoed the sentiment. She emphasized critical thinking and reasoning—especially when AI gets it wrong. The ability to question, verify, and correct AI outputs is becoming more important as we rely on these systems in education, medicine, finance, and creative industries.
Wang also made an important point about the limitations of AI training. Most large models are trained on what he calls “most of the Internet,” which raises questions about bias, outdated information, and reliability. That makes human judgment more important than ever. AI can be a powerful tool, but it’s only as good as the data and prompts it receives—and the people interpreting its results.
Ultimately, Wang’s advice is clear: if you want to thrive in an AI-powered future, don’t just learn how to use these tools—understand how they think, where they fail, and how to stay one step ahead. Whether you're studying mathematics, experimenting with prompts, or building your critical thinking muscles, the key is learning how to think in ways that AI can’t. Yet.