As a software engineer who's been building in the AI space for the past 5 years and currently runs an AI YouTube channel, I’ve experienced firsthand the transformation of AI becoming mainstream, in contrast to when it was only popular among technologists, back when deep neural networks like Convolutional Neural Networks (CNNs) and Long Short-Term Memory networks (LSTMs) were all the rage.
Since the launch of ChatGPT in 2022, transformer-based neural networks have become the mainstream focus for their general-purpose capability of powering Large Language Models (LLMs) and handling a variety of data in token format. With their multimodal capabilities, they can handle not only text generation, but binary and multiclass classification, image detection, time series forecasting, and now full video and audio generation at a Hollywood level. As we move away from hyperparameter tuning, prompting LLMs with clear instructions is now the most important skill needed to properly leverage these models.
The big, trillion-dollar question is where all this spending on AI is going to lead, and what AI will look like in the next 5-10 years. Consequently, is AI something you should invest time and money into learning? Markets and “experts” love to exaggerate when new technology becomes mainstream, and again when it fails to deliver on promised expectations. As we learn from investing and trading, we can profit from overvaluations not only in stock returns, but also by future-proofing our careers with skills that will put us ahead in an increasingly AI-driven world. If you’re not careful about who you get your AI news from, buzzwords can turn into buzz saws and tear apart your sanity. The impetus for writing this long essay was to describe what a realistic AI-driven world will look like, and what you should do to best position yourself for the future.
In this essay, I’ll give my predictions on the future of AI based on historical, economic, and innovation cycles. From analyzing the history of AI to looking at AI innovation today, I predict that AI interest will peak around 2035, and that afterward, AI will be a tool used in most of the software we interact with on a daily basis. My primary goal is to give recommendations for how you can best position your career for the AI future. That way, whenever you hear about AI being used in a product, you have a better idea of whether someone knows what they’re talking about or is just using it as a marketing term.
I hope to provide a pragmatic perspective, from an engineer who builds with these tools daily, so that you can get a realistic idea of what the future holds. That way, you can avoid getting emotionally rattled by people who throw around AI buzzwords and statements like “AI will take my job” without any context or data to back their claims.
History of AI
As important context for my predictions, I wanted to first analyze the history of AI to lead up to where we are today.
The idea of AI and autonomous robots has been around for a while, appearing in mythology, but it first materialized with Alan Turing’s Turing Test in 1950, the first benchmark for comparing computer-generated and human-generated text. The term “AI” wasn’t coined until 1956 at the Dartmouth Conference by Stanford’s John McCarthy, after which it became a formal subfield of computer science.
Progress continued into the 1960s with notable achievements like Arthur Samuel's checkers-playing program in 1962 and the 1966 deployment of Shakey, the first general-purpose mobile robot capable of reasoning. However, from the late 1960s to the mid-1990s, AI research and development experienced a prolonged "AI Winter" due to limited practical success and a significant drop in funding and interest, leaving progress confined to academic research.
A turning point came in 1997 when IBM’s computer, Deep Blue, defeated world chess champion Garry Kasparov, showcasing the potential of machine intelligence in complex strategy games. The 2000s brought more tangible applications, such as the 2002 launch of the Roomba, the first mass-produced autonomous robotic vacuum, and Stanford winning the 2005 DARPA Grand Challenge with a self-driving car that completed a 132-mile course.
AI entered mainstream consumer devices in 2011 with Apple’s integration of Siri into the iPhone 4S. In 2012, a major leap occurred with the creation of AlexNet, a deep convolutional neural network that won the ImageNet competition and demonstrated the power of training deep neural networks on GPUs. This work laid the foundation for the deep learning revolution of the late 2010s, and the team behind it was acquired by Google in 2013.
In 2015, OpenAI was founded with the mission of ensuring that artificial general intelligence benefits all of humanity. The release of GPT-3 in 2020 marked a new era in natural language processing, offering powerful conversational capabilities licensed exclusively to Microsoft. Amid economic disruptions from COVID-19, central banks cut interest rates to near zero in 2020 and held them there through early 2022 to stimulate recovery. At the end of 2022, OpenAI released ChatGPT, igniting widespread public interest in large language models (LLMs) and conversational AI.
The momentum continued into 2023, which saw a surge in the development of conversational interfaces. OpenAI launched GPT-4, a multimodal LLM capable of handling both text and images, while Google released Bard and Anthropic introduced Claude, with both competitors focusing on safety and user experience. Techniques like Chain-of-Thought reasoning and Few-Shot Learning became standard practice, and Retrieval-Augmented Generation (RAG) enabled early integrations of LLMs with existing software systems.
By 2024, the industry shifted toward AI agents and deeper software integration. OpenAI unveiled GPT-4o, a fully multimodal model supporting audio, image, and file inputs. Meta released Llama 3 with up to 70B parameters, while Alibaba launched over one hundred Qwen 2.5 models at different sizes. AI agents became more mainstream, increasingly capable of addressing open-ended tasks such as research and problem-solving. OpenAI also announced the o3 model, known for its strong reasoning capabilities, and tools like Cursor emerged to support software developers with AI-assisted coding. Toward the end of the year, Anthropic saw an opportunity to standardize the connections between agents and software through the Model Context Protocol (MCP), which became popular in 2025.
In 2025, AI progressed toward deeper integration and more tangible value. DeepSeek released its R1 and R1 Zero models; OpenAI followed with GPT-4.5 and the ChatGPT agent, expanding autonomous capabilities. Anthropic launched Claude 3.7 Sonnet, and xAI rolled out Grok 3 and 4, signaling continued competition and innovation in the AI space. These advancements reflect a broader trend of AI systems evolving into full-fledged digital agents embedded across workflows, tools, and everyday experiences.
AI Today
A zero to 0.25% interest rate policy from the Federal Reserve during COVID-19 from 2020 to 2022, along with the mainstream success of ChatGPT’s launch, led technology firms to increase hiring and spending on AI to train their own models and replicate ChatGPT’s success. Tech companies are often valued by investors on the expectation of cash flows far out in the future, and to deliver them they need to borrow to finance multi-year and multi-decade projects. It’s very likely that the current bubble in AI valuations was the result of a long period of zero interest rates (Slok, 2025). Nowadays, we have upward pressure on inflation coming from tariffs, and interest rates are expected to remain high throughout the rest of 2025. This creates two headwinds for tech companies that are likely to slow down AI growth in the next decade.
The top 10 tech companies today are more overvalued than they were in the Dotcom boom

The growth of the S&P 500 has been driven primarily by the largest tech stocks, including Nvidia, Microsoft, Apple, Amazon, Meta, Broadcom, Google, and Tesla, all of which have added AI to their value proposition and core products (Slok, 2025).
Consequently, Apple, Nvidia, Microsoft, Google, Amazon, and Meta increased their capital expenditures (CapEx) 63% year over year to $212 billion in 2024. Since 2022, Meta’s stock price has increased 500% as it implemented AI to improve its advertising business and recovered from negative investor sentiment around the Metaverse. Likewise, Nvidia’s stock price has increased 526% since 2022 due to the sharp rise in demand for GPUs from companies and labs training their own LLMs. The major tech companies are spending over $300 billion on AI in 2025 alone, a 30-45% increase from 2024: Meta is spending $65 billion, Google $75 billion, Microsoft $80 billion, and Amazon $100 billion. S&P 500 companies are also expected to spend over $1 trillion on AI over the next decade.
In the venture capital world, AI startup valuations are ballooning. Safe Superintelligence Inc. had its pre-seed round valued at $30 billion in March 2025 with just a landing page and no product yet. Other notable AI startups include Cursor, which raised $900 million at a $9.9 billion valuation to help developers write code faster with AI, and Lovable, which raised $200 million at a $1.8 billion valuation to make software development accessible to everyone. For almost every white-collar profession, there are now multiple startups promising to disrupt it with AI. Entrepreneurs and investors are enormously interested in getting involved in AI right now, and thus hold the media’s attention when it comes to guidance on AI.

This is a visual preview of what I will discuss in the next section: a model of the expectations placed on AI technology and how much new innovation will be created. AI has enough momentum to sustain interest for at least several more years, but after a decade, I predict that a new technology will come in and take mainstream attention away from AI.
This doesn’t mean AI-driven products and jobs will go away after the AI bubble pops. Websites and software developers were still needed after the Dotcom crash of 2000, and the rest of the decade included the releases of generational software like Facebook, YouTube, and Skype.
Additionally, while the lines in the chart above seem smooth, it’s important to note that there are hype cycles within these hype cycles, depending on the particular AI tool and its industry application. For example, chat interfaces are already common in most applications and are farther along in their own hype cycle than agents.

One data point supporting the increase in AI demand is the rise of technology jobs that include AI in their description. They’ve increased a staggering 200% since ChatGPT launched and 448% in the past decade. The phrase “AI isn’t coming for your job. Someone using AI will” holds true. That said, for most jobs, especially those outside tech, you will primarily need to master prompt engineering and know which AI tool is right for your use case. From summarizing long documents to automating the monotonous parts of your life, you can gain a very large edge in your work depending on how effectively you apply AI.

Likewise, AI adoption is increasing according to surveys by Pew Research and Elon University. The share of US adults who’ve used ChatGPT increased from 18% in 2023 to 37% in 2025. This is a key data point supporting my prediction of the AI bubble popping in the mid to late 2030s, because there are still 5-10 years’ worth of AI adoption left for everyone outside of the Silicon Valley tech bubble.
The focus of AI right now, in the summer of 2025, is using AI agents to turn model responses into useful work within a software system. I like to think of AI agents as employees with tons of knowledge who only thrive under careful management and specific instructions.

Lovable is an example of a company whose main product, a no-code app-building tool, is driven by AI agents. Recently, Lovable broke Cursor’s record as the fastest product to reach $100M in Annual Recurring Revenue (ARR), doing so in only 8 months.
Other examples of agents include Slack and Teams bots that you can chat with and customize, and OpenAI’s recent ChatGPT agent that can execute tasks on its own computer and output files like Excel sheets and PowerPoint slides. While earlier LLMs needed clear inputs and narrowly defined tasks, agents can take on less defined work, operating with goals, autonomy, and guardrails.
AI agents are the new wave of functional infrastructure growing out of conversational interfaces. In the same way, the early 2000s saw a shift in focus from static to dynamic websites, leading to applications like Gmail and Google Maps that transformed the internet from a collection of pages into a toolbox of utilities the everyday person can use.
Agents will reshape how we interact with software, helping companies with customer support and onboarding, research, scheduling, and optimizing internal operations. Enterprises want ecosystems built around autonomous execution, powered by agents that understand company goals, generate plans, and self-correct in real time as they operate. It’s very likely that many jobs a decade from now will involve managing a team of agents to execute more open-ended tasks that purely chat-based LLMs weren’t able to do before.
AI in the future
With the current focus on agents and AI’s history in mind, I predict that in the next 5 years we will see increased reliability, integration, and impact from agents. Once agents are able to re-prompt themselves the right number of times to effectively answer a question, they become easier to manage and control for quality.
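To make that concrete, here is a minimal sketch of such a re-prompting loop, assuming the OpenAI Python client; the model name, the self-critique prompt, and the retry cap are illustrative stand-ins for the quality controls a real agent framework provides.

```python
# A sketch of an agent-style re-prompting loop (assumes the OpenAI Python
# client; the model name, critique prompt, and retry cap are illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def ask(messages):
    """Send a chat request and return the model's text reply."""
    response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
    return response.choices[0].message.content

def answer_with_retries(question: str, max_attempts: int = 3) -> str:
    messages = [{"role": "user", "content": question}]
    answer = ask(messages)
    for _ in range(max_attempts - 1):
        # Ask the model to critique its own answer before accepting it.
        critique = ask([{
            "role": "user",
            "content": (f"Question: {question}\nAnswer: {answer}\n"
                        "Reply PASS if the answer is complete and correct; "
                        "otherwise explain what is missing."),
        }])
        if critique.strip().upper().startswith("PASS"):
            break
        # Re-prompt with the critique folded in; the cap keeps quality controllable.
        messages += [{"role": "assistant", "content": answer},
                     {"role": "user", "content": f"Improve the answer. Feedback: {critique}"}]
        answer = ask(messages)
    return answer

print(answer_with_retries("List three risks of deploying LLM agents in production."))
```

The bounded retry is the key management lever: you decide how many self-corrections an agent gets before a human reviews the output.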
As agents become more powerful, they’ll be able to perform more tasks without human supervision and solve less defined problems. For example, by 2030, we’ll be seeing the first AI-operated companies and research labs that use agents to self-manage their own research, finance, and logistics with minimal human input. AI will significantly improve its ability to be a collaborative partner in helping write novels, produce music, and design marketing materials.
As mass adoption of AI increases, so too will government oversight and involvement. The Trump Administration plans to spend $92 billion on AI and energy infrastructure, and we’ll see other countries like China increase their spending on AI.
That’s a realistic picture of what I think AI will look like from 2025 to 2030, but past that is when I think AI will reach its peak in terms of mainstream interest. The early 2030s are when we’ll likely see fully autonomous companies, systems, and organizations powered by 100,000 AI agents working together effectively at mass scale.
From 2035 to 2040, the AI bubble will pop, and a new technology (VR, AR, robotics, or a new blockchain technology) will emerge to take its place, restarting the 10-15 year technology hype cycle. After 2040, AI, like the internet, will power our everyday lives through the software we interact with on our computers, or even through our brains via Neuralink.
What to do now
The AI tsunami has been lifting the tech industry and US economy with such enormous momentum that every company is rightfully rushing to use it. It has begun a transformation of our digital age with enormous opportunities. Each industry and sector has its own waves following the AI tsunami, with new AI startups launching and raising tens to hundreds of millions of dollars for every niche and vertical you can think of. Likewise, there are cycles of development within this larger tsunami: some months see new models and software released daily, while others are quieter, with work happening behind the scenes.
With this opportunity in front of us all, that raises the question: what can I do to capitalize on AI in my career? Today, unlike the 1990s, we have free access to the world’s information, thriving online communities to learn from, and access to low-code and no-code AI agent builder tools. It starts with education and teaching yourself how to use the most popular AI tools currently being used in your field.
Most importantly, you must keep your AI information diet as healthy as you would your food. You want to primarily read the latest AI updates from actual engineers, data scientists, and research scientists who code with AI models every day. They have the most accurate perspective on what you should expect from using an AI model, and they have less of a conflict of interest than the CEO of a tech company or AI lab announcing that AI will replace all jobs. That said, business leaders tend to have a bigger-picture view of their company’s AI initiatives, which is also worth paying attention to when making longer-time-horizon bets.
The second thing you should do is learn how to prompt well: be specific, give your model a persona in its system prompt, utilize chain of thought, and show examples using few-shot learning. These are the core prompt engineering techniques that will get you 90% of the outcomes you want from AI tools like ChatGPT. Some examples of tasks you can give to AI are summarizing YouTube videos, research papers, books, and long articles, so that you can decide whether you want to spend more time reading deeper. If you have ideas for apps that you’ve always wanted to build, I’d recommend learning how to vibe code using AI app-building tools, which don’t require any coding, like Lovable. If you’re a programmer or data scientist, I highly recommend using Cursor, as it can increase the speed of your existing programming skills. A minimal sketch of these prompting techniques is shown below.
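For illustration, here is a minimal sketch of those techniques in a single request, assuming the OpenAI Python client; the model name, persona, and example texts are placeholders you would swap for your own use case.

```python
# A sketch of the core prompting techniques: a persona in the system prompt,
# few-shot examples, and a chain-of-thought instruction. Assumes the OpenAI
# Python client; the model name and example texts are placeholders.
from openai import OpenAI

client = OpenAI()

messages = [
    # Persona: tell the model who it is and what standard to hold itself to.
    {"role": "system",
     "content": "You are a senior financial analyst who writes concise, plain-English summaries."},
    # Few-shot learning: show an input/output pair in the exact format you want.
    {"role": "user", "content": "Summarize: Revenue grew 12% YoY while operating costs rose 20%."},
    {"role": "assistant", "content": "Sales are up, but costs are growing faster, so margins are shrinking."},
    # Specificity plus chain of thought: state exactly what to do and how to reason.
    {"role": "user", "content": (
        "Summarize the following excerpt in two sentences. "
        "Think step by step about what changed and why before writing the summary.\n\n"
        "Excerpt: CapEx doubled to fund new data centers while free cash flow fell 15%."
    )},
]

response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
print(response.choices[0].message.content)
```

The same structure works in any chat interface: the system message carries the persona, the example pair anchors the format, and the final message states the task and the reasoning instruction.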
Along with mastering how to prompt, I think it’s also important to be aware of an LLM’s limitations so you can have realistic expectations for its outputs. Five years ago, deterministic AI models like CNNs, LSTMs, and most deep neural networks always gave the same prediction for the same input data. Nowadays, Generative AI (GenAI) is mostly being used, and it produces probabilistic output instead. What you gain in creativity and flexibility, you trade off with a wider range of correctness compared to previous models. Thus, GenAI poses higher risks of hallucination, and its outputs should be closely tracked if they’re being used in an app.
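As a rough illustration of that trade-off, the sketch below (assuming the OpenAI Python client; the prompt and allow-list are made up) runs the same prompt several times and flags any output that falls outside an expected set, which is the kind of tracking an app should do before acting on GenAI output.

```python
# A sketch of why GenAI output needs tracking: the same prompt can return
# different answers across calls. Assumes the OpenAI Python client; the model
# name and the allow-list check are illustrative.
from openai import OpenAI

client = OpenAI()
prompt = "Name one capital city in South America. Answer with the city name only."

answers = []
for _ in range(5):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        temperature=1.0,  # higher temperature widens the range of outputs
    )
    answers.append(response.choices[0].message.content.strip())

print(set(answers))  # often more than one distinct answer for the same prompt

# In an app, log every output and validate it against a schema or allow-list
# before acting on it, so hallucinations get caught instead of silently shipped.
VALID = {"Bogotá", "Lima", "Brasília", "Buenos Aires", "Santiago", "Quito",
         "Caracas", "La Paz", "Sucre", "Montevideo", "Asunción", "Georgetown",
         "Paramaribo"}
print("Outputs needing review:", [a for a in answers if a not in VALID])
```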
Since AI models are accessible to almost everyone, through a paid API or open source, how do you build a competitive edge? Especially when OpenAI and large tech companies are quickly applying AI to the very verticals that most startups are trying to tackle. The edge, ever since software was first created, has always been in the proprietary data you connect to the AI to deliver an impactful outcome. Companies are desperately hiring AI engineers and scientists who can help them develop agentic apps specific to their business, running internally within their own corporate environment. And I know this because this is what I currently do for my job.
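For a sense of what connecting proprietary data looks like in practice, here is a minimal retrieval-augmented generation sketch, assuming the OpenAI Python client and numpy; the documents, model names, and helper functions are hypothetical.

```python
# A minimal sketch of the proprietary-data edge: embed internal documents,
# retrieve the most relevant one, and pass it to the model as context
# (retrieval-augmented generation). Assumes the OpenAI Python client and
# numpy; the documents and model names are placeholders.
import numpy as np
from openai import OpenAI

client = OpenAI()

documents = [
    "Refund policy: enterprise customers get a full refund within 30 days.",
    "On-call rotation: the platform team owns incidents for the billing service.",
    "Q3 roadmap: migrate the internal CRM to the new agent-based workflow.",
]

def embed(texts):
    """Return one embedding vector per input text."""
    response = client.embeddings.create(model="text-embedding-3-small", input=texts)
    return np.array([item.embedding for item in response.data])

doc_vectors = embed(documents)

def answer(question: str) -> str:
    query_vector = embed([question])[0]
    # Cosine similarity against every document; the best match becomes context.
    scores = doc_vectors @ query_vector / (
        np.linalg.norm(doc_vectors, axis=1) * np.linalg.norm(query_vector))
    context = documents[int(np.argmax(scores))]
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user",
                   "content": f"Context: {context}\n\nQuestion: {question}"}],
    )
    return response.choices[0].message.content

print(answer("Who handles billing incidents?"))
```

The model itself is a commodity in this sketch; the value comes from the internal documents it is allowed to read, which is exactly the edge competitors without that data can't replicate.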
Don’t worry about getting all the details right on the first try when learning AI. With enough reading and consistent research, you can get a good picture of where AI is going within a month.
Conclusions
“It does not matter how hard you row. It matters which boat you get in.” - Warren Buffett
What you work on is more important than how hard you work. And with all the data, economic trends, and future outlook on AI, it’s very clear why it’s the right boat to be getting aboard.
Sources
- Sløk, Torsten. “AI Bubble Today Is Bigger Than the IT Bubble in the 1990s.” The Daily Spark, Apollo Academy, 16 July 2025.
- Stanford Institute for Human‑Centered AI (HAI). Artificial Intelligence Index Report 2025. 8th ed., Stanford University, Apr. 8 2025.
- Bond Capital. Trends in AI report. https://www.bondcap.com/report/tai
- Lovable blog. https://lovable.dev/blog/agent