On November 30, 2022, OpenAI launched ChatGPT. Within just two months, the AI-driven application had attracted one hundred million users, likely setting a record for the fastest adoption rate of any application to date. This milestone sparked global anticipation, with many expecting AI to disrupt the world imminently.

As 2023 arrived and the pandemic subsided, expectations were high for AI to go mainstream. Yet, despite numerous announcements and the introduction of new models, AI’s impact remained more of a popular topic on YouTube than a reality. In the end, 2023 did not earn a place in history as the year of groundbreaking AI advancements. Fast forward to mid-2024, and public sentiment has shifted. The initial excitement has given way to a sense of disappointment. Are we witnessing what Gartner terms the “trough of disillusionment” in its hype cycle? The answer is far from clear.

It’s important to acknowledge that this narrative isn’t entirely accurate. Some individuals have indeed transformed their lives through generative AI, and several AI-first startups show great promise with a strong customer base. However, in general, generative AI has fallen short of the lofty expectations many had when OpenAI first introduced ChatGPT.

In this article, I am looking to share my vision of what AI will look like and how we can prepare ourselves. It turned out to be quite a long piece, so I thought it would make sense to add a table of contents:

  1. Productivity of one, productivity of many
  2. This turns out to be possible, yet tough
  3. The brilliant failure of ChatGPT
  4. We suck at out-of-the-box thinking
  5. To bundle or unbundle
  6. It’s a data story
  7. A glimpse into the future and AGI
  8. We will go multimodal
  9. Is humanity limiting AI transformation?
  10. To conclude

Productivity of one, productivity of many

When people are asked what they use generative AI tools like ChatGPT, Perplexity, or Gemini for, the top use case is often personal productivity. This makes sense, as personal productivity is the area where individuals can make the most immediate impact. Instead of worrying about the broader team or organization, many choose to bypass the AI-related rules and limitations that are often poorly enforced and use AI as much as they see fit. “Bring Your Own AI” (BYOAI) has become a trend, as everyone is eager to accomplish more with less—or at least, more with just a bit more.

Microsoft’s 2024 Work Trend Index clearly shows that people are increasingly adept at navigating the available AI solutions and are using them, whether CIOs and other executives approve or not. However, the impact of these applications remains limited. Yes, we might use Perplexity instead of Google to find information or turn to ChatGPT to help with university assignments or write amusing poems. But these use cases are typically confined to individual productivity. I strongly believe that we can achieve exponential results if we build AI solutions designed for collective use.

Why are B2B software applications like ERP and CRM systems so powerful? Because they are utilized company-wide, enabling collaboration towards a greater goal. When information is stored in such systems, it allows colleagues to work together on a massive, structured canvas. A sales order entered in Country A updates an MRP calculation for Plant B and triggers a replenishment order for Warehouse C. Collectively, these transactions empower executives to assess business performance and adjust strategy accordingly. Imagine achieving the same level of collaboration with AI. We could work together on this canvas, alongside various AI assistants, each tailored to specific tasks. My AI-driven interactions would be visible to the larger team, enabling them to make better decisions, conduct deeper analyses, and help us do more with less, at scale. That’s the goal we should be striving for.

This turns out to be possible, yet tough

This is precisely where we face the greatest challenge. We’ve discovered some value-adding, time-saving personal use cases, and that’s fine. But when it comes to identifying use cases that accelerate or even transform entire teams, the conversation often falls silent. One of the most frequent phrases I hear in AI-related discussions is, “Show me the use case!” Initially, this question was asked with hope and excitement, but over time, it has taken on a more desperate tone. We’re struggling—significantly. And it’s not just you; it’s a global issue. People everywhere recognize the potential of AI, yet they struggle to realize it.

It is definitely not the case that transformative use cases aren’t around. Take Duolingo, for example, which has stepped up its game and introduced Duolingo Max, a premium subscription tier that uses AI to explain users’ mistakes in language learning tests. That is typically a very custom, user-specific scenario, difficult to encapsulate in standard documentation; good documentation is hard to build and probably even harder for humans to maintain anyhow. In addition, Duolingo Max offers users the opportunity to “call” Duolingo and have conversations in the language they’re learning. A brilliant case, if you ask me. According to the internet, Duolingo has around eight million paid subscribers. Imagine having to build human “conversation teams” to hold conversations with all those paid users: a pretty significant payroll, if you ask me. Through the use of AI, though, this use case all of a sudden becomes feasible.

Or take Cradle, a Dutch startup that uses AI to “design improved variants of target protein sequences”. I have very little knowledge about proteins, except that I need them to power my body. What is interesting, though, is that their roughly 40-person team page shows a combination of specialists in machine learning and deep learning, including former Google and IBM employees. And despite being young and small, they apparently already have a customer base that includes Novonesis, Johnson & Johnson, Grifols and Twist Bioscience. It’s a very particular use case, but a potentially interesting one.

Or take Woebot, an app you can install on your mobile phone that provides in-the-moment mental health support. There’s a catch, though: the app is apparently so good and sophisticated that it requires a referral from your medical specialist or health insurer before you can get access to it. In principle, though, it’s brilliant. Therapists are scarce, not always available, human too, sometimes tired, and so on. Woebot is always available, never tired, doesn’t run on a nine-to-five schedule, will never have a bad hair day or anything similar, and is all of a sudden very scalable. Woebot Health, the company behind the app, is very clear that it is not meant to replace human-to-human consultation.

The above are just three examples, and I am certain there are more (I just chose not to include them in this article). I do feel, though, that some of these cases have a degree of “who would have thought of that…?” about them. They’re not necessarily evolutionary use cases or logical increments on the back of technologies that were already in use for a number of years. To a degree, these feel more like revolutionary use cases: sudden, radical changes that fundamentally alter the existing order. I recently watched a TED talk by Ray Kurzweil, who has been working on AI for the past 60 years, in which he spoke about the ability of AI (not just generative AI) to dramatically increase both the quality and the pace at which the pharmaceutical industry can run advanced simulations in the development of medicines, bringing us much better healthcare, much faster. It is happening. It will happen. It doesn’t come automatically or easily, though. It is tough.

Now why is that? Could it be that “the bigger the tech, the harder it is to grasp”? It certainly seems that way. After all, nobody struggles with the latest Android or iOS updates. Sure, those updates may be incremental these days, and quite a few of the new features will never be adopted by the majority of users anyway. Still, even many of the use cases I’ve seen appear in, for example, the likes of Microsoft Copilot are evolutionary rather than revolutionary. We think level 1, not level 2. We look for more efficiency (doing things right) rather than more effectiveness (doing the right things).

The brilliant failure of ChatGPT

And then, there’s ChatGPT. Again? Why this time? Well, you have to admit, OpenAI did a brilliant job with the launch of ChatGPT. They made something quite powerful available across the world, with a very low barrier to entry, resulting in the rapid adoption curve we witnessed in early 2023. In doing so, they set a high bar for AI applications: “It needs to be amazing. I want to be blown away and instantly reap the benefits.” But that’s easier said than done. And, unfortunately, it’s too late to reset those expectations.

Even OpenAI has struggled to meet the high hopes they initially sparked. Shortly after ChatGPT’s release, they launched GPT-4 and several variants. While these updates made the technology more powerful—proven through complex charts and impressive examples—or made API calls more efficient (which was nice for budgets), they didn’t quite live up to the initial excitement. Then came the announcement of Project Sora, which could potentially raise the bar once again. However, OpenAI was quick to clarify that Project Sora is still very much in beta and not ready for public release.

Now, the world is anxiously awaiting GPT-5. At this point, we don’t know how it will work exactly, how good it will be, when it will be available, or at what price—but none of that matters. We just want GPT-5 because we’re eager to be amazed once more. Silly as it may be, we find ourselves sitting, waiting, wishing (as the Jack Johnson song goes). And while we do, it almost seems like OpenAI is doing the same, waiting for us to fully embrace the technology. Stuck in a status quo.

We suck at out-of-the-box thinking

Let’s face it—we struggle with out-of-the-box thinking, at least in this context. The recurring demand for “show me the use case” reflects this challenge. For some reason, we find it difficult to connect our work, or that of others, to this new technology and rethink what we do daily.

Why is that? First, I believe one limitation lies within our own neural networks—our brains. Our brains are wired to store everything we experience. Each experience forges connections between individual neurons, and when we recall something, we’re essentially tapping into those neural pathways. The more we experience something, the stronger these connections become, making it easier to recall. This is experience. This is expertise. But it’s also narrow-minded thinking—the mindset of “we’ve always done it this way.” So why change?

Second, I feel we’re reluctant to dedicate the necessary time for deep thinking to come up with radical new solutions, partly due to the influence of OpenAI’s launch of ChatGPT. The expectation has become, “Show me the use case!”, preferably with a fully detailed system architecture, information flows, governance mechanisms, and testing scenarios. This is a tall order. It demands deep thinking, so we need to invest in it. Einstein reportedly said, “If I had an hour to solve a problem, I would spend 55 minutes defining the problem and five minutes solving it.”

I’m confident the use cases are out there, across all business domains, from Finance to Supply Chain, and from Marketing to Customer Service. While I’m primarily a marketing professional and hesitate to define use cases for other fields without proper collaboration, I can easily come up with a dozen AI-driven use cases that would benefit marketing teams and, by extension, larger organizations. To do the same in your domain, you need to understand what generative AI is capable of at a fundamental level.

To bundle or unbundle

As we consider what AI looks like and explore how we can potentially put it to use, we need to address the question of whether to bundle or unbundle its capabilities. What does this mean?

When integrating AI into our business, we have two main options: we can either incorporate AI into existing applications (bundling) or develop a new, AI-first application for a specific purpose (unbundling). Both approaches are possible and viable, depending on what you aim to achieve. Let’s take a closer look.

The key differentiator of generative AI is its ability to create new content (text, images, sound, video, or 3D models) that didn’t exist before. It accomplishes this using unstructured data. While I’m not an expert on the architecture and training of OpenAI’s language models, it’s clear that the internet, likely the largest and most unstructured dataset we have, is a significant source that ChatGPT relies on.

Let’s focus on text, the primary mode of communication in business applications. AI can generate various types of text, offering a wide range of possibilities for enhancing workflows and processes.

For example, you could ask AI to analyze your cash flow for a specific period and rate it on a scale from 1 to 10. In this case, generative AI simply provides a single digit. However, you can take it a step further by asking AI to explain why it scored your cash flow a 7 for that period. It would likely respond with detailed paragraphs explaining its reasoning.
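As a sketch of what such an interaction could look like programmatically, here is a minimal Python helper that assembles a cash-flow prompt of this kind. The function name, entry fields, and wording are my own assumptions; the resulting string would be sent to whichever LLM API your organisation uses.

```python
def build_cashflow_prompt(period: str, entries: list) -> str:
    """Assemble a prompt asking an LLM for a 1-10 cash flow rating plus reasoning."""
    # One line per cash flow entry, with a signed, comma-grouped amount.
    lines = "\n".join(
        f"{e['date']}: {e['description']} ({e['amount']:+,.2f})" for e in entries
    )
    return (
        f"You are a financial analyst. Below is our cash flow for {period}.\n\n"
        f"{lines}\n\n"
        "First, rate this cash flow on a scale from 1 to 10 (a single digit).\n"
        "Then explain, in a few paragraphs, why you gave that score."
    )

prompt = build_cashflow_prompt(
    "Q2 2024",
    [
        {"date": "2024-04-03", "description": "Customer invoice paid", "amount": 120000.0},
        {"date": "2024-05-17", "description": "Supplier payment", "amount": -80000.0},
    ],
)
# `prompt` would then be submitted to the LLM of your choice.
```

The point of the sketch is that the “rate it, then explain the rating” pattern is just two instructions in one prompt; the hard part is piping in your actual financial data.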

In marketing, you might ask AI to write your next blog post—although this isn’t always recommended, for other reasons. Alternatively, you can ask it to review an existing blog you’ve written, improving it in terms of style, grammar, and vocabulary. You could even request an analysis of the blog against Robert Cialdini’s Seven Principles of Persuasion, with explanations for each ranking and suggestions for improvement. Additionally, you could run this analysis across all your published articles or ask the AI to scan all marketing assets within your organization, identifying those that score four out of five on all seven principles while focusing on a specific topic.

If you’re in manufacturing, optimizing efficiencies at scale is likely a top business objective, supported by data from various systems. But are you confident you know everything? According to whose model are you optimizing your strategy? You could hand over all that data to generative AI and ask it to propose at least three different strategies that could yield better results than your current approach. Once it does, you can request detailed explanations, including the relevant numbers that would impact the final output.

As a sales manager, you might hold weekly one-on-one meetings with your direct reports. If I were in your shoes, I’d want to standardize these conversations to build consistent habits over time.

Now, to prepare for such a call, I would probably want the following questions answered:

  • How is this seller performing compared to others in the business? Consider factors such as the number of opportunities, average deal size, win rates, proactivity, and stakeholder influence.
  • Identify the biggest opportunities for this seller. For each one, provide the current status, potential risks, the next action currently planned, and alternative actions that could help advance the opportunity.
  • Identify the three quietest opportunities. Explain in detail why they have gone quiet, review the process followed for each opportunity so far, and determine if any steps were missed. Finally, propose actions to reactivate these opportunities based on their current phase.
  • Analyze the seller’s third-party communications over the last six months and suggest improvements. Then, compare these communications with those from the previous six months to identify any improvements in how the seller interacts with prospects, customers, and internal stakeholders.

Now, imagine yourself in the seller’s position, preparing for your weekly one-on-one with your manager. Why not equip the seller with the same powerful tools? This way, instead of focusing on minor issues, both parties can shift into a collaborative mode, working together to improve the seller’s performance and drive better results for the business. For example:

  • Show me my three biggest opportunities. Highlight potential risks and provide well-reasoned suggestions to mitigate those risks and advance the opportunities.
  • Identify my three quietest opportunities. For each, explain why it has gone quiet and offer recommendations on how to reactivate engagement with the account to move the opportunity forward.

Do you see it? Both sides of the table are raising nearly identical questions. So why not address them upfront, making them visible to everyone so we can have a more productive conversation? While there may still be disagreements on conclusions and suggestions, this approach brings multiple, non-threatening perspectives into the discussion. Those uninterested in such collaboration may have either stopped learning or are simply arrogant, believing that a single perspective holds more knowledge than the sum of many.
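To make this concrete, here is a minimal Python sketch of how both sets of questions could be merged into one shared briefing prompt. The function, the opportunity fields, and the question wording are hypothetical, chosen only to illustrate the idea of a single briefing that both parties see.

```python
def build_one_on_one_prompt(seller: str, opportunities: list) -> str:
    """Build one shared briefing prompt covering both the manager's and the seller's questions."""
    # Flatten the CRM opportunity records into readable lines.
    opp_lines = "\n".join(
        f"- {o['name']}: stage={o['stage']}, value={o['value']}, "
        f"days_since_contact={o['days_since_contact']}"
        for o in opportunities
    )
    # The same questions serve both sides of the table.
    questions = [
        "Rank the three biggest opportunities; list risks and next best actions for each.",
        "Identify the three quietest opportunities; explain why they went quiet and how to reactivate them.",
        "Compare this seller's activity with the team average and suggest concrete improvements.",
    ]
    numbered = "\n".join(f"{i}. {q}" for i, q in enumerate(questions, 1))
    return (
        f"Prepare a shared 1:1 briefing for seller {seller}.\n\n"
        f"Open opportunities:\n{opp_lines}\n\n"
        f"Answer the following for both manager and seller:\n{numbered}"
    )
```

Because manager and seller work from the same generated briefing, the meeting can start at the conclusions rather than at the data gathering.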

In the scenarios mentioned, generative AI will produce text, which can be delivered in countless formats—emails, web pages, fields in web applications, PDFs, PowerPoint presentations, and more. The format doesn’t matter, which both increases the opportunity and complicates the task. You’ll need to think of new, innovative ways to present information so that people can collaborate and build upon it.

Integrating new technology into new applications is quicker and easier than adding generative AI to existing ones. However, building dedicated AI-first applications risks isolating them from your broader tech stack, limiting their access to relevant data and preventing them from reaching their full potential. On the other hand, adding generative AI features, functions, buttons, and output fields to existing applications is more cumbersome, costly, and challenging, and while you may get pretty far, you may never reach the point you envisioned, resulting in poor user adoption and, ultimately, a disappointing ROI. Yet, in the end, this approach may yield better results due to the integrated nature of your tech stack, even if it aligns with the “we’ve always done it this way” mindset. It’s a significant dilemma.


It’s a data story

In the last few paragraphs, I frequently used the word “your”—your cash flow, your marketing assets, your sales opportunities. This brings us to an essential topic: your data.

Large Language Models (LLMs) are quite intelligent, with impressive reasoning capabilities. However, they still have significant limitations. While they can generate content that didn’t exist before and even create something genuinely new, their scope is confined by the data they were trained on. Training data is a complex topic in itself, but let’s focus on the present.

LLMs become truly powerful when their default knowledge is enriched with situational knowledge—your knowledge. Your data. Your general ledger, your sales opportunities, your production planning data, your email conversations, your HR files (which could finally help mitigate biased end-of-year reviews). Anything you can think of, securely stored and properly governed, yet accessible.

In the past, we used terms like intranet, internet, and extranet to describe different levels of web environments that individuals could connect to. A similar concept applies to generative AI and data models. By default, LLMs have access to internet data sources. When you link your own data sources to the LLM, you’re essentially merging your intranet with the internet. Additionally, you may want to connect fenced-off third-party databases, like ZoomInfo or Dun & Bradstreet. The use cases that combine these three types of data sources—internal, external, and third-party—will be the most powerful and transformative.
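The “merging your intranet with the internet” idea is, at its core, retrieval: pick the internal documents relevant to a question and hand them to the model as context. Here is a deliberately naive Python sketch that uses word overlap instead of a real embedding-based search; all names are my own assumptions.

```python
def retrieve(question: str, documents: dict, k: int = 2) -> list:
    """Rank internal documents by naive word overlap with the question."""
    q_words = set(question.lower().split())
    ranked = sorted(
        documents.items(),
        key=lambda item: len(q_words & set(item[1].lower().split())),
        reverse=True,
    )
    return [name for name, _ in ranked[:k]]

def build_grounded_prompt(question: str, documents: dict) -> str:
    """Stuff the top-ranked internal documents into the prompt as context."""
    context = "\n\n".join(documents[name] for name in retrieve(question, documents))
    return (
        "Use only the context below to answer.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

A production setup would replace the word-overlap scoring with vector search over embeddings and add access controls, but the shape of the pipeline (retrieve, then ground the prompt) stays the same.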

A glimpse into the future and AGI

In July 2024, Bloomberg revealed OpenAI’s internal definitions of the levels the company believes are required to get from where we are today to artificial general intelligence (AGI). They are interesting to look into, for a number of reasons. Here are the levels:

  • Chatbots: AI with conversational language;
  • Reasoners: human-level problem solving;
  • Agents: systems that can take actions;
  • Innovators: AI that can aid in invention;
  • Organizations: AI that can do the work of an organization.

According to Bloomberg and OpenAI, everything the company has released so far plays squarely in the level 1 space. In other words: OpenAI acknowledges that we are still at the very beginning of generative AI, let alone AGI. On the other hand, the company has also shared that it is getting fairly close to releasing level 2 solutions: reasoners. On the back of that, many parallels have been drawn with Daniel Kahneman’s System 1 and System 2 modes of thinking. Back to generative AI: one could imagine a use case where, instead of the instant responses Copilot and ChatGPT give us today, we instruct the machine to spend a longer period of time, with a more balanced set of resources, executing deep work on bigger challenges. Obviously, you’d need a well-connected IT infrastructure or data estate to give the machine the necessary tools and fuel, but imagine what would be possible by getting to level 2 alone, let alone the subsequent levels.

The short-term future of generative AI will be a blended one, where professionals have access to a range of virtual assistants or agents tailored to specific needs. These specialized assistants will connect to particular data sources, answer targeted questions, and handle loosely defined use cases. Unlike today’s AI interactions, these purpose-built agents will go beyond simple conversation, taking proactive actions based on the dialogue with the user. In two to three years, I doubt we’ll still be manually updating CRM systems. Instead, ERP, CRM, and similar systems will become more proactive, updating in the background as new information flows in from a variety of data sources. We’ll be notified of changes, allowing us to intervene if necessary, but otherwise, the process will be seamless.
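The “update in the background, notify, let humans intervene” pattern described above can be sketched as a propose-then-review loop. The event and record fields below are hypothetical placeholders for whatever your CRM actually tracks.

```python
def propose_updates(event: dict, record: dict) -> dict:
    """Compare an inbound event (e.g. a transcribed call) against the current CRM
    record and propose field changes for human review, instead of applying them silently."""
    proposals = {}
    for field in ("stage", "next_step", "close_date"):
        # Only propose a change when the event carries a value that differs.
        if event.get(field) and event[field] != record.get(field):
            proposals[field] = event[field]
    return proposals

# A notification layer would surface `proposals` to the user, who accepts or rejects them;
# accepted proposals are then written back to the CRM record.
```

The key design choice is that the AI proposes and the human disposes: the system stays proactive without silently rewriting the record of truth.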

On top of that, expect AI concierge services to emerge, making it unnecessary to even think about which model, assistant, or agent to use. You’ll simply “tell” what you are looking to accomplish through a single interface, and the concierge, equipped with deep knowledge and situational awareness (aka: context), will automatically select the best agent or combination of agents and models to handle the task.
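A concierge of this kind is, in essence, a router. As a toy illustration, here is a keyword-based dispatcher in Python; the agent names and keyword sets are invented for the example, and a real concierge would use an LLM classifier with context rather than keyword overlap.

```python
# Hypothetical registry of specialized agents and the vocabulary they cover.
AGENTS = {
    "finance_agent": {"cash", "invoice", "ledger", "budget"},
    "sales_agent": {"opportunity", "pipeline", "crm", "deal"},
    "marketing_agent": {"campaign", "blog", "persuasion", "asset"},
}

def route(request: str) -> str:
    """Send the request to the agent whose keyword set overlaps it most
    (the first registered agent wins ties)."""
    words = set(request.lower().split())
    return max(AGENTS, key=lambda agent: len(AGENTS[agent] & words))
```

The user only ever talks to `route`; which agent, model, or combination does the work is an implementation detail hidden behind the single interface.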

We will go multimodal

Remember how strange it once felt to talk into a brick-like device held to your ear, especially in crowded places? Even today, it’s still a bit surprising when someone suddenly starts talking into thin air, only to realize they’re using AirPods to have a conversation. The average person types around 50 words per minute, but we can speak at about 140 words per minute, nearly three times faster. As we interact more with generative AI, the limitations of typing will become apparent, and we’ll be increasingly tempted to use voice dictation as the primary method of input. I already see this taking place within our organisation: more and more meetings and calls are automatically transcribed, building a huge knowledge base in which every additional transcription adds further value.

This brings us into the age of multimodal generative AI. Multimodality refers to the ability of AI models to process and generate data across various types of modalities—text, images, audio, and video—simultaneously and almost instantaneously. This evolution will turn our interactions into seamless conversations with a virtual companion who is always present, never tired, never distracted, consistently friendly, highly intelligent, and incredibly knowledgeable.

Microsoft, at its May 2024 Build conference, demonstrated a very early version of a Windows-based Copilot application that:

  • Continuously monitors what is shown on your screen;
  • Understands what is happening on the screen;
  • Provides real-time feedback, almost as if you were having a conversation with a personal assistant sitting right next to you.

To me, that demo was a glimpse of where things are heading. This is what genAI applications will look like. Of course, there will be AI features within many of the applications we use today. And there will be new AI-first applications that we are not aware of yet. All of those will be trivial though in comparison to where we’re really going: a world where software (with genAI) integrates seamlessly into our daily lives, without us even noticing it.

Can you imagine adding Copilot+++, or whatever Microsoft Marketing will come up with by then, to a conversation of two, for it to effectively become a conversation of three? The two initial participants could engage with Copilot+++ together, at the same time, through screen sharing and audio input, seeing Copilot+++ do its magic in real time: adding knowledge and expertise, answering questions, and providing actionable insights to its pilots, effectively enhancing both the quality of the conversation and the time to resolution.

Is humanity limiting AI transformation?

Microsoft further concretized its transition towards everything AI by announcing Recall, an AI-powered Windows 11 feature that would take a screenshot of whatever is shown on your display every five seconds. It was received with enthusiasm, and with scepticism. It looked amazing, but at the same time, people weren’t too sure they wanted a screenshot of their screen taken every five seconds. For reasons that I will leave up to the reader!

I remember a similar response a number of years ago, when Microsoft made a series of announcements in the gaming space. As part of these new releases, Microsoft required its Xbox consoles to be connected to the internet 24/7. The result? Enthusiasm about the new titles, and scepticism about the connectivity requirement. Another day, another case, and the exact same behaviour.

Which brings me back to my earlier point about us being typically very bad at out-of-the-box thinking. We make a lot of new things, but they often look a lot like the old things. Look at the first cars: just a horse and carriage, without the horse. Look at the past five iPhones, or smartphones as a whole. New year, same phone.

To conclude

AI is here. It’s the next big thing in IT and will not go away. It may look different a few years from now, as it is frankly still in relatively early stages, but it is here to stay. You also don’t really have a choice about whether to embrace and adopt AI, as a person or an organisation, because somebody (or several somebodies) certainly will, and there will be transformational cases that leave others significantly behind.

AI comes in many forms, shapes and sizes. This piece of text may be AI-generated. You will have AI-powered buttons on the physical devices at your disposal and in the apps you use day in and day out. It will be difficult to distinguish what is generative AI and what is not. However, a few years from now, we will care much less about that than we do today, as the quality and relevance of generative AI applications continue to increase.

At the same time, let’s face it: building AI solutions and coming up with AI use cases is tough. Shots fired from the hip will likely hit nothing but laughter and frustration. It requires focus, and it requires upfront inspiration and education. It requires the right environment, the right attitude, and a methodology geared towards use case identification.

When done correctly, I am convinced people will come up with relevant use cases. Don’t expect revolutions as your baseline, even though you should be aiming for the stars. Rather, expect evolution. Try something new, something small. Learn. Adapt. Build feedback loops. Ask yourself, “If it can do this… could it also do that?”, while imagining the art of the possible. And scale your AI strategy on the back of those early results.


William

William is a Dutch millennial with a clear passion for two things: marketing and technology. As a marketing lead at a multinational IT company in the Microsoft ecosystem, he is able to bring these two passions together: plunging into the exciting new stuff on the technology front and transforming it into compelling stories that make people go “Oh. Right. Hadn’t looked at things in that way yet!”.