The Future of Internet Interaction: My Take on What's Coming

This article is my opinion on how we interact with internet resources, websites, and SEO, and how the AI buzz is disrupting all of it as we speak. I don't claim to know how things are going to play out, and I think anyone saying otherwise is either a prophet or a bullshitter. Things are changing at such a fast pace lately that it's hard for anyone to see clearly where this is going.

The only thing I think we know for sure is that our interaction with internet resources will change in a profound way, and it will happen very soon.

A brief history of the internet, websites, and SEO

The internet started as a relatively basic thing—it was a way for different computers to communicate with each other over a public network. User interaction with the internet happened through user interfaces on our computers, allowing us to write and send emails, play multiplayer games, and browse websites. The only prerequisite for all of these interactions was to know the address of another computer, and you'd be able to interact with it in real time.

The internet blew up, and more and more websites started appearing across all kinds of areas. This created a necessity to easily search for relevant websites, and a lot of search engines began popping up—we all know which one won. Google became, for a lot of people, a synonym for the internet.

Search engines were a way to find relevant websites by typing a natural language query and getting a list of websites that matched it, ordered by relevance. Humans being lazy, we would mostly click and browse just a couple of the top results. This naturally led to website owners wanting to appear at the top of Google search results for relevant queries to beat competitors, which gave rise to search engine optimization, or SEO. Website owners started applying techniques while building their websites and publishing content in order to become more relevant for specific Google search queries, so that users typing "best wedding location around Munich" would land on their website and hopefully become customers.

SEA (Search Engine Advertising) followed, basically a way to pay Google to appear at the top of relevant searches, and companies started pouring millions into Google so their websites could be seen.

This is where we are now, but things are starting to change—and there’s a very clear reason for that.

How do LLMs come into play?

It all started with ChatGPT—at least for normal, everyday people. Scientists, large tech companies, and nerds were playing with large language models and related technologies long before that, but it was ChatGPT that brought the technology closer to everyday users. It was like magic. Suddenly, you could speak with a computer, ask questions, and get answers as if you were talking to an expert—all without browsing through hundreds of Google searches.

It just felt too easy, like everyone got their own extremely intelligent and knowledgeable friend for free. Except, they’re not very intelligent—by design. In relatively simple and broad terms, LLMs are just computer programs that are very good at giving you the most likely next word for a given input. They don't really know what they're outputting, but since they’re trained on enormous amounts of internet data, it can feel like they do. That’s why they sometimes confidently say completely incorrect things and call it a day.

They are trained on vast amounts of data—but still not all data—and specifically, they don’t know anything beyond the day they were last trained. To mitigate this issue, ChatGPT and other providers introduced the capability for their tools to search the internet. This way, LLMs can get actual, relevant context to work with, which extends their capabilities immensely.

With this feature, you can use ChatGPT, Perplexity, and similar tools to search for information on the internet much like you would with Google—except you get exactly what you asked for, along with source references, instead of a list of websites you have to read manually. A lot of people are already using these tools for searches, and it’s taking a piece away from Google. Although it's still a relatively small piece, there are predictions that this will change in the near future—and if it does, it could significantly alter how we use the internet.

Side note for less technical people: It’s important to differentiate between ChatGPT and similar websites, and LLMs themselves. ChatGPT is an application that uses an LLM under the hood, but they are not the same. LLMs by themselves cannot search the internet—they can only generate text. It’s the task of the application (like ChatGPT) to search the internet and feed that result back into the LLM to generate a contextual response.
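
To make that split concrete, here is a minimal sketch of it. `search_web` and `ask_llm` are stand-ins for a real search API and a real LLM API (both are stubbed here, not any particular product's interface); the point is simply that the application does the searching and only hands the results to the model as text.

```python
# A minimal sketch of the split described above: the application performs the
# web search and the LLM only turns the retrieved text into an answer.
# `search_web` and `ask_llm` are stubs, not real APIs.

def search_web(query: str) -> list[str]:
    # Placeholder for a call to a real search engine API.
    return [
        "Result 1: ...snippet from some website...",
        "Result 2: ...snippet from another website...",
    ]

def ask_llm(prompt: str) -> str:
    # Placeholder for a call to a real LLM (hosted or local).
    return f"(model answer generated from a {len(prompt)}-character prompt)"

def answer_with_search(question: str) -> str:
    snippets = search_web(question)  # the application searches, not the LLM
    context = "\n".join(snippets)
    prompt = (
        "Answer the question using only the sources below and cite them.\n\n"
        f"Sources:\n{context}\n\nQuestion: {question}"
    )
    return ask_llm(prompt)  # the LLM only generates text

print(answer_with_search("best wedding location around Munich"))
```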

LLMs can do much more than just information search

Searching for relevant information is just one of the things we use the internet for. Email, real-time chats, website browsing, multiplayer gaming, and online shopping are just some of the many use cases. LLMs are capable of helping with these things too—well, kind of.

As already mentioned, an LLM by itself can’t do anything beyond generating text. Applications that wrap LLMs and give them the ability to trigger real-world actions are called agents. Autonomous agents are basically LLM-based applications that can decide when to perform an action and then trigger its execution. ChatGPT’s browsing feature is one example of such an agent: you type a question, the search agent browses the web, gathers relevant information, and gives you an answer, quoting its sources.
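
For intuition, here is a toy version of that loop. The `fake_llm` and `search_web` functions are stubs I made up for the example, not any provider's API: the model proposes an action, the wrapping application executes it and feeds the result back, and the cycle repeats until the model produces a final answer.

```python
# A toy agent loop: the "model" proposes an action, the wrapping application
# executes it and appends the result, until the model returns a final answer.
# `fake_llm` and `search_web` are stubs, not any real provider's API.

def fake_llm(messages: list[dict]) -> dict:
    # Stand-in for a real model call with tool definitions. Here it simply
    # asks for one web search, then answers.
    if not any(m["role"] == "tool" for m in messages):
        return {"action": "search_web", "input": messages[-1]["content"]}
    return {"action": "final_answer", "input": "Here is what I found, with sources."}

def search_web(query: str) -> str:
    # Placeholder for a real search integration.
    return f"Top results for '{query}' ..."

def run_agent(user_question: str) -> str:
    messages = [{"role": "user", "content": user_question}]
    while True:
        decision = fake_llm(messages)            # the model decides what to do
        if decision["action"] == "final_answer":
            return decision["input"]
        if decision["action"] == "search_web":   # the application acts on it
            result = search_web(decision["input"])
            messages.append({"role": "tool", "content": result})

print(run_agent("What changed in SEO this year?"))
```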

In the same way, it’s possible (and already happening) to develop agents with extended capabilities like sending emails, making purchases, or pretty much anything a human could do with access to a computer and the internet. There are already a lot of agents being developed and used, but somehow they’re still not everywhere.

Why aren’t agents everywhere already?

The problem with agents today is that enhancing them with new capabilities requires developers to implement very specific logic to connect with each application exposing those functionalities.

Let’s take a simple example: building an autonomous agent that can search for a product, compare prices and quality, and purchase the best one.

Here’s what the developer would need to do (a rough sketch of this glue code follows the list):

  • Connect to a product search API (e.g., Google Shopping)
  • Perform continuous searches and parse results
  • Depending on which site has the best product, navigate to that specific website
  • Programmatically "click through" the purchase form
  • Input payment and shipping data correctly
  • Confirm the purchase

Now repeat that for every online store—with different UIs, form structures, and APIs. It’s not just clunky—it’s a nightmare to maintain and very error-prone.

An alternative is to use the backend APIs that power these UIs. This is somewhat better, but there’s still no universal standard—each site has different APIs. While many of them use REST, REST itself is just an architectural style. It’s not a strict protocol.
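
To illustrate, here are two hypothetical "REST" APIs returning the same piece of information in incompatible shapes. The endpoints and field names are made up, but the shape problem is exactly what agent developers run into.

```python
# Two made-up shops, both "RESTful", both returning a product price, and an
# agent still needs separate parsing logic for each of them.

def parse_price_store_a(response_json: dict) -> float:
    # Store A: GET /api/v2/products?q=... returns {"items": [{"price_cents": 4999}]}
    return response_json["items"][0]["price_cents"] / 100

def parse_price_store_b(response_json: dict) -> float:
    # Store B: GET /products/search?term=... returns {"results": [{"pricing": {"amount": "49.99"}}]}
    return float(response_json["results"][0]["pricing"]["amount"])

print(parse_price_store_a({"items": [{"price_cents": 4999}]}))               # 49.99
print(parse_price_store_b({"results": [{"pricing": {"amount": "49.99"}}]}))  # 49.99
```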

This fragmented landscape limits AI agent adoption. Developers spend more time building fragile integrations than focusing on actual agent behavior.

What we need is a shared communication protocol that apps can follow to make themselves accessible to agents. That way, developers can build once and interact with many services in a predictable way. The benefits are clear—and it’s likely just a matter of time before this becomes reality.

What is MCP and how likely is it to become the agentic communication standard?

Standards are agreed-upon conventions that make life easier and enable progress. Think SI units, which allow everyone to speak the same measurement language (well, almost everyone). Or HTTP, which defines how applications talk over the internet.

There’s still no official standard for agentic communication, so agents today rely on ad-hoc integrations and clunky workarounds. The Model Context Protocol (MCP) aims to change that.

In short, MCP defines a standardized way for applications to expose their functionalities to AI agents, so that agents and services can understand each other out of the box, without needing custom integrations for every new tool or website. It’s being developed by Anthropic and is designed to make this kind of interoperability feasible.
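
As a simplified taste of what that looks like, here is a minimal MCP server written with the official Python SDK (the `mcp` package and its FastMCP helper). The server name, the tool, and the catalog are invented for the example, and the exact SDK surface may differ between versions, but the idea is that a shop describes a typed, named capability that any MCP-capable agent can call without custom glue.

```python
# A minimal sketch of an MCP server using the official Python SDK (the `mcp`
# package). The server name, tool, and catalog below are invented for the
# example, and SDK details may vary between versions.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("product-catalog")  # arbitrary server name

@mcp.tool()
def search_products(query: str, max_price: float | None = None) -> list[dict]:
    """Search the (hypothetical) product catalog and return matching items."""
    catalog = [
        {"name": "Espresso machine", "price": 199.0},
        {"name": "Moka pot", "price": 29.0},
    ]
    results = [p for p in catalog if query.lower() in p["name"].lower()]
    if max_price is not None:
        results = [p for p in results if p["price"] <= max_price]
    return results

if __name__ == "__main__":
    # Serves over stdio by default, so an MCP-capable agent can connect to it.
    mcp.run()
```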

Anthropic plans to support this ecosystem with two key components:

  • Centralized registry – a directory of MCP-compliant applications, allowing agents to discover them easily.
  • Well-known endpoints – machine-readable descriptions that applications expose to describe the capabilities they offer to agents.

These two things are important not only because they streamline communication, but because they allow agents to discover capabilities that didn’t exist at the time they were built, effectively enabling self-evolving agents.
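
Here is a purely hypothetical sketch of what that discovery step could look like from the agent’s side. Neither the well-known path nor the JSON fields are taken from the actual specification, since these pieces are still taking shape; the sketch only illustrates the idea of finding capabilities at runtime.

```python
# Entirely hypothetical: the well-known path and manifest fields are invented
# to illustrate runtime discovery, not copied from the MCP specification.
import json
from urllib.request import urlopen

def discover_capabilities(base_url: str) -> list[str]:
    # Hypothetical discovery document location.
    with urlopen(f"{base_url}/.well-known/mcp.json") as response:
        manifest = json.load(response)
    # Assume the manifest lists tools by name; an agent could decide at
    # runtime whether any of them help with the task at hand.
    return [tool["name"] for tool in manifest.get("tools", [])]

if __name__ == "__main__":
    # Made-up URL; this call is only meant to show the shape of the idea.
    print(discover_capabilities("https://shop.example.com"))
```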

Whether MCP becomes the standard remains to be seen. OpenAI or others might propose competing protocols. But MCP has momentum.

What does all of this mean for businesses?

If the predictions come true and LLM-based search takes over part—or all—of current search volume, businesses will need to adapt to remain visible. Right now, LLM-based search still relies on classic search engines under the hood to gather candidate content, but that’s only because we don’t yet have something better.

It’s just a matter of time before a new, LLM-friendly standard takes off. Once that happens, agent-based interaction could become the norm. Businesses that rely heavily on SEO to generate leads will need to invest to become LLM-friendly.

No one can say for sure how far-reaching these changes will be, but we might be looking at a complete paradigm shift in how users interact with online content.

