Imagine a world where the internet, once an open landscape of diverse voices, is increasingly shaped by a handful of powerful digital giants. These companies are rapidly weaving generative AI into the fabric of their core services, transforming how we access information and interact online. This integration promises convenience and efficiency, but it also raises critical questions about who controls the flow of information, what happens to the diverse businesses that rely on these platforms, and whether innovation will flourish or be stifled in this new AI-powered landscape. The implications of this shift are far-reaching, impacting not only the digital economy but also the very nature of online experience.
What are the practical implementations of generative AI within core platform services by major digital companies
Major digital companies, often referred to as “gatekeepers,” have been actively embedding generative AI (GenAI) into their core platform services (CPS) in recent years. This integration involves providing access to powerful GenAI systems that can recast online content or assist with specific information tasks. The ultimate goal for many of these companies is to create a central AI assistant, a “one-stop-shop” platform, that combines various tasks currently handled by multiple individual platforms. This ambition is driven by the potential for transformative innovations and productivity boosts that GenAI can offer when seamlessly integrated into CPS.
The practical implementations of GenAI within CPS are diverse and rapidly evolving. For example, Google introduced “AI Overviews” within its flagship search engine, which summarize text from multiple websites to directly answer user queries. It also integrates GenAI into its operating system Android, its web browser Chrome, its voice assistant, and its video sharing service YouTube. Amazon updated its voice assistant, Alexa, with GenAI, enabling users to discover new content and products. Amazon also offers sellers GenAI tools to create product listings and corresponding ads. Microsoft has integrated its AI-powered chat assistant, called “Copilot,” into its browser Edge, its search engine Bing, and its productivity software, Microsoft 365. Meta AI on Facebook, Instagram, WhatsApp, and Messenger allows users to access real-time information from across the web without leaving Meta’s apps. ByteDance enables both end users and advertisers to generate AI images, video, and audio content on TikTok. And Apple is integrating GenAI features into its operating systems, including iOS, which powers the iPhone.
A common feature across these integrations is the direct access end users gain to powerful GenAI modules through the ubiquitous CPSs of digital gatekeepers. Instead of merely offering GenAI tools as standalone services, these companies incorporate them directly into their core services. This strengthens existing business models and creates new opportunities for user engagement within these ecosystems. The ambition to provide an AI-powered personal support service capable of combining several intermediation and navigation functions, sometimes referred to as an “AI agent” or “AI assistant,” is also common. Such a “Super-CPS” would perform a series of complex tasks independently, aimed at achieving an overall (typically commercial) goal by generating monetizable AI answers to user queries.
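To make the shape of such a pipeline concrete, the sketch below compresses the described flow into a few lines of Python. It is purely illustrative: the functions, the content, and the sponsorship rule are hypothetical placeholders, not any platform’s actual implementation.

```python
# Illustrative sketch of a "Super-CPS" answer pipeline, assuming a hypothetical
# platform. Content originally published by third-party businesses becomes raw
# input; the monetization step is chosen by the platform, not the user.

BUSINESS_CONTENT = {  # invented examples of third-party business content
    "travel": ["Hotel Aurora: rooms from $120", "CityGuide: top 10 sights"],
    "shopping": ["ShopX: waterproof jacket, $89"],
}

SPONSORS = {"travel": "Hotel Aurora"}  # the platform's preferred advertiser

def classify_intent(query: str) -> str:
    # Stand-in for intent detection (shopping, news, travel, ...).
    return "travel" if "trip" in query else "shopping"

def generate_answer(query: str, sources: list[str]) -> str:
    # Stand-in for a GenAI model that recasts business content into one answer.
    return " / ".join(sources)

def answer_query(query: str) -> str:
    intent = classify_intent(query)
    sources = BUSINESS_CONTENT[intent]          # businesses supply the raw material
    answer = generate_answer(query, sources)
    sponsor = SPONSORS.get(intent)
    if sponsor:                                 # monetizable nudge appended by
        answer += f" [Recommended: {sponsor}]"  # the platform itself
    return answer                               # the user never leaves the CPS

print(answer_query("plan a weekend trip"))
```

Even in this toy form, the structural point is visible: the businesses that supplied the content appear only as source material inside an answer the platform composes and monetizes.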
What legal and economic issues arise from the integration of generative AI into core platform services
The integration of generative AI (GenAI) into core platform services (CPS) by digital gatekeepers raises significant legal and economic concerns. Initially, CPSs attracted users with the promise of open and low-cost access, but dominance has led to exploitation, compelling users to share data and businesses to pay for access. GenAI exacerbates this by enabling platforms to directly satisfy user demand with content repurposed from businesses, reducing the value business users contribute and threatening to disintermediate them. Legal issues arise from the potential misuse of content provided by businesses for GenAI training and output, particularly concerning intellectual property rights and the fairness of data usage terms. The economic impact includes a shift from a diverse ecosystem of businesses to reliance on a few dominant platforms controlling access to customers.
Centralization and Manipulation
Another critical issue is the increased centralization of information and the associated risk of user manipulation. While CPSs previously allowed users to draw their own conclusions from multiple sources, GenAI tools provide centralized conclusions, influencing decision-making processes significantly. Furthermore, the personalization capabilities of GenAI can lead to individualized manipulation at unprecedented levels. Platforms can use GenAI to create customized content, steering users towards preferred advertisers and products without transparency, resulting in potential consumer harm and market distortion. This raises questions about transparency, accountability, and the ethical implications of deploying opaque AI systems that shape user behavior.
Prisoner’s Dilemma and Preferred Partnerships
Economically, businesses find themselves trapped in a prisoner’s dilemma: they must allow platforms to use their data for GenAI to maintain prominence and access to customers, even though this ultimately harms their long-term viability. This dynamic is often reinforced by “preferred partnerships,” where selected businesses gain advantages at the expense of others, further distorting competition and creating an uneven playing field. The prospect of preferred status induces business users to accept terms for the use of their content that they would not accept under normal circumstances. Legally, this raises concerns about exploitative platform discrimination, unfair competition, and the need for regulatory interventions to ensure a more equitable distribution of economic benefits and control over creative outputs.
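The logic of this dilemma can be made explicit with a stylized payoff matrix. The numbers below are invented solely to reproduce the structure just described; they are not empirical estimates.

```python
# A minimal prisoner's-dilemma sketch for two business users deciding whether
# to let a gatekeeper use their content for GenAI. All payoffs are hypothetical.

# Payoffs are (row player, column player); higher is better.
PAYOFFS = {
    ("share", "share"):       (1, 1),  # both share: prominence gains cancel out
                                       # and the AI assistant disintermediates both
    ("share", "withhold"):    (4, 0),  # the sharer gains prominence; the holdout
    ("withhold", "share"):    (0, 4),  # becomes invisible on the platform
    ("withhold", "withhold"): (3, 3),  # collectively best: bargaining power kept
}

def best_response(opponent: str) -> str:
    """Individually rational choice against a fixed opponent move."""
    return max(("share", "withhold"), key=lambda mine: PAYOFFS[(mine, opponent)][0])

# "share" dominates: it is the best response whatever the other business does...
assert best_response("share") == "share"
assert best_response("withhold") == "share"

# ...yet mutual sharing (1, 1) leaves both worse off than mutual withholding (3, 3).
print(PAYOFFS[("share", "share")], "versus", PAYOFFS[("withhold", "withhold")])
```

Preferred partnerships deepen the trap by raising the payoff of unilateral sharing for the chosen few, making collective withholding even harder to sustain.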
How does the embedding of generative AI within core platform services worsen problems related to economic competition
The integration of generative AI (GenAI) into core platform services (CPS) by digital gatekeepers exacerbates existing competition concerns, primarily due to a shift from intermediation to disintermediation. Historically, CPSs connected end-users and business users, promising open access. However, as these platforms achieved dominance, they began to condition access, extracting value by demanding more user data and content from businesses while charging higher fees for reaching end-users. GenAI amplifies this exploitation by enabling gatekeepers to substitute the value provided by business users. By scraping and repurposing business content, CPSs can directly satisfy end-user demand, reducing the need for users to engage with external websites and services. This shift threatens businesses that rely on direct customer relationships, as they risk being relegated to mere data providers for AI assistants that control customer interactions and monetize those interactions directly.
As GenAI empowers CPSs to deliver synthesized content directly, the open and decentralized nature of the web is undermined, further centralizing power and diminishing user sovereignty. Previously, users could discover diverse sources of information and form their own conclusions. Now, AI-driven conclusions from a centralized source (the CPS) bypass that intellectual effort. Combined with personalized algorithms and massive data collection, GenAI intensifies the risk of user manipulation. Unlike traditional search rankings based on general algorithms, GenAI generates individualized responses even to the same query. This lack of transparency and repeatability makes it difficult to identify and counteract biases embedded within GenAI systems, particularly in advertising-funded CPSs where revenue maximization may incentivize skewed results. The long-term outcome of this process is the creation of CPSs that become profit-maximizing transaction machines, knowing ever more about customer preferences and gaining ever greater capacity to manipulate them.
The dependency of business users on dominant CPSs creates a powerful “prisoner’s dilemma,” in which they are forced to allow their data to be used for GenAI, even if it collectively disadvantages them. Gatekeepers create this dilemma by linking business exposure on the CPS to their support for GenAI tools. Because of saliency bias on central platforms, even minor differences in prominence can shift demand to some businesses while rendering others invisible. If a business wants to disallow the use of its content for GenAI training on copyright grounds, it may be penalized with reduced access to end users through the very gatekeeper CPS it depends on. This coercion forces business users into a race to the bottom, where they enable AI solutions that may ultimately disintermediate them in exchange for short-term advantages. The situation is further complicated by preferential partnerships, where gatekeepers cooperate with select businesses, offering advantages in exchange for closer alignment with their systems. This unequal treatment torpedoes attempts by businesses to collectively escape the prisoner’s dilemma, further consolidating the gatekeeper’s power and control.
What is the effect of generative AI on the matching of supply and demand
The integration of generative AI (GenAI) into core platform services (CPSs) significantly alters how supply and demand are matched. Dominant CPSs, traditionally acting as intermediaries connecting businesses and end-users, now leverage GenAI to directly satisfy user needs, potentially displacing business users. This shift stems from GenAI’s ability to repurpose content initially provided by businesses, effectively undercutting their value proposition and reducing the necessity for users to interact directly with the businesses’ websites or services. Research indicates that AI-powered search can often fully address user queries without requiring them to visit external websites, highlighting the disintermediation effect of GenAI within CPS ecosystems. As a result, gatekeepers are evolving from mere matchmakers to direct suppliers, centralizing information and potentially diminishing the open web’s role in fulfilling user demand.
This transition from intermediation to disintermediation threatens the decentralization of information. The open web, envisioned as a decentralized network with diverse content sources, is being challenged by the rise of dominant CPSs using GenAI to provide synthesized answers directly on their platforms. This eliminates the need for users to explore multiple sources and form their own conclusions, fostering a centralized source of information controlled by a few powerful gatekeepers. Furthermore, the rise of personalized AI answers and content increases the risk of user manipulation. Unlike traditional search engines with traceable ranking algorithms, GenAI can generate different responses to the same query based on user data, creating incentives to incorporate biases and influence decisions without transparent processes.
This move towards direct supply and centralized information raises critical issues of user sovereignty and market competition. While CPSs initially promised open access and low costs, platform dominance has led to conditions that maximize profits by extracting value from both end-users and businesses. GenAI exacerbates this dynamic, amplifying the power of platforms to manipulate matching processes. Instead of relying on the relevance of information for the end-user as the primary matching factor, platforms are increasingly choosing matches that generate the highest profit margin for themselves. The rise of preferred partnership models, in which certain suppliers are granted preferential access or treatment on CPSs, further skews the matching mechanism and incentivizes actions that cement the disproportionate gains of the gatekeepers. Overall, as CPSs become ever more adept at providing AI-derived content and answers, independent suppliers risk becoming relegated to the mere role of source material providers for the gatekeepers’ machine-generated offerings.
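A minimal sketch shows how this drift in the matching objective plays out. The supplier names, relevance scores, and margins below are all hypothetical; the point is only that re-weighting relevance by the platform’s own margin reorders who is matched with the user.

```python
# Hypothetical comparison of relevance-first versus profit-weighted matching.
# Every supplier, relevance score, and margin here is invented for illustration.

suppliers = [
    # (name, relevance to the user's query in [0, 1], platform margin per sale)
    ("independent_shop",   0.90, 0.5),
    ("preferred_partner",  0.70, 3.0),
    ("platform_own_brand", 0.60, 5.0),
]

# Relevance-first matching: surface whatever best answers the user's need.
by_relevance = sorted(suppliers, key=lambda s: s[1], reverse=True)

# Profit-first matching: weight each candidate by the platform's own take.
by_profit = sorted(suppliers, key=lambda s: s[1] * s[2], reverse=True)

print([s[0] for s in by_relevance])  # the independent shop ranks first
print([s[0] for s in by_profit])     # the platform's own brand ranks first
```

The user sees only the final ordering, not the objective that produced it, which is why the shift from relevance to margin can happen without any visible change in the interface.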
How does the centralizing of information and direct AI answers impact user experience and trustworthiness of information
The integration of generative AI into core platform services (CPS) by digital gatekeepers is fundamentally shifting the dynamics of information access and user experience. Previously, these platforms served as intermediaries, connecting users to a diverse array of sources, enabling them to compare perspectives and form their own conclusions. However, the advent of AI-powered direct answers marks a departure from this decentralized model. Instead of directing users to multiple websites or sources, platforms now offer synthesized responses, effectively centralizing information and potentially diminishing the need for users to vet information across multiple sources, or even leave the platform’s ecosystem. This shift has profound implications for the trustworthiness of information and the intellectual effort required of users, as they increasingly rely on the curated conclusions of a single, centralized source.
Centralized Conclusions and Decreased User Sovereignty
This centralization of information significantly impacts user sovereignty. The World Wide Web was envisioned as a decentralized network of diverse information repositories. Market-dominant CPSs like Google Search and Amazon Marketplace have long steered user attention toward their preferred sources through preferential ranking, but they still allowed users to discover diverse sources and draw their own conclusions. The embedding of AI answers, however, allows gatekeepers to regurgitate information from multiple sources directly on their platforms. The intellectual effort of comparing and learning from multiple sources is replaced by a single conclusion from a centralized source, the CPS. Gatekeepers train end users to trust their rankings and AI answers, and these centralized conclusions shape the decision-making process. The ability of CPSs to deliver conclusions does not only cannibalize business users; it threatens the entire notion of an open and decentralized web. As the decision-making process becomes more centralized, every business grows more dependent on the goodwill of the already-mighty CPSs in control.
Furthermore, the use of AI in CPSs presents challenges to trust and transparency. Traditional search engines ranked results based on general relevance algorithms applied to every user. Generative AI, by contrast, can produce different answers to the same question, even from the same user. The lack of repeatability and the opaqueness of AI decision-making processes increase the incentives to manipulate end-user opinions. As these systems become more complex, it becomes difficult for users or authorities to understand the processes behind CPS recommendations, particularly since those processes can be changed at any time. Gatekeepers have an incentive to exploit such opacity by incorporating biases that maximize revenue per user query. All this puts at risk the integrity of information and the user’s capacity for independent judgement.
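A toy example illustrates why generated answers need not be repeatable. Real systems sample from vastly larger, user-conditioned distributions, but even this miniature version, with invented candidates and probabilities, can return different answers to the identical query across runs.

```python
import math
import random

# Toy next-answer distribution; the candidates and logits are invented.
LOGITS = {"product_A": 2.0, "product_B": 1.8, "product_C": 0.5}

def sample_answer(temperature: float, rng: random.Random) -> str:
    """Sample one answer from a softmax over LOGITS at the given temperature."""
    weights = {k: math.exp(v / temperature) for k, v in LOGITS.items()}
    r = rng.random() * sum(weights.values())
    for answer, weight in weights.items():
        r -= weight
        if r <= 0:
            return answer
    return answer  # guard against floating-point rounding

rng = random.Random()  # unseeded: successive runs are not reproducible
print([sample_answer(1.0, rng) for _ in range(5)])  # same query, varying answers
# At temperature near zero the distribution collapses and answers become
# repeatable, but deployed systems typically sample at a nonzero temperature.
```

Because neither the sampling process nor the underlying weights are observable from outside, a biased system and an unbiased one can look identical on any single query, which is what makes auditing so difficult.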
How can the power dynamic between platforms and business users lead to discriminatory practices and economic exploitation
The embedding of generative AI (GenAI) into core platform services (CPS) exacerbates existing power imbalances between platforms and business users, potentially leading to discriminatory practices and economic exploitation. Originally, CPSs attracted users with the promise of open access but have since become gatekeepers, conditioning access to their user base on terms that prioritize the platform’s profit margin. This includes businesses sharing more content, triggering intellectual property concerns, and paying more to reach customers. Now, with GenAI, these platforms can substitute the value businesses provide by using business-supplied content to directly satisfy end-user demand. This shift threatens to disintermediate businesses as information demand is satisfied within the intermediary platform itself. The incentive to direct users to third parties diminishes, especially if the platform’s AI agent can present all information directly, negating the need to visit external websites or apps.
This disintermediation risk is compounded by a further centralization of power and a loss of user sovereignty. Platforms, by embedding AI answers, remove the need for end users to compare and learn from multiple sources, instead offering a centralized conclusion. This centralized gatekeeping significantly influences decision-making processes, making businesses more dependent on the goodwill of already powerful platforms. Additionally, personalized manipulation becomes a significant concern as GenAI is capable of tailoring content at an unprecedented level. Unlike earlier systems that ranked based on general relevance, GenAI can create different answers to the same question, increasing incentives for manipulation. This opacity allows platforms to incorporate biases that maximize revenue, particularly in advertising-funded environments, nudging users towards the highest-paying advertisers without transparency or accountability.
Exploitation Through Prisoner’s Dilemma
Furthermore, dominant platforms can exploit business users through prisoner’s dilemmas. In a competitive environment with free choice, exploited users would simply switch services. With dominant CPSs, however, that choice is often nonexistent. Platforms condition access to consumers on allowing the use of businesses’ data to train proprietary AI models. Even if a business understands that a platform’s AI will be used to compete with it, it may not be able to refuse, as refusing risks losing access to consumers on that platform. This powerful “prisoner’s dilemma” forces business users to allow the use of their data for GenAI, even though collectively they are worse off by doing so. The dilemma culminates when promotion via an unavoidable CPS is linked to a business’s stance toward the gatekeeper’s GenAI tools. A business may rationally choose to share its data in exchange for increased prominence, as that prominence can shift demand to its services. Businesses thus face a race to the bottom as gatekeepers extract value from all sides.
In what ways are ‘preferred partnerships’ used and under what conditions should they be analyzed
Gatekeepers operating core platform services (CPSs) frequently use “preferred partnerships” to deepen their ecosystems and exercise more control over market participants. These partnerships aren’t always inherently problematic, as efficiency-creating collaborations can benefit the marketplace. However, they often serve as a mechanism for gatekeepers to selectively advantage certain favored business users to the detriment of others. This unequal treatment can torpedo efforts by a group of business users to escape a prisoner’s dilemma created by a CPS, such as agreeing to unfavorable data sharing terms or content usage rights for generative AI (GenAI) in exchange for prominence. When these partnerships are deployed strategically, they can undermine collective bargaining efforts or rational industry responses to the challenges of increasingly powerful CPSs by offering a few companies advantages that disrupt broader cooperation.
The key dynamic to scrutinize is the degree to which these preferred partnerships serve to “divide and conquer.” Instead of fostering a fair and equitable marketplace where all participants have an equal opportunity to succeed, these arrangements often lead to the exclusion of rivals, thereby creating a competition problem. By offering enhanced visibility, access to resources, or direct payments to selected partners, gatekeepers can incentivize defection from collective strategies aimed at securing fair compensation or data usage terms. This outcome is especially concerning in industries reliant on creative content, since preferred partnerships can deter licensing deals or equitable copyright agreements between content creators and GenAI service providers.
Conditions for Antitrust Analysis
These partnerships should be closely analyzed by antitrust authorities under circumstances that suggest potential anti-competitive harm. For instance, if preferential exposure to a gatekeeper’s user base requires accepting unfavorable terms around data usage for GenAI, or effectively excludes competitors from reaching those same users, there is cause for concern. Similarly, agreements that condition a partner’s prominence on a CPS on its acceptance of unreasonable terms for content used in GenAI warrant scrutiny. Agencies should examine whether the professed benefits of preferred partnerships outweigh their exclusionary effects, and whether they promote genuine competition or merely solidify the gatekeeper’s position by suppressing independent business models. A thorough assessment of market dynamics, including the competitive landscape among business users and the gatekeeper’s incentives, is crucial to determine the true impact of these partnerships.
What legal and regulatory actions can be implemented to counter the market power of generative AI integrated into core platform services
Addressing the market power of generative AI (GenAI) integrated into core platform services (CPSs) requires a multi-pronged approach that goes beyond traditional antitrust remedies. A key element is rebalancing economic control over creative output. Currently, digital gatekeepers exert undue power over content monetization, effectively repurposing and monetizing content published online through their GenAI systems. To rectify this, it is essential to restore control to content providers, preserving the incentives for future innovation on which AI itself relies. This includes reinforcing intellectual property rights, in particular the right to prohibit use. Antitrust law should prevent gatekeepers from penalizing those who prohibit the use of their content for GenAI, for example by disadvantaging them on unavoidable CPSs. By severing such anticompetitive links, antitrust law can ensure that content providers regain control over the monetization of their content and remain independent market participants, retaining both the right to say “no” and the right to say “yes, against reasonable compensation.”
Another critical measure involves mandating a bargaining framework for fair, reasonable, and non-discriminatory (FRAND) compensation. The king-maker power of gatekeepers in controlling CPSs creates an imbalance in bargaining power, allowing them to reject compensation for the use of protected content or to insist on prices close to zero. Stronger inter-platform competition could shift the balance of power. As an interim measure, other antitrust tools, such as dispute resolution mechanisms, can be deployed to level the playing field between content creators and dominant platforms. Authorities should also consider providing exemptions for collective agreements to rebalance bargaining power. Affected content providers have a legitimate interest in discussing and agreeing on suitable countermeasures. Antitrust law shouldn’t penalize but facilitate such defensive measures, providing a secure framework in which content providers can collectively negotiate with platforms about the use of their content for GenAI and about ensuring continued nationwide coverage of their media. This framework could take the form of (i) a block exemption, (ii) guidelines, or (iii) binding commitments. Ultimately, severing the anti-competitive links between reliance on prominence on a CPS and a business’s (un)willingness to support GenAI tools capable of exploiting it is a critical task for antitrust law.
Furthermore, consideration should be given to structural separation of (ad-funded) content dissemination from content creation. When a gatekeeper operating a CPS funded by advertising owns and controls a GenAI system capable of creating personalized content and non-traceable marketing messages, it creates a massive unchecked incentive to manipulate end users. Given the non-transparency of such an architecture, the conflict of interest is unlikely to be supervised effectively. Therefore, structural separation might be necessary to prevent gatekeepers controlling access to content from becoming content providers themselves, a dynamic that places them at a competitive advantage over other content providers. Another avenue is addressing “preferred foreclosure partnerships,” which often serve as a vehicle to exclude rivals and create prisoner’s dilemmas at the expense of third parties. Authorities should thoroughly assess the exclusionary effects of partnerships that involve preferential presentation on a CPS in return for cooperation with the gatekeeper’s GenAI tools. Such contracts, which can facilitate the splitting up of customer groups, should be sanctioned accordingly. Ultimately, to maintain a fair, competitive, and innovative digital landscape, policy interventions are necessary to address the market power imbalances accentuated by the integration of GenAI into CPSs.
Embedding Generative AI into Core Platform Services Exacerbates Competitive Harm
The integration of generative AI (GenAI) into core platform services (CPSs) intensifies existing competition concerns by altering the dynamics of user engagement and value extraction. Traditionally, CPSs acted as intermediaries, matching businesses with end users. However, with GenAI, platforms can now directly satisfy end-user demand using content sourced from business users, effectively bypassing and diminishing the value provided by those businesses. This shift from intermediation to disintermediation allows gatekeepers to control information flow and user experience, reducing the need for users to explore the broader web. The initial promise of open access becomes compromised as platforms leverage GenAI to centralize information and maintain user engagement within their own walled gardens, leading to a decline in traffic to independent websites and a greater reliance on the gatekeeper’s curated content.
Furthermore, this centralization of information, coupled with the personalization capabilities of GenAI, significantly increases the risk of user manipulation. While CPSs previously segmented users for targeted advertising, GenAI enables individualized manipulation on an unprecedented scale. The opaque nature of GenAI systems and their capacity to generate non-repeatable, biased answers make it challenging for users and authorities to detect or predict manipulative practices. Gatekeepers have incentives to exploit this opacity by incorporating biases that maximize revenue, potentially leading to scenarios where users are constantly nudged toward the highest-paying advertisers through personalized, and often imperceptible, manipulations. The ability to control the customer journey—from demand generation to fulfillment—transforms these platforms into profit-maximizing “one-stop-shops” that prioritize revenue over user relevance.
This shift creates “prisoner’s dilemma” scenarios for businesses. Individually, businesses are coerced into enabling AI solutions that may ultimately disintermediate them, fearing immediate disadvantage if they refuse. However, collectively, this cooperation weakens their position relative to the gatekeeper, who benefits from the aggregated data and control. The dependence on CPSs allows gatekeepers to extract first-party data to train bespoke GenAI models, further solidifying their dominant position. The threat of reduced prominence on the platform forces businesses to comply, creating a race to the bottom where the gatekeeper is the sole beneficiary. This situation is often exacerbated by “preferred partnerships,” where select businesses receive advantages in exchange for closer cooperation, further dividing and weakening the collective bargaining power of other businesses.
The drive to embed generative AI into core platform services represents a pivotal shift, transforming these platforms from simple intermediaries to powerful, self-contained ecosystems. This transformation, however, comes at a cost. The promise of unbiased information access is eroded as centralized AI-driven answers and personalized content increasingly dominate the user experience. Independent businesses, forced into a precarious balancing act between visibility and self-preservation, face the risk of becoming mere suppliers to the platforms they once utilized. The future digital landscape hinges on whether we can ensure that the opportunities presented by AI are realized without sacrificing user autonomy, fair competition, and the vibrant diversity of the web.
