
Thursday, November 20, 2025

With negative review extortion scams on the rise, use Google’s report form

 

Last month, Google launched a form to report negative review extortion scams and it seems to work.

Google Business Profiles has a form where you can report negative review extortion scams; it launched a month ago. You can find access to the form in this help document, and I believe you need to be logged into your Google account with access to the Business Profile you want to report.

Review extortion scams. These negative review extortion scams are on the rise and a huge concern for local SEOs and businesses. A scammer will message you, likely over WhatsApp or email, and tell you that they left a one-star negative review and the only way to remove it is to pay them.

Google wrote in its help document, “These scams may involve a sudden increase in 1-star and 2-star reviews on your Google Business Profile, followed by someone demanding money, goods, or services in exchange for removing the negative reviews.”

The form. The form can be accessed while logged into your Google account by clicking here. The form asks for your information, the affected Google Business Profile details, more details on the extortion review, and additional evidence.

Do not engage. Google posted four tips for when you are confronted with these scams:

  • Do not engage with or pay the malicious individuals. This can encourage further attempts and doesn’t guarantee the removal of reviews.
  • Do not try to resolve it yourself by offering money or services.
  • Gather all evidence immediately. The sooner you collect proof, the better.
  • Report all relevant communication you receive in the form.

Give it a try. There are some who are doubtful that this form actually does anything. But one local SEO tried it out over the weekend and within a few days, the review in question was removed. So it is worth giving it a shot.

Why we care. Reviews on your local listing, especially on Google Maps and Google Search, can have a huge impact on your business. Negative reviews will encourage customers to look for other businesses, even if those reviews are fraudulent. So, being on top of your reviews and removing the fake and fraudulent reviews is an important task most businesses should do on an ongoing basis.

This form will help you manage some of those fake reviews.

Tim Berners-Lee warns AI may collapse the ad-funded web

 

Sir Tim Berners-Lee helped build the modern web. Now he’s worried AI could help destroy its business model.

Sir Tim Berners-Lee, who invented the World Wide Web, is worried that the ad-supported web will collapse due to AI. In a new interview with Nilay Patel on Decoder, Berners-Lee said:

  • “I do worry about the infrastructure of the web when it comes to the stack of all the flow of data, which is produced by people who make their money from advertising. If nobody is actually following through the links, if people are not using search engines, they’re not actually using their websites, then we lose that flow of ad revenue. That whole model crumbles. I do worry about that.”

Why we care. There is a split in our industry, where one side thinks “it’s just SEO” and the other sees a near future where visibility in AI platforms has replaced rankings, clicks, and traffic. We know SEO still isn’t dead and people are still using search engines, but the writing is still on the wall (Google execs have said as much in private). Berners-Lee seems to envision the same future, warning that if people stop following links and visiting websites, the entire web model “crumbles,” leaving AI platforms with value while the ad-supported web and SEO fade.

On monopolies. In the same interview, Berners-Lee said a centralized provider or monopoly isn’t good for the web:

  • “When you have a market and a network, then you end up with monopolies. That’s the way markets work.
  • “There was a time before Google Chrome was totally dominant, when there was a reasonable market for different browsers. Now Chrome is dominant.
  • “There was a time before Google Search came along, there were a number of search engines and so on, but now we have basically one search engine.
  • “We have basically one social network. We have basically one marketplace, which is a real problem for people.”

On the semantic web. Berners-Lee worked on the Semantic Web for decades (a web that machines can read as easily as humans). As for where it’s heading next, he sees data by AI, for AI (and also people, but especially AI):

  • “The Semantic Web has succeeded to the extent that there’s the linked open data world of public databases of all kinds of things, about proteins, about geography, the OpenStreetMap, and so on. To a certain extent, the Semantic Web has succeeded in two ways: all of that, and because of Schema.org.
  • “Schema.org is this project of Google. If you have a website and you want it to be recognized by the search engine, then you put metadata in Semantic Web data, you put machine-readable data on your website. And then the Google search engine will build a mental model of your band or your music, whatever it is you’re selling.
  • “In those ways, with the link to the data group and product database, the Semantic Web has been a success. But then we never built the things that would extract semantic data from non-semantic data. Now AI will do that.
  • “Now we’ve got another wave of the Semantic Web with AI. You have a possibility where AIs use the Semantic Web to communicate between one and two possibilities and they communicate with each other. There is a web of data that is generated by AIs and used by AIs and used by people, but also mainly used by AIs.”

On blocking AI crawlers. Discussion turned to Cloudflare, its attempts to block crawlers, and its pay-per-crawl initiative. Berners-Lee was asked whether the web’s architecture could be redesigned so websites and database owners could bake a “not unless you pay me” rule into open standards, forcing AI crawlers and other clients across the ecosystem to honor payment requirements by default. His response:

  • “You could write the protocols. One, in fact, is micropayments. We’ve had micropayments projects in W3C every now and again over the decades. There have been projects at MIT, for example, for micropayments and so on. So, suddenly there’s a “payment required” error code in HTTP. The idea that people would pay for information on the web; that’s always been there. But of course whether you’re an AI crawler or whether you are an individual person, it’s the way you want to pay for things that’s going to be very different.”
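
For context, HTTP has reserved status code 402 (“Payment Required”) since HTTP/1.1 – that is the error code Berners-Lee is referring to. Below is a minimal sketch, using only Python’s standard library, of how a site might answer AI crawlers with a 402 unless payment is presented. The user-agent list and token header are illustrative assumptions, not part of any standard or of Cloudflare’s initiative:

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

# Illustrative user-agent substrings for AI crawlers (assumed, not exhaustive).
AI_CRAWLERS = ("GPTBot", "ClaudeBot", "PerplexityBot", "CCBot")

class PayPerCrawlHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        ua = self.headers.get("User-Agent", "")
        is_ai_crawler = any(bot in ua for bot in AI_CRAWLERS)
        # Hypothetical payment check: a real micropayment protocol would
        # define how tokens are negotiated and verified.
        paid = self.headers.get("X-Payment-Token") == "demo-token"
        if is_ai_crawler and not paid:
            self.send_response(402)  # 402 Payment Required
            self.send_header("Content-Type", "text/plain")
            self.end_headers()
            self.wfile.write(b"Payment required to crawl this content.\n")
            return
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.end_headers()
        self.wfile.write(b"<html><body>Regular page content.</body></html>")

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), PayPerCrawlHandler).serve_forever()
```

The hard part, as Berners-Lee notes, isn’t the status code – it’s standardizing how very different clients (crawlers vs. individual people) would actually negotiate and settle those payments.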

The interview. Sir Tim Berners-Lee doesn’t think AI will destroy the web

SEO vs. AI search: 101 questions that keep me up at night

 

Here's why treating ChatGPT like Google will guarantee your failure in the age of RAG, reranking, and probabilistic systems.

Look, I get it. Every time a new search technology appears, we try to map it to what we already know.

  • When mobile search exploded, we called it “mobile SEO.”
  • When voice assistants arrived, we coined “voice search optimization” and told everyone it would be the next big thing.

I’ve been doing SEO for years.

I know how Google works – or at least I thought I did.

Then I started digging into how ChatGPT picks citations, how Perplexity ranks sources, and how Google’s AI Overviews select content.

I’m not here to declare that SEO is dead or to state that everything has changed. I’m here to share the questions that keep me up at night – questions that suggest we might be dealing with fundamentally different systems that require fundamentally different thinking.

The questions I can’t stop asking 

After months of analyzing AI search systems, documenting ChatGPT’s behavior, and reverse-engineering Perplexity’s ranking factors, these are the questions that challenge most of the things I thought I knew about search optimization.

When math stops making sense

I understand PageRank. I understand link equity. But when I discovered Reciprocal Rank Fusion in ChatGPT’s code, I realized I don’t understand this:

  • Why does RRF mathematically reward mediocre consistency over single-query excellence? Is ranking #4 across 10 queries really better than ranking #1 for one?
  • How do vector embeddings determine semantic distance differently from keyword matching? Are we optimizing for meaning or words?
  • Why does temperature=0.7 create non-reproducible rankings? Should we test everything 10 times over now?
  • How do cross-encoder rerankers evaluate query-document pairs differently than PageRank? Is real-time relevance replacing pre-computed authority?

These are also SEO concepts. However, they appear to be entirely different mathematical frameworks within LLMs. Or are they?
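
To make the first of those questions concrete, here is a minimal sketch of Reciprocal Rank Fusion using the commonly cited formula from Cormack et al. (2009), score = Σ 1/(k + rank) with k = 60. The example rankings are invented purely to show the consistency effect:

```python
def rrf_score(ranks, k=60):
    """Reciprocal Rank Fusion: sum 1/(k + rank) over every query
    variant where the document appears. k=60 is the constant from
    the original RRF paper (Cormack et al., 2009)."""
    return sum(1.0 / (k + rank) for rank in ranks)

# Invented example: one doc ranks #4 for all 10 fan-out query variants,
# another ranks #1 for a single variant and is absent elsewhere.
consistent = rrf_score([4] * 10)  # 10 * 1/64 = 0.15625
excellent = rrf_score([1])        # 1/61 ≈ 0.01639

print(f"consistent mediocrity:   {consistent:.5f}")
print(f"single-query excellence: {excellent:.5f}")
```

Under this fusion, ranking #4 across 10 query variants scores almost ten times higher than ranking #1 once. And because k = 60 dominates the denominator, positions deep in any single list contribute almost nothing – the mathematical ceiling mentioned below.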

When scale becomes impossible

Google indexes trillions of pages. ChatGPT retrieves 38-65. This isn’t a small difference – it’s a 99.999% reduction, resulting in questions that haunt me:

  • Why do LLMs retrieve 38-65 results while Google indexes billions? Is this temporary or fundamental?
  • How do token limits establish rigid boundaries that don’t exist in traditional searches? When did search results become limited in size?
  • How does the k=60 constant in RRF create a mathematical ceiling for visibility? Is position 61 the new page 2?

Maybe they’re just current limitations. Or maybe, they represent a different information retrieval paradigm.
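
One mechanical answer hides behind the temperature questions above: at temperature 0.7 the model samples from a probability distribution instead of always picking the top candidate, so identical inputs can yield different outputs. A toy sketch follows – the logits are made up for illustration, and real systems sample over tokens, not whole sources:

```python
import math
import random

def sample_with_temperature(logits, temperature):
    """Softmax sampling: scale logits by 1/T, exponentiate, normalize,
    then draw. As T approaches 0 this approaches greedy argmax; at
    T=0.7 every draw retains real randomness."""
    if temperature == 0:
        return max(logits, key=logits.get)  # deterministic greedy pick
    weights = {t: math.exp(v / temperature) for t, v in logits.items()}
    r = random.random() * sum(weights.values())
    for token, w in weights.items():
        r -= w
        if r <= 0:
            return token
    return token  # fallback for floating-point rounding

# Made-up scores for three candidate citations.
logits = {"source_A": 2.0, "source_B": 1.5, "source_C": 1.0}

print([sample_with_temperature(logits, 0.7) for _ in range(5)])  # varies per run
print([sample_with_temperature(logits, 0) for _ in range(5)])    # always source_A
```

This is why testing a prompt once tells you very little, and why running the same test 10 times and reporting the variance is the safer habit.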

The 101 questions that haunt me:

  1. Is OpenAI also using CTR for citation rankings?
  2. Does AI read our page layout the way Google does, or only the text?
  3. Should we write short paragraphs to help AI chunk content better?
  4. Can scroll depth or mouse movement affect AI ranking signals?
  5. How do low bounce rates impact our chances of being cited?
  6. Can AI models use session patterns (like reading order) to rerank pages?
  7. How can a new brand be included in offline training data and become visible?
  8. How do you optimize a web/product page for a probabilistic system?
  9. Why are citations continuously changing?
  10. Should we run multiple tests to see the variance?
  11. Can we use long-form questions with the “blue links” on Google to find the exact answer?
  12. Are LLMs using the same reranking process?
  13. Is web_search a hard switch, or does it trigger probabilistically?
  14. Are we chasing ranks or citations?
  15. Is reranking fixed or stochastic?
  16. Are Google & LLMs using the same embedding model? If so, what’s the corpus difference?
  17. Which pages are most requested by LLMs and most visited by humans?
  18. Do we track drift after model updates?
  19. Why is EEAT easily manipulated in LLMs but not in Google’s traditional search?
  20. How many of us drove at least 10x traffic increases after Google’s algorithm leak?
  21. Why does the answer structure always change even when asking the same question within a day’s difference? (If there is no cache)
  22. Does post-click dwell on our site improve future inclusion?
  23. Does session memory bias citations toward earlier sources?
  24. Why are LLMs more biased than Google?
  25. Does offering a downloadable dataset make a claim more citeable?
  26. Why do we still have very outdated information in Turkish, even though we ask very up-to-date questions? (For example, when asking what’s the best e-commerce website in Turkiye, we still see brands from the late 2010s)
  27. How do vector embeddings determine semantic distance differently from keyword matching?
  28. Do we now find ourselves in need to understand the “temperature” value in LLMs?
  29. How can a small website appear inside ChatGPT or Perplexity answers?
  30. What happens if we optimize our entire website solely for LLMs?
  31. Can AI systems read/evaluate images in webpages instantly, or only the text around them?
  32. How can we track whether AI tools use our content?
  33. Can a single sentence from a blog post be quoted by an AI model?
  34. How can we ensure that AI understands what our company does?
  35. Why do some pages show up in Perplexity or ChatGPT, but not in Google?
  36. Does AI favor fresh pages over stable, older sources?
  37. How does AI re-rank pages once it has already fetched them?
  38. Can we train LLMs to remember our brand voice in their answers?
  39. Is there any way to make AI summaries link directly to our pages?
  40. Can we track when our content is quoted but not linked?
  41. How can we know which prompts or topics bring us more citations? What’s the volume?
  42. What would happen if we were to change our monthly client SEO reports by just renaming them to “AI Visibility AEO/GEO Report”?
  43. Is there a way to track how many times our brand is named in AI answers? (Like brand search volumes)
  44. Can we use Cloudflare logs to see if AI bots are visiting our site? (A log-parsing sketch follows this list.)
  45. Do schema changes result in measurable differences in AI mentions?
  46. Will AI agents remember our brand after their first visit?
  47. How can we make a local business with a map result more visible in LLMs?
  48. Will Google AI Overviews and ChatGPT web answers use the same signals?
  49. Can AI build a trust score for our domain over time?
  50. Why do we need to be visible in query fanouts? For multiple queries at the same time? Why is there synthetic answer generation by AI models/LLMs even when users are only asking a question?
  51. How often do AI systems refresh their understanding of our site? Do they also have search algorithm updates?
  52. Is the freshness signal sitewide or page-level for LLMs?
  53. Can form submissions or downloads act as quality signals?
  54. Are internal links making it easier for bots to move through our sites?
  55. How does the semantic relevance between our content and a prompt affect ranking?
  56. Can two very similar pages compete inside the same embedding cluster?
  57. Do internal links help strengthen a page’s ranking signals for AI?
  58. What makes a passage “high-confidence” during reranking?
  59. Does freshness outrank trust when signals conflict?
  60. How many rerank layers occur before the model picks its citations?
  61. Can a heavily cited paragraph lift the rest of the site’s trust score?
  62. Do model updates reset past re-ranking preferences, or do they retain some memory?
  63. Why can we (mostly) find better results via the 10 blue links, without any hallucination?
  64. Which part of the system actually chooses the final citations?
  65. Do human feedback loops change how LLMs rank sources over time?
  66. When does an AI decide to search again mid-answer? Why do we see more/multiple automatic LLM searches during a single chat window?
  67. Does being cited once make it more likely for our brand to be cited again? If we rank in the top 10 on Google, we can remain visible while staying in the top 10. Is it the same with LLMs?
  68. Can frequent citations raise a domain’s retrieval priority automatically?
  69. Are user clicks on cited links stored as part of feedback signals?
  70. Are Google and LLMs using the same deduplication process?
  71. Can citation velocity (growth speed) be measured like link velocity in SEO?
  72. Will LLMs eventually build a permanent “citation graph” like Google’s link graph?
  73. Do LLMs connect brands that appear in similar topics or question clusters?
  74. How long does it take for repeated exposure to become persistent brand memory in LLMs?
  75. Why doesn’t Google show 404 links in results, while LLMs do in answers?
  76. Why do LLMs fabricate citations while Google only links to existing URLs?
  77. Do LLM retraining cycles give us a reset chance after losing visibility?
  78. How do we build a recovery plan when AI models misinterpret information about us?
  79. Why do some LLMs cite us while others completely ignore us?
  80. Are ChatGPT and Perplexity using the same web data sources?
  81. Do OpenAI and Anthropic rank trust and freshness the same way?
  82. Are per-source limits (max citations per answer) different for LLMs?
  83. How can we determine if AI tools cite us following a change in our content?
  84. What’s the easiest way to track prompt-level visibility over time?
  85. How can we make sure LLMs assert our facts as facts?
  86. Does linking a video to the same topic page strengthen multi-format grounding?
  87. Can the same question suggest different brands to different users?
  88. Will LLMs remember previous interactions with our brand?
  89. Does past click behavior influence future LLM recommendations?
  90. How do retrieval and reasoning jointly decide which citation deserves attribution?
  91. Why do LLMs retrieve 38-65 results per search while Google indexes billions?
  92. How do cross-encoder rerankers evaluate query-document pairs differently than PageRank?
  93. Why can a site with zero backlinks outrank authority sites in LLM responses?
  94. How do token limits create hard boundaries that don’t exist in traditional search?
  95. Why does temperature setting in LLMs create non-deterministic rankings?
  96. Does OpenAI allocate a crawl budget for websites?
  97. How does Knowledge Graph entity recognition differ from LLM token embeddings?
  98. How does crawl-index-serve differ from retrieve-rerank-generate?
  99. How does temperature=0.7 create non-reproducible rankings?
  100. Why is a tokenizer important?
  101. How does knowledge cutoff create blind spots that real-time crawling doesn’t have?
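
Of these, question 44 has the most practical starting point today: your server or CDN access logs record every request’s user agent, so you can count visits from known AI crawlers. Below is a minimal sketch over a combined-format access log; the bot substrings are illustrative and worth checking against each vendor’s published user agents:

```python
import re
from collections import Counter

# Illustrative substrings of known AI crawler user agents.
AI_BOTS = ("GPTBot", "OAI-SearchBot", "ClaudeBot", "PerplexityBot", "CCBot")

# In combined log format, the user agent is the final quoted field.
UA_PATTERN = re.compile(r'"([^"]*)"\s*$')

def count_ai_bot_hits(log_path):
    hits = Counter()
    with open(log_path) as f:
        for line in f:
            match = UA_PATTERN.search(line)
            if not match:
                continue
            ua = match.group(1)
            for bot in AI_BOTS:
                if bot in ua:
                    hits[bot] += 1
    return hits

print(count_ai_bot_hits("access.log"))  # e.g., Counter({'GPTBot': 123, ...})
```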

When trust becomes probabilistic

This one really gets me. Google links to URLs that exist, whereas AI systems can completely make things up:

  • Why can LLMs fabricate citations while Google only links to existing URLs?
  • How does a 3-27% hallucination rate compare to Google’s 404 error rate?
  • Why do identical queries produce contradictory “facts” in AI but not in search indices?
  • Why do we still have outdated information in Turkish even though we ask up-to-date questions?

Are we optimizing for systems that might lie to users? How do we handle that?

Where this leaves us

I’m not saying AI search optimization/AEO/GEO is completely different from SEO. I’m just saying that I have 100+ questions that my SEO knowledge can’t answer well, yet.

Maybe you have the answers. Maybe nobody does (yet). But as of now, I don’t have the answers to these questions.

What I do know, however, is this: These questions aren’t going anywhere. And, there will be new ones.

The systems that generate these questions aren’t going anywhere either. We need to engage with them, test against them, and maybe – just maybe – develop new frameworks to understand them.

The winners in this new field won’t be those who have all the answers. They’ll be those asking the right questions and testing relentlessly to find out what works.

Google: set shipping and returns policies in Search Console or via new markup

 

You don't even need a Google Merchant Center account to add this information anymore.

Google now lets merchants add their shipping and return policies to Google Search without having a Google Merchant Center account. You can do this within Google Search Console and/or by using new structured data.

Google wrote:

“We’re excited to announce that we’re now expanding the options for merchants to provide shipping and returns information, even if they don’t have a Merchant Center account. Merchants can now tell Google about their shipping and returns policies in two distinct ways: by configuring them directly in Search Console or by using new organization-level structured data.”

Search Console. If Google determines that shipping and return policies make sense for your site (i.e., your site sells products), then Google will show you a new screen to add them. Google had this screen before, but previously your site had to be hooked up to a Merchant Center account for the screen to appear.

Google wrote, “It’s important to note that settings configured in Search Console will take precedence over structured data on your site.”

The “Shipping and returns” configuration will be rolling out gradually over the coming weeks for all countries and languages within Search Console.

Here is what it looks like:

[Screenshot: the Shipping and returns settings screen in Search Console]

Structured data. Google also added new structured data for you to communicate these details to Google Search. This new markup is called organization-level shipping policy structured data. Check out that link for the technical details on how to implement this structured data.

Google wrote:

“This new markup support complements last year’s launch of organization-level return policies. Instead of adding shipping markup to every single product, you can now specify a general, site-wide shipping policy. This is ideal when your shipping policies apply to the majority of your products, as it reduces the amount of markup you need to manage. Shipping policies specified for individual products will still take priority over this general, organization-level policy for those specific items.

We recommend placing shipping structured data (nested under Organization) on the page where you describe your shipping policy. You can then test your markup using the Rich Results Test by submitting the URL of the page with shipping markup or pasting the code snippet with shipping markup. Using the tool, you can confirm whether or not your markup is valid. For example, here is a test for shipping policy markup.”

Here is sample markup:

[Screenshot: sample shipping policy markup in the Rich Results Test]
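
For readers who want a feel for the shape of this markup, here is a hedged sketch written as a Python script that emits organization-level JSON-LD. The hasMerchantReturnPolicy block follows the organization-level return policy vocabulary Google launched last year; the shipping block’s property name (hasShippingService here) is an assumption for illustration – take the exact required properties from the documentation Google links above:

```python
import json

markup = {
    "@context": "https://schema.org",
    "@type": "Organization",
    "name": "Example Store",
    "url": "https://www.example.com",
    # Organization-level return policy (schema.org MerchantReturnPolicy).
    "hasMerchantReturnPolicy": {
        "@type": "MerchantReturnPolicy",
        "applicableCountry": "US",
        "returnPolicyCategory": "https://schema.org/MerchantReturnFiniteReturnWindow",
        "merchantReturnDays": 30,
        "returnMethod": "https://schema.org/ReturnByMail",
        "returnFees": "https://schema.org/FreeReturn",
    },
    # Assumed organization-level shipping block: verify the property names
    # against Google's shipping policy structured data documentation.
    "hasShippingService": {
        "@type": "OfferShippingDetails",
        "shippingDestination": {"@type": "DefinedRegion", "addressCountry": "US"},
        "shippingRate": {"@type": "MonetaryAmount", "value": 4.95, "currency": "USD"},
    },
}

# Paste the output into a <script type="application/ld+json"> tag on the
# page describing your shipping policy, then validate it in the Rich
# Results Test.
print(json.dumps(markup, indent=2))
```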

Why we care. If you run an e-commerce site or a site that has shipping and return policies, and you never set up Google Merchant Center, well now you can communicate these policies to Google Search without a Merchant Center account.

You still probably should use Merchant Center, but here is a shortcut for this specific task.

How to measure your AI search brand visibility and prove business impact

 

Your brand’s presence in AI answers shows real influence. Track visibility, benchmark competitors, and connect it to business growth.

Brand visibility is replacing rankings as the most important metric in SEO.

AI search engines now answer questions directly – often without a single click to a website.

If your brand isn’t mentioned in those AI answers, you’re invisible where it matters most.

This isn’t about being No. 1 in blue links anymore. 

It’s about being the brand ChatGPT recommends, the company Perplexity cites, and the solution featured in Google’s AI Overview.

So how do you measure and track that presence?

Here’s a simple three-step framework to help – starting with your brand visibility score.

Brand visibility tracker

(Copy this free spreadsheet to benchmark your current visibility.)

Brand visibility in AI search is an early signal of influence. It shows whether buyers are seeing, citing, and considering you before they ever reach your website.

The more often your brand appears, the more it’s trusted – and that trust is what turns visibility into real pipeline.

The brand visibility score is a simple calculation that shows your presence across AI-generated answers:

  • Brand visibility score = (Answers mentioning your brand ÷ Total answers for your space) × 100

For example, if you test 100 high-intent prompts like “best CRM software” across ChatGPT, Perplexity, and AI Overviews – and your brand appears in 22 of those AI responses – your Brand Visibility Score would be 22%.

A higher score means greater exposure during high-intent, AI-driven decision moments.

Alongside the score, track these other key indicators of visibility:

  • Citation rate: The percentage of AI answers that cite your brand.
  • Share of voice: Answers mentioning your brand divided by answers mentioning your brand or competitors.
  • Sentiment: The context of those mentions – whether positive, neutral, or negative.
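
To see how these numbers fall out of the same prompt-testing data, here is a minimal sketch; the counts are invented and would come from your tracker spreadsheet in practice:

```python
# Invented sample data: 100 high-intent prompts tested across ChatGPT,
# Perplexity, and AI Overviews.
total_answers = 100
answers_mentioning_us = 22
answers_citing_us = 15                      # mentions that actually cite/link us
answers_mentioning_us_or_competitors = 68
positive, neutral, negative = 14, 6, 2      # sentiment of our 22 mentions

visibility_score = answers_mentioning_us / total_answers * 100
citation_rate = answers_citing_us / total_answers * 100
share_of_voice = (answers_mentioning_us
                  / answers_mentioning_us_or_competitors * 100)

print(f"Brand visibility score: {visibility_score:.0f}%")  # 22%
print(f"Citation rate:          {citation_rate:.0f}%")     # 15%
print(f"Share of voice:         {share_of_voice:.1f}%")    # 32.4%
print(f"Sentiment (+/0/-):      {positive}/{neutral}/{negative}")
```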

To improve your brand visibility score, evolve how you measure organic search growth. You can track it manually, automate it with tools, or do both.

First, I’ll walk through the manual approach using our free visibility tracker – then show how to automate it.

Step 1: Monitor your visibility footprint

Identify where AI answers appear for your key queries.

Start by running high-intent searches your buyers use, such as “best CRM software.”

Check whether an AI Overview appears in the results. 

Step 2: Benchmark your brand mentions

Calculate your current visibility score and competitor benchmarks.

The brand visibility score shows:

  • How often your brand is cited.
  • How much share of voice you own.
  • The sentiment tied to those mentions. 

In other words, it’s how LLMs “vote” on your authority.

To track citations:

  • Audit across platforms: Note which AI search engines mention your brand – and which leave you out.
  • Benchmark competitors: Compare share of voice across key topics to see who dominates and who’s missing.
  • Measure sentiment: Not all mentions are positive. Track whether your citations are favorable, neutral, or negative.

Focus on your top 50 intent-driven queries. Track total answers, your citations, competitor citations, and sentiment scores weekly. These are your new KPIs.

Step 3: Track changes over time

Visibility isn’t static. It shifts as LLMs update, competitors refresh content, and search structures evolve. 

To prove impact, you need to monitor these shifts consistently and connect them to business outcomes.

Here are some trends we’re seeing:

  • Pages updated within the past 12 months are twice as likely to retain citations.
  • 60% of commercial queries cite refreshed content updated within the last six months.
  • Structured pages amplify this effect. URLs cited in ChatGPT averaged 17 times more list sections than uncited ones, and schema boosts citation odds by 13%.

Action steps:

  • Track brand mentions weekly or monthly across ChatGPT, Gemini, Perplexity, and Google AI Overviews.
  • Audit which pages gain or lose citations as freshness and structure change.
  • Refresh content. Add new data points, rework headings, create lists, FAQs, and expert POVs to strengthen structure and authority.
  • After updates, compare competitors’ share of voice over the same period.
  • Tie lifts in citations to sourced pipeline or influenced deals.

Each trend offers tangible proof you can bring to the C-suite – showing how your brand is shaping buyer decisions at the exact moment of intent.



Measuring the impact of brand visibility

Tracking brand visibility connects marketing performance to real business outcomes.

  • Shape buyer perception early: Citations in AI answers show your brand is part of the conversation before prospects ever reach your site.
  • Show executives the pipeline link: Visibility metrics reveal where your brand is gaining or losing ground in key decision moments – especially across mid- and bottom-funnel searches. By tracking visibility, you can see how it translates into conversions and revenue.

To measure it:

  • Monitor AI citations regularly across ChatGPT, Gemini, Perplexity, and AI Overviews.
  • Compare against competitors to benchmark share of voice and sentiment shifts.
  • Map visibility trends to sales metrics such as demo requests, sourced opportunities, or closed-won deals.

When you treat visibility as a KPI, you can prove that content is building influence that drives pipeline.

Before using tools, get in the weeds and test prompts across different AI systems. Just as the best SEOs live on the SERP, the best AI SEOs live in AI chats.

It’s the only way to catch the shifts as LLMs evolve.

Once you have a solid grasp of the fundamentals, use tools like Semrush’s AI SEO Toolkit and AirOps to benchmark and track your visibility automatically. (Disclosure: I am the content and SEO lead at AirOps.)

Semrush offers the largest prompt database in the U.S., helping you identify which features drive visibility compared with your competitors.

[Screenshot: Semrush’s AI SEO Toolkit]

AirOps takes it a step further – turning insight into action by automating campaigns for content refreshes, new content creation, outreach, and community engagement.

[Screenshot: AirOps content refresh opportunities]

This level of campaign-level granularity makes it much easier to know what steps to take to improve.

So, you’ve got the fundamentals of brand visibility and tools to help you scale.

Google Discover fixing fake AI spam problem

 

After many complaints about the quality of the Google Discover feed, Google is promising a fix is in the works.

Google is working to fix the Google Discover feed by removing the fake AI spam that has been creeping in over the past several weeks. “We’re actively working on a fix,” Google told the Press Gazette after the magazine documented many cases of the Google Discover feed being polluted with this AI spam.

Google’s statement. Here is the full statement Google provided:

“We keep the vast majority of spam out of Discover through robust spam-fighting systems and clear policies against new and emerging forms of low quality, manipulative content. We’re actively working on a fix that will better address the specific type of spam that’s being referenced here, maintaining our high bar for quality in Discover.”

AI spam in Discover. The Press Gazette documented numerous cases where “Fake news stories have been viewed tens of millions of times this week on Google’s Discover news aggregation platform.” Here is a screenshot of some of those stories, as told by The Press Gazette:

[Screenshot: fake AI-generated stories appearing in Google Discover, via Press Gazette]

The theory is that these spammers are buying expired domains that were once trusted and leveraging their domain authority to spam Google Discover. This is not a new trick, and Google Search has measures in place to deal with such techniques, but some believe that is how this is working.

Jean-Marc Manach, a French data journalist who is tracking these issues, maintains a database of fake sites generating these AI stories; it now includes more than 8,300 sites in French, 300 in English, and 150 in German.

Why we care. Google Discover can send tons of traffic to a publisher in no time. These fake sites can generate nice revenue until Google removes them, only for another site to pop up doing the same thing.

Google will eventually catch on and stop these efforts but who knows what new tricks will come up in the future.

EU investigating Google over site reputation abuse policy

 

The EU probe follows publisher complaints that their revenue was impacted as Google tried to eliminate parasite SEO from its search results.

Google parent Alphabet is facing a new EU investigation over claims that it demotes news publishers in search results if they run sponsored or promotional content, a significant revenue source for many media outlets.

What’s happening. The European Commission, the EU’s top antitrust enforcer, announced the probe today.

  • The case falls under the Digital Markets Act (DMA), a law that bars tech “gatekeepers” from unfairly favoring their own services or penalizing others.
  • Companies that break the rules can be fined up to 10% of their global revenue.

Site reputation abuse. Google’s enforcement against publishers is based on a spam policy introduced in March 2024 and updated in November 2024.

  • The policy targets “site reputation abuse” – better known to SEOs as parasite SEO – which occurs when third parties post low-quality content on trusted sites to piggyback on their authority and manipulate Google rankings.
  • Google said this kind of content can confuse or mislead users, and has taken manual action against sites hosting it.
  • The company later updated the policy to state that even content created with first-party oversight can violate the rule if its main goal is to exploit a site’s ranking signals.

Google responds. In a blog post by Pandu Nayak, Google called the investigation “misguided”:

  • “Google’s anti-spam policy is essential to how we fight deceptive pay-for-play tactics that degrade our results. Google Search is built to show trustworthy results, and we’re deeply concerned about any effort that would hurt the quality of our results and interfere with how we rank websites.
  • “Our anti-spam policy helps level the playing field, so that websites using deceptive tactics don’t outrank websites competing on the merits with their own content. We’ve heard from many of these smaller creators that they support our work to fight parasite SEO.
  • “This surprising new investigation risks rewarding bad actors and degrading the quality of search results. European users deserve better, and we’ll continue to defend the policies that let people trust the results they see in Search.”

EC press release. Commission opens investigation into potential Digital Markets Act breach by Google in demoting media publishers’ content in search results

Google’s response. Defending Search users from “Parasite SEO” spam

First reported. EU readies fresh investigation into Google over news publisher rankings (registration required)

Editor’s note: This article was updated following the EC’s confirmation of the investigation and Google’s response.

How to boost your AI search visibility: 5 key factors

 As AI transforms how users search for information online, we all face a new challenge: ensuring our content is visible and impactful within emerging AI platforms. 

While traditional SEO tactics remain crucial, brands must also embrace AI SEO to truly excel. By optimizing content for AI systems, brands can ensure they stand out in AI-generated responses and large language models (LLMs) like Google’s Gemini, Microsoft Copilot, and ChatGPT.

Ninety percent of businesses are concerned about losing SEO visibility as AI reshapes search, according to a recent survey. The same report also found that 61.2% of businesses plan to increase their SEO budgets due to the growing influence of AI. However, many are unsure which strategies to prioritize.

To navigate this evolving landscape, it’s essential to focus on five key factors that can significantly enhance your AI search visibility. These five factors are well-established industry “pillars” of technical AI SEO and will be crucial for optimizing your content and standing out in the AI-driven search ecosystem.
[Image: 5 key factors to focus on to boost visibility in AI search results]
1. Content retrievability: Ensure AI can find your content

Content retrievability refers to how easily AI systems can find, extract, and attribute information from your content. In simple terms, it measures how discoverable your content is by AI crawlers and indexing systems.

If AI systems can’t access or accurately extract your content, it will never show up in generative answers. Without visibility in these AI-powered search results, your brand may miss a significant opportunity for engagement and increased visibility. 

Content that is easily retrievable ensures AI systems can pull relevant data, making your content more quotable and impactful in response generation.

To improve content retrievability:

  • Use semantic chunking to group related ideas together – a sketch follows this list.
  • Structure your pages with clear headings, concise bullet points, and organized sections.
  • Optimize multimodal content, such as images and videos, to enhance discoverability by AI systems.
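
None of the AI providers publish their chunking algorithms, so treat the following as a rough illustration of the idea rather than any system’s actual behavior: keep each heading and the prose beneath it together as one self-contained unit, so a retriever that embeds chunks gets a complete thought.

```python
import re

def chunk_by_headings(markdown_text):
    """Split a markdown page into one chunk per heading section, so each
    chunk is a self-contained idea a retriever could embed on its own."""
    chunks, current = [], []
    for line in markdown_text.splitlines():
        if re.match(r"^#{1,6}\s", line) and current:
            chunks.append("\n".join(current).strip())
            current = []
        current.append(line)
    if current:
        chunks.append("\n".join(current).strip())
    return chunks

page = """# Shipping FAQ
We ship worldwide.

## How long does delivery take?
Orders arrive in 3-5 business days.

## What does shipping cost?
Standard shipping is $4.95, free over $50.
"""

for chunk in chunk_by_headings(page):
    print(chunk, "\n---")
```

The same structure that helps this toy splitter – clear headings, one idea per section – is what the retrievability advice above is driving at.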

Pages with implemented schema markup have a 40% higher click-through rate than those without, according to a study by Schema App.

Additionally, multimodal optimization is becoming increasingly crucial as AI systems evolve to understand and process a variety of content formats.
[Image: example of video schema markup using Clip properties]
2. Content alignment: Speak the language of AI

Content alignment focuses on how well your content matches the way people ask questions in AI-powered search environments. AI systems favor content that provides clear, direct answers to users’ questions, especially those that align with conversational queries.

If your content doesn’t align with how people naturally phrase their queries, in a conversational manner, it may not be included in generative search results. Structuring your content to reflect common user queries increases the likelihood that AI will use your content in response generation.

To improve content alignment:

  • Include direct answers or summaries at the beginning of your pages to provide AI with a quick, quotable response.
  • Use a conversational tone and mirror the natural language of your target audience.
  • Maintain consistent terminology and definitions to reduce ambiguity and improve AI comprehension.

Informational content is most likely to trigger AI Overviews, according to a recent study by Semrush: 88.1% of queries that trigger an AI Overview are informational.
[Image: share of keywords triggering AI Overviews based on intent. Source: Semrush AI Overviews study]
3. Competitive differentiation: Stand out from the crowd

Competitive differentiation measures the uniqueness and value of your content compared to that of your competitors. In the AI search ecosystem, your content must offer something distinct, whether it’s new data, unique insights, or a fresh perspective on a topic.

AI systems aim to provide the most relevant and valuable information to users. 

If your content echoes what competitors are already saying, AI has no reason to highlight your brand over theirs. To stand out, your content must offer a unique value proposition that fills gaps competitors may have missed.

To improve competitive differentiation:

  • Focus on presenting unique data or case studies that others don’t cover.
  • Provide fresh perspectives, industry insights, or expert opinions that make your content stand out.
  • Create content that answers niche questions that your competitors have overlooked.

In a recent AI SEO report, the Content Marketing Institute found that 22% of B2B marketers rate their content marketing as extremely or very successful. These top performers most often attribute their success to understanding their audience.
[Image: why B2B content strategies aren’t as effective as they could be. Source: Content Marketing Institute study]
4. Authority signals: Build trust with AI systems

Authority signals are markers that demonstrate the credibility and trustworthiness of your content. 

AI systems want to trust that content is from reliable, authoritative sources. These signals typically include source citations, verifiable credentials, and consistent, quality content from trusted authorities.

Without authority signals, even highly insightful content may be overlooked in favor of competitors that have more established trust with AI systems. By building authority signals, you position your brand as the go-to source that AI systems can trust and confidently cite in generative results.

To build authority signals:

  • Include consistent, authoritative source citations (e.g., from reputable publications, academic studies, or industry experts).
  • Showcase your company’s credentials, certifications, and other markers of authority.
  • Gain backlinks and media mentions to enhance your brand’s overall credibility.

The top result in Google had at least three times more backlinks than positions 2-10, according to a recent study from Backlinko. This highlights the weight Google places on authority for the top position in search results.
[Image: Backlinko study showing how pages with more backlinks rank above those with fewer backlinks]
5. Entity mapping: Connect the dots for AI

Entity mapping refers to how effectively your content enables AI systems to understand the relationships between key entities – such as people, products, organizations, or concepts – within your content. AI systems build knowledge graphs to map these entities, and content that clearly identifies and links them helps AI understand the larger context of your information.

Unlike traditional search engines, AI systems rely on knowledge graphs to build context and meaning. If your content doesn’t clearly identify entities or their relationships, it risks being overlooked. 

Strong entity mapping ensures your content fits seamlessly into AI’s understanding of the world, making it more likely to be surfaced in AI-driven answers.

To increase entity mapping:

  • Explicitly name and link key entities (e.g., people, products, organizations) within your content – see the sketch below.
  • Use consistent terminology to describe entities across all your content.
  • Build a semantically related internal linking strategy to strengthen connections between related entities.
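
One concrete way to make entity relationships machine-readable is schema.org’s about, mentions, and sameAs properties, which tie the names on your page to unambiguous identifiers. Here is a minimal sketch, emitted from Python; the identifiers shown are hypothetical placeholders:

```python
import json

# Sketch: declare which entities an article is about and link each one
# to a well-known identifier (e.g., Wikipedia or Wikidata) via sameAs.
markup = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "How Example Corp uses knowledge graphs",
    "about": {
        "@type": "Organization",
        "name": "Example Corp",
        "sameAs": ["https://www.wikidata.org/wiki/Q0000000"],  # hypothetical ID
    },
    "mentions": [
        {
            "@type": "Thing",
            "name": "Knowledge graph",
            "sameAs": ["https://en.wikipedia.org/wiki/Knowledge_graph"],
        }
    ],
}

print(json.dumps(markup, indent=2))
```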

In a controlled experiment comparing two identical websites for a fictional company, one implemented comprehensive schema markup and the other did not. The site with schema markup outperformed its counterpart by 30% in ChatGPT’s retrieval and citation quality. 
AI visibility goes beyond traditional SEO

In the era of AI-driven search, your brand’s visibility and influence in AI-generated results are crucial. By optimizing for five key factors – content retrievability, content alignment, competitive differentiation, authority signals, and entity mapping – you can ensure your brand remains discoverable and stands out.

As AI continues to accelerate in search, partnering with the right SEO agency will be crucial for staying visible in 2026 and beyond. 

An agency with a strong foundation in traditional SEO, coupled with proven strategies and a framework for AI SEO, will ensure your brand not only adapts to the changing landscape but leads the charge. 

With AI transforming the way users find information, working with experts who understand both traditional SEO and AI SEO will be key to sustaining your competitive edge and securing long-term visibility.

Generative AI and defamation: What the new reputation threats look like

 

AI chatbots are changing how defamation spreads. Safeguarding credibility now means watching what AI says about you.

As generative AI becomes more embedded in search and content experiences, it’s also emerging as a new source of misinformation and reputational harm. 

False or misleading statements generated by AI chatbots are already prompting legal disputes – and raising fresh questions about liability, accuracy, and online reputation management.

When AI becomes the source of defamation

It’s unsurprising that AI has become a new source of defamation and online reputation damage. 

As an SEO and reputation expert witness, I’ve already been approached by litigants involved in cases where AI systems produced libelous statements.

This is uncharted territory – and while solutions are emerging, much of it remains new ground.

Real-world examples of AI-generated defamation

One client contacted me after Meta’s Llama AI generated false, misleading, and defamatory statements about a prominent individual. 

Early research showed that the person had been involved in – and prevailed in – previous defamation lawsuits, which had been reported by news outlets. 

Some detractors had also criticized the individual online, and discussions on Reddit included inaccurate and inflammatory language. 

Yet when the AI was asked about the person or their reputation, it repeated those vanquished claims, added new warnings, and projected assertions of fraud and untrustworthiness.

In another case, a client targeted by defamatory blog posts found that nearly any prompt about them in ChatGPT surfaced the same false claims. 

The key concern: even if the court orders the original posts removed, how long will those defamatory statements persist in AI responses?

Google Trends shows that there has been a significant spike in searches related to defamation communicated via AI chatbots and AI-related online reputation management:

[Chart: Google Trends data on searches related to AI chatbot defamation. Source: Google Trends]

Fabricated stories and real-world harm

In other cases revealed by lawsuit filings, generative AI has apparently fabricated entirely false and damaging content about people out of thin air. 

In 2023, Jonathan Turley, the Shapiro Professor of Public Interest Law at George Washington University, was falsely reported to have been accused of sexual harassment – a claim that was never made, on a trip that never happened, while he was at a faculty where he never taught. 

ChatGPT cited a Washington Post article that was never written as its source.

In September, former FBI operative James Keene filed a lawsuit against Google after its AI falsely claimed he was serving a life sentence for multiple convictions and described him as the murderer of three women. 

The suit also alleges that these false statements were potentially seen by tens of millions of searchers.

Generative AI can fabricate stories about people – that’s the “generative” part of “generative AI.” 

After receiving a prompt, an AI chatbot analyzes the input and produces a response based on patterns learned from large volumes of text. 

So it’s no surprise that AI answers have at times included false and defamatory content about individuals.

Improvements and remaining challenges

Over the past two years, AI chatbots have shown improvement in handling biographical information about individuals.

The most prominent chatbot companies seem to have focused on refining their systems to better manage queries involving people and proper names.

As a result, the generation of false information – or hallucinations – about individuals seems to have declined significantly.

AI chat providers have also begun incorporating more disclaimer language into responses about people’s biographical details and reputations.

These often include statements noting:

  • Limited information.
  • Uncertainty about a person’s identity.
  • The lack of independent verification.

It’s unclear how much such disclaimers actually protect against false or damaging assertions, but they are at least preferable to providing no warning at all.

In one instance, a client who was allegedly defamed by Meta’s AI had their counsel contact the company directly.

Meta reportedly moved quickly to address the issue – and may have even apologized, which is nearly unheard of in matters of corporate civil liability.

At this stage, the greatest reputational risks from AI are less about outright fabrications.

The more pressing threats come from AI systems:

  • Misconstruing source material to draw inaccurate conclusions.
  • Repeating others’ defamatory claims.
  • Exaggerating and distorting true facts in misleading ways.

Because the law around AI-generated libel is still rapidly developing, there is little legal precedent defining how liable companies might be for defamatory statements produced by their AI chatbots.

Some argue that Section 230 of the Communications Decency Act could shield AI companies from such liability.

The reasoning is that if online platforms are largely immune from defamation claims for third-party content they host, then AI systems should be similarly protected since their outputs are derived from third-party sources.

However, derived is far from quoted or reproduced – it implies a meaningful degree of originality.

If legislators already believed AI output was protected under Section 230, they likely would not have proposed a 10-year moratorium on enforcing state or local restrictions on artificial intelligence models, systems, and decision-making processes.

That moratorium was initially included in President Trump’s budget reconciliation bill, H.R.1 – nicknamed the “One Big Beautiful Bill Act” – but was ultimately dropped before the bill was signed into law on July 4, 2025.



AI’s growing role in reputation management

The rising prominence of AI-generated answers – such as Google’s AI Overviews – is making information about people’s backgrounds and reputations both more visible and more influential. 

As these systems become increasingly accurate and dependable, it’s not a stretch to say that the public will be more inclined to believe what AI says about someone – even when that information is false, misleading, or defamatory.

AI is also playing a larger role in background checks. 

For example, Checkr has developed a custom AI that searches for and surfaces potentially negative or defamatory information about individuals – findings that could limit a person’s employment opportunities with companies using the service. 

While major AI providers such as Google, OpenAI, Microsoft, and Meta have implemented guardrails to reduce the spread of defamation, services like Checkr are less likely to include caveats or disclaimers. 

Any defamatory content generated by such systems may therefore go unnoticed by those it affects.

At present, AI is most likely to produce defamatory statements when the web already contains defamatory pages or documents. 

Removing those source materials usually corrects or eliminates the false information from AI outputs. 

But as AI systems increasingly “remember” prior responses – or cache information to save on processing – removing the original sources may no longer be enough to erase defamatory or erroneous claims from AI-generated answers.

What can be done about AI defamation?

One key way to address defamation appearing in AI platforms is to ask them directly to correct or remove false and damaging statements about you. 

As noted above, some platforms – such as Meta – have already taken action to remove content that appeared libelous. 

(Ironically, it may now be easier to get Meta to delete harmful material from its Llama AI than from Facebook.)

These companies may be more responsive if the request comes from an attorney, though they also appear willing to act on reports submitted by individuals.

Here’s how to contact each major AI provider to request the removal of defamatory content:

Meta Llama

Use the Llama Developer Feedback Form or email LlamaUseReport@meta.com to report or request removal of false or defamatory content.

ChatGPT

In ChatGPT, you can report problematic content directly within the chat interface. 

On desktop, click the three dots in the upper-right corner and select Report from the dropdown menu. 

On mobile or other devices, the option may appear under a different menu.

[Screenshot: the Report option in ChatGPT]

AI Overviews and Gemini

There are two ways to report content to Google. 

You can report content for legal reasons. (Click See more options to select Gemini, or within the Gemini desktop interface, use the three dots below a response.)

However, Google typically won’t remove content through this route unless you have a court order, since it cannot determine whether material is defamatory.

Alternatively, you can send feedback directly. 

For AI Overviews, click the three dots on the right side of the result and choose Feedback. 

From Gemini, click the thumbs-down icon and complete the feedback form. 

While this approach may take time, Google has previously reduced visibility of harmful or misleading information through mild suppression – similar to its approach with Autocomplete. 

When submitting feedback, explain that:

  • You are not a public figure.
  • The AI Overview unfairly highlights negative material.
  • You would appreciate Google limiting its display even if the source pages remain online.

Bing AI Overview and Microsoft Copilot

As with Google, you can either send feedback or report a concern. 

In Bing search results, click the thumbs-down icon beneath an AI Overview to begin the feedback process. 

In the Copilot chatbot interface, click the thumbs-down icon below the AI-generated response.

When submitting feedback, describe clearly – and politely – how the content about you is inaccurate or harmful.

For legal removal requests, use Microsoft’s Report a Concern form. 

However, this route is unlikely to succeed without a court order declaring the content illegal or defamatory.

Perplexity

To request the removal of information about yourself from Perplexity AI, email support@perplexity.ai with the relevant details.

Grok AI

You can report an issue within Grok by clicking the three dots below a response. Legal issues can also be reported through xAI. 

According to xAI’s privacy policy:

  • “Please note that we cannot guarantee the factual accuracy of Output from our models. If Output contains factually inaccurate personal information relating to you, you can submit a correction request and we will make reasonable efforts to correct this information – but due to the technical complexity of our models, it may not be feasible for us to do so.”

To submit a correction request, go to https://xai-privacy.relyance.ai/.

Additional approaches to addressing reputation damage in AI

If contacting AI providers doesn’t fully resolve the issue, there are other steps you can take to limit or counteract the spread of false or damaging information.

Remove negative content from originating sources

Outside of the decreasing instances of defamatory or damaging statements produced by AI hallucinations, most harmful content is gathered or summarized from existing online sources. 

Work to remove or modify those sources to make it less likely that AIs will surface them in responses. 

Persuasion is the first step, where possible. For example:

  • Add a statement to a news article acknowledging factual errors.
  • Note that a court has ruled the content false or defamatory.

These can trigger AI guardrails that prevent the material from being repeated. 

Disclaimers or retractions may also stop AI systems from reproducing negative information.

Overwhelm AI with positive and neutral information

Evidence suggests that AIs are influenced by the volume of consistent information available. 

Publishing enough accurate, positive, or neutral material about a person can shift what an AI considers reliable. 

If most sources reflect the same biographical details, AI models may favor those over isolated negative claims. 

However, the new content must appear on reputable sites that are equal to or superior in authority to where the negative material was published – a challenge when the harmful content originates from major news outlets, government websites, or other credible domains.

Displace the negative information in the search engine results

Major AI chatbots source some of their information from search engines. 

Based on my testing, the complexity of the query determines how many results an AI may reference, ranging from the first 10 listings to several dozen or more. 

The implication is clear: if you can push negative results further down in search rankings – beyond where the AI typically looks – those items are less likely to appear in AI-generated responses.

This is a classic online reputation management method: utilizing standard SEO techniques and a network of online assets to displace negative content in search results. 

However, AI has added a new layer of difficulty. 

ORM professionals now need to determine how far back each AI model scans results to answer questions about a person or topic. 

Only then can they know how far the damaging results must be pushed to “clean up” AI responses.

In the past, pushing negative content off the first one or two pages of search results provided about 99% relief from its impact. 

Today, that’s often not enough. 

AI systems may pull from much deeper in the search index – meaning ORM specialists must suppress harmful content across a wider range of pages and related queries. 

Because AI can conduct multiple, semantically related searches when forming answers, it’s essential to test various keyword combinations and clear negative items across all relevant SERPs.

Obfuscate by launching personas that share the same name

Using personas that “coincidentally” share the same name as someone experiencing reputation problems has long been an occasional, last-resort strategy. 

It’s most relevant for individuals who are uncomfortable creating more online media about themselves – even when doing so could help counteract unfair, misleading, or defamatory content. 

Ironically, that reluctance often contributes to the problem: a weak online presence makes it easier for someone’s reputation to be damaged.

When a name is shared by multiple individuals, AI chatbots appear to tread more carefully, often avoiding specific statements when they can’t determine who the information refers to. 

This tendency can be leveraged. 

By creating several well-developed online personas with the same name – complete with legitimate-seeming digital footprints – it’s possible to make AIs less certain about which person is being referenced. 

That uncertainty can prevent them from surfacing or repeating defamatory material.

This method is not without complications. 

People increasingly use both AI and traditional search tools to find personal information, so adding new identities risks confusion or unintended exposure. 

Still, in certain cases, “clouding the waters” with credible alternate personas can be a practical way to reduce or dilute defamatory associations in AI-generated responses.

Old laws, new risks

A hybrid approach combining the methods described above may be necessary to mitigate the harm experienced by victims of AI-related defamation.

Some forms of defamation have always been difficult – and sometimes impossible – to address through lawsuits. 

Litigation is expensive and can take months or years to yield relief. 

In some cases, pursuing a lawsuit is further complicated by professional or legal constraints. 

For example, a doctor seeking to sue a patient over defamatory statements could violate HIPAA by disclosing identifying information, and attorneys may face similar challenges under their respective bar association ethics rules.

There’s also the risk that defamation long buried in search results – or barred from litigation by statutes of limitation – could suddenly resurface through AI chatbot responses. 

This may eventually produce interesting case law if plaintiffs argue that an AI-generated response constitutes a “new publication” of defamatory content, potentially resetting the statute of limitations on those claims.

Another possible solution, albeit a distant one, would be to advocate for new legislation that protects individuals from negative or false information disseminated through AI systems. 

Other regions, such as Europe, have established privacy laws, including the “Right to be Forgotten,” that give individuals more control over their personal information. 

Similar protections would be valuable in the United States, but they remain unlikely given the enduring force of Section 230, which continues to shield large tech companies from liability for online content.

AI-driven reputational harm remains a rapidly evolving field – legally, technologically, and strategically. 

Expect further developments as courts, lawmakers, and technologists continue to grapple with this emerging frontier.

Google FastSearch: Everything you need to know


FastSearch powers AI Overviews – a faster, lighter version of Search built for speed over depth. Here’s how it changes what gets seen.

Court filings in Google’s antitrust case revealed FastSearch, a proprietary system few search marketers have heard of.

It sits at the core of how Google grounds its AI Overviews, prioritizing speed over the deeper analysis behind traditional search results.

That distinction raises an important question: what exactly does FastSearch prioritize?

What is Google FastSearch?

FastSearch is Google’s internal technology for grounding Gemini models and generating AI Overviews. 

While traditional Google Search analyzes massive amounts of web data using hundreds of ranking signals, FastSearch takes a more targeted approach.

The antitrust case filing explains: 

  • “To ground its Gemini models, Google uses a proprietary technology called FastSearch. FastSearch is based on RankEmbed signals which are a set of search ranking signals that generates abbreviated, ranked web results that a model can use to produce a grounded response. FastSearch delivers results more quickly than Search because it retrieves fewer documents, but the resulting quality is lower than Search’s fully ranked web results.”

Marie Haynes brought this to the industry’s attention after reviewing the judge’s decision in Google’s monopoly case remedy rulings. 

The revelation appeared on page 35 of the filing, tucked into technical explanations about Google’s AI infrastructure.

Dig deeper: The ABCs of Google ranking signals: What top search engineers revealed

The speed-versus-quality tradeoff

FastSearch makes three key compromises to achieve faster response times.

Smaller document pool

Rather than searching Google’s full index, FastSearch pulls from a targeted subset of pages. 

This dramatically reduces processing time when Gemini needs real-time grounding for conversational responses.

Simplified ranking signals

FastSearch relies primarily on RankEmbed signals instead of Google’s complete ranking arsenal. 

These signals focus on the semantic relationships between queries and content, rather than traditional authority metrics such as backlinks or domain reputation.

Acceptable accuracy threshold

Google acknowledged on page 35 of the court filing that “the resulting quality is lower than Search’s fully ranked web results,” though the results remain “good enough for grounding” AI responses. 

This explains why AI Overviews occasionally surface questionable content, as the streamlined process prioritizes semantic matching over comprehensive quality assessment.

Dig deeper: How to balance speed and credibility in AI-assisted content creation

RankEmbed: The semantic signal that matters

On page 138, the filing also describes RankEmbed as one of Google’s “top-level” deep-learning signals, capable of “finding and exploiting patterns in vast data sets.” 

Unlike signals that measure popularity or count backlinks, RankEmbed asks a simpler question: How closely does this content align with what the user actually meant?

This semantic focus means a page with modest backlinks but crystal-clear topical relevance might outperform a high-authority domain with vague or meandering content.
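
RankEmbed itself is proprietary and can’t be queried. As a rough illustration of the underlying idea, the sketch below scores query-to-content alignment with an open-source embedding model via the sentence-transformers library; treat the scores as a loose proxy for semantic relevance, not a replica of Google’s signal.

```python
# Illustration of embedding-based query-content alignment, the general idea
# behind semantic ranking signals. Uses the open-source sentence-transformers
# library (pip install sentence-transformers); a rough proxy, not RankEmbed.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "how does fastsearch ground ai overviews"

openings = {
    "direct": (
        "FastSearch is the internal Google system that grounds AI Overviews "
        "by retrieving a small, ranked set of web results for the model."
    ),
    "meandering": (
        "Search has changed a lot over the years, and many companies have "
        "experimented with different approaches to organizing information."
    ),
}

query_emb = model.encode(query, convert_to_tensor=True)
for label, text in openings.items():
    score = util.cos_sim(query_emb, model.encode(text, convert_to_tensor=True))
    print(f"{label}: {score.item():.3f}")
```

In this toy comparison, the opening that answers the query directly scores higher than the meandering one, which is the behavior the filing attributes to semantic matching.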

This shift has significant implications. Traditional SEO strength doesn’t automatically translate to AI Overview visibility.

Dig deeper: Organizing content for AI search: A 3-level framework

Limited third-party access through Vertex AI

Google doesn’t offer FastSearch as a standalone API. 

Instead, the technology is integrated into Google Cloud’s Vertex AI, allowing businesses to ground their own AI applications.

The filing notes: 

  • “Vertex customers do not, however, receive the FastSearch-ranked web results themselves, only the information from those results. Google limits Vertex in this manner to protect its intellectual property.”

This means you can’t directly test FastSearch performance the way you can analyze Google’s traditional rankings.

The system remains a black box, with visibility limited to what surfaces in AI Overviews.
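
There is no FastSearch endpoint to probe directly, but Grounding with Google Search is exposed through the Gemini API and Vertex AI. Below is a minimal sketch using the google-genai Python SDK; the project, location, and model name are placeholder assumptions, and, consistent with the filing, the API returns grounding citations rather than the underlying ranked results.

```python
# Minimal sketch: grounding a Gemini response with Google Search via the
# google-genai SDK (pip install google-genai). Whether this routes through
# FastSearch specifically is not exposed by the API; per the filing, you
# receive information derived from the results, not the ranked results.
from google import genai
from google.genai import types

# Assumes Vertex AI auth is configured; project and model are placeholders.
client = genai.Client(vertexai=True, project="your-project", location="us-central1")

response = client.models.generate_content(
    model="gemini-2.0-flash",
    contents="What is Google FastSearch?",
    config=types.GenerateContentConfig(
        tools=[types.Tool(google_search=types.GoogleSearch())],
    ),
)

print(response.text)

# Grounding metadata lists the web sources the answer was grounded on.
meta = response.candidates[0].grounding_metadata
if meta and meta.grounding_chunks:
    for chunk in meta.grounding_chunks:
        print(chunk.web.title, chunk.web.uri)
```

Observing which sources appear in that grounding metadata is about as close as outsiders can get to watching the system work.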

What this means for content strategy

FastSearch’s architecture reveals four strategic priorities for AI visibility.

  • Lead with clarity: If RankEmbed prioritizes semantic relationships, content needs to address user intent immediately and precisely. Don’t bury your main point three paragraphs in.
  • Build topical depth: FastSearch’s semantic focus suggests comprehensive topic coverage matters more than acquiring additional backlinks. Content clusters that demonstrate expertise across related subjects may perform better.
  • Structure for extraction: Content that helps AI systems quickly identify topic relationships and pull relevant information holds advantages. This aligns with best practices around schema markup, clear heading hierarchies, and logical information architecture (see the JSON-LD sketch after this list).
  • Balance both systems: While FastSearch uses different signals, significant overlap exists between traditional search rankings and AI Overview citations. Sites with genuine authority tend to succeed in both environments.
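
On the structure-for-extraction point, here is a minimal sketch of Article JSON-LD built with Python’s standard library. The field values are placeholders, and the vocabulary comes from schema.org; adapt the properties to the content type you’re marking up.

```python
# Minimal sketch: generating Article JSON-LD so machines can identify a
# page's topic and key entities quickly. Values are placeholders; the
# vocabulary is defined at schema.org/Article.
import json

article_schema = {
    "@context": "https://schema.org",
    "@type": "Article",
    "headline": "Google FastSearch: Everything you need to know",
    "about": {"@type": "Thing", "name": "Google FastSearch"},
    "author": {"@type": "Person", "name": "Your Name"},
    "datePublished": "2025-11-20",
}

# Embed the output inside <script type="application/ld+json"> in the page head.
print(json.dumps(article_schema, indent=2))
```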

Don’t abandon SEO fundamentals

Google’s Danny Sullivan emphasizes that good SEO creates good generative engine optimization (GEO). 

The foundational principles remain consistent: 

  • Understand how people search.
  • Create helpful content.
  • Make information accessible to search systems.

Research indicates that sites that establish genuine expertise tend to perform well across both traditional search and AI-powered search results. 

The difference lies in presentation rather than wholesale changes to what works.

Dig deeper: Google Danny Sullivan: Good SEO means good GEO

Your action plan

FastSearch doesn’t require overhauling your entire content strategy, but these areas deserve renewed focus.

  • Conduct a semantic audit: Review content to ensure it clearly addresses user intent from the first paragraph. Eliminate ambiguity about what each piece covers and strengthen explicit topic relationships.
  • Track AI performance separately: Monitor which content appears in AI Overviews and identify patterns. Compare semantic characteristics between your citations and competitors’. A minimal logging sketch follows this list.
  • Test structural approaches: Experiment with different content architectures, heading hierarchies, and schema implementations. Measure impact on AI visibility alongside traditional metrics.
  • Maintain traditional SEO: FastSearch powers one specific use case. Traditional ranking factors still drive the majority of search visibility and traffic.
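
For the tracking step, even a simple append-only log makes patterns visible over time. The sketch below records whether tracked URLs were cited in AI Overviews for a set of queries; check_ai_overview_citations is a hypothetical placeholder for however you capture that data, whether manual checks, a scraper, or a third-party tool.

```python
# Minimal sketch: log which tracked URLs are cited in AI Overviews over time,
# so citation patterns can be compared against traditional rankings.
# check_ai_overview_citations() is a hypothetical placeholder for your data
# source (manual checks, a scraper, or a third-party tool).
import csv
from datetime import date

TRACKED = ["https://yoursite.example/guide", "https://yoursite.example/faq"]
QUERIES = ["what is fastsearch", "how do ai overviews pick sources"]

def check_ai_overview_citations(query: str) -> set[str]:
    # Hypothetical placeholder: return URLs cited in the AI Overview.
    return {"https://yoursite.example/guide"}

with open("ai_overview_log.csv", "a", newline="") as f:
    writer = csv.writer(f)
    for query in QUERIES:
        cited = check_ai_overview_citations(query)
        for url in TRACKED:
            writer.writerow([date.today().isoformat(), query, url, url in cited])
```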

What FastSearch reveals about Google’s direction

The court documents revealing FastSearch provided a rare glimpse into Google’s internal infrastructure. 

These insights remind us that surface experiences, whether traditional search results or AI Overviews, rely on complex systems making millions of calculations behind the scenes.

As Google expands AI Overviews to more queries, languages, and countries, understanding technologies like FastSearch becomes increasingly important. 

However, the core principle remains unchanged: create clear, helpful, and authoritative content that serves users well.

FastSearch may use lighter signals than traditional Google Search, but both systems ultimately aim to connect people with valuable information. 

Search marketers who nail that fundamental goal will succeed regardless of which technology delivers the answer.